This is a copy–paste friendly guide to get a Hive consensus node running for HF28.
Simple recipes for:
- seed node
- witness node
- exchange / personal wallet node
- basic API node
- Docker version of all of the above
Assumptions / prerequisites
To keep the recipes dead simple, I assume:
Hardware (minimum reasonable):
- x86-64 CPU (*)
- 8 GB RAM
- 1 TB fast SSD / NVMe
Single-core CPU performance matters most during the initial replay/resync, and later for keeping up with the head block. The more RAM you have, the more data Linux can keep in the page cache, which means less pressure on your storage.
The required size of the shared memory file (where the node state lives) has dropped significantly, so we no longer recommend forcing it into RAM via tmpfs. In most cases it's enough to keep it on disk and let the kernel’s page cache do its job, especially on systems with plenty of RAM. You can still use tmpfs as an optional optimization if you really want to squeeze out every bit of replay/resync performance.
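The "let the page cache do its job" advice is easy to sanity-check. As a small convenience (my addition, not part of the original recipes), on Linux you can see total RAM and the current page-cache usage directly:

```shell
# Total RAM vs. memory the kernel currently uses for the page cache.
# Linux-specific; values are reported in kB.
grep -E '^(MemTotal|MemAvailable|Cached):' /proc/meminfo
```

A large Cached value after a replay is a good sign: it means hot parts of the state and block_log are being served from RAM.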
(*) Other architectures are out of scope for these recipes.
Software
- OS: Ubuntu 24.04 LTS
- User: local user hive, uid = 1000, HOME = /home/hive
- Data dir: /home/hive/datadir
- We use:
  - screen to keep hived running
  - lbzip2 for compressed snapshots
  - docker if you follow the Docker section
- Ports (adjust firewall / security groups):
  - 2001 – P2P (seed)
  - 8090, 8091 – WebSocket / HTTP APIs
What these recipes can run
Same binary, different config:
- Seed node – helps the P2P network, no special plugins.
- Witness node – produces blocks, keeps the surface small.
- Exchange / wallet node – uses history plugin to track deposits/withdrawals for chosen accounts.
- Basic API node – serves simple RPC like get_block, broadcasts transactions, tracks the head block.
The role is decided by config.ini (plugins, tracked accounts, witness name, private key, etc.), not by different binaries.
A snapshot may already contain extra data (like history for tracked accounts), but if you remove the related plugins or tracked accounts from config.ini, that data simply won’t be used.
Part 1 – native binary (non-Docker) recipes
Recipe 1 – One-time prep (run as hive user)
# create basic tree
mkdir -pv ~/datadir/{blockchain,snapshot} ~/bin
Recipe 2 – Get sample config
Start from the "exchange" config and then tweak it for your node's desired role (seed / witness / wallet / API).
wget https://gtg.openhive.network/get/snapshot/exchange/example-exchange-config.ini \
-O ~/datadir/config.ini
Later you might want to:
- disable plugin = ... entries you don't need
  (adding new plugin entries may require a replay)
- remove tracked accounts you don't need
  (mainly for exchange / wallet-style nodes;
  adding new tracked accounts will require a replay)
- set witness and private-key if you run this node as a witness
- tweak API bind addresses and ports for your setup
- change the location of the shared memory file or comments / history databases
- adjust how block_log is split to match your storage preferences
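For illustration, these are the kinds of config.ini lines the points above refer to (values are placeholders taken from elsewhere in this guide, not recommendations):

```ini
# Illustrative excerpts only — placeholder values.

# witness role:
# witness = "yourwitnessname"
# private-key = 5XXXX...

# API bind addresses:
# webserver-http-endpoint = 0.0.0.0:8090
# webserver-ws-endpoint = 0.0.0.0:8091

# state location (see Recipe 4):
# shared-file-dir = "/run/hive"
```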
Recipe 3 – Download Hive 1.28.3 binaries
wget https://gtg.openhive.network/get/bin/hived-1.28.3 -nc -P ~/bin
wget https://gtg.openhive.network/get/bin/cli_wallet-1.28.3 -nc -P ~/bin
chmod u+x ~/bin/{hived,cli_wallet}-1.28.3
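Optionally (my addition, not part of the original recipe), you can add version-agnostic symlinks so you don't have to type the version suffix every time; the recipes below keep the explicit version, which is also fine:

```shell
# Optional: version-agnostic symlinks for convenience.
mkdir -p ~/bin   # already created in Recipe 1; harmless to repeat
ln -sfn ~/bin/hived-1.28.3 ~/bin/hived
ln -sfn ~/bin/cli_wallet-1.28.3 ~/bin/cli_wallet
```

On the next upgrade you only repoint the symlinks instead of editing scripts or screen invocations.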
Recipe 4 – Put shared_memory on tmpfs (optional, advanced)
shared_memory.bin is hot. Putting it in RAM can speed up replay and reduce SSD wear, but if it’s gone (for example after a reboot) or corrupted, you will need to start over with replay or load a snapshot. Treat this as an optional optimization, not the default.
- Enable tmpfs path in config:
sed -i '/^# shared-file-dir/s/^# //' ~/datadir/config.ini
# or manually uncomment line: shared-file-dir = "/run/hive"
- Prepare /run/hive (run as root):
sudo mkdir -p /run/hive
sudo chown -Rc hive:hive /run/hive
sudo mount -o remount,size=12G /run
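Keep in mind that /run is a tmpfs recreated on every boot, so the mkdir/chown above will not survive a reboot. One way to recreate the directory automatically (my suggestion, not part of the original recipe; assumes systemd-tmpfiles, which Ubuntu uses) is a tmpfiles.d entry:

```
# /etc/tmpfiles.d/hive.conf
# recreate /run/hive with the right ownership at every boot
d /run/hive 0755 hive hive -
```

The remount with size=12G still has to be reapplied after a reboot, and of course the state itself is gone, so plan on loading a snapshot or replaying after any reboot.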
Please note that aside from shared_memory.bin, Hive now also uses a comments-rocksdb-storage directory for part of the state. By default this lives alongside shared memory in the shared-file-dir (on disk), but if you move shared-file-dir to /run/hive, both shared_memory.bin and comments-rocksdb-storage will live in RAM.
Recipe 5 – Use existing block_log (faster start)
You can either:
- use your existing block_log (recommended), or
- download a public one (huge, but can save replay time in some setups)
wget https://gtg.openhive.network/get/blockchain/block_log -nc -P ~/datadir/blockchain
wget https://gtg.openhive.network/get/blockchain/block_log.artifacts -nc -P ~/datadir/blockchain
The block_log is very large (hundreds of GB), so downloading it can take many hours.
If you already have a block_log, definitely reuse it for upgrades.
If you don't have one yet, with the current improvements it's usually better to just let the node sync from the P2P network instead of downloading a fresh block_log file from a single source.
Recipe 6 – Use snapshot (fastest way to state)
Snapshot = ready-made node state from another machine.
wget https://gtg.openhive.network/get/snapshot/exchange/latest.tar.bz2 -O - \
| lbzip2 -dc \
| tar xvC ~/datadir/snapshot
- Snapshot name in this recipe: latest
  (it will end up in ~/datadir/snapshot/latest)
Make sure:
- your block_log is at least as fresh as the snapshot
- your hived-1.28.3 and config.ini are compatible with it
  (plugins, tracked accounts, etc.)

Here "compatible" means your config does not require any extra plugins or tracked accounts that were not used when the snapshot was created (for example, new account-history-rocksdb-track-account-range entries that weren't present when the snapshot was made).
Having fewer plugins or a subset of tracked accounts is fine.
Recipe 7 – Adjust for specific roles
All roles use the same data dir and binary, just different config.ini.
Seed node
In ~/datadir/config.ini:
- make sure P2P port is open and public:
p2p-endpoint = 0.0.0.0:2001
You can now start hived using Recipe 8.
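Once it's running, a quick way to confirm the P2P port is reachable is a bash /dev/tcp probe (a sketch of my own, not part of the original recipe; replace 127.0.0.1 with your node's public address to test from outside):

```shell
# Quick reachability probe for the seed's P2P port.
port_status="closed or filtered"
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/2001' 2>/dev/null; then
  port_status="open"
fi
echo "port 2001: ${port_status}"
```

"closed or filtered" right after startup is normal; hived needs a moment to open its P2P endpoint.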
Witness node
In ~/datadir/config.ini:
witness = "yourwitnessname"
private-key = 5XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Keep it clean and simple: comment out or remove non-essential plugins (APIs, history, etc.).
Then start hived using Recipe 8.
Exchange / personal wallet node
In ~/datadir/config.ini:
The example config.ini you downloaded in Recipe 2 is good enough as long as your desired account(s) are already tracked.
If so, use Recipe 8a.
If you need to add new tracked accounts, using the format:
account-history-rocksdb-track-account-range = ["mytrackedaccount","mytrackedaccount"]
then you simply cannot use that snapshot anymore. In that case you must rebuild the state from scratch from your block_log (no --load-snapshot): use Recipe 8b.
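For reference, tracking several accounts just means repeating the entry, one line per account (the account names below are hypothetical):

```ini
# Hypothetical account names — one entry per tracked account.
account-history-rocksdb-track-account-range = ["myhotwallet","myhotwallet"]
account-history-rocksdb-track-account-range = ["mysavings","mysavings"]
```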
Basic API node (bots, simple apps)
- Start from example-exchange-config.ini.
- Disable anything you don't need (like detailed history for accounts you don't care about).
- Make sure HTTP/WebSocket bind addresses are what you want:
webserver-http-endpoint = 0.0.0.0:8090
webserver-ws-endpoint = 0.0.0.0:8091
Then start hived using Recipe 8a.
Recipe 8 – Start hived (native binary)
8a) Start from snapshot (if your config is compatible)
screen -S hived
~/bin/hived-1.28.3 -d ~/datadir --load-snapshot=latest
Detach from screen with: Ctrl+a, then d
Reattach with: screen -r hived
8b) Start with replay from block_log (no snapshot)
If you don’t use a snapshot and just want to rebuild state from your block_log:
screen -S hived
~/bin/hived-1.28.3 -d ~/datadir --replay
Detach from screen with: Ctrl+a, then d
Reattach with: screen -r hived
8c) Resync from scratch (no snapshot, no block_log)
WARNING: this removes your local blockchain data (don't do this unless you want to download everything again from the P2P network).
If you are upgrading, see Recipe 9.
Before you "start from scratch", double-check your directory tree. If you really want a clean resync, make sure there are no leftovers in your data dir, especially in ~/datadir/blockchain/. Old files (like a previous monolithic block_log) can still occupy a lot of space even if they’re no longer used. When you’re sure you don’t need them anymore, rm -rf ~/datadir/blockchain/* gives you a truly empty blockchain directory to start from.
screen -S hived
~/bin/hived-1.28.3 -d ~/datadir --resync
Detach from screen with: Ctrl+a, then d
Reattach with: screen -r hived
If there is no existing state or block_log, running hived without --load-snapshot and without --replay will also effectively start resync from scratch.
Recipe 9 – Upgrading from older version to 1.28.3
Assuming you already run a node laid out like this:
- Stop your current hived.
- Keep your existing data dir (~/datadir), especially blockchain/block_log.
- Update binaries using Recipe 3 (download hived-1.28.3 and cli_wallet-1.28.3).
- Optionally download the latest snapshot using Recipe 6 if you are going to use it instead of replay.

Then choose one of these paths:
- If you use snapshots: start with Recipe 8a (--load-snapshot=latest).
- If you don't use snapshots but have a block_log: start with Recipe 8b (--replay).
- If you don't have a usable block_log (unlikely when you are upgrading): let the node sync from P2P, i.e. go with Recipe 8c (--resync).
In all cases, you reuse the same config.ini (adjusted as needed for your role).
Some upgrades don’t require replay (for example, certain bug-fix releases within the same 1.28.x line – please refer to the release notes for details).
In that case it's enough to stop your current hived-1.28.0 and start hived-1.28.3 to resume its operations.
But make sure you don't use any of --load-snapshot, --force-replay, or --resync.
Part 2 – Docker recipe (Hive 1.28.3)
Same idea as Part 1, just wrapped in a container.
All assumptions from Part 1 still apply (same ~/datadir, same config.ini, optional block_log and snapshot).
Additionally we assume:
- Docker is installed and user
hivecan rundocker.
Recipe 10 – Run Hive 1.28.3 in Docker
Most common Docker run (seed + basic API), using your existing /home/hive/datadir from Part 1:
docker run \
-e HIVED_UID=$(id -u) \
-p 2001:2001 \
-p 8090:8090 \
-p 8091:8091 \
-v /home/hive/datadir:/home/hived/datadir \
hiveio/hive:1.28.3 \
--set-benchmark-interval=100000 \
--load-snapshot=latest \
--replay
What this does:
- runs the image hiveio/hive:1.28.3
- maps your host /home/hive/datadir to /home/hived/datadir inside the container
- exposes ports 2001, 8090, 8091 from the container to the host
- uses your user ID inside the container (HIVED_UID=$(id -u)) so files created by hived are owned by hive on the host
- tells hived to:
  - use /home/hived/datadir/snapshot/latest as the snapshot (--load-snapshot=latest)
  - rebuild state, combining snapshot and block_log as needed (--replay)
  - report periodic benchmark info (--set-benchmark-interval=100000)

You still need:
- config.ini inside /home/hive/datadir (on the host)
- block_log and block_log.artifacts (unless pruned) and a snapshot, exactly like in the bare-metal recipes
For simpler cases:
- if you don't want to use a snapshot, drop --load-snapshot=latest (keep --replay or --force-replay if you want a full replay from block_log)
- if you already have a healthy state and just want to restart, drop both --load-snapshot=latest and --replay
Adjust ports / extra flags to match your intended role (seed / witness / exchange / wallet / API), using the same config.ini rules as in Part 1.
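If you prefer Docker Compose, the docker run above translates roughly as follows. This is a sketch of my own, assuming the image's entrypoint passes the extra command arguments straight to hived (which is how the docker run example uses them):

```yaml
# Hypothetical docker-compose.yml equivalent of the `docker run` above.
services:
  hived:
    image: hiveio/hive:1.28.3
    environment:
      - HIVED_UID=1000   # match your host user's uid (`id -u`)
    ports:
      - "2001:2001"
      - "8090:8090"
      - "8091:8091"
    volumes:
      - /home/hive/datadir:/home/hived/datadir
    command:
      - --set-benchmark-interval=100000
      - --load-snapshot=latest
      - --replay
```

As with docker run, drop --load-snapshot=latest and/or --replay once you have a healthy state and just want restarts.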
TL;DR – "Complete simple recipe" (native binary)
If you just want one long paste (native binary, default config, download block_log, use snapshot):
screen -S hived
mkdir -pv ~/datadir/{blockchain,snapshot} ~/bin
wget https://gtg.openhive.network/get/bin/hived-1.28.3 -nc -P ~/bin
wget https://gtg.openhive.network/get/bin/cli_wallet-1.28.3 -nc -P ~/bin
chmod u+x ~/bin/{hived,cli_wallet}-1.28.3
wget https://gtg.openhive.network/get/blockchain/block_log -nc -P ~/datadir/blockchain
wget https://gtg.openhive.network/get/blockchain/block_log.artifacts -nc -P ~/datadir/blockchain
wget https://gtg.openhive.network/get/snapshot/exchange/latest.tar.bz2 -O - | \
lbzip2 -dc | tar xvC ~/datadir/snapshot
# that will overwrite your config
wget https://gtg.openhive.network/get/snapshot/exchange/example-exchange-config.ini \
-O ~/datadir/config.ini
~/bin/hived-1.28.3 -d ~/datadir --load-snapshot=latest
Estimated times (very rough)
- Sync from scratch – long (a day or two)
- Replay with existing block_log – roughly half that
- Load snapshot (existing block_log or pruned) – up to an hour
Congratulations, you have your Hive HF28 node running (or at least a copy-paste away).