Hive HardFork 25 Jump Starter Kit

in Blockchain Wizardry · last year (edited)

Intended for Hive API node operators, witnesses, and developers.

At the time of the Eclipse release I made a similar post that saved many (hours of) lives, so I’m creating an updated one for the upcoming Hard Fork 25.

Yes, new Hive Hard Fork, new fancy logo reveal.


Our core development efforts take place in a community-hosted GitLab repository (thanks @blocktrades). There's Hive core itself, but also many other Hive-related software repositories.

We use GitHub as a push mirror for the GitLab repository, mostly for visibility and decentralization. If you have an account on GitHub, please fork at least hive and hivemind and star them if you haven’t done so yet. We haven't paid much attention to it, but apparently it's important for some outside metrics.

Please click both buttons


API node
Soon to be switched to v1.25.0, but because it’s heavily used in Hive-related R&D it might not be your best choice if you are looking for a fast API node without any rate limiting. During maintenance mode, it will fall back to

Seed node

hived v1.25.0 listens on
To use it in your config.ini file, just add the line:

p2p-seed-node =

If you don't have any p2p-seed-node = entries in your config file, built-in defaults will be used (which include my node too).
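For illustration, a minimal seed section of config.ini might look like the fragment below. The host names here are placeholders, not real seed addresses; substitute the address published above or any seed you trust.

```ini
# Hypothetical example entries - replace with real seed addresses.
# Multiple p2p-seed-node lines are allowed; hived will try them all.
p2p-seed-node = seed.example.com:2001
p2p-seed-node = anotherseed.example.net:2001
```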

Stuff for download



./get/bin contains hived and cli_wallet binaries built on Ubuntu 18.04 LTS, which should also run fine on Ubuntu 20.04 LTS


As usual, the block_log file, roughly 350GB and counting.
For testing needs there's also block_log.5M, which is limited to the first 5 million blocks.



./get/snapshot/api/ contains a relatively recent snapshot of the API node with all the fancy plugins.
There’s a snapshot for the upcoming version v1.25.0 but also for the old one v1.24.8 if you need to switch back.
Uncompressed, the snapshot takes roughly 480GB.
There’s also an example-api-config.ini file there that contains settings compatible with the snapshot.

To decompress, you can simply run it through something like: lbzip2 -dc | tar xv
(Using parallel bzip2 on multi-threaded systems might save you a lot of time)
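As a sketch (archive and directory names here are just examples), the pipeline above can be wrapped in a small helper that falls back to plain bzip2 when lbzip2 isn't installed:

```shell
# Extract a .tar.bz2 snapshot archive into a destination directory.
# Prefers parallel lbzip2; falls back to single-threaded bzip2.
extract_snapshot() {
  archive=$1
  dest=$2
  mkdir -p "$dest"
  # Pick the fastest available bzip2 decompressor.
  dc=$(command -v lbzip2 || command -v bzip2)
  "$dc" -dc "$archive" | tar xv -C "$dest"
}
```

Then something like extract_snapshot your-snapshot.tar.bz2 snapshot would unpack it under snapshot/ (use whatever file name you actually downloaded).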

To use snapshot you need:

  • A block_log file, not smaller than the one used when the snapshot was made.
  • A config.ini file compatible with the snapshot (see above), adjusted to your needs, but without changes that would affect the state.
  • A hived binary compatible with the snapshot.

You can find all of that above.

Run hived with --load-snapshot name, assuming the snapshot is stored in snapshot/name
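A small wrapper (data-dir layout and paths are assumptions) that checks the snapshot directory exists before starting hived can save a confusing failed start:

```shell
# Start hived from a named snapshot stored under <datadir>/snapshot/<name>.
# Fails early with a clear message if the snapshot directory is missing.
load_snapshot() {
  name=$1
  datadir=${2:-.}
  if [ ! -d "$datadir/snapshot/$name" ]; then
    echo "snapshot '$name' not found in $datadir/snapshot" >&2
    return 1
  fi
  hived --data-dir "$datadir" --load-snapshot "$name"
}
```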

hived API node runtime currently takes 823GB (incl. shm 19GB, excl. snapshot)


There’s also a snapshot meant for exchanges in ./get/snapshot/exchange/ that lets them get up and running quickly. It requires a compatible configuration, and the exchange account has to be one of those tracked by my node. If you run an exchange and want to be on that list so you can use the snapshot, just let me know.

Hivemind database dump

./get/hivemind/ contains a relatively recent dump of the Hivemind database.
I use self-describing file names: the date when the dump was taken and the revision of hivemind that was running it.
You need at least that hivemind version; also remember about the intarray extension.
Consider running pg_restore with at least -j 6 to run long-running tasks in parallel.
After restoring the database, make sure to run the db_upgrade script.
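The restore steps above could be scripted roughly like this (database name, dump path, and job count are assumptions; the db_upgrade script lives in your hivemind checkout and still has to be run afterwards):

```shell
# Restore a hivemind dump: create the database, enable the intarray
# extension, then restore with parallel jobs.
restore_hivemind() {
  db=$1
  dump=$2
  jobs=${3:-6}
  createdb "$db"
  psql -d "$db" -c 'CREATE EXTENSION IF NOT EXISTS intarray;'
  pg_restore -j "$jobs" -d "$db" "$dump"
}
```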

Even though the database size easily peaks over 750GB during full sync, when restored from the dump it takes roughly 500GB. The dump file itself is just 53GB.

All resources are offered AS IS.


Did my 'ol due diligence and threw a star and fork at the openhive repo. I am excited for HF25 and I'm prepped to upgrade the sicarius witness just about as soon as it rolls out!

Also added support for some of the new ops and features to beempy in a pending merge request, so the library can be used to interact with some of the new features right away for those who prefer to interact with Hive via Python. A couple of people added support in the hive-js library as well, from the looks of it. Shaping up to be a pretty smooth rollout!

Great to hear, thank you! :-)

I know it's too late to comment, but I can't help it because it's really informative. Thank you for sharing.

Hopefully HF25 will increase the real human interaction.
Currently this platform is very disappointing in the social aspect. The average number of comments per post is 2-3, and most of those comments are bot comments.

Voting on comments should be worth more, so there's hope of more engagement. Let's make it social.

Voting on comments... Now that's a goal worth pursuing!

Nothing was stopping users before from voting on comments, other than maybe greed or sloth.

Those damn greedy sloths!!!


- Sloth voting for comments, Hive, 2021, zootopizied.

Well I've done it all along since my vote was worth something. Hive should be social, so I will encourage engagement.

Currently this platform is very disappointing in the social aspect.

Sounds like you are talking about the Earth. True, true. ;-)

While HF25 improves Hive a lot, including better incentives for interaction, there's not much that code can change when it comes to human behavior; in the end it's our job as users.


So far so good...

This account was created today; how is it earning so much in curation rewards?!?

HF25 changed some rules regarding reward algorithms, so I guess people might be trying to benefit from the transition period (pre-HF posts judged with post-HF rules). It might look way off for posts within their curation windows where just one or a few people have voted, but it will smooth out as more people cast their votes.

I hope that HF25 brings with it the reduction of the power down from 13 to 4 weeks. It is difficult to wait that long to withdraw some funds in an emergency, and a shorter lock would encourage investors keeping an eye on the Hive community to put some of their money into it.

No, there's no change to the power down time lock in the upcoming HF.

Nice and hard work I see.😉

Hive hard forks are the best hard forks!

Hive hardforks? As far as I know, there was only one (HF24) so far. Someone will correct me, if I am wrong.

Depends on naming convention; nonetheless the cat's words still hold :-)

What will happen to the hive hardfork? Thx

It will be (hopefully) executed ;-)

What's the list of exchange accounts currently tracked by the exchange snapshot?

account-history-rocksdb-track-account-range = ["binance-hot","binance-hot"]
account-history-rocksdb-track-account-range = ["bittrex","bittrex"]
account-history-rocksdb-track-account-range = ["blocktrades","blocktrades"]
account-history-rocksdb-track-account-range = ["deepcrypto8","deepcrypto8"]
account-history-rocksdb-track-account-range = ["huobi-pro","huobi-pro"]

Cheers! Any idea when the Fork will be complete?!?

30th Jun, 14:00 UTC

Hey thanks, very exact!

Well, if everything goes as planned, assuming that all witnesses vote to approve it.

Oh, I'm sure everything will go to plan!

I mean what could Possibly go wrong?!?


I can think of a few things...

All part of the fun, it'll get there in the end!

Nice ;))

Added a couple of stars on Github for what difference it makes. I'm having fun with some Python scripts around Hive, but not really up to working on the core system. Respect for those who can.

Python skills are enough to write HAF dApps :-) Soon in your blockchain.

I shall HAF to try that. Been having fun with Beem and HiveSQL.


Starred and Forked both GitHub repos. :)

Thank you! :-)

Congratulations @gtg! You received a personal badge!

Happy Hive Birthday! You are on the Hive blockchain for 5 years!

You can view your badges on your board and compare yourself to others in the Ranking

Check out the last post from @hivebuzz:

Hive Power Up Day - July 1st 2021 - Hive Power Delegation

Hello GTG, I've been hanging out in the Hive Discord asking some questions about setting up a witness and an AH API and seed node. Although I am a little confused on a few things, I was hoping you might be able to clear them up for me? I do have a few questions sitting in the Discord when/if you have time as well.

So I was going to have my witness p2p-seed connect to the AH API node that I will also set up. I understand that with a pure witness node, you don't want the p2p endpoint active. On the AH API node, the way I understood it is that it would be viewed as a standalone public API node? Would you just put the URI for the API node in its own config like:
p2p-seed-node =

My intention is to help the Hive network by adding a US-based public API node and helping with Splinterlands API calls in the US. Splinterlands has motivated me to change quite a bit, actually. I am currently an 8-GPU miner and would like to stop doing that in favor of completely supporting Hive. For the coin I mine, the community is lacking and the dev team has yet to respond to my GitHub inquiries about issues I am faced with. My view of the Hive community and its involvement has shown me what I am missing out on. I'd much rather be a part of an active community than an inactive one.

Looking forward to hearing from you sir! Thank you.