
Release of HAF API server stack (version 1.28.3)
As many of you already know, we recently released 1.28.3 of hived with the hardfork scheduled for November 19th.
Tomorrow, as a follow-up, we’re releasing the corresponding version 1.28.3 of the HAF API server stack. This is the set of software used by Hive API servers to support the various Hive APIs such as hafah, hivemind, balance tracker, reputation tracker, block explorer, nft tracker, hivesense, and hafsql (an optional install).
If you’re planning to replay the API stack manually, you should begin updating your API server to the new stack ASAP to allow plenty of time before the hardfork happens. We will also be releasing a ZFS snapshot in the next few days for those who want to avoid an expensive replay. I’ll post details in a comment below this post when the snapshot is available.
New hive profile for witnesses in docker compose scripts
As a side note, we also added a hive profile to the haf_api_node docker compose scripts that allows configuring just a hived witness node (including a price feed). This has only been lightly tested so far, so feedback is welcome.
Server Updates
We’ve already updated our “develop and test” API node, https://api.syncad.com, to the new stack and we will complete updating api.hive.blog to the new stack in the next day or two.
If you support a Hive app and haven’t tested it against api.syncad.com yet, please do so ASAP to avoid unexpected issues caused by changes to the API stack.
Stack replay time
Below are replay times for HAF and the various apps on our fastest system (a very fast 9950X3D with 128GB RAM and two 4TB T705 NVMe drives) using the assisted_startup.sh script (which still uses a ramdisk):
Replay of HAF itself took 12.86 hours.
Then 1 hour to create the block explorer indexes.
Followed by a simultaneous replay of all the apps (time dominated by the hivemind replay, which took 2 days 6 hours).
Followed by 1 hour to create the hivesense HNSW index.
In total, the complete stack replay took 69 hours (2.875 days).
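As a quick sanity check, the per-stage figures quoted above do sum to roughly the quoted total (this is just arithmetic over the numbers in the list, nothing more):

```shell
#!/bin/sh
# Sum the per-stage replay times quoted above (in hours):
# HAF replay, block explorer indexes, app replay (2 days 6 hours),
# and the hivesense HNSW index.
total_hours=$(awk 'BEGIN { printf "%.2f", 12.86 + 1 + (2*24 + 6) + 1 }')
total_days=$(awk "BEGIN { printf \"%.3f\", $total_hours / 24 }")
echo "$total_hours hours = $total_days days"
# prints "68.86 hours = 2.869 days"
```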
We typically do our replays on a very fast machine like the one above, then copy ZFS snapshots to update our other API servers.
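The snapshot-copy workflow can be sketched roughly as follows (shown in dry-run form that only prints the commands; the dataset name "haf-pool/haf-datadir" and host "api2.example.com" are made-up stand-ins, and the actual pool layout on a given server will differ):

```shell
#!/bin/sh
# Hypothetical sketch of propagating a replayed HAF dataset to another
# API server via ZFS. Names below are assumptions, not our real layout.
SNAP="haf-pool/haf-datadir@v1.28.3-replayed"

# On the source machine: snapshot the replayed dataset...
echo "zfs snapshot -r $SNAP"
# ...then stream it to the target server's pool over ssh.
echo "zfs send -R $SNAP | ssh api2.example.com zfs receive -F haf-pool/haf-datadir"
```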
Installation requirements
The default install of the HAF database with ZFS compression uses 2.23TB of storage. We recommend API nodes have at least 4TB of fast storage (e.g. nvme drives) and at least 64GB of RAM for good performance (128GB is better, of course). For the server OS, we recommend Ubuntu 24 or 25.
To install the stack, follow the instructions here: https://gitlab.syncad.com/hive/haf_api_node/README.md
The instructions have changed a bit, so please read the updated README file.
In particular, we no longer recommend using a RAM disk during the replay process (instead there is a reduce_writebacks.sh script which prevents excessive writes to disk storage during the replay). This new method has the secondary benefit that you can no longer forget to move the statefile out of the ramdisk after the replay.
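I haven't looked at what reduce_writebacks.sh actually sets, but the usual way to curb writeback pressure on Linux is via the vm.dirty_* sysctls. A rough illustration of that technique, printed in dry-run form (the sysctl names are real kernel knobs, but the values are my assumptions, not the script's actual settings; applying them requires root):

```shell
#!/bin/sh
# Illustration only: the kind of kernel writeback tuning a script like
# reduce_writebacks.sh might perform. High dirty ratios and long expiry
# times let replay state sit in the page cache instead of being flushed.
for setting in \
    "vm.dirty_background_ratio=80" \
    "vm.dirty_ratio=90" \
    "vm.dirty_expire_centisecs=360000" \
    "vm.dirty_writeback_centisecs=360000"
do
    echo "sysctl -w $setting"
done
```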
The assisted_startup.sh file has also been updated to optionally avoid using a ramdisk (see command-line options for details).
By default, the stack is configured to sync vector embeddings from api.hive.blog. If you want to generate your own embeddings, you will need at least one reasonably powerful GPU and appropriately change the configuration of your .env file.
Optional installs: three web sites and HAF SQL
You can now optionally install three web sites on your API server: HAF block explorer, Denser blog, and Denser wallet.
To install these apps, add the ui service to the PROFILE line in your .env file (you can also choose to install the apps individually if you don’t want all three).
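For illustration, the PROFILE line might then look something like this ("core" and "admin" are placeholders for whatever profiles your .env file already lists; check the README for the actual profile names):

```shell
# Hypothetical .env excerpt: "ui" enables all three web sites;
# the other profile names stand in for your existing configuration.
PROFILE=core,admin,ui
```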
They are installed respectively at routes /explorer, /blog, and /wallet on your API server. For example, on our server, you can access the block explorer UI at https://api.syncad.com/explorer, the blog at https://api.syncad.com/blog, and the wallet at https://api.syncad.com/wallet.
You can also optionally install mahdiyari’s hafSQL app on your API server by adding the hafsql service to your profile. You can find more details about HafSQL here: https://gitlab.com/mahdiyari/hafsql
Thanks for your hard work
BTW 1.28.3 hived is not tagged yet
Thanks... now tagged.
https://gitlab.syncad.com/hive/hive/-/releases/1.28.3
Testing on ARM again... and tomorrow on the new VM.
Super excited about this release!
This is going to be an interesting upgrade. It could also be a double-edged sword: current devs with existing apps may be slow to migrate due to a lack of "motivation" to upgrade, leaving their apps broken for lack of resources (whatever those may be). On the other hand, if planned and executed well, we could see a new breed of devs onboarding (especially those vibe-coders). It could tip either way... just my opinion.
By now I think most apps have been tested against the new stack, but there could definitely be some stragglers; we'll see soon, I guess.
As to new coders, yes, I think there's a good chance we'll see a new breed soon. My near-term goal is to make it reasonable for even "non-programmers" to create Hive apps.
Excellent! Perfect! Everything good and well understood. Thank you very much for the info, instructions and updates.
Therefore, I think I'll get started on updating and setting up all my servers with v1.28.3 right away so they're ready way before November 19th.
Cheers!!
Thank you for letting me know. I appreciate it in advance.
Excellent, I had already read something about this improvement to several aspects of the platform.
Fine work, @blocktrades as always.
Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge