Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report. I hope we’ll have a new release of hived, HAF, and various HAF apps and tools in about two weeks, but it has been a while since my last update, and these posts would get too long to write if I waited much longer. I’m also preparing to go to HiveFest, so I’m going to try to keep this post pretty terse, with links to details about the work. I’d also like to point out that these are only some highlights: there are too many devs working on this now for me to report all the improvements (even some I’ve worked on personally aren’t mentioned).
Hived: new features, optimizations, and bug fixes
- Optimized block broadcasting
- Fixed bugs found during testing of the Beekeeper tool
- Fixed bugs in the code that prevents memo key leaks, and extended that code to cover recurrent_transfer operations
- HF28: remove the limit on vote edits
- HF28 prerequisites to clear the artificial value of 1 in the power-down rate for accounts that are not powering down
- Fixed a problem where the witness plugin accessed a comment before it was created
- Unified logging in hived and eliminated direct cout/cerr prints in the appbase and chainbase code to avoid confusing ordering of log messages
- Fixed hived shutdown issues that could result in an unclean shutdown during P2P sync, which in turn could corrupt a HAF database
- Fixed bugs in debug_node_api to eliminate random failures in CI
- Fixed testnet build bugs that could lead to a hang during block generation
- Added many new regression tests related to private key leaks, plus dedicated operation-testing scenarios (limit_order2, transfer, transfer_from_savings, transfer_to_savings)
- Fixed and simplified automatic mirrornet conversion job: https://gitlab.syncad.com/hive/hive/-/merge_requests/1014
- Job parameter unification: https://gitlab.syncad.com/hive/hive/-/merge_requests/994
Denser: new web interface for Hive
We’ve added several devs to this project since my last report. The current status of the UI can be tracked here: https://gitlab.syncad.com/hive/denser/-/wikis/Comparison-of-views-of-the-Denser-project-with-the-old-Hive-Blog
Clive: a new wallet for Hive
Clive has been undergoing a lot of work lately: we’ve made many improvements and bug fixes and implemented support for an external CLI mode (i.e. it enables executing individual wallet commands from a bash script, as sketched below).
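As a rough illustration of the idea (not Clive’s actual interface: the command and subcommand names below are hypothetical placeholders), scripting individual wallet commands could look like this; the text above mentions bash, but the same pattern works from Python:

```python
# Sketch of scripting a wallet CLI from Python (hypothetical command names;
# Clive's real interface may differ).
import subprocess

def run_wallet_command(args: list[str]) -> str:
    """Run a single wallet CLI command and return its stdout."""
    result = subprocess.run(
        ["clive", *args],  # hypothetical executable/subcommand names
        capture_output=True,
        text=True,
        check=True,  # raise if the command exits non-zero
    )
    return result.stdout

if __name__ == "__main__":
    print(run_wallet_command(["show", "balances"]))  # hypothetical subcommand
```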
We’re releasing an initial version (deployed as a docker image) that supports transfer operations as a technology preview, with a full release to follow later this year. I’ll make a separate post about how to get and/or build the initial version soon.
HAF: the SQL-based Hive Application Framework
HAF is a SQL-based framework for creating highly scalable and robust 2nd layer apps that operate using data from the Hive blockchain.
HAF bug fixes and test improvements
- Fixed bugs in the metadata state provider (found during development of the block explorer backend)
- Fixed the HAF instance upgrade procedure
- Added new stored procedures for context management that perform transaction commits correctly
- Added row-level security rules that allow correlated database roles to view hive.context data
- Improved JSONB conversion and error/exception management in the C++ code of the HAF PostgreSQL extension
- Added prerequisites for docker compose deployments
I’ve also been benchmarking HAF under various conditions, including running on a server with only 32GB of memory; results are posted here: https://gitlab.syncad.com/hive/haf/-/wikis/home
An “irreversible block only” version of HAF may be in the cards
I recently had an idea to provide an “irreversible blocks only” version of HAF (enabled by a simple command-line option on a regular HAF server), for performance optimization purposes.
For most blocks, such a server would lag only a couple hundred milliseconds behind a normal “reversible” HAF server, thanks to one-block irreversibility (OBI), and queries might run as much as twice as fast (just my guess at this point, pending actual benchmarks).
A HAF app would work on either configuration of the HAF server without any changes. It would basically be like a global switch that automatically converts all the HAF apps on that server into “irreversible block only” apps (a rough sketch of why apps don’t need changes follows below).
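The reason no app changes are needed is that a HAF app doesn’t decide for itself which blocks to process: it repeatedly asks HAF for the next range of blocks assigned to its context. Below is a minimal sketch of that loop, assuming psycopg2 and HAF’s hive.app_next_block() function (my reading of the HAF API; treat the exact names and signature as illustrative). On an “irreversible block only” server, the same call would simply never hand the app a block that isn’t yet irreversible:

```python
# Minimal sketch of a HAF app's sync loop (illustrative; verify the exact
# HAF function names/signatures against the HAF docs).
import psycopg2

def sync_loop(db_url: str, context: str) -> None:
    conn = psycopg2.connect(db_url)
    while True:
        with conn.cursor() as cur:
            # Ask HAF which blocks this app's context should process next.
            cur.execute("SELECT * FROM hive.app_next_block(%s)", (context,))
            first_block, last_block = cur.fetchone()
            if first_block is not None:
                process_blocks(cur, first_block, last_block)
        # Committing the transaction advances the app's context in HAF.
        conn.commit()

def process_blocks(cur, first_block: int, last_block: int) -> None:
    # App-specific indexing logic goes here; it is identical whether the
    # server runs in normal mode or the proposed "irreversible only" mode.
    pass
```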
Docker compose scripts for easy deployment of API nodes
We’re creating docker scripts and documentation covering best practices for setting up an API node. The idea is to enable setting up an entire API node with just two or three commands, plus an environment config file that bind-maps node-related storage to various locations in the local file system. These scripts will be located in a new repo in the Hive group.
As part of this work, we’re developing and testing health checks for the various apps, especially the new HAF apps (a sketch of what such a check can look like follows below), and much of this work will be re-usable for future HAF apps.
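As a hedged example (my own illustration, not the actual scripts under development): a basic liveness check for a Hive API node can call the standard database_api.get_dynamic_global_properties JSON-RPC method and report the node unhealthy if its head block time lags too far behind the current time:

```python
# Illustrative health check for a Hive API node: compare the node's head
# block time against the current time (not the actual scripts in development).
from datetime import datetime, timezone

import requests

def node_is_healthy(endpoint: str, max_lag_seconds: float = 30.0) -> bool:
    payload = {
        "jsonrpc": "2.0",
        "method": "database_api.get_dynamic_global_properties",
        "params": {},
        "id": 1,
    }
    resp = requests.post(endpoint, json=payload, timeout=5)
    resp.raise_for_status()
    # Head block time is reported as an ISO-style UTC timestamp.
    head_time = datetime.strptime(
        resp.json()["result"]["time"], "%Y-%m-%dT%H:%M:%S"
    ).replace(tzinfo=timezone.utc)
    lag = (datetime.now(timezone.utc) - head_time).total_seconds()
    return lag <= max_lag_seconds

if __name__ == "__main__":
    print(node_is_healthy("https://api.hive.blog"))  # example public endpoint
```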
We’ve also been testing routine maintenance needs such as shutting down and bringing back up various subsystems on the API node and we’ve found and fixed several bugs during this process, as well as made various performance improvements.
As a best practice, we’re strongly recommending the use of ZFS for deployment of future API nodes: in fact, the docker compose scripts assume the API node will be configured and maintained on a ZFS filesystem and we expect considerable extra setup and maintenance effort will be required for anyone who wants to avoid ZFS.
One of the key drivers for selecting ZFS is that we plan to regularly provide ZFS snapshots containing hived/HAF synced to the latest Hive blockchain headblock (originally we were using pg_dump/pg_restore to provide filled HAF databases, but we found that ZFS snapshots were a much better solution).
ZFS compression also dramatically lowers the disk storage required for a full API node. This approach also lets us provide API node operators with a suggested optimal partitioning of the node’s storage in terms of space and performance needs.
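Incidentally, ZFS itself reports how much compression is actually achieved per dataset, so operators can easily verify the savings. A small sketch (the dataset name below is a made-up placeholder):

```python
# Query ZFS for the achieved compression ratio of a dataset using the
# standard `zfs get` command (dataset name is a made-up placeholder).
import subprocess

def compression_ratio(dataset: str) -> str:
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "compressratio", dataset],
        capture_output=True,
        text=True,
        check=True,
    )
    return out.stdout.strip()  # e.g. "2.50x"

if __name__ == "__main__":
    print(compression_ratio("tank/haf"))  # hypothetical pool/dataset name
```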
The default docker compose script to set up an API node will launch dockers for:
- hived to connect to the hive network (currently this docker also contains the HAF database server, but in the future these will be deployed as individual dockers)
- HAF to act as a database for Hive blockchain data received from hived
- haproxy for load balancing of API calls
- jussi for routing, caching, and legacy processing of json-based API calls
- varnish for caching of new REST-based API calls
- various HAF apps: the HAfAH API (for Hive account history info), the block explorer API and UI for lookup of general blockchain data, and the Hivemind API (for Hive social media applications). A HAF app is usually deployed as several dockers. For example, for the block explorer, one docker runs the app’s main event loop on the SQL server and another runs PostgREST to serve REST calls made to the database.
The addition of varnish is due to another point I should highlight: for future APIs, we’re mainly adding REST-based APIs, as we found this yields superior performance when the calls are served by PostgREST (a rough comparison of the two calling styles is sketched below).
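To make the difference between the two styles concrete, here’s a hedged sketch: the JSON-RPC call below uses the existing account_history_api, while the REST path is a made-up placeholder rather than the final published API:

```python
# Comparing legacy JSON-RPC calls with the newer REST style (illustrative
# only; the REST path below is a made-up placeholder).
import requests

NODE = "https://api.hive.blog"  # example public endpoint

# Legacy style: JSON-RPC POST, routed/cached by jussi.
rpc_resp = requests.post(NODE, json={
    "jsonrpc": "2.0",
    "method": "account_history_api.get_ops_in_block",
    "params": {"block_num": 5000000, "only_virtual": False},
    "id": 1,
}).json()

# New style: a plain REST GET served by PostgREST and cacheable by varnish
# (path is hypothetical).
rest_resp = requests.get(f"{NODE}/hafah/blocks/5000000/operations").json()
```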
HELpy: a new Python library for Hive
We’re also developing HELpy, a new general-purpose Python-based library for Hive, but it is still at an early stage of development. Its initial uses in our projects include integration into Clive and test-tools (in test-tools it will replace calls to the old cli_wallet). After it has been used successfully there, it should be well-tested enough for general use.
Ongoing and upcoming tasks
- New release candidates of hived, HAF, and HAF apps for 1.27.5 (likely in two weeks).
- Finish docker compose scripts to ease deployment of API node infrastructure (this will be part of the new release).
- Finish up the initial version of the HAF-based block explorer backend and GUI (hopefully also part of the new release, but that is less certain).
- Integrate the keyauth state provider supplied by HAF into the HAF block explorer.
- Continue work toward the initial Denser release.
- Continue work on Consensus State Providers (for more powerful HAF apps).
- Add support for more operations to Clive wallet.
- Collect benchmarks for a HAfAH app operating in “irreversible block mode” and compare them to the same app operating in “normal” mode (low priority).
- Publish more documentation for various new tools (Beekeeper, Clive, consensus state providers) and separate HAF documentation into smaller, more easily digestible chunks.
- Benchmark some of the recent performance improvements in HAF and HAF-based apps.
- Deploy an updated version of HAF to our publicly-accessible HAF server.