Work by the BlockTrades team slowed a little in the last period, as much of our staff was off for the Corpus Christi holiday, and in the past two days there was a company-sponsored recreational trip to the mountains (today many people are probably still recovering from that). Still, we made progress on several fronts:
Hived work (blockchain node software)
We expect to tag release candidate 4 tomorrow (this version has been running on the testnet for over a week now, but it hasn’t yet been officially tagged because of some false-positive test failures that need to be eliminated).
Assuming no problems are reported for this release candidate by Monday (hopefully the testnet will get a lot of testing this weekend), we’ll merge the code to the master branch and tag an official release of hived (aka v1.25.0).
We’ll also do a final review of the build and installation docs, and then we’ll notify exchanges of the expected hardfork on June 30th so that they can update their wallets ahead of time.
Hivemind (2nd layer applications + social media middleware)
We found a bug in the latest release of hivemind and we’re working on a fix for it now. The problem manifested most visibly as a reputation calculation error for two Hive accounts, but we believe it could cause other errors as well.
The root cause of the problem was a refactoring that allowed some of the code used during massive sync to also process blocks during live sync. The refactored code processed each block in three separate transactions, and if one of those transactions failed, the work done by the other two still persisted, which could leave the data on the failing node in an inconsistent state.
The likely fix will be to revert to using a single transaction for block processing during live sync, and we'll probably be able to deploy the fix late next week.
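To illustrate the failure mode, and why a single transaction eliminates it, here’s a minimal sketch using SQLite in place of Postgres, with invented table names standing in for hivemind’s real schema:

```python
import sqlite3

TABLES = ("posts", "votes", "reputations")  # invented names for illustration

def process_block_three_txns(conn, block_num, fail_step=None):
    """The buggy pattern: one block handled in three independent transactions.
    If one step fails, work committed by the other steps persists anyway."""
    for step, table in enumerate(TABLES):
        try:
            with conn:  # each 'with' block commits (or rolls back) on exit
                if step == fail_step:
                    raise RuntimeError("step failed")
                conn.execute(f"INSERT INTO {table} VALUES (?)", (block_num,))
        except RuntimeError:
            pass  # the other steps still commit -> inconsistent data

def process_block_single_txn(conn, block_num, fail_step=None):
    """The fix: all of a block's work in one transaction, so a failure
    anywhere rolls back the entire block."""
    try:
        with conn:
            for step, table in enumerate(TABLES):
                if step == fail_step:
                    raise RuntimeError("step failed")
                conn.execute(f"INSERT INTO {table} VALUES (?)", (block_num,))
    except RuntimeError:
        pass  # nothing from this block was committed

conn = sqlite3.connect(":memory:")
for t in TABLES:
    conn.execute(f"CREATE TABLE {t} (block_num INTEGER)")
conn.commit()

counts = lambda: [conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                  for t in TABLES]

process_block_three_txns(conn, 1, fail_step=1)
print(counts())  # [1, 0, 1] -- block 1 was only partially applied

process_block_single_txn(conn, 2, fail_step=1)
print(counts())  # still [1, 0, 1] -- the failed block left no partial writes
```

With the single-transaction version, a failure at any step leaves the database exactly as it was before the block was processed.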
Fix for hivemind sync time regression
We also tested and fixed a regression in hivemind sync time: the query planner for the update_posts_rshares query was sometimes inappropriately choosing a sequential scan, which added about 12 hours to the initial sync time for a database with 53M blocks.
Our initial test of the fix shows a dramatic improvement (the query now takes only 15 minutes), but we need to test more to ensure the improvement persists under all conditions. The initial performance test was performed starting from a database dump file, whereas the 12-hour measurement was made on a database constructed from a full sync, so we still need to measure performance during a full hivemind sync (which takes several days) for a strictly accurate comparison.
I consider the current fix a bit of a workaround: we’re essentially overriding the plan the query planner derives from its internal estimates and forcing it to avoid a sequential scan. Later we’ll try to get the planner to estimate better by improving its statistics collection and cost weighting, because we know similar bad estimates were causing Postgres 13 to apply inappropriate “just-in-time” optimizations to this same query. If we can correct the planner’s estimation process, we should get closer-to-optimal performance as the data’s statistical “shape” changes over time.
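For readers unfamiliar with the choice the planner is weighing here, the sequential-scan-versus-index-search distinction can be illustrated with SQLite’s EXPLAIN QUERY PLAN (Postgres’s knobs such as enable_seqscan differ, and the table and query below are invented, not hivemind’s real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, rshares INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, i * 10) for i in range(1000)])
conn.commit()

def plan(sql):
    # The 'detail' column describes the access path the planner chose.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT rshares FROM posts WHERE id = 42"

before = plan(query)  # no index: a full sequential scan, e.g. 'SCAN posts'

conn.execute("CREATE INDEX posts_id_idx ON posts (id)")
conn.execute("ANALYZE")  # refresh the statistics the planner estimates from

after = plan(query)   # now an index search, e.g. 'SEARCH posts USING INDEX ...'
print(before, "->", after)
```

In Postgres the index already existed; the problem was the planner’s cost estimate making the scan look cheaper, which is why the fix overrides the plan rather than adding an index.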
Hive Application Framework (HAF is new "official" name for modular hivemind)
Unless there’s a popular uprising, I think I’ve settled on an official name for the project that we’ve been developing under the “codename” of modular hivemind: Hive Application Framework aka HAF.
I think Hive Application Framework is much more descriptive of the actual functionality and it is more clearly distinguishable from the hivemind social media application. Plus it lends itself better to promotional taglines like: “HAF your blockchain app is already done!"
Initial release of HAF expected sometime in July
For the past week, we’ve been reviewing and improving the code and documentation for HAF, in particular the code that provides automated fork handling. We had a teleconference on Monday to review the state of the work and discuss further enhancements, and based on my positive feelings from that meeting, I estimate we will be able to do an official release of HAF sometime in July.
Ongoing work on HAF
The most recent change simplifies swapping back and forth between a Hive application that relies only on irreversible blocks and one that also processes reversible blocks.
On a related note, we’re also going to try to automate the optimization of block processing for the case where an app that handles reversible blocks is currently processing irreversible blocks (originally the plan was to require the app to signal when it intended to work in optimized fashion on irreversible blocks).
We’ll also be creating a skeleton code template to illustrate how a typical Hive app would use the HAF framework, and later, a full-blown example application.
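As a rough illustration of the pattern such a skeleton might follow (all names here are invented; this is not the actual HAF API), the two processing modes described above could look like this:

```python
from dataclasses import dataclass

@dataclass
class Block:
    num: int
    irreversible: bool

def app_main_loop(blocks, state):
    """Hypothetical HAF-style app loop (sketch only).

    While blocks are already irreversible, the app can batch its work and
    skip fork-handling bookkeeping; once it reaches reversible blocks,
    every change must be recorded so it can be rolled back on a fork.
    """
    for block in blocks:
        if block.irreversible:
            # optimized path: no undo data needed, safe to batch
            state["processed"].append(block.num)
        else:
            # live path: keep undo info so a fork can revert this block
            state["undo_log"].append(block.num)
            state["processed"].append(block.num)
    return state

state = app_main_loop(
    [Block(1, True), Block(2, True), Block(3, False)],
    {"processed": [], "undo_log": []},
)
print(state)  # {'processed': [1, 2, 3], 'undo_log': [3]}
```

Automating the switch between these two paths, rather than requiring the app to signal it, is the optimization described above.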
Updates to sql_serializer plugin for changes to hive_fork_plugin
Before we can do a full test of the framework, we need to make some improvements to the sql_serializer plugin for hived, which writes data to the postgres database. In particular, this code needs to be updated to support API changes made to the Hive fork manager extension (the SQL code that tracks reversible blocks and reverts an application’s table changes when a fork occurs).
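Conceptually, the fork manager’s revert behavior can be sketched as an undo log: record the prior state for every reversible change, then replay the log backwards when a fork switches branches. A toy Python model (invented API, standing in for the extension’s real SQL):

```python
class ForkManager:
    """Toy model of fork handling, for illustration only: record each
    reversible change with enough information to undo it, and roll
    changes back newest-first when a fork occurs."""

    def __init__(self):
        self.table = {}     # app table: key -> value
        self.undo_log = []  # (block_num, key, old_value)

    def set(self, block_num, key, value):
        # Save the previous value (None if the key didn't exist yet).
        self.undo_log.append((block_num, key, self.table.get(key)))
        self.table[key] = value

    def pop_blocks_after(self, last_good_block):
        # Undo changes until only blocks <= last_good_block remain.
        while self.undo_log and self.undo_log[-1][0] > last_good_block:
            _, key, old = self.undo_log.pop()
            if old is None:
                self.table.pop(key, None)
            else:
                self.table[key] = old

fm = ForkManager()
fm.set(100, "alice.balance", 10)
fm.set(101, "alice.balance", 25)  # change on the soon-to-be-orphaned branch
fm.set(101, "bob.balance", 5)
fm.pop_blocks_after(100)          # a fork replaces block 101
print(fm.table)  # {'alice.balance': 10}
```

The real extension does this with SQL triggers and shadow tables rather than an in-memory dict, but the revert-in-reverse-order principle is the same.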
The above work will begin tomorrow and I hope we will be able to execute a full test of hived + hivemind sync by late next week.
Call for apps reviewers for HAF
At this point, while the architectural documentation for HAF still suffers from some typos that I’ll correct shortly, I think it is good enough that I want feedback from all app developers with an existing or planned app, especially any whose applications require their own database and custom API.
Please review the architectural doc and help us create the best possible starting point for Hive apps. Here’s a link to the most recent version of that document: https://gitlab.syncad.com/hive/psql_tools/-/blob/mi_hive_fork_plugin2/src/hive_fork/Readme.md