Work by the BlockTrades team slowed a little in the last period, as much of our staff was off for the Corpus Christi holiday, and in the past two days there was a company-sponsored recreational trip to the mountains (and today many people are probably recovering from that). Still, we made progress on several fronts:
Hived work (blockchain node software)
We expect to tag release candidate 4 tomorrow (this version has been running on the testnet for over a week now, but it hasn’t yet been officially tagged because of some false-positive test failures that need to be eliminated).
Assuming no problems are reported for this release candidate by Monday (hopefully the testnet will get a lot of testing this weekend), we’ll merge the code to the master branch and tag an official release of hived (aka v1.25.0).
We’ll also be doing a final review of the build and installation docs, and then we’ll contact exchanges to notify them of the expected hardfork on June 30th so that they can update their wallets ahead of time.
Hivemind (2nd layer applications + social media middleware)
We found a bug in the latest release of hivemind and we’re working on a fix now. This problem manifested most visibly as a reputation calculation error for two Hive accounts, although we believe it could possibly result in other errors.
The root cause of the problem was a code refactoring in which we shared some of the code used during massive sync to also process blocks during live sync. The refactored code processed each block in three separate transactions, and if one of those transactions failed, the work done by the other two still persisted, which could lead to inconsistencies in the data on the failing node.
The likely fix will be to revert to using a single transaction for block processing during live sync, and we'll probably be able to deploy the fix late next week.
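To illustrate why a single transaction matters here, the sketch below is a much-simplified model of the failure mode (our own illustration in Python with SQLite, with made-up table names, not the actual hivemind code): when a block's work is committed in separate transactions, a failure mid-way leaves partial state behind, while a single transaction rolls everything back.

```python
import sqlite3

def make_db():
    conn = sqlite3.connect(":memory:")
    for t in ("posts", "votes", "reputations"):
        conn.execute(f"CREATE TABLE {t} (block_num INTEGER)")
    return conn

def process_block_split(conn, block_num, fail_second_step=False):
    """Each step commits independently; a failure mid-way leaves partial data."""
    conn.execute("INSERT INTO posts VALUES (?)", (block_num,))
    conn.commit()                      # transaction 1 persists on its own
    if fail_second_step:
        raise RuntimeError("step 2 failed")
    conn.execute("INSERT INTO votes VALUES (?)", (block_num,))
    conn.commit()                      # transaction 2
    conn.execute("INSERT INTO reputations VALUES (?)", (block_num,))
    conn.commit()                      # transaction 3

def process_block_atomic(conn, block_num, fail_second_step=False):
    """One transaction per block: a failure anywhere rolls back everything."""
    try:
        conn.execute("INSERT INTO posts VALUES (?)", (block_num,))
        if fail_second_step:
            raise RuntimeError("step 2 failed")
        conn.execute("INSERT INTO votes VALUES (?)", (block_num,))
        conn.execute("INSERT INTO reputations VALUES (?)", (block_num,))
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = make_db()
try:
    process_block_split(conn, 1, fail_second_step=True)
except RuntimeError:
    pass
# Inconsistent: posts saw block 1, but votes and reputations did not.
split_counts = [conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in ("posts", "votes", "reputations")]

conn2 = make_db()
try:
    process_block_atomic(conn2, 1, fail_second_step=True)
except RuntimeError:
    pass
atomic_counts = [conn2.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                 for t in ("posts", "votes", "reputations")]
print(split_counts, atomic_counts)   # [1, 0, 0] [0, 0, 0]
```

The atomic version either applies a block fully or not at all, which is the property the planned fix restores for live sync.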
Fix for hivemind sync time regression
We also tested and fixed a regression in hivemind sync time: the query plan for the update_posts_rshares query sometimes inappropriately included a sequential scan, which added about 12 hours to the initial sync time for a database with 53M blocks.
Our initial test of the fix shows a dramatic improvement (the query now takes only 15 minutes), but we need to test more to ensure that the improvement persists under all conditions. The initial performance test of the fix started from a database dump file, whereas the 12-hour measurement was made on a database constructed from a full sync, so we still need to test performance under a full hivemind sync (which takes several days) to make a strictly accurate comparison.
I consider the current fix a bit of a workaround, because we’re essentially overriding the query planner’s plan based on its internal estimates and forcing it to avoid using a sequential scan. Later we will try to get the planner to estimate better by improving its collection of statistics and weighting of costs, because we know similar bad estimates were causing postgres 13 to apply inappropriate “just-in-time” optimizations to this same query. So it is conceivable that if we are able to correct the planner’s estimation process, we’ll get more optimal performance as the data’s statistical “shape” changes over time.
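For readers unfamiliar with the underlying issue, here is a small self-contained illustration of the difference between a sequential scan and an index scan in a planner's output. It uses SQLite rather than postgres (purely so it runs anywhere), with made-up table names; the actual hivemind fix, which overrides the postgres planner, is not shown.

```python
import sqlite3

# Build a small table of fake post data (names are ours, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post_rshares (post_id INTEGER, rshares INTEGER)")
conn.executemany("INSERT INTO post_rshares VALUES (?, ?)",
                 [(i, i * 10) for i in range(1000)])

query = "SELECT rshares FROM post_rshares WHERE post_id = 42"

# Without an index, the planner has no choice but a full sequential scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# With an index on the filtered column, it can jump straight to the row.
conn.execute("CREATE INDEX idx_post_id ON post_rshares (post_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)   # a SCAN of the whole table
print(plan_after)    # a SEARCH using idx_post_id
```

A sequential scan reads every row, so its cost grows with table size; the regression described above was the postgres planner mis-estimating costs and choosing a scan even when a cheaper access path existed.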
Hive Application Framework (HAF is new "official" name for modular hivemind)
Unless there’s a popular uprising, I think I’ve settled on an official name for the project that we’ve been developing under the “codename” of modular hivemind: Hive Application Framework aka HAF.
I think Hive Application Framework is much more descriptive of the actual functionality, and it is more clearly distinguishable from the hivemind social media application. Plus it lends itself better to promotional taglines like: “HAF your blockchain app is already done!”
Initial release of HAF expected sometime in July
For the past week, we’ve been reviewing and improving the code and documentation for HAF, in particular the code that provides automated fork handling. We had a teleconference on Monday to discuss the state of the work and possible further enhancements, and based on my positive feelings from that meeting, I estimate we will be able to do an official release of HAF sometime in July.
Ongoing work on HAF
The most recent change being made is to simplify swapping back and forth between a Hive application that only relies on irreversible blocks versus one that also processes reversible blocks.
On a related note, we’re also going to try to automate the optimization of block processing when a process that works on reversible blocks is currently processing irreversible blocks (originally the plan was to require the app to signal when it planned to work in an optimized fashion on irreversible blocks).
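The reversible-versus-irreversible distinction above can be sketched roughly as follows. This is our own hypothetical Python model, not HAF's actual API: irreversible blocks are applied without keeping any undo data (the optimized path), while reversible blocks record undo entries that are replayed in reverse if a fork switches out the head of the chain.

```python
class BlockProcessor:
    """Toy model of fork-aware block processing (names are ours, not HAF's)."""

    def __init__(self):
        self.state = {}          # post_id -> derived value maintained by the app
        self.reversible = []     # (block_num, undo_list) awaiting irreversibility

    def apply(self, block_num, ops, irreversible):
        undo = []
        for post_id, value in ops:
            undo.append((post_id, self.state.get(post_id)))  # remember old value
            self.state[post_id] = value
        if irreversible:
            return               # optimized path: no undo data needs to be kept
        self.reversible.append((block_num, undo))

    def on_irreversible(self, block_num):
        # Blocks at or below block_num can never be reverted; drop their undo data.
        self.reversible = [(n, u) for n, u in self.reversible if n > block_num]

    def on_fork(self, fork_block):
        # Revert every reversible block above the fork point, newest first.
        while self.reversible and self.reversible[-1][0] > fork_block:
            _, undo = self.reversible.pop()
            for post_id, old in reversed(undo):
                if old is None:
                    self.state.pop(post_id, None)
                else:
                    self.state[post_id] = old

proc = BlockProcessor()
proc.apply(1, [("post-a", 10)], irreversible=True)
proc.apply(2, [("post-a", 20), ("post-b", 5)], irreversible=False)
proc.on_fork(1)                  # block 2 turned out to be on a losing fork
print(proc.state)                # {'post-a': 10}
```

An app that only follows irreversible blocks never needs the undo machinery at all, which is why automatically detecting when the optimized path applies (rather than requiring the app to signal it) simplifies switching between the two modes.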
We’ll also be creating a skeleton code template to illustrate how a typical Hive app would use the HAF framework, and later, a full-blown example application.
Updates to sql_serializer plugin for changes to hive_fork_plugin
Before we can do a full test of the framework, we also need to make some improvements to the sql_serializer plugin for hived, which writes data to the postgres database. In particular, this code needs to be updated to support API changes that have been made to the Hive fork manager extension (the SQL code that manages tracking reversible blocks and reverting an application’s table changes when a fork occurs).
The above work will begin tomorrow and I hope we will be able to execute a full test of hived + hivemind sync by late next week.
Call for apps reviewers for HAF
At this point, while it still suffers from some typos which I’ll correct shortly, I think the architectural documentation for HAF is good enough that I want feedback from all app developers with an existing or planned app, especially any applications that require their own database and custom API.
Please review the architectural doc and help us create the best possible starting point for Hive apps. Here’s a link to the most recent version of that document: https://gitlab.syncad.com/hive/psql_tools/-/blob/mi_hive_fork_plugin2/src/hive_fork/Readme.md
Wonderful to see the transparency of your developments. So much going on that I miss due to business but always informed when I catch a blocktrades post.
Thanks to the team for all that you do!
Expect more puns around HAF :) Sounds like all is going well. Finding bugs is good if they are getting fixed. No software is ever perfect.
I'm hoping all goes smoothly for the hardfork so you can get a bit of a break.
Cheers!
the HAF and HAF nots 🤣
You don't know the HAF of it.
🤣
Mine's a HAF!
We HAF ways of making you talk!
!BEER
😂 "I can't do that dave" - HAF 3000
Just get David HasselHAF in to sing to us.
Ahaha, these could go on forever, they were right.
A little dev humor!
Thanks for keeping us updated.
As always, thank you very much for the update. Can’t wait to see how this works after its launch on June 30.
keep us posted.
Cool update
I enjoy the blocktrades platform and look forward to any future upgrades. Great work and thanks for the update!
great work!
Thanks for new updates
Is there any Hive-based exchange like Ethereum has with Uniswap, so I can convert Kanda tokens? I am new here, so sorry about the lame question.
Hive-engine
Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s) :
Your next payout target is 1185000 HP.
The unit is Hive Power equivalent because your rewards can be split into HP and HBD
You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word
STOP
🥳 but at the same time... for most users... 😱
How about Hive Application Live Framework? Then you can say HALF the job is done.
As usual, I only understand about 35% but I’m glad development is continuing, we are positioned really nicely with some of the development of dapps and communities. I think we are about to see some really nice growth.
Excited with the direction of Hive, planning to make dApps soon 😍
Finally some "posting" movement. Been looking at those commits for a while... trying to understand what was being forged.
Question about HBD interest:
My next HBD interest payout is tomorrow. The payout after that would be 30 days later, so after HF25 is in effect. If I continue to hold HBD outside of savings, will I get any interest after the next payout, or will I need to put my HBD into savings to get full interest for the 30-day period after the fork?
It is explained here: https://peakd.com/hive-102930/@blocktrades/important-note-for-anyone-holding-hbd-before-hardfork-25