Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.
Release candidate of hived available now
The most important accomplishment of this period is that, as of today, we’ve tagged a release candidate of hived for hardfork 26. Here’s the merge request, with an incomplete summary of the changes since the last hardfork: https://gitlab.syncad.com/hive/hive/-/merge_requests/647 . The summary there is mostly aimed at other devs, so I expect we’ll see a post from @hiveio before too long that covers the changes in a way that is more accessible to a larger audience.
Although it's been announced in the past, please be aware that the new version of hived requires Ubuntu 20 or later to compile. Most of our testing has been done with Ubuntu 20 and 22.
Weeks of testing, bug fixes, and ...
Over the past several weeks, we’ve mainly just been creating new tests, manually testing via the mirrornet, benchmarking performance, and making bug fixes for hived as our testing has uncovered them.
Our regression tests allowed us to detect many of the “simple” bugs, while the richer interactions and longer-term testing achievable with the mirrornet enabled us to detect some of the more subtle ones (one crashing bug took 5 days to trigger on the mirrornet).
All in all, I suspect we now have the most robust testing environment among blockchain projects of any complexity. That doesn’t mean it is perfect yet, by any means: we’re still working through some issues, such as intermittent false-positive test failures that sometimes require a test to be run more than once to pass, and we still have a decent backlog of regression tests to write. But the mirrornet has done a good job of filling the gaps left by the regression tests that haven’t been written yet.
... more optimizations
When we’ve made test fixes, we’ve sometimes had the opportunity to make further performance improvements as part of the fix. As one example, we recently fixed a bug in the fork-handling code and, in the process, eliminated some unnecessary popping and re-applying of blocks, which will improve node performance when the network sees multiple competing forks. This will, for example, make the network more robust during a denial-of-service attack on the block producer nodes, or when the underlying network is experiencing link outages or routing delays.
Call for Hive app devs to test against the mirrornet to prepare for the hardfork
In the past, Hive apps have often waited until after a hardfork before doing their most serious testing. Previously, the best that could be done was to set up a testnet with the hardfork already executed, then set up an API node on that testnet for apps to test against. But such testnets lacked real-world traffic, so testing there wasn’t really comparable to testing against the mainnet after the real hardfork.
With the mirrornet, this is no longer true. All Hive apps can easily point their application software to the mirrornet api node being operated by @gtg and quickly test out their app against real world data being transported from the mainnet to the mirrornet.
Only two easy steps are needed to test your app using the mirrornet:
- To read mirrornet data, use https://api.fake.openhive.network as your API node
- To write mirrornet transactions, tell your app to use a chain id of 42 (instead of the mainnet chain id of 0xbeeab0de) when it signs transactions.
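As a rough illustration of why these are the only two client-side changes needed: Hive apps talk to their API node via JSON-RPC, and in Graphene-derived chains like Hive the digest that actually gets signed is the SHA-256 of the chain id bytes concatenated with the serialized transaction. The sketch below is not official Hive library code; the helper names are my own, and the exact byte form of the mirrornet’s “42” chain id depends on your signing library, so verify it against your tooling:

```python
import hashlib
import json

# Mirrornet API node from the post above.
MIRRORNET_API = "https://api.fake.openhive.network"

def jsonrpc_request(method: str, params, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 payload a Hive app POSTs to its API node.
    Pointing an app at the mirrornet only changes the URL it POSTs to."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}
    )

def signing_digest(chain_id: bytes, serialized_tx: bytes) -> bytes:
    """Graphene-style signing digest: sha256(chain_id || serialized transaction).
    Switching from mainnet to mirrornet means switching the chain_id bytes
    passed in here -- nothing else about signing changes."""
    return hashlib.sha256(chain_id + serialized_tx).digest()
```

In other words, an app would POST `jsonrpc_request(...)` payloads to `MIRRORNET_API` for reads, and pass the mirrornet chain id (in whatever 32-byte form its signing library expects) to the digest step for writes.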
We’re currently hosting a copy of condenser on the mirrornet that uses the mirrornet API node. We’re also testing that HAF servers work on the mirrornet, including the HAF-based version of hivemind, and we’ll likely switch the mirrornet API node to that version of hivemind after apps have finished testing against the old hivemind implementation currently running there.
Also, I learned today that the peakd team has setup a block explorer for the mirrornet, which will likely be very useful when apps are doing their testing: https://mirrornet.hivehub.dev
We officially incorporated support for the get_block API into HAF (this will allow us to stop loading down hived nodes with get_block calls, as these have traditionally been very expensive). Performance is already pretty good, but from what I’ve been told, there is still room for further improvement.
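From an app’s point of view, nothing changes when get_block moves behind HAF: the request shape is the same JSON-RPC call, just answered by HAF instead of hived. A minimal sketch, assuming the standard `block_api.get_block` method (older apps may use `condenser_api.get_block` instead):

```python
import json

def get_block_request(block_num: int) -> str:
    """Build the JSON-RPC payload for fetching one block. Whether hived or
    HAF serves the response is invisible to the calling app."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "block_api.get_block",
        "params": {"block_num": block_num},
        "id": 1,
    })
```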
Also, last I heard from one of the devs working on it, we’re about 90% of the way to a draft release of the HAF-based block explorer, so we may deploy that in a few weeks as well.
Some upcoming tasks
- Continue documenting all major code changes since the last hardfork
- Test and update deployment documentation for hived and HAF apps.
- Support Hive app devs as they begin testing their apps against the mirrornet
- More testing of the hived release candidate on a production API node to confirm real-world performance improvements
- Continued testing on the mirrornet
- Finish testing and dockerizing the HAF-based hivemind and use its docker image in CI tests
- Continue work on HAF-based block explorer
- Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode (low priority)
When hardfork 26?
The new release candidate has an informal hardfork date of Oct 11th. The code requires us to always specify some date, and this was the earliest “feasible” date we considered (feasible in terms of giving the rest of the ecosystem sufficient time to prepare for the hardfork). Discussions among the various stakeholders supporting the ecosystem (core devs, app devs, witnesses, API node operators, etc.) will be necessary before this or a later date can be made official. And, of course, it is always possible that we encounter some problematic error despite our quite rigorous testing that pushes the date back (though I think that is unlikely).
I hope to firm up whether this date is acceptable to everyone in the next few days. A formal announcement of the official date should follow shortly from the hiveio account after we’ve got some consensus on the date.