13th update of 2022 on BlockTrades work on Hive software

in HiveDevs • 2 years ago

[Image: blocktrades update.png]

Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.

Hived (blockchain node software)

The work on this hardfork and the associated work on HAF-based projects represents the largest amount of labor ever focused on Hive’s development. We have 12 people from the BlockTrades team (including myself) working full-time on these projects.

This hardfork has also taken the most time of any Hive hardfork to complete, and the sheer amount of work done on the hived blockchain software can be seen in the number of merge requests (these are made to add features or fix bugs in the hived code base). Over the lifetime of Hive, 484 merge requests have been merged into the hive code repository, and more than half of them have been merged since the last hardfork.

Our work over the last week and a half has been focused on creating new tests and on finding and fixing bugs in hived, so work on HAF-related projects has temporarily slowed (though it has continued).

Automated testing (continuous integration tests)

Our automated tests have helped us identify a lot of bugs, but there are costs associated both with updating the tests to account for new hived behavior and with dealing with occasional weaknesses in the tests themselves that get exposed (sometimes test failures are false positives, especially if there is some inadvertent race condition in the test itself).

So when a test fails, it can still be challenging at times to identify the true root cause of the failure: the software, the test, or even some issue with the hardware the test is running on.

Sometimes, however, time-sensitive automated tests lead us to quite strange problems in hived. For example, in the past couple of days this led us to discover that testnet nodes were starting up about two to six seconds slower than they should have (depending on hardware speed and load levels) because of a longstanding bug: at startup, the node erroneously assumed that the testnet had missed 69M blocks (since the testnet starts with no blocks and the genesis block time was back in 2016, it considered all those potentially-generated blocks to be missed blocks), and then looped through all 69 million of them to potentially report them as missed by individual witnesses. Computers are fast, but doing work 69 million times still requires a little time.
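To make the arithmetic concrete, here is a minimal sketch of how an empty testnet with a 2016-era genesis time can appear to have tens of millions of missed block slots at startup. This is not the actual hived code; the timestamps and variable names are assumptions for illustration.

```cpp
// Hypothetical illustration of the startup bug described above; the
// timestamps and the 3-second block interval are assumptions, not
// values taken from the hived source.
#include <cstdint>
#include <iostream>

int main() {
  const int64_t block_interval_sec = 3;           // Hive's block interval
  const int64_t genesis_time       = 1458864000;  // ~March 2016 (assumed)
  const int64_t startup_time       = 1659484800;  // ~August 2022 (assumed)

  // With zero blocks actually produced, every slot since genesis looks
  // like a missed block to the startup code.
  const int64_t apparent_missed =
      (startup_time - genesis_time) / block_interval_sec;
  std::cout << "apparent missed slots: " << apparent_missed << "\n";

  // Looping once per "missed" slot to attribute the miss to a scheduled
  // witness is the work that added several seconds to testnet startup:
  // for (int64_t slot = 1; slot <= apparent_missed; ++slot) { /* report miss */ }
  return 0;
}
```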

As an example of a more serious bug caught by the tests, we found that the new operation used by one-block-irreversibility (OBI) to allow witnesses to approve blocks had shifted the operation ids of all the virtual operations, breaking the filtering of virtual operations by account_history API calls.

Our ultimate solution to this dilemma was to re-use the id of an existing-but-never-used-or-deployed operation for reporting a witness for double production, thereby avoiding the shift of the virtual operation ids by one. But without our automated tests, this is the kind of bug that could have easily sneaked through simple testnet-based tests.
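For illustration only (these enum names and id values are hypothetical, not hived's actual operation list), the problem and the fix look roughly like this: operation ids are determined by position in the operation list, so appending a brand-new operation ahead of the virtual operations renumbers every virtual operation, while reusing the slot of a never-deployed operation leaves them untouched.

```cpp
// Hypothetical sketch: operation ids come from their position in the
// operation list. Names and values below are illustrative only.
enum class op_id_naive_addition {
  vote                  = 0,
  comment               = 1,
  // ... other real operations ...
  witness_block_approve = 49,  // new OBI operation appended here
  fill_vesting_withdraw = 50,  // virtual op: was 49, now shifted by one
  author_reward         = 51,  // virtual op: was 50, now shifted by one
};

enum class op_id_with_reuse {
  vote                  = 0,
  comment               = 1,
  // ... other real operations ...
  witness_block_approve = 48,  // reuses the slot of a never-deployed
                               // "report double production" operation
  fill_vesting_withdraw = 49,  // virtual op ids unchanged
  author_reward         = 50,
};
```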

Mirrornet testing (a testnet that replicates traffic from the mainnet)

We’ve also been testing OBI on the mirrornet. The mirrornet has been critical to testing of OBI, because it would take weeks to create automated tests that can properly simulate a network with a lot of traffic and frequent network interruptions that cause forks.

But with the mirror network, we were able to easily construct test scenarios with 4-5 nodes with groups of witnesses on each node, then temporarily disable network connections between these nodes to observe OBI behavior under fragmented network conditions, all in a single day.

OBI itself performed well under these testing conditions, but we did get a chance to observe various other behaviors under these conditions that we can probably improve in the future (e.g. cases where the new data provided by OBI could allow more resilience for a fragmented network).

And we may make another optimization to OBI itself to reduce unnecessary OBI-related traffic during forking conditions: currently a scheduled witness casts an approval vote for every block it adds to its head block when switching forks, but such votes are probably not useful for blocks in the distant past when switching to a “long” fork of more than 20 blocks (one of the tests we ran created forks of several hundred blocks).
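A minimal sketch of the kind of check such an optimization might use follows; the cutoff constant and function name are assumptions, not the actual implementation.

```cpp
// Hypothetical sketch of the proposed optimization: when switching to a
// long fork, only broadcast OBI approval votes for blocks near the new
// head, since approvals for blocks far in the past add little value.
#include <cstdint>

constexpr uint32_t obi_approval_window = 20;  // assumed cutoff, per the post

bool should_broadcast_obi_approval(uint32_t block_num, uint32_t new_head_num) {
  // Approve only blocks within the last `obi_approval_window` blocks of
  // the new head block; skip older blocks when switching a long fork.
  return block_num + obi_approval_window >= new_head_num;
}
```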

We also found another bug during mirrornet testing that was extremely subtle: when we updated to a later version of the open-source boost library, this brought in a new behavior for multi-index containers (these containers are used throughout hived to track blockchain state information and in the p2p layer as well). Now the modify call for multi-index containers will erase the object being modified if the lambda function being used to do the modification throws an exception (previously this only caused the modify to be considered a failure, but the object stayed in the container).

This unexpected change in behavior started showing up as random trash in objects that had references to deleted objects in these containers, and eventually caused one of the mirrornet nodes with the highest activity (the one mirroring traffic from the mainnet) to crash. We already have one workable solution to this new behavior, and we’ll examine other options tomorrow before making a final decision on how we’ll handle it.
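A minimal, self-contained sketch of the behavior change is shown below, assuming a recent Boost version; the container and element types here are illustrative, not hived's actual state objects.

```cpp
// Minimal illustration of the Boost.MultiIndex behavior change described
// above; the element type is illustrative, not an actual hived object.
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <iostream>
#include <stdexcept>

struct item {
  int id;
  int value;
};

using item_index = boost::multi_index::multi_index_container<
  item,
  boost::multi_index::indexed_by<
    boost::multi_index::ordered_unique<
      boost::multi_index::member<item, int, &item::id>>>>;

int main() {
  item_index idx;
  idx.insert(item{1, 10});

  auto it = idx.find(1);
  try {
    idx.modify(it, [](item& i) {
      i.value = 20;
      // Simulate a modifier that throws partway through the update.
      throw std::runtime_error("failure inside modifier");
    });
  } catch (const std::exception& e) {
    std::cout << "modifier threw: " << e.what() << "\n";
  }

  // Older Boost: the element stays in the container and the modify is
  // simply treated as a failure. Newer Boost (as described above): the
  // element is erased, so anything still referencing it now dangles.
  std::cout << "element still present: " << std::boolalpha
            << (idx.find(1) != idx.end()) << "\n";
  return 0;
}
```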

Nearing a release candidate

Despite the bugs discovered and the testing challenges, getting all tests created so far to pass has given us the confidence to merge in all the outstanding feature branches for hardfork 26. As of today, we’ve merged all performance optimizations, including support for OBI and resource credit rationalization, into the develop branch of the hive repository, and we’re now doing what can be viewed as final testing of this branch prior to tagging a release candidate.

Hive Application Framework (HAF)

While working on the block explorer and the balance tracker apps, we discovered it would be useful to have HAF create a table to track the block numbers where each hardfork was triggered. We’ll implement this new table soon.

HAF-based block explorer

Current work here is focused on optimizing queries associated with the new tables recently added to the block explorer (witness table, witness vote tables, and vesting balances for accounts).

HAF-based balance tracker application

We added support for a few more operations, including newly added virtual operations, to allow the balance_tracker to correctly compute vests for accounts. This is an iterative process: we add support for more operations, then compare the computed balances against a replay of the blockchain that dumps balances for each account at each block.

HAF-based hivemind (social media middleware server used by web sites)

Two of our Hive developers are currently working on HAF-based hivemind. One is focused on continuous integration and docker support, and the other is writing tests, fixing bugs, and making optimizations.

Some upcoming tasks

  • Final testing and performance analysis of hived release candidate on mirrornet.
  • Test hived release candidate on a production API node to confirm real-world performance improvements.
  • Tag hived release candidate ASAP (hopefully Friday, if not then Monday).
  • Finish testing and dockerizing HAF-based hivemind and use the docker image in CI tests.
  • Test and update deployment documentation for hived and HAF apps.
  • Complete work for get_block API supplied via HAF app.
  • Continue work on HAF-based block explorer.
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode (low priority).
  • Document all major code changes since the last hardfork.

When hardfork 26?

A formal announcement of the date should follow shortly from the hive.io account, but the base assumption is that it will be no less than 2 weeks after the release candidate is tagged and documentation has been provided to the witnesses and exchanges that need to deploy the new software.



Not a lot of people are seeing the development and the layer-one infrastructure that is being made for builders to build on. These days most layer ones seem to be copies and tweaks of Ethereum, while what we have here is unique and original development.

very good work being done ser, much uniqueness

good point

true!

I want to learn how to build on this layer, but I'm just an HTML cowboy 🤠

Thank you and all the team working on it for the huge work done! I hope that the documentation will be up to the mark, because over the years the existing documentation has fallen behind and accumulated a lot of obsolete/removed things.

I'll bet 1 Hive it won't be sooner than September 10th :P

Sorry for the slightly off-topic question, but what happened to the changes in the HBD haircut ratio? Or was that in HF25?

It's a relevant question, and the change to 30% is included in HF26.

good question, and I would also like to have an answer :)

When will ETH-based trading/conversion be resumed on blocktrades.us?

We had a problem with our ETH wallet and a fix for it will require a full resync. And a full resync of that wallet takes a long, long time nowadays (it's much slower than Hive, unfortunately). The last estimate I heard was that it would take around 27 days. And all our key personnel are tied up with Hive hardfork-related work right now.

So unless we find some trick that allows us to do it sooner, it will probably be about a month. It was suggested that we could try a different type of ETH wallet that might be faster, but that would require us to add code to interface with that new wallet type, and again everyone is just tied up with the Hive hardfork and other projects.

That is pretty bad news for me. Thanks for giving me that update.

Great work guys

Can there be more tokens tradable on blocktrades?

This is amazing work 👍👌💪

Keep up the great work! Also, I have a question: is there a way to contact you through Discord or something else?
Thanks in advance!

Great efforts thus far and thanks to you and your team for the awesome work. Hopefully, things will go as you have planned.
Well done.

I hope the Hive hardfork gives a boost. It could lead to a higher rise again.

So much work done, impressive.

Can you accept TRX on blocktrades @blocktrades?

No, we don't accept any TRON-based tokens. In my opinion, TRON's founder is responsible for tokens being taken from a number of Hive users (including me) and we have filed a lawsuit against his company. Based on his behavior to date, I don't think anyone should trust security of any TRON-based tokens.

Oh thanks for replying

take your time


A great development, I must say.
The Hive ecosystem still amazes me with what they are building.
Good job!!!

All this hard work so I can post memes, thank you! We live in amazing times

Can delegations be added to the expiry list like witness votes are?

It would be possible to do it, but the mechanism is different, so it wouldn't be just a matter of a small change to the existing governance expiry code; entirely new code would need to be written.

If we could have that, it would help us in the anti-abuse community to ensure that delegations from dead people or lost keys don't go on indefinitely.

I was hoping for a simple “if delegation days >= 365 then ignore”, but that would be too easy, I guess.

Greetings, I made a transaction and it has not gone through: 981bd056-e71a-4ace-8148-1a185590c6ad