8th update of 2022 on BlockTrades work on Hive software



Below are highlights of some of the Hive-related programming issues worked on by the BlockTrades team since my last post.

Hived (blockchain node software) work

Mirrornet (testnet that mirrors traffic from mainnet) to test p2p code

We implemented and tested the change I mentioned in my last post to allow a node to request multiple transactions from an idle peer. This resolved the bottleneck where transactions weren't getting distributed in a timely manner on the mirrornet when only a few peers were connected to a node.
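
As a rough illustration of the change (a minimal Python sketch; hived's actual p2p layer is written in C++, and the names used here, like Peer and MAX_ITEMS_PER_REQUEST, are hypothetical):

```python
# Minimal sketch of batched transaction fetching; hived's real p2p code is
# C++, and Peer / MAX_ITEMS_PER_REQUEST here are hypothetical names.

MAX_ITEMS_PER_REQUEST = 100  # assumed cap on items fetched per message


class Peer:
    """Stand-in for a connected peer; a real peer sends network messages."""

    def send_message(self, message: dict) -> None:
        print("sending:", message)


def fetch_from_idle_peer(peer: Peer, wanted_tx_ids: list) -> None:
    # Old behavior: one transaction per request/response round trip, which
    # throttled distribution when a node had only a few connected peers.
    # New behavior: request a whole batch from an idle peer in one message.
    batch = wanted_tx_ids[:MAX_ITEMS_PER_REQUEST]
    peer.send_message({"type": "fetch_items", "ids": batch})
```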

We also experimented with various programmatic and server-level settings and were able to dramatically decrease the latency of p2p traffic. On the mirrornet, we’re now seeing negative Block Time Offsets on all nodes in the network (that’s a good thing). I did come up with an idea for one further enhancement to reduce block latency, but I’ve decided it is best to postpone that work until after the hardfork, since our latency times already look very good at this point. Details of these changes can be found here: https://gitlab.syncad.com/hive/hive/-/merge_requests/437

Further optimization of OBI (one-block irreversibility) protocol

While analyzing the OBI protocol implementation, we realized we could make a further optimization: detect the case where a block on a fork has received enough votes to become irreversible, and immediately switch to that fork. Previously, the node would not switch forks until it received enough blocks on the fork that it became the longer chain.

To see the benefits of this optimization, suppose a network split occurs, with 16 block producers remaining interconnected on one side of the split and 5 block producers on the other side. If the block producers on the minority side generate 5 of the next 6 blocks and then rejoin the other 16 block producers, they wouldn’t switch to the irreversible fork approved by those 16 until 6 more blocks were produced (at which point the normal fork-switching logic would trigger, because that fork would become the longest chain). With the new optimization, the minority block producers will typically rejoin the majority fork after just one block gets produced and approved by the majority block producers.
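
As a rough illustration of the rule change (a simplified Python sketch, not hived's actual C++ implementation; the threshold constant and dict-based structures are assumptions for readability):

```python
# Simplified sketch of the new fork-switch rule; the real logic lives in
# hived's C++ fork database and tracks approvals in much more detail.

IRREVERSIBILITY_THRESHOLD = 16  # assumed: enough of the 21 producers' approvals


def choose_active_fork(active_fork: dict, candidate_fork: dict) -> dict:
    """Return the fork the node should treat as active."""
    tip = candidate_fork["blocks"][-1]
    # New optimization: a fork whose head block gathered enough approvals
    # to become irreversible wins immediately, even while it is shorter.
    if tip["approvals"] >= IRREVERSIBILITY_THRESHOLD:
        return candidate_fork
    # Old rule: only switch once the candidate fork is the longest chain.
    if len(candidate_fork["blocks"]) > len(active_fork["blocks"]):
        return candidate_fork
    return active_fork
```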

Command-Line Interface (CLI) wallet changes

In addition to creating new tests and fixing small issues uncovered by those tests, we made changes to the CLI wallet to support generating both “legacy” transactions and the new style, where assets are represented using NAIs (network asset identifiers) instead of strings (e.g. “HIVE”). These changes should be helpful for API library developers when they update their libraries to support NAI-based transactions.
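
For example, here's the same one-HIVE amount in both styles (the identifier @@000000021 is HIVE's standard NAI; HBD and VESTS use @@000000013 and @@000000037, respectively):

```python
# One HIVE, expressed in the legacy string style and in the NAI style.
legacy_asset = "1.000 HIVE"

nai_asset = {
    "amount": "1000",      # integer amount as a string, scaled by precision
    "precision": 3,        # 10**3, so "1000" means 1.000
    "nai": "@@000000021",  # HIVE's network asset identifier
}
```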

Next we need to modify hived itself to support binary serialization of the new transaction format using NAIs, but this is expected to be done quickly (probably just a day or two).

Another feature we’re looking to add to the CLI wallet is the ability to write a transaction to a file in the new formats, which may later be useful as another means of generating transactions for cold wallets.

Compressed block logs

We’ve merged in all the changes associated with maintaining block logs in compressed form. After many optimizations, we were able to achieve a compression ratio of 49% without adding much in the way of CPU overhead.

The way this works currently, hived automatically compresses new blocks using the zstd library as they are written to the node’s local block_log file. In a future version of hived, we’re planning to exchange blocks between nodes directly in their compressed form, which would have two benefits: 1) blocks are compressed only once rather than by every node, and 2) p2p traffic is reduced. But with the hardfork rapidly approaching, we decided to postpone this enhancement to a later date.
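
As a rough sketch of the idea using Python's zstandard bindings (hived does this natively in C++, and its block_log format also maintains index/offset data not shown here):

```python
# Rough sketch of per-block zstd compression; hived implements this in C++
# and tunes its own compression settings.
import zstandard as zstd

compressor = zstd.ZstdCompressor()
decompressor = zstd.ZstdDecompressor()


def compress_block(raw_block_bytes: bytes) -> bytes:
    # In BlockTrades' testing, this style of compression achieved the ~49%
    # size reduction described above at modest CPU cost.
    return compressor.compress(raw_block_bytes)


def decompress_block(compressed_bytes: bytes) -> bytes:
    return decompressor.decompress(compressed_bytes)
```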

We also added a new utility program called compress_block_log that can be used to compress/uncompress an existing block_log. It also replaces the functionality of the now-obsolete utility truncate_blocklog.

Hive Application Framework (HAF)

We updated some of the database roles used to manage a HAF database and fixed some permissions issues (we’ve lowered permissions where possible for the various roles). In particular, the “hive” role has been renamed to “hive_app_admin” to reflect that this role is used to install new HAF apps, and it now has only the privileges necessary for that task: it can only read data in the “hive” schema that contains the blockchain data, and it can create new schemas to store any data required by the HAF application being installed.
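
In rough SQL terms, the least-privilege idea looks something like the sketch below (hypothetical grants executed via psycopg2; the real grants live in HAF's setup scripts, and the connection string and database name here are examples only):

```python
# Hypothetical sketch of hive_app_admin's reduced privileges; HAF's own
# setup scripts manage the real grants, and the connection string and
# database name below are examples only.
import psycopg2

LEAST_PRIVILEGE_DDL = """
    -- hive_app_admin may read the blockchain data in the hive schema...
    GRANT USAGE ON SCHEMA hive TO hive_app_admin;
    GRANT SELECT ON ALL TABLES IN SCHEMA hive TO hive_app_admin;
    -- ...and create new schemas for the HAF apps it installs, nothing more.
    GRANT CREATE ON DATABASE haf_block_log TO hive_app_admin;
"""

with psycopg2.connect("dbname=haf_block_log user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(LEAST_PRIVILEGE_DDL)
```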

We’ve also made various improvements (still ongoing) to the scripts used to set up a HAF database and build docker images.

HAF account history app (aka hafah)

We completed tests associated with the postgREST server version of hafah, and these changes should be merged into develop in the next couple of days, along with changes to the benchmarking scripts.

With the postgREST server, hafah now supports two distinct APIs: the “legacy” API, which uses the same syntax as the account history plugin (where API names are embedded in the JSON body), and a new “direct” API (where API names are embedded directly in the URL). The two APIs support exactly the same set of methods, but the direct API performs better, especially on calls where database query time doesn’t dominate. The legacy API, as the name suggests, exists so that legacy applications can migrate to the new API at their own pace. It’s also worth noting that even the “legacy” API performs significantly better than the old account history plugin.
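
To illustrate the difference between the two styles (the legacy call below uses the account history plugin's existing JSON-RPC syntax; the direct-API URL path is a made-up example, since I'm only sketching the shape of the call):

```python
# The two hafah call styles, illustrated with python-requests. The legacy
# call matches the old account history plugin's JSON-RPC syntax; the
# direct-API path is hypothetical and only shows the general shape.
import requests

NODE = "https://api.hive.blog"

# Legacy style: the method name travels inside the JSON-RPC body.
legacy = requests.post(NODE, json={
    "jsonrpc": "2.0",
    "method": "account_history_api.get_account_history",
    "params": {"account": "blocktrades", "start": -1, "limit": 10},
    "id": 1,
}).json()

# Direct style: the method name is embedded in the URL itself, skipping
# the JSON-RPC dispatch layer (exact path is an assumption).
direct = requests.post(
    f"{NODE}/hafah/rpc/get_account_history",
    json={"account": "blocktrades", "start": -1, "limit": 10},
).json()
```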

Hivemind (social media middleware server used by web sites)

We’re currently working on the port of hivemind to a HAF-based app. We completed our first test of a full sync to the head block. We fixed a memory leak discovered during this process, and we’re now analyzing a problem that occurred when hivemind switched from massive sync to live sync mode.

Concurrently with the above testing, we’re also making modifications to store hivemind’s data directly into the HAF database using a hivemind-specific schema (the one currently being tested still stores hivemind data to a separate database).

Some upcoming tasks

  • Modify hived to process transactions containing either NAI based assets or legacy assets.
  • Merge in new RC cost rationalization code.
  • Analyze performance issues observed with postponed transactions when spamming the network with a massive number of transactions.
  • Finish dockerization and CI improvements for HAF and HAF apps.
  • Update hafah benchmarking code to benchmark the new “direct” API.
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode.
  • Test postgREST-based hafah on production server (api.hive.blog).
  • Complete and benchmark HAF-based hivemind, then deploy and test it on our API node.
  • Complete enhancements to one-block irreversibility (OBI) algorithm and test them.
  • Test updated blockchain converter for mirrornet.

When hardfork 26?

We discovered some new tasks during this last work period that we decided to take on before the hardfork (enhancement of OBI, binary serialization for transactions containing NAIs, and a few others), but some previous tasks also completed faster than expected, so I think we’re still on track for the date mentioned in my last post (around the end of June), barring any as-yet-unknown issues uncovered during further testing.


I really like the OBI work you have done. For some reason, that gets me excited.

I'm also excited about it, because I see it being a game-changer for 2nd layer apps.




Test postgREST-based hafah on production server (api.hive.blog).

I am not a software developer, but out of curiosity, I opened the mentioned website, and it says "Further configuration is required".

Is the production server properly configured?


It is configured correctly; you just tried to access it via the HTTP protocol when you should use HTTPS (secured with an SSL certificate): https://api.hive.blog

I tried HTTPS now. The website currently returns the following error message:

{"jsonrpc":"2.0","id":null,"error":{"code":-32603,"message":"Internal Error","data":{"error_id":"b6aa41c0-c3c0-4d1b-9dc2-1eee691879f0","jussi_request_id":"000804972750537071"}}}


It is not a website; it is an API node that serves data to other applications. This is the expected behavior.

Now I understand. It is visible that I am not a software developer. At least today I learned something new. Thank you for the explanation.

Have a nice day. All the best. Greetings and much love from Hungary.

Is it possible to pull up our transaction data using the API and get an export?

will June become the official HF month ? :P


Thanks for all the work you're putting on Hive @blocktrades! So deeply appreciated! Let's build 😁

Please like me sir

Oh u think this update is better?

Good job!


Excited to know more about the new tasks related to hardfork 26.

I am not sure I understood everything in your post but I do appreciate the information. Thanks for sharing.

Why don't you work on removing the down vote button? Oh that's right, you would lose complete centralized control, ain't that right boss?

Oh, the irony, when Blurt has a centralized "Coal list" where thecryptodrive can (and has suggested he would) remove a user's ability to receive upvotes on all posts if he feels like it. So instead of a downvote on one particular post, where everyone's stake in the community can decide rewards, one central authority can completely disable one person's ability to receive upvotes, effectively a single authority downvoting every future post to $0 instantly.

You really should understand what you are using.