1st update of 2024: Change notices for HAF app developers



Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report. Before beginning this one, I looked back at that report, written more than a month ago, to see what has changed since then. My first thought was “where did all the time go?”. We did have a holiday period, but I worked through most of it. Sometimes the best time to get work done is when no one else is around, because the computers run faster and there are fewer people to say “no” :-)

Anyway, as I reviewed the merge request history in the repos, it turned out quite a lot was done, to the point where a simple progress report would just run too long. So instead, I’ve decided to focus this report on the issues that will most directly affect Hive app developers: we are close to releasing new versions of everything, so developers need to prepare their apps for various changes. Other than API-related work, I won’t be discussing ongoing work in this particular post.

HAF (Hive Application Framework)

We did a major review and overhaul of the scripts used to install and uninstall HAF apps, creating a common methodology for the process. This was actually quite a lot of work, and even now I’m not sure if we got everything “perfect”, but it is much better than before.

Operation ids are no longer monotonic

Operation ids don’t exist as such in the blockchain itself; they are arbitrary “handles” that HAF assigns to operations in order to identify them uniquely.

Previously, operation ids were created sequentially by HAF as new operations were processed, but they could vary across HAF nodes that replayed at different times and hence saw different forks (the operation id counter wasn’t reverted to earlier values after a fork). This could be troublesome if an app switched from one node to another and assumed that both nodes used the same operation ids, so we decided it needed to be fixed.

One option would have been to fix the sql_serializer to revert the counter after forks, but for potential future efficiency reasons we decided instead to build operation ids from block_number, operation_position_in_block, and operation_type. The resulting ids keep operations in the same “total order” as the previous implementation, but they no longer increase sequentially. They are now deterministically assigned, however, allowing full inter-node compatibility of operation ids.
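To make the new scheme concrete, below is a minimal sketch of how such a composite id can be packed and unpacked. The bit widths and field order are my own illustrative assumptions, not necessarily HAF’s actual encoding:

```python
# Hypothetical layout (illustrative only): block number in the high 32 bits,
# position within the block in the middle 24 bits, operation type in the low 8 bits.

def make_op_id(block_num: int, op_pos_in_block: int, op_type: int) -> int:
    return (block_num << 32) | (op_pos_in_block << 8) | op_type

def split_op_id(op_id: int) -> tuple[int, int, int]:
    return op_id >> 32, (op_id >> 8) & 0xFFFFFF, op_id & 0xFF

# Sorting by id still sorts by (block number, position in block), so the
# total order of operations is preserved, but consecutive operations no
# longer receive consecutive ids:
op_a = make_op_id(block_num=80_000_000, op_pos_in_block=0, op_type=2)
op_b = make_op_id(block_num=80_000_000, op_pos_in_block=1, op_type=2)
assert op_b > op_a and op_b - op_a > 1
```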

The same change was made to the hived account_history plugin, so operation ids will also be consistent between the account_history plugin and HAFAH (which also simplifies future testing).

In practice, I don’t think any apps relied on operation ids being monotonic, but any that did will need to adapt to this change.

Role updates (roles are essentially database privilege levels)

HAF apps typically create two roles: an owner role (which creates the app’s schema and writes to its tables) and a user role (which is used by the app’s API server to read the app’s tables during execution of API queries).

There are also two “admin” level roles: haf_admin (a super user who manages the haf server itself) and haf_app_admin (a user who can install haf apps).

We updated HAF and all the HAF apps to eliminate the haf_app_admin role in favor of haf_admin, as I decided the distinction was too fine for the extra complexity it entailed.
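For anyone wiring this up by hand, the owner/user split looks roughly like the sketch below. The role names, schema name, and connection settings are hypothetical, and the actual HAF installation scripts take care of these details:

```python
# Illustrative only: a HAF app installer creating an owner role (schema
# creator/writer) and a read-only user role for the app's API server.
import psycopg2  # assumes the psycopg2 Postgres driver is installed

DDL = """
CREATE ROLE myapp_owner LOGIN;                 -- creates the schema and writes its tables
CREATE ROLE myapp_user LOGIN;                  -- used by the API server for read-only queries
CREATE SCHEMA myapp AUTHORIZATION myapp_owner;
GRANT USAGE ON SCHEMA myapp TO myapp_user;
ALTER DEFAULT PRIVILEGES FOR ROLE myapp_owner IN SCHEMA myapp
    GRANT SELECT ON TABLES TO myapp_user;
"""

# With this change, installation is performed as haf_admin (haf_app_admin is gone).
with psycopg2.connect("dbname=haf_block_log user=haf_admin") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```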

Updates to how apps report their status during massive sync

This weekend I changed the method a HAF app uses to update its current block number when it is in massive sync mode (i.e. when it is processing a batch of old blocks in order to catch up to the blockchain’s head block). I think the new methodology is much simpler and less likely to lead to errors during app creation. I also added docs discussing how to employ the new calls: hive.app_get/set_current_block_num.
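As a rough illustration, a massive sync loop using the new call might look like the sketch below. The function names come from the docs mentioned above, but the argument conventions (a context name plus a block number), the app name, and the per-block processing are all assumptions on my part:

```python
# Illustrative massive sync loop for a hypothetical app context "myapp".
import psycopg2

def process_block(cur, block_num: int) -> None:
    """Placeholder for the app's actual per-block processing."""
    pass

with psycopg2.connect("dbname=haf_block_log user=myapp_owner") as conn:
    with conn.cursor() as cur:
        first_block, last_block = 1, 80_000_000  # range of blocks being mass-synced
        for block_num in range(first_block, last_block + 1):
            process_block(cur, block_num)
            # report progress via the new call (argument list is assumed):
            cur.execute("SELECT hive.app_set_current_block_num(%s, %s)",
                        ("myapp", block_num))
            if block_num % 10_000 == 0:
                conn.commit()  # commit in batches; see the commit guidance below
        conn.commit()
```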

As part of the above process, I also fixed several errors that could occur when interrupting and restarting the balance_tracker and haf_block_explorer apps (both in massive sync and in live sync), and updated the HAF docs to explain how to avoid problems of this type (in particular, when and when not to commit during block processing):
https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/61
https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/65
https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/135
https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/138

HAFAH – HAF-based account history API (IMPORTANT NOTE FOR APP DEVELOPERS)

HAFAH is a replacement for the old hived account_history plugin. HAFAH is much faster than the plugin, but during testing we did discover one issue that will require a change by apps using the account history API. When we originally added support for filtering operations to the hived account_history plugin, the plugin was simply too slow to implement the API the way we wanted. In particular, if you ask the old API for 1000 operations filtered by an operation type such as “transfer”, it only returns however many transfers it finds in the last 1000 operations of the account’s history, because finding 1000 actual transfers would have taken far too long. So apps making these calls would request far more operations than they needed, hoping to receive enough for their purposes (for example, enough to fill a page), and simply ask again if they didn’t get enough.

Since HAFAH is much faster than the plugin, its API actually does return the last 1000 transfers when this call is made, as originally desired. As a result, to reduce the workload on API servers running HAFAH, apps should reduce the number of operations they request to the amount they actually need to display (e.g. 100 or fewer). It might also benefit API caching if apps asked for the same amount, so maybe we could standardize on 100 for most apps?
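For example, a page that displays up to 100 transfers could make a request like the sketch below. The node URL, account, and start value are illustrative, and the filter bitmask (1 << 2) assumes transfer_operation’s usual type id of 2:

```python
# Request only as many operations as the app actually needs to display.
import json
import urllib.request

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "account_history_api.get_account_history",
    "params": {
        "account": "blocktrades",
        "start": -1,                     # start from the newest operation
        "limit": 100,                    # was often 1000 under the old plugin
        "include_reversible": True,
        "operation_filter_low": 1 << 2,  # bitmask selecting transfer_operation
    },
}

req = urllib.request.Request(
    "https://api.hive.blog",  # illustrative node URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    history = json.loads(resp.read())["result"]["history"]
print(f"received {len(history)} operations")
```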

HAF Block Explorer and Balance Tracker APIs

We made a bunch of performance optimizations to both of these APIs, and as a result we reduced the time to sync the block explorer backend (which uses the balance tracker backend) from 70+ hours down to 16 hours. We were also able to eliminate several indexes, dramatically reducing the block explorer’s storage requirements, and changes to indexes and index clustering sped up several “slow” block explorer API calls to acceptable performance levels.

Hivemind API

We’ve been using goreplay to test HAF hivemind’s API responses against legacy hivemind using “real-world” traffic from our API node. We’ve found several differences, for the most part expected ones, but we’re still analyzing the difference data, and I’ll report later if we find any of special significance.

One thing of particular note for app developers is that the new hivemind reports block-related changes much faster: the old non-HAF-based hivemind always reported changes two blocks behind the head block (i.e. 6 seconds later), whereas the HAF-based hivemind reports changes as soon as they become irreversible, which under normal conditions happens within a couple hundred milliseconds after a block is broadcast.

Shoutouts for testing assistance

Several of the API node operators have helped us during testing of the new API stack. I’d especially like to thank @mahdiyari for his feedback during testing of HAFAH performance and @disregardfiat for contributing an assisted_startup.sh script to simplify rapidly setting up a new API node:
https://gitlab.syncad.com/hive/haf_api_node/-/issues/1
https://gitlab.syncad.com/hive/haf_api_node/-/issues/6
One note I’ll add is that I usually run the script in a screen session to avoid it getting interrupted when I’m working from a remote terminal.

What’s next?

After we’ve finished analyzing the hivemind differences, we’ll decide what, if anything, still needs to be done before we tag new release candidates. After that, we’ll set up a test node where apps that don’t run their own API nodes can test against the new API stack.


Thank you for such a detailed report. I have a hard time understanding most of it, but I'm sure other folks will find the information here valuable. I just want to express my gratitude to the BlockTrades team - and API node operators who helped with testing - for all the great work that's being done to our ecosystem.

A great effort that deserves recognition. Thank you for standing behind this platform and forging efficiency. Blessings.

Thanks for the work and the updates.
How close is HAF to being ready to market to the outside world, with a set of documents for newcomers to plug it into their systems and port to Hive?

HAF is perfectly usable now and the docs plus examples are sufficient for someone to create a HAF-based API.

But we've been developing several apps so that we could determine best practices for how to create, deploy, and maintain HAF-based apps. This kind of information goes beyond basic docs, but it is very useful for developers. At this point I think we've figured out most of it, but we'll probably learn a little more after the apps get deployed to production environments.

Most important for our near-term work, HAF will be the basis for our smart contract processing engine, so we've put a lot of effort into optimizing its performance.

Great to hear it.
Thanks for the reply and hopefully we can start pushing this feature to the wider world soon.

Will it be like ETH, with a smart contract processing engine?

Conceptually similar, but the implementation is quite different.

And is there any timeframe?

Not yet, we'll need to review the scope of required work after we release all the new HAF stuff, then come up with a reasonable timeline.


Kicking 2024 strong! 💪🏻
So exciting to read all the news!

Thank you 😊😊😊🫂🫂🫂


Even a 50 limit would be fine for the AH call.

It sounds like you've been busy with significant updates and improvements to the Hive-related programming issues. It's great to hear about the progress you've made, especially with the major review and overhaul of the HAF installation process. Keep up the good work!

Can you please give me one upvote🙏

That is a great update and information shared. I have a Hive project that I would like to present to you for a support decision, but I don't know how to contact you for discussion. Can I reach you on Discord? Here is my Discord ID: abu78. If there is another way to get in touch with you, I am available too. Thanks, and I am looking forward to hearing from you soon.

I've been wondering why my blogs haven't gone viral yet... Why do I feel so invisible...???