4th update of 2023: Putting HAF apps through their paces

in HiveDevs · 10 months ago (edited)


Below are a few highlights of the Hive-related programming issues the BlockTrades team has worked on since my last report. It’s been a while since I last wrote, so not all of the work done in the interim is covered here, but I’ve tried to capture the more important tasks that might affect or be useful to other Hive devs.

Hived (blockchain node software)

Fixed log rotation bug

There was a longstanding bug in the fc file appender that was fixed, enabling log rotation to work properly for hived. Log rotation is a process where a program periodically creates a new log file instead of just always writing to the same log file, and then compresses the previously used log file.

Most commonly, node operators will want to configure log rotation so that a new log file is created every day and only the last week’s or month’s worth of log files are kept.

Log rotation is particularly useful when hived is configured to log a lot of events, because without log rotation such log files can quickly get too large to manage effectively and can have detrimental side effects such as using up all the disk space on a server.
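For illustration, the same day-by-day rotation scheme can be sketched with Python’s standard library (this is not hived’s fc appender, just the concept; the stdlib handler deletes old files instead of compressing them):

```python
import logging
import logging.handlers

# Rotate at midnight and keep the last 7 files, i.e. "a new log file
# every day, keep a week's worth"; older files are deleted automatically.
handler = logging.handlers.TimedRotatingFileHandler(
    "node.log", when="midnight", backupCount=7)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("node")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("new block received")  # goes to the current day's file
```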

Improved error handling for hived “notifications”

External processes can register with a hived node to receive notifications of events (e.g. the arrival of a new block). Recently we improved hived’s HTTP server by adding support for 204 (No Content) HTTP status codes.

This change was made to fix intermittent CI failures in tests that relied on notifications sent by hived, but it is also an important improvement for other applications that use hived’s notification service.
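For illustration, here is a minimal Python receiver for such notifications that acknowledges each one with 204 No Content (the payload fields are assumptions, not hived’s exact schema):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    """Accepts event notifications pushed over HTTP.

    The payload layout here is an assumption for illustration; the key
    point is the reply: 204 No Content, an acknowledgement with no body.
    """
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("notification:", event.get("name", "<unknown>"))
        self.send_response(204)  # acknowledged; nothing to send back
        self.end_headers()

    do_PUT = do_POST  # accept either method

    def log_message(self, *args):  # silence per-request stderr logging
        pass

def make_server(port: int = 0) -> HTTPServer:
    # Port 0 asks the OS for a free port; serve_forever() runs the loop.
    return HTTPServer(("127.0.0.1", port), NotificationHandler)
```

To use it, run `make_server(8090).serve_forever()` and point hived’s notification endpoint setting at that address (check hived’s docs for the exact option name).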

Fixed bug when processing a HAF database backup

We fixed an invalid assert in the code for public key serialization of null key values. This would show up when a null public key was specified for a witness key and resulted in a failure when processing a HAF database backup.

Automated action code removed

We removed the never-used code for automated actions. Automated actions were introduced as a concept in hived to prevent a flood of “automatically created” actions from requiring so much CPU time that a block could not be produced within the allowed block computation time.

This concept was originally created by Steemit devs because of fears that automated actions associated with SMTs (e.g. periodic creation of new SMT tokens) could become computationally burdensome. We considered using this capability when recurring transfers were developed, as these operations also generated “future work” beyond the initial block in which they were included, but we decided that there were other ways to schedule such future actions so that the nodes don’t get overburdened.

RC code continuing to be refactored

As a step towards eventually making the resource credit logic part of blockchain consensus, we’ve been refactoring the code in the resource credit plugin and moving some of it into the core code. None of this work impacts current consensus rules, however.

Hived continuous integration (CI) test improvements

Our testing system now generates a reusable blockchain state file built from the first 5M blocks of the blockchain. This state file is versioned to the revision of the hived code that created it, so as long as the hived code revision doesn’t change, the same state file can be reused across multiple builds and tests, reducing build and test time.

In other words, when a commit revision only modifies the tests, the commit does not trigger a new build operation (hived images are reused). Similarly, state file generation is skipped (as the state file is also reused).

This means the turnaround time for CI pipelines is very quick when only tests are being modified/added.
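The caching rule above can be sketched in a few lines of Python (hypothetical names; the real pipeline works at the CI job level):

```python
import hashlib
from pathlib import Path

def state_file_for(code_revision: str, cache_dir: Path) -> Path:
    """Return the cached 5M-block state file for a given hived revision.

    The state file is keyed by the revision of the code that produced
    it, so commits that only touch tests reuse the existing file
    instead of regenerating it.
    """
    key = hashlib.sha256(code_revision.encode()).hexdigest()[:16]
    path = cache_dir / f"state_5M_{key}.bin"
    if not path.exists():
        # Placeholder for the expensive step: replaying the first 5M blocks.
        path.write_bytes(b"state built from revision " + code_revision.encode())
    return path
```

Calling it twice with the same revision hits the cache; a new code revision triggers regeneration.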

Other testing improvements:

  • new blockchain operation tests (testcases for dedicated testing scenarios)
  • A new hived fixture was created for unit tests to much better emulate the real-world hived environment during testing. With this new test fixture, the whole appbase application is started and initialized, instead of hacking the initialization of just the specific plugins being tested. The impetus for this change was that we found a unit test that wasn’t properly testing functionality because of a mismatch between the test setup and a real-world hived setup.
  • Several fixes to the Python regression tests themselves to avoid random CI failures: https://gitlab.syncad.com/hive/hive/-/merge_requests/952 https://gitlab.syncad.com/hive/hive/-/merge_requests/936
  • Preliminary fixes to the libfaketime library eliminating a timer blocking problem. Due to bad time recalculations, the witness plugin could become blocked, stopping block production. This error often led to CI failures (tests that timed out).

Clive (a new Hive wallet with a text-based user interface)

Clive is a Hive wallet written in Python that runs on your own computer (it is not a web-based wallet where the code comes from a remote server), so it is inherently more secure than web-based wallets. Currently there are two other such wallets available and supported in the Hive ecosystem: 1) a command-line interface wallet (aka the CLI wallet) written in C++ and 2) a graphical interface wallet called Vessel (a JavaScript-based wallet).

Clive is designed to be easier to use than the existing CLI wallet, but unlike Vessel it doesn’t require a graphical environment, since it runs in a plain terminal. For most people, Clive should provide a friendlier interface for performing Hive operations in a high-security environment than the CLI wallet does.

As it turns out I was overly optimistic about the development status of Clive in my previous report, so a publicly usable version is still a couple weeks away, and that initial version will mainly support basic operations such as Hive/HBD transfers.

Beekeeper (tool for managing encryption keys)

Along with the creation of Clive, we created a new C++ program that can be used to store encryption keys and sign transactions with those keys. The purpose of this new program is to separate the high-security aspects of encryption key management from other wallet operations. For example, Clive doesn’t directly store keys or sign transactions. Instead it communicates with a separate beekeeper process and requests these operations to be performed. For those familiar with the EOS ecosystem, beekeeper is based on the ideas embodied in keosd.

Recently beekeeper was improved to allow a single instance of beekeeper to support multiple clive instances.
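The division of responsibility can be sketched with two toy in-process classes (the real beekeeper is a separate C++ daemon spoken to over an API; all names and signatures below are illustrative, not beekeeper’s actual interface):

```python
import hashlib

class Beekeeper:
    """Stands in for the separate beekeeper process: it alone holds keys."""
    def __init__(self):
        self._keys = {}  # wallet name -> private key bytes (never leaves here)

    def import_key(self, wallet: str, wif: bytes) -> None:
        self._keys[wallet] = wif

    def sign_digest(self, wallet: str, digest: bytes) -> bytes:
        # Toy "signature" to keep the sketch self-contained; the real
        # daemon produces an ECDSA signature over the transaction digest.
        return hashlib.sha256(self._keys[wallet] + digest).digest()

class Wallet:
    """Stands in for clive: builds transactions but never sees a key."""
    def __init__(self, keeper: Beekeeper, name: str):
        self.keeper, self.name = keeper, name

    def sign_transaction(self, tx_bytes: bytes) -> bytes:
        digest = hashlib.sha256(tx_bytes).digest()
        # Only the digest crosses the boundary, never the key itself.
        return self.keeper.sign_digest(self.name, digest)
```

Because keys live only inside the keeper, a compromise of the wallet process alone cannot leak them.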

Hive Application Framework (HAF)

HAF is a SQL-based framework for creating highly scalable and robust 2nd layer apps that operate using data from the Hive blockchain.

New HAF features

Implemented standardized composite data types for Hive blockchain operations

We completed the creation of SQL “composite data types” that correspond to Hive blockchain operations, and the associated code for creating such composite objects from raw operations data.

Having a common SQL-based representation of such data promotes more efficient coding practices and code re-use across HAF applications.

Using these composite objects we were able to cut in half the block processing time required by the haf block explorer indexing engine.

Next we plan to update more of the existing HAF apps to use this new feature.
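As a Python analogy (HAF’s actual composite types are SQL, and the field names below are illustrative, not HAF’s schema): a shared, typed representation is parsed once and then reused by every app.

```python
from dataclasses import dataclass

# Hypothetical typed record, analogous to the SQL composite types
# described above: every app shares one parsed representation instead
# of re-parsing raw operation JSON itself.
@dataclass(frozen=True)
class TransferOperation:
    from_account: str
    to_account: str
    amount: int     # smallest units, e.g. "1.000 HIVE" -> 1000
    symbol: str
    memo: str

def parse_transfer(raw: dict) -> TransferOperation:
    """Build the shared typed record from one raw transfer operation."""
    quantity, symbol = raw["amount"].split()
    return TransferOperation(
        from_account=raw["from"],
        to_account=raw["to"],
        amount=int(round(float(quantity) * 1000)),
        symbol=symbol,
        memo=raw.get("memo", ""),
    )
```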

HAF query supervisor to prevent rogue queries consuming too many resources

One of the more important tasks we’ve been working on for HAF recently is the creation of a “query supervisor”: code that monitors the queries executing on a SQL server and limits how many resources a query can consume before it is terminated.

The query supervisor is necessary to prevent potential denial-of-service attacks against a HAF server, especially one that allows the execution of semi-arbitrary SQL code (e.g. the sort of SQL code that would likely be used by a smart contract system running on a HAF server).

Implementation of the query supervisor has proceeded to the point where it is functionally useful, and we’re getting ready to set up a publicly accessible HAF server instance with the query supervisor enabled to rate-limit load on the database.
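The terminate-on-budget idea can be illustrated with SQLite’s progress handler in Python (HAF’s supervisor runs inside PostgreSQL and accounts for more than wall-clock time; this is only a conceptual analogue):

```python
import sqlite3
import time

def run_with_budget(conn: sqlite3.Connection, sql: str, seconds: float):
    """Run a query but abort it once a time budget is exhausted."""
    deadline = time.monotonic() + seconds
    # Invoked every 1000 SQLite VM ops; a nonzero return aborts the
    # query with an "interrupted" OperationalError.
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() > deadline else 0, 1000)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)
```

A well-behaved query returns normally; a runaway one raises `sqlite3.OperationalError` instead of monopolizing the server.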

HAF bug fixes and test improvements

HAFAH - HAF-based account history application

Hivemind (social media HAF app for Hive)

HAF-based balance tracker: tracks Hive account balances

This is a HAF app that we originally created as an example of how to make a simple app. It provides an API to track HIVE and HBD balances for any Hive account over time (for example, to graph how your account’s HIVE value increases/decreases with time).

But we found it would be very useful for the HAF block explorer, so we’ve extended its functionality significantly to fit the explorer’s needs.
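Conceptually, the app replays operations in block order and maintains running balances. A toy Python version (a real HAF app reads operations from the HAF database rather than a list, and tracks HIVE and HBD separately):

```python
def balance_history(transfers, account):
    """Replay transfers in block order, recording one account's balance.

    Each transfer is a tuple of (block_num, from_account, to_account,
    amount); the result is the balance after each transfer involving
    the account, suitable for graphing balance over time.
    """
    balance = 0
    history = []  # (block_num, balance after that transfer)
    for block_num, src, dst, amount in transfers:
        if account not in (src, dst):
            continue
        if src == account:
            balance -= amount
        if dst == account:
            balance += amount
        history.append((block_num, balance))
    return history
```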

HAF-based block explorer

We’ve been working for a while on a new HAF-based block explorer for Hive. There were two main reasons for creating another block explorer: 1) we wanted an open-source block explorer that could easily be deployed by any of the existing API node operators without adding much overhead to their servers, as a means of further decentralizing this critical functionality, and 2) we wanted a “heavy-duty” HAF app that could help us identify and fix potential weaknesses and programming difficulties that might arise when developing a complex HAF-based app. So far, I feel we’re well on the way to achieving both goals.

Some upcoming tasks

  • Integration of the keyauth state provider (provided by HAF) into the haf block explorer.
  • Improve pg_dump/pg_restore performance by changing format of dumped hive.operation from textual json to bytea literal.
  • Complete refactoring work on resource credit code (for eventual inclusion in consensus logic).
  • Test real world performance of SQL query_supervisor by setting up a publicly accessible HAF server that users can directly perform read-only queries on.
  • Continue work on Consensus State Providers (for more powerful HAF apps).
  • Continue work on the HAF-based block explorer backend and GUI.
  • Finish up work on Clive wallet.
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode (low priority).
  • Publish more documentation for various new tools (beekeeper, Clive, consensus state providers) and separate HAF documentation into smaller, more easily digestible chunks.
  • Create docker compose scripts to ease deployment of API node infrastructure.

Publish more documentation for various new tools

What I think is important (at least for me) is to come up with a standard way to run and maintain HAF apps, maybe some kind of boilerplate script to run/stop/update/resync them. I'm afraid every dapp dev will come up with a different approach, making it super hard for node operators to maintain custom apps and therefore limiting HAF usage.

Also, some advice on how to actually write a HAF app would be great (especially apps that need to broadcast some transactions on user interactions). But I will gather some example use cases in a separate thread/posts with specific questions.

We are working on boilerplate scripts for various operations like you described.

Looking forward to your example use case questions.

Awesome stuff. Now that it seems the query supervisor is getting closer to done (after some real world testing), does that mean the pieces are close to in place to start development on the layer 2 smart contract code? I believe I remember you saying in the past that the query supervisor was one of the first major hurdles you all needed to finish first before you could seriously focus on smart contracts.

Yes, it means we're close to starting on it. In a way, I consider all this work (HAF improvements, query supervisor, dev ops work with docker deployments to ease development and testing) part of the smart contract code development as it is the foundation for it.

Awesome work. Thanks for all you do!



Hive is unveiling so many features. There was a time someone told me about clive and now you are talking about it too.
I believe that it is becoming more popular

I wonder what will come of it.

