3rd update of 2023: Building on HAF

in HiveDevs · 11 months ago

blocktrades update.png

Below are a few highlights of the Hive-related programming issues the BlockTrades team has worked on since my last report. It’s been a while since I last wrote, so much of the work done in the meantime isn’t covered here, but I’ve tried to capture some of the more important tasks that might affect or be useful to other Hive devs.

Hived (blockchain node software)

RC code being refactored

As a step towards eventually making the resource credit logic part of blockchain consensus, we’ve been refactoring the code in the resource credit plugin and moving some of it into the core code. None of this work will impact current consensus rules, however.

Versioning support for state files and snapshots

One problem that sometimes comes up is running hived with a state file of the wrong version. For example, after you upgrade to a newer version of hived, it may use a data structure with different fields than the previous version, in which case it is often no longer safe to use the old state file, and a new one must be created from scratch by replaying the block log file. But for minor version changes, it wasn’t easy for a node operator to know whether such a change had taken place.

To overcome this issue, we’ve implemented automatic versioning of state files that works as follows: each version of hived scans all the data structures it stores in statefiles and creates a long string describing the types of those data structures. When it creates a statefile, it first writes this description string (and a shorter hash of it) into the statefile. This hash acts as a “version” of the statefile. Whenever a hived node is launched, it compares a locally stored copy of its statefile version hash to the hash in the statefile it will be using (assuming there is one). If these hashes don’t match, hived will not use the incompatible statefile. In that case, it will also report how the statefile differs from the one expected, which can be useful to hived devs if they have inadvertently changed the structure of the statefile and need to know what the change was.
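As a rough sketch of this scheme (the real implementation is in hived’s C++ code; the type-description string, hash length, and function names below are made up purely for illustration):

```python
# Hypothetical sketch of statefile versioning: hash a description of all
# serialized data structures and refuse to open a mismatched statefile.
import hashlib

# A long string describing every data structure stored in the statefile.
# In hived this is generated by scanning the actual C++ types.
TYPE_DESCRIPTION = (
    "account_object{id:int64,name:char[16],balance:int64}"
    "comment_object{id:int64,author:int64,permlink:string}"
)

def version_hash(description: str) -> str:
    """Short hash of the type description, used as the statefile 'version'."""
    return hashlib.sha256(description.encode()).hexdigest()[:16]

def check_statefile(statefile_hash: str, statefile_description: str) -> None:
    """Refuse a statefile whose version hash doesn't match this build's hash."""
    expected = version_hash(TYPE_DESCRIPTION)
    if statefile_hash != expected:
        # Report how the statefile's types differ from the expected ones,
        # to help devs spot an inadvertent structure change.
        raise RuntimeError(
            f"incompatible statefile: expected {expected}, got {statefile_hash}; "
            f"statefile types were: {statefile_description}"
        )
```

Because the hash is derived mechanically from the stored types, any structural change produces a new version automatically, with no manual version bump for an operator to forget.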

This new feature also allowed us to eliminate the --force-open option, which was previously needed to tell hived to use a statefile created by an older version of hived that was nevertheless still compatible with the current version (because there had been no statefile data structure changes between the two versions). This option, while useful in the past, could also cause a hived node to be started in an inconsistent state if the statefile really wasn’t compatible, because hived had to rely on the user to know the actual compatibility of the statefile. So eliminating this option is very beneficial for ensuring nodes on the network always start in a consistent state.

Similar versioning functionality was also added for state snapshots. Snapshots are essentially platform-independent statefiles, so they suffered from the same potential incompatibility problems.

Blockchain testing

We created a bunch of low level unit tests to cover all account history operations. More details on this can be found here: https://gitlab.syncad.com/hive/hive/-/merge_requests/895

We also fixed some edge-case bugs in hafah code that were found during testing of account history API regression tests.

We created some new tests to test stopping and restarting a hived node. These tests were useful, for instance, in testing the new statefile and snapshot versioning code.

New ICEBerg technology

To test hived and HAF apps, it is often necessary to perform a full replay of the blockchain to set up a robust testing environment. Unfortunately, this takes a lot of time and consumes a lot of computing resources (memory, disk space, etc).

The idea behind ICEBerg is to create a testing environment using a smaller set of blocks (e.g. the last 5 million blocks), taking less time and consuming fewer resources, making it cheaper and easier for HAF developers to create useful testing environments.

Currently, we already do something similar in our regression tests, using the first 5 million blocks plus some “mock” blocks that contain interesting operations we need in our test environment. The difference with ICEBerg is that we will be able to use an arbitrary set of existing blocks, not just a sequential set starting at the first block.

ICEBerg works by analyzing the blocks to be processed and determining the set of initial conditions required for the block transactions to be “valid” when processed without first processing all the preceding blocks. For example, it determines what accounts and balances need to exist in order for the transactions to be valid.
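A toy illustration of that analysis, using simplified stand-in block and operation shapes (the real ICEBerg handles the full range of Hive operations, not just transfers):

```python
# Toy sketch of the ICEBerg idea: scan a window of blocks and derive the
# initial state (accounts and minimum balances) needed for those blocks
# to validate without replaying everything that came before them.

def derive_initial_conditions(blocks):
    """Return the accounts that must pre-exist and the minimum balance each
    must start with, so every transfer in the block window can validate."""
    accounts = set()
    min_balance = {}   # lowest balance each account may start with
    balance = {}       # running balance relative to the (unknown) start
    for block in blocks:
        for op in block["operations"]:
            if op["type"] == "transfer":
                for name in (op["from"], op["to"]):
                    accounts.add(name)
                    balance.setdefault(name, 0)
                    min_balance.setdefault(name, 0)
                balance[op["from"]] -= op["amount"]
                balance[op["to"]] += op["amount"]
                # If the sender's running balance went negative, the account
                # must have started with at least that much.
                deficit = -balance[op["from"]]
                if deficit > min_balance[op["from"]]:
                    min_balance[op["from"]] = deficit
    return accounts, min_balance
```

For example, if alice sends 10 to bob and bob then sends 15 to carol, the analysis concludes alice must start with at least 10 and bob with at least 5 for both transfers to validate.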

ICEBerg is still a work-in-progress, but so far it’s able to create enough initial conditions to allow 80% of the last 5 million blocks to successfully validate.

Clive (a new Hive wallet with a text-based user interface)

Clive is a Hive wallet written in Python that runs on your own computer (it is not a web-based wallet where the code comes from a remote server) so it is inherently more secure than web-based wallets. Currently there are two such wallets available and supported in the Hive ecosystem: 1) a command-line interface wallet (aka the CLI wallet) written using C++ and 2) a graphical interface wallet called Vessel (a JavaScript-based wallet).

Clive is designed to be easier to use than the existing CLI wallet, but unlike Vessel it doesn’t require a graphical environment, only a terminal. For most people, Clive should provide a friendlier interface than the CLI wallet for performing Hive operations in a high-security environment.

Clive was also designed to be easier to maintain and enhance than the existing CLI wallet, especially as future changes are made to the Hive protocol and ecosystem. Another impetus for creating Clive was to simplify future testing efforts.

Clive is still a work in progress, but we expect to release a “recommended for public use” version in the next couple of weeks.

Beekeeper (a new tool for managing encryption keys)

Along with the creation of Clive, we created a new C++ program that can be used to store encryption keys and sign transactions with those keys. The purpose of this new program is to separate the high-security aspects of encryption key management from other wallet operations. For example, Clive doesn’t directly store keys or sign transactions. Instead it communicates with a separate beekeeper process and requests these operations to be performed. For those familiar with the EOS ecosystem, beekeeper is based on the ideas embodied in keosd.
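The division of responsibilities can be sketched roughly as follows. This is not beekeeper’s actual API; the class and method names are invented, and an HMAC stands in for real ECDSA transaction signing, purely to show the separation between the wallet and the key holder:

```python
# Illustrative sketch: the wallet never touches key material; it only asks
# a beekeeper-like component for signatures (in reality, beekeeper is a
# separate C++ process the wallet talks to, not an in-process object).
import hashlib
import hmac

class Beekeeper:
    """Holds keys privately; callers can only request signatures."""
    def __init__(self):
        self._keys = {}  # key name -> secret bytes (never exposed)

    def import_key(self, name: str, secret: bytes) -> None:
        self._keys[name] = secret

    def sign(self, name: str, transaction: bytes) -> str:
        # The caller passes the serialized transaction and receives only
        # the signature, never the key itself.
        return hmac.new(self._keys[name], transaction, hashlib.sha256).hexdigest()

class Wallet:
    """A Clive-like wallet that delegates all signing to beekeeper."""
    def __init__(self, beekeeper: Beekeeper):
        self._bk = beekeeper

    def send_transaction(self, key_name: str, tx: bytes) -> dict:
        signature = self._bk.sign(key_name, tx)
        return {"tx": tx, "signature": signature}
```

The point of the split is that a bug in the wallet’s UI or transaction-building code can’t leak key material, because that code simply never has access to it.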

The code for beekeeper is located in the programs/beekeeper subdirectory of the hive repo.

Hive Application Framework (HAF)

HAF is a SQL-based framework for creating highly scalable and robust 2nd layer apps that operate using data from the Hive blockchain.

HAF query supervisor to prevent rogue queries consuming too many resources

One of the more important tasks we’ve been working on for HAF recently is the creation of a “query supervisor”. This is code that monitors the queries executed on a SQL server and limits the resources a query can consume before it is terminated.

The query supervisor is necessary to prevent potential denial-of-service attacks against a HAF server, especially one that allows the execution of semi-arbitrary SQL code (e.g. the kind of SQL code that would likely be used by a smart contract system running on a HAF server).

In its current state, the query supervisor can terminate queries that run beyond a specified amount of time or that touch too many rows in the database.
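The policy can be illustrated with a small sketch. The real supervisor hooks into the SQL server itself; this Python version just expresses the two limits mentioned above (execution time and rows touched), with invented names:

```python
# Minimal sketch of the query-supervisor policy: cut a query off once it
# exceeds a time budget or touches too many rows.
import time

class QueryTerminated(Exception):
    """Raised when a query exceeds its configured resource limits."""

def supervised(rows, max_rows=1000, max_seconds=1.0):
    """Yield rows from a query, terminating it at the configured limits."""
    start = time.monotonic()
    for count, row in enumerate(rows, start=1):
        if count > max_rows:
            raise QueryTerminated(f"touched more than {max_rows} rows")
        if time.monotonic() - start > max_seconds:
            raise QueryTerminated(f"ran longer than {max_seconds}s")
        yield row
```

Checking limits per row (rather than only at the end) is what makes this useful against denial-of-service: a runaway query is stopped mid-flight instead of after it has already consumed the server’s resources.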

As an initial test of the query supervisor, we’re creating a HAF server configuration where the query supervisor is installed as part of the HAF deployment process, enabling someone to offer up a “public HAF server” where arbitrary “read-only” queries can be performed on the HAF server’s database (limited in query execution time by the query supervisor).

Standardized composite data types for Hive blockchain operations

Another HAF task under way is the creation of SQL “composite data types” that correspond to Hive blockchain operations, along with the associated code for building such composite objects from raw operations data. Having a common SQL-based representation of this data should promote code re-use across HAF applications.
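For illustration, here is the same idea expressed as a typed object built from raw operation data. The actual HAF work defines SQL composite types, not Python classes; the field names below simply mirror Hive’s transfer operation, and the parsing helper is hypothetical:

```python
# Sketch: turning a raw JSON-style operation into a typed object, so every
# app shares one representation instead of re-parsing raw data itself.
from dataclasses import dataclass

@dataclass
class TransferOperation:
    from_account: str
    to_account: str
    amount: str
    memo: str

def parse_transfer(raw: dict) -> TransferOperation:
    """Build a typed transfer object from a raw operation record."""
    value = raw["value"]
    return TransferOperation(
        from_account=value["from"],
        to_account=value["to"],
        amount=value["amount"],
        memo=value.get("memo", ""),
    )
```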

HAF-based block explorer

We’ve been working for a while on a new HAF-based block explorer for Hive. There were two main reasons for creating another block explorer: 1) we wanted an open-source block explorer that could easily be deployed by any of the existing API node operators without adding much overhead to their servers, as a means of further decentralizing this critical functionality, and 2) we wanted a “heavy-duty” HAF app that could help us identify and fix potential weaknesses and programming difficulties that might arise when developing a complex HAF-based app. So far, I feel we’re well on the way to achieving both goals.

In the past month we added a couple of new programmers to the task to solve some of the problems we’ve encountered along the way. New eyes on the project quickly identified one improvement: we’ve now split the Python-based backend code (the HAF block explorer app itself) into a separate repository from the front-end web app written in JavaScript that communicates with it.

We’ve also substantially increased the speed of some of the key SQL queries needed by the block explorer, and in the process we’ve seen some of the pitfalls that can lead to slow queries when processing block operations in the HAF database. We’re taking this knowledge and working on re-usable code that will help future HAF app developers avoid such mistakes.

Improving interoperability of HAF apps

Another useful aspect of working on the block explorer was that it required us to create a HAF app that worked with data from another HAF app, the balance_tracker that tracks the balances of Hive accounts on a block-by-block basis. In the future, we see this type of code re-use as very important to the rapid expansion of the HAF app ecosystem, so it was very important to test it out in practice and solve issues related to such code reuse. As it turns out, we did find and resolve several problems related to such code reuse.

Consensus State Providers (a work-in-progress)

The block explorer app also sometimes needed state information that wasn’t previously stored in the HAF database. Some of this data is now stored in the database, but we also foresee that there might be future apps that need even more such data, but we don’t want to require it to be stored in every HAF server.

Early on, we developed a potential solution to such issues that we call a Consensus State Provider. Using a local Consensus State Provider of its own, each HAF app can automatically replicate the consensus logic used by hived to compute new hived state information on a block-by-block basis.

Functionally, it is kind of like each HAF app is able to spawn its own copy of a hived node, then play blocks to that hived node in order to update a personally-owned statefile. This eliminates the need to replicate the algorithms used by hived inside a HAF app itself, unlike, for example, the balance_tracker HAF app, which essentially replicates the algorithm for updating account balances using its own SQL code.

The work on Consensus State Providers is taking place in the HAF repo.

Generic devops tasks (primarily Continuous Integration tasks for Gitlab repositories)

We made various improvements to the tasks used to build and create deployable docker images for various Hive repos. At this point we have CI tasks to create docker images for hived, HAF, hafah, and hivemind. Improvements include making the docker images more configurable and solving intermittent problems associated with parallel testing during image creation that could result in erroneous test failures. We also created a new CI task for deploying a mirrornet consensus node. And we’ve done work to allow re-use of existing docker images where possible to speed up the overall time required by CI tasks.

Some upcoming tasks

  • Continue refactoring work on resource credit code (for eventual inclusion in consensus logic)
  • Continue work on query_supervisor (precursor to HAF-based smart contract processing)
  • Continue work on ICEBerg (for hived and HAF app testing)
  • Continue work on Consensus State Provider code (for more powerful HAF apps)
  • Continue work on HAF-based block explorer backend and GUI
  • Finish up work on Clive wallet
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode (low priority)
  • Create documentation for various new tools (beekeeper, Clive, consensus state providers) and separate HAF documentation into smaller, more easily digestible chunks.
  • Create docker compose scripts to ease deployment of API node infrastructure

I'm still waiting for SMTs, any timeframe?


SMT won't be coming. We will have smart contracts though. Probably within 1 year.

don't we already have them through dlux and hive-engine?


I meant decentralized smart contracts

Plus VSC (@vsc.network) coming out in the next few months bringing decentralized smart contracts to hive.

do you know why they are "virtual"? or what the difference of this to blocktrades will be?

The "virtual" part implies the L2 aspect of the overall system, and partially just naming only.. The smart contracts operate a bit differently than other solutions, instead of smart contracts operating across 100% of nodes, only a handful of nodes execute each smart contract. This allows the network to scale horizontally and maintain long term scalability. Each smart contract is just JS code that is designed under the restrictions of the smart contract VM (i.e max execution time, deterministic, no 3rd party libs, etc). The goal is to provide users with a highly advanced smart contract interface that both tackles all the needed offchain functionality and onchain functionality as well. While most of the data will be offchain, it will likely end up having the most support for onchain operations out of everything built so far.

thanks for clarifying that

SMT means?

steem media tokens

Slow base layer Media Tokens

Smtees!™ were determined to slow the base layer chain down by too much due to all the calculations that would have to be performed so it was decided to push that to the 2nd layer and to adopt smart contracts later.

What a delightful update!! All of these are something I would want to eventually dig into. Some are earlier than others.

A quick and very important question: Can the combination of Clive and BeeKeeper be used to create Hive accounts? The application in mind, is related towards carrying out hive account creation functions without having to go on a browser. This will allow apps to be connected to the blockchain without having to minimize and go on a browser that will break the user experience.

I didn't see anything related to account creation in the .cpp files.

Clive doesn't currently support the account creation operation, but such functionality could be added. It's a bit more complicated than most blockchain operations, but certainly possible to do.

It is a feature I would request at the earliest. Without dipping in too technically, I would be able to run multiple instances of a Unity standalone game through the inbuilt command line arguments of Unity, and run coroutines synchronously, meaning the user would have to complete the account creation before proceeding with the game.

I am very excited with what you have built and what you are planning to build.

As a non-technical person I am not able to understand everything explained or shared in this post, but I am happy to hear about the Clive wallet, and I'm also really impressed with your hard work towards Hive blockchain development and improvements.

Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain And have been rewarded with New badge(s)

You received more than 150000 upvotes.
Your next target is to reach 160000 upvotes.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

Check out our last posts:

The Hive Gamification Proposal
Support the HiveBuzz project. Vote for our proposal!

Very good news. I am currently making use of the hive API and other services to develop technology in my project and community. I am very excited to see all the resources that hive offers for developers.

very good news for us to read, I hope you are always healthy in creating it

Thanks for doing something.

I was looking for your comment, now waiting for another post that explains this for dummies :)

In order to achieve things, people must do stuff, and sometimes that stuff leads to other things that require fancy names. Oh, and there's plans to continue accomplishments. Everything should get easier for everyone except for them as things progress and stuff happens.

Hope that helps.

Pro Tip: If you immerse yourself in a language, you'll eventually learn it with enough time. However, don't ever let anyone know because they might approach you expecting answers, and that could cost you an entire day.

nowordslefttorespond 💀

You guys have been pretty busy. Good to see all this progress. I will have to look at the different wallets. Of course security is a high priority.

I hope the boss lets you take a break now and then.

You have been very busy! While most of this doesn't make much sense to me because I am technically illiterate in this space, I just wanted to say I appreciate all the effort that goes into it!

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge

Post with the highest payout of the day.


Hive keeps chugging along. We appreciate it all

Your persistence is truly admirable!

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge

Post with the highest payout of the week.


App my please

wow, I look forward to Clive. I am sure it will prevent any sort of fraud. Hive is light years ahead of other such blockchain-based systems. (if there are)