6th update of 2021 on BlockTrades work on Hive software

in HiveDevs · 2 months ago

Hived work (blockchain node software)

SQL account history plugin

In the last week or so, we continued our work on the SQL account history plugin.

Don’t be confused by the name of this plugin: while it can and will function as a very fast replacement for the rocksdb account history plugin, its primary purpose is to lay the foundation for “modular hivemind” (an application framework for 2nd layer applications).

We added support for resuming a hived replay configured with the SQL account history plugin. This support is also necessary for enabling the use of this plugin when operated in conjunction with a hivemind running in “live sync” mode.

The SQL account history plugin also now populates the database with block and transaction signatures needed for get_block api calls.

Finally, we added code to drop/restore the SQL indexes associated with the account history data to enable faster hived replays (this can be useful, for example, if no hivemind sync is happening at the same time).
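The drop/restore technique above is a standard bulk-loading optimization. Here is a minimal illustrative sketch (using Python's built-in sqlite3, not the actual plugin code or its Postgres schema; table and index names are hypothetical): drop the index before the bulk load so inserts skip incremental index maintenance, then rebuild it once afterwards.

```python
import sqlite3

# Illustrative only: the real plugin targets PostgreSQL; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_ops (account TEXT, block_num INTEGER, op TEXT)")
conn.execute("CREATE INDEX idx_account_ops ON account_ops (account, block_num)")

def bulk_load(rows):
    # 1. Drop the index so inserts don't pay for per-row index updates.
    conn.execute("DROP INDEX idx_account_ops")
    conn.executemany("INSERT INTO account_ops VALUES (?, ?, ?)", rows)
    # 2. Rebuild the index in a single pass after the load completes.
    conn.execute("CREATE INDEX idx_account_ops ON account_ops (account, block_num)")
    conn.commit()

bulk_load([("alice", n, "vote") for n in range(1000)])
count = conn.execute(
    "SELECT COUNT(*) FROM account_ops WHERE account = 'alice'").fetchone()[0]
print(count)  # 1000
```

For large replays the one-time rebuild is far cheaper than maintaining the index across millions of inserts, which is why it only pays off when nothing (such as a concurrent hivemind sync) needs the index during the load.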

Other work on hived

We reviewed recent changes in the hived code, which resulted in some code fixes and code reformats and new unit tests. We also did some preliminary reviews of current merge requests for hived.

Modular Hivemind (2nd layer application framework)

Syncing hivemind from SQL account history plugin databases

Hivemind can now run an “initial sync” directly from the postgres data generated by the SQL account history plugin. We repeated the “initial sync” of hivemind that died near the end during our previous test (running for 49.58M blocks) and confirmed that it takes just under 46 hours (just under two days). For comparison, a hivemind sync using the old “pull” process with all our other recent optimizations still took 61.2 hours.
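To put the timing numbers above in perspective, the new push-based sync from the SQL account history data is roughly a quarter faster than the old pull-based process:

```python
# Recap of the figures quoted above (hours for a ~49.58M block sync).
push_hours = 46.0   # sync directly from SQL account history data
pull_hours = 61.2   # old "pull" process with recent optimizations
speedup = (pull_hours - push_hours) / pull_hours
print(f"{speedup:.0%} faster")  # 25% faster
```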

The final database size for this test was: SQL account history-generated data (~1.1 TB) + hivemind sync-generated data (~0.5 TB) for a total size of 1.6 TB.

Experimenting with Postgres extensions for fork handling

One of the key challenges for modular hivemind is automated support of fork handling, to remove this burden from 2nd layer applications. Currently we’re experimenting with the use of C-based postgres extensions to help accomplish this task.
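The core idea behind automated fork handling is to keep enough per-block undo information to roll 2nd-layer state back to the last common block when a fork occurs. The sketch below models that idea in Python; the real work is being prototyped as a C-based Postgres extension, and the class and method names here (`State`, `apply_block`, `rollback_to`) are purely illustrative.

```python
class State:
    def __init__(self):
        self.balances = {}   # current 2nd-layer app state
        self.undo_log = []   # (block_num, account, previous_balance)

    def apply_block(self, block_num, transfers):
        # Record the old value of everything we change, then apply the change.
        for account, delta in transfers:
            prev = self.balances.get(account, 0)
            self.undo_log.append((block_num, account, prev))
            self.balances[account] = prev + delta

    def rollback_to(self, block_num):
        # Undo, in reverse order, every change made after `block_num`.
        while self.undo_log and self.undo_log[-1][0] > block_num:
            _, account, prev = self.undo_log.pop()
            self.balances[account] = prev

state = State()
state.apply_block(1, [("alice", 10)])
state.apply_block(2, [("alice", 5), ("bob", 7)])
state.rollback_to(1)   # a fork replaced block 2
print(state.balances)  # {'alice': 10, 'bob': 0}
```

Pushing this undo bookkeeping down into the database itself is what would free 2nd-layer apps from implementing it themselves.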

Miscellaneous changes to hive sync

We eliminated the use of some SQL constructs in hivemind sync that obtained exclusive locks on tables and therefore prevented database dumps from running concurrently with a hivemind sync. Without this change, a hivemind server operator would have had to temporarily disable their hivemind server in order to make a database backup.

Official hivemind release v1.24.2

We completed testing of the develop branch, then merged all our latest work on hivemind to the master branch and tagged it as v1.24.2.

We notified all API node operators that they should upgrade to v1.24.2 as soon as practical, as it contains not only optimizations and the exclusive lock fix, but also various bugfixes related to hivemind indexing and API responses.

Condenser (https://hive.blog code base)

We finished fixing condenser bugs related to the follows, mutes, and decentralized lists, and deployed a new version of hive.blog with the fixes. At this point, we’ve completed and tested all work on decentralized lists.

Plans for next week

On hived side, we’ll continue working on changes for HF25 (vote expiration for governance changes, curation vote window changes, merge request review, etc). Depending on how much time we have, we may take a look at the HBD interest code, now that witnesses are experimenting with enabling HBD interest payments. I’ve also considered adding a mechanism for helping to maintain the HBD peg if we have time. I’ll write a post about it if we make enough progress on other fronts in the next week or so that it seems feasible to fit the enhanced pegging mechanism into HF25.

For modular hivemind, we’ll continue to explore ways to automate fork-handling.

On the optimization front, we’re experimenting with python profiling to see if we can further reduce the time for a hivemind initial sync from SQL account history data. For a visualization tool, we’re using kcachegrind.
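The profiling workflow is roughly: collect a profile with Python's cProfile, then inspect it (a saved `.prof` dump can also be converted into a format kcachegrind can read, e.g. with a converter like pyprof2calltree; that converter is an assumption here, not something named in the post). A minimal sketch:

```python
import cProfile, io, pstats

def hot_function():
    # Stand-in for a hivemind indexing routine being profiled.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
result = hot_function()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # print the top 5 entries by cumulative time
print(result)
```

Running this against the real sync code highlights which functions dominate the initial sync time and are worth optimizing.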

We also want to look at combining the process of replaying hived with the SQL account history plugin (which takes about 11 hours) with running a hivemind sync simultaneously (which takes 46 hours). Currently these processes run sequentially, which takes 11 + 46 = 57 hours.

By running them concurrently, I believe we can reduce the overall time to nearly just the time of the hivemind sync (i.e. 46 hours currently). As a reminder, before we began optimizing sync time, this process took 17 hours for a hived replay with rocksdb account history plus about 96 hours for a hivemind sync.
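The concurrent arrangement is essentially a producer/consumer pipeline: the replay streams block data into the database while the sync consumes it as it arrives, so total time approaches that of the slower stage. A toy model (names and structure are illustrative, not the real processes):

```python
import queue
import threading

blocks = queue.Queue()
synced = []

def replay(n_blocks):
    # Stand-in for hived writing account history rows per block.
    for n in range(n_blocks):
        blocks.put(n)
    blocks.put(None)  # sentinel: replay finished

def sync():
    # Stand-in for hivemind consuming rows as they become available.
    while True:
        n = blocks.get()
        if n is None:
            break
        synced.append(n)

producer = threading.Thread(target=replay, args=(100,))
consumer = threading.Thread(target=sync)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(synced))  # 100
```

With one producer and one consumer on a FIFO queue, blocks arrive at the sync stage in order, which matters because hivemind processes blocks sequentially.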


Make payments eternal, 7 days is poppy-nonsense.


Why is it necessary to have another fork? And what about this interest the witnesses are talking about? When will that 3% be distributed?

By the way thanks for the hard work.

The fork is to implement features discussed in section titled "Governance changes planned for hardfork 25" in this post: https://hive.blog/hive-139531/@blocktrades/roadmap-for-hive-related-work-by-blocktrades-in-the-next-6-months

In addition to those changes, @howo is also working on some changes for the HF (e.g. recurrent payments).

This recent post has an in-depth discussion on how HBD interest works: https://hive.blog/hive-167922/@themarkymark/finding-out-how-does-hbd-interest-work-by-reviewing-code

Nice to see the optimization efforts you all have been talking about paying off.

Great work! Keep going.

I think I remember you saying in a post several months ago that the team was going to review the SMT code to see how much was possibly usable when you get to that stage. Did you ever provide an update on that? If not, how much seems usable? If more than 20%, I would call that a win. Thanks.

yes, here's an excerpt from the follow up report: "We performed a preliminary review of the current state of the SMT code. Unfortunately, we found that the code was far from in a complete state (not only is much of it untested, there’s a fair amount that hasn’t yet been implemented). We have a partially complete report on the state of the C++ code for SMTs, if there’s any developers that would like to review it."

Essentially, it wasn't in a state that could potentially be included in HF25.

Thanks. I am not surprised unfortunately.

I see you consistently upvoted @jrcornel, giving the impression that you valued his content. As you know, a group campaign, based on their own selective criteria which they know not how to express, has downvoted him to zero and basically ruined his existence here on HIVE. As such, I just wanted you to know what one core member of this community had to say about @jrcornel and ask if you still think it's appropriate not to come to his defence? Really, to your own defence? Because not only was that said about him (and it's very representative of the group think, I might add), it was also said about everyone who valued, curated and upvoted his content.

Yes, that’s how they feel about you too.

Best Regards


Hi @blocktrades, so you think this will be included in HF26? And when will HF26 potentially come out?

I don't know if it will be in HF26 or not, depends on too many future events to say. I don't expect to start planning for HF26 until HF25 code enters the testing phase (about a month before release of HF25).

As for the timing of HF26, the idea is to maintain a 6 month development cycle for hardforks when we can, so if we expect HF25 in June, we're likely talking December for HF26. But that's not a hard-and-fast rule, just a general guideline. HFs can be slower or faster depending on what's being done and the urgency of the issue.

Hi @blocktrades - just a quick question. Are there any plans to lower the number of witness votes?

It's been discussed several times in the past, but I've seen little consensus on the issue. It's not planned for HF25.

Any info about smart contracts?

We're planning to implement smart contracts on top of the modular hivemind framework. Phase 1 is modular hivemind; phase 2 is the smart contract implementation. Of the two phases, I think phase 1 is the most challenging technically.

its primary purpose is to lay the foundation for “modular hivemind” (an application framework for 2nd layer applications)

That is a killer feature. The 2nd layer has a lot of potential: an app can literally store anything in JSON and process it as needed. Once it's ready, we should plan to publish some ready-to-run sample code in various languages that can help developers set up a second-layer app in a few hours. I hope the data that's going to be stored in the second layer isn't going to hurt the performance of the chain, right? Also, is there a limit on the size of data that can be stored in the second layer?

A full-scale example app is planned as a deliverable for modular hivemind. Most likely the example will be a wallet app.

I don't expect 2nd layer data to have any major impact on the chain. From a bandwidth perspective, RC limiting will prevent traffic congestion. Most of the computation takes place at the 2nd layer in this model, and the design is such that not every modular hivemind node will need to run every 2nd layer app (for example, in the case of the light wallet, users could just run their own local copy even). This means that each modular hivemind installation gets to determine what data it wants to keep, and it only needs to keep the data for the app or apps it wants to support.

There's no overall limit on 2nd layer data other than that imposed by RC limits, but there is a limit on the size of a single 2nd layer transaction (currently it's 8K per transaction, IIRC).
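Since 2nd-layer data rides in blockchain transactions, a client would typically validate payload size before broadcasting. The 8192-byte figure below is taken from the approximate limit mentioned above, and the helper itself is a hypothetical sketch, not an actual Hive library API:

```python
import json

# Approximate per-transaction limit mentioned above (illustrative).
MAX_CUSTOM_JSON_BYTES = 8192

def fits_in_one_op(payload: dict) -> bool:
    # Serialize compactly, the way a client would before broadcasting.
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return len(encoded) <= MAX_CUSTOM_JSON_BYTES

small = {"app": "demo", "action": "transfer", "to": "alice", "amount": 5}
big = {"blob": "x" * 10000}
print(fits_in_one_op(small), fits_in_one_op(big))  # True False
```

Payloads larger than the limit would have to be split across multiple operations by the app itself.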

Thank you for everything you do for our HIVE

Thanks for sharing the (rough) disk space hivemind is going to use with the new hivemind software. I've been thinking of running a hivemind API node, and now I know I'm going to need a lot more HDD space than I currently have. I think I'll wait till HF25 before setting one up, though, so I can benefit from all these awesome changes.


My witness node - Stream on Vimm.tv

@blocktrades I think it would be a good idea to integrate the Hive blockchain to Cardano's Internet of blockchains and link it to one of Polkadot's bridges whenever those services are available.