BlockTrades update on Hive development work

in HiveDevs • 2 years ago

Last week we continued work on post-HF24 optimizations. Below is a summary of the work done last week and our plans for the upcoming week.

Hived work (blockchain node software)

We added reporting of some virtual ops related to the Hive fund, so block explorers can do better accounting (done in conjunction with @howo):
https://gitlab.syncad.com/hive/hive/-/merge_requests/144
https://gitlab.syncad.com/hive/hive/-/merge_requests/135

We made fixes to the filtering functionality of get_account_history, and fixed the legacy get_account_history plugin: it used 1-based indexing of operation history instead of 0-based indexing like the get_account_history_rocksdb plugin. Now both use 0-based indexing:
https://gitlab.syncad.com/hive/hive/-/merge_requests/145
https://gitlab.syncad.com/hive/hive/-/merge_requests/146
https://gitlab.syncad.com/hive/hive/-/merge_requests/148
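The indexing mismatch fixed above can be illustrated with a toy sketch (this is not the actual plugin code, just an assumption-laden model of the off-by-one behavior):

```python
# Toy illustration of the indexing mismatch: the legacy plugin
# numbered an account's operation history starting at 1, while the
# rocksdb plugin started at 0, so the same sequence number named
# different operations depending on which plugin answered the call.

ops = ["account_create", "vote", "transfer"]

def history_0_based(seq):
    """Both plugins after the fix: sequence 0 is the first operation."""
    return ops[seq]

def history_1_based(seq):
    """Legacy plugin before the fix: sequence 1 was the first operation."""
    return ops[seq - 1]

# Before the fix, sequence number 2 meant "vote" to one plugin and
# "transfer" to the other:
assert history_1_based(2) == "vote"
assert history_0_based(2) == "transfer"
```

Aligning both plugins on 0-based indexing means a client can switch between them without renumbering its pagination cursors.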

Fixes to hived API tests:
https://gitlab.syncad.com/hive/hive/-/merge_requests/141

Miscellaneous:
https://gitlab.syncad.com/hive/hive/-/merge_requests/147 (set reported version to 1.24.6)

We’re currently working on a major optimization to the get_block_api plugin that should provide a big boost in performance for the get_block API call (this will likely enable us to speed up the hivemind “full sync” process as well). The old implementation used an overly pessimistic mutex locking scheme that severely degraded potential performance under load.
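The pattern behind this optimization can be sketched as follows. This is a toy model, not hived's implementation (hived is C++, and the real data structures differ): because blocks are appended by a single writer thread and API readers only request blocks that are already final, readers of older blocks don't actually need a mutex at all.

```python
class BlockStore:
    """Toy model of a single-writer, many-reader block store.

    One writer thread appends new blocks; reader threads only ever
    ask for blocks at or below the last published head. Since
    appended entries are never mutated afterwards, those reads need
    no lock, which is the insight behind removing the pessimistic
    mutex described above.
    """

    def __init__(self):
        self._blocks = []   # append-only; old entries are immutable
        self._head = 0      # highest block number published so far

    def append_block(self, block):
        # Called from the single writer thread only.
        self._blocks.append(block)
        self._head += 1     # publish only after the data is in place

    def get_block(self, num):
        # Lock-free read path: slots at or below _head never change.
        if 1 <= num <= self._head:
            return self._blocks[num - 1]
        return None         # block not yet produced
```

A coarse mutex around both paths would serialize every reader behind the writer; exploiting the append-only invariant lets reads proceed concurrently.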

Hived status

We completed all tests on changes made in the previous week and this week and there are no known outstanding issues with hived operation (other than the known longstanding issue with servicing of API requests during startup of a hived node).

We plan to tag v1.24.7 as soon as we complete the optimizations to the get_block_api plugin. v1.24.7 will be a recommended upgrade for API node operators, but it doesn’t contain changes needed by witness nodes or exchanges.

Hivemind

We made numerous optimizations and bug fixes in hivemind this past week:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/332
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/333
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/330
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/334
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/228
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/335
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/211
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/338
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/341
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/342
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/343
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/344
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/345
One of the more visible fixes is that comment counts are now correctly reported for posts.

Due to the rapid pace with which changes are being made to hivemind, we also started upgrading our automated build-and-test (CI) system to support building on multiple gitlab runners so that our devs could get faster feedback on changes they make.

The primary challenge was to set up more than one system configured for performing a hivemind sync, and to allow troubleshooting when tests fail. For speed reasons, hivemind’s CI system is configured to only sync to the 5 millionth block, but we’re adding an option to do testing with a full sync as well (via a manual trigger, as this test is much more time-consuming).
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/327
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/340

We also introduced mock data providers to allow testing of operations that didn’t occur by the 5 millionth block:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/336
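The mock-provider idea can be sketched roughly like this. The names and payload shapes below are invented for illustration and are not hivemind's actual API: the point is that synthetic operations are injected into the replayed block stream at chosen heights, so op types that only appear after block 5 million still get exercised by the short CI sync.

```python
# Hypothetical mock data provider: map block heights to synthetic
# operations that should be injected when that block is replayed.
MOCK_OPS = {
    4_999_990: [{"type": "create_proposal_operation",
                 "value": {"creator": "alice", "subject": "test"}}],
}

def blocks_with_mocks(real_blocks):
    """Yield each real block with any mock ops for its height appended."""
    for block in real_blocks:
        extra = MOCK_OPS.get(block["num"], [])
        if extra:
            # Copy the block so the original replay data stays untouched.
            block = dict(block, ops=block["ops"] + extra)
        yield block

blocks = [{"num": 4_999_989, "ops": []},
          {"num": 4_999_990, "ops": []}]
merged = list(blocks_with_mocks(blocks))
```

Downstream indexing code then processes `merged` exactly as it would a real block stream, with no special-casing for test data.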

Hivemind status (2nd layer social media microservice)

We deployed all the above changes to hived and hivemind to our production node, api.hive.blog, and they’re working well. We still have a few optimizations to make (most importantly, further speedups to unread_notifications, which currently averages around 1.5 seconds to complete).

We created a data dump of our hivemind database and several other API node operators have used that data dump to update their node and begin serving data with the latest code.

Condenser + Condenser wallet (open-source code for hive.blog)

The most visible change that BlockTrades made to condenser last week will likely be deployed tomorrow. This change will update the vote information for a post 9 seconds after the user does an upvote or a downvote on the post:
https://gitlab.syncad.com/hive/condenser/-/merge_requests/136

What’s the plan for next week?

We’ll be finishing up a few more optimizations to hived and hivemind. In addition to speeding up API calls for both, we’re also going to look at speeding up the hivemind full sync time (currently it takes 4 days). And we’ll continue filling out the test cases in the automated testing suites for both projects.

I’d hoped to begin analysis of future features for Hive (both for hardfork 25 and for 2nd layer apps support), but most of our time last week was consumed with optimization of the current system. We did make a little headway on this issue in the Hive developers meeting we had earlier today, though.

I’ll make a post later this week in the Hive improvements community on some of the features we’re considering both for HF25 and for 2nd layer features (all the hardfork features are ones that have been previously discussed many times by the Hive community and have met general approval).

One of the nice things about the architecture we’re moving Hive towards is that we can now add more capabilities to Hive without requiring a hardfork to do so. We will still need to do a hardfork when we make governance improvements, of course, but many of our future features can be released as they become ready, without having to coordinate their release with other features and with exchanges.


Thanks for the update and all the hard work!

This change will update the vote information for a post 9 seconds after the user does an upvote or a downvote

Could you elaborate why a shorter delay is not aimed at? What are the technical limitations here, considering that blocks are validated within 3 secs? I would consider a fast response time to be a critical factor for the user experience.

Thank you!

It'll be sped up further after we enhance hivemind to support microfork recovery. In the past, hivemind has always stayed 2 blocks behind the current block (i.e. 6 seconds in the past) because it didn't have a way to recover if it put data in the database that gets changed due to a microfork. We do have a plan to enhance the data that is placed into the database, so that hivemind can revert data from a microfork. Once that is possible, we can allow hivemind to directly report data from the head block of the blockchain, and this delay will no longer be necessary in condenser.
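The microfork-recovery idea described above can be sketched minimally. This is not hivemind's actual schema, just an illustration of the principle: record each write together with the block that produced it, so every change from an orphaned block can be reverted.

```python
class ReversibleState:
    """Toy key-value store whose writes can be undone per block.

    apply() remembers the previous value of each key it overwrites,
    keyed by block number; revert_block() restores those values,
    which is what lets an indexer safely follow the head block and
    roll back when a microfork orphans it.
    """

    def __init__(self):
        self.rows = {}   # key -> current value (the "database")
        self.undo = {}   # block_num -> list of (key, previous_value)

    def apply(self, block_num, key, value):
        self.undo.setdefault(block_num, []).append((key, self.rows.get(key)))
        self.rows[key] = value

    def revert_block(self, block_num):
        """Undo every change made by an orphaned block, newest first."""
        for key, prev in reversed(self.undo.pop(block_num, [])):
            if prev is None:
                self.rows.pop(key, None)   # key didn't exist before
            else:
                self.rows[key] = prev
```

Without this undo information, the only safe option is to lag behind the head block until blocks are effectively irreversible, which is exactly the 2-block delay being discussed.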

By the way, this is deployed in production now, just tested it and it works.

Thanks for the explanation, makes a lot of sense now understanding the concern with microforks.

What I wonder, though, is couldn't condenser itself bridge this delay with a cached estimate which then gets refined and solidified after 2-3 blocks of time? Basically having an instant response at the cost of a slight and momentary inaccuracy (payouts in $ are in flux anyway, even if rshares don't change). I may be missing problematic implications this could have, but I consider it worth thinking in that direction since responsiveness is so critical for the user experience. Thanks!

Adding code to condenser to estimate it is possible, but if we're going to make hivemind faster soon, it's probably not worth it.

Sounds great.

I think we need to push some marketing campaigns on the work being done, to let the outside world know. What's your vision on how Hive can be showcased to the crypto community?

Should we not start promoting using professional marketing?

There are several people working on this now and I hope to see some attractive proposals put forward soon. I've stayed out of marketing, as I hope this is an area where other people can contribute. I want to keep the BlockTrades team focused on technology development as much as possible.

Yeah, improvements to the blockchain are great. But Hive should be more dapp-friendly: all the docs are jumbled up, and we are on our own, scanning through commits on GitHub to figure out what needs to be done with our dapp. Maybe hire more teams to do that?

One of the nice things about the architecture we’re moving Hive towards is that we can now add more capabilities to Hive without requiring a hardfork to do so.

This will be of great benefit. Updates are very important for the outlook of the community. Waiting for a hardfork, which tends to take 6 months, causes people to lose interest.

However, waking up one day and suddenly seeing a few new updates really can get people excited. It shows that progress is being made.

Thank you for the hard work and feedback.

We’re currently working on a major optimization to the get_block_api plugin that should likely provide a big boost in performance for the get_block API call

Will these performance improvements also apply to get_ops_in_block?

There have already been some recent improvements to get_ops_in_block, especially when an API node is under load, due to the earlier mutex lock fixes we made. But this particular improvement is focused just on the get_block call, because the get_block_api plugin had its own mutex (an unnecessary one, because all writing is done from a single thread and the read calls only read older blocks).

Moving as much logic as possible to the 2nd layer is definitely the right move! An app shouldn't even need to call any RPC node directly to get data but only a 2nd layer server. Most of the Hive state can be rebuilt on a more conventional 2nd layer database.

One of the nice things about the architecture we’re moving Hive towards is that we can now add more capabilities to Hive without requiring a hardfork to do so.

I think this is good, it could lead to more rapid development of the nice little goodies people want.

One of the nice things about the architecture we’re moving Hive towards is that we can now add more capabilities to Hive without requiring a hardfork to do so....

Cool! This bit I get.

As always, excellent work, friend @blocktrades. The observations I made to you a few days ago are fully resolved, and apart from that, we can see the value of each vote again. Despite living in a country where the internet is crap, I have noticed that Hive is faster. I did not know that through your website I could exchange HBD for Hive; I tried it and it was super fast. In a few days I will exchange HBD for a Hive Power delegation, to grow my account. Thanks again for everything 👊😎

Thanks for all the hard work that you all are doing.

One of the nice things about the architecture we’re moving Hive towards is that we can now add more capabilities to Hive without requiring a hardfork to do so.

Everyone is quoting this so I agree also.

Here's to less hardforking and to building capabilities!


How hard/easy will it be to add the number of views on a post?
Will it affect performance?
I remember Steemit had it for post hits (including from the same user); it wasn't ideal, but it was better than nothing.
Don't know why it was removed.

This isn't a blockchain-level change; it's a feature that has to be directly supported by frontend post browsers, and that is how it was done in the past by condenser. I think Steemit probably removed it from condenser because the counts wouldn't be accurate when an article is viewed across multiple frontends, making the view counts overly pessimistic.

I suppose in theory, frontends could periodically report their own view counts via the blockchain and aggregate the view counts that way. This would still depend on the frontends to all report their view counts, of course. And it could add a lot of overhead to the blockchain data, depending on how often such reports were made.

Another possibility would be to add some kind of peer-to-peer sharing of the viewing data between frontend browsers via a 2nd layer service such as hivemind, and this latter approach wouldn't add any blockchain bloat.
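The periodic-report idea above could be sketched roughly as follows. The operation shape and field names here are invented for illustration (nothing like this exists in hivemind today): each frontend would broadcast a batched report of views since its last report, and a 2nd-layer service would sum the batches per post.

```python
from collections import Counter

# Running per-post totals maintained by a hypothetical 2nd-layer service.
view_totals = Counter()

def handle_view_report(report):
    """Process one hypothetical batched 'view report', e.g.
    {"frontend": "hive.blog", "views": {"@alice/post-1": 40}}.
    Batching is what keeps the on-chain (or 2nd-layer) overhead low:
    one report summarizes many individual page views."""
    for post, count in report["views"].items():
        view_totals[post] += count

handle_view_report({"frontend": "hive.blog",
                    "views": {"@alice/post-1": 40}})
handle_view_report({"frontend": "peakd.com",
                    "views": {"@alice/post-1": 25, "@bob/post-2": 3}})
# view_totals["@alice/post-1"] is now 65, aggregated across frontends
```

The tradeoff mentioned in the text is visible here: accuracy depends on every frontend actually submitting reports, and report frequency determines how much data overhead the scheme adds.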

That view count did not even work properly, as you could just refresh your browser to increase the count.

Better than nothing

Thank you so much for this explanation


I think getting to a point where we don't need to do any more hardforks would be a great thing in terms of attracting new users. Part of the reason investors haven't flocked to Steem/Hive has been that the rules are always changing. Let the layer 2 dapps change their rules if they want, but let's get the layer 1 stuff set in stone sooner rather than later.

Awesome, great to see all the hard work that has been made😊👍

Great work! Does somebody know if Smart Media Tokens are still on the table?

We're going to review the work that has been done there as one of our next steps after we finish up our hivemind work. It's too soon to say yet.

Thanks for your quick answer... Just in order to get an idea, when do you think you will finish up your hivemind work? In a month, 6 months, 12 months? How long do you think will your analysis take? I am really into SMTs, this was for me the reason to stay.

Finishing up current hivemind work should be done in less than a month, in fact hopefully within a week or so. There will still be work done afterwards, though, as we'll schedule future work for it too.

For the analysis, it's hard to say for sure, since we'll be looking not only at SMT code, but also at potentially competing methods of achieving similar features. My guess is it will take a couple of weeks for the analysis.

This sounds excellent. Regarding the alternative methods: compared to having an asset exchange incorporated within Hive like with SMTs, alternatives like Hive-Engine are far less attractive. The Hive token would become a gateway to thousands of website projects, creating a good deal of demand. Years into Steem-Engine, I still see it as a graveyard.
SMTs as they were planned, and (supposedly) on a testnet last year already, would have much less overhead for developers... hell, even I have had a project in mind using SMTs for years now. If SMTs do not come to fruition, I will definitely need to look around for other blockchain solutions with higher traffic and name recognition.

Thanks for all the development work you are doing for this chain and I hope to hear soon about a positive decision regarding SMTs.

BlockTrades has always been a trustworthy exchange in the blockchain community; I do all my transactions with it.