Below is a list of Hive-related programming issues worked on by the BlockTrades team during the last week or so:
Hived work (blockchain node software)
Many users last week experienced problems getting their transactions accepted into the blockchain. It turns out there were two separate problems that could cause this issue, but to the outside observer they probably looked much the same:
Beem had a problem detecting v1.25.0 Hive’s chainid, leading to invalidly signed transactions
The first problem, discovered a few days ago, was that the python-based Beem API library used by several Hive apps was misdetecting v1.25.0 Hive nodes as Steem nodes, and therefore trying to sign with the wrong chain id. This, in turn, led to Beem-based Hive Apps creating transactions that would be rejected by any Hive node, because the transactions weren’t properly signed for Hive.
We assisted the Beem developers with identifying the source of the problem and suggested a solution. Beem now checks for the configuration constant HIVE_CHAIN_ID before it checks for the constant STEEM_CHAIN_ID, giving precedence to Hive nodes over Steem nodes in the detection process and thus using the proper chainid during transaction signing.
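To illustrate the detection order, here is a minimal sketch of the kind of check involved (the constant names are the ones returned by get_config; the function itself is illustrative, not Beem's actual code):

```python
# Illustrative sketch of chain-id detection from get_config constants
# (not Beem's actual implementation).
def detect_chain_id(node_config: dict) -> str:
    """Pick the chain id from a node's get_config() result,
    giving precedence to Hive over legacy Steem constants."""
    # v1.25.0 Hive nodes expose HIVE_CHAIN_ID; checking it first
    # prevents misdetecting a Hive node as a Steem node.
    if "HIVE_CHAIN_ID" in node_config:
        return node_config["HIVE_CHAIN_ID"]
    if "STEEM_CHAIN_ID" in node_config:
        return node_config["STEEM_CHAIN_ID"]
    raise ValueError("Could not determine chain id from node config")
```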
In related work, we replaced the name STEEM_CHAIN_ID with OLD_CHAIN_ID and eliminated some duplicated constant values returned by get_config (but note these changes will not be included in the master branch until after the hardfork, like other changes completed in the last week):
https://gitlab.syncad.com/hive/hive/-/merge_requests/268
https://gitlab.syncad.com/hive/hive/-/merge_requests/267
Snapshot-based initialization of hived nodes requires a restart to properly initialize chainid
The second problem appeared one day later, but a couple of days before the Beem library was updated, so the two problems overlapped in time, and this led to some confusion among app developers as to what was the source of the new problem.
But by testing against various API nodes, we were able to determine that this second problem, although similar in appearance to the Beem problem, was occurring on non-Beem apps (e.g. hive.blog and peakd) and only occurring when those apps were using api.hive.blog as their API node. Since we had recently updated the hived nodes on api.hive.blog, I suspected the problem had something to do with the update process.
After comparing our update procedure with those used by other API node operators, I noticed that most of them had done a replay to update their nodes, whereas we had updated our internal hived nodes from a snapshot. Ultimately this allowed us to identify a previously undetected error in nodes initialized from a snapshot: the variable that stores the chainid isn't part of the database state loaded from the snapshot data, and the snapshot-loading code didn't set this variable either. So a node updated in this way would reject validly signed transactions because it was operating with the wrong chainid (essentially the reverse of the Beem problem).
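For anyone who wants to reproduce this kind of cross-node comparison, a rough sketch along these lines can help (this is illustrative, not our actual tooling): condenser_api.verify_authority checks the signatures of an already-signed transaction without broadcasting it, so a node operating with the wrong internal chainid should reject signatures that other nodes accept. The node list and the signed_trx placeholder below are just examples.

```python
# Rough diagnostic sketch (illustrative only): ask several API nodes to
# verify the signatures of one validly signed mainnet transaction.
import json
import requests

API_NODES = [
    "https://api.hive.blog",
    # add other public API nodes here for comparison
]

def verify_on_node(node_url: str, signed_trx: dict) -> bool:
    """Return True if the node considers the transaction's signatures valid."""
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.verify_authority",
        "params": [signed_trx],
        "id": 1,
    }
    response = requests.post(node_url, data=json.dumps(payload), timeout=10).json()
    # A node operating with the wrong chainid returns a JSON-RPC error here
    # instead of a result.
    return response.get("result") is True

# signed_trx = {...}  # placeholder: any validly signed mainnet transaction
# for node in API_NODES:
#     print(node, verify_on_node(node, signed_trx))
```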
Snapshot workaround
We also found that this improperly set chainid problem was resolved by stopping and restarting the node that was initialized from a snapshot, and that's how we're working around the issue until we release a full fix for snapshot-based node initialization.
The code fix is currently being implemented and tested, and will be released after the hardfork (to keep things simple, we’re avoiding making too many changes to the development branch until after the hardfork has triggered).
Completed hived work
Improvements to Testtools used to verify hived functionality: https://gitlab.syncad.com/hive/hive/-/merge_requests/266
Removed the copy/pasted version of the secp256k1 library from hived and replaced it with a submodule link:
https://gitlab.syncad.com/hive/hive/-/merge_requests/264
Fixed issue where enum_virtual_ops sometimes returned non-virtual ops: https://gitlab.syncad.com/hive/hive/-/merge_requests/219
We completed work to eliminate duplicated code in the command-line wallet for hived, but we’re delaying merging those changes into the develop branch until later because of the sheer amount of changes involved. The still-open merge request for that work is here: https://gitlab.syncad.com/hive/hive/-/merge_requests/170
Continuing work on hived
We’re working on speeding up the blockchain converter that we’ve developed to initialize and control a testnet configured in basically the same way as a mainnet. This is to allow for more realistic testing inside of a testnet environment (especially for better performance testing). So far, we’ve identified that the performance bottleneck seems to be related to the function that generates nonces.
We’re continuing to work on the sql_serializer plugin for hived that provides data to HAF-based applications. We expect to complete that work this week, assuming no further distractions arise in the meantime (this work got delayed while we were troubleshooting the snapshot problem), then begin performance testing.
We’re also continuing to work on the CI system to automate deployment of hived for more strenuous testing scenarios (e.g. long replay testing).
Hivemind (2nd layer applications + social media middleware)
Fix for previously reported list_subscribers bug: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/513
We continue to investigate ways to improve performance of the update_rshares function immediately after massive sync of a hivemind instance. In addition to analyzing the cause of the slowness in the current implementation, we're also simultaneously looking at two different alternatives to improve overall performance: 1) eliminating the call completely and performing equivalent functionality during live sync, and 2) adding an index (at least temporarily, just after massive sync) to speed up update_rshares.
The new index dramatically speeds up update_rshares (the time drops from 10 hours to 3 minutes, and the index only takes about 10 minutes to create), but it consumes 25GB of disk space (hence the idea to drop it after using it).
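For reference, the general flow is: create the index right after massive sync, run update_rshares, then drop the index to reclaim the 25GB. A minimal sketch of that flow is below; the connection string, index definition, and SQL wrapper name are illustrative placeholders rather than hivemind's actual schema.

```python
# Sketch of the "temporary index" approach (illustrative names only;
# the real index and function definitions live in hivemind's schema).
import psycopg2

conn = psycopg2.connect("dbname=hivemind_db")  # placeholder connection string
conn.autocommit = True

with conn.cursor() as cur:
    # Hypothetical index to speed up the rshares aggregation; it costs disk
    # space and build time, but cuts the update from hours to minutes.
    cur.execute("CREATE INDEX IF NOT EXISTS tmp_votes_rshares_idx "
                "ON hive_votes (post_id, rshares);")
    # Hypothetical wrapper call for the update step.
    cur.execute("SELECT update_rshares();")
    # Drop the index afterwards to reclaim the disk space.
    cur.execute("DROP INDEX IF EXISTS tmp_votes_rshares_idx;")

conn.close()
```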
We’re still running functional and performance tests related to the broken reputation/ordering issue caused by multiple transactions being used per block. As part of this work, we're also adding "completion markers" to the database state to better detect any database consistency error that could arise from an abrupt shutdown of postgres (notably, this can happen when postgres does an auto-upgrade for a security fix).
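The "completion markers" follow a common pattern: the marker row is written in the same database transaction as the batch of writes it describes, so if postgres is killed mid-batch the marker is missing and the inconsistency is easy to detect at startup. A hedged sketch of the pattern (the table and stage names here are hypothetical, not hivemind's actual schema):

```python
# Sketch of a completion-marker pattern (hypothetical table/stage names).
# Assumes a table like: completion_markers(stage text primary key, finished_at timestamptz)
import psycopg2

def run_stage_with_marker(conn, stage_name: str, do_work) -> None:
    """Run a batch of writes and record a completion marker atomically."""
    with conn:                      # one transaction: work and marker commit together
        with conn.cursor() as cur:
            do_work(cur)
            cur.execute(
                "INSERT INTO completion_markers (stage, finished_at) "
                "VALUES (%s, now()) "
                "ON CONFLICT (stage) DO UPDATE SET finished_at = now();",
                (stage_name,),
            )

def stage_completed(conn, stage_name: str) -> bool:
    """On startup: a missing marker means the stage was interrupted."""
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM completion_markers WHERE stage = %s;", (stage_name,))
        return cur.fetchone() is not None
```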
We also resumed research into the reason why some hivemind nodes consume more memory than others. It has been suggested that it may be related to differences in python or python-library installations on the different systems.
Hive Application Framework
We made good progress last week on both the HAF code and its documentation, but our main dev for this work is now on vacation until July 19th, at which time the work will resume.
But in the meantime, we should be able to do performance testing of this code using the new sql_serializer plugin, and I’m looking forward to seeing just how well we do.
Reminder: hardfork 25 triggers tomorrow
Witnesses and devs alike will be “manning their keyboards” tomorrow (June 30th) as hardfork 25 triggers, ready for any potential problems that may arise. The hardfork is scheduled for approximately 14:00 UTC.
July 19th ?
yep, my brain was just much more focused on end of June at the time I wrote this...
Very good work. Hive to the moon.😁. The work you do is the best.
I also noted the snapshot problem after I upgraded my witness nodes to HF25 last week.
Starting my witness nodes from a snapshot, I got a weird "popping blocks" error which was fixed by stopping and restarting the witness node.
I referred to this in this post: https://peakd.com/hf25/@apshamilton/witness-node-upgrade-to-hf25-hiab-hf25-snapshot-available-for-download
Good work for the Hive platform
I hope it will work smoothly since you've done a lot of testing. Still, I'm so excited for this HF. Thank you for the update and your hard work.
As you say, I don't anticipate any problems due to the amount of testing that's been done, but it's always best to be prepared :-) The snapshot bug still managed to slip through all our previous testing, because it takes so long to run really comprehensive tests (and that's why one of the things we're working on is improving not only application performance but also the performance of the testing tools).
That might explain why syncing the past couple of months' worth of blocks was slow
I don't think any of the issues above would impact the rate at which blocks are synced. But, anecdotally, it seems like we're getting about twice as many transactions included into blocks lately (i.e. more traffic). And that could lead to increased time to sync the latest blocks.
Also, as I may have previously reported, I've seen issues regarding the incorrect usage of concurrency primitives by the original devs that impact how the blockchain and p2p threads interact (on a performance level), and that needs to be corrected when we have some time.
Thanks for the post. Happy to be here in Hive
Thanks for the update, hoping for the best...
Amazing work, as always. I am eagerly waiting for hardfork 25 to trigger. We all know that once the hardfork is complete, we will see #Hive go to the #Moon, as it is already taking the world of cryptocurrencies and decentralization by storm.
Go Hive Go To The Moon
Thanks for all the work you are doing for the blockchain!
I hate HF... something always goes wrong although it seems like y'all are prepared this time
I'm sorry, I want to ask again, and maybe the question has nothing to do with your post. I just want to ask why in my wallet I don't see HBD and HIVE trading history from Hive.blog's internal market. This has been happening for the past 3 days. I am waiting for the answer.
Not sure it has ever shown up there; it definitely should show on hiveblocks.com, though. Have you checked there?
I just made an HBD-to-HIVE exchange on the internal market. I've also checked at https://hiveblocks.com/@my451r and I can't find it there. Have you also tried it, @acidyo?
I guess this is why we love @peakd, because of all its resources @my451r
Thanks for the information. I will check PeakD soon. I rarely use other apps besides hive.blog.
Once again, thank you @moster-one
Here's hoping for a smooth hardfork. We can't really afford the kind of disruption seen on some previous ones, but as a developer I know how unexpected issues can crop up. Enjoy your break and have a !BEER
Does HF25 only affect the curation reward curve or does it also change the non-linear author reward curve?
The reward to authors is also linear.
Thx for the clarification, would be cool to include it in the changelog.
It was discussed long ago in the original roadmap for the HF (in the comments, which is where the elimination of the quasi-linear curve got discussed and decided on) and the removal of this curve has been further discussed in official posts by hiveio, I'm fairly sure.
The release notes were written as a set of quick "one-liners" to point to relevant code changes, and are really focused on communicating with devs who might want to review the code changes. By one-liners, I mean they just very briefly discuss the idea attached to the change, with the expectation that the reader will follow the link to the merge request to gain a complete understanding of the associated change.
In the case of this curve removal, removing it from curation implicitly meant it was removed from the voting impact on author rewards too (otherwise, we'd be changing the reward sharing balance between authors and curators).
Interesting information! Thanks for sharing!
Thank you for fixing the problems that have come up in the Hive software update. I hope it finishes syncing the latest blocks successfully. Regards.
Thank you. !PIZZA
No, we stopped supporting Steem after they stole our funds.