Here’s a quick update on what the BlockTrades team has been working on since my last report.
My plan is to stick with the previously announced time of mid-December for upgrading all the Hive API nodes, but I decided it was best to delay hardfork 28 to the first quarter of 2025. This means only API node operators will need to upgrade in December, and exchanges can upgrade later.
I think separating this into two releases should reduce the pressure on everyone, since there are more likely to be issues with the API node and non-hardfork hived changes (simply because there are so many changes, and they need to be tested against all the Hive apps out there) than with the actual hardfork changes (relatively few at the moment).
This should also reduce the chance for potential issues when exchanges finally upgrade, since most of the hived changes will have been tested in production for a while by then.
Hived: blockchain node software
As part of the release of the HAF API node software suite, we'll be releasing a new version of hived (to be followed by a later release with the final "feature set" for hardfork 28). We've tentatively set the hardfork date to February 8, 2025, but that may be overly optimistic.
Recent improvements
- We optimized the time to split a monolithic block_log into files of 1M blocks each, from 38 minutes down to 13 minutes: https://gitlab.syncad.com/hive/hive/-/merge_requests/1406
- Fixed the value in database_api::api_account_object::post_voting_power so that voting manabar values display correctly on the client side.
In progress
- Continuing analysis of hived under transaction-flooding conditions with large blocks (e.g. 1–2MB blocks).
- Beginning a rewrite of the transaction signing algorithms (e.g. to allow more signatures). This work will also affect how we manage 2nd layer lite accounts.
HAF: framework for creating new Hive APIs and apps
Recent improvements
- Eliminated unnecessary ANALYZE calls to speed up HAF instance restart time: https://gitlab.syncad.com/hive/haf/-/merge_requests/547
- Upgraded to Postgres 17 and eliminated associated performance regressions: https://gitlab.syncad.com/hive/haf/-/merge_requests/550
- Eliminated problems in HAF instance upgrades: https://gitlab.syncad.com/hive/haf/-/merge_requests/533
- Fixed a bug preventing use of the --skip-hived option in the HAF container: https://gitlab.syncad.com/hive/haf/-/merge_requests/544
- Added automatic performance logging of HAF app block-processing times, keeping the performance history in a table to simplify long-term performance analysis: https://gitlab.syncad.com/hive/haf/-/merge_requests/553
- Fixed a bug in updating last_active_time in apps with multiple contexts: https://gitlab.syncad.com/hive/haf/-/merge_requests/558
- Added a generic health-check that can be used by any HAF app’s block processor (i.e. the process that syncs the app’s local tables): https://gitlab.syncad.com/hive/haf/-/merge_requests/551
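To make that last item concrete: a block-processor health check usually boils down to "how far behind the head block is this app?". Here's a minimal sketch of the idea; the app_progress and blocks names are hypothetical stand-ins, not HAF's actual schema:

```python
# Minimal health-check sketch: "healthy" means the app's last-processed block
# is within a bounded distance of the chain head.
# NOTE: app_progress and blocks are hypothetical table names for illustration.
import psycopg2

MAX_LAG_BLOCKS = 100  # tolerated lag before reporting unhealthy

def block_processor_healthy(db_url: str, app_context: str) -> bool:
    with psycopg2.connect(db_url) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT last_processed_block FROM app_progress WHERE context = %s",
            (app_context,),
        )
        app_block = cur.fetchone()[0]
        cur.execute("SELECT max(num) FROM blocks")
        head_block = cur.fetchone()[0]
        return head_block - app_block <= MAX_LAG_BLOCKS
```

An HTTP wrapper that maps this predicate to 200/503 responses is the sort of probe haproxy can poll.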
In progress
- Adding API methods to allow a HAF app to request and wait on the creation of indexes on HAF’s own tables in a way that optimizes index creation time and avoids deadlocks.
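One plausible shape for this pattern (request_index below is an invented placeholder for the planned API; pg_indexes is Postgres's real catalog view): the app registers the index it needs, a single coordinator builds requested indexes serially so concurrent DDL can't deadlock, and the app polls until the index exists:

```python
# Hypothetical request-and-wait sketch; request_index() is an invented
# placeholder for the planned HAF method, not an existing function.
import time
import psycopg2

def request_index_and_wait(db_url: str, index_name: str, create_sql: str,
                           poll_seconds: float = 5.0) -> None:
    with psycopg2.connect(db_url) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            # Register the request with the (planned) coordinator, which
            # creates requested indexes one at a time to avoid deadlocks.
            cur.execute("SELECT request_index(%s, %s)", (index_name, create_sql))
            # Poll Postgres's pg_indexes view until the index appears.
            while True:
                cur.execute("SELECT 1 FROM pg_indexes WHERE indexname = %s",
                            (index_name,))
                if cur.fetchone():
                    return
                time.sleep(poll_seconds)
```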
Hivemind: social media API
We completed the switch to postgREST, including the final optimizations I promised in my last post to fix the few performance regressions we found. Every hivemind API call is now much faster than before. Here are the latest API benchmarks:
Reference Python times:
Endpoint | Max [ms] | Min [ms] | Average [ms] | Median [ms] |
---|---|---|---|---|
condenser_api.get_discussions_by_blog | 6703 | 166 | 2305 | 2396 |
bridge.get_account_posts | 3344 | 26 | 864 | 825 |
bridge.get_discussion | 22713 | 3 | 1935 | 215 |
bridge.get_ranked_posts | 3409 | 205 | 971 | 955 |
condenser_api.get_discussions_by_comments | 2676 | 4 | 587 | 574 |
condenser_api.get_followers | 136 | 20 | 38 | 31 |
condenser_api.get_discussions_by_created | 2899 | 85 | 638 | 308 |
bridge.get_profile | 611 | 328 | 417 | 395 |
condenser_api.get_discussions_by_feed | 3146 | 1272 | 1748 | 1694 |
condenser_api.get_blog | 4617 | 1198 | 3256 | 3496 |
condenser_api.get_following | 349 | 177 | 270 | 267 |
PostgREST times:
Endpoint | Max [ms] | Min [ms] | Average [ms] | Median [ms] |
---|---|---|---|---|
condenser_api.get_discussions_by_blog | 2383 | 46 | 603 | 643 |
bridge.get_account_posts | 4099 | 9 | 218 | 188 |
bridge.get_discussion | 3266 | 1 | 327 | 44 |
bridge.get_ranked_posts | 449 | 58 | 228 | 226 |
condenser_api.get_discussions_by_comments | 268 | 1 | 94 | 107 |
condenser_api.get_followers | 86 | 10 | 15 | 14 |
condenser_api.get_discussions_by_created | 745 | 32 | 201 | 130 |
bridge.get_profile | 485 | 320 | 388 | 379 |
condenser_api.get_discussions_by_feed | 594 | 402 | 495 | 497 |
condenser_api.get_blog | 1533 | 312 | 861 | 897 |
condenser_api.get_following | 219 | 189 | 204 | 204 |
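For a quick read on the gains, here's a small script comparing the "Average [ms]" columns of the two tables above:

```python
# Compare average response times (ms) from the two benchmark tables above.
python_avg = {
    "condenser_api.get_discussions_by_blog": 2305,
    "bridge.get_account_posts": 864,
    "bridge.get_discussion": 1935,
    "bridge.get_ranked_posts": 971,
    "condenser_api.get_discussions_by_comments": 587,
    "condenser_api.get_followers": 38,
    "condenser_api.get_discussions_by_created": 638,
    "bridge.get_profile": 417,
    "condenser_api.get_discussions_by_feed": 1748,
    "condenser_api.get_blog": 3256,
    "condenser_api.get_following": 270,
}
postgrest_avg = {
    "condenser_api.get_discussions_by_blog": 603,
    "bridge.get_account_posts": 218,
    "bridge.get_discussion": 327,
    "bridge.get_ranked_posts": 228,
    "condenser_api.get_discussions_by_comments": 94,
    "condenser_api.get_followers": 15,
    "condenser_api.get_discussions_by_created": 201,
    "bridge.get_profile": 388,
    "condenser_api.get_discussions_by_feed": 495,
    "condenser_api.get_blog": 861,
    "condenser_api.get_following": 204,
}
for endpoint, old in python_avg.items():
    new = postgrest_avg[endpoint]
    print(f"{endpoint}: {old} ms -> {new} ms ({old / new:.1f}x faster)")
```

By this measure, the average speedup ranges from about 1.1x (bridge.get_profile) to over 6x (condenser_api.get_discussions_by_comments).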
In progress
- Various optimizations
- The switch to pure SQL changed the error responses returned for some API calls, so we're analyzing whether we can make them fully compatible with the old error messages.
HAfAH: account history API
- Fixed a bug in account_history_api::get_transaction that returned broken JSON when a transaction has no signatures: https://gitlab.syncad.com/hive/HAfAH/-/merge_requests/165
Balance tracker API: tracks token balance histories for accounts
- Created the docker image for the postgREST URL rewriter locally: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/119
- Reduced unnecessary logging by the rewriter by default: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/123
- Fixed a slowdown regression in live sync with Postgres 17: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/125
- Passed the application_name parameter to the postgres connection string for simplified database troubleshooting in tools like pgadmin: https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/126
Reputation tracker: API for fetching account reputation
- Created a docker image for the postgREST URL rewriter: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/47
- Split schema installation and HAF index creation into two phases to speed up block processor sync time: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/48 https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/36
- Reduced unnecessary logging by the rewriter by default: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/52
- Passed the application_name parameter to the postgres connection string for simplified database troubleshooting in tools like pgadmin: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/54
- Updated the common-ci-configuration reference to get the swagger versioning feature: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/50
HAF Block Explorer
Recent improvements
- Eliminated random errors during application setup: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/237 https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/236
- Improved functionality of search APIs: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/233
WAX API library for Hive apps
Recent improvements
- Fixed a bug in the health-checker component: https://gitlab.syncad.com/hive/wax/-/merge_requests/227
In progress
- Creating a generic UI component for the health-checker.
- Creating an object-oriented interface for the Python version of Wax.
- Porting the code for preventing accidental leakage of private keys to Wax’s transaction construction code.
- Creating documentation.
HAF API Node
Recent improvements
- Allow setting a different public hostname from the private hostname: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/32
- Move rewriter creation to individual HAF apps and now use those containers: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/34
- Fix reputation tracker health check: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/35
- Add the ability to get email notifications when haproxy detects a service going down: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/37
In progress
- Testing replay of HAF API servers.
What's next?
I left notes on most of the remaining planned code changes in each application's section above. Other than those changes, our main focus in the coming weeks will be testing all the apps together, especially under real-world load conditions (e.g. mirroring traffic from our production API server).
Then, finally, we'll need all the app devs to begin verifying their code against the new API server code. Limited forms of this testing can be done now against api.syncad.com, which is where we deploy release candidates of the API server code.
I have read your last several reports in an effort to catch up on what is going on with the block size. Am I reading it correctly that the block size will be increased to two megabytes? Also, thank you!
No, it's not that it will definitely be increased to 2MB; it's just that it's theoretically possible: the block size is a parameter voted on by the witnesses, and the max allowed size they can vote for is 2MB.
Currently they are voting for 64K blocks, and this is plenty of space for now, IMO.
But we want to be sure there are no issues in the future if witnesses vote for a larger block size (this would become necessary if Hive starts having to process a lot more data), so we're doing testing related to that now.
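If you're curious, you can check the currently voted value yourself. A minimal sketch (the endpoint shape here is from memory, so verify against the API docs before relying on it):

```python
# Query the witness-voted chain parameters from a public Hive API node.
import json
import urllib.request

payload = json.dumps({
    "jsonrpc": "2.0",
    "method": "condenser_api.get_witness_schedule",
    "params": [],
    "id": 1,
}).encode()

req = urllib.request.Request(
    "https://api.hive.blog",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)["result"]

# median_props holds the witness-voted parameters; maximum_block_size is in
# bytes (65536 would be the 64K mentioned above).
print(result["median_props"]["maximum_block_size"])
```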
A little while back, we began developing 'flood testing' code to test how the Hive network will behave under much, much larger transaction loading than we currently have.
We recently used this flood code to ensure that increasing the allowable transaction expiration time to 24 hours would not open up any vulnerabilities in the network. Now we are using it to test for issues that might occur if blocks were increased to 2MB (not surprisingly, we've seen some issues, since this was never tested before under this kind of heavy load, but they need more analysis).
Just FYI, I think one of our devs is going to write more about this code and what he's found in a bit.
That all makes sense. I had seen the part about it being able to be voted on by Witnesses... but was not sure if that meant that it would be getting rolled out soon or what. Appreciate the detailed response.
[Charts: Reference Python Server Response Times · PostgREST Server Response Times · Comparison of Average Response Times: Python vs PostgREST]
Super report, every report brings boom boom motivation. Great work, all team, superb effort.
I have one question: my reputation is not increasing. Is there an issue or bug, or something else?
I hope the respected team or an expert will guide me. I know my question is not related to the report, but I hope the team will guide me. Love you all, boom boom Hive.
Actually, it is increasing even now, but the websites don't show the exact reputation value, only the integer portion. Reputation is on an exponential scale, so the higher your reputation goes, the harder it is to increase it visibly. Yours is already quite high, so it will take a while to see the increase on most websites.
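For reference, this is (from memory, as a sketch) the conversion front ends commonly apply to the raw on-chain reputation value:

```python
# Sketch of the common front-end conversion from raw to displayed reputation.
# Each 10x increase in raw reputation adds ~9 points to the displayed score,
# and sites show only the integer portion, so gains become less visible.
import math

def displayed_reputation(raw_rep: int) -> float:
    if raw_rep == 0:
        return 25.0
    score = max(math.log10(abs(raw_rep)) - 9, 0) * 9
    return 25.0 - score if raw_rep < 0 else 25.0 + score

print(displayed_reputation(10**12))  # 52.0
print(displayed_reputation(10**13))  # 61.0 -- 10x the raw rep, +9 displayed
```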
Thank you so much, respected, for the instant guidance and information regarding my point.
I reminisce about the old days of MIRA :D Hive has come a long way. Good stuff with the development.
Good old MIRA: the best solution when you want to take a month to sync :-)
I had just been saying to someone how much I appreciate your detailed reports of what you are doing for the chain, but I do not think I have ever said it to you -- thank you.
Hmmm, great improvement
A good option for all, for those like me who need to learn more about it and use it right.
There's too much web3 content to learn quickly, time is running!!
Accidental leakage of private keys?
Sorry, but how is that even possible?
Should we worry?
How easily a hacker can intercept a private key in this case?
I am grateful and thankful for what you are doing for the blockchain, it is really a huge amount of work, but security should come first and foremost, and accidental leakage of private keys should not happen under any circumstances.
This is protection against "user error".
In many Hive applications, a user can send a transfer or some form of custom_json, and sometimes they don't understand an app's instructions and accidentally put a private key into their actual transaction. Most commonly this happens when someone puts a private key into a memo field. This is pretty rare, but it does happen, and then everyone can see their private key (hence we call it leaking a private key).
We're designing Wax so that any app that uses it to construct transactions will automatically get a check that detects such leaks and prevents these transactions from being broadcast.
So technically there are no real leaks; users are publishing their own private keys in custom_json transactions by their own mistake, and the goal is to prevent these transactions from being broadcast in order to protect users' accounts.
Now I understand.
Thank you for the reply.
Amazing 🥲