10th update of 2024: API node software release in Dec, hardfork tentatively set for Q1 2025

in HiveDevs · 16 days ago


Here’s a quick update on what the BlockTrades team has been working on since my last report.

My plan is to stick with the previously announced time of mid-December for upgrading all the Hive API nodes, but I decided it was best we delay hardfork 28 to first quarter of 2025. This means only API node operators will need to upgrade in December, and exchanges can upgrade later.

I think separating this into two releases should reduce the pressure on everyone, since issues are more likely in the API node and non-hardfork hived changes (simply because there are so many changes, and they need to be tested against all the Hive apps out there) than in the actual hardfork changes (relatively few at the moment).

This should also reduce the chance for potential issues when exchanges finally upgrade, since most of the hived changes will have been tested in production for a while by then.

Hived: blockchain node software

As part of the release of the HAF API node software suite, we’ll be releasing a new version of hived (to be followed by a later release with the final “feature set” for hardfork 28). Tentatively, we’re setting the hardfork date to February 8, 2025, but that may be overly optimistic.

Recent improvements

  • We optimized the time to split a monolithic block_log into 1M-block files, from 38 minutes down to 13 minutes: https://gitlab.syncad.com/hive/hive/-/merge_requests/1406
  • Fixed the value reported in database_api::api_account_object::post_voting_power so that clients see correct voting manabar values.

In progress

  • Continuing analysis of hived under transaction-flooding conditions with large blocks (e.g. 1-2MB blocks).
  • Beginning a rewrite of the transaction-signing algorithms (e.g. to allow more signatures). This work will also affect how we manage 2nd-layer lite accounts.

HAF: framework for creating new Hive APIs and apps

Recent improvements

In progress

  • Adding API methods to allow a HAF app to request and wait on the creation of indexes on HAF’s own tables in a way that optimizes index creation time and avoids deadlocks.
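The request-and-wait pattern described above can be modeled in miniature. The sketch below only illustrates the concurrency idea (each index is created exactly once, with later requesters blocking until it exists, so no two creations overlap); it is not HAF's actual API, and `IndexCoordinator` and its methods are hypothetical names:

```python
import threading

class IndexCoordinator:
    """Toy model of 'request and wait' index creation.

    The first app to request an index triggers its creation; later
    requests for the same index wait on a shared event, so each index
    is built exactly once and creations never run concurrently.
    """
    def __init__(self, create_fn):
        self._create_fn = create_fn   # callable that actually builds the index
        self._lock = threading.Lock()
        self._events = {}             # index name -> threading.Event

    def request_index(self, name):
        with self._lock:
            event = self._events.get(name)
            if event is None:
                event = self._events[name] = threading.Event()
                creator = True        # this caller will build the index
            else:
                creator = False       # someone else is (or was) building it
        if creator:
            self._create_fn(name)     # build outside the lock
            event.set()
        else:
            event.wait()              # block until the index exists
```

In the real system the "creation" is a Postgres index build on HAF's tables, which is exactly where uncoordinated concurrent attempts can deadlock or waste work.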

Hivemind: social media API

We completed the switch to postgREST, including the final optimizations I promised in my last post to fix the few performance regressions we found. Every hivemind API call is now much faster than before. Here are the latest API benchmarks:

Reference Python times:

| Endpoint | Max [ms] | Min [ms] | Average [ms] | Median [ms] |
|---|---|---|---|---|
| condenser_api.get_discussions_by_blog | 6703 | 166 | 2305 | 2396 |
| bridge.get_account_posts | 3344 | 26 | 864 | 825 |
| bridge.get_discussion | 22713 | 31 | 935 | 215 |
| bridge.get_ranked_posts | 3409 | 205 | 971 | 955 |
| condenser_api.get_discussions_by_comments | 2676 | 4 | 587 | 574 |
| condenser_api.get_followers | 136 | 20 | 38 | 31 |
| condenser_api.get_discussions_by_created | 2899 | 85 | 638 | 308 |
| bridge.get_profile | 6113 | 28 | 417 | 395 |
| condenser_api.get_discussions_by_feed | 3146 | 1272 | 1748 | 1694 |
| condenser_api.get_blog | 4617 | 1198 | 3256 | 3496 |
| condenser_api.get_following | 349 | 177 | 270 | 267 |

PostgREST:

| Endpoint | Max [ms] | Min [ms] | Average [ms] | Median [ms] |
|---|---|---|---|---|
| condenser_api.get_discussions_by_blog | 2383 | 46 | 603 | 643 |
| bridge.get_account_posts | 409 | 99 | 218 | 188 |
| bridge.get_discussion | 3266 | 1 | 327 | 44 |
| bridge.get_ranked_posts | 449 | 58 | 228 | 226 |
| condenser_api.get_discussions_by_comments | 268 | 1 | 94 | 107 |
| condenser_api.get_followers | 86 | 10 | 15 | 14 |
| condenser_api.get_discussions_by_created | 745 | 32 | 201 | 130 |
| bridge.get_profile | 485 | 320 | 388 | 379 |
| condenser_api.get_discussions_by_feed | 594 | 402 | 495 | 497 |
| condenser_api.get_blog | 1533 | 312 | 861 | 897 |
| condenser_api.get_following | 219 | 189 | 204 | 204 |
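For reference, the benchmarked endpoints above are ordinary JSON-RPC 2.0 calls POSTed to an API node. A minimal sketch of building such a request body (the helper name and the example parameter values are just illustrations):

```python
import json

def jsonrpc_request(method, params, request_id=1):
    """Build a Hive-style JSON-RPC 2.0 request body for an API node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# One of the calls benchmarked above; condenser_api methods take their
# arguments as a positional list. To actually send it, POST the body to
# an API node endpoint of your choice.
body = jsonrpc_request(
    "condenser_api.get_discussions_by_blog",
    [{"tag": "blocktrades", "limit": 20}],
)
```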

In progress

  • Various optimizations
  • The switch to pure SQL changed the responses returned when an API call fails, so we’re analyzing whether we can make them fully compatible with the old error messages.

HAfAH: account history API

Fixed a bug in account_history_api::get_transaction that returned broken JSON when a transaction has no signatures: https://gitlab.syncad.com/hive/HAfAH/-/merge_requests/165

Balance tracker API: tracks token balance histories for accounts

Reputation tracker: API for fetching account reputation

HAF Block Explorer

Recent improvements

WAX API library for Hive apps

Recent improvements

In progress

  • Creating a generic UI component for the health-checker.
  • Creating an object-oriented interface for the Python version of Wax.
  • Porting the code for preventing accidental leakage of private keys to Wax’s transaction construction code.
  • Creating documentation.

HAF API Node

Recent improvements

In progress

  • Testing replays of HAF API servers

What's next?

I left notes on most of the remaining code changes that are planned in each application’s section, but other than those changes, our main focus in the next weeks will be testing all the apps together, especially under real world loading conditions (e.g. mirroring traffic from our production API server).

Then, finally, we’ll need all the apps devs to begin verifying their code against the new API server code. Limited forms of this testing can be done now by testing against api.syncad.com, which is where we deploy release candidates for the API server code.


I have read your last several reports in an effort to try to catch up on what is going on with the block size. Am I reading it correctly that the block size will be increased to two megabytes? Also, thank you!

No, it’s not that it will definitely be increased to 2MB; it’s just theoretically possible for it to be increased to 2MB: the block size is a parameter voted on by the witnesses, and the maximum size they can vote for is 2MB.

Currently they are voting for 64K blocks, and this is plenty of space for now, IMO.

But we want to be sure there are no issues in the future if witnesses vote for a larger block size (this would become necessary if Hive starts having to process a lot more data), so we're doing testing related to that now.
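As a rough illustration of how a witness-voted parameter like the block size can be derived, here is a sketch assuming the median-of-votes scheme commonly used for witness-voted chain parameters; the function and constant names are illustrative, not hived's actual code:

```python
# Illustrative only: hived's actual implementation differs.
HIVE_MAX_BLOCK_SIZE = 2 * 1024 * 1024  # the 2MB protocol ceiling described above

def effective_block_size(witness_votes):
    """Derive the effective block size from witness votes.

    Assumes a median-of-votes scheme, with each vote clamped to the
    protocol ceiling, so no single witness can push the size to 2MB alone.
    """
    votes = sorted(min(v, HIVE_MAX_BLOCK_SIZE) for v in witness_votes)
    return votes[len(votes) // 2]  # median (upper median for even counts)
```

With all 21 block producers voting 64K (65536 bytes), the effective size stays 64K; it only grows if a majority of them vote for something larger.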

A little while back, we began developing 'flood testing' code to test how the Hive network will behave under much, much larger transaction loading than we currently have.

We recently used this flood code to ensure that increasing the allowable transaction expiration time to 24 hours would not open up any vulnerabilities for the network. Now we are using it to test for issues that might occur if blocks were increased to 2MB (not surprisingly, we’ve seen some issues, since the network was never tested under this kind of heavy loading before, but they need more analysis).

Just FYI, I think one of our devs is going to write more about this code and what he’s found in a bit.

That all makes sense. I had seen the part about it being able to be voted on by Witnesses... but was not sure if that meant that it would be getting rolled out soon or what. Appreciate the detailed response.

Reference Python Server Response Times

PostgREST Server Response Times

Comparison Of Average Response Times: Python Vs PostgREST

Super report, every report brings boom boom motivation. Great work, all team, superb effort!
I have one question: my reputation is not increasing. Is there an issue or bug, or something else?
I hope the respected team of experts will guide me. I know my question is not related to the report, but I hope the respected team will guide me. Love you all, boom boom Hive!

Actually, it is increasing even now, but the web sites don't show the exact reputation value, only the integer portion. Reputation is on an exponential scale, so the higher your reputation goes, the harder it is to increase it visibly. Yours is already quite high, so it will take a while to see the increase on most web sites.
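The exponential scale can be illustrated with the formula Hive front ends commonly use to turn the raw on-chain reputation value into the displayed score. This is a UI convention rather than consensus code, and the function name here is just for illustration:

```python
import math

def display_reputation(raw):
    """Convert raw on-chain reputation to the familiar displayed score.

    Common front-end convention: sign(raw) * max(log10(|raw|) - 9, 0) * 9 + 25,
    so the displayed number grows with the *logarithm* of the raw value.
    """
    if raw == 0:
        return 25.0
    score = max(math.log10(abs(raw)) - 9, 0) * 9
    return 25.0 + score if raw > 0 else 25.0 - score
```

Under this formula, each +1 in displayed reputation requires the raw value to grow by a factor of 10**(1/9), i.e. roughly 29% more raw reputation per visible point, which is why increases look slower and slower at higher levels.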

Thank you so much, respected team, for the instant guidance and information regarding my question.

I reminisce about the old days of MIRA :D Hive has come a long way. Good stuff with the development.

Good old MIRA: the best solution when you want to take a month to sync :-)

I had just been saying to someone how much I appreciate your detailed reports of what you are doing for the chain, but I do not think I have ever said it to you -- thank you.

Hmmm, great improvement
!ALIVE

Good option for all, for those like me who need to learn more about it and use it right.

Too much web3 content to learn quickly, time is running!!


Porting the code for preventing accidental leakage of private keys to Wax’s transaction construction code.

Accidental leakage of private keys?

Sorry, but how is that even possible?

Should we worry?

How easily a hacker can intercept a private key in this case?

I am grateful and thankful about what you are doing for the blockchain, it is really a huge work, but security should come first and foremost, and the accidental leakage of private keys should not happen under any circumstances.

This is protection against "user error".

In many Hive applications, a user can send a transfer or some form of custom_json, and sometimes they don't understand the instructions of an app and accidentally put a private key into their actual transaction. Most commonly this happens when someone puts a private key into a memo field. This is pretty rare, but it does happen, and then everyone can see their private key (hence we call it leaking a private key).

We're designing Wax so that any app that uses it to construct transactions will automatically get a check that detects such leaks and prevents these transactions from being broadcast.
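One practical way such a leak check can work (a hypothetical sketch, not Wax's actual code; the function names are invented): a Hive WIF-format private key is 51 base58 characters starting with "5" and carries a base58check checksum, so candidate strings in a transaction can be confirmed with a near-zero false-positive rate:

```python
import hashlib
import json
import re

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def looks_like_wif(candidate):
    """Return True if candidate is a valid base58check-encoded WIF key."""
    if len(candidate) != 51 or candidate[0] != "5":
        return False
    num = 0
    for ch in candidate:
        if ch not in BASE58:
            return False
        num = num * 58 + BASE58.index(ch)
    try:
        raw = num.to_bytes(37, "big")  # 0x80 prefix + 32-byte key + 4-byte checksum
    except OverflowError:
        return False
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

def find_leaked_keys(transaction):
    """Scan every string field of a transaction for WIF-looking values."""
    text = json.dumps(transaction)
    return [tok for tok in re.findall(r"[1-9A-HJ-NP-Za-km-z]{51}", text)
            if looks_like_wif(tok)]
```

A transaction builder can run this kind of scan over memos and custom_json payloads before broadcasting and refuse to send anything that matches.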

So there are no real leaks technically, but the users are publishing their own private keys with the custom_json transactions by their own mistake, and the goal is to prevent these transactions to protect the accounts of the users.

Now I understand.

Thank you for the reply.

Amazing 🥲

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge

Post with the highest payout of the week.
