Reconfiguration notice for API node infrastructure


As I mentioned in my last post on development progress at BlockTrades, we've been asking Hive-based apps to change the way they broadcast transactions. Transactions are used to add user-generated operations, such as voting, posting, transferring funds, and delegating HP, to the Hive blockchain.

In this post I’m going to briefly describe why we’re requesting this change and also describe a change we’ve made to our API node infrastructure to better handle the increased traffic from bots playing Splinterlands with “bad” API calls. This latter information will likely be interesting to other API node operators.

Apps should use broadcast_transaction call

Most Hive-based apps previously used the call broadcast_transaction_synchronous to broadcast transactions. This call, as the name implies, waits for a transaction to be included into the blockchain before it returns to the calling application. The problem with this call is that we’ve measured it takes about 3 seconds on average to complete on our API node infrastructure even under normal loading conditions. That’s a long time to a computer.

So if one of our hived nodes gets a lot of these calls, the calls keep all of that hived’s worker threads busy, effectively slowing down all API calls made to that hived server.

By adding additional logging to one of our hived nodes, we were able to observe that even read-only API calls arriving at the loaded hived node could wait 2 seconds (or even more under very heavy loads) before they were assigned a worker thread to process them. A few days ago, this was noticeable as a slowdown not only on web sites that broadcast transactions, such as ecency and peakd, but even on read-only Hive-based sites (e.g. Hive block explorers).

The ideal solution to this problem is for apps to replace all these slow calls with the newer, faster broadcast_transaction call. This call doesn't wait for the transaction to be included into the blockchain, and it completes within an average of 0.027 seconds (more than 100x faster than the synchronous version).
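To make the difference concrete, here is a minimal sketch of the JSON-RPC request bodies involved (the helper function name is hypothetical; the condenser_api method names match those referenced later in this post). The two calls take the same signed transaction and differ only in the method name, and therefore in whether the node blocks until the transaction lands in a block:

```python
def make_broadcast_request(signed_tx, synchronous=False, request_id=1):
    """Build a JSON-RPC 2.0 request body for broadcasting a signed transaction.

    broadcast_transaction returns as soon as the node accepts the
    transaction, while broadcast_transaction_synchronous blocks until
    the transaction has been included in a block.
    """
    method = ("condenser_api.broadcast_transaction_synchronous"
              if synchronous
              else "condenser_api.broadcast_transaction")
    return {"jsonrpc": "2.0", "method": method,
            "params": [signed_tx], "id": request_id}

# Placeholder signed transaction; a real app fills these fields in and
# signs with the user's key before broadcasting.
tx = {"ref_block_num": 0, "ref_block_prefix": 0,
      "expiration": "2021-08-08T00:00:00",
      "operations": [], "extensions": [], "signatures": []}

fast = make_broadcast_request(tx)                    # ~0.027 s average
slow = make_broadcast_request(tx, synchronous=True)  # ~3 s average
print(fast["method"])
print(slow["method"])
```

From an app's perspective the switch is just that one method-name change; the trade-off is that the fast call returns before inclusion, so an app that needs confirmation must check for the transaction in a later block itself.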

Most hive apps have already migrated to the better API call

Most of the major Hive apps have moved to the faster broadcast call within the past few days, as app devs saw how much more responsive their app became with the new calls (i.e. they became faster than they ever were before, even when we had less loading on the Hive network).

And I expect the few remaining big apps to convert to the new calls within the next couple of days (we have an engineer working now to fix one known issue preventing us from rolling out the latest version of condenser with the broadcast_transaction fix).

Mitigating bad traffic from misbehaving bots

But despite movement of the major Hive apps to the faster call, we still see a lot of broadcast_transaction_synchronous traffic on our node being generated by presumably home-grown bots playing Splinterlands for their users. I suppose these bot devs will eventually fix their bots, but in the meantime, we have no easy way to contact them, so we’ve made a change to our API node infrastructure so that this “bad” traffic doesn’t impact the apps generating “good traffic”.

We have redirected all incoming broadcast_transaction_synchronous traffic to a single hived node that only processes this type of API traffic, and all other API calls (including the “good” broadcast_transaction) are routed to our other hived nodes. This means that apps using the proper calls will not be slowed down by the bad traffic. And it will probably ultimately encourage the bad traffic generators to change their bots as well, although I’m not holding my breath for when that will happen.

Add an extra consensus hived to manage broadcast_transaction_synchronous traffic

If other API nodes want to be capable of serving all the traffic from the Hive network right now, here’s the relatively easy way to do it:

  • add one additional consensus (not account history) hived node to your server. Since consensus nodes don't require much memory (around 4GB), the main additional resource cost is around 370GB of disk space for the additional block_log file. A consensus node is sufficient because it will only be processing broadcast calls and nothing else.
  • Modify your jussi config file to redirect the following types of traffic to your consensus node:
    ** steemd.network_broadcast_api.broadcast_transaction_synchronous
    ** appbase.condenser_api.broadcast_transaction_synchronous
    ** appbase.network_broadcast_api.broadcast_transaction_synchronous
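
As a sketch, assuming a typical jussi JSON config with prefix-based URL routing (the upstream names shown are the standard steemd/appbase namespaces from the list above, but the node URLs are placeholders, not our actual infrastructure), the redirection might look like this:

```json
{
  "upstreams": [
    {
      "name": "appbase",
      "urls": [
        ["appbase", "http://account-history-node:8091"],
        ["appbase.condenser_api.broadcast_transaction_synchronous", "http://consensus-node:8091"],
        ["appbase.network_broadcast_api.broadcast_transaction_synchronous", "http://consensus-node:8091"]
      ]
    },
    {
      "name": "steemd",
      "translate_to_appbase": true,
      "urls": [
        ["steemd", "http://account-history-node:8091"],
        ["steemd.network_broadcast_api.broadcast_transaction_synchronous", "http://consensus-node:8091"]
      ]
    }
  ]
}
```

jussi routes each request to the entry with the longest matching method-name prefix, so the three broadcast_transaction_synchronous routes override the catch-all entries and only that traffic is sent to the consensus node.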

With the above steps, your regular account_history node(s) will process all the good traffic and the bad traffic will be offloaded to the lightweight consensus node. Also, if you want to further improve quality of service for the bad traffic, you can increase the web-threads setting in your consensus node's configuration file from the default value of 32 to 64 (or higher), at the cost of increased memory usage.
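
For reference, a minimal config.ini sketch for the dedicated consensus node might look like the following. The option names here are assumptions based on recent hived versions (the "web-threads" setting mentioned above may map to a differently named option in your build), so verify them against your hived's --help output:

```ini
# config.ini sketch for the dedicated consensus node (option names
# assumed from recent hived versions; verify against your build)
plugin = webserver p2p json_rpc network_broadcast_api condenser_api
webserver-http-endpoint = 0.0.0.0:8091

# Increase the worker pool (default 32) so queued synchronous
# broadcast calls spend less time waiting for a free thread,
# at the cost of additional memory usage.
webserver-thread-pool-size = 64
```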

Quick Update

We swapped over to the new calls about an hour ago. The UX feels much more responsive now.


This is a fantastic explanation of the problem that should be user digestible.

I really like the solution of migrating the poor performing traffic so most of the traffic is unaffected.

Well, I'm sure parts of it will be pretty hard to digest, such as "worker threads", but hopefully the gist of it will be understandable.

It makes enough sense to a noob (me) that I can understand what is happening and why :)

I just picture a disoriented Roomba trying to play cards (52 Pickup) and getting clogged. Am I close?

Close enough, the hived was getting clogged up by bad traffic (the cards), so it couldn't handle even the good traffic (dust).

And although I didn't mention it in the post, perhaps surprisingly, even the bad traffic processes better without having the read-only traffic on the same node. The two types of traffic don't mix well due to some weaknesses in hived's current database locking mechanism.

HAF-based applications will help a lot with this latter issue too, because most of our read-only traffic will be migrating from the hived node to the HAF node. We've done this a lot already with our earlier optimization work on hived and hivemind, but HAF will allow us to migrate more API calls away from hived nodes so that they can focus on their primary job which is processing transactions.

No point in jamming the squares and triangles through the round holes if you don't have to. Those are blocks, this is a blockchain; see what I did there?

52 Pickup is the only game I know how to play.

What's your high score?

Less than the Roomba

Letting "bad" requests fall to a lower-performance node is a brilliant idea. I have always loved how easily you can fool computers with a simple workaround.

What do you think will happen to RC prices if Splinterlands traffic increases massively again?

I see some increases on some transactions already.

RC will definitely keep going up if we get more traffic. One of the things I want to do as part of the next hardfork is rationalize RC cost calculations more. With increased traffic, we need it to be more accurate to real costs to the blockchain.

That would mean lowering RC costs because Hive is efficient, right?

Would it be smart to have some base cost plus an adjustment with price? I know that would be difficult to build, but we could gather data today and find out a fair price level for RCs against chain running costs.

Like a mirror of benefit/cost. I know it can't be a 1-to-1 correlation between RC and price, simply because that would defeat the reason to have Hive.

Or what would be your idea to make it semi-automatic?

Some interesting ideas there, but I haven't given it much thought yet. First thing I need to do is review current implementation in depth; I haven't looked at the resource credit code much.

thanks :)

btw, I think the RC pools can fix a lot, not for price finding, but to open up a market for RCs. Valuable dapps will receive more RCs from delegators because the return would most likely be better.

I think this could be a hidden superpower for Hive, if done right. Overall it would delegate resources 100x more efficiently than any other chain I know about, and it would give investors a real reason to stake a ton of Hive.

The ideal solution to this problem is for apps to replace all these slow calls with the newer, faster broadcast_transaction call.

Which call does hive-js make? I made a fun side project using hive-js. Can someone please tell me whether hive-js uses broadcast_transaction_synchronous, or have the JS libs been updated for this issue?

hive-js was updated a couple of days ago. So you just need to make sure you are using the very latest version.

oh okay thank you 🙂 ... i used jsdelivr so i'm hoping it got updated automatically

It might have, but I always recommend checking such things :-)

=) just did !!

And so Hive continues to outpace their peers and you guys do excellent work.
The future looks great.

Even I get the gist of this. Thank you. What a journey these last 4 years (for me). I am constantly delighted at how things are developing. Especially since the advent of Hive. Thank you.


That's a very elegant solution to reroute the blocking traffic to a dedicated node and keep the non-blocking traffic separate.

By the way, this name sounds... not updated yet? :)
** steemd.network_broadcast_api.broadcast_transaction_synchronous

Actually, that name is appropriate, because that route on jussi is for older, slower pre-appbase traffic that jussi has to translate from the old format before a hived node can process it.

That makes sense. Now that sounds like legacy meets evolution. :)

unrelated question, dear blocktrades why do u keep sending Hive to people from your wallet?

That is people buying Hive from our cryptocurrency exchange site.

Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s) :

You received more than 1255000 HP as payout for your posts and comments.
Your next payout target is 1260000 HP.
The unit is Hive Power equivalent because your rewards can be split into HP and HBD

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

Check out the last post from @hivebuzz:

Feedback from the August 1st Hive Power Up Day

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with the following badge:

Post with the highest payout of the week.


What a great and neat job you did. I must confess that I really enjoyed reading and digesting the whole of this work because you presented it so well. Thanks a lot bro.

hey, tell me, please, what does a community need to be ranked here? I've been working on DCooperation for a long time, and we finally have pending rewards of more than $100. Maybe we need more people to post there? Or what are the criteria for a community to appear on that page?

I don't know off the top of my head, that algorithm was written a long time ago, before I was involved with the development of the hivemind codebase, so somebody would need to dive into the hivemind code to figure it out.

But that's how Hive-JS works, isn't it?

I don't understand your question, could you add some detail?

That's how the customJson function works in the Hive-JS library, isn't it? Using the synchronous call instead of the async one?