RE: Building On Hive: Why It Is A Smart Idea

in LeoFinance · 2 years ago

Thanks for the interesting post/video!

I'm thinking about the "ability to scale" and what that means. Investopedia defines scalability as the ability of a system to handle an increased workload. https://www.investopedia.com/terms/s/scalability.asp

I'm thinking about Hive's ability to handle increased workload that could come from

  1. A big influx of Hive posters
  2. Popular new games on Hive / more players of existing games
  3. Team Leo's Twitter-like project
  4. Things we don't even know about.

I think that the way Hive is set up allows for more workload; we're not at its limits yet. Blocks are not full.

But there is a limit: only 20 blocks can be added per minute, each at a fixed size (64KB?). We can't push block production faster than one block every 3 seconds because each block needs time to propagate to node operators around the world; that propagation delay seems to be a hard limit, ultimately bounded by the speed of light.
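A rough back-of-envelope calculation of what those numbers imply, assuming a 3-second block interval and a 64 KB maximum block size (both are my assumptions, not confirmed chain parameters):

```python
# Back-of-envelope estimate of Hive's raw throughput ceiling,
# assuming a 3-second block interval and a 64 KB maximum block size.
BLOCK_INTERVAL_SEC = 3
BLOCK_SIZE_BYTES = 64 * 1024          # 64 KB (assumed)

blocks_per_minute = 60 // BLOCK_INTERVAL_SEC            # 20 blocks/minute
bytes_per_minute = blocks_per_minute * BLOCK_SIZE_BYTES
bytes_per_day = bytes_per_minute * 60 * 24

print(f"blocks/minute : {blocks_per_minute}")
print(f"data/minute   : {bytes_per_minute / 1024:.0f} KB")
print(f"data/day      : {bytes_per_day / 1024**3:.2f} GB")
# -> about 1.25 MB of block space per minute, or roughly 1.8 GB per day
```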

We can increase block size, but by how much? 4x? 16x? I'm thinking about the ultimate limit of a DPoS blockchain like Hive. On something like AWS or Azure, you can dynamically add faster processors, more simultaneous data connections, and more and faster storage. On Hive, adding more nodes doesn't increase processing power, since all the nodes are doing the same thing: recording the same data.
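To put rough numbers on the 4x / 16x question, here is a small extension of the same estimate for a few hypothetical block sizes. The 64 KB baseline, the unchanged 3-second interval, and the ~200-byte average transaction size are all assumptions for illustration:

```python
# Hypothetical throughput at larger block sizes, keeping the 3-second interval.
# The 64 KB baseline and ~200-byte average transaction size are assumptions.
BLOCK_INTERVAL_SEC = 3
BASE_BLOCK_KB = 64
AVG_TX_BYTES = 200

for multiplier in (1, 4, 16):
    block_kb = BASE_BLOCK_KB * multiplier
    bytes_per_sec = block_kb * 1024 / BLOCK_INTERVAL_SEC
    tx_per_sec = bytes_per_sec / AVG_TX_BYTES
    gb_per_year = bytes_per_sec * 86400 * 365 / 1024**3
    print(f"{multiplier:>2}x ({block_kb:>4} KB blocks): "
          f"~{tx_per_sec:,.0f} tx/s, ~{gb_per_year:,.0f} GB/year of chain growth")
```

Bigger blocks raise the transaction ceiling linearly, but they also grow the chain that every full node has to store and replay, which is part of why the answer isn't simply "make blocks huge".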

Compressing data helps. Offloading data to traditional databases for read access helps. Not putting all the data on the blockchain in the first place helps.
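As a toy illustration of how much compression can buy on text-heavy content like posts, here is a sketch using Python's standard zlib. The sample payload is made up, and real savings depend entirely on the data:

```python
import json
import zlib

# Made-up, post-like payload; real Hive operations are structured differently.
payload = json.dumps({
    "author": "someuser",
    "permlink": "building-on-hive-why-it-is-a-smart-idea",
    "body": "Thanks for the interesting post/video! " * 50,  # repetitive text compresses well
    "json_metadata": {"app": "leofinance/0.2", "tags": ["hive", "scalability"]},
}).encode("utf-8")

compressed = zlib.compress(payload, level=9)
print(f"raw       : {len(payload):>6} bytes")
print(f"compressed: {len(compressed):>6} bytes "
      f"({len(compressed) / len(payload):.0%} of original)")
```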

But I don't think anyone is suggesting we have infinite scalability, so I'm wondering how close we are to the limits of the Hive system of data storage and retrieval. I think we do have room to scale up. But how much room? If a 100x workload hit Hive tomorrow, would the system slow down as blocks fill up and transactions have to wait for room in a later block? And if that load is continuous, can transactions get lost / never make it into any block?
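I can't answer that, but the shape of the problem is easy to sketch: if transactions arrive faster than blocks can hold them, the backlog grows every block interval until demand drops. A toy queue simulation, where all the capacity and load numbers are arbitrary assumptions rather than Hive measurements:

```python
# Toy simulation: transactions queue up when sustained demand exceeds block capacity.
# All numbers are arbitrary assumptions for illustration, not Hive measurements.
BLOCK_CAPACITY_TX = 1000      # assumed transactions that fit in one block
NORMAL_LOAD_TX = 100          # assumed "today" load per 3-second block interval
SURGE_MULTIPLIER = 100        # the hypothetical 100x workload

backlog = 0
incoming = NORMAL_LOAD_TX * SURGE_MULTIPLIER   # 10,000 tx arriving per block interval

for block in range(1, 11):
    backlog += incoming
    included = min(backlog, BLOCK_CAPACITY_TX)
    backlog -= included
    print(f"block {block:>2}: included {included}, still waiting {backlog}")

# With demand at 10x capacity, the queue grows by 9,000 tx every 3 seconds.
# Transactions aren't necessarily lost, but confirmation times stretch out for
# as long as demand stays above capacity, and waiting transactions can expire.
```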

I don't know the answers, but maybe someone with more insight into the actual mechanics of Hive blockchain storage can provide a more concrete assessment of how far Hive can actually scale up. Is there a maximum workload? Are we currently at 1% of the max? 5%? 20%?


This is blocktrades' forte, and he is on the case. In his latest post he discussed a test they ran, increasing the block size while also upping the number of transactions. I forget what order of magnitude it was, but it was significant. And he wrote there wasn't a hiccup.

Fortunately, Hive is in the same position as everyone else with data: it only grows. That means compression technology is going to have to evolve. We can take advantage of that.

While it is not infinite, we will see new data architecture that helps out. What that is, I cannot say. What I do know is that blocktrades is thinking about this all day long and doing things to scale before there is a traffic issue, unlike Ethereum.

Posted Using LeoFinance Beta