Hardfork 24 update: testnets and hivemind progress

in HiveDevs, 4 years ago

Below is a quick summary of the Hive core work done by the BlockTrades team last week.

Hived testing and bug fixes

Status: Hived is feature complete and is undergoing testnet testing by the witnesses. So far, I think the testing is going very well.

Earlier in the week, testnet-based testing discovered a bug in the airdrop code for hardfork 24, which was fixed here:
https://gitlab.syncad.com/hive/hive/-/merge_requests/106/diffs

And this weekend, testnet testing uncovered another bug in this code, which was fixed today:
https://gitlab.syncad.com/hive/hive/-/merge_requests/107/diffs

There are no currently known issues with hived that should affect the hardfork date.

Hivemind progress

Most of our work this week continues to be focused on hivemind testing and fixes.

We have quite a large number of programmers working on hivemind now, and it really shows in the number of bugs that were identified and fixed last week. Here’s a list of fixes that were merged into the gitlab repo last week (there was too much work to easily summarize, but the interested reader can follow the links). A few were features rather than bug fixes, so I added comments beside those.
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/86/diffs
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/89
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/91 (add auto-http-server-port cli option)
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/92 (add prometheus monitoring)
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/93
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/94
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/95
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/96
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/97
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/99
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/100
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/101
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/102
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/104
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/105

Hivemind initial sync testing

We also ran “hivemind sync” tests this week. When hivemind is started for the first time on a server, it must first load and organize all the blockchain data into its internal database format. After that, it enters a “live” sync mode where it adds new blocks as they are produced.
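Conceptually, the two sync phases described above can be sketched like this (a toy illustration only, not hivemind's actual code; `get_block`, `head_block`, and the dict "database" are stand-in assumptions):

```python
import time

def sync(get_block, head_block, db, poll_interval=3.0, live=False):
    """Toy two-phase sync: bulk-load historical blocks, then follow the head.

    get_block(num) returns a block (here just a dict) or None if not yet produced.
    head_block() returns the current chain head height.
    db is any dict-like store; real hivemind writes to PostgreSQL instead.
    """
    next_num = len(db) + 1

    # Phase 1: initial ("massive") sync -- load every block up to the head.
    target = head_block()
    while next_num <= target:
        db[next_num] = get_block(next_num)
        next_num += 1
        target = head_block()          # the head may advance while we catch up

    # Phase 2: "live" sync -- poll for each new block as it is produced.
    # (Real hivemind runs this indefinitely; the flag lets the toy terminate.)
    while live:
        block = get_block(next_num)
        if block is None:
            time.sleep(poll_interval)  # Hive produces a block every 3 seconds
            continue
        db[next_num] = block
        next_num += 1
```

In the real implementation, the initial phase batches blocks for throughput and the live phase also handles forks; none of that is modeled here.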

In our tests, the initial sync process took just under 4 days (there’s a lot of data in the Hive blockchain). This was our first successful sync of the entire blockchain using the new Eclipse version of hivemind, so it’s a significant milestone for our work on Eclipse.

Below is the database space consumed by Eclipse hivemind tables after a full sync:

       table_name        |   total    |   index    |   toast    |   table    
-------------------------+------------+------------+------------+------------
 hive_votes              | 212 GB     | 132 GB     | 8192 bytes | 80 GB
 hive_posts              | 103 GB     | 60 GB      | 8192 bytes | 43 GB
 hive_post_data          | 75 GB      | 1882 MB    | 20 GB      | 53 GB
 hive_permlink_data      | 20 GB      | 12 GB      |            | 8616 MB
 hive_follows            | 19 GB      | 12 GB      |            | 6805 MB
 hive_blocks             | 9993 MB    | 4033 MB    |            | 5961 MB
 hive_post_tags          | 8233 MB    | 5505 MB    |            | 2728 MB
 hive_feed_cache         | 2514 MB    | 1716 MB    |            | 798 MB
 hive_accounts           | 779 MB     | 352 MB     | 1216 kB    | 425 MB
 hive_tag_data           | 140 MB     | 80 MB      |            | 61 MB
 following_counts        | 59 MB      | 0 bytes    |            | 59 MB
 follower_counts         | 59 MB      | 0 bytes    |            | 59 MB
 hive_category_data      | 42 MB      | 24 MB      |            | 18 MB
 hive_payments           | 31 MB      | 19 MB      |            | 12 MB
 hive_reblogs            | 26 MB      | 18 MB      |            | 8384 kB
 hive_subscriptions      | 17 MB      | 11 MB      |            | 6768 kB
 hive_notifs             | 8176 kB    | 5072 kB    | 8192 bytes | 3096 kB
 hive_communities        | 4416 kB    | 1632 kB    | 72 kB      | 2712 kB
 hive_roles              | 1560 kB    | 912 kB     |            | 648 kB
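For reference, a report like the one above can be generated from PostgreSQL's catalog functions. Below is a sketch: the SQL should work on any Postgres database (run it via psql or your client of choice), and the small Python helper is only a rough mimic of `pg_size_pretty` for cross-checking numbers. Neither is taken from the hivemind codebase.

```python
# SQL to reproduce a table-size report using standard PostgreSQL catalog
# functions: total = table + indexes + toast, as in the report above.
SIZE_QUERY = """
SELECT c.relname AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total,
       pg_size_pretty(pg_indexes_size(c.oid))        AS "index",
       pg_size_pretty(CASE WHEN c.reltoastrelid <> 0
                           THEN pg_total_relation_size(c.reltoastrelid)
                           ELSE 0 END)               AS toast,
       pg_size_pretty(pg_relation_size(c.oid))       AS "table"
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
"""

def size_pretty(n: int) -> str:
    """Rough Python approximation of pg_size_pretty: switch to a larger
    unit once the value reaches 10x that unit."""
    for unit, factor in (("GB", 1024 ** 3), ("MB", 1024 ** 2), ("kB", 1024)):
        if n >= 10 * factor:
            return f"{n // factor} {unit}"
    return f"{n} bytes"
```

For example, `size_pretty(8192)` yields "8192 bytes", matching the toast entries in the table above.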

Plans for upcoming week

The witnesses are continuing to do testnet-based tests of hived. That will likely be completed this week.

Our team will continue with fixes to hivemind. Most recently, we realized that we need to implement the reputation code directly inside hivemind now. This is just a reimplementation of the existing algorithm, not a rewrite, which would be a much longer task.

Ideally, we’ll complete hivemind work this week, which would allow us to keep the Sept 16th hardfork date. If not, we’ll likely push the hardfork date to the 22nd.

I really appreciate the regular and detailed updates on the work in progress, and the specifics of its completion and performance. It's a marked and preferable change from the lamentable past regarding such improvements.

Thanks!

Thank you for the update

Well done @blocktrade for putting in the effort to give this chain of information to the whole community. I'm very glad that the power down was not shortened, and that all noticed bugs were fixed. Hoping to see the best results after the HF. Implementing the reputation code is highly welcome too.

Thanks to you and your team for the amazing work!

Thanks!
Where is the custom JSON data in the above tables?
hive_follows?

hive_follows contains the custom_json data that is "understood" by hivemind. For arbitrary custom_json, we'll likely create one or more tables specifically for it, along with appropriate API methods to access it.

One of the ideas that's been tossed around is making hivemind more modular. For example, it'd be nice to be able to run a hivemind that just has accounts, transaction data, and custom_json, without all the heavy "social media" tables.

This could be extended further to where the hivemind pre-filters custom_json it's not interested in storing. This would make it easy for people to run very lightweight API servers for one or more specific apps.
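As a rough sketch of that pre-filtering idea (the operation shapes here are simplified stand-ins, not hivemind's actual data structures):

```python
def filter_custom_json(block_ops, wanted_ids):
    """Keep only custom_json operations whose 'id' is in wanted_ids.

    block_ops: iterable of (op_type, op_payload) pairs, as a block stream
    might deliver them. Non-custom_json ops are dropped here because this
    hypothetical lightweight node only serves custom_json data.
    """
    kept = []
    for op_type, payload in block_ops:
        if op_type == "custom_json" and payload.get("id") in wanted_ids:
            kept.append(payload)
    return kept

# Example: an API server that only cares about one app's operations.
ops = [
    ("vote", {"voter": "alice"}),
    ("custom_json", {"id": "follow", "json": "[]"}),
    ("custom_json", {"id": "sm_battle", "json": "[]"}),
]
app_ops = filter_custom_json(ops, {"sm_battle"})  # keeps only the sm_battle op
```

Everything else in the block is discarded before it ever reaches the database, which is what keeps such a node lightweight.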

Probably obvious, but this is all driving towards creating a simpler architecture for creating 2nd layer apps.

OK... I joined recently and I'm not up to date on the HF.
Will the 13-week power down be shortened?

No, there's no change to powerdown time in this HF.

Thank you for your prompt reply.
I hope it's talked about in the next HF.
Meanwhile, keep up the good work. I love your service for swapping coins and support you as a witness :)

Will you consider shortening it?

It's been suggested before, but it seems a controversial topic, and my initial impression was that there were more people "against" than "for" the change, but there's been no formal stake vote on it that I recall. Ultimately, it's up to the community to decide.

Some kind of vote about this would be cool. What do you think?

Anyone can create a pair of for/against proposals on this topic. I suspect someone who wants the change will raise the issue as a pair of proposals, once we get past HF24, since the suggestion has come up several times.
Right now, HF24 has all the focus of my team.

Thank you @blocktrades.
Let's see how it all goes!

I find it interesting that the votes take up more data than the posts themselves. I suppose there must be so much more going on under the surface than is visible on the frontend. Happy to see things going so well for the 16th.

I agree it's a bit counter-intuitive. But one thing to note is that the post title/body/etc. is actually stored in hive_post_data, while hive_posts contains metadata. So you need to add up those two tables (and hive_permlink_data too, for that matter) to get a better idea of the space consumed by posts (in which case, you get a size similar to the votes).

The other point, of course, is that there are a lot of votes cast. And there are a lot of indexes associated with voting.
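Working the numbers from the table in the post bears this out:

```python
# Summing the posts-related tables from the size report (totals in GB).
posts_related = {
    "hive_posts": 103,         # post metadata
    "hive_post_data": 75,      # titles, bodies, json_metadata
    "hive_permlink_data": 20,  # permlinks
}
total_posts_gb = sum(posts_related.values())
print(total_posts_gb)  # -> 198, in the same ballpark as hive_votes at 212 GB
```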

Hey man one thing I've wondered about Blockchain is.. the ACTUAL CONTENT. Is HIVE content stored on AWS servers?

I remember Steemit content was, and that made me think... doesn't that nullify any claim of DECENTRALISATION and IMMUTABILITY?

Or at the least exist as "not truly outside of the mainstream" reach and all still under the control of technocrats?

Hi. As far as I understand, the text content is stored on the blockchain itself, which is replicated across many decentralised servers. Some witnesses use AWS as far as I know, but some run their own hardware, making it quite decentralised. Images etc. may be stored on centralised servers, but I am not sure if they are mirrored. Maybe someone else can weigh in on this.

The image data is mirrored across both cloud servers and individually owned servers.

Well that's the thing: witnesses are about making sure the blocks of records are correct. Their data is essentially what we can view on an explorer. Very raw data. That's my understanding.

Images and video, though, can't be stored on the blockchain. This has always been one big reason why blockchains can't scale well. Transactions of metadata are one thing; bandwidth-heavy content like streaming video is another. I'm not really sure how it all works, but I remember when I did some research on Steemit, it was utilising AWS as the content storage servers.

So this really means that the blockchain isn't truly decentralised, and it's why it can't just take over the regular internet like many claim or hope.

Classically, I think the text data was considered the most important to keep decentralized. For politically controversial posts, the images are mostly window-dressing to the text content. And of course, all the financial data is decentralized.

Nonetheless, the image server data is distributed across multiple servers now, and the plan is to truly decentralize this data as well. You can search thru my previous posts to find more details about this plan.

In a similar way, I expect the same thing to happen to the video data with time, but the video servers are maintained by other groups right now (e.g. 3speak) so they will likely move on their own timelines.

Thanks for the explanation.
It completes my understanding.

Videos can be hosted on @threespeak (decentralized).

Isn't threespeak just another front-end?
Where is the video content stored... who has control of the servers where the content lives?

Thank you for taking the time to reply and in such detail.

have a nice day, everyone

To connect to the private Syncad GitLab instance, is it necessary to create a new user? Is it not possible to use an existing GitLab account?

What was the spec (CPU/RAM) of the server running the initial sync process?


Witness FR - Gen X - Geek 🤓 Gamer 🎮 traveler ⛩️
Don't miss the Hive Power UP Day! more info here

You can access the gitlab server without creating an account: https://gitlab.syncad.com/explore
But you need to create an account to file issues and submit merge requests.

We ran it on two systems:

  • 3960X 256GB RAM 2 NVMe drives (Corsair MP600s 2TB and 1TB)
  • 3800X 128GB RAM 1 NVMe drive (Sabrent Rocket 4.0 2TB) and 1 SSD (Samsung 860 EVO 1TB)

Times:

  • 3960X took 3 days 20 hours
  • 3800X took 4 days 13 hours
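Converting those times to hours makes the comparison easier:

```python
# Initial sync times from the two test machines, converted to hours.
h_3960x = 3 * 24 + 20  # 3 days 20 hours -> 92 hours
h_3800x = 4 * 24 + 13  # 4 days 13 hours -> 109 hours
print(round(h_3800x / h_3960x, 2))  # -> 1.18, i.e. the 3960X was ~18% faster
```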

Thanks @blocktrades, yes I know for access without creating an account, I just wanted to avoid an additional account to my already existing gitlab and github 😂

Really interesting that the tests were done with a Ryzen processor. In case you do a more detailed post later on the tests, it will be possible to have also the CPU/RAM load after synchronization (I mean in load testing)?



I really like the current generation Ryzens, they've moved me back to mostly buying AMD systems, whereas previously we'd mostly been buying Intel systems. We started dabbling with AMD again with Ryzen and Ryzen2, but started going with AMD in a big way with Ryzen3.

One really nice feature of the new AMDs is we can get relatively cheap systems with 256GB (via the threadripper series which allows for 8 32GB DIMMs), which is very important to us since we run a lot of VMs, and the hosts for those VMs tend to be memory bottlenecked. PCI-4 support is also very nice to have for the high-speed NVMes.

We'll be publishing a lot more data about CPU/RAM loading in the next few weeks. But, the short summary is performance for Eclipse is really good.

Thanks 👍, you made me want to test it with VMware and Unraid on a project I have to upgrade and which is using several VMs.



Great work.
Can you share details of the CI setup used?

Thanks for the information

You guys have done a great job. I have reviewed all the links so far, and it's all a bit of a jumble in my head! :)) I hope you will have an equally productive next week. Good luck.

That's pretty cool to know.


This has been hard work guys! You must be exhausted! I love the way things are going, the great work to make the platform better and better. Congratulations!

It has been a lot of work and I'm really looking forward to getting this release behind us, since it lays the groundwork for everything we plan to do afterwards.

Is there any plan or date for SMTs? @blocktrades

We plan to look at the various options for tokens after we complete the current hardfork.

Awesome!

I'm so confused. There's a Hive hardfork? And why?

You can find out more about the upcoming hardfork here: https://hive.blog/@hiveio

Very good work. I am happy to see you do not vote for yourself. Very nice. Rep+

I hope all goes well for the development team. My best wishes.
Greetings,
Alucian

4 days of synchronization!? That's some volume of data!
We are waiting for the hardfork!

why did you give me a negative feedback?

There's a guy called haejin who's voting with a possibly stolen account called ranchorelaxo. He randomly votes up the top trending posts with 100% strength without reading them. I have a bot setup that does a lower value downvote to counter some of his upvote, so that trending posts don't get hugely overrewarded versus other posts.

The blockchain gives out a fixed amount of rewards, based on the token price. A downvote simply redistributes the rewards away from one post to all the other posts. So downvoting an overrewarded post just means other posters get a little more for their post. People are constantly upset about how some people's posts are constantly getting more rewards than other deserving posts. Downvoting counters this trend.
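A toy model of that redistribution (all numbers invented for illustration; the real reward curve and vote mechanics are more involved):

```python
def payouts(reward_pool, rshares):
    """Split a fixed reward pool proportionally to each post's net rshares."""
    total = sum(rshares.values())
    return {post: reward_pool * r / total for post, r in rshares.items()}

# Before the counter-downvote, post "a" is heavily overvoted:
before = payouts(1000, {"a": 800, "b": 100, "c": 100})  # a gets 800
# A downvote reduces "a"'s net rshares; the pool size is unchanged,
# so posts "b" and "c" automatically receive a larger share:
after = payouts(1000, {"a": 400, "b": 100, "c": 100})   # b and c each get ~167
```

The pool total is the same in both cases; only its distribution changes.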

OK, I did not know that. But if your bot does that, it would be good for it to leave a comment explaining why, so people don't think you downvoted because you did not like the post. That's what I assumed.

I wish the bot had the ability to leave a comment, but it doesn't. And the voter it's countering usually votes 20-40 times when he does it, so I would waste a lot of time leaving comments.

Thank you for explaining this. It makes a lot of sense. The bot you have downvoted my post and I wasn't really upset about it. I figured it was more of a balancing thing so your explanation confirmed that.

Just wanted to give a shout out to thank you. Have an awesome day! :)

Hello...!!!
You again cast a negative vote on my latest post. What mistakes have I made? Please answer. Why are you weakening me here? Am I not allowed to be here?

There's a guy called haejin who's voting with a possibly stolen account called ranchorelaxo. He randomly votes up the top trending posts with 100% strength without reading them. I have a bot setup that does a lower value downvote to counter some of his upvote, so that trending posts don't get hugely overrewarded versus other posts.

The blockchain gives out a fixed amount of rewards, based on the token price. A downvote simply redistributes the rewards away from one post to all the other posts. So downvoting an overrewarded post just means other posters get a little more for their post. People are constantly upset about how some people's posts are constantly getting more rewards than other deserving posts. Downvoting counters this trend.

Thank you, now I understand. I really support your program.👍👍👍👍

@blocktrades, I am not good with Technical Details but good to know about the Hivemind Progress.

Keep updating Hive Community and keep making this space a Powerful Ecosystem.

Good wishes from my side team and stay blessed always.

Can we have a 'poll the community' proposal that costs on the order of 1 HBD?
Maybe with both up and down votes counted within the same proposal?
Downvotes to give the apathetic one less reason not to vote.
Counting the votes by each, and not weight, to gauge popularity within the community?
Weight could be shown as normal.

The HBD idea is interesting, but it's still just as game-able by sybil attacks, assuming the voter is willing to fund his sockpuppet accounts for their votes. And that probably leads to a worse voting system, since it seems more likely to me that voters who are paying to vote with sockpuppets would be more likely to do so in order to generate a personal return.

For DHF expenditures, I think it makes sense that the votes are stake-weighted, since the more stake you have, the more you have to win or lose based on the results, and stakeholders are in some sense "funding" everything else by being willing to hold Hive tokens instead of selling them, which drives their value.

Despite that, I don't think stake-weighted decision-making has to be the "only-game-in-town" on Hive. The second layer reputation app I plan to create, for example, will be totally divorced from stake-weighted voting. The only connection to the first layer in this app will be for resource credits to create transactions.

I was aiming more for a proposal cost of 1 HBD to poll the community.

Stake weighted voting is the game we are playing here, that makes sense, to me, in a governance situation.
Presuming the stake doesn't act against its own best interests.

I was just wanting the ui to show the number of up/down votes on the proposal to easier gauge community support, the stake weighted portion could remain the same.

Perhaps the proposal feature isn't the best venue, but neither have the apps filled the role due to nobody seeing them, and most of us not wanting one more app to keep up with.

Maybe a poll feature should be added to the drop down menu where proposals are now?
This lets everybody see them, if they look.
At 1 HBD to create a poll, it could provide additional revenue to the DAO.

If we paid dpoll for their code and merged it into condenser, presuming its compatibility, it might help with their bent feelers at us taking over their thing?

Ah, I see. Yes, it would be quite trivial for the front end UIs to also show vote counts as well as vote stakes. The only downside I can see to that is that it would of course be subject to sybil voting, if someone wanted to distort the data. But I don't see that as a problem to worry about.

I guess the other part of your idea is to lower the cost, but I'm not sure it's necessary, as I guess most people who want to get people's attention for a poll about the chain don't mind spending 10 HBD that much. If they do, dpoll is certainly another option for them.

as I guess most people who want to get people's attention for a poll about the chain don't mind spending 10 HBD that much.

Lol, at least not the ones making 20 HBD a post.
Those that have to make 20 posts to get that much might disagree, but likely they haven't been here long enough to get over the learning curve anyway.

Thanks for your indulgence, and keeping us alive after we got kneecapped by ned.

[Image: ObsoleteError.JPG]

Can anyone tell me what this is supposed to mean and what I need to do to be able to post again?

If that were actually true, you've identified a great personal strategy for buying/selling, since I post every Monday right now, until after the HF.

Did you actually create an account just to send this anonymous message? Downvoted for waste. Here's a suggestion for you: "Be the change you want to see".

Here's news for you: I'm not here to pump your bags, I'm just here to develop tech I'm interested in. Go search elsewhere for your get-rich-quick scheme.