24th update of 2021 on BlockTrades work on Hive software


Below is a list of Hive-related programming issues worked on by the BlockTrades team during the last week:

Hived work (blockchain node software)

Voting changes that will take effect with HF26

  • Eliminated the rule that prevented voting more than once per block (previously only one vote was allowed every 3 seconds).
  • Vote edits are no longer penalized with zero curation rewards; an edited vote now behaves mostly as if the original/previous vote was not there.
  • Dust votes are fully considered as votes (it will no longer be possible to delete a comment that has received dust votes).
https://gitlab.syncad.com/hive/hive/-/merge_requests/258

CLI wallet

Wallet tests were rewritten to use the new, faster Test tools: https://gitlab.syncad.com/hive/hive/-/merge_requests/251

We’re also working on supporting offline operation of the CLI wallet:
https://gitlab.syncad.com/hive/hive/-/merge_requests/265

Code cleanup

We’ve made websocketpp into a submodule (previously the entire websocketpp codebase was copied directly into the hived repo via the embedded fc library). This should make it easier to update the websocketpp library from its source repo in the future:
https://gitlab.syncad.com/hive/hive/-/merge_requests/235

sql_serializer (Hived plugin that streams data to HAF database)

During testing of HAF, we found and fixed some bugs in the sql_serializer plugin.

When a hived node is started, it drops any reversible data it had (i.e. blocks newer than the last irreversible block), so the reversible data in the associated HAF database also has to be removed, and the data of HAF-based apps has to be rewound to the last irreversible block. To handle this, the sql_serializer now generates an artificial BACK_FROM_FORK HAF event at startup, so that all the reversible data gets removed from the HAF database:
https://gitlab.syncad.com/hive/hive/-/merge_requests/266
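
To give a sense of what such an event implies for an app, here's a minimal sketch of a fork-handling routine. The schema, table, and function names below are illustrative assumptions, not the actual HAF API:

```sql
-- Hypothetical sketch (not the real HAF schema): discard an app's
-- reversible rows when a BACK_FROM_FORK event rewinds the chain
-- to block _fork_block.
CREATE OR REPLACE FUNCTION my_app.on_back_from_fork(_fork_block INTEGER)
RETURNS VOID AS $$
BEGIN
    -- Rows derived from blocks above the fork point describe a chain
    -- branch that no longer exists, so drop them.
    DELETE FROM my_app.account_balances
    WHERE source_block_num > _fork_block;

    -- Record the block height at which the app's state is now valid.
    UPDATE my_app.sync_state
    SET last_processed_block = _fork_block;
END;
$$ LANGUAGE plpgsql;
```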

As mentioned in a previous report, there was a bug when the sql_serializer lost its connection to the postgres database it writes to (for example, if the postgres database was temporarily shut down for maintenance and then restarted). We’ve enhanced the sql_serializer to automatically try to reconnect in this case:
https://gitlab.syncad.com/hive/hive/-/merge_requests/263

Hivemind (2nd layer applications + social media middleware)

As mentioned previously, we’re planning to migrate to Ubuntu 20 as the recommended deployment environment for hived and hivemind. As part of this change, we’re planning to move to postgres 12 for hivemind, because this is the default version of postgres shipped with Ubuntu 20.

Final postgres 10 release for production servers, moving development releases to postgres 12

During our performance testing of postgres 12 in the last couple of weeks, we’ve found numerous places where we will need to inject postgres 12-specific syntax to achieve good performance.

So we’re planning to release a final postgres 10-compatible version of hivemind in the coming week, containing all current fixes and performance enhancements, then move the develop branch to be postgres 12-only. Going forward, this means that future releases of hivemind will require postgres 12 (production servers should stick with postgres 10 and this final release until the postgres 12-based releases are available, since the final release is tuned to perform much better on postgres 10).

Analyzing Postgres 12’s “just-in-time” compilation of queries

We’ve determined that just-in-time (jit) compiling of queries (first introduced in postgres 11) has a detrimental effect on several hivemind-based queries and no observed benefits so far. This should not be surprising as jit is mostly beneficial for speeding up execution of long-running queries, and hivemind was designed to avoid such queries in general. Also, hivemind queries are often complex, so they take longer to compile.

We’re planning to disable jit during hivemind live sync. We’ll be performing some benchmarks in the coming week to see if jit benefits any queries during massive sync (these are the only queries in hivemind that might plausibly benefit). We’ll also need to check if the move to a HAF-based hivemind changes the performance profile of jit during massive sync.
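
For reference, jit is controlled by ordinary postgres settings, so disabling it for hivemind's sessions (or merely raising its activation thresholds) is a small change. A minimal illustration:

```sql
-- Disable just-in-time compilation for the current session (postgres 11+).
SET jit = off;

-- Alternatively, raise the cost thresholds so jit only triggers for
-- genuinely expensive queries. The values below are illustrative;
-- the postgres defaults are 100000 and 500000 respectively.
SET jit_above_cost = 1000000;
SET jit_optimize_above_cost = 5000000;
```

The same settings can also be applied server-wide in postgresql.conf or per-database via ALTER DATABASE ... SET.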

Postgres 12 breaks ring-fencing of Common Table Expressions (CTEs) by default

Another issue we found during the port to postgres 12 is that postgres 12 tries to globally optimize queries that contain CTEs (i.e. WITH clauses in a query). Previously, sql code inside a CTE was always optimized separately from other parts of the query that depend on the data generated by the CTE. This separate optimization step is often referred to as “ring-fencing” of the CTE.

In many hivemind queries, we’ve taken advantage of ring-fencing to force the SQL query planner to use a beneficial ordering of joins within the queries. By breaking ring-fencing, postgres 12 was breaking these optimizations.

Fortunately, postgres 12 extended the syntax for WITH statements with a MATERIALIZED keyword that re-enables the old ring-fencing behavior on specific queries. For example, under postgres 12, `WITH results AS MATERIALIZED (...)` achieves the same behavior as `WITH results AS (...)` did in postgres 10. Unfortunately, this syntax isn’t accepted by postgres 10, and we need to make this change in a number of queries, so this is one of the driving reasons we decided to move future development to postgres 12 (but I suspect we’ll find other reasons as we go along).
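
Here's what the change looks like in practice, using hypothetical tables rather than the real hivemind schema:

```sql
-- Postgres 12: MATERIALIZED forces the CTE to be planned separately
-- ("ring-fenced"), as all CTEs were in postgres 10 and earlier.
WITH results AS MATERIALIZED (
    SELECT post_id, author
    FROM posts                 -- illustrative table, not hivemind's schema
    WHERE author = 'alice'
)
SELECT r.post_id, v.voter
FROM results r
JOIN votes v ON v.post_id = r.post_id;
```

Without MATERIALIZED, postgres 12 is free to inline the CTE into the outer query and pick a different join order, which is exactly the behavior that was hurting our hand-tuned queries.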

Hive Application Framework: framework for building robust and scalable Hive apps

Fixing/Optimizing HAF-based account history app (Hafah)

We’re currently optimizing and testing our first HAF-based app (code-named Hafah) that emulates the functionality of hived’s account history plugin (and ultimately will replace it). During the past week we’ve been running full tests to the current headblock of the mainnet (i.e. 57M+ blocks) and making sure it enters live sync mode properly (after some fixes, it does).

We’ve also been testing with our “fork-inducing” tool on a testnet, and this also helped us identify some bugs in HAF (now fixed). We’re also doing further work on this tool to eliminate some randomness in its operation, to ensure its tests are repeatable.

Some bugs identified and fixed by recent testing of HAF include:
https://gitlab.syncad.com/hive/psql_tools/-/merge_requests/13
https://gitlab.syncad.com/hive/psql_tools/-/merge_requests/14
https://gitlab.syncad.com/hive/psql_tools/-/merge_requests/16

Benchmarking concurrent operation of sql_serializer and Hafah

We tested running the sql_serializer replaying from block 0 to headblock while at the same time concurrently running the Hafah app on the same postgres database. Unfortunately, this unexpectedly resulted in a slowdown as compared to separately running the sql_serializer in massive sync mode, followed by running Hafah on the resulting data, so we’re investigating potential causes for this.

The slowdown manifested as the sql_serializer taking longer to reach the headblock. Hafah initially trailed the sql_serializer, but it was eventually able to keep up with the data streamed by the sql_serializer in massive sync mode, even before the serializer reached the head block and entered live sync mode.

Currently I suspect the issue is either the introduction of indexes and foreign keys into the HAF tables (required for Hafah to run) or autovacuum runs on the HAF tables (these make Hafah itself perform better). We’ll be investigating by selectively disabling some of these to see if any of them are the root cause of the slowdown, as sketched below.
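
As a rough illustration of that kind of experiment (the table and index names here are made up, not the actual HAF schema):

```sql
-- Test the autovacuum hypothesis: turn autovacuum off for a suspect
-- table during the benchmark run.
ALTER TABLE haf.operations SET (autovacuum_enabled = false);

-- Test the index/foreign-key hypothesis: drop an index before the
-- concurrent replay, then rebuild it afterwards and compare timings.
DROP INDEX IF EXISTS operations_block_num_idx;
-- ... run the concurrent replay benchmark ...
CREATE INDEX operations_block_num_idx ON haf.operations (block_num);
```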

Investigating multi-threading the jsonrpc server used by HAF

We’ve assigned a dev to investigate possible ways to multi-thread the jsonrpc server used by HAF (and traditional hivemind). As mentioned in my previous report, we discovered that this becomes a bottleneck for API traffic at high loads when the API calls themselves are fast. As this is a research project, it will likely take several weeks before we have something more to report on this issue.

Conversion of hivemind to HAF-based app

We didn’t have a chance to work on HAF-based hivemind during the previous week as we were tied up with HAF and the HAF account history app, but I think we’ll be able to resume work on it during the upcoming week.

Condenser (source code for hive.blog and a number of other Hive frontend web sites)

We reviewed and deployed a number of enhancements and bug fixes by @quochuy.

While investigating another issue with hive.blog recently, I saw some malformed URL requests in the https://hive.blog server logs, which I suspect are being programmatically generated by condenser itself. This is a relatively minor problem and is still under investigation.

Upcoming work for next week

  • Release a final official version of hivemind with postgres 10 support, then update hivemind CI to start testing using postgres 12 instead of 10.
  • Finish testing fixes and optimizations to HAF base-level components (sql_serializer and forkmanager).
  • For Hafah, we’ll be 1) continuing to research the multithreading of the jsonrpc bottleneck, 2) further benchmarking API performance, 3) verifying results against a hived account history node, 4) analyzing the causes of the slowdown when hived replay and Hafah run concurrently during massive sync, and 5) continuing to set up continuous integration testing for Hafah.
  • Resume work on HAF-based hivemind: we plan to restructure its massive sync process to simplify it and optimize performance by taking advantage of the HAF-based design. Next we’ll modify live sync operation to use only HAF data (currently it still makes some calls to hived during live sync). Once we’re further along, we’ll test HAF-based hivemind using the fork-inducing tool.

Thanks!
Any ideas or talks about the powerdown period?

very curious about that as well

That's really cool.

I like the modular approach because I believe it enables a lot of future building on top of it and makes hive way more flexible.

Yes, the modular aspect will make server setup more flexible and it will also make it easier for devs working on different apps to cooperate more efficiently.

hmmm i love that. One of my wet dreams is a front end for hive that has easy handling like a WordPress installation. Easy log in with keychain.

Easy token integration + filter options (for example, tags).

It would open the doors to hive a lot. But sure, that's a different topic :)

what are the best frameworks for someone wanting to make quick scripts, frontends, post scheduling and other kinds of apps without running their own HAF node on an expensive VPS? I'm personally very interested in making some simple single-page dApps.

I normally just use plain HTML, CSS and JS =)

I meant on HAF or something equivalent! Something that would remove the need to manually connect to dsteem/dhive, create db schemas for HiveSQL, authentication utilities, etc.

It's currently a hassle lol, and it can't be bypassed by just using "plain html, css and js" hahahah

manually connect to dsteem/dhive, create db schemas for HiveSQL, authentication utilities, etc.

🤔 Why are you doing all that! Okay, I admit custom jsons are a PITA, but for something like a plain frontend or maybe an SPA, it's fine and easy too!

I want to make some curation profitability calculators, post schedulers, tracking top voters and their tendencies, etc.

Just my random appetite for research and automation.

Thanks for your updates; your team is working as hard as bees. Off topic: any chance of seeing BNB and BEP20 tokens on the blocktrades exchange???

Our focus is on other projects right now, so I don't expect to add any new coins to our exchange in the near future.

Thanks for the reply

Vote edits are no longer penalized with no curation rewards

Didn't even realize this was a thing to be honest! Great work on the changes and updates.

Yes, it's definitely a worthwhile change; I'm sure most people weren't aware of the associated loss. Note that this issue will persist until hardfork 26.

no curation at all if you edit, or less curation if you voted less at first?

It's no curation at all if you change your vote.

damn, I didn't know this! It's good that I didn't edit too many votes hahah, but I can't say I didn't edit some to "increase my curation" and definitely hijacked my own rewards by doing so. Thanks for the clarification.

This doesn't de-penalize that. When you vote and unvote, you don't get your vote power back.

This was a different issue: once you vote on a post, unvote, then revote, your revote didn't give you any curation rewards. It is strange and unexpected behavior.

Wow I really like that vote fix!! Resetting curation rewards was kind of problematic at times.

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with the following badge:

Post with the highest payout of the day.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

Check out the last post from @hivebuzz:

Hive Power Up Month - Feedback from Day 20

Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s) :

You received more than 1335000 HP as payout for your posts and comments.
Your next payout target is 1340000 HP.
The unit is Hive Power equivalent because your rewards can be split into HP and HBD

To support your work, I also upvoted your post!


Great work - what's the timeframe for hf26?

It's not fixed yet, but probably the earliest would be in December.

Thank you @steemik. For HF26, there is no date defined yet.

@blocktrades! This post has been manually curated by the $PIZZA Token team!

Learn more about $PIZZA Token at hive.pizza. Enjoy a slice of $PIZZA on us!

Excellent information, the modular aspect to give more flexibility to the server is great. I hope the minor issues that are still being investigated will be resolved. Congratulations for the work you do every day.

Small correction:

Dust votes are fully considered as votes (it will no longer be possible to delete a comment that has received dust votes).

The first part is true, but the explanation in parentheses is not. There are certain checks that depend on the POWER of votes - since dust votes by definition have zero power, those checks behaved correctly, and nothing changes there. In particular, comments with just dust votes will still have net power of zero, so it will continue to be possible to delete them (a comment cannot be deleted when it has positive net power, meaning it received more in upvotes than in downvotes; nothing changed here).

There are, however, some checks that depend on the PRESENCE of votes, and these were incorrectly disregarding the presence of dust votes. You can't set comment beneficiaries or change certain comment options once votes have been cast, but dust votes were omitted from these checks as an undesired side effect of having no power. This didn't present any real problem for hived or hivemind, but it was decided to fix it for consistency.

If a user is posting spam or copy-pasted content, or doing any other activity that is against the hive ecosystem, then you can take action on their comment or post, or you can give a warning to the user. But why do you blacklist the user's account? My account has been permanently blacklisted by hivewatchers. I have been here for 4 years and made only 2 mistakes, by accident. This is not fair. I like hive, and I have talked about hive to my friends. But you put my account on a permanent blacklist.

I believe spaminator has an appeal system. And he doesn't work on spaminator, so you are asking the wrong guy.

There's this certain comment I made more than seven days ago that I'm trying to delete, but I've been unable to no matter how many times I try, despite it getting no responses and no upvotes/downvotes, dust or otherwise. Is this a bug or a feature?

Don't know without being able to check the comment.

Is this final and executory, or will witness consensus be necessary for this to get implemented globally?

All hardfork changes require witness consensus. Based on prior public discussions, I don't think these are controversial changes, so I think approval of these particular changes is likely.

You cannot delete a comment past its cashout time, which is 7 days after its creation.

`FC_ASSERT( comment_cashout, "Cannot delete comment after payout." );` in `void delete_comment_evaluator::do_apply( const delete_comment_operation& o )` - would you like to argue with hived code?

You said that you created a comment more than 7 days ago, which means it was already cashed out (even if it received zero rewards). That means the related comment_cashout_object no longer exists. When a comment delete operation is attempted, it first checks for the existence of that object - when it is not present, the operation fails and no other checks are performed. It does not matter whether the comment had net positive power or replies, especially since all the data necessary to perform such checks is read from that very object in the first place, which is obviously not possible once it has been removed.