Wanna help test RC delegations? Here's all you need to know

Hello,

Some users asked me how they can help test RC delegations, so I set out to write a guide.

Disclaimer: this assumes that you are very technical and have a good understanding of how Hive works.

RC delegations

Pools

Each account has an RC pool attached to it. Anyone can delegate to any account's pool, but that RC won't be usable by the account itself; it's just a pool the account can delegate RC from (an account can delegate to its own pool if it wishes).

RC slots

There is a notion of delegation slots: each account has 3 slots through which it can receive delegations, meaning it can receive RC from at most 3 pools, and each slot can only receive from one specific pool. If a pool hasn't been "whitelisted" in one of a user's slots, it can't delegate to that user.

For new accounts, the slots are set like this by default:
Slot 1 is set to the account's creator; both the account and the creator can change that slot.
Slot 2 is set to the recovery account (or not set); both the account and its recovery partner can change that slot.
Slot 3 is not set; only the account itself can change it.

Most of the use cases for RC delegations are linked to account creation; games or other operation-intensive applications could have an "ask for RC" button that makes the user set a slot to the game, after which the app would delegate to it.

It's worth noting that if you change a slot, you lose the previous delegation on that slot.
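
As a sketch of the "ask for RC" flow above, using the same cli_wallet calls as the happy-path example later in this post ("game" and "alice" are hypothetical accounts here, and I'm assuming slots are 0-indexed in the operation as in that example, so slot 3 is index 2):

# alice opens her slot 3 (index 2) for the game's pool, signing as herself
set_slot_delegator "game" "alice" 2 "alice" true
# the game then delegates 50 RC from its pool to alice
delegate_drc_from_pool game alice {"decimals": 6, "nai": "@@000000037"} 50 true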

Oversubscription

Pools work on an oversubscription model:

Let's say Alice delegates 100 RC to an RC pool.
That pool can then delegate 30 RC each to Bob, Carol, Dave, and Eugene.
There is a total of 120 RC delegated, but only 100 RC in the pool.

Is that a problem? No. The implementation allows each of the outbound delegations to use up to the delegation's worth of RC, but that usage is charged to the pool.

If Bob uses all 30 RC and no one else uses any, Bob will no longer have access to RC from the pool until it regenerates, even though the pool still has 70 RC available. However, if Bob, Carol, and Dave all use 30 RC, then Eugene will only be able to use 10 RC before he cannot access any more.

Although Eugene still has RC available from his delegation, the pool ran out of RC, so he has none left to use.

This is useful for the following use case: multiple large stakeholders delegating to one account-creation pool, which makes many small delegations to new users. Users will consume their RC, but because not all users are retained, the pool can be significantly oversubscribed.
Another real-life example:

Alice delegates 50 RC to her pool
Bob sets his slot 3 to Alice
Alice delegates 30 RC from her pool to Bob (Bob can now use the RC)
Alice tries to delegate to Eve, but since Eve didn't set a slot to Alice, Alice cannot delegate RC to her

That's it! Feel free to comment if you have questions
this doc may give you some more info: https://gitlab.syncad.com/hive/hive/-/blob/feature/rc_delegation_rebase/doc/devs/delegation_pools.md

Now onto the actual testing.

compiling hive

if you are on Ubuntu 20.04, see this other guide first: https://peakd.com/hive/@howo/how-to-build-hive-on-ubuntu-20-04

first install all of the dependencies:

apt-get install -y \
        autoconf \
        automake \
        autotools-dev \
        build-essential \
        cmake \
        doxygen \
        git \
        libboost-all-dev \
        libyajl-dev \
        libreadline-dev \
        libssl-dev \
        libtool \
        liblz4-tool \
        ncurses-dev \
        python3 \
        python3-dev \
        python3-jinja2 \
        python3-pip \
        libgflags-dev \
        libsnappy-dev \
        zlib1g-dev \
        libbz2-dev \
        liblz4-dev \
        libzstd-dev

then build hive on the rc delegations branch:

git clone [email protected]:hive/hive.git
cd hive
git checkout feature/rc_delegation_rebase
git submodule update --init --recursive
mkdir build
cd build

cmake -DENABLE_COVERAGE_TESTING=ON -DBUILD_HIVE_TESTNET=ON -DLOW_MEMORY_NODE=ON ..

make -j$(nproc) hived cli_wallet

if you get an error while doing the git clone, try with this URL:

https://gitlab.syncad.com/hive/hive.git
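
i.e. clone over HTTPS instead of SSH:

git clone https://gitlab.syncad.com/hive/hive.git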

running it

Run the node for a few seconds and then exit hived (Ctrl+C), like so:

./programs/steemd/hived -d testnet/

this will create a testnet directory with the default config file.

open the config.ini file

nano testnet/config.ini

replace the config.ini with this one:

# Appender definition json: {"appender", "stream", "file"} Can only specify a file OR a stream
log-appender = {"appender":"stderr","stream":"std_error"} {"appender":"p2p","file":"logs/p2p/p2p.log"}

# log-console-appender = 

# log-file-appender = 

# Logger definition json: {"name", "level", "appender"}
log-logger = {"name":"default","level":"info","appender":"stderr"} {"name":"p2p","level":"warn","appender":"p2p"}

# Whether to print backtrace on SIGSEGV
backtrace = yes

# Plugin(s) to enable, may be specified multiple times
plugin = witness account_by_key account_by_key_api condenser_api 

# Defines a range of accounts to track as a json pair ["from","to"] [from,to] Can be specified multiple times.
# account-history-track-account-range = 

# Defines a range of accounts to track as a json pair ["from","to"] [from,to] Can be specified multiple times. Deprecated in favor of account-history-track-account-range.
# track-account-range = 

# Defines a list of operations which will be explicitly logged.
# account-history-whitelist-ops = 

# Defines a list of operations which will be explicitly logged. Deprecated in favor of account-history-whitelist-ops.
# history-whitelist-ops = 

# Defines a list of operations which will be explicitly ignored.
# account-history-blacklist-ops = 

# Defines a list of operations which will be explicitly ignored. Deprecated in favor of account-history-blacklist-ops.
# history-blacklist-ops = 

# Disables automatic account history trimming
history-disable-pruning = 0

# The location of the rocksdb database for account history. By default it is $DATA_DIR/blockchain/account-history-rocksdb-storage
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"

# Defines a range of accounts to track as a json pair ["from","to"] [from,to] Can be specified multiple times.
# account-history-rocksdb-track-account-range = 

# Defines a list of operations which will be explicitly logged.
# account-history-rocksdb-whitelist-ops = 

# Defines a list of operations which will be explicitly ignored.
# account-history-rocksdb-blacklist-ops = 

# Where to export data (NONE to discard)
block-data-export-file = NONE

# How often to print out block_log_info (default 1 day)
block-log-info-print-interval-seconds = 86400

# Whether to defer printing until block is irreversible
block-log-info-print-irreversible = 1

# Where to print (filename or special sink ILOG, STDOUT, STDERR)
block-log-info-print-file = ILOG

# Maximum numbers of proposals/votes which can be removed in the same cycle
sps-remove-threshold = 200

# the location of the chain shared memory files (absolute path or relative to application data dir)
shared-file-dir = "blockchain"

# Size of the shared memory file. Default: 54G. If running a full node, increase this value to 200G.
shared-file-size = 54G

# A 2 precision percentage (0-10000) that defines the threshold for when to autoscale the shared memory file. Setting this to 0 disables autoscaling. Recommended value for consensus node is 9500 (95%). Full node is 9900 (99%)
shared-file-full-threshold = 0

# A 2 precision percentage (0-10000) that defines how quickly to scale the shared memory file. When autoscaling occurs the file's size will be increased by this percent. Setting this to 0 disables autoscaling. Recommended value is between 1000-2000 (10-20%)
shared-file-scale-rate = 0

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint = 

# flush shared memory changes to disk every N blocks
# flush-state-interval = 

# Database edits to apply on startup (may specify multiple times)
# debug-node-edit-script = 

# Database edits to apply on startup (may specify multiple times). Deprecated in favor of debug-node-edit-script.
# edit-script = 

# Set the maximum size of cached feed for an account
follow-max-feed-size = 500

# Block time (in epoch seconds) when to start calculating feeds
follow-start-feeds = 0

# json-rpc log directory name.
# log-json-rpc = 

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760

# The local IP address and port to listen for incoming connections.
# p2p-endpoint = 

# Maximum number of incoming connections on P2P endpoint.
# p2p-max-connections = 

# The IP address and port of a remote peer to sync with. Deprecated in favor of p2p-seed-node.
# seed-node = 

# The IP address and port of a remote peer to sync with.
# p2p-seed-node = 

# P2P network parameters. (Default: {"listen_endpoint":"0.0.0.0:0","accept_incoming_connections":true,"wait_if_endpoint_is_busy":true,"private_key":"0000000000000000000000000000000000000000000000000000000000000000","desired_number_of_connections":20,"maximum_number_of_connections":200,"peer_connection_retry_timeout":30,"peer_inactivity_timeout":5,"peer_advertising_disabled":false,"maximum_number_of_blocks_to_handle_at_one_time":200,"maximum_number_of_sync_blocks_to_prefetch":2000,"maximum_blocks_per_peer_during_syncing":200,"active_ignored_request_timeout_microseconds":6000000} )
# p2p-parameters = 

# Skip rejecting transactions when account has insufficient RCs. This is not recommended.
rc-skip-reject-not-enough-rc = 0

# Generate historical resource credits
rc-compute-historical-rc = 0

# Start calculating RCs at a specific block
rc-start-at-block = 0

# Ignore RC calculations for the whitelist
# rc-account-whitelist = 

# Endpoint to send statsd messages to.
# statsd-endpoint = 

# Size to batch statsd messages.
statsd-batchsize = 1

# Whitelist of statistics to capture.
# statsd-whitelist = 

# Blacklist of statistics to capture.
# statsd-blacklist = 

# Block time (in epoch seconds) when to start calculating promoted content. Should be 1 week prior to current time.
tags-start-promoted = 0

# Skip updating tags on startup. Can safely be skipped when starting a previously running node. Should not be skipped when reindexing.
tags-skip-startup-update = 0

# Defines the number of blocks from the head block that transaction statuses will be tracked.
transaction-status-block-depth = 64000

# Defines the block number the transaction status plugin will begin tracking.
transaction-status-track-after-block = 0

# Local http endpoint for webserver requests.
# webserver-http-endpoint = 

# Local unix http endpoint for webserver requests.
# webserver-unix-endpoint = 

# Local websocket endpoint for webserver requests.
# webserver-ws-endpoint = 

# Local http and websocket endpoint for webserver requests. Deprecated in favor of webserver-http-endpoint and webserver-ws-endpoint
# rpc-endpoint = 

# Number of threads used to handle queries. Default: 32.
webserver-thread-pool-size = 32

# Enable block production, even if the chain is stale.
enable-stale-production = 1

# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = 0

# name of witness controlled by this node (e.g. initwitness )
witness = "initminer"

# WIF PRIVATE KEY to be used by one or more witnesses or miners
private-key = 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n

# Skip enforcing bandwidth restrictions. Default is true in favor of rc_plugin.
witness-skip-enforce-bandwidth = 1

# Local http endpoint for webserver requests.
webserver-http-endpoint = 127.0.0.1:8090

# Local websocket endpoint for webserver requests.
webserver-ws-endpoint = 127.0.0.1:8091

plugin = webserver p2p json_rpc witness account_by_key reputation market_history

plugin = database_api account_by_key_api network_broadcast_api reputation_api market_history_api condenser_api block_api rc_api account_history account_history_api

then run hived again
./programs/steemd/hived -d testnet/
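
To sanity-check that the node is up and producing blocks, you can query the HTTP endpoint from the config above (database_api is in the plugin list); head_block_number should increase every 3 seconds:

curl -s --data '{"jsonrpc":"2.0","method":"database_api.get_dynamic_global_properties","params":{},"id":1}' http://127.0.0.1:8090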

Testing RC delegations

I mostly test using the cli_wallet for real-world tests. You can also look at hive-js, which has support for the new operations (see this old, half-finished script, and ask me questions if you really want to use hive-js instead of cli_wallet: https://github.com/drov0/hf23-testing/blob/master/rc_pools.js)

to run the cli_wallet locally:

./programs/cli_wallet/cli_wallet --server-rpc-endpoint="ws://127.0.0.1:8091"

here's a simple "happy path" setup to give you an idea:

# set a password
set_password p
# unlock the wallet
unlock p
# import initminer's key
import_key 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n
# account that will receive the RC
create_account "initminer" "howo" "" true
# pool account
create_account "initminer" "pool" "" true
# transfer some funds to test RC usage
transfer initminer howo "2500.000 TESTS" "" true
# doesn't work because howo has no RC
transfer howo initminer "1.000 TESTS" "" true
# set howo's slot 0 to point to the pool, signing with initminer (the creator)
set_slot_delegator "pool" "howo" 0 "initminer" true
# delegate 100 RC from initminer to the pool account's RC pool
delegate_to_pool initminer pool {"symbol":"VESTS","amount": "100", "precision": 6, "nai": "@@000000037"} true
# power up 10 TESTS so the pool account has RC of its own to broadcast the delegation (RC delegated to an account's pool isn't usable by the account itself)
transfer_to_vesting "initminer" "pool" "10.000 TESTS" true
# delegate 100 RC from the pool to howo
delegate_drc_from_pool pool howo {"decimals": 6, "nai": "@@000000037"} 100 true
# howo can now transfer
transfer howo initminer "1.000 TESTS" "" true

Some additional useful cli commands to see how things are going:

find_rc_accounts ["account"]:

gives you the state of the RC account:

unlocked >>> find_rc_accounts ["howo"]
find_rc_accounts ["howo"]
[{
    "account": "howo",
    "creator": "initminer",
    "rc_manabar": {
      "current_mana": 356050,
      "last_update_time": 1617274335
    },
    "max_rc_creation_adjustment": {
      "amount": "0",
      "precision": 6,
      "nai": "@@000000037"
    },
    "max_rc": 356054,
    "vests_delegated_to_pools": {
      "amount": "0",
      "precision": 6,
      "nai": "@@000000037"
    },
    "delegation_slots": [{
        "rc_manabar": {
          "current_mana": 96,
          "last_update_time": 1617274335
        },
        "max_mana": 100,
        "delegator": "pool"
      },{
        "rc_manabar": {
          "current_mana": 0,
          "last_update_time": 0
        },
        "max_mana": 0,
        "delegator": ""
      },{
        "rc_manabar": {
          "current_mana": 0,
          "last_update_time": 0
        },
        "max_mana": 0,
        "delegator": ""
      }
    ],
    "out_delegation_total": 0
  }
]

find_rc_delegation_pools ["pool"]

gets the status of a pool (how much RC is in it):

[{
    "id": 0,
    "account": "pool",
    "asset_symbol": {
      "nai": "@@000000037",
      "decimals": 6
    },
    "rc_pool_manabar": {
      "current_mana": 100,
      "last_update_time": 1617278439
    },
    "max_rc": 100
  }
]

Here's a dump of all the commands and their parameters:

delegate_drc_from_pool(account_name_type from_pool, account_name_type to_account, asset_symbol_type asset_symbol, int64_t drc_max_mana, bool broadcast)
delegate_to_pool(account_name_type from_account, account_name_type to_pool, asset amount, bool broadcast)
set_slot_delegator(account_name_type from_pool, account_name_type to_account, uint8_t to_slot, account_name_type signer, bool broadcast)


find_rc_accounts(vector<account_name_type> accounts)
find_rc_delegation_pools(vector<account_name_type> accounts)
find_rc_delegations(account_name_type account) 
list_rc_accounts(account_name_type account, uint32_t limit, rc::sort_order_type order)
list_rc_delegation_pools(account_name_type account, uint32_t limit, rc::sort_order_type order)
list_rc_delegations(vector<account_name_type> account, uint32_t limit, rc::sort_order_type order)    

For the list_* commands, the sort orders are:

by_name
by_edge (edge is both an account and a pool)
by_pool
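
For instance, a hedged sketch of a list call from the cli_wallet (untested on my side; the empty string as the starting lower bound is an assumption based on how other list_* wallet calls typically page):

list_rc_delegation_pools "" 10 by_name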

Finally, if you want to reset your testnet, just delete the contents of the blockchain dir and rerun hived:
rm -rf testnet/blockchain/*

Here are a few things you can test:

  • lots of users delegating to a single pool (like 10k accounts)
  • one pool delegating to many users (see the sketch after this list)
  • trying to break it by using very large or negative numbers
  • use your imagination!
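
For the "one pool delegating to many users" case, here's a rough, untested sketch that generates cli_wallet commands for a batch of throwaway accounts (the account names, the batch size, and the 10 RC amount are arbitrary; it reuses the exact calls from the happy-path section and relies on initminer being the account creator so it can sign slot 0):

for i in $(seq 1 1000); do
  echo "create_account \"initminer\" \"user$i\" \"\" true"
  echo "set_slot_delegator \"pool\" \"user$i\" 0 \"initminer\" true"
  echo "delegate_drc_from_pool pool user$i {\"decimals\": 6, \"nai\": \"@@000000037\"} 10 true"
done > rc_test_commands.txt

You can then paste the generated commands into an unlocked cli_wallet session.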

Please do ask questions if you have any; I'd gladly reply to them.

Support what I'm doing

If you like what I'm doing, please consider voting on my new proposal:
https://peakd.com/proposals/167

I hope this works out and is sufficiently tested to make it into the next release. It has obvious utility and is powerful.

However, I always felt there is a place for simple RC delegation that works like regular delegation. That is, it moves some of the RC cap and charge rate from one account to another without delegating the rest of the HP abilities. If I have more RC than I need, I can delegate it to someone else. When I want to stop delegating, I set the delegation back to zero. As with regular delegation, it would only require one operation. This would solve 80% of the problem at roughly 1% of the complexity.

Anyway, I understand this is the direction you are going with development and I hope it works out.

I agree that this is a simple and clean solution, but for a large app or game it's not efficient. Suppose @peakd wants to delegate a small amount to each new user (and hopefully the number will go way up in the future). We'd have to micro-manage thousands of small delegations and keep making minor adjustments to the delegated amount according to how much each user is using.
Having a pool that is shared and only 'consumed' as needed is much easier and more efficient for this scenario.

Agree with that. One thing I will say is that RC prices are dynamic. If there is more usage from pools then the prices will go up. So you will need to delegate more per user to the pool than you might expect from average usage. The more apps/games do this the more prices will go up. This will also hurt low-HP users who aren't getting a delegation from an app/pool. It's also a negative for the price of HIVE if apps can get away with needing less HP to support their users (within reason; if the cost becomes too high then it becomes non-viable and they fail or leave, but there is a range).

Anyway, the points I just made are true but that wasn't really my intention in mentioning it here. My bigger concern is that the more complex solution becomes an obstacle in terms of completion and reliability (the latter includes ongoing maintenance; once people start using it, it will need to be maintained if problems crop up, and this can be a future burden) if it takes resources away from other development. No question, as I said, the pool model is more powerful and useful, but the utility gap between no RC delegations at all and simple ones is much, much bigger than between simple ones and pools IMO.

We'd have to micro-manage thousands of small delegations and keep making minor adjustments to the delegated amount according to how much each user is using.

Sounds very easily automated to me.

Mostly agree on all points. Just a few considerations:

If there is more usage from pools then the prices will go up. So you will need to delegate more per user to the pool than you might expect from average usage. The more apps/games do this the more prices will go up.

I understand the concern, but having more users or more interactions should be a greater benefit for the chain. I mean, it can be a healthy thing if the cost goes up because there are so many users that want to interact on the chain.

No question, as I said, the pool model is more powerful and useful, but the utility gap between no RC delegations at all and simple ones is much, much bigger than between simple ones and pools IMO.

I agree on this, but we waited 6 months for the HF and it's good to have some solid improvements when it takes so long for a new release. From my understanding most of the development is already done, and hopefully we'll be able to test it properly :D

Sounds very easily automated to me.

Fair point :)

I understand the concern, but having more users or more interactions should be a greater benefit for the chain

Of course that is true. I imagine you will get more usage (perhaps a lot more), and at lower cost too, either way, certainly compared to no RC delegations at all. Needing to delegate or buy HP along with RC just to enable usage is a much higher cost.

The point I'm making here is that the biggest part of the savings comes from being able to delegate RC at all. There is some savings from pooling, because users can share delegated RC, but with sharing comes a higher usage factor, and higher costs, all else being equal. So the benefit from the latter will be muted, and also comes with the tradeoff of increased cost to users not part of a pool.

None of that is necessarily a bad tradeoff. As you say a lot of it comes down to enabling more usage, which is a good thing overall.

Awesome work, as always! I can't tell you how excited I am for RC delegation pools to be a thing.

One thing I am curious about, though, is why users have to "set a slot" for an app before the app can delegate RC to them, rather than an app/account being able to just delegate RC from its pool to any other account without that account having to do anything to allow it.

It's because you can only have a finite number of inbound delegations (for performance reasons). So if there were no whitelist, someone could send 1 RC to everyone and completely block you from getting any other delegation.

There were other ideas, like being able to "undelegate" as a user, but ultimately that was the decision made in the original design, so I didn't change it since they probably had good reasons. Maybe we'll revisit it in a future hard fork if slots prove to be too cumbersome.

That makes sense, thanks for the response!

I'm trying to catch up on this discussion and this thread caught my attention :)
Having the user 'open' the slot before the delegation will work from my point of view, but maybe it's worth considering the following options as well:

  1. each user can have max 3 delegations at a time
  2. every time a new RC delegation is done we check if there is an empty slot for the receiver
    2a. if an empty slot is available just use it
    2b. if all slots are full, the delegation with the lowest amount is removed and the new one is accepted

It's tricky to do 2b because you can overdelegate, so it's hard to tell if an incoming delegation is "better" than the existing ones. For instance, I could delegate 1M RC to you while my pool is empty; your existing slot would get replaced even though the new delegation is not actually better.

Fair point. We just need to be sure that when an account runs out of RC (or most of it), there is still enough left to cast the set_slot_delegator trx. Hopefully that trx will cost almost nothing.

Yeah, it costs very little. But this is also where other accounts can set your slot for you: the account creator can change your slot 1 and your recovery partner can change your slot 2.

You say that an account can have 3 slots, and it can receive RC from all 3 of these slots (i.e. a single account can get RC from multiple pools). In what order is RC consumed from these three "connected" pools?

From first slot to the last.

The order is: account RC > slot 1 > slot 2 > slot 3

I think it's good enough for a first implementation to see how it goes, and then maybe we can experiment with consuming from the slot with the most available RC down to the slot with the least.

What about the RC you use yourself that you have from your own HP?
You use your own first then the 3 slots in order? Aka your own non-delegated-out RC first.

You use your own first then the 3 slots in order?
Yes

The idea is that if you have RC you shouldn't pull from the pool; don't take from others when you have some yourself, basically.

When we decide to allow a user to use our pool, we should be looking at their HP first. If we looked at their RC, there is a possibility they delegate out all their RC to a different pool and then try to sneak a few RC from us because they're low. Just envisioning potential scams for a potential day when RC is more valuable.

Oh yes definitely

disclaimer: this assumes that you are very technical and have a good understanding of how hive works.

That's me out of the game then lol but great work and best wishes with it :-)

Haha, thanks !

I second that sentiment. I know, though, that when it is all done and set up, it will likely be just like how we delegate HP to someone, or I hope it will be that easy.

I’m good with people and community.

Not so good with Technical stuff.

What I can tell is that our community has been waiting for the RC delegations option eagerly!


Good job. But so much text with comments to read :D