
Nah, it takes too much time. I should be doing stuff, not writing about it :o) But let's play a bit.

Even the simplest changes are not as straightforward as one would think. For example, in order to make the hard limit more flexible, we first have to turn the hardcoded part into a generalized formula.

Let's define h = HBD supply (corrected for treasury balance), v = HIVE supply. The current code sets the minimal price at 9*h HBD per v HIVE. Check: converting h HBD at that price gives h * v/(9*h) = (1/9)*v HIVE, so virtual supply = (1/9)*v + v = (10/9)*v; the HBD part (1/9)*v in relation to virtual supply (10/9)*v is ((1/9)*v) / ((10/9)*v) = 1/10 = 10%, exactly as we wanted.
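Just to sanity-check that arithmetic, here is a tiny standalone C++ snippet (example supply values picked so the division comes out even; this is not the actual hived code) that reproduces the 10% relation:

```cpp
// Minimal numeric check: with a floor price of 9*h HBD per v HIVE,
// the HBD part of virtual supply is capped at exactly 10%.
#include <cstdint>
#include <cassert>

int main()
{
  const int64_t h = 30'000'000'000;   // example HBD supply, 3-digit precision
  const int64_t v = 360'000'000'000;  // example HIVE supply, 3-digit precision

  // convert h HBD to HIVE at the floor price (9*h HBD per v HIVE);
  // do the multiplication in 128 bits, like asset*price does on chain
  const int64_t hbd_as_hive = static_cast<int64_t>(
    static_cast<__int128>( h ) * v / ( static_cast<__int128>( 9 ) * h ) ); // == v/9

  const int64_t virtual_supply = v + hbd_as_hive; // == (10/9)*v

  assert( hbd_as_hive == v / 9 );
  assert( 10 * hbd_as_hive == virtual_supply ); // HBD is exactly 10% of virtual supply
  return 0;
}
```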

A generalized price formula could look like this: (10000/limit - 1) * h HBD per v HIVE, with limit expressed in basis points. For limit = 1000 (10%) we get exactly the same price as before. It also looks nice for 20% (4*h HBD per v HIVE), but for 30% integer division truncates 10000/3000 to 3, so we get 2*h HBD per v HIVE - exactly the same as for 33%. Not good.
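A quick standalone illustration of that truncation (plain C++, nothing Hive-specific; the loop values are just the basis-point limits discussed above):

```cpp
// Shows why (10000/limit - 1) breaks down for limits that don't divide
// 10000 evenly - the integer division truncates before the "-1".
#include <cstdint>
#include <cstdio>

int main()
{
  for( int64_t limit : { 1000, 2000, 3000, 3300 } ) // 10%, 20%, 30%, 33% in basis points
  {
    int64_t multiplier = 10000 / limit - 1; // truncated results: 9, 4, 2, 2
    std::printf( "limit=%5lld bp -> price floor = %lld*h HBD per v HIVE\n",
                 (long long)limit, (long long)multiplier );
  }
  return 0;
}
```

Both 3000 and 3300 basis points end up with the same 2*h multiplier, which is exactly the problem described above.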

An alternative price formula would look like this: (10000-limit) * h HBD per limit*v HIVE. It has the advantage of pushing the inevitable truncation during integer division to the very end (when the price is multiplied by an asset). However, it means that we're expressing the price using much larger values. It is true that, in general, a price expressed as a per b is equivalent to a price expressed as 10000*a per 10000*b - as long as we can still fit the bigger numbers in 64-bit variables.

We are using total supply numbers inside the price definition. Current HIVE supply is ~350 million, so let's triple that number to account for future inflation and a safety margin. The amount of HBD is much smaller. We also have to consider that the supply is expressed with 3-digit precision, and add another 4 places to account for 10000 basis points (100%) of potential limit value. We get 10^(9+3+4) = 10^16. A signed 64-bit integer can hold up to ~9*10^18, so there is still some space left... at least as long as we are not doing another similar percentage scaling of the price somewhere. The only place I currently remember where such scaling happens is during calculation of the modified price for collateralized_convert_operation; however, we are not actually scaling the price itself there - the scaling is done together with the multiplication by the asset (multiply_with_fee), and that calculation happens on 128-bit integers, so we are safe. But to make sure we are not going to run into trouble, all the places where a price is constructed need to be found and verified (cheers for someone who thought constructing a price with the use of operator / was going to be a good idea - that needs to be eliminated first).

On top of that, it might be a good idea to also assume (static_assert) that the limit is expressed in whole percentages, so instead of (10000-limit) * h HBD per limit*v HIVE we can use (100-limit/100) * h HBD per (limit/100)*v HIVE, which gives us two more orders of magnitude of margin.
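To make that last idea concrete, here is a rough sketch of what the percent-based variant plus the static_assert could look like. All names here (HBD_HARD_LIMIT_BP, price_sketch, min_hbd_price) are made up for illustration and are not taken from hived:

```cpp
// Sketch of the percent-scaled price floor with a compile-time guard on the limit.
#include <cstdint>

constexpr int64_t HBD_HARD_LIMIT_BP = 1000; // hypothetical config value, in basis points

static_assert( HBD_HARD_LIMIT_BP % 100 == 0,
               "hard limit must be expressed in whole percents" );
static_assert( HBD_HARD_LIMIT_BP > 0 && HBD_HARD_LIMIT_BP <= 10000,
               "hard limit out of range" );

constexpr int64_t limit_pct = HBD_HARD_LIMIT_BP / 100; // whole percents

struct price_sketch { int64_t base; int64_t quote; }; // stand-in for the protocol price type

// price floor expressed as (100 - limit%) * h HBD per (limit%) * v HIVE:
// same ratio as (10000 - limit_bp) * h per limit_bp * v, but both sides are
// two orders of magnitude smaller.
price_sketch min_hbd_price( int64_t hbd_supply, int64_t hive_supply )
{
  // worst case here stays around 10^14 (10^9 HIVE * 10^3 precision * 10^2 percent),
  // comfortably below the int64 maximum of ~9*10^18
  return { ( 100 - limit_pct ) * hbd_supply, limit_pct * hive_supply };
}
```

For limit_pct = 10 this yields 90*h per 10*v, i.e. the same 9*h HBD per v HIVE floor as the current hardcoded version.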

The above is an example of the amount of work required to change a couple of lines of code :o) And it doesn't even touch the unit tests, which tend to go belly-up whenever you sneeze near the code, especially when they are designed to test the edge cases.

> It also looks nice for 20% (4h HBD per v HIVE), but for 30% we get 2h HBD per v HIVE - exactly the same as for 33%.

Is there anything wrong with using 20% or 25% or 33 1/3% and keeping the "couple of lines" of code change minimal? The number seems mostly arbitrarily picked - I don't see the value in fretting over getting exactly 30% instead of 25% or 33 1/3%.