Core dev meeting #55

@howo

I'm just going to look through my merge requests and stuff that happened over the past month. So yeah, we ended up reverting the set roles and the title change over the Christmas break.

We found out that we needed to implement hard fork logic in hivemind if we wanted to make changes that are not retroactive. We spoke a bit about it with blocktrades and we didn't feel the added complexity was worth the feature for now, so it's been shelved. It will be revisited later.

I did a fix so that beneficiary payouts are included in the payout values. It has also been merged, thank you Dan.

Meanwhile, Dan, I know you left me a bunch of comments that I need to review today. As for the mock helper, I will add it today.

The beneficiaries change is still ongoing. The main reason is that I need the muted reason to be stored somewhere so that the API doesn't have to recompute it. It's mostly a performance issue.

On that note, I saw that you sent me a message about the muted reason three days ago, but I haven't had a chance to review it yet. Overall, I don't mind making the change so that it's empty, I mean, so that it does not return the options. I don't think it would take that much more work, and at least it'll make Bartek happy as a middle ground. So yeah, that's pretty much what I'm working on these days.

@blocktrades

Okay, sounds good. I didn't get a chance to talk to Bartek much before the meeting, but we've been interacting indirectly quite a bit over the past couple of days. We're still trying to get the new version of HAF out. I think we're getting very, very close, but we keep running into small issues along the way, so we've basically been doing a lot of testing and making tweaks and changes as we go. Overall, I hope this week is when we actually release, maybe even in the next couple of days if things go really well. The problem is that the testing times for some of this stuff are so long. We've got HAF replays down much, much quicker than before, but Hivemind itself still takes two days, even on our fast machine. So each cycle is roughly three days: first we have to sync HAF, which takes probably 14 to 20 hours, and then we've got to sync Hivemind, and that two days is still 48 hours no matter how you slice it. So it's three days before we can do a final test again. Realistically, I can't say we'll be at a release until near the end of the week, I'm afraid.

Let's see, as far as the things we've done, there's too much to really cover, but one of the most significant things related to what's going on right now is that we've seen a lot of load on the network. We figured out, just this weekend, that one of the issues was that we were being spidered by some bots making a bunch of account history calls. Those are some of the most expensive calls, depending especially on whose account they're checking: accounts with really long histories, especially block producers, can be kind of slow. So we've blocked some of the spiders for now, and things should be better on the overall network. Note that I'm referring to API nodes specifically here; this doesn't affect the blockchain network, but it affects API nodes. We were seeing a lot of traffic on our API node because of that, and some people reported slowness on the frontends as a result. So I think blocking the bots should have temporarily resolved the problem, although I haven't checked myself.

Moreover, we have a much better long-term solution. Some of the changes we've been making with HAF 2.0 have really sped up get_account_history calls in particular; in some cases they're as much as 100x faster than previously. So I don't expect any problems once we get the new version up. We'll probably be deploying it on our site this week, maybe even in the next couple of days, because we can divert account history traffic there before we divert hivemind traffic there, so we don't have to wait for hivemind to fully replay.

Beyond that, I don't really want to get into the details of the problems we've encountered and fixed, so it's probably better to see if there's anything more related to what everybody else is working on. One thing we should mention, if anyone's doing HAF work right now: we're changing the way operation IDs are stored. In the previous version of HAF, the operation ID was literally the number of the operation in the blockchain: if it was the 10th operation put into the blockchain, the operation ID would be 10; if it was the 100 millionth operation, the operation ID would be 100 million. I don't think any apps really rely on that value being exactly that; I think they mainly just rely on it being sequential. All of our apps, for instance, only rely on it being sequential. But if anyone's relying on it being an absolute number incremented by one, the change is that the operation ID is now the block number followed by the index of the operation within the block.

It's another way to basically say the same thing: we identify the block, and then the operation's position inside the block. Hopefully that doesn't cause anybody problems, but if it does, let us know. I'm going to leave it short there because I want to get back to work; I've got a lot to do today.

@brianoflondon

Can I ask a question on that? I use transaction IDs all over the place; you're not talking about the trx_id? Because I've come across a problem in my work, just because of stuff I did three years ago when I didn't know what I was doing: a transaction ID doesn't uniquely lead to an operation, because there can be multiple operations in a transaction. If I'm going to rewrite everything, which I am, it might be that I would rather use what you've just described, which is a block number plus the operation's position.

@blocktrades

If you're trying to uniquely identify an operation, that's exactly correct.

@brianoflondon

Or I'll have to do what I should have done from the start, which is add the operation ordinal onto the end of the trx_id.

@blocktrades

Actually, I should make one clarification too: the new operation ID is a little more than what I said; I was glossing over it a bit. The actual new operation ID is the block number, then the position in the block, like I mentioned, and finally we're also squeezing the type of the operation into the same field. So that one field has everything: not only the operation's position, but also its type embedded inside it. So it's like a, what's the term?
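
To make the packed-field idea concrete, here is a minimal Python sketch of how such an ID could be encoded and decoded. The bit widths are illustrative assumptions rather than HAF's actual layout:

```python
# Illustrative only: the exact bit layout HAF uses is not given in this
# discussion, so the field widths below are assumptions, not the real format.

BLOCK_BITS = 32   # assumed: block number kept in the high bits
POS_BITS   = 24   # assumed: operation's position within its block
TYPE_BITS  = 8    # assumed: operation type id in the low bits

def encode_operation_id(block_num: int, op_pos: int, op_type: int) -> int:
    """Pack block number, position in block, and operation type into one field."""
    assert op_pos < (1 << POS_BITS) and op_type < (1 << TYPE_BITS)
    return (block_num << (POS_BITS + TYPE_BITS)) | (op_pos << TYPE_BITS) | op_type

def decode_operation_id(op_id: int) -> tuple[int, int, int]:
    """Recover (block_num, op_pos, op_type) from a packed operation id."""
    op_type = op_id & ((1 << TYPE_BITS) - 1)
    op_pos = (op_id >> TYPE_BITS) & ((1 << POS_BITS) - 1)
    block_num = op_id >> (POS_BITS + TYPE_BITS)
    return block_num, op_pos, op_type

# Because the block number sits in the high bits, packed ids still sort in
# chain order, which is the "sequential" property apps actually rely on.
op_id = encode_operation_id(block_num=80_000_000, op_pos=5, op_type=1)
assert decode_operation_id(op_id) == (80_000_000, 5, 1)
```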

@brianoflondon

That also makes it good for filtering then.

@blocktrades

Yeah, exactly. That's why that was built in; it solves a problem we ran into. The other thing I should mention that's significant in terms of performance is that we're now clustering the account operations table. That's where we're getting a lot of the speedup I mentioned in the account history calls. Basically, when you cluster a table, it organizes the table in the same order as whatever index you specify, so you can put all the data of a certain type close together inside the table. That's really helpful for certain kinds of tables: if you're searching a table, it helps a lot if all the data you're most likely to query is in the same location on disk. So that's probably the biggest performance improvement we made.
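
For anyone who hasn't used it, this is roughly what clustering looks like in Postgres, driven here from psycopg2. The table and index names are hypothetical stand-ins, not HAF's real schema:

```python
# Hypothetical sketch: cluster a table so rows are physically ordered by an
# index. Table, index, and connection details are made up for illustration.
import psycopg2

conn = psycopg2.connect("dbname=haf_demo user=postgres")
with conn, conn.cursor() as cur:
    # Rewrites the table in the order of the given index, so rows the index
    # groups together (e.g. all operations of one account) end up on
    # neighboring disk pages and range scans touch far fewer pages.
    cur.execute("CLUSTER account_operations USING idx_account_operations_account")
    # Note: CLUSTER takes an exclusive lock while it rewrites, and it is a
    # one-time reordering; rows inserted later are not kept in index order.
conn.close()
```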

Well, that's on the database side. The other thing we did was speed up HAF itself quite a bit in terms of how fast it replays. A big change there is that we're no longer using inserts to put the data into the tables from hived; we're using another statement Postgres supports, called a COPY statement. For whatever reason, loading data that way is just much, much faster than doing inserts. So if you've got an application that needs to stream a bunch of data into a database fast, COPY can be a very useful alternative to inserts. So let's open it up to anybody else who has anything to talk about or any questions.
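
As a sketch of the difference, here is the same bulk load done with row-by-row inserts versus a single COPY, using psycopg2; the table is a made-up example, but the pattern is the point:

```python
# Sketch: bulk-loading with COPY instead of INSERT. The demo_ops table is
# hypothetical; only the COPY-vs-INSERT pattern matters here.
import io
import psycopg2

conn = psycopg2.connect("dbname=haf_demo user=postgres")
rows = [(n, f"payload-{n}") for n in range(100_000)]

with conn, conn.cursor() as cur:
    # Slow path: one statement per row.
    # for n, body in rows:
    #     cur.execute("INSERT INTO demo_ops (id, body) VALUES (%s, %s)", (n, body))

    # Fast path: stream every row through one COPY statement.
    buf = io.StringIO("".join(f"{n}\t{body}\n" for n, body in rows))
    cur.copy_expert("COPY demo_ops (id, body) FROM STDIN", buf)
conn.close()
```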

@mcfarhat

Yeah, quick updates from my end. We've been having those issues with the HAF API node, and that ticket is still open. I'm really happy with the loading speed now; that was a great milestone, because we've replayed, I think, maybe six or seven times. But when I left the node a few days ago, I think we reached a stage where everything was complete, yet the process still shut down. I left you a log there.

@blocktrades

Yeah, I did see it. I didn't know what was going on offhand, and I haven't seen that particular behavior, so I'm not sure yet what it is, but I will get back to it for sure soon. First I want to get out the new version that we have, and then I'll get back to it.

@mcfarhat

No worries. I'm actually going to try an old-fashioned replay without the whole log thing and see how it goes. If we don't replay via the log file, we don't need to make those changes towards the end, right? Modify the environment file, stop the replay, and then do those things again. I'm guessing something related to those changes is what happened to us on the last run.

@blocktrades

Yeah, it could be. You can do a replay without using the file too; you can still do a replay, just without using the script.

@mcfarhat

Exactly. That's what I'm gonna try to do.

@blocktrades

Okay, yeah, that's probably a reasonable thing to try, because I haven't tried the script myself in quite a while; I've been doing manual runs myself.

@mcfarhat

Okay, cool. Another thing we're doing with Actifit is a Java API for Hive. We built it because we use native development for our Android app. It's open source, of course, and we're building it into a complete Hive API that can be used elsewhere, so hopefully we should be able to release it within the year.

@blocktrades

Okay, is that in a repo somewhere now, or is it just internal right now?

@mcfarhat

Currently it's public on our repo under Actifit, but it's bundled within the app, the Android version. There's a utils class inside it with a few functions that help out with some of the functionality, but we intend to completely rebuild it and make it separate, so that anyone who wants to build on it and use it as a public Hive API for Java or Android can do so.

@blocktrades

Cool. Let's see, Bartek, I'll mention one thing: I did find an issue with the latest change I made to the clustering code, so I'm going to push a fix after this meeting. It's not a big deal; it basically means that if you shut HAF down and restart it, it'll cluster again. I don't think it'll cause any problems for the ongoing tests we have, but I'll push a fix so that it won't. So if you're running a test now, don't shut it down and restart it until I push the fix.

Okay, anyone else have anything? Bartek, did you have anything to report on your side? I saw you had merged the changes into the operation ID branch.

Bartek

Yes, yes. I have changed the way they are generated, according to what we decided. We also merged a few other changes today, because I'd like to have a common version. We have also merged some quite important changes related to HAF instance maintenance and the automatic attaching of applications, which earlier happened prematurely: applications which started when the HAF instance was not yet ready for live sync weren't attached correctly. And there are also some changes related to speeding up applications.

Actually, it was quite a complex problem, related to locking between the hived calls that change the irreversible block and the application code that consumes that information and data. The description in the merge request and the initial motivation didn't make it seem so important, but I think this is quite an important change which can significantly help us.

@blocktrades

So it could solve our locking problem, basically, hopefully. I was going to say one other thing: I did review the change you made for passing the context to the wait-for-ready-instance call, and it looked good. The only thing I wanted to add is that we use that call for some other applications that don't have contexts. I assume in those cases we'll just pass an empty context array, and it looks like that would be fine.

Bartek

Okay, so of course we can discuss other cases also.

@blocktrades

Yeah, I think in the balance tracker, the code that installs it checks for that before it creates the balance tracker's indexes. So we'll need to modify that call to pass, I think, just an empty array. I think it'll work; I haven't tried it yet, but I'll try it after the meeting. And then, I guess: do we think the locking fix will solve the hivemind problems, or...?

Bartek

Yes, I hope so. Anyway, one of our guys wrote quite complex multi-threaded data provider code a few years ago. I'm going to simplify it, because it was optimized for network calls back when hivemind's data was supplied through hived API calls. Right now we have a much more efficient way to get data from the database, so there is definitely no need for such complex data access there. And I think part of hivemind's problems, the locking problems, can also be related to the complex nature of those calls: a few threads can still be active when hivemind enters live sync, and they can still have an open transaction. That could explain why this problem appears only on hivemind instances and never on other applications. I hope it will be simplified quite soon.

@blocktrades

Okay, so that work has already started?

Bartek

Yes, today. Marcin started analyzing it, and I hope the work is progressing, but the code is quite complex; when I looked at it yesterday, it was just too complex.

@blocktrades

I know that code; I've looked at it before too. It's way too complicated. It's got multiple threads and it caches bunches of data blocks.

Bartek

Yes, and concurrent queues. But this code really did improve the speed of all of hivemind.

@blocktrades

I mean, don't get me wrong, I understand it was important at the time; fortunately we don't need it anymore. So, okay, sounds good. That made me think of something else. What was it? Did you see my message about the go replay software?

Bartek

Yes, we saw it. We will probably discuss it tomorrow at the office, and we will try to find someone to analyze it.

@blocktrades

Okay, excellent. I think it's going to be really important to check our API calls' responses, but the thing is, they're going to differ. Eric's done a little bit of this work already, and it was clear that there were differences, but a lot of the differences aren't important. So what we need to do is figure out which differences matter, and for that we need somebody with some basic knowledge of the API calls to analyze the results.
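
A sketch of what that comparison tooling might look like: replay one call against two nodes and diff the responses after filtering out differences already known to be harmless. The endpoints and ignore list are placeholders; deciding what belongs in the ignore list is exactly the manual analysis described above:

```python
# Sketch: compare one JSON-RPC response from two API nodes, ignoring fields
# whose differences don't matter. URLs and the ignore list are placeholders.
import json
import requests

OLD_NODE = "https://old-node.example.com"
NEW_NODE = "https://new-node.example.com"
IGNORED_KEYS = {"id"}  # e.g. request ids are expected to differ

def call(node: str, payload: dict) -> dict:
    return requests.post(node, json=payload, timeout=30).json()

def scrub(obj, ignored):
    """Recursively drop keys whose differences are known to be harmless."""
    if isinstance(obj, dict):
        return {k: scrub(v, ignored) for k, v in obj.items() if k not in ignored}
    if isinstance(obj, list):
        return [scrub(v, ignored) for v in obj]
    return obj

payload = {"jsonrpc": "2.0", "id": 1, "params": [],
           "method": "condenser_api.get_dynamic_global_properties"}
old = scrub(call(OLD_NODE, payload), IGNORED_KEYS)
new = scrub(call(NEW_NODE, payload), IGNORED_KEYS)
if old != new:
    print("responses differ:")
    print(json.dumps(old, indent=2, sort_keys=True))
    print(json.dumps(new, indent=2, sort_keys=True))
```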

Bartek

So definitely we will focus on that. But today we are getting everything ready.

@blocktrades

I think we need to set some things up before we can test it. But if we have anybody free to work on it, I figure we can start throwing somebody at it.

Bartek

So, I think someone will be fine for that. Okay, great, we can do that. Yes, I think that's all I can say now about the most important work. Then we need to wait until tomorrow to see the effects of the standard replays and testing.

@blocktrades

So I've got one replay finished, I think, or just about finished, for the new indexing stuff on steem16, because it's the fast machine. I'll be testing that soon, and I'll get back to you on how it goes. Other than that, we'll see.

How are the other projects going? I saw some commits going in related to, I think, Clive, or maybe the wax stuff, and a few others.

Bartek

Yes, yes, yes, that's some common library. Maybe it will be an interesting subject for @mcfarhat, who is trying to integrate Android stuff with Java. In the past we had problems integrating Hive with Python and JavaScript: we tried to implement our Clive wallet, which is written in Python, and of course we have a lot of TypeScript products, which are frontends. So we developed a module which is cross-language and provides common Hive functionality in each environment. Of course, there is no support for Java yet, and actually I'm not sure that environment is needed there, but we are iteratively adding support for features required mostly by our frontend developers, because we'd like to eliminate the duplication of code in each frontend application and put the common stuff there: asset formatting, operations formatting, and other lower-level processing code. Most of that is shared with the C++ code of the blockchain, because we used WebAssembly technology to provide it, and it is also shared with the Python environment through Python extensions. So yes, work has been committed there lately because we needed some parts of the code to be ready for the block explorer, to display operations and asset values in the frontend.

@blocktrades

Ah, okay. So you're making API calls to that code for the block explorer?

Bartek

Yes. Actually, those are not API calls; those are our regular functions, which then...

@blocktrades

Oh, they're like computational calls.

Bartek

Yes, yes, yes. There are conversions of assets in NAI form, from the numeric asset identifier to a textual version.

@blocktrades

So previously, if people had to do something like this, they were basically writing this kind of code themselves in, I guess, JavaScript.

Bartek

Yes, exactly. And they can easily integrate such code into their applications, with a guarantee that it is done correctly and works. So this is going to be useful for basically a bunch of the frontends, for the libraries in general.

@mcfarhat

Yeah, I would love to check it out. If you can share a link, if it's a public repo.

Bartek

Yes, of course, it is in a public repo. You can also take a look there because, a few months ago, we created Google ProtoBuf definitions of the operations in Hive. There is also code generated for TypeScript and Python from those ProtoBuf definitions, which are supplemented by comments and other important information. The ProtoBuf definitions are stored inside the hive repo, inside the protocol subdirectory, but they are shared directly with the wax repo and result in the generation of Hive representations specific to each language. If you are interested in using operations or constructing them somehow, you can try to generate Java stubs from those definitions and use them directly. This way you will have access to maintained definitions of all the operations, instead of your own definitions, which would become outdated. That is especially important for the virtual operations, which can change sometimes.
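
For anyone wanting to try that from Java, generating stubs is a standard protoc invocation. A sketch, with an assumed location for the definitions (check the hive and wax repos for the real paths):

```python
# Sketch: generate Java classes from the Hive ProtoBuf definitions with
# protoc. The proto directory below is an assumed path, not a confirmed one.
import pathlib
import subprocess

proto_dir = pathlib.Path("hive/protocol/proto")   # assumed location
out_dir = pathlib.Path("build/generated/java")
out_dir.mkdir(parents=True, exist_ok=True)

subprocess.run(
    ["protoc",
     f"--proto_path={proto_dir}",
     f"--java_out={out_dir}",
     *map(str, sorted(proto_dir.glob("*.proto")))],
    check=True,
)
```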

@blocktrades

So just to confirm, this is the wax library, correct?

Bartek

Yes, it is. Everything is in this one library, because we decided to have a single point where all the languages can get support for Hive.

@blocktrades

So you can just find it in the Hive group, under wax.

@mcfarhat

Okay, I'll have a look at it.



What a tremendous job they do to keep #Hive at the top of blockchain technology while at the same time achieving the greatest possible development.

This is fantastic
I'm not familiar with the other accounts, but @brianoflondon is doing a great job
Kudos to him

May I ask you, as a core developer of the Hive chain, whether you believe the VSC proposal for a layer 1 smart contract is feasible or not?

$WINE