Core dev meeting #48 (with full transcript)


As usual now, I've included a rough transcript:

@howo

On my end, over the past month, as I said last time, I spent a lot of time figuring out hivemind and doing some onboarding for @bloomyx. I did some back and forth with Bartek on my recurrent transfer pull requests, and I spent the better part of last week trying to get an overload on the recurrent transfer function in the CLI wallet. I don't think it's possible, so either we keep the status quo, meaning we add the pair ID at the end, or we create a new command, something like recurrent_transfer_pair_id. The reason we don't want to keep the current status quo is that it means updating a bunch of the existing tests, which is the way it's currently done, but Bartek doesn't like it too much, which I get.

Apart from that, we spent a bunch of time with @bloomyx on hivemind and HAF. I think at this point I've got everything related to restoring a database working correctly. There were a ton of issues where, when you restore a HAF database with hivemind, it basically does not restore hivemind, or HAF is not restored, and you need to enable and disable the indexes a bunch of times while you restore. I ended up mapping all of the state transitions so that I found the correct way to restore and get the correct state every time. Apart from that, I wrote a bunch of documentation, and that's about it for me.
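To make the two options above a bit more concrete, here is a rough, purely illustrative sketch of what the second option, a separately named wallet command that takes the pair ID, might look like when called from a script over the cli_wallet's JSON-RPC interface. The method name, parameter order and endpoint below are assumptions for illustration only; the real command name was still being decided at the time of the meeting.

```python
# Hypothetical sketch only: what a separately named recurrent-transfer command
# could look like when called over the cli_wallet JSON-RPC interface.
# The method name "recurrent_transfer_with_pair_id", the parameter order and
# the endpoint are illustrative assumptions, not the actual API.
import requests

WALLET_RPC = "http://127.0.0.1:8093"  # example cli_wallet HTTP RPC endpoint

payload = {
    "jsonrpc": "2.0",
    "method": "recurrent_transfer_with_pair_id",  # hypothetical name
    "params": [
        "alice",        # from
        "bob",          # to
        "1.000 HIVE",   # amount per execution
        "rent",         # memo
        24,             # recurrence in hours
        12,             # number of executions
        7,              # pair ID distinguishing this recurrent transfer
        True,           # broadcast
    ],
    "id": 1,
}

response = requests.post(WALLET_RPC, json=payload)
print(response.json())
```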

@blocktrades

Okay, well, I'll probably make a post with more details, because obviously a lot has gone on since the last meeting, but just some highlights: we are getting ready to release a new version of hived and HAF, and probably not too long after that a new release of hivemind as well. In the new release, I guess one of the important things for everybody will be that we're officially supporting Ubuntu 22 now; that'll be the base supported version. So if anyone still has nodes running 18, or even 20, maybe shift to 22. One of the things we've done along the way, and I think it's all complete now, is that we've got Docker images for doing the building, so they're already on the correct version. The idea is that you'll be able to check out one of these Docker images and actually develop under Docker if you want. You don't have to, but it's a convenience that you can get an environment set up for doing development.

In terms of other things coming in the new version of hived, we've got some new virtual operations to basically provide more information that was requested, for things like treasury operations. We've also got code prepared for hard fork 28; in other words, it's not actually going to do hard fork 28, but we've got some of the preliminary code done in terms of doing the actual fork itself. And we've been working on some changes to detect, when you upgrade, whether there's going to be a conflict with your existing shared memory. It's an issue people sometimes run into: they change their plugins and then they don't have proper shared memory anymore, because different plugins store different data inside shared memory, and the same applies to snapshots as well. So we've added some code that allows you to detect and report these kinds of conditions, so you don't have to go through a whole replay just to find out things are messed up. We've also fixed a bunch of bugs that have been reported in hived over time, some of them related to account history. Some of the changes are also memory enhancements: we're lowering memory requirements a little bit more, and part of the driver for that is to eventually make RC (resource credits) consensus. It's not going to be now, but we know we're probably going to make it consensus eventually, so we're preparing for that. Those are all the major changes in hived I can think of offhand. Like I said, there's more, but I'll cover that in a post.

The other most significant changes, I guess, are in HAF itself. I mentioned a while back that we've been working on storing the operations data in a binary format rather than a text format, and that's done. It's been done for a bit, but we've been testing it with various HAF apps like HAFAH and hivemind, and that's all looking good. So binary storage of the operations will now be part of the official release of HAF. That will significantly reduce the amount of storage used by the HAF database, especially for ones that aren't filtering and are grabbing all the operations, of course.

We also have some more data stored in HAF now. One of the things we've been developing is a block explorer, which I think I've mentioned several times, and that's been useful in helping us figure out other information we might want to add to the HAF database itself, since this new block explorer is basically designed to be easy to run if you already have a HAF database. So as part of that, one of the changes is that we're now storing some of the dynamic global data in HAF, because that was useful for the block explorer and we could see how it'd be useful for other HAF applications. We also wound up storing hard fork data in the database, so that a HAF application can see the points at which hard forks took place in the history of the blockchain. The other thing we've been doing in HAF is related to what howo just mentioned: we've been working on dumping and loading HAF instances.

Beyond that, we've been doing a lot of work on testing and automated builds. We've been trying to really smooth out that process and also make it faster, because we've added a ton of tests in the past month or two, and we didn't want our test times to keep increasing, since that means it takes longer for us to verify any changes. So we've had to improve the speed at which we're able to run our tests, just so we don't put an undue burden on how long it takes before you can verify your code changes. Those are probably the main things.

Like I said, there are a lot of small things, but I'll cover those later. At this point I'd usually hand over to @imwatsi, so let's keep with tradition and I'll switch to him.

@imwatsi

So I pushed out a bunch of updates for GNS, which is the notification system that's based on HAF. I added support for user preferences, and there are seven new notification types now available. I also added Keychain login for the preview site, so you can log in with Keychain and see the notifications that are available for your account. And I made a bunch of performance improvements there as well.

I've also been working on the DAO dashboard. I made a post about it earlier on, and now I'm doing final tests. I think we should be ready to release that this week, as well as Free Space, which is the dapp that's part of the Free Beings DAO we're working on. Development is going well there too. I also released the white paper for the token protocol I mentioned a while back. It's on GitHub, and I'll also share the link on Mattermost. If you could give that a look and give some feedback when you get the time, that would be helpful. And yeah, that's pretty much the update from my side.

@arcange

I have just one question for @blocktrades: as you mentioned changes in hived, HAF and hivemind, what will be the impact of those changes for API node operators?

@blocktrades

On the HAF side, there are some fairly minor changes. Let me see if I can pull up the details here again. So there's a transaction counter in the rocksdb plugin, and it didn't count some transactions, I think, so that transaction counter got improved. And apparently we also fixed impacted account collection for comment payout beneficiaries. I think basically we found that this operation wasn't being collected by account history, but it was by HAFAH, and we noticed that when we were doing comparisons between the two. I think that's what happened, if I understood things correctly.

@arcange

Will we have to replay hived and re-sync hivemind?

@blocktrades

Yeah, I mean, I think, to get the correct account history, there definitely needs to be a replay. That's for sure.

@bartek

Yes, exactly. To get correct account history, you should perform a full replay of hived if you are still using the account history rocksdb plugin, or perform a full HAF replay if you would like to use HAF and HAFAH. The fixes related to account history mostly concern very specific cases where certain kinds of blocks are required, and right now that happens very rarely, because OBI makes blocks immediately irreversible, so there isn't a big chance that such problems will appear. Also, the set of accounts impacted by comment payouts is probably not big. But of course, it's better to have the full history: we found this very old bug and it has already been fixed.

As for other potential compatibility issues for HAF applications that could be caused by switching to the binary operation format: the only thing we identified while porting HAFAH, hivemind, the HAF block explorer and the balance tracker was, at most, adding some casts from the operation body column to the JSON or JSONB type. And usually these were very simple and easy changes to make, if any changes were needed at all, because we tried to keep everything compatible with how it was previously.
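For HAF app developers, the adjustment Bartek describes is usually just that single cast. As a minimal sketch (the connection string, table and column names below are assumptions on my part, not something stated in the meeting), a query that used to read the operation body as text would simply cast it to jsonb:

```python
# Minimal sketch of the kind of change Bartek describes: reading operation
# bodies from a HAF database after the switch to binary operation storage.
# The connection string and the "hive.operations" table / "body" column names
# are assumptions -- adjust them to match your actual HAF schema.
import psycopg2

conn = psycopg2.connect("dbname=haf_block_log user=haf_app")

with conn.cursor() as cur:
    # Casting the body to jsonb lets the rest of the app keep treating
    # operations as JSON, regardless of how HAF stores them internally.
    cur.execute(
        """
        SELECT block_num, body :: jsonb
        FROM hive.operations
        WHERE block_num BETWEEN %s AND %s
        """,
        (1, 1000),
    )
    for block_num, body in cur.fetchall():
        # Assumes the usual {"type": ..., "value": ...} operation layout.
        print(block_num, body.get("type"))

conn.close()
```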

@arcange

Okay, and do you have any estimate of when it will be rolled out?

@blocktrades

I would say next week is my guess. We could probably release some of it earlier, but we still have to make the changes to HAF and hivemind and verify those, so I don't think we should. I think it's probably best to release it all at once, is my feeling. I don't know what Bartek thinks, I haven't talked to him about this, but my feeling is it should be next week at the earliest, and he may want to do it later. So I'll ask him what he thinks too.

@bartek

Well, I think next week is realistic. I don't want to say I'm very sure, because that can make strange things happen, but everything should work correctly because we have already tested it. All I'm doing right now is collecting the changes to write the release notes and preparing the merge for tomorrow. So hopefully, as Dan said, it can be next week, maybe even earlier; we'll see.

@arcange

Okay. And has it been tested in production?

@bartek

Yes, it was tested in production and it worked fine, as I remember, for over a week without problems. It ran hivemind and HAFAH on a single server behind what we can call the production hive.blog site.

@howo

All right, so moving on to the few topics I had. I didn't have much apart from, well, the CLI wallet thing. I don't know if you remember, on the recurrent transfers you wanted to try overloading the CLI wallet definition, but it turns out that doesn't work. So would you rather I add a new CLI wallet command for recurrent transfers that includes the pair ID, or would you rather keep it like this, where we update the old tests?

@bartek

Okay, and did you update the test tools also?

@howo

Yes, the test tools have been updated, but basically the overload does not work well with the documentation generation in the CLI wallet.

@bartek

So maybe the best solution will be to create a slightly differently named method for the new recurrent transfer, and solve the issue that way.

@howo

Yeah, that's what I thought too. Okay, cool, so I'll change that this afternoon and put it up for review. I also still haven't fixed the issue where the CI runs for a very long time. I don't know why that is, but we can look at that offline, I think.

@blocktrades

So I have a question about that. Is it just your branch that's doing it?

@howo

Yeah, it's very odd; the issue is that I cannot reproduce it locally.

@blocktrades

So it's just when it's run on the automated system?

@howo

Yeah. Initially I thought it depended on the build system, because I know there are different builders going around, but even the beefiest ones that run with 64 cores, or 48 cores, still get stuck on that. And basically the timeout is 15 minutes, when master, I mean develop, runs in 10 minutes. So there's an extra five minutes that I don't quite know where they're getting spent.

@bartek

Okay. Well, I think the best thing is to try to run such tests locally and see what happens there. If you would like to have a prepared environment for building and testing Hive applications, we have prepared a Docker image with all of it set up inside; you can check the builder image repository. After building the given image, you will get the exact same environment as the one used in the CI process, because the CI image is used as the base of this image. Maybe you have introduced some slowdown and that's why the CI process times out every time. We do sometimes observe job timeouts, but usually those are a few jobs working on the edge, with a timeout near 50 minutes, that normally finish in, for example, 25 minutes; when the runner is loaded, the time can of course increase and exceed even that limit. Such jobs are otherwise stable and we didn't identify any problems or unknown behavior there. So the reason must be somehow related to your code, and the best thing will be to identify why, and what it is.
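As a rough sketch of the workflow Bartek describes (the image tag, paths and test command below are placeholders, not the real values, which you'd take from the builder-image repository mentioned just after this), running the same tests locally inside the CI base image might look something like this:

```python
# Rough sketch of reproducing the CI environment locally with the builder
# image. The image tag, the repository path and the test command are
# placeholders -- use whatever the builder-image repository actually defines.
import subprocess
from pathlib import Path

IMAGE = "registry.example.com/hive/builder-image:latest"  # placeholder tag
REPO = Path.home() / "src" / "hive"                       # local checkout

# Run the tests inside the same base image the CI pipeline uses, so that
# local timings are comparable to pipeline timings.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{REPO}:/workspace",
        "-w", "/workspace",
        IMAGE,
        "ctest", "--output-on-failure", "-R", "recurrent_transfer",
    ],
    check=True,
)
```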

@howo

Yeah, pretty much. My current task is basically disabling more and more of my code and running the CI to find the actual place where all that time is being spent.

@bartek

You are talking, of course, about the recurrent transfer implementation?

@howo

Yeah, which is strange, because the only thing I can think of is maybe the added index.

@bartek

Yes, but maybe looking up some recurrent transfers is slower right now. I don't know, I haven't looked at it lately, so we need to look into it.

@blocktrades

Just to follow up on the builder image stuff he's talking about: there is actually a separate repo for the builder image, and you'll find it there, it's just called builder-image.

@borislavzlatanov

Hi everyone, can you hear me? Yes. Well, I don't know if I missed the part of the dev sync where everyone gives an update, so I just wanted to give a little update. I'm working with the peakd team, where they have the peakd open projects; there are several of those projects, so there are several developers and we're working in parallel on various projects.

So I guess one little update I can give about the things I've been working on: I've just completed something called Hive OpenStats, which is a statistics platform for Hive. It's open source and openly available for everyone to build on their own or to use wherever they want.

Right now I have started working on a HAF app, basically porting the hivemind plugin that they have developed into a HAF app. Bartek has been helping me a little bit by answering some questions I had about how things tie together with the whole stack, so that's been helpful. And I guess this is just kind of a start, a small taste of how to develop a HAF app, so we'll see where I go from there, but so far so good.

@imwatsi

If you have questions, you can also reach out to me, I'll gladly lend a hand on HAF-related matters.

@bartek

As for developing HAF applications, I'd like to mention that we have added some boilerplate Docker Compose definitions which can simplify the setup of HAF applications and of the HAF instance itself. These definitions are maybe at a very early stage, but we try to define them to allow multi-stage usage, to cleanly separate the backend and application parts, and to support environment-based deployments, to make things easy and stable. So if someone is trying this, I think it's worth looking at that initial Docker Compose setup.

@borislavzlatanov

That sounds interesting. Are you developing that in any particular repository?

@bartek

For this example, the Compose files are located in the HAF repository, probably under some examples directory.

@blocktrades

And just to clarify, we're talking about the develop branch, not the production branch. Actually, in general, if anybody's doing development on HAF, I think it makes sense to work in the develop branch. If you're developing a HAF app, maybe some people will disagree with me, but obviously most of the development eventually makes it into a release, and my view is that a release of HAF is for production apps that are being used by people. When you're developing an app, I think it probably makes sense to use the develop branch for your own app. I think that's frequently going to make your app most prepared for the future of HAF, and it also gives more feedback to us, so I think it's helpful for both sides.


Thanks for providing the full text of the conversation, it helps me perform AI recaps so that I can understand what you devs are talking about hahahaha

Though it was having a bit of an issue with the length, so I only did the first 60%, then asked it some questions on the second half to see how the discussion with Boris went and the peakd open source stuff.

(screenshot of the AI-generated recap)

Hahahahahahahha I might just post that in the future tbh 😂

It would be nice if you could proofread these in the future and correct the grammar to make them look a bit more professional and readable.

I basically decided to go for exactitude rather than putting words in other people's mouths; it's a transcript, not a summary. My post could almost be used as a subtitle track, and a bunch of the core devs are not native speakers, so obviously some spelling mistakes and grammar issues creep in.

Did you transcribe all of that manually?

I'm using a tool called Whisper (https://openai.com/research/whisper). It gets 90% of the work done, and then I edit the rest by hand for the stuff it doesn't get.

Interesting thanks

Ok, it's your blog, up to you. You do provide a lot of useful info with insight into the future so I guess we'll take what we can get. Thanks for the reply.

Thanks for the feedback! I think it's still better than what I did before, which was to provide quick summaries. In the end, the goal is still for people to listen in instead of reading.

The plans look okay, especially the ones from BlockTrades, for which more details will be provided in the future. Good 👍 job.

I read to the end...
