Core dev meeting #52


@howo:

All right, so on my end, I am still working on communities. I spent a lot of time trying to find a bug in my code, and in the end, it turns out it was an old issue in Hivemind itself. That's why I talked to you yesterday, Dan.

Long story short, subcomments are not being marked as part of a community, so the community ID is null even though it shouldn't be. So I'm working on a fix for that.

@blocktrades

What is the impact? Are moderators still able to moderate comments and things like that?

@howo

The impact is fairly negligible. To be honest, I haven't tried all of the community features on top of it, and I know that when you query a post through the APIs, everything under it tends to be treated as part of the community. So it's not too much of an issue in practice, but it is an issue for data integrity, because you have content marked as not being part of a community even though it should be. So I assume some people, like me, could get confused by it. And also, I'm not fully sure because I haven't checked, but I would assume that some functions may not work.

@blocktrades

Yeah, I guess the other thing we have to be sure of is that such a change doesn't somehow break something else, right? Maybe something expects only posts to be marked this way, so something could break as a result.

@howo

To be a bit more technical: the way we currently know whether a post or comment is part of a community is by checking the parent permlink, which is like the first tag, so it would be something like hive-1111-whatever. For a post, the parent permlink is correct.
For a first-level comment, it also resolves correctly, because the parent post has the correct tag.

But if you have a depth-2 comment, its parent permlink is going to be its parent comment's permlink, and no longer the community name.
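To illustrate the mechanism howo describes, here is a minimal sketch (the function and names are illustrative only, not actual hivemind code):

```python
# Minimal sketch of the detection logic described above; names are
# illustrative, not actual hivemind code.
from typing import Optional

def community_from_parent_permlink(parent_permlink: str) -> Optional[str]:
    """Return the community name if the parent permlink is a community tag."""
    if parent_permlink.startswith("hive-"):
        return parent_permlink  # e.g. "hive-1111..."
    return None

# Top-level post: the parent permlink is the community tag, so it resolves.
assert community_from_parent_permlink("hive-111111") == "hive-111111"

# Depth-2 comment: the parent permlink is the parent comment's permlink,
# so this returns None: the bug being fixed. The indexer instead has to
# inherit the community id from the parent row.
assert community_from_parent_permlink("re-some-parent-comment") is None
```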

@blocktrades

Yeah, it's going to look like it isn't part of the community. I understand, I get the problem.
Okay, so right now the first-level comments are correctly marked as part of it, so it seems likely that there should be no problem in marking the rest of them.

@howo

Yeah, exactly.
So yeah, I stumbled upon that; that was a fun time. Now I'm deep into the PGSQL stuff, because I need to fix the huge process-community-posts logic.
Apart from that, I had a chat with CrimsonClad, following up on what we said last month about the API docs. From my understanding, the API definitions are going to change drastically in the future, with HAF-enabled nodes and such.

So I've started to spend a bit of time just fixing the current docs. Initially I was thinking of doing an overhaul of the whole thing, but instead I'm basically going through each example and testing whether it still works or whether something broke, because she told me it's a source of frustration for the devs she works with.

@blocktrades

I mean, I'm surprised that anything's broken; I'm not aware of any breakage in the API.

@howo

It's not so much the API being broken as examples being incorrectly written, or something pointing to the wrong API. It's really very minor stuff for us, but a pain for someone who's not used to the API.

@blocktrades

Okay, so as far as what we've been working on lately: we're basically getting ready to deploy a new version of Hive and HAF.

This won't be a hard fork, obviously, but it is a substantial change in many ways. As part of that, one of the things I think I mentioned before is that we're really trying to ease deployment of the whole system, especially since we expect everybody to be running HAF going forward, rather than the old account history plugin or the old Hivemind.

And to make that all as smooth as possible, we're setting up Docker containers that a Docker Compose command launches together. You can obviously edit it to not launch things you don't want, but in general, for a full node, we expect people to run all the containers we've specified. And as part of making this whole deployment as easy as possible, I feel we have to go to ZFS.

So the recommended way to run an API node going forward is going to be to run the API node's containers with everything bound to ZFS-attached storage.
This will solve a lot of the problems we have with providing backups of the databases, for instance, because we'll just be able to provide ZFS snapshots instead. That's so much quicker, and it has a lot of benefits in terms of ease of updating.
Another advantage of standardizing on ZFS is that we can also give a standardized layout for how to set up the Postgres database for optimal HAF usage. For instance, one of the things we found is that a lot of people will probably want to run compressed storage for the database itself.

But if they do, they don't want to compress key storage like the WAL, where the write-ahead log for Postgres lives, and probably not even the standard PG system tables themselves.

So we'll have a standard layout that says: here's where the uncompressed data gets stored, and here's where everything else is stored in compressed form. And similarly, we can say: store your blockchain here, in uncompressed storage, because it's already compressed at the hived level, so there's no point in trying to recompress it again. I think just making the standard requirement "use ZFS if you want to use our stuff" doesn't prevent somebody from still using their own file system if they want to do it themselves. But the tooling we provide to make it easy is going to assume that ZFS is the way to do it.
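As a rough illustration of the kind of layout being described (the pool name, dataset names, and property values here are assumptions, not the actual release configuration):

```bash
# Hypothetical ZFS layout for a HAF API node; all names are illustrative.
zfs create -o compression=lz4 tank/haf_db          # bulk Postgres data, compressed
zfs create -o compression=off tank/haf_db/pg_wal   # write-ahead log, left uncompressed
zfs create -o compression=off tank/blockchain      # block log, already compressed by hived
zfs set recordsize=8K tank/haf_db                  # match the 8 KB Postgres page size
```

A snapshot of such a pool (zfs snapshot plus zfs send) is also what would make the database backups mentioned above quick to produce and restore.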

So I definitely wanted to mention that. We're basically doing testing now in our own environment; we haven't quite gotten to production testing yet, but I expect we will this coming week. So I hope to have a release out within the next two weeks, assuming no issues are encountered along the way.

So that's where we are as far as the schedule goes. I can't promise the two weeks for sure, there are certainly things that could come up, but I feel relatively confident that we can make it in two weeks. As far as what's going on beneath that, there's just a ton of stuff we're doing; I can't even encapsulate it quickly in this call, so I'm not sure I want to try. Unless, bartek, there's anything in particular you think we should mention that I haven't talked about? Otherwise I'm probably just going to make a post with details about the stuff we're doing.

bartek

I don't know for sure; the release plans are our biggest effort and we are mostly focused on them. Apart from that, I can announce our new tool, Clive, which is probably now at a quality level where it can be tried by some external people. Another interesting thing we are working on, though still at an initial stage, is support for other-language integrations, done via Protobuf definitions of Hive operations written in that format. From the research we have done so far, we were able to generate very useful stubs for TypeScript, JavaScript, and Python, and this way we will be able to offer standardized definitions for those environments and, in the future, eliminate duplicated definitions of these operations from external libraries like hiveJs and whatever is being used in Python.

This step will definitely be useful. We would also like to standardize the virtual operations, which we plan for later. It will be much more useful for other devs if proper documentation comments appear while developing the code, because as far as we have checked, IDE support for them is quite good, and you will be able to create transactions just by operating on classes and objects. So it looks very promising, and much better than the currently provided ways.
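As a hypothetical sketch of what such generated stubs could enable in Python (the module, class, and field names below are invented for illustration; the real generated API may differ):

```python
# Hypothetical use of Python stubs generated from protobuf definitions of
# Hive operations. Module and field names are illustrative assumptions.
from hive_protocol_pb2 import transfer_operation, transaction

# Typed, IDE-friendly construction of an operation from a generated class.
op = transfer_operation(
    from_account="alice",
    to_account="bob",
    amount="1.000 HIVE",
    memo="thanks!",
)

# Build a transaction from typed objects rather than hand-written dicts.
tx = transaction(operations=[op])

# Canonical binary form, ready for signing/broadcast tooling.
payload = tx.SerializeToString()
```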

@blocktrades

Okay, that sounds great. Actually, what you're talking about just made me think of something I mentioned before; I don't know if I've asked you about it yet. Have we assigned someone to create an example application? I think we were talking about one to replace the old Lies application, as an example of a HAF app that generates transactions. Did we ever?

bartek

Yes, yes, yes, that's one of our plans. Actually, all of that work was driven by the needs of our new Clive component and its need to operate on transactions and operations and to sign them. Such groundwork will definitely simplify the creation of example applications like that, for Python as well as JavaScript, and integrating with the blockchain on those platforms should be much simpler than it is currently.

@blocktrades

Okay, I guess my question was whether we've assigned someone to start looking into doing that app. I know they're going to need to use the stuff you're talking about, but I thought they might be able to start some parts of it even without it. I just don't know if we have anybody free for that. Anyway, we can talk about that offline.

Okay. So like I said, I don't want to get into a lot of the details of what we're doing. I'll write more of it in the post, because we could spend the whole time here talking about it if we tried.

So I guess we'll just open it up and see if there are any other things for discussion, or if anybody wants to talk about anything they're working on.

bartek

Actually, I have a question for you related to the work you mentioned. Can you check, or just answer: are you sure that verifying these comments' relation to communities won't involve any performance penalty? From your description, I suppose you need to traverse some hierarchy of posts to verify that they're related to a community. But maybe I'm wrong.

@blocktrades

Yeah, I guess the question is: is there an explicit marker that says they're part of the community now, or do you need to add that? Or are you just changing the code to walk up the tree?

@howo

Yes, there will be a performance impact for sure, because there is an extra join to be done there. The question is how much, and I don't have the answer to that.

@blocktrades

Well, I guess the question is how you're doing it. Are you doing it by walking up the tree, or are they somehow marked to say "this is part of this community"?

@howo

Oh yes, there's a field for storing it; it's filled in during indexing by going up one level in the tree.

@blocktrades

Yeah, you don't have to walk up the tree every time, it's just on insert: you do it during indexing, you walk up the tree, but after that it's done. I don't see that as a big problem if you're just doing it during indexing. It's a bit annoying, but I don't expect the performance hit to be that big.

@howo

It's mostly tricky because the database is split into many tables to be efficient, and so I have to start from a parent permlink, then join the permlink data, then join the posts, and finally get the community ID. But yeah, I don't expect it to impact performance too much, and it's only going to be during indexing, not during reading, so it's not such a big deal.
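A rough sketch of the indexing-time lookup being described (the table and column names are simplified assumptions in the spirit of the hivemind schema, not the exact code):

```python
# Sketch of the indexing-time community lookup: from a new comment's parent
# author/permlink, join through the permlink data to the parent post row and
# copy its community_id. Names are simplified assumptions.
LOOKUP_SQL = """
UPDATE hive_posts child
SET community_id = parent.community_id
FROM hive_permlink_data pd
JOIN hive_posts parent ON parent.permlink_id = pd.id
WHERE pd.permlink = %(parent_permlink)s
  AND parent.author_id = %(parent_author_id)s
  AND child.id = %(child_id)s;
"""
# Run once per inserted comment during indexing, e.g.:
#   cursor.execute(LOOKUP_SQL, {"parent_permlink": ...,
#                               "parent_author_id": ...,
#                               "child_id": ...})
```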

@blocktrades

Yeah, okay, that sounds reasonable to me. Does anybody else have anything they want to talk about today, or is it a quick day today?

@borislavzlatanov

Well, I guess I'd just mention, especially for anyone listening: if you're developing any apps on Hive, it would be great if you'd review the post in the Hive community called "Standalone apps: request for feedback", just so we can see what feedback we get from the community about whether they would enjoy having their apps powered by these kinds of API nodes.

@blocktrades

So I have been thinking about that a little bit, just so you're aware; I've been thinking about how we can do it in an efficient way, and I think we can do it pretty efficiently with publish/subscribe. One of the things I was thinking we could potentially do is have HAF write the most current block to a table: basically, have a table which just stores information about the current block, and then every sub-database, like the ones you're talking about, would subscribe to that one table. They would get the data about that one block pushed to them as soon as it arrives, and then they could do whatever they want with that data.

bartek suggested another way to do it: rather than pushing it through publish/subscribe, just provide an API for it. But I think there is one advantage to the publish/subscribe method, which is that the data gets pushed right away instead of being pulled, so we get the data more quickly.

Another issue that comes to mind, though, is that if we do this, I still think it would be nice to do it in such a way that something that looks like a traditional HAF app could operate off this data, and right now that definitely would not work. So I want to see what we could do to make something that looks like a HAF app, even though it doesn't have all the HAF tables, run in this kind of architecture. I think it's maybe possible. There are definitely going to be some limitations, because obviously an app like this won't have all the prior data that a true full HAF app has; it's only going to have the current block data as it comes in, and it won't be able to look at previous history. But still, for a lot of HAF apps that might be enough; I think for a lot of them it could be. It's going to require some more thinking, though. I need to think about it for a couple of days to see if we can reasonably make it so that we could take the skeleton and architecture of a HAF app and run it on this kind of database that doesn't have all the data.
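A minimal sketch of how such a push could work on Postgres, using its built-in LISTEN/NOTIFY (one possible mechanism, not necessarily what will be built; the table, channel, and connection details are assumptions):

```python
# Sketch of a "current block" push via Postgres LISTEN/NOTIFY; every light
# app database subscribes to one channel and reacts as blocks arrive.
# All names (table, channel, connection string) are illustrative.
import select
import psycopg2

# Publisher side, conceptually a trigger on the current-block table:
#   CREATE TABLE current_block (num BIGINT PRIMARY KEY, body JSONB);
#   CREATE FUNCTION notify_block() RETURNS trigger AS $$
#   BEGIN
#     PERFORM pg_notify('new_block', NEW.num::text);
#     RETURN NEW;
#   END $$ LANGUAGE plpgsql;
#   CREATE TRIGGER block_notify AFTER INSERT ON current_block
#     FOR EACH ROW EXECUTE FUNCTION notify_block();

# Subscriber side: data is pushed, not polled.
conn = psycopg2.connect("dbname=haf")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN new_block;")

while True:
    # Wait until the server pushes a notification (60 s timeout).
    if select.select([conn], [], [], 60) != ([], [], []):
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            block_num = int(note.payload)
            cur.execute("SELECT body FROM current_block WHERE num = %s",
                        (block_num,))
            block = cur.fetchone()
            # Hand the block to the app's own indexing logic here.
            print("got block", block_num)
```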

So I'll definitely add some more to this once I've had a chance to think about it more and look over the architecture we've got for HAF apps, to see if it could be mapped onto this. That's about all I had on that right now.

@borislavzlatanov

The proposal talks about the so-called relay app. What we were thinking is that it could subscribe to a full node to stream irreversible blocks right from the full node, and then the relay app could...

@blocktrades

Well, I don't think you need a relay app in this; I don't see any need for one at first glance. I think you can just have a database that directly subscribes to the HAF database. A relay app would just be writing code for functionality we already have, unless you're thinking of doing something that...

@borislavzlatanov

I'm not... well, it would just be, I guess, an extremely small JavaScript app with a Postgres database.

@blocktrades

Yeah, but what I mean is that if you can have a database get the data directly, it's going to be more efficient than anything we write that pulls the data.

@borislavzlatanov

Yeah, that's a good point. I guess I was thinking a higher number of relay apps would be advantageous for a more robust network.

@blocktrades

Yeah, I understand what you're saying. You like the idea that we could have a hundred apps, a hundred databases, that anybody individually runs, each holding only the data it needs, some subset of the full HAF database. I think the idea is reasonable, and like I said, it's something I've actually been thinking about too. The issues I see really revolve around having an app that doesn't have as much data as a full HAF database, and making sure we don't have to change the flow of control of the app too much because of that.

I think we can make that work, but it needs some looking into. Obviously, another solution is to have these databases run something that isn't a HAF app at all, and that's pretty trivial to do, but it would mean those apps are more custom, in the sense that they wouldn't be portable and things like that, which I don't think is really desirable. But anyway, as for the idea itself and how to do it, I think we can make something that works so that we can have light databases that serve up API calls. I don't think this is anything we have an immediate need for, because I think the existing HAF nodes will easily be able to serve all the near-term traffic, but we have to think long term anyway, and it might be interesting for somebody who just wants to run a very small database. And if we want to build a bunch of apps, and I hope we do, the one thing I want is to make sure the apps get written in such a way that they have a common architecture. Otherwise everybody has to go relearn each one individually, and that just gets to be a nightmare, especially with a lot of apps; it would defeat the whole purpose.


Communities could be a powerful feature. I'd be curious to see how it works in action, and how it will be implemented / open-sourced so other applications can tap into it.

Outside of that, are there plans for more front-end things, such as an NFT marketplace (something comparable to Atomic Hub), or is development strictly to provide a framework for someone else to come in and be able to build an application like that?

Communities could be a powerful feature. I'd be curious to see how it works in action, and how it will be implemented / open-sourced so other applications can tap into it.

Anyone can; communities are powered by hivemind, which is open-source software.

Outside of that, are there plans for more front-end things, such as an NFT marketplace

Our work is about building frameworks/infrastructure so that others can come in; it's more of a "we'll give you the tools to build it".

As for NFTs, there are already a few projects; the most notable is https://nftshowroom.com/

Thank you for the deets and the reply, howo, always appreciated 👍

Is it possible to create an API call that supports searching a blog by calendar month?
I'd like to see what account X was posting in, say, Sep of '19: I'd just scroll the calendar back, or use a shortcut in a search box, to the month and year.
I asked peakd about it and they said it was a core issue.

HAF was designed exactly to allow such API calls to be added easily. In this case, I suspect it could easily be added to the hivemind app.

I don't consider such changes "core changes" nowadays (it's strictly a 2nd-layer change), although hivemind was sometimes referred to as a "core app" just because the social media app was considered one of the most critical apps.

I consider "core" everything that is run by full node operators. Right now to run a "full" node the operators have to provide:

  • hived
  • haf + hivemind
  • jussi (or something capable of providing the same functionalities)

We created a simple HAF app a few months ago. We have not been able to get any node operators to run it 😔

But I totally agree with you that this is something to integrate directly into Hivemind. As far as I've seen, it would require at least one additional DB index. Do you think it can be added in one of the next updates?

BTW just opened an issue on GitLab to keep track of this: https://gitlab.syncad.com/hive/hivemind/-/issues/209
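For illustration, the kind of query and index being discussed might look roughly like this (the table and column names are simplified assumptions; the real change would live behind a hivemind API method):

```python
# Sketch of a "posts by author for a given month" query, with the extra
# index the comment above mentions. Schema names are simplified assumptions.
MONTH_QUERY = """
SELECT hp.id, hp.created_at
FROM hive_posts hp
JOIN hive_accounts ha ON ha.id = hp.author_id
WHERE ha.name = %(author)s
  AND hp.created_at >= %(month_start)s::timestamp
  AND hp.created_at <  %(month_start)s::timestamp + INTERVAL '1 month'
ORDER BY hp.created_at DESC;
"""

# The additional index that would likely be needed:
#   CREATE INDEX hive_posts_author_created_idx
#     ON hive_posts (author_id, created_at);
```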

We're in the process of simplifying and standardizing deployment of haf apps right now: soon it should be trivial to deploy properly configured ones.

That's great 🙌
Would make everything so much easier for everyone building on Hive.

Keep up the great work.

If we're talking about semantics, I consider hivemind core because it's vital to making most of the front-end features work, so even though it's L2, it's as important as core, imho. The same logic goes for HAF.

That's very true. And if we can tweak HAF a bit to allow such a search, it will be great for the few people who love to connect with old articles, and I'd also be able to go straight to some of my old charitable posts and donations.

Is it possible to use the escrow feature to sell 'bonds'?

You had said that the second layer can't affect the first.
Can you clarify what that means in terms of making the bonds liquid?

Most recently I've been thinking that complex financial instruments are all better handled at the 2nd layer. In other words, bonds, markets, escrow, etc. would all be implemented there. Any link to base-layer tokens (if desired) would be done through proxy tokens issued via some kind of multisig setup.

This "second layer" would be JSONs written to Hive and managed largely by 3rd parties?

Is there a technical reason that precludes allowing Hive to organically escrow a token created by locking HBD?

A GUI would have to be created/incorporated into the existing front ends, but a base-layer function that allows me to transfer the token/bond to another account through an escrow feature seems the most elegant way to me.
Having to introduce 3rd-party risk would be a deal breaker, unless the community is small enough that we can all know each other's reputation for trustworthiness.
Simply creating another H-E-type arrangement doesn't solve the 3rd-party risk issue, for me.
If there is a technical reason that this is a bad idea, please say so, so I can better understand why we have waited this long for this feature.

1st layer functions don't know anything about 2nd layer stuff. So you can't escrow 2nd layer tokens.

But you can create a 2nd layer escrow function.

You can achieve all functions on the 2nd layer just like on the first layer: it's just a different group of people running different software.

Essentially, with a 2nd layer, you can run one or more separate "blockchain protocols" with their own rules that just piggyback on the p2p network and transaction-ordering functionality of the 1st layer. So the "trustworthiness" of a specific 2nd-layer protocol just depends on how that protocol is designed. There's no reason it can't be just as trustworthy as the 1st-layer protocol.
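As a toy illustration of the "JSONs written to Hive" idea discussed here, a 2nd-layer escrow action could be carried in an ordinary custom_json operation, with its rules enforced by whatever 2nd-layer software indexes that protocol (the protocol id and payload fields below are invented for illustration):

```python
# Toy example of a 2nd-layer escrow action carried in a custom_json
# operation. The "bonds-v1" protocol id and the payload fields are invented;
# only the software that indexes this protocol gives them meaning.
import json

escrow_open = {
    "type": "custom_json_operation",
    "value": {
        "required_auths": [],
        "required_posting_auths": ["alice"],
        "id": "bonds-v1",  # hypothetical 2nd-layer protocol id
        "json": json.dumps({
            "action": "escrow_open",
            "seller": "alice",
            "buyer": "bob",
            "token": "BOND",
            "amount": "10",
            "expires": "2023-12-31T00:00:00",
        }),
    },
}
```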

So you can't escrow 2nd layer tokens

Would creating this token at the 1st layer slow the chain?
Too much data to be tracked?

it's just a different group of people running different software.

Is it possible that a malicious update could be successful?
At least for the amount of time it takes to revert the changes?
Keychain would be my example: I couldn't defend against a malicious update absent the social aspect of Hive tipping me off to delete the app.

So the "trustworthiness" of a specific 2nd layer protocol just depends on how that protocol is designed.

So with open-source, distributed nodes keeping everybody honest, a 2nd layer is just as secure as Hive?

How are you envisioning the functioning of "bonds"?
A "2nd layer" GUI that manages "bond tokens" according to the contract at the time of escrow?
I.e., I stake HBD for a set period of time and receive interest; to liquidate the position, I send the token to a 2nd-layer contract that executes according to its parameters?

Can you look at making reputation scores go down faster than they go up?
With all the high-rep people who have been losing their minds, some way to get that rep down once it goes past ~75 is desirable, imo.
I'd further think it desirable for high-rep people to be more accountable.
Perhaps once past 75, accounts up to 3 points lower could affect their reputations negatively.
We have a number of accounts with very high scores and very low reputations with "the community", and there is currently nothing we can do about it.

Reputation is kind of a bad metric that should go away eventually. The idea was good in theory, but in practice there are too many misaligned incentives and automated votes, and it's generally too easy to game. I don't think any amount of tweaking would solve it unless a complete overhaul is made.

@blocktrades is working on that, although don't expect it to show up soon; it's a project far on the horizon, as it's low priority.

Hi, @howo I just came here to express my gratitude for voting for my introduction post. I truly appreciate it :)