How To Set Up A Hive Engine Witness - Step By Step Guide

in #engine · 5 months ago (edited)

With hive-engine recently releasing their P2P validation layer, I thought it would be useful to write a guide to help users who want to get set up. While this isn't a true block production layer, it's better than nothing, since it can show whether particular nodes are out of sync (though if everyone just uses the "official" node, the P2P layer is pretty much useless, so developers, please use other nodes).

I decided to write a simple guide so that anyone who wants to run the software can do so. I'm assuming you know the basics of how to connect to a server and how to do basic troubleshooting (if it says something isn't installed, go ahead and install it), as well as basic security for your server (PLEASE USE SSH KEYS AND NOT A PASSWORD, AND AT THE VERY LEAST INSTALL FAIL2BAN https://www.techrepublic.com/article/how-to-install-fail2ban-on-ubuntu-server-18-04/ ESPECIALLY SINCE YOUR IP IS PUBLIC). I'm using Ubuntu 20.04 while writing this; on other systems things might vary. You'll need a server with at least 4 GB of RAM (or 2 GB and some swap, but real RAM is always better), and I'd recommend at least 2 cores and 30 GB of SSD.

Doing this WILL REVEAL YOUR SERVER'S IP to the world, since the witness enable command broadcasts the IP on the chain (it's how the nodes find each other to talk to).

Being a top engine witness also takes a lot of RC. While testing, I delegated 500 HP to my engine witness and that wasn't enough; I'd recommend about 1,000 HP worth of RC.

When you get your server you'll want to install a few basic things: git, screen, npm, ufw, and MongoDB.

sudo apt-get update -y

sudo apt-get upgrade -y

sudo apt install git -y

sudo apt install screen -y

sudo apt install npm -y

sudo apt install ufw -y


wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -

echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

sudo apt-get update

sudo apt-get install -y mongodb-org
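Before moving on, it can't hurt to confirm everything actually landed on your PATH. A quick optional sketch (the tool names match the packages installed above):

```shell
# Sanity check: confirm the tools installed above are reachable.
for tool in git screen npm mongod; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```

If anything shows MISSING, rerun the corresponding install command before continuing.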

This should install the packages that we need. Next we'll need to change our mongo settings to use replica sets. If you have any problems, feel free to consult https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/.

sudo nano /etc/mongod.conf

On it you'll see a line that looks like #replication:

You'll want to change it so that you remove the # at the start and on the line below it, add two spaces and replSetName: "rs0". The final result should look like:

replication:
  replSetName: "rs0"

Now save and exit: press Ctrl+O, then Enter to save, then Ctrl+X to exit.

Next we'll restart mongod.

sudo systemctl stop mongod
sudo systemctl start mongod

You could also use restart here, but I like stop and start :)

Now we want to activate replica sets on mongodb.

mongo
rs.initiate()

You'll see a bunch of output, and near the bottom it should say something like "rs0"... If that's all good, press Ctrl+D to exit.

Now we'll want to update our version of nodejs.

sudo npm i -g n
sudo n latest

At this point please disconnect and reconnect to the server. It's the easiest way for the update to take place.

The following step is ONLY IF YOU HAVE A 2 GB RAM SERVER!!! While it won't hurt on a 4 GB server, it's not necessary there; on a 2 GB server it is required. We're making a swap file, because the mongo import takes more than 2 GB of RAM. If you have any problems, feel free to consult https://linuxize.com/post/create-a-linux-swap-file/. We'll be making a 2 GB swap file.

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Now if you run sudo swapon --show, you should see an entry with 2G (or 2048M) under SIZE, showing you did it all correctly.
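If you're not sure which camp your server falls into, here's a small sketch that reads total RAM from /proc/meminfo (Linux only) and suggests whether the swap file is worth making:

```shell
# Read total RAM (in KB) from /proc/meminfo and suggest whether to add swap.
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
total_mb=$(( total_kb / 1024 ))

# A "4 GB" server usually reports a bit under 4096 MB, hence the 3500 cutoff.
if [ "$total_mb" -lt 3500 ]; then
  echo "~${total_mb} MB RAM: create the swap file."
else
  echo "~${total_mb} MB RAM: swap file optional."
fi
```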

The following steps are for everyone again.

We'll want to grab the latest snapshot for engine from a source. I provide snapshots over at https://cdn.rishipanthee.com/hiveengine/ (grab the one with the latest date), and the official engine snapshot can be found at http://api2.hive-engine.com/hsc_20210203_b50993579.archive. For this example we'll use the official engine snapshot, since I regularly update mine and delete older ones, so the file names change.

wget http://api2.hive-engine.com/hsc_20210203_b50993579.archive

Wait for that to finish. Take note of the name of the file; here it is hsc_20210203_b50993579.archive. If you grabbed one with a different name, update the following step to match.

Then we'll import the snapshot into mongodb. We use a screen for this so that it keeps running even if you disconnect, as this is the longest part. On the bare minimum specs, it'll take about 20 minutes.

screen -S reload
mongorestore --gzip --archive=hsc_20210203_b50993579.archive

If you know how to detach from a screen (Ctrl+A, then D), you can do so and move on to other steps while waiting for this to finish. If you don't, just sit tight. If you get disconnected from your server and need to reattach to the screen, type screen -r reload. Once it's done, you can press Ctrl+D to close the screen.

Up next is getting the hive engine code, using git. Yes, it is still called steem smart contracts.

git clone https://github.com/hive-engine/steemsmartcontracts.git 
cd steemsmartcontracts
git checkout hive-engine

If you are following this guide at a point when the branch above is outdated, just replace it with the name of the branch of the latest release. Go ask the team what it is.

Then we want to install the dependencies to the program as well as pm2 which we will use to run this in the background.

npm i
sudo npm i -g pm2

You'll then want to modify the config file to work with the snapshot we imported, and add or remove nodes if you like (the list is pretty good right now). To open it, type nano config.json. The only thing you need to change is the value after "startHiveBlock": to be 0. The final result should look like:

 "startHiveBlock": 0,

Make sure the comma after it stays there. On the last line there's witnessEnabled, which currently says false; change it to true. The result should look like:

"witnessEnabled": true

Save the file and exit, control + O, control + X.

Now we need to move the example env file to be a real one.

mv .env.example .env
nano .env

This file is mostly blank. You'll want to fill it in with your details, after the = on each line.
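For reference, the filled-in file looks roughly like this. The variable names below are HYPOTHETICAL placeholders; keep whatever keys .env.example actually contains and just add your own values after each =:

```shell
# HYPOTHETICAL example only -- use the variable names from .env.example.
ACCOUNT=yourhiveaccount
ACTIVE_SIGNING_KEY=5Jxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```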

Once you do that, control + O, control + X to save and exit.

Almost there. Now we want to allow access to the ports that are necessary for this.

sudo ufw allow ssh
sudo ufw allow 5001
sudo ufw enable

Now we start the node up.

pm2 start app.js --no-treekill --kill-timeout 10000 --no-autorestart --name engwit

Now we need to wait for it to sync. This can take some time; the older the snapshot you use, the longer it takes. To monitor your progress, type in pm2 logs. Once you see lines that look like 0|app | 2021-02-03 03:15:04 info: [Streamer] head_block_number 50994938 currentBlock 50994939 Hive blockchain is 0 blocks ahead you are good to go. The "0 blocks ahead" part is very important; otherwise you could miss blocks. You can disconnect while waiting at this point. To close the logs, press Ctrl+C.
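If you'd rather check sync status from a script than eyeball the logs, the "blocks ahead" number can be pulled out of a log line with a bit of sed. A sketch, using the example log line above:

```shell
# Example log line from pm2 logs (copied from the guide above).
line='0|app | 2021-02-03 03:15:04 info: [Streamer] head_block_number 50994938 currentBlock 50994939 Hive blockchain is 0 blocks ahead'

# Extract the "N blocks ahead" count; 0 means fully synced.
behind=$(printf '%s\n' "$line" | sed -n 's/.*is \([0-9][0-9]*\) blocks ahead.*/\1/p')
echo "blocks ahead: $behind"   # prints "blocks ahead: 0" for this line
```

You could wire something like this into a cron job that alerts you if the node falls behind.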

Once that's up and running, you might need to cd back into the folder (cd steemsmartcontracts) if you reconnected to the server.

Then enable your witness:

node witness_action.js register

And you did it. You are now a witness. Be sure to vote for me as a Hive witness (@hextech) as well as a hive-engine witness (@h-e). To vote for hive-engine witnesses, you can check out https://cdn.rishipanthee.com/enginewit.html or the official tool at https://tribaldex.com/witnesses.


Hi, I have this error when I run mongo. Could you help me, please?

[screenshot of the error]

Did you start mongo up?
sudo systemctl start mongod

Yes. I've stopped, restarted, reinstalled...
The mongo shell ran fine until I changed the config file to add replSetName: "rs0".
I don't know what else I can do.


Is that the key we need to import first when setting up mongo? This one: https://www.mongodb.org/static/pgp/server-4.4.asc ?

Yup, that's covered by wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -.

Wow, this looks like an awesome project! I know what I'm working on this weekend!

Definitely is. Engine is one of the most used tools on hive and decentralization of it is very important.

Great work!
This is just what I was looking for.
Any reason not to run a Hive-Engine witness on the same machine as a Hive witness?

Your IP gets revealed, so I would not recommend it. I'm going to add that to my post. Thanks for the reminder.

Thanks. So you need to run it on an IP that is OK to be public.
So it could be on the same machine as a Hive full API node.

Can you hide it behind a reverse proxy like you can use jussi to hide an API node's actual IP behind?

When you broadcast to enable, it publishes the IP and ports that are open (see e.g. https://hiveblocks.com/tx/14e6c348ced9d4211ce8392bf413ba290c754bb2). If you manually broadcast the enable trx (I was told you could do it at https://tribaldex.com/witnesses) and you have a setup that forwards the requests to the proper ports on your engine witness, I guess it could work. I'm not really too good with networking stuff though, so maybe try it out on an IP you don't care about. I can give it a try this weekend to see how it goes.

Excellent guide! Thanks for generating the snapshots. It saves a lot of time, bandwidth, and node hammering.

The only issue I ran into is the firewall blocking RPC port 5000 by default.

Happy to have helped. Just shout if you need any help.

Let's wait for that 51022551 block now. 😉 !HYPNO !WINE



51022551

Less than 12 hours to go now :)

While this isn't a true block production layer, it's better than nothing

So hive-engine is still functionally centralized?

Not exactly. We now have a way for nodes to verify the data against each other and get rewarded for it. But as of now, most of the tools using hive-engine still use the API from api.hive-engine.com. True decentralization will come once other nodes start getting used. Even on hive, if everyone just used api.hive.blog as their only API and skipped all other nodes, it wouldn't be a decentralized system even if those nodes existed.


Looking good, looking good!

Awesome stuff... I might give it a try...

Do you have a sense on CPU and db growth over time?

DB growth will far outpace CPU growth. CPU helps a lot on the initial sync and when serving as an RPC node, but after that it doesn't take much. The DB has been growing quite a bit in the last few days; my entire server is currently using 15726016 KB (it just has this on it, no extras and no remaining archives). Let's wait a week and see what it's at then to calculate growth.
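For anyone converting that figure, 15726016 KB works out to roughly 15 GB. A quick sketch of the arithmetic:

```shell
# Convert the reported usage from KB to GB (1 GB = 1048576 KB).
used_kb=15726016
awk -v kb="$used_kb" 'BEGIN { printf "%.1f GB\n", kb / 1048576 }'   # prints "15.0 GB"
```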

Nice, thanks... this weekend I'll do a trial with a 4 core, 4 GB VM (disk I have plenty of, but I wanted to understand how far I can plan if committing). If that plays out nicely, I'll be grabbing a new CPU/board/RAM for a nicer setup.

15 GB for a start is not that bad (assuming it already has all the previous history of hive-engine)... so hopefully it's not going to be 30 GB next week... that kind of growth would already be hard for me to sustain.

Thanks for the guide too... helps speed up stuff.

That should be more than enough. It ran fine on 2 cores and 2 GB + 2 GB swap while testing. Disk growth is probably the biggest thing.

Thanks for the very concise instructions.

I suppose you can unregister your witness too? In other words, put it on hold temporarily?

I would like to check the procedure first at my home machine and if all goes well, repeat it on the real server.

Thanks!

!invest_vote

Yes you can. Running node witness_action.js unregister will unregister you. Or if you prefer a UI, Tribaldex has that feature.

Thanks. You got my h-e witness vote. It's not much, for now :)

Have a great weekend.

Every bit helps, thank you. You too. Feel free to ask if you have any questions. If you use Discord, you can also get help there: https://discord.gg/a2wkAmqu9j.

A couple of additions to this one...

First...

Add proper tools 🤣

sudo apt install vim nmon -y

Secondly... if you happen to have in-house disks, use LVM and divide your DB across several disks (in case you have them) to optimize for IO and bandwidth at the same time. Note that if you can keep a separate disk for the OS, that's even better. SSDs are expensive when you're talking about several TB of storage.

Also... if you have lots of memory, you can create a script to restore the DB in memory (using a ramfs) and then shut down the DB and copy it to disk. Saves a lot of time for big databases.

Great work @rishi556 =)

Note 2... the DB is quite OK for me... the problem with maintaining this kind of transaction volume is the FS snapshots I usually use to roll back once I have a problem.

Still having a good "try" on this... as the weekend went caput...

Ugh, vim. Get that garbage out of my face. My laptop defaults crontab to vim and I always forget, so when I try to do some nano stuff it doesn't work.

😃

Yeah... I blame old systems that didn't have any default editors besides vi, and hence I had to master it or face the consequences of spending extra hours fixing stuff.

nmon is cool though... Nigel (the IBM guy who built it) did very well in not letting IBM keep marketing the tool as a closed project.
