You are viewing a single comment's thread from:

RE: How To Setup Hive Witness Pricefeed Using Hivefeed-JS

in #witness · last year

Hi @rishi556, it's me again.
After sudo apt update it seems my storage is already too small, right?

(screenshot: apt update output)

I have ordered a larger one (+400GB), but in order to expand the existing partition I need to install cloud-utils (sudo apt install cloud-utils -y), and that also fails due to lack of disk space ☹️.
Is there a way to remove some files from the hive-docker directory so that I can change the partitioning?
If not, I'll have to restart everything from scratch, right? At least now I know how to do it, but it is unfortunate to be offline again for 3 days right after I sent out my announcement email :(.
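For reference, once cloud-utils can actually be installed, the expansion itself looks like it should only be a couple of commands. A rough sketch I'm planning to try, assuming the root filesystem is an ext4 partition at /dev/sda1 (lsblk should confirm the real device names, which may well differ on my VPS):

```
sudo apt install cloud-utils -y   # provides the growpart tool
lsblk                             # confirm disk and partition names first
sudo growpart /dev/sda 1          # grow partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1          # grow the ext4 filesystem to match
df -h                             # verify the new size
```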


You can delete stuff using rm FILENAME; if it's a folder, use rm -rf FOLDERNAME. Reclaiming storage might be a bit hard, so I'd start with the logs, which are generally located at:
/var/lib/docker/containers (you can just do cd /var/lib/docker/containers to get there).
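In case it helps, here's a quick way to see which of those json logs is the big one (that folder may only be readable by root, hence the sudo sh -c):

```
# List every container's json log by size, largest last
sudo sh -c 'du -h /var/lib/docker/containers/*/*-json.log | sort -h'
```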

You are going to need your container id, which you can figure out by doing docker ps and it'll be the first part listed:

(screenshot: docker ps output)

Copy that container id, type cd, paste in the container id, and press tab for it to be auto-completed. In that folder there's going to be a CONTAINERID-json.log file. That's probably going to be the largest thing there, and clearing it might give you the room you need to get things going again. I normally install ncdu when first setting up a machine so I have a good tool for seeing what's using my storage; you can install it with sudo apt install ncdu. I don't know if there's a whole lot you can gain from within the hive-engine directory itself though, as the p2p logs are usually what fills up and those have always been very tiny for me.
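Roughly, the whole thing on the command line would look something like this (the <CONTAINER_ID> is just a placeholder for whatever docker ps shows you, and you may need to become root with sudo -i if you get permission denied in that folder):

```
docker ps                                       # note the id in the CONTAINER ID column
cd /var/lib/docker/containers/<CONTAINER_ID>*   # paste the id, then press tab to complete
ls -lh *-json.log                               # see how big the log has grown
sudo truncate -s 0 *-json.log                   # empty it in place (safer than rm while the container runs)
sudo apt install ncdu && sudo ncdu /            # optional: browse what else is eating space
```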

docker ps does not show any container id, just an empty table:

(screenshot: empty docker ps output)

Um, that's not a good sign. When you navigate back to your hive-docker folder and do ./run.sh logs, is it able to pull up logs or not? Weird things happen when a device's storage gets full, because applications aren't able to write the data they need and start spewing errors, and this might have been affected by that.
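If that wrapper can't pull anything up, you can sometimes still get logs straight out of Docker itself (assuming the container at least exists, even if it's stopped); for example:

```
docker ps -a                          # list containers, including stopped ones
docker logs --tail 50 <CONTAINER_ID>  # last 50 lines from that container
```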

It shows recent blocks, but it is not continuously tracking them like yesterday; it always stops after approx. 20 blocks :(

Hmm, that's very weird. I'd definitely look into what could be eating that storage. How much space do you have? df -h should give a nice summary.
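And if df -h shows a full filesystem, du can usually narrow down which directory is eating it, for example:

```
df -h                                    # overall usage per filesystem
sudo du -xh --max-depth=1 / | sort -h    # biggest top-level directories, largest last
sudo du -sh /var/lib/docker/*            # drill into Docker's own data
```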

(screenshot: df -h output showing 0 available)
So there is some left, but not too much, I guess.

0 Avail means there's nothing left. My sysadmin skills are meh at best, so this is usually something where I'd end up asking a friend how to reclaim some disk space. Sorry I couldn't be of more help on this step.
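One thing that might be worth setting up once some space is back: Docker's built-in log rotation, so the json logs can't fill the disk again. A sketch, assuming there's no existing /etc/docker/daemon.json this would overwrite (and it only applies to containers created after the restart):

```
# Cap each container's json log at 3 files of 50 MB (adjust to taste)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}
EOF
sudo systemctl restart docker   # restart the daemon so new containers pick it up
```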