You are viewing a single comment's thread from:

RE: [GUIDE] Optimise your RPC server for better performance

I am curious what your opinion is on the post from @steemitblog saying that it is not necessary to use a 512 GB server for a full node. Specifically this part:

A technique that we have been using to lower the memory requirements on a “full node” (one with everything including account history), is to split the API node into two servers. One server runs only “account history,” and the other server runs everything else. This allows both servers to use less than 256 GB RAM, instead of running everything on a 512 GB RAM server.
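For context, the split they describe comes down to which plugins each steemd instance loads. A minimal sketch, assuming the stock steemd plugin names (exact plugin lists and paths vary by version, so treat everything below as placeholders rather than their actual configs):

```
# Hypothetical illustration of the two-server split.
# Plugin names are from the stock steemd plugin set; adjust for your version.

# Server A: account history node
cat >> /steem/ah-node/config.ini <<'EOF'
plugin = webserver p2p json_rpc account_history account_history_api
shared-file-dir = /dev/shm
EOF

# Server B: everything else (no account_history)
cat >> /steem/api-node/config.ini <<'EOF'
plugin = webserver p2p json_rpc database_api condenser_api block_api
plugin = account_by_key account_by_key_api tags tags_api follow follow_api
plugin = market_history market_history_api
shared-file-dir = /dev/shm
EOF
```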


You can do it with less than 512 GB, but your replay times will suffer and are highly dependent on disk I/O speeds. I tried with RAID 0 NVMe at 3500 MB/s and it took 3.5 weeks to do a replay.

As @themarkymark wrote, nobody believes @steemitblog's results.

I have never successfully gotten an NVMe (without /dev/shm) server to replay without crashing. It is also slow as molasses.
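For comparison, the RAM-backed approach most people use is roughly the following. This is only a sketch: the tmpfs size is an example, and the config path depends on where your steemd data directory lives.

```
# Enlarge the tmpfs at /dev/shm and point steemd's shared memory file at it.
# The size must be larger than your shared-file-size.
sudo mount -o remount,size=300G /dev/shm

# Add the setting to the config.ini in your steemd data directory
echo 'shared-file-dir = /dev/shm' >> config.ini

# Replay with the state file living in RAM
./steemd --replay-blockchain
```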

At @privex we're experimenting with high quality NVMe drives and CPUs with good single-core performance to try to make it more scalable. We think it may be possible to get half-decent performance on a non-RAM node with 4 to 5 NVMe drives in RAID 0, using XFS as the file system, storing the blockchain on a separate SSD, keeping the boot drive on another SSD, and applying various XFS tweaks, e.g. disabling access times and moving the journal onto the boot SSD so that it does not impact NVMe performance.
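Concretely, that layout looks something like the sketch below. Device names, the log partition, and paths are assumptions for illustration, not our exact setup:

```
# 4x NVMe in RAID 0
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# XFS on the array, with its journal (log) on a spare partition of the boot SSD
sudo mkfs.xfs -l logdev=/dev/sda3,size=512m /dev/md0

# Mount with access times disabled and the external log
sudo mkdir -p /steem/shm
sudo mount -o noatime,nodiratime,logdev=/dev/sda3 /dev/md0 /steem/shm

# Keep the data directory (block_log) on a separate SSD and point only the
# shared memory file at the NVMe array
echo 'shared-file-dir = /steem/shm' >> config.ini
```

The external journal plus noatime mainly keeps small synchronous metadata writes off the array while steemd is hammering it with random I/O.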

It is a lot more difficult than using RAM, but we're quickly approaching 512 GB, and the next tier of servers can triple in price...

From their publication I had the impression that the scaling issues were not as severe as they have been portrayed in other blogs. At some point I considered setting up a full node, but I realized that I need to learn a lot more and the cost is now beyond my budget. I appreciate that you took the time to respond.

I think he touched on it with this:

Another issue many RPC nodes face is stale connections. This may be related to poor networking code within steemd or third-party libraries for interfacing with Steem.

...stale connections can eat RAM too. Having more RAM than necessary is always ideal.
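If you want to see how much of that is hitting your node, something like this is a reasonable starting point. Port 8090 is assumed to be steemd's RPC/websocket port, and the sysctl values are examples only:

```
# Count established connections to the RPC port
ss -tan state established '( sport = :8090 )' | wc -l

# Tighten kernel keepalives so dead peers are reaped sooner
sudo sysctl -w net.ipv4.tcp_keepalive_time=600
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=60
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5
```

Tightening keepalives won't fix whatever is wrong in the networking code itself, but it does stop long-dead peers from pinning memory indefinitely.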