RE: Steemit Update

in #steem · 5 years ago

Well, I have around 120 desktops in my office, software routers instead of hardware routers, etc., all of them custom built, so I tend to agree with what you say. But with this expensive hardware and the network requirements (read QoS, BGP poisoning avoidance, DDoS protection), it's not really feasible to host these servers at home. For example, I'm sure @anyx can handle most of the hardware/CPU/memory-related aspects; I can't. But I can handle a lot of the network aspects, which others may not be able to. The witnesses are a mix of sysadmins, researchers, programmers and marketers, so we can't expect them to run the 17 + 2 + 2 servers in their home offices. Even if everything including BGP is taken care of, I don't think DDoS is something we can monitor and respond to that easily (unless we have an OpenBSD box and are very handy with firewall rules).
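For what it's worth, OpenBSD's pf can take over some of that DDoS triage on its own. A minimal sketch of rate-limiting plus an overload table (the port number and thresholds here are hypothetical; tune them to your actual node traffic):

```
# pf.conf sketch: rate-limit new TCP connections to an RPC port
# (port 8090 and all limits are placeholder values)
table <abusers> persist
block in quick from <abusers>
pass in on egress proto tcp to port 8090 \
    keep state (max-src-conn 100, max-src-conn-rate 15/5, \
        overload <abusers> flush global)
```

Any source opening more than 15 connections in 5 seconds gets dumped into the `<abusers>` table and blocked, with its existing states flushed, which is exactly the "retaliate automatically" behavior that is hard to replicate by watching dashboards.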

All that said, I don't think we really need to host all the witness nodes in our backyards to save costs. Proper AWS cost optimization alone can save a lot. Personally, I don't see any reason why the full nodes are on AWS, where every disk access is billed. Once development is done, those instances can be moved to traditional data centers. AWS and the cloud, as we know them, are built for elastic needs (trading up-front CAPEX for pay-as-you-go OPEX), and in the case of Steem full nodes that kind of elastic scaling is not possible due to the architecture. Maybe sharding like the NEAR protocol (https://nearprotocol.com) is the way to get there, but it's not a near-term option.
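To make the "every disk access is billed" point concrete, here is a back-of-envelope cost model. All the numbers (instance rate, sustained IOPS, per-million-I/O price, flat dedicated-server rate) are made-up placeholders, not real AWS or data-center list prices; the point is the shape of the bill, not the exact figures:

```python
# Hypothetical monthly cost comparison: metered cloud instance + per-I/O
# disk billing vs. a flat-rate dedicated server. All prices are
# illustrative placeholders.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_aws_cost(instance_hourly, sustained_iops, price_per_million_ios):
    """Instance time plus metered I/O: a full node hammering its disk
    pays for every operation."""
    compute = instance_hourly * HOURS_PER_MONTH
    total_ios = sustained_iops * 3600 * HOURS_PER_MONTH
    io_charge = (total_ios / 1e6) * price_per_million_ios
    return compute + io_charge

def monthly_dedicated_cost(flat_rate):
    """A leased server is a flat rate regardless of disk activity."""
    return flat_rate

aws = monthly_aws_cost(instance_hourly=2.0, sustained_iops=500,
                       price_per_million_ios=0.05)
dedicated = monthly_dedicated_cost(flat_rate=900.0)
print(f"metered cloud: ${aws:,.2f}/mo vs dedicated: ${dedicated:,.2f}/mo")
```

The I/O term grows with how busy the node is, while the dedicated box costs the same whether the disk is idle or saturated, which is why a constantly replaying full node is a poor fit for per-access billing.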

To conclude, it's very much possible to run this infrastructure in traditional data centers at a much lower cost than on AWS. Steemit.com's workload is not something that needs cloud computing.