12,000€ (CAN$18,000) per year is more than @anyx paid to buy a 512 GB full node server with a Xeon Gold CPU!!
Why would you pay more per year than the total capital cost of the item?!
Not to mention my server was absolute overkill.
It does have a monthly cost for colocation hosting, but that is a fixed cost of only a couple hundred per month, and it also buys things like redundant power and cooling. Indeed, in the long run, purchased assets are the way to go.
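A back-of-the-envelope sketch of that comparison (all figures are assumptions taken from this thread, not quotes from any provider):

```python
# Illustrative total-cost-of-ownership comparison: renting in the cloud
# vs. buying the server and paying colocation fees. All numbers are
# assumptions based on the figures mentioned in this thread.

CLOUD_COST_PER_YEAR = 12_000   # EUR/year, the figure quoted above
SERVER_CAPITAL_COST = 12_000   # EUR one-time, roughly one year of cloud spend
COLO_COST_PER_MONTH = 200      # EUR/month, "a couple hundred/mo"

def cumulative_cost_cloud(years: float) -> float:
    return CLOUD_COST_PER_YEAR * years

def cumulative_cost_owned(years: float) -> float:
    return SERVER_CAPITAL_COST + COLO_COST_PER_MONTH * 12 * years

for years in (1, 2, 3, 5):
    cloud = cumulative_cost_cloud(years)
    owned = cumulative_cost_owned(years)
    print(f"{years} yr: cloud {cloud:>8,.0f} EUR | "
          f"owned {owned:>8,.0f} EUR | saved {cloud - owned:>8,.0f} EUR")
```

Under these assumptions the owned server breaks even within the first year and only pulls further ahead after that.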
The last 5 items are only necessary because of huge centralized data centers. They are not needed when running fewer than 10 machines as part of a decentralized system. Redundant internet is easy and cheap to add, and a redundant power supply is only necessary in places with regular power outages; once in ten years is typical for central Tel Aviv, where I live. 24-hour monitoring is automatic if the server is in your home and you also work from there.
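And even when you are not at home, "monitoring" fewer than ten machines can be a cron job. A minimal sketch, with hypothetical hostnames and ports:

```python
#!/usr/bin/env python3
# Minimal reachability check for a handful of self-hosted nodes.
# Hostnames and ports below are hypothetical examples; run this from
# cron every few minutes and alert however you like (mail, SMS, ...).

import socket

NODES = [
    ("node1.example.org", 2001),   # p2p port, hypothetical
    ("node2.example.org", 8090),   # RPC port, hypothetical
]

def is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in NODES:
    status = "up" if is_up(host, port) else "DOWN"
    print(f"{host}:{port} is {status}")
```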
Well, I have around 120 desktops (in my office), software routers instead of hardware routers, etc., all of them custom built, so I tend to agree with what you say. But with this expensive hardware and the network requirements (read: QoS, BGP poisoning avoidance, DDoS protection), it's not quite possible to host these servers in our homes. For example, I am sure @anyx can handle most of the hardware/CPU/memory aspects; I can't. But I can handle a lot of the network aspects, which others may not be able to. The witnesses are a mix of sysadmins, researchers, programmers, and marketers, so we can't expect them to run the 17 + 2 + 2 servers in their home offices. Even if everything including BGP is taken care of, I don't think DDoS is something we can monitor and respond to that easily (unless we have an OpenBSD box and are very handy with firewall rules).
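For what it's worth, the per-source connection rate limiting pf offers (e.g. its max-src-conn-rate state option) is conceptually just a token bucket. Here is a rough Python illustration of the idea, not pf syntax and no substitute for a real firewall:

```python
# Conceptual sketch of per-source connection rate limiting, the kind
# of policy a firewall like OpenBSD's pf enforces. Implemented here as
# a token bucket per source IP, purely for illustration.

import time
from collections import defaultdict

RATE = 15          # allowed new connections per source...
PER_SECONDS = 5.0  # ...per this many seconds (like pf's 15/5)

class SourceLimiter:
    def __init__(self) -> None:
        self.tokens = defaultdict(lambda: float(RATE))  # bucket per source
        self.last = {}                                  # last-seen timestamps

    def allow(self, src_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last.get(src_ip, now)
        self.last[src_ip] = now
        # Refill tokens in proportion to elapsed time, capped at RATE.
        self.tokens[src_ip] = min(RATE, self.tokens[src_ip] + elapsed * RATE / PER_SECONDS)
        if self.tokens[src_ip] >= 1.0:
            self.tokens[src_ip] -= 1.0
            return True
        return False  # over the rate: a firewall would drop or blocklist

limiter = SourceLimiter()
for i in range(20):
    # A burst from a single source: the first 15 pass, the rest are cut off.
    print(i, limiter.allow("203.0.113.7"))
```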
All that being said, I don't think we really need to host all the witness nodes in our backyards to save costs. Proper AWS cost optimization can save a lot of money. Personally, I don't see any reason why the full nodes are on AWS, where every disk access is billed. Once the development is done, those instances can be moved to data centers. AWS and the cloud, as we know them, are for elastic needs (OPEX rather than CAPEX), and in the case of Steem full nodes that kind of "elastic scaling" is not possible, due to the architecture. Maybe sharding like the NEAR protocol (https://nearprotocol.com) is the way to get there, but it's not in the near term.
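To put a rough number on the "every disk access is billed" point: AWS's older magnetic ("standard") EBS volumes were priced per million I/O requests. All figures below are assumptions for illustration, not a current quote; check today's pricing:

```python
# Assumption-laden estimate of per-I/O billing on a magnetic EBS volume.
# ~USD 0.05 per million I/O requests is the historical us-east-1 rate;
# the sustained IOPS figure is a guess for a busy full node.

PRICE_PER_MILLION_IO = 0.05   # USD, assumed historical rate
SUSTAINED_IOPS = 2_000        # I/O per second, assumed

ios_per_month = SUSTAINED_IOPS * 60 * 60 * 24 * 30
cost_per_month = ios_per_month / 1_000_000 * PRICE_PER_MILLION_IO
print(f"{ios_per_month:,} I/Os/month -> ~${cost_per_month:,.2f}/month just for disk access")
```

Under those assumptions that is roughly $260/month for I/O alone, on top of the instance itself, which is exactly the kind of cost a data-center box with local disks never incurs.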
To conclude, it's very much possible to run the infrastructure in traditional data centers at much lower cost than on AWS. Steemit.com's workload is not something that needs cloud computing.