Since its inception, Hive has gained and lost public API nodes. Recent growth has increased the demand on the current infrastructure and the rotation of public nodes, making the move to scale capacity a no-brainer.
The new node is hived with account history + hivemind via jussi, and can be used by pointing your app to https://api.hive.blue. It joins our network of witness primary/backup nodes (normally kept at one primary and two backups for the duration of the year, and increased during forks), our seed node seed.hive.blue:2001, and other supporting infrastructure.
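For app developers, switching to the new endpoint is a one-line URL change. Here is a minimal sketch, using only the Python standard library, of calling the node's JSON-RPC interface; `condenser_api.get_dynamic_global_properties` is a standard Hive API method, and the helper names are our own illustration, not part of any official client:

```python
import json
import urllib.request

HIVE_API = "https://api.hive.blue"  # the new public node announced above


def build_rpc_payload(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body for a Hive API node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    }).encode()


def hive_rpc(method, params=None, url=HIVE_API):
    """POST a single JSON-RPC call and return its 'result' field."""
    req = urllib.request.Request(
        url,
        data=build_rpc_payload(method, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["result"]


# Example: fetch global chain state (head block number, supply, etc.)
# props = hive_rpc("condenser_api.get_dynamic_global_properties")
# print(props["head_block_number"])
```

Any library that lets you set the API URL (beem, dhive, hive-js, etc.) can be pointed at the same endpoint.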
Is there a problem?
A problem doesn't need to develop for there to be a proactive solution. Hive is doing great right now and always has been. We've been on a solid trajectory upwards in terms of utility and growth as a decentralized Web3 ecosystem of both the present and the future. There is a growing number of contributors to the technical side and as always, more and more people are choosing to build their unique visions on Hive. To help them achieve this, we need to always seek to improve our infrastructure and make it as reliable and resilient as possible.
Proactive scaling means increasing various infrastructure elements, updating and strengthening the required processes and servers, and diversifying our individual capabilities. In practice, this means that if an important application runs on an overburdened/oversubscribed VPS, we move it to a larger or even a dedicated instance. Potatoes are alright for things that don't see constant use, but we believe they're not acceptable for key infrastructure.
About api.hive.blue
We are looking forward to seeing what kind of throughput we can push through this node, offloading and relieving some of the pressure on existing nodes. We monitor our infrastructure actively, both to share interesting metrics with the rest of the Hive API node operators and to identify any performance or security concerns that may arise. We have seen directed attempts to degrade API performance or overwhelm services via DDoS on existing API nodes; such activities will be closely monitored and deterred.
Technical Specs
- API
  - Located in sunny Finland DC
  - 1 Gbps IPv4 and IPv6 connectivity
  - i9-9900K CPU
  - 128 GB RAM
  - 2x 1 TB NVMe
  - 2x 1 TB SSD
- Seed / Related services
  - Germany DC
  - 1 Gbps IPv4 and IPv6 connectivity
  - Ryzen 9 3900 CPU
  - 128 GB RAM
  - 2x 512 GB NVMe
Like what we're doing? Support us by voting for the @guiltyparties witness.
Do you want us to add it to the peakd.com node options?
Yes please.
Thanks. I added the node to https://beacon.peakd.com/ and I'll check the results for a couple of days to be sure everything works as expected before adding the node to peakd.com 👍
Good news! We need good infrastructure supporting this chain and handling demand. Thanks.
128 GB
What's a normal witness node?
Are all witness nodes an API? Is a Hive witness API node a thing, or am I just making up compound words? How is this related to the EOSIO history stuff @someguy123 was working on for the @privex gateway?
There's a huge education gap with myself and other users about how Hive and EOSIO actually work, because of a lack of basic computer science and networking knowledge.
One day we'll make it as easy as Anunnaki Pokemon hieroglyphics... with a Hive chain built from scratch using stone tools and doped processor stencils.
At this rate, Hive will continue to grow strongly. The trend of 2022 must be Web3.
Lol increase in what?
Users? I want to know more about new accounts and resource credits, and how many accounts we can create... how much Hive Power a user needs to be comfortable for life with posting, etc... 15 HP?
Costs 3 HIVE to make an account, but then you need like 15 HP to be comfortable, right? Lol
Create a Hive 0 account. I just started lending 20 HIVE.
@guiltyparties, thank you so much for sharing this. I totally agree with you: Web3 is the present and the future. I also see a lot of projects building on the Hive blockchain. Hive has always been the best Web3 blogging blockchain I have seen so far, and because of this there's always a need to expand and upgrade the tech.
Great way to end the year, with more infrastructure to make this a robust and fail-safe chain.
Spoken like a true high school essay bullshitter lol
Good development on the project... Nice one.
Awesome work, man. Thank you for continuing to add to the Hive blockchain.
Thanks for your forced fluffy comment lol, it's like all filler. But look, I'm still upvoting it.
I wanna get a special class on the fundamentals of what a Steem/Hive witness does again, because I really do see celebrities like Grimes using their fans to fork Hive, because they really are that smart.
Terrible... I cannot support this shit...
Downvoted and reported.
This is really awesome and a work well-done, I appreciate every part of it.
Really appreciated! I’ve come to understand how important API servers are over my last year developing @podping and @v4vapp.
Great work on podping and v4vapp by the way, keep it up!
Question: what are the "Related services"? Witness and RC plugins? Basically, everything that does not involve history and the usual blockchain API requests? Or something else I am missing?
I am thinking of building a new mobo with 128 GB RAM at home, and I am trying to understand if the seed/witness/others could be on a node with less than 128 GB, say 32 or up to 64 GB, and then use the 128 GB one for the history (HAF) and normal API stuff.
Then get a proper firewall, proper physical link isolation, etc., and serve the Oceania/Asia islands zone, since from here access to Europe kind of "sucks" LOL... no one can cut down this distance latency, and until I get Elon Musk's cheap Starlink satellites, it's still going to take a while.
Studying this properly before I regret doing something weird =) And gathering enough experience with how the API gets attacked to understand how to manage this myself.
"Related services" above doesn't refer to plugins but to other non-node infrastructure projects. I just realized I didn't specify what they are running on (they are not part of the API or Seed boxes).
The main node (API + Hivemind) doesn't need 128 GB and can run with 48 GB. You may go with 128 GB, but that would be overprovisioning. Definitely a good idea to launch out of Asia.
Thanks, that helps. I still need to run my own experiments, but I will be targeting HAF, and if by then hivemind is running on HAF, I will run it too.
48 GB is way better and I can sustain that easily. Do you know if with HAF that might get lower or higher? Ping @blocktrades? Or anyone else who knows.
HAF should require less RAM than an existing full node, because there will be no need to run an account history node (this will be replaced by running hafah on your HAF server).
👍