4 TB is, as far as I know, for the full hived + HAF + hivemind stack. You may not need the whole stack; without hivemind it should take around 2 TB. You can use the block-log-split setting to avoid storing the whole block log, since you will still have access to all of that data through block_api via HAF. Filtering data out of HAF could reduce requirements further, but I'm not sure how to define those filters for your use case; today about half of all operations are custom_json, and you probably don't need most of them. I'm also not sure how account history affects storage requirements.
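For illustration, a minimal config.ini sketch of the block-log-split idea; the exact value and its semantics (how many recent parts of the split block log get retained) are assumptions on my part, so check the option's description for your hived version before relying on it:

```ini
# config.ini sketch -- value is an assumption, verify against your hived version
# Keep only the most recent part(s) of the split block log instead of the full
# history; older blocks remain queryable through block_api backed by HAF.
block-log-split = 1
```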