Bad actors might be one of the most useful forces Hive has right now.
Not because they add value—they don’t—but because they expose every weakness in the system. They force us to confront what isn’t working, and whether we’re willing to fix it.

Hive doesn’t struggle because of a lack of ideals. If anything, we have too many. The real issue is our reluctance to balance those ideals with practicality. When we build apps, when we try to grow this ecosystem, we can’t cling so tightly to decentralization that we sacrifice user experience or shut the door on broader adoption.
Censorship resistance is one of Hive’s strongest pillars. It’s the reason many of us are still here. But in practice, even fundamental freedoms come with limits. The idea that a system can be completely open without consequence isn’t strength—it’s naivety.
And we’re seeing the cost of that naivety play out in real time.
There are accounts that contribute nothing and exist purely to spam—hundreds, sometimes thousands of repeated messages in bursts. This isn’t edge-case behavior anymore; it’s persistent enough to degrade core features. Notification systems like F.R.I.D.A.Y become unusable without aggressive filtering. Frontends like PeakD have already stepped in to patch the problem visually, making comment sections readable again—but these are surface-level fixes. The underlying issue remains untouched.
At the protocol level, the incentives are still misaligned.
Resource credits are too cheap. That’s partly a function of low network activity, but even if usage spikes, it won’t matter much. Some bad actors already hold enough stake to operate above any realistic RC pressure. We’ve seen this before—cost increases alone don’t solve abuse when the abusers are well-capitalized.
Reputation was supposed to be the counterweight, but in its current form, it doesn’t work. Some accounts are effectively immune. No amount of downvoting meaningfully impacts them. A system that can’t reflect behavior isn’t a reputation system—it’s decoration.
That’s why proposals like the one from @moeknows still stand out. A web-of-trust model, where users don’t just vote for witnesses but actively shape reputation, introduces something Hive is currently missing: accountability that evolves.
In that model, bad behavior doesn’t just get ignored—it gets progressively more expensive. Reputation drops, costs rise, and abuse becomes harder to sustain. More importantly, it allows for recovery. If someone changes course, the system can reflect that too. Accountability without the possibility of redemption just creates a different kind of failure.
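The mechanics described above can be sketched in a few lines. This is a toy model only, not Hive's actual RC or reputation implementation; every number, name, and formula here is an illustrative assumption: trusted peers' ratings move an account's reputation, low reputation scales transaction cost upward, and scores decay back toward neutral so recovery stays possible.

```python
# Toy sketch of a web-of-trust reputation loop (hypothetical model,
# NOT Hive's actual RC/reputation code). All constants are made up.

from dataclasses import dataclass

@dataclass
class Account:
    name: str
    reputation: float = 0.0  # -1.0 (distrusted) .. +1.0 (trusted)

def rate(target: Account, rater_weight: float, score: float) -> None:
    """Fold a trust-weighted rating (-1..+1) into the target's reputation."""
    target.reputation += rater_weight * score
    target.reputation = max(-1.0, min(1.0, target.reputation))

def posting_cost(base_rc: float, acct: Account) -> float:
    """Bad reputation makes every transaction progressively more expensive."""
    return base_rc * (1.0 + 4.0 * max(0.0, -acct.reputation))

def decay(acct: Account, factor: float = 0.9) -> None:
    """Each epoch, reputation drifts back toward neutral: the redemption path."""
    acct.reputation *= factor

spammer = Account("spammer")
rate(spammer, rater_weight=0.5, score=-1.0)  # a trusted curator flags abuse
print(posting_cost(10.0, spammer))           # 30.0 — cost rises above base 10
decay(spammer)                               # changed behaviour lets it recover
```

The point of the `decay` step is the article's "possibility of redemption": without it, one bad stretch would price an account out forever.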
And this is where the idea of invisibility comes in.
Because that’s the direction we’re already heading—just informally.
Users mute. Frontends filter. Notifications get customized. Bit by bit, the community is building its own way to deal with abuse: not by removing it, but by refusing to see it.
Invisibility is becoming the default defense.
The question is whether we acknowledge that reality and design around it—or keep pretending that unlimited openness, with no meaningful friction, is sustainable.
—MenO
We need to separate curation flags from author flags.
Unless we can punish 'bad' curation without punishing 'good' authors the collateral damage is too great, imo.
Even then, the power will have to spread out or we are just spitting into the wind.
It's clear that very few people are accepting of the status quo.
We don't get reports on users by year joined anymore, the numbers must be pitiful.
Handling spam is IMO a front-end problem. Use filters and let the user decide which to apply, like "PeakD filter, Community XYZ filter, own filter", to see posts.
As soon as transactions are priced on reputation, there is even less reason to hold Hive lmao, and it could be gamed too easily.
That works in an environment without rewards.
Unless we can claim back rewards from spam the chain will be useless once the spam bots take over.
People always forget rewards need an upvoter. It is 100x easier to track the upvoting account (as the spam head) and downvote all of it.
Our 'upvoters' can't be assed to do a good job, it's why they pay hw's to clean the coop.
You weren't here for @ cheetah; it got to be too expensive.
Well, I don't see a lot of people upvoting spam (regular users). Do you have an example for me?
I sent you a message in sting.
If you use peakd, you should have a notification, but if not, it's in the conversation box at the top.
yeah those reports look like spam.
That was back when it was ok to pay for stuff out of the pool, but, only for 'them', the restivus got flags.
It's not like 'they' don't know, the people told them as they were leaving, it's no secret, as long as one isn't a newb.
This is why I figure the plan must not be a short term one, most endeavors fail in the first 7 years, but we still got folks here after ten and the protocol is less than half done.
If an account routinely deboosts another account's content without regard to the content then the user should have the option to block the culprit's curation of that account as it affects their feed.
If an account routinely deboosts content that the user supports, then they should have the option to similarly block its effect on their feed.
If an account routinely sends SPAM then the user should have the option to suppress its presence in their feed.
If all these actions are recorded on the chain then all that is left is for the user to also indicate who they trust and how much. Chain them together and you have a feed that is curated by those who the user personally esteems.
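The "chain them together" step could look something like this sketch. It is purely illustrative (the action log, trust weights, and threshold are invented, not any existing Hive front-end API): a front end weights each on-chain mute by how much the viewer trusts the actor who issued it, then hides authors whose combined score falls below a cutoff.

```python
# Hypothetical sketch of a trust-weighted feed filter. The data shapes,
# names, and threshold are assumptions for illustration only.

# On-chain record of user actions: (actor, action, target)
actions = [
    ("alice", "mute", "spambot1"),
    ("bob",   "mute", "spambot1"),
    ("carol", "mute", "gooduser"),
]

# The viewer's personal trust declarations (actor -> weight 0..1);
# carol is untrusted, so her mutes carry no weight for this viewer.
my_trust = {"alice": 1.0, "bob": 0.7}

def visibility(target: str) -> float:
    """Sum trusted actors' mutes against a target; 0.0 means fully visible."""
    score = 0.0
    for actor, action, tgt in actions:
        if tgt == target and action == "mute":
            score -= my_trust.get(actor, 0.0)
    return score

def curated_feed(authors, threshold=-1.0):
    """Keep authors whose trust-weighted score stays above the threshold."""
    return [a for a in authors if visibility(a) > threshold]

print(curated_feed(["spambot1", "gooduser"]))  # → ['gooduser']
```

Because the viewer picks their own trust weights, this is a recommendation layer, not imposed censorship: two users with different trust maps see different feeds from the same on-chain data.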
No.
The person getting flagged has broken a cultural norm.
Either fit in with the crowd, or suffer the consequences.
You can do this now with mute.
I'm sure that content management will improve as AI is used to code better discovery options.
That's one way of looking at it, but who decides what is a violation of a cultural norm? The answer, here, is those with the most stake. One would expect this would be in the best interest of that which they have stake in, but people differ in their vision for what is in that interest. Some even have overriding motivations. Though I value such input for the above reason, I don't like anyone imposing on me who I should pay attention to.
And, I don't think I'm alone in this sentiment. That is why I believe the user would feel valued if the content curation was recommended but not imposed as take-it-or-leave-it. It is, in fact, not imposed at the low level. Bringing this into the high level front-end is the logical exercise of this feature that attracted me to a block-chain social to begin with.
As for mute, it could be helpful but it is a kludge. Even assuming you'd never want to see someone's content again, it is still a stretch to think you'd adopt another's mute list. Blind censorship doesn't look to be Hive's culture, at least to me. However, the mute's use of the follow operation was clever, and it may be a good way to declare your interest in a user's blog, curations, and also their follow operations to build an accurate web of trust to inform your feed.
The crowd.
Plenty of folks have muted me, some of them names you would know.
I agree, but our crowd isn't that sophisticated, yet.
We are still a cudgel.
I'm surprised that we don't see more spam, but it could be bad if someone decides that Hive is a good target. We don't want censorship, but being able to opt into mute lists maintained by those we trust could help. If the cost of accounts and posting is low then we need options to deal with it.
As a bit of a decentralisation/anti-censorship purist, I feel the decision should be in the user's hands. If enough users block a spammer, it becomes not worth their time and effort to spam us. I agree with you that invisibility is the solution!
But for that, the tools need to be user-friendly enough that any muppet can use them. This is where front-end developers and Hive application coders come in. I can see a situation where a front end allows blocks to be implemented with a single key-click for users, tags, and even whole communities. But also a highly visible dashboard section where the lists can be visited to undo accidental clicks. To back this up, the same kind of places that enable automated curation should (and might already...) have the ability to make blocklists like curation trails, so that new users can block a whole bunch of known spammers with a single click.
Would love to have a blacklist at witness level: simply be able to ignore the transactions they use to spam. But no, it has to be decentralised, uncontrolled chaos in the end. Sooner or later we might have to change things.
yes, be invisible my friend, be invisible. wait.. that's not the quote.
You can't stop spamming; that will always be there. Of course, that's what makes this platform a decentralised protocol. But you can only control what you wanna see, and hopefully, in the future, you would also be able to control who can drop replies to your post.
The latter one would be nice. But it will also become a sort of censorship.
But it might be necessary, and people can always post anyhow.
I would be fine if fast posting somehow carried a bigger cost. Humans don't post as fast as scripts... and this is already independent from other transactions. So it could work... or at least it would force spammers to buy more HIVE to do the same idiocy.
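A minimal sketch of that pricing idea, assuming a simple linear penalty (not an actual Hive RC rule; the 60-second window and 10x ceiling are invented numbers): a post that follows the previous one at machine speed costs many times more than one at human pace.

```python
# Rough sketch of rate-based posting cost (hypothetical, not Hive's RC
# system). Posts faster than 60s apart cost more, scaling with burst rate.

def post_cost(base: float, seconds_since_last: float) -> float:
    """Price a post higher the faster it follows the previous one."""
    if seconds_since_last >= 60.0:
        return base
    # Linear penalty: posting instantly costs 10x; at 60s it is back to 1x.
    penalty = 1.0 + 9.0 * (1.0 - seconds_since_last / 60.0)
    return base * penalty

print(post_cost(1.0, 300.0))  # human pace: 1.0
print(post_cost(1.0, 3.0))    # script burst: ~9.55
```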
I think this is right
#hive #posh
The only way to truly "fix" spam is to fix the underlying psychology and mindset of spammers. And that is not going to happen...
From where I am sitting, that leaves us with figuring out how to best keep our decentralized ideals while tackling something that's pretty much a systemic issue.
Which means, perhaps, creating a vastly improved and more robust toolset for individuals. That way, blocking/muting/banning tools are in the hands of each content creator. For example, I would be able to simply block "Spammer X" from engaging with my content... perhaps by making my content invisible to THEM, like a "hide" feature.
Consider that when I block someone on Facebook, I can't see their content anymore, and they can't see mine. That is my individual decision, not a Facebook (centralized) decision.
That said, pretty much every system will have its opposers and abusers. Meaning... there's no perfect solution...
=^..^=
With so much effort we come and create content, and a spammer destroys it and comments, 🧐 sabotages it...
Is this not a frontend problem only? Since spam will always happen, frontends should just not display spam accounts (and for the freedom-of-speech people, maybe something like a "use PeakD filter" on/off toggle). That's the easiest way.
"It gets progressively more expensive. Reputation drops." Well, stuff like this, like two-party pricing, would really make me power down and sell everything.
A transaction costs what it costs; it shouldn't cost more or less because someone likes you or not. That sounds pretty communist to me.
As soon as transactions are not priced on demand, I see no reason at all to use a blockchain for content storage. Btw, it could be gamed so easily.
Short example: buy a used account with high reputation and spam with rewards off.
Practicality wins over dogma sometimes