Who’s Responsible When Online Platforms Enable Harassment?

There’s a debate that’s been running on the internet for nearly two decades: when someone gets harassed or hurt online, whose fault is it? The platform’s? The creator’s, for posting the content? The reader’s, for choosing to read it?

This essay argues that the debate itself is the problem — and that keeping it going actually serves the interests of the powerful at the expense of everyone else.

The Shell Game

When someone runs a coordinated harassment campaign against a creator online — say, organizing dozens of people to mass-report their posts — each individual report looks, to the platform, like normal community moderation. The system processes each one and removes the content. Working as intended.
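To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of threshold-based auto-removal logic a reporting pipeline might plausibly use. The function, the counter, and the threshold are hypothetical, not any specific platform's implementation; the point is that nothing in the logic can distinguish thirty independent reports from thirty coordinated ones.

```python
from collections import defaultdict

AUTO_REMOVE_THRESHOLD = 30  # hypothetical: reports needed before automatic removal

report_counts = defaultdict(int)  # post_id -> number of reports received so far


def handle_report(post_id: str, reporter_id: str) -> str:
    """Process one report exactly as the system is designed to:
    count it, and remove the post once enough reports accumulate."""
    report_counts[post_id] += 1
    if report_counts[post_id] >= AUTO_REMOVE_THRESHOLD:
        return "removed"   # a coordinated campaign crosses this line easily
    return "pending"       # each individual report looks like normal moderation


# A harassment campaign: thirty accounts each file one "legitimate" report.
for i in range(30):
    status = handle_report("creator_post_42", f"account_{i}")

print(status)  # "removed" -- every step worked exactly as intended
```

Every individual call is legitimate by the system's own definition. Only the aggregate pattern is an attack, and the aggregate is exactly what this logic never examines.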

The platform didn’t “do” anything wrong, technically. The harassers used the tools correctly. The creator loses their work, maybe their livelihood. Nobody’s accountable.

This isn’t an accident or an edge case. It’s a structural feature.

How We Got Here

The idea that creators — not platforms — bear primary responsibility for preventing harm crystallized sometime in the late 2000s. The most plausible explanation is economic: platforms realized they couldn’t afford to moderate content themselves, legally or financially, and so they built systems where users police each other. Legal frameworks like intermediary liability protections reinforced this by insulating platforms from responsibility for third-party content. Early internet culture’s strong ideological commitments to decentralization and free expression provided the philosophical vocabulary. The justifications, in other words, came from multiple directions — but the underlying driver appears to have been economic feasibility, not principled reasoning about where responsibility best belongs. The philosophy arrived to legitimate a decision that had already been made.

That sequencing matters. Once a norm gets established and then naturalized, it starts to look timeless and inevitable. A useful way to test whether something is a natural law versus a constructed norm is to ask: who benefits? Natural laws don’t have beneficiaries — they apply indifferently to everyone. Constructed norms always serve someone, and the persistence of a norm that benefits specific parties is evidence, though not proof, that those parties have interests in maintaining it.

Here, the beneficiaries are clear: platforms avoid moderation costs, advertisers get plausible deniability about harmful content, and regulators avoid having to make difficult decisions about liability. These parties benefit whether “creators are responsible” or “readers are responsible” wins the philosophical debate. The debate just needs to keep going.

Clean Tools, Dirty Network

Platforms offer features like reporting buttons, blocking tools, and content warnings. Each one, examined on its own, looks neutral and reasonable. But these features don’t exist in isolation — they’re connected to systems designed to maximize engagement, capture attention, and monetize behavior.

When a harassment campaign weaponizes the reporting system, the platform can truthfully say that the tools were used exactly as designed. That’s the point. The clean-looking tools provide cover for the extractive system underneath.

Think of it like a legitimate storefront attached to a problematic supply chain. The store itself may be tidy and well-run. That tidiness is what makes the supply chain invisible — and what makes the store’s operators defensible when someone asks about the supply chain.

The View Depends on Where You Stand

Here’s the most uncomfortable part of the argument: the same feature — say, community reporting — genuinely looks like different things depending on your position, and both observations are accurate.

If you’re an institution (a platform, a regulator, an advertiser), coordinated mass-reporting looks like community self-governance working. More than that: from an institutional position, user-driven moderation actively reduces your exposure — to cost, to liability, to regulatory scrutiny. It is not merely harmless from where you stand. It is beneficial.

If you’re a creator with no power and no exit — someone whose whole audience lives on one platform — the same system looks entirely different. Your identity and livelihood are hostage to tools that can be weaponized against you at any time, with no meaningful recourse. The system that reduces institutional exposure directly produces your vulnerability.

This is why the perspectival gap doesn’t close with better communication. The two groups aren’t misunderstanding the same system. They are experiencing structurally different systems — different in what the tools do for them — even though they’re using the same platform. And that divergence may be load-bearing: if institutions fully perceived the system as vulnerable creators do, the current arrangement would face a legitimacy crisis it hasn’t had to face. The debate continuing to seem like a genuine philosophical disagreement is what prevents that crisis.

The Strongest Counterargument

Smart, good-faith people push back on this analysis: Sure, platforms aren’t perfect, but they’re improvable. Better abuse detection, stronger blocking tools, clearer standards — these things help. You’re mistaking fixable implementation problems for unfixable structural ones.

This objection is partly right. Better tools do help some people. Abuse detection improvements do reduce some harassment. The objection deserves its full weight before it gets answered.

But here’s where it falls short: improving the reporting system makes that feature work better. It does not change the relationship between that feature and the broader system it legitimates. Fewer creators get wrongly removed; the structural arrangement that made them vulnerable remains intact.

What would structural change actually look like, as opposed to implementation improvement? It would mean altering the underlying incentive architecture — not just the surface features. For example: modifying how virality mechanics amplify coordinated behavior, so mass-reporting campaigns don’t get the same algorithmic lift as organic engagement. Or building portability between platforms so that creators aren’t economically captive to a single audience location, which would dissolve the lock-in that makes them vulnerable in the first place. Or changing how reporting volume is weighted in automated enforcement, so that volume itself — the signature of coordination — triggers human review rather than automatic action. These are not feature improvements. They are changes to what the system optimizes for and who it protects.
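To illustrate the last of those ideas, weighting report volume itself as a coordination signal, here is a minimal sketch under stated assumptions: the burst window, both thresholds, and the review queue are hypothetical, and real coordination detection would need far richer signals than raw velocity.

```python
import time
from collections import defaultdict, deque

BURST_WINDOW_SECONDS = 3600   # hypothetical: look at reports from the last hour
BURST_THRESHOLD = 20          # hypothetical: this many reports that fast looks coordinated
AUTO_REMOVE_THRESHOLD = 30    # hypothetical: lifetime reports before automatic removal

total_reports = defaultdict(int)      # post_id -> all reports ever received
recent_reports = defaultdict(deque)   # post_id -> timestamps within the burst window


def handle_report(post_id: str, now: float | None = None) -> str:
    """Treat report volume as a coordination signal: a burst of reports
    pauses automation and routes the post to a human instead of removing it."""
    now = time.time() if now is None else now
    total_reports[post_id] += 1

    window = recent_reports[post_id]
    window.append(now)
    while window and now - window[0] > BURST_WINDOW_SECONDS:
        window.popleft()

    if len(window) >= BURST_THRESHOLD:
        return "escalate_to_human_review"   # the signature of coordination stops auto-action
    if total_reports[post_id] >= AUTO_REMOVE_THRESHOLD:
        return "removed"                    # slow, organic accumulation still auto-resolves
    return "pending"


# A burst of coordinated reports escalates instead of removing the post.
t0 = time.time()
for i in range(25):
    status = handle_report("creator_post_42", now=t0 + i)
print(status)  # "escalate_to_human_review"
```

The design choice is the inversion: in the earlier sketch, volume accelerates automated action; here, volume is precisely what suspends it.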

The distinction between implementation fixes and structural changes also clarifies why the debate doesn’t resolve. If the problem were bad implementation, you’d expect disagreements about how fast to fix things. Instead, institutions and affected creators disagree about whether the tools are harmful at all — which is the pattern you’d expect if the two groups are experiencing structurally different systems, not debating the same one at different speeds.

What Follows From This

For researchers: The most useful work is empirical — mapping whether platforms with different architectures produce measurably different outcomes for creators over time. Useful variables would include creator departure rates correlated with policy changes, rates of coordinated false reporting across different enforcement designs, and how engagement mechanics interact with moderation volume. The structural patterns identified here are inferred from how systems are built; whether they match real-world outcomes is a genuinely open question.
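As a sketch of what that empirical mapping might look like in practice, here is a hypothetical analysis skeleton. The dataset, the file name, and every column are assumptions about how such data could be organized, not a description of any existing study.

```python
import pandas as pd

# Hypothetical dataset: one row per platform per month, with columns assumed here:
# platform, month, creator_departures, policy_change (0/1),
# coordinated_false_reports, engagement_volume, moderation_actions.
df = pd.read_csv("platform_outcomes.csv")

# Do creator departures move with policy changes, per platform?
departures_by_policy = (
    df.groupby(["platform", "policy_change"])["creator_departures"].mean()
)

# Do engagement mechanics and moderation volume track each other?
engagement_vs_moderation = (
    df.groupby("platform")[["engagement_volume", "moderation_actions"]].corr()
)

print(departures_by_policy)
print(engagement_vs_moderation)
```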

For platform designers and policy advocates: Be honest about what feature improvements actually do. Improving a tool that’s being weaponized is real work with real benefits — and it is not the same work as changing the structural relationship between that tool and the harassment economy it enables. Treating them as the same obscures which problem is being solved. A platform that improves its abuse detection while creator vulnerability and harassment outcomes remain unchanged has improved a feature. It has not changed the structure.

For creators in vulnerable positions: The structural analysis doesn’t generate good prescriptions here, and pretending otherwise would be its own kind of dishonesty. Understanding why appeals to platform policy usually fail (the dispute isn’t really about policy), why exit is difficult (that’s a feature of the system, not an oversight), and why the people administering the tools often genuinely don’t see harm (they are experiencing different incentives, not just showing insufficient empathy) doesn’t make the situation easier. What it does is clarify that the path forward requires collective action at a scale that analysis alone can’t prescribe — and that the people most affected need to be the ones holding decision-making authority about what that looks like.


The core claim is simple, even if the supporting structure is dense: the debate about who is responsible for online harm is not a philosophical question awaiting a philosophical answer. It is a structural arrangement that benefits specific parties, and the debate continuing to seem philosophical — unresolved, genuine, worth having — is how the arrangement sustains itself. The question “whose fault is it?” doesn’t have a stable answer because the system is designed so that no answer threatens the actors who benefit from the question remaining open.
