We talk about platform accountability a lot. Congressional hearings, op-eds, regulatory proposals: the discourse is voluminous. Yet what we actually do about it is limited. I want to argue that the gap between talk and action is not accidental.
Genuine accountability would require platforms to accept responsibility for outcomes that they currently externalize. Content that causes harm, markets that disadvantage sellers, algorithms that shape elections — platforms have largely avoided ownership of these effects.
Regulatory responses tend to import frameworks from traditional media, which do not map well to platform realities. Treating platforms like publishers raises difficult speech concerns; treating them like neutral infrastructure excuses too much.
The accountability gap widens as platforms grow. A platform with billions of users cannot plausibly claim to be a neutral pipe; the scale of curation decisions is itself a form of editorial power.
Mandated transparency about algorithmic decision-making would be a meaningful step: not public exposure of proprietary systems, but audited disclosure to regulators and researchers. This would make possible what is currently impossible, independent verification of how these systems actually behave.
Platform liability for systemic harms, rather than for individual pieces of content, would change incentive structures substantially. Platforms would have reason to design safer systems instead of optimizing for engagement while using legal doctrines as shields against responsibility.