YouTube Knows Why They Throttled You. You Don't. That's the Point.
We're all grateful hostages.
The All-In pod bros lost half their audience and didn’t know why.
The views had cratered on the Summit episodes they thought would crush: Tulsi Gabbard, Mark Cuban, and Tucker Carlson. The hosts immediately suspected the modern bogeyman, shadow-banning. YouTube has a history of clamping down on right-coded content, and these were clearly right-wing guests, but the truth was simpler and worse. Months earlier they'd stopped bleeping profanity and started muting it instead. YouTube's transcript system still "heard" the words, flagged the episodes, and quietly dropped them into Restricted Mode. Schools, offices, public Wi-Fi: anywhere an admin flips that filter, the videos vanish. Not removed. Not flagged. Just invisible.
They called YouTube, and lo and behold, they had enough clout and connections that YouTube helped. All-In learned the difference between muting and bleeping, went back to bleeping, the platform un-restricted the back catalog, and everyone moved on, grateful for the fix.
But YouTube had all the data. Their engineers could trace exactly what triggered the restriction and when. The platform could see the problem the whole time. The creators couldn’t. That asymmetry is part of the business model.
Opacity is the architecture.
We keep treating this stuff like bureaucratic incompetence — bad dashboard UX, no Restricted Mode indicator, no timestamps where the policy tripped, not enough support staff. We’re missing the actual structure of power here. YouTube isn’t withholding information because they’re disorganized. They’re withholding it because knowing the rules in detail would let you optimize for them, work around them, test their boundaries, and, just maybe, build somewhere else.
Here's an example from the annals of tech history. When email first became a thing, it was written as a protocol. You could read an RFC, implement SMTP on your own server, and if you got the verbs right (HELO, MAIL FROM, RCPT TO) you were in. Email worked because the rules were public. Competence was the price of entry, not permission. Today you can run an email server perfectly and still be invisible. Why? Because delivery is controlled by private reputation systems that Google and Microsoft operate. You can be technically correct and practically nonexistent.
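To make that legibility concrete, here's a minimal sketch of speaking the protocol directly. The hostnames and addresses are placeholders, not real servers, but the verbs are the same ones in the RFC.

```python
# A minimal sketch of speaking SMTP yourself. The hostnames and addresses
# below are placeholders; point them at your own servers to try it.
import smtplib

with smtplib.SMTP("mail.example.com", 25) as server:
    server.helo("myserver.example.org")   # HELO: identify yourself
    server.mail("me@example.org")         # MAIL FROM: the envelope sender
    server.rcpt("you@example.com")        # RCPT TO: the recipient
    server.data("Subject: Hello\r\n\r\nThe whole rulebook fits in an RFC.\r\n")
```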
Email still exists as a protocol. But in practice, without a Gmail account, you might not have email at all.
The same drift swallowed the web. YouTube, Twitter, and Facebook all solved real problems: spam, abuse, fraud, scale. I'm not romanticizing the old internet; it was a mess. But the winning solution to those problems wasn't the only solution; it was the one that made the most money and maximized control.
Now our lives are lived in their house under their rules. They write the rules, enforce the rules, and decide whether you get to see the rules. When they’re feeling generous, they explain what happened after you’ve already been punished. The All-In hosts thanked YouTube for explaining what had been done to them. But when they’re not generous, you get silence.
YouTube engineers can see exactly why content gets restricted: the flags, timestamps, and policy logic. Nothing technical prevents them from exposing those verdicts to creators. Nothing stops them from publishing a machine-readable verdict to the wider ecosystem under an open spec.
They just don’t. Why would they? Transparency creates accountability. Accountability creates constraints. If creators could see exactly why their content got throttled, they could test the boundaries, optimize around the rules, maybe even challenge decisions that don’t make sense. If the rules were machine-readable and public, competing platforms could implement the same moderation standards and compete on execution instead of lock-in.
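For illustration only, here's what such a verdict could look like. Every field name below is invented; no platform publishes anything like this today.

```python
# Hypothetical machine-readable moderation verdict. All field names are
# invented for illustration; no such open spec currently exists.
verdict = {
    "video_id": "abc123",
    "action": "restricted_mode_exclusion",   # what was done
    "policy": "profanity/unbleeped-audio",   # which rule tripped
    "evidence": [
        {"timestamp": "00:14:32", "signal": "auto_transcript", "term_class": "profanity"},
    ],
    "effective_since": "2024-06-01T00:00:00Z",
    "appeal_endpoint": "https://platform.example/v1/appeals",
}
```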
The vagueness protects the moat.
I keep thinking about what interoperability used to mean. Not the modern version where a platform lets you download a ZIP file of your data — thank you, thank you, very generous — but real interoperability. The ability to use (or write!) third-party clients that work better for your use case. To see the rules you’re being judged by and verify that they’re being applied correctly.
We had pieces of this. Not perfectly, not everywhere, but we had it. The web was built on open protocols because the people building it thought rules should be legible to the people being ruled. WordPress is a mess in a hundred ways, but its open, portable nature still carries the values of the early-aughts open web. We've traded that ethic for scale and surveillance economics, and now we act surprised that the platforms behave like landlords instead of utilities.
Traditional antitrust asks whether prices went up, but these platforms are all ‘free.’ This capture is subtler: private APIs with private rules aren't toll booths; they're zoning laws you can't appeal. We gave these platforms Section 230 immunity on the theory that they were neutral conduits. Utilities. But they operate like landlords with total discretion over their property. They get the legal protection of the first model while exercising the power of the second.
Dominance should come with obligations. If you’re going to behave like a landlord, you don’t get utility protections. And if you want utility protections, you need to publish the rules you enforce and let people verify you’re enforcing them correctly. Appeals should be APIs, not ignored suggestion boxes.
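What would an appeal-as-API even look like? Here's a hypothetical sketch; the endpoint, payload fields, and workflow are all invented for illustration.

```python
# Hypothetical appeal submitted as an API call rather than a web form.
# The URL and payload shape are invented; nothing like this exists today.
import json
import urllib.request

payload = json.dumps({
    "video_id": "abc123",
    "verdict_id": "restricted_mode_exclusion/2024-06-01",
    "claim": "Profanity is now bleeped; please re-evaluate the transcript flags.",
}).encode()

request = urllib.request.Request(
    "https://platform.example/v1/appeals",   # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# A real response would carry a tracking ID, a deadline, and the verdict
# record it applies to: something a creator can verify, not a suggestion box.
```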
The platforms know we won’t leave. We know they won’t change because the attention economy runs on captive audiences. This is the deal: total attention for them, total opacity for us. But at least we know what we’re choosing when we stay. Before you build on someone else’s platform, ask: where’s the spec? Where’s the export? What happens if they change the rules?
The answers will sort gardeners from jailers.
The Medium is the Message, as McLuhan said. Right now, the message is: we own your data and your attention. You can’t leave.