When Companies Fire the Wrong People: What OpenAI's Decision Reveals About Real Priorities

5 min read

Ryan Beiermeister was fired after raising concerns about ChatGPT's planned "Adult Mode". Officially, the reason was a discrimination complaint from a male colleague. The timing? January 2026. The feature's launch? Q1 2026.

If you think this is coincidence, you've never worked at a company under growth pressure.

Beiermeister was VP of Product Policy at OpenAI. Her job was literally to ask these questions: What risks does this feature carry? How could it harm users? Are we prepared for that? That's policy work. That's the job.

And that's exactly what she was fired for.

# The pattern behind the decision

OpenAI officially says Beiermeister's departure has nothing to do with the concerns she raised. She made "valuable contributions". That's corporate-speak for: "We don't want to be sued, so we won't say anything concrete."

But look at the facts:

  • Beiermeister raises concerns about a planned feature
  • She's put on leave
  • She's fired, officially because of a discrimination complaint
  • The feature stays on schedule for Q1 2026

That's not an HR process. That's a business decision packaged as an HR process.

I don't want to speculate on whether the discrimination complaint was justified. We can't judge that from the outside. But the timing is so obvious it hurts. When someone asks uncomfortable questions about a billion-dollar feature and is fired shortly before its launch, that sends a signal. And not just externally.

# What that means internally

Imagine you work on OpenAI's policy team. You watch your boss — someone with four years at Meta and over seven at Palantir, not exactly a rookie — get fired after raising risks.

What do you do at the next critical feature? Stay silent? Nod along? Or risk your job?

That's the real problem. Not that one person was let go, but that everyone else on the team now knows exactly: whoever gets too loud is next.

Policy teams exist to throw sand in the gears. They're supposed to ask questions nobody wants to hear. They're supposed to point out risks before they become PR disasters. If you punish them for that, you don't have a policy team anymore. You have a rubber-stamp department.

#The "Adult Mode" and why the concerns were justified

ChatGPT with an erotica function. Fidji Simo, OpenAI's CEO of Applications, announced the feature for Q1 2026. Beiermeister and others in the company raised concerns about how it could "affect certain users".

That's diplomatically phrased. Let me translate: How do we prevent the tool from being misused for deepfake porn? How do we protect minors? What happens when people use it to create non-consensual content? How do we handle the legal risks across different markets?

These aren't hypothetical questions. These are exactly the problems every platform with user-generated content faces — except here the content comes from an AI that can potentially generate anything you describe to it.

Beiermeister did her job. She asked the uncomfortable questions. And she was fired for it.

# What this says about OpenAI's priorities

OpenAI can decide that growth matters more to it than caution. That's a legitimate business decision. Companies make decisions like that every day.

But then own it. Don't say in blog posts and keynotes that you care about "responsible AI" and "safe development" while simultaneously firing the people who are supposed to enforce exactly that.

That's not just dishonest. That's dangerous. Because it means the safety rhetoric is just marketing. That the "Principles" on the website are only there because they look good. That the real decisions are made elsewhere — by people who have no interest in being slowed down.

# What this means for you

If you're a developer, product owner, CTO — whatever: look at who you're working with. Not what they write on their website. Not what the CEO says in interviews. But who they fire and who they promote.

That's where you see the real values.

When a company fires policy people who ask uncomfortable questions, you know: this isn't about responsible development. This is about speed and revenue. That's not a moral judgement. That's information you need to decide whether you want to work with this partner.

And if you work at such a company? Then you now know what happens when you raise concerns. Act accordingly.

# The bigger picture

OpenAI is not alone. This is a pattern in the tech industry. Companies build policy teams because it looks good. Because investors want it. Because regulators expect it. But when these teams do their job and actually set limits, they become a threat.

Then the policy team becomes the problem that needs to be solved.

That's the moment when it shows whether a company takes its principles seriously or whether they're just decoration. OpenAI has decided. The signal is clear: growth before principles. Revenue before caution. Speed before safety.

You can respect that or reject it. But you can't ignore it.

Cheers,
Rafael
