OpenAI Erased ‘Safety’ From Its Legal DNA. We Should All Be Paying Attention.

Nobody reads IRS Form 990s for fun. They’re dense, bureaucratic slabs of paperwork designed to keep tax-exempt organizations honest — not to entertain the curious. But occasionally, buried in the boilerplate, you find a story. A genuinely consequential one.

According to Fortune, OpenAI’s latest IRS disclosure form — released in November 2025 and covering the 2024 financial year — contained a quiet but seismic revision. In restructuring into a for-profit entity, the company scrubbed all safety language from its official legal mission statement.

The word “safely” is gone. Vanished. Excised from the legal DNA of the most powerful artificial intelligence company on the planet.

Six letters. One word. But in the world of corporate governance, words are load-bearing walls — remove the right one, and the entire structure shifts. In early 2026, as AI systems weave themselves into everything from our smartphones to our power grids, that missing word feels less like a clerical oversight and more like a deliberate signal.

How OpenAI Quietly Rewrote Its Own Rulebook

To understand why this registers as alarming, you have to trace the history. OpenAI didn’t wake up one morning and spontaneously decide to edit its tax forms. This was a slow, deliberate ideological drift — the kind that’s easy to miss until you lay the filings side by side.

When the company launched in 2015 as a non-profit scientific research lab, its mission was aggressively idealistic. Pull up their 2016 and 2017 filings, and the language is unambiguous: they wanted to “help the world build safe AI technology” unconstrained by the pressure to generate financial returns. They wanted to openly share their research. They cast themselves as the principled alternative to corporate greed — the adults in the room.

Then reality intervened. Building artificial general intelligence (AGI) turned out to be ruinously expensive. Server infrastructure costs money. Top-tier research engineers cost considerably more. By the 2020 filing, the language had already begun to soften at the edges, though “safe AI technology” still held its place as a core pillar.

The November 2025 filing, though — that marked the end of the line. It was the last time OpenAI claimed tax-exempt status. Their new mission statement? “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.”

Notice what’s missing.

There is a vast, unsettling gulf between “building safe AGI” and ensuring AGI merely “benefits humanity.” Benefits are subjective. A highly capable AI that cures a rare disease but displaces ten million workers is, technically, delivering a benefit. It might not be safe by any reasonable measure — but it fits the new legal definition without a wrinkle.

$6.6 Billion Has a Way of Clarifying Priorities

You can trace this shift directly back to the money. In late 2024, OpenAI secured a staggering $6.6 billion in fresh funding — the kind of number that doesn’t come without strings attached. Investors operating at that altitude don’t dispense capital out of philanthropic goodwill. They want returns. The funding carried a specific condition: it would convert to debt unless OpenAI restructured into a conventional for-profit tech company.

So, they did.

The non-profit OpenAI Foundation quietly surrendered the wheel. In the restructuring, it ceded 74% control, holding onto a mere 26% stake. Microsoft — thanks to a cumulative $13.8 billion investment — walked away owning 27% of company stock. Employees and private investors absorbed the rest.

This isn’t just a change of letterhead. A non-profit board carries one legal obligation: uphold the organization’s stated mission. Its members literally cannot pocket a share of earnings. A for-profit board? Its fiduciary duty runs to the shareholders. Its job — its legal job — is to grow the bottom line.

When investors sit on the board, or wield heavy influence over it, and those same investors directly receive a cut of the profits, the incentive structure inverts completely. If installing a safety mechanism delays a product launch by six months, and that delay bleeds billions in market share, a for-profit board faces crushing pressure to skip the mechanism. That’s not cynicism. That’s just how the math works — and it’s precisely what accountability scholars have been flagging for years.

What a Tax Form Can Do That a Website Never Will

Alnoor Ebrahim, a professor at Tufts University’s Fletcher School, was among the first people to publicly flag the changes buried in those IRS filings. He isn’t buying the corporate reframing.

OpenAI has pushed back on the criticism. When they announced the restructuring, company representatives argued the updated phrasing simply serves the same ultimate goal in cleaner language. They point to their website — which still deploys safety rhetoric prominently, calling safety “the most important challenge of our time” and stressing the need to advance capability and safety in tandem.

“Given that neither the mission of the foundation nor of the OpenAI group explicitly alludes to safety, it will be hard to hold their boards accountable for it.”

— Alnoor Ebrahim, Tufts University

Here’s the problem with that defense: a public relations page on a website carries zero legal weight. A tax form does. In corporate law, if a commitment isn’t written into the charter, it doesn’t functionally exist — and Ebrahim lands on this distinction with precision. Without an explicit safety mandate in the governing documents, investors cannot easily sue the board for deprioritizing it. Flip the scenario, though — if the board does prioritize safety at the expense of profit — and those same investors now have considerably stronger grounds to sue for breach of fiduciary duty.

They didn’t just remove a word. They removed a legal shield — and replaced it with nothing.

The Lawsuit Pile Is Already Growing

The timing of this erasure is, frankly, stunning. We aren’t discussing a company building photo filters or recommendation algorithms. OpenAI is actively attempting to construct a machine more capable than a human being — and doing so while facing a mounting pile of legal challenges. Over the past two years, the company and CEO Sam Altman have been named as defendants in multiple lawsuits alleging severe negligence, product liability, and wrongful death. The stakes are no longer confined to academic debate. Real-world harms are already being argued in courtrooms.

Is it really the right moment to strip “safety” from your legal obligations? That’s a question worth sitting with.

Public unease is tracking right alongside these developments. Per a late 2023 Pew Research Center survey, 52% of Americans already felt more concerned than excited about artificial intelligence — a figure that had climbed steadily from prior years. People sense, instinctively, that this technology is outrunning our collective ability to govern it.

Ebrahim frames the entire restructuring as a sprawling societal experiment. “I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm,” he noted. Right now, by most honest assessments, we are failing that test.

The Non-Profit Halo Was Always a Little Borrowed

For years, the tech industry extended OpenAI a degree of latitude it rarely granted to Meta or Google, and the non-profit structure was the reason. The logic, however unspoken, was that a charter binding the organization to humanity’s benefit — rather than shareholder returns — made them different. More trustworthy. The conscientious actors in an otherwise reckless industry.

That assumption is officially retired.

The harder truth — one worth acknowledging — is that you genuinely cannot build AGI on a non-profit budget. The compute power required is immense and growing. According to Stanford’s 2024 AI Index Report, the costs of training frontier models have become astronomical, scaling in ways that would have seemed implausible just five years ago. To remain competitive, OpenAI needed billions. And billions, in practice, always arrive with conditions attached.

The condition this time was blunt: become a normal company. Stop posing as a charity while operating like a venture-backed startup chasing the most lucrative technological race in recorded history.

There’s a case to be made for the honesty of the transition — openly profit-driven beats hiding behind a 501(c)(3) while functionally behaving like a hedge fund. But we owe ourselves clarity about what the shift actually means for everyone outside the cap table.

No structural mandate compels OpenAI to keep us safe anymore. The board’s primary legal loyalty has migrated from humanity to the shareholders — Microsoft, the private investors, the employees holding equity. They’re the ones steering now. If they build safe AI, it will be because safe AI happens to be profitable. If the market eventually rewards faster, less restricted models — and signals it’s willing to pay a premium for them — the legal architecture of the company no longer stands in the way of that outcome. Nothing in the charter does. Nothing in the tax form does. Because that language was removed, quietly, in November 2025, and most people weren’t watching.

Words matter. When a company edits its founding documents to delete a commitment to safety, the only surprise would be acting shocked when its future products reflect that same omission — faithfully, precisely, and entirely by design.

Based on reporting from various media outlets. Any editorial opinion is that of the author.
