AI Corporate Governance and Ben & Jerry’s Risk

In a recent paper, "AI Corporate Governance and Ben & Jerry's Risk," we critically analyze the governance arrangements of OpenAI and Anthropic. We show that these firms share an unusual built-in conflict. Each raises billions of dollars from profit-seeking investors, and then lets self-appointed individuals override those investors and decide, directly or indirectly, whether and how much profit to sacrifice to ensure the firm's AI benefits humanity. A deep and potentially unmanageable tension is hard-wired into these firms' corporate DNA.

Such “self-appointed mission guardians” have been used only once before, at Unilever subsidiary Ben & Jerry’s. That experiment ended in spectacular failure, with the guardians causing what we call double trouble: they both harmed investors and achieved the opposite of their mission (as they saw it). Our analysis highlights the risk to firms and their investors of installing such guardians and can explain why Anthropic’s designers opted to install a “kill switch” allowing a super-majority of investors to fire its guardians.

The standard critique of self-appointed mission guardians is that they will do too little. Lacking incentives to pursue their mission and facing constant pressure from founders, investors, and equity-paid employees, these guardians will let the public benefit be subordinated to profit.

We agree that guardians may do too little. But we argue that they may also do too much. Because guardians lack both accountability to investors and interest-aligning incentives, they may not only harm investors but also act in ways that undermine the firm’s mission itself. In fact, at two of the three firms we identify with investor-overriding guardians—Ben & Jerry’s and OpenAI—guardians have endangered or harmed investors and achieved the opposite of their mission. In other words, guardians can be worse than useless.

We first turn to Ben & Jerry’s. When Unilever acquired Ben & Jerry’s in 2000, it agreed to install self-perpetuating independent directors empowered to override Unilever to preserve Ben & Jerry’s “social mission” and “brand integrity.” For about two decades, conflicts between Unilever and these directors were resolved behind closed doors. Then, in July 2021, Ben & Jerry’s independent directors announced, over Unilever’s objections, the prospective non-renewal of the license held by the ice-cream maker’s Israeli licensee. The announcement triggered a multi-year battle: counterboycotts, divestments by several U.S. states, an activist investor’s intervention, and several lawsuits between the directors and Unilever. Unilever’s CEO resigned, and the firm lost billions in market value—far more than Ben & Jerry’s was ever worth.

But the most striking feature of the episode is that the guardians not only harmed Unilever, but also achieved the opposite of their mission, as they understood it. In 2022, Unilever overrode the directors and gave the Israeli licensee everything it needed to sell Ben & Jerry’s ice cream in Israel and its controlled territories in perpetuity. The directors had said selling in Israel was inconsistent with their mission; their actions ensured those sales would continue indefinitely. Unilever then spun off its ice cream businesses in 2025—a move that will prevent Ben & Jerry’s guardians from ever again imposing costs on Unilever. Because Ben & Jerry’s was the first guardian arrangement to suffer a double-trouble meltdown, we call the risk of such a meltdown “Ben & Jerry’s risk.”

We then turn to OpenAI’s 2023 meltdown. Like Ben & Jerry’s, OpenAI paired self-appointed mission guardians with investors. The guardians were the directors of nonprofit OpenAI, Inc., who controlled a for-profit subsidiary, an LLC; the investors were funds that purchased equity in the LLC. When the guardians fired Sam Altman in November 2023, apparently in part for safety-related reasons, they nearly wiped out the LLC and its investors. After 700 of OpenAI’s 770 employees threatened to decamp to Microsoft, the board reversed course. Altman returned, the guardians were pressured to step down, and the reportedly most safety-oriented researchers (including Mira Murati and Ilya Sutskever) eventually left to start competing AI ventures. Not only were the LLC’s investors almost wiped out, but OpenAI may well have been rendered less safe—the opposite of what the guardians sought.

OpenAI's 2025 restructuring, we show, does little to reduce this risk. The for-profit arm is now a Delaware public benefit corporation (OpenAI Group PBC). But it remains controlled by the nonprofit (now renamed OpenAI Foundation), which appoints every PBC director, holds veto rights over major transactions, and (through the Foundation's Safety and Security Committee) can block any "PBC actions relating to safety and security." Using its appointment powers, the OpenAI Foundation has placed all but one of its own directors on the PBC's board. Nor do the PBC's fiduciary duties constrain these guardians: those duties require directors to ignore investors entirely on safety and security matters, and otherwise permit profits to be subordinated to the mission.

Anthropic’s structure, we explain, makes it much less likely to experience a Ben & Jerry’s-style meltdown. Like OpenAI, Anthropic pairs a controlling mission entity (the Anthropic Long-Term Benefit Trust) with a PBC (Anthropic PBC). However, compared to OpenAI’s guardians, Anthropic’s guardians are more aligned with investors and have less power over them. The most important difference is that Anthropic’s designers installed a kill switch. A super-majority of Anthropic’s stockholders can terminate the Anthropic Long-Term Benefit Trust and remove the directors it appointed to the PBC’s board. That switch makes Anthropic’s guardians, unlike OpenAI’s, only partly insulated from investors. We expect that Anthropic’s guardians will be deterred from emulating their counterparts at Ben & Jerry’s (in 2021) and OpenAI (in 2023) and “doing too much.” And if they are not deterred, investors can dump them.

Given the fiascos at Ben & Jerry's and OpenAI—the only two firms ever to install fully insulated guardians—we would expect any firm installing guardians in the future to follow Anthropic's example and install a kill switch.

The complete paper is available here.
