How Anthropic Designed Itself to Avoid OpenAI’s Mistakes

Last Thanksgiving, Brian Israel found himself being asked the same question again and again.

The general counsel at the AI lab Anthropic had been watching dumbfounded along with the rest of the tech world as, just two miles south of Anthropic’s headquarters in San Francisco, its main competitor OpenAI seemed to be imploding.

OpenAI’s board had fired CEO Sam Altman, saying he had lost their confidence, in a move that seemed likely to tank the startup’s $80 billion-plus valuation. The firing was only possible thanks to OpenAI’s strange corporate structure, in which its directors have no fiduciary duty to increase profits for shareholders—a structure Altman himself had helped design so that OpenAI could build powerful AI insulated from perverse market incentives. To many, it appeared that plan had badly backfired. Five days later, after a pressure campaign from OpenAI’s main investor Microsoft, venture capitalists, and OpenAI’s own staff—who held valuable equity in the company—Altman was reinstated as CEO, and two of the three directors who fired him resigned. “AI belongs to the capitalists now,” the New York Times concluded, as OpenAI began to build a new board that seemed more befitting of a high-growth company than a research lab concerned about the dangers of powerful AI.

And so Israel found himself being frantically asked by Anthropic’s investors and clients that weekend: Could the same thing happen at Anthropic?

Anthropic, which like OpenAI is a top AI lab, has an unorthodox corporate structure too. The company similarly structured itself in order to ensure it could develop AI without needing to cut corners in pursuit of profits. But that’s pretty much where the likeness ends. To everybody with questions on Thanksgiving, Israel’s answer was the same: what happened at OpenAI can’t happen to us.

Prior to the OpenAI disaster, questions about the corporate governance of AI seemed obscure. But it’s now clear that the structure of AI companies has vital implications for who controls what could be the 21st century’s most powerful technology. As AI grows more powerful, the stakes are only getting higher. Earlier in May, two OpenAI leaders on the safety side of the company quit. In a statement announcing his departure, one of them, Jan Leike, said that safety had “taken a backseat to shiny products,” and that OpenAI needed a “cultural change” if it were going to develop advanced AI safely. On Tuesday, Leike announced he had moved to Anthropic. (Altman acknowledged Leike’s criticisms, saying “we have a lot more to do; we are committed to doing it.”)

Anthropic prides itself on being structured differently from OpenAI, but a question mark hangs over its future. Anthropic has raised $7 billion in the last year, mostly from Amazon and Google—big tech companies that, like Microsoft and Meta, are racing to secure dominance over the world of AI. At some point it will need to raise even more. If Anthropic’s structure isn’t strong enough to withstand pressure from those corporate juggernauts, it may struggle to prevent its AI from becoming dangerous, or might allow its technology to fall into Big Tech’s hands. On the other hand, if Anthropic’s governance structure turns out to be more robust than OpenAI’s, the company may be able to chart a new course—one where AI can be developed safely, protected from the worst pressures of the free market, and for the benefit of society at large.

Anthropic’s seven co-founders all previously worked at OpenAI. In his former role as OpenAI’s vice president for research, Anthropic CEO Dario Amodei even wrote the majority of OpenAI’s charter, the document that commits the lab and its workers to pursue the safe development of powerful AI. To be sure, Anthropic’s co-founders left OpenAI in 2021, well before the problems with its structure burst into the open with Altman’s firing. But their experience made them want to do things differently. Watching the meltdown that happened last Thanksgiving made Amodei feel that Anthropic’s governance structure “was the right approach,” he tells TIME. “The way we’ve done things, with all these checks and balances, puts us in a position where it’s much harder for something like that to happen.”

Paul Christiano, Dario Amodei, and Geoffrey Irving write equations on a whiteboard at OpenAI, the artificial intelligence lab founded by Elon Musk, in San Francisco, July 10, 2017. Christie Hemm Klok—The New York Times/Redux

Still, the high stakes have led many to question why novel and largely untested corporate governance structures are the primary constraint on the behavior of companies attempting to develop advanced AI. “Society must not let the roll-out of AI be controlled solely by private tech companies,” wrote Helen Toner and Tasha McCauley, two former OpenAI board members who voted to fire Altman last year, in a recent article in The Economist. “There are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.”

A ‘public benefit corporation’

Unlike OpenAI, which essentially operates as a capped-profit company governed by a nonprofit board that is not accountable to the company’s shareholders, Anthropic is structured more like a traditional company. It has a board that is accountable to shareholders, including Google and Amazon, which between them have invested some $6 billion into Anthropic. (Salesforce, where TIME co-chair and owner Marc Benioff is CEO, has made a smaller investment.) But Anthropic makes use of a special element of Delaware corporate law. It is not a limited company, but a public benefit corporation (PBC), which means that as well as having a fiduciary obligation to increase profits for shareholders, its board also has legal room to follow a separate mission: to ensure that “transformative AI helps people and society flourish.” What that essentially means is that shareholders would find it more difficult to sue Anthropic’s board if the board chose to prioritize safety over increasing profits, Israel says. 

There is no obvious mechanism, however, for the public to sue Anthropic’s board members for not pursuing its public benefit mission strongly enough. “To my knowledge, there’s no way for the public interest to sue you to enforce that,” Israel says. The PBC structure gives the board “a flexibility, not a mandate,” he says.

The conventional wisdom that venture capitalists pass on to company founders is: innovate on your product, but don’t innovate on the structure of your business. But Anthropic’s co-founders decided at the company’s founding in 2021 to disregard that advice, reasoning that if AI was as powerful as they believed it could be, the technology would require new governance structures to ensure it benefited the public. “Many things are handled very well by the market,” Amodei says. “But there are also externalities, the most obvious ones being the risks of AI models (developing) autonomy, but also national security questions, and other things like whether they break or bend the economy in ways we haven’t seen before. So I wanted to make sure that the company was equipped to handle that whole range of issues.”

Being at the “frontier” of AI development—building bigger models than have ever been built before, and which could carry unknown capabilities and risks—required extra care. “There’s a very clear economic advantage to time in the market with the best (AI) model,” Israel says. On the other hand, he says, the more time Anthropic’s safety researchers can spend testing a model after it has been trained, the more confident they can be that launching it would be safe. “The two are at least theoretically in tension,” Israel says. “It was very important to us that we not be railroaded into (launching) a model that we’re not sure is safe.”

The Long Term Benefit Trust

To Anthropic’s founders, structuring the company as a public benefit corporation was a good first step, but didn’t address the question of who should be on the company’s board. To answer this question, they decided in 2023 to set up a separate body, called the Long Term Benefit Trust (LTBT), which would ultimately gain the power to elect and fire a majority of the board.

The LTBT, whose members have no equity in the company, currently elects one out of the board’s five members. But that number will rise to two out of five this July, and then to three out of five this November—in line with fundraising milestones that the company has now surpassed, according to Israel and a copy of Anthropic’s incorporation documents reviewed by TIME. (Shareholders with voting stock elect the remaining board members.)

The LTBT’s first five members were picked by Anthropic’s executives for their expertise in three fields that the company’s co-founders felt were important to its mission: AI safety, national security, and social enterprise. Among those selected were Jason Matheny, CEO of the RAND corporation, Kanika Bahl, CEO of development nonprofit Evidence Action, and AI safety researcher Paul Christiano. (Christiano resigned from the LTBT prior to taking a new role in April leading the U.S. government’s new AI Safety Institute, he said in an email. His seat has yet to be filled.) On Wednesday, Anthropic announced that the LTBT had elected its first member of the company’s board: Jay Kreps, the co-founder and CEO of data company Confluent.

The LTBT receives advance notice of “actions that could significantly alter the corporation or its business,” Anthropic says, and “must use its powers to ensure that Anthropic responsibly balances the financial interests of stockholders with the interests of those affected by Anthropic’s conduct and our public benefit purpose.” 

“Anthropic will continue to be overseen by its board, which we expect will make the decisions of consequence on the path to transformative AI,” the company says in a blog post on its website. But “in navigating these decisions, a majority of the board will ultimately have accountability to the Trust as well as to stockholders, and will thus have incentives to appropriately balance the public benefit with stockholder interests.”

However, even the board members who are selected by the LTBT owe fiduciary obligations to Anthropic’s stockholders, Israel says. This nuance means that the board members appointed by the LTBT could probably not pull off an action as drastic as the one taken by OpenAI’s board members last November. It’s one of the reasons Israel was so confidently able to say, when asked last Thanksgiving, that what happened at OpenAI could never happen at Anthropic. But it also means that the LTBT ultimately has a limited influence on the company: while it will eventually have the power to select and remove a majority of board members, those members will in practice face similar incentives to the rest of the board. 

Company leaders, and a former advisor, emphasize that Anthropic’s structure is experimental in nature. “Nothing exactly like this has been tried, to my knowledge,” says Noah Feldman, a Harvard Law professor who served as an outside consultant to Anthropic when the company was setting up the earliest stages of its governance structure. “Even the best designs in the world sometimes don’t work,” he adds. “But this model has been designed with a tremendous amount of thought … and I have great hopes that it will succeed.”

The Amazon and Google question

According to Anthropic’s incorporation documents, there is a caveat to the agreement governing the Long Term Benefit Trust. If a supermajority of shareholders votes to do so, they can rewrite the rules that govern the LTBT without the consent of its five members. This mechanism was designed as a “failsafe” to account for the possibility of the structure being flawed in unexpected ways, Anthropic says. But it also raises the specter that Google and Amazon could force a change to Anthropic’s corporate governance.

But according to Israel, this would be impossible. Amazon and Google, he says, do not own voting shares in Anthropic, meaning they cannot elect board members and their votes would not be counted in any supermajority required to rewrite the rules governing the LTBT. (Holders of Anthropic’s Series B stock, much of which was initially bought by the defunct cryptocurrency exchange FTX, also do not have voting rights, Israel says.) 

Google and Amazon each own less than 15% of Anthropic, according to a person familiar with the matter. Amodei emphasizes that Amazon and Google’s investments in Anthropic are not in the same ballpark as Microsoft’s deal with OpenAI, where the tech giant has an agreement to receive 49% of OpenAI’s profits until its $13 billion investment is paid back. “It’s just worlds apart,” Amodei says. He acknowledges that Anthropic will likely have to raise more money in the future, but says that the company’s ability to punch above its weight will allow it to remain competitive with better-resourced rivals. “As long as we can do more with less, then in the end, the resources are going to find their way to the innovative companies,” he tells TIME.

Still, uncomfortable tradeoffs may loom in Anthropic’s future—ones that even the most well-considered governance structure cannot solve for. “The overwhelming priority at Anthropic is to keep up at the frontier,” says Daniel Colson, the executive director of the AI Policy Institute, a non-profit research group, referring to the lab’s belief that it must train its own world-leading AI models to do good safety research on them. But what happens when Anthropic’s money runs out, and it needs more investment to keep up with the big tech companies? “I think the manifestation of the board’s fiduciary responsibility will be, ‘OK, do we have to partner with a big tech company to get capital, or swallow any other kind of potential poison pill?’” Colson says. In dealing with such an existential question for the company, Anthropic’s board might be forced to weigh total collapse against some form of compromise in order to achieve what it sees as its long-term mission.

Ultimately, Colson says, the governance of AI “is not something that any corporate governance structure is adequate for.” While he believes Anthropic’s structure is better than OpenAI’s, he says the real task of ensuring that AI is developed safely lies with governments, who must issue binding regulations. “It seems like Anthropic did a good job” on its structure, Colson says. “But are these governance structures sufficient for the development of AGI? My strong sense is definitely no—they are extremely illegitimate.”

Correction, May 30

The original version of this story mischaracterized Brian Israel’s view of the aftermath of Sam Altman’s firing. Many observers concluded that OpenAI’s corporate structure had backfired, but Israel did not say so.
