Governance is back in the news with the OpenAI debacle.
The last time I wrote about governance was in the summer of 2016. The United Kingdom had put the decision to exit the European Union to a popular vote. Brexit squeaked through in a heated process marked by much misinformation. Leaving, it was said, would be a piece of cake.
Whatever the ultimate merits of Brexit, this was a crazy way to make such a momentous and hard-to-reverse decision. Most countries, communities, and institutions that adhere to “democratic” principles do not rely on simple 50-percent-plus-one votes for important choices.
Bad governance models have also hobbled public blockchains. The model pioneered by bitcoin saddled crypto with a governance structure that has made it very difficult for public blockchains to scale and operate efficiently. They haven’t, and crypto has settled into being an asset used for speculation on its good days (today wasn’t one of them).
Shame on me, and many of us, for not having taken a closer look at the OpenAI governance structure.
A majority of the OpenAI board announced a decision on Friday to fire Sam Altman. By Monday morning, the board member who delivered the message had announced his regrets. Maybe he should have slept on the decision before sending his co-founder packing. And by Wednesday morning, Altman had been reinstated as CEO, with only one of the four board members who pushed for his ouster remaining on the board, and an agreement that additional new board members would follow.
Many other commentators have piled on about how this board was structured in a way that allowed it to act without regard to the interests of the investors who poured massive sums into driving highly disruptive innovation, or of the employees who made that innovation happen.
Let me talk instead about the mission statement that OpenAI embraced: its “primary fiduciary duty is to humanity.” That is quite a responsibility to entrust to a board that seemingly answers to no one and can’t explain why it did what it did.
Serving humanity is one tough assignment. It requires making difficult tradeoffs. And it’s a job you probably can’t do part time.
People will lose their jobs and make less money because of AI, as is often the near-term effect of new technologies. However, AI can also rapidly improve health outcomes and save lives, as I’ve pointed out in my recent piece on the benefits of artificial intelligence.
It is hard to know how a board, entrusted with serving humanity, should balance these competing interests.
Our elected representatives, however, will likely act to alleviate the effects of job loss while encouraging AI-based medical innovation. The process by which they reach those decisions won’t be pretty, and the results will dissatisfy many. At least they are ultimately answerable to the voters.
Then there are the tradeoffs between current and future generations. This has been one of the contentious issues in climate change policy. How much weight should current generations give to the welfare of future ones? That was an easier call when people had large families and cared about the future generations of their offspring. Now democratic countries have to make hard choices about spending money today to prevent climate catastrophes that will benefit people who haven’t even been born yet and can’t vote.
AI policy faces this problem, too. Should current generations be deprived of the benefits of AI because of the remote risk of AI destroying the world far into the future? Anyone entrusted with serving humanity will need to figure that one out.
OpenAI has made rapid progress in developing generative AI and is leading the way toward artificial general intelligence. The board’s decision to bounce Altman, and the cascading effects on the enterprise, could very well impact humanity.
On the one hand, maybe it will slow down AI innovation, including in the medical field, and result in more deaths in the coming years that could have been avoided. On the other hand, throwing sand in the gears could reduce the probability that AI will destroy humanity by some tiny amount that multiplies out to a big number.
The problem is that it is hard to imagine that “humanity,” if it could decide, would choose to put its faith in an unelected, part-time board that can make momentous choices on a seeming whim, with little explanation and little foresight into the aftermath.
Hopefully, when those of us not on Ozempic come back fattened up from the Thanksgiving feast, we’ll learn that OpenAI has additional new board members and a mission that mere mortals can achieve.
That doesn’t mean giving up the desire to do good. It probably does, though, mean letting the market mainly figure out how to do that in the first instance. And leaving it to politicians and regulators to make the tough tradeoffs through the democratic process, such as it is.
Speaking for humanity (hey, why can’t I?): OpenAI Board — You’re fired!
David S. Evans is an economist who has published several books and many articles on technology businesses, including digital and multisided platforms, among them the award-winning Matchmakers: The New Economics of Multisided Platforms. He is currently the Global Leader for Digital Economy and Platform Markets at Berkeley Research Group. For more details on him, go to davidsevans.org.