I am writing this at lunchtime on Monday, November 20, 2023. What a 72 hours it has been at OpenAI and across the technology industry. We have not seen the end of the saga yet, and much will be learned about what happened, how these events have or have not changed the course of AI history, what changes in corporate governance will follow, and how complicated it is to try to regulate this space. However, I feel compelled to share some thoughts in the spirit of my commitment to write about gray spaces and areas where there is no consensus.
First, we are seeing the strength of markets in action. It seems that the unorthodox corporate construct that OpenAI had put in place – a non-profit holding company with a for-profit subsidiary – has made its non-profit board into a paper tiger. The board had the power to oust the company’s CEO, but that move triggered a revolt and a threat by most employees to walk out. In essence, OpenAI would become an empty shell. Amid this, Microsoft, as the owner of 49% of the for-profit subsidiary, stepped in and announced that it was hiring OpenAI’s founder and CEO as well as its President, who had resigned over the weekend. Most of the employees then declared that they would join Microsoft unless OpenAI’s board resigned and reinstated Sam Altman to his position. This is not over yet, but if it were to happen, in practical terms Microsoft would secure most (if not all) of the brain power from OpenAI after having paid $13B (much of it in the form of in-kind cloud services) for a company that was valued, before the weekend, at more than $85B. Is this acceptable to all other shareholders, and is it even appropriate from a governance perspective? Is this a breach of fiduciary duty? Will yet another party surface and try to scoop up some of the assets? Will the employees shop around and join the highest bidder?
Second, this made me reflect on what is going on in the regulatory space. I am not convinced that regulating the technology itself is appropriate or feasible, given the speed at which it changes. Moreover, for this class of technology (large language models) and at the current stage of its development (in which model behavior is still somewhat opaque), is it even doable?
I, for one, feel that we need safeguards in place to ensure the continued development and deployment of these technologies and of AI in general. This leads one to conclude that risk- and outcome-based regulation is an alternative to be seriously evaluated. Progress has been made in Europe to draft regulations along these lines. In the US, one can take advantage of existing industry regulations and regulatory bodies to enforce them, rather than creating new and burdensome rules. We may not need a new regulatory regime after all; rather, we will need to enforce what is already in place.
If that is the case, then there are at least two important open questions:
Do we have the right liability structure in place to ensure that existing law is enforceable in the context of AI technology companies and frontier models? Or are we at risk of repeating what happened during Web 2.0, when large platforms were shielded from liability for user-generated content (Section 230 being the prime example in the US)?
Outcome-based regulation, and the liability that flows from it, is an ex-post regime: we act only after the damage has occurred. Are we going to deem this an acceptable risk and bet that the industry will self-correct and design fair and safe systems? Are there categories of risk and use that need to be regulated ex-ante, or even prohibited?
These are questions that will stay with us irrespective of how the OpenAI story unfolds.
As usual, I would love to hear your thoughts.
Paulo - so well captured. Our mental paths have been similarly shaped by careers in technology. I've been clear for some time that it is simply not feasible to expect laws to keep up with the pace of technological change. Lawmaking cannot move this way in a democracy. Expecting otherwise is naive and/or ignorant. Risk- and outcome-based regulation is perhaps the only reasonable and feasible path forward.
Thank you for covering this. Was really curious.