Sam Altman, co-founder and CEO of OpenAI, was ousted by the OpenAI board on November 17th. According to the New York Times, the decision stemmed from concerns that Altman had prioritized the company's growth over safety and the establishment of A.I. regulations.
This decision has been met with much controversy in the A.I. research community, particularly among the researchers at OpenAI. About 700 of OpenAI's roughly 770 employees, nearly the entire company, signed a letter to the OpenAI board threatening to quit and move to Microsoft, which had announced it would hire Altman to lead a new A.I. research team.
This astonishing show of support is even more significant when the scarcity of researchers in the A.I. industry is taken into account. Currently, state-of-the-art A.I. technologies such as ChatGPT and Bard are based on “transformers,” a revolutionary concept first introduced in the 2017 research paper “Attention Is All You Need,” published on arXiv. Transformers have since become foundational to almost every new A.I. application, yet only a handful of researchers understand transformer technology, and the majority of them work at OpenAI. If the board does not negotiate with Altman, this could well mark the end of one of the world’s largest and most revolutionary A.I. companies.
Indeed, the OpenAI board of directors has since released a statement reinstating Altman as CEO.
Regardless of what happens next, the question at the heart of the OpenAI controversy, whether the development of A.I. should be prioritized over its regulation, is one that should be discussed everywhere: in classrooms, in government meetings, and in worldwide forums. A.I. is still in its early stages, so it is crucial that action is taken now; the next few years will undoubtedly dictate the future of A.I., and with it, the future of humanity.