By Kai Quizon
Apple cofounder Steve Wozniak, Head Twit Elon Musk, and politician Andrew Yang make for strange bedfellows. Yet they are three of the nearly one thousand signers of an open letter calling for a halt in the development of artificial intelligence models more advanced than OpenAI’s GPT-4. The letter, published Wednesday by the Future of Life Institute, warns that AI labs are “locked in an out-of-control race” that may produce models that no one can control or even fully understand.
The letter emphasizes that regulatory oversight for A.I. systems must be developed before any models more advanced than GPT-4 are built. Further, the six-month pause should be “public and verifiable” and include all key actors. The signers rightly note that any halt is effective only if all development is paused. Labs, experts, and corporations should use these six months to collaborate on a shared set of safety regulations to ensure that future AI is “safe beyond a reasonable doubt.”
The signers of the letter range widely, from the likes of Elon Musk to technical staff at prestigious universities such as Oxford and Harvard. This makes assigning a singular motivation to the letter much more difficult. But the rapid development and deployment of artificial intelligence carries many social and political impacts. Like the internet, artificial intelligence software is an innately empowering technology. Everyone who has access to the internet currently has access to ChatGPT. A user who is only slightly code-savvy can now complete far more complex coding projects. Writers can find inspiration faster. Students can use ChatGPT to troubleshoot issues in their papers. This technology empowers individuals to achieve more by focusing on the truly human challenges of problem solving.
So then why halt development? From Musk’s perspective, perhaps because his own AI is no good: Tesla faces continued backlash over its self-driving cars and has failed to garner praise for its AI Day presentations. The larger sentiment of the letter, however, seems focused on the potentially paradigm-altering arrival of artificial general intelligence. Artificial general intelligence is achieved only when a single system can perform all of the intellectual tasks attributed to humans. Today we have bots that can hold conversations (like ChatGPT) or generate images (like DALL-E), but these bots remain specialized and do not overlap. Scientific American emphasizes that “artificial general intelligence is still likely decades away.”
So why stop now? There is no singular answer, and who is to say that standing in the way of progress will not do more harm than good?