In a bold move, OpenAI CEO Sam Altman addressed the Senate Judiciary Committee on Tuesday, calling for regulation of the very industry in which his company is a frontrunner. Altman emphasized the potential for catastrophic consequences if AI technology goes awry and pledged OpenAI's cooperation with the government to prevent such outcomes.
During his testimony, Altman highlighted the perils of artificial intelligence, cautioning that tools like OpenAI's ChatGPT could displace workers, even as he expressed hope that they would create new kinds of jobs. He went a step further, recommending the establishment of a dedicated agency to regulate AI and underscoring the importance of proactive measures.
Unlike past congressional tech hearings, which often turned confrontational, Tuesday's session was surprisingly cordial. The atmosphere was likely shaped by Altman's dinner with around 60 lawmakers the previous evening, during which he reportedly showcased ChatGPT's capabilities. Attendees described the demonstration as more like a magic show than a tech presentation.
“I thought it was fantastic,” said Rep. Ted Lieu (D-CA), while Rep. Mike Johnson (R-LA) remarked, “He gave fascinating demonstrations in real time.” That goodwill carried over into the hearing itself.
However, Altman's hearing was not the only AI-related event that day. Upstairs in the same building, the Senate Committee on Homeland Security & Governmental Affairs held a simultaneous AI hearing featuring notable speakers from a range of fields. Although that session addressed AI's real-world applications and their impact on society, it received far less attention, and crucial discussions went largely unnoticed.
This narrow focus on generative AI, fueled by global fascination with machine learning systems that can produce content, risks blinding us to AI's actual hazards and leaving us vulnerable to harm. The dominance of generative AI, exemplified by OpenAI's ChatGPT and similar systems such as Google's Bard, Midjourney, and OpenAI's own DALL-E, has generated immense hype and lofty promises. The trend has already proven detrimental to workers: media companies like Insider and BuzzFeed have turned to large language models (LLMs) and subsequently laid off employees.
The use of AI in the writing process also became a flashpoint in the dispute between the Writers Guild of America and the Alliance of Motion Picture and Television Producers that led to a strike. Many businesses have already begun replacing copywriters and graphic designers with LLMs and image generators. In reality, generative AI has inherent limitations, despite the claims of Altman and other industry figures, and critics worry that overreliance on it undervalues human creativity and ingenuity.
Experts such as Suresh Venkatasubramanian, director of Brown University's Center for Technological Responsibility, and Emily M. Bender, a linguistics professor at the University of Washington, share these concerns. They view Altman's testimony as a marketing ploy rather than genuine advocacy for AI regulation. While Altman publicly supports regulation, OpenAI's lack of transparency about ChatGPT's training data and its restrictions on third-party access raise doubts about the company's true stance.
“We don’t ask arsonists to be in charge of the fire department,” Venkatasubramanian asserted. He also pointed out that while generative AI garners significant attention, other forms of AI that have actively harmed people for years remain under-discussed. AI applications in fraud detection, criminal justice decisions, hiring processes, and healthcare treatments raise ethical concerns that demand immediate attention.
Altman's call for regulation may not be entirely genuine, but it does highlight the need for action. Venkatasubramanian co-authored the White House's Blueprint for an AI Bill of Rights, which lays out guidelines for deploying automated systems safely and responsibly, including protections for data privacy. Although the framework has yet to gain traction in Congress, some states, such as California, are already proposing bills inspired by it.
While Altman suggested creating a separate agency dedicated to AI regulation, Bender argued that existing governing bodies already possess the authority to regulate AI companies effectively. The Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission recently issued a joint statement clarifying that AI is not exempt from existing laws.
The question remains whether meaningful policy changes will materialize. Congress has historically been slow to respond to emerging technologies, but Tuesday's hearing indicated a cautious openness to AI regulation. Bender, however, believes the focus on Altman's testimony drew attention away from other critical issues. She cautioned against being overly impressed by AI's capabilities and urged a critical perspective as the industry's stakeholders work to sell their products and ideas.