During a Senate hearing, Sam Altman, the CEO of OpenAI, emphasized the necessity of government involvement in addressing the potential dangers associated with the growing capabilities of AI. Altman acknowledged the public's concerns about the potential impact of AI on our lives and said that the people at OpenAI shared this apprehension.

Growing concerns over AI bots like ChatGPT

After Altman's San Francisco-based start-up OpenAI released ChatGPT, the chatbot quickly gained public attention for its remarkably humanlike responses. Initially, concerns centred primarily on teachers worrying about its potential use for cheating on homework, tests and assignments.

However, these concerns have now expanded to encompass much broader issues related to the latest generation of "generative AI" tools, including their ability to deceive people, propagate falsehoods, infringe upon copyright protections and disrupt certain job sectors.

While Congress currently shows no immediate intention of enacting comprehensive AI regulations, as its European counterparts are doing, the societal concerns surrounding these technologies prompted Altman and other tech CEOs to meet with officials at the White House earlier this month. US agencies have pledged to take action against harmful AI products that violate existing civil rights and consumer protection laws, signalling a commitment to address the risks associated with AI.

Regulations have become necessary

To address these concerns, Altman suggested establishing a national (US-based) or international organization that would grant licences to the most advanced AI systems.
This organization would have the power to revoke licences and enforce adherence to safety regulations, thereby ensuring compliance with necessary precautions.

At the hearing, Senator Richard Blumenthal, a Democrat from Connecticut who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened with a recorded speech that sounded like his own voice but was actually generated by a voice clone trained on his previous floor speeches, reading opening remarks written by ChatGPT. The result impressed Blumenthal, but he raised a crucial question: what if he had asked the AI system for an endorsement of Ukraine surrendering, or of Vladimir Putin's leadership?

Both Democrats and Republicans expressed interest in drawing on Altman's expertise to prevent problems that have not yet arisen. Blumenthal emphasized that AI companies should be obligated to test their systems and disclose known risks before releasing them, and he voiced particular concern about future AI systems destabilizing the job market. Altman largely agreed with Blumenthal's perspective, although he maintained a more optimistic outlook on the future of work.

Altman's worst fears about AI

When questioned about his worst fear regarding AI, Altman chose not to delve into specific scenarios but acknowledged that the industry could cause "significant harm to the world", adding that "if this technology goes wrong, it can go quite wrong."

During the legislative session, Altman also expressed concern about AI's potential repercussions for democracy, particularly the risk of targeted misinformation being disseminated during elections through AI-powered mechanisms. He went on to suggest that a regulatory agency should implement safeguards against AI models capable of self-replication and unauthorized distribution, alluding to concerns about advanced AI systems manipulating humans into relinquishing control, a concept often associated with futuristic science fiction narratives.

Are we moving in the wrong direction?

Critics argue that focusing on these distant concerns about super-powerful AI systems could divert attention from existing harms that demand immediate action, such as issues related to data transparency, discriminatory behaviour, trickery and disinformation.

Suresh Venkatasubramanian, a computer scientist at Brown University and former assistant director for science and justice at the White House Office of Science and Technology Policy, who co-authored the Biden administration's AI bill of rights plan, said that this emphasis on far-off scenarios contributes to an unfounded collective panic and distracts from the current concerns that need to be addressed urgently.

Altman also plans to embark on a global tour this month, visiting national capitals and major cities across six continents to discuss the technology and its implications with policymakers and the public. On the eve of his Senate testimony, he had a dinner meeting with numerous US lawmakers, who expressed their admiration for his comments and insights.
No consensus on how AI should be regulated

Two other witnesses also testified at the hearing. Christina Montgomery, IBM's chief privacy and trust officer, shared her expertise on privacy and trust considerations in the context of AI. Gary Marcus, a professor emeritus at New York University, was part of a group of AI experts who called for OpenAI and other tech companies to halt the development of more powerful AI models for six months, to give society more time to assess the associated risks. The experts' letter was a response to the March release of OpenAI's GPT-4, which was touted as a more powerful model than ChatGPT.

While many leaders in the tech industry acknowledge the need for AI oversight, they caution against overly burdensome regulations. Altman and Marcus advocated for the establishment of an AI-focused regulatory body, preferably international in scope: Altman drew a parallel with the United Nations' nuclear agency, while Marcus likened it to the role of the US Food and Drug Administration. IBM's Montgomery, however, took a different stance and urged Congress to adopt a "precision regulation" approach, suggesting that regulations should govern the deployment of specific uses of AI rather than the technology itself, addressing risks at the point of application.