AI systems can not only distort facts but also mislead and deceive. To underscore this potential for harm, one senator at the hearing, Richard Blumenthal, played a recorded speech that sounded like him but was actually an AI-generated voice clone.
Blumenthal also posited: “What if I had asked it (ChatGPT), and what if it had provided an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?”
Already, instances of AI being put to nefarious uses are emerging. A German magazine has fired an editor for fabricating an interview with Formula One legend Michael Schumacher using AI-generated quotes.
A recent documentary revealed that AI-powered face-recognition programs are being used to profile pedestrians in London’s streets. One company using AI to screen jobseekers showed a definite bias against female applicants.
These cases are alarming because more chatbots are finding their way into the workplace. Goldman Sachs estimates that about 300 million global jobs could be exposed to automation, and one-fourth of all work could be replaced by generative AI.
Software designers and engineers could soon be edged out by AI counterparts that can do the job faster and more efficiently. And, as the snide aside goes, robots do not take coffee breaks.
The sharp focus on the dark side of chatbots has for the moment obscured the benefits derived from AI. Innovative uses of AI range from helping develop medicines for curing cancer to modeling the climate.
Chatbots have become indispensable for businesses because they save time, money and manpower.
More importantly, having a chatbot enables a company to communicate with its customers on multiple platforms, a definite plus in today’s internet-driven commerce.
One study predicted that by 2023, consumers and businesses would be saving over 2.5 billion customer service hours because of chatbots.
Altman’s OpenAI has raised the stakes by making ChatGPT available to almost everyone with a computing device. But widespread use also opens the door to widespread abuse.
That is what Altman wanted to address during the congressional hearing.
He proposed the creation of a new regulatory agency that would impose safeguards to prevent AI models from “self-replicating” and “self-exfiltrating into the wild.”
Altman preferred a body, patterned after the UN nuclear watchdog International Atomic Energy Agency, that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
One senator suggested that AI companies test their own systems and reveal known risks before releasing them.
Altman said companies must inform users whether they are interacting with AI-generated content.
He also said his company was working with visual artists and musicians to give them better control over their works.
Setting up a global agency to oversee AI use and development is a bold move, but it is doubtful if it can be achieved within the next five years. By that time, AI may have become too powerful to tame.
Some countries have taken the initiative to tackle the long-term risks from the rise of chatbots. Canada's parliament is deliberating the Artificial Intelligence and Data Act, described as an "important step toward a proper AI governance regime." But the measure needs more robust consultation with AI-affected stakeholders before it is implemented.
Altman has sounded the alarm. It would be tragic if it fell on deaf ears.
Credit belongs to: www.manilatimes.net