
Hiroshima’s tragic legacy a reminder of potential dangers of today’s no-limits technologies

The G7 called for the adoption of technical standards to keep artificial intelligence “trustworthy.” But warnings that governance of the technology has not kept pace with its growth have grown deafening, both inside and outside the tech community.

Geoffrey Hinton, the ‘godfather’ of AI, has warned some chatbots are ‘quite scary.’


When you stand in its skeletal shadow, it is hard not to be moved by the silence and utter serenity of the Genbaku Dome.

Originally built as an industrial product exhibition hall near the beginning of the last century, it was one of the few buildings in Hiroshima to partially survive the world’s first atomic bombing.

In the postwar years, Hiroshima’s city council debated long and hard over whether to demolish the structure, which was, for many survivors of the attack, a visceral reminder of the horrors they had endured both during and after the explosion that ushered in the nuclear age.

The decision to keep it, preserve its ruins and add a lush, green memorial park to embrace the site turned the dome into a potent anti-war symbol, a forceful plea for denuclearization.

Japanese Prime Minister Fumio Kishida wanted his fellow G7 leaders to soak in that message along with the silence last Friday.

And they did.

“Most of us don’t remember a time when the world was under threat of nuclear war,” said Prime Minister Justin Trudeau. “The Cold War ended a long time ago and the danger of nuclear war is unfortunately being forgotten by many.”

There is, however, another lesson, an under-appreciated one, that Barack Obama touched upon in remarks back in 2016 when he became the first sitting U.S. president to visit Hiroshima. He said there are many memorials around the world that tell stories of courage and heroism and others, such as Auschwitz, that deliver an “echo of unspeakable depravity.”

What the Genbaku Dome and the great mushroom cloud that rose above it on Aug. 6, 1945, represented was something quite different from the other monuments, Obama said. It spoke to “humanity’s core contradiction” — that our creativity, our imagination and ability to bend nature to our will could also lead to our own destruction.

Zelenskyy, Trudeau meet face to face at G7 meeting in Japan

Ukraine’s president was the guest of honour at the G7 summit’s final day in Japan, where he secured more Western military assistance against Russia. The U.S. pledged another $375 million in aid, but Canada did not offer more weapons, despite promises of continued support.

In the immediate aftermath of the bombing, carried out by the B-29 bomber named the Enola Gay, even the U.S. Army Air Forces was left to wonder, as co-pilot Capt. Robert Lewis later confided to his journal: “My God, what have we done?”

With Russia’s nuclear threats over Ukraine, the fraying of decades-old arms control treaties and China’s refusal to accept nuclear weapons limitations, it’s easy to focus the mind on the what-might-be series of scenarios of the moment.

The arrival of Ukrainian President Volodymyr Zelenskyy at the G7 summit on Saturday only served to underscore fears that the world could be spiralling out of control toward some kind of nuclear confrontation, be it big or small.

Growth of AI outpaces governance

What passed virtually unnoticed on Saturday, as journalists, officials and some leaders huddled around television monitors to watch the pool feed (and endless reruns) of Zelenskyy’s plane landing, were the somewhat dry, tentative steps taken by the world’s leading democracies toward addressing what some have described as an even greater existential crisis: the rise of the machines through artificial intelligence (AI).

The G7 called for the adoption of technical standards to keep artificial intelligence “trustworthy.” Warnings that governance of the technology has not kept pace with its growth have grown deafening, both inside and outside the tech community.


“We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin,” said European Commission President Ursula von der Leyen early in the summit.

The focus of concern for the leaders is so-called “generative AI,” a subset of the technology popularized by the ChatGPT app, developed by the company OpenAI.

The swift advances in the technology prompted 1,000 tech entrepreneurs, including Elon Musk, and others developing the technology to sign an open letter in March calling for a six-month pause on the development of more powerful systems, citing potential risks to society. That letter now has 27,000 signatures.

In April, European Union lawmakers urged world leaders to find ways to control AI technologies.

The EU is close to passing the world’s most comprehensive law to regulate AI, something other advanced economies are watching closely.


The United States so far has taken a wait-and-see approach, with President Joe Biden saying it remains to be seen whether AI is dangerous.

Regardless, G7 leaders, in their statement, said the rules for digital technologies like AI should be “in line with our shared democratic values.”

While the Group of Seven may be trying to get its members on the same page, other advanced nations that don’t necessarily share democratic values, such as China, are rushing to develop the same advanced technology.

And while there was a call at the G7 for international standards, there seemed to be a decided lack of urgency: leaders ordered the creation of a ministerial forum to discuss issues around generative AI, through the narrow window of copyright and disinformation, by the end of this year.

A number of experts have warned that AI’s potential for economic and social disruption, through the displacement of workers and even discriminatory biases, is poorly understood.

Godfather of AI warns some chatbots are ‘quite scary’

The leaders also urged international organizations such as the Organization for Economic Co-operation and Development to consider analysis of the impact of policy developments.

Much like J. Robert Oppenheimer, the American theoretical physicist who came to regret developing the atomic bomb, Geoffrey Hinton, the man widely considered the godfather of AI, recently resigned from Google, warning that some of the AI chatbots now being developed are “quite scary.”


Hinton’s pioneering research was on neural networks and deep learning.

In AI, neural networks are systems loosely modelled on the human brain. They allow machines to learn from data and experience, much as people do; building those networks with many layers is what’s known as deep learning.

“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be,” Hinton told the BBC on May 2.

In 2016, at the foot of the Genbaku Dome, Barack Obama spoke of a “moral awakening” in a message that today can be construed as a broader warning.

“Science allows us to communicate across the seas and fly above the clouds; to cure disease and understand the cosmos, but those same discoveries can be turned into ever-more efficient killing machines,” he said.

“The wars of the modern age teach this truth. Hiroshima teaches this truth. Technological progress without an equivalent progress in human institutions can doom us.”

ABOUT THE AUTHOR

Murray Brewster

Senior reporter, defence and security

Murray Brewster is senior defence writer for CBC News, based in Ottawa. He has covered the Canadian military and foreign policy from Parliament Hill for over a decade. Among other assignments, he spent a total of 15 months on the ground covering the Afghan war for The Canadian Press. Prior to that, he covered defence issues and politics for CP in Nova Scotia for 11 years and was bureau chief for Standard Broadcast News in Ottawa.
