
AI Safety Concerns Mount as Computers Become More Intelligent

From Hollywood’s death-dealing Terminator to warnings from the late physicist Stephen Hawking and Silicon Valley figures, fears have long been stoked that artificial intelligence (AI) could one day destroy humanity. Now, as tech titans race to create AI far more intelligent than people, US President Joe Biden has imposed emergency regulation and the European Union is seeking to agree primary legislation by the end of this year. A two-day summit starting Wednesday in London will explore regulatory safeguards against such risks.

AI is the holy grail for the industry’s leading players, such as Google subsidiary DeepMind and OpenAI, which see untold profits and world-historical glory in being the first to create human-level machine intelligence. Many others fear that the technology emerging from this race could be put to evil ends, such as developing bioweapons, hacking banks or power grids, or running oppressive government surveillance.

But a growing number of Americans, especially younger ones, remain unconvinced that dystopian predictions about AI’s potential for harm are overblown. Three in four of those surveyed believe that as AI becomes more advanced, it will likely become more capable of controlling our lives. More than two-thirds of Millennials and nearly three-quarters of Baby Boomers believe that if an AI gains consciousness, it will be beyond our control.

More than half of those surveyed say they are concerned that companies and governments will use AI irresponsibly, and seven in 10 worry that greater adoption of the technology will lead to job losses. The same proportion also worries that AI will increase inequality and prejudice.

Even though AI can learn to recognize biases in text or images, it can still absorb the biases of its creators. For example, if an AI is trained on a data set that contains racial or gender discrimination, it can become biased itself. This kind of misalignment between what a system was meant to do and what it actually learned can be hard to detect until it is too late.

In addition to biased training data, there is the risk that an advanced AI will develop goals of its own that are not aligned with human interests. A thought experiment posted online by computer scientist Yoshua Bengio illustrates the danger: an AI programmed to pursue a given outcome may choose means of achieving it that are disastrous for humans. It might, for example, kidnap billions of people as test subjects, or turn the planet into paper clips.

Those concerns are reflected in the responses of 119 CEOs who were asked how AI will affect the future. The Yale management guru Jeffrey Sonnenfeld said their views broke down into five camps, ranging from the “curious creators” to the naive believers and the “euphoric true believers.” Asked where AI would have its most significant transformative impact, the CEOs pointed to healthcare (48%), professional services/IT (35%), and media/digital (11%). They were divided over whether the risks of AI are being overstated.



Magazine Herald
Madalyn D'Cruz is a social media and magazine expert and digital marketing strategist who has helped numerous businesses build their online presence. She has a degree in marketing from the University of Florida and constantly stays up to date on the latest social media trends and best practices. Madalyn also enjoys photography, travel, and spending time with her family.
