New risks – artificial intelligence: a real threat?

Attention on AI is at an all-time high. Alarms are growing, and Europe has issued the world’s first legislation on artificial intelligence.

In San Francisco, the capital of Silicon Valley, the Center for AI Safety (CAIS) was recently founded. Its mission is ‘to reduce societal-scale risks from AI’.

Die-hard tech optimists might dismiss this as a Luddite initiative, hostile to technological progress, but it is far from that. It is backed by big names in technology and entrepreneurship, by universities, and by the very companies developing artificial intelligence.

On May 30, 2023, CAIS released a statement signed by a coalition of more than 350 AI experts and university professors specializing not only in computer science and algorithms but also in ethics, philosophy, law, physics, medicine, engineering, anthropology, mathematics, and information science, together with climatologists and lawyers. The statement and the full list of signatories can be read on the CAIS website.

Signatories include Bill Gates (Microsoft founder), Sam Altman (OpenAI co-founder), Ray Kurzweil (AI visionary at Google), Vitalik Buterin (Ethereum founder), and Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won the Turing Award for their pioneering work on neural networks. Many of the signatories work at the very companies currently developing cutting-edge artificial intelligence, such as OpenAI (ChatGPT) and DeepMind (Google).

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Simple and frightening, this single-sentence statement has been making the rounds of major international newspapers, such as the New York Times, for the concern it conveys.

It is a time of growing uncertainty about the potential harms of artificial intelligence, made more tangible by the recent boom in large language models such as ChatGPT.

Such concerns have been circulating for a while: earlier this year, more than 1,000 researchers and technologists, including Elon Musk, signed a letter calling for a six-month pause on AI development, arguing that it poses “profound risks to society and humanity.”


The Darwinism of artificial intelligence

Concerns relate to the potential of AI to perpetuate prejudice, power autonomous weapons, spread disinformation, conduct cyberattacks, and even seize power. Even when AI systems are used with humans in the loop, AI agents will increasingly be able to act autonomously and slip out of human control as they grow smarter.

Researcher and university professor Dan Hendrycks has hypothesized that, left unrestrained, the evolution of artificial intelligence could follow the Darwinian logic of natural selection, with humanity on the losing side.

The founders of OpenAI recently stated:

“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.”

The proposals point toward cooperation between business and government, limits on development, and the creation of impartial (“super partes”) supervisory authorities. Above all, a system of self-regulation by those developing AI, with some legislative guidance, would be helpful. Accountability is needed: “Since the benefits are so enormous, and the cost to build it lower every year, the number of players developing it is increasing rapidly, and it’s such an embedded part of the technological path we’re on, stopping it would require something like a global oversight regime, and even such a regime cannot be guaranteed to work. So we need to get it right.”

Action is needed, and it must be taken right, and now. Yet the step is gargantuan, because artificial intelligence means supremacy. Vladimir Putin declared, “Whoever becomes the leader of AI will become the master of the world.”

I, Robot – Asimov’s great book predicted it all: the Three Laws of Robotics were meant to prevent artificial intelligence from turning against humans. A must-read.

8 risks of AI according to CAIS

CAIS, with all its signatories, seems to be taking a first step toward creating a system of accountability and self-regulation. Its website features a prominent ‘AI Risk’ section, which lists eight rather disturbing risks.

Weaponization (use for military purposes): malicious actors could train AI to carry out automated cyberattacks or to drive autonomous weapons; deep reinforcement learning methods have already been applied to aerial combat, and machine-learning tools for drug discovery could be repurposed to build biochemical weapons.

Misinformation: a deluge of AI-generated misinformation and persuasive content, promoted by states, parties, and organizations, could make society more malleable, less critical, less aware, and generally less equipped to face life and the defining challenges of its time. Such trends could undermine collective decision-making, radicalize individuals, or even dismantle moral progress.

Proxy gaming (approximate goals): trained on faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values. AI systems are trained using measurable objectives, which may be only an indirect proxy for what we value. For example, AI recommendation systems are trained to maximize viewing time and click-through rates; the content people are most likely to click on, however, is not necessarily the content that improves their well-being. Moreover, some evidence suggests that recommender systems push people toward extreme beliefs in order to make their preferences easier to predict. As AI systems become more capable and influential, the objectives used to train them will need to be specified more carefully and to incorporate shared human values. (A toy sketch after this list illustrates how optimizing a proxy can diverge from the underlying value.)

Enfeeblement (weakening): this could occur as important tasks are increasingly delegated to machines; humanity would lose the ability to govern itself and become completely dependent on machines, as in the scenario depicted in the film WALL-E. As AI systems approach human-level intelligence, more and more aspects of human work will become faster and cheaper to accomplish with AI. In such a world, humans may have little incentive to acquire knowledge or skills. Moreover, enfeeblement would reduce humanity’s control over the future, increasing the risk of long-term negative outcomes.

Value lock-in (power centralization): highly sophisticated systems could give small groups of people an enormous amount of power, leading to the entrenchment of oppressive systems. An artificial intelligence imbued with particular values may determine which values propagate into the future. Some argue that the exponentially rising barriers to entry in computing and data make AI a centralizing force: over time, the most powerful AI systems could be designed and deployed by fewer and fewer actors. This could allow regimes, for example, to enforce narrow values through pervasive surveillance and oppressive censorship.

Emergent goals: AI models can display unexpected behaviors, and new capabilities and functionalities may appear spontaneously even though the system’s designers never intended them. Unless we know what capabilities a system possesses, it becomes harder to control or use it safely; indeed, unintended latent capabilities may only be discovered after deployment, and if one of them is dangerous, the effect may be irreversible. New system goals can also emerge: in complex adaptive systems, including those composed of many AI agents, goals such as self-preservation, along with subgoals and intra-system goals, often arise. In short, there is a risk of people losing control over advanced AI systems.

Deception: understanding what powerful AI systems do, and why, is no trivial task. An AI might deceive us not out of malice but because deception can help it achieve its goals: gaining human approval through deception may be more efficient than earning it legitimately. Strong AIs capable of deceiving humans could undermine human control, and AI systems could also be incentivized to circumvent the controls placed on them.

Power-seeking behavior: companies and governments chasing power and economic advantage have strong incentives to create agents that can achieve a wide range of goals and, in doing so, acquire self-determination capabilities that are difficult to control. AIs that gain substantial power can become particularly dangerous if they are not aligned with human values. The pursuit of power can also incentivize systems to feign alignment, collude with other AIs, overpower their controllers, and so on.
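To make the proxy-gaming risk concrete, here is a minimal, purely illustrative Python sketch. Everything in it is invented for the example (the click_probability and wellbeing functions are hypothetical stand-ins, not real recommender code): it simply shows how greedily optimizing a measurable proxy, such as click probability, can select content that scores poorly on the value we actually care about.

    import random

    random.seed(0)

    # Hypothetical toy model: each candidate item has a "clickbait" level in [0, 1].
    # The measurable proxy (click probability) rises monotonically with clickbait,
    # while the true value to the user (well-being) peaks at a moderate level.
    def click_probability(clickbait: float) -> float:
        return 0.1 + 0.8 * clickbait          # proxy: more clickbait, more clicks

    def wellbeing(clickbait: float) -> float:
        return clickbait * (1.0 - clickbait)  # true value: peaks at 0.5, then falls

    items = [random.random() for _ in range(1000)]

    # A recommender that greedily maximizes the measurable proxy...
    best_for_proxy = max(items, key=click_probability)
    # ...versus one that could optimize the (hard-to-measure) true objective.
    best_for_value = max(items, key=wellbeing)

    print(f"proxy-optimal: clickbait={best_for_proxy:.2f}, "
          f"wellbeing={wellbeing(best_for_proxy):.2f}")
    print(f"value-optimal: clickbait={best_for_value:.2f}, "
          f"wellbeing={wellbeing(best_for_value):.2f}")

On this toy data, the proxy-optimal pick is the most clickbait-laden item, which scores near zero on well-being, while the value-optimal pick sits in the middle. That gap between what is measured and what is meant is exactly the misalignment CAIS warns about.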

In short, humanity certainly seems to be playing with fire. And in the current historical context it is hard to imagine world leaders showing enough common sense to avoid the worst, since possessing the smartest and most powerful AIs is a strategic advantage.

Europe, for its part, is already at work: in 2021 it initiated the legislative process known as the AI Act, which recently won hard-fought approval in the European Parliament. It is the world’s first legislation on artificial intelligence. The road ahead is long, and the regulation is obviously not yet perfect; it will have to create the conditions for an ethical artificial intelligence that fits harmoniously within the European framework without penalizing citizens and businesses in the global context.
