The final frontier

Published: 18 April 2024

by Matteo Spicuglia

How to face the global challenge of Artificial Intelligence: we discuss it with Paolo Benanti and Marco Landi, guests of the Sermig University of Dialogue.

Paolo Benanti is a Franciscan friar and theologian, president of the Italian AI Commission for Information, and the only Italian member of the United Nations Committee on AI.
Marco Landi is the only Italian manager to have reached the top of Apple, serving as the company's general manager in the mid-1990s. He played a crucial role in bringing Steve Jobs back into the fold.

Nothing will be as before.
And maybe it already is. Artificial Intelligence (AI) marks an epochal turning point: endless applications and opportunities, the possibility of working with systems capable of solving problems, making work more efficient, offering quick solutions. But also decisive ethical challenges: the impact on relationships, the economic interests at stake, the reduction of traditional jobs, the need to always put the person at the centre. Paolo Benanti and Marco Landi are two leading figures in the field. The first, a friar and theologian, is president of the Italian commission on AI and information and a member of the UN committee of experts on artificial intelligence. The second, a successful manager, led Apple in the 1990s and was instrumental in Steve Jobs' return to the company. Their watchwords are "trust", but also "rules". «It's like what happened with the invention of the car, – explains Benanti – motorways were built, rules were defined, guardrails, traffic lights and roundabouts were introduced. This did not mean limiting the development of a technology. We must do the same.»
«We must not be afraid of innovation, – adds Landi – artificial intelligence is nothing more than a novelty that allows us to access a vast amount of data to develop answers and frame problems. AI is, in other words, a continuation of philosophy. Our philosophers asked themselves the great questions about man and the universe. Today we ask ourselves how our brain is made, and knowing how it works is fundamental to shaping how AI functions. There are opportunities, but it is important to establish rules.»

Let's start with the opportunities…
Benanti:
Let's think about medicine, an interesting example because it combines humanistic aspects, such as treatment, with scientific aspects, such as diagnosis. Today we live in a paradox: medicine has reached unimaginable frontiers and goals, but this is not true for everyone. Situations differ around the world. In Africa, for example, when things are going well there is one doctor for every three thousand people, and they are general practitioners. Getting a specialist consultation is almost impossible. Let's think about what could happen if these doctors had a smartphone with a camera capable of making a diagnosis and suggesting what to do, how to interpret a case. And let's think about what could happen if these doctors were connected to a large central medical centre that could support and guide them. From this point of view, artificial intelligence can amplify human capabilities, globalize knowledge and help us distribute scarce resources. And this is just one sector…

Speaking of risks, however, the loss of many jobs immediately comes to mind...
Landi:
The issue is complex. I'll start with an example. Not long ago I visited the Danieli steelworks in Udine, 20 years after my first visit. At the time I was struck by the very harsh working conditions, in unbearable heat. Today there is a computerized control room from which the entire production process is monitored. It's true, compared to 20 years ago there are fewer jobs, but there are better environmental conditions and higher salaries. I use this example to say that there will certainly be a reduction in jobs, but we can equip ourselves with tools to deal with this situation. I believe three directions are needed: reducing working hours, investing in training and new professional profiles, and providing forms of social support for those who will not be able to retrain for the new roles. By 2030 there will be enormous transformations; we must prepare now.
B.: The history of technology tells us that innovation changes the nature of products and reduces the number of market operators. Industry will inevitably be touched by artificial intelligence. In Italy we must ask ourselves how we will react, given that our productive fabric is made up of small and medium-sized enterprises. At the same time, AI will affect the middle and upper working classes, and in the future we could have fewer white-collar workers. I agree that we must accompany this process with solid rules. If we don't, the transition will be a problem.

What impact will there be on social relationships?
B.:
Alan Turing was already asking this. In the 1950s, he said that a machine would be intelligent when it could be mistaken for a human being. But is that enough? We must return to Plato, to the myth of the cave: not settle for shadows and mistake them for reality. A machine that shows love like a human being cannot be enough for us. We need to understand what lies behind it, the origin of the shadows. Because the shadow is not real; it is precisely an image of reality. Another example: in fragment 63, Heraclitus denounces his contemporaries' claim to discover the truth through oracles. Today we are not far from that situation: there will be those who see AI as a kind of oracle. There will be those who rely on large companies as people once relied on the gods. It won't be Aphrodite but Tinder, it won't be Mercury but Amazon, it won't be Athena but Google. Once people sacrificed a chicken; today we sacrifice our data. Like Heraclitus, however, we will always be able to use our reason.

Is this a problem of culture, resources, or initiative?
L.:
We cannot leave the initiative solely to Big Tech, to the technological giants who are investing billions of dollars. Europe must put forward its own proposal for artificial intelligence. Information and training must be provided to overcome misinformation and fear. Universities should train data experts who then go to work in European companies; otherwise we would be playing into the hands of Big Tech. I'll say more: education is fundamental from an early age, from primary school. In France I created a place, a house of technology, to help young people learn about the concrete applications of artificial intelligence. We need to prepare young people, but also a system in which our best talent can make its contribution.

AI essentially processes data and knowledge that already exist, which means the contribution and originality of human thought will still be needed. Isn't the risk, however, that such a powerful technology limits the development, and even the birth, of new ideas? Would we end up with a very efficient but not very original artificial intelligence?
L.:
This is very true, but there is something more. Take the example of the chess engines that have beaten human players. Their moves are the result of an impressive amount of available data. Yet at one point the machine decided to sacrifice the queen, a move that a human player would most likely not have made. It means the machine developed a logic of its own. Or think of what happened at MIT, with a super-antibiotic created by artificial intelligence: here too the machine followed a path that a human would never have taken. These examples invite us to question ourselves about the developments of AI. The truth is that technology is moving at an exponential rate and the industry cannot keep up. We cannot stand still and, above all, we must be the main players in technological development. We need a European plan.

Another of the biggest challenges is the blurring of the line between true and false. Think of photos and videos generated by AI. How do we defend ourselves?
B.:
The great challenge for a democratic society is information. Creating lifelike content is nothing new. Technology always has a possible dual use: the same tool can serve as a weapon or as a working instrument. The question is how to make AI a useful tool for democracy and communication. One idea is to attach a sort of licence plate that identifies the output of a machine. It is like handling dangerous substances according to standardized procedures: we must do the same with artificial intelligence. In this case, the difficulty is defining the procedures and rules. I believe a new responsibility for social media managers is also needed.

In what sense?
B.: In the 1990s, with the birth of the Internet and under pressure from the American administration, it was decided that Internet operators would not be responsible for published content. That rule no longer works, as the last twenty years have shown. At first, the internet seemed to open the doors to freedom: think of the role played by Twitter and social media in the Arab Spring. But ten years later those same tools, fuelled by fake news, favoured the assault on Capitol Hill. This makes me say that if we want the car to go straight we need guardrails to prevent skidding, and the large social media companies must also take on their responsibilities. The major companies are American and unregulated. They have certainly begun to self-regulate, but according to subjective criteria aligned with market interests.

But who can make the rules and above all enforce them?
B.: There is always someone who thinks that rules will kill innovation. That is not so. Returning to the metaphor of the car, the highway code does not stop mobility; it aims to reduce accidents. In Europe we are ahead in terms of rules on the use of data, but we need more. Global governance is needed, an idea also shared within the United Nations. The problem is understanding what the rules might be. We are moving in a difficult context that weighs on reflection and discussion.

Can artificial intelligence become a tool for building more cohesive and fraternal societies?
B.: It depends on the use humans make of it. The first computers were used for war: they calculated the quantity of uranium needed for nuclear bombs. In the interaction between machine and man, the problem is always man. The history of recent technology shows the attempt to make human-machine contact ever more immediate. Why not think that these technologies can help people get to know one another better and overcome more and more barriers? The choice is ours.

What is the best way to face the future and not be afraid of it?
B.: The future must not scare us. Steve Jobs always said that we need a vision, that we must be clear about what we want to do with ourselves and our lives. Another thing he passed on to me is curiosity. An invitation that I make my own: be curious!
L.: The industrial revolution of the last century created new jobs, but it also instilled in our way of thinking the idea that only work counts, that work is the reason for our existence, the sole basis of our life, our support and our way of growing. I think artificial intelligence brings the opportunity to redefine our priorities, to rethink how we spend time with family, with friends, with nature. When we die we will not regret having worked too little, but having failed to give love to our children, friends and family. Love is our reason for existing.


Matteo Spicuglia
NPFOCUS
NP March 2024
