Carme Torras: "We must build more utopias"

07/02/2024 - 20:22 h

Interview by Toni Pou for Barcelona Science and Universities and Núvol.

Mathematics, philosophy, computer science, robotics and literature. These are five of the areas of knowledge that Carme Torras cultivates. She began by modelling the brain of the crayfish and now, in addition to writing science fiction novels, she leads a 65-person research group at the Institute of Robotics and Industrial Informatics, a joint centre of the Polytechnic University of Catalonia and the Spanish National Research Council.

Which came first, science or fiction?

Science, but not because I wasn't interested in fiction. I was already reading science fiction when I was very young, at the age of nine, but I saw it more as something that nourished me, because what I liked was mathematics and physics and, in short, understanding the world. I always wrote, though not with the intention of publishing. By the time I started publishing science fiction, I was already a more or less recognised researcher.

This passion for understanding the world led you to study mathematics.

I studied mathematics and at the same time I enrolled in philosophy. Choosing was a trauma. I don't think we should have to choose so early, nor that everything should be kept so separate. Technology degrees are now starting to include some ethics content, but before there was nothing, and that can't be.

How did you come to robots?

I was very interested in intelligence, in how we think and how machines can reproduce thought. At a certain point, I wrote a very long letter to Michael Arbib, a professor at the University of Massachusetts, explaining that I had read his books and was very interested in what he was doing. He wrote back! It was a turning point in my life, because I ended up in the United States doing a master's degree in computer science with a specialisation in brain theory.

Did you work with robots there?

No, but when I came back there was no possibility of doing that research here. On the one hand there were the experimentalists, who did experiments on small animals, and on the other hand there were the computer scientists. There was no way to link the two. I even went to the Cajal Institute in Madrid, but it was too technical. And this was a problem. A problem that is still there.

What is it?

Everyone says that multidisciplinarity is very important, but when it comes down to it, carrying out a multidisciplinary task is very complicated. I could no longer do what I was doing.

And what did you do?

Gabriel Ferraté, who was then rector of the UPC, told me that he wanted to open a robotics line and that if I got it started he could hire me.

You got into it and your research evolved from industrial robots to assistive robots.

At first we were working with arms for assembly lines and things like that, but as the years went by I saw that technological tools were having more and more impact on people’s lives and I started to care about social issues. And about ten years ago, I saw that there was an opportunity in the field of clothing handling in care settings, both to help people who can’t dress themselves and to facilitate hospital logistics when handling towels or sheets.

You have also developed a feeding robot.

We started with robots that handle clothes and we have gone on to develop other applications. When we started going to social and health centres, such as Pere Virgili, we realised that 50% of the people admitted need to be fed. The healthcare staff can’t cope and ask the families to do it.

What stage of development is this robot at?

At the beginning we installed a camera to see when the person opened their mouth and a force sensor to avoid hurting them. After the pilot test, we saw that the robot also needs to switch from spoon to fork, wipe the person's mouth with a napkin and offer a drink with a straw. And also to have a friendly face that says, for example, whether the person has eaten enough or not. We have implemented this in the form of an expressive face with glasses. Now we are waiting for someone willing to take the technology-transfer step and commercialise it.

This is another level of interaction. A feeding arm is one thing, a talking face is another. Because this type of robot must have two ethical dimensions, right? The physical one, so it does no harm, and the emotional one, so it does not create emotional bonds with the user.

Of course, that’s why the first thing it does is say something like “I’ll talk to you, but if you talk to me I don’t understand you, don’t think I’m a living being”. They wanted this verbal and facial interaction at all costs and that is why we have implemented it.

You are very careful, but there are robots on the market that listen to you, respond to you and have a face that simulates emotions. And many are sold as children’s companions.

And more of them will be. Sentimental companions are also being made, not to mention sexual ones.

“The future is already here,” said science fiction writer William Gibson.

And it is very dangerous. That's why we put a lot of emphasis on the ethical dimension. It's one thing for a very elderly person with no company to grow fond of a robot; it's quite another for a child. This kind of interaction at certain ages can have a big impact on development. The robot has no life experience to pass on, and the child can develop empathy problems, as well as other deficits.

Do your robots use artificial intelligence?

Yes, we work at the frontier of social robotics and artificial intelligence.

Are you concerned about the current state of artificial intelligence and the direction it may take?

Of course, much more than about robotics, because the deployment of robotics takes time. A robot is a physical entity that not everyone understands, whereas with mobile phones you have all the applications you want in a second. With artificial intelligence, technological deployment has gone much faster than ethical regulation, but in robotics we still have time. A lot of effort has been made, because after what has happened with artificial intelligence, the robotics community has really stepped up to the plate. A lot of regulatory work has already been done, and so I think we are on the right track.

So artificial intelligence is not on the right track?

I think it has got out of hand and now we are trying to get it back on track. But it is very complicated, because the first thing that should be done is to educate the population. Seven years ago, Finland launched a course, open to everybody, on the basics of artificial intelligence and how to use it correctly. It has been translated into English and into all the languages of the European Union, and now it can also be taken in Catalonia.

Do many people do it?

In Finland almost everybody took it, but here when I talk about it I see that people don't even know it exists. And it's a pity, because it's a course in Catalan, open to everyone, that doesn't require much effort and teaches how this technology works. I think it helps people see that there is no magic behind it: it is more sophisticated programming, yes, but it can be understood. There are a few key concepts that are accessible, and grasping them makes you lose your fear of the technology and appreciate the risks better, so you can see how to use it. Isn't it true that everybody drives, yet most people don't know exactly how a car works?

With deep learning and layers of neural networks stacked on top of each other, there comes a point when you lose track of what the machine is doing: you no longer know how it learned to do what it does or how it became what it is. This means that, in a way, the machine can be unpredictable. Even the scientists who design these systems cannot predict how they will react once trained in this way.

Are we talking about something that resembles human creativity and could be called artificial creativity?

I don't think so, because, basically, all the machine does is statistics on a large amount of data. If, for example, it has to diagnose tumours from X-rays, it may have seen millions, many more than a radiologist can see in a lifetime. There is an experiment that I like very much, in which a machine was trained to diagnose tumours and then both the machine and a group of radiologists were given X-rays to assess. Separately, each had an error rate of 5%. But when the radiologists reviewed the machine's diagnoses, the overall error dropped to 1%. This is the way forward. Obviously, the machine can compute much faster and see much more, but the doctor has experience and can give context. The problem arises when the doctor accepts what the machine says without really looking. Something similar happens with automated driving: when there is a dangerous situation, the user must take control, but they may be distracted. The problem, in the end, is always human.

Machines have long since beaten us at chess: in 1997 Deep Blue defeated world champion Garry Kasparov. But until recently it was said that they could never beat us at go, an Asian strategy game. In 2016 the AlphaGo system took on Lee Sedol, a genius considered the best player in history, and in the second game, played on 10 March, the machine's 37th move stunned everyone. It looked like a mistake, but it later turned out to be a completely counter-intuitive manoeuvre that led to AlphaGo's victory on move 211. I don't know if this is creativity, but it seems that this move was outside all the statistics and forecasts.

These machines work with statistics from a lot of data, but they also do exploration. And because they are very fast, they can look many more steps ahead than we can. Even though go has a much larger space of possibilities than chess, they can still look much further ahead and discover paths that rationally seem very strange but in fact exist within the game's universe of possibilities and lead to a winning position. I always say that machines work in a closed universe. We, however, can discover things outside a closed universe. Or, at least, it seems so to us.
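To make that idea concrete, here is a minimal, hypothetical sketch of what exploring a closed universe of game moves looks like in code. The toy game (players alternately take one or two stones from a pile; whoever takes the last stone wins) and the function name are illustrative assumptions, not anything AlphaGo or Torras's group actually uses; AlphaGo's real search combines neural networks with Monte Carlo tree search and is vastly more sophisticated. The point is only that the program searches every possible continuation, yet never steps outside the game's own legal moves.

```python
def best_move(stones, maximizing=True):
    """Exhaustively explore every continuation of a toy take-1-or-2 game.

    Returns (score, move), where score is +1 if the first player can force
    a win from this position and -1 otherwise. The search looks arbitrarily
    far ahead, but only along legal moves: a closed universe.
    """
    if stones == 0:
        # The player who just moved took the last stone and won.
        return (-1 if maximizing else 1), None

    best_score, best = (-2, None) if maximizing else (2, None)
    for take in (1, 2):
        if take > stones:
            continue
        score, _ = best_move(stones - take, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best = score, take
    return best_score, best


if __name__ == "__main__":
    # From 7 stones, the exhaustive search finds a forced win: take 1 stone.
    score, move = best_move(7)
    print(f"With 7 stones, take {move}; forced outcome for the first player: {score}")
```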

Are we absolutely sure?

No.

So do artificial intelligence systems that compose songs and paint always do so within a limited process?

I think it is a combinatorial and exploratory process within a certain universe. There are known starting points and a space of possible paths that can be characterised, because that is what has been programmed. The machine cannot get out of it.

And we are not like that?

We don’t know.

In addition to the risks of specific applications such as diagnostics or automated driving, it has been said that artificial intelligence may threaten us as a civilisation.

It is more likely that we will eliminate ourselves. The other day, Eudald Carbonell said it on a television programme: there is no need for artificial intelligence to come and exterminate us, because human stupidity is quite enough. And it is partly true, because, in the end, decisions and responsibility are human. We are destroying the planet ourselves, without any external force, no aliens.

Does science fiction have a role to play in helping to reflect on these issues?

Years ago I wrote the novel The Sentimental Mutation, and then I produced some educational materials that are being used in many places. Now I have even been invited to an international conference on social policy to talk about it. This kind of material gives people, and also the experts who develop these technologies, guidelines that are more understandable than philosophical texts. Putting people in concrete narrative situations helps a lot. In this sense, I think science fiction writers have a responsibility: we have to build more utopias. Because with so much dystopia, people come to believe it, and in the end we will have a self-fulfilling prophecy. We have to make more optimistic prophecies.

In your case, how do research and writing science fiction feed each other?

They are mutually inspiring. Writing and reading fiction gives me ideas for research, and vice versa. I wrote The Sentimental Mutation when I was making the transition from industrial robotics to assistive robotics, because I had started to think that the robots we were making would be marketed and that people would use them. While I was writing, I thought about how I wanted them to be, and I ended up designing the social robots that have later materialised in everything we work on in my research group. On the other hand, scientific knowledge allows me to see which futures are feasible and which are not, which helps me keep the novels plausible. I have a blast!

Have you read any interesting science fiction books lately?

I read a lot about social robotics and, in this sense, I have been very interested in Kazuo Ishiguro's Klara and the Sun, which raises social robotics issues. There is a girl who is ill and, so that she is not always alone at home, her mother buys her a robotic assistant, Klara. The way they treat each other is very amusing. Another interesting book is Machines Like Me, by Ian McEwan, a title with a double meaning. It deals with the ethical issues that can arise from a relationship with a robot, such as whether someone has the right to hit it, who gets to set it up first, or how each partner's privacy is managed.

What would you say to a girl or a young woman who likes science and is considering going into it?

First, that there are two misconceptions that hold girls back from choosing science and technology. One is that people who go into these disciplines are shut away in labs, isolated from the world. And since girls tend to want their work to have a social dimension, they end up thinking that this is not for them. But technology is now very present in everyone's life, and they should realise that they can have a great deal of social impact, whether by developing mobile applications or robotic technology like we do. The other misconception is that you must choose science or arts, technology or humanities. Well, no: you can do both. I have done it. Moreover, they must believe in their curiosity and creativity, let themselves go, and understand that their future has no limits, that they should not set themselves artificial ones, and that the planet needs them.