"Public research has to play a key watchdog role on the use of social robots".


COMETS, the CNRS Ethics Committee, has published a report on so-called 'social' robots and the effects of their increasing everyday use, focusing particularly on the phenomenon of user attachment to these machines. Christine Noiville, the director of COMETS, explains.

How did COMETS come to work on the issue of 'attachment' to social robots?

Christine Noiville: In recent years, a lot has been said or written, and a lot of ethical recommendations made, about chatbots, conversational agents and other robots programmed with artificial intelligence techniques. These are embedded in telephones, watches or computers and are now very much an integral part of our environment. Their benefits have been pointed out, along with their actual or potential negative effects in terms of the spread of fake news and racist, sexist or conspiracy-related messages, as well as copyright issues. Recently, Scarlett Johansson threatened to take OpenAI to court over the latest version of ChatGPT, which imitated the actress's voice. The case shows the wide range of legal and ethical issues raised by the roll-out of these tools.

However, all this thinking on the subject has too often overlooked a phenomenon accompanying the growing use of some of these robots, namely people's attachment to what are now called 'social' robots. COMETS took up this subject following an alert from one of its members, a computer scientist and robotics researcher. We felt the subject required greater consideration from the public research sphere.

What is a 'social' robot?

C. N.: These are companion robots, known as 'relational' or 'emotional' robots, such as Replika, Azuma, SimSensei Kiosk and many others. They're designed to act as a companion, a friend, a health or well-being coach, or even to stand in for a deceased loved one (deadbots). They often have a humanoid appearance, mostly female, which tends to reinforce gender stereotypes (women are gentle, they know how to listen and 'serve' other people, and so on!). Above all, beyond using audio sensors or cameras to converse and interact with their user much as a human would in terms of voice, intonation, gestures and facial expressions, they can also detect a user's emotions ('Are you sad? You look anxious!') and simulate emotions themselves, crying or laughing with the user, congratulating him or her, etc. Users can then tend to attribute human qualities to the machine and consider it intelligent, benevolent and empathetic. They may also develop the illusion that an intimate bond is being forged between the machine and themselves. In short, they become attached to their robot.

COMETS is aware of the benefits that can sometimes stem from this. We may think, for example, of certain conversational agents, or of robots like the Paro cuddly toy designed to keep elderly people company. However, COMETS is also concerned about certain individual and collective impacts that could result from the tendency for people to become attached to their social robots.

What are the risks?

C. N.: Currently there is very little scientific documentation on this subject. It is not a particularly visible topic, apart from very specific cases that get media coverage, such as young people 'marrying' their robot or a young man committing suicide following exchanges with his conversational-agent friend. However, the growing use of relational and emotional conversational agents means that what is currently an underground phenomenon could come to affect our lifestyles and even the links we have with other humans.

Our consideration for, and attachment to, objects and machines that evoke emotions in us is clearly nothing new. Think of dolls, cuddly toys and certain machines like a locomotive, a weaving loom or even a Hoover! Once these have been 'tamed', i.e. once the user has understood their benefits and limitations, they become part of what is seen as the work team, and even of that team's quasi-affective environment, to the extent that they are sometimes given a first name, often a woman's name, incidentally…

But one more novel aspect of social robots is that their designers do their utmost to make sure users attribute empathy to them and nurture the illusion that a special emotional bond is forming between user and machine. Risks lie ahead in terms of control, addiction, de-socialisation and, above all, manipulation. This is a different, more insidious phenomenon than addiction to video games. It's linked to the fact that the machine speaks, responds, dialogues and captures emotions, which changes the whole game. As the author Alain Damasio put it, these social robots are creations that are made to be 'creatures', which brings us to the question of the relationships this will engender with the tool but also with other humans... We also need to highlight the addictive and de-socialising potential of a virtual world in which you're reassured and congratulated, apps are your 'real' and benevolent friends, and you can bring the dead back to life! Not to mention the risk of manipulation, which is all the greater given that most social robots are developed by companies aiming to strengthen attachment in order to consolidate their market share and more effectively exploit users' emotions for commercial purposes.

How does this concern public research?

C. N.: Researchers and various ethics committees have already made their recommendations to manufacturers, public authorities and users of conversational agents. The work of Raja Chatila¹ and Laurence Devillers² notably springs to mind. Among other things, they all insist on the need to develop information and even education on using social robots in an enlightened and liberating way.

COMETS endorses these recommendations but also calls for vigilance from researchers themselves, specifically computer science and robotics researchers, as well as their learned societies and research organisations. Public research is actually doubly concerned.

Firstly, the CNRS, Inria³, the CEA⁴ and French universities are carrying out a number of studies to better understand the 'socio-emotional' component of what is called 'human-agent interaction', and thus continually enhance users' experience with robots. This is clearly a laudable aim, but most of this work actually reinforces the very risks that concern COMETS (projecting human qualities onto the robot, over-attachment to the machine, etc.) without sufficiently examining its end goals and effects. Is it really useful to develop experimental social robots, which will help drive developments in private research, that imitate humans as closely as possible, right down to hesitations in language and the expression of emotions? Researchers need to think more deeply about these kinds of questions.

Secondly, public research can play a key watchdog role in monitoring and measuring the long-term consequences of the use of the social robots currently on the market. Now that these are being used on a large scale, it's important to gauge their impact on users' cognition, psyche and behaviour, and on their relationship with others and with the world. This means constructing an independent knowledge base in response to the challenges these machines entail, and ensuring they are used as responsibly and freely as possible.

What are your recommendations?

C. N.: First of all, the scientific communities concerned (particularly computer scientists and roboticists) need training and access to the recent international literature on these issues so they can collectively ask themselves the right questions. For example, what are the advantages and disadvantages of giving robots a humanoid form, together with the ability to simulate human language and behaviour as accurately as possible and to understand and simulate emotions? Also, with the increasing use of social robots, it is important for public research to carry out large-scale, long-term scientific studies into the risks involved. This research needs to be interdisciplinary, combining work in computer science, robotics, behavioural sciences, language processing, etc. with research in psychology, neuroscience, linguistics, sociology, law, ethics, philosophy and anthropology. In this way, independent and sufficiently solid data can be obtained to feed into the regulatory decision-making process. To provide input for this research, COMETS recommends that a monitoring centre collect large-scale, long-term data on the use of social robots, the ways users appropriate them, and their impact on users' emotional states and decisions.


¹ Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University and a member of the National Pilot Committee for Digital Ethics (CNPEN).
² Professor of computer science applied to social sciences at Sorbonne University and a member of the 'Commission for Reflection on Research Ethics in Digital Science and Technology' of Allistene (the Digital Sciences and Technologies Alliance).
³ National Institute for Research in Computer Science and Control.
⁴ Alternative Energies and Atomic Energy Commission.