SXSW – Sophie Kleber: Designing emotionally intelligent machines
As artificial intelligence (AI) and machine learning seep into the mainstream, how can designers ensure that intelligent machines and services will be more like “Big Hero 6” and less like “Ex Machina”? When we say designers, we don’t just mean those dealing with shapes and colors. We mean everyone involved with the creation of machines.
At this year’s SXSW conference, Sophie Kleber (Executive Director of Product & Innovation at Huge) explained how empathy is becoming an important design element in machine reaction algorithms. AI personalities will be tangible and give us the feeling we’re dealing with real humans. Clever design will humanize brands with emotional intelligence and conversational UIs.
But is there a need for our machines to always show empathy? Why is designing for emotions so important to us? And how can we design better emotional intelligence? Let’s have a look at Kleber’s vision of designing emotionally intelligent machines.
Why should we start designing for emotions today?
Right now, we are at what Kleber calls an ‘inflection point.’ That’s the moment where developments in machine learning, intuitive user interfaces, and ‘affective computing’ are all at full speed. Together, the three make this the perfect moment to start designing for emotions.
Of course, designers have always designed with emotions in mind. However, with the technologies available today, we can finally start adding a form of intelligent emotions to machines. With affective computing, systems and devices can recognize, interpret, process, and simulate human emotions. This brings many new opportunities. Given the prominence of emotion in the human experience, we can use affective computing for very powerful things.
That raises the question: what could an emotionally intelligent machine do for us?
What do people want from their machines?
When machines start to talk, people assume a certain relationship between themselves and their device. In one of Kleber’s recent research projects, people were asked about their relationship with Amazon’s Echo or Google Home. The respondents were mostly satisfied, but they were sometimes let down by the short answers their digital assistants gave them. In other words: they expected their machines to talk like a human would.
When researchers asked how such assistants should act in an ideal world, things became more interesting. The majority of the group was expecting to have a friendly relationship with their conversational UI. They even wanted their machines to show empathy, to give emotional support and to warn them about bad ideas, just like a good friend would.
People were addressing these machines as their friendly assistant, acquaintance, friend, best friend, or even as their mom! But whatever people called their machines, above all, they preferred them to be emotionally intelligent.
Detecting emotions with intelligent software
The emotion recognition software industry is set to become very big, projected to reach 36.7 billion dollars by 2021. But what does it take for a computer to detect emotion? As we all know, emotions are tricky. There are thin lines between different types of feelings, turning them into very complex phenomena.
Kleber states that software for emotional recognition should consider the combination of all of these different aspects of emotions. Starting with facial recognition, the software must understand the expressions and micro-movements of our faces. Currently, this is the most effective way of detecting emotional nuances, as this technique is already relatively advanced.
Voice recognition is another important element that we need if we want to understand what is going on. Emotional recognition software can capture the sentiments in a voice through analysis of frequency characteristics, time-related features, and voice quality.
The last pillar is biometrics. This is about the things going on inside the body that indicate a certain emotion. Intelligent software can detect a combination of electrodermal inputs, heart rate monitoring, skin temperature and movement. Wearable devices or epidermal sensing stickers are the most common tools to measure what’s going on. This technology is not yet comprehensive, and more testing is needed.
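Software following Kleber’s three pillars would have to fuse facial, voice, and biometric signals into one judgment. Here is a minimal sketch of such a fusion step. The function name, the per-modality score dictionaries, and the weights (facial weighted highest, since that technique is the most advanced; biometrics lowest, since it is not yet comprehensive) are all illustrative assumptions, not part of the talk:

```python
# Hypothetical sketch: fuse per-modality emotion scores (each 0..1)
# into a single weighted estimate. Weights are illustrative assumptions.

def fuse_emotion_scores(facial, voice, biometric, weights=(0.5, 0.3, 0.2)):
    """Return (most likely emotion, fused scores) from three modalities."""
    emotions = set(facial) | set(voice) | set(biometric)
    fused = {}
    for emotion in emotions:
        fused[emotion] = (
            weights[0] * facial.get(emotion, 0.0)    # facial expression
            + weights[1] * voice.get(emotion, 0.0)   # voice sentiment
            + weights[2] * biometric.get(emotion, 0.0)  # body signals
        )
    return max(fused, key=fused.get), fused

label, scores = fuse_emotion_scores(
    facial={"frustration": 0.7, "joy": 0.1},
    voice={"frustration": 0.6, "joy": 0.2},
    biometric={"frustration": 0.4},
)
print(label)  # frustration
```

Real systems would learn these weights from data rather than hard-code them, but the principle is the same: no single modality is trusted on its own.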
Context is king
Besides these three types of emotional clues, there is one more thing that matters: context. Context is king, as it can have a huge impact on our emotions.
By combining facial or voice recognition and biometrics with contextual data from our devices, intelligent software should be able to make better decisions. Here’s an example: based on location and calendar data, our devices might know when we’ll be late for an important meeting because we’re stuck in traffic. Intelligent software would use this context to guess that we’ll probably be at least a little frustrated. It can use that information to decide how it’s going to approach us emotionally.
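Kleber’s traffic example boils down to a simple contextual rule. The sketch below is a hypothetical illustration; the function name, thresholds, and emotion labels are assumptions made for this example, not anything from the talk:

```python
# Hypothetical sketch: infer a likely emotional state from
# calendar and traffic context alone, before any sensing.

from datetime import datetime, timedelta

def infer_context_emotion(next_meeting_start, eta, meeting_is_important):
    """Guess the user's likely emotion from scheduling context."""
    minutes_late = (eta - next_meeting_start) / timedelta(minutes=1)
    if minutes_late > 0 and meeting_is_important:
        return "frustrated"       # stuck in traffic, about to be late
    if minutes_late > 0:
        return "mildly_annoyed"   # late, but the stakes are low
    return "neutral"              # on time, no reason to assume upset

meeting = datetime(2018, 3, 12, 9, 0)
eta = datetime(2018, 3, 12, 9, 15)  # traffic puts arrival 15 min late
print(infer_context_emotion(meeting, eta, meeting_is_important=True))
# frustrated
```

A contextual guess like this is a prior, not a verdict; the sensing pillars above would confirm or correct it.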
Are we doing the right thing?
In 2014, Facebook intentionally made thousands upon thousands of people sad in a data experiment. That made us aware of something crucial. For every interaction in this field of emotional intelligence, Kleber says, we need the user’s permission. That was not the case in Facebook’s unethical experiment.
To check if we are doing the right thing, two questions are key:
- How big is your desire to be emotional?
- Do you have permission to play?
When we talk about desire, we should ask ourselves a couple of questions. What is the user’s emotional state? What are the user’s ambitions? What’s the nature of the interaction? What is their context? Based on these answers, it might not always be necessary to add emotion to the interaction.
There’s an equally long list of questions regarding the permission of the user. Is it the right user and the right context? Has the user given you permission? Do you have the right value proposition? Do you have the right intelligence? And what’s the danger of being wrong?
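Taken together, the two question lists act as a gate: every check has to pass before a machine earns the right to respond emotionally. A minimal sketch of that gate, with parameter names invented for illustration (they are assumptions, not terms from the talk):

```python
# Hypothetical sketch: gate emotional responses on Kleber's two checks.
# All parameter names are illustrative assumptions.

def may_respond_emotionally(emotion_adds_value, right_user_and_context,
                            has_user_permission, value_prop_fits,
                            intelligence_reliable, cost_of_being_wrong_low):
    """Allow an emotional response only if desire AND permission pass."""
    desire = emotion_adds_value  # is emotion even called for here?
    permission = all([right_user_and_context, has_user_permission,
                      value_prop_fits, intelligence_reliable,
                      cost_of_being_wrong_low])
    return desire and permission

# Without explicit user permission, everything else is irrelevant:
print(may_respond_emotionally(True, True, False, True, True, True))  # False
```

The point of the `all(...)` is that permission is not a score to be traded off; one failed check means no emotional response.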
How should machines respond to our emotions?
There are three ways an interface can use emotional intelligence to respond to a human. Picking the proper emotional response is key.
Firstly, an interface can respond like a machine, acknowledging the user’s emotions in the decision making process, but not showing any empathy in the output. This is what happens when a self-driving car makes a decision in order to protect your immediate safety.
Another option is reacting like an extension of the self. Here, the machine interprets emotions and presents them to the user. It is then up to the user to decide how to use this information. An example is the data generated by a smart wristband or smartwatch. These devices show you how you might feel, but don’t force you to change your emotions in a particular way.
The third option is for the machine to respond like a human. In this case, it gives advice or automatically triggers actions aimed at changing the emotion of the user. This, Kleber says, requires the permission of a user. An example is a robot that would give you a hug when it decides you are in need of consolation.
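The three strategies could be modeled as a simple policy choice. The sketch below is an assumption-laden illustration: Kleber names the strategies, not an implementation, and the mode names and response strings here are invented:

```python
# Hypothetical sketch: dispatch on the three emotional response modes.

from enum import Enum

class ResponseMode(Enum):
    MACHINE = "machine"          # use emotion internally, show none
    EXTENSION = "extension"      # surface the emotion, user decides
    HUMAN = "human"              # act on the emotion directly

def respond(mode, detected_emotion, user_gave_permission=False):
    if mode is ResponseMode.MACHINE:
        # e.g. a self-driving car factoring panic into a safety decision
        return "adjusting behavior silently"
    if mode is ResponseMode.EXTENSION:
        # e.g. a smartwatch showing you how you might feel
        return f"you seem {detected_emotion}; here is the data"
    # HUMAN: advising or consoling requires explicit permission
    if not user_gave_permission:
        raise PermissionError("human-like responses need user consent")
    return f"offering comfort for {detected_emotion}"

print(respond(ResponseMode.EXTENSION, "stressed"))
```

Note that only the human-like mode demands permission at call time, mirroring Kleber’s point that changing someone’s emotion is a privilege, not a default.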
Forming a deeper connection
Kleber has faith in the future of affective computing. Ten years down the line, we won’t remember a time when we couldn’t simply frown at our device and have it ask: “Oh, you didn’t like that, did you?”
User expectations will shift in the coming years. Natural emotional interactions will become the norm in ubiquitous computing. With these expanding possibilities, brands will be able to form much deeper connections with their customers. A thorough understanding of emotional psychology will become mandatory in the field of designing machines.
Ultimately, these developments should be helpful to us as humans. That’s why we build machines. If we don’t fuck up the implementation of emotional intelligence, we might finally be able to live the life we want.