My manager’s assistant tricked me into thinking she was human

Recently, I got an email from Amy, my manager’s assistant. She contacted me to schedule my half-year performance review. Nothing out of the ordinary, save for one thing: I didn’t know my manager had an assistant. I know everyone in our office, and I’d never met someone called Amy. Strange. A closer look at the email made clear what was going on:

In the second line of her signature, Amy reveals her true nature: she’s a digital assistant powered by artificial intelligence. Because her email looked so natural, it took me a good minute to realize Amy was a bot.

From a user experience (UX) perspective, this is an interesting case. Why? Because whether I think I’m talking to a bot changes how I respond.

If (bot) { dumbDown(response); }

Had I thought Amy was human, I might have replied something like this:

“Hi Amy. Thanks for your message. I have another meeting at the time you suggested. Would sometime earlier in the afternoon work for Naos? Cheers!”

When I believe I’m talking to a human, I’ll go through the effort of formulating a polite response. I can also be more implicit, knowing the other person is able to infer what I say. (In the context of scheduling a meeting, a human will understand that “I have to pick up my kids from school” actually means “I can’t make Wednesday afternoon.”)

But once I realize Amy is a bot, things change. Based on the bot’s first message, I cannot anticipate its intelligence. That’s why I’ll intuitively dumb down my response to make sure I’m understood. Because I want the bot to get what I mean, I start talking like a computer myself:

“No, I can’t make Wednesday at 17:30. Is Wednesday at 16:30 possible?”

You could argue this is an improvement. Because I can skip the niceties, I end up writing a shorter message. Efficient, right? Not quite. It actually takes more effort to write like this, because I’m pre-processing my statements.

More crucially: I’m pre-processing information for someone whose intelligence I can’t gauge. That’s tricky. Even though my second reply is very straightforward, there’s still no way I can be entirely sure the bot will understand. And that’s a terrible user experience.
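To see why that’s so tricky, here’s a minimal sketch of the kind of keyword-based parser a limited scheduling bot might be running under the hood. The function name and regex patterns are my own assumptions for illustration, not anything Amy actually does:

```python
import re

def parse_reply(reply: str) -> list:
    """Return every scheduling intent a simple, pattern-based bot recognizes."""
    text = reply.lower()
    intents = []

    # Explicit decline of a proposed slot, e.g. "I can't make Wednesday at 17:30."
    for day, time in re.findall(r"can'?t make (\w+day) at (\d{1,2}:\d{2})", text):
        intents.append({"intent": "decline", "day": day, "time": time})

    # Explicit counter-proposal, e.g. "Is Wednesday at 16:30 possible?"
    for day, time in re.findall(r"is (\w+day) at (\d{1,2}:\d{2}) possible", text):
        intents.append({"intent": "propose", "day": day, "time": time})

    return intents


# The dumbed-down reply parses cleanly: a decline plus a counter-proposal.
print(parse_reply("No, I can't make Wednesday at 17:30. Is Wednesday at 16:30 possible?"))

# The natural, implicit reply yields nothing: a computerized shrug.
print(parse_reply("Thanks Amy! I have to pick up my kids from school that afternoon."))
```

The explicit reply happens to line up with the patterns the bot knows, but I had no way of knowing that when I wrote it. The polite, implicit reply falls straight through.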

Don’t pretend to be human if you’re not

The better bots get at tricking us into thinking we’re talking to a human, the more complex our responses will be. Paradoxically, this will increase the odds of the bot misinterpreting those responses. For that reason, it’s in everyone’s interest for a bot to introduce itself as such. At least while we still have imperfect natural language understanding (NLU).

If we want users to embrace bots, bots need to make clear what they can and can’t do. Otherwise, we’ll see bad experiences that haunt users for years. Siri is a good example here. Even though voice recognition and NLP have improved in recent years, I still talk to Apple’s assistant as if she can barely understand me. That’s entirely because of the suboptimal experiences I had during Siri’s first years.
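One way to do that is to make the bot’s very first message say what it is and what it can handle. A hypothetical sketch of such an introduction (the class, its fields, and the wording are all illustrative, not any real product’s behavior):

```python
from dataclasses import dataclass

@dataclass
class SchedulingBot:
    """A hypothetical assistant that is upfront about being a bot."""
    name: str = "Amy"
    capabilities: tuple = (
        "propose meeting times from your manager's calendar",
        "reschedule or cancel a meeting you've already booked",
    )
    limitations: tuple = (
        "I only understand explicit dates and times (e.g. 'Wednesday at 16:30')",
        "anything else gets forwarded to a human",
    )

    def introduction(self) -> str:
        # State the bot's nature first, then what it can and can't do.
        lines = [
            f"Hi, I'm {self.name}, an automated scheduling assistant (not a human).",
            "Here's what I can do:",
            *[f"  - {c}" for c in self.capabilities],
            "And what I can't:",
            *[f"  - {l}" for l in self.limitations],
        ]
        return "\n".join(lines)

print(SchedulingBot().introduction())
```

An opening message like that costs one extra line of text and saves the user from guessing how much intelligence is on the other end.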

In a stimulating blog post about user expectations in voice control, Benedict Evans delves deeper into this flaw of Siri’s UX:

“This was the structural problem with Siri - no matter how well the voice recognition part worked, there were still only 20 things that you could ask, yet Apple managed to give people the impression that you could ask anything, so you were bound to ask something that wasn't on the list and get a computerized shrug. Conversely, Amazon's Alexa seems to have done a much better job at communicating what you can and cannot ask.”

With that in mind, here’s a call to all bots out there: you’re not human, so stop pretending to be. Yes – you might be able to fool us for a bit, but you’ll blow your cover eventually. Be frank about your limitations. Honesty is the best policy.

Your customer interactions can reach new heights with intelligent digital products. Wanna know what apps, chatbots and AI can do for your business? Reach out to us at