Do you sometimes address your phone as Siri or Alexa or some other personal name? Do you alert Google by voice when you need some information? The makers of digital assistants seem to be in a race to see who can make theirs the most human. But it turns out that may not be such a good thing.
The more human digital assistants seem, the more we experience them as human and the more we may try to interact with them as if they were human. Or the more we may avoid interacting with them…
A couple of weeks ago, I was in an unfamiliar place and wanted to buy a nice bottle of wine as a hostess gift. I did an Internet search and found a likely candidate called Flame Liquors. Merrily I headed off without writing down the address since I felt sure that Siri would find it for me.
“Siri, what is the address of Flame Liquors?” I said clearly and politely. Siri said, “I cannot find a listing for Claim Liquors.”
“Siri, what is the address of Flame Liquors?” I said, with a little more fricative f. “I cannot find a listing for Claim Liquors.”
“Siri, Flame Liquors,” said I. Her response was the same. I don’t know why Siri couldn’t figure out that you can’t extend a hard k sound the way you can a fricative f, but then Siri is not human and sometimes she is very stupid. Sorry, Siri. (Parenthetically, to whom am I apologizing? Is that voice too much like a person?) I found Flame Liquors without her help.
But here’s another scenario. The father-in-law of a dear friend has wanted a GPS system for years. He finally got one at Christmas. He entered the address of the place they were going, about 3 hours away. They finally got there 5½ hours later. Not because the directions were wrong, or because of traffic, but because the father-in-law wouldn’t believe or follow the directions the GPS provided (which were correct).
In fact, a recent research study suggests that the more human-like a digital assistant is, the less likely some people are to ask it for help. The study, conducted by researchers at Chungbuk National University and published in Psychological Science, found that some individuals, particularly those with a fixed mindset, were reluctant to seek advice from a digital assistant that seemed human. The concepts of fixed and growth mindsets come from the work of Dr. Carol Dweck at Stanford. Someone with a growth mindset is more likely to believe that intelligence and talents can be developed. Someone with a fixed mindset believes that intelligence is fixed. As a result, fixed-mindset individuals often hesitate to ask for help because it implies that they aren’t as smart as they should be.
But if the help is coming from a computer, then why care? Because it just seems too human. At least that is what the research suggests.
This has implications for many areas of digital interaction, and one of them is online educational and training programs. If you think about the ideal coach for a digital educational program, intuition says it should behave as much like a human as possible. But is that really best?
Do kids like video games partly because the judge of whether they pass a level is impartial rather than human? What seems more fair? What seems more helpful? Do I want a human’s opinion on how I can get to the next level of whatever I am studying or training in, or do I want an impartial expert? Do I want someone who may have opinions based on my gender or race or ethnicity, or do I want that advice to be “bias-free”? Do I want a hint from someone who might care whether or not I took that hint?
These are complex questions and have to do with mindset, emotion, motivation and learning. And they also have to do with the capabilities and deficits of the machines with which we interact on an increasingly frequent basis. How is your relationship with your digital assistant?