What my mopping robot taught me about the future of artificial intelligence

A few months ago, a friend noticed the condition of my kitchen floor and decided to stage an intervention. I could understand her point, although in my defense I have two teenagers and a large dog. My friend gave me a matching robotic mop and vacuum, each programmed to move around a room and clean as it goes.

When the boxes arrived, I recoiled at the sight of the iRobot logo. I’m slow at figuring out new technology and was afraid the devices would spy on me and suck up data along with the dog hair. But the directions were easy, and I finally decided I didn’t care if someone studied the mysteries of my kitchen floor.

I powered up the two robots, watched them roll out of their docks to explore the space, and quickly fell in love with my newly sparkling floors. I kept giving demonstrations to all my guests. “I think you’re more interested in the robo-mop than in us,” joked one of my teenagers. “They are like your new children.”

One day I returned home and discovered that one of my beloved robots had escaped. Our patio door had blown open and the mopping robot had rolled into the backyard, where it was busy trying to clean the edges of the flower beds. Even when its brushes were clogged with leaves, bugs, blossoms, and mud, its little wheels valiantly kept turning.

The episode showed the limits of artificial intelligence. The robo-mop was acting rationally: it had been programmed to clean “dirty” things. But the whole point of dirt, as the anthropologist Mary Douglas once observed, is that it is best defined as “matter out of place.” Its meaning derives from what we mean by clean, and this varies according to our largely unspoken societal assumptions.

In a kitchen, dirt can be garden debris such as leaves and mud. In a garden, that same matter is “in place,” in Douglas’ terminology, and does not need to be removed. Context is everything. The problem for robots is that reading this cultural context is difficult, at least initially.

That’s what I thought of when I heard about the recent AI controversy in Silicon Valley. Last week, Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, published a blog post claiming that he “could be fired soon for doing AI ethics work.” He was concerned that an AI program created by Google had become sentient, after it expressed human-like feelings in online chats with him. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” the program wrote at one point. Lemoine sought advice from experts outside Google, and the company put him on paid leave for allegedly violating its confidentiality policies.

Google and others argue that the AI was not sentient: it was simply well trained in language and was reproducing what it had learned. But Lemoine alleges a broader problem, pointing out that two other members of the AI team were removed over the past year amid (different) controversies, and claiming that the company is “acting irresponsibly . . . with one of the most powerful information access tools ever invented.”

Whatever the merits of Lemoine’s particular complaint, it is undeniable that robots are being endowed with increasingly powerful intelligence, raising big philosophical and ethical questions. “This AI technology is powerful, and so much more powerful than social media, [and] it’s going to be transformative, so we need to move forward,” Eric Schmidt, the former Google chief, told me at an FT event last week.

Schmidt predicts that we will soon see not only AI-enabled robots designed to solve problems according to instructions, but also robots with “general intelligence” – the ability to respond to new problems they have not been trained to solve, by learning from one another. That could eventually stop a robo-mop from mopping a flower bed. But it could also lead to dystopian scenarios in which AI takes the initiative in ways we never intended.

A priority is ensuring that ethical decisions about AI aren’t just made by “the small community of people who are building this future,” to quote Schmidt. We also need to think more about the context in which AI is created and used. And maybe we should stop talking so much about “artificial” intelligence and focus more on augmented intelligence, in the sense of systems that make it easier for people to solve problems. To do this, we need to combine AI with what you might call “anthropological intelligence” – or human insight.

People like Schmidt insist this will happen, arguing that AI will be a net positive for humanity, revolutionizing healthcare, education and much more. The amount of money pouring into AI-linked medical startups suggests many agree. In the meantime, I will keep my patio door closed.

Follow Gillian on Twitter @gilliantett or email her at gillian.tett@ft.com

