AI doesn’t belong everywhere. Stop using a hammer to make lasagna.


This article is a preview of The Tech Friend newsletter.

At the dawn of every profound development in technology, you can count on the profoundly stupid, too.

Let me take you back in time to fart apps.

When Apple first let anyone make apps for the iPhone about 15 years ago, one of the first viral hits was iFart, which — as you might have guessed — made flatulence sounds.

Its popularity spawned many hundreds of copycats. Apple was so embarrassed that it essentially banned fart apps.

Whenever we get excited about the potential of a new technology, we rush to try it for anything. We don’t always stop and think: “Wait, is this a good idea?”

We are at the cusp of what many technologists believe is a breakthrough moment for artificial intelligence. That makes it essential to learn that just because you can apply AI to a task doesn’t mean you should.

Already we’re starting to see examples of overly complicated, overly intrusive, overly expensive and overly clueless uses of AI in situations where simpler technology — or no technology at all — might be better.

Examples of AI for the sake of AI

  • Grocery delivery company Instacart bragged last week about an AI chatbot feature that suggests what you can make for dinner. Yeah, Instacart invented a cookbook. Or a web search. Or a cooking app.
  • The fast-food chain White Castle has experimented with AI-powered license plate readers and AI voice assistants at drive-through windows that require people to accept terms and conditions before ordering a burger. Is shouting into a fast-food Siri any better than shouting through a speaker to a restaurant worker? (Jamie Richardson, a White Castle vice president, said the company experiments with an open mind.)
  • Levi Strauss said it is testing AI-generated fashion models to show its clothes in a range of sizes and on bodies with many skin colors. It’s an illusion of diversity, without actual humans. (A Levi’s representative said the company is “still hiring a wider range of human models.”)
  • A restaurant chain in Japan recently responded to gross pranks of people licking items on sushi conveyor belts by deploying AI cameras and software to identify suspicious diner behavior. Another chain responded by limiting the use of open conveyor belts of food and installing physical barriers between diners and the sushi.

Don’t use a hammer to make lasagna

Maybe some of these AI projects will turn out great. And even if they don’t, a few silly AI ideas don’t invalidate the momentous ways people are using AI to improve cancer screenings, overcome struggles with writing, build apps without knowing how to code, and brainstorm performance reviews.

We have to remember, though, that AI and other technologies are tools and not magic. They’re not appropriate for everything. Hammers are great, too, but we don’t use them to cook lasagna.

(Have you found a use of AI that seems ridiculous or unnecessary? Tell me about it at

Over the past decade, particularly when it comes to AI and other data-reliant computer software, we have repeatedly treated machines as superhuman rather than limited-use tools. The consequences have sometimes been dire.

Police departments are supposed to use facial recognition software as one investigative technique among many in catching criminals. But in multiple cases, law enforcement officials have treated a facial recognition match not as a lead but as almost the sole basis for an arrest, and arrested the wrong people as a result.

Vanderbilt University officials could have used the AI language generator ChatGPT to suggest wording for a difficult email to students grieving over a deadly shooting at another college. Instead, they used the AI’s soulless text verbatim. (The administrators involved apologized for using poor judgment.)

I have been reading an iconic book about AI, Cathy O’Neil’s “Weapons of Math Destruction,” that was published in 2016 and feels just as relevant today. The point of the book, and the work of other AI Cassandras including the data scientist Meredith Broussard, is that we all need to learn to put AI in its place. Sometimes it has no place at all.

Introducing the AI Juicero hall of fame

To remind us of the risks of technology as the wrong solution to the wrong problem, today I am adding Instacart, Levi’s and the other examples above to The Tech Friend’s AI Juicero hall of fame.

The name comes from an infamous Silicon Valley start-up. It created a WiFi-connected, $700 device designed to take bags of chopped fruits and vegetables and press them into juice. The Juicero machine had every fancy part and cutting-edge tech you could imagine.

Then in 2017, a couple of journalists demonstrated that squeezing the juice out of the bags by hand worked just as well. The resulting ridicule helped kill Juicero.

An overhyped juicer is an example of people getting so excited about solving a problem with technology that they don’t stop to ask: Is there a simpler way to do this? Is this a problem at all?

From the past 48 hours in AI news:

  • Microsoft believes an AI chatbot can help workers spot cyberattacks. (Washington Post)
  • An AI-generated fake image of Pope Francis wearing a puffy coat “probably counts as the first real AI-generated hoax.” (Garbage Day) But viral fake images spread long before AI. Remember hurricane shark? (Snopes)
  • Can YOU ace your AI job interview? (Washington Post)

Inspired by a question from a friend and a couple of news articles about using AI for travel planning, I asked a chatbot this weekend for vacation suggestions.

My verdict: Interesting but not awesome.

Try AI vacation planning for yourself, and tell me what you think. The software ChatGPT is free to try; you need to set up an account with your email address.

I asked ChatGPT to suggest an itinerary for a vacation this spring within a five-hour drive of New York City, and in a wooded area near a migratory bird route. (Bird-watching is one of my hobbies.)

ChatGPT suggested the Catskills and recommended activities for each day, including a visit to the Mohonk Preserve and John Burroughs Sanctuary. It named birds that I might see.

Not bad! Some of these were places I have wanted to try.

Was it better than searching on Google or scanning travel reviews? Maybe? It felt like a useful starting point for vacation ideas rather than the harnessed knowledge of a virtual travel agent who knows my tastes and budget.

I didn’t ask for more specific suggestions like towns I might stay in or restaurants I might want to try based on my likes and dislikes.

ChatGPT can (sort of) do that, although for now the freely available version is not connected to the live internet in most circumstances. That means ChatGPT can’t offer hotel room prices or availability, for example.

As with Google searches, it’s a skill to ask AI chatbots questions in a way that generates the best responses. My question was basic.
