
You've seen the headlines: "AI can read your mind," "AI is brewing your next whisky," "AI is better than doctors at something or other," "This AI sounds so convincing it's too dangerous to release into the wild," and on and on. 

And, of course, the ubiquitous "AI is destroying jobs."[1]

Most people can agree that popular media coverage of artificial intelligence is bad. AI researchers know it, some reporters know it, and the average consumer of media probably suspects it.

The headlines are mostly filled with urgent appeals to panic, and the substance of the articles is vague, obscure, and anthropomorphized, leading to terrible presumptions of sentience.

Also: MIT finally gives a name to the sum of all AI fears[2]

More than forty years ago, Drew McDermott at the MIT AI Lab had a great term for such misleading characterizations: "artificial intelligence meets natural stupidity." Back then, McDermott was addressing his peers in the AI field and their unreasonable anthropomorphizing. These days, it seems, natural stupidity is alive and well in journalism.

Why is that? Because a lot of writing about AI is not about AI per se; it is writing around AI, avoiding what it actually is.

[Image: AI headline clippings, May 2019]

What's missing in AI reporting is the machine. AI doesn't operate in a mysterious ether, and it is not a glowing brain.
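To ground that point, here is a deliberately tiny sketch (our illustration, not anything from the article): a complete "neural network" forward pass in Python with NumPy. Every name and number in it is made up; the point is that there is nothing here but arithmetic running on a machine.

```python
import numpy as np

# A hypothetical, minimal model: the sizes and weights here are
# illustrative stand-ins for a trained network, not a real product.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # weights: 4 inputs -> 8 hidden units
W2 = rng.standard_normal((8, 2))   # weights: 8 hidden units -> 2 outputs

def forward(x):
    """One forward pass: two matrix products and a ReLU. That's the 'AI'."""
    hidden = np.maximum(0.0, x @ W1)  # ReLU: clamp negatives to zero
    return hidden @ W2                # raw output scores

x = rng.standard_normal(4)  # a made-up four-feature input
print(forward(x))           # two plain numbers; no sentience required
```

Scale this up by a few billion parameters and you have today's headline-grabbing systems: more arithmetic, same kind of machine.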
