Artificial intelligence and machine learning models can work spectacularly, until they don't. Then they tend to fail spectacularly. That's the lesson drawn from the COVID-19 crisis, as reported[1] in MIT Technology Review. Sudden, dramatic shifts in consumer and B2B buying behavior are, as author Will Douglas Heaven put it, "causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should."

(Image: Honda's ASIMO robot. Photo: Honda)

Machine-learning models "are designed to respond to changes," he continues. "But most are also fragile; they perform badly when input data differs too much from the data they were trained on. It is a mistake to assume you can set up an AI system and walk away." 
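Heaven's warning about inputs drifting away from the training data is, in practice, something teams can monitor for rather than discover after the fact. Below is a minimal sketch of such a drift check in Python, assuming a tabular model with numeric features; the two-sample Kolmogorov-Smirnov test, the 0.05 significance threshold, and the order-volume figures are illustrative assumptions, not details from the article.

```python
# Minimal input-drift check: compare the live distribution of a feature
# against the distribution the model was trained on. Illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_col: np.ndarray, live_col: np.ndarray,
                   alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: a small p-value means the live
    inputs no longer look like the training data, i.e. the model may be
    operating outside the 'normal' it learned."""
    return ks_2samp(train_col, live_col).pvalue < alpha

# Hypothetical example: weekly order volumes shift sharply, as they did
# when COVID-19 upended buying behavior.
rng = np.random.default_rng(0)
train_orders = rng.normal(loc=100, scale=10, size=5_000)  # pre-crisis "normal"
live_orders = rng.normal(loc=160, scale=30, size=1_000)   # post-shock behavior

if drift_detected(train_orders, live_orders):
    print("Input drift detected: flag the model for human review or retraining.")
```

A check like this doesn't fix a broken model; it tells you when not to trust it, which is exactly where the human steps back in.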

It's evident, then, that we may be some ways off from completely self-managing systems, if they ever arrive at all. If the current situation tells us anything, it's that human insight will always be an essential part of the AI and machine learning equation.

In recent months, I have been exploring the potential range of AI and machine learning with industry leaders, and the role humans need to play. Much of what I heard foreshadowed the COVID upheaval. "There is always the risk that the AI system makes bad assumptions, reducing performance or availability of the data," says Jason Phippen, head of global product and solutions marketing at SUSE[2]. "It is also possible that data derived from bad correlations and learning are used to make incorrect business or treatment decisions. An even worse case would clearly be where the system is allowed to run free and it moves data to cold or cool storage that causes ..."
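Phippen's worst case, an autonomous system demoting data to cold storage unchecked, is the kind of decision that argues for a human-in-the-loop guardrail. Here is a minimal sketch of what that could look like in Python; the names, thresholds, and review logic are hypothetical illustrations of the general idea, not SUSE's implementation or anything described in the article.

```python
# Hypothetical guardrail for ML-driven storage tiering: execute only
# high-confidence moves, and route ambiguous ones to a human operator.
from dataclasses import dataclass

ACCESS_PROBABILITY_FLOOR = 0.05  # below this, the model calls the data "cold"
REVIEW_BAND = 0.15               # ambiguous zone: defer to a person

@dataclass
class TierDecision:
    dataset: str
    action: str  # "move_to_cold", "needs_review", or "keep_hot"

def decide_tier(dataset: str, predicted_access_prob: float) -> TierDecision:
    """Tier data only when the model is clearly confident; otherwise escalate."""
    if predicted_access_prob < ACCESS_PROBABILITY_FLOOR:
        return TierDecision(dataset, "move_to_cold")
    if predicted_access_prob < REVIEW_BAND:
        # The model *thinks* the data is cooling off, but not decisively.
        # This middle band is exactly where "running free" causes damage.
        return TierDecision(dataset, "needs_review")
    return TierDecision(dataset, "keep_hot")

print(decide_tier("q2_sales_logs", 0.02))     # confident: tier down
print(decide_tier("covid_dashboards", 0.10))  # ambiguous: human review
```

The middle band is the thesis of this piece in miniature: when the model's judgment is uncertain, a person, not the algorithm, makes the call.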

Read more from our friends at ZDNet