“We are sitting on a pile of gold,” “… and then we train an AI on top of it”, “…our platform uses advanced analytics…”, “I want to use AI to automate… everything.”
To bullshit (verb): to try to persuade someone or make them admire you by saying things that are not true.
Conversations around artificial intelligence have become like decisions about the color of chairs in a new nuclear plant: Almost everyone has a strong opinion, but few people are prepared to discuss the things that actually deserve a thoughtful chat.
And that is why most AI projects never make it past the “PowerPoint stage”: The mystical place where motivation dies.
As with most technology, the problem isn’t that AI isn’t capable; it’s that organizations struggle to make it work in their favor. Why is that? After countless interviews that led nowhere, we found three main issues:
Issue #1: The expertise gap
Although AI has passed the peak of Gartner’s Hype Cycle, a surprising number of people still think of it as the thing that will get them promoted. It can be, but only if they get a good grip on what to do with it in practice.
There is a gap between the people who come up with ideas for process improvements and the people who know where the limitations are. That phenomenon is not exclusive to the AI space, but there it seems particularly pronounced.
Issue #2: Exaggerated expectations
There is no single thing in the world into which you can dump something random and get something great out. Except for apple crumble. Everyone who says differently is lying.
Seriously though: We get messages from people who say, “I want to automate everything,” and even though that’s the name of this publication, it is completely bonkers. Whenever we see it, we know that this person hasn’t tried building anything more elaborate than a Slack notification.
Implementing AI is nothing like sending a dropdown field to a database; it usually requires commitment and iteration beyond the power of any individual. Hell, even we can’t change the fact that machine learning models are fuzzy by nature.
And this is easily visible in our product funnel. Unless one of our template models is sufficient, most people drop off at the point where they need to bring their own data. We can help with pre-trained models or public datasets, but when it comes to specific data, it’s up to the user. The second drop-off point comes after the model has been trained – and mistakes a banana for a sausage.
I wouldn’t spend my Friday afternoon writing this if I didn’t know that AI can do wonders. But as my friend Roman likes to say: “If you want to get stronger, you need to go to the gym sometimes” – and I would add: “and experiment with a machine learning model”.
Issue #3: Understated expectations
It’s not that today’s machine learning (or ‘ML’) systems are stupid. If trained with care, they are capable of remarkable performance. The technology is right here, and it comes in many shapes and forms. But because it doesn’t deliver instant value, these systems are sometimes disregarded as useless by careless visitors – at least on the things that matter.
Lying means knowing the facts and deliberately misstating them. Talking bullshit means accepting a knowledge gap to persuade someone at the cost of the truth. Just because Bob generally says bright things doesn’t mean his wisdom about AI is worth a dime.
We have yet to crack the conundrum of teaching people enough about AI before they engage in an opinionated conversation. To us, it is a pity to see so much potential wasted on lukewarm projects that end in tears (or quietly fade out).
It seems that we are not the only ones, though: The whole no-code space is giving new powers to a new audience. That’s great for an investor pitch (new market), but superpowers don’t always transfer 1:1. Just because all of a sudden you could make a website or an app, it is easy to forget that building a business takes more than that.
The same goes for having the possibility of using AI: Parroting its “endless possibilities” is not what drives impact in day-to-day business. Thoughtful process design, clever use of the right technology, and focused implementation do.
By Arne Wolfewicz