A goodly part of my day-job involves sifting through technology marketing materials, which are – the times being what they are – concerned largely with AI.
What’s become apparent, as the initial excitement about LLMs passed (“Oh look, I can use normal language to ask this thing questions!”), is that apart from a few niche cases, AI isn’t really all that, and although it can be a handy tool once in a while, the things it’s trying to replace are usually superior to AI’s own efforts. Web search – better with a decent, traditional search engine (pay for Kagi, is my advice); media creation – invariably crap, and crap that’s worsened by the spectre of copyright infringement; software generation – learn how to code, you know, with a book or video, or something; summarising documents – great if you’re OK with your summariser taking acid and making stuff up; and so on.
But the reality is never going to actually puncture the hyperbole bubble of the tech industry, or rather, the tech industry players who have a vested financial interest in promoting what is literally average. By literally, I really do mean literally, to wit: if the internet contains the output of the brains of humans, said sum of output will have been created at an average IQ of 100. That’s the average human IQ. Thus, AIs trained on the internet will exhibit an IQ of 100. Which is OK, I guess, if there’s no alternative, like going to sources where you know the authors are cleverer than you. But that would involve bypassing the AI’s recommendations or provided answers. Which is kind of the point: being alive, learning stuff, feeling full of the wonder of it all, having access to all the information in the world.
Anyway, AI fails to live up to its hype because it’s meh, middling, average dross at best. But shurely thatsh not true? Why, then, is AI failing when it’s stuck into every bit of software we’re forced to use these days and put to use in every organisation under the sun?
There are plenty of reasons, of course, and none of them include the irrefutable fact that AI has about three niche uses, and unless your organisation happens to be in one of those niches, you’re throwing money away. What’s the publicised cause of an AI implementation, in real life, being shit? It’s absolutely not anything to do with the inherent limitations of large language models. No. The whole AI ‘thing’ is definitely not a paper construct, built to support an investment vehicle, and stood on foundations made of hype. Say it ain’t so, Joe.
Other things are at fault. I’ve been making a list of the things that apparently cause AI to be a bit crap, and cause such sadness to those who invest in it to ‘accelerate change in the business’, ‘make faster, smarter decisions’, and ‘free up time to concentrate on strategic matters’. These reasons are drawn from a myriad of company messaging, marketing campaigns and general promotional cruft it has been my sad duty to comb through for the last two years. They tend to follow a pattern, the logic of which runs thus:
- We make widgets, and would like to sell more widgets. (Sometimes our widget is AI, or powered by AI.)
- AI is fucking fantastic.
- You’ve tried using AI, you say? And it’s turned out to be, at best, average, you say?
- The issue is lack of widgets.
- We can sell you some of those. Then AI will be great, and do all the things promised, because AI is fantastic, really, and we get to sell widgets.
So, here is my Bingo Card of AI Failure. The rules are simple. Every time you hear why AI doesn’t work very well after some ill-informed dickhead crowbars it into his bank, tick off the reason from the card. Winner gets a paper copy of Encyclopaedia Britannica, which they will be forced to memorise – all 32 volumes of it.
- “The wrong business processes” – that is, how people do their jobs with the tools they use to get stuff done. It’s all wrong. See, you have to change the way everyone works – even the way they think about how the organisation works – for AI to work for you. Forget the fact that things work now! You’re doing everything wrong, so AI will fail.
- “Legacy software systems, integration with.” Software that’s been around a while, debugged into performing more or less as it should, won’t work with AI, because AI software is all whizzy and new, and without some middleware or API, add-on, MCP or some box with flashing lights on the front, your AI implementation is doomed.
- “Networking infrastructure, failure of.” Your networks aren’t fast enough or have too constrained a bandwidth for the speed-of-light communications that zoomy old AI needs. If you have two offices or a couple of factories that are connected, you’re an idiot for thinking AI would work in that scenario! Use a cloud service via broadband? Are you insane? Of course AI won’t perform under those positively medieval conditions!
- “Giving people access to data they shouldn’t see or interact with.” AI makes shit up (it’s how it works), and that includes the list of who’s allowed to see the financial records and the top-secret blueprints. So the problem, you see, is that people who set AI off to work end up seeing stuff they shouldn’t, so whole projects get shut down. But sort out your privileges (there’s a sketch of that chore after this list) and all will work just swimmingly. Really, it will.
- “Data existing in silos.” Ah, as old as the hills, this one. Information in my spreadsheets isn’t in the CRM, data in the HR records stays in the HR records, Finance have no idea about Sales, and Sales have no idea what Marketing do all day. All those systems, running in every organisation, hang onto their own data, often keeping it in proprietary formats designed never to be shared. And that’s the problem, see? If AI could get to all the data, everything would work really well.
- “Interconnectivity between different, existing systems.” This one is an amalgam of 3) Networking infrastructure and 5) Data silos. The connections between your presumably-now-communicating data silos aren’t up to scratch. So, AI projects fail and the moon falls from the sky.
- “Data that’s not sanitised.” Sometimes, computers don’t know the difference between 01 and 001, or Thursdya and Thursday and Hurdsay. So your business-changing investment flops onto its arse. Sort out your spelling – look out for those trailing spaces! – and AI will be unleashed (this glamorous chore is sketched in code after the list), leading to massive profits and Ferraris all round!
- “Data that’s not in the right format.” See 5). Because database #1 holds its information in format #1, and database #2 holds its information in format #2, getting an AI to understand both is going to take some work. Once that’s done, however, AI will somehow surmount its inherent shortcomings and, deftly swerving both the laws of physics and reality itself, start delivering hitherto unseen excellence at every task. Except maths, of course: it’s a large language model, not a large mathematics model.
- “Fragmented data.” Information gets lost, or corrupted, or isn’t captured correctly in the first place. And AI, had it a personality, would be such an utter completist that unless every record on every subject is full and 100% correct, your cost-saving, profit-maxing silicon buddy will sulk. Not AI’s fault – it’s a perfectionist.
- “Lack of monitoring to keep costs under control.” The more you use AI, the more it costs you. Even at the time of writing, when every query costs the AI providers more than they earn, those pesky per-token costs do tend to ramp up. So, your AI implementation will fail in the sense that it will cost you, the user, more in payments to the AI provider than any savings you might make by ‘optimising inefficient workflows and automating manual tasks.’ One reason for this is that most queries to an AI need repeating, refining, resubmitting, examining, resubmitting again, repeating, and so on (the arithmetic of which is sketched after this list). Suddenly, the task of “write me a terse email complaining about Sharon in Accounts” turns into a $35 festival of pointless greenhouse gas generation that takes an hour.
- “Better management of the technology stack in general.” That is, it’s not AI per se that’s a big bag of poop, it’s all the other tech you already have. That old version of Microsoft Office on Nigel’s workstation is definitely the cause of the expensive-consultancy-and-AI-roll-out’s failure.
- “The wrong type of AI – not a specialised AI, like a medical AI for healthcare, say.” Specialist AIs do exist, given extra training data to learn, for instance, medical or legal terminology in a process that takes place after the internet-slurping, copyright-infringing, intellectual-property-be-damned phase. But even if you’ve chosen the right polished turd (and why wouldn’t you? You’re likely aware of the sector you work in, and are therefore informed enough to choose the right AI variant), a specialist chocolate teapot is still a chocolate teapot.
- “The AI is hosted in the wrong place – it should be at the edge or in the cloud.” The problem is either that AIs are terribly susceptible to latency issues (should be running at the edge), or the kit you can afford isn’t powerful enough to run a decent AI (it should be running in the cloud, where it can scale). Install your AI stack in the right spot on the map and glittering prizes shall be yours for the taking.
- “Lack of human in the loop.” AIs hallucinate sometimes, and my, isn’t it amusing when they do? So, whatever an AI says – or, for those buying into the agentic AI hyperbole, does of its own volition – has to be checked before being allowed to pass muster. Ensuring a human worker checks the work of an AI that replaces a human worker is essential for success. Entirely logical. A single human can check the work of an AI that does the work of dozens of humans, of course. And humans never get bored, or tired of this shit, or wander off to talk to Bob in the canteen for half an hour. Or make mistakes. And they work 24/7, too. So as long as you have a human checking every output of every AI activity at all times, your implementation is destined for greatness.
- “AIs that are too autonomous.” The AI you’ve just spent a king’s ransom on has access to too much information, and is on a leash that’s too long. It’s going to go mad and start sending nudey pictures to everyone’s grandma. You should have stopped it. You were aware of hallucinations, weren’t you? And did no one explain that every single word or pixel spat out by an AI is a hallucination – we just only call them hallucinations when we deem the output ‘wrong’? No? Well, you should have known better than to un-cage the cost-cutting, efficiency-enhancing beast.
- “AIs that are too constrained.” Being a cautious type, you didn’t let your new AI investment do anything more than answer customers’ simple queries. Why, you fool! What an AI needs is access to all your information, from the board meeting minutes to the picking lists on the floor of the loading bay. Then it needs API keys to every networked app you run, and to be told to go and do a good job. Without this type of laissez-faire management, your AI will do nothing but save you the cost of employing people to answer the phone. How very short-sighted of you. You deserve your failure.
- “Applications running underneath the AI are in a poor state.” Software has bugs, and often doesn’t work like it should. What an AI needs is to operate in a perfect world, where bugs have been squashed by magic unicorns that turn into pizza at 5pm on a Friday. So, sort that out, and your AI deployment will work as promised.
- “Poor alignment between security and application teams.” The applications your developers are building that talk to an AI could be insecure, so make sure your cybersecurity experts (who don’t know as much about software development as the developers) work closely with the software developers (who don’t know as much about cybersecurity as their colleagues) in a joyous amalgam of shared knowledge. Then, when that orgy of mutual learning is complete, your devs will churn out code that cannot be circumvented by an agentic AI that’s been given hard-coded credentials to every system on the network. In the event of any incident, remember that it’s not the AI that’s at fault; it’s that all your well-paid specialists have had the audacity not to also get qualified in a second subject that’s at least as complicated and difficult as the one they’ve spent a lifetime learning. Thus, it’s an HR issue, not an AI issue, that’s caused your systems to cough up free passwords to every passing bot on the internet.
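Since the card keeps promising that mere plumbing will save the day, here’s what some of that plumbing actually looks like. First, the privileges one: a minimal sketch, in Python, of trimming retrieval results against a user’s entitlements before the model sees anything. The types, group names and corpus are all mine, invented purely for illustration – not any vendor’s product.

```python
# A minimal sketch of 'sort out your privileges': filter documents
# against the caller's entitlements *before* anything reaches the model.
# All names and data here are illustrative.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_groups: frozenset[str]

def retrievable(docs: list[Doc], user_groups: set[str]) -> list[Doc]:
    """Only hand the model documents this user could open by hand."""
    return [d for d in docs if d.allowed_groups & user_groups]

corpus = [
    Doc("Q3 canteen menu", frozenset({"everyone"})),
    Doc("Top-secret blueprints", frozenset({"engineering-leads"})),
    Doc("Financial records", frozenset({"finance"})),
]

# Someone in Finance gets the menu and the numbers, not the blueprints.
print([d.text for d in retrievable(corpus, {"everyone", "finance"})])
```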
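Next, the sanitisation chore, which is exactly as glamorous as it sounds. A sketch of snapping Thursdya and Hurdsay back to Thursday – the weekday list and the 0.6 similarity cutoff are arbitrary choices of mine, not anyone’s spec.

```python
# A minimal sketch of data sanitisation: strip stray whitespace and
# snap fat-fingered weekday names to the nearest real one.
import difflib

WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

def clean_weekday(raw: str) -> str:
    """Trim whitespace, fix casing, and fuzzy-match obvious typos;
    return the trimmed input unchanged if nothing comes close."""
    candidate = raw.strip().title()
    match = difflib.get_close_matches(candidate, WEEKDAYS, n=1, cutoff=0.6)
    return match[0] if match else raw.strip()

for messy in ["Thursdya", "Hurdsay", " thursday ", "001"]:
    print(f"{messy!r} -> {clean_weekday(messy)!r}")
```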
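And the cost spiral is just arithmetic. Chat APIs re-send the entire conversation with every resubmission, so token spend grows roughly quadratically with the number of ‘no, not like that’ turns. A back-of-the-envelope model, with entirely made-up prices and token counts:

```python
# Back-of-the-envelope model of the repeat-refine-resubmit spiral.
# Every number here is a made-up illustration, not any vendor's tariff.
BASE_CONTEXT = 2_000   # tokens: original prompt plus system-prompt gunk
REPLY_TOKENS = 1_000   # tokens the model emits per attempt
PRICE_PER_1K = 0.05    # hypothetical USD per 1,000 tokens, in and out

def session_cost(turns: int) -> float:
    """Each turn re-sends the whole history, so spend grows roughly
    quadratically with the number of refine/resubmit rounds."""
    total_tokens = 0
    context = BASE_CONTEXT
    for _ in range(turns):
        total_tokens += context + REPLY_TOKENS   # pay for input and output
        context += 2 * REPLY_TOKENS              # its reply + your next nudge
    return total_tokens * PRICE_PER_1K / 1_000

for turns in (1, 5, 20, 26):
    print(f"{turns:>2} turns: ${session_cost(turns):.2f}")
```

One attempt costs pennies; twenty-odd rounds of refinement later and you’re squarely in $35-email-about-Sharon territory.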
According to ChatGPT as of today, ‘executives’ “are no longer impressed by hype; they are seeking rigorous frameworks for governance, workforce augmentation, and scaling autonomous agents to drive measurable business growth.” Biznuss leaders are looking for things to ‘drive measurable business growth,’ which will no doubt involve buying some widgets in addition to their AI outlays.
Naturally, asking the world’s best-known AI about AI is never going to come back with anything objective – unless you stuff your prompts to it with the equivalent of “…and don’t give me any of your usual shit.” Nevertheless, given that large language models are only spitting out what people on the internet are saying, it seems fair to assume that folk foolhardy enough to give an averagely intelligent semi-randomising mimic the power to copy and paste between software applications will want some kind of control over the ensuing madness.
But isn’t the best type of control imparted by turning the fucking thing off and a) asking a human to do something menial and repetitive many times until they quit, followed by b) asking a software developer to automate the menial stuff using code wot they rite? Or put someone to work doing the menial and repetitive thing, and make sure there are good snacks and a table football machine in the office?
Readers of a certain age will remember an iPhone that had a shitty radio aerial in it, and the reason it kept dropping calls was that its users were ‘holding it wrong’. Imagine an iPhone that cost a trillion dollars to dream up and produce, and everyone who buys it ‘holds it wrong’. There are a few hundred iPhone users who can use their new, shiny thing in their specific circumstances, and fair play to them. But everyone else needs major surgery to their hands so they can’t hold it wrong, or to stand on a plinth made of solid silver while using it, or to have a good gulp of Snake Oil Number 6 before trying to make a call. And even then, after all that plinth-sourcing, surgery and purchase of Snake Oil, they appear to have wasted their money. There’s a new iPhone, of course – shall we call it the iPhone 5.2? – which is just as shit, but it takes longer to connect to calls…which then invariably drop, too.
Sooner or later, people are going to stop wanting iPhones. Here endeth the comparison. And the lesson.