26 November 2021
On "Powered by AI / ML" marketing
An email I had sent in response to a survey on the use of “AI / ML” and the “AI-first mindset” in our organisation and in the industry was shared on social media[1] [2], sparking a surprising amount of interest. I did candidly state the simple fact that we haven’t come across any big problems that warrant any specific “AI / ML” solutions in our organisation yet (Zerodha, a stock broker that offers online investment and trading platforms), and that the bulk of the “powered by AI” claims we have seen across industries, and in the numerous startup pitches that we receive, has been hollow marketing. There is also a general expectation (delusion) that any sufficiently large technology company should be using “AI / ML” somehow, somewhere, for some reason.
In the same vein, in this opinion piece (2019) on the absurdity of “AI + Blockchain” marketing—a fad that thankfully seems to have died down—I had written:
AI is a contentious term whose mainstream interpretation refers to not one particular thing, but to a broad category encompassing a wide variety of concepts, techniques, and technologies—all eventually working towards the common goal of eliciting “intelligent” behaviour in computers.
Artificial Intelligence (AI) is a term that is far wider in breadth and scope than Computer science, which is a sprawling umbrella term in itself. Machine Learning (ML) is one of its many subsets, although the terms are now used interchangeably by technical and non-technical people alike. There are schools of thought where it is AI vs. ML[3] [4] and not AI / ML.
Over the last decade, the exponential leap in innovation in ML (and many subsets of AI) has been nothing short of breathtaking. Highly sophisticated ML technologies (things like GPT-3 and countless “deep learning” models) have been commoditised to the point that one can include them in their applications to solve specific tasks with just a few lines of code. Here is an example of object detection and classification in images in 10 lines of code, found after 5 seconds of Googling. Object detection, which was a holy grail quest not long ago, is now commoditised to the point of being a laughably trivial task for most use cases. Building a few CRUD forms and storing data in a database now takes more time than detecting humans in an image. The state of the art of AI research and ML technologies that used to be largely confined to academia and private research labs has been commoditised and reduced to a simple pip install, and it is only going to get better. How the tables have turned!
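To illustrate just how commoditised this is, here is a minimal sketch in the same spirit, using torchvision’s off-the-shelf pretrained Faster R-CNN (the library choice, the confidence threshold, and the file name photo.jpg are my own assumptions, not what the Googled example used):

```python
# A minimal sketch: object detection with a commodity pretrained model.
# Assumes: pip install torch torchvision, and a local "photo.jpg".
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("photo.jpg")           # uint8 tensor, CxHxW
batch = [weights.transforms()(img)]     # normalise for the model

with torch.no_grad():
    result = model(batch)[0]            # dict of boxes, labels, scores

for label, score in zip(result["labels"], result["scores"]):
    if score > 0.8:                     # hypothetical confidence cut-off
        print(weights.meta["categories"][int(label)], f"{score:.2f}")
```

That is the entirety of it: download a pretrained model, feed it an image, read off the labels.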
At the same time, the bulk of “powered by AI / ML” product marketing has been reduced to such meaninglessness that it doesn’t even qualify to serve as an amusing trompe l’œil. This is not about companies that outright lie about using “AI” in their marketing material, but the inherent meaninglessness of branding products with the generic and vague “powered by AI” label.
AI-powered COVID test kit.
Recently, I happened to see a widely available COVID Rapid Antigen Test (RAT) kit meant for home use, and the first thing that caught my eye was the “powered by AI” label on its packaging. RAT kits work like pregnancy test kits, where chemical reactions change the colour of a chemical strip, indicating different results. For the life of me, I couldn’t figure out how exactly this test strip was “powered by AI”. Did the manufacturer devise the chemical reactions using “AI”? Is there an electronic component in the test kit that somehow uses “AI”? Does the use of “AI” somehow dictate the efficacy of the medical test? Does it make the product superior to other products in the market? If yes, is there quantifiable scientific data that one can refer to? It is a matter of serious medical decision making, after all. If not, how is it relevant at all to an end user—highly likely to be an anxiety-ridden nervous wreck with health issues—who just wants to do a COVID test and move on with their life?
After reading the bundled instruction manual, I finally figured out what the “powered by AI” label meant. The user downloads a mobile app from the manufacturer to take a photo of the colours that appear on the test strip; the app processes the image, detects colours, and shows the results digitally in a shareable form. The “AI” must be the image processing here. An otherwise good quality, well built, well designed, and well documented medical product resorted to a prominent but meaningless “powered by AI” label because it happens to use some form of ML, if at all, for some narrow use case unrelated to the actual product. Wow.
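For perspective, spotting coloured bands on a strip photo doesn’t even need ML. A naive sketch along these lines—with entirely hypothetical crop coordinates and threshold, and assuming an idealised, well-lit photo—would do; the real app presumably handles skew, lighting, and validation more robustly:

```python
# A naive, hypothetical sketch of "detecting colours" on a test strip:
# pure pixel arithmetic, no learning involved.
# Assumes: pip install opencv-python, and a local "strip.jpg".
import cv2

img = cv2.imread("strip.jpg")  # BGR image; assumed to load successfully

def line_present(region) -> bool:
    # A test line shows up as a reddish band: compare the mean red
    # channel against the other channels in the cropped region.
    b, g, r = cv2.mean(region)[:3]
    return r - (b + g) / 2 > 25        # hypothetical threshold

control = img[100:140, 50:250]         # hypothetical crop: control line
test = img[180:220, 50:250]            # hypothetical crop: test line

if not line_present(control):
    print("invalid test")
elif line_present(test):
    print("positive")
else:
    print("negative")
```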
This is exactly what is wrong with “AI / ML” marketing. Anecdotally, the vast majority of prominent “powered by AI” examples I have seen across industries are examples of FOMO, fraud, intellectual dishonesty, or plain ignorance. I rarely come across instances of such marketing that make me think “wow, that is a very clever use of an ML technology” as opposed to “what does that AI / ML label have to do with the product?”. Why does it matter whether a product uses ML for fraud detection or document recognition? Consumers don’t need to care, as long as the product solves their problems meaningfully using whatever technologies.
At work, we use some commodity image recognition models for processing KYC documents to aid human reviewers in document verification. They are very accurate and very useful, but aren’t exactly 100%; an optimal trade-off for our narrow use case. I don’t know what specific technologies, mathematical models, or kinds of neural networks those tools use underneath. They are irrelevant. What matters is that the particular piece of software meaningfully solves the very specific image processing problem that we have in a very narrow context of our organisation. It has no bearing on our actual product offerings, which are financial investment platforms. Its use and implementation are no different from those of any other software package or library that is trivially installed and integrated. And yet, we do not come across marketing labels such as “powered by OOP”, or “powered by SQL aggregate reporting”, or “powered by Fourier transforms”, or “powered by programming languages”, or, in the same sweepingly broad vein as “AI”, “powered by Computer science” or “powered by mathematics”. What does “powered by AI / ML” even mean?
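As a sketch of how trivially such a tool integrates, here is what commodity document-text extraction can look like. pytesseract (OCR) is a stand-in chosen purely for illustration; it is not necessarily the actual tool we use, and the file name is hypothetical:

```python
# A sketch of a commodity recognition tool integrating like any other
# library. pytesseract is an illustrative stand-in, not our actual tool.
# Assumes: pip install pytesseract pillow, a system Tesseract install,
# and a hypothetical scanned document "kyc_document.jpg".
from PIL import Image
import pytesseract

# Extract the text; a human reviewer still verifies the result.
text = pytesseract.image_to_string(Image.open("kyc_document.jpg"))
print(text)
```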
“Powered by Rumali roti”
Coming back to the use of “AI” in our KYC process, what if we claimed on our website that we are “powered by AI”? It would be true in the narrowest sense of the word, if one stretched the truth into a thin, fine layer like an Indian Rumali roti. In fact, it may be more meaningful to label a product “powered by Rumali roti” than to claim that it is “powered by AI”. The developers of the product can argue that they are on a strict diet of Rumali roti, which, for unquantifiable reasons, implies that their product is somehow better than competing products. Really though, what relevance does it have to the quality or efficacy of the product? As much relevance as in the case of companies that use highly commoditised ML models or automation for narrow use cases that have neither any meaningful bearing on the quality of the product nor any relevance to its end users.
Similarly, why does it matter to our customers that we use some arbitrary ML software for KYC document processing, or financial fraud detection, or parsing text in a support ticket, or “intelligently” monitoring network anomalies, or whether our internal “DevOps” practices are “AI enabled”? These are irrelevant, commodity use cases that do not even warrant a mention, let alone prominent marketing to end users, and if we did make such claims, it would be meaningless and disingenuous. Unless there are quantifiable metrics that prove that an “AI / ML” technology for solving a problem is demonstrably superior to other approaches, there is no point in using it, let alone advertising it. A hypothetical weather prediction product claiming “our WeatherML model predicts the probability of rain in a 10 sq. km region with 97.42% accuracy compared to competing models” is actually meaningful, compared to companies that use arbitrary NLP chat bots for customer service, or “sentiment analysis” of social media chatter, or some narrow transactional fraud detection, claiming to be “powered by AI”. Unless one can quantify and demonstrate that the use of any particular “AI / ML” technology is central to a product and has a measurable impact on its quality and its primary use case, such generic marketing claims are junk. By the way, every website that uses Google Analytics is already using “AI / ML” indirectly. Do the hundreds of millions of websites now claim the use of “AI / ML” for “predictive business analysis”? It is irrelevant.
No, you are not missing out.
Finally, what about the whole “you are missing out on AI / ML” FOMO train? The statement in itself is meaningless. It is like telling a developer, “you are missing out on Computer science”. When there is a problem to which the most optimal solution is a particular technology, be it an SQL database, or “AI / ML”, or blockchain, or quantum computers, or inter-galactic wormholes, one can use whatever fits best. If our Rumali roti maker is operating at optimal capacity with fine local flour and a flame, why do they need to introduce an inter-galactic wormhole into that task? If Alpha Centauri starts selling the finest flour, which suddenly becomes essential to the bread maker’s business, and the wormhole turns out to be the best technology to solve that particular procurement problem, then it makes sense. Until then, such offerings are solutions looking for problems, and their perpetuation is either intellectual dishonesty or ignorance. One should ignore the relentless cacophony of “(How do you | Do you) use $Random-Broad-Concept-or-Technology in your business?”. Assuming good levels of competence, one is not missing out on anything, and the day one finds a legitimate, objectively quantifiable use case for an “AI / ML” model, it is most likely going to be a simple pip install away.
At Zerodha, apart from the narrow, rudimentary, ancillary problems that we solve with some arbitrary commodity ML models that we don’t even particularly care about, we haven’t come across any problems that need the broad and ambiguous “AI / ML” umbrella. When we do use some of these technologies to solve problems that objectively warrant them, most importantly, if such problems arise at all, we most definitely wouldn’t be changing our website to say “powered by AI”.
By the way, who is to say that this post wasn’t written by GPT-3? In 2021, on the internet, nobody knows that you’re GPT-3.