13 May 2023

This time, it feels different

In an earlier post (2021), I argued that much of the “powered by AI / ML” labelling and marketing out there was bogus and disingenuous; that AI / ML technologies were getting commoditised to the point of being as simple as a pip install, where most organisations would not need to do any serious R&D to use these technologies, let alone enough to warrant the claim “Powered by AI / ML”. Excerpt from the post:

“Assuming good levels of competence, one is not missing out on anything, and the day one finds a legitimate, objectively quantifiable usecase for an “AI / ML” model, it is most likely going to be a simple pip install away.”

Turns out, one does not even need to do pip install anymore. The most sophisticated AI / ML technologies can now be availed via an even simpler HTTP / curl request, and for non-techies, via simple, familiar, chat interfaces. It is now possible to integrate such technologies into existing products in minutes. Forget needing to know anything about AI / ML or neural networks or weights or the latent space; one does not even need to know the basics of programming to use sophisticated AI technologies—one just needs to be able to converse in natural language. That is, prompt. Write code, stories, screenplays, synthesise any kind of text in any language, get instant answers to questions and dissect logical conundrums (of course, today, at the peril of the answers being wrong), semantically digest large amounts of unstructured information, generate high-quality imagery … The breakneck speed of breakthroughs in the space, initially exciting and now increasingly worrying, has been stunning. Just in the LLM (Large Language Model) space, off the top of my head, there are GPT-3/4, Chinchilla, PaLM-1/2, LLaMA, BLOOM, Alpaca, Vicuna, AutoGPT, llama.cpp, LangChain, and numerous other models, their derivatives, tools, and hacks that have been coming out practically by the week. In no time, people have figured out how to run large models on everything from phones to Raspberry Pis.[1] Then there’s the entire other genre of image and voice models that are also exploding. Multi-modal models will soon be a thing. One just has to look at the rate of model-related stories posted on HackerNews[2] to get a sense of the hype and the dizzying pace. Forget the cat, the elephant in the room has broken loose and has started to run wild.
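For a concrete sense of how low the barrier has become, here is a minimal sketch of such an HTTP integration in Python, using the requests library against OpenAI’s chat completions API as it stands in 2023 (the model name and prompt here are purely illustrative):

    import os
    import requests

    # One plain HTTP request to a hosted LLM. The endpoint and payload
    # follow OpenAI's chat completions API (as of 2023); the prompt is
    # just an illustrative placeholder.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "user", "content": "Summarise this paragraph: ..."},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

That is the entire “integration”: no models to train, no weights to manage, no AI expertise required.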

In the past several months, I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT, to solve use cases specific to their domains and problems specific to their personal workflows. This is unlike everyone using the same messaging tool or the same document editor. This is one tool, a single class of technology (LLM), whose multi-dimensionality has achieved widespread adoption across demographics: people with no technical training are discovering how to solve a multitude of problems with it, in the one way that is most natural to humans—via language and conversations.

That is both fascinating and terrifying. I have been actively writing software, tinkering, and participating in technology/internet stuff for about 22 years. I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.[3] Until the recent breakthroughs, that is.

Skepticism

I have been an ardent skeptic of technology hype cycles who has always mentally blocked out phrases like “paradigm shift”. The level of frenzy and hype surrounding the recent AI breakthroughs is unprecedented, but understandable. To objectively evaluate some of them, I have experimented with the tools, read, reflected, and questioned myself over and over to make sure that I have not accidentally drunk some sort of LLM-KoolAid. More than anything, my increasing personal reliance on these tools for legitimate problem solving convinces me that there is significant substance beneath the hype. And that is what is worrying; the prospect of us starting to depend indiscriminately on poorly understood blackboxes, currently offered by megacorps, that actually work shockingly well.

Anyone who has paid attention to the current genre of skepticism (especially for LLMs) is sure to have noticed the goalposts shift rapidly over the past few months. Here are some of the common ones:

  • It just repeats patterns.
  • It is just a glorified probabilistic text generator.
  • It is just a “stochastic parrot”.
  • It “hallucinates”.
  • It is not “intelligent” the way humans are.
  • It does not actually “think” like humans.
  • It will never get human qualities like humour.[4]
  • It is snake oil and is no different from any other technology that has come before.
  • If it causes job losses, new jobs will emerge, like with all previous technological revolutions.

Sure. Few claim that LLMs possess human-like intelligence, “think” like humans, or exhibit self-awareness. Then again, there are also schools of thought that argue that humans are also just glorified response generators.[5][6][7] Regardless of philosophical dispositions, it is important to note that a large number of white collar jobs on top of which economies are built involve just reasonable amounts of comprehension, bounded decision making, and text generation—paper pushing, “code monkeying” and whatnot.[8] Really though, probabilistic text generation that maintains context in numerous computer and human languages while being meaningful, useful, and conversational, at the same time exhibiting at least an illusion of reason: in what world is that a trivial achievement to be dismissed with a “just”!? Those goalposts have shifted hilariously fast.

While the fact is that LLMs are blackboxes with emergent behaviour that are still being studied, one need not get into unresolvable philosophical debates to see why the recent breakthroughs have significant ramifications for humanity and why they should be taken seriously. No doubt, the philosophical implications are significant and those debates are necessary (and amazing), but assessing the immediate, practical risks perhaps should not hinge on whether LLMs have a soul or not.

The aforementioned list of reductive arguments really misses the point and ignores the elephant in the room, which has, in fact, already left the room. If a single dumb, stochastic, probabilistic, hallucinating, snake oil LLM with a chat UI offered by one organisation can have such viral, organic, and widespread adoption—where large disparate populations, corporations, and governments are integrating it into their daily lives for use cases that they are discovering themselves—imagine what the better, faster, more “intelligent” systems that follow in the wake of what exists today will be capable of doing. This is at a time when people are starting to lose jobs to trivially implemented, newly possible AI automation. Given the breakneck pace of daily developments, what awaits us? Imagine not just one organisation, but numerous powerful organisations and governments suddenly showing interest in, benefiting from, and pushing these breakthroughs ahead with full force.

Debating GPT-4’s efficacy against human benchmarks is missing the big picture entirely. What it and its contemporaries have set loose in such a short period of time, and what is to follow on their heels tomorrow, is the real crux of the matter.

This time, it feels different

More than a decade ago (2009-2011), I got my PhD in a small subset of AI—neurocomputation and computational linguistics. It was fascinating to watch the leaky artificial neurons fire in synchrony, forming Hebbian cell assemblies.[9] After a lot of tweaking, fine-tuning, and training, when neural networks did a small, specific task in a lab setting, it felt great. However, I was unable to see how those neurons could multi-task on their own, let alone envision them holding a conversation and reasoning with a human. I left academia right after that and became an outside observer of the space, just as it was slowly becoming mainstream in the industry. Elsewhere, at work (Zerodha), over the last 10 years, I have been asked this question countless times: “How does Zerodha use AI / ML?” How, not if; the default assumption being that—thanks to the incessant hype and disingenuous AI / ML marketing prevalent in the industry—a financial technology organisation would surely be using AI / ML for something. The reaction has always been a look of surprise whenever I have responded that we were yet to find meaningful use cases within our organisation, even during the peak “deep learning transforms businesses” hype cycle.

Cut to 2023. A few curl experiments later, we did an experimental GPT-4 integration at work in about 30 minutes, with GPT-4 generating the code to integrate itself, of course. The results were instant and freakishly good, with quantifiable benefits. It took little time to identify multiple processes and business functions that could benefit from the same tool. Right tool for the right job. It required no training, no academic R&D, and definitely no nonsensical “powered by AI / ML” marketing. A highly sophisticated, powerful, poorly understood blackbox system that is available as a commodity technology. One tool, simple natural language, for a large number of real-world use cases across business functions. Soon after, we figured that if we were to push a bit harder, LLM-based automation alone could directly obsolete 20% or more of the jobs at Zerodha across departments in no time. A rattling, somewhat visceral realisation—it is right here, right now.
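For illustration only, here is a hypothetical sketch of what an internal integration of this sort could look like: a plain function wrapping the same kind of API call for a single business task. The task, prompt, and helper name are my assumptions for the sake of the example, not a description of Zerodha’s actual integration:

    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"

    def summarise_ticket(ticket_text: str) -> str:
        """Hypothetical helper: condense a customer support ticket into
        one sentence for routing and prioritisation. Illustrative only."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4",
                "messages": [
                    {"role": "system",
                     "content": "Summarise the user's support ticket in one sentence."},
                    {"role": "user", "content": ticket_text},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

Swap the system prompt and the same wrapper becomes a different business function; that interchangeability is precisely what makes the “one tool, many use cases” observation so unsettling.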

A policy for “AI anxiety”

We ended up codifying this into an actual AI policy to bring clarity to the organisation.[10] It states that no one at Zerodha will lose their job if a technology implementation (AI or non-AI) directly renders their existing responsibilities and tasks obsolete. The goal is to prevent unexpected rug-pulls from under humans. Instead, there will be efforts to create avenues and opportunities for people to upskill and switch between roles and responsibilities. While this is a reassuring stopgap for individuals in the organisation who have been naturally asking questions fuelled by AI anxiety, it is not an unconditional AI-shield or a bulletproof solution. The fact that such a policy had to be formulated marks an inflection point, the implications of which I am yet to fully comprehend. Neither blockchain, serverless, web3, big data, nor earlier AI / ML technologies brought this about. But the specific breakthroughs of the past few months finally did. All it took was 30 minutes to integrate, during which it generated the code to integrate itself. This time, it feels different.

Now, this is the story from one small organisation amongst tens of millions worldwide that collectively provide hundreds of millions of jobs. When similar scenarios inevitably unfold across the board, all the while AI technologies keep progressing—in an increasingly unequal world where corporations slash double-digit percentages of their workforces with an e-mail in the morning owing to “macroeconomic conditions” after overhiring out of FOMO—how exactly will it all unravel? This is not a philosophical commentary on the nature or merit of various kinds of white collar jobs, but a paranoid acknowledgement of the disruption and drastic change from the status quo that seems to be around the corner, for better or worse.

To those who believe that new jobs will emerge at meaningful rates to absorb the losses and shocks, what exactly are those new jobs? To those who think that governments will wave magic wands to regulate AI technologies, one just has to look at how well governments have managed to regulate—and how well humanity has managed to self-regulate—human-made climate change and planetary destruction. It is not a stretch, then, to think that the unravelling of our civilisation and its socio-politico-economic systems that are built on extracting, mass producing, and mass consuming garbage, might be exacerbated. Ted Chiang’s recent essay is a grim but fascinating exploration of this. Speaking of grim, we can always count on ourselves to ruin nice things! Along the lines of Murphy’s Law,[11] I present:

Anything that can be ruined, will be ruined — Grumphy’s Law

grumphy (noun): a dirty, greedy, or bad-mannered person[12]

The subtle risk

Undoubtedly, there is going to be amazing progress and numerous positive breakthroughs that will become widely accessible. They are already being heavily discussed, hyped, marvelled at, and practised. However, my excitement for these developments is overshadowed by growing fear. While the inevitable weaponisation of AI technologies has probably started and the large-scale obsoletion of jobs may be around the corner, there is something more subtle that poses a bigger long-term risk. An increasing number of decision-making systems in corporations, governments, and societies will start being offloaded to AI blackboxes for efficiency and convenience, which will slowly eat away at human agency, like frogs in slowly boiling water. Driven by a mix of FOMO and frenzy, we will be compelled to adopt these technologies, as the efficiency gains enjoyed by our contemporaries will exert a natural pressure that leaves those who don’t adopt them behind—a competitive feedback loop. Decisions will start becoming opaque, untraceable, and unexplainable, if the current technology trends of overengineering and multilayered abstractions are anything to go by. LangChain[13] enters the chat! Imagine the horror stories of automated online account blocks and Kafkaesque customer support mazes that are all too common today, manifesting at a societal level. The steady, deliberate, and gradual erosion of human agency will get us long before any sentient AI wakes up.

I hope this paranoia ages like milk. The one about climate change has not.

Epilogue

I asked GPT-4 to summarise this post and write five haikus on it. I have always operated software, but never asked it anything—that is, until now. Anyway, here is the fifth one.

Future’s tangled web,
Offloading choices to black boxes,
Humanity’s voice fades.