Investment Thesis — Limitations of AI
I find it ironic to publish this right after the new Pope shared his views on AI and humanity. This post looks at the fallacies of AI implementation; in effect, it reads as an antithesis of the prevailing narrative.
As an investor, I have yet to build a thesis on AI. While I have done significant research (from a VC perspective) on artificial intelligence, most of the problems it solves today involve executing repeatable tasks. There is merit, after all, in automating mundane tasks. But the question remains: how much has AI actually disrupted industry?
I was prompted to write this after a dinner meeting last week with the Managing Director of a large GCC in Hyderabad. His views on AI were seemingly born out of a desire to keep costs down; after all, GCCs are focused on cost reduction. Yet, despite significant investment in AI, his team has struggled to automate meaningful tasks. This raised an interesting question: will AI go the way of RPA? While RPA isn't irrelevant, the machine learning underpinning it allows only repeatable tasks to be automated. AI was introduced to the market as an upgrade to RPA, with agents supposedly able to think and reason like humans.
As things stand, AI seems destined for the same fate as RPA. However, given the significant investments being poured into it, AI may yet have a future.
AI use-cases today
From what I've observed, AI is primarily deployed to cut costs, and most use cases center on automating tasks organizations find distressing. For instance, organizations that hold monopolies or significant market share have brought in AI to automate support functions. AI's entry into customer support automation has been its biggest fallacy.
In recent weeks, I have interacted with AI models across industries, from my telecommunications provider to banking and even travel. Here is a comparison of how things worked previously versus how AI models handle them today. I have picked these industries because they are where my own interactions with AI have occurred.
The issues outlined above clearly show the lack of proper AI implementation. The common thread is customer support being outsourced to AI, which has driven significant customer churn across industries.
Efficacy vs Efficiency of AI
Efficacy is a term borrowed from the medical field. If a drug does not work, it has no efficacy, and it comes off the market. Take paracetamol, for instance: its efficacy is reliable enough to replicate, meaning it works on individuals across the population. That is why the FDA can deem it both efficacious and efficient. Efficiency, in essence, is making an efficacious product replicable at scale.
The question that needs to be asked now is whether AI is efficacious. In some cases, it is. Tools like Zapier, Replit, and Copilot, built on foundational models, have indeed disrupted systems, albeit with challenges. However, when we look at how AI has been implemented outside these foundational tools, the results are bleak. The scariest question at the moment is not whether AI will take over, but whether AI will go down the RPA path.
RPA use-case(s)
I am inserting this for those who may have missed a previous post. Although RPA may have lost some of its luster, its value-add is plain to see; you can read my previous post on RPA in fintech applications. Even if somewhat dated, RPA has a limited range of use cases. It is important to note that machine learning (the driver of RPA) continues to hold the lion's share of revenue. Consider the graphic below, which uses the UAE as a sample market: machine learning accounts for close to 50% of AI revenue by segment.
The current state of AI implementation
1. Lack of resolutions — A common thread across the issues listed. One can argue that the role of customer service is to interface with the customer rather than to solve problems. However, AI-generated WhatsApp messages cannot generate tickets or even schedule time to connect with users. Most AI-driven customer service has seen significant churn; in telecom, for example, 12 million people have ported providers due to poor customer service (ET Report Source).
2. Longer resolution times — AI-generated tickets seem to take longer to resolve. One can argue that human-led customer support carries employee churn and call-center/outsourcing costs. However, the lack of resolution and the time taken to resolve issues have pushed customers to churn even faster.
3. One-size-fits-all solutions — Most issues faced by customers require a significant human touchpoint. As things stand, AI-generated responses have been one-size-fits-all, driving customers away from products.
4. Weak policies on AI use — Each time I get a call, I have to work out whether I am speaking to a human agent or a bot. This poses a cybersecurity threat should a bot request personal information. IRDAI, SEBI, and RBI may have policies on AI, but they may not be as robust as required.
Remediation
Two broad questions emerge from the observations above: is AI being implemented properly? Or is AI still in its infancy, such that its efficacy needs to be defined first?
I believe it's a bit of both. AI has indeed made great strides; with ChatGPT and Nvidia making AI accessible, the possibilities seem endless.
However, the efficacy remains in question. Certain prompts in ChatGPT or Perplexity may yield fairly accurate results, yet there are many instances of AI hallucination. AI may well be a work in progress, but its efficacy needs to be defined before any replication at scale takes place. The statistic below shows India as the only country with close to 60% AI deployment, and most models in India are built on top of existing foundation models.
A good metric to look at is the exploration of generative AI in developed economies. Most companies are still exploring or remain unsure, which gives a frame of reference for how AI is being adopted.
The graphic below shows some of the challenges companies face while using generative AI: data protection, poor-quality results, and legal restrictions top the charts.
There is therefore an opportunity in framing regulations on AI (Europe-specific).
However, there are several other issues when it comes to building foundational models. Take the insurance industry (graphic below): insurance and insurtech have very strong use cases for AI implementation. Yet while the number of companies that have implemented AI has risen from the previous year, the number considering using AI has gone down.
Foundational Models
I'd like to think of foundational models as remedies to existing issues rather than as reinventing the wheel. Foundational models are, broadly, models able to act independently. As things stand, significant human intervention is still required for everything beyond repetitive tasks.
That said, opportunities for foundational models exist. GitHub Copilot and Replit can sometimes automate aspects of backend code. Frontend design often remains difficult to realize as expected, but this should improve with time.
Ultimately, foundational models are critical to ensuring that AI does not go down the same path as RPA.
Disclaimer — I am not against AI. Far from it, actually. But when I see AI deployed merely to offload distressing tasks, it is important to speak out. I also believe AI needs regulation, and I am working on a draft. Ping me on LinkedIn or comment below if you want to work on this together.
FYI — This post isn’t AI generated :)