“India has to build its own AI [artificial intelligence], and we are fully committed towards building the country’s first complete AI computing stack,” said Ola founder Bhavish Aggarwal in January 2024. His AI startup Krutrim had just become India’s first AI unicorn, announcing a funding round that valued the venture at $1 billion.
In a world where artificial intelligence is set to shape the future of everything from healthcare to education, one fundamental question we must address is: How do we make AI speak our language? In India’s context, that means a vast, diverse landscape of 22 officially recognized languages and hundreds of dialects.
Krutrim’s funding round and other investments, such as the $41 million raised by Sarvam AI or Reliance’s partnership with Nvidia, highlight investor confidence in the potential of AI models trained in Indian languages. In 2024, India is making notable strides towards building such models. Lightspeed-backed Sarvam AI launched its large language model, Sarvam 1, on October 24. The 2-billion-parameter model is specifically optimized for Indian languages – Sarvam describes it as India’s first homegrown multilingual LLM, trained from scratch on domestic AI infrastructure in 10 Indian languages.
Training a large language model is a highly intensive and expensive endeavor, dominated by technology giants with deep pockets like OpenAI, Microsoft, Google, and Amazon. Even so, investment in homegrown LLMs has been on the rise, because building AI tools that serve the everyday needs of Indians represents a compelling opportunity.
It is an inspiring vision that could revolutionize how the Indian population interacts with technology, but one still very much in its infancy. Indian language AI models, as inclusive and diverse as the people who speak those languages, can help pave the way for a more inclusive digital future, unlocking the potential of millions who have been left out of the digital age by language barriers.
For AI to flourish in India, it needs to build tools to serve local markets, industries, and consumers. Inclusion, preservation of linguistic diversity, boosting local content creation, and digitizing heritage cultural content are some of the loftier implications this movement could achieve. But the main value today lies in enabling access to AI services for the vast millions of non-English speakers.
Language inclusivity, government backing, and enterprise demand are driving Indic LLMs to become one of India’s most promising areas for AI growth. But if they are to make the generational impact they promise, the obstacles and hurdles facing AI models in Indian languages deserve a closer look.
Data is the most obvious problem – the high cost of collecting high-quality, annotated data across different Indian languages remains a significant challenge.
The fact is that high-quality data for Indian languages is scarce, fragmented, and sometimes nonexistent. It’s like trying to teach a child English with books only available in French. Even more crucially, the “annotations” needed to train these models are often either missing or biased, leading to AI systems that are anything but inclusive. The data has to reflect regional variations and cultural contexts to produce accurate results.
Startups and companies are collaborating more than ever to solve the data annotation problem with government-backed initiatives to help create open-source resources.
One such initiative, launched by MeitY, is the National Language Translation Mission, which aims to power government services across languages, making them accessible in regional languages. AI4Bharat, which leads the project, has taken an ambitious approach that has produced BERT-based Indian language models for languages such as Hindi, Tamil, Bengali, and Marathi, making them publicly available for tasks like sentiment analysis, text summarization, and machine translation.
Another project aims to be a Swiss army knife for developers of Indian-language NLP. Its pre-trained models, covering a broad spectrum of Indian languages, are widely used in research, product development, and deployment.
In the world of AI, where “one-size-fits-all” models rarely work, Indian languages need their own models, data, and research that reflect their specific nuances. These open-source projects address the “back-end” of AI development for developers who want to build AI tools tailored to India’s unique linguistic fabric. Open-source datasets, though, often rely on crowdsourcing or volunteers to collect and annotate data, which, while helpful, can’t replicate the accuracy of professionally annotated data.
Cross-lingual models are a hot trend in AI right now, with companies like Reliance-acquired Reverie working to create technologies that adapt to multiple Indian languages.
“We started in an era when there was absolutely zero Indian language data in the digital media,” Reverie founder Vivekananda Pani told AIM, speaking on the complexities of building an AI model in Indian languages. He observed that now “people have a belief that this can be achieved and therefore let’s go and invest.”
The catch is that Indian languages are not just variations of a single language; they come with distinct scripts, grammar, and regional dialects. A model built for Hindi doesn’t necessarily transfer well to Tamil or Punjabi, even though these languages share regional proximity. AI models might understand a phrase in Hindi and work with Urdu, which shares much of the same grammar and vocabulary, but fail when the same phrase is spoken in, say, Bhojpuri or Awadhi.
In essence, even the most advanced multilingual models struggle with India’s immense linguistic diversity, and even the most “inclusive” language models have a tough time balancing regional dialects. Without serious investment in “low-resource languages,” this could widen the urban-rural divide, leaving rural communities stuck using services designed for the urban elite.
The challenge then is not just about building more models; it’s about specializing those models to reflect real-world use cases across different populations. Companies like Gnani.ai are working on problems like speech recognition for regional accents, but that’s just one piece of the puzzle. Building models that scale to every language, dialect, and accent across India is a Herculean task, and it’s far from being solved.
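One concrete consequence of those distinct scripts: even telling which writing system a string uses requires script-aware handling before any model sees the text. A minimal sketch using only the Python standard library (the function name and majority-vote heuristic are illustrative, not drawn from any project mentioned here):

```python
import unicodedata

def detect_script(text: str) -> str:
    """Return the dominant Unicode script name for `text`.

    Unicode character names begin with the script, e.g.
    "DEVANAGARI LETTER KA" or "TAMIL LETTER KA", so the first
    word of the name serves as a rough script label.
    """
    counts: dict[str, int] = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split(" ")[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

print(detect_script("नमस्ते दुनिया"))    # Hindi text, Devanagari script
print(detect_script("வணக்கம் உலகம்"))  # Tamil text, Tamil script
```

A real pipeline would go further, since script and language are not one-to-one (Hindi, Marathi, and Bhojpuri all use Devanagari), which is exactly why dialect-level modeling is so hard.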
Even as the idea of making AI more inclusive takes shape, one of the biggest threats to Indian language LLMs is the potential for bias. AI systems are trained on existing datasets, meaning they inevitably inherit the biases already present in society — whether it’s gender, caste, or regional bias. An AI model trained on urban social media will be better at understanding hybrid languages like “Hinglish” and the urban vernacular than rural speech.
Models trained on English-centric data reinforce stereotypes of which languages are considered worthy of serious AI investment. If we don’t consciously work to de-bias our datasets, we’ll end up with AI systems that not only exclude large portions of the population but may also end up reinforcing harmful stereotypes.
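Auditing a training corpus for language skew is one small, concrete step toward the de-biasing described above. A hedged sketch, assuming each document arrives pre-labeled with its language (the labels, threshold, and toy numbers are illustrative only):

```python
from collections import Counter

def language_shares(labels: list[str], floor: float = 0.05) -> dict[str, float]:
    """Compute each language's share of a corpus and print any
    language whose share falls below `floor` as underrepresented."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {lang: n / total for lang, n in counts.items()}
    for lang, share in sorted(shares.items(), key=lambda kv: kv[1]):
        flag = "  <- underrepresented" if share < floor else ""
        print(f"{lang:10s} {share:6.1%}{flag}")
    return shares

# Toy corpus: the skew toward Hindi and English mirrors the funding
# divide the article describes; real audits run over millions of documents.
labels = ["hi"] * 60 + ["en"] * 30 + ["ta"] * 8 + ["bho"] * 2
language_shares(labels)
```

An audit like this only surfaces the imbalance; fixing it still requires deliberately collecting and annotating data in the underrepresented languages.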
Bias also plays into the funding divide. Investors are much more likely to pour money into AI models for Hindi or Tamil because these languages have a larger user base. Yet India is a nation of 600 million vernacular speakers who are often left out of the digital conversation. Investors don’t care about low-income customers in rural areas unless there’s a profitable incentive. We need to make sure that any breakthrough in Indian language AI reaches the people who need it most.
Another issue is the sometimes glacial pace of government intervention to impact AI and data policy in India. Policy standards and regulatory frameworks for data privacy and AI ethics are largely missing. Without clear data governance policies, efforts to create multilingual AI for government services could end up getting stuck in a bureaucratic quagmire.
Policy uncertainty can delay projects and prevent the widespread implementation of Indian language AI tools.
The challenges of Indian language AI — from data quality to bias, from the urban-rural divide to funding priorities — are formidable hurdles against which industry and government will need to make progress. Addressing them will require a multi-faceted approach involving investments in data infrastructure, policy development, research, and scalable technology solutions.
Before we get too carried away with talk of AI-powered chatbots and voice assistants in every Indian language, we need to face the fact that the journey to an inclusive AI future is still very much in its infancy. It will take more than datasets and AI models to truly unlock the potential of Indian language AI for the vastly diverse Indian population. In the end, the challenge is not just about making AI multilingual. It’s about making it fair, accessible, and relevant to millions.
The strategic significance of building sovereign AI is a major geopolitical factor driving the frenzy to invest in Indian AI tailored for the Indian context and deployable by enterprises. Overcoming these challenges will decide which of these well-backed ventures breaks through in the quest to drive mass adoption of AI for Indian languages.