A showdown of AI philosophies: Build or Integrate?
Perplexity AI CEO Challenges Nandan Nilekani on India's AI Strategy
Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
Perplexity AI's CEO, Aravind Srinivas, publicly disputes Nandan Nilekani's recommendation for Indian AI startups to avoid large language model (LLM) development. While Nilekani suggests focusing on AI applications using existing LLMs, Srinivas advocates for building India's own AI capabilities from scratch. These opposing views have sparked a heated debate about India's strategic allocation of resources and its position in the global AI landscape.
Introduction to the Debate
The debate surrounding India's AI strategy, specifically whether to focus on Large Language Model (LLM) development or application building, has attracted attention from industry leaders and technology experts. Prominent among them is Aravind Srinivas, CEO of Perplexity AI, who opposes Nandan Nilekani's recommendation that Indian startups steer clear of LLM development. Nilekani, a co-founder of Infosys, advocates using existing LLMs to build applications, arguing that this approach aligns with India's strengths. The divergence of opinion not only highlights differing strategic priorities but also underscores a critical decision point for the future of India's AI industry.
Aravind Srinivas argues that India should invest in developing its own LLMs, akin to the ambitious efforts of organizations like ISRO, which have proven successful in showcasing India's capabilities on the global stage. By developing home-grown AI models, Srinivas believes India can achieve greater technological independence and avoid reliance on foreign technologies, which often entail data sovereignty and security concerns. Conversely, Nandan Nilekani points to the advantages of concentrating on application development using open-source models, emphasizing efficient use of resources and quicker deployment timelines. This pragmatic approach aligns with India's recognized expertise in systems integration.
Support for each path spans the public and professional spheres, with opinion divided between long-term strategic implications and immediate practical gains. On one hand, building indigenous AI capabilities is compared to the national pride associated with India's space achievements, offering a vision of enhanced technological sovereignty. On the other, leveraging existing LLMs reduces financial and infrastructural burdens, thereby stimulating rapid growth in application sectors. This balance of innovation and pragmatism will be key in determining India's standing in the competitive global AI arena.
Aravind Srinivas's Vision for AI in India
Aravind Srinivas, the CEO of Perplexity AI, has taken a bold stance regarding the development of artificial intelligence (AI) in India. Recently, he publicly opposed Nandan Nilekani's suggestion that Indian startups should focus their efforts on developing applications using existing Large Language Models (LLMs) instead of creating new ones. Nilekani, a co-founder of Infosys, emphasized that leveraging current LLM technologies would be a more efficient use of resources for Indian companies. This debate has stirred discussions about the future direction of India's AI industry, with major implications for resource allocation, technological independence, and India's competitive position in the global AI landscape.
The discussion is grounded in a larger debate about India's strategic focus within the rapidly evolving global AI sector. On one hand, there is a call for India to develop its own LLMs to ensure technological sovereignty and avoid dependence on foreign technologies. On the other hand, there's a pragmatic viewpoint that advocates for utilizing existing LLMs and focusing on application development, given the intensive resources required for developing foundational models. This latter approach is argued to be more cost-effective and swift in delivering market-ready solutions, allowing for the efficient use of India's strengths in software development and systems integration.
Srinivas's vision involves building indigenous AI capabilities from scratch, akin to the achievements of India's space agency, ISRO, which demonstrated the country's prowess in space technology on the global stage. He argues that such capabilities are essential for India to establish itself as a significant player in AI, and he has voiced support for initiatives focused on developing foundational models. His vision also considers the long-term economic implications of investing heavily in AI infrastructure, which could drive major revenue streams and foster technological independence for India. Meanwhile, proponents of Nilekani's perspective, including TCS CEO K. Krithivasan, argue that focusing on systems integration and application development plays to India's immediate strengths and resource realities.
The case for developing new LLMs faces formidable barriers: vast computing power, high energy costs, immense datasets, specialized talent, and substantial financial investment. Yet the risks of relying solely on foreign LLMs, including data sovereignty issues, limited technological control, and cultural and linguistic biases, fuel the argument for building indigenous capabilities. It is a strategic decision that may determine India's long-term standing in the AI realm.
In conclusion, Srinivas envisions a future where India is not just a user of AI technologies but a creator of fundamental AI frameworks. His position encourages an ISRO-like approach to AI development—where pioneering efforts could drive not only technological advancement but also national pride. This vision, although ambitious, seeks to position India as a leader in AI by building its own ground-up solutions, contrasting with Nilekani's perspective, which prioritizes pragmatic and financially viable strategies in the short term. Whether India chooses to develop its own LLMs or focus on applications may shape its future role in the global AI economy.
Nandan Nilekani's Perspective on AI Development
Nandan Nilekani, co-founder of Infosys and a prominent figure in India's technology landscape, has sparked a significant debate regarding the direction of AI development in India. During the Meta AI Summit, Nilekani recommended that Indian AI startups should prioritize building applications using existing Large Language Models (LLMs) rather than embarking on the creation of new foundational models. His viewpoint suggests a pragmatic approach where resources are efficiently utilized to develop applications that solve specific business problems and leverage proven technologies.
This perspective aligns with India's existing strengths in systems integration and application development, as echoed by leaders like TCS CEO K. Krithivasan. The focus on applications could offer benefits such as a lower barrier to entry, faster time to market, and reduced resource requirements, allowing Indian companies to swiftly make an impact in the AI market. By harnessing existing technologies and open-source LLMs, Indian entities could maintain a competitive edge while mitigating the high costs associated with developing new LLMs from scratch.
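To make the application-first argument concrete, the sketch below shows how thin the layer over an existing open-weight model can be. It is a minimal illustration under stated assumptions, not any company's product: the Hugging Face `transformers` pipeline API is real, but the choice of model, the prompt, and the `answer_domain_query` helper are hypothetical examples of the kind of domain-specific wrapper the application-first approach envisions.

```python
# Minimal sketch of the "integrate" strategy: build a domain-specific
# application on top of an existing open-weight LLM instead of training one.
# The model name, prompt, and helper function are illustrative assumptions.
from transformers import pipeline

# Load a pre-trained open-weight model; no foundational training happens here.
generator = pipeline("text-generation", model="gpt2")

def answer_domain_query(question: str) -> str:
    """Wrap the base model with a domain-specific prompt -- the thin
    application layer that the application-first strategy relies on."""
    prompt = (
        "Answer the following question for a small-business owner in India:\n"
        f"{question}\nAnswer:"
    )
    result = generator(prompt, max_new_tokens=64, do_sample=False)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(answer_domain_query("How do I register my shop for GST?"))
```

The division of labour is the point: the expensive pre-training has already happened upstream, and the developer's effort goes into the prompts, domain data, and product built around the model.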
Aligning with Global Trends in AI
India's AI industry stands at a critical juncture, with diverging opinions on the path forward. On one side is Aravind Srinivas, CEO of Perplexity AI, who advocates developing indigenous Large Language Models (LLMs) in India. His position challenges the approach recently suggested by Nandan Nilekani, a co-founder of Infosys, who, speaking at the Meta AI Summit, encouraged Indian companies to concentrate on building applications using existing LLMs.
Nilekani's position, which also finds support from TCS CEO K. Krithivasan, highlights a strategy focused on leveraging India's strengths in systems integration and application development. This is seen as a cost-effective and efficient way to participate in the growing AI economy without incurring the massive costs, risks, and requirements of developing foundational AI models from scratch. This approach is in alignment with India's historical proficiency in utilizing technology to create innovative solutions, particularly in software services and IT operations.
Despite the pragmatic outlook of utilizing existing models, many, including Srinivas, argue that India should not shy away from pursuing the development of its own foundational models. Drawing parallels with the Indian Space Research Organisation (ISRO), proponents of this view suggest that creating indigenous AI capabilities could establish India as a formidable player in the global AI landscape. This vision underlines a long-term investment in technological sovereignty and innovation capabilities, mirroring the success India has seen in its space endeavors.
In the broader global context, other major players are making considerable investments in AI. Notably, China has announced an ambitious $41 billion fund designated for AI chip development and LLM research, significantly raising the competitive stakes worldwide. The European Union's landmark AI Act sets global regulatory precedents, adding compliance complexity to AI development. Against this backdrop, OpenAI's progression toward GPT-5 underscores the massive resource demands inherent in cutting-edge AI model development.
Public discussion of this debate in India reflects a dichotomy between advocates of indigenous model development and supporters of a pragmatic, application-focused path. Strong sentiment backs Srinivas's call for an Indian AI push, seeing potential to break new ground akin to ISRO's achievements. At the same time, a segment of industry professionals and business leaders emphasizes practical advantages such as lower initial capital investment and quicker market deployment through existing technologies. The choice India makes will shape not only its economic future in AI but also its strategic, cultural, and technological standing globally.
Challenges of LLM Development in India
The development of large language models (LLMs) in India faces significant challenges that stem from multiple areas, including resource limitations, strategic considerations, and the global competitive landscape. Despite the potential benefits of developing indigenous LLMs, experts debate whether such initiatives align with India's current technological and economic climate. According to Aravind Srinivas from Perplexity AI, building foundational LLMs is essential for India to demonstrate its AI capabilities on a global scale. Conversely, influential figures like Nandan Nilekani and K. Krithivasan argue that India's focus should be on tailoring applications using existing LLMs, capitalizing on strengths in systems integration and application development. This debate highlights the broader strategic decision of whether to invest heavily in building LLM infrastructure or to pursue more immediate returns through application development.
One of the most pressing challenges in developing LLMs in India is the significant resource requirement. The development of such models demands extensive computing power, substantial financial investment, and access to large, diverse datasets, which can be daunting in terms of both cost and logistics. The infrastructure needed to support these initiatives involves high energy consumption and necessitates specialized talent, which is a scarce resource. This scenario is further complicated by economic constraints and the strategic prioritization of resource allocation among different sectors of technology development.
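To give a sense of the scale involved, the back-of-envelope sketch below applies the widely used approximation that training compute is roughly 6 × (parameters) × (training tokens) floating-point operations. Every figure in it (model size, token count, accelerator throughput, utilization, and price per GPU-hour) is an illustrative assumption rather than a quoted cost, but even this rough estimate shows why foundational training runs are measured in hundreds of thousands of GPU-hours.

```python
# Back-of-envelope estimate of foundational LLM training compute, using the
# common heuristic: training FLOPs ~= 6 * parameters * training tokens.
# All numbers below are illustrative assumptions, not quoted figures.

params = 70e9              # assumed model size: 70 billion parameters
tokens = 2e12              # assumed training corpus: 2 trillion tokens
train_flops = 6 * params * tokens

gpu_peak_flops = 1e15      # assumed ~1 PFLOP/s per accelerator (low precision)
utilization = 0.4          # assumed fraction of peak throughput achieved
gpu_seconds = train_flops / (gpu_peak_flops * utilization)
gpu_hours = gpu_seconds / 3600

price_per_gpu_hour = 2.0   # assumed cloud price in USD per GPU-hour

print(f"Training compute:      {train_flops:.2e} FLOPs")
print(f"Approximate GPU-hours: {gpu_hours:,.0f}")
print(f"Ballpark compute cost: ${gpu_hours * price_per_gpu_hour:,.0f}")
```

Real budgets are larger still once experimentation, failed runs, data acquisition and cleaning, evaluation, and serving infrastructure are added, which is why the resource question dominates this debate.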
Key industry leaders suggest that pursuing LLM development in India might entail opportunity costs, where resources allocated for foundational model training could detract from the growth of sectors where India holds competitive advantages, like application integration. Focusing on applications rather than foundational models could be advantageous because it requires fewer resources, offers faster market entry, and allows Indian companies to leverage existing technologies effectively. Such a strategy could yield faster economic returns and immediate job creation, although it could potentially limit India's future autonomy in AI technology development.
The decision not to develop indigenous LLMs might also pose risks associated with dependence on foreign technology. Concerns over data sovereignty, the influence of Western cultural biases in AI applications, and potential security vulnerabilities are significant. Relying heavily on foreign LLMs might lead to operational dependencies and reduced control over technological directions. These issues underscore the importance of weighing immediate economic benefits against long-term strategic independence.
The ongoing debate over LLM development highlights the divergent visions for India's role in the global AI landscape. The future of India's AI strategy will likely depend on how these visions are reconciled, considering both the benefits of technological independence and the realities of resource constraints. This choice will not only shape the technological capabilities of India but also influence its strategic positioning in global AI development. As countries like China make vast investments in AI infrastructure, India's decision could impact its competitive stance and innovation trajectory for decades to come.
Opportunities in AI Application Focus
The conversation around artificial intelligence (AI) application focus presents numerous opportunities for countries like India on the global stage. As AI technology rapidly evolves, the strategic decision of how to engage in the AI landscape becomes even more pivotal. Focusing on AI applications enables leveraging existing large language models (LLMs), thus presenting significant opportunities for innovation, speed, and market responsiveness. By building on top of existing frameworks, Indian tech firms can innovate rapidly, creating customized solutions that cater to specific industry needs without the massive investment required for developing new foundational models.
India's advantage in this application-focused approach is underpinned by its strength in software development, engineering talent, and a burgeoning startup ecosystem willing to experiment and scale new ideas. The potential for swift product development is particularly compelling since it allows companies to address local and global market demands effectively, offering tailored solutions with reduced time-to-market.
Furthermore, focusing on applications allows resources to be directed toward critical areas such as synthetic data creation, improving model accuracy, and developing complementary technologies. This pathway also offers cost efficiencies, as it removes the need for the extensive infrastructure required to train large models from scratch. Companies can thus allocate their investment and talent toward refining applications that provide direct value to consumers and businesses alike.
As the global AI landscape becomes increasingly competitive, India's stance could significantly influence its economic positioning and technological capabilities. An application-centric approach not only promises immediate gains in market penetration and employment but also lays a foundation for broader technological contributions and collaborations. By embracing this strategy, India can assert its role as a leader in AI applications, driving advancements that reshape industries and consumer experiences worldwide.
Public Reactions to the AI Strategy Debate
The debate surrounding India's AI strategy has prompted a wide range of public reactions, reflecting a division in opinion about the best path forward. Those in favor of developing indigenous AI capabilities emphasize technological sovereignty and the potential for demonstrating global AI leadership. This perspective draws inspiration from the success of ISRO (Indian Space Research Organisation), which has been cited as a model for India's potential achievements in AI. Advocates argue that building foundational models domestically would strengthen India's position in the global AI landscape and reduce dependency on foreign technologies.
Conversely, there is significant support for Nandan Nilekani's pragmatic approach, which advocates leveraging existing large language models (LLMs) to build applications. Proponents of this view highlight the immediate cost-effectiveness and practicality of focusing on applications, which can be developed more quickly and with fewer resources compared to the extensive demands of LLM development. Business leaders and industry professionals who align with Nilekani's perspective argue that India's strengths in systems integration make it well-suited for building on existing LLMs, utilizing them to solve domain-specific challenges.
Public discourse largely reflects these divided perspectives, with a greater share of tech professionals, especially those active on social media, rallying behind the call for independent AI development. The concern of technological dependence on foreign LLMs is a key point of contention, with debates often emphasizing the need for control over AI capabilities and data sovereignty to safeguard national interests. However, the conversation also acknowledges the significant resources and infrastructure required to develop such models domestically, questioning the feasibility of this approach given current constraints.
The debate has extended beyond technological implications, touching on broader socio-cultural concerns. There are apprehensions that continued reliance on foreign LLMs could perpetuate Western cultural biases in AI applications that may not resonate with Indian values. Public engagement has further emphasized the potential impact of either strategy on India's technological self-confidence and educational priorities. As India evaluates its next moves in the AI domain, the diversified public reactions illustrate the complexity of balancing short-term benefits with long-term strategic goals.
Impact of India’s AI Strategy on Technology and Economy
India's strategy for AI development has garnered significant attention both domestically and globally, pivoting on the debate over whether to develop indigenous Large Language Models (LLMs) or to focus on applications using existing models. The divergent views are represented chiefly by Aravind Srinivas, CEO of Perplexity AI, and Nandan Nilekani, co-founder of Infosys. While Srinivas advocates that India demonstrate its technological prowess by building its own foundational AI models, akin to the success seen with ISRO, Nilekani suggests that India would benefit more from leveraging existing technologies to create innovative applications, a view supported by TCS CEO K. Krithivasan.
At its core, this debate is not merely about technology but concerns deeper strategic implications regarding resource allocation, potential technological independence, and India's position in the global AI landscape. LLM development is notoriously resource-intensive, requiring vast amounts of data, computing power, and specialized talent, along with substantial financial investment. On the other hand, focusing on AI applications would lower entry barriers and hasten deployment by utilizing established models, thus aligning with India's strengths in IT services and systems integration.
As countries like China invest billions into AI infrastructure, there is a pressing question about whether India should do the same to maintain competitiveness and technological autonomy. The recent collaboration between Google and Indian educational institutions highlights another potential path: using external resources to strengthen domestic capabilities without solely pursuing indigenous LLM development. This strategic decision influences not only economic growth but also India's technological identity on the world stage.
However, choosing not to develop LLMs poses risks, such as dependency on foreign technology, which can lead to data sovereignty issues, reduced control, and potential biases in AI applications. These factors could have lasting effects on India's tech landscape, especially if global AI standards, shaped by frameworks like the EU's AI Act, become more stringent. Meanwhile, public opinion shows a divide: strong grassroots support for technological sovereignty, reminiscent of ISRO's journey, alongside pragmatists who resonate with Nilekani's focus on practical applications rather than foundational model development.
Looking ahead, the direction India adopts in its AI strategy could dictate its economic and strategic positioning in the global tech hierarchy for decades. Developing indigenous LLMs might open new high-value markets and ensure long-term independence, albeit at a significant upfront cost. Alternatively, excelling in AI application development can yield rapid economic benefits and position India as a global leader in crafting innovative tech solutions, albeit at the expense of foundational AI development and technological self-reliance. The coming years will be crucial as India navigates these pivotal choices, balancing ambition with pragmatism.
Conclusion: The Future of AI Development in India
India stands at a crossroads in the domain of artificial intelligence (AI) development. As the nation charts its path forward, the choices made today will significantly impact its technological landscape in the coming decades. In his challenge to Nandan Nilekani's view, Aravind Srinivas highlights a fundamental debate: should India invest in building its own Large Language Models (LLMs) or focus on leveraging existing ones to create applications? This decision is more than just a matter of technological innovation; it is a question of economic strategy, cultural sovereignty, and long-term global positioning.
Proponents of developing indigenous LLMs argue that it is crucial for India to maintain control over its AI infrastructure. Such development could parallel the success story of ISRO, showcasing India's ability to achieve technological independence and leadership on the global stage. By investing in its own LLMs, India may avoid the pitfalls of dependency on foreign technology, including data privacy issues and cultural biases inherent in models developed elsewhere.
On the flip side, there is a pragmatic appeal to focusing on AI applications built on existing LLMs. This approach allows for quicker time-to-market, immediate application of solutions to real-world problems, and reduced capital expenditure. By concentrating on applications, India's tech industry can harness its strengths in software development and systems integration, potentially creating jobs and economic growth in the near term.
The future of AI in India could well depend on a nuanced strategy, balancing both approaches. Investing in LLM development does not necessarily preclude advancements in applications. Instead, a phased approach, beginning with specialized models and gradually scaling up, could ensure resource allocation is both strategic and effective. As global competition intensifies with massive investments from nations like China, India must carefully calibrate its AI strategy to strengthen its position in the global AI ecosystem.
Additionally, partnerships with international tech firms, such as Google's collaboration with Indian educational institutions, could provide the necessary knowledge transfer and technical backing to support both LLM and application development. The path forward must be paved with a commitment to fostering innovation, supporting robust research and development initiatives, and ensuring that the benefits of AI reach every segment of Indian society.
Ultimately, the future of AI development in India will be shaped by its willingness to take bold steps toward building AI capabilities that are both advanced and contextually relevant. As the discussion continues, the potential for India's AI industry to emerge as a leader in both application and foundational AI research remains immense. The choices made will define not only India's technological trajectory but also its socio-economic fabric in the years to come.