Categories: AI India

Intelligence as a Public Good: India’s “AI ka UPI” Revolution

There is a recurring rhythm to human progress: a breakthrough is born as a luxury, matures into a commodity, and ultimately solidifies into infrastructure.

We saw it with electricity, we saw it with the internet, and in 2016, we saw India do it with money through the Unified Payments Interface (UPI). UPI took the friction out of digital finance, transforming it from a walled garden guarded by private banks into a digital public good.

Now, it appears India is attempting to do for intelligence what it did for payments.

The global narrative around Artificial Intelligence is currently dominated at one end by massive private moats and, at the other, by a range of open-source and open-weight efforts.

Silicon Valley primarily approaches AI as a capital-intensive arms race. Trillion-dollar tech players ramp up huge compute, train very large models, and rent out intelligence via by-the-drink APIs. This intelligence is a proprietary, monetized luxury.

Enter the “AI ka UPI” initiative and the IndiaAI Mission discussed by Ashwini Vaishnaw at this week’s India AI Impact Summit.

Instead of treating AI as a product to be sold, India is architecting it as a Digital Public Infrastructure (DPI). The government is doing the heavy lifting—subsidizing the compute, curating population-scale datasets, and building foundational models.

Currently, they are making over 38,000 GPUs available to startups and researchers at around ₹65 (less than a dollar) per hour, a small fraction of the global cost. They are rolling out sovereign stacks like BharatGen and conversational models fluent in 22 regional languages.
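For a rough sense of scale, here is a back-of-envelope sketch in Python. The ₹65/GPU-hour figure comes from the summit coverage above; the exchange rate and the comparison cloud price are my own illustrative assumptions, not official figures.

```python
# Back-of-envelope comparison of the subsidized IndiaAI GPU rate against a
# representative global cloud on-demand rate. The exchange rate and the
# $3.00/GPU-hour cloud figure are assumptions for illustration only.
INR_PER_USD = 83.0        # assumed INR/USD exchange rate
INDIAAI_RATE_INR = 65.0   # subsidized rate cited above, per GPU-hour
CLOUD_RATE_USD = 3.00     # assumed representative on-demand GPU-hour price

indiaai_rate_usd = INDIAAI_RATE_INR / INR_PER_USD

print(f"IndiaAI rate: ${indiaai_rate_usd:.2f}/GPU-hour")          # → $0.78/GPU-hour
print(f"Share of assumed cloud rate: {indiaai_rate_usd / CLOUD_RATE_USD:.0%}")  # → 26%
```

Under these assumptions, the subsidized rate works out to roughly a quarter of a typical on-demand price; the exact ratio obviously shifts with the exchange rate and whichever cloud SKU you compare against.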

“They are building an ‘orchestration layer’ for cognition.”

If a developer wants to build a voice agent to help a rural farmer diagnose a crop disease, they don't have to worry about backend compute, dataset acquisition, or paying a premium to a tech giant. They just plug into the public rails.
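If such rails were to expose a familiar chat-style inference endpoint, the developer-side experience might look something like the sketch below. To be clear, this is purely hypothetical: the endpoint URL, model identifier, and payload shape are my own illustrative assumptions, and no official IndiaAI API is implied.

```python
import json

# Hypothetical request to a public "AI rails" inference endpoint.
# The URL, model id, and payload fields are illustrative assumptions only.
ENDPOINT = "https://api.example-public-rails.in/v1/chat"  # hypothetical URL

payload = {
    "model": "bharatgen-voice",   # hypothetical model id on the public stack
    "language": "hi",             # e.g. Hindi, one of the 22 supported languages
    "messages": [
        {"role": "user",
         "content": "My wheat leaves have orange spots. What could this be?"}
    ],
}

# Serialize the request body; an HTTP POST to ENDPOINT would carry this JSON.
body = json.dumps(payload)
print(body)
```

The point of the sketch is the absence of everything else: no GPU procurement, no dataset licensing, no per-token premium negotiated with a hyperscaler; just a request against shared public infrastructure.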

As I watch this unfold, I am struck by the philosophical shift it represents. We have become deeply conditioned to view AI through the lens of scarcity and subscription. But what happens when intelligence becomes a public utility?

It shifts the center of gravity of innovation. It becomes about who can solve the most acute, localized, human problems. The friction of creation drops to near zero. A bootstrapped team in a tier-two city can suddenly wield the same computational reasoning as a VC-funded Silicon Valley startup.

There is also an element of sovereignty here. In the 21st century, relying on foreign infrastructure for your population’s cognitive processing seems akin to relying on a foreign nation for your electricity. True technological independence requires sovereign AI—models trained on indigenous data, reflecting local culture, nuances, and values, rather than the implicit biases of others.

The implications could be staggering. We are moving from an era where AI is an elite tool to an era where it is the invisible, ubiquitous fabric of daily life for over a billion people.

The true measure of AI’s ultimate impact won’t be found in benchmark scores on a server farm. It will be found in the quiet dignity of a citizen accessing global markets through a vernacular voice assistant, or a rural clinic predicting patient outcomes with public compute.

I look forward to following India's AI efforts as this and other initiatives take clearer shape.

Questions to consider

1. The Value of Human Capital: If artificial intelligence becomes as ubiquitous, reliable, and cheap as public electricity, what uniquely human skills will become the new premium in a hyper-automated society?

2. Cognitive Sovereignty: How will the geopolitical landscape shift when emerging economies no longer need to import their “cognitive infrastructure” and inherent cultural biases from Western tech players?

3. The Centralization of Truth: When a government builds and curates the foundational AI models for over a billion people, where is the line between providing a democratized public good and engineering a centralized cultural narrative?

What else?

Categories: AI Leadership

The Power of Two

I recently watched and thoroughly enjoyed Harry Stebbings’ interview with OpenAI’s Sam Altman (CEO) and Brad Lightcap (COO). In addition to gaining new insights into OpenAI’s evolution, their conversation covered a wide range of topics regarding the future of AI and its implications for society and new ventures.

One of the most fascinating aspects was the dynamic between Altman and Lightcap — hearing them discuss their respective strengths, weaknesses, and how those translate into their roles at OpenAI. It’s uncommon to witness a dual interview like this, with two colleagues who have clearly worked together for years and have complete confidence and trust in each other’s judgment and insights.

Throughout my involvement with various small companies, I've often wished I could have experienced such a powerful duo! In my experience, it's not uncommon for the CEO to dominate the senior management team's dynamics. While this sometimes works well, I've also seen it lead to reduced performance or frustration among senior managers because of the CEO's actions.

Altman and Lightcap (and OpenAI by extension) appear to have a much more synergistic working relationship — effectively amounting to a co-equal division of responsibilities. I highly recommend watching this conversation for anyone involved in a startup aiming to scale quickly and effectively! Congratulations to Harry Stebbings on hosting this excellent conversation with two key individuals leading the evolution of AI!

Categories: Living Work

The Silver Bullet Mindset

In my strategy consulting practice, I’ve come across a pattern that I find interesting. It’s what I’ve come to call “silver bullet thinking” – our desire to find the one right answer for any particular problem.

I think this need for one right answer is something we're born with – and it's then developed further through our years of education. Finally, when we go to work in a company, especially a larger one, its decision processes refine this kind of thinking even more.

But sometimes the search for the silver bullet leads to the wrong outcome – a premature focus on a particular strategy which then gets organizationally committed, funded, and elevated in importance. In my experience, the larger the company, the more likely this kind of silver bullet thinking will dominate.

Yet, when I’ve worked with smaller, more innovative companies, they are less wedded to their silver bullet – and more open to a process of ongoing evaluation of the strategy based on feedback from the market. Ideally, they’re able to pursue a couple of different strategies and test the market response to each in the process.

It seems to take a different, more entrepreneurial mindset for this to happen – which might be one of the reasons executives with big-company experience find it so challenging to work in small-company settings. Learning to be situational – considering when to stay loose and pursue multiple initial strategies versus binding the whole organization to a single strategy, the silver bullet – may be one of the key leadership skills required of successful innovators.