people+ai, an initiative by EkStep Foundation (an organisation co-founded by Nandan Nilekani), is set to discuss a concept paper on ‘Digital Public Infrastructure for Open Compute’ next month at the Global Technology Summit in New Delhi.
The idea is to build a network of micro-data centres with interoperable standards, so that small businesses and startups can plug into it and use it as per their requirements.
“The government buying graphics processing units (GPUs) is necessary, but not sufficient,” Tanuj Bhojwani, head of people+ai, told ET. “We believe there’s a more sophisticated solution, which is, we need to create the conditions for the Indian ecosystem to be able to invest in compute.”
people+ai is a community of innovators that aims to help the Indian ecosystem leverage the power of AI for building solutions at scale. Pramod Varma, chief architect of Aadhaar and India Stack, and the technology mind behind several DPIs, including GSTN and UPI, is mentoring this latest DPI as well. The team is led by Tanvi Lall of people+ai.
For certain types of compute workloads, a startup micro-data centre can procure the hardware and plug it into this network, Bhojwani said. “On the other end, anybody can provision this compute. If I’m a small startup and all I want to do is run a database, I can approach this network and request for someone to give me compute power. This network will help orchestrate that,” he said.
Santashil Palchaudhuri, former managing director of Cloud Platforms at JP Morgan Chase & Co and now a volunteer at people+ai, helped build the private cloud for JP Morgan.
Palchaudhuri told ET that what people+ai is proposing is a set of interoperable open standards for end consumers and developers.
Currently, if a customer adopts an Amazon Web Services (AWS) stack, they are locked into that stack and its software, he said. It takes a lot of engineering effort to move to another cloud, such as Google’s, or to a smaller cloud player, he said. With an interoperable default set of standards, it becomes easier for customers to switch away from cloud service providers they don’t like, he explained.
“It becomes much more portable, and hence there is a cost leverage. The computing workload can shift to wherever it is less costly. Consumers get benefits,” he said.
Palchaudhuri cited a global example of a similar project, the Open Clouds for Research Environments (OCRE) project, which aims to accelerate cloud adoption in the European research community by bringing together cloud providers, earth observation (EO) organisations and the research and education community through ready-to-use service agreements and €9.5 million in adoption funding.
Trigger financing into compute
The current artificial intelligence (AI) boom has created demand for huge compute capacity and sparked a race of sorts between nations for access to such compute for their companies, especially startups.
On October 4, ET had reported that the government is considering a proposal to set up a cluster of 25,000 GPUs under a public-private partnership (PPP), to be made accessible to Indian companies working on AI and other emerging technologies that require high computing capacity.
The proposal, which will cost Rs 8,000-10,000 crore, is being debated at the highest levels in the ministry of electronics and IT (Meity), ET had reported.
Vigyanlabs’ founding chief executive Srinivas Varadarajan told ET he backs the idea of a network of micro-data centres in principle. Vigyanlabs has two data centres each in Mysuru and Pune in collaboration with Protean eGov Technologies.
“If we have a network of hundreds of micro-data centres in the country, and manage data locally, we can cater to customers in a faster manner,” Varadarajan said. “We won’t choke our backend network. It will be like a federated model of micro-data centres who are live and running. It will be like several startups coming in and plugging in,” he said.
“But if I start a data centre and plug into a pool of compute, acting as one more node in the pool, more demand will come to this pool and I will get business from it,” Bhojwani explained. “We can apply the same open network and discoverability concept of the Open Network for Digital Commerce (ONDC) to compute.”
“Compute is many differentiated services. Unless we create an open market alternative, we won’t get small ticket investments,” Bhojwani said.
“Even the Oracles of the world are struggling because even their cloud doesn’t have that much uptake,” Bhojwani said.
Can we have open compute?
“We are still in the design mode. Next month we’d like to launch this as a discussion paper,” he said. “We are not necessarily experts on compute, but the problem looks a lot like what other DPI processes are solving.”
Sharad Sharma, co-founder of iSPIRT Foundation, a volunteer-driven technology think tank, is not associated with people+ai. Speaking as an independent expert, he said, “This effectively ends up meaning one is asking for subsidy for compute. This thinking has evolved because you need compute, data, talent and data scientists to do something in AI, and compute is expensive.”
Sharma said the same conversation is happening in several countries. “In the UAE, for example, Falcon AI LLM has received good subsidies from the government,” he said.
The eighth Global Technology Summit, by policy think tank Carnegie India, is being co-hosted by the external affairs ministry. Themed ‘Geopolitics of Technology’, the summit will take place in New Delhi from December 4 to 6.
This year, it will convene industry experts, policymakers and scholars from around the world to discuss DPI, AI, critical and emerging technology, export controls, space, semiconductors, AI regulation, compute power, military applications, skilling, and innovation.