Overview of E2E Networks Business Model
E2E Networks is an Indian cloud computing infrastructure provider specializing in GPU-based compute resources. Operational since 2009, the company boasts over 3,900 cloud GPUs and a self-developed, web-accessible platform offering comprehensive self-service cloud capabilities comparable to major global providers.
Recent Key Developments and Orders
- Secured two significant orders from the India AI Mission, worth ₹88 crore and ₹177 crore, aimed at building large-scale domestic large language models (LLMs) focused on Indian data.
- These orders are expected to go live shortly and could take the company to its FY26 monthly revenue run-rate target of ₹35-40 crore ahead of the March 2026 timeline (a rough run-rate calculation follows this list).
- Global demand for cloud GPUs remains robust, both for Hopper GPUs at current price points and for NVIDIA's Blackwell GPUs at a reasonably higher price point.
- Procurement is at an advanced stage for roughly 2,048 Blackwell B200 GPUs, with flexibility to scale up to around 4,096 units depending on market dynamics.
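A rough, hedged reading of how the two orders could map to monthly run rate: management indicated on the call that the two orders together contribute roughly ₹20 crore plus of monthly recurring revenue once live, but the contract tenor is not disclosed, so the 12-month figure below is purely an illustrative assumption.

```python
# Illustrative only: management indicated the two India AI Mission orders
# together contribute roughly Rs. 20+ crore of monthly run rate once live.
# The contract tenor is not disclosed; 12 months is an assumption for illustration.
order_values_cr = [88, 177]            # Rs. crore, as announced
assumed_tenor_months = 12              # assumption, not from the call
combined_cr = sum(order_values_cr)     # 265
implied_mrr_cr = combined_cr / assumed_tenor_months
print(f"Combined order value: Rs. {combined_cr} crore")
print(f"Implied monthly run rate over {assumed_tenor_months} months: Rs. {implied_mrr_cr:.1f} crore")
# ~Rs. 22 crore per month against the Rs. 35-40 crore FY26 exit target, which is
# why management expects to reach the target ahead of the March 2026 timeline.
```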
Capacity Expansion and Infrastructure
- All existing capacity, including the recently launched Chennai facility (operational since August 1, 2025), is fully online and serving customers.
- Data center capacity available to the company totals about 10 megawatts across multiple locations, enough to host an estimated 8,000–10,000 cloud GPUs; the installed base today is over 3,900 (a rough power-budget check follows this list).
- The company hosts all of its infrastructure in third-party data centers and says it would evaluate new facilities, such as TCS's announced 1-gigawatt data center, on merit once capacity is available for sale; no talks are currently underway.
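A minimal power-budget sketch of why roughly 10 MW brackets 8,000–10,000 GPUs; the all-in power draw per GPU (server, networking and cooling included) is an assumption for illustration, not a number given on the call.

```python
# Rough power-budget check, not company guidance: how many GPUs fit in ~10 MW.
# The per-GPU figure is an assumed all-in draw (server, networking, cooling);
# actual rack-level numbers vary by GPU model and data-center design.
total_power_mw = 10
for assumed_kw_per_gpu in (1.0, 1.25):
    gpus = total_power_mw * 1000 / assumed_kw_per_gpu
    print(f"~{gpus:,.0f} GPUs at an all-in {assumed_kw_per_gpu} kW per GPU")
# ~10,000 GPUs at 1.0 kW each and ~8,000 at 1.25 kW each, which brackets the
# 8,000-10,000 range management cited for the ~10 MW footprint.
```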
Technology and Sovereignty Focus
- E2E Networks is committed to sovereign cloud infrastructure, emphasizing indigenously developed software and open-source components to minimize dependence on foreign proprietary technology.
- The platform incorporates AI/ML capabilities delivered fully via open-source or internally developed tools, aligning with India’s broader AI sovereignty initiatives.
Financial Highlights for Q2 FY26
- Reported revenue of ₹43.8 crore, up 21% over the previous quarter (a quick arithmetic check follows this list).
- EBITDA margin improved significantly to 41%, from 29% in Q1.
- Net loss of ₹13.5 crore, with quarterly depreciation running at roughly ₹42–50 crore as newly capitalized capacity flows through.
- Other current assets rose notably, primarily due to advances paid to vendors for GPU procurement; GST input credits had largely been recognized earlier.
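A quick arithmetic check on the reported quarter, using only the figures above; the bridge from EBITDA to the net loss is not fully disclosed on the call, so the final comment is indicative only.

```python
# Sanity-check arithmetic on reported Q2 FY26 figures (rounded, illustrative).
q2_revenue_cr = 43.8                   # Rs. crore, reported
qoq_growth = 0.21                      # 21% over Q1 FY26
ebitda_margin_q2 = 0.41
implied_q1_revenue_cr = q2_revenue_cr / (1 + qoq_growth)
implied_q2_ebitda_cr = q2_revenue_cr * ebitda_margin_q2
print(f"Implied Q1 FY26 revenue: ~Rs. {implied_q1_revenue_cr:.1f} crore")   # ~36.2
print(f"Implied Q2 FY26 EBITDA: ~Rs. {implied_q2_ebitda_cr:.1f} crore")     # ~18.0
# With quarterly depreciation of roughly Rs. 42-50 crore, EBITDA of ~Rs. 18 crore
# is directionally consistent with the reported net loss of Rs. 13.5 crore once
# other income (interest on deposits) is added back; the exact bridge is not disclosed.
```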
Market Outlook and Strategy
- The company sees a large addressable market with substantial growth potential in India and globally, especially in the AI and machine learning sectors.
- Plans to prudently yet aggressively expand GPU capacity to avoid loss of business due to undersupply.
- Targets EBITDA margins of around 70% over the medium to long term.
- Emphasizes serving customers requiring secure, sovereign compute infrastructure, particularly for sensitive and large-scale AI workloads where data control and customization matter.
Q&A Insights
- GPU economic life is estimated at 7-8 years; management expects this to hold because the software ecosystem built around each GPU generation keeps it in demand for specific use cases.
- Utilization currently at 35-40%, targeting 80-90% with increased order execution.
- An analyst's working assumption of roughly ₹50 lakh per Blackwell GPU was neither confirmed nor denied; management declined to give precise CAPEX or ROI guidance (an illustrative calculation follows this list).
- The company is not currently integrating AMD GPUs, focusing primarily on NVIDIA due to customer demand.
- Discussions with large enterprises and startups continue, with a blend of short-term pay-as-you-go and long-term GPU usage contracts.
- No immediate plans for equity fundraising announced despite authorized capital increase; future announcements to be communicated transparently.
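Illustrative capex arithmetic for the Blackwell order sizes discussed on the call; the per-unit price is the analyst's assumption (about ₹50 lakh), not company guidance, and actual pricing will depend on configuration, volumes and currency.

```python
# Illustrative only: the ~Rs. 50 lakh per-GPU price was an analyst's assumption
# on the call and was not confirmed by management.
assumed_price_per_gpu_cr = 0.50        # Rs. 50 lakh = Rs. 0.5 crore (assumption)
for units in (2048, 4096):             # order sizes discussed on the call
    capex_cr = units * assumed_price_per_gpu_cr
    print(f"{units:>5} Blackwell B200s -> ~Rs. {capex_cr:,.0f} crore")
# 2,048 units -> ~Rs. 1,024 crore and 4,096 units -> ~Rs. 2,048 crore, which is
# why the analyst framed the initial tranche as roughly a thousand-crore outlay.
```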
Conclusion
E2E Networks is strategically positioned to capitalize on the growing demand for cloud GPU infrastructure driven by AI initiatives, particularly within India’s AI Mission framework. With expanding capacity, a sovereign technology platform, and large upcoming contracts, the company anticipates accelerated revenue growth and improved margins. Prudent capital deployment balanced with aggressive scaling underpins its growth trajectory amid a rapidly evolving market.
Ladies and gentlemen, good day and welcome to the E2E Networks Limited Q2 and H1 FY26 earnings conference call, hosted by Go India Advisors. As a reminder, all participant lines will be in listen-only mode, and there will be an opportunity for you to ask questions after the presentation concludes. Should you need assistance during the conference call, please signal an operator by pressing star and zero on your touchtone phone. Please note that this conference is being recorded. I now hand the conference over to Ms. Rashi Khatri from Go India Advisors. Thank you, and over to you, ma'am.
>> Thank you, and good afternoon everyone. We welcome you to the E2E Networks Limited H1 and Q2 FY26 results and earnings call. We have with us on the call today Mr. Tarun Dua, Managing Director; Mr. Nathan Jen, Chief Financial Officer; and Mr. Donet Gaba, Company Secretary. I must remind you that the discussion on today's call may include certain forward-looking statements and must be viewed in conjunction with the risks that the company may face. I now request Mr. Tarun Dua to take us through the company's business and financial highlights, subsequent to which we will open the floor for Q&A. Thank you, and over to you, sir.
>> Thank you. Hi everyone, and welcome to the Q2 FY26 call for E2E Networks. Let me very briefly talk about our business. We are a compute infrastructure player, including cloud GPUs, based out of India. We have built capacity of more than 3,900 cloud GPUs and have been in the business of providing compute infrastructure since 2009. We have built a very strong engineering team over the years, and we have built, from the ground up, a self-service platform for running our entire cloud infrastructure, which is accessible on the web through MyAccount (myaccount.e2enetworks.com); our AI/ML platform is part of the same MyAccount infrastructure. Our platform offers many of the features typical of any large cloud provider, on a self-service basis.
Let me recap some of the updates, some of which have already been shared with you. Your company received two reasonably large orders from the India AI Mission, for ₹88 crore and ₹177 crore. These are primarily for customers who want to run their own LLM training to build large domestic LLM models focused primarily on India, though depending on the customer's choices they may be usable globally as well. We are in advanced discussions with both the India AI Mission and the customers on helping these go live, and we expect them to go live very soon. We had given guidance of a monthly run rate of anywhere between ₹35 and ₹40 crore by March of FY26. With these two orders, it looks like we should be able to meet our monthly run-rate numbers hopefully much earlier than March, and there is a great deal of confidence, because of these India AI Mission orders, that it should be possible to do this sooner.
We are also continuing to see very robust global demand for cloud GPUs. Demand continues at the price points at which Hoppers are available globally, and we are also seeing a lot of demand globally at a reasonably higher price point for Blackwell GPUs. What we had positioned earlier was that as we gained visibility into our capacity utilization for Hoppers, we would look at procuring more Blackwells. Given the market scenario of demand picking up in parallel, we are already at a very advanced stage of placing orders for nearly 2,048 Blackwell B200s. This is primarily funded through our internal accruals, previous fundraises and debt from institutions.
We had spoken about how the Chennai location had been a bit delayed; our Chennai location went live from the 1st of August. All of our capacity today is completely online and available for use by our customers.
I would also like to once again share the news about the ongoing acquisition of the Jarvis Labs assets. The primary reason for acquiring the business was to be able to serve a lot of additional self-service demand from a cohort of customers we were not currently serving, especially customers who operate globally. Even more important, the people who were part of Jarvis Labs are now part of E2E Networks, and we believe in their ability to contribute to building a very fine DNA for E2E Networks. That is one of the reasons for the acquisition, and in the medium and long term we will see a lot of benefit from access to the great talent we have already built within E2E and acquired through Jarvis Labs, and we will continue to work on acquiring more tech talent in India. Over a period of time we will see great benefits of that.
Now, that being said, from a management perspective our focus continues to be to right-size our cloud GPU infrastructure acquisition in a way that is both prudent and aggressive. To reconcile the two: prudent in the sense that once we have visibility on how capacity will be utilized, we want to buy more, and buy as much as needed so that we do not lose business for lack of capacity. That is the balance we have always tried to maintain, and in future our strategy remains to balance these two. That said, relative to the overall global market we are still very, very small, which essentially means there is plenty of addressable market for us to pursue, and we are very confident that whatever capacity we build, we will be able to utilize for revenue. That continues to remain our strategy.
On the platform and technology side, we are committed to continuing to invest in our sovereign technology. When I use the word sovereign, I mean that we are building technology which is primarily reliant on open source, with minimal reliance on proprietary software that could be subject to encumbrances in the future. We continue to invest in homegrown technology for our customers and our cloud, and we foresee a long-term benefit in continuing with that strategy of full-stack sovereignty. We have defined sovereignty for our customers at three levels: we are an Indian cloud GPU provider whose infrastructure is owned by us; the data centres are physically located in India; and the software operated on our cloud is primarily built by us or is open source, with the ability to redistribute that open source however we see fit. We also believe in India's strategy of building sovereignty over AI models, which is what we are seeing with the investment from the India AI Mission, where they are trying to build sovereignty over Indian AI models; that becomes a fourth level of sovereignty. As a country we will continue to invest in the entire sovereignty stack, just as India has invested in digital public infrastructure in the past. I think the partnership between public and private enterprises will lead to self-sufficiency, and we believe we are a big part of that equation in India. With that, I would like to hand the conversation over to our CFO, after which we will open up the discussion for a Q&A session.
>> Thank you, Tarun. Good afternoon, all. I will quickly run you through the financial highlights. For the quarter ended September 30, 2025, we reported revenue of ₹43.8 crore, up 21% over the last quarter, that is, the quarter ended June 2025. EBITDA margin improved significantly to 41% from 29% in the previous quarter. Other income, mainly comprising interest on deposits, declined on account of the utilization of deposits for the payment of capex; consequent to that and to depreciation, we reported a net loss of ₹13.5 crore. We can now open the floor for the question-and-answer session.
>> Thank you very much. We will now begin the question-and-answer session. Anyone who wishes to ask a question may press star and one on their touchtone telephone. If you wish to remove yourself from the question queue, you may press star and two. Participants are requested to use handsets while asking a question. Ladies and gentlemen, we will wait for a moment while the question queue assembles. The first question is from the line of Bhagya Gandhi from Dalal & Broacha Stock Broking. Please go ahead.
>> Hi, thanks for the opportunity. My first question is regarding capex. Since you are fully utilized with the two contracts received recently, what is the capex plan for this year and the next two years, if you can provide it in terms of amount as well as GPUs?
>> Let's start with GPUs. What we believe globally, and we hope this gets replicated in India, is that Blackwells are going to be much bigger than Hoppers in terms of global utilization of that infrastructure, and the price point for Blackwells is different, so there is no cannibalization of Hoppers because of Blackwell; we don't expect that to happen. Overall, we believe that whatever number of Hoppers we have acquired, Blackwells could be anywhere between 2x and 8x that number, and we will know as we run through the entire cycle. As I mentioned, we are both prudent and aggressive, and we will continue to operate that way to acquire more capacity opportunistically. We are immediately planning to acquire about 2,048 Blackwells. As soon as they are deployed, or while we are deploying them, if we see that more capacity can potentially be utilized, based on the inputs we keep getting from the market and the ecosystem, we might decide to double that number, all the way to 4,096, before March. We believe that for another 12 months or so we may keep acquiring Blackwells, until the next generation becomes ready to ship and available in the Indian market. So there is a long runway; it is hard to answer today what the maximum capacity is, but on an immediate basis I think anywhere between 2,048 and 4,096 GPUs should be there.
>> This you are mentioning before March, this March, you are saying March '26?
>> I'm not talking about deployment, but in terms of ordering: one tranche of 2,048 is of course already underway, under process, and potentially another 2,048 could also go under process. Whether that deployment happens in February or March or later is hard to say today until we place those orders with the vendors.
>> So can we assume, for 2,048 units, if the average pricing for one Blackwell GPU is closer to ₹50 lakh, roughly a thousand crore of capex before the close of FY26?
>> See, it is hard to put numbers like that. Like I said, the pricing depends on a lot of things, so let us not put a hard number on it, but we are committed to building sufficient Blackwell capacity with a view to not losing any opportunities for Blackwell infrastructure coming our way.
>> Got it. My second question is regarding the purchase of services and consumables. I believe that is largely with respect to the Chennai facility and is largely done with, so can we assume that ₹14 crore would be the run rate going forward in future quarters as well?
>> See, some difference will be there before additional capacity gets added; when your utilization goes higher, the cost obviously goes higher, and we continue to build capacity for non-flagship GPUs, for storage and many other things. That being said, broadly it is in that range; it will not change drastically unless we add a few more clusters of a couple of thousand GPUs. Until that time we don't expect it to change drastically; the number will obviously move, but not to a significant extent.
>> Got it. And sir, just regarding data centre capacity: how many megawatts do we currently have, and how much do we require with the Blackwell capacity coming on stream?
>> We have some older capacity that we classify as close to 1 megawatt or so. The newer capacity built in the recent past, the last two years or so, including the Chennai facility, is about 9 megawatts, plus the 1 megawatt of older capacity, so about 10 megawatts overall. Broadly, with 10 megawatts you are looking at somewhere between 8,000 and 10,000 cloud GPUs that can be made available. That being said, at our Chennai facility we have the ability to get more capacity at reasonably short notice.
>> Got it. Just a follow-up on the depreciation part: is the depreciation charge for the new gross block already capitalized and reflected in the P&L? We are at a ₹42 crore run rate; would that largely be the depreciation number for the following quarters as well?
>> Let me have our CFO answer that question.
>> Yes, I think the number would be broadly similar, with only one point: the Chennai cluster became active on 1st August, so one additional month of depreciation would be added.
>> Got it. So we can assume about a ₹50 crore depreciation run rate for the coming quarters?
>> Yes, it should be on the same lines.
>> Got it, sir. Thank you so much. I'll get back in the queue, and all the best for the future.
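A minimal straight-line depreciation sketch tying together the roughly ₹42 crore quarterly charge discussed above and the 7-8 year GPU useful life mentioned later on the call; the gross-block figure is backed out from those two numbers and is an estimate, not a disclosed figure.

```python
# Illustrative straight-line depreciation arithmetic; the gross block is backed
# out from the ~Rs. 42 crore quarterly charge and the 7-8 year useful life cited
# on the call, so treat it as an estimate rather than a reported number.
quarterly_depreciation_cr = 42
useful_life_years = 7.5                # midpoint of the 7-8 years cited
annual_depreciation_cr = quarterly_depreciation_cr * 4
implied_gross_block_cr = annual_depreciation_cr * useful_life_years
print(f"Annual depreciation: ~Rs. {annual_depreciation_cr} crore")
print(f"Implied depreciable gross block: ~Rs. {implied_gross_block_cr:,.0f} crore")
# The extra month of the Chennai cluster (live from 1 August) is why subsequent
# quarters were discussed at closer to a Rs. 50 crore run rate.
```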
>> Thank you. Ladies and gentlemen, in order to ensure that the management is available to address questions from all participants, we request you to please limit your questions to two per participant. If you have a follow-up question, you may rejoin the queue. The next question is from the line of a participant from Simma Private Limited. Please go ahead.
>> Good evening, sir. Can you hear me?
>> Yes, please go ahead.
>> I just want to confirm one thing: the data centres which you are operating are third party, all of them, right?
>> Yes, we do not own any data centres; these are all third-party data centres.
>> Okay. Recently TCS announced that they will be setting up a 1-gigawatt data centre in India, and they said the main customers will be cloud GPU accelerators. So will E2E Networks be a client, or would you be participating in that data centre?
>> I think this question is a bit premature. We like to see a data centre before we buy capacity; once they build those data centres and have capacity available for sale, we would be happy to consider them along with all the other players. We essentially go by the merits of any player before buying services, so if TCS is in the market offering those, we are happy to speak to them, of course.
>> No, I'm asking whether talks are already underway.
>> Not currently.
>> Okay. My second question: you said the MRR for March 2026 would be the same as you mentioned before, ₹35-40 crore?
>> Yes, we have reiterated our guidance based on the confidence from the India AI Mission orders. If we put the two orders together, that is about ₹20 crore plus of MRR coming, and then we have another quarter and a half to build up more runway, so we are reasonably confident of meeting the guidance we had given earlier.
>> Okay. So would you say that you would reach that MRR in the next quarter itself?
>> See, the guidance we have given is for March '26, and we would like to stick to that, though hopefully it should be earlier, because we feel these deployments are imminent, and once they go online they will start showing up.
>> Okay. And lastly, just for information: currently you have 3,900-plus GPUs available, which is the total capacity. You previously mentioned 8,000 to 10,000 cloud GPUs; is that in addition to what is available, or is it a potential?
>> Somebody asked a question about data centre capacity; I was essentially saying that the data centre capacity can contain that many GPUs. It does not mean we have that many GPUs today.
>> Okay, thank you so much. I'll get back in the queue.
>> Sure.
>> Thank you. The next question is from the line of Kesha from Nvesh. Please go ahead.
>> Hi, thanks for the opportunity. Hope I'm audible. Regarding the recent orders you won from the India AI Mission: the realization for these orders appears to be on the lower side, which could potentially extend the payback period. Could you provide some colour on the margin profile for this project, and what margin should we expect going forward?
>> I think overall, once we actually go into implementation and look at all the requirements that might come through, whether through the India AI Mission or through the customers who have received these allocations, only then will we be able to comment on the margin profile, looking back. Currently we are not able to say whether it would be too low or too high; we will have to do a look-back to figure out how the overall margin profile turns out.
>> So are you in line with the margin guidance that you gave earlier?
>> The guidance has always been that we have the ability to get somewhere close to 70%, plus or minus something. We stand by that from an overall company perspective.
>> Got it. And is there any visibility that we have got on the software side, on the sovereign cloud?
>> No, not currently. There is no immediate visibility of revenue on that side.
>> Got it, sir. I'll come back in the queue. Thank you so much.
>> Sure.
>> Thank you. The next question is from the line of Shuba Madagarval from BS Securities. Please go ahead.
>> Hello, am I audible?
>> Yes, please go ahead.
>> I just have one question: what is the useful life that you take for your GPUs for depreciation purposes?
>> I believe it is as per the Companies Act, which applies to all businesses. Mostly, the useful life we see for GPUs or CPUs is easily about 7-8 years.
>> 7-8 years, okay. That was the question. Best of luck.
>> Thank you. The next question is from the line of Vun Gandhi from Fredent Asset Management. Please go ahead.
>> Thanks for the opportunity. Tarun, my question is on the AI development side in India. As far as I understand, if someone is using an LLM API, the computing happens on the LLM provider's end, essentially on foreign servers, on foreign land. What I wanted to understand is whether you see significant compute requirement in India, because this AI development is happening through APIs. I'm not sure how much is happening through APIs, but if it is, then it's not much of a big market for us. Just some clarity.
>> There are so many things people are trying to do with AI. Look at the use cases where people need control over the outcome, where they need security, where they need sovereignty over how they process their data. Take the case of businesses with a significant number of users: if you end up doing financial transactions in India at that scale, then I believe a lot of that infrastructure would need to be set up in India itself. Second, these are not the people who would use one-size-fits-all generic inference APIs to run their business. They would look for an edge, which can only be achieved by having dedicated backend infrastructure, or inference APIs where you are able to make changes to the underlying platform through retraining, fine-tuning and being able to teach the AI model. We see a lot more development in open-source AI which enables all of these things. While proprietary AI definitely comes up first and claims to do something nobody else can, what we have seen is that the rate of change in open source is very rapid today, and usually within a one- to three-quarter gap open source is able to do 90% of what proprietary AI models can do. The way to derive maximum benefit from those open-source models is by running your own deployment around them and being able to take definite and direct benefit of them. And that is not the only major use case for AI; AI is present everywhere. So, all said and done, where developers are using AI is not the only question we should be asking; there is a need and demand for sovereignty over AI, which is very important.
>> Sorry to interrupt. So the implication is that we are largely dependent on open-source architecture and developments around that architecture?
>> Sure, that is one way of putting it.
>> Understood. And just a follow-up on this: OpenAI has opened an office in Delhi and —
>> It is still very premature; I will not be commenting on OpenAI here on this call. Yes, OpenAI is a large company, but we are not talking about that over here.
>> I see. So is there a chance you could be fulfilling their compute requirements, or will you not comment on that currently?
>> Too early to say, is what I would say.
>> Got you. And the last question: although you said the economic life of a GPU is 7 to 8 years, given how rapidly the technology is advancing, do you think 7 to 8 years of economic life remains an appropriate assumption going forward?
>> It has continued to be an appropriate assumption in the past. The future, of course, nobody can predict, but what we have seen is that an entire ecosystem of software has been built on top of the AI/ML stack, and all of these things put together continue to operate best at a particular price performance on a particular model of GPU. That does not change very rapidly without heavy investment in development, data science and AI/ML effort. So, net-net, what we have seen in the past continues to hold: any existing GPU model has a sweet spot for certain use cases that came in during the peak, prime time of that particular GPU. For instance, currently we see demand for all three big GPU generations — the Ampere A100 and related series, the Hopper series and Blackwell. There is demand for all three, so we are essentially looking at almost a six-year life cycle across the three, and Blackwell is only just entering that cycle. That being said, we believe our assumption that GPUs will continue to be useful is grounded in what we are seeing.
>> Understood. Thank you very much.
>> Thank you. The next question is from the line of Amina Jen, an individual investor. Please go ahead.
>> Hi, thank you for the opportunity. I just wanted to understand: other current assets in the balance sheet in H1 have gone up significantly. What I understand is that it is a GST input receivable item, and I assumed it was for the GPUs we had in CWIP, which I understood had already been capitalized. If you can address that — why has it almost doubled in amount?
>> Can our CFO answer that question?
>> Yes. The amount primarily includes GST balances and fixed deposits, and within other current assets it is the advances to vendors that we have given for procurement of the new GPUs; that is the other reason, along with the earlier balances of GST receivable.
>> So the current increase from the previous amount, which has doubled — it is not the GST input that has gone up; some other items have gone up?
>> Yes. The GST input was already factored into the March financials, because the capitalization was under CWIP and the GST input was already taken. During the current quarter it has gone up primarily because of advances given for the procurement of new GPUs.
>> Okay, got it, sir. The next question is regarding the debt we have taken, about ₹100 crore. Can you guide what our full-year debt would be and what our debt utilization would be for next year?
>> The facility we have taken is ₹450 crore, which would be utilized as per the terms of the agreement with the vendor during the payment cycle. By the end of March, or early April, it should be completely utilized.
>> Got it, sir. And just the last question, regarding the initial two contracts that we have won: these have been more on the basis of GPUs being given out on an hourly basis. Going forward, if we have enterprise wins, will this continue to be our model, or will we move towards a more traditional cloud model, a pay-as-you-go model that actually leverages our platform benefits? If you can throw some light on how the model will shape up going forward.
>> Sorry, I was on mute. We continue to work on all aspects of how people want to utilize GPUs. There will always be customers who want to use GPUs for short periods — hourly, weekly, monthly or three-monthly — and there will always be customers who look for longer-term engagements in how they want to utilize their GPUs. So both are there.
>> So, just to follow —
>> Sorry to interrupt, Mr. Jen; may we request you to please rejoin the queue.
>> Yeah, thank you.
>> Thank you. The next question is from the line of Kartikat from Raj Investments. Please go ahead.
>> Thanks for the opportunity. I have two or three questions. First: what exactly is the moat that E2E enjoys compared to other data centre players? And with L&T buying a stake in your company, how are you leveraging L&T in this whole space? Can you throw some light on this — because there are other data centre players, what is the unique solution that E2E offers?
>> We are a cloud player, not a data centre player. We have a full stack of services. We have been supporting GPUs since around the 2019-20 time frame, we have been working on our software platform for many years, and we have the ability to provide the solutions that customers need. That is what we would like to believe is our moat: our customer orientation towards providing support, solutions and the software platform on which to run training, inference, model endpoints and deployments, along with the ability to integrate with the general-purpose cloud platform we have built over the years. All of these are part of our moat. Of course, running a cloud involves a significant number of activities that all need to work together well, and we have been a team doing that for a long time; the team itself is also a moat that we have.
>> And how are you leveraging the L&T investment? Is it helping you with physical infrastructure in any way that gives you an advantage?
>> We have mentioned in the past that L&T is one of the best partners for an India-centred company. L&T is a very large group which has been able to open a lot of doors and a lot of conversations for us, and we continue to work with them to explore the various inside and outside use cases where we can work together.
>> Okay, thank you. My second question: I know you mentioned the Blackwells, but there are also other vendors, like AMD, trying to eat some of the pie. Are you looking at deploying other GPUs, or primarily Nvidia GPUs? And on the second part — I think you partly addressed this when the other investor asked — you have started looking at inferencing as well, right? In previous calls you spoke about training, so I want to understand who these customers are and what kind of instance serving they are looking for, both on Nvidia GPUs and other GPUs.
>> I didn't fully follow the question — you are asking whether...
>> Right. First, are you expanding your portfolio to other vendors, like AMD GPUs, in addition to Nvidia GPUs, or staying focused?
>> Currently we are very focused on where customer demand is coming from, and we feel customer demand today is primarily for Nvidia GPUs, at least in India. That is one. What was your other question?
>> Yes, the second question: I looked at your investor presentation, and in one of the slides you talked about the AI/ML platform tooling and all of that, right?
>> All of that is part of our platform that we have been building for a while now, and there are customers running many of the features we have built.
>> Okay. And these customers — are they AI startups, enterprises, or government and educational institutes?
>> All sorts of customers.
>> Okay. I thought the training side of the story would probably be government and educational institutes, and I was expecting more of the inference side from AI startups and enterprises, but you are saying all of them — there is no breakup here, right?
>> There is no breakup right now.
>> Okay, thank you.
>> Thank you.
>> Thank you. The next question is from the line of Akash, an individual investor. Please go ahead.
>> Thanks for the opportunity. Sir, my first question: I think in the articles of association we have increased our authorized capital by around ₹1,000 crore. I just wanted to understand whether we are looking at an imminent fund raise in the near term, and if yes, how would you be doing it — through a QIP?
>> We will obviously keep everyone informed about our plans. We will make announcements as appropriate.
>> Since you have changed the authorized capital, I believe it will most likely be an equity fund raise. If that happens, how much dilution in the promoter stake are we looking at, and at what price would the equity raise happen — that is basically what I wanted to understand.
>> Like I mentioned, we will keep everyone informed about our exact plans. That is what I would like to say for now.
>> So nothing for this year, FY26 — it's not near term?
>> Once again, for the third time, I would say the same thing: we would like to keep everyone informed about whatever we are doing. If we do anything, we will make a full announcement and you will learn of it from that.
>> Understood, sir. Thanks a lot.
>> Thank you.
>> Thank you. The next question is from the line of Akil Jawat from Radanta Vision Private Limited. Please go ahead.
>> Hi, am I audible?
>> Yes, you are. Please go ahead.
>> Okay. My question is: could you please explain the current utilization level across all your GPUs — H200, H100 and the other GPUs?
>> In the previous quarter, the overall revenue utilization was close to 35-40% or so; there is obviously also an element of a lot of non-revenue utilization. Those were the numbers for the previous quarter. We are targeting somewhere between 80% and 90% utilization for the current infrastructure, and we have the confidence of being able to achieve those numbers based on the India AI Mission orders that have been allocated to us. We are very hopeful that by March, utilization of the current infrastructure will get to about 80-90%.
>> Okay, and what utilization is required on the newly deployed —
>> Sorry, the line is not clear.
>> I'm sorry to interrupt; your voice is sounding muffled, sir.
>> Okay, is my voice clear now?
>> Yes, please go ahead.
>> I was asking what utilization range is required for EBITDA and PAT break-even on the newly deployed capacity.
>> Let me ask our CFO to answer that question.
>> Could you please repeat — the utilization range required for EBITDA and PAT break-even on the newly deployed capacities?
>> Yes, what utilization range is required for EBITDA and PAT break-even on the newly deployed capacity?
>> I think it would be difficult to say for new capacity, because our cost structure is broadly fixed and we would not be able to allocate the cost of employees and admin expenses to it. From a GP margin perspective, what we foresee is that our GP margin should range from 80% to 85% on the newly deployed assets.
>> Okay, okay.
>> May we request you to please rejoin the queue; we have participants waiting.
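A rough, hedged illustration of what the utilization targets above could mean for revenue, scaling the current quarter's revenue linearly with revenue utilization; it ignores pricing mix, new Blackwell capacity and non-revenue workloads, so it is a sanity check rather than guidance.

```python
# Back-of-envelope only: scales reported revenue linearly with revenue
# utilization. Ignores pricing mix, discounts, new Blackwell capacity and
# non-revenue workloads, so treat the output as a rough sanity check.
q2_revenue_cr = 43.8
current_monthly_cr = q2_revenue_cr / 3            # ~Rs. 14.6 crore per month
current_util, target_util = 0.375, 0.85           # midpoints of 35-40% and 80-90%
implied_monthly_cr = current_monthly_cr * target_util / current_util
print(f"Current monthly revenue: ~Rs. {current_monthly_cr:.1f} crore")
print(f"Implied monthly revenue at ~85% utilization: ~Rs. {implied_monthly_cr:.1f} crore")
# ~Rs. 33 crore per month on the existing fleet alone, approaching the Rs. 35-40
# crore monthly run-rate target management discussed for March 2026.
```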
>> Okay, thank you.
>> Thank you. The next question is from the line of Rajum Kumar Vanatan from RK Invest. Please go ahead.
>> Good evening. Can you hear me?
>> Yes, we can. Please go ahead.
>> I have two questions. First: what is the return on equity you are looking at in the medium term, say two to three years? I know that currently you are not making money at the bottom line, so I just want to know the medium-term targets.
>> It will always be a work in progress as we keep growing, but ultimately what we are targeting from an EBITDA perspective is to be somewhere in the range of 70%, plus or minus something, over the medium and long term as volumes grow; I think we will stabilize somewhere around those numbers.
>> 70%, is that what you said? Okay, got it. The second question: you mentioned the MRR, the monthly run rate of ₹40 crore you want to achieve by March '26 —
>> ₹35 to 40 crore.
>> Yes. So what would be the incremental cost — will the cost line move more dramatically than what you have now?
>> I don't think there should be any major cost difference, in the sense that we are already accounting for a lot of the cost we are incurring for the current infrastructure, so I don't think it should change massively. We already have the infrastructure in place; we don't see a major change.
>> Okay, sir. Just to extend the question: your business requires a lot of capital to be deployed upfront. Do you think you will be able to win this race in the longer run? Of course you have the backing of L&T, but say tomorrow you need to deploy another ₹10,000 crore of cash if AI really picks up in a big way in India — do you think you will have the wherewithal to invest that much money?
>> I think it would be very speculative to answer that question today, but obviously we are trying to build up our capacity as much as we can.
>> Okay. But how do you address the risk of obsolescence? That is one big risk you are running. What is your view on that?
>> We have been in the compute business for almost the past 16 years, so we have seen multiple cycles of this. There are so many moving parts to take care of that you do not have to worry about obsolescence as long as you have done things in a prudent manner.
>> Sorry to interrupt, Mr. Vanatan. We will request you to please rejoin the queue.
>> Thank you.
>> Ladies and gentlemen, we request you to please limit your questions to one per participant. The next question is from the line of Neil Manat from Pico Capital. Please go ahead.
>> Hello, hi sir. Am I audible?
>> Yes, you are.
>> Sir, I wanted to understand: our June-quarter exit MRR was ₹14 crore, and hence we did ₹43.5 crore this quarter, but the exit MRR published for this quarter is ₹16 crore. So have we seen some dips in between months and then come back to ₹16 crore?
>> Broadly, as soon as we received the long-term orders from the India AI Mission, we had to free up capacity from longer-term users, or refuse some business from longer-term users, and work only with capacity that could be deployed pre-emptively. In that sense we are in a holding pattern for our capacity: we are trying to utilize it, but not for the long term, because we obviously want to give priority to the workloads coming from the India AI Mission, while also not keeping it fully idle. That balance between the two is what we are trying to establish.
>> Thank you. The next question is from the line of Kesha from Nvesh. Please go ahead.
>> Thanks for the opportunity again. The question is: when do we expect billing to commence for the two large projects that we won from the India AI Mission?
>> Sorry, say that again, please.
>> When do you expect billing to commence for the two large projects we won from the India AI Mission? Any timeline you can share?
>> I think it should be sooner rather than later, but we can't put an immediate date to it. We are very, very hopeful that it could be very soon.
>> Thank you. The next question is from the line of Shiasuk from the Tamil Nadu Investors Association. Please go ahead.
>> Hi sir, thanks for the opportunity. Have you done any benchmarks comparing E2E versus AWS, Azure or GCP on training time and cost for the same model and the same data set? And what was the difference in time-to-train and total cost on E2E versus these hyperscalers?
>> There are certain standard cluster tests for GPUs that our team keeps running internally, with a view to measuring compute performance. I don't believe these are published, but what we believe is that we match the performance of the best of the new cloud operators out there; it is very, very close to the performance everyone else is getting, including on the standard benchmarks that are out there.
>> Thank you. The next question is from the line of the participant from Simma Private Limited. Please go ahead.
>> Hey, thanks for the opportunity again. I just want to ask one question. At the beginning of this month, Trump attempted to restrict Nvidia's Blackwell chips from leaving the US easily; China is restricted from buying them, and he may attempt similar restrictions on other countries as well. So would you have any issues or threats because of that?
>> I think that question is a bit above my pay grade. Basically, we do our level best to follow all the laws of India and the laws of the country of origin of the equipment we are buying, and we ensure fully that none of our services are utilized by anyone who is not allowed to utilize those services.
>> Thank you. The next question is from the line of Bhagya Gandhi from Dalal & Broacha Stock Broking. Please go ahead.
>> Yes sir, I just wanted to know whether you have seen any conversions on the POC side with respect to enterprises, because in the long run that is the steady-state business, right? And how is the AI demand environment from the India AI Mission, if you can spend some time on that as well?
>> Enterprises will continue to adopt AI; we will continue to work with enterprises, and those conversations and conversions will keep happening. That being said, on the overall demand from the India AI Mission, we are very positive. We believe the goal of the India AI Mission is essentially to make India self-sufficient, and cloud GPU providers are a very big part of that equation; we will continue to contribute to the mission.
>> Thank you. Ladies and gentlemen, due to time constraints, that was the last question for today. I now hand the conference over to the management for closing comments.
>> Thank you, everyone, for listening to us. I would also like to thank our entire team, all our customers, all our shareholders and all our ecosystem partners, including our vendors. Thank you, one and all; we hope to continue to have these conversations. Thank you.
>> Thank you very much, sir, and members of the management. With that, we conclude today's conference call. On behalf of Go India Advisors, that concludes this conference. Thank you for joining us, and you may now disconnect your lines.
E2E Networks is an Indian cloud infrastructure provider specializing in GPU-based computing resources, offering over 3,900 cloud GPUs through a proprietary, web-accessible self-service platform. This platform rivals major global providers by delivering comprehensive cloud capabilities tailored to AI and machine learning workloads, with a strong focus on sovereignty and open-source technology.
E2E Networks secured two major orders worth ₹88 crore and ₹177 crore to support domestic large language model development focused on Indian data. These contracts are expected to accelerate revenue growth, helping the company potentially achieve its FY26 monthly revenue run rate target of ₹35-40 crore ahead of schedule, driven by increased GPU utilization and demand.
The company has all current capacity, including its Chennai data center, fully operational, with roughly 10 megawatts of third-party data-center capacity able to host an estimated 8,000–10,000 GPUs. E2E Networks is at an advanced stage of procuring NVIDIA's Blackwell B200 GPUs, planning a flexible scale-up from about 2,048 toward 4,096 units as demand warrants, and says it would evaluate newly announced facilities such as TCS's planned 1-gigawatt data center once capacity there is available for sale.
E2E Networks focuses on sovereign cloud infrastructure to reduce reliance on foreign proprietary technology and align with India's AI sovereignty initiatives. Their platform leverages indigenous software and open-source AI/ML tools, ensuring secure, customizable, and compliant compute environments ideal for sensitive data and large-scale AI workloads requiring control over infrastructure.
In Q2 FY26, E2E Networks reported revenue of ₹43.8 crore, a 21% increase from the previous quarter, and an EBITDA margin improvement to 41% from 29%. Despite a net loss of ₹13.5 crore, driven mainly by depreciation on capacity-expansion capital expenditure, the financials indicate strong top-line growth and improving profitability as the company ramps up GPU utilization and executes large contracts.
E2E Networks currently runs at 35-40% revenue utilization on its GPUs, with a target of 80-90% as new orders are fulfilled. It prioritizes NVIDIA GPUs, including the Blackwell series (an analyst's estimate of about ₹50 lakh per unit was not confirmed by management), to meet customer demand. The company offers a mix of pay-as-you-go and longer-term GPU contracts, catering to both startups and large enterprises with flexible usage models.
E2E Networks sees a substantial growth opportunity in India's and the global AI and machine learning markets, targeting medium- to long-term EBITDA margins of around 70%. Its strategy balances prudent capital investment with aggressive capacity scaling to avoid undersupply, emphasizing secure, sovereign cloud infrastructure tailored for AI workloads that require data sovereignty, customization, and robust performance.

