Currently, most AI inference computation relies on massive clusters of energy-intensive chips in cloud data centres — an environmentally and economically unsustainable practice. EnCharge AI, a startup developing pioneering analogue in-memory-computing AI chips, aims to transform this landscape with its unique chip architecture. The technology offers 20x better energy efficiency and 10x improved compute density, enabling new AI applications outside data centres while enhancing security, reducing latency, and lowering costs.
EnCharge AI has secured over $100 million in Series B funding led by Tiger Global. New investors include Maverick Silicon, Capital TEN, SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University, Morgan Creek Digital, Samsung Ventures, In-Q-Tel (IQT), and others. Previous investors RTX Ventures, Anzu Partners, Scout Ventures, AlleyCorp, ACVC, and S5V also participated in this round.
This oversubscribed round brings EnCharge AI’s total funding to more than $144 million. The funds will support the commercialisation of its first client computing-focused AI accelerator products in 2025 and advance future product development.
Expanding AI’s reach: From PCs to space exploration
Founded in 2022 by Naveen Verma, PhD (MIT graduate), Kailash Gopalakrishnan, PhD (ex-IBM and Stanford graduate), and Echere Iroaga, PhD (Stanford graduate), EnCharge AI aims to commercialise breakthrough research in in-memory computing for AI applications. The company focuses on solving critical AI computing challenges: energy efficiency, cost-effectiveness, edge deployment, scalability, accessibility, and sustainability.
Verma told TFN: “Deciding to create EnCharge AI was a confluence of two factors. Firstly, we felt we had sufficiently de-risked the technology by building successive generations of silicon in our lab and reached a point where the next step was to move from fundamental research to beginning to think about a product that addressed real-world use cases. Second, the surge of demand for compute to power AI models underlined the value proposition for highly efficient chips. In that environment and in consultation with many technical partners and potential customers, we decided it was time to commercialize these innovations.”
The company’s technology is built on six years of rigorous research at Princeton University, where Naveen Verma is an electrical and computer engineering professor. The project began with backing from DARPA and the Department of Defense to create more efficient AI computing solutions.
By moving AI processing from cloud servers to local devices, EnCharge’s efficiency gains improve security, reduce latency, and lower costs. This breakthrough affects industries ranging from consumer electronics and infrastructure to defence and aerospace.
EnCharge AI technology: advanced computing solutions
EnCharge AI’s noise-resilient analogue in-memory compute architecture significantly reduces power requirements for conventional and generative AI inference workloads. Through integrated analogue processing and memory, their AI accelerators use up to 20 times less energy than current leading AI chips across various applications.
“EnCharge is the only company to have developed a robust and scalable analog in-memory AI inference chip and accompanying software. The company was able to overcome previous hurdles to analog and in-memory chip architectures by leveraging precise metal-wire switch capacitors instead of noise-prone transistors. The result is a full-stack architecture that is substantially more energy efficient than currently available or soon-to-be-available leading digital AI chip solutions,” explained Verma to TFN.
Combined with comprehensive software tools optimised for efficiency, performance, and fidelity, EnCharge AI’s technology expands AI capabilities within existing power constraints, extending cutting-edge AI beyond data centres to edge and client environments.
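To give a flavour of the idea, the core of charge-domain in-memory computing is that a multiply-accumulate can happen in the analogue domain: weights are stored as charge on capacitors, activations switch cells onto a shared line, and charge sharing on that line performs the summation before a single ADC readout. The toy model below is purely illustrative (the function name, parameters, and noise model are assumptions for this sketch, not EnCharge’s actual circuit):

```python
import random

def analog_mac(weights, activations, c_unit=1.0, noise_sigma=0.0):
    """Toy model of a charge-domain multiply-accumulate (MAC).

    Each weight is held as charge on a capacitor (Q = C * V); a 1-bit
    activation switches its cell onto a shared line, where charge
    sharing performs the summation before a single ADC readout.
    Illustrative sketch only, not EnCharge's actual design.
    """
    # Per-cell charge contribution: Q_i = C * w_i * x_i
    total_charge = sum(c_unit * w * a for w, a in zip(weights, activations))
    # Analogue circuits are noisy; model thermal/readout noise as Gaussian
    total_charge += random.gauss(0.0, noise_sigma)
    # The ADC converts total charge back into a digital value
    return total_charge / c_unit

random.seed(0)
weights = [random.randint(-3, 3) for _ in range(64)]   # small signed weights
acts = [random.randint(0, 1) for _ in range(64)]       # 1-bit activations

digital = sum(w * a for w, a in zip(weights, acts))    # reference dot product
assert abs(analog_mac(weights, acts) - digital) < 1e-9  # noiseless case matches
```

In this simplified picture, the efficiency win comes from performing the entire summation as a single physical charge-sharing event rather than thousands of digital additions; Verma’s point about metal-wire capacitors is that, unlike transistors, their behaviour is precise enough for the analogue result to stay faithful to the digital one.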
Riding the wave of Gen AI applications
EnCharge’s technology arrives at a crucial moment for the AI industry, which faces rapidly growing energy demands from generative AI applications.
“The efficiency breakthrough of EnCharge AI’s analogue in-memory architecture can be transformative for defence and aerospace use cases where size, weight, and power constraints limit how AI is deployed today,” said Dan Ateya, President and Managing Director of RTX Ventures. “Continuing our collaboration with EnCharge AI will help enable AI advancements in previously inaccessible environments given the limitations of current processor technology.”
“EnCharge had achieved something revolutionary while having comprehensively derisked their technology through research at Princeton before the company was even launched,” said a Managing Director at Samsung Ventures. “Building on multiple generations of chips encompassing seven years of peer-reviewed research, Naveen and his team are ready to commercialise a complete hardware and software solution that can bring advanced AI out of the cloud and onto consumer devices.”
“When evaluating EnCharge AI, we looked at and beyond their initial product plans and considered how this technology will continue to develop,” said Manish Muthal, Senior Managing Director at Maverick Silicon. “We were excited by the opportunity EnCharge has to rapidly bring products to market while continuing to achieve efficiency gains with their technology, raising the bar of AI compute efficiency to a place that would be difficult for others to reach.”
The post Nvidia’s rival EnCharge AI raises $100M for next-gen hardware appeared first on Tech Funding News.