Training large-scale AI models takes significant time, often weeks to months, depending on factors such as model size, architecture, and computational resources. OpenAI’s GPT-4, for example, reportedly comprises roughly 1.7 trillion parameters, a scale that drives up both training complexity and duration. Ceramic.ai promises faster, more efficient systems for training large language models (LLMs) while reducing compute requirements and costs.
The startup has secured $12M in seed funding from NEA, IBM, Samsung Next, and Earthshot Ventures, highlighting its achievements in long-context training and improved training speeds — 2.5X faster than the state-of-the-art. The valuation was not disclosed to TFN.
The funding will accelerate product development, scale the platform, and expand the company’s enterprise customer base. Anna Patterson, founder and CEO of Ceramic.ai, told us: “This round is about expansion and acceleration. We’re using it to refine our AI training infrastructure, grow our engineering team, and onboard more enterprise customers. We’re also deepening our partnerships with Lambda and AWS to ensure seamless, cost-effective AI model training at scale.”
Breaking AI training barriers: Ceramic’s enterprise solution
Ceramic.ai was founded in early 2024 by Anna Patterson, a renowned AI researcher and entrepreneur, and Chief Scientist Tom Costello. Together, they bring extensive experience in artificial intelligence, distributed systems, and large-scale infrastructure development.
The company tackles inefficiencies in distributed training systems, enabling researchers and engineers to push AI boundaries without constraints. Ceramic addresses bottlenecks caused by limited chip availability with software that optimises how LLMs utilise GPUs. Its enterprise-grade infrastructure reduces overall compute requirements, saving companies millions.
Speaking about the inspiration behind her venture, Patterson told TFN: “It’s both — a personal initiative and a market need. At Google, I saw firsthand how only the biggest tech companies could afford to train large AI models because of sky-high costs and infrastructure bottlenecks. Enterprises trying to build their models were left behind. AI training can scale 10x, but not 100x — Ceramic.ai is here to change that. We’re making high-performance AI training radically more efficient and accessible, so companies don’t need billions in compute resources to compete.”
While current AI infrastructure can scale up to 10x, achieving 100x growth requires a complete redesign. Ceramic.ai bridges this gap with an enterprise-ready platform that’s not just faster but fundamentally more scalable—powering the next generation of AI while dramatically reducing training complexity and cost.
Patterson said: “In the midst of an AI adoption surge, too many companies are still hindered by barriers to scale—from prohibitive costs to limited infrastructure. We’re democratising access to high-performance AI infrastructure so companies can navigate the complexity of AI training without spending hundreds of millions in research and engineering resources. But the shift to enterprise AI isn’t just about better tools—it’s about changing how businesses work. If AI adoption were a baseball game, we’d still be singing the national anthem.”
Behind Ceramic: A next-generation AI training platform
The company has developed a platform that directly addresses enterprise AI deployment challenges. Its training infrastructure achieves 2.5× higher efficiency compared to open-source solutions, lowering costs while enhancing model performance. Patterson explained: “Most AI infrastructure simply throws more GPUs at the problem—an expensive and inefficient approach. We’ve taken a different path, optimising AI training from the ground up at the algorithmic level.”
Ceramic.ai is unique in its ability to train large models on long-context data, delivering superior quality and performance. The platform surpasses all reported benchmarks for long-context model training and maintains high efficiency even with 70B+ parameter models. In testing, Ceramic-trained reasoning models achieved a 92% Pass@1 score on GSM8K, surpassing Meta’s Llama 70B (79%) and DeepSeek R1 (84%).
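For readers unfamiliar with the metric, Pass@1 simply measures the fraction of benchmark problems a model answers correctly on its first sampled attempt. A minimal illustration in Python (this is a generic sketch of the metric, not Ceramic.ai’s evaluation harness):

```python
def pass_at_1(first_attempt_correct: list[bool]) -> float:
    """Pass@1: fraction of problems whose first sampled answer is correct."""
    return sum(first_attempt_correct) / len(first_attempt_correct)

# Toy example: 3 of 4 GSM8K-style problems solved on the first try -> 0.75.
print(pass_at_1([True, True, False, True]))
```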
The platform also intelligently reorders training data, aligning each micro-batch by topic. This improves attention efficiency and eliminates token masking issues common in other models.
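To make the idea concrete, here is a minimal sketch of what topic-aligned micro-batching might look like. The `topic` and `tokens` field names and the simple bucketing strategy are illustrative assumptions, not Ceramic.ai’s actual implementation:

```python
from collections import defaultdict
from typing import Iterable, Iterator

def topic_aligned_microbatches(examples: Iterable[dict], batch_size: int) -> Iterator[list]:
    """Group training examples so each micro-batch shares a single topic.

    Assumes each example is a dict with hypothetical 'topic' and 'tokens'
    keys; this is a sketch, not Ceramic.ai's real data schema.
    """
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex["topic"]].append(ex)
        # Emit a micro-batch as soon as a topic's bucket fills, so every
        # batch is topically homogeneous and needs no cross-document masking.
        if len(buckets[ex["topic"]]) == batch_size:
            yield buckets.pop(ex["topic"])
    # Flush partially filled buckets at the end of the epoch.
    for leftover in buckets.values():
        yield leftover

# Example: documents tagged with coarse topics.
corpus = [
    {"topic": "law",  "tokens": [101, 2054]},
    {"topic": "code", "tokens": [101, 3642]},
    {"topic": "law",  "tokens": [101, 2375]},
    {"topic": "code", "tokens": [101, 4118]},
]
for batch in topic_aligned_microbatches(corpus, batch_size=2):
    print([ex["topic"] for ex in batch])  # ['law', 'law'] then ['code', 'code']
```

Because documents packed into the same micro-batch are related, attention computed across document boundaries is less wasteful, which is one plausible reading of the efficiency gains the company describes.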
Patterson concluded: “We’re building the AI training backbone for enterprises. Over the next few years, we’ll expand enterprise adoption (make model training as easy as deploying cloud applications), refine and scale long-context support (ensure AI models process high-fidelity, large-scale data efficiently), and push the limits of compute efficiency (help companies build their custom foundation models without breaking the bank).”