Dr. Devavrat Shah is the Co-founder and CEO of Ikigai Labs and a professor and director of the Statistics and Data Science Center at MIT. He previously co-founded Celect, a predictive analytics platform for retailers, which he sold to Nike. Devavrat holds a Bachelor's degree and a PhD in Computer Science from the Indian Institute of Technology and Stanford University, respectively.
Ikigai Labs provides an AI-powered platform designed to transform enterprise tabular and time series data into predictive and actionable insights. Utilizing patented Large Graphical Models, the platform enables business users and developers across various industries to enhance their planning and decision-making processes.
Could you share the story behind the founding of Ikigai Labs? What inspired you to transition from academia to entrepreneurship?
I’ve actually been bouncing between the academic and business worlds for a few years now. I co-founded Ikigai Labs with my former student at MIT, Vinayak Ramesh. Previously, I co-founded a company called Celect which helped retailers optimize inventory decisions via AI-based demand forecasting. Celect was acquired by Nike in 2019.
What exactly are Large Graphical Models (LGMs), and how do they differ from the more widely known Large Language Models (LLMs)?
LGMs, or Large Graphical Models, are a probabilistic view of data. They stand in sharp contrast to “foundation model”-based AI such as LLMs.
Foundation models assume that they can “learn” all the relevant “patterns” from a very large corpus of data, and therefore, when a new snippet of data is presented, it can be extrapolated based on the relevant parts of that corpus. LLMs have been very effective for unstructured (text, image) data.
LGMs instead identify the appropriate “functional patterns” from a large “universe” of such patterns given the snippet of data. LGMs are designed so that all the “functional patterns” relevant to structured (tabular, time series) data are available to them.
LGMs are able to learn and provide precise predictions and forecasts using very limited data. For example, they can produce highly accurate forecasts of critical, dynamically changing trends or business outcomes.
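To make the idea of “picking a functional pattern from a universe of candidates” concrete, here is a minimal sketch in plain NumPy. It is a toy illustration only, not Ikigai's LGM technology: the tiny candidate dictionary and the `fit_and_forecast` helper are invented for this example, and a real system would use a far richer, probabilistic model over patterns.

```python
# Toy sketch (not Ikigai's implementation): choose the best-fitting
# "functional pattern" for a short data snippet from a small dictionary
# of candidate patterns, then extrapolate it forward.
import numpy as np

def fit_and_forecast(y, horizon):
    """Pick the candidate pattern with the lowest in-sample error and extrapolate."""
    t = np.arange(len(y), dtype=float)
    t_future = np.arange(len(y), len(y) + horizon, dtype=float)

    # A tiny "universe" of functional patterns, purely illustrative.
    candidates = {
        "linear trend": lambda x: np.column_stack([np.ones_like(x), x]),
        "trend + weekly seasonality": lambda x: np.column_stack(
            [np.ones_like(x), x, np.sin(2 * np.pi * x / 7), np.cos(2 * np.pi * x / 7)]
        ),
    }

    best_name, best_coef, best_err = None, None, np.inf
    for name, design in candidates.items():
        X = design(t)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = np.mean((X @ coef - y) ** 2)
        if err < best_err:
            best_name, best_coef, best_err = name, coef, err

    forecast = candidates[best_name](t_future) @ best_coef
    return best_name, forecast

# Example: two weeks of noisy daily sales, forecast the next week.
rng = np.random.default_rng(0)
days = np.arange(14)
sales = 100 + 2 * days + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, 14)
pattern, forecast = fit_and_forecast(sales, horizon=7)
print(pattern, np.round(forecast, 1))
```

Even with only fourteen observations, matching the snippet to the right pattern family is what makes extrapolation possible, which is the intuition behind forecasting from very limited data.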
Could you explain how LGMs are particularly suited for analyzing structured, tabular data, and what advantages they offer over other AI models in this area?
LGMs are designed specifically for modelling structured data (i.e. tabular, time series data). As a result, they deliver better accuracy and more reliable predictions.
In addition, LGMs require less data than LLMs and therefore have lower compute and storage requirements, driving down costs. This also means that organizations can get accurate insights from LGMs even with limited training data.
LGMs also support better data privacy and security. They train only on an enterprise’s own data – with supplementation from select external data sources (such as weather data and social media data) when needed. There is never a risk of sensitive data being shared with a public model.
In what types of business scenarios do LGMs provide the most value? Could you provide some examples of how they have been used to improve forecasting, planning, or decision-making?
LGMs provide value in any scenario where an organization needs to predict a business outcome or anticipate trends to guide their strategy. In other words, they help across a broad range of use cases.
Imagine a business that sells Halloween costumes and items and is looking for insights to make better merchandising decisions. Given the seasonality of their business, they walk a fine line: on one hand, the company needs to avoid overstocking and ending up with excess inventory at the end of each season (which means unsold goods and wasted CAPEX). At the same time, they don’t want to run out of inventory early (which means missed sales).
Using LGMs, the business can strike the right balance and guide its retail merchandising efforts. LGMs can answer questions like the following (a small illustrative sketch follows the list):
- Which costumes should we stock this season? How many of each SKU should we stock overall?
- How well will one SKU sell at a specific location?
- How well will this accessory sell with this costume?
- How can we avoid cannibalizing sales in cities where we have multiple stores?
- How will new costumes perform?
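As an illustration of the first question, how many units of a SKU to stock, the sketch below turns a probabilistic season-demand forecast into a stocking quantity using the classic newsvendor critical ratio. The demand samples, prices, and costs are made-up numbers, and this is not Ikigai's product API; it is only meant to show how a forecast distribution, rather than a single point estimate, feeds a merchandising decision.

```python
# Hedged sketch: from a probabilistic demand forecast to a stocking quantity
# for one costume SKU. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Suppose an upstream forecasting model produced 1,000 samples of season demand.
demand_samples = rng.negative_binomial(n=20, p=0.04, size=1000)  # mean ~480 units

unit_cost, unit_price = 8.0, 20.0
understock_cost = unit_price - unit_cost   # margin lost per missed sale
overstock_cost = unit_cost                 # cost sunk per unsold unit

# Newsvendor critical ratio: stock at this quantile of the demand distribution.
critical_ratio = understock_cost / (understock_cost + overstock_cost)
order_quantity = int(np.quantile(demand_samples, critical_ratio))

print(f"Stock ~{order_quantity} units (service-level quantile {critical_ratio:.2f})")
```

The same pattern extends naturally to per-location quantities or accessory attach rates: each question above becomes a forecast distribution plus a decision rule on top of it.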
How do LGMs help in scenarios where data is sparse, inconsistent, or rapidly changing?
LGMs leverage AI-based data reconciliation to deliver precise insights even when they’re analyzing small or noisy data sets. Data reconciliation ensures that data is consistent, accurate, and complete: it involves comparing and validating datasets to identify discrepancies, errors, or inconsistencies. By exploiting both the spatial and temporal structure of the data, LGMs produce good predictions even with minimal or flawed data, and the predictions come with uncertainty quantification as well as interpretation.
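As a rough illustration of what exploiting spatial (store) and temporal (week) structure can do for sparse, noisy data, the sketch below fills in missing entries of a store-by-week sales matrix with a simple low-rank (SVD-based) completion loop. This is a generic textbook technique, assumed here only to convey the intuition; it is not the algorithm inside Ikigai's LGMs, and it omits the uncertainty quantification mentioned above.

```python
# Illustrative sketch only: low-rank matrix completion across stores (rows)
# and weeks (columns) to reconcile sparse, noisy sales data.
import numpy as np

def complete_low_rank(matrix, rank=2, n_iters=50):
    """Iteratively impute missing entries (NaN) with a rank-`rank` SVD approximation."""
    observed = ~np.isnan(matrix)
    filled = np.where(observed, matrix, np.nanmean(matrix))
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        filled = np.where(observed, matrix, approx)  # keep observed values fixed
    return filled

# Toy data: 10 stores x 12 weeks of sales, with ~40% of entries missing.
rng = np.random.default_rng(1)
true_sales = np.outer(rng.uniform(50, 150, 10), 1 + 0.3 * np.sin(np.arange(12) / 2))
noisy = true_sales + rng.normal(0, 5, true_sales.shape)
missing = rng.random(true_sales.shape) < 0.4
sparse = np.where(missing, np.nan, noisy)

reconciled = complete_low_rank(sparse, rank=1)
print("mean abs error on missing cells:",
      round(np.abs(reconciled - true_sales)[missing].mean(), 2))
```

Because stores tend to share demand patterns and weeks follow a common seasonal shape, even a crude structural model recovers sensible values for the cells that were never observed.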
How does Ikigai’s mission to democratize AI align with the development of LGMs? How do you see LGMs shaping the future of AI in business?
AI is changing the way we work, and enterprises must be prepared to AI-enable workers of all types. The Ikigai platform offers a simple low-code/no-code experience for business users as well as a full AI Builder and API experience for data scientists and developers. In addition, we offer free education through our Ikigai Academy so anyone can learn the fundamentals of AI and get trained and certified on the Ikigai platform.
LGMs will have a huge impact more broadly on businesses looking to employ AI. Enterprises want to use genAI for use cases that require numerical predictive and statistical modelling, such as probabilistic forecasting and scenario planning. But LLMs weren’t built for these use cases, and lots of organizations think that LLMs are the only form of genAI. So they try Large Language Models for forecasting and planning purposes, and they don’t deliver. They give up and assume genAI just isn’t capable of supporting these applications. When they discover LGMs, they’ll realize they indeed can leverage generative AI to drive better forecasting and planning and help them make better business decisions.
Ikigai’s platform integrates LGMs with a human-centric approach through your eXpert-in-the-loop feature. Could you explain how this combination enhances the accuracy and adoption of AI models in enterprises?
AI needs guardrails, as organizations are naturally wary about whether the technology will perform accurately and effectively. One of these guardrails is human oversight, which can infuse critical domain expertise and ensure AI models are delivering forecasts and predictions that are relevant and useful to the business. When organizations can put a human expert in a role monitoring AI, they’re able to trust it and verify its accuracy. This overcomes a major hurdle to adoption.
What are the key technological innovations in Ikigai’s platform that make it stand out from other AI solutions currently available on the market?
Our core LGM technology is the biggest differentiator. Ikigai is a pioneer in this space, without peer. My co-founder and I invented LGMs during our academic work at MIT; we are the innovators in Large Graphical Models and in the use of genAI on structured data.
What impact do you envision LGMs having on industries that rely heavily on accurate forecasting and planning, such as retail, supply chain management, and finance?
LGMs will be completely transformative because they are specifically designed for use on tabular, time series data, which is the lifeblood of every company. Virtually every organization in every industry depends heavily on structured data analysis for demand forecasting and business planning to make sound short- and long-term decisions, whether those decisions are related to merchandising, hiring, investing, product development, or other categories. LGMs provide the closest thing to a crystal ball for making the best decisions.
Looking forward, what are the next steps for Ikigai Labs in advancing the capabilities of LGMs? Are there any new features or developments in the pipeline that you’re particularly excited about?
Our existing aiPlan model supports what-if and scenario analysis. Looking ahead, we’re aiming to develop it further and enable full-featured reinforcement learning for operations teams. This would enable an ops team to do AI-driven planning in both the short and long term.
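For intuition, what-if analysis of this kind can be sketched as perturbing a baseline demand forecast under named scenarios and measuring how a fixed plan holds up. The snippet below does this with simulated numbers; the scenarios, prices, and stocking plan are hypothetical and this is not drawn from the aiPlan product or its API.

```python
# Minimal what-if sketch (hypothetical, not the aiPlan API): evaluate one
# stocking plan against a baseline forecast and two perturbed scenarios.
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.lognormal(mean=6.0, sigma=0.25, size=5000)   # simulated season demand

scenarios = {"baseline": 1.0, "promo uplift +20%": 1.2, "downturn -15%": 0.85}
plan_stock, price, cost = 450, 20.0, 8.0

for name, factor in scenarios.items():
    demand = baseline * factor
    sold = np.minimum(plan_stock, demand)
    profit = (price * sold - cost * plan_stock).mean()
    stockout_rate = (demand > plan_stock).mean()
    print(f"{name}: expected profit ~{profit:,.0f}, stockout probability {stockout_rate:.0%}")
```

A reinforcement-learning extension, as described above, would go further by searching over sequences of such decisions rather than scoring a single fixed plan.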
Thank you for the great interview. Readers who wish to learn more should visit Ikigai Labs.