Jeremy Kelway, VP of Engineering for Analytics, Data, and AI at EDB – Interview Series

Jeremy (Jezz) Kelway is a Vice President of Engineering at EDB, based in the Pacific Northwest, USA. He leads a team focused on delivering Postgres-based analytics and AI solutions. With experience in Database-as-a-Service (DBaaS) management, operational leadership, and innovative technology delivery, Jezz has a strong background in driving advancements in emerging technologies.

EDB supports PostgreSQL to align with business priorities, enabling cloud-native application development, cost-effective migration from legacy databases, and flexible deployment across cloud environments. With a growing talent pool and robust performance, EDB ensures security, reliability, and superior customer experiences for mission-critical applications.

Why is Postgres increasingly becoming the go-to database for building generative AI applications, and what key features make it suitable for this evolving landscape?

With nearly 75% of U.S. companies adopting AI, these businesses require a foundational technology that will allow them to quickly and easily access their abundance of data and fully embrace AI. This is where Postgres comes in.

Postgres is perhaps the perfect technical example of an enduring technology that has reemerged in popularity with greater relevance in the AI era than ever before. With robust architecture, native support for multiple data types, and extensibility by design, Postgres is a prime candidate for enterprises looking to harness the value of their data for production-ready AI in a sovereign and secure environment.

Over the 20 years that EDB has existed, and the 30-plus that Postgres itself has, the industry has moved through evolutions, shifts, and innovations, and through it all users continue to “just use Postgres” to tackle their most complex data challenges.

How is Retrieval-Augmented Generation (RAG) being applied today, and how do you see it shaping the future of the “Intelligent Economy”?

RAG flows are gaining significant popularity and momentum, with good reason! When framed in the context of the ‘Intelligent Economy’, RAG flows enable access to information in ways that facilitate the human experience, saving time by automating and filtering data and information output that would otherwise require significant manual effort to create. The increased accuracy of the ‘search’ step (Retrieval), combined with the ability to add specific content to a more widely trained LLM, offers a wealth of opportunity to accelerate and enhance informed decision-making with relevant data. A useful way to think about this is as having a skilled research assistant who not only finds the right information but also presents it in a way that fits the context.
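The two steps described here, retrieval followed by prompt augmentation, can be sketched in a few lines. This is a toy illustration, not EDB's implementation: the corpus, word-overlap scoring, and prompt template are all illustrative stand-ins (a real flow would score with embeddings and call an LLM).

```python
import re
from typing import List

# Toy document store standing in for a real retrieval corpus.
CORPUS = [
    "Postgres supports JSON, geospatial, and vector data types.",
    "pgvector adds similarity search over embeddings to Postgres.",
    "EDB provides enterprise tooling around open-source Postgres.",
]

def tokens(text: str) -> set:
    # Lowercased word set; a stand-in for real embedding-based relevance.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    # The 'search' (Retrieval) step: rank documents by shared words.
    q = tokens(query)
    return sorted(corpus, key=lambda doc: -len(q & tokens(doc)))[:k]

def build_prompt(query: str, snippets: List[str]) -> str:
    # The 'Augmented' step: prepend retrieved context to the question
    # before handing it to the generative model.
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

question = "How does Postgres handle vector data?"
prompt = build_prompt(question, retrieve(question, CORPUS))
```

The generative model then answers from the augmented prompt rather than from its training data alone, which is what grounds the response in current, specific content.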

What are some of the most significant challenges organizations face when implementing RAG in production, and what strategies can help address these challenges?

At the fundamental level, your data quality is your AI differentiator. The accuracy of a RAG application, and particularly of its generated responses, will always be subject to the quality of the data being used to train and augment the output. The sophistication of the generative model will be of less benefit where the inputs are flawed, leading to inappropriate or unexpected results for the query (often referred to as ‘hallucinations’). The quality of your data sources will always be key to the success of the retrieved content feeding the generative steps: if the output is to be as accurate as possible, the contextual data sources for the LLM need to be as up to date as possible.

From a performance perspective, adopting a proactive posture about what your RAG application is attempting to achieve, along with when and where the data is being retrieved, will position you well to understand potential impacts. For instance, if your RAG flow is retrieving data from transactional data sources (i.e., constantly updated databases that are critical to your business), monitoring the performance of those key data sources, in conjunction with the applications drawing data from them, will show the impact of your RAG flow steps. These measures are an excellent step toward managing any potential or real-time implications for the performance of critical transactional data sources. This information can also provide valuable context for tuning the RAG application to focus on appropriate data retrieval.

Given the rise of specialized vector databases for AI, what advantages does Postgres offer over these solutions, particularly for enterprises looking to operationalize AI workloads?

A mission-critical vector database must support demanding AI workloads while ensuring data security, availability, and the flexibility to integrate with existing data sources and structured information. AI/RAG solutions will often rely on a vector database because these applications involve similarity assessments and recommendations over high-dimensional data. Vector databases serve as an efficient and effective data source for the storage, management, and retrieval that these critical data pipelines require.
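The similarity assessment at the heart of this is straightforward to illustrate: embeddings are compared by a distance measure such as cosine similarity, and the nearest neighbors are returned. The three-dimensional toy vectors and labels below are illustrative; real embeddings have hundreds or thousands of dimensions, and a vector database adds indexing so this search stays fast at scale.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, items, k=1):
    # items: list of (label, vector) pairs; return the k most similar labels.
    ranked = sorted(items, key=lambda it: cosine(query, it[1]), reverse=True)
    return [label for label, _ in ranked[:k]]

# Toy "embeddings" for three document collections.
docs = [
    ("invoices",        [0.9, 0.1, 0.0]),
    ("support tickets", [0.1, 0.9, 0.2]),
    ("contracts",       [0.8, 0.2, 0.1]),
]
```

A query vector pointing mostly along the first dimension would rank "invoices" and "contracts" as its nearest neighbors; this brute-force scan is what vector indexes approximate efficiently.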

How does EDB Postgres handle the complexities of managing vector data for AI, and what are the key benefits of integrating AI workloads into a Postgres environment?

While Postgres does not have native vector capability, pgvector is an extension that allows you to store your vector data alongside the rest of your data in Postgres. This allows enterprises to leverage vector capabilities alongside existing database structures, simplifying the management and deployment of AI applications by reducing the need for separate data stores and complex data transfers.
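Concretely, "vector data alongside the rest of your data" looks like an ordinary table with one extra column. The statements below are a sketch; the table name, columns, and embedding dimension are illustrative, and they assume a Postgres instance with the pgvector extension available.

```python
# Illustrative pgvector DDL: embeddings live in the same row as the
# relational data they describe, so no separate vector store is needed.
DDL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    title     text,
    body      text,
    embedding vector(1536)  -- dimension must match your embedding model
);
"""

def knn_query(k: int) -> str:
    # pgvector's <-> operator is L2 distance; <=> gives cosine distance.
    # %(query_embedding)s is a driver-level bind parameter.
    return (
        "SELECT id, title FROM documents "
        f"ORDER BY embedding <-> %(query_embedding)s LIMIT {k};"
    )
```

Because the embedding sits next to `title` and `body`, a similarity search can join, filter, and secure vector results with the same SQL machinery used for everything else in the database.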

With Postgres becoming a central player in both transactional and analytical workloads, how does it help organizations streamline their data pipelines and unlock faster insights without adding complexity?

These data pipelines are effectively fueling AI applications. With the myriad data storage formats, locations, and data types, the complexities of the retrieval phase quickly become a tangible challenge, particularly as AI applications move from proof of concept into production.

The EDB Postgres AI Pipelines extension is an example of how Postgres is playing a key role in shaping the ‘data management’ part of the AI application story. It simplifies data processing with automated pipelines that fetch data from Postgres or object storage, generate vector embeddings as new data is ingested, and trigger updates to embeddings when source data changes, meaning always-up-to-date data for query and retrieval without tedious maintenance.
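The core idea such a pipeline automates, re-embedding only what changed, can be sketched as follows. This is not the extension's API: the content-hash scheme, row layout, and `fake_embed` stand-in are all illustrative assumptions.

```python
import hashlib

def fake_embed(text: str) -> list:
    # Stand-in for a call to a real embedding model.
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

def refresh_embeddings(rows: dict, cache: dict) -> dict:
    """Re-embed only rows whose content changed since the last run.

    rows:  {row_id: text} from the source table or object store.
    cache: {row_id: (content_hash, embedding)} from previous runs.
    """
    for row_id, text in rows.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if row_id not in cache or cache[row_id][0] != digest:
            # Changed or new content: regenerate its embedding.
            cache[row_id] = (digest, fake_embed(text))
    return cache
```

Running this on ingest keeps embeddings current without re-processing the whole corpus; the pipeline extension's value is triggering this kind of refresh automatically rather than leaving it to scheduled maintenance scripts.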

What innovations or developments can we expect from Postgres in the near future, especially as AI continues to evolve and demand more from data infrastructure?

The vector database is by no means a finished article; further development and enhancement are expected as utilization of, and reliance on, vector database technology continues to grow. The PostgreSQL community continues to innovate in this space, seeking methods to enhance indexing to allow for more complex search criteria, alongside the progression of the pgvector capability itself.

How is Postgres, especially with EDB’s offerings, supporting the need for multi-cloud and hybrid cloud deployments, and why is this flexibility important for AI-driven enterprises?

A recent EDB study shows that 56% of enterprises now deploy mission-critical workloads in a hybrid model, highlighting the need for solutions that support both agility and data sovereignty. Postgres, with EDB’s enhancements, provides the essential flexibility for multi-cloud and hybrid cloud environments, empowering AI-driven enterprises to manage their data with both flexibility and control.

EDB Postgres AI brings cloud agility and observability to hybrid environments with sovereign control. This approach allows enterprises to control the management of AI models, while also streamlining transactional, analytical, and AI workloads across hybrid or multi-cloud environments. By enabling data portability, granular TCO control, and a cloud-like experience on a variety of infrastructures, EDB supports AI-driven enterprises in realizing faster, more agile responses to complex data demands.

As AI becomes more embedded in enterprise systems, how does Postgres support data governance, privacy, and security, particularly in the context of handling sensitive data for AI models?

As AI becomes both an operational cornerstone and a competitive differentiator, enterprises face mounting pressure to safeguard data integrity and uphold rigorous compliance standards. This evolving landscape puts data sovereignty front and center, where strict governance, security, and visibility are not just priorities but prerequisites. Businesses need to know, with certainty, where their data is and where it’s going.

Postgres excels as the backbone for AI-ready data environments, offering advanced capabilities to manage sensitive data across hybrid and multi-cloud settings. Its open-source foundation means enterprises benefit from constant innovation, while EDB’s enhancements ensure adherence to enterprise-grade security, granular access controls, and deep observability—key for handling AI data responsibly. EDB’s Sovereign AI capabilities build on this posture, focusing on bringing AI capability to the data, thus facilitating control over where that data is moving to, and from.

What makes EDB Postgres uniquely capable of scaling AI workloads while maintaining high availability and performance, especially for mission-critical applications?

EDB Postgres AI helps elevate data infrastructure to a strategic technology asset by bringing analytical and AI systems closer to customers’ core operational and transactional data—all managed through Postgres. It provides the data platform foundation for AI-driven apps by reducing infrastructure complexity, optimizing cost-efficiency, and meeting enterprise requirements for data sovereignty, performance, and security.

It is an elegant data platform for modern operators, developers, data engineers, and AI application builders who require a battle-proven solution for their mission-critical workloads, providing access to analytics and AI capabilities whilst using the enterprise’s core operational database system.

Thank you for the great interview; readers who wish to learn more should visit EDB.

The post Jeremy Kelway, VP of Engineering for Analytics, Data, and AI at EDB – Interview Series appeared first on Unite.AI.
