MongoDB, Inc. announced the general availability of MongoDB Atlas Vector Search on Knowledge Bases for Amazon Bedrock to enable organizations to build generative AI application features using fully managed foundation models (FMs) more easily. MongoDB Atlas is the world's most widely available developer data platform and provides vector database capabilities that make it seamless for organizations to use their real-time operational data to power generative AI applications. Amazon Bedrock is a fully managed service from Amazon Web Services (AWS) that offers a choice of high-performing FMs from leading AI companies via a single API, along with a broad set of capabilities organizations need to build generative AI applications with security, privacy, and responsible AI.

Customers across industries can now use the integration with their proprietary data to more easily create applications that use generative AI to autonomously complete complex tasks and to deliver up-to-date, accurate, and trustworthy responses to end users. The new integration with Amazon Bedrock allows organizations to deploy generative AI applications on AWS more quickly and easily, acting on data processed by MongoDB Atlas Vector Search to deliver more accurate, relevant, and trustworthy responses. Unlike add-on solutions that only store vector data, MongoDB Atlas Vector Search powers generative AI applications as a highly performant, scalable vector database, with the added benefit of being integrated with a globally distributed operational database that can store and process all of an organization's data.

Customers can use the integration between MongoDB Atlas Vector Search and Amazon Bedrock to privately customize FMs, including large language models (LLMs) from AI21 Labs, Amazon, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, with their real-time operational data by converting it into vector embeddings for use with LLMs. Using Agents for Amazon Bedrock for retrieval-augmented generation (RAG), customers can then build applications in which LLMs respond to user queries with relevant, contextualized responses, without needing to write code manually. For example, a retail organization can more easily develop a generative AI application that uses autonomous agents for tasks like processing real-time inventory requests or personalizing customer returns and exchanges by automatically suggesting in-stock merchandise based on customer feedback. Organizations can also isolate and scale their generative AI workloads independently of their core operational database with MongoDB Atlas Search Nodes, optimizing cost and performance with up to 60 percent faster query times.
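To illustrate what retrieval over operational data can look like in practice, the following minimal Python sketch constructs a MongoDB Atlas `$vectorSearch` aggregation stage of the kind a RAG pipeline would run against a collection of embedded documents. The index name, field path, and tuning values are hypothetical placeholders, and in a real deployment the query vector would be produced by an embedding model available through Amazon Bedrock rather than the dummy vector used here.

```python
# Minimal sketch: build a MongoDB Atlas $vectorSearch aggregation
# pipeline for retrieval-augmented generation (RAG). Index name,
# field path, and tuning values below are hypothetical examples.

def build_vector_search_pipeline(query_vector, index="vector_index",
                                 path="embedding", limit=5,
                                 num_candidates=100):
    """Return an aggregation pipeline that retrieves the `limit`
    documents whose stored embeddings are closest to `query_vector`."""
    return [
        {
            "$vectorSearch": {
                "index": index,                   # Atlas Vector Search index name
                "path": path,                     # field holding the embeddings
                "queryVector": list(query_vector),
                "numCandidates": num_candidates,  # ANN candidates to consider
                "limit": limit,                   # documents returned downstream
            }
        },
        # Keep only the fields the LLM prompt needs, plus the match score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]


if __name__ == "__main__":
    # A short dummy vector stands in for a Bedrock-generated embedding.
    # The resulting pipeline would be passed to pymongo's
    # collection.aggregate(...) and the retrieved text fed to an LLM.
    pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3])
    print(pipeline[0]["$vectorSearch"]["limit"])  # → 5
```

The retrieved documents' `text` fields would then be supplied as context to the foundation model, which is the step that Knowledge Bases and Agents for Amazon Bedrock manage on the customer's behalf.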

With fully managed capabilities, this new integration enables joint AWS and MongoDB customers to securely use generative AI with their proprietary data to its full extent throughout an organization, and to realize business value more quickly, with less operational overhead and manual work.