New Relic announced it is integrating its platform with NVIDIA NIM inference microservices to reduce the complexity and cost of developing, deploying, and monitoring generative AI (GenAI) applications. Customers can now use New Relic AI monitoring to gain broad visibility across the AI stack for applications built with NVIDIA NIM, with a simplified setup and enhanced data security. This complements the robust security features and ease of use of NVIDIA NIM's self-hosted models, accelerating generative AI application delivery.

Together, New Relic and NVIDIA NIM can help customers adopt AI faster and reach ROI sooner. Organizations are rapidly adopting generative AI to enhance digital experiences, boost productivity, and drive revenue. Gartner predicts that over 80% of enterprises will use GenAI or deploy GenAI apps by 2026.

Quick deployment and faster ROI are crucial for organizations to gain market advantage, and observability is the key. It offers an expansive, real-time view of the AI application stack - across services, infrastructure, and the AI layer - to ensure efficient, reliable, and cost-effective operation.

New Relic Unlocks Faster ROI for AI Applications Built with NVIDIA NIM

AI applications can complicate tech stacks, raise security concerns, and be cost-prohibitive. New Relic AI monitoring provides a broad view of the AI stack, along with key metrics on throughput, latency, and costs, while ensuring data privacy. It also traces request flows across services and models to reveal the inner workings of AI apps.
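Metrics like latency and token counts can be captured around any model call. The sketch below is illustrative only: the `measure_call` wrapper and the stubbed model are hypothetical stand-ins, not New Relic or NVIDIA APIs, and token counts are approximated with a whitespace split.

```python
import time
from dataclasses import dataclass


@dataclass
class CallMetrics:
    latency_s: float        # wall-clock time for the model call
    prompt_tokens: int      # approximate tokens sent in the request
    completion_tokens: int  # approximate tokens in the reply


def measure_call(model_fn, prompt: str) -> tuple[str, CallMetrics]:
    """Time a model call and record rough token counts.

    `model_fn` stands in for any inference client; a naive
    whitespace split approximates token counts for illustration.
    """
    start = time.perf_counter()
    response = model_fn(prompt)
    latency = time.perf_counter() - start
    metrics = CallMetrics(
        latency_s=latency,
        prompt_tokens=len(prompt.split()),
        completion_tokens=len(response.split()),
    )
    return response, metrics


# Stubbed "model" so the example runs without a live endpoint.
def fake_model(prompt: str) -> str:
    return "stubbed answer to: " + prompt


reply, m = measure_call(fake_model, "summarize this incident report")
print(m.latency_s, m.prompt_tokens, m.completion_tokens)
```

In a real deployment, an agent would report these numbers to the observability backend alongside trace context rather than printing them.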

New Relic extends its in-depth monitoring to NVIDIA NIM, supporting a wide range of AI models including Databricks DBRX, Google's Gemma, Meta's Llama 3, Microsoft's Phi-3, Mistral Large and Mixtral 8x22B, and Snowflake's Arctic. This helps organizations deploy AI applications built with NVIDIA NIM confidently, accelerate time-to-market, and improve ROI. Key features and use cases for AI monitoring include:

- Full AI stack visibility: Spot issues faster with a view across apps, NVIDIA GPU-based infrastructure, the AI layer, response quality, token counts, and APM golden signals.

- Deep trace insights for every response: Fix performance and quality issues such as bias, toxicity, and hallucinations by tracing the lifecycle of each AI response.
- Model inventory: Easily isolate model-related performance, error, and cost issues by tracking key metrics across NVIDIA NIM inference microservices in one place.
- Model comparison: Compare the performance of NVIDIA NIM inference microservices running in production in a single view to optimize model choice for infrastructure and user needs.

- Deep GPU insights: Analyze critical accelerated computing metrics such as GPU utilization, temperature, and performance states to understand context and resolve problems faster.
- Enhanced data security: In addition to the security advantages of NVIDIA's self-hosted models, New Relic lets customers exclude sensitive data (PII) in AI requests and responses from monitoring.

New Relic Deepens Its 60+ AI Integration Ecosystem with NVIDIA

This integration follows New Relic's recent addition to NVIDIA's AIOps partner ecosystem.
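One common way to keep sensitive values out of telemetry is to redact them client-side before anything is recorded. The sketch below is a generic illustration, not a New Relic feature: the `redact` helper and the regex patterns are hypothetical, and a production system would use a vetted PII-detection library.

```python
import re

# Illustrative patterns for two common PII shapes; real deployments
# need locale-aware rules and a maintained detection library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before logging."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text


prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the refund."
print(redact(prompt))
```

Applying such a filter to AI requests and responses before they leave the application keeps raw PII out of the monitoring pipeline entirely.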

Leveraging NVIDIA accelerated computing, New Relic combines observability and AI to streamline IT operations and accelerate innovation through its machine learning capabilities and generative AI assistant, New Relic AI. New Relic offers the most expansive observability solution, with 60+ AI integrations including NVIDIA GPUs and NVIDIA Triton Inference Server software. New Relic AI monitoring is available as part of its all-in-one observability platform and is offered via its usage-based pricing model.