OpenLLMetry - Observing the Quality of LLMs with Nir Gazit
From PurePerformance
Length:
51 minutes
Released:
Jan 15, 2024
Format:
Podcast episode
Description
It's only been a year since ChatGPT was introduced. Since then, we have seen LLMs (Large Language Models) and generative AIs being integrated into everyday software applications. Developers face the hard choice of picking the right model for their use case to produce the quality of output their end users demand.

Tune in to this session, where Nir Gazit, CEO and Co-founder of Traceloop, educates us about how to observe and quantify the quality of LLMs. Besides performance and cost, engineers need to look into quality attributes such as accuracy, readability, or grammatical correctness.

Nir introduces us to OpenLLMetry - a set of open source extensions built on top of OpenTelemetry that provide automated observability into the usage of LLMs, helping developers better understand how to optimize that usage. His advice to every developer: start measuring the quality of your LLMs on Day 1, and continuously evaluate as you change your model, your prompt, and the way you interact with your LLM stack!

If you have more questions about LLM Observability, check out the following links:
OpenLLMetry GitHub Page: https://github.com/traceloop/openllmetry
Traceloop Website: https://www.traceloop.com/
OpenLLMetry Documentation: https://traceloop.com/docs/openllmetry
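The "measure quality on Day 1, then continuously evaluate" advice can be sketched as a small evaluation loop that scores model outputs against a fixed evaluation set. The function names and metrics below are illustrative assumptions for this sketch, not part of the OpenLLMetry API:

```python
# Illustrative sketch only: a tiny evaluation harness for LLM output quality.
# The names evaluate_output / evaluate_suite and the chosen metrics are
# assumptions, not OpenLLMetry or Traceloop APIs.

def evaluate_output(expected: str, actual: str) -> dict:
    """Score one LLM response on a couple of simple quality proxies."""
    words = actual.split()
    # crude sentence split on common terminators
    sentences = [s for s in actual.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        # exact-match accuracy against the expected answer
        "accuracy": 1.0 if actual.strip() == expected.strip() else 0.0,
        # readability proxy: average words per sentence (lower is simpler)
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

def evaluate_suite(eval_set: list[tuple[str, str]]) -> dict:
    """Average per-response scores over the whole evaluation set.

    Re-run this after every change to the model, the prompt, or the
    surrounding LLM stack to catch quality regressions early.
    """
    scores = [evaluate_output(expected, actual) for expected, actual in eval_set]
    return {key: sum(s[key] for s in scores) / len(scores) for key in scores[0]}
```

In practice these scores would be attached to the OpenTelemetry traces that OpenLLMetry emits, so quality can be tracked alongside latency and cost over time.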