Evaluating the Effectiveness of Large Language Models: Challenges and Insights // Aniket Singh // #248

MLOps.community - A podcast by Demetrios Brinkmann

Evaluating the Effectiveness of Large Language Models: Challenges and Insights // MLOps Podcast #248 with Aniket Kumar Singh, CTO @ MyEvaluationPal | ML Engineer @ Ultium Cells.

Abstract
Dive into the world of Large Language Models (LLMs) like GPT-4: why it is crucial to evaluate these models, how we measure their performance, and the common hurdles we face. Drawing from his research, Aniket shares insights on the importance of prompt engineering and model selection. He also discusses real-world applications in healthcare, economics, and education, and highlights future directions for improving LLMs.

Bio
Aniket is a Vision Systems Engineer at Ultium Cells, skilled in Machine Learning and Deep Learning. He is also engaged in AI research, focusing on Large Language Models (LLMs).

MLOps Jobs board
https://mlops.pallet.xyz/jobs

MLOps Swag/Merch
https://mlops-community.myshopify.com/

Related Links
Website: www.aniketsingh.me
Aniket's AI Research for Good blog, where he plans to share new research focused on doing good: www.airesearchforgood.org
Aniket's papers: https://scholar.google.com/citations?user=XHxdWUMAAAAJ&hl=en

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Aniket on LinkedIn: https://www.linkedin.com/in/singh-k-aniket/

Timestamps:
[00:00] Aniket's preferred coffee
[00:14] Takeaways
[01:29] Aniket's job and hobby
[03:06] Evaluating LLMs: Systems-Level Perspective
[05:55] Rule-based system
[08:32] Evaluation Focus: Model Capabilities
[13:04] LLM Confidence
[13:56] Problems with LLM Ratings
[17:17] Understanding AI Confidence Trends
[18:28] Aniket's papers
[20:40] Testing AI Awareness
[24:36] Agent Architectures Overview
[27:05] Leveraging LLMs for Tasks
[29:53] Closed Systems in Decision-Making
[31:28] Navigating Model Agnosticism
[33:47] Robust Pipeline vs Robust Prompt
[34:40] Wrap up