NB 30 - Addressing the AI Hallucination Problem

50:25

Content provided by Geoff Livingston and Greg Verdino. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Geoff Livingston and Greg Verdino or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Special guest Eyelevel.ai CEO Neil Katz joins Geoff and Greg to discuss generative AI's hallucination problem. Large language models' (LLMs') propensity to hallucinate in their answers represents one of the biggest barriers to enterprise adoption. Eyelevel boasts 95% accuracy on private-instance responses, achieved by using its APIs and tools to prepare proprietary data for LLM consumption.

The trio dives into the LLM marketplace, discussing why brands choose to implement a private instance, how the LLM market has evolved, and what causes the hallucination problem. They then turn to the enterprise data problem and why retrieval-augmented generation (RAG) techniques still need additional help to strengthen LLM responses.
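
As a point of reference for the RAG discussion, below is a minimal, hypothetical sketch of the retrieval-augmented generation pattern: retrieve the most relevant passages from a private document store, then ground the model's prompt in them. The toy keyword-overlap retriever, the sample airline documents, and the prompt wording are illustrative assumptions only; they are not Eyelevel's pipeline, which the episode describes only at a high level.

# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever uses simple keyword overlap purely for illustration;
# production systems use vector embeddings and careful data preparation.

from typing import List

DOCUMENTS = [
    "Checked baggage allowance is one 23 kg bag per economy passenger.",
    "Flight changes made within 24 hours of booking are free of charge.",
    "Pets under 8 kg may travel in the cabin in an approved carrier.",
]

def retrieve(question: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by shared words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, passages: List[str]) -> str:
    """Ground the model in retrieved passages and tell it not to guess."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # Whichever LLM the enterprise has chosen (private instance or API)
    # would receive this grounded prompt; here we just print it.
    question = "How much checked baggage can I bring in economy?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)

The retrieval step is where enterprise data preparation (the subject of the 12:29 chapter) matters most: poorly parsed or poorly chunked documents degrade retrieval, which in turn leaves the model guessing and hallucinating.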

Chapters include:

0:00 Start

4:40 Private instance versus licensing enterprise editions of LLMs

7:32 Eyelevel’s Air France implementation achieving 95% success rates

12:29 The need for enterprise data preparation

18:14 The hallucination problem with LLMs and RAG approaches

28:23 How governance can or cannot help enterprises

32:32 Why some use open source versus proprietary LLMs

39:12 The future of AI and an incredible vision

Learn more about Eyelevel at https://www.eyelevel.ai/ or contact Neil Katz via LinkedIn at https://www.linkedin.com/in/neilkatz/

Learn more about your ad choices. Visit megaphone.fm/adchoices
