
Machine Learning and Algorithms - Gordon Hart - Earley AI Podcast with Seth Earley & Chris Featherstone - Episode #027

50:47

Today’s guest is Gordon Hart, Co-Founder and Head of Product at Kolena. Gordon joins Seth Earley and Chris Featherstone to share why machine learning algorithms pose challenges from several different perspectives. Gordon also discusses the core problem his company faced before turning it around. Be sure to listen to Gordon's advice on how to validate models in order to ship a successful product!

Takeaways:

  • Whether developing algorithms internally or buying models from vendors, Gordon constantly ran into unexpected model behavior, which left him feeling he couldn’t trust the models to behave sensibly.
  • Gordon started his company because, time after time, he was getting blindsided by these failures. He knew there was a better way to develop models and validate what they were doing.
  • The key challenge Gordon and his team ran into was relying on a single number: an aggregate metric computed across their entire benchmark, which told them nothing about where the model was failing.
  • Gordon stresses the importance of walking through scenarios with your products. When you break your evaluation down into different scenarios, testing shows you not only how a model improves in aggregate over previous models, but also how its failures are distributed (see the sketch after this list).
  • Testing data is more critical than training data, because your testing data is what determines whether a new model has the correct behaviors.
  • Testing the full pipeline, from pre-processing through post-processing, rather than just the model component, often gives better visibility into how your product will actually behave once deployed.

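To make the scenario-based evaluation idea concrete, here is a minimal sketch of the kind of breakdown Gordon describes. This is not Kolena's API; the scenario tags, field names, and toy data are all hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical benchmark: each example carries a scenario tag
# alongside its ground-truth label and the model's prediction.
benchmark = [
    {"scenario": "daytime", "label": 1, "prediction": 1},
    {"scenario": "daytime", "label": 0, "prediction": 0},
    {"scenario": "night", "label": 1, "prediction": 0},
    {"scenario": "night", "label": 1, "prediction": 0},
    {"scenario": "occluded", "label": 0, "prediction": 0},
]

def accuracy(examples):
    """Fraction of examples where the prediction matches the label."""
    correct = sum(e["prediction"] == e["label"] for e in examples)
    return correct / len(examples)

# The single aggregate number Gordon warns about: it can look acceptable...
print(f"aggregate accuracy: {accuracy(benchmark):.2f}")

# ...but stratifying by scenario reveals where the failures cluster
# (here, every night-time example is misclassified).
by_scenario = defaultdict(list)
for example in benchmark:
    by_scenario[example["scenario"]].append(example)

for scenario, examples in sorted(by_scenario.items()):
    print(f"{scenario}: accuracy {accuracy(examples):.2f} on {len(examples)} examples")
```

Comparing these per-scenario numbers between a candidate model and its predecessor shows not only whether the model improved in aggregate, but also how the remaining failures are distributed.
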
Quote of the Show:

  • “Having your evaluation metrics align with the way that your system is going to be evaluated in the field is a key thing that you can do to get a better understanding of ‘is this model better for what I set out to do?’” (22:36)

Links:

Ways to Tune In:

Thanks to our sponsors:
