Content provided by Brian Ray and Don Sheu. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Brian Ray and Don Sheu or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Dr. Rachael Tatman: Data Scientist at Kaggle, Computational Sociolinguistics

27:03
Manage episode 304339188 series 2993215
This week’s episode may be our most cerebral to date, probably thanks to hosting our first University of Washington Husky, a Ph.D. graduate. Dr. Rachael Tatman shares snippets of her experience at Kaggle, stochastic approaches to ML models, errors in ML models, understanding prediction, and the importance of reproducibility. A big takeaway references Dr. Tatman’s PyCon talk, “Put down the deep learning: When not to use neural networks and what to do instead.” We explore how many businesses benefit from a simple linear regression model instead of investing millions of dollars of compute time in a deep learning approach. Our conversation with Dr. Tatman also includes a discussion of linguistics and whether Don will ever get accurate real-time translations of his Korean friends’ tweets. So far, our conclusion is that we’re far from the singularity. Episode seven of the 26.1 AI Podcast delivers lots of value for business leaders contemplating their future AI strategy.

48 episodes

