Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2
Part 2 of this series could easily have been renamed “AI for science: The expert’s guide to practical machine learning.” We continue our discussion with Christoph Molnar and Timo Freiesleben, looking at how scientists can apply the supervised machine learning techniques from the previous episode to their own research.
Introduction to supervised ML for science (0:00)
- Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box”
The model as the expert? (1:00)
- Evaluation metrics have profound downstream effects on all modeling decisions
- Data augmentation offers a simple yet powerful way to incorporate domain knowledge (see the sketch after this list)
- Domain expertise is often undervalued in data science despite being crucial
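To make the data augmentation point concrete, here is a minimal sketch (our illustration, not code from the episode or the book): if the quantity you measure is known to be invariant to rotations and mirror flips, you can encode that domain knowledge by adding transformed copies of each example to the training set. The symmetry assumed here is purely hypothetical.

```python
import numpy as np

def augment_with_symmetries(images, labels):
    """Expand a training set using a known physical symmetry.

    Assumption (illustrative): the target is invariant to 90-degree
    rotations and horizontal flips, so transformed copies are valid
    training examples with the same label.
    """
    aug_images, aug_labels = [], []
    for img, y in zip(images, labels):
        for k in range(4):                         # 0, 90, 180, 270 degrees
            rotated = np.rot90(img, k)
            aug_images.append(rotated)
            aug_labels.append(y)
            aug_images.append(np.fliplr(rotated))  # mirrored copy
            aug_labels.append(y)
    return np.stack(aug_images), np.array(aug_labels)

# Toy usage: 10 grayscale 32x32 "measurements" become 80 training examples.
images = np.random.rand(10, 32, 32)
labels = np.random.randint(0, 2, size=10)
X_aug, y_aug = augment_with_symmetries(images, labels)
print(X_aug.shape, y_aug.shape)  # (80, 32, 32) (80,)
```

The same idea carries over to any invariance or constraint your domain guarantees: transform the data in ways the underlying science says should not change the label.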
Measuring causality: Metrics and blind spots (10:10)
- Causality approaches in ML range from exploring associations to inferring treatment effects
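A rough illustration of that range, using synthetic data and the simplest possible causal estimate, a difference in means under an assumed randomized treatment (our sketch, not an example from the episode):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic example: a randomized binary treatment and a noisy outcome.
treatment = rng.integers(0, 2, size=n)
noise = rng.normal(0, 1, size=n)
outcome = 2.0 * treatment + noise          # true average treatment effect = 2.0

# Association: correlation between treatment and outcome.
association = np.corrcoef(treatment, outcome)[0, 1]

# Causal estimate: difference in mean outcomes, valid here only
# because the treatment was assigned at random.
ate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

print(f"correlation: {association:.2f}, estimated ATE: {ate:.2f}")
```

With observational data, the same difference in means is only an association; turning it into a treatment effect requires adjusting for confounders and making assumptions explicit.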
Connecting models to scientific understanding (18:00)
- Interpretation methods must stay within realistic data distributions to yield meaningful insights
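As a small illustration of why staying inside the data distribution matters (a synthetic example of ours): when features are strongly correlated, independently permuting one of them, as naive permutation importance or a grid-based partial dependence plot effectively does, produces feature combinations the model has never seen.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Two strongly correlated features, e.g. two related measurements.
x1 = rng.normal(0, 1, size=n)
x2 = x1 + rng.normal(0, 0.1, size=n)

# Permuting x2 independently breaks the correlation and creates
# feature combinations that lie far outside the observed data.
x2_permuted = rng.permutation(x2)

max_observed_gap = np.abs(x1 - x2).max()
unrealistic = np.mean(np.abs(x1 - x2_permuted) > max_observed_gap)

print(f"fraction of permuted points outside the observed joint range: {unrealistic:.0%}")
```

Interpretations computed on such impossible points say little about the science; restricting them to realistic regions of the feature space keeps the insight meaningful.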
Robustness across distribution shifts (26:40)
- Robustness requires understanding which distribution shifts affect your model (see the sketch after this list)
- Pre-trained models and transfer learning provide promising paths to more robust scientific ML
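One way to act on the first point is to evaluate the same model on an i.i.d. test split and on a deliberately shifted test set. A minimal sketch with a synthetic measurement process and scikit-learn (the simulated process and the shifted range are assumptions for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)

def simulate(n, x_low, x_high):
    # Hypothetical measurement process: y depends nonlinearly on x.
    x = rng.uniform(x_low, x_high, size=(n, 1))
    y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=n)
    return x, y

# Training data and an i.i.d. test set from the same range...
X_train, y_train = simulate(2_000, 0.0, 2.0)
X_test_iid, y_test_iid = simulate(500, 0.0, 2.0)
# ...plus a shifted test set from a range the model never saw.
X_test_shift, y_test_shift = simulate(500, 2.0, 3.0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("i.i.d. MAE:  ", mean_absolute_error(y_test_iid, model.predict(X_test_iid)))
print("shifted MAE: ", mean_absolute_error(y_test_shift, model.predict(X_test_shift)))
```

The gap between the two errors is a direct, if crude, measure of how much the shift you expect in deployment will hurt.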
Reproducibility challenges in ML and science (35:00)
- Reproducibility challenges differ between traditional science and machine learning
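On the ML side, a large share of reproducibility problems come down to unrecorded randomness and unrecorded environments. A minimal sketch of the habit (illustrative; the file name and fields are our choice, not a prescription from the episode):

```python
import json
import platform
import random
import numpy as np

SEED = 42

# Fix the sources of randomness the run actually uses.
random.seed(SEED)
np.random.seed(SEED)

# Record enough of the environment to rerun the experiment later.
run_metadata = {
    "seed": SEED,
    "python": platform.python_version(),
    "numpy": np.__version__,
}

with open("run_metadata.json", "w") as f:
    json.dump(run_metadata, f, indent=2)

print(run_metadata)
```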
Go back to listen to part one of this series for the conceptual foundations that support these practical applications.
Check out Christoph and Timo's book, “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box,” available online now.
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! Your feedback continues to inspire future episodes.
Chapters
1. Introduction to supervised ML for science (00:00:00)
2. The model as the expert? (00:01:00)
3. Measuring causality: Metrics and blind spots (00:10:10)
4. Connecting models to scientific understanding (00:18:00)
5. Robustness across distribution shifts (00:26:40)
6. Reproducibility challenges in ML and science (00:35:00)