012 | How Deep Learning Does Magic

Bit of a Tangent

1:33:49

This is a discussion about why deep neural nets are unreasonably effective. Gianluca and Jared examine the relationships between neural architectures and the laws of physics that govern our Universe—exploring brains, human language, and linear functions. Nothing could have prepared them for the territories this episode expanded to, so strap yourself in!

----------

Shownotes:

AlphaGo beating Lee Sedol at Go: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol

OpenAI Five: https://openai.com/blog/openai-five/

Taylor series/expansions video from 3Blue1Brown: https://www.youtube.com/watch?v=3d6DsjIBzJ4

Physicist Max Tegmark: https://en.wikipedia.org/wiki/Max_Tegmark

Tegmark’s great talk on connections between physics and deep learning (which formed much of the inspiration for this conversation): https://www.youtube.com/watch?v=5MdSE-N0bxs

Universal Approximation Theorem (a toy numerical sketch of this idea follows the shownotes): https://en.wikipedia.org/wiki/Universal_approximation_theorem

A refresher on “Map vs. Territory”: https://fs.blog/2015/11/map-and-territory/

Ada Lovelace (who worked on Babbage’s Analytical Engine): https://en.wikipedia.org/wiki/Ada_Lovelace

Manifolds and their topology: http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/

Binary trees: https://en.wikipedia.org/wiki/Binary_tree

Markov process: http://mathworld.wolfram.com/MarkovProcess.html

OpenAI's GPT-2: https://openai.com/blog/better-language-models/

Play with GPT-2 in your browser here: https://talktotransformer.com/

Lex Fridman’s MIT Artificial Intelligence podcast: https://lexfridman.com/ai/

The Scientific Odyssey podcast: https://thescientificodyssey.libsyn.com/
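
A toy illustration of the Universal Approximation Theorem linked above (not from the episode, just a minimal sketch assuming numpy is all you want to depend on): a single hidden layer of tanh units, trained with hand-written gradient descent, learns to approximate sin(x) on a bounded interval. The theorem is the formal statement that, with enough hidden units, such a network can get arbitrarily close to any continuous function on that interval.

```python
# Minimal sketch: one hidden layer of tanh units approximating sin(x).
import numpy as np

rng = np.random.default_rng(0)

# Training data: the function the network should imitate.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with 32 tanh units and a scalar output.
hidden = 32
W1 = rng.normal(0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)        # hidden activations, shape (200, hidden)
    y_hat = h @ W2 + b2             # network output, shape (200, 1)

    # Mean squared error and its gradients (backpropagation by hand).
    err = y_hat - y
    grad_y_hat = 2 * err / len(x)
    grad_W2 = h.T @ grad_y_hat
    grad_b2 = grad_y_hat.sum(axis=0)
    grad_h = grad_y_hat @ W2.T
    grad_pre = grad_h * (1 - h**2)  # derivative of tanh
    grad_W1 = x.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Plain gradient descent update.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

# The error should shrink toward zero; more hidden units allow a tighter fit.
print("final MSE:", float(np.mean(err**2)))
```

Changing the number of hidden units or widening the input interval is an easy way to get a feel for how capacity trades off against quality of fit, which is part of what the episode's physics-and-deep-learning discussion is circling around.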
