172: Transformers and Large Language Models

1:26:08
 
Content provided by Patrick Wheeler and Jason Gauci. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Patrick Wheeler and Jason Gauci or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Intro topic: Is WFH actually WFC?

News/Links:

Book of the Show

Patreon Plug https://www.patreon.com/programmingthrowdown?ty=h

Tool of the Show

Topic: Transformers and Large Language Models

  • How neural networks store information
    • Latent variables
  • Transformers
    • Encoders & Decoders
  • Attention Layers
    • History
      • RNN
        • Vanishing Gradient Problem (a numeric sketch follows this outline)
      • LSTM
        • Short-term dependencies (gradients explode), long-term dependencies (gradients vanish)
    • Differentiable algebra
    • Key-Query-Value
    • Self-Attention (see the attention sketch after this outline)
  • Self-Supervised Learning & Forward Models
  • Human Feedback
    • Reinforcement Learning from Human Feedback
    • Direct Preference Optimization (pairwise ranking; see the loss sketch after this outline)
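
To make the vanishing-gradient bullet concrete, here is a tiny numeric sketch: backpropagating through a tanh recurrence multiplies the gradient by a per-step factor that stays below one, so the signal shrinks geometrically with sequence length. The numbers are illustrative assumptions, not anything from the episode.

  import numpy as np

  # One tanh RNN step: h_t = tanh(w * h_{t-1}). Its local derivative is
  # w * (1 - h_t**2), which has magnitude < 1 for small w, so the gradient
  # through T steps shrinks roughly like that factor raised to the T.
  w, h, grad = 0.5, 0.1, 1.0
  for t in range(50):
      h = np.tanh(w * h)
      grad *= w * (1.0 - h**2)  # chain rule through one recurrent step
  print(grad)  # ~1e-15: the gradient from the first step has effectively vanished

With a large recurrent weight (say w = 2.0) the same product grows instead, which is the exploding side of the problem that LSTMs and, later, attention were designed to sidestep.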
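
The key-query-value mechanics are easier to see in code than in prose. Below is a minimal single-head scaled dot-product self-attention sketch in NumPy; the array shapes and variable names are illustrative assumptions, not code from the episode.

  import numpy as np

  def self_attention(x, w_q, w_k, w_v):
      # x: (seq_len, d_model) token embeddings
      # w_q, w_k, w_v: (d_model, d_head) learned projections
      q = x @ w_q  # queries: what each token is looking for
      k = x @ w_k  # keys: what each token advertises
      v = x @ w_v  # values: the content that actually gets mixed
      scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product similarities
      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
      weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
      return weights @ v  # each output row is a weighted blend of values

  rng = np.random.default_rng(0)
  x = rng.normal(size=(4, 8))  # 4 tokens, model width 8
  w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
  print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 4)

In a decoder, the same computation adds a causal mask so each position attends only to earlier positions; stacking multiple heads and layers gives the full transformer block.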
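
For the human-feedback segment, here is a sketch of the pairwise-ranking loss behind Direct Preference Optimization, assuming you already have total log-probabilities for each response from the policy and from a frozen reference model; the function and argument names are my own, not from the episode.

  import numpy as np

  def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
      # Each argument is a model's total log-probability for one response.
      # margin: how much more the policy (relative to the frozen reference
      # model) prefers the human-chosen response over the rejected one.
      margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
      return -np.log(1.0 / (1.0 + np.exp(-margin)))  # logistic pairwise-ranking loss

  # Toy pair: the policy leans toward the chosen response, so loss < log(2)
  print(dpo_loss(-10.0, -12.0, -10.5, -11.5))  # ~0.644

Minimizing this pushes the margin positive, so the policy learns to rank the chosen response above the rejected one without training a separate reward model, with beta controlling how far it may drift from the reference.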

★ Support this podcast on Patreon ★
