What is a Transformer?

6:28
 
Content provided by Emily Laird. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Emily Laird or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

In this episode, we explore the fascinating world of Transformers. Imagine it's the early days of AI, with RNNs and LSTMs doing the heavy lifting but struggling to remember long-range dependencies, like forgetful grandparents. Enter the Transformer model: a revolutionary architecture introduced in 2017 by Google's "Attention Is All You Need" paper. Transformers handle long-range dependencies and process data in parallel, making them remarkably efficient. We'll break down their key components, including self-attention, positional encoding, and multi-head attention, and show how they transformed the AI landscape. Tune in to discover why Transformers are the shiny new sports car of AI models.

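The episode itself stays at the whiteboard level, so as a companion, here is a minimal NumPy sketch (our illustration, not code from the show) of two of the components named above: sinusoidal positional encoding and scaled dot-product self-attention, following the formulas in the 2017 paper. The toy sizes and random weight matrices are made up purely for demonstration.

```python
# Minimal sketch of sinusoidal positional encoding and scaled dot-product
# self-attention, per "Attention Is All You Need" (Vaswani et al., 2017).
# Toy dimensions and random weights are illustrative assumptions only.
import numpy as np

def positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(seq_len)[:, None]                # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]             # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                     # even dimensions
    pe[:, 1::2] = np.cos(angles)                     # odd dimensions
    return pe

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # every token scores every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of value vectors

# Toy usage: 4 tokens, model width 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)) + positional_encoding(4, 8)  # inject word order
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated vector per token, computed in parallel
```

The third component, multi-head attention, simply runs several such attention computations in parallel over different learned projections and concatenates their outputs, letting each head attend to a different kind of relationship between tokens.
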
Connect with Emily Laird on LinkedIn


36 episodes
