seq2seq

Data Skeptic

21:41
A sequence-to-sequence (or seq2seq) model is a neural architecture, used for translation and other tasks, that consists of an encoder and a decoder.
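
To make the architecture concrete, here is a minimal sketch of an encoder and decoder in PyTorch. The GRU cells, vocabulary size, and layer dimensions are illustrative assumptions, not details from the episode.

# Minimal seq2seq sketch (illustrative assumptions: GRU cells, vocab of 1000,
# hidden size 128). Not the episode's own code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids -> final hidden state (1, batch, hidden_dim)
        _, hidden = self.rnn(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) token ids; hidden: the encoder's final state
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden  # logits over the target vocabulary

Note that the decoder's only view of the source sentence is the hidden state the encoder hands over, which is exactly the bottleneck the next paragraph describes.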

The encoder/decoder architecture has obvious promise for machine translation, and it has been successfully applied this way. Encoding an input into a small number of hidden nodes, which can then be decoded into a matching output string, forces the model to learn an efficient representation of the essence of the input.
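
Continuing the sketch above (still an illustrative assumption, not code from the episode), the following shows that bottleneck in action: the entire source sentence is compressed into a single fixed-size hidden vector, which the decoder then unrolls one token at a time using greedy decoding. The start-of-sequence and end-of-sequence token ids (1 and 2) are hypothetical.

# The whole 7-token source sentence is squeezed into one (1, 1, 128) vector.
encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, 1000, (1, 7))   # one source sentence of 7 token ids
context = encoder(src)                 # the fixed-size bottleneck

token = torch.tensor([[1]])            # assumed start-of-sequence id
hidden = context
for _ in range(20):                    # cap the output length
    logits, hidden = decoder(token, hidden)
    token = logits.argmax(dim=-1)      # greedy: take the most likely token
    if token.item() == 2:              # assumed end-of-sequence id
        break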

In addition to translation, seq2seq models have been used in a number of other NLP tasks, such as summarization and image captioning.

