Multimodal Video Understanding

42:12
 
Content provided by Prateek Joshi. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Prateek Joshi or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Jae Lee is the co-founder and CEO of Twelve Labs, which is building video understanding infrastructure to help developers build programs that can see, hear, and understand the world. He was previously the Lead Data Scientist at South Korea's Ministry of National Defense and holds a bachelor's degree in computer science from UC Berkeley.
In this episode, we cover a range of topics including:
- What is multimodal video understanding
- State of play in multimodal video
- The founding of Twelve Labs
- The launch of Pegasus-1
- Four core principles: Efficient Long-form Video Processing, Multimodal Understanding, Video-native Embeddings, and Deep Alignment between Video and Language Embeddings (see the sketch after this list)
- Differences between multimodal and traditional video analysis
- How malicious actors could misuse this technology
- The future of multimodal video understanding
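
The deep-alignment principle mentioned above amounts to mapping video clips and text queries into one shared embedding space, so a natural-language query can be scored directly against clips. The Python sketch below is a hypothetical illustration of that idea only; it is not Twelve Labs' API or Pegasus-1. The embed_video and embed_text functions are placeholder encoders, and cosine similarity stands in for the retrieval step.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_video(clip_id: str, dim: int = 512) -> np.ndarray:
    """Placeholder: a real system would run a video-native encoder over
    frames and audio; here we just return a pseudo-random vector keyed on the id."""
    rng = np.random.default_rng(abs(hash(clip_id)) % (2**32))
    return rng.standard_normal(dim)

def embed_text(query: str, dim: int = 512) -> np.ndarray:
    """Placeholder for a text encoder aligned to the same embedding space."""
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    return rng.standard_normal(dim)

# Rank clips against a natural-language query by similarity in the shared space.
clips = ["clip_001", "clip_002", "clip_003"]
query_vec = embed_text("a person scoring a goal")
ranked = sorted(clips, key=lambda c: cosine_similarity(embed_video(c), query_vec), reverse=True)
print(ranked)

In a real system, the placeholder encoders would be replaced by a video-native encoder and a text encoder trained so that matching clips and descriptions land close together in the shared space.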
Jae's favorite books:
- Deep Learning (Authors: Ian Goodfellow, Yoshua Bengio, Aaron Courville)
- The Giving Tree (Author: Shel Silverstein)
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
