October 26th, 2023 - Frontiers of AI: From Quantum Compression to Visionary Transformers
Content provided by Marcus Edel.
- LLM-FP4: 4-Bit Floating-Point Quantized Transformers
- Detecting Pretraining Data from Large Language Models
- ConvNets Match Vision Transformers at Scale
- A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation
- QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
Chapters
1. Intro (00:00:00)
2. LLM-FP4: 4-Bit Floating-Point Quantized Transformers (00:01:52)
3. Detecting Pretraining Data from Large Language Models (00:04:23)
4. ConvNets Match Vision Transformers at Scale (00:07:29)
5. A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation (00:10:08)
6. QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models (00:11:27)