Content provided by Slator. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Slator or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

#175 Where DeepL Beats ChatGPT with Graham Neubig

38:53
 

In this week’s SlatorPod, we are joined by Graham Neubig, Associate Professor of Computer Science at Carnegie Mellon University, to discuss his research on multilingual natural language processing (NLP) and machine translation (MT).
Graham discusses the research at NeuLab, where his group focuses on various areas of NLP, including incorporating broad knowledge bases into NLP models and code generation.
Graham expands on his Zeno GPT-MT Report, which compares large language models (LLMs) with special-purpose machine translation systems like Google Translate, Microsoft Translator, and DeepL. He reveals that GPT-4 was competitive when translating from English into other languages, but struggled with very long sentences.
When it comes to cost comparison, Graham highlights that GPT-3.5 Turbo (the model behind the free version of ChatGPT) is significantly cheaper than Google Translate and Microsoft Translator, but GPT-4 (available via OpenAI’s subscription) is more expensive.
Graham predicts that companies will likely move towards using general-purpose LLMs and fine-tuning them for specific tasks like translation. The discussion also covers the recent flurry of speech-to-speech machine translation system releases.
Graham talks about his startup, Inspired Cognition, which aims to provide tools for building and improving AI systems, particularly in text and code generation. Graham concludes the pod with advice for new graduates in the NLP field and his plans for Zeno and the Zeno Report.


Chapters

1. Intro and Agenda (00:00:00)

2. Professional Background and Interest in Language (00:01:05)

3. Research at NeuLab (00:03:56)

4. Impact of ChatGPT on NLP (00:05:05)

5. Context in Machine Translation and LLMs (00:07:20)

6. How GPT Handles Machine Translation (00:12:43)

7. GPT Cost Comparison for Machine Translation (00:19:13)

8. How LLMs Will Evolve (00:23:07)

9. Why So Many Speech Translation Releases? (00:24:57)

10. LLMs and Low-Resource Languages (00:29:45)

11. Launching Inspired Cognition (00:32:17)

12. Advice to Graduate Students (00:35:29)

13. Plans for 2023 and Beyond (00:37:15)
