Content provided by Zeta Alpha. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Zeta Alpha or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Task-aware Retrieval with Instructions

1:11:13
 

Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the paper "Task-aware Retrieval with Instructions" by Akari Asai et al. The paper augments a collection of existing retrieval and NLP datasets with natural language instructions, forming BERRI (Bank of Explicit RetRieval Instructions), and uses it to train TART (Multi-task Instructed Retriever).

📄 Paper: https://arxiv.org/abs/2211.09260

🍻 BEIR benchmark: https://arxiv.org/abs/2104.08663

📈 LOTTE (Long-Tail Topic-stratified Evaluation, introduced in ColBERT v2): https://arxiv.org/abs/2112.01488

Timestamps:

00:00 Intro: "Task-aware Retrieval with Instructions"

02:20 BERRI, TART, X^2 evaluation

04:00 Background: recent works in domain adaptation

06:50 Instruction Tuning

08:50 Retrieval with descriptions

11:30 Retrieval with instructions

17:28 BERRI, Bank of Explicit RetRieval Instructions

21:48 Repurposing NLP tasks as retrieval tasks

23:53 Negative document selection

27:47 TART, Multi-task Instructed Retriever

31:50 Evaluation: Zero-shot and X^2 evaluation

39:20 Results on Table 3 (BEIR, LOTTE)

50:30 Results on Table 4 (X^2-Retrieval)

55:50 Ablations

57:17 Discussion: user modeling, future work, scale
