Talking Papers Podcast

Content provided by Itzik Ben-Shabat. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Itzik Ben-Shabat or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.
Yicong Hong - VLN BERT

22:57
 
PAPER TITLE:
"VLN BERT: A Recurrent Vision-and-Language BERT for Navigation"
AUTHORS:
Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould
ABSTRACT:
Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty of adapting the BERT architecture to the partially observable Markov decision process present in VLN, requiring history-dependent attention and decision making. In this paper we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.
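The core idea in the abstract, a state token that attends over language and vision tokens each step and carries history forward as a recurrent state, can be illustrated with a toy, stdlib-only sketch. Everything below (dimensions, random features, the single attention layer) is illustrative and does not come from the paper's code; see the linked repository for the actual implementation.

```python
import math
import random

random.seed(0)

D = 8  # toy embedding size

def rand_vec():
    return [random.gauss(0.0, 1.0) for _ in range(D)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def attend(query, keys, values):
    """Scaled dot-product attention of one query over key/value vectors."""
    weights = softmax([dot(query, k) / math.sqrt(D) for k in keys])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(D)]

# Toy inputs: encoded instruction tokens, plus candidate views at each step.
instruction = [rand_vec() for _ in range(5)]
state = rand_vec()  # the [CLS]-like state token

for t in range(3):  # navigation steps
    candidates = [rand_vec() for _ in range(4)]  # observable views at step t
    tokens = instruction + candidates
    # One "BERT layer": the state token attends over language + vision tokens.
    context = attend(state, tokens, tokens)
    # Recurrence: the attended context becomes the next state (history carrier).
    state = context
    # Action scores: state-conditioned attention over the candidate views.
    action_probs = softmax([dot(state, c) / math.sqrt(D) for c in candidates])
    action = max(range(len(candidates)), key=lambda i: action_probs[i])
    print(f"step {t}: action {action}, p={action_probs[action]:.2f}")
```

The point of the sketch is the data flow: instead of a separate recurrent decoder, the transformer's own state token is fed back in at every step, so attention and action selection are conditioned on history.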
CODE:
💻 https://github.com/YicongHong/Recurrent-VLN-BERT
LINKS AND RESOURCES
👱Yicong's page
RELATED PAPERS:
📚 Attention is All You Need
📚 Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
CONTACT:
-----------------
If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
This episode was recorded on April 16th, 2021.
SUBSCRIBE AND FOLLOW:
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: https://bit.ly/3eQOgwP
#talkingpapers #CVPR2021 #VLNBERT
#VLN #VisionAndLanguageNavigation #VisionAndLanguage #machinelearning #deeplearning #AI #neuralnetworks #research #computervision #artificialintelligence
