Content provided by Yoel Inbar, Michael Inzlicht, and Alexa Tullett. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Yoel Inbar, Michael Inzlicht, and Alexa Tullett or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.
Episode 99: Is MTurk Too Good To Be True?

1:03:09
Manage episode 349064004 series 2313502
In a recent article, psychologists Webb and Tangney document their experience collecting psychology data online using Amazon's crowdsourcing platform MTurk. Alarmingly, the authors conclude that ultimately only 2.6% of their sample constituted valid data from human beings. Yoel and Alexa weigh in on these findings, discussing what researchers can reasonably expect from online studies and platforms, and how their personal experiences have informed their own practices. They also consider a response written by Cuskley and Sulik, who argue that researchers, not recruitment platforms, are responsible for ensuring the quality of data collected online. Questions that arise include: What studies do people want to do? Does anyone read the fine print? And what are the ethics of mouse-hunting?

Links:

112 episodes
