Publication Details

Title: Pushing the Limits of Mechanical Turk: Qualifying the Crowd for Video Geo-Location
Authors: L. Gottlieb, J. Choi, P. Kelm, T. Sikora, and G. Friedland
Bibliographic Information: Proceedings of the ACM Workshop on Crowdsourcing for Multimedia (CrowdMM 2012), held in conjunction with ACM Multimedia 2012, pp. 23-28, Nara, Japan
Date: October 2012
Research Area: Audio and Multimedia
Type: Article in conference proceedings
PDF: http://www.icsi.berkeley.edu/pubs/speech/limitsofmechturk12.pdf

Overview:
In this article we review the methods we have developed for finding Mechanical Turk participants for the manual annotation of the geo-location of random videos from the web. We require high-quality annotations for this project, as we are attempting to establish a human baseline for future comparison with machine systems. This task differs from a standard Mechanical Turk task in that it is difficult for both humans and machines, whereas a standard task is usually easy for humans and difficult or impossible for machines. This article discusses the varied difficulties we encountered while qualifying annotators and the steps we took to select the individuals most likely to do well at our annotation task in the future.

Acknowledgements:
This work was partially supported by funding provided to ICSI through National Science Foundation grant IIS:1138599 (“EAGER: Collecting Training Videos for Location Estimation with Mechanical Turk”). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors or originators and do not necessarily reflect the views of the National Science Foundation.

Bibliographic Reference:
L. Gottlieb, J. Choi, P. Kelm, T. Sikora, and G. Friedland. Pushing the Limits of Mechanical Turk: Qualifying the Crowd for Video Geo-Location. Proceedings of the ACM Workshop on Crowdsourcing for Multimedia (CrowdMM 2012), held in conjunction with ACM Multimedia 2012, pp. 23-28, Nara, Japan, October 2012.