Crowdsourcing Micro-Level Multimedia Annotations: The Challenges of Evaluation and Interface (bibtex)
by Sunghyun Park, Gelareh Mohammadi, Ron Artstein, Louis-Philippe Morency
Abstract:
This paper presents a new evaluation procedure and tool for crowdsourcing micro-level multimedia annotations and shows that such annotations can achieve a quality comparable to that of expert annotations. We propose a new evaluation procedure, called MM-Eval (Micro-level Multimedia Evaluation), which compares fine-grained, time-aligned annotations using Krippendorff's alpha and introduces two new metrics to evaluate the types of disagreement between coders. We also introduce OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that enables precise and convenient annotation of multimedia behaviors directly from the Amazon Mechanical Turk interface. In an experiment using this tool and evaluation procedure, we show that a majority vote among annotations from three crowdsourced workers achieves a quality comparable to that of local expert annotations.
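The abstract references two generic building blocks: agreement measured with Krippendorff's alpha over time-aligned annotations, and a majority vote over three workers' annotations. Below is a minimal Python sketch of those two steps under illustrative assumptions (binary labels sampled per video frame, equal-length annotation tracks, made-up toy data). It is not the authors' MM-Eval procedure or the OCTAB tool, and the function names and data layout are hypothetical.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        """Krippendorff's alpha for nominal data.

        `units` is a list of rating collections, one per unit (e.g. per video
        frame); each inner collection holds the labels assigned to that unit.
        """
        o = Counter()                        # coincidence matrix o[(c, k)]
        for ratings in units:
            m = len(ratings)
            if m < 2:
                continue                     # a single rating contributes no pairable information
            for c, k in permutations(ratings, 2):
                o[(c, k)] += 1.0 / (m - 1)
        n_c = Counter()                      # marginal totals per label
        for (c, _), v in o.items():
            n_c[c] += v
        n = sum(n_c.values())
        d_o = sum(v for (c, k), v in o.items() if c != k) / n
        d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
        return 1.0 if d_e == 0 else 1.0 - d_o / d_e

    def majority_vote(worker_tracks):
        """Frame-wise majority label across workers.

        Assumes equal-length tracks; with an odd number of workers and binary
        labels there are no ties.
        """
        return [Counter(frame).most_common(1)[0][0] for frame in zip(*worker_tracks)]

    # Hypothetical frame-level binary annotations (1 = behavior present); toy data only.
    workers = [
        [0, 1, 1, 1, 0, 0, 1, 0],   # worker 1
        [0, 1, 1, 0, 0, 0, 1, 1],   # worker 2
        [0, 0, 1, 1, 0, 1, 1, 0],   # worker 3
    ]
    expert = [0, 1, 1, 1, 0, 0, 1, 1]

    mv = majority_vote(workers)
    alpha = krippendorff_alpha_nominal(list(zip(mv, expert)))
    print(f"alpha(majority vote vs. expert) = {alpha:.3f}")

MM-Eval additionally characterizes the types of disagreement between coders with two further metrics, which this sketch does not cover.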
Reference:
Crowdsourcing Micro-Level Multimedia Annotations: The Challenges of Evaluation and Interface (Sunghyun Park, Gelareh Mohammadi, Ron Artstein, Louis-Philippe Morency), In International ACM Workshop on Crowdsourcing for Multimedia (CrowdMM), 2012.
BibTeX Entry:
@inproceedings{park_crowdsourcing_2012,
	address = {Nara, Japan},
	title = {Crowdsourcing {Micro}-{Level} {Multimedia} {Annotations}: {The} {Challenges} of {Evaluation} and {Interface}},
	url = {http://ict.usc.edu/pubs/Crowdsourcing%20Micro-Level%20Multimedia%20Annotations-%20The%20Challenges%20of%20Elevation%20and%20Interface.pdf},
	abstract = {This paper presents a new evaluation procedure and tool for crowdsourcing micro-level multimedia annotations and shows that such annotations can achieve a quality comparable to that of expert annotations. We propose a new evaluation procedure, called MM-Eval (Micro-level Multimedia Evaluation), which compares fine-grained, time-aligned annotations using Krippendorff's alpha and introduces two new metrics to evaluate the types of disagreement between coders. We also introduce OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that enables precise and convenient annotation of multimedia behaviors directly from the Amazon Mechanical Turk interface. In an experiment using this tool and evaluation procedure, we show that a majority vote among annotations from three crowdsourced workers achieves a quality comparable to that of local expert annotations.},
	booktitle = {International {ACM} {Workshop} on {Crowdsourcing} for {Multimedia} ({CrowdMM})},
	author = {Park, Sunghyun and Mohammadi, Gelareh and Artstein, Ron and Morency, Louis-Philippe},
	month = oct,
	year = {2012},
	keywords = {Virtual Humans, UARC}
}