Vol-2276
urn:nbn:de:0074-2276-5
Copyright © 2018 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.
SAD+CrowdBias 2018
Joint Proceedings SAD 2018 and CrowdBias 2018
Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias 2018)
co-located with the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018)
Zürich, Switzerland, July 5, 2018.
Edited by
* Vrije Universiteit Amsterdam
** Google AI
*** Purdue University
+ The University of Sheffield
++ University of Queensland
+++ L3S Research Center
++++ University of Zurich
Table of Contents
Publications
- Crowdsourced Measure of News Articles Bias: Assessing Contributors' Reliability (pages 1-10)
  Emmanuel Vincent, Maria Mestre
- CrowdTruth 2.0: Quality Metrics for Crowdsourcing with Disagreement (short paper) (pages 11-18)
  Anca Dumitrache, Oana Inel, Lora Aroyo, Benjamin Timmermans, Chris Welty
- A Case for a Range of Acceptable Annotations (pages 19-31)
  Jennimaria Palomaki, Olivia Rhinehart, Michael Tseng
- CaptureBias: Supporting Media Scholars with Ambiguity-Aware Bias Representation for News Videos (short paper) (pages 32-40)
  Markus de Jong, Panagiotis Mavridis, Lora Aroyo, Alessandro Bozzon, Jesse de Vos, Johan Oomen, Antoaneta Dimitrova, Alec Badenoch
- Bounding Ambiguity: Experiences with an Image Annotation System (pages 41-54)
  Margaret Warren, Pat Hayes
- Expert Disagreement in Sequential Labeling: A Case Study on Adjudication in Medical Time Series Analysis (pages 55-66)
  Mike Schaekermann, Edith Law, Kate Larson, Andrew Lim
- Characterising and Mitigating Aggregation-Bias in Crowdsourced Toxicity Annotations (short paper) (pages 67-71)
  Agathe Balayn, Panagiotis Mavridis, Alessandro Bozzon, Benjamin Timmermans, Zoltán Szlávik
- How Biased Is Your NLG Evaluation? (short paper, Best Paper Award) (pages 72-77)
  Pavlos Vougiouklis, Eddy Maddalena, Jonathon Hare, Elena Simperl
- LimitBias! Measuring Worker Biases in the Crowdsourced Collection of Subjective Judgments (short paper) (pages 78-82)
  Christoph Hube, Besnik Fetahu, Ujwal Gadiraju
- Investigating Stability and Reliability of Crowdsourcing Output (short paper) (pages 83-87)
  Rehab Kamal Qarout, Alessandro Checco, Kalina Bontcheva
- A Human in the Loop Approach to Capture Bias and Support Media Scientists in News Video Analysis (short paper) (pages 88-92)
  Panagiotis Mavridis, Markus de Jong, Lora Aroyo, Alessandro Bozzon, Jesse de Vos, Johan Oomen, Antoaneta Dimitrova, Alec Badenoch
- Device-Type Influence in Crowd-based Natural Language Translation Tasks (short paper) (pages 93-97)
  Michael Barz, Neslihan Büyükdemircioglu, Rikhu Prasad Surya, Tim Polzehl, Daniel Sonntag
We offer a BibTeX file for citing the papers of this workshop from LaTeX, as well as a complete proceedings file containing the workshop program and all the papers.
2018-12-14: submitted by Alessandro Checco
metadata incl. bibliographic data published under Creative Commons CC0
2018-12-14: published on CEUR-WS.org