Despite the rapid growth of new platforms (e.g., CrowdFlower, Amazon Mechanical Turk, Upwork), crowdsourcing is still in its early stages. Many challenges remain to be addressed on the road to fair, high-quality platform-based work, as noted by Kittur et al. [1] and Demartini [2].

In the traditional workplace, we make a great effort to communicate effectively and to understand who people are, what they know and do, and how they are perceived by others. Current crowdsourcing platforms, by contrast, enable very limited interaction between crowd workers, requesters and platform owners, with a strong emphasis on anonymity. This, together with the opacity of some steps in the current crowdsourcing process, leads to potential dissatisfaction, uncertainty and, consequently, a lack of trust. Worker misclassification lawsuits plague a wide range of platform operators, suggesting that shortcomings in accountability are typical of the current generation of platforms [3]. Weaving relations of trust, as in the traditional workplace, is key to improving crowd work environments and the user experience.

Existing reputation systems provide the first steps in this direction. For instance, Turkopticon [4] and FairCrowdWork [5] aggregate reviews from many workers to create multi-dimensional reputations of requesters and marketplaces. In TurkerNation [6], a dedicated forum for Amazon Mechanical Turk, crowd workers discuss tasks, requesters and platform features with one another, building reputations that are less quantitative and structured but still quite influential. Crowd Work CV [7] enables the representation of crowdsourcing agents' interests, qualifications and work history across marketplaces. Another encouraging project that aims to tackle the "lack of trust and uneven distribution of power among workers and requesters" is Daemo, a new open-governance crowdsourcing platform resulting from a crowdsourced research initiative [8]. While these are promising first steps, many dimensions of the problem remain unsolved, or even unarticulated.

The goal of this workshop, therefore, is to analyze different aspects of trust in online crowd work. We would like to discuss requirements, methods, techniques and studies that look into ways of boosting transparency and managing the reputation of any of the participants in paid crowdsourcing (crowd workers, requesters and marketplaces), while also examining the trade-off with worker anonymity and privacy.

[1] A. Kittur et al. The future of crowd work. In: Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13). ACM, 2013.

[2] G. Demartini. Hybrid Human-Machine Information Systems: Challenges and Opportunities. In: Computer Networks, Special Issue on Crowdsourcing, Volume 90, pages 5-13. Elsevier, 2015.

[3] S. Kessler. The Gig Economy Won't Last Because It's Being Sued to Death. Fast Company, 2015.

[4] L. C. Irani and M. S. Silberman. Turkopticon: interrupting worker invisibility in Amazon Mechanical Turk. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 611-620, 2013.

[5] FairCrowdWork (online resource reviewing crowd work marketplaces).

[6] TurkerNation (online forum for Amazon Mechanical Turk workers).

[7] C. Sarasua and M. Thimm. Crowd Work CV: Recognition for Micro Work. In: Social Informatics, Lecture Notes in Computer Science, vol. 8852, pp. 429-437. Springer.

[8] Stanford Crowd Research Collective. Daemo: A Self-Governed Crowdsourcing Marketplace. In: Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15). ACM, 2015.

[9] A. Bozzon, M. Brambilla, S. Ceri, M. Silvestri, and G. Vesci, “Choosing the right crowd: expert finding in social networks,” in Proceedings of the 16th International Conference on Extending Database Technology (EDBT2013), 2013.

[10] D. E. Difallah, G. Demartini, and P. Cudré-Mauroux, "Pick-a-crowd: tell me what you like, and I'll tell you what to do," in Proceedings of the 22nd International Conference on World Wide Web (WWW2013), 2013.

[11] Y. Bachrach, M. Kosinski, T. Graepel, P. Kohli, and D. Stillwell, “Personality and patterns of facebook usage,” in Proceedings of the 3rd Annual ACM Web Science Conference, 2012.

[12] G. Kazai, J. Kamps, and N. Milic-Frayling, “Worker types and personality traits in crowdsourcing relevance labels,” in Proceedings of the 20th ACM international conference on Information and knowledge management, 2011.

[13] B. Satzger, H. Psaier, D. Schall, and S. Dustdar, “Auction-based crowdsourcing supporting skill management,” Information Systems, Elsevier, 2012.

[14] R. Khazankin, H. Psaier, D. Schall, and S. Dustdar, “QoS-based task scheduling in crowdsourcing environments,” in Proceedings of the 9th international conference on Service-Oriented Computing, 2011.

[15] V. Ambati, S. Vogel, and J. G. Carbonell, “Towards task recommendation in micro-task markets.” in Human Computation, 2011.

[16] G. Goel, A. Nikzad, and A. Singla, "Allocating Tasks to Workers with Matching Constraints: Truthful Mechanisms for Crowdsourcing Markets," in Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion, 2014.

[17] S.-W. Huang and W.-T. Fu. Don't hide in the crowd! Increasing social transparency between peer workers improves crowdsourcing outcomes. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 621-630, 2013.

[18] M. Lease, J. Hullman, J. P. Bigham, M. S. Bernstein, J. Kim, W. Lasecki, S. Bakhshi, T. Mitra, and R. C. Miller. Mechanical Turk is Not Anonymous. SSRN, March 6, 2013.