Abstract
In this paper, we present Argo+, a reference framework for worker-centric crowdsourcing in which task assignment relies on a feature-based representation of both tasks and workers, and learning techniques are exploited to predict online the most appropriate task for a requesting worker. On the task side, features represent the requirements, expressed in terms of knowledge expertise, that workers must satisfy to be involved in task execution. On the worker side, features compose a profile, namely a structured description of the worker's capabilities in executing tasks. Experimental results obtained in a real crowdsourcing campaign are discussed by comparing the performance of Argo+ against a baseline that uses conventional task assignment techniques.
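The feature-based matching idea summarized above can be sketched as follows. This is a minimal illustrative example, not the Argo+ method itself: it assumes worker profiles and task requirements are weight vectors over shared knowledge-expertise features, ranks open tasks by cosine similarity to the requesting worker's profile, and updates the profile online from the execution outcome. All function names and the update rule are hypothetical.

```python
def score(profile, requirements):
    """Cosine similarity between a worker profile and task requirements,
    both given as {feature: weight} dicts over shared expertise features."""
    dot = sum(profile.get(f, 0.0) * w for f, w in requirements.items())
    norm_p = sum(v * v for v in profile.values()) ** 0.5
    norm_r = sum(w * w for w in requirements.values()) ** 0.5
    if norm_p == 0 or norm_r == 0:
        return 0.0
    return dot / (norm_p * norm_r)

def assign(profile, open_tasks):
    """Return the id of the open task whose requirements best match the profile."""
    return max(open_tasks, key=lambda t: score(profile, open_tasks[t]))

def update_profile(profile, requirements, success, lr=0.1):
    """Online update: shift the profile toward (or away from) the task's
    features depending on whether the worker executed it successfully."""
    sign = 1.0 if success else -1.0
    for f, w in requirements.items():
        profile[f] = profile.get(f, 0.0) + sign * lr * w
    return profile

# Toy usage: two open tasks, one requesting worker.
tasks = {
    "t1": {"databases": 0.9, "sql": 0.8},
    "t2": {"nlp": 0.7, "python": 0.6},
}
worker = {"nlp": 0.9, "python": 0.4}
best = assign(worker, tasks)                      # "t2": larger feature overlap
worker = update_profile(worker, tasks[best], success=True)
```

A real system would replace the hand-rolled similarity and update rule with a learned model, but the assignment loop (represent, score, assign, update) is the same.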
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Castano, S., Ferrara, A., Montanelli, S. (2018). Crowdsourcing Task Assignment with Online Profile Learning. In: Panetto, H., Debruyne, C., Proper, H., Ardagna, C., Roman, D., Meersman, R. (eds) On the Move to Meaningful Internet Systems. OTM 2018 Conferences. OTM 2018. Lecture Notes in Computer Science(), vol 11229. Springer, Cham. https://doi.org/10.1007/978-3-030-02610-3_13
Print ISBN: 978-3-030-02609-7
Online ISBN: 978-3-030-02610-3