
Crowdsourcing Task Assignment with Online Profile Learning

  • Conference paper
  • In: On the Move to Meaningful Internet Systems. OTM 2018 Conferences (OTM 2018)

Part of the book series: Lecture Notes in Computer Science (volume 11229)

Abstract

In this paper, we present Argo+, a reference framework for worker-centric crowdsourcing in which task assignment relies on feature-based representations of both tasks and workers, and learning techniques are exploited to predict online the most appropriate task for a requesting worker to execute. On the task side, features represent requirements, expressed in terms of knowledge expertise, that workers must satisfy to be involved in the task execution. On the worker side, features compose a profile, namely a structured description of the worker's capabilities in executing tasks. Experimental results obtained in a real crowdsourcing campaign are discussed by comparing the performance of Argo+ against a baseline based on conventional task assignment techniques.
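The assign-then-update loop described in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the Argo+ algorithm: the feature vectors, the cosine-similarity scoring, and the moving-average profile update are all assumptions made for the example.

```python
import math

def cosine(u, v):
    # cosine similarity between two feature vectors (0.0 when either is zero)
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def assign_task(worker_profile, tasks):
    # pick the task whose requirement vector best matches the worker profile
    return max(tasks, key=lambda name: cosine(worker_profile, tasks[name]))

def update_profile(profile, task_features, quality, lr=0.3):
    # online update: move the profile toward the executed task's features,
    # weighted by the observed execution quality (a stand-in for any learner)
    return [p + lr * quality * (f - p) for p, f in zip(profile, task_features)]

# hypothetical task requirement vectors and worker profile over 3 expertise features
tasks = {"t1": [1.0, 0.0, 0.2], "t2": [0.1, 0.9, 0.4]}
worker = [0.2, 0.8, 0.5]

best = assign_task(worker, tasks)                      # → "t2", the closest match
worker = update_profile(worker, tasks[best], quality=1.0)
```

After each executed task, the profile drifts toward tasks the worker completes well, so subsequent assignments reflect the observed capabilities rather than a static declaration.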





Author information

Correspondence to Silvana Castano, Alfio Ferrara or Stefano Montanelli.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Castano, S., Ferrara, A., Montanelli, S. (2018). Crowdsourcing Task Assignment with Online Profile Learning. In: Panetto, H., Debruyne, C., Proper, H., Ardagna, C., Roman, D., Meersman, R. (eds) On the Move to Meaningful Internet Systems. OTM 2018 Conferences. OTM 2018. Lecture Notes in Computer Science, vol 11229. Springer, Cham. https://doi.org/10.1007/978-3-030-02610-3_13

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-02610-3_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02609-7

  • Online ISBN: 978-3-030-02610-3

  • eBook Packages: Computer Science (R0)
