DOI: 10.1145/3411764.3445291
Research article · Open access

Investigating the Accessibility of Crowdwork Tasks on Mechanical Turk

Published: 07 May 2021

Abstract

Crowdwork can provide invaluable opportunities for people with disabilities, not least work flexibility and the ability to work from home, especially during the COVID-19 pandemic. This paper investigates how engagement with crowdwork tasks is affected by individual disabilities, and the resulting implications for HCI. We first surveyed 1,000 Amazon Mechanical Turk (AMT) workers to identify the demographics of crowdworkers within the AMT ecosystem who identify as having various disabilities, including vision, hearing, cognitive/mental health, mobility, reading, and motor impairments. Through a second, focused survey and follow-up interviews, we provide insights into how respondents cope with crowdwork tasks. We found that standard task factors, such as allotted completion time and task presentation, often do not account for the needs of workers with disabilities, at times resulting in anxiety and feelings of depression. We discuss how these barriers can be alleviated to enable effective interaction for crowdworkers with disabilities.



Information

Published In
CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 10,862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764
This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 07 May 2021

      Author Tags

      1. AMT
      2. MTurk
      3. accessibility
      4. crowdsourcing
      5. crowdwork
      6. disability

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      CHI '21

      Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)
