

Paying in Kind for Crowdsourced Work in Developing Regions

Navkar Samdaria¹, Akhil Mathur², and Ravin Balakrishnan¹

¹ University of Toronto, Toronto, Canada – M5S 2E4
² Bell Labs India, Alcatel-Lucent, Bangalore, India – 560045
{navkar,ravin}@dgp.toronto.edu, akhil.mathur@alcatel-lucent.com

Abstract. In developing regions, the reach of crowdsourcing services such as Amazon Mechanical Turk (mTurk) has been limited by the lack of adequate payment mechanisms and low visibility amongst the crowd. In this paper, we present a commodity-based model for crowdsourcing where crowd workers get paid in kind, in the form of a commodity instead of money. Our model makes crowdsourcing services more visible to users in developing regions and also addresses the issue of payment. We conducted two field studies in urban India to evaluate the applicability of our proposed model. Our results show that the commodity-based crowdsourcing model reached workers with very different demographics from the typical mTurk workers. We also found that users preferred to receive a commodity instead of money as remuneration.

Keywords: crowdsourcing, mobile phones, humans as pervasive computing resources, commodity exchange model, developing regions, Amazon Mechanical Turk, India.

1 Introduction

Microtasking services such as Amazon Mechanical Turk allow their users to distribute tasks to a large number of crowd workers. The majority of these tasks are difficult for computers yet simple for humans (for example, surveys, image labelling, audio transcription, and finding specific information on a website). It has been estimated that in the last decade, over 1 million workers have earned $1-2 billion via crowdsourced work allocation [2].
Microtasking platforms hold a particular promise for workers in developing
regions like India. They provide workers an opportunity to earn money without being
physically co-located with the work provider, and the dollar remuneration when
converted to local currency also becomes quite significant [11]. A recent survey of
733 mTurk workers [11] showed that 36% of the respondents were from India. The
Indian workers were young (with an average age of 26-28 years), well-educated and
had a higher standard of living than the average Indian. In another study with 200
mTurk workers, Khanna et al. [5] report that nearly 80% of respondents had at least a
Bachelor’s degree, with another 11% currently in college. Interestingly, 92% of the

workers had a PC and internet connection in their homes. However, those with a
Bachelor’s degree or higher constitute only 6% of India’s working age population
(15-60 years) [13] and home PC penetration in India is estimated at <10% [3]. These
statistics suggest that the reach of microtasking services has been limited to the
educated elite in developing regions. We believe that there is tremendous untapped
potential for microtasking services in developing regions if they are made more
pervasive and available to a larger number of workers. We argue that there are three
major reasons why microtasking services have not been able to reach more workers in
developing regions:
(i) Access: Most microtasking platforms are hosted on the internet. Internet penetration in developing regions like India is low, as a result of which a large number of potential workers are unable to access microtasking services. In contrast, the penetration of mobile phones in developing regions is very high (64.7% of the population in India as per the latest statistics [14]), which makes them a promising platform to address the issue of accessibility of microtasking services.
(ii) Visibility: The visibility of microtasking services is also quite low in developing regions, and potential workers are not aware of them. If some of these services can be brought from the digital world into the physical world, it may increase their awareness among the workers.
(iii) Payment: A major problem impeding the growth of microtasking services in developing regions is the lack of adequate payment mechanisms for the crowd workers. More than 60% of the Indian population does not have a bank account [8], which makes it difficult for a microtasking service to pay them for their work via the traditional banking system. An obvious solution to this problem is to give the workers some commodity or service in return for the work. However, the commodity should be chosen such that it is useful to the worker immediately or in the near future. For example, a microtasking service named txteagle [4] provides mobile phone airtime as the commodity in exchange for work. However, one can argue that workers may not be in need of mobile airtime every time they do the work, and as subscribers in India are unable to convert airtime to cash payouts, this leads to lower participation in the microtasking service.
The problem of Access has been addressed by initiatives like txteagle [4] and
MobileWorks [6] which push microtasks to the worker’s mobile phone using SMS or
the USSD protocol. In this paper, we investigate the applicability of an alternate model of crowdsourcing which addresses the aforementioned problems of Visibility and Payment in existing crowdsourcing systems. We propose a model in which workers
are presented with the opportunity to do microtasks whenever they feel the need for a
commodity, and on completion of the microtasks, they get their desired commodity as
remuneration. At a strictly objective level, it is effectively a change in the currency of
remuneration, but subjectively we hypothesize that getting a commodity ‘for free’,
particularly at the time when said commodity is to be consumed, is perhaps a better
motivator to do the microtasks than simply working for money.
We present two field studies to explore the applicability of our proposed model in
real-life scenarios. Results show that our proposed model increases the reach of
microtasking services to those user segments which are less likely to join existing
microtasking services like mTurk. We also found that users have different motivations
to work on microtasks such as “desire to earn”, “desire to save”, and “desire for
commodity”. The conventional crowdsourcing models only appeal to their “desire to
earn”, while our proposed model can fulfill all three desires.
In the next two sections, we describe our crowdsourcing model and give an overview
of the related work. Then, we describe evaluations with user populations in urban India
and report their results before finally outlining and discussing the key findings.

2 Model Description

Our proposed commodity-centric crowdsourcing model (CCCM) assumes that there is a repository which consists of microtasks contributed by various work-providers. In the conventional crowdsourcing model (for example, mTurk), crowd workers go to this microtask repository and express a desire to do tasks. The repository first collects information on the background and qualifications of the workers and then pushes appropriate tasks to them.

Fig. 1. Overview of Commodity-Centric Crowdsourcing Model

In CCCM (see Fig. 1), the microtasking repository does not interact with the workers directly, but through an intermediary we call a Commodity Provider. A Commodity Provider can be any entity which offers a commodity to the worker and in return asks the worker to perform microtasks. A commodity may be a good or a service. For example, an auto-rickshaw¹ driver can be a Commodity Provider who provides auto-rickshaw rides for free or at a discounted price in return for the performance of some microtasks, or the ACM digital portal can be a Commodity Provider which provides scientific articles to a student worker on completion of a microtask. It is important to note that at some level a commodity could be equated to cash, since most tangible goods and services essentially have a monetary value in most societies. Of course, the nature of the commodity might well determine how easily a cash equivalent can be determined; for example, a discount on an auto-rickshaw ride has an obvious cash equivalency. However, we believe that by casting the compensation as a commodity in terms of how it is described and provided, it
enables users not to think of it in direct monetary terms, and as such they might well ascribe a value to the commodity that they would not typically ascribe to the monetary equivalent. The findings of recent studies in the economics literature [17, 18] also highlight the advantage of commodity compensation over its monetary equivalent.

¹ An auto-rickshaw is a three-wheeled vehicle which is very common for public transport in India. Auto-rickshaws offer a cheaper alternative to taxis and attract a large number of passengers every day; more than 100,000 auto-rickshaws currently operate within the city of Bangalore alone (http://en.wikipedia.org/wiki/Auto_rickshaw).
When users (potential crowd workers) approach a Commodity Provider, they are given the option of doing a microtask to get the commodity, or they can choose to pay for the commodity as they normally would. If they decide to do microtasks, the Commodity Provider fetches tasks from the repository and passes them to the workers. The credit earned by the workers after completing the task is exchanged for the commodity being offered by the Commodity Provider. In case the credit earned is less than the value of the commodity, the workers get a discount on the commodity proportional to the credits earned, and they pay the remaining amount in cash. Later, the microtasking service pays money to the Commodity Provider in return for the task credits, along with a small commission for its services.
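To make this exchange concrete, here is a minimal sketch in Java of the settlement arithmetic just described; the class and method names are our own, chosen for illustration only, and the commission rate is a parameter set by the microtasking service.

// Minimal sketch of the credit-for-commodity settlement described above.
// All names are illustrative; the commission rate is chosen by the
// microtasking service (Study 1 later uses 20%).
public class CommoditySettlement {

    /** Portion of the commodity price covered by the worker's task credit. */
    static double discount(double commodityPrice, double taskCredit) {
        return Math.min(taskCredit, commodityPrice);
    }

    /** Remainder the worker still pays in cash. */
    static double cashDue(double commodityPrice, double taskCredit) {
        return commodityPrice - discount(commodityPrice, taskCredit);
    }

    /** What the microtasking service later pays the Commodity Provider. */
    static double providerPayout(double discountGiven, double commissionRate) {
        return discountGiven * (1.0 + commissionRate);
    }

    public static void main(String[] args) {
        double price = 40.0, credit = 25.0;   // e.g. an auto-rickshaw fare and earned credit
        double d = discount(price, credit);
        System.out.printf("discount=%.2f, cash due=%.2f, provider payout=%.2f%n",
                d, cashDue(price, credit), providerPayout(d, 0.20));
    }
}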
Because a crowd worker will expect instant remuneration for his/her work, this model is better suited to tasks which can be done in spurts (surveys, image categorization) and do not require formal verification. For example, crowd workers may not prefer an essay-writing task which requires quality checks from the task giver, resulting in a delay in remuneration.
While CCCM is applicable to both physical Commodity Providers like auto-rickshaw drivers and online Commodity Providers such as the ACM digital portal (for scientific articles), in this paper we are mainly interested in studying the application of CCCM in physical settings in developing regions. We argue that the integration of CCCM with physical Commodity Providers can increase the visibility of microtasking services. The Commodity Provider can leverage the high mobile penetration in developing regions to distribute tasks to the workers on their mobile phones, hence solving the problem of Access. CCCM also addresses the problem of payment mechanisms to a great extent: instead of paying all the crowd workers, the microtasking service only has to pay the Commodity Providers, who are far fewer in number than the crowd workers.

3 Related Work

Perhaps the closest and most relevant work related to our proposed model is reCAPTCHA [9], which asks a user of a system 'X' to solve image captchas in order to get access to the system X. In the context of our model, the system X is the Commodity Provider and 'access to X' is the commodity for which a user will do the image captcha task. In contrast to reCAPTCHA, we explore the applicability of the CCCM model in physical settings in developing regions to solve the problems of Visibility and Payment.
There has been some interesting work on developing microtasking services that
specifically target workers in developing countries. txteagle [4], started in Kenya, is one
such service which makes use of standard channels like text, voice, and USSD to
distribute and administer tasks to the workers. Sample tasks include software
localization, evaluation of search results, categorization of blog sentiments, and market
research. Payment to the workers is made in the form of mobile airtime. MobileWorks
[6] is another such service which uses a web-based mobile application to distribute OCR
tasks and pays its workers in cash. SamaSource [12] is a non-profit organization seeking to empower workers in developing countries. It recruits and trains workers (women, youth, and refugees) to work on microtasks and earn a livelihood. Ushahidi [7] is an open source platform from Kenya which allows users to crowdsource crisis information through text messaging on a mobile phone, email, and the web.
In addition to these, there are more than 50 other companies running online task marketplaces of various kinds [2]. Besides mTurk, some examples include CrowdFlower, CrowdSifter, CloudCrowd, LiveWork, LogoTournament, CastingWords, and SmartSheet, which draw workers from developing countries. All the listed examples are internet-based solutions and fail to tackle the issues of Access and Payment.
Among all the crowdsourcing services mentioned above, txteagle makes use of
mobile phones to distribute tasks which makes it more accessible to the workers in
developing regions. It pays the workers with mobile airtime to solve the Payment
problem. However, txteagle’s approach is different from our proposed Commodity-
Centric Crowdsourcing Model (CCCM) in many ways. At a high level, txteagle
follows an mTurk-like model where users approach the microtasking service and work on some tasks to get paid. In contrast, in our model users work only when they need a commodity. CCCM makes sure that there is a need for the commodity before pushing tasks to the workers, whereas in txteagle tasks are pushed irrespective of the need for the commodity. Apart from solving the issue of
payment, we conjecture that our model would expand the range of the crowdsourcing
workforce by bringing in workers of different demographics.
Finally, there has been work on bringing microtasking services into the physical world. Alt et al. [1] developed a mobile application to facilitate location-based crowdsourcing. Other researchers [10, 15, 16] discuss different approaches that use sensing devices like smartphones to get people at specific locations to contribute to microtasks.

4 Evaluation
We explored the effectiveness and applicability of the CCCM model in developing regions via two user evaluations in urban India. The first (primary) study focused on evaluating the basic premise of the CCCM model with potential target populations, while the second (ancillary) study is a follow-up intended to see if the CCCM model might also apply to populations who might have previously participated in more conventional mTurk-like activities online.

4.1 Study 1
The primary focus of our work is to determine whether or not the CCCM model is viable, and to gauge its potential amongst user populations that currently do not partake in conventional crowdsourcing activities. One example of such a user population is people in the lower- to middle-income demographic in urban Indian cities who have some technological literacy but do not necessarily use technology extensively in their daily lives, and who might be motivated by payment in commodity. We also had to decide on an appropriate Commodity Provider for this initial validation
study. Our main criterion in this regard was to pick a Commodity Provider who came into contact with a broad cross-section of the target user population in their regular daily business activity, and who could also capture the attention of the users for a reasonable period of time. One class of Commodity Provider that met this criterion is auto-rickshaw drivers, as they tend to cater to a broad population base and, crucially, have a "captive" audience for the duration of the rickshaw ride. As such, we enlisted auto-rickshaw drivers as the Commodity Providers; they offered the auto-rickshaw ride (i.e. the commodity) for free or at a discounted price, and in return asked the passengers to complete microtasks on mobile phones.

Participants. Three auto-rickshaw drivers from Bangalore, India participated in our first study as commodity providers. Two drivers were selected at random and one was selected via a referral. The drivers were male, in the age group of 25-35 years. Their monthly earnings were in the range of Rs. 15,000-20,000 (Rs. 50 = ~ USD 1). None of them were fluent in spoken English, but they could identify common English words such as 'Hello', 'Start', 'Exit', 'Right', 'Left'. Their languages of communication were Hindi and Kannada (the local language spoken in Bangalore). All of them were numerically literate, with an education level below 10th grade. All of them owned a mobile phone, which was primarily used for dialing and receiving calls.

Methodology. The auto-rickshaw drivers were given a Java-enabled mobile phone with a pre-loaded microtasking application (details in the next section). They were instructed to offer their passengers (crowd workers) an opportunity to avail of a discount on the journey fare in return for working on the microtasks. The total amount earned by a passenger was discounted from the journey fare. A discount was given only if the work done by a passenger was worth more than Rs. 5, and the maximum discount could not exceed the journey fare. For their service as Commodity Providers in our model, drivers received a 20% commission on the work done by their passengers.
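As a hedged illustration of these study-specific rules (a Rs. 5 minimum before any discount applies, a cap at the journey fare, and end-of-day reimbursement with a 20% commission), the sketch below specializes the general settlement from Section 2; the names and structure are ours, not code from the deployed application.

import java.util.List;

// Sketch of the Study 1 discount rules; illustrative names only.
public class Study1Discount {

    static final double MIN_WORK_VALUE = 5.0;   // Rs. 5 threshold before any discount
    static final double COMMISSION = 0.20;      // 20% driver commission

    /** Discount applied to one trip for the given value of completed work. */
    static double tripDiscount(double journeyFare, double workValue) {
        if (workValue < MIN_WORK_VALUE) return 0.0;   // below threshold: no discount
        return Math.min(workValue, journeyFare);      // never exceeds the fare
    }

    /** End-of-day reimbursement to a driver: total discounts given plus commission. */
    static double dailyReimbursement(List<Double> discountsGiven) {
        double total = discountsGiven.stream().mapToDouble(Double::doubleValue).sum();
        return total * (1.0 + COMMISSION);
    }

    public static void main(String[] args) {
        System.out.println(tripDiscount(41.0, 3.0));                 // 0.0: below Rs. 5
        System.out.println(tripDiscount(41.0, 55.0));                // 41.0: capped at fare
        System.out.println(dailyReimbursement(List.of(25.0, 41.0))); // 79.2
    }
}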
We put flyers in Kannada and English in front of the passenger’s seat which
provided instructions to the passengers on running the microtasking application. Each
auto-rickshaw driver was given a small pocket diary and was asked to record, for every journey, the date, the passenger's gender and approximate age, the total journey fare, the discount offered, and the journey duration. Before the study,
a researcher trained the drivers on using the application and ensured that they
understood the purpose of the application.
We conducted semi-structured interviews with the auto-rickshaw drivers at the end
of the day to get their feedback as well as the passengers’ reactions towards the
microtasking application. The total discount given by the auto-rickshaw drivers on
that day was reimbursed to them along with the 20% commission. Daily payment was necessary to maintain the drivers' trust in the system. Apart from the commission, a fixed compensation of Rs. 500 was given to each driver for participating in the study.
The microtasking application had a data logging feature which recorded the
performance of workers on each microtask. At the end of the study, we collected all
the logs for analysis.

Microtasking Application. We developed a J2ME application which can be used to work on various microtasks. We deployed our application on a Nokia C2-01, which costs Rs. 4000. The application starts with a welcome screen and prompts the user to choose between two modes: Passenger Mode and Driver Mode. Fig. 2 shows the application in Passenger and Driver Modes.

Fig. 2. (a) Screenshot of Passenger Mode; (b) Screenshot of Driver Mode

Passenger Mode. In the Passenger Mode, users are shown a list of all available microtasks. The order of tasks in the list is chosen randomly at the start of the application so as to avoid any bias caused by task ordering on users' task preferences. Users can work on tasks of their choice and are allowed to switch between tasks at any time. The asterisk key (*) is used to exit the current task and return to the task list. The top of the screen shows the Balance, i.e., the total discount accumulated by the passenger. 'Balance' is a colloquial term for prepaid credit in the context of mobile phones in India; using this word made it easier for both passengers and drivers to understand its meaning in the context of our application.
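A minimal sketch of the Passenger Mode behaviour described above (shuffled task order and a running Balance) is given below. It is written in plain Java rather than J2ME, and the class and method names are our own illustration, not the deployed application's code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only: the task list is shuffled once at start-up to
// avoid ordering bias, and the passenger's Balance (discount earned) accumulates.
public class PassengerMode {

    private final List<String> taskList = new ArrayList<>(List.of(
            "Image Categorization", "Image to Text", "Audio to Text", "Survey"));
    private double balance = 0.0;              // total discount, in Rs.

    public PassengerMode() {
        Collections.shuffle(taskList);         // random order at application start
    }

    public List<String> availableTasks() {     // user may switch tasks at any time
        return taskList;
    }

    public void creditReward(double reward) {  // "Balance" shown at the top of the screen
        balance += reward;
    }

    public double balance() {
        return balance;
    }

    public static void main(String[] args) {
        PassengerMode mode = new PassengerMode();
        System.out.println("Task order: " + mode.availableTasks());
        mode.creditReward(0.2);                // e.g. one correctly categorized image
        System.out.println("Balance: Rs. " + mode.balance());
    }
}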

Driver Mode. In order to reduce the learning curve for the drivers, we kept the driver interface very simple, with minimal functionality. In this mode, drivers can view:
a) the total Balance for the last passenger, and
b) the total Balance for all passengers in a day.

Choice of Tasks. We did a survey of all available tasks on mTurk and found four
categories of tasks which can be supported by low-end mobile phones with basic text
and voice capability:
a) Selection Tasks (ST), which require users to select an answer from a set of
options,
b) Data Entry Tasks (DET), which require users to type in data from any source into
the application,
c) Transcription Tasks (TT), which require users to convert speech into text, and
d) Language Translation Tasks (LTT), which require users to translate text from one
language to another.

In our application, we included at least one task representing each category except Language Translation Tasks. LTT were deliberately left out because typing in a non-English language is challenging on a low-end mobile phone. Table 1 shows all the available tasks and the rewards associated with them.
The tasks on Image Categorization (IC) were borrowed from mTurk, while handwritten notes of a college student were scanned to generate images for the IT task. For the AT task, we used audio clips of numbers (for example, one, two) instead of clips of English words (for example, cat, dog). This was done to ensure that proficiency in the English language does not affect a worker's performance on the task. Lastly, the SV task was designed to collect demographic information such as age, gender, education level, and monthly income of users. Both the IC and IT tasks had 100 images each, while 20 audio clips were available in AT. There was only one SV task, with four questions on user demographics. Fig. 3 shows the design of all four available tasks.

Table 1. Types of ST, DET and TT tasks supported by our microtasking application

Task                      | Description                                               | Reward (Rs. 50 = ~ USD 1)     | Task Category
Image Categorization (IC) | Look at an image and answer YES if it contains a person  | Rs. 0.2 per image             | Selection
Image to Text (IT)        | Type the word shown in the scanned image                 | Rs. 0.2 per image             | Data Entry
Audio to Text (AT)        | Convert a 5-6 sec audio clip to text                     | Rs. 1 per audio               | Transcription
Survey (SV)               | Choose an answer from multiple options                   | Rs. 5 for the complete survey | Selection

It is important to note that we did not crawl mTurk or other microtasking services to import their tasks automatically into our application. Instead, we manually chose particular tasks for our application which are suitable for Indian users. For example, most of the AT tasks on mTurk have audio in an American accent, which might be difficult for Indian users to understand. Therefore, we chose to use numeric audio clips in an Indian accent for our AT tasks. In short, the format and categories of the tasks in our application were similar to the tasks on popular microtasking services like mTurk, but the content of the tasks was tailored to suit the target users.
As mentioned in the model description, the need for instant remuneration makes it difficult to validate the work done by the workers. However, we wanted to ensure that the passengers were doing the work seriously instead of merely guessing or randomly answering the questions in the task. To achieve this, we introduced a "qualification phase" at the beginning of each task, which consisted of a few challenges whose answers were already known to us. It should be noted that the users (passengers and drivers) were not aware of the qualification phase. During the qualification phase, each user response is verified, and the reward is credited to the user's balance only if the answer is correct. If users answer 80% of the challenges correctly, they are allowed to proceed to the remaining task; otherwise they are asked to work on some other task.
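The sketch below illustrates how such a qualification phase might be implemented, assuming a simple map of known 'gold' answers; the names and structure are ours and are simplified relative to the deployed J2ME application.

import java.util.Map;

// Illustrative sketch (not the deployed J2ME code) of the hidden qualification
// phase: answers to a few known "gold" challenges are checked, the reward is
// credited only for correct answers, and the full task unlocks at >= 80% accuracy.
public class QualificationPhase {

    /** Number of gold challenges the user answered correctly. */
    static int countCorrect(Map<String, String> goldAnswers,
                            Map<String, String> userAnswers) {
        int correct = 0;
        for (Map.Entry<String, String> gold : goldAnswers.entrySet()) {
            String given = userAnswers.get(gold.getKey());
            if (gold.getValue().equalsIgnoreCase(given)) {
                correct++;
            }
        }
        return correct;
    }

    public static void main(String[] args) {
        // Five hidden challenges for the Image Categorization task (Rs. 0.2 each).
        Map<String, String> gold = Map.of("img1", "YES", "img2", "NO", "img3", "YES",
                                          "img4", "YES", "img5", "NO");
        Map<String, String> answers = Map.of("img1", "YES", "img2", "NO", "img3", "YES",
                                             "img4", "NO", "img5", "NO");
        int correct = countCorrect(gold, answers);
        double balance = correct * 0.2;                   // credit only the verified answers
        boolean qualified = correct >= 0.8 * gold.size(); // 80% gate to unlock the task
        System.out.printf("correct=%d, balance=Rs. %.1f, qualified=%b%n",
                correct, balance, qualified);
    }
}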

Fig. 3. Tasks available in the microtasking application

Results. The results of our study are promising and suggest that the Commodity-Centric Crowdsourcing Model indeed has potential in developing countries. During the two-week study, auto-rickshaw drivers offered the phone to 204 passengers for doing the microtasks, out of which 174 (25 female, 149 male) accepted the offer and availed of a discount worth more than Rs. 5. The total discount availed by the 174 passengers altogether was Rs. 4433 (µ = 25.4, σ = 11.9). On average, each passenger worked on ~79 microtasks, for a total of 13,781 completed IC, IT and AT microtasks. Fig. 4 shows the distribution of discounts among passengers. More than 100 passengers got a discount in the range of Rs. 15-25.

Fig. 4. Distribution of discounts among passengers (Rs. 50 = ~ USD 1)

The average journey fare and journey duration were Rs. 41.2 and 27 minutes respectively, while the average time spent on microtasks was 13 minutes. As expected, we observed a strong correlation between journey fare and discount (Pearson's r(172) = 0.77, p < 0.05). Fig. 5 shows the results from the survey task. Out of the 174 passengers, 71 (15 female, 56 male) responded to the survey task (SV). 73% of the respondents had an education level of grade 12 or lower, and more than 50% of the respondents had a monthly income of less than Rs. 5000. In contrast, a recent mTurk survey of 200 Indian workers reported that nearly 80% of the respondents had
at least a bachelor’s degree [5]. This result implies that CCCM is capable of reaching
segments of workers who typically are not mTurk users.

Fig. 5. (a) Education level of the survey respondents; (b) Monthly income of the survey respondents

Task Accuracy. The accuracy for the Image Categorization (IC) and Image-to-Text (IT) tasks was 91.2% and 92.5% respectively, while Audio-to-Text (AT) had an accuracy of 79.65%. One possible explanation for the lower accuracy in AT is the traffic noise inside the auto-rickshaw², which might have made it difficult for users to hear the audio. Fig. 6 shows a user listening to the audio inside an auto-rickshaw.

Fig. 6. A user sitting inside an auto-rickshaw works on audio transcription, keeping the phone close to his ear to listen to the audio

Task Preference. IC and AT were clearly the favorites among the users, with 66% and 64% of users respectively attempting each, while only 24% of users attempted the IT task. The majority of users who started the IC or AT task carried on to finish all the available challenges³ before moving on to another task. Only 17% of users who started working on IT carried on to finish all the available challenges for IT. The low response to IT is understandable, as mobile text entry is relatively difficult and takes more time. Although AT also required users to enter text, we believe that the idea of listening to an audio clip made it more alluring for users to do the task. Next we discuss the qualitative findings of our study.

² An auto-rickshaw does not have doors on either side, which makes it difficult to avoid the surrounding noise.
Change in Work Behavior of Auto-Rickshaw Drivers. On the 4th day of the study, two of the auto-rickshaw drivers told us that they preferred to serve those passengers whom they thought would be able to work on the microtasks. They would often go and wait near educational institutions (for example, colleges and private tuition institutes) hoping to serve a student, even if it required them to travel an extra mile to get there. Earlier they used to wait outside temples, hospitals, and shopping malls, but now they preferred to wait near places where they could find potential workers for the microtasks. Additionally, they started preferring passengers travelling shorter distances (30-45 minute drives) so as to reduce the loss of time in case a passenger declined to work on the microtasks during the journey.

Selection Bias by Auto-Rickshaw Drivers. Auto-rickshaw drivers would often decide whether to offer a passenger the phone based on his/her age, gender, appearance, boarding point, and familiarity with English. Instead of offering the phone to the passenger right at the start of the journey, the drivers chose to interact with them for a few minutes and gauge their ability to do microtasks. Only when they thought that the passenger might be able to do some tasks would they offer the phone to him/her.
This result is particularly interesting because it shows that the drivers were using their "human intelligence" to profile the workers. Microtasking services such as mTurk also ask users for their profile information at sign-up and assign tasks accordingly. The drivers accomplished the same using their human intelligence.

Motivated Auto-Rickshaw Drivers. The auto-rickshaw drivers were quite excited about the system and wanted to take full advantage of the earning opportunity presented to them. One of the drivers commented: "God has given me this golden opportunity to earn some extra money. Now I have to work hard and earn as much [money] as I can."
Happy Passengers. We interviewed 5 passengers (2 female, 3 male) to get feedback on the system. Three of them were college students, one was employed, and one was a housewife. All the participants said that they would like to work on these tasks mainly because a) it allows them to get an immediate discount on the auto-rickshaw fare, and b) it is a good way to pass time during the journey.
Auto-rickshaw drivers often mentioned that passengers returned a small share of the discount as a gesture of regard (like a tip) towards the driver. This amount varied from Rs. 1 to 10. The custom of tipping auto-rickshaw drivers is not at all common in India; the only reason why the passengers gave this tip was that they were happy with the discount given by the auto-rickshaw driver. One of the drivers quoted a passenger saying:
"I [passenger] am very happy today; you [driver] have given me a discount, I will also give you some discount."

³ As mentioned in the description of the microtasking application, the IC and IT tasks had 100 images each, while the AT task had 20 audio clips.
Passengers Work More When They Are Travelling in a Group. Out of the 174 passengers, 45 were accompanied by one or more people. We observed that multiple-passenger trips earned greater discounts than single-passenger trips (t(172) = 2.89, p < 0.01). This result was surprising because we were expecting that people travelling in a group would spend less time working on tasks, as they might be busy talking to each other. We also observed that passengers travelling in a group solved AT with an accuracy of 89.45%, which is greater than the overall accuracy of AT (79.65%). Although we do not have any data to explain the cause of this result, we believe that the presence of one more person might have enhanced the ability of the group to hear, interpret and remember the content of the audio, thus resulting in higher accuracy.
Retained Interest of Passengers. We came across 6 cases where a passenger travelled twice in the same auto-rickshaw. The auto-rickshaw drivers reported that on the second journey these passengers immediately asked for the phone. Many passengers also asked the drivers for their phone number and showed interest in travelling regularly with them.

4.2 Study 2
Results of the first study show that CCCM is capable of reaching segments of workers who typically are not mTurk users, by bringing crowdsourcing tasks to them and by offering commodity-based compensation. This is the key result that bolsters our premise for the CCCM model. As an added exploration, however, we felt it might be useful to see if the model also appeals to a typical mTurk user (e.g. a college student). In essence, in addition to expanding the reach of crowdsourcing tasks to broader populations, as shown in Study 1, we are looking at whether a simple change in compensation from monetary to commodity might make a difference to existing populations who already partake in crowdsourcing activities. While this second study, unlike Study 1, is arguably not as crucial to assessing the validity of the entire CCCM model, it nonetheless sheds some light on the compensation aspect of the model. Therefore, we designed a comparative user study with college students in urban India to compare their reactions to CCCM against an mTurk-like interface.
Participants. Eighteen undergraduate students (5 female, 13 male) from an engineering college in Gandhinagar (India) participated in the study. Participants were aged between 19-22 years and were enrolled in a Computer Science program. The students were recruited through an open call via email and public announcement. All the students lived on the college campus and each participant owned a PC with 24-hour internet connectivity. None of the participants had prior exposure to mTurk or any other microtasking platform.
Methodology. To compare the CCCM model against the conventional mTurk-like crowdsourcing model, we created two different web interfaces. The first interface (I1) was built on the mTurk model, where users can log in and work on microtasks to earn money. The second interface (I2) was a meal and beverage coupon gallery, where users can do a microtask in return for a food or beverage (i.e. commodity) coupon. Because the students lived on the college campus and bought their daily meals from the college cafeteria, we chose meal and beverage coupons as the commodity for our crowdsourcing model. I2 had coupons for five different varieties of food items valued in the range of Rs. 10-40. In order to get a coupon, users had to complete microtasks of equivalent value. The coupons could only be redeemed at the college cafeteria. We bought coupons from the college cafeteria in advance and gave them to the students on completion of the microtasks. In both I1 and I2, the microtasks submitted by the workers were verified, and the workers were informed about their acceptance within 24 hours of submission.
Table 2 below shows the list of available tasks and the reward associated with each of them. All the tasks and their rewards were taken from mTurk. The Article Writing (AW) task required the worker to write a 200-300 word article on a given topic. The reward for each topic was different and varied between Rs. 10-40. In the Audio Transcription (AT) task, workers had to transcribe English-language audio clips, while the Extract Text from Images (ETI) task required the workers to extract textual content from an image. All of these tasks can be found in abundance on mTurk, and they attract a large number of workers with varying skill sets.

Table 2. Available tasks in I1 and I2

Task                           | Description                                           | Reward (Rs. 50 = ~ USD 1)
Article Writing (AW)           | Write a 200-300 word article                          | Rs. 10-40
Audio Transcription (AT)       | Transcribe 10 audio files, each 5-7 secs in duration  | Rs. 1 per audio
Extract Text from Images (ETI) | Identify and extract content from 20 scanned images   | Rs. 0.50 per image

We conducted a within-subject experiment in which participants were randomly divided into two groups. For counterbalancing, one group was subjected to I1 first and I2 later (with a gap of one day in between to verify the tasks submitted for I1), and vice versa for the second group. The study was conducted over a week, with each group using I1 and I2 for 3 days each. At the end of the study, follow-up interviews were conducted with all the participants. For I1, students could collect their cash earnings from the researcher after their tasks were approved. For I2, coupon codes were sent to the users on their mobile phones after the task was approved. Apart from this, each participant was given Rs. 50 for their participation in the study.

Results. Out of the 18 participants, two failed to participate in the second half of the study, resulting in a total of 16 participants (8 in each group). In I1, participants completed tasks worth Rs. 690 (µ = 43.12, σ = 58.49), as compared to Rs. 1460 (µ = 91.25, σ = 88.73) in I2.
A paired t-test shows a trend that users worked and earned more in I2 (CCCM) than in I1 (mTurk model) (t(15) = 2.04, p < 0.06). Fig. 7 shows the distribution of tasks
completed in both I1 and I2. Extract Text from Images (ETI) received the most completions among the three available tasks. During the exit interviews, participants reported that ETI was the easiest of the three tasks, while Audio Transcription (AT) and Article Writing (AW) were both challenging and required more time to complete. A few of the participants reported problems with audio streaming, which might be a reason for the low popularity of AT.

Fig. 7. Number of tasks completed by users in I1 and I2

Seven participants out of 16 said they would prefer I2, while 5 participants voted for I1, arguing that once they leave the college campus, the coupons will lose their importance. The remaining 4 participants were neutral because they felt that the amount of work required in both models is the same. A participant commented that he prefers I2 because it allows him to fulfill his desires and also save money at the same time. Giving an example, he said:
"As a student, I have to spend my money wisely and cannot afford to eat burger often; but this [coupon] gives me opportunity to do so. If I get money instead, I will think of saving the money and may not fulfill my desires".
Therefore, his desire for a commodity (burger) motivated him to do the microtasks.
Another user mentioned:
“I eat here (college cafeteria) daily, so these coupons could be used daily. Also it
feels good to get something for free.”
Overall, we observed three types of motivations for participants to work on
microtasks:
M1) Desire to earn - participants thought that the platform helps them earn something, either money or a commodity.
M2) Desire to save on daily expenses - participants thought that the platform enables
them to save on daily expenses by giving a commodity for free.
M3) Desire for commodity – participants thought that the platform helps them satisfy
their longing for the commodity.
The decision to work on I1 is based only on motivation M1, while all of M1, M2, and M3 come into play when a user is exposed to I2. On the basis of these results, we argue that our model is capable of attracting users with varied motivations. Moreover, we believe that microtasks, when tied to a commodity, can leverage the existing visibility of the commodity, thereby increasing the overall visibility of microtasking platforms.

5 Discussion
CCCM Increases the Visibility and Reach of Microtasking Services. Out of the 71 passengers who completed the survey task in the first study, more than 73% had an education level of grade 12 or less. In contrast, past surveys with mTurk users in India have reported that a large majority of the users had at least a bachelor's degree [5, 11]. This result implies that CCCM has the potential to reach those segments of workers which are less likely to be on mTurk.
We also found that the educated and technology-savvy crowd workers in our second study had different motivations to perform microtasks, such as 'desire to earn', 'desire to save', and 'desire for commodity'. Services like mTurk only cater to the 'desire to earn', thus leaving out a section of crowd workers who may have other motivations. CCCM, however, attracts workers with all three motivations and can therefore increase the adoption of microtasking services even among educated and technology-savvy users.
User Profiling and Task Distribution. We observed that the auto-rickshaw drivers offered the mobile phone only to selected passengers. The selection criterion was based on their perceived understanding of a passenger's capability to work. The main factors affecting their choice were the passenger's gender, age, language of communication, attire, and boarding point of the journey. This result is particularly interesting because it shows that the drivers were using their "human intelligence" to profile the workers. Microtasking services such as mTurk also ask for a worker's profile information at sign-up and assign tasks accordingly. The drivers accomplished the same using their "human intelligence" and their perceived understanding of a user's profile.
We believe that the intelligence of the human mediators (commodity providers) can be used to recruit workers and distribute tasks effectively. For example, in the auto-rickshaw scenario, we can group the microtasks into the following user categories: 1) College Student, 2) Housewife, 3) Working Professional, and 4) Other. Before handing over the phone to the passenger (crowd worker), an auto-rickshaw driver can choose one of these categories based on his perceived profile of the passenger. This will ensure that the microtasks given to a worker are relevant to them. For instance, a task related to food recipes can be pushed to a housewife.
Additionally, relevant tasks can be distributed based on the commodity chosen by a worker. For example, a person seeking to purchase an online scientific article is likely to be capable of performing intellectual tasks like article writing. In the future, we will explore these task distribution mechanisms based on human intelligence and commodity choice.
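As one possible illustration of such a mechanism, the following hypothetical sketch filters a task repository by the category the driver picks for a passenger; none of the types or method names come from an existing system, and an analogous filter could key on the requested commodity instead.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: route microtasks using the provider's perceived profile
// of the worker (as in Study 1); the commodity itself could be used analogously.
public class TaskRouting {

    enum UserCategory { COLLEGE_STUDENT, HOUSEWIFE, WORKING_PROFESSIONAL, OTHER }

    record Microtask(String name, List<UserCategory> suitableFor) {}

    /** Tasks a driver would hand over after choosing a category for the passenger. */
    static List<Microtask> tasksFor(UserCategory perceived, List<Microtask> repository) {
        return repository.stream()
                .filter(task -> task.suitableFor().contains(perceived))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Microtask> repository = List.of(
                new Microtask("Food recipe survey", List.of(UserCategory.HOUSEWIFE)),
                new Microtask("Article writing", List.of(UserCategory.COLLEGE_STUDENT,
                                                         UserCategory.WORKING_PROFESSIONAL)),
                new Microtask("Image categorization", Arrays.asList(UserCategory.values())));
        // The driver picks HOUSEWIFE after a short conversation with the passenger.
        System.out.println(tasksFor(UserCategory.HOUSEWIFE, repository));
    }
}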
Choice of Tasks and Commodities in CCCM. One of the characteristics of CCCM is that the commodity provider remunerates the workers right after the microtask is completed. This need for instant remuneration, however, leaves little time for task verification. Secondly, when crowd workers are in need of a commodity, they may not have time to work on lengthy microtasks.
Therefore, tasks which can be (a) done in spurts and (b) completed without formal verification are better suited for this model. For example, tasks involving content verification, categorization, surveys, and OCR will be preferred over tasks like essay writing.

The choice of commodities in CCCM should be based on the type of microtasks that we want to get done by the workers. For example, microtasks related to surveys and advertisements would benefit from having new crowd workers every day. Such microtasks would pair well with a commodity such as 'auto-rickshaw fare' (Study 1), which is more likely to see new workers every day. Similarly, a microtask which requires data from the same set of workers over a period of time would benefit from commodities like 'cafeteria food coupons' (Study 2), as the cafeteria is more likely to see the same set of college students every day.
Human Intermediaries as Pervasive Computing Resources. It is clear that the
human intermediaries (commodity providers) have a major role to play in the CCCM
model. The auto-rickshaw drivers used their human intelligence to profile the
passengers and offered the mobile phone only to those passengers who they perceived
as qualified enough to work on the microtasks. They also helped the passengers in
resolving any queries about the interface or the tasks.
It is important to devise proper incentives for the Commodity Providers to keep them
motivated over time. We offered a 20% commission on the value of the microtasks to
the auto-rickshaw drivers and found that they were happy with it. Other incentive
mechanisms like fixed monthly salaries for Commodity Providers can also be explored.
Payment to Commodity Providers. CCCM reduces the complexity of payment by
the microtasking service. Instead of paying all the crowd workers, a microtasking
service only has to pay the Commodity Providers. For our study, we paid the
commodity providers (auto-rickshaw drivers) in cash. However, in a real-life system
the amount can be transferred into their bank accounts.
If the Commodity Providers do not have a bank account, as was the case with the
three auto-rickshaw drivers we recruited, they can be given a commodity relevant to
them. For example, the auto-rickshaw drivers require fuel on a daily basis, so a
microtasking service can give them fuel credits which can be redeemed at different
fuel stations. The microtasking service can then do a banking transaction with the fuel
station, which is more likely to have a bank account.
Microtask Distribution in Physical Settings. In a real-world deployment of CCCM in physical settings, the distribution of microtasks can happen over SMS, as demonstrated by Gupta et al. [19]. When a worker approaches the commodity provider (e.g. an auto-rickshaw driver), the provider can send an authorization SMS to the microtask repository along with the cellphone number of the worker. In response, the microtasking repository can push the tasks to the worker's phone directly. After task completion, a notification about the total earnings can be sent to both the worker and the commodity provider.
Apart from reducing the burden on the commodity provider, this approach also helps the microtasking repository gradually create a profile of each worker based on the types of tasks they complete. This profile information can later be used to push relevant tasks to the workers.
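A rough sketch of how this SMS exchange might be structured is given below. It models the three messages described above (authorization, task push, earnings notification) with a toy in-memory 'gateway' that only prints messages; all names and the message format are assumptions for illustration, not the protocol of mClerk [19] or any deployed system.

import java.util.List;

// Toy sketch of the SMS-based distribution flow described above. The SmsGateway
// here just prints messages; a real deployment would use an actual SMS service,
// which we do not model.
public class SmsDistributionSketch {

    interface SmsGateway { void send(String toNumber, String body); }

    static class ConsoleGateway implements SmsGateway {
        public void send(String toNumber, String body) {
            System.out.println("SMS to " + toNumber + ": " + body);
        }
    }

    static class MicrotaskRepository {
        private final SmsGateway gateway;
        MicrotaskRepository(SmsGateway gateway) { this.gateway = gateway; }

        /** Steps 1-2: provider authorizes a worker; repository pushes tasks directly. */
        void authorize(String providerNumber, String workerNumber, List<String> tasks) {
            gateway.send(providerNumber, "AUTH OK for " + workerNumber); // acknowledge the provider
            for (String task : tasks) {
                gateway.send(workerNumber, "TASK " + task);              // push tasks to the worker
            }
        }

        /** Step 3: after completion, notify both worker and provider of the earnings. */
        void notifyEarnings(String providerNumber, String workerNumber, double earnings) {
            String message = "EARNED Rs. " + earnings;
            gateway.send(workerNumber, message);
            gateway.send(providerNumber, message);
        }
    }

    public static void main(String[] args) {
        MicrotaskRepository repository = new MicrotaskRepository(new ConsoleGateway());
        // The provider (e.g. an auto-rickshaw driver) sends an authorization SMS with the
        // worker's number; here we call the repository directly to keep the sketch short.
        repository.authorize("+91-PROVIDER", "+91-WORKER", List.of("IC#17", "AT#3"));
        repository.notifyEarnings("+91-PROVIDER", "+91-WORKER", 12.0);
    }
}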
Limitations of the Model. In our model, crowd workers do microtasks for a short period of time, which makes it hard for them to become task experts. However, in services like mTurk, workers repeatedly do the same type of microtask over a period of time, hence developing expertise in that microtask.
The need for instant remuneration in our model makes it challenging to use those microtasks (for example, summarizing a paragraph of text) which need verification or a quality check from the task provider. The worker would want the commodity instantly and may not want to wait until the verification is complete. We feel that tasks which can be completed in small spurts are more suitable for this model.
Clearly, our proposed model cannot replace the conventional model of crowdsourcing used by services like mTurk. However, it is an effective way of reaching a much more diverse population of crowd workers who are less likely to join mTurk-like services voluntarily.

6 Conclusion and Future Work


We presented a Commodity-Centric Crowdsourcing Model (CCCM) which enables users to get a commodity of their choice by working on microtasks. Our proposed model addresses the problems of low visibility of microtasking services and the lack of adequate payment mechanisms in developing regions. We conducted two user evaluations in urban India to understand the applicability of this model in developing regions.
For the first study, we created a prototype application for low-end mobile devices which was used by passengers of auto-rickshaws to work on microtasks. The results show that the passengers were motivated to work on microtasks in return for a discount on the auto-rickshaw fare. We were also able to reach crowd workers with very different demographics from a typical mTurk user, which demonstrates the ability of CCCM to increase the visibility of microtasking services. Our second study aimed to collect the reactions of typical mTurk-like users towards CCCM in comparison to the conventional crowdsourcing model. Results show that users have different motivations to work on microtasks, such as "desire to earn", "desire to save" and "desire for commodity". CCCM caters to all these motivations, while conventional crowdsourcing models only appeal to the "desire to earn". As a result, a higher number of microtasks was completed in the study with CCCM as compared to the conventional model.
We discussed the importance of human intermediaries (commodity providers) in user profiling and task distribution, and suggested ways of designing microtasking applications that leverage these capabilities of the human intermediaries. We also discussed the limitations of the model, which include: (a) it cannot create expert crowd workers, and (b) the need for instant remuneration limits the kinds of tasks that can be used in the model. We do not claim that CCCM will replace the conventional crowdsourcing model. However, we do believe that it can complement the conventional model and help microtasking services reach a much more diverse set of users without worrying about the complexity of paying them with money.
In the future, we want to further address the issues of user profiling and task distribution, and are excited about the idea of using human (Commodity Provider) intelligence for task distribution. We also plan to conduct long-term user studies with auto-rickshaw drivers to understand the dynamics of the model over a longer period of time.

Acknowledgements. We thank the reviewers and the shepherd for their valuable
comments. We also thank Bill Thies, Khai Truong, Animesh Nandi and Sharad
Jaiswal for their continuous assistance. Finally, we are grateful to our participants for
their time and effort.

References
1. Alt, F., Shirazi, A.S., Schmidt, A., Kramer, U., Nawaz, Z.: Location-based Crowdsourcing: Extending Crowdsourcing to the Real World. In: ACM Nordic Conference on Human-Computer Interaction (NordiCHI), Reykjavik, Iceland (October 2010)
2. Frei, B.: Paid Crowdsourcing: Current State & Progress towards Mainstream Business
Use. Smartsheet White Paper (September 2009)
3. BCG Report: The Internet’s new Billion: Digital Consumers in Brazil, Russia, India, China
and Indonesia (September 2010)
4. Eagle, N.: txteagle: Mobile Crowdsourcing. In: Aykin, N. (ed.) IDGD 2009. LNCS,
vol. 5623, pp. 447–456. Springer, Heidelberg (2009)
5. Khanna, S., Ratan, S., Davis, J., Thies, W.: Evaluating and Improving the Usability of
Mechanical Turk for Low-Income Workers in India. In: Symposium on Computing for
Development, DEV (2010)
6. MobileWorks, http://www.mobileworks.com/
7. Okolloh, O.: Ushahidi or ’testimony’: Web 2.0 tools for crowdsourcing crisis information.
Participatory Learning and Action (59) (2009)
8. Reserve Bank of India: Report on Trend and Progress of Banking in India (October 2009), http://rbidocs.rbi.org.in/rdocs/Publications/PDFs/RTP081110FL.pdf
9. reCAPTCHA, http://www.google.com/recaptcha
10. Reddy, S., Estrin, D., Srivastava, M.: Recruitment Framework for Participatory Sensing
Data Collections. In: Floréen, P., Krüger, A., Spasojevic, M. (eds.) Pervasive Computing.
LNCS, vol. 6030, pp. 138–155. Springer, Heidelberg (2010)
11. Ross, J., Irani, L., Silberman, M.S., Zaldivar, A., Tomlinson, B.: Who are the
crowdworkers?: shifting demographics in mechanical turk. In: CHI 2010, Atlanta, Georgia,
USA, April 10-15 (2010)
12. Samasource website, http://www.samasource.org/
13. TeamLease Services: Indian Labour Report 2007: The Youth Unemployability Crisis (2007), http://www.teamlease.com/images/reports/Teamlease_LabourReport_2007.pdf
14. Telecom Regulatory Authority of India (TRAI), http://www.trai.gov.in/Default.asp
15. Willett, W., Aoki, P., Kumar, N., Subramanian, S., Woodruff, A.: Common Sense
Community: Scaffolding Mobile Sensing and Analysis for Novice Users. In: Floréen, P.,
Krüger, A., Spasojevic, M. (eds.) Pervasive Computing. LNCS, vol. 6030, pp. 301–318.
Springer, Heidelberg (2010)
16. Yan, T., Kumar, V., Ganesan, D.: CrowdSearch: Exploiting Crowds for Accurate Real-Time Image Search on Mobile Phones. In: ACM MobiSys, San Francisco, CA (2010)
17. Kube, S., Maréchal, M.A., Puppe, C.: The Currency of Reciprocity - Gift-Exchange in the
Workplace. Institute of Empirical Research in Economics, University of Zurich (July
2008)
18. Kurosaki, T.: Wages in Kind and Economic Development: Their Impacts on Labor Supply
and Food Security of Rural Households in Developing Countries. Institute of Economic
Research, Hitotsubashi University (2008)
19. Gupta, A., Thies, W., Cutrell, E., Balakrishnan, R.: mClerk: Enabling Mobile
Crowdsourcing in Developing Regions. In: CHI 2012, Austin, Texas, US, May 5-10
(2012)
