
MANAGING THE CROWD: A STUDY ON VIDEOGRAPHY APPLICATION

Abstract
This paper examines the principles of managing groups of digital workers, known as crowds. So far,
empirical work on managing crowds (i.e. crowdsourcing practices) has been scarce. We present a
case study in which a mobile application was used to gather qualitative research data. Lessons
from the process are reported with regard to guidance, incentives, quality, and outcomes.
Managing crowds is a complex process and requires managers to update their thinking and know-how.
Our paper offers practical guidance based on first-hand experience.

Lauri Pitkänen  Joni Salminen
lauri.pitkanen@utu.fi joni.salminen@utu.fi

Address:
Lauri Pitkänen
Turun kauppakorkeakoulu
Rehtorinpellonkatu 3
20540 Turku


ABEAI 2013 Conference
Applied Business and Entrepreneurship Association International (ABEAI), Hawaii, 14–20.11.2013



1. Introduction
The Internet has resulted in a proliferation of knowledge work, and it also facilitates interaction
between firms and digital workers (Terranova, 2000). In particular, firms may use crowdsourcing
platforms [1] to find freelancers and other individuals to carry out specific tasks that vary in
complexity (Howe 2006; 2008). Although some companies have adopted crowdsourcing and outsourced
parts of their operations to consumers, the practice is still relatively unexploited and unknown to
most firms. Industries influenced by crowdsourcing include online markets, for example the online
photo bank business. Recently, platforms have integrated crowdsourcing practices into mobile
phones, e.g. so that consumers may sell their photos to news portals directly from their phones
(Scoopshot 2011). Via mobile crowdsourcing applications, firms may also offer tasks to consumers;
these tasks include information on requirements and rewards, known as task design. Crowds [2]
then, for example, upload their photos to the service, where the task owner can choose which photo
to use, and the contributor is credited the reward defined in the task.
In addition, crowds can be used to collect research data quickly and cost-effectively (Pitkänen
& Salminen, 2012). For example, Horton et al. (2011) study the power of online laboratories for
conducting experiments in digital labor markets, a possibility that is much more difficult to
realize in physical labor markets due to access concerns and the inability to manipulate incentives
(wages). In crowdsourcing, the researcher can freely set rewards and obtain exact data on the
responses of the crowd. Moreover, the influential paper by Benkler (2002) examines how physically
fragmented open source communities are able to develop complex systems through efficient and
timely solution of coordination problems, which are also strongly present in most labor markets.
Benkler (2002, 369) argues that peer production has a systematic advantage over markets and firms
in matching the best available human capital to the best available information inputs in order to
create information products. Clearly, crowds have expertise and potential in many areas, especially
in knowledge work. As a logical conclusion, firms that are able to leverage this potential may gain
competitive advantage.
The study examines the following questions:
1) What does the literature tell us about managing crowds?
2) What aspects should firms consider when applying crowdsourcing?
3) How can the success of crowdsourcing activities be evaluated?
The first question aims to define the method of managing crowds, the second how the method can be
applied, and the last how the application of the method can be evaluated. By answering these
questions, this study supports the development of crowd management at the theoretical and practical
levels. The theoretical level refers to providing definitions and description, and the practical
level to finding the best practices for executing tasks. Answering these sub-questions covers
comprehensively what crowdsourcing is, how it is applied, and how its quality can be evaluated.
These definitions are a good opening for the further development of crowdsourcing practices.
This study uses a literature review and evaluative research. First, the literature review defines
and describes the crowdsourced videographic method, its process for empirical application, and its
quality criteria. Then, an evaluative research method adapted from the evaluation of social
programs is used to evaluate a focal study on mobile gaming that was conducted with the
crowdsourced videographic method. The data used for the evaluation was secondary data about the
focal study, including the research task, screenshots from the mobile application, videos collected
from the participants, and evaluation by the authors. In the evaluation, the definitions and
descriptions made during the literature review are used as the basis against which the secondary
data is compared. This study thus develops crowdsourcing practices by defining and testing them.

[1] Some examples of these platforms include Amazon Mechanical Turk, oDesk, and Freelancer.com.
[2] A crowd is defined here as a physically and motivationally fragmented group of individuals
willing to perform tasks initiated by firms through a crowdsourcing platform.
2. Theoretical framework
Based on an earlier literature review, Pitkänen and Salminen (2012) propose a conceptual model for
videographic crowdsourcing consisting of four themes: 1) guidance, 2) incentives, 3) quality, and
4) representation (outcomes). The model is illustrated in Figure 1.

Figure 1. Model for videographic crowdsourcing (Pitkänen & Salminen 2012)

Two-way connections occur in the model between guidance and incentives, incentives and quality,
quality and guidance, guidance and representation, and representation and quality (Pitkänen &
Salminen 2012, 342). Incentives have only an indirect connection to representation, via guidance
and quality. Connections between themes vary based on the perspective from which the connection is
approached. Design and communication refer to designing the research tasks and communicating them
to consumer participants. Although consumers are able to create context-rich data that can be used
in consumer research (Hein et al. 2010), they depend on guidance by researchers. Therefore, the
creation process is influenced by interaction with the researcher. The researcher is, in fact,
framing the crowd's activities through task design, delegation, and guidance. This framing
necessarily affects the quality of outcomes. The following sections discuss the
dimensions of the conceptual model.
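
To make the structure of the model concrete, the following minimal Python sketch (our illustration,
not part of Pitkänen and Salminen's work) encodes the four themes and their connections as a
directed graph, so the two-way links and the indirect incentives-to-representation path can be
checked programmatically. All names are ours.

```python
# Illustrative sketch: the four themes of the videographic crowdsourcing
# model and their directed connections, as described in the text above.
CONNECTIONS = {
    "guidance":       {"incentives", "quality", "representation"},
    "incentives":     {"guidance", "quality"},  # reaches representation only indirectly
    "quality":        {"guidance", "incentives", "representation"},
    "representation": {"guidance", "quality"},
}

def is_two_way(a: str, b: str) -> bool:
    """A connection is two-way when each theme links to the other."""
    return b in CONNECTIONS[a] and a in CONNECTIONS[b]

if __name__ == "__main__":
    # Incentives have no direct edge to representation in the model.
    assert "representation" not in CONNECTIONS["incentives"]
    print(is_two_way("guidance", "incentives"))  # True
```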

2.1 Guidance
Utilizing crowds involves a risk: will the crowd respond in a desired way, and will the quality of
their contribution be satisfactory? Guidance includes methods to improve communication, reduce the
risk of poor quality due to misunderstanding, design tasks in a parsimonious way and, in sum,
guide the crowd before and during the task. Transparency about requirements and rewards increases
consumers' awareness of the study goals [3] and of the importance of their input, which is aimed at
generating trust (Prahalad & Ramaswamy 2004). Too much control, however, can become
counterproductive. For example, strict guidelines may reduce the richness of the crowd-originated
data (Zheng et al. 2011). Communication with consumers consists of delegating tasks, to which
consumers respond by sending back the results of their work. Task delegation effectively creates an
agency relationship between the firm and the participants (Eisenhardt 1989), as the firm is unable
to monitor participants' behavior to the full extent. However, most crowdsourcing platforms release
rewards only after approval of the work, which may curb opportunistic behavior, although it does
not solve poor quality due to lack of skill [4]. Therefore, the crowd's properties, such as
autonomy and self-guidance, may decrease, or even hinder, the firm's ability to guide it and
extract the desired outcomes. Nevertheless, Schenk and Guittard (2011) maintain that if the request
is ill defined, the crowdsourcing process is very likely to lead to unsatisfactory contributions.
Zheng, Li, and Hou (2011) suggest that crowdsourced tasks should preferably be highly autonomous,
explicitly specified, and less complex, as well as require a variety of skills. Since complexity
and skill are both relative terms (skill levels of individuals differ), communication, too, may be
more or less intense depending on the proficiency of individual members of the crowd. In summary,
guidance involves task design (before crowds start working) and communication (during task
execution, e.g. answering questions). Through task design, guidance is linked to incentives, which
are rewards from the perspective of crowds. It is also assumed that there is a relationship between
guidance and outcome, so that task design influences what type of results the firm receives (i.e.
what is asked from the crowd) and communication affects quality, that is, how well the chosen type
of task is carried out.

[3] The notable exception is experimental design, in which case the research goal may be kept
private to avoid social desirability bias.
[4] Consider a benevolent but incompetent participant: it is easy to see that a large number of
incompetent participants quickly erodes efficiency benefits, as the crowd manager is forced to
spend an excessive amount of time separating low-quality work from high-quality work.
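
To illustrate the approval-gated reward release mentioned above, here is a minimal Python sketch
(our illustration, not any specific platform's API): the reward is credited only once the task
owner approves the submission.

```python
# Minimal illustration of approval-gated reward release, as on many
# crowdsourcing platforms: pay only after the work is approved.
class Submission:
    def __init__(self, worker: str, reward: float):
        self.worker = worker
        self.reward = reward
        self.approved = False

    def approve(self) -> None:
        """Task owner accepts the work; only then is the reward released."""
        self.approved = True

    def payout(self) -> float:
        # An unapproved submission pays nothing, which curbs opportunism --
        # though it cannot fix low quality caused by lack of skill.
        return self.reward if self.approved else 0.0

s = Submission(worker="participant-1", reward=5.0)
print(s.payout())  # 0.0 before approval
s.approve()
print(s.payout())  # 5.0 after approval
```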
2.2 Quality
Dow et al. (2012) assert that requesters often struggle to obtain high-quality results, especially
on content-creation tasks, because work cannot be easily verified and workers can move on to other
tasks without consequence. Although we do not rule out the possibility that this is an inherent
tension between an anonymous, distant crowdsourcing platform and the degree of intensive guidance
required, lasting relationships between the researcher and the informant are possible over the
Internet [5]. Following this line of thought, non-verifiability becomes a more crucial issue when
dealing with micro-tasks of a superficial nature. In fact, consumers may be tempted to "game the
system" in hopes of easy payoffs, e.g. by providing nonsense answers in order to decrease their
time spent and thus increase their rate of pay (Kittur, Chi, & Suh 2008). Although the lack of
definite answers may hinder the researcher's ability to identify opportunistic participants (Kittur
et al. 2008; Eickhoff & de Vries 2011), we argue that videographic data is fairly easy to screen
for malicious users, as the researcher can quickly interpret the match between the guidance and the
representation provided by the consumer. However, pre-assessing the motivation of participating
consumers should be considered already when designing the task and incentives. Complementary
suggestions have been made; e.g., Bao, Sakamoto and Nickerson (2007) proposed using Likert scales
and prediction voting to evaluate the quality of crowdsourced material. However, using the crowd to
evaluate quality adds complexity to the research design (e.g., competence requirements,
coordination costs), and therefore we do not recommend it. Nevertheless, considering the quality
criteria before, during and after the research points the way toward effective and reasonable
research. Further, decisions on what is included in and excluded from the final outcome are
important for the usefulness of the project (Kozinets & Belk, 2006). Finally, solving the quality
challenge requires finding and motivating suitable crowds. Although current mobile phones are
sufficiently advanced to produce decent raw material, issues may relate to consumers' ability and
willingness to use crowdsourcing platforms. For example, Hein et al. (2010) studied a group of
young males in Scotland. Their research could be carried out with mobile phones, but had the target
group been pensioners, finding a proper user group might have been more difficult [6]. Overcoming
such obstacles requires prior understanding of the target consumers, as well as creating effective
incentives when necessary. The scope of the research is likely to influence the process to a great
extent. For example, Hein, Ryan and Corrigan (2010) found that the use of mobile phones during
ethnographic research not only reduced the disruption resulting from the research situation, but
also revealed insights about the consumers that were otherwise hard to capture [7]. In contrast,
Belk and Kozinets (2005) observe that the presence of video cameras affects informants' behavior,
denaturalizing the research situation. Hein et al. (2011) report that taking notes, photographs and
video during both participant and non-participant ethnographies tended to disrupt informants. In
participant ethnography, taking notes may disrupt the interaction between researcher and informant,
whereas in non-participant ethnography it may result in less natural behavior (Hein et al., 2010).
In contrast, using mobile phones for observation requires fewer technical skills, which leads to
more natural field notes and representation of the context.

[5] As in any relationship, the quality of online relationships is influenced more by their
relative importance to the parties than by the distance between the parties (Cummings, Butler, &
Kraut, 2002).
[6] Assuming that pensioners are not, in general, familiar with mobile video recording.
[7] "The film is what consumers created by themselves for themselves. Films were one of the
mediated forms through which consumers constructed their identities and captured their
consumption." (Hein et al. 2010)
2.3 Incentives
Consumers typically contribute to crowdsourcing projects for little or no money (Howe, 2008). The
financial incentives offered tend to be low [8]; however, there are other motives for participation
(Kaufmann & Schulze, 2011). The question of motives is a complex one, since customers differ
greatly and there are many alternative general theories of motivation. We focus on intrinsic and
extrinsic motivation (Deci & Ryan, 1985), which has been applied to crowdsourcing in earlier
research. Intrinsically motivated individuals seek the fulfillment generated by the activity
itself, whereas in extrinsic motivation the activity is only an instrument for achieving a certain
desired outcome, e.g. money or the avoidance of sanctions (Kaufmann & Schulze, 2011). Learning and
social motives are examples of intrinsic motivation, whereas direct compensation and prizes are
examples of extrinsic motivation (Zheng, Li, & Hou, 2011). Zheng, Li, and Hou (2011) find intrinsic
motivation more efficient for encouraging participation; in particular, they recognize autonomy,
variety [9] and analyzability (explicit specification and low complexity) as important task
features. Kaufmann and Schulze (2011) found that while extrinsic motivational factors (various
payoffs) positively influenced the time spent on the crowdsourcing platform, participants tend to
highlight intrinsic factors. These are also included in Kaufmann and Schulze's (2011) integrated
model of crowdsourcing motivation, divided into enjoyment-based and community-based motivation and
loosely following the dichotomy between personal interest and social interest (e.g., Ostrom, 2000).
Hence, altruistic, reputational and other social factors also play a role as motivators in
crowdsourcing (Quinn & Bederson, 2011). Kaufmann and Schulze (2011) discuss social motivation as
the extrinsic counterpart of intrinsic motivation, meaning that participants are not driven only by
external rewards beneficial to themselves (cf. orthodox rationality in economics), but also
consider benefits externalized to the social group, or community. When innovations are freely
shared, others can benefit from them, and a result is created from which the larger community
benefits. The community, if one exists within the crowdsourcing platform, can also potentially
serve to filter and monitor quality: potential social punishments may exceed threats imposed by the
researcher (Ross, 1896). The power of such a community stems from shared values, norms and
obligations; in other words, community identification (Kaufmann & Schulze, 2011). Additionally,
crowdsourcing implies voluntary participation of individuals, with no hierarchy- or
contract-related constraints, as well as a high degree of autonomy in the accomplishment of tasks
(Schenk & Guittard, 2011). In other words, little coordination by hierarchy takes place and
consumers act independently. Kaufmann and Schulze (2011) name this task autonomy and link it to
enjoyment-based motivation, so that the participant is motivated because the task allows him to be
creative (see also Csikszentmihalyi, 2008). However, individuals may differ in how strongly the
desire for creativity figures in their motivation. In the case of videography, capturing video
allows a higher degree of creativity than traditional data collection methods, which can be seen as
an advantage in terms of participant motivation. At the same time, however, a deeper commitment is
required, since filming takes more time and skill than, e.g., answering a questionnaire. Further,
because of research requirements there can be no absolute lack of coordination. In fact, to protect
data quality, guidance of participants should continue after task delegation as data collection
progresses, the notable exception being the non-participant approach, where the researcher wants to
influence the data as little as possible. Kaufmann and Schulze (2011) mention feedback in their
integrated model; the idea is that some participants work partly to receive appraisal, or social
approval, from the researcher (cf. Steele-Johnson, Beauregard, Hoover, & Schmidt, 2000 [10]); as
long as they receive it, their motivation level is assumed to remain high. In contrast, a failure
to give feedback (acknowledgement) will lead to diminishing motivation. Finally, participants may
be after delayed payoffs, which refers to the accumulation of skills that could be useful, e.g., in
future projects; similarly, a participant in crowdsourcing tasks may use them as a signaling
strategy for attracting the attention of potential employers (Kaufmann & Schulze, 2011). Boudreau
and Hagiu (2009, p. 175) note that "profit opportunities and monetary awards are but one aspect of
prizes (…); participation itself may enhance one's affiliation with the broader coder community;
posted scores are an opportunity to signal capabilities to prospective employers and to achieve
status; objective evaluation can be a useful means of self-improvement". Although their findings
come from a programmer community, not the average crowdsourcing platform, the same principles
generalize according to their social fundaments: crafting game mechanics and other aspects of play
and competition raises the interestingness of the challenge and therefore positively influences
motivation. Increasing the difficulty of the task is likely to have the same effect, as long as the
participant feels capable of accomplishing the task (Csikszentmihalyi, 2008). Crowdsourcing
activities such as data collection may require specific skills and significant time investments
(Schenk & Guittard, 2011). The match between the skills the consumer possesses and those the
researcher requires is likely to influence motivation; if there is a wide gap between the two (e.g.
the task is too demanding compared to the abilities of the participant), motivation is likely to
drop (Csikszentmihalyi, 2008). Besides the compatibility of requirements and skills, the variety of
skills required seems to affect motivation (Kaufmann & Schulze, 2011). It is logical to assume a
relationship between motivation and quality of work (Kaufmann & Schulze, 2011). Because the
researcher wants consumers to provide high-quality data, designing appropriate incentives becomes a
priority. The degree of motivation will influence how participants complete the task, so that
participants in the same study may answer differently based on variance in commitment, regardless
of underlying consistency (Schmidt, 2010). As noted previously, motives affect the quality of data
collection, which makes understanding them crucial. Consumers participating only to make money or
kill time have significantly different motivation from those who participate because they find the
research topic important (Fry & Dwyer, 2001). Kaufmann and Schulze (2011) refer to this as task
identity; if the participant understands how the data will be used, motivation will be higher than
in a case of confusion or lack of information (note the conceptual link to guidance).

[8] At least by Western standards.
[9] In terms of skills required.
[10] Relating to the confirmation bias, or the tendency of some subjects to provide the answers
they feel the researcher is looking for.
2.4 Outcomes
The fabrication of the outcome relates to the type of material collected. This section of the paper
focuses on videographic research as an example. To begin, Rokka (2010) suggests that visual
elements reflect the actual field and can be put at the center of research until representations
emerge. He divides videographies into documentaries and visual ethnography. Documentaries connect
to the real world through actions and events recorded on video; they are bound to relevant theories
by reflecting practice. In contrast, visual ethnography is a process of creating and representing
new knowledge, highly compatible with interpretative consumer studies (Rokka, 2010). Belk and
Kozinets (2005; Kozinets & Belk, 2006) underline that videography can benefit and contribute to
different techniques and methodologies of qualitative consumer research. Belk and Kozinets (2005)
list individual or group interviews, naturalistic observation and autovideography as traditional
methodological options. They also recognize collaborative, retrospective and impressionistic
videography. Pink (2007) underlines the importance of collaboration and self-representations in
videographic techniques. The role of video should be seen as recording reality that can be retained
for later use. In practice, the material is created through video diary keeping, note-taking
(including surveys), and the recording of certain processes and activities (Chalfen & Rich, 2004;
Pink, 2007). Documentary and visual ethnography are both based on context-rich videographic data
that is retained in the representation. Even though documentaries are analyzed based on various
categories (see Rokka, 2010), the question of whether videographies could be more than compelling
data has been raised (Rokka, 2010). To answer this question, the relationship between reality and
theory should be reviewed. More specifically, the focus of documentary representation is on
argumentation representing meanings, explanations and interpretations, not only on describing what
exists (Rokka, 2010). In turn, the benefits of visual ethnography are seen in the inter-subjectivity
and multivocality of the method. Visual ethnographers aim at new ways of understanding, creating a
tighter bond between vision and observation that reflects reality. The approach is reflexive and
underlines subjectivity, creativity and self-consciousness (Rokka, 2010). For example, interviews
shift textual authority from the filmmaker to the interviewees, while the reflexive mode focuses on
how topics are talked about. In documentaries these modes can occur simultaneously or mixed;
whereas documentaries support different roles for the filmmaker, visual ethnography supports only
the reflexive mode of representation (Rokka, 2010). In visual ethnography, this mode underlines the
collaboration between ethnographer and informants to draw out, reconstruct and represent relevant
experiences (Pink, 2007). Through filming, editing and reflecting upon the material, the filmmaker
and participants build the context where continuities between diverse worlds and experiences are
presented. Hence, videography, and particularly visual ethnography, can be seen as a true
opportunity to bring out informants' voices in the representation (Rokka, 2010). The context
becomes grounded in material from consumers, bringing out their individual voices. Finally, while
creating the representation, the researcher should also consider ways to judge its quality.
Kozinets and Belk's (2006) topical, theoretical, theatrical and technical criteria can be used as a
starting point for assessment. When considered in the context of crowdsourced data collection, the
topical and theoretical criteria are fully reliant on the tasks given by researchers. The
theatrical and technical parts depend on the material consumers have filmed, but receive their
final forms at the researcher's editing table. Of course, the guidance given by the researcher
significantly affects consumers' filming decisions, such as camera angles and backgrounds; however,
the choice of what to include in and exclude from the representation remains with the researcher.
3. Method
The empirical part evaluates the usefulness of the crowdsourcing model in practice. The theoretical
model was applied in crowdsourced videographic research conducted for a Finnish gaming company in
November 2011. The purpose of that research was to explore consumers' mobile game consumption in
multiple contexts, which in this case were locations and environments. The data was sourced from
participants who documented their game consumption through short video clips. A special application
was used for data collection, taking the role of a crowdsourcing platform. The application for the
crowdsourced videographic research method allows the researcher to connect with crowds, provide
filming tasks for research purposes, and discuss different ways of conducting the research. Crowds,
in turn, can receive tasks, discuss with the researcher, and, most importantly, film the
videographic data, which can be sent directly to the researcher without any disruptive device
changes. At its best, the crowdsourced videographic research method can be used to provide both
participant and non-participant data, but executing research through non-participant techniques
requires careful attention to research ethics. The crowdsourced videographic research method is
well suited to researching consumer culture, because it brings out contextual relations and
unconsidered contextual relationships behind what is obvious. The application creates the common
platform that makes the use of the crowdsourced videographic research method possible.
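
As a concrete illustration of the kind of object such an application might deliver, the sketch
below models a filming task roughly as described above (task text, requirements, reward, filming
period). The ResearchTask class and its field names are our hypothetical constructs, not the actual
application's data model.

```python
# Hypothetical sketch of a research task as a mobile crowdsourcing
# application might deliver it; all field names are illustrative only.
from dataclasses import dataclass

@dataclass
class ResearchTask:
    task_id: int
    description: str          # what participants are asked to film and narrate
    requirements: list[str]   # e.g. verbal interpretation, varied locations
    reward: str               # communicated incentive, e.g. a movie ticket
    deadline_days: int        # length of the filming period

task = ResearchTask(
    task_id=1,
    description=("Please show through video what kind of situation you are in "
                 "and verbally tell why you want to play the game here and now."),
    requirements=["verbal interpretation", "show the surrounding context"],
    reward="movie ticket",
    deadline_days=7,
)
print(task.description)
```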
The video material was supplemented by asking consumers to explain verbally why they were playing
games in the specific situations. Based on previous research, visuals were expected to play a key
role in creating understanding, because they extend the description of context beyond what
consumers are able to tell through words. The use of both visual and verbal communication in
producing the video material was especially underlined in Research Task 1:

Please show through video what kind of situation you are in and verbally tell why you
want to play the game in this specific place at this moment.

It was also important for the focal study that the consumers filmed in different locations, as the
game had location-based elements relying on real environments. The importance of bringing out a
variety of situations was also highlighted in Research Task 2, which asked players to

explore situations in which consumers might want to play mobile multiplayer games.

Participants for the focal study were chosen by convenience sampling from the researchers'
networks. In practice, suitable participants were approached via Facebook, where they were asked to
download a special mobile application created for this crowdsourced research. Through the
application, participants were able to receive a research task, film material, and send it to the
researcher. At this point participants were not actually informed about the research questions, but
were told that the topic was mobile gaming and that their task was to film videos through the
application they had downloaded. The application was downloaded ten times in total, and all
participants received the research task on their mobile device at the beginning of the research
period. No additional information was provided. Movie tickets were used as incentives for crowds to
participate.
The four most active participants created three videos each, and two others two each. The four
remaining participants did not produce any videos at all. Thus, activity was not very high, and
participants did not produce even one video per filming day. The timeframe for filming was one
week, but even the most active participants filmed only three videos. In total, the study received
16 videos, their length ranging from 7 to 56 seconds. All videos contained verbal interpretation
except three videos, filmed by one participant, that had no voiceovers at all. In general, the
videos were filmed in different places and situations. Participants focused on showing the
environment they were in. They moved the camera to show a wide view of the context, and most of
them completed the description by verbally explaining where they were, what they were about to do,
and why they would play mobile games in those particular situations. For example, "killing time"
was a common reason for playing mobile games stated by respondents. Many videos were filmed on the
go. Participants appeared more often alone than in the company of other people. Most of the videos
were filmed seriously, but one participant also joked about the research task and described his
actions in an ironic way.
This study uses the postulates of videographic crowdsourcing to evaluate the management of the
crowdsourcing process in the focal study. Thus, the data analysis takes a checklist approach. In
its original use, the checklist aimed at evaluating the factors that impacted a program (Rutman
1984, 142); in this study, the checklist factors were defined in the model. The actual evaluation
was conducted by comparing the data about the focal study against these pre-defined postulates. The
checklist approach of the evaluative method was applied adaptively, and the analysis was crafted to
match this specific research (see Hesse-Biber & Leavy 2006, 344). The use of checklists also helped
to avoid errors that commonly occur in qualitative analyses, which supported selecting checklists
as the approach to evaluation (Cook & Campbell 1976; see Rutman 1984, 143). The postulates are as
follows (a minimal sketch of the comparison follows the list):
Guidance → Quality: How the researcher guides, interacts and communicates with the crowd affects
the quality of material produced.
Guidance → Representation: How the researcher guides, interacts and communicates with the crowd
frames the possible research outcomes, or representations.
Guidance → Incentives: How the researcher guides, interacts and communicates with the crowd may in
some cases act as an incentive to participate per se.
Quality → Guidance: The quality of material collected affects how the researcher evaluates and
improves future guidance.
Quality → Representation: The quality of the material collected frames the possible representations
that the researcher can create.
Quality → Incentives: The quality of material collected affects how the researcher evaluates and
improves future incentives.
Incentives → Guidance: Instead of removing the need for guidance, incentives may even increase it
if the number of participants increases (in complex tasks).
Incentives → Quality: Incentives may have a positive effect on quality, although this cannot be
taken as a rule due to the fuzziness of personal (hidden) motives.
Representation → Guidance: When creating the representation, the researcher strives to remain
consistent with the guidance given to participants.
Representation → Quality: When creating the representation, the researcher considers different
criteria for judging quality.
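
The following minimal Python sketch shows one way such a checklist comparison can be organized. The
postulate pairs follow the list above; the evaluate helper and the example evidence notes are our
illustrative assumptions, not the authors' actual evaluation records.

```python
# Illustrative checklist evaluation: each postulate (a directed theme pair)
# is compared against the secondary data and given a free-form judgment.
POSTULATES = [
    ("guidance", "quality"), ("guidance", "representation"),
    ("guidance", "incentives"), ("quality", "guidance"),
    ("quality", "representation"), ("quality", "incentives"),
    ("incentives", "guidance"), ("incentives", "quality"),
    ("representation", "guidance"), ("representation", "quality"),
]

def evaluate(source: str, target: str,
             evidence: dict[tuple[str, str], str]) -> str:
    """Return the evaluator's note for one postulate, or flag missing evidence."""
    return evidence.get((source, target), "no evidence recorded")

# Hypothetical evaluator notes drawn from the focal study's secondary data.
evidence = {
    ("guidance", "quality"): "limited guidance; one participant misread the task",
    ("incentives", "quality"): "movie tickets had little visible effect",
}

for src, dst in POSTULATES:
    print(f"{src} -> {dst}: {evaluate(src, dst, evidence)}")
```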
Evaluative research is common in social research, and it attempts "to assess whether a particular
intervention, process, or procedure is able to change behavior" (Salkind 2010, 1254). At its
simplest, evaluation leads to an opinion concerning the object (House 1980, 18), but it is at the
same time bound by criteria and context. Suchman (1969; see Caro 1971, 8) continues that evaluative
research focuses on a comprehensive appraisal of how well the object meets the criteria. This study
differs from the original use of the evaluative method and its application, because the method is
used here to provide information about the management of a crowdsourcing research process; its use
is nevertheless acceptable because of the focus on the relation between appraisal and criteria.
This study applies professional review as its approach to evaluative research. In this approach it
is assumed that there is a consensus on the evaluation criteria, which need to fit the case (Caro
1971, 8).
For the professional review approach, House (1980, 23; 35) suggests reviews by panel or
self-studies. In this study, in evaluating the management of crowdsourced videographic research,
the object of the review was the secondary data about the focal study. The data included the
process of crowdsourcing and the videographic data created. The data used was secondary material
about how the research of the focal study was communicated to the participants and the actual data
participants gathered during the research (see Rutman 1984, 22). By using the special mobile
application the researcher controlled the research process and communicated the research task to
participants. This use of the application was documented with screenshots from the mobile
application, which were used in the evaluation. The second type of data was the videographic data
produced by the participants. Both of these types of data can be defined as record- and file-based
data, which Rutman (1984, 22) notes are common in evaluative research. The fact that the author
also conducted the focal study gave a good opportunity for self-evaluation, which constituted the
third type of data used in this study. Record- and file-based data about the focal study was
obtained directly from the author's research records, and nothing unpredictable could have happened
during this secondary data collection to affect the evaluation. Self-evaluation was made
reflectively while conducting this study, and there was no risk of missing information in the data
collection. The secondary data actually fit the evaluative research well compared to primary data
(Caro 1971, 22–23), and because the author also conducted the original crowdsourced videographic
research, self-evaluation was possible. Thus, in addition to the secondary material, the author
commented on the research process based on his own experiences of executing the focal study. In
practice, the method used in this study was a combination of objective professional review and
self-evaluation, with the author acting in two roles.
4. Results
4.1 Guidance
Guidance in the focal study involved the task description, but there was no additional support for
participants in carrying out the task. In hindsight, this created the risk that participants would
not respond to the task in the correct way and that the quality of the data would not meet the
requirements. Zheng et al. (2011) state that guidance improves the material's similarity and
analyzability, which was seen in the differences in the material. For example, one participant did
not express anything verbally, another discussed features of the game, and the rest filmed the
environment they were actually in. Referring to features of the game was an example of how the
research task was misunderstood compared to the others. The other videos were similar to each other
in their content and structure. They were filmed to show the environment, and the participants
described where they were and explained why they would play the games at that particular moment.
Many participants filmed the videos when they were on their way to different places, and as a
result the videos formed a certain theme. The videos also shared the same structure of first
showing the environment and then focusing on details. These participants interpreted the task
similarly and answered in the form the researcher expected. So it can be said that, with good task
design, enough information was given for participants to understand the task correctly. These
similar videos were analyzable and, through their similarity, provided sufficient information for
the research task. The videos of the one participant who made ironic comments about the research
followed the same structure, even though the content was not really useful. It is also worth
considering the engagement of participants and the variety of videos. In the focal study the
researcher managed to engage half of the participants who downloaded the mobile application to
answer the task, but the number of videos any participant produced during the week was quite low
compared to the time allowed for filming. Feedback from the researcher during the research might
have encouraged them to produce more material. However, participants did not contact the researcher
to ask any additional questions. The independence of participants could be seen in the variety of
the videos, because the situations described in the videos differed from each other and depended
greatly on the participants. Everyone except one participant was actually trying to provide
material that answered the research question. Thus, the research task was sufficiently well
designed, autonomous, and explicitly defined to gain satisfactory contributions (see Zheng et al.
2011, 57; Schenk & Guittard 2011, 103). In the focal study, incentive effects did not show very
strongly because of the limited guidance. On the other hand, the participants were recruited from
the acquaintances of the researcher, so the fact that a participant knew the researcher promoted
answering. Still, many did not record any videos at all. In the focal study, the tangible
incentives for answering the research task were the movie tickets. The prize was mentioned only in
the summary of the research task, and most likely it did not have much of an effect on the
participants. In practice, this means that incentives were not used to actually persuade the
participants. More likely, the fact that the researcher knew the participants was a more important
incentive and source of motivation. It was not defined when, and based on what performance, the
participants would actually get the tickets; in practice, the tickets were sent by post after the
research to those who had filmed videos. Emphasizing this information might have supported the
crowdsourcing process.
4.2 Quality
Conducting research using crowdsourced videography helps to see how different forms of guidance
lead to different quality of material. The quality level of the material also provides instruction
on how to guide, but it takes a few studies to develop the touch. The most important point when
evaluating the focal study here is that there was no continuous discussion during the research.
Even so, there were no total misunderstandings: only one participant provided overly detailed
information about a game rather than the context, and another did not verbally interpret any
situation. Problems where participants go too deep into side roads or minor details, in this case
by talking about playing one specific game, could be avoided by leaving examples out of the given
tasks. In practice, it is more challenging to make sure that participants include the verbal
interpretations in their videos. This is because there may be several reasons for the lack of
voiceovers, whereas mixing the objective with an example depends only on whether examples are used
in the given task. Awkwardness about speaking aloud in public places could be one reason for the
lack of voiceovers in the material; however, one participant did not use a voiceover even when he
was alone indoors. The other plausible reason is that the participant did not understand the
question. A video example within the given task might be a solution in these situations; on the
other hand, an example video could affect the outcomes if participants form too dominant an idea of
what the video should be. In the focal study's outcome, the collected videos were shown as unedited
visual ethnography. Therefore, the quality of the created material showed in the structure of the
videos. All the material of the focal study was easy to watch through, because the perspective of
the video was the same in all of them. Even though two participants' videos had somewhat different
content (a focus on game actions, and no voiceovers), their videos still followed a framing and
structure similar to the others, and the outcome's watching experience was still pleasant and
congruent. The mobile phone's role can be discerned in the relation from quality to representation,
because the way the videos were filmed followed quite closely the angles at which mobile phones are
normally held and used. The standard way of filming videos with mobile phones makes the material
more homogeneous. This is also good, because not all participants are used to filming video, but
they are still somewhat comfortable using mobile phones.
4.3 Incentives
In the focal study, guidance was extremely limited; thus the situation in which guidance would have
had to be increased heavily never arose. However, going through the videos that participants filmed
would have taken longer had there been an enormous number of clips per participant. Mainly, the
focal study showed that gaining enough motivated participants was the key concern. Personal motives
also seemed quite fuzzy in the focal study. The number of received videos suggests that most
participants did not really have the motivation to contribute. Movie tickets were not enough of an
incentive to motivate participants externally. On the other hand, there was a problem with internal
motivation as well. What can be underlined is that information about the incentives should have
been brought out more. Combining the incentives more directly with the quality of participants'
material would also have been good. For example, giving a better incentive if a participant
produced multiple videos, or giving a prize to the best participant, might have created stronger
links from quality to incentives. The task should also have motivated crowds internally, but
providing information about mobile gaming was not significant to participants, and the number of
active participants was limited. Moreover, even the active participants filmed only two or three
videos within a week, although they would have had time for a deeper contribution. Thus, the task
did not motivate participants well. It is also worth recognizing that participants' motive could
have been to please the researcher, but even that did not produce many videos.
4.4 Outcome
In crowdsourced videography, the researcher controls the outcome at the very end. In the focal
study, the participants were informed only about the topic of the material they were creating, not,
at any point, how the videos would be used. This actually left the use of the videos somewhat open,
so the researcher can, for example, approach the material from different perspectives. Here the
approach was direct, and the material was used only to explore direct situations in mobile gaming.
Another approach to the material could have been, for example, what these situations tell about
mobile gamers. In that case the whole research would of course have been different, but it would
not actually have affected how the material was collected. A problem might have arisen only if the
participants had found out that their material was going to be used differently than first
explained. This ethical, or transparency, issue could have been avoided by using a more open
research task in which the use of the videos is not defined. Thinking about the quality of the
crowdsourced research as a whole while creating the outcome is extremely important for the success
of the process. Especially when video is edited, the limits should be considered. When presenting
the research outcome as visual ethnography without editing, there are not many options that can
affect the outcome after the videos have been received. However, choosing a proper order and
categorizing similar videos together helps to create an understandable whole.
5. Discussion
Crowdsourcing is managed through guidance, quality, incentives and outcomes. When the task design
is clear enough, it improves the similarity and usefulness of contributions, even when there is no
other guidance. It is important to provide all the needed information clearly, so that participants
can understand the task correctly and produce useful outcomes with sufficient information. The
participation level, in terms of both the number of participants and the number of filmed videos,
showed that attracting participants to join the research was difficult. Because the planned outcome
was visual ethnography, not documentary (see Rokka 2010), there was less pressure for guiding;
videos could be shown as raw files, and different perspectives were sought after. The incentive
effect was not pronounced during the research process. This might be because the participants were
recruited from the acquaintances and networks of the researcher, so the fact that the participants
knew the researcher must have promoted answering. Moreover, the prize was communicated only in the
summary of the research task.
While the effect of guidance is quite clear, evaluating quality is quite challenging. Even though
there was no continuous discussion during the research, there were no serious misunderstandings in
data collection. Some mistakes, such as focusing on the example in the task or skipping the verbal
interpretation in a video, could be fixed by making sure that the given filming task is
unambiguous. However, the quality of the unedited visual ethnography was fully reliant on the
quality of the material. Other forms of representation would have required different and more
diverse material to make a comprehensive representation. Communicating incentives is important in
order to receive good quality video, and incentives should be linked to quality. In a similar vein,
the outcome is under the researcher's control, but the material sets the guidelines for what is
possible to do and what is not. Transparency is important, and participants should know what is
going to happen to the material they are providing in order for them to provide useful
contributions.
Incentives are also demanding. Too persuasive incentives might increase the required guidance, but
more likely there is a need to motivate the participants to produce enough material in terms of
quantity and quality. Understanding and affecting personal motives is also hard. Even where there
were personal relations between the researcher and the participant, there was no strong motivation
to answer. Motives are extremely personal and hard to influence through external incentives.
Overall, the reasons for crowds to participate seem to relate to personal interest more than to
money or other firm-provided incentives. Therefore, when designing incentives, firms may consider
intrinsic motives as an alternative to financial incentives. This conclusion also applies to
researchers of crowdsourcing; for example, the influence of pricing may be limited in empirical
settings (Boudreau & Hagiu, 2009): "extreme competition and rivalry (…) is itself a great motivator
for coders [,] something a price system on its own would clearly fail to achieve". Regarding tasks
given by the researcher, the instructions should be sufficient to extract answers to the question
asked, but at the same time open enough to motivate creativity and the pursuit of various personal
interests (Howe, 2008; Zheng et al., 2011). As such, crowdsourcing becomes more than just a
cost-cutting strategy, and firms may consider recruiting the most talented members who perceive the
effort of participation as meaningful. In order to succeed, firms have to focus on guiding crowds,
being clear and transparent about task requirements and rewards, and carefully planning the
integration of the outcomes into their internal processes. Understanding these practical features
helps firms to manage crowdsourcing in practice. Before this study there was little practical
guidance; now there are at least some first-hand experiences and documented best practices on
managing the crowd.
References
Agatonoff, Nick (2006) Adapting ethnographic research methods to ad hoc commercial market research.
        Qualitative Market Research: An International Journal, Vol. 9(2), 115–125.
Alasuutari, Pertti (2011) Laadullinen tutkimus 2.0. Vastapaino, Tampere.
Apple Announces iPhone 4S. Mashable. <http://mashable.com/2011/10/04/apple-iphone-4s/>,
        retrieved 19.6.2012.
Arnould, E. J. – Thompson, C. J. (2005) Consumer Culture Theory (CCT): Twenty Years of Research.
        Journal of Consumer Research, Vol. 31(4), 868–882.
Arnould, E. J. – Thompson, C. J. (2007) Consumer Culture Theory (and We Really Mean Theoretics):
        Dilemmas and Opportunities Posed by an Academic Branding Strategy. In: Research in Consumer
        Behavior (11): Consumer Culture Theory, eds. Belk, R. W. – Sherry, J. F., 3–22. Elsevier,
        Oxford.
Belk, R. W. – Kozinets, R. V. (2005) Videography in marketing and consumer research. Qualitative
        Market Research: An International Journal, Vol. 8(2), 128–141.
Booth, A. – Papaioannou, D. – Sutton, A. (2012) Systematic approaches to a successful literature
        review. Sage Publications Ltd, London.
Boudreau, K. – Hagiu, A. (2009) Platform rules: Multi-sided platforms as regulators. In: Platforms,
        Markets and Innovation, ed. Annabelle Gawer. Edward Elgar Publishing, Cheltenham.
Caro, Francis G. (1971) Readings in Evaluation Research. Russell Sage Foundation, New York.
Creswell, John W. (2007) Qualitative Inquiry & Research Design: Choosing Among Five Approaches.
        Sage Publications, London.
Csikszentmihalyi, Mihaly (2008) Flow: The Psychology of Optimal Experience. Harper Perennial
        Modern Classics, New York.
Cummings, J. N. – Butler, B. – Kraut, R. (2002) The quality of online social relationships.
        Communications of the ACM, Vol. 45(7), 103–108. doi:10.1145/514236.514242.
Deci, E. – Ryan, R. (1985) Intrinsic motivation and self-determination in human behavior. Plenum,
        New York.
Dow, S. – Kulkarni, A. – Klemmer, S. – Hartmann, B. (2012) Shepherding the Crowd Yields Better
        Work. CSCW 2012: ACM Conference on Computer Supported Cooperative Work, February 11–15,
        2012, Seattle, Washington, USA.
Ekström, Karin (2003) Revisiting the Family Tree: Historical and Future Consumer Behavior
        Research. Academy of Marketing Science Review, Vol. 2003(1), 1–29.
Eriksson, P. – Kovalainen, A. (2008) Qualitative Methods in Business Research. Sage Publications,
        London.
Estellés, A. E. – González, L. F. (2012) Towards an integrated crowdsourcing definition. Journal of
        Information Science, Vol. 38(2), 1–14.
Fry, G. – Dwyer, R. (2001) For love or money? An exploratory study of why injecting drug users
        participate in research. Addiction, Vol. 96(9), 1319–1325.
Hein, W. – Ryan, A. – Corrigan, R. (2010) "POV: Point of View... Consumers and Ethnographers in
        Perspective...", in Advances in Consumer Research, Vol. 37.
        <http://youtu.be/OWPeDq2QdGc>, retrieved 19.10.2011.
Hein, W. – O'Donohoe, S. – Ryan, A. (2011) Mobile phones as an extension of the participant
        observer's self: Reflections on the emergent role of an emergent technology. Qualitative
        Market Research: An International Journal, Vol. 14(3), 258–273.
Hesse-Biber, S. N. – Leavy, P. (2006) The Practice of Qualitative Research. Sage Publications Ltd,
        London.
Hietanen, Joel (2012) Videography in Consumer Culture Theory: An Account of Essence(s) and
        Production. Doctoral thesis, Aalto University School of Economics, Helsinki.
Hirsjärvi, S. – Remes, P. – Sajavaara, P. (1997) Tutki ja kirjoita. Kirjayhtymä Oy, Helsinki.
Horton, J. J. – Rand, D. G. – Zeckhauser, R. J. (2011) The online laboratory: Conducting
        experiments in a real labor market. Experimental Economics, Vol. 14(3), 399–425.
House, Ernest R. (1980) Evaluating with Validity. Sage Publications Inc., London.
Howe, Jeff (2006) The rise of crowdsourcing. Wired Magazine, June 2006, issue 14.06.
Howe, Jeff (2008) Crowdsourcing: Why the power of the crowd is driving the future of business.
        Crown, New York.
Kaufmann, N. – Schulze, T. – Veit, D. (2011) More than fun and money. Worker Motivation in
        Crowdsourcing – A Study on Mechanical Turk. AMCIS 2011 Proceedings.
Kittur, A. – Chi, E. – Suh, B. (2008) Crowdsourcing User Studies With Mechanical Turk. Proceedings
        of the SIGCHI Conference on Human Factors in Computing Systems, April 5–10, 2008, Florence.
Kozinets, Robert V. (2002) The Field behind the Screen: Using Netnography for Marketing Research
        in Online Communities. Journal of Marketing Research, Vol. 39(1), 61–72.
Kozinets, R. V. – Belk, R. W. (2006) Camcorder Society: Quality Videography in Consumer and
        Marketing Research. In: Handbook of qualitative research methods in marketing:
        Participative inquiry and practice, ed. Russell W. Belk, 361–370. Edward Elgar, Cheltenham.
Moisander, J. – Valtonen, A. (2006) Qualitative Marketing Research: A Cultural Approach. Sage
        Publications, London.
Nokia 808 PureView Has a Monster 41-Megapixel Camera. Mashable.
        <http://mashable.com/2012/02/27/nokia-808-pureview/>, retrieved 25.6.2012.
Ostrom, E. (2000) Collective Action and the Evolution of Social Norms. The Journal of Economic
        Perspectives, Vol. 14(3), 137–158.
Pink, Sarah (2007) Doing Visual Ethnography: Images, Media and Representation in Research. 2nd
        edition. Sage Publications Ltd, London.
Pitkänen, L. – Salminen, J. (2012) Crowdsourcing research to mobile consumers? Emerging themes on
        videographic data collection. Proceedings of Planetary Scientific Research Centre
        Conference, March 24–25, 2012, Dubai.
Rokka, Joonas (2010) Exploring the Cultural Logic of Translocal Marketplace Cultures: Essays on
        New Methods and Empirical Insights. Doctoral thesis, Aalto University School of Economics,
        Helsinki.
Ross, E. A. (1896) Social Control. American Journal of Sociology, Vol. 1(5), 513–535.
Rutman, Leonard (1984) Evaluation Research Methods: A Basic Guide. Sage Publications Inc.,
        Newbury Park, California.
Salkind, Neil J. (2010) Encyclopedia of Research Design. Sage Publications Ltd, Thousand Oaks.
Schenk, E. – Guittard, C. (2011) Towards a Characterization of Crowdsourcing Practices. Journal of
        Innovation Economics, Vol. 7(1), 93–107.
Schmidt, Lauren A. (2010) Crowdsourcing for Human Subjects Research. CrowdConf 2010, October 4,
        2010, San Francisco, California, USA.
Scoopshot. Snap a newsphoto. Send it in. Earn money. Scoopshot company webpage.
        <https://www.scoopshot.com/en/instructions>, retrieved 6.10.2011.
Silverman, David (2000) Doing Qualitative Research: A Practical Handbook. Sage Publications,
        London.
Slater, Don (1997) Consumer Culture and Modernity. Polity Press, Cambridge.
Spindler, G. – Spindler, L. (1987) Teaching and learning how to do the ethnography of education.
        In: Interpretive ethnography of education: At home and abroad, eds. George Spindler –
        Louise Spindler, 17–33. Lawrence Erlbaum Associates, Mahwah.
Steele-Johnson, D. – Beauregard, R. S. – Hoover, P. B. – Schmidt, A. M. (2000) Goal orientation
        and task demand effects on motivation, affect, and performance. Journal of Applied
        Psychology, Vol. 85(5), 724–738. doi:10.1037/0021-9010.85.5.724.
Surowiecki, James (2005) The Wisdom of Crowds. Anchor Books, New York.
Terranova, T. (2000) Free labor: Producing culture for the digital economy. Social Text,
        Vol. 18(2), 33–58.
Thakur, A. – Gormish, M. – Erol, B. (2011) Mobile Phones and Information Capture in the Workplace.
        CHI '11 Extended Abstracts on Human Factors in Computing Systems, May 7–12, 2011,
        Vancouver, BC, Canada.
The Commuter: A Short Film Shot Entirely With the Nokia N8. Mashable.
        <http://mashable.com/2010/10/28/commuter-short-film-nokia-n8/>, retrieved 16.10.2011.
Viitamäki, Sami (2008) The FLIRT Model of Crowdsourcing: Planning and Executing Collective
        Customer Collaboration. Master's thesis, Helsinki School of Economics, Helsinki.
Zheng, H. – Li, D. – Hou, W. (2011) Task Design, Motivation, and Participation in Crowdsourcing
        Contests. International Journal of Electronic Commerce, Vol. 15(4), 57–88.
