lOMoARcPSD|3821164
Book summary
Introduction to Research Methods (Tilburg University)
StuDocu is not sponsored or endorsed by any college or university
Downloaded by Mohamed Chakroun (chakroun.mohamed2000@gmail.com)
Introduction to research methods book summary: Research Methods for Business
CHAPTER ONE
p. 20 – 50
What is research? Research is the process of finding solutions to a problem after a thorough study and
analysis of the situational factors. Managers in organizations constantly engage in some form of
research activity as they study and analyze issues and make decisions at the workplace. The
difference between good and bad decisions lies in how the decision-making process is carried out;
this is the essence of research. It is therefore important to know how to make the right decisions by
being knowledgeable about the various steps involved in finding solutions to problematic issues.
Business research: it can be described as a systematic and organized effort to investigate a specific
problem encountered in the work setting, which needs a solution. The first step is to know where the
problem areas exist and to identify as clearly as possible the problems that need to be studied and
resolved. Once the problem is defined, steps can be taken to determine the factors associated with
the problem, gather information and analyze the data and then solve it. This entire process is called
research; it encompasses the processes of inquiry, investigation, examination and experimentation.
The processes have to be carried out systematically, diligently, critically, objectively and logically. The
definition of business research is: an organized, systematic, data-based, critical, objective inquiry or
investigation into a specific problem, undertaken with the purpose of finding answers or solutions to
it. In essence, research provides the necessary information that guides managers to make informed
decisions to carefully deal with problems. The information provided could be the result of an analysis
of data that can be either quantitative or qualitative.
Research and the manager: an experience common to all organizations is that their managers
encounter problems, big and small, on a daily basis, which they have to solve by making the right
decisions. The issues studied and the way the research is conducted vary depending on the field in
which the research takes place.
Types of business research: applied and basic: research can be undertaken for two different
purposes. One is to solve a current problem faced by the manager in the work setting, demanding a
timely solution; this is called applied research. The other is to generate a body of knowledge by trying
to comprehend how certain problems that occur in organizations can be solved; this is called basic,
fundamental or pure research. It is done chiefly to make a contribution to existing knowledge and to
the building of knowledge in the various functional areas of business.
Managers and research: managers with knowledge of research have an advantage over those
without, because they have to understand, predict and control events that are dysfunctional within the
organization. Being able to sense, spot and deal with problems before they get out of hand is very
helpful. Although minor problems can be fixed by the manager, major problems warrant the hiring of
outside researchers or consultants. A manager who is knowledgeable about research can interact
effectively with them and will become more discriminating while sifting through the information
disseminated in business journals. Other reasons why managers should be knowledgeable about
research are, first, that it sharpens their sensitivity to the variables operating in a situation; second,
that when managers understand the research reports handed to them by professionals, they are
equipped to take intelligent and educated risks with known probabilities attached to the success or
failure of their decisions; and third, that if managers become knowledgeable about scientific
investigations, vested interests inside or outside the organization will not prevail.
The manager and the consultant-researcher: managers often need to engage a consultant to study
some of the more complex, time-consuming problems that they encounter. It is thus important to be
knowledgeable about how to effectively interact with the consultant, what the manager-researcher
relationship should be, and the advantages and disadvantages of internal vs external consultants.
o Manager-researcher relationship: during their careers, it often becomes necessary for
managers to deal with consultants. The manager must not only interact effectively with the
research team, but must also explicitly delineate the roles of the researchers and of
management. The manager has to indicate what information will, and what will not, be
provided to the research team. Relevant philosophies and value systems of the organization
are clearly stated and constraints, if any, are communicated. Furthermore, a good rapport is
established with the researchers, and between the researchers and the employees in the organization.
Internal vs external consultants / researchers
o Internal consultants / researchers: some organizations have their own consulting or research
department, which serves as the internal consultant to subunits of the organization that face
certain problems and seek help. The advantages of this are that the internal team stands a
better chance of being readily accepted by the employees in the subunit where research
needs to be done; that the team requires much less time to understand the structure, the
philosophy and climate, and the functioning and work systems of the organization; that it is
available to implement its recommendations after the research findings have been accepted;
and that it is also available to evaluate the effectiveness of the changes. Furthermore, the
internal team might cost considerably less than an external team. The disadvantages are that
the internal team might fall into a stereotyped way of looking at the organization and its
problems; that there is scope for certain powerful coalitions in the organization to influence
the internal team to conceal, distort or misrepresent certain facts; that the research team may
not be perceived as ‘experts’ by the staff and management; and that certain organizational
biases of the internal research team might make the findings less objective and less scientific.
o External consultants / researchers: the advantages are that they can draw on a wealth of
experience from having worked with different types of organization, and that external teams
might have more knowledge of current sophisticated problem-solving models through their
periodic training programs. The disadvantages are that hiring them is very costly, that it takes
time for them to understand the organization, that the team might not be readily accepted by
employees, and that they may even charge additional fees for their assistance in the
implementation and evaluation phases.
Knowledge about research and managerial effectiveness: managers are responsible for the final
outcome and for making the right decisions at work, which is greatly facilitated by research knowledge.
Knowledge of research heightens the sensitivity of managers to the innumerable internal and external
factors of a varied nature operating in their work and organizational environment. Even superficial
knowledge of the techniques used by external consultants helps the manager to deal with the
researcher in a mature and confident manner.
Ethics and business research: this refers to a code of conduct or expected societal norms of behavior
while conducting research. Ethical conduct applies to the organization and the members that sponsor
the research, the researchers who undertake the research and the respondents who provide them
with the necessary data. Ethical conduct should also be reflected in the behavior of the researchers
who conduct the investigation, the participants, the analysts and the entire research team. Ethical
behavior pervades each step of the research process.
CHAPTER TWO
p. 50 – 81
The scientific approach and alternative approaches to investigation
Scientific research focuses on solving problems and pursues a step-by-step logical, organized, and
rigorous method to identify the problems, gather data, analyze them, and draw valid conclusions from
them. It is purposive and rigorous and enables all those who research and know about the same or
similar issues to come up with comparable findings when the data are analyzed. Scientific research
also helps researchers to state their findings with accuracy and confidence. This helps various other
organizations to apply those solutions when they have similar problems.
Scientific investigation: it is more objective than subjective and helps managers to highlight the most
critical factors at the workplace that need specific attention so as to avoid, minimize and solve
problems. The term scientific research applies to both basic and applied research. Researchers do not
always have the same approaches to and perspectives on research. Even business “gurus” make big
mistakes due to errors of judgment, because not enough research has preceded the formulation of their ideas.
The hallmarks of scientific research
o Purposiveness: research starts with a definite aim or purpose (mostly to benefit the
organization).
o Rigor: it connotes carefulness, scrupulousness, and the degree of exactitude in research
investigations. It makes sure that the conclusions are correctly drawn and that the manner of
framing and addressing the questions is free from bias, incorrectness or any other distorting
influence. Rigor enables the researcher to collect the right kind of information from an
appropriate sample with the minimum degree of bias, and facilitates suitable analysis of the
data gathered.
o Testability: it is a property that applies to the hypothesis (a tentative, yet testable, statement,
which predicts what you expect to find in your empirical data) of a study. Not all hypotheses
can be tested, non-testable hypotheses are often vague statements, or they put forward
something that cannot be tested experimentally (e.g. that God created the earth).
o Replicability: we will have more faith in the findings and conclusions of a study if the findings are
replicated in another study. Replication demonstrates that our hypotheses have not been
supported merely by chance, but are reflective of the true state of affairs in the population.
The results of the tests of hypotheses should be supported again and yet again when the
same type of research is repeated in similar circumstances.
o Precision and confidence: it is almost impossible to study the entire universe of things, events
or the population, so in all probability the sample in question may not reflect its exact
characteristics. Measurement errors and other problems are also bound to introduce an
element of bias or error in our findings. Precision refers to the closeness of the findings to
‘reality’ based on a sample; it reflects the degree of accuracy or exactitude of the results, on
the basis of the sample, relative to what really exists in the universe. Confidence refers to the
probability that our estimations are correct. The narrower the limits within which we can
estimate the range of our predictions and the greater the confidence we have in our research
results, the more useful and scientific the findings become.
o Objectivity: the conclusions drawn through the interpretation of the results of data analysis
should be objective; that is, they should be based on the facts of the findings derived from
actual data, and not on our own subjective or emotional values. The more objective the
interpretation of the data, the more scientific the research investigation becomes.
o Generalizability: refers to the scope of applicability of the research findings in one
organizational setting to other settings. Obviously, the wider the range of applicability of the
solutions generated by research, the more useful the research is to the users. However, not
many research findings can be generalized to all other settings, situations, or organizations.
o Parsimony: simplicity in explaining the phenomena or problems that occur, and in generating
solutions for the problems, is always preferred to complex research frameworks that consider
an unmanageable number of factors. Economy in research models is achieved when we can
build into our research framework a smaller number of variables that explain the variance far
more efficiently than a complex set of variables that would only marginally add to the variance
explained.
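The trade-off between precision and confidence described under the hallmarks above can be illustrated with a small sketch. This is not an example from the book: the satisfaction scores are hypothetical, and the normal-approximation interval (rather than a t-based one) is an assumed simplification.

```python
# Sketch: precision vs. confidence for a sample mean.
# Precision ~ the width of the interval; confidence ~ the probability level.
# The scores below are hypothetical illustration data, not from the book.
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Approximate interval for the population mean, using a normal critical value."""
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / n ** 0.5               # standard error of the sample mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return m - z * se, m + z * se

# Hypothetical employee satisfaction scores on a 1-7 scale
scores = [5.1, 4.8, 5.5, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9]
low95, high95 = confidence_interval(scores, 0.95)
low99, high99 = confidence_interval(scores, 0.99)
# Demanding more confidence (99% vs. 95%) yields a wider, i.e. less precise, interval.
```

The wider 99% interval makes the hallmark concrete: confidence can only be bought at the cost of precision, given the same sample.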
The hypothetico-deductive method: scientific research pursues a step-by-step, logical, organized, and
rigorous method to find a solution to a problem. The scientific method was developed in the context
of the natural sciences, where it has been the foundation of many important discoveries. The
hypothetico-deductive method, popularized by Karl Popper, is still the predominant approach for
generating knowledge in the natural, social and business sciences.
o The seven-step process in the hypothetico-deductive method
Identify a broad problem area
Define the problem statement: research needs a definite aim or purpose in order to find
solutions. Gathering initial information about the factors that are possibly related to
the problem will help to narrow down the broad problem and define the problem
statement. Preliminary information gathering can be done by literature review or by
talking to several people in the work setting, to clients or to other relevant sources,
thereby gathering information on what is happening and why.
Develop hypotheses: in this step, variables are examined to ascertain their
contribution or influence in explaining why the problem occurs and how it can be
solved. From a theorized network of associations identified among the variables,
certain hypotheses or educated conjectures can be generated. A scientific
hypothesis must meet two requirements: the first is that the hypothesis must be
testable, the second that the hypothesis must be falsifiable. Karl Popper stated that this is
important because a hypothesis can never be confirmed definitively: there is always the
possibility that future research will show that it is false.
Determine measures: unless the variables in the theoretical framework are
measured in some way, we will not be able to test our hypotheses. In order to test a
hypothesis, we need to operationalize the variables it contains.
Data collection: data with respect to each variable in the hypothesis needs to be
obtained.
Data analysis: the data gathered are statistically analyzed to see if the hypotheses
that were generated have been supported.
Interpretation of data: now we must decide whether our hypotheses are supported
or not by interpreting the meaning of the results of the data analysis. Based on these
deductions, we are able to make recommendations.
o Review of the hypothetico-deductive method:
Deductive reasoning: we start with a general theory and then apply this theory to a
specific case. Hypothesis testing is deductive in nature because we test if a general
theory is capable of explaining a particular problem.
Inductive reasoning: it is a process where we observe specific phenomena and on
this basis arrive at general conclusions. According to Popper, it is not possible to
“prove” a hypothesis by means of induction because no amount of evidence
assures us that contrary evidence will not be found.
Both inductive and deductive processes are often used in research because, as is
argued, both are essential parts of the research process. Theories based on both
help us understand, explain, and / or predict business phenomena.
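The seven steps above, from hypothesis to interpretation, can be sketched in miniature. This is an illustration rather than the book's own method: the branch-sales figures and names are hypothetical, and a permutation test is just one of many ways the data-analysis step could be carried out.

```python
# Miniature hypothetico-deductive run (all data and names are hypothetical).
# Hypothesis (testable, falsifiable): "mean weekly sales differ between two branches."
# Measures: weekly sales counts per branch; analysis: a simple permutation test.
import random

def permutation_p_value(group_a, group_b, n_iter=10_000, seed=42):
    """Two-sided p-value for the observed difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)                      # reassign labels at random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_iter

branch_a = [12, 14, 11, 13, 12, 15, 13, 14]      # hypothetical weekly sales
branch_b = [8, 9, 7, 10, 8, 9, 10, 8]
p = permutation_p_value(branch_a, branch_b)
# Interpretation: a small p-value means the difference is unlikely under chance
# alone, so the hypothesis of a real difference is supported, never "proven".
```

Note the Popperian flavor of the final step: a low p-value supports the hypothesis without confirming it, since future data could still falsify it.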
Some obstacles to conducting scientific research in the management area: in the management and
behavioral areas, it is not always possible to conduct investigations that are 100% scientific because
the results obtained will not be exact and error-free. This is primarily because of difficulties likely to be
encountered in the measurement and collection of data in the subjective areas of feelings, emotions,
attitudes and perceptions. These problems occur whenever we attempt to measure abstract and
subjective constructs. Difficulties might also be encountered in obtaining a representative sample,
restricting the generalizability of the findings. It is not always possible to meet all the hallmarks of science
in full; still, researchers should endeavor, to the extent possible, to conduct the investigation scientifically.
Alternative approaches to research: all research is based on beliefs about the world around us
(ontology) and what we can possibly discover by research. However, different researchers have
different ideas about these issues. The disagreement about the nature of knowledge or how we come
to know (epistemology) has a long history and is not restricted to research in business. Now, the most
important perspectives for contemporary research in business are discussed.
o Positivism: in a positivist view of the world, science and scientific research are seen as a way
to get to the truth, so that we can understand the world well enough to predict and control it.
The world operates by laws of cause and effect that we can discern if we use a scientific approach.
Positivists use deductive reasoning to put forward theories that they can test by means of a
fixed research design and objective measures. They believe that the goal of research is only to
describe phenomena that one can directly observe and objectively measure.
o Constructionism: it criticizes the positivist belief that there is an objective truth;
constructionists hold the view that the world is fundamentally mental or mentally
constructed. They aim to understand the rules people use to make sense of the world by
investigating what happens in people’s minds, and they emphasize how people construct
knowledge. They are interested in how people’s views of the world result from interactions
with others and from the context in which those interactions take place. Their research methods
are often qualitative; focus groups and unstructured interviews, for example, allow them to collect rich data.
o Critical realism: it is a combination of the belief in an external reality (an objective truth) with
the rejection of the claim that this external reality can be objectively measured; observations
will always be subject to interpretation. The critical realist is thus critical of our ability to
understand the world with certainty. Measures of phenomena such as emotions, feelings and
attitudes are often subjective in nature, the collection of data is imperfect and flawed, and
researchers are inherently biased.
o Pragmatism: pragmatists do not take a particular position on what makes good research; they
feel that research on both objective, observable phenomena and subjective meanings can
produce useful knowledge, depending on the research questions of the study. The focus is on
practical, applied research, where different viewpoints on research and on the subject under
study are helpful in solving a problem. Pragmatism describes research as a process in which
concepts and meanings are generalizations of our past actions and experiences, and of the
interactions we have had with our environment. Different researchers have different views of
the world, and these different perspectives help us to gain an understanding of it; pragmatism
thus endorses eclecticism and pluralism. It views the current truth as tentative and changing over
time, and it also stresses the relationship between theory and practice: to pragmatists, theory is
derived from practice and then applied back to practice to achieve intelligent practice.
CHAPTER THREE
p. 81 – 105
The broad problem area: the origin of most research stems from the desire to get a grip on issues,
concerns and conflicts within the company or in its environment. In other words, research typically
begins with a problem. A problem does not necessarily mean that something is seriously wrong with a
current situation that needs to be rectified immediately; it could also indicate an interest in an issue
where finding the right answers might help to improve an existing situation. Once we have identified
the management problem, it needs to be narrowed down to a researchable topic for study. Very often,
much work is needed to do that. Note that the symptoms of a problem should not be mistaken for the real
problem. A feasible topic for research is specific and focused. You need to transform the broad
problem into a feasible topic for research by setting clear boundaries and selecting a perspective (e.g.
academic). Preliminary information gathering will help to make the necessary transformations.
Preliminary information gathering
o Nature of information to be gathered: preliminary information gathering via introspection,
unstructured interviews, structured interviews and / or a review of existing sources of
information such as news articles, textbooks, conference proceedings and the Internet will
help the researcher to narrow down the broad problem area and to define a specific problem
statement. The exact nature of the information to be gathered can be classified under two
headings: background information on the organization and its environment (the contextual
factors) and literature (the body of knowledge available to you, or what is already known and
written down that is relevant to your research project). Secondary data are data that already
exist and do not have to be collected by the researcher. Primary data refers to information
that the researcher gathers first hand through instruments such as surveys, interviews, focus
groups, or observation.
o Background information on the organization: this might include contextual factors which
may be obtained from various published sources, e.g. the origin and history of the company,
size, charter (purpose and ideology), location, resources, interdependent relationships with
other institutions and the external environment, financial position, information on structural
factors (role and positions etc.) and information on the management philosophy. An
understanding of these factors might be helpful in arriving at a precise problem formulation.
o Literature – the body of knowledge available to you: it may also help to think about and / or
better understand the problem. A careful review of published and unpublished materials
ensures that you have a thorough awareness and understanding of current work and
viewpoints on the subject area. This helps you to structure your research on work already
done, or in other words to build on the foundation of existing knowledge and to develop the
problem statement with precision and clarity. A first review of the literature also helps you to
make an informed decision about your research approach. Familiarity with the literature is
beneficial in both an academic (fundamental) and a nonacademic (applied) context.
Defining the problem statement: after gathering preliminary information, the researcher is in a
position to narrow down the problem from its original broad base and define the issues of concern
more clearly.
o What makes a good problem statement? It includes both a statement of the research
objective(s) and the research question(s). Good research has a purposive focus (why):
fundamental research is related to expanding knowledge of business and management in
general, whereas the aim of applied research is to solve a specific problem encountered in the
work setting. Once the purpose of the study has been identified, one is able to formulate the
research question(s) of the study. The research question(s) specify what you want to learn
and guide and structure the process of collecting and analyzing information. A problem
statement is relevant if it is meaningful from a managerial perspective (it relates to a problem
that currently exists in an organizational setting or to an area that a manager believes needs
to be improved in the organization), from an academic perspective (nothing is known about
the topic, knowledge is scattered and not integrated, much research is available but the
results are contradictory, or established relationships do not hold in certain situations), or
from both. A good problem statement is relevant but also feasible; it is feasible if you are able
to answer the research questions within the restrictions of the research project. It should also
be interesting.
The research proposal: before any research study is undertaken, there should be an agreement
between the person who authorizes the study and the researcher as to the problem to be
investigated, the methodology to be used, the duration of the study and its cost. The research
proposal is the result of a planned, organized and careful effort and contains the following:
o A working title
o Background of the study
o The problem statement
The purpose of the study
Research questions
o The scope of the study
o The relevance of the study
o The research design, offering details on:
Type of study (exploratory, descriptive and / or causal)
Data collection methods
Sampling design
Data analysis
o Time frame
o Budget
o Selected bibliography
Managerial implications: understanding the antecedents-problem-consequences sequence and
gathering the relevant information prevents frustration when the remedies of the manager do not
work. The inputs from managers help researchers to define the broad problem area and confirm their
own theories about the situational factors impacting the central problem. A well-developed research
proposal allows managers to judge the relevance of the proposed study. However, to make sure the
objectives are actually achieved, managers must stay involved throughout the entire research process.
Ethical issues in the preliminary stages of investigation: once the problem is specified and a problem
statement is defined, the researcher should decline the project if he or she does not have the skills
or resources to carry it out. If the researcher decides to carry out the project, it is necessary
to inform all the employees of the proposed study. It is also necessary to assure employees that their
responses will be kept confidential and that individual responses will not be divulged to anyone in the
organization. These two steps will make the employees comfortable with the research undertaken and
ensure their cooperation. Employees should not be forced to participate in the study; they also have a
right to privacy and confidentiality.
CHAPTER FOUR
The critical literature review
p. 105 – 137
A second review of the literature (critical literature review) is essential. A literature review is the
selection of available documents on the topic, which contain information, ideas, data and evidence
written from a particular standpoint to fulfill certain aims or express certain views on the nature of the
topic and how it is to be investigated, and the effective evaluation of these documents in relation to
the research being proposed. A critical literature review has many functions; it will help to develop a
conceptual or theoretical background, which discusses the literature pertinent to the specific issue or
problem. This will allow the researcher to account for the selection of the research approach that is
taken (e.g. inductive or deductive in nature). In deductive research, a literature review will also help to
develop a theoretical framework and hypotheses. In inductive research, you do not develop a
theoretical framework.
The purpose of a critical literature review: a literature review is a step-by-step process. It ensures that
no important variable that has in the past been found repeatedly to have had an impact on the
problem is ignored in the research project. Along these lines, a critical review of the literature provides
researchers with a framework for their own work; this includes the identification and definition of the
relevant concepts related to the study and an explanation of how and why these relevant concepts are
related to each other. It thus allows the researcher to introduce relevant terminology and provide
guiding definitions; it also enables the researcher to provide arguments for the relationships between
the variables in a conceptual model. A good literature review provides the foundation for developing a
comprehensive theoretical framework from which hypotheses can be developed for testing, as well as
an idea of the research methods that others have used, which allows the researcher to replicate
models and thus save time and effort. In general, a literature review ensures that:
o The research effort is positioned relative to existing knowledge and builds on this
knowledge
o One does not run the risk of “reinventing the wheel”
o The background can enable you to look at a problem from a specific angle, to shape your
thinking and to spark useful insights.
o A clearer idea emerges as to what variables will be important to consider, and why and
how.
o The researcher is able to introduce relevant terminology and to provide guiding definitions
o Testability and replicability of the findings of the current research are enhanced
o The research findings are related to the findings of others
How to approach the literature review: the first step involves the identification of the various
published and unpublished materials that are available on the topic of interest, and gaining access to
these.
o Data sources: the quality of a literature review depends on a careful selection of the
literature. Academic books and journals are in general the most useful sources of information.
Textbooks: a useful source of theory in a specific area. They can cover a broad range
of topics much more thoroughly than articles can. Textbooks offer a good
starting point; however, they tend to be less up to date.
Journals: both academic and professional journals are important sources of up-to-date
information, and they have generally been peer-reviewed. Review articles
summarize previous research findings to inform the reader of the state of existing
research. Research articles are reports of empirical research, describing one or a few
related studies.
Theses: they often contain an exhaustive review of the literature in a specific area
and most include several empirical chapters.
Conference proceedings: they can be useful in providing the latest research findings,
as they are very up to date.
Unpublished manuscripts: these are information sources that are not officially
released by an individual, publishing house, or other company. They are often very
up to date.
Reports: government departments and corporation commissions carry out a large
amount of research, reports are their published findings.
Newspapers: they provide up-to-date business information and are a useful source
of specific market, industry or company information.
The Internet: here, an enormous amount of information can be found. However, it is
unregulated and unmonitored so determining the usefulness and reliability is an
exceptional challenge. Search engines can help you to find relevant information.
o Searching for literature: with modern technology, locating sources in which the topics of
interest have been published has become much easier. Electronic resources save enormous
amounts of time, are comprehensive in their listing and review of references, and are
relatively inexpensive to access. The most frequently used electronic resources are electronic
journals, full-text databases, bibliographic databases and abstract databases.
o Evaluating the literature: a glance at the title of articles will indicate which of them may be
pertinent and which not. The abstract usually provides an overview of the study purpose,
general research strategy, findings and conclusions. The introduction also provides an
overview of the problem and specific research objectives and research questions. The table of
contents and the first chapter of a book may help to assess the relevance of the book. A good
literature review needs to include references to the key studies in the field, as well as recent
work since it builds on a broader and more up-to-date stream of literature than older work.
The quality of the journal that published an article can also be used as an indicator of the
quality of an article.
o Documenting the literature review: the purpose of the literature review is to help the
researcher to build on the work of others. A review of the literature identifies and highlights
the important variables, and documents the significant findings from earlier research.
Documenting this is important to convince the reader that the researcher is knowledgeable
about the problem area and that the theoretical framework will be structured on work already
done, adding to that solid foundation. The literature survey should bring together all relevant
information in a cogent and logical manner.
Ethical issues: research involves building on the work of others. The pitfalls that you have to beware of
are purposely misrepresenting the work of other authors and plagiarism. Both of these are considered
to be fraud. Plagiarism in particular is taken very seriously in the academic world, because it does not
convey much respect for the efforts that other people have put into their work. It also makes it
difficult for the reader to verify whether your claims about other authors and sources are accurate.
In addition, you need to make your position in a scientific debate clear by designating the authors whose work you
are building on.
CHAPTER FIVE
p. 137 – 181
Theoretical framework and hypothesis development
The need for a theoretical framework: a theoretical framework represents your beliefs on how
certain phenomena (or variables or concepts) are related to each other (a model) and an explanation
of why you believe that these variables are associated with each other. Both the model and the theory
flow logically from the documentation of previous research in the problem area. The process of
building a theoretical framework includes: introducing definitions of the concepts or variables in your
model, developing a conceptual model that provides a descriptive representation of your theory and
coming up with a theory that provides an explanation for relationships between the variables in your
model. From the theoretical framework, testable hypotheses can be developed to examine whether
your theory is valid or not. Thereafter, the hypothesized relationships can be tested through statistical
analyses.
Variables: a variable is anything that can take on differing or varying values. Examples of variables are
production units, absenteeism and motivation. Four main types of variables are discussed in this
chapter. Variables can be discrete (e.g. male/female) or continuous (e.g. the age of an individual).
Extraneous variables confound cause-and-effect relationships.
o Dependent variable: this is the variable of primary interest to the researcher. The researcher’s
goal is to understand and describe the dependent variable, or to explain its variability, or
predict it. Through the analysis of the dependent variable (finding what variables influence it),
it is possible to find answers or solutions to the problem. It is possible to have more than one
dependent variable in a study.
o Independent variable: this is the variable that influences the dependent variable in either a
positive or negative way. The variance in the dependent variable is accounted for by the
independent variable. To establish that a change in the independent variable causes a change
in the dependent variable, these conditions should be met:
IV and DV should covary; a change in the DV should be associated with a change in
the IV.
The IV should precede the DV (time sequence)
No other factor should be a possible cause of the change in the dependent
variable. The researcher should control for the effects of other variables.
A logical explanation is needed and it must explain why IV affects DV
o Moderating variable: this is the variable that has a strong contingent effect on the IV-DV
relationship. This third variable modifies the original relationship between the two variables.
o Mediating variable: a mediating (or intervening) variable is one that surfaces between the
time the IV starts operating to influence the DV and the time its impact is felt on it. There is
thus a temporal quality or time dimension involved. The mediating variable helps to conceptualize and
explain the influence of the IV(s) on the DV.
Theoretical framework: this is the foundation on which the entire deductive research project is based.
It is a logically developed, described, and elaborated network of associations among the variables
deemed relevant to the problem situation and identified through such processes as interviews,
observations, and literature review. A correctly identified problem is essential for arriving at good
solutions. The relationship between the literature review and the theoretical framework is that the
former provides a solid foundation for developing the latter; it identifies the variables that might be
important.
o The components of the theoretical framework: a good theoretical framework identifies and
defines the important variables in the situation that are relevant to the problem and
subsequently describes and explains the interconnections among these variables as well as
elaborates on the variables. There are three basic features that should be incorporated in any
theoretical framework: the variables considered relevant to the study should be clearly
defined, a conceptual model that describes the relationships between the variables in the
model should be given and there should be a clear explanation of why we expect these
relationships to exist. A conceptual model helps you to structure your discussion of the
literature; it describes your ideas about how the concepts (variables) in your model are
related to each other. It is often expressed in a schematic diagram, which helps the
reader to visualize the theorized relationships between the variables in the model. A theory
or a clear explanation for the relationships in your model is the last component of the
theoretical framework. It attempts to explain relationships between the variables in the
model. From the theoretical framework, then, testable hypotheses can be developed to
examine whether the theory formulated is valid or not.
Hypothesis development: once the important variables in a situation have been identified and the
relationships among them have been established through logical reasoning, we can test whether the
relationships, in fact, hold true. By testing these relationships scientifically through appropriate
statistical analyses, we are able to obtain reliable information on what kinds of relationships exist
among the variables operating in the problem situation. Formulating testable statements is called
hypothesis development.
o A hypothesis: a tentative, yet testable, statement, which predicts what you expect to find in
your empirical data. They are derived from the theory on which your conceptual model is
based and are often relational in nature. By testing the hypothesis, it is expected that
solutions can be found.
Statements of hypotheses: formats
If-then statements: to examine whether or not the conjectured relationships or
differences exist, hypotheses can be set either as propositions or in the form of if-then statements.
Directional and non-directional hypotheses: if, in stating the relationship between
two variables or comparing two groups, terms such as positive, negative, more than,
less than, and the like are used, then these are directional hypotheses, because
the direction of the relationship between the variables is indicated. On the other
hand, non-directional hypotheses are those that do postulate a relationship or
difference, but offer no indication of the direction of these relationships or
differences. These hypotheses are formulated either because the relationships or
differences have never been explored, and hence there is no basis for indicating the
direction, or because there have been conflicting findings in previous research
studies on the variables.
Null and alternate hypotheses: the hypothetico-deductive method requires that
hypotheses are falsifiable: they must be written in such a way that other researchers
can show them to be false. A null hypothesis is a hypothesis set up to be rejected in
order to support an alternate hypothesis. When used, the null hypothesis is
presumed true until statistical evidence, in the form of a hypothesis test, indicates
otherwise. Typically, the null statement is expressed in terms of there being no
(significant) relationship between two variables or no (significant) difference
between two groups. The alternate hypothesis is a statement expressing a
relationship between two variables or indicating differences between groups.
Hypothesis generation and testing can be done both through deduction and
induction. In deduction, the theoretical model is first developed, testable
hypotheses are then formulated, and data collected and then the hypotheses are
tested. In the inductive process, new hypotheses are formulated based on what is
known from the data already collected.
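The null/alternate logic above can be sketched in a few lines of Python. This is a hypothetical illustration, not from the book: the satisfaction scores for the two groups and the 5% critical value (t ≈ 2.145 for 14 degrees of freedom) are assumed purely for the example.

```python
# Minimal sketch of null-hypothesis testing with a two-sample t-test,
# using only the standard library. Data and critical value are invented.
from math import sqrt
from statistics import mean, variance

branch_a = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 6.5]  # hypothetical job-satisfaction scores
branch_b = [5.2, 5.5, 5.1, 5.6, 5.4, 5.3, 5.0, 5.7]

# H0 (null): the two branch means do not differ.
# H1 (alternate, non-directional): the two branch means differ.
n1, n2 = len(branch_a), len(branch_b)
pooled_var = ((n1 - 1) * variance(branch_a)
              + (n2 - 1) * variance(branch_b)) / (n1 + n2 - 2)
t_stat = (mean(branch_a) - mean(branch_b)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

CRITICAL_T = 2.145  # two-tailed, alpha = 0.05, df = 14 (assumed for this example)
reject_h0 = abs(t_stat) > CRITICAL_T
print(f"t = {t_stat:.2f}, reject H0: {reject_h0}")
```

Here the computed t exceeds the critical value, so the null hypothesis of "no difference" is rejected in favor of the alternate hypothesis. A directional hypothesis would instead use a one-tailed critical value and check the sign of t.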
Hypothesis testing with qualitative research: negative case analysis: to test a hypothesis, the
researcher should look for data to refute the hypothesis (e.g. that 3 specific factors influence unethical
practices by employees). If even a single case does not support the hypothesis, the theory needs revision.
Searching for cases in which the original hypothesis is disconfirmed is called the negative case
method. It enables the researcher to revise the theory and the hypothesis until such time as the
theory becomes robust.
Managerial implications: in the first stage of research, managers sense the problem area. The
following stages are of preliminary data gathering (including literature review) and developing the
theoretical framework based on the literature review and guided by experience and intuition and
finally, formulating hypotheses for testing. Once the problem is defined, a grasp of the four different
types of variables broadens the understanding of managers as to how multiple factors impinge on the
organizational setting. Knowledge of how and for what purpose the theoretical framework is
developed and the hypotheses are generated enables the manager to be an intelligent judge of the
research report submitted by the consultant.
CHAPTER SIX
p. 181 – 217
Elements of research design
The research design: this is a blueprint for the collection, measurement, and analysis of data, based on
the research questions of the study. The quality of a research study depends on how carefully the
manager / researcher chooses the appropriate design alternatives, taking into consideration its
specific purpose.
Purpose of the study; exploratory, descriptive, causal: the nature of the study depends on the stage
to which knowledge about the research topic has advanced.
o Exploratory study: this is undertaken when not much is known about the situation at hand,
or no information is available on how similar problems or research issues have been solved in
the past. In such cases, extensive preliminary work needs to be done to understand what is
occurring, assess the magnitude of the problem and /or gain familiarity with the phenomena
in the situation. Exploratory research often relies on secondary data and/ or qualitative
approaches to data gathering such as informal discussions (with consumers, employees,
managers) and more formal approaches such as interviews, focus groups, and case studies.
The results of these studies are typically not generalizable; exploratory research itself is flexible in nature.
o Descriptive study: the objective of a descriptive study is to describe, often, the characteristics
of persons, events, or situations. Descriptive research is either quantitative or qualitative.
Correlational research describes relationships between variables, but a correlation does not
mean one variable causes a change in another variable. Descriptive studies may help the
researcher to: understand the characteristics of a group in a given situation, think
systematically about aspects in a given situation, offer ideas for further probe and research,
and help make certain decisions.
o Causal study: these are at the heart of the scientific approach to research. Such studies test
whether or not one variable causes another to change. However, note that, quite often, it is
not just one variable that causes a problem in organizations. In order to establish a causal
relationship, all four of the following conditions should be met:
IV and DV should covary; a change in the DV should be associated with a change in
the IV.
The IV should precede the DV (time sequence)
No other factor should be a possible cause of the change in the dependent
variable. The researcher should control for the effects of other variables.
A logical explanation is needed and it must explain why IV affects DV
Extent of researcher interference with the study: the extent of interference by the researcher has a
direct bearing on whether the study undertaken is correlational or causal. A correlational study is
conducted in a natural environment with minimum interference by the researcher with the normal
flow of events. In studies conducted to establish cause-and-effect relationships, the researcher tries to
manipulate certain variables so as to study the effects of such manipulation on the DV of interest.
Study-setting; contrived and non-contrived: research conducted in the natural environment where events proceed
normally takes place in a non-contrived setting. If it takes place in an artificial environment, this is a
contrived setting. Correlational studies done in non-contrived settings are called field studies. Studies
conducted to establish cause-and-effect relationships using the same natural environment are called
field experiments. Here, the researcher does interfere with the natural occurrence of events inasmuch
as the IV is manipulated. Experiments done to establish a cause-and-effect relationship beyond the
least possibility of doubt require the creation of an artificial, contrived environment; these
are called lab experiments. It is important to decide the various design details before
conducting the research study, since one decision criterion might have an impact on others.
Research strategies:
o Experiments: these are usually associated with deductive research and a scientific or
hypothetico-deductive approach to research. These designs are often used to establish causal
relationships and are much less useful for explorative and / or descriptive studies.
o Survey research: a survey is a system for collecting information from or about people to
describe, compare or explain their knowledge, attitudes and behavior. This system includes
setting objectives for data collection, designing the study, preparing a reliable and valid
survey instrument, administering the survey, managing and analyzing survey data, and
reporting the results. The survey strategy is very popular in business research because it
allows the researcher to collect quantitative and qualitative data on many types of research
questions. Surveys are used in exploratory, descriptive and causal research to collect data
about people, events or situations.
o Observation: this involves going into the natural setting of people, watching what they do,
and describing, analyzing and interpreting what one has seen. Observation can be defined as
the planned watching, recording, analysis and interpretation of behavior, actions, or events.
The four key dimensions that characterize the way observation is conducted are: control,
whether the observer is a member of the group that is observed or not, structure, and
concealment of observation.
o Case studies: they focus on collecting information about a specific object, event or activity,
such as a particular business unit or organization. In order to obtain a clear picture of a
problem, one must examine the real-life situation from various angles and perspectives using
multiple methods of data collection. It is a research strategy that involves an empirical
investigation of a particular contemporary phenomenon within its real-life context using
multiple methods of data collection. Case studies may provide both qualitative and
quantitative data for analysis and interpretation.
o Grounded theory: this is a systematic set of procedures to develop an inductively derived
theory from the data. Important tools are theoretical sampling, coding, and constant
comparison. If there is a bad fit between data or between the data and your theory, then the
categories and theories have to be modified until your categories and your theory fit the
data.
o Action research: this is sometimes undertaken by consultants who want to initiate change
processes in organizations. Here, the researcher begins with a problem that is already
identified, and gathers relevant data to provide a tentative problem solution. This solution is
then implemented and its effects are evaluated; new problems are defined and diagnosed, and the research
continues on an ongoing basis until the problem is fully resolved.
o Mixed methods: triangulation is a technique that is often associated with using mixed
methods. The idea behind it is that one can be more confident in a result if the use of
different methods or sources leads to the same results. It requires that research is addressed
from multiple perspectives. The several kinds of triangulation that are possible:
Method triangulation: using multiple methods of data collection and analysis
Data triangulation: collecting data from several sources and/or at different time
periods
Researcher triangulation: multiple researchers collect and/or analyze the data
Theory triangulation: multiple theories and/or perspectives are used to interpret
and explain the data
Unit of analysis; individuals, dyads, groups, organizations, cultures: the unit of analysis refers to the
level of aggregation of the data collected during the subsequent data analysis stage. A dyad is a two-person group, e.g. the husband-wife interaction. Our research question determines the unit of
analysis. “Levels of analysis” has as a characteristic that the lower levels (e.g. individuals, dyads) are
subsumed within the higher levels (e.g. groups, nations). The nature of the information gathered, as
well as the level at which data are aggregated for analysis, are integral to decisions made on the
choice of the unit of analysis.
Time horizon; cross-sectional vs. longitudinal studies
o Cross-sectional studies: one-shot or cross-sectional studies are studies undertaken in which
data are gathered just once, perhaps over a period of days, weeks or months, in order to
answer a research question.
o Longitudinal studies: these are undertaken when a researcher wants to study people or
phenomena at more than one point in time in order to answer the research question. This
can be when behavior is to be studied before and after a change, so as to know what effects
the change accomplished. These studies take more time, effort and money. However, they
can help to identify cause-and-effect relationships. Experimental designs invariably are
longitudinal studies, since data are collected both before and after a manipulation. Field
studies may also be longitudinal.
Review of elements of research design: the researcher determines the appropriate decisions to be
made in the study design based on the research perspective of the investigator, the problem
definition, the research objectives, the research questions, the extent of rigor desired, and practical
considerations.
Managerial implications: knowledge about research design issues helps the manager to understand
what the researcher is attempting to do. One of the important decisions a manager has to make
before starting a study pertains to how rigorous the study ought to be. Knowing that more rigorous
research designs consume more resources, the manager can weigh the gravity of the problem
experienced and decide what kind of design will yield acceptable results in an efficient manner. One of
the main advantages in fully understanding the difference between causal and correlational studies is
that managers do not fall into the trap of making implicit causal assumptions when two variables are
only associated with each other. Knowledge of research design details also helps managers to study
and intelligently comment on research proposals.
CHAPTER SEVEN
Sources of data
Primary data: information obtained first-hand by the researcher
o Focus groups: about eight to ten members, with a moderator who leads the discussion without
taking part in it himself. The members are generally chosen for their level of experience with
the topic. The sessions are aimed at obtaining respondents’ impressions, interpretations
and opinions on a specific topic. This is a cheap option, but the opinions cannot be considered
to be truly representative. Focus group sessions can also be conducted using videoconferencing.
They are suited to exploratory studies rather than to conducting sample surveys or making generalizations.
o Panels: a panel meets more than once (unlike a focus group) and is studied over a period of time.
Individuals are randomly chosen and become ‘experts’ during the study. A panel can be either
static or dynamic (members change over time). The disadvantages are that panel
members may become sensitized, so that their opinions are no longer representative, and
panel mortality, as members drop out.
o The Delphi Technique: a forecasting method that uses a panel of experts in a systematic,
interactive manner. The experts answer questionnaires and their identity is not revealed.
o Unobtrusive measures: trace measures originate from a primary source that does not involve
people, e.g. the number of different brands of soft drink cans found in trash bags, which
provides a measure of their consumption level.
Secondary data: information gathered from sources that already exist
o e.g. books, periodicals, government publications of economic indicators, statistical abstracts,
…
o The advantage of secondary data is that it saves the time and cost of acquiring the information.
Methods of data collection
Interviewing
o Unstructured interviews: no planned sequence of questions, the interviewee is asked broad,
open-ended questions which might elicit an elaborate response. The main purpose is to
explore and probe into the several factors that might be central to the broad problem area.
o Structured interviews: at the outset, the information that is needed is known. The
interviewer has a list of predetermined questions to be asked either personally, through the
telephone or via computer. Every interviewee will be asked the same questions. More in-depth information can be obtained.
o Training interviewers: usually, interviewers work in a team, since it is not feasible for a
single researcher to conduct all the interviews alone.
o Bias: The information obtained should be free from bias (errors or inaccuracies in the data
collected). It can be introduced by the interviewer, interviewee or the situation.
o Establishing rapport: gaining trust from the interviewee in order to obtain honest
information. The researcher should make the respondent sufficiently at ease to give
informative and truthful answers without fear of adverse consequences. It can be hard when
the respondents are suspicious of the researchers’ intentions.
o Questioning technique:
Funneling: asking an open-ended question at the beginning of an unstructured
interview in order to get a broad idea and form some impressions about the
situation. Then, some more focused follow up questions can be asked.
Unbiased questions: asking no loaded questions, rephrasing, seeking clarification
etc.
Helping the respondent to think through issues: by asking the question in a simpler
way by using paired-choice questions for example.
Taking notes.
o Face-to-face and telephone interviews
Face-to-face: direct interviews, the researcher can adapt the questions as necessary,
clarify doubts. Non-verbal communication can be picked up by the researcher. The
disadvantages are the costs of training and respondents might feel uneasy about
their anonymity.
Telephone interviews: a number of different people can be reached in a relatively
short period of time. The discomfort in facing the interviewer is eliminated and it is
less uncomfortable to disclose personal information. The disadvantages are that the
respondent can end the call by hanging up without warning or explanation. This can
be prevented by calling ahead of time to request participation and setting up a
mutually convenient time. The researcher will not be able to see the respondent to
read the nonverbal communication.
o Computer-assisted interviewing: questions are flashed onto the computer screen and
interviewers can enter the answers of respondents directly into the computer. The accuracy
is enhanced, and it prevents interviewers from asking the wrong questions or asking them in the wrong
sequence.
o Projective methods:
Word association
Thematic apperception tests: calls for the respondent to weave a story around a
picture that is shown
Inkblot tests
CHAPTER EIGHT
Observation
Four key dimensions
o Controlled vs uncontrolled observational studies
Controlled: artificial setting, high control when the subjects are exposed to a certain
situation or condition, the researcher can observe differences between individual
behavioral reactions to the situation.
Uncontrolled: natural setting, no manipulation or interfering in the real-life setting.
However, it is difficult to untangle the causes of events, actions and behavior.
o Participant vs nonparticipant observation
Nonparticipant: no direct involvement in the action, just observation via e.g. a one-way mirror or a camera.
Participant: mostly used in case studies, ethnographic studies and grounded theory
studies. The researcher participates in the daily life of the group or organization
under study.
There are different levels of participant observation, going from passive participation
to moderate participation to active participation to complete participant
observation.
o Structured vs unstructured observational studies
Structured: a predetermined set of categories of activities or phenomena planned to
be studied. Formats for recording the observations can be specifically designed and
tailored to each study. Its nature is quantitative.
Unstructured: forms of exploratory and qualitative research, the observer will
record everything that is observed. It happens when the observer has no definite
ideas of the particular aspects that need focus.
o Concealed vs unconcealed observation
Do the members of the group under study know they are being investigated?
Concealed: behavior is not influenced by the participants’ awareness of being observed;
this matters because reactivity, the extent to which the observer affects the situation,
could be a major threat to validity. However, concealment has ethical drawbacks, since it
may violate the principles of informed consent, privacy and confidentiality.
Unconcealed: more obtrusive and might upset the authenticity of the behavior
under study.
o The two important approaches to observation are participant observation and structured
observation.
Participant observation
Pure observation vs complete participation
Observation includes establishing rapport, establishing a trusting
relationship. The degree to which rapport is established influences the
degree to which the information that is collected is accurate and
dependable.
What to observe?
o Descriptive observation
o Focused observation: emphasizes observation supported by interviews, in which the
insights of the participants guide the researcher’s decisions about what to observe.
o Selective observation, the researcher focuses on different types of
action, activities or events and looks for regularities while being
open to variations from or exceptions to emerging patterns.
Capturing data: writing field notes about what is observed, records of
informal conversations and journal notes. The documentation should be as
accurate, complete, detailed and objective as possible.
Structured observation
The focus is fragmented into small and manageable pieces of information.
Coding schemes: contain predetermined categories for recording what is
observed. The type depends on the information you want to collect. The
following considerations should be taken into account
o Focus
o Objective
o Ease of use
o Mutually exclusive and collectively exhaustive
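A coding scheme of this kind can be sketched as a simple data structure. The categories and codes below are hypothetical, invented only to illustrate the "mutually exclusive and collectively exhaustive" requirement.

```python
# Hypothetical coding scheme for structured observation of meeting behavior.
# Categories are predetermined, mutually exclusive, and (via the catch-all
# "O" code) collectively exhaustive.
CODES = {
    "T": "task-related talk",
    "S": "social talk",
    "Q": "asking a question",
    "O": "other / none of the above",  # catch-all keeps the scheme exhaustive
}

# One code is recorded per fixed time interval during the observation session.
observed = ["T", "T", "Q", "S", "T", "O", "Q"]

# Tallying codes turns the observation record into quantitative data.
counts = {code: observed.count(code) for code in CODES}
print(counts)
```

Because each interval receives exactly one code, the tallies can be compared across observers or sessions, which is what gives structured observation its quantitative nature.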
Advantages and disadvantages
o Directness, it allows the researcher to gather behavioral data without asking questions. The
other (environmental) factors can also be noted and situational factors can be discerned even
though their specific effects are difficult to establish. Observation makes it possible to
observe certain groups of individuals for whom it may be otherwise difficult to obtain
information. The data obtained from events as they normally occur are generally more
reliable and free from respondent bias.
o The drawbacks are reactivity, especially when the observation period is short, and observer
bias, since events are interpreted from the researcher’s point of view (he might have “gone
native”). In addition, observation is time consuming and expensive.
CHAPTER NINE
Questionnaires, p. 277 - 316
Types of questionnaire: a questionnaire is a preformulated written set of questions to which
respondents record their answers, usually within rather closely defined alternatives. This is an efficient
mechanism when a study is descriptive or explanatory in nature: questionnaires are generally less
expensive and less time consuming than interviews, but they also introduce a much larger chance of
nonresponse and nonresponse error. Questionnaires are generally designed to collect large amounts
of quantitative data and they can be
administered personally, mailed to the respondents or electronically distributed.
o Personally administered questionnaires: the main advantage of this is that the researcher
can collect all the completed responses within a short period of time. Any doubts that the
respondent may have can be clarified on the spot. It is less expensive and consumes less time
than interviewing, and it requires fewer skills. A disadvantage is that the researcher may
introduce a bias by explaining questions differently to different people, and administering
the questionnaires in person takes time and effort.
o Mail and electronic questionnaires: the main advantage is that a wide geographical area can
be covered in the survey. The respondents can complete them at their convenience, in their
homes and at their own pace. However, the return rates are rather low, about 30% on
average. Another disadvantage is that any doubts the respondent may have cannot be
clarified and with the low return rate, it is difficult to establish the representativeness of the
sample. However, sending follow-up emails or letters and keeping the questionnaire brief can
help with this. Mail and electronic questionnaires are expected to meet with a better
response rate when respondents are notified in advance and when a reputed research
organization administers them.
Guidelines for questionnaire design: sound questionnaire design principles should focus on three
areas. The first relates to the wording of the questions. The second refers to planning how the
variables will be categorized, scaled and coded after receipt of the responses.
The third pertains to the general appearance of the questionnaire.
o Principle of wording: this refers to the appropriateness of the content of the questions
(what is the purpose of the question: subjective feelings or objective facts?), how questions
are worded and the level of sophistication of the language (it should approximate the level of
understanding of the respondents), and the type and form of questions asked (open-ended or
closed, and positively or negatively worded). Careful! Watch out for double-barreled questions,
which lend themselves to different possible responses to their subparts, and for ambiguous questions.
Recall-dependent questions might yield biased answers, and leading questions should
be avoided: they lead the respondents to give the response the researcher would like them to
give. Loaded questions carry biases too; they are phrased in an emotionally charged
manner. Questions should also not be worded such that they elicit socially desirable
responses. Further aspects are the sequencing of the questions (from general to specific and from easy to
more difficult questions) and the personal data sought from the respondents (classification
data, such as age, gender, marital status etc. Even if not necessary for the survey itself, it is useful
for describing the sample characteristics, because it might explain certain things, e.g. if there is only one
woman in the entire company).
o Principle of measurement: this is to ensure that the data collected are appropriate to test
our hypotheses. It refers to the scales and scaling techniques used in measuring concepts,
as well as to the assessment of the reliability and validity of the measures used to assess the
"goodness of data". Whenever possible, interval and ratio scales should be used in
preference to nominal or ordinal scales. Validity establishes how well a technique or process
measures a particular concept, and reliability indicates how stably and consistently the
instrument taps the variable. The data have to be obtained in a manner that makes for easy
categorization and coding.
General appearance of the questionnaire: an attractive and neat questionnaire, with an
appropriate introduction that clearly discloses the identity of the researcher and
conveys the purpose of the survey, clear instructions, and a well-arrayed set of questions
organized logically and neatly in appropriate sections with response
alternatives, will make it easier for the respondents to answer. Information on
income and other sensitive data should, if considered at all necessary for the survey,
be asked at the end and should be justified by explaining how this information might
contribute to knowledge and problem solving. The questionnaire could include an
open-ended question at the end, allowing respondents to comment on any aspect
they choose, and it should end with an expression of sincere thanks to the respondents.
o Pretesting of structured questions: it is important to pretest the instrument to ensure that
the questions are understood by the respondents and that there are no problems with the
wording or measurement. Pretesting involves the use of a small number of respondents to
test the appropriateness of the questions and their comprehension. This helps to rectify any
inadequacies and thus reduces bias.
o Electronic questionnaire and survey design: electronic survey design systems facilitate the
preparation and administration of questionnaires and are particularly useful for marketing
research. After data collection is complete, a data-editing program identifies missing or
out-of-range data. Randomization of questions and the weighting of respondents to ensure more
representative results are some of the attractive features of such systems.
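The data-editing step described above can be sketched as a simple check over each respondent's record; the field names and valid ranges here are made up for illustration:

```python
# Minimal sketch of an electronic survey's data-editing pass: flag missing or
# out-of-range answers. Field names and ranges are hypothetical.
VALID_RANGES = {"age": (18, 99), "satisfaction": (1, 5)}  # allowed (min, max) per field

def edit_check(response):
    """Return a list of problems found in one respondent's record."""
    problems = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = response.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif not lo <= value <= hi:
            problems.append(f"{field}: out of range ({value})")
    return problems

print(edit_check({"age": 150, "satisfaction": 3}))  # ['age: out of range (150)']
```

A real survey system would also handle skip patterns and free-text fields, but the core check is this simple.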
International dimensions of surveys: with the globalization of business operations, managers often
need to compare the business effectiveness of their subsidiaries in different countries. Researchers
engaged in cross-cultural research also endeavor to trace the similarities and differences in the
behavioral and attitudinal responses of employees at various levels in different cultures.
o Special issues in instrumentation for cross-cultural research: since different languages are
spoken in different countries, it is important to ensure that the translation of the instrument
into the local language accurately matches the original.
o Issues in data collection: response equivalence is ensured by adopting uniform data
collection procedures in different cultures. Identical methods of introducing the study etc. in
personally administered questionnaires provide equivalence in motivation, goal orientation
and response attitudes. The timing of data collection across cultures is also critical; data should be
collected within 3 to 4 months. If too much time elapses, much might change during the time
interval. The researcher has to be sensitive to cultural nuances (not all societies are
egalitarian and the responses might be biased for fear of portraying the country to a
“foreigner” in an “adverse light”). It is worthwhile collaborating with a local researcher while
developing and administering the research instrument.
Multimethods of data collection: almost all data collection methods have some bias associated with
them, so collecting data through multimethods and from multiple sources lends rigor to research. For
instance, if the responses collected through interviews, questionnaires and observation are strongly
correlated with one another, then we will have more confidence about the goodness of the
collected data. Good research entails collection of data from multiple sources and through multiple
data collection methods. Such research, however, is more costly and time consuming.
Managerial implications: when a manager, and not a researcher, collects data, he or she will have to know
how to phrase unbiased questions to elicit the right types of useful response.
Ethics in data collection: ethical issues pertain to those who sponsor the research (they should
commission the study to better the purpose of the organization), those who collect the data and those who
provide them (the respondents).
o Ethics and the researcher:
The researcher must treat the information given by the respondent as strictly confidential;
guarding his or her privacy is one of the primary responsibilities of the researcher.
The researcher should not misrepresent the nature of the study to subjects; the
purpose of the study must be explained.
Personal or seemingly intrusive information should not be solicited and, if absolutely
necessary, be tapped with high sensitivity.
The self-esteem and self-respect of the subjects should never be violated.
No one should be forced to respond to the survey, and if someone does not want to
participate, the individual's desire should be respected.
Nonparticipant observers should be as unintrusive as possible.
In lab studies, the subjects should be debriefed with full disclosure of the reason for
the experiment after they have participated in the study.
Subjects should never be exposed to situations where they could be subject to
physical or mental harm.
There should be no misrepresentation or distortion in reporting the data collected
during the study.
o Ethical behavior of respondents
The subject should cooperate fully in the tasks ahead once having exercised the
choice to participate in a study.
The respondent also has an obligation to be truthful and honest in the responses.
CHAPTER TEN
Experimental designs, p. 316 – 363
The lab experiment: when a cause-and-effect relationship between an independent and a dependent
variable of interest is to be clearly established, then all other variables that might contaminate or
confound the relationship have to be tightly controlled. It is also necessary to manipulate the
independent variable so that the extent of its causal effects can be established. The control (of
other factors that might have an influence on the relationship) and manipulation (creating different
levels of the independent variable to assess the impact on the dependent variable; sometimes
called the treatment) are best done in an artificial setting where the causal effects can be tested.
o Controlling the contaminating, exogenous or "nuisance" variables
Matching groups: deliberately spreading the confounding characteristics across
groups. However, we may not know all the factors that could possibly contaminate
the cause-and-effect relationship in any given situation and hence fail to match some
critical factors.
Randomization: assign members randomly to groups, so every member will have a
known and equal chance of being assigned to any of the groups. All the confounding
variables (such as age, sex and previous experience) are thereby controlled and will have
an equal probability of being distributed among the groups. It ensures that each
group is comparable to the others.
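Random assignment as described above can be sketched in a few lines; the subject pool, group count and seed are illustrative:

```python
import random

def randomize(subjects, n_groups=2, seed=None):
    """Randomly assign subjects to groups so every member has an equal
    chance of ending up in any of the groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                 # random order removes systematic bias
    return [pool[i::n_groups] for i in range(n_groups)]

# 20 hypothetical subjects split into an experimental and a control group.
experimental, control = randomize(range(1, 21), n_groups=2, seed=42)
print(len(experimental), len(control))  # 10 10
```

Because assignment ignores subject characteristics entirely, confounders such as age or experience end up spread across groups in expectation, which is exactly what matching tries (and may fail) to do by hand.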
o Internal validity of lab experiments: internal validity refers to the confidence we place in the
cause-and-effect relationship. It addresses the question: “to what extent does the research
design permit us to say that the independent variable A causes a change in the dependent
variable B?” In research with high internal validity, we are better able to argue that the
relationship is causal than in studies with low internal validity.
o External validity or generalizability of lab experiments: to what extent are the results found
in the lab setting transferable or generalizable to actual organizational or field settings? If a
relationship is found in a lab experiment, can we then confidently say the same relationship
will also hold true in the organizational setting?
The field experiment: an experiment done in the natural environment in which work goes on as usual,
but treatments are given to one or more groups. Even though it may not be possible to control all the
nuisance variables, the treatment can still be manipulated. Control groups can also be set up in field
experiments. Any cause-and-effect relationship found under these conditions will have wider
generalizability to other similar production settings. Field experiments have more external validity
than lab experiments, but lab experiments have higher internal validity.
Trade-off between internal and external validity: researchers usually try first to test the causal
relationships in a lab setting and once that has been established, they try to test the relationship in a
field experiment.
Factors affecting the validity of experiments: even in the best-designed lab studies, confounding
factors might still pose a threat to internal validity. The seven major threats are
discussed below:
o History effects: certain events or factors might unexpectedly occur while the experiment is in
progress, and this history of events would confound the cause-and-effect relationship and thus
threaten internal validity.
o Maturation effects: these result from biological and psychological processes
operating within the respondents as a function of the passage of time. Examples include
growing older, getting tired, feeling hungry, and getting bored.
o Testing effects: a first measure is taken (the pretest), then the treatment is given, and after
that a second measure is taken (the posttest). The difference between posttest and pretest
scores is then attributed to the treatment. However, this may lead to two types of testing
effects. A main testing effect occurs when the pretest affects the posttest. This typically
occurs because participants want to be consistent. Interactive testing effects occur when the
pretest affects the participant’s reaction to the treatment. Both testing effects may affect
both the internal and external validity of our findings.
o Selection bias effects: the types of participants in a lab experiment may be very different
from the types of employees recruited by an organization, which is a threat to external
validity. The threat to internal validity comes from improper or unmatched selection of
subjects for the experimental and control groups.
o Mortality effects: when the group composition changes over time across the groups,
comparison between the groups becomes difficult because those who dropped out of the
experiment may confound the result.
o Statistical regression effects: the members chosen for the experimental group have extreme
scores on the dependent variable to begin with. The laws of probability say that these people
will likely score closer to the mean on a posttest after being exposed to the treatment
(regressing toward the mean).
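A small simulation with hypothetical score distributions illustrates why selecting extreme scorers produces regression toward the mean even when no treatment is given at all:

```python
import random

random.seed(1)
# Simulate test-retest: each observed score = stable "true" score + occasion noise.
true_scores = [random.gauss(50, 10) for _ in range(2000)]
pretest  = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the top 10% of pretest scorers, as a poorly designed experiment might.
cutoff = sorted(pretest)[int(0.9 * len(pretest))]
extreme = [i for i, p in enumerate(pretest) if p >= cutoff]
pre_mean  = sum(pretest[i]  for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)

# With no treatment whatsoever, the extreme group still scores closer to the
# overall mean (50) on the posttest: statistical regression, not an effect.
print(round(pre_mean, 1), round(post_mean, 1))
```

The extreme pretest scores are high partly by luck (noise), and that luck does not repeat on the posttest, so the group mean drifts back toward the population mean.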
o Instrumentation effects: these might arise because of a change in measuring instrument
between pretest and posttest, and not because of the treatment’s differential impact at the
end.
Internal validity in case studies: we cannot draw conclusions about causal relationships from case
studies that merely describe the events that occurred during a particular period, since such
designs lack internal validity.
Types of experimental design and validity:
o Quasi-experimental designs: some studies expose an experimental group to a treatment and
measure its effects. This is weak since it does not measure the true cause-and-effect
relationship; there is no comparison between groups, no recording of the status of the
dependent variable as it was prior to the treatment and how it changed. The following three
designs are quasi-experimental designs.
Pretest and posttest experimental group design: an experimental group (without a
control group) may be given a pretest, exposed to a treatment and then given a
posttest to measure the effects of the treatment.
Posttests only with experimental and control groups: the experimental group is
exposed to a treatment and posttest, the control group only to the posttest.
Time series design: this collects data on the same variable at regular intervals (e.g.
weeks, months or years). This allows the researcher to assess the impact of a
treatment over time.
o True experimental designs: these include both the treatment and control groups and record
information both before and after the experimental group is exposed to the treatment.
Pretest and posttest experimental and control group design: two groups, one
experimental and the other control, are exposed to the pretest and the posttest.
Only the experimental group is exposed to a treatment, the control group isn’t.
Solomon four-group design: to gain more confidence in internal validity in
experimental designs, it is advisable to set up two experimental groups and two
control groups. One experimental group and one control group can be given both
the pretest and the posttest; the other two groups will be given only the posttest.
Both the experimental groups will receive the treatment. This design controls for all
the threats to internal validity, except for mortality. The estimates of the measures
can be used to generate estimations of the impact of the experimental treatment
(E), interactive testing effects (I) and the effects of uncontrolled variables (U).
Group 1: (O2 – O1) = E + I + U
Group 2: (O4 – O3) = U
Group 3: (O5 – ½(O1 + O3)) = E + U
Group 4: (O6 – ½(O1 + O3)) = U
We are able to control for I by calculating its effect using the results of groups 1 and 3.
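The group comparisons can be rearranged to isolate E, I and U, using the average of the two actual pretests, ½(O1 + O3), as the estimated pretest for the groups that were not pretested. The observation values below are hypothetical:

```python
# Hypothetical mean scores O1..O6 for the four Solomon groups
# (O1, O3 are pretests; O2, O4, O5, O6 are posttests).
O1, O2 = 10.0, 18.0   # group 1: pretest, treatment, posttest
O3, O4 = 10.0, 12.0   # group 2: pretest, posttest
O5 = 17.0             # group 3: treatment, posttest only
O6 = 12.0             # group 4: posttest only

U = O4 - O3                     # uncontrolled variables only
E = (O5 - (O1 + O3) / 2) - U    # treatment effect, freed of U
I = (O2 - O1) - E - U           # interactive testing effect left over in group 1
print(E, I, U)  # 5.0 1.0 2.0
```

With these made-up numbers, the design attributes 5 points to the treatment, 1 point to the pretest sensitizing participants to the treatment, and 2 points to everything uncontrolled.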
Double-blind studies: these are used to avoid any bias that might creep in. The
subjects in the experimental and control groups are kept unaware of who is given
the drug and who the placebo, for example. This is a blind study. When both the
experimenter and the subjects are blinded (an outside testing agency knows only
who got what treatment), such studies are called double-blind studies.
Ex post facto experimental design: there is no manipulation of the independent
variable in the lab or field setting, but subjects who have already been exposed to a
stimulus and those not so exposed are studied. The study does not immediately
follow after a treatment.
Simulation: it can be thought of as an experiment conducted in a specially created setting that very
closely represents the natural environment in which activities are usually carried out. It lies
somewhere between a lab and field experiment. Causal relationships can be tested since both
manipulation and control are possible in simulations. Computer-based simulations are frequently used
in the accounting and financial areas, e.g. risk management.
Ethical issues in experimental design research. The following practices are considered unethical:
o Putting pressure on individuals to participate in experiments
o Giving menial tasks and asking demeaning questions that diminish participants’ self-respect
o Deceiving subjects
o Exposing participants to physical or mental stress
o Not allowing subjects to withdraw from the research when they want to
o Using the research results to disadvantage the participants
o Not explaining the procedures to be followed in the experiment
o Exposing respondents to hazardous and unsafe environments
o Not debriefing participants fully and accurately after the experiment is over
o Not preserving the privacy and confidentiality of the information given by the participants.
o Withholding benefits from control groups. This one is somewhat controversial though.
Managerial implications: before using experimental designs, it is essential to consider whether they
are necessary at all, and if so, at what level of sophistication.
In the appendix that follows this chapter, more research designs are explained, but I did not include them
here since I do not consider them that necessary and important.
CHAPTER ELEVEN
Measurement of variables: operational definition, p. 370 - 391
How variables are measured: measurement is the assignment of numbers or other symbols to
characteristics (or attributes) of objects according to a prespecified set of rules. Objects include
persons, strategic business units, etc. Examples of characteristics are arousal-seeking tendency,
achievement motivation, length, ethnic diversity, etc. You cannot measure objects; you measure
characteristics or attributes of objects. To be able to measure, you also need a judge. This is someone
who has the necessary knowledge and skills to assess the quality of something, such as the taste of
yogurt for example. In many cases, the object and the judge are the same (e.g. if you want to know the
gender of employees). It is easy to measure attributes of objects that can be physically measured. The
measurement of more abstract and subjective attributes is more difficult.
Operational definition: operationalization: despite the lack of physical measuring devices to measure
more nebulous variables, there are ways of tapping these types of variable. One technique is to reduce
these abstract notions or concepts to observable behavior and / or characteristics. This is called
operationalization of the concepts.
o Dimensions and elements: the measure has to include an adequate and representative set of
items (or elements). An example of this is the measure of need for cognition; this would have
to contain 34 items because otherwise it would not represent the entire domain or universe
of need for cognition. An example of a construct with more than one dimension is aggression.
This has at least two dimensions: verbal aggression and physical aggression. A valid
measurement would have to include both dimensions.
o Operationalizing the (multidimensional) concept of achievement motivation: this is a
subjective abstract construct so we must infer achievement motivation by measuring
behavioral dimensions, facets, or characteristics we would expect to find in people with high
achievement motivation. The next step is to go through the literature to find out whether
there are any existing measures of the concept. Both scientific journals and “scale
handbooks” are important sources for this. These help you determine whether a
measurement scale exists and, if there are multiple, to make a logical selection between
available measures.
Dimensions and elements of achievement motivation: we would expect those with
high achievement motivation to drive themselves hard at work, find it difficult to
relax, prefer to work alone, etc. Now, in the chapter, the elements of the dimensions
are described. I do not find this necessary, so I skip it in this summary.
o What operationalization is not: it does not describe the correlates of the concept.
Operationalizing a concept does not consist of delineating the reasons, antecedents,
consequences, or correlates of the concept. Rather, it describes its observable characteristics
in order to be able to measure the concept.
o Review of operationalization: after you have defined the concept, the next step is to
either find or develop an adequate (set of) closed-ended question(s) that allow(s) you to
measure the concept in a reliable and valid way. Luckily, measures that are used in business
have already been developed by researchers. If not, you have to develop your own.
International dimensions of operationalization: in conducting transnational research, it is important
to remember that certain variables have different meanings and connotations in different cultures.
CHAPTER TWELVE
Measurement: scaling, reliability, validity, p. 391 - 437
Four types of scales: measurement means gathering data in the form of numbers. To be able to assign
numbers to attributes of objects we need a scale. A scale is a tool or mechanism by which individuals
are distinguished as to how they differ from one another on the variables of interest to our study.
Scaling involves the creation of a continuum on which our objects are located.
o Nominal scale: one that allows the researcher to assign subjects to certain categories or
groups. For example, with respect to the variable of gender, respondents can be grouped into
two categories: male and female. These two groups can be assigned code number 1 and 2.
These numbers serve as simple and convenient category labels with no intrinsic value, other
than to assign respondents to one of two non-overlapping, or mutually exclusive, categories.
The nominal scale gives some basic, categorical, gross information.
o Ordinal scale: it not only categorizes the variables in such a way as to denote differences
among the various categories, it also rank-orders the categories in some meaningful order.
With any variable for which the categories are to be ordered according to some preference,
the preference would be ranked (e.g. best to worst). For example, respondents might be
asked to indicate their preferences by ranking the importance they attach to a number of
distinct categories. However, the ordinal scale does not give any indication of the magnitude
of the differences among the ranks.
o Interval scale: this allows us to perform certain arithmetical operations on the data collected
from the respondents. The interval scale lets us measure the distance between any two
points on the scale. Therefore, it also measures the magnitude of the differences in the
preferences among the individuals. It is usually used when responses can be tapped on a five-point scale (e.g. a Likert scale).
o Ratio scale: this overcomes the disadvantage of the arbitrary origin point of the interval scale,
in that it has an absolute (in contrast to an arbitrary) zero point, which is a meaningful
measurement point. It not only measures the magnitude of the differences between points
on the scale but also taps the proportions in the differences. It subsumes all the properties of
the other three scales. It is usually used in business research when exact numbers on
objective factors are called for.
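The cumulative hierarchy of the four scales can be captured in a small lookup of which descriptive statistics each scale permits (a sketch, not an exhaustive list of operations):

```python
# Which statistics each measurement scale supports. The hierarchy is
# cumulative: every scale permits everything the scales below it permit.
PERMISSIBLE = {
    "nominal":  {"mode"},
    "ordinal":  {"mode", "median"},
    "interval": {"mode", "median", "mean", "difference"},
    "ratio":    {"mode", "median", "mean", "difference", "ratio"},
}

def allows(scale, statistic):
    """True if the given statistic is meaningful for data on this scale."""
    return statistic in PERMISSIBLE[scale]

print(allows("ordinal", "mean"))   # False: ranks say nothing about distances
print(allows("interval", "ratio")) # False: no absolute zero point
print(allows("ratio", "ratio"))    # True
```

This is why the text recommends interval and ratio scales where possible: each step up the hierarchy unlocks more powerful arithmetic on the collected data.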
Rating scales: see the following scales that are mentioned and described below:
o Dichotomous scale: used to elicit a yes or no answer by using a nominal scale.
o Category scale: it uses multiple items to elicit a single response, e.g. “where in a city do you
live?”
o Semantic differential scale: several bipolar attributes are identified at the extremes of the
scale, and respondents are asked to indicate their attitudes, on what may be called a
semantic space, toward a particular individual, object, or event on each of the attitudes. The
bipolar adjectives might employ terms such as good-bad, strong-weak and hot-cold. It is used
to assess the respondents’ attitudes toward something.
o Numerical scale: it is similar to the semantic differential scale, with the difference that
numbers on a five-point or seven-point scale are provided, with bipolar adjectives at both
ends.
o Itemized rating scale: a five-point or seven-point scale with anchors, as needed, is provided
for each item and the respondent states the appropriate number on the side of each item, or
circles the relevant number against each item. This uses an interval scale. When a neutral
point is provided, it is a balanced rating scale, and when it is not, it is an unbalanced rating
scale.
o The Likert scale: it is designed to examine how strongly subjects agree or disagree with
statements on a five-point scale (from strongly disagree to strongly agree). The responses over a number of
items tapping a particular concept or variable can be analyzed item by item, but it is also
possible to calculate a total or summated score for each respondent by summing across
items. The latter is widely used, and therefore, the Likert scale is also referred to as a
summated scale. It is debated upon whether a Likert scale is an ordinal or an interval scale.
Generally, they are treated as interval scales.
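Computing a summated Likert score, including reverse-coding of negatively worded items, is simple arithmetic; the three item responses and the reverse-coded item below are made up for illustration:

```python
def summated_score(answers, reverse=()):
    """Sum item responses on a 1-5 Likert scale; reverse-code negatively
    worded items (6 - x maps 1<->5, 2<->4)."""
    return sum(6 - a if i in reverse else a
               for i, a in enumerate(answers))

# A respondent answered three items; item index 2 was negatively worded.
print(summated_score([4, 5, 2], reverse={2}))  # 4 + 5 + (6 - 2) = 13
```

Summing across items like this is exactly why the Likert scale is also called a summated scale.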
o Fixed or constant sum scale: the respondents are asked to distribute a given number of
points across various items. This is more in the nature of an ordinal
scale.
o Stapel scale: this scale simultaneously measures both the direction and intensity of the
attitude toward the items under study. The characteristic of interest is placed at the center
with a numerical scale ranging, say, from +3 to -3, on either side of the item. It does not have
an absolute zero point so it is an interval scale.
o Graphic rating scale: a graphical representation helps the respondents to indicate on this
scale their answers to a particular question by placing a mark at the appropriate point on the
line. This is an ordinal scale.
Consensus scale: scales can also be developed by consensus, where a panel of judges selects
certain items which, in its view, measure the relevant concept. Such a scale is developed after
the selected items have been examined and tested for their validity and reliability. However,
this scale is rarely used for measuring organizational concepts because of the time necessary
to develop it.
Ranking scales: they are used to tap preferences between two objects or among more objects or items
(they are ordinal in nature). However, such ranking may not give definite clues to some of the answers sought.
Ordinal data often lend themselves to any of the ranking scales. Alternative methods used are paired
comparisons, forced choice, and the comparative scale.
o Paired comparison: it is used when, among a small number of objects, respondents are asked
to choose between two objects at a time. This helps to assess preferences. This is a good
method if the number of stimuli presented is small.
o Forced choice: this enables respondents to rank objects relative to one another, among the
alternatives provided.
o Comparative scale: it provides a benchmark or a point of reference to assess attitudes toward
the current object, event, or situation under study.
International dimensions of scaling: different cultures react differently to issues of scaling. For
instance, in some countries, a seven-point scale is more sensitive than a four-point scale in eliciting
unbiased responses. Research has also shown that people from different countries differ in both their
tendency to use the extremes of the rating scale and to respond in a socially desirable way.
Goodness of measures: it is important to make sure that the instrument that we develop to measure a
particular concept is indeed accurately measuring the variable, and that, in fact, we are actually
measuring the concept that we set out to measure. This ensures that we have not overlooked some
important dimensions and elements or included some irrelevant ones.
o Item analysis: it is carried out to see if the items in the instrument belong there or not. Each
item is examined for its ability to discriminate between those subjects whose total scores are
high and those with low scores. Reliability is a test of how consistently a measuring
instrument measures whatever concept it is measuring. Validity is a test of how well an
instrument that is developed measures the particular concept it is intended to measure.
o Content validity: it ensures that the measure includes an adequate and representative set of
items that tap the concept. The more the scale items represent the domain or universe of the
concept being measured, the greater the content validity. A panel of judges can attest the
content validity of the instrument. Face validity indicates that the items that are intended to
measure a concept, do, on the face of it, look like they measure the concept.
o Criterion-related validity: it is established when the measure differentiates individuals on a
criterion it is expected to predict. Concurrent validity is established when the scale
discriminates individuals who are known to be different; that is, they should score differently
on the instrument. Predictive validity indicates the ability of the measuring instrument to
differentiate among individuals with reference to a future criterion.
o Construct validity: this testifies to how well the results obtained from the use of the measure
fit the theories around which the test is designed. This is assessed through convergent
validity (established when the scores obtained with two different instruments measuring the
same concept are highly correlated) and discriminant validity (established when, based on
theory, two variables are predicted to be uncorrelated and the scores obtained by measuring
them are indeed empirically found to be so).
o Some of the ways in which the aforementioned forms of validity can be established are
through:
Correlational analysis, as in the case of establishing concurrent and predictive
validity or convergent and discriminant validity.
Factor analysis: this confirms the dimensions of the concept that have been
operationally defined, as well as indicating which of the items are most appropriate
for each dimension.
The multitrait, multimethod matrix of correlations, derived from measuring
concepts by different forms and different measures.
Validity is a necessary but not sufficient condition for a good measure, and there are
different ways of doing a study on the same topic (e.g. job satisfaction).
o Reliability: it indicates the extent to which a measure is without bias and hence ensures
consistent measurement across time and across the various items in the instrument.
Stability of measures: the ability of a measure to remain the same over time,
despite uncontrollable testing conditions or the state of the respondents
themselves, is indicative of its stability.
Test-retest reliability: obtained by repetition of the same measure on a second
occasion.
Parallel-form reliability: when responses on two comparable sets of measures
tapping the same construct are highly correlated.
Internal consistency of measures: it is indicative of the homogeneity of the items in
the measure that taps the construct. The items should hang together as a set and be
capable of independently measuring the same concept so that the respondents
attach the same overall meaning to each of the items.
Interitem consistency reliability: a test of the consistency of respondents’
answers to all the items in a measure.
Split-half reliability: it reflects the correlations between two halves of an
instrument. The estimates will vary depending on how the items in the
measure are split into two halves.
Reflective vs. formative measurement scales: the items of a multi-item measure should hang
together as a set and be capable of independently measuring the same concept. In formative
scales, however, the items that measure a concept need not hang together.
o Reflective scale: all the items are expected to correlate; they all share a common basis.
o Formative scale: it is used when a construct is viewed as an explanatory combination of its
indicators. It may have several dimensions.
CHAPTER THIRTEEN
Population: the entire group of people, events or things of interest that the researcher wishes to
investigate and about which he or she wants to make inferences based on sample statistics. E.g. if
regulators want to know how patients in nursing homes are cared for, then all the patients in the
nursing homes are the population.
Element: a single member of the population.
Sample: a subset of the population, it comprises some members of it. From a study of the sample, the
researcher will draw generalizable conclusions about the entire population.
Sampling unit: the element or set of elements that is available for selection in some stages of the
sampling process. E.g. city blocks, households and individuals within the household.
Subject: a single member of the sample.
Parameters: the characteristics of the population such as µ (population mean), σ (the population
standard deviation) and σ² (the population variance) are referred to as parameters. All conclusions
drawn about the sample under study are generalized to the population, so the sample statistics x̄
(the sample mean), s (the sample standard deviation) and s² (the sample variance) are used as
estimates of the population parameters.
Reasons for sampling:
o It would be impossible to collect data from, or test, or examine every element in research
investigations involving many hundreds or thousands of elements.
o If it were possible, it would be prohibitive in terms of time, cost and other human resources.
o Study of a sample rather than the entire population is also sometimes likely to produce more
reliable results, mostly because fatigue is reduced and fewer errors result in collecting the data.
o It would also be impossible to use the entire population when testing destroys the elements
studied; this is known as destructive sampling (e.g. if there would be nothing left to sell).
Representativeness of samples: rarely will the sample be an exact replica of the population from
which it is drawn. However, if the sample is chosen in a scientific way, we can be reasonably sure that
the sample statistic is fairly close to the population parameter.
Normality of distributions: attributes or characteristics of the population are generally normally
distributed; most people will be clustered around the mean. If we are to estimate population
characteristics from those represented in a sample with accuracy, the sample has to be so chosen that
the distribution of the characteristics of interest follows the same pattern of normal distribution in the
sample as it does in the population. If we take a sufficiently large number of samples and choose them
with care, the sampling distribution of the sample means will tend to be normal. However, some cases may not call
for such regard to generalizability, e.g. at the exploratory stages of fact finding, we might limit the
interview to only the most conveniently available people or when urgency in getting information
overrides a high level of accuracy in terms of priority. However, the results of such convenient samples
are not reliable and can never be generalized to the population.
The sampling process: sampling is the process of selecting a sufficient number of right elements from
the population so that a study of the sample makes it possible to generalize the properties or
characteristics to the population elements. The major steps include:
o Define the population: sampling begins with precisely defining the target population. It must
be defined in terms of elements, geographical boundaries and time.
o Determine the sample frame: it is a (physical) representation of all the elements in the
population from which the sample is drawn. E.g. the payroll of an organization if its members
are to be studied. Although the sampling frame is useful in providing a listing of each element
in the population, it may not always be an up-to-date document. When it does not
exactly match the population, coverage errors occur. This error should be dealt with by either
redefining the target population in terms of the sampling frame or adjusting the collected
data by a weighting scheme to counterbalance the coverage error.
o Determine the sample design: There are two major types of sampling design: probability and
nonprobability sampling.
Probability sampling: the elements in the population have some known, nonzero
chance or probability of being selected as sample subjects. It is used when the
representativeness of the sample is of importance in the interest of wider
generalizability. It can be either unrestricted (simple random sampling) or restricted
(complex probability sampling) in nature.
Unrestricted or simple random sampling: every element in the population
has a known and equal chance of being selected as a subject. When the
elements from the population are drawn, it is most likely that the
distribution patterns of the characteristics we are interested in investigating
in the population are also likewise distributed in the subjects we draw for
our sample. This sampling design has the least bias and offers the most
generalizability; however, it could become cumbersome and expensive.
Restricted or complex probability sampling: these sampling procedures
offer a viable, and sometimes more efficient, alternative to the unrestricted
design. Efficiency is improved in that, for a given sample size, more information can be obtained.
The five most common complex probability sampling designs are:
o Systematic sampling: it involves drawing every nth element in the
population starting with a randomly chosen element between 1
and n.
o Stratified random sampling: it involves a process of stratification
or segregation, followed by random selection of subjects from
each stratum. The population is first divided into mutually
exclusive groups that are relevant, appropriate and meaningful in
the context of the study.
o Proportionate and disproportionate stratified random sampling:
once the population has been stratified in some meaningful way, a
sample of members from each stratum can be drawn using either a
simple random sampling or a systematic sampling procedure. The
subjects drawn from each stratum can be either proportionate or
disproportionate to the number of elements in the stratum.
However, when the information from some of the levels would not
truly reflect how all members at those levels would respond (e.g.
when the number is very small), one might decide to use a
disproportionate stratified random sampling. The number of
subjects from each stratum would now be altered, while keeping
the sample size unchanged. Disproportionate sampling is also
sometimes done when it is easier, simpler and less expensive to
collect data from one or more strata than from others.
Cluster sampling: samples are gathered in groups or chunks of
elements that, ideally, are natural aggregates of elements in the population.
The target population is first divided into clusters, then, a random sample of
clusters is drawn and for each selected cluster either all the elements or a
sample of elements are included in the sample. They offer more
heterogeneity within groups and more homogeneity among groups. A
specific type of cluster sampling is area sampling. In this case, clusters
consist of geographical areas. It is less expensive than most other
probability sampling designs and it is not dependent on a sampling frame.
Cluster sampling is not very common in organizational research because it
does not offer much efficiency in terms of precision or confidence in the
results. However, it’s cheap and convenient.
o Single-stage and multistage cluster sampling: single stage cluster
sampling is as discussed above, and cluster sampling, when it is
done in several stages, is known as multistage cluster sampling. It
involves a probability sampling of the primary sampling units and
when we have reached the final stage of breakdown for the
sample units, we sample every member in those units.
Double sampling: This plan is resorted to when further information is
needed from a subset of the group from which some information has
already been collected for the same study. It examines the matter in more
detail.
Nonprobability sampling, the elements do not have a known or predetermined
chance of being selected as subjects. It is used when time or other factors, rather
than generalizability become critical. Depending on the extent of generalizability
desired, the demands of time and other resources, and the purpose of the study,
different types of probability and nonprobability sampling design are chosen.
Convenience sampling: refers to the collection of information from
members of the population who are conveniently available to provide it. It
is most often used during the exploratory phase of a research project and is
perhaps the best way of getting some basic information quickly and
efficiently.
Purposive sampling: it aims to obtain information from specific target
groups: specific types of people who can provide the desired information,
either because they are the only ones who have it, or they conform to some
criteria set by the researcher. There are two major types of purposive
sampling:
o Judgement sampling: it involves the choice of subjects who are
most advantageously placed or in the best position to provide the
information required. It is used when a limited number or category
of people have the information that is sought. It may curtail the
generalizability of the findings due to the fact that we are using a
sample of experts who are conveniently available.
o Quota sampling: it ensures that certain groups are adequately
represented in the study through the assignment of a quota.
Generally, the quota fixed for each subgroup is based on the total
numbers of each group in the population; however, the results are
not generalizable to the population. This becomes a necessity
when a subset of the population is underrepresented in the
organization, e.g. minority groups, foremen, etc.
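Three of the probability sampling designs described above can be sketched with Python's random module. The population, strata and sample sizes below are invented for illustration.

```python
# A minimal sketch of simple random, systematic and proportionate
# stratified random sampling; population and strata are invented.
import random

random.seed(42)  # reproducible draws for illustration
population = list(range(1, 101))  # element IDs 1..100

# Simple random sampling: every element has a known, equal chance.
simple = random.sample(population, 10)

# Systematic sampling: every nth element, starting from a random
# element within the first interval.
n = 10  # sampling interval
start = random.randint(0, n - 1)
systematic = population[start::n]

# Proportionate stratified random sampling: random draws from each
# stratum, proportionate to the stratum's share of the population.
strata = {"managers": population[:20], "clerks": population[20:]}
sample_size = 10
stratified = []
for name, members in strata.items():
    k = round(sample_size * len(members) / len(population))
    stratified += random.sample(members, k)

print(len(simple), len(systematic), len(stratified))
```

A disproportionate design would simply override `k` for the strata that are too small to be truly reflected by a proportionate draw.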
o Determine the appropriate sample size: the decision about how large the sample size should
be depends on different factors:
The research objective
The extent of precision desired
The acceptable risk in predicting that level of precision
The amount of variability in the population itself
The cost and time constraints
The size of the population itself in some cases
o Execute the sampling process.
Note: nonresponse and nonresponse error. Two important sources of nonresponse are not-at-homes and
refusals. To reduce the first, it is possible to call back at another time or on a different day. The
rate of refusals depends, among other things, on the length of the survey, the data collection method and the
patronage of the research. It is almost impossible to entirely avoid nonresponse in surveys, so you have to turn
to methods to deal with nonresponse error, such as generalizing the results to the respondents only or
statistical adjustment.
Sampling in cross-cultural research: while engaging in cross-cultural research, one has to be sensitive
to the issue of selecting matched samples in the different countries.
Issues of precision and confidence in determining sample size: a reliable and valid sample should
enable us to generalize the findings from the sample to the population under investigation. The
sample statistics should be reliable estimates and reflect the population parameters as closely as
possible within a narrow margin of error.
o Precision: it refers to how close our estimate is to the true population characteristic. It is a
function of the range of variability in the sampling distribution of the sample mean. The
smaller the differences between different samples, the greater the probability that the
sample mean will be closer to the population mean. This variability is called the standard
error, which is calculated by the following formula:

s_x̄ = s / √n

where s is the standard deviation, n the sample size, and s_x̄ indicates the standard error, or
the extent of precision offered by the sample.
o Confidence: it denotes how certain we are that our estimates will really hold true for the
population, based on the sample statistics.
o If we want to maintain our original precision while increasing the confidence, or maintain the
confidence level while increasing precision, or increasing both, we need a larger sample size
n. The sample size n is a function of the variability in the population, the precision or accuracy needed,
confidence level desired and type of sampling plan used. However, when it is not possible to
increase n, the only way to maintain the same level of precision is to forsake the confidence
with which we can predict our estimates.
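The standard-error formula and the precision/confidence trade-off above can be sketched numerically. The sample values below are invented, and the 1.96 multiplier is the conventional z-value for 95% confidence.

```python
# Sketch of the standard error s_x̄ = s / √n and a confidence interval
# for the population mean; the sample data are invented.
import math
import statistics

sample = [78, 82, 75, 80, 85, 77, 81, 79, 83, 76]  # hypothetical measurements
n = len(sample)
s = statistics.stdev(sample)   # sample standard deviation
se = s / math.sqrt(n)          # standard error of the mean

# Interval estimate: x̄ ± z * s_x̄. Raising z (more confidence) widens
# the interval (less precision) unless n is increased.
x_bar = statistics.mean(sample)
z = 1.96  # 95% confidence
lower, upper = x_bar - z * se, x_bar + z * se
print(round(se, 2), round(lower, 1), round(upper, 1))
```

Quadrupling n would halve the standard error, which is why tighter precision at the same confidence always demands a larger sample.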
Sample data and hypothesis testing: sample data can be used not only to estimate population
parameters, but also to test hypotheses about population values.
Importance of sampling design and sample size: both are important to establish the
representativeness of the sample for generalizability. If the appropriate sampling design is not used, a
large sample size will not, in itself, allow the findings to be generalized to the population. Likewise,
unless the sample size is adequate for the desired level of precision and confidence, no sampling
design will be useful in meeting the objectives of the study. A further consideration is whether
statistical significance or practical significance is more relevant. Roscoe (1975) proposed the following rules of thumb:
o Sample sizes larger than 30 and less than 500 are appropriate for most research.
o When samples are broken into subsamples, a minimum sample size of 30 for each category is
necessary.
o In multivariate research, the sample size should be several times (preferably 10) as large as
the number of variables in the study.
o For simple experimental research with tight experimental controls, successful research is
possible with samples as small as 10 to 20 in size.
Efficiency in sampling: it is attained when, for a given level of precision, the sample size could be
reduced, or for a given sample size, the level of precision could be increased. The choice of a sampling
plan depends on the objectives of the research, as well as on the extent and nature of efficiency
desired.
Sampling as related to qualitative studies: sampling for qualitative research is as important as
sampling for quantitative research. It also defines the target population, mostly uses nonprobability
sampling as a technique as well as purposive sampling. The form of purposive sampling that is used is
theoretical sampling, with the grounded theory, the idea that theory will emerge from data through
an iterative process that involves repeated sampling, collection of data and analysis of data until
“theoretical saturation” is reached (when no new information about the subject emerges in repeated
cases).
Managerial implications: awareness of sampling designs and sample size helps managers to
understand why a particular method is used by researchers.
CHAPTER FOURTEEN
Quantitative data analysis
p. 516 - 559
Getting the data ready for analysis: after data are obtained through questionnaires, they need to be
coded, keyed in and edited. A categorization scheme has to be set up before the data can be typed in.
Then, outliers, inconsistencies and blank responses, if any, have to be handled in some way.
o Coding and data entry: data coding involves assigning a number to the participants’
responses so they can be entered into a database. While coding the responses, human errors
can occur. Therefore, at least 10% of the coded questionnaires should be checked for coding
accuracy. After responses have been coded, they can be entered into a database.
o Editing data: after the data are keyed in, they need to be edited, e.g. blank responses and
inconsistent data have to be checked and followed up. An example of an illogical response is
an outlier response (an observation that is substantially different from the other
observations). It is not always an error even though data errors (entry errors) are a likely
source of outliers. Because they have a large impact on the research results, they should be
investigated carefully. You can check the dispersion of nominal and/or ordinal variables by
obtaining minimum and maximum values and frequency tables. Inconsistent responses are
responses that are not in harmony with other information. However, it is possible that the
participant deliberately indicated something that seems contradictory to the other answers
and if it were edited, a bias would be introduced. Illegal codes are values that are not
specified in the coding instructions (e.g. answering a number to a yes / no question).
Omissions may occur because respondents did not understand the question, did not know
the answer, or were not willing to answer the question. If a substantial number of questions
have not been answered, it might be better to exclude that questionnaire from the data analysis.
When there are only a few, the blank responses have to be handled in some way. One option is to
ignore them when the analyses are done. A disadvantage is that this reduces the sample size whenever
that particular variable is involved in the analyses. Another way to solve it would be to look at
the participants’ pattern of responses to other questions and, from these answers, deduce a
logical answer to the question for the missing response. Another alternative solution would
be to assign to the item the mean value of the responses of all those who have responded to
that particular item. If many of the respondents have answered “don’t know”, further
investigations may be worthwhile.
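One of the approaches above, assigning the item mean of those who did respond to a blank response, can be sketched as follows. The item and responses are invented; `None` marks a blank.

```python
# Mean imputation of blank responses on a single questionnaire item;
# the data are invented (None = blank response).
responses = [4, 5, None, 3, 4, None, 5]

answered = [r for r in responses if r is not None]
item_mean = sum(answered) / len(answered)  # mean of those who responded

# Replace each blank with the item mean.
imputed = [item_mean if r is None else r for r in responses]
print(round(item_mean, 1), imputed)
```

As the summary notes, ignoring blanks instead would shrink the sample for every analysis involving this item, which is the trade-off imputation avoids.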
o Data transformation: a variation of data coding, it is the process of changing the original
numerical representation of a quantitative value to another value. Data are typically changed
to avoid problems in the next stage of data analysis. It is necessary when several questions
have been used to measure a single concept. In such cases, the scores for these questions
have to be combined into a single score.
Getting a feel for the data: we can acquire a feel for the data by obtaining a visual summary or by
checking the central tendency and the dispersion of a variable. We can also get to know our data by
examining the relation between two variables. When there is little variability in the data, the
researcher may suspect that the particular question was not properly worded. Biases, if any, may also
be detected if the respondents have tended to respond similarly to all items. If there is no variability in
the data, no variance can be explained.
Frequencies: it refers to the number of times various subcategories of a certain phenomenon occur,
from which the percentage and the cumulative percentage of their occurrence can be easily
calculated.
Measures of central tendency and dispersion: there are three measures of central tendency: the
mean, the median and the mode. Measures of dispersion include the range, the standard deviation,
the variance (where the measure of central tendency is the mean), and the interquartile range (where
the measure of central tendency is the median).
o Measures of central tendency:
The mean: or the average is a measure that offers a general picture of the data
without unnecessarily inundating one with each of the observations in a data set. It
is the average of a set of observations: the sum of the individual observations
divided by the number of observations.
The median: it is the central item in a group of observations when they are arrayed
in either an ascending or a descending order.
The mode: the most frequently occurring phenomenon.
o Measures of dispersion: these give an idea of the variability that exists in a set of
observations. For example, two sets of data might have the same mean, but the dispersions
could be different.
Range: the difference between the extreme values in a set of observations.
Variance: it is calculated by subtracting the mean from each of the observations in
the data set, taking the square of this difference, and dividing the total of these by
the number of observations.
Standard deviation: it offers an index of the spread of a distribution or the variability
in the data. It is simply the square root of the variance. It is one of the most common
descriptive statistics for interval and ratio scaled data, as well as the mean. Together,
they are very useful because of the following statistical rules:
Practically all observations fall within 3 standard deviations of the average
or the mean.
More than 90% of the observations are within 2 standard deviation of the
mean.
More than half of the observations are within 1 standard deviation of the
mean.
Other measures of dispersion: when the median is the measure of central tendency,
percentiles, deciles and quartiles become meaningful. The quartile divides the total
realm of observations into four equal parts, the decile into ten and the percentile
into 100 equal parts. The interquartile range consists of the middle 50% of the
observations (excluding the top and bottom 25%). This is very useful when
comparisons are to be made among several groups.
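All of the measures above can be computed with Python's statistics module. The data set below is invented for illustration.

```python
# The measures of central tendency and dispersion described above,
# computed on an invented set of observations.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)      # central tendency: the average
median = statistics.median(data)  # central item of the ordered data
mode = statistics.mode(data)      # most frequently occurring value

rng = max(data) - min(data)       # range between extreme values
var = statistics.variance(data)   # sample variance
sd = statistics.stdev(data)       # standard deviation = √variance

q = statistics.quantiles(data, n=4)  # quartiles
iqr = q[2] - q[0]                    # interquartile range: middle 50%
print(mean, median, mode, rng, round(sd, 2), iqr)
```

Note that `statistics.variance` uses the sample (n − 1) formula; the summary's description divides by the number of observations, which is the population variance instead.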
Relationships between variables: relationship between any two variables: a bivariate relationship
o Relationship between two nominal variables: χ2 test. Used when you want to know if there is
a relationship between two nominal variables or whether they are independent of each
other. The chi-square test indicates whether or not an observed pattern is due to chance, it is
a nonparametric test that is used when normality of distribution cannot be assumed as in
nominal or ordinal data. The test compares the expected frequencies and the observed frequencies:

χ² = Σ (Oi − Ei)² / Ei

where Oi is the observed frequency of the ith cell and Ei is the expected frequency. The null
hypothesis is set to state that there is no significant relationship between the two variables. The
chi-square statistic, evaluated at the appropriate degrees of freedom (df), denotes whether or not a
significant relationship exists between the two nominal variables.
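The χ² computation can be sketched for a 2×2 table of invented counts (say, two nominal variables such as gender and preference); the expected frequencies come from the row and column totals.

```python
# Manual chi-square computation for a contingency table of invented
# counts of two nominal variables.
observed = [[20, 30],
            [30, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand  # expected frequency Ei
        chi2 += (o - e) ** 2 / e                   # Σ (Oi − Ei)² / Ei

# degrees of freedom: (rows − 1) × (columns − 1)
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(chi2, df)
```

The statistic is then compared against the χ² critical value for the given df (for df = 1 at the 0.05 level, that value is about 3.84) to decide whether to reject the null hypothesis.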
Correlation: A Pearson correlation matrix will indicate the direction, strength and significance
of the bivariate relationships among all the variables that were measured at an interval or
ratio level. The correlation is derived by assessing the variations in one variable as another
variable also varies.
Reliability: the reliability of a measure is established by testing for both consistency and
stability. Consistency indicates how well the items measuring a concept hang together as a
set and Cronbach’s alpha is a reliable coefficient that indicates how well the items in a set are
positively correlated to one another. The closer Cronbach’s alpha is to 1, the higher the
internal consistency reliability. Another measure of consistency reliability is the split-half
reliability coefficient. It reflects the correlations between two halves of a set of items, so the
coefficients obtained will vary depending on how the scale is split.
Validity:
Factorial validity: can be established by submitting the data for factor analysis. The
results of this will confirm whether or not the theorized dimensions emerge.
Criterion-related validity: can be established by testing for the power of the
measure to differentiate individuals who are known to be different.
Convergent validity: can be established when there is a high degree of correlation
between two different sources responding to the same measure (e.g. both
subordinates and supervisors).
Discriminant validity: can be established when two distinctly different concepts are
not correlated to each other.
CHAPTER SIXTEEN
p. 608 - 633
Qualitative data analysis
Data reduction: Qualitative data collection produces large amounts of data; therefore, the first step in
data analysis is the reduction of data through coding and categorization. Coding is the analytic process
through which the qualitative data that you have gathered are reduced, rearranged, and integrated to
form theory. Codes are labels given to units of text which are later grouped and turned into
categories, and coding helps you to draw meaningful conclusions about the data. Coding begins with
selecting the coding unit, e.g. words, sentences, paragraphs and themes. Categorization is the process
of organizing, arranging and classifying coding units. Codes and categories can be developed both
inductively and deductively. Where there is no theory available, you must generate codes and
categories inductively from the data. In its extreme form, this is called grounded theory. However, in
many situations, you will have a preliminary theory on which you can base your codes and categories.
As you begin organizing the data into categories and subcategories, you will begin to notice patterns
and relationships between the data. Quantification of your qualitative data may provide you with a
rough idea about the (relative) importance of the categories and subcategories.
Data display: this is the second major activity that you should go through when analyzing your
qualitative data. Data display involves taking your reduced data and displaying them in an organized,
condensed manner. Along these lines, charts, matrices … may help you to organize the data and to
discover patterns and relationships in the data so that the drawing of conclusions is eventually
facilitated. The selected data display technique may depend on researcher preference, the type of
data set, and the purpose of the display.
Drawing conclusions: this is the “final” analytical activity in the process of qualitative data analysis. It
is at this point where you answer your research questions by determining what identified themes
stand for, by thinking about explanations for observed patterns and relationships, or by making
contrasts and comparisons.
Reliability and validity in qualitative research: it is important that the drawn conclusions are verified
in one way or another. You must make sure that the conclusions that you derive from your qualitative
data are plausible, reliable and valid. Reliability in qualitative data analysis includes category and
interjudge reliability (a degree of consistency between coders processing the same data). Category
reliability depends on the analyst’s ability to formulate categories and present to competent judges
definitions of the categories so they will agree on which items of a certain population belong in a
category and which do not. Category reliability relates to the extent to which judges are able to use
category definitions to classify the qualitative data. However, categories that are defined in a very
broad manner will also lead to higher category reliability. This can lead to the oversimplification of
categories, which reduces the relevance of the research findings. The researcher must find a balance
between category reliability and the relevance of categories. Validity is defined as the extent to which
the research results accurately represent the collected data (internal validity) and can be generalized
or transferred to other contexts or settings (external validity). Two methods that have been developed
to achieve validity are:
o Supporting generalizations by counts of events.
o Ensuring representativeness of cases and the inclusion of deviant cases.
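The interjudge reliability described above can be quantified; Cohen's kappa is one common agreement statistic for two coders (an assumption here — the chapter describes the idea but does not prescribe a statistic). The codings below are invented.

```python
# Cohen's kappa for agreement between two coders, as one possible way
# to quantify interjudge reliability; the codings are invented.
from collections import Counter

# Codes assigned by two judges to the same ten text units.
judge1 = ["a", "a", "b", "b", "a", "c", "b", "a", "c", "b"]
judge2 = ["a", "a", "b", "a", "a", "c", "b", "a", "b", "b"]

n = len(judge1)
po = sum(x == y for x, y in zip(judge1, judge2)) / n  # observed agreement

# Agreement expected by chance, from each judge's marginal proportions.
c1, c2 = Counter(judge1), Counter(judge2)
pe = sum((c1[k] / n) * (c2[k] / n) for k in c1)

kappa = (po - pe) / (1 - pe)  # chance-corrected agreement
print(round(kappa, 2))
```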
Other methods of gathering and analyzing qualitative data
o Content analysis: an observational research method that is used to systematically evaluate
the symbolic contents of all forms of recorded communications. It can be used to analyze
newspapers, websites, advertisements, etc. The method of content analysis enables the
researcher to analyze textual information and systematically identify its properties, such as
the presence of certain words, concepts, characters, themes or sentences. To conduct a
content analysis on a text, the text is coded into categories and then analyzed using
conceptual analysis or relational analysis. Conceptual analysis establishes the existence and
frequency of concepts in a text. It analyzes and interprets text by coding the text into
manageable content categories. Relational analysis builds on conceptual analysis by
examining the relationships among concepts in a text. The results of conceptual or relational
analysis are used to make inferences about the messages within the text, the effects of
environmental variables on message content, the effects of messages on the receiver, and so
on.
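Conceptual analysis, as described above, boils down to coding a text into categories and counting concept frequencies. The text and coding scheme below are invented for illustration.

```python
# A minimal sketch of conceptual analysis: counting how often each
# coded category occurs in a text. Text and categories are invented.
from collections import Counter

text = ("The new product delighted customers. Customers praised the product's "
        "design, although some customers reported delivery problems.")

# Coding scheme: each category is defined by the words that indicate it.
categories = {
    "customer": {"customers", "customer"},
    "product": {"product", "product's", "design"},
    "problem": {"problems", "problem"},
}

words = [w.strip(".,").lower() for w in text.split()]
counts = Counter()
for word in words:
    for category, indicators in categories.items():
        if word in indicators:
            counts[category] += 1

print(dict(counts))
```

Relational analysis would go a step further and examine which of these concepts co-occur, rather than only how often each appears.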
o Narrative analysis: a narrative is a story or “account involving the narration of a series of
events in a plotted sequence which unfolds in time”. Narrative analysis is an approach that aims to elicit
and scrutinize the stories we tell about ourselves and their implications for our lives.
Narrative data are often collected via interviews that are designed to encourage the
participant to describe a certain incident in the context of his or her life history.
o Analytic induction: it is an approach to qualitative data analysis in which universal
explanations of phenomena are sought by the collection of qualitative data until no cases that
are inconsistent with a hypothetical explanation of a phenomenon are found. It starts with a
definition of a problem, continues with a hypothetical explanation of the problem and then
proceeds with the examination of cases. If a case is inconsistent with the hypothesis, the
researcher either redefines the hypothesis or excludes the ‘deviant’ case that does not
confirm the hypothesis.
CHAPTER SEVENTEEN
The research report
p. 633 – 665
The written report: it starts with a description of the management problem and the research
objective. This allows the reader to become familiar with “the why” of the research project. It should
allow the reader to weigh the facts and arguments presented therein, to examine the results of the
study, to reflect on the conclusions and recommendations, and eventually to implement the
acceptable recommendations presented in the report, with a view to closing the gap between the
existing state of affairs and the desired state.
o The purpose of the written report: research reports can have different purposes and hence
the form of the written report will vary according to the situation. It is important to identify
the purpose of the report, so that it can be tailored accordingly. If the purpose is simply to
offer details, the report can be very narrowly focused and provide the desired information in
a brief format.
o The audience for the written report: the organization of a report, its length, and focus on
details, data presentation, and illustrations will, in part, be a function of the audience for
whom it is intended. Sometimes, the findings of a study may be unpalatable to the executive
or may reflect poorly on management, tending to make them react defensively. In such cases,
tact should be exercised in presenting the conclusions without compromising on the actual
findings.
o Characteristics of a well-written report: despite the fact that report writing is a function of
the purpose of the study and the type of audience to which it is presented, and accordingly
has to be tailored to meet both, certain basic features are integral to all written reports.
Clarity, conciseness, coherence, the right emphasis on important aspects, meaningful
organization of paragraphs, smooth transition from one topic to the next, apt choice of
words, and specificity are all important features of a good report. The report should, to the
extent possible, be free of technical or statistical jargon unless it happens to be of a technical
or statistical nature.
o Content of the research report: a research report has a title page, an executive summary or
an abstract, a preface, a table of contents, and sometimes a copy of the authorization to
conduct the study. All reports should have an introductory section detailing the purpose of
the study, giving some background of what it relates to, and stating the problem studied,
setting the stage for what the reader should expect in the rest of the report. The body of the
report should contain details regarding the framework of the study, hypotheses, if any,
sampling design, data collection methods, analysis of data and the results obtained. The final
part of the report should present the findings and draw conclusions. If recommendations
have been called for, they will be included. Every report should also point out the limitations
of the study followed by a list of references cited in the report. Appendices, if any, should be
attached to the report.
Integral parts of the report
o The title and title page: the title permits potential readers to obtain a first idea of your study
and to decide whether they want to read your report in its entirety. A good title also grabs
attention and entices people to read the research report in its entirety. The title page will also
indicate further relevant information.
o The executive summary or abstract: it is placed on the page immediately following the title
page. It is a brief account of the entire research study; it provides an overview and highlights
the following important information related to it: the problem statement, sampling design,
data collection methods used, the results of data analysis, conclusions, and recommendations,
with suggestions for their implementation. The executive summary will be used by the
readers to get an initial idea about (the results) of your study. It will be brief and is usually
restricted to one page or less.
o Table of contents: the logic of the structure of your research report becomes evident not
only through an apt choice of words in the titles of the sections, but also as a result of the
arrangement of the different components of the work. The table of contents will serve as a
guide through your research report; it usually lists the important headings and subheadings in
the report with page references.
o List of tables, figures, and other materials: if the research report contains charts, figures,
maps, etc., each series of these should be listed separately in an appropriate list on the page
or pages immediately following the table of contents. Each such list should appear on a
separate page and should follow the general style of format of the table of contents.
o Preface: this is used primarily to mention matters of background necessary for an
understanding of the subject that do not logically fit into the text. The reason why the report
has been written, the reason for the selection of the subject, difficulties encountered along
the way, etc. can also be mentioned here. It is customary to include a brief expression of the
author’s appreciation of help and guidance received in the research. The preface is not the
same as an introduction!
o The authorization letter: a copy of the letter of authorization from the sponsor of the study
approving the investigation and detailing its scope is sometimes attached at the beginning of
the research report.
o The introductory section: the layout of the first chapter is more or less standard. It contains,
in the following order:
introduction (short, providing background information on why and how the study
was initiated),
reason for the research (problem indication) and the purpose of the research,
problem statement and research questions,
the scope of the study,
research method (approach),
managerial relevance,
structure and division of chapters in the research report.
o The body of the report: the central part usually has two large sections: a theoretical part and
an empirical part.
The theoretical part: it contains an in-depth exploration and a clear explication of
the relevant literature. It documents the relevant findings from earlier research and
should be selective, goal oriented, thorough and critical. It can be concluded with a
number of hypotheses.
The empirical part: it describes the design details, such as sampling and data
collection methods, as well as the nature and type of the study, the time horizon, the
field setting and the unit of analysis. The information provided here should enable
the reader to replicate the research. It should contain the components
“participants”, “material”, and “method”, depending on the kind of study. All relevant
results should be reported, including those that contradict your hypotheses.
o The final part of the report: the aim of this part is to interpret the results of the research with
regard to the research questions. The discussion should stand on its own and should form a
whole with a beginning and an ending. The discussion should include:
The main findings of the research
The (managerial) implication of these findings
Recommendations for implementation and a cost-benefit analysis of these
recommendations
The limitations of your study and suggestions for future research.
o References: a list of the references cited in the literature review and in other places in the
report will be given immediately after the final part of the report. Footnotes, if any, are
referenced either separately at the end of the report, or at the bottom of the page where the
footnote occurs.
o Appendix: this is the appropriate place for the materials that substantiate the text or the
report, or other things that might help the reader follow the text. It should also contain a
copy of the questionnaire administered to the respondents.
Oral presentation: usually, an oral presentation of the research project is required, followed by a
question-and-answer session. The challenge is to present the important aspects so as to hold the
interest of the audience. Speaking audibly, clearly, without distracting mannerisms, and at the right
speed for the audience to comprehend is vital for holding their attention.
o Deciding on the content: because a lot of material has to be covered, it becomes necessary
to decide on the points to focus on and the importance to be given to each.
o Visual aids: graphs, charts and tables help to drive home the points one wishes to make much
faster and more effectively. They provide a captivating sensory stimulus that sustains the
attention of the audience.
o The presenter: an effective presentation is also a function of how “unstressed” the presenter
is. The speaker should establish eye contact with the audience, speak audibly and
understandably and be sensitive to the nonverbal reactions of the audience. One should also
not minimize the importance of the impression created on the audience by dress, posture,
bearing and the confidence with which one carries oneself.
o The presentation: the opening remarks set the stage for riveting the attention of the
audience. Certain aspects such as the problem investigated, the findings, the conclusions
drawn, the recommendations made, and their implementation are important aspects of
the presentation that should be driven home at least three times: once in the beginning,
again when each of the areas is covered, and finally, while summarizing and bringing the
presentation to a conclusion.
o Handling questions: the presenter should be knowledgeable and be open to suggestions or
ideas. The question-and-answer session should leave the audience with a sense of
involvement and satisfaction. Questioning should be encouraged and responded to with care.
Glossary: from page 692 onwards