
PROCESSING AND

INTERPRETATION OF DATA
PRASHANTA SHARMA
PROFESSOR
DEPARTMENT OF COMMERCE
GAUHATI UNIVERSITY
PROCESSING OF DATA
The data collected in research is processed and analyzed to draw conclusions or to verify the hypothesis made.
Processing of data is important as it makes further analysis of the data easier and more efficient. Processing of data technically means:
1. Editing of the data
2. Coding of the data
3. Classification of the data
4. Tabulation of the data.
EDITING:

Data editing is the process by which collected data is examined to detect errors or omissions, which are then corrected as far as possible before proceeding further.

Editing is of two types:
1. Field Editing
2. Central Editing.
FIELD EDITING:

This type of editing deals with abbreviated or illegibly written forms of the gathered data. Such editing is most effective when done on the same day as the interview or the very next day. The investigator must not jump to conclusions while doing field editing.
CENTRAL EDITING:
This type of editing takes place after the entire data collection process has been completed. Here a single or common editor corrects errors such as an entry in the wrong place or an entry in the wrong unit, etc. As a rule, all wrong answers should be dropped.
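As an illustration, a central-editing check can be sketched as a scan for values outside plausible ranges. The records, field names, and valid ranges below are made-up assumptions, not from the source.

```python
# Minimal sketch of a central-editing check on made-up survey records.
records = [
    {"id": 1, "age": 34, "income": 42000},
    {"id": 2, "age": 340, "income": 42000},   # age likely entered in the wrong place
    {"id": 3, "age": 28, "income": -500},     # negative income is an obvious error
]

# Assumed plausible ranges for each numeric field.
valid_ranges = {"age": (0, 120), "income": (0, 10_000_000)}

def flag_errors(record, ranges):
    """Return the list of fields whose values fall outside the valid range."""
    return [field for field, (lo, hi) in ranges.items()
            if not lo <= record[field] <= hi]

for rec in records:
    errors = flag_errors(rec, valid_ranges)
    if errors:
        print(f"record {rec['id']}: suspect fields {errors}")
```

In practice the flagged records would be corrected where possible and dropped otherwise, as the slide suggests.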
EDITING REQUIRES SOME CAREFUL CONSIDERATIONS:

 The editor must be familiar with the interviewer's mind set, objectives and everything related to the study.
 Different colors should be used when editors make entries in the collected data.
 Editors should initial all answers or changes they make to the data.
 The editor's name and the date of editing should be placed on the data sheet.
CODING:
Classification of responses may be done on the basis of one or more common concepts.

In coding, a particular numeral or symbol is assigned to the answers in order to put the responses into definite categories or classes.
CODING
The classes of responses determined by the researcher should be appropriate and suitable to the study.

Coding enables efficient and effective analysis, as the responses are categorized into meaningful classes.

Coding decisions are considered while developing or designing the questionnaire or any other data collection tool.

Coding can be done manually or through a computer.
CLASSIFICATION:
 Classification of the data implies that the collected raw data is categorized into common groups having common features.
 Data having common characteristics are placed in a common group.
 The entire data collected is categorized into various groups or classes, which convey a meaning to the researcher.
Classification is done in two ways:

1. Classification according to attributes.
2. Classification according to class intervals.
CLASSIFICATION ACCORDING TO THE ATTRIBUTES:

Here the data is classified on the basis of common characteristics, which can be descriptive, like literacy, sex, honesty, marital status, etc., or numerical, like weight, height, income, etc.
Descriptive features are qualitative in nature and cannot be measured quantitatively, but they are still considered while making an analysis.
The analysis used for such classified data is known as statistics of attributes, and the classification is known as classification according to attributes.
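Classification by a descriptive attribute amounts to grouping records that share a qualitative characteristic. A minimal sketch, with made-up records grouped by marital status:

```python
from collections import defaultdict

# Sketch of classification according to attributes: records sharing a
# qualitative characteristic are placed in a common group. Data are made up.
people = [
    {"name": "A", "marital_status": "married"},
    {"name": "B", "marital_status": "single"},
    {"name": "C", "marital_status": "married"},
]

groups = defaultdict(list)
for person in people:
    groups[person["marital_status"]].append(person["name"])

print(dict(groups))  # {'married': ['A', 'C'], 'single': ['B']}
```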
CLASSIFICATION ON THE BASIS OF THE INTERVAL:
The numerical features of data can be measured quantitatively and analyzed with the help of some statistical unit; data relating to income, production, age, weight, etc. come under this category. This type of data is known as statistics of variables, and the data is classified by way of intervals.
CLASSIFICATION ACCORDING TO THE CLASS INTERVAL USUALLY INVOLVES THE FOLLOWING THREE MAIN PROBLEMS:
1. How to determine the number of classes.
2. How to select class limits.
3. How to determine the frequency of each class.
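The three problems above can be sketched on made-up figures. Sturges' rule is used here as one common (assumed) way to choose the number of classes; the data values are illustrative.

```python
import math

# 1. number of classes, 2. class limits, 3. frequency of each class,
# computed for made-up data using Sturges' rule and equal-width classes.
data = [12, 15, 21, 22, 25, 27, 30, 31, 35, 38, 41, 44]

k = math.ceil(1 + math.log2(len(data)))   # 1. number of classes (Sturges' rule)
lo, hi = min(data), max(data)
width = math.ceil((hi - lo) / k)          # 2. equal class width from the range

frequencies = {}
for i in range(k):
    lower = lo + i * width
    upper = lower + width
    # 3. frequency: count of values falling in [lower, upper)
    frequencies[(lower, upper)] = sum(lower <= x < upper for x in data)

print(frequencies)
```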
TABULATION:
The mass of data collected has to be arranged in some kind of concise and logical order.

Tabulation summarizes the raw data and displays it in the form of statistical tables.

Tabulation is an orderly arrangement of data in rows and columns.
OBJECTIVES OF TABULATION:
1. Conserves space & minimizes explanatory and descriptive statements.
2. Facilitates the process of comparison and summarization.
3. Facilitates detection of errors and omissions.
4. Establishes the basis of various statistical computations.
BASIC PRINCIPLES OF TABULATION:

1. Tables should be clear, concise & adequately titled.
2. Every table should be distinctly numbered for easy reference.
3. Column headings & row headings of the table should be clear & brief.
4. Units of measurement should be specified at appropriate places.
5. Explanatory footnotes concerning the table should be placed at appropriate places.
6. The source of the data should be clearly indicated.
7. The columns & rows should be clearly separated with dark lines.
8. Demarcation should also be made between the data of one class and that of another.
9. Comparable data should be put side by side.
10. Figures in percentages should be approximated before tabulation.
11. Figures, symbols, etc. should be properly aligned and adequately spaced to enhance readability.
12. Abbreviations should be avoided.

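Several of these principles (a numbered title, brief headings with units, aligned figures, a source note) can be illustrated in a short sketch. The crop figures and the source line are made up.

```python
# Sketch of a small table following the tabulation principles above:
# numbered title, headings with units, right-aligned figures, source note.
rows = [("Rice", 120.5), ("Wheat", 98.2), ("Maize", 45.0)]

lines = [
    "Table 1: Production of Selected Crops",
    f"{'Crop':<10}{'Production (tonnes)':>20}",
]
for crop, qty in rows:
    lines.append(f"{crop:<10}{qty:>20.1f}")   # align figures for readability
lines.append("Source: hypothetical survey data")

print("\n".join(lines))
```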

INTERPRETATION:

1. Interpretation establishes the relationship among the collected data and the analysis.
2. Interpretation looks beyond the data of the research.
3. It draws on earlier research, theory and hypotheses.
4. Interpretation in a way acts as a tool to explain the observations of the researcher during the research period, and it acts as a guide for future research.
WHY INTERPRETATION?
1. The researcher understands the abstract principle underlying the findings.
2. Interpretation links up the findings with those of other similar studies.
3. The researcher is able to make others understand the real importance of his research findings.
PRECAUTIONS IN INTERPRETATION:

1. The researcher must ensure that the data is appropriate, trustworthy and adequate for drawing inferences.
2. The researcher must be cautious about errors and take due necessary action if an error arises.
3. The researcher must ensure the correctness of the data analysis process, whether the data is qualitative or quantitative.
4. The researcher must also ensure that there is constant interaction between the initial hypothesis, empirical observations, and theoretical concepts.
A few examples of statistical tools:
 Ratio Analysis
 Trend Analysis
 Central Tendency
 Dispersion
 Index
 Correlation Coefficient
 Regression Equation
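A few of these tools can be sketched on made-up data: central tendency (the mean), dispersion (standard deviation), Pearson's correlation coefficient, and a simple regression equation y = a + bx.

```python
import math

# Central tendency, dispersion, correlation and regression on made-up data.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n           # central tendency

# population standard deviation as a measure of dispersion
sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x) / n)
sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y) / n)

# Pearson's correlation coefficient r = cov(x, y) / (sd_x * sd_y)
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
r = cov / (sd_x * sd_y)

# least-squares regression of y on x: y = a + b*x
b = cov / sd_x ** 2
a = mean_y - b * mean_x
print(f"r = {r:.3f}, regression: y = {a:.2f} + {b:.2f}x")
```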
TESTING THE HYPOTHESIS
Several factors are considered in determining the appropriate statistical technique to use when conducting a hypothesis test. The most important are:
1. The type of data being measured.
2. The purpose or the objective of the statistical inference.

Hypotheses can be tested by various techniques. The hypothesis testing techniques are divided into two broad categories:
1. Parametric Tests.
2. Non-Parametric Tests.
A Few Examples of Hypothesis Tests

1. t-test
2. z-test
3. F-test
4. χ²-test (chi-square test)
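As a minimal illustration, the statistic for a one-sample t-test can be computed by hand; the sample values and the hypothesized mean below are made-up assumptions.

```python
import math

# Sketch of a one-sample t-test statistic on made-up data:
# t = (sample mean - hypothesized mean) / (s / sqrt(n)).
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
mu0 = 12.0                      # hypothesized population mean (assumed)

n = len(sample)
mean = sum(sample) / n
# sample standard deviation (n - 1 in the denominator)
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

t = (mean - mu0) / (s / math.sqrt(n))
print(f"t = {t:.3f} with {n - 1} degrees of freedom")
```

The computed t would then be compared against the t-distribution's critical value for the chosen significance level.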
