PROCESSING OF DATA
Data collected in research is processed and analyzed in order to draw conclusions or to verify the hypothesis that was made.
Processing of data is important because it makes further analysis of the data easier and more efficient. Processing of data technically involves:
1. Editing of the data
2. Coding of data
3. Classification of data
4. Tabulation of data.
EDITING:
Data editing is the process by which collected data are examined to detect errors or omissions, which are then corrected as far as possible before further processing.
Editing is of two types:
1. Field Editing
2. Central Editing.
FIELD EDITING:
This type of editing deals with abbreviated or illegibly written entries in the gathered data. Such editing is most effective when done on the same day as the interview or the very next day. The investigator must not jump to conclusions while doing field editing.
CENTRAL EDITING:
This type of editing takes place after the entire data collection process has been completed. Here a single, common editor corrects errors such as an entry in the wrong place or in the wrong unit, etc. As a rule, all wrong answers should be dropped from the final results.
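A minimal sketch of central editing in Python, using hypothetical survey records (the fields, values and correction rules are illustrative assumptions, not part of these notes): obvious unit or placement errors are corrected where possible, and answers that cannot be fixed are dropped, as described above.

    # Hypothetical records: age in years, weight in kilograms.
    records = [
        {"id": 1, "age": 34,  "weight_kg": 72},
        {"id": 2, "age": 430, "weight_kg": 68},    # age likely entered with a misplaced digit
        {"id": 3, "age": 51,  "weight_kg": None},  # omission
    ]

    def central_edit(records):
        """Correct obvious errors where possible; drop answers that cannot be fixed."""
        cleaned = []
        for r in records:
            rec = dict(r)
            # Illustrative correction: an age like 430 is treated as a misplaced digit (43).
            if rec["age"] is not None and rec["age"] > 120:
                rec["age"] = rec["age"] // 10
            # Drop records whose answers remain missing or implausible.
            if rec["weight_kg"] is None or not (0 < rec["age"] <= 120):
                continue
            cleaned.append(rec)
        return cleaned

    print(central_edit(records))  # record 3 is dropped; record 2 is corrected to age 43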
EDITING REQUIRES SOME CAREFUL CONSIDERATIONS:
1. The editor must be familiar with the interviewer's mindset, the objectives of the study and everything related to it.
2. Editors should make their entries in the collected data in a distinctive colour.
3. They should initial all answers or changes they make to the data.
4. The editor's name and the date of editing should be placed on the data sheet.
CODING:
Responses may be classified on the basis of one or more common concepts.
In coding, a particular numeral or symbol is assigned to each answer in order to place the responses into definite categories or classes.
The classes of responses determined by the researcher should be appropriate and suitable to the study.
Coding enables efficient and effective analysis, as the responses are categorized into meaningful classes.
Coding decisions are usually taken while developing or designing the questionnaire or any other data collection tool.
Coding can be done manually or by computer.
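A minimal sketch of coding, assuming a hypothetical questionnaire item on marital status; the code book and the numerals assigned below are illustrative only.

    # Hypothetical code book: each response category is assigned a numeral.
    code_book = {"single": 1, "married": 2, "divorced": 3, "widowed": 4}

    responses = ["Married", "single", "widowed", "married"]

    # Normalize each answer and replace it with its code.
    coded = [code_book[r.strip().lower()] for r in responses]
    print(coded)  # [2, 1, 4, 2]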
CLASSIFICATION:
Classification of data implies that the collected raw data are arranged into groups on the basis of common features.
Data having common characteristics are placed in the same group.
The entire body of collected data is thus categorized into various groups or classes that convey meaning to the researcher.
Classification is done in two ways:
1. Classification according to attributes.
2. Classification according to the class intervals.
CLASSIFICATION ACCORDING TO THE ATTRIBUTES:
Here the data are classified on the basis of common characteristics, which can be descriptive, like literacy, gender, honesty, marital status, etc., or numerical, like weight, height, income, etc.
Descriptive features are qualitative in nature and cannot be measured quantitatively, but they are nevertheless considered while making an analysis.
The analysis used for such classified data is known as statistics of attributes, and the classification is known as classification according to attributes.
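A minimal sketch of classification according to an attribute, assuming hypothetical respondent records with the descriptive attribute gender; the records and attribute chosen are illustrative.

    from collections import defaultdict

    # Hypothetical respondent records with qualitative attributes.
    respondents = [
        {"name": "A", "gender": "female", "literate": True},
        {"name": "B", "gender": "male",   "literate": False},
        {"name": "C", "gender": "female", "literate": True},
    ]

    # Place data having a common characteristic into a common group.
    groups = defaultdict(list)
    for r in respondents:
        groups[r["gender"]].append(r["name"])

    print(dict(groups))  # {'female': ['A', 'C'], 'male': ['B']}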
CLASSIFICATION ON THE BASIS OF CLASS INTERVALS:
Numerical features of data can be measured quantitatively and analyzed with the help of some statistical unit; data relating to income, production, age, weight, etc. come under this category. Such data are known as statistics of variables and are classified on the basis of class intervals.
CLASSIFICATION ACCORDING TO THE CLASS
INTERVAL USUALLY INVOLVES THE FOLLOWING
THREE MAIN PROBLEMS:
1. Number of Classes.
2. How to select class limits.
3. How to determine the frequency of each class.
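A minimal sketch that touches all three problems, assuming hypothetical income figures; Sturges' rule is used here only as one common rule of thumb for choosing the number of classes, not as the method prescribed by these notes.

    import math

    # Hypothetical income figures.
    incomes = [120, 340, 560, 210, 480, 610, 150, 330, 720, 290, 410, 505]

    n = len(incomes)
    num_classes = 1 + math.ceil(math.log2(n))      # problem 1: number of classes (Sturges' rule)
    low, high = min(incomes), max(incomes)
    width = math.ceil((high - low) / num_classes)  # problem 2: class limits via equal widths

    frequencies = {}
    for k in range(num_classes):
        lower = low + k * width
        upper = lower + width
        label = f"{lower}-{upper}"
        # problem 3: frequency of each class (upper limit exclusive, except for the last class)
        frequencies[label] = sum(
            lower <= x < upper or (k == num_classes - 1 and x == upper)
            for x in incomes
        )

    print(frequencies)
    # {'120-240': 3, '240-360': 3, '360-480': 1, '480-600': 3, '600-720': 2}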
TABULATION:
The mass of data collected has to be arranged in some kind of
concise and logical order.
Tabulation summarizes the raw data and displays it in the form of statistical tables.
Tabulation is an orderly arrangement of data in rows and
columns.
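A minimal sketch of tabulation, laying out the hypothetical income frequencies from the classification sketch above in labelled rows and columns; the percentage figures are simply those frequencies expressed over the 12 hypothetical observations.

    # Each tuple is one row; the first row holds the column headings.
    table = [
        ("Income class", "Frequency", "Percentage"),
        ("120-240",      3,           "25.0%"),
        ("240-360",      3,           "25.0%"),
        ("360-480",      1,           "8.3%"),
        ("480-600",      3,           "25.0%"),
        ("600-720",      2,           "16.7%"),
    ]

    # Print the rows with aligned, adequately spaced columns.
    for row in table:
        print("{:<14} {:>10} {:>12}".format(*row))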
OBJECTIVES OF TABULATION:
1. Conserves space & minimizes explanatory and descriptive statements.
2. Facilitates the process of comparison and summarization.
3. Facilitates the detection of errors and omissions.
4. Establishes the basis for various statistical computations.
BASIC PRINCIPLES OF TABULATION:
1. Tables should be clear, concise & adequately titled.
2. Every table should be distinctly numbered for easy
reference.
3. Column headings & row headings of the table should be
clear & brief.
4. Units of measurement should be specified at appropriate
places.
5. Explanatory footnotes concerning the table should be
placed at appropriate places.
6. Source of information of data should be clearly indicated.
7. The columns & rows should be clearly separated with dark lines.
8. Demarcation should also be made between data of one
class and that of another.
9. Comparable data should be put side by side.
10. The figures in percentage should be approximated before
tabulation.
11. The figures, symbols, etc. should be properly aligned and adequately spaced to enhance readability.
12. Abbreviations should be avoided.