
IT6006 – DATA ANALYTICS

QUESTION BANK

What are the uses of statistics in data analytics?


Statistics is used to
• estimate the complexity of a data mining problem;
• suggest which data mining techniques are most likely to be successful; and
• identify data fields that contain the most “surface information”.

What are the factors to be considered while selecting the sample in statistics?
The sample should be
• Large enough to be representative of the population.

• Small enough to be manageable.
• Accessible to the sampler.
• Free of bias.

Name some advanced database systems.


Object-oriented databases, Object-relational databases.

Name some specific application oriented databases.


• Spatial databases
• Time-series databases
• Text databases and multimedia databases

Define Relational databases.


A relational database is a collection of tables, each of which is assigned a unique name.
Each table consists of a set of attributes (columns or fields) and usually stores a large set
of tuples (records or rows). Each tuple in a relational table represents an object identified
by a unique key and described by a set of attribute values.

Define Transactional Databases.


A transactional database consists of a file where each record represents a transaction. A transaction typically includes a unique transaction identity number (trans_ID) and a list of the items making up the transaction.

Define Spatial Databases.


Spatial databases contain spatial-related information. Such databases include geographic
(map) databases, VLSI chip design databases, and medical and satellite image databases.
Spatial data may be represented in raster format, consisting of n-dimensional bit maps or
pixel maps.



What is Temporal Database?
A temporal database stores time-related data. It usually stores relational data that include time-related attributes. These attributes may involve several timestamps, each having different semantics.

What are Time-Series databases?


A time-series database stores sequences of values that change with time, such as data collected regarding the stock exchange.

Why machine learning is done?


1. To understand and improve the efficiency of human learning.
2. To discover new things or structure that is unknown to human beings.
3. To fill in skeletal or incomplete specifications about a domain.

Give the components of a learning system.
1. Critic
2. Sensors
3. Learning Element
4. Performance Element
5. Effectors
6. Problem Generators

What are the steps in the data mining process?


Data cleaning
Data integration
Data selection
Data transformation
Data mining
Pattern evaluation
Knowledge representation

Define data cleaning.


Data cleaning means removing inconsistent data and noise and collecting the necessary information.

Define data mining.


Data mining is a process of extracting or mining knowledge from huge amounts of data.

Define pattern evaluation.


Pattern evaluation is used to identify the truly interesting patterns representing knowledge based on interestingness measures.



Define knowledge representation.
Knowledge representation techniques are used to present the mined knowledge to the
user.

What is Visualization?
Visualization is the depiction of data, used to gain intuition about the data being observed. It assists analysts in selecting display formats, viewer perspectives and data representation schemas.

Define Spatial Visualization.


Spatial visualization depicts actual members of the population in their feature space.

What is Descriptive and predictive data mining?
Descriptive data mining describes the data set in a concise and summarative manner and presents interesting general properties of the data. Predictive data mining analyzes the data in order to construct one or a set of models and attempts to predict the behavior of new data sets.

What is Data Generalization?


It is a process that abstracts a large set of task-relevant data in a database from relatively low conceptual levels to higher conceptual levels. There are two approaches to generalization:
a. Data cube approach
b. Attribute-oriented induction approach

Define Attribute Oriented Induction.
This method collects the task-relevant data using a relational database query and then performs generalization based on an examination of the relevant set of data.

What is bootstrap?
An interpretation of the jackknife is that the construction of pseudo-values is based on repeatedly and systematically sampling without replacement from the data at hand. Generalizing this to repeated sampling with replacement leads to the bootstrap.
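As a rough illustration (not part of the original answer), the bootstrap idea can be sketched in a few lines of Python; the sample values and the choice of the mean as the statistic are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
data = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4])  # hypothetical sample

def bootstrap_std_error(sample, statistic=np.mean, n_resamples=1000):
    # Resample with replacement many times and measure how the statistic varies
    estimates = [statistic(rng.choice(sample, size=len(sample), replace=True))
                 for _ in range(n_resamples)]
    return np.std(estimates)

print(bootstrap_std_error(data))  # estimated standard error of the mean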

Explain the view of statistical approach.


The statistical approach is interested in interpreting the model. It may sacrifice some performance to be able to extract meaning from the model structure. If accuracy is acceptable, then the fact that a model can be decomposed into revealing parts is often more useful than a 'black box' system, especially during the early stages of the investigation and design cycle.



Define Deterministic models.
Deterministic models take no account of random variables; they give precise, fixed, reproducible output.

Define Systems and Models.


A system is a collection of interrelated objects, and a model is a description of a system. Models are abstract and conceptually simple.

How do you choose the best model?


All things being equal, the smallest model that explains the observations and fits the objectives should be accepted. In practice, 'smallest' means the model that optimizes a certain scoring function (e.g. fewest nodes, most robust, fewest assumptions).

What is clustering?
Clustering is the process of grouping the data into classes or clusters so that objects
within a cluster have high similarity in comparison to one another, but are very dissimilar
to objects in other clusters.
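For intuition, a minimal clustering sketch in Python (assuming scikit-learn is available; the points and the choice of k = 2 are arbitrary):

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D points forming two loose groups
X = np.array([[1, 2], [1, 4], [2, 3], [8, 8], [9, 10], [8, 9]])

# Objects in the same cluster are similar to one another, dissimilar to the rest
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1]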
What are the requirements of clustering?
• Scalability
• Ability to deal with different types of attributes
• Ability to deal with noisy data
• Minimal requirements for domain knowledge to determine input parameters
• Constraint-based clustering
• Interpretability and usability

State the categories of clustering methods.


Partitioning methods
Hierarchical methods
Density-based methods
Grid-based methods
Model-based methods

What is linear regression?


In linear regression, data are modeled using a straight line. Linear regression is the simplest form of regression. Bivariate linear regression models a random variable Y, called the response variable, as a linear function of another random variable X, called the predictor variable:
Y = a + bX
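A minimal least-squares sketch of Y = a + bX in Python (the X and Y values are made up for illustration):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # predictor variable
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # response variable

# np.polyfit with degree 1 returns the slope b and intercept a of the best-fit line
b, a = np.polyfit(X, Y, deg=1)
print(f"Y = {a:.2f} + {b:.2f} * X")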



State the types of linear model and state its use.
Generalized linear models represent the theoretical foundation on which linear regression can be applied to the modeling of categorical response variables. The types of generalized linear model are:
Logistic regression
Poisson regression

Write the preprocessing steps that may be applied to the data for classification and
prediction.
a. Data Cleaning
b. Relevance Analysis
c. Data Transformation

Define Data Classification.
It is a two-step process. In the first step, a model is built describing a pre-determined set of data classes or concepts. The model is constructed by analyzing database tuples described by attributes. In the second step, the model is used for classification.

What is a “decision tree”?


It is a flow-chart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and leaf nodes represent classes or class distributions. A decision tree is a predictive model. Each branch of the tree is a classification question, and the leaves of the tree are partitions of the dataset with their classification.
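As a small sketch (assuming scikit-learn is available), a decision tree can be fitted and printed so that each internal node shows its attribute test and each leaf its class:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each internal node tests one attribute; branches are test outcomes; leaves are classes
print(export_text(tree, feature_names=list(iris.feature_names)))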
Where are decision trees mainly used?
• Exploration of datasets and business problems
• Data preprocessing for other predictive analysis
• Statisticians use decision trees for exploratory analysis

What is Association rule?


Association rule mining finds interesting association or correlation relationships among a large set of data items, which can be used for decision-making processes. Association rules analyze buying patterns of items that are frequently associated or purchased together.

Define support.

Support is the ratio of the number of transactions that include all items in the antecedent
and consequent parts of the rule to the total number of transactions. Support is an
association rule interestingness measure.



Define Confidence.
Confidence is the ratio of the number of transactions that include all items in the
consequent as well as antecedent to the number of transactions that include all items in
antecedent. Confidence is an association rule interestingness measure.
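A minimal sketch of both measures on hypothetical market-basket data (the transactions and the rule bread => milk are made up for illustration):

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    # Fraction of transactions that contain every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # support(antecedent and consequent together) / support(antecedent)
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))       # 0.5
print(confidence({"bread"}, {"milk"}))  # ~0.67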

How is association rules mined from large databases?


Association rule mining is a two-step process.
Find all frequent itemsets.
Generate strong association rules from the frequent itemsets.

What is the classification of association rules based on various criteria?


1. Based on the types of values handled in the rule:
   Boolean association rule
   Quantitative association rule
2. Based on the dimensions of data involved in the rule:
   Single-dimensional association rule
   Multi-dimensional association rule
3. Based on the levels of abstraction involved in the rule:
   Single-level association rule
   Multi-level association rule
4. Based on various extensions to association mining:
   Maxpatterns
   Frequent closed itemsets
What are the advantages of Dimensional modeling?
• Ease of use
• High performance
• Predictable, standard framework
• Understandable
• Extensible to accommodate unexpected new data elements and new design decisions

Define Dimensional Modeling.


Dimensional modeling is a logical design technique that seeks to present the data in a standard framework that is intuitive and allows for high-performance access. It is inherently dimensional and adheres to a discipline that uses the relational model with some important restrictions.

What does a dimensional model comprise?

A dimensional model is composed of one table with a multipart key, called the fact table, and a set of smaller tables called dimension tables. Each dimension table has a single-part primary key that corresponds exactly to one of the components of the multipart key in the fact table.
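A rough sketch of a fact table joined to one dimension table, using pandas (the tables and column names are hypothetical):

import pandas as pd

# Dimension table: single-part primary key product_id
product_dim = pd.DataFrame({
    "product_id": [1, 2],
    "product_name": ["Pen", "Notebook"],
})

# Fact table: multipart key (date_id, product_id) plus a numeric fact
sales_fact = pd.DataFrame({
    "date_id": [20240101, 20240101, 20240102],
    "product_id": [1, 2, 1],
    "sales_amount": [10.0, 25.0, 7.5],
})

# Each component of the fact table's multipart key joins to exactly one dimension
print(sales_fact.merge(product_dim, on="product_id"))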



Define a data mart.
A data mart is a pragmatic collection of related facts, but it does not have to be exhaustive or exclusive. A data mart is both a kind of subject area and an application. A data mart is a collection of numeric facts.

What are the advantages of a data-modeling tool?


• Integrates the data warehouse model with other corporate data models.
• Helps assure consistency in naming.
• Creates good documentation in a variety of useful formats.
• Provides a reasonably intuitive user interface for entering comments about
objects.

What is data warehouse performance issue?
The performance of a data warehouse is largely a function of the quantity and type of data stored within a database and the query/data-loading workload placed upon the system.

What are the types of performance issue?


1. Capacity planning for the data warehouse
2. Data placement techniques within a data warehouse
3. Application performance techniques
4. Monitoring the data warehouse
Why do you need data warehouse life cycle process?
The data warehouse life cycle approach is essential because it ensures that the project pieces are brought together in the right order and at the right time.

What are the steps in the life cycle approach?


• Project Planning
• Business Requirements Definition
• Data track: Dimensional Modeling, Physical Design, Data Staging Design & Development
• Technology track: Technical Architecture Design, Product Selection & Installation
• Application track: End User Application Specification, End User Application Development
• Deployment
• Maintenance & Growth
• Project Management



Merits of Data Warehouse.
• Ability to make effective decisions from the database
• Better analysis of data and decision support
• Discovers trends and correlations that benefit the business
• Handles huge amounts of data

What are the characteristics of data warehouse?


• Separate
• Available
• Integrated
• Subject Oriented
• Not Dynamic
• Consistency
• Iterative Development

List some of the Data Warehouse tools.
• OLAP (Online Analytic Processing)
• ROLAP (Relational OLAP)
• End User Data Access tool
• Ad Hoc Query tool
• Data Transformation services
• Replication
Explain OLAP.
OLAP refers to the general activity of querying and presenting text and number data from data warehouses, as well as a specifically dimensional style of querying and presenting that is exemplified by a number of “OLAP vendors”. The OLAP vendors' technology is nonrelational and is almost always based on an explicit multidimensional cube of data. OLAP databases are also known as multidimensional cube databases.

Explain ROLAP.
ROLAP is a set of user interfaces and applications that give a relational database a
dimensional flavour. ROLAP stands for Relational Online Analytic Processing.

Explain End User Data Access tool.


An end user data access tool is a client of the data warehouse. In a relational data warehouse, such a client maintains a session with the presentation server, sending a stream of separate SQL requests to the server. Eventually the end user data access tool is done with the SQL session and turns around to present a screen of data or a report, a graph, or some other higher form of analysis to the user. An end user data access tool can be as simple as an ad hoc query tool or as complex as a sophisticated data mining or modeling application.

Explain Ad Hoc query tool.


A specific kind of end user data access tool that invites the user to form their own queries
by directly manipulating relational tables and their joins. Ad Hoc query tools, as powerful
as they are, can only be effectively used and understood by about 10% of all the potential
end users of a data warehouse.

Name some of the data mining applications.


Data mining for Biomedical and DNA data analysis
Data mining for Financial data analysis
Data mining for the Retail industry
Data mining for the Telecommunication industry

What is the difference between “supervised” and unsupervised” learning scheme?
In data mining, during classification the class label of each training sample is provided; this type of training is called supervised learning, i.e. the learning of the model is supervised in that it is told to which class each training sample belongs (e.g. classification). In unsupervised learning the class label of each training sample is not known, and the number or set of classes to be learned may not be known in advance (e.g. clustering).
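A brief sketch of the contrast in Python (assuming scikit-learn; the iris data set is used only as an example):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the class label y of each training sample is provided (classification)
classifier = DecisionTreeClassifier(random_state=0).fit(X, y)

# Unsupervised: only X is given; the learner discovers the groups itself (clustering)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(classifier.predict(X[:3]), clusters[:3])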

Explain the various OLAP operations.


a) Roll-up: performs aggregation on a data cube, either by climbing up a concept hierarchy for a dimension or by dimension reduction.
b) Drill-down: the reverse of roll-up; it navigates from less detailed data to more detailed data.
c) Slice: performs a selection on one dimension of the given cube, resulting in a subcube.
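Roll-up and slice can be imitated on a flat table with pandas; the sales cube below is a made-up example:

import pandas as pd

cube = pd.DataFrame({
    "year":    [2023, 2023, 2023, 2024, 2024, 2024],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2", "Q2"],
    "city":    ["Delhi", "Delhi", "Mumbai", "Delhi", "Mumbai", "Mumbai"],
    "sales":   [100, 120, 90, 110, 95, 105],
})

# Roll-up: aggregate from (year, quarter) up to year along the time hierarchy
print(cube.groupby("year")["sales"].sum())

# Slice: select a single value on one dimension, yielding a subcube
print(cube[cube["city"] == "Delhi"])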

Why is data quality so important in a data warehouse environment?



Data quality is important in a data warehouse environment to facilitate decision-making.


In order to support decision-making, the stored data should provide information from a
historical perspective and in a summarized manner.

How can data visualization help in decision-making?


Data visualization helps the analyst gain intuition about the data being observed. Visualization applications frequently assist the analyst in selecting display formats, viewer perspectives and data representation schemas that foster deep intuitive understanding, thus facilitating decision-making.

What do you mean by high performance data mining?


Data mining refers to extracting or mining knowledge. It involves an integration of techniques from multiple disciplines such as database technology, statistics, machine learning, neural networks, etc. When it involves techniques from high performance computing, it is referred to as high performance data mining.

Explain the various data mining issues.


Explain about
• Knowledge Mining
• User interaction

• Performance
• Diversity in data types

Explain the data mining functionalities.
The data mining functionalities are:
• Concept class description
• Association analysis
• Classification and prediction
• Cluster analysis
• Outlier analysis
Explain the different types of data repositories on which mining can be performed.
The different types of data repositories on which mining can be performed are:
• Relational databases
• Data warehouses
• Transactional databases
• Advanced databases
• Flat files
• World Wide Web

Explain the architecture of data warehouse.


Steps for the design and construction of a DW:
Top-down view
Data source view
Data warehouse view
Business query view
3-tier DW architecture



What is Data Mining? Explain the steps in Knowledge Discovery.
Data mining refers to extracting or mining knowledge from large amounts of data. The steps in knowledge discovery are:
Data cleaning
Data integration
Data selection
Data transformation
Data mining
Pattern evaluation
Knowledge representation

Explain the data pre-processing techniques in detail.

The data preprocessing techniques are:
Data cleaning
Data integration
Data transformation
Data reduction

Explain the smoothing Techniques.


• Binning
• Clustering
• Regression

Explain Data transformation in detail.


• Smoothing
• Aggregation
• Generalization
• Normalization
• Attribute construction

Explain Normalization in detail.


• Min-max normalization
• Z-score normalization
• Normalization by decimal scaling
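A short numeric sketch of the three normalizations listed above (the attribute values are hypothetical):

import numpy as np

x = np.array([200.0, 300.0, 400.0, 600.0, 986.0])  # hypothetical attribute values

# Min-max normalization to the range [0, 1]
min_max = (x - x.min()) / (x.max() - x.min())

# Z-score normalization: subtract the mean, divide by the standard deviation
z_score = (x - x.mean()) / x.std()

# Decimal scaling: divide by 10^j, the smallest power of ten with max(|x|)/10^j < 1
j = int(np.floor(np.log10(np.abs(x).max()))) + 1
decimal_scaled = x / (10 ** j)

print(min_max, z_score, decimal_scaled, sep="\n")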

Explain data reduction.


• Data cube Aggregation
• Attribute subset Selection
• Dimensional reduction
• Numerosity reduction



Explain parametric methods and non-parametric methods of reduction.

Parametric Methods:
• Regression Model
• Log linear Model
Non-Parametric Methods
Sampling
Histogram
Clustering

Explain Data Discretization and Concept Hierarchy Generation.

Discretization and concept hierarchy generation for numerical data:
Segmentation by natural partitioning
Binning
Histogram analysis
Cluster analysis

Explain Data mining Primitives.
There are 5 data mining primitives. They are:
• Task-relevant data
• Kinds of knowledge to be mined
• Concept hierarchies
• Interestingness measures
• Knowledge presentation and visualization techniques to be used for discovered patterns
Explain Attribute Oriented Induction.

• Attribute oriented induction for data characterization


• Algorithm
• Presentation of derived generalization
• Example

Explain Statistical measures in databases.


• Measuring the central tendency
• Measuring the dispersion of data
• Graph displays
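For illustration only, the first two bullets can be computed directly with NumPy on a made-up data set:

import numpy as np

values = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110])

# Central tendency
print("mean:", values.mean(), "median:", np.median(values))

# Dispersion
q1, q3 = np.percentile(values, [25, 75])
print("variance:", values.var(), "std dev:", values.std(), "IQR:", q3 - q1)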

Explain multilevel association rule.


• Example
• Explanation
• Variations



Explain Multidimensional Database.
• Star schema
• Snowflake schema
• Fact constellation

Explain Indexing with suitable examples.


• Bitmap Indexing
• Join Indexing
• Bitmapped join indexing

Explain the Back Propagation technique.
• Definition
• Back Propagation Algorithm & diagram
• Example

Explain Partition Methods.


• K-Means Partition
• K-Medoids Partition
• CLARANS method with examples

Explain the types of data in cluster analysis.


• Data matrix
• Dissimilarity matrix
• Interval-scaled variables
• Binary variables
• Nominal, ordinal and ratio-scaled variables

Explain Outlier analysis.


• Statistical-based outlier detection
• Distance-based outlier detection
• Deviation-based outlier detection
