CH-2: Introduction to Data Science

Undergraduate Program

Introduction to Emerging Technologies

INTRODUCTION TO DATA SCIENCE
Introduction to Data Science
Topics Covered
• Overview of Data Science
• Definition of data and information
• Data types and representation
• Data Value Chain
  • Data Acquisition
  • Data Analysis
  • Data Curation
  • Data Storage
  • Data Usage
• Basic concepts of Big Data
Overview of Data Science (1)
• Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured, semi-structured, and unstructured data.
• Data science is much more than simply analyzing data; it offers a range of roles and requires a range of skills.
• Data science continues to evolve as one of the most promising and in-demand career paths for skilled professionals.
Overview of Data Science (2)
• Today, successful data professionals understand that they must advance beyond the traditional skills of analyzing large amounts of data, data mining, and programming.
• Data scientists need to be curious and results-oriented, with exceptional industry-specific knowledge and communication skills.
• Data scientists need a strong quantitative background in statistics and linear algebra, as well as programming knowledge with a focus on data warehousing, mining, and modelling, to build and analyze algorithms.
What is data?
• Data is a collection of raw facts and figures, or raw input that, when processed or arranged, produces meaningful output.
• Data can be defined as a representation of facts, concepts, or instructions in a formalized manner suitable for communication, interpretation, or processing by humans or electronic machines.
• Data is represented with the help of characters such as alphabets (A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =, #, @, $, %, etc.).
What is Information?
• Information is organized or classified data that has some meaningful value for the receiver.
• Information is the processed data on which decisions and actions are based.
• Information is data that is organized, meaningful, and useful for making a decision.
• For the decision to be meaningful, the processed data must satisfy the following characteristics:
  • Timely − Information should be available when required.
  • Accuracy − Information should be accurate.
  • Completeness − Information should be complete.
Summary: Data vs. Information

Data                                          | Information
----------------------------------------------|----------------------------------------------
Described as unprocessed or raw facts and     | Described as processed data
figures                                       |
Cannot help in decision making                | Can help in decision making
Raw material that can be organized,           | Interpreted data; created from organized,
structured, and interpreted to create useful  | structured, and processed data in a
information systems                           | particular context
'Groups of non-random' symbols in the form    | Processed data in the form of text, images,
of text, images, and voice representing       | and voice representing quantities, actions,
quantities, actions, and objects              | and objects
Data Processing Cycle (1)
• Data processing is the re-structuring or re-ordering of data by people or machines to increase its usefulness and add value for a particular purpose.
• Data processing consists of three basic steps: input, processing, and output. These three steps constitute the data processing cycle.

[Figure: the data processing cycle - input → processing → output]
Data Processing Cycle (2)
• Input step − the input data is prepared in some convenient form for processing.
  • The form depends on the processing machine.
  • For example, when electronic computers are used, input data can be recorded on several types of storage media, such as hard disks, CDs, flash disks, and so on.
• Processing step − the input data is changed to produce data in a more useful form.
  • For example, paychecks can be calculated from time cards, or a summary of sales for the month can be calculated from the sales orders.
Data Processing Cycle (3)
• Output step − at this stage, the result of the preceding processing step is collected.
  • The particular form of the output data depends on the use of the data.
  • For example, output data may be paychecks for employees.
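As a minimal illustration of the cycle, the Python sketch below takes time-card records as input, processes them into pay amounts, and outputs a paycheck summary. The names, hours, and flat hourly rate are invented for the example.

    # Input: raw time-card records (name, hours worked), e.g. read from a file
    time_cards = [("Abebe", 40), ("Sara", 35), ("Kebede", 42)]
    HOURLY_RATE = 12.50  # assumed flat rate for the example

    # Processing: transform the raw input into more useful data (pay amounts)
    paychecks = [(name, hours * HOURLY_RATE) for name, hours in time_cards]

    # Output: present the result in the form the user needs
    for name, pay in paychecks:
        print(f"Paycheck for {name}: {pay:.2f}")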

Data types and their representation (1)
• A data type is simply an attribute of data that tells the compiler or interpreter how the programmer intends to use the data.
• Data types from a computer programming perspective:
  • Almost all programming languages explicitly include the notion of data type. Common data types include:
  • Integer (int): used to store whole numbers, known in mathematics as integers.
  • Boolean (bool): used to represent one of two values: true or false.
  • Character (char): used to store a single character.
  • Floating-point number (float): used to store real numbers.
  • Alphanumeric string (string): used to store a combination of characters and numbers.
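A quick Python sketch of these types (note that Python has no separate char type, so a one-character string stands in for it):

    age = 25            # int: whole number
    is_active = True    # bool: one of two values
    grade = "A"         # one-character str, standing in for char
    pi = 3.14159        # float: real number
    user_id = "user42"  # str: alphanumeric string
    print(type(age), type(is_active), type(grade), type(pi), type(user_id))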
Data types and their representation (2)
• Data types from a data analytics perspective:
  • From a data analytics point of view, there are three common data types or structures: structured, semi-structured, and unstructured data.
[Figure: the three data types (structured, semi-structured, unstructured) and metadata]
Structured Data
• Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze.
• Structured data conforms to a tabular format with a relationship between the different rows and columns.
• Common examples of structured data are Excel files or SQL databases. Each of these has structured rows and columns that can be sorted.
• Structured data depends on the existence of a data model – a model of how data can be stored, processed, and accessed.
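To make this concrete, the sketch below uses Python's built-in sqlite3 module; the table and column names are invented for the example.

    import sqlite3

    # Structured data: rows and columns conforming to a pre-defined model (schema)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")
    conn.executemany("INSERT INTO students (name, gpa) VALUES (?, ?)",
                     [("Hana", 3.8), ("Dawit", 3.2)])

    # Because the structure is known in advance, the rows are easy to sort and query
    for row in conn.execute("SELECT name, gpa FROM students ORDER BY gpa DESC"):
        print(row)
    conn.close()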
Semi-structured Data
• Semi-structured data is a form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables.
• Nonetheless, it contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data.
• Therefore, it is also known as a self-describing structure.
• Common examples of semi-structured data are JSON and XML.
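A brief sketch: JSON carries its own tags (keys), so a program can navigate the hierarchy without a fixed table schema. The record below is invented for the example.

    import json

    # Semi-structured data: no fixed schema, but markers (keys) describe each field
    record = '{"name": "Hana", "courses": ["Math", "Physics"], "address": {"city": "Addis Ababa"}}'

    data = json.loads(record)
    print(data["name"])             # the keys act as self-describing markers
    print(data["address"]["city"])  # hierarchies of records and fields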
Unstructured Data
• Unstructured data is information that either does not have a predefined data model or is not organized in a pre-defined manner.
• It is without proper formatting and alignment.
• Unstructured information is typically text-heavy but may contain data such as dates, numbers, and facts as well.
• This results in irregularities and ambiguities that make it difficult to understand using traditional programs, compared to data stored in structured databases.
• Common examples of unstructured data include audio and video files or NoSQL databases.
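As a small illustration, the sketch below pulls dates and amounts out of free-form text with regular expressions; the text and patterns are assumptions, and real unstructured data is far messier.

    import re

    # Unstructured data: free-form text with no predefined model
    note = "Meeting moved to 6/24/2021. Budget approved: 15000 birr. Follow up on 7/1/2021."

    dates = re.findall(r"\d{1,2}/\d{1,2}/\d{4}", note)
    amounts = re.findall(r"(?<![/\d])\d+(?![/\d])", note)
    print(dates)    # ['6/24/2021', '7/1/2021']
    print(amounts)  # ['15000']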
Metadata – Data about Data
• The last category of data type is metadata. From a technical point of view, it is not a separate data structure, but it is one of the most important elements for Big Data analysis and big data solutions.
• Metadata is data about data.
  • It provides additional information about a specific set of data.
  • In a set of photographs, for example, metadata could describe when and where the photos were taken. The metadata then provides fields for dates and locations which, by themselves, can be considered structured data.
  • For this reason, metadata is frequently used by Big Data solutions for initial analysis.
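A tiny sketch of the photo example: each photo's metadata (date, location) is itself structured, so it can be filtered and analyzed even though the image pixels cannot. The records are invented.

    # Metadata: structured data *about* otherwise hard-to-analyze items (here, photos)
    photos = [
        {"file": "IMG_001.jpg", "taken": "2021-06-24", "location": "Addis Ababa"},
        {"file": "IMG_002.jpg", "taken": "2021-06-25", "location": "Bahir Dar"},
        {"file": "IMG_003.jpg", "taken": "2021-06-25", "location": "Addis Ababa"},
    ]

    # Initial analysis on metadata alone, without inspecting the image content
    addis = [p["file"] for p in photos if p["location"] == "Addis Ababa"]
    print(addis)  # ['IMG_001.jpg', 'IMG_003.jpg']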
Data Value Chain
• The Data Value Chain describes the information flow within a big data system as a series of steps needed to generate value and useful insights from data.
• The Big Data Value Chain identifies the following key high-level activities, sketched as a pipeline below:
[Figure: Data Value Chain - Data Acquisition → Data Analysis → Data Curation → Data Storage → Data Usage]
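A minimal sketch of the chain as a pipeline, with each activity reduced to a placeholder function; the data and thresholds are invented:

    # Toy end-to-end pipeline mirroring the value chain activities
    def acquire():                       # Data Acquisition: gather and filter raw data
        return [" 40 ", "35", "", "42"]

    def analyze(raw):                    # Data Analysis: make the data usable
        return [int(x) for x in raw if x.strip()]

    def curate(values):                  # Data Curation: keep only quality records
        return [v for v in values if 0 <= v <= 80]

    store = []                           # Data Storage: persist (here, just a list)
    store.extend(curate(analyze(acquire())))

    print(f"Average: {sum(store) / len(store):.1f}")  # Data Usage: drive a decision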
Data Acquisition
• Data acquisition is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any other storage solution on which data analysis can be carried out.
• Data acquisition is one of the major big data challenges in terms of infrastructure requirements.
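A small acquisition sketch, assuming a CSV file named sales.csv with an "amount" column: rows are gathered, incomplete records filtered out, and fields cleaned before storage.

    import csv

    # Data Acquisition: gather rows, filter bad ones, clean fields before storage
    def acquire_sales(path="sales.csv"):          # file and column names are assumptions
        rows = []
        with open(path, newline="") as f:
            for rec in csv.DictReader(f):
                if not rec.get("amount"):         # filter: drop incomplete records
                    continue
                rec["amount"] = float(rec["amount"].strip())  # clean: normalize the type
                rows.append(rec)
        return rows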
Data Analysis
• Data analysis is concerned with making the raw data acquired amenable to use in decision-making as well as domain-specific usage.
• Data analysis involves exploring, transforming, and modelling data with the goal of highlighting relevant data, synthesizing, and extracting useful hidden information.
• Related areas include data mining, business intelligence, and machine learning.
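A minimal analysis sketch using only the standard library, surfacing summary figures from acquired data (the amounts are invented):

    import statistics

    # Data Analysis: explore and summarize to highlight relevant information
    amounts = [120.0, 95.5, 310.0, 87.25, 150.0]   # e.g. the cleaned "amount" fields

    print("total: ", sum(amounts))
    print("mean:  ", statistics.mean(amounts))
    print("median:", statistics.median(amounts))
    print("stdev: ", round(statistics.stdev(amounts), 2))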
Data Curation
• Data curation is the active management of data over its life cycle to ensure it meets the necessary data quality requirements for its effective usage.
• Data curation processes can be categorized into different activities such as content creation, selection, classification, transformation, validation, and preservation.
• Data curation is performed by expert curators who are responsible for improving the accessibility and quality of data.
• Data curators (also known as scientific curators or data annotators) hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable, and fit for purpose.
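A sketch of one curation activity, validation: accepting only records that meet quality requirements. The field names and rules are assumptions.

    # Data Curation (validation): keep only trustworthy, fit-for-purpose records
    def is_valid(record):
        return (
            isinstance(record.get("name"), str) and record["name"].strip() != ""
            and isinstance(record.get("amount"), (int, float)) and record["amount"] >= 0
        )

    records = [{"name": "Hana", "amount": 120.0}, {"name": "", "amount": -5}]
    curated = [r for r in records if is_valid(r)]
    print(curated)  # only the valid record survives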

Data Storage
• Data storage is the persistence and management of data in a scalable way that satisfies the needs of applications requiring fast access to the data.
• Relational Database Management Systems (RDBMS) have been the main, and almost the only, solution to the storage paradigm.
Data Usage
• Data usage covers the data-driven business activities that need access to data, its analysis, and the tools needed to integrate the data analysis within the business activity.
• Data usage in business decision-making can enhance competitiveness through the reduction of costs, increased added value, or any other parameter that can be measured against existing performance criteria.
Basic concepts of big data
• Big data is a blanket term for the non-traditional strategies and technologies needed to gather, organize, process, and extract insights from large datasets.
• While the problem of working with data that exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.
• In this section, we will talk about big data on a fundamental level and define common concepts you might come across.
What Is Big Data?
• Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
• In this context, a "large dataset" means a dataset too large to reasonably process or store with traditional tooling or on a single computer.
• This means that the common scale of big datasets is constantly shifting and may vary significantly from organization to organization.
• Big data is characterized by the three Vs, and more:
Characteristics of Big Data
• Volume: large amounts of data (zettabytes; massive datasets).
• Velocity: data is live-streaming or in motion.
• Variety: data comes in many different forms from diverse sources.
• Veracity: can we trust the data? How accurate is it?

Clustered Computing (1)
• Because of the qualities of big data, individual computers are often inadequate for handling the data at most stages.
• To better address the high storage and computational needs of big data, computer clusters are a better fit.
• Big data clustering software combines the resources of many smaller machines, seeking to provide a number of benefits:
  • Resource Pooling: combining the available storage space to hold data is a clear benefit, but CPU and memory pooling are also extremely important.
Clustered Computing (2)
  • High Availability: clusters can provide varying levels of fault tolerance and availability guarantees to prevent hardware or software failures from affecting access to data and processing.
  • Easy Scalability: clusters make it easy to scale horizontally by adding additional machines to the group.
• Cluster membership and resource allocation can be handled by software like Hadoop's YARN (which stands for Yet Another Resource Negotiator).
• The machines involved in the computing cluster are also typically involved with the management of a distributed storage system.
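Real clusters span many machines, but a single-machine Python sketch with a multiprocessing pool shows the same idea of pooling compute resources and splitting work across members; on a real cluster, a manager like YARN would assign these tasks to machines.

    from multiprocessing import Pool

    # Toy stand-in for cluster computing: split a large job across pooled workers
    def count_words(chunk):
        return len(chunk.split())

    if __name__ == "__main__":
        chunks = ["big data needs big tools"] * 1000  # pretend this is a huge dataset
        with Pool(processes=4) as pool:               # the pooled "cluster members"
            counts = pool.map(count_words, chunks)    # distribute work, then combine
        print("total words:", sum(counts))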
End Of Chapter Two
