ADBMS: Data Warehousing and Data Mining
Course Overview
The course: what and how
0. Introduction
I. Data Warehousing
II. Decision Support and OLAP
III. Data Mining
IV. Looking Ahead
Demos and Labs
2
0. Introduction
Data warehousing, OLAP and data mining: what and why (now)?
Relation to OLTP
A case study
Demos, labs
What product promotions have the biggest impact on revenue?
What impact will new products/services have on revenue and margins?
Data, data everywhere, yet ... I can't find the data I need
data is scattered over the network
many versions, subtle differences
Data Evolution
60s: Batch reports
hard to find and analyze information
inflexible and expensive, reprogram every new request
[Chart: data warehouse sizes, initial vs. projected 2Q96, in buckets from 5GB up to 500GB-1TB. Source: META Group, Inc.]
Weather images
Intelligence Agency Videos
11
Data Warehouse
A data warehouse is a subject-oriented, integrated, time-varying, non-volatile collection of data used primarily in organizational decision making.
13
Explorers: Seek out the unknown and previously unsuspected rewards hiding in the detailed data
14
[Figure: warehouse architecture. Legacy data and purchased data pass through extraction and cleansing into the data warehouse engine, which supports query and analysis; a metadata repository describes the warehouse contents.]
15
Decision Support
Used to manage and control business
Data is historical or point-in-time
Optimized for inquiry rather than update
Use of the system is loosely defined and can be ad-hoc
Used by managers and end-users to understand the business and make judgements
17
Application Areas
Industry | Application
Finance | Credit card analysis
Insurance | Claims, fraud analysis
Telecommunication | Call record analysis
Transport | Logistics management
Consumer goods | Promotion analysis
Data service providers | Value-added data
Utilities | Power usage analysis
20
21
Function
Missing data: Decision support requires historical data, which operational databases do not typically maintain.
Data consolidation: Decision support requires consolidation (aggregation, summarization) of data from many heterogeneous sources: operational databases, external sources.
Data quality: Different sources typically use inconsistent data representations, codes, and formats, which have to be reconciled.
Operational Systems
Run the business in real time
Based on up-to-the-second data
Optimized to handle large numbers of simple read/write transactions
Optimized for fast response to predefined transactions
Used by people who deal with customers, products -- clerks, salespeople etc.
They are increasingly used by customers
26
Industry Usage

Data | Industry | Usage | Technology | Volumes
Customer file | All | Track customer details | Legacy applications, flat files, mainframes | Small-medium
Account balance | Finance | Control account activities | Legacy applications, hierarchical databases, mainframe | Large
Point-of-sale data | Retail | Generate bills, manage stock | ERP, client/server, relational databases | Very large
Call record | Telecommunications | Billing | Legacy application, hierarchical database, mainframe | Very large
Production record | Manufacturing | Control production | ERP, relational databases, AS/400 | Medium
Operational Database (application oriented): Loans, Credit Card, Trust, Savings
Data Warehouse (subject oriented): Customer, Vendor, Product, Activity
29
Warehouse (DSS)
Subject oriented
Used to analyze business
Summarized and refined
Snapshot data
Integrated data
Ad-hoc access
Knowledge user (manager)
31
Data Warehouse
Performance relaxed
Large volumes accessed at a time (millions of rows)
Mostly read (batch update)
Redundancy present
Database size: 100 GB to a few terabytes
32
Data Warehouse
Query throughput is the performance metric
Hundreds of users
Managed by subsets
33
To summarize ...
OLTP Systems are used to run a business
Why Now?
Data is being produced
ERP provides clean data
The computing power is available
The computing power is affordable
The competitive pressures are strong
Commercial products are available
35
Million-dollar massively parallel hardware is needed to deliver fast response times for complex queries
OLAP servers require massive and unwieldy indices
Complex OLAP queries clog the network with data
Data warehouses must be at least 100 GB to be effective
Source -- Arbor Software Home Page
36
Inventory management
Merchandise
Accounts payable
Purchasing
Supplier promotions: national, region, store level
Wal*Mart Manager
[Figure: Wal*Mart replenishment flow. Daily inventory restock: suppliers ship to Wal*Mart, sometimes same-day; at the distribution center, suppliers' merchandise is unloaded directly onto Wal*Mart trucks (warehouse pass-through); some large items are stocked, and delivery may come from the supplier.]
39
Wal*Mart System
NCR 5100M, 96 nodes; 24 TB raw disk; 700-1000 Pentium CPUs
Number of rows: > 5 billion
Historical data: 65 weeks (5 quarters)
New daily volume: current apps 75 million; new apps 100 million+
Number of users: thousands
Number of queries: 60,000 per week
40
Course Overview
0. Introduction
I. Data Warehousing
II. Decision Support and OLAP
III. Data Mining
IV. Looking Ahead
42
[Figure: warehouse architecture with sources. ERP systems, purchased data, and legacy data pass through extraction and cleansing into the data warehouse engine, which supports query and analysis; a metadata repository describes the warehouse contents.]
43
44
Source Data
Operational/source data: sequential, legacy, relational, external
External Sources
Nielsens, Acxiom, CMIE, Vendors, Partners
46
Nothing could be farther from the truth
Warehouse data comes from disparate, questionable sources
47
49
51
52
Sources of data are generally legacy mainframes using VSAM, IMS, IDMS, DB2; more data today is in relational databases on Unix
Conditioning
The conversion of data types from the source to the target data store (warehouse), always a relational database
53
Scoring
Computation of the probability of an event, e.g., the chance that a customer will defect to AT&T from MCI, or the chance that a customer is likely to buy a new product
55
Loads
After extracting, scrubbing, cleaning, validating etc., the data needs to be loaded into the warehouse
Issues:
huge volumes of data to be loaded
small time window available when the warehouse can be taken offline (usually nights)
when to build indexes and summary tables
allow system administrators to monitor, cancel, resume, change load rates
recover gracefully -- restart after failure from where you were and without loss of data integrity
56
Load Techniques
Use SQL to append or insert new data
a record-at-a-time interface will lead to random disk I/Os
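A minimal sketch of the contrast, assuming a hypothetical sales_fact table and a sales_staging table populated by a bulk load utility (all names are illustrative):

```sql
-- Row-at-a-time inserts: one statement (and often one random I/O) per record
INSERT INTO sales_fact (sale_date, cust_id, prod_id, amount)
VALUES (DATE '1998-01-01', 1001, 57, 12.50);

-- Set-based append from a staging table filled by a bulk loader:
-- a single statement the engine can execute with mostly sequential I/O
INSERT INTO sales_fact (sale_date, cust_id, prod_id, amount)
SELECT sale_date, cust_id, prod_id, amount
FROM   sales_staging;
```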
57
Load Taxonomy
Incremental versus Full loads Online versus Offline loads
58
Refresh
Propagate updates on source data to the warehouse Issues:
when to refresh
how to refresh -- refresh techniques
59
When to Refresh?
periodically (e.g., every night, every week) or after significant events
on every update: not warranted unless warehouse data require current data (up-to-the-minute stock quotes)
refresh policy set by administrator based on user needs and traffic
possibly different policies for different sources
60
Refresh Techniques
Full Extract from base tables
read entire source table: too expensive
may be the only choice for legacy systems
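Where the source can expose changed rows, an incremental refresh avoids re-reading the whole table. A minimal sketch, assuming a hypothetical src_sales table carrying a last_modified timestamp and a matching wh_sales table in the warehouse:

```sql
-- Propagate only the rows changed since the previous refresh
INSERT INTO wh_sales (sale_date, cust_id, prod_id, amount)
SELECT sale_date, cust_id, prod_id, amount
FROM   src_sales
WHERE  last_modified > :last_refresh_time;  -- timestamp recorded at the previous run
```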
61
Scrubbing Data
Sophisticated transformation tools used to improve the quality of data
Clean data is vital for the success of the warehouse
Example:
Seshadri, Sheshadri, Sesadri, Seshadri S., Srinivasan Seshadri, etc. are the same person
64
Scrubbing Tools
Apertus -- Enterprise/Integrator
Vality -- IPE
Postal Soft
65
Structuring/Modeling Issues
67
68
customer activity (1986-89): monthly summary
customer activity detail (1987-89)
customer activity detail (1990-91)
Detail record layouts:
custid, activity date, amount, clerk id, order no
custid, activity date, amount, line item no, order no
69
Granularity in Warehouse
Cannot answer some questions with summarized data
Did Anand call Seshadri last month? Not possible to answer if only the total duration of calls by Anand over a month is maintained and individual call details are not.
Granularity in Warehouse
Tradeoff is to have dual level of granularity
Store summary data on disks
95% of DSS processing done against this data
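As a sketch of the disk-resident summary level, a monthly call summary might be derived from the call detail like this (table and column names are illustrative; TRUNC(date, 'MM') is Oracle-style month truncation):

```sql
-- Summary data kept on disk for most DSS queries;
-- the call-level detail is kept separately at full granularity
CREATE TABLE call_summary AS
SELECT caller_id,
       TRUNC(call_date, 'MM')  AS call_month,
       COUNT(*)                AS num_calls,
       SUM(duration_sec)       AS total_duration_sec
FROM   call_detail
GROUP BY caller_id, TRUNC(call_date, 'MM');
```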
72
Vertical Partitioning
Full record: Acct. No, Name, Balance, Date Opened, Interest Rate, Address
Frequently accessed: Acct. No, Balance
Rarely accessed: Acct. No, Name, Date Opened, Interest Rate, Address
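A minimal DDL sketch of this split (hypothetical names and types), with the rarely accessed columns moved to a second table keyed on the same account number:

```sql
-- Frequently accessed columns
CREATE TABLE account_balance (
    acct_no  INTEGER PRIMARY KEY,
    balance  DECIMAL(12,2)
);

-- Rarely accessed columns, joined back on acct_no when needed
CREATE TABLE account_detail (
    acct_no        INTEGER PRIMARY KEY REFERENCES account_balance(acct_no),
    name           VARCHAR(60),
    date_opened    DATE,
    interest_rate  DECIMAL(5,4),
    address        VARCHAR(200)
);
```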
Derived Data
Introduction of derived (calculated) data may often help
Have seen this in the context of dual levels of granularity
Can keep auxiliary views and indexes to speed up query processing
74
Schema Design
Database organization
must look like the business
must be recognizable by the business user
approachable by the business user
must be simple
Schema Types
Star schema
Fact constellation schema
Snowflake schema
75
Dimension Tables
Dimension tables
Define business in terms already familiar to users
Wide rows with lots of descriptive text
Small tables (about a million rows)
Joined to fact table by a foreign key
Heavily indexed
Typical dimensions:
time periods, geographic region (markets, cities), products, customers, salesperson, etc.
76
Fact Table
Central table
mostly raw numeric items
narrow rows, a few columns at most
large number of rows (millions to a billion)
access via dimensions
77
Star Schema
A single fact table and, for each dimension, one dimension table
Does not capture hierarchies directly
[Figure: star schema. A central fact table (date, custno, prodno, cityname, ...) joined to time, cust, prod, and city dimension tables.]
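A minimal DDL sketch of such a star schema (all names, keys, and measures are illustrative, following the figure's date/cust/prod/city dimensions):

```sql
CREATE TABLE time_dim (date_key INTEGER PRIMARY KEY, cal_date DATE, month INTEGER, quarter INTEGER, year INTEGER);
CREATE TABLE cust_dim (cust_key INTEGER PRIMARY KEY, cust_name VARCHAR(60), segment VARCHAR(30));
CREATE TABLE prod_dim (prod_key INTEGER PRIMARY KEY, prod_name VARCHAR(60), category VARCHAR(30));
CREATE TABLE city_dim (city_key INTEGER PRIMARY KEY, city_name VARCHAR(60), region VARCHAR(30));

-- Narrow fact table: foreign keys to each dimension plus numeric measures
CREATE TABLE sales_fact (
    date_key INTEGER REFERENCES time_dim(date_key),
    cust_key INTEGER REFERENCES cust_dim(cust_key),
    prod_key INTEGER REFERENCES prod_dim(prod_key),
    city_key INTEGER REFERENCES city_dim(city_key),
    units    INTEGER,
    amount   DECIMAL(12,2)
);
```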
78
Snowflake schema
Represent dimensional hierarchy directly by normalizing tables. Easy to maintain and saves storage
[Figure: snowflake schema. The same fact table, with the city dimension normalized into city and region tables.]
79
Fact Constellation
Multiple fact tables that share many dimension tables
Booking and Checkout may share many dimension tables in the hotel industry
[Figure: fact constellation. Booking and Checkout fact tables sharing the Hotels, Customer, Promotion, Travel Agents, and Room Type dimensions.]
80
De-normalization
Normalization in a data warehouse may lead to lots of small tables
Can lead to excessive I/Os since many tables have to be accessed
De-normalization is the answer, especially since updates are rare
81
Creating Arrays
Many times each occurrence of a sequence of data is in a different physical location
Beneficial to collect all occurrences together and store them as an array in a single row
Makes sense only if there is a stable number of occurrences which are accessed together
In a data warehouse, such situations arise naturally due to the time-based orientation
can create an array by month
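A sketch of such a month-array layout (hypothetical table and column names), with one row per account per year instead of twelve separate rows:

```sql
CREATE TABLE acct_monthly_balance (
    acct_no  INTEGER,
    year     INTEGER,
    bal_jan  DECIMAL(12,2),
    bal_feb  DECIMAL(12,2),
    bal_mar  DECIMAL(12,2),
    -- ... one column for each remaining month ...
    bal_dec  DECIMAL(12,2),
    PRIMARY KEY (acct_no, year)
);
```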
82
Selective Redundancy
Description of an item can be stored redundantly with the order table -- most often the item description is also accessed with the order table
Updates have to be handled carefully
83
Partitioning
Breaking data into several physical units that can be handled separately
Not a question of whether to do it in data warehouses, but how to do it
Granularity and partitioning are key to effective implementation of a warehouse
84
Why Partition?
Flexibility in managing data
Smaller physical units allow:
easy restructuring
free indexing
sequential scans if needed
easy reorganization
easy recovery
easy monitoring
85
86
Where to Partition?
Application level or DBMS level
Makes sense to partition at the application level
Allows a different definition for each year
Important since the warehouse spans many years and definitions change as the business evolves
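For contrast, DBMS-level partitioning by year might look like the following Oracle-style range-partitioned table (names are illustrative); application-level partitioning would instead create a separate physical table per year under the application's control:

```sql
CREATE TABLE sales_fact (
    sale_date DATE,
    cust_key  INTEGER,
    amount    DECIMAL(12,2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION sales_1996 VALUES LESS THAN (DATE '1997-01-01'),
    PARTITION sales_1997 VALUES LESS THAN (DATE '1998-01-01'),
    PARTITION sales_1998 VALUES LESS THAN (DATE '1999-01-01')
);
```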
[Figure: organizationally structured data drawn from the data warehouse.]
89
Data Marts
[Figure: data marts and the data warehouse.]
93
True Warehouse
[Figure: data sources feed the data warehouse, which in turn feeds the data marts.]
95
Query Processing
Indexing
Indexing Techniques
Exploiting indexes to reduce scanning of data is of crucial importance
Bitmap indexes
Join indexes
Other issues:
text indexing
parallelizing and sequencing of index builds and incremental updates
97
Indexing Techniques
Bitmap index:
A collection of bitmaps, one for each distinct value of the column
Each bitmap has N bits, where N is the number of rows in the table
The bit corresponding to value v for row r is set if and only if r has the value v for the indexed attribute
98
BitMap Indexes
An alternative representation of the RID-list
Especially advantageous for low-cardinality domains
Represent each row of a table by a bit, and the table as a bit vector
There is a distinct bit vector Bv for each value v of the domain
Example: the attribute sex has values M and F; a table of 100 million people needs 2 lists of 100 million bits
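In a system with bitmap index support, creating and using such indexes is straightforward. A sketch using Oracle-style syntax (table and column names are illustrative):

```sql
CREATE BITMAP INDEX cust_region_bmx ON customer (region);
CREATE BITMAP INDEX cust_rating_bmx ON customer (rating);

-- The optimizer can answer this by ANDing the region='W' and rating='M' bit vectors
SELECT cust
FROM   customer
WHERE  region = 'W' AND rating = 'M';
```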
99
Bitmap Index
Example: a table with a sex column (values M, F) and a yes/no flag, with rows (M,Y), (F,Y), (F,N), (M,N), (F,Y), (F,N):
bit vector for M: 1 0 0 1 0 0
bit vector for F: 0 1 1 0 1 1
bit vector for Y: 1 1 0 0 1 0
bit vector for N: 0 0 1 1 0 1
Customer table:

Cust | Region | Rating
C1   | N      | H
C2   | S      | M
C3   | W      | L
C4   | W      | H
C5   | S      | L
C6   | W      | L
C7   | N      | H

Bitmap index on Region:

Row ID | N S E W
1      | 1 0 0 0
2      | 0 1 0 0
3      | 0 0 0 1
4      | 0 0 0 1
5      | 0 1 0 0
6      | 0 0 0 1
7      | 1 0 0 0

Bitmap index on Rating:

Row ID | H M L
1      | 1 0 0
2      | 0 1 0
3      | 0 0 1
4      | 1 0 0
5      | 0 0 1
6      | 0 0 1
7      | 1 0 0

Query: customers where Region = W and Rating = M, answered by ANDing the W and M bit vectors.
101
BitMap Indexes
Comparison, join and aggregation operations are reduced to bit arithmetic, with dramatic improvement in processing time
Significant reduction in space and I/O (30:1)
Adapted for higher-cardinality domains as well; compression (e.g., run-length encoding) exploited
Products that support bitmaps: Model 204, TargetIndex (Red Brick), IQ (Sybase), Oracle 7.3
102
Join Indexes
Pre-computed joins
A join index between a fact table and a dimension table correlates a dimension tuple with the fact tuples that have the same value on the common dimensional attribute
e.g., a join index on city dimension of calls fact table correlates for each city the calls (in the calls table) from that city
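One concrete realization of this idea is Oracle's bitmap join index; a sketch for the calls/city example (table and column names are illustrative):

```sql
-- Bit vectors on calls rows, keyed by the joined-in city name
CREATE BITMAP INDEX calls_city_bjx
ON calls (city.city_name)
FROM calls, city
WHERE calls.city_id = city.city_id;
```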
103
Join Indexes
Join indexes can also span multiple dimension tables
e.g., a join index on city and time dimension of calls fact table
104
[Figures: join indexes for the Calls fact table with Time, Location, and Plan dimensions, including combined indexes such as C+T+L and C+T+L+P; selections are applied against a virtual cross product of T, L, and P; matching bit vectors are combined with AND.]
107
Intelligent Scan
Piggyback multiple scans of a relation (Redbrick)
piggybacking also done if second scan starts a little while after the first scan
108
Deterrents to parallelism
startup
communication
109
Parallel Utilities
Load, Archive, Update, Parse, Checkpoint, Recovery
110
Pre-computed Aggregates
Keep aggregated data for efficiency (pre-computed queries)
Questions:
Which aggregates to compute?
How to update aggregates?
How to use pre-computed aggregates in queries?
111
Pre-computed Aggregates
Aggregated table can be maintained by the:
warehouse server
middle tier
client applications
Pre-computed aggregates -- special case of materialized views -- same questions and issues remain
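A sketch of a pre-computed aggregate defined as a materialized view, assuming the illustrative sales_fact/time_dim star schema from earlier (syntax per systems that support CREATE MATERIALIZED VIEW, e.g. Oracle):

```sql
-- Monthly sales by city, precomputed and stored;
-- queries at or above this granularity can be answered from the view
CREATE MATERIALIZED VIEW monthly_city_sales AS
SELECT f.city_key,
       t.year,
       t.month,
       SUM(f.amount) AS total_amount,
       COUNT(*)      AS num_sales
FROM   sales_fact f JOIN time_dim t ON f.date_key = t.date_key
GROUP BY f.city_key, t.year, t.month;
```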
112
SQL Extensions
Extended family of aggregate functions:
rank (top 10 customers)
percentile (top 30% of customers)
median, mode
Object-relational systems allow addition of new aggregate functions
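With SQL window functions (standardized later and now widely available), a "top 10 customers" ranking can be written as follows; the sales_fact table and its columns are illustrative:

```sql
SELECT cust_key, revenue
FROM (
    SELECT cust_key,
           SUM(amount)                             AS revenue,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
    FROM   sales_fact
    GROUP BY cust_key
) ranked
WHERE rnk <= 10;
```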
113
SQL Extensions
Reporting features
running total, cumulative totals
Cube operator
group by on all subsets of a set of attributes (month, city)
redundant scan and sorting of data can be avoided
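In systems that support the cube operator, the group-bys over every subset of the listed attributes come from a single statement. A sketch (table and column names are illustrative):

```sql
-- Produces totals for (month, city), (month), (city) and the grand total in one pass
SELECT month, city, SUM(amount) AS total_amount
FROM   sales
GROUP BY CUBE (month, city);
```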
114
115
116
117
Course Overview
The course: what and how
0. Introduction
I. Data Warehousing
II. Decision Support and OLAP
III. Data Mining
IV. Looking Ahead
Demos and Labs
118
-- Ralph Kimball
120
121
What Is OLAP?
Online Analytical Processing: term coined by E.F. Codd in a 1994 paper contracted by Arbor Software*
Generally synonymous with earlier terms such as Decision Support, Business Intelligence, Executive Information System
OLAP = Multidimensional Database
MOLAP: Multidimensional OLAP (Arbor Essbase, Oracle Express)
ROLAP: Relational OLAP (Informix MetaCube, MicroStrategy DSS Agent)
* Reference: http://www.arborsoft.com/essbase/wht_ppr/coddTOC.html
122
Result: OLAP shifted from small vertical niche to mainstream DBMS category
123
Strengths of OLAP
It is a powerful visualization paradigm
It provides fast, interactive response times
It is good for analyzing time series
OLAP Is FASMI
Fast Analysis of Shared Multidimensional Information
Multi-dimensional Data
Hey, I sold $100M worth of goods!
Dimensions: Product, Region, Time
Hierarchical summarization paths:
Product: Industry > Category > Product
Region: Country > Region > City > Office
Time: Year > Quarter > Month > Week / Day
[Figure: 3-D data cube with Product (Juice, Cola, Milk, Cream, Toothpaste, Soap), Region (W, S, N), and Time (days 1-7) dimensions.]
128
[Figure: slice of a Product (Juice, Cola, Milk, Cream) x Region x Date cube with sample cell values (10, 47, 30, 12).]
129
[Figure: cube with dimensions Product (Household, Telecomm, Video, Audio), Region (Europe, Far East, India), and Sales Channel (Retail, Direct, Special).]
130
Low-level Details
131
Drill-Down
132
133
Multidimensional Spreadsheets
Analysts need spreadsheets that support
pivot tables (cross-tabs)
drill-down and roll-up
slice and dice
sort
selections
derived attributes
134
OLAP - Data Cube
Idea: analysts need to group data in many different ways
e.g., Sales(region, product, prodtype, prodstyle, date, saleamount)
saleamount is a measure attribute; the rest are dimension attributes
group by every subset of the other attributes
materialize (precompute and store) group-bys to give online response
Also: hierarchies on attributes: date -> weekday, date -> month -> quarter -> year
135
SQL Extensions
Front-end tools require
Extended Family of Aggregate Functions
rank, median, mode
Reporting Features
running totals, cumulative totals
Data Cube
136
ROLAP: generate SQL execution plans in the ROLAP engine, sitting between the database layer and the presentation layer, to obtain OLAP functionality.
MOLAP: store atomic data in a proprietary data structure (MDDB), pre-calculate as many outcomes as possible, and obtain OLAP functionality via proprietary algorithms running against this data.
[Chart: number of aggregations versus number of dimensions. Source: Microsoft TechEd98.]
Metadata Repository
Administrative metadata
source databases and their contents
gateway descriptions
warehouse schema, view and derived data definitions
dimensions, hierarchies
pre-defined queries and reports
data mart locations and contents
data partitions
data extraction, cleansing, transformation rules, defaults
data refresh and purging rules
user profiles, user groups
security: user authorization, access control
140
Metadata Repository .. 2
Business data:
business terms and definitions
ownership of data
charging policies
Operational metadata:
data lineage: history of migrated data and sequence of transformations applied
currency of data: active, archived, purged
monitoring information: warehouse usage statistics, error reports, audit trails
141
From day one establish that warehousing is a joint user/builder project
Establish that maintaining data quality will be an ONGOING joint user/builder responsibility
Train the users one step at a time
Consider doing a high-level corporate data model in no more than three weeks
143
Implement a user-accessible automated directory to information stored in the warehouse
Determine a plan to test the integrity of the data in the warehouse
From the start get warehouse users in the habit of 'testing' complex queries
144
When in a bind, ask others who have done the same thing for advice
Be on the lookout for small, but strategic, projects
Market and sell your data warehousing systems
145
You will find the need to store data not being captured by any existing system
You will need to validate data not being validated by transaction processing systems
146
Physical Design
design of summary tables, partitions, indexes
tradeoffs in use of different indexes
Query processing
selecting appropriate summary tables
dynamic optimization with feedback
acid test for query optimization: cost estimation, use of transformations, search strategies
partitioning query processing between the OLAP server and the backend server
149
Reporting Tools
Andyne Computing -- GQL
Brio -- BrioQuery
Business Objects -- Business Objects
Cognos -- Impromptu
Information Builders Inc. -- Focus for Windows
Oracle -- Discoverer2000
Platinum Technology -- SQL*Assist, ProReports
PowerSoft -- InfoMaker
SAS Institute -- SAS/Assist
Software AG -- Esperant
Sterling Software -- VISION:Data
152
Informatica -- OpenBridge
Information Builders Inc. -- EDA Copy Manager
Platinum Technology -- InfoRefiner
Prism Solutions -- Prism Warehouse Manager
Red Brick Systems -- DecisionScape Formation
155
Scrubbing Tools
Apertus -- Enterprise/Integrator
Vality -- IPE
Postal Soft
156
Warehouse Products
Computer Associates -- CA-Ingres
Hewlett-Packard -- Allbase/SQL
Informix -- Informix, Informix XPS
Microsoft -- SQL Server
Oracle -- Oracle7, Oracle Parallel Server
Red Brick -- Red Brick Warehouse
SAS Institute -- SAS
Software AG -- ADABAS
Sybase -- SQL Server, IQ, MPP
157
Sybase
Adaptive Server 11.5
Sybase MPP
Sybase IQ
158
Teradata
159
161
PowerSoft -- PowerBuilder
162
163
Data Warehouse
W.H. Inmon, Building the Data Warehouse, Second Edition, John Wiley and Sons, 1996
W.H. Inmon, J.D. Welch, Katherine L. Glassey, Managing the Data Warehouse, John Wiley and Sons, 1997
Barry Devlin, Data Warehouse: From Architecture to Implementation, Addison Wesley Longman, Inc., 1997
164
Data Warehouse
W.H. Inmon, John A. Zachman, Jonathan G. Geiger, Data Stores, Data Warehousing and the Zachman Framework, McGraw-Hill Series on Data Warehousing and Data Management, 1997
Ralph Kimball, The Data Warehouse Toolkit, John Wiley and Sons, 1996
165
Data Mining
Michael J.A. Berry and Gordon Linoff, Data Mining Techniques, John Wiley and Sons, 1997
Peter Adriaans and Dolf Zantinge, Data Mining, Addison Wesley Longman Ltd., 1996
KDD Conferences
167
Other Tutorials
Donovan Schneider, Data Warehousing Tutorial, tutorials at the International Conference on Management of Data (SIGMOD 1996) and the International Conference on Very Large Data Bases (VLDB 1997)
Umeshwar Dayal and Surajit Chaudhuri, Data Warehousing Tutorial, International Conference on Very Large Data Bases 1996
Anand Deshpande and S. Seshadri, Tutorial on Data Warehousing and Data Mining, CSI-97
168
Useful URLs
Ralph Kimball's home page
http://www.rkimball.com
OLAP Council
http://www.olapcouncil.com/
169