The document discusses using Hadoop as a data hub. It describes how a data hub allows organizations to source data once and reuse it many times, eliminating the need for complex and costly ETL processes. Key benefits of a Hadoop data hub include making all data available with low latency, empowering business users with self-service access, and reducing IT costs through a "source once, reuse many times" approach. The document also provides an overview of how Sears has implemented a Hadoop data hub to modernize their legacy systems and analytics capabilities.
9. What is a Data Hub
A single, consolidated, fully populated data archive that gives users unfettered access to analyze and report on data, with appropriate security, as soon as the data is created by the transactional or other source system.
10. Why a Data Hub
• Most data latency is removed
• Users and analysts are put in a self-service mode
• The concept of a “data cube” is unnecessary
• Analysis at the lowest level of detail – no need to roll data up to the segment level
• Any question can be asked
• Business users and analysts have unrestricted ability to explore
• Correlation of any data set is immediately possible
• Significant reduction in reporting and analysis times
– Time to source the data
– Time for users to gain access to the data
• Reduction in IT labor
– Source Once – Use Many Times
11. The Traditional Approach
• Data is copied from source systems via ETL
• Only sub-sets of data are captured
– Too expensive to keep all detail
– Takes too long to ETL all data fields from sources
• Each new use of data generates more unique ETL jobs
• Data is segmented to reduce query times
• Cubes or views are generated to improve analysis speed
• Disparate data silos require ETL before users have access
• Data warehouse costs and performance limitations force archiving and data truncation
• Tends to lead to different versions of the “truth”
• Time lag, or latency, from data generation to use
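The subset-and-aggregate pattern of the traditional approach can be sketched in a few lines of Python. This is a minimal illustration, not any specific pipeline; the field names and the store-level aggregation are hypothetical.

```python
# Minimal sketch of the traditional ETL pattern: extract a subset of
# fields, pre-aggregate, and load only the summary. Field names and
# the aggregation are hypothetical illustrations.

raw_transactions = [
    {"txn_id": 1, "store": "A", "sku": "X1", "amount": 10.0, "ts": "2013-01-01"},
    {"txn_id": 2, "store": "A", "sku": "X2", "amount": 5.0,  "ts": "2013-01-01"},
    {"txn_id": 3, "store": "B", "sku": "X1", "amount": 7.5,  "ts": "2013-01-02"},
]

def traditional_etl(rows):
    """Keep only store-level totals; detail is discarded before load.

    The detail (txn_id, sku, timestamp) is dropped to keep the target
    warehouse small -- questions at that grain can no longer be asked.
    """
    totals = {}
    for row in rows:
        totals[row["store"]] = totals.get(row["store"], 0.0) + row["amount"]
    return totals

warehouse = traditional_etl(raw_transactions)
# The warehouse answers "sales per store"...
assert warehouse == {"A": 15.0, "B": 7.5}
# ...but "which SKU sold on which day" now requires a new ETL job.
```

Each new question at a finer grain means another one-off ETL job against the source system, which is the cost the data-hub approach removes.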
12. Benefits - Hadoop as a Data Hub
• All data is available
– All history
– All detail
• No need to filter, segment or cube before use
• Data can be consumed almost immediately
• No need to silo into different databases to accommodate performance limitations
• Users do not require IT to ETL data before use
• Security is applied via Datameer profiles
• User self-service is a reality
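The benefits above amount to landing full detail once and deriving any view at query time. A minimal Python sketch of that "source once, reuse many times" idea, with hypothetical field names:

```python
# Minimal sketch of the data-hub pattern: land full detail once, then
# derive any view on demand. Field names are hypothetical illustrations.

data_hub = [  # full history, full detail, loaded once
    {"txn_id": 1, "store": "A", "sku": "X1", "amount": 10.0, "day": "2013-01-01"},
    {"txn_id": 2, "store": "A", "sku": "X2", "amount": 5.0,  "day": "2013-01-01"},
    {"txn_id": 3, "store": "B", "sku": "X1", "amount": 7.5,  "day": "2013-01-02"},
]

def view(rows, key):
    """Aggregate the retained detail by any attribute, at query time."""
    out = {}
    for row in rows:
        out[row[key]] = out.get(row[key], 0.0) + row["amount"]
    return out

# No new ETL job per question: any grouping can be asked of the same
# landed data, because nothing was filtered, segmented, or cubed away.
by_store = view(data_hub, "store")
by_sku = view(data_hub, "sku")
by_day = view(data_hub, "day")
```

Because the lowest-level detail is retained, no pre-built cube or segment limits which questions can be asked later.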
13. Prerequisites
• An enterprise data architecture with a Data Hub as its foundation
• Data sourcing must be controlled
• Metadata must be created for every data source
• A leader with the vision and capability to drive the change
• Willing business users to pilot and coach others
• A sustained enterprise data architecture and governance strategy
• A carefully designed Hadoop data-layer architecture
14. Key Concepts
• A Data Hub is now a reality
• Drives lower costs and reduces delays
• Time to value for data is reduced
• Business users and analysts are empowered
• The most important:
– Source Once – Re-use Many Times
– Source everything
– Retain everything
15. Key Learning
o ETL complexity is no longer needed – DATA HUB
– Source Once – Re-Use Many Times
– ETL is transformed to ELTTTTTT with lower data latency
– Consume data in place with Datameer
o ETL-induced data latency is largely eliminated
– Analysis is routinely possible within minutes of data creation
o Long-running overnight workloads on legacy systems
– The overnight window can be eliminated – jobs can run at any time
– Run times are a fraction of the original clock time
o Batch processing on mainframes or other conventional batch platforms
– Moved to Hadoop
– Runs 10, 50, even 100 times faster
o Intelligent Archive
– Put your archive/tape data on Hadoop and make it intelligent
– An archive with the ability to run analytics or be joined with other data
o Modernize Legacy
– Mainframe MIPS reduction has a very attractive ROI
– Move data warehouse workload – reduce cost – go faster
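The ELTTTTTT idea above (extract and load once, then transform as many times as needed against the landed copy) can be sketched minimally in Python. All names here are hypothetical illustrations; the point is that the raw landed data is never mutated and never re-extracted.

```python
# Minimal sketch of ELT with repeated transforms ("ELTTTTTT"): extract
# and load raw data once, then run transform passes against the landed
# copy. All names are hypothetical illustrations.

landed = [  # raw data, loaded once and never mutated
    {"order": 1, "region": "east", "qty": 3, "price": 2.0},
    {"order": 2, "region": "west", "qty": 1, "price": 5.0},
]

def t_revenue(rows):
    """Transform pass #1: enrich each row with a derived field."""
    return [dict(r, revenue=r["qty"] * r["price"]) for r in rows]

def t_by_region(rows):
    """Transform pass #2: aggregate a prior derived set."""
    out = {}
    for r in rows:
        out[r["region"]] = out.get(r["region"], 0.0) + r["revenue"]
    return out

# Each transform reads the landed copy (or a prior derived set); no
# re-extraction from the source system is ever needed.
enriched = t_revenue(landed)
regional = t_by_region(enriched)
```

Because transforms run against the landed copy, adding a seventh or seventieth "T" costs a new query, not a new extraction from the source system.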