
1

© 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Matt Yanchyshyn, Sr. Mgr. Solutions Architecture
October 2015
BDT205
Building Your First Big Data Application

2

Amazon S3
Amazon Kinesis
Amazon DynamoDB
Amazon RDS (Aurora)
AWS Lambda
KCL Apps
Amazon EMR
Amazon Redshift
Amazon Machine Learning
Collect  Process  Analyze  Store
Data Collection and Storage
Data Processing
Event Processing
Data Analysis
Data Answers

3

Your first big data application on AWS

4

Collect Process Analyze
Store
Data Answers

5

Collect Process Analyze
Store
Data Answers

6

Collect Process Analyze
Store
Data Answers
SQL

7

Set up with the AWS CLI

8

Amazon Kinesis
Create a single-shard Amazon Kinesis stream for incoming
log data:
aws kinesis create-stream \
  --stream-name AccessLogStream \
  --shard-count 1

9

Amazon S3
YOUR-BUCKET-NAME
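A minimal sketch of creating the bucket with the AWS CLI (the bucket name is a placeholder):

# create the S3 bucket that will hold raw and processed logs
aws s3 mb s3://YOUR-BUCKET-NAME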

10

Amazon EMR
Launch a 3-node Amazon EMR cluster with Spark and Hive:
m3.xlarge
YOUR-AWS-SSH-KEY
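A sketch of the launch with the AWS CLI; the cluster name and release label are assumptions (any 2015-era EMR release that bundles Spark and Hive works):

# 3 x m3.xlarge with Hive and Spark installed
aws emr create-cluster \
  --name "BDT205-demo" \
  --release-label emr-4.1.0 \
  --applications Name=Hive Name=Spark \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes KeyName=YOUR-AWS-SSH-KEY \
  --use-default-roles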

11

Amazon Redshift

CHOOSE-A-REDSHIFT-PASSWORD
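A sketch of the cluster creation with the AWS CLI; the identifier, database name, master username, and node type are assumptions:

# single-node Redshift cluster for the lab
aws redshift create-cluster \
  --cluster-identifier demo \
  --db-name demo \
  --node-type dc1.large \
  --cluster-type single-node \
  --master-username master \
  --master-user-password CHOOSE-A-REDSHIFT-PASSWORD \
  --publicly-accessible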

12

Your first big data application on AWS
1. COLLECT: Stream data into Amazon Kinesis with Log4J
2. PROCESS: Process data with Amazon EMR using Spark & Hive
STORE
3. ANALYZE: Analyze data in Amazon Redshift using SQL

13

1. Collect

14

Amazon Kinesis Log4J Appender
In a separate terminal window on your local machine,
download Log4J Appender:
Then download and save the sample Apache log file:
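A sketch of the two downloads with wget; the host URL is a placeholder (not the lab's original location), and the jar and log-file names are assumptions carried through the later slides:

# YOUR-LAB-ASSETS-URL is a placeholder for wherever the lab assets are hosted
wget https://YOUR-LAB-ASSETS-URL/kinesis-log4j-appender-1.0.0.jar
wget https://YOUR-LAB-ASSETS-URL/access_log_1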

15

Amazon Kinesis Log4J Appender
Create a file called AwsCredentials.properties with
credentials for an IAM user with permission to write to
Amazon Kinesis:
accessKey=YOUR-IAM-ACCESS-KEY
secretKey=YOUR-SECRET-KEY
Then start the Amazon Kinesis Log4J Appender:
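A sketch of starting the appender, assuming it is run from the directory containing AwsCredentials.properties and that the jar ships a sample file-publisher main class; the class name and input file are assumptions, not confirmed by the slides:

# class name and input file are assumptions
java -cp .:kinesis-log4j-appender-1.0.0.jar \
  com.amazonaws.services.kinesis.log4j.FilePublisher access_log_1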

16

Log file format
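The fields correspond to the Apache combined log format, which is what the Hive table on a later slide parses. An illustrative line (not taken from the actual dataset):

66.249.67.3 - - [20/Jul/2009:20:12:22 -0700] "GET /gallery/main.php HTTP/1.1" 302 5 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
host  identity  user  [request_time]  "request"  status  size  "referrer"  "agent"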

17

Spark
• Fast, general purpose engine
for large-scale data processing
• Write applications quickly in
Java, Scala, or Python
• Combine SQL, streaming, and
complex analytics

18

Amazon Kinesis and Spark Streaming
Log4J Appender
Amazon Kinesis
Amazon S3
Amazon DynamoDB
Spark Streaming uses the Kinesis Client Library
Amazon EMR

19

Using Spark Streaming on Amazon EMR
ssh -o TCPKeepAlive=yes -o ServerAliveInterval=30 \
  -i YOUR-AWS-SSH-KEY hadoop@YOUR-EMR-HOSTNAME
On your cluster, download the Amazon Kinesis client for
Spark:
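A sketch of the download: amazon-kinesis-client-1.6.0.jar, which the spark-shell invocation on the next slide references, is published on Maven Central, so presumably something like:

# fetch the KCL jar passed to spark-shell --jars below
wget http://repo1.maven.org/maven2/com/amazonaws/amazon-kinesis-client/1.6.0/amazon-kinesis-client-1.6.0.jar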

20

Using Spark Streaming on Amazon EMR
Cut down on console noise:
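One common approach (an assumption, not necessarily the original command) is to raise the Spark log4j root logger from INFO to ERROR:

# assumption: quiet the shell by raising the log threshold
sudo sed -i 's/INFO/ERROR/' /etc/spark/conf/log4j.properties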
Start the Spark shell:
spark-shell \
  --jars /usr/lib/spark/extras/lib/spark-streaming-kinesis-asl.jar,amazon-kinesis-client-1.6.0.jar \
  --driver-java-options "-Dlog4j.configuration=file:///etc/spark/conf/log4j.properties"

21

Using Spark Streaming on Amazon EMR
/* import required libraries */
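A sketch of an import list that covers everything the snippets on the following slides reference:

// AWS SDK and Kinesis Client Library
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.services.kinesis.AmazonKinesisClient
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
// Spark Streaming and the Kinesis receiver
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kinesis.KinesisUtils
// used when naming S3 output folders by date
import java.text.SimpleDateFormat
import java.util.Date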

22

Using Spark Streaming on Amazon EMR
/* Set up the variables as needed */
YOUR-REGION
YOUR-S3-BUCKET
/* Reconfigure the spark-shell */
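A sketch of the variable assignments the later snippets assume; the region, bucket, and batch interval are placeholders, and the slide's "reconfigure the spark-shell" step is not reproduced here:

/* placeholders: substitute your region and bucket */
val endpointUrl    = "https://kinesis.YOUR-REGION.amazonaws.com"
val streamName     = "AccessLogStream"
val outputDir      = "s3://YOUR-S3-BUCKET/access-log-raw"
val outputInterval = Seconds(60)   // Spark Streaming batch interval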

23

Reading Amazon Kinesis with Spark Streaming
/* Setup the KinesisClient */
val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
kinesisClient.setEndpoint(endpointUrl)

/* Determine the number of shards in the stream */
val numShards = kinesisClient.describeStream(streamName)
  .getStreamDescription().getShards().size()

/* Create one worker per Kinesis shard */
val ssc = new StreamingContext(sc, outputInterval)
val kinesisStreams = (0 until numShards).map { i =>
  KinesisUtils.createStream(ssc, streamName, endpointUrl, outputInterval,
    InitialPositionInStream.TRIM_HORIZON, StorageLevel.MEMORY_ONLY)
}

24

Writing to Amazon S3 with Spark Streaming
/* Merge the worker Dstreams and translate the byteArray to string */
/* Write each RDD to Amazon S3 */
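A sketch built on the names from the previous slides, writing each batch under Hive-style partition folders (year=/month=/day=/hour=/min=) so that the MSCK REPAIR TABLE statement on a later slide can discover them:

/* merge the per-shard DStreams and decode the Kinesis records to strings */
val unionStreams = ssc.union(kinesisStreams)
val accessLogs   = unionStreams.map(byteArray => new String(byteArray))

/* write every non-empty batch to a dated prefix under the raw-log location */
accessLogs.foreachRDD((rdd, time) => {
  if (rdd.count() > 0) {
    val partition = new SimpleDateFormat("'year='yyyy/'month='MM/'day='dd/'hour='HH/'min='mm")
      .format(new Date(time.milliseconds))
    rdd.saveAsTextFile("%s/%s".format(outputDir, partition))
  }
})

ssc.start()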

25

View the output files in Amazon S3
YOUR-S3-BUCKET
YOUR-S3-BUCKET
yyyy mm dd HH
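A sketch of the listing, presumably an aws s3 ls over the raw prefix:

aws s3 ls s3://YOUR-S3-BUCKET/access-log-raw/ --recursive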

26

2. Process

27

Spark SQL
Spark's module for working with structured data using SQL
Run unmodified Hive queries on existing data

28

Using Spark SQL on Amazon EMR
ssh -i YOUR-AWS-SSH-KEY hadoop@YOUR-EMR-HOSTNAME
Start the Spark SQL shell:
spark-sql --driver-java-options "-Dlog4j.configuration=file:///etc/spark/conf/log4j.properties"

29

Create a table that points to your Amazon S3 bucket
CREATE EXTERNAL TABLE access_log_raw(
  host STRING, identity STRING,
  user STRING, request_time STRING,
  request STRING, status STRING,
  size STRING, referrer STRING,
  agent STRING
)
PARTITIONED BY (year INT, month INT, day INT, hour INT, min INT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?"
)
LOCATION 's3://YOUR-S3-BUCKET/access-log-raw';

MSCK REPAIR TABLE access_log_raw;

30

Query the data with Spark SQL
-- return the first row in the stream
-- return count all items in the Stream
-- find the top 10 hosts
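Sketches of queries matching the three comments above:

-- return the first row
SELECT * FROM access_log_raw LIMIT 1;

-- count all items
SELECT COUNT(1) FROM access_log_raw;

-- find the top 10 hosts
SELECT host, COUNT(1) AS total
FROM access_log_raw
GROUP BY host
ORDER BY total DESC
LIMIT 10;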

31

Preparing the data for Amazon Redshift import
We will transform the data returned by the query before writing it to our external, Amazon S3-backed Hive table.
Hive user-defined functions (UDFs) handle the text transformations: from_unixtime, unix_timestamp, and hour.
The “hour” value is important: it is what's used to split and organize the output files before they are written to Amazon S3. These splits let us load the data into Amazon Redshift more efficiently later in the lab using the parallel “COPY” command.

32

Create an external table in Amazon S3
YOUR-S3-BUCKET
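A sketch of the DDL, kept consistent with the columns written by the INSERT on slide 34 and with the tab-delimited, gzipped files the Redshift COPY on slide 39 expects; the column types are assumptions:

CREATE EXTERNAL TABLE access_log_processed (
  request_time STRING,
  host STRING,
  request STRING,
  status STRING,
  referrer STRING,
  agent STRING
)
PARTITIONED BY (hour INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://YOUR-S3-BUCKET/access-log-processed';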

33

Configure partition and compression
-- setup Hive's "dynamic partitioning"
-- this will split output files when writing to Amazon S3
-- compress output files on Amazon S3 using Gzip
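The usual Hive settings for the three comments above (assumed, not taken from the slides):

-- enable dynamic partitioning so PARTITION (hour) is derived from the data
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- gzip the output files written to Amazon S3
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;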

34

Write output to Amazon S3
-- convert the Apache log timestamp to a UNIX timestamp
-- split files in Amazon S3 by the hour in the log lines
INSERT OVERWRITE TABLE access_log_processed PARTITION (hour)
SELECT
  from_unixtime(unix_timestamp(request_time, '[dd/MMM/yyyy:HH:mm:ss Z]')),
  host,
  request,
  status,
  referrer,
  agent,
  hour(from_unixtime(unix_timestamp(request_time, '[dd/MMM/yyyy:HH:mm:ss Z]'))) AS hour
FROM access_log_raw;

35

View the output files in Amazon S3
YOUR-S3-BUCKET
YOUR-S3-BUCKET
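As before, a sketch of the listing over the processed prefix:

aws s3 ls s3://YOUR-S3-BUCKET/access-log-processed/ --recursive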

36

3. Analyze

37

Connect to Amazon Redshift
# using the PostgreSQL CLI
YOUR-REDSHIFT-ENDPOINT
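A sketch of the connection, assuming the master user and database chosen when the cluster was created and the default Redshift port:

psql -h YOUR-REDSHIFT-ENDPOINT -p 5439 -U master demo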
Or use any JDBC or ODBC SQL client with the PostgreSQL
8.x drivers or native Amazon Redshift support
• Aginity Workbench for Amazon Redshift
• SQL Workbench/J

38

Create an Amazon Redshift table to hold your data
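A sketch matching the six columns produced by the Hive step and loaded by the COPY on the next slide; the column types and the DISTKEY/SORTKEY choices are assumptions:

CREATE TABLE accesslogs (
  request_time TIMESTAMP,
  host VARCHAR(50),
  request VARCHAR(2048),
  status INT,
  referrer VARCHAR(2048),
  agent VARCHAR(2048)
)
DISTKEY (host)
SORTKEY (request_time);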

39

Loading data into Amazon Redshift
The “COPY” command loads files in parallel:
COPY accesslogs
FROM 's3://YOUR-S3-BUCKET/access-log-processed'
CREDENTIALS 'aws_access_key_id=YOUR-IAM-ACCESS-KEY;aws_secret_access_key=YOUR-IAM-SECRET-KEY'
DELIMITER '\t' IGNOREHEADER 0
MAXERROR 0
GZIP;

40

Amazon Redshift test queries
-- find distribution of status codes over days
-- find the 404 status codes
-- show all requests for status as PAGE NOT FOUND
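Sketches of queries matching the three comments above:

-- distribution of status codes over days
SELECT TRUNC(request_time) AS day, status, COUNT(1) AS total
FROM accesslogs
GROUP BY TRUNC(request_time), status
ORDER BY 1, 2;

-- count the 404 status codes
SELECT COUNT(1) AS total_404s FROM accesslogs WHERE status = 404;

-- most-requested missing pages
SELECT request, COUNT(1) AS times
FROM accesslogs
WHERE status = 404
GROUP BY request
ORDER BY times DESC
LIMIT 10;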

41

Your first big data application on AWS
A favicon would fix 398 of the total 977 PAGE NOT FOUND (404) errors

42

Visualize the results
• Client-side JavaScript example using Plottable, a library built on D3
• Hosted on Amazon S3 for pennies a month
• AWS Lambda function used to query Amazon Redshift

43

Try it yourself on the AWS cloud…
…around the same cost as a cup of coffee ($3.50)
Service Est. Cost*
Amazon Kinesis $1.00
Amazon S3 (free tier) $0
Amazon EMR $0.44
Amazon Redshift $1.00
Est. Total $2.44
*Estimated costs assume use of the free tier where available, lower-cost instances, a dataset no larger than 10 MB, and instances running for less than 4 hours. Costs may vary depending on the options selected, size of dataset, and usage.

44

Learn from AWS big data experts
blogs.aws.amazon.com/bigdata

45

Remember to complete
your evaluations!

46

Thank you!
