Apache Flume - Data Transfer In Hadoop


Big Data, as we know, is a collection of large datasets that cannot be processed using traditional computing techniques. Big Data, when analyzed, gives valuable results. Hadoop is an open-source framework that allows us to store and process Big Data in a distributed environment across clusters of computers using simple programming models.

Streaming / Log Data

Generally, most of the data that is to be analyzed will be produced by various data sources such as application servers, social networking sites, cloud servers, and enterprise servers. This data will be in the form of log files and events.
Log file − In general, a log file is a file that lists the events/actions that occur in an operating system. For example, web servers list every request made to the server in the log files.

On harvesting such log data, we can −

analyze the application performance and locate various software and hardware failures.

study the user behavior and derive better business insights.


The traditional method of transferring data into the HDFS system is to use the put command. Let us see how to use it.

HDFS put Command


The main challenge in handling the log data is in moving these logs produced by multiple servers to the Hadoop environment.

The Hadoop File System Shell provides commands to insert data into Hadoop and read data from it. You can insert data into Hadoop using the put command as shown below.

$ hadoop fs -put <path of the required file> <path in HDFS where to save the file>

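For instance, a single web-server log could be copied into HDFS as follows (the local file and the HDFS destination directory here are only illustrative):

$ hadoop fs -put /var/log/httpd/access.log /user/hadoop/logs/
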
Problem with put Command
We can use the put command of Hadoop to transfer data from these sources to HDFS. But it suffers from the following drawbacks −

Using the put command, we can transfer only one file at a time, while the data generators generate data at a much higher rate. Since analysis made on older data is less accurate, we need a solution that transfers data in real time.

If we use the put command, the data needs to be packaged and ready for upload. Since web servers generate data continuously, this is a very difficult task.

What we need here is a solution that can overcome the drawbacks of the put command and transfer the "streaming data" from data generators to centralized stores (especially HDFS) with less delay.

Problem with HDFS


In HDFS, the file exists as a directory entry and the length of the file is considered to be zero until it is closed. For example, if a source is writing data into HDFS and the network is interrupted in the middle of the operation (without closing the file), then the data written to the file will be lost. Therefore, we need a reliable, configurable, and maintainable system to transfer the log data into HDFS.

Note − In a POSIX file system, whenever we access a file (say, to perform a write operation), other programs can still read this file (at least the saved portion of the file). This is because the file exists on the disk before it is closed.

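As an illustration (assuming a running HDFS instance and a file that another process is still writing), listing the file while the write is in progress will typically report a length of zero, or at most the length of the last completed block, until the writer closes it. The path below is purely illustrative:

$ hadoop fs -ls /user/hadoop/logs/incoming.log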

Available Solutions
To send streaming data (log files, events, etc.) from various sources to HDFS, we have the following tools available at our disposal −

Facebook’s Scribe
Scribe is an immensely popular tool that is used to aggregate and stream log data. It is designed to
scale to a very large number of nodes and be robust to network and node failures.

Apache Kafka
Kafka was developed by the Apache Software Foundation. It is an open-source message broker. Using Kafka, we can handle feeds with high throughput and low latency.

Apache Flume
Apache Flume is a tool/service/data ingestion mechanism for collecting, aggregating, and transporting large amounts of streaming data, such as log data and events, from various web servers to a centralized data store.

It is a highly reliable, distributed, and configurable tool that is principally designed to transfer streaming data from various sources to HDFS.

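To give a flavour of how such a transfer is described, below is a minimal sketch of a Flume agent configuration that takes events from a NetCat source, buffers them in a memory channel, and delivers them to an HDFS sink. The agent name, port, and HDFS path are illustrative placeholders; the following chapters explain each part of this configuration in detail.

# minimal-agent.conf − an illustrative Flume agent (names, port, and path are hypothetical)
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# NetCat source: listens for newline-terminated events on a TCP port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

# Memory channel: buffers events between the source and the sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

# HDFS sink: writes the buffered events into HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://localhost:9000/user/flume/logs
agent1.sinks.sink1.channel = ch1

Such an agent would typically be started with the flume-ng agent command, pointing it at the configuration file and the agent name (for example, flume-ng agent --conf conf --conf-file minimal-agent.conf --name agent1).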

In this tutorial, we will discuss in detail how to use Flume with some examples.
