Naukri SazzadAli (6y 6m)
Professional Goals
To associate with a progressive organization that gives me scope to update my knowledge and skills in software
development according to the latest trends, and to be part of a team that works dynamically towards the growth
of the organization while gaining professional satisfaction.
Career Summary
Overall 6+ years of IT experience across a variety of projects, including hands-on experience in Big Data
analytics and development.
Strong experience with Hadoop distributions such as Cloudera.
Expertise with tools in the Hadoop ecosystem, including Spark, Hive, HDFS, and Spark SQL.
Worked with Amazon Web Services (AWS), using EC2 for compute and S3 for storage in a proof of concept (POC).
Experience in designing and developing applications in Spark using Scala to compare the performance
of Spark with Hive.
Experience in collecting and storing streaming data, such as log data, in HDFS.
Working knowledge of current frameworks and languages such as Spark and Scala.
TECHNICAL SKILLS
Academic Qualifications
Key Responsibilities:
Understood business requirements and wrote business logic aligned with delivery goals.
Developed Spark applications in Scala, using DataFrames, Spark SQL, and Datasets for data aggregation
and queries.
Worked in a Cloudera environment, using S3 for storage.
Optimized existing Hadoop algorithms using SparkContext, Spark SQL, and DataFrames.
Worked with Spark SQL using DataFrames for more complex queries.
Translated complex functional and technical requirements into detailed designs.
Involved in creating Hive tables, and loading and analyzing data using Hive queries.
Performed data cleansing for verification and validation.
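The DataFrame and Spark SQL aggregation work described above can be sketched as follows. This is a minimal illustration, not the actual project code: the S3 path, table name, and column names (customer_id, amount) are hypothetical, and a running Spark environment is assumed.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AggregationSketch {
  def main(args: Array[String]): Unit = {
    // Assumes a Cloudera/YARN or local Spark environment.
    val spark = SparkSession.builder()
      .appName("aggregation-sketch")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input: order records read from S3 (path is illustrative).
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-bucket/orders/")

    // DataFrame API aggregation: total and average amount per customer.
    val summary = orders
      .groupBy($"customer_id")
      .agg(sum($"amount").as("total_amount"),
           avg($"amount").as("avg_amount"))

    // Equivalent aggregation expressed in Spark SQL against a temp view.
    orders.createOrReplaceTempView("orders")
    val summarySql = spark.sql(
      """SELECT customer_id,
        |       SUM(amount) AS total_amount,
        |       AVG(amount) AS avg_amount
        |FROM orders
        |GROUP BY customer_id""".stripMargin)

    summary.show()
    spark.stop()
  }
}
```

Both forms compile to the same logical plan, so the choice between the DataFrame API and Spark SQL is largely a readability preference.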
Key Responsibilities:
Developed the Spark engine module per requirements.
Developed Spark scripts and UDFs using DataFrames, Spark SQL, and Datasets for data aggregation and queries.
Worked in a Cloudera environment.
Optimized existing Hadoop algorithms using SparkContext, Spark SQL, and DataFrames.
Worked with Spark SQL using DataFrames for more complex queries.
Translated complex functional and technical requirements into detailed designs.
Involved in creating Hive tables, and loading and analyzing data using Hive queries.
Performed data cleansing for verification and validation.
Responsible for completing assigned project tasks.
Initiated escalation procedures for incidents based on agreed timelines and tracked them to closure.
Managed the queue effectively and allocated tasks to the team based on an allocation plan.
Contributed and participated proactively in knowledge-sharing sessions.
Provided complete knowledge transfer (KT) to support teams before any production release.
Key Responsibilities:
Developed Spark scripts using the Scala shell as per requirements.
Developed Scala scripts and UDFs using DataFrames, Spark SQL, Datasets, and RDDs/MapReduce in Spark 2.x for
data aggregation and queries.
Worked in a SimCloud environment.
Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, and pair RDDs.
Worked with Spark SQL using DataFrames for more complex queries.
Translated complex functional and technical requirements into detailed designs.
Involved in creating Hive tables, and loading and analyzing data using Hive queries.
Performed data cleansing for verification and validation.
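The pair RDD optimization mentioned above typically means preferring map-side combining over a full shuffle. A minimal sketch (the data and key names are hypothetical; a Spark runtime is assumed):

```scala
import org.apache.spark.sql.SparkSession

object PairRddSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pair-rdd-sketch")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical (key, value) pairs, e.g. (page, hits) parsed from logs.
    val hits = sc.parallelize(Seq(("home", 1), ("search", 1), ("home", 1)))

    // reduceByKey combines values map-side before the shuffle, which is the
    // standard optimization over groupByKey for per-key aggregations.
    val totals = hits.reduceByKey(_ + _)

    totals.collect().foreach(println)
    spark.stop()
  }
}
```

The same per-key totals could be computed with groupByKey followed by a sum, but that shuffles every record; reduceByKey moves far less data across the network.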
Key Responsibilities:
Created and maintained Hive tables.
Implemented partitioning, dynamic partitions, and bucketing in Hive.
Imported and exported data between HDFS/Hive and relational databases using Sqoop.
Generated reports for various investigations and procedures.
Performed daily checks on Sqoop jobs that fetch data from the production server.
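The Hive partitioning and dynamic-partition work above can be sketched with Spark's Hive support. Table, column, and database names are hypothetical; a Hive-enabled Spark environment is assumed, and the Sqoop step is shown only as a shell comment since Sqoop runs outside Spark.

```scala
import org.apache.spark.sql.SparkSession

object HiveTableSketch {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark create and manage Hive-metastore tables.
    val spark = SparkSession.builder()
      .appName("hive-table-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Partitioned table (names and columns are illustrative). In Hive itself,
    // bucketing would be added with: CLUSTERED BY (order_id) INTO 8 BUCKETS.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS sales (
        order_id BIGINT,
        amount   DOUBLE
      )
      PARTITIONED BY (order_date STRING)
      STORED AS ORC
    """)

    // Dynamic partitioning: partition values come from the data itself,
    // so each distinct order_date in staging_sales becomes a partition.
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql("""
      INSERT INTO TABLE sales PARTITION (order_date)
      SELECT order_id, amount, order_date FROM staging_sales
    """)

    // A Sqoop import feeding the staging table would be run from the shell:
    //   sqoop import --connect jdbc:mysql://prod-db/sales \
    //     --table staging_sales --hive-import --hive-table staging_sales
    spark.stop()
  }
}
```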
Strengths
Strong determination to succeed and learn new things.
Flexible, adapting quickly to new working environments.
Able to work independently and as part of a team.
Proven leadership skills.
Organized and methodical approach towards deliverables, ensuring timelines are met.
Excellent research capabilities with a proactive approach towards leveraging available resources to
achieve results.
DECLARATION:
I hereby declare that all the information given above is true and correct to the best of my knowledge.