Moinuddib Ahmed Syed

Email: syed.engg123@gmail.com
(914) 236-0661

Summary
 Overall 8+ years of experience as a Data Engineer and Data Analyst, including designing, developing, and implementing data models for enterprise-level applications and systems.
 Excellent understanding of Hadoop architecture and underlying framework including storage management.
 Excellent working experience in Scrum / Agile framework and Waterfall project execution methodologies.
 Good working knowledge of analysis tools like Tableau for regression analysis, pie charts, and bar graphs.
 Experience in Data transformation, Data mapping from source to target database schemas, Data Cleansing procedures.
 Experience working with NoSQL databases - HBase, Cassandra & MongoDB - including database performance tuning and data modeling.
 Expertise in developing Big Data solutions covering data ingestion and data storage.
 Highly motivated to work on Python scripts for statistical analytics and for generating Data Quality reports.
 Good experience in using Sqoop for traditional RDBMS data pulls.
 Good knowledge of cloud technologies like Azure and AWS (EMR, S3, Redshift, EC2, DynamoDB).
 Extensive experience in loading and analyzing large datasets with Hadoop framework (HDFS, PIG, HIVE, Flume,
Sqoop).
 Hands-on experience in Normalization and Denormalization techniques for effective and optimum performance in OLTP and OLAP environments.
 Good experience in Data Modeling and Data Analysis; proficient in gathering business requirements and handling requirements management.
 Solid knowledge of Dimensional Data Modeling with the Ralph Kimball methodology (Star Schema and Snowflake modeling for Fact and Dimension tables) using Analysis Services.
 Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
 Strong experience in migrating data warehouses and databases into Hadoop/NoSQL platforms.
 Expertise in Data Migration, Data Profiling, Data Cleansing, Transformation, Integration, and Data Import.
 Extensive experience in using ER modeling tools such as Erwin and ER/Studio, as well as Teradata and MDM.
 Knowledge of using the Databricks platform, Cloudera Manager, and Hortonworks Distribution to monitor and manage clusters.
 Experience in configuring and administering the Hadoop Cluster using major Hadoop Distributions like Apache Hadoop
and Cloudera.
 Excellent experience in development of Big Data projects using Hadoop, Hive, HDP, Pig, Flume, Storm and MapReduce
open source tools/technologies.
 Worked on ad-hoc queries, indexing, replication, load balancing, and aggregation in MongoDB.

Technical Skills
 Hadoop Ecosystem: MapReduce, Spark 2.3, HBase 1.2, Hive 2.3, Pig 0.17, Solr 7.2, Flume 1.8, Sqoop 1.4, Kafka 1.0.1,
Oozie 4.3, Hue, Cloudera Manager, StreamSets, Neo4j, Hadoop 3.0, Apache Nifi 1.6, Cassandra 3.11
 Cloud Management: Microsoft Azure, Amazon Web Services (AWS)
 OLAP Tools: Tableau, SAP BO, SSAS, Business Objects, and Crystal Reports 9
 Programming Languages: SQL, PL/SQL, UNIX shell Scripting, PERL, AWK, SED
 RDBMS Databases: Oracle 12c/11g, Teradata R15/R14, MS SQL Server 2016/2014, DB2.
 NoSQL Databases: Cassandra, HBase, MongoDB, DynamoDB, Cosmos DB
 Testing and Defect Tracking Tools: HP/Mercury Quality Center, WinRunner, MS Visio 2016 & Visual SourceSafe
 Operating System: Windows 7/8/10, Unix, Sun Solaris
 ETL/Data warehouse Tools: Informatica v10, SAP Business Objects Business Intelligence 4.2 Service Pack 03, Talend,
Tableau, and Pentaho.
 Methodologies: RAD, JAD, RUP, UML, System Development Life Cycle (SDLC), Agile, Waterfall Model.

Work Experience

Amgen - Thousand Oaks, CA Mar’ 19 – Present


Data Engineer
Responsibilities
 As a Sr. Data Engineer, responsible for the design and development of Big Data applications using Cloudera Hadoop.
 Extracted and loaded data into a Data Lake environment (MS Azure) using Sqoop, which was accessed by business users.
 Used SDLC (System Development Life Cycle) methodologies like RUP and Agile methodology.
 Worked on Hadoop ecosystem implementation/administration, installing software patches along with system upgrades and configuration.
 Participated in JAD sessions and requirements gathering & identification of business subject areas.
 Configured Azure Cosmos DB Trigger in Azure Functions, which invokes the Azure Function when any changes are
made to the Azure Cosmos DB container.
 Imported and exported data between MySQL and HDFS using Sqoop and managed the data coming from different sources.
 Integrated visualizations into a Spark application using Databricks and popular visualization libraries (ggplot, matplotlib).
 Used Reverse Engineering approach to redefine entities, relationships and attributes in the data model.
 Involved in all phases of data mining, data collection, data cleaning, developing models, validation and visualization.
 Created Hive external tables to stage data and then moved the data from staging to main tables.
 Implemented Kafka producers to create custom partitions, configured brokers, and implemented high-level consumers for the data platform (an illustrative sketch appears after this section).
 Created dimensional model based on star schemas and snowflake schemas and designed them using Erwin.
 Configured Input & Output bindings of Azure Function with Azure Cosmos DB collection to read and write data from the
container whenever the function executes.
 Worked with ETL tools to migrate data from various OLTP databases to the data mart.
 Engaged in solving and supporting real business issues using Hadoop Distributed File System and open-source framework knowledge.
 Experienced in building a Data Warehouse on the Azure platform using Azure Databricks and Data Factory.
 Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
 Worked with production support team to provide necessary support for issues with CDH cluster and the data ingestion
platform.
 Worked in Azure environment for development and deployment of Custom Hadoop Applications.
 Developed SQL Queries to fetch complex data from different tables in remote databases using joins, database links and Bulk
collects.
 Designed and implemented scalable cloud data and analytics solutions for various public and private cloud platforms using Azure.
 Developed Scala scripts and UDFs using both DataFrames and SQL for data aggregation and queries, writing data back into the RDBMS through Sqoop (an illustrative sketch appears after this section).
 Developed Oozie workflow jobs to execute Hive, Sqoop, and MapReduce actions.
 Developed numerous MapReduce jobs in Scala for Data Cleansing and Analyzing Data in Impala.
 Used Hive to analyze data ingested into HBase using Hive-HBase integration and computed various metrics for reporting on the dashboard.
 Developed customized classes for serialization and De-serialization in Hadoop.
 Worked on ad-hoc queries, indexing, replication, load balancing, and aggregation in MongoDB.
Environment: Hadoop 3.0, HBase, Hive 2.3, HDFS, Scala 2.12, Sqoop 1.4, MapReduce, Apache Nifi, Azure, SQL Server, EC2, Erwin 9.7.
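
Illustrative sketch for the Kafka producer bullet above: a minimal Scala producer that registers a custom partitioner. This is not code from the project; the broker address, topic name, and the key-prefix routing rule are hypothetical.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, Partitioner, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.Cluster

// Custom partitioner (hypothetical rule): keys starting with "EU" go to partition 0,
// all other keys are spread across the partitions by key hash.
class RegionPartitioner extends Partitioner {
  override def partition(topic: String, key: Any, keyBytes: Array[Byte],
                         value: Any, valueBytes: Array[Byte], cluster: Cluster): Int = {
    val numPartitions = cluster.partitionsForTopic(topic).size
    val k = String.valueOf(key)
    if (numPartitions > 1 && k.startsWith("EU")) 0
    else (k.hashCode & Integer.MAX_VALUE) % numPartitions
  }
  override def close(): Unit = ()
  override def configure(configs: java.util.Map[String, _]): Unit = ()
}

object ProducerSketch extends App {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092") // hypothetical broker
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringSerializer")
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringSerializer")
  props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, classOf[RegionPartitioner].getName)

  val producer = new KafkaProducer[String, String](props)
  producer.send(new ProducerRecord[String, String]("events", "EU-1001", "{\"status\":\"ok\"}")) // hypothetical topic
  producer.close()
}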
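
Minimal Scala/Spark sketch of the DataFrame/UDF aggregation bullet above, under assumed table and column names. The original flow exported results to the RDBMS through Sqoop; this sketch uses Spark's JDBC writer as a stand-in for that step.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aggregation-sketch")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Hypothetical UDF: normalize free-text status codes before aggregating.
    val normalizeStatus = udf((s: String) => if (s == null) "UNKNOWN" else s.trim.toUpperCase)

    val claims = spark.table("staging.claims") // hypothetical Hive table
      .withColumn("status", normalizeStatus($"status"))

    // DataFrame aggregation: record counts and totals per site and status.
    val summary = claims
      .groupBy($"site_id", $"status")
      .agg(count(lit(1)).as("record_cnt"), sum($"amount").as("total_amount"))

    // Write the result back to the RDBMS (JDBC here; Sqoop export in the original flow).
    summary.write
      .format("jdbc")
      .option("url", "jdbc:sqlserver://reporting-db:1433;databaseName=marts") // hypothetical
      .option("dbtable", "dbo.claim_summary")
      .option("user", sys.env.getOrElse("DB_USER", "app"))
      .option("password", sys.env.getOrElse("DB_PASS", ""))
      .mode("overwrite")
      .save()

    spark.stop()
  }
}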

J. B. Hunt - Lowell, AR Apr’ 17 – Feb’ 19


Data Engineer
Responsibilities
 Worked as a Data Engineer to review business requirements and compose source-to-target data mapping documents.
 Responsible for data warehousing, data modeling, data governance, standards, methodologies, guidelines and techniques.
 Participated in JAD session with business users, sponsors and subject matter experts to understand the business
requirement document.
 Worked on Amazon Redshift as an AWS solution to load data, create data models, and run BI on top of it.
 Designed and implemented an end-to-end, near real-time data pipeline transferring data from DB2 tables into Hive on HDFS using Sqoop (an illustrative sketch appears after this section).
 Worked with DBAs to create a best fit physical data model from the logical data model.
 Translated business requirements into detailed, production-level technical specifications, new features, and conceptual models.
 Created 3NF business area data models with de-normalized physical implementations; performed data and information requirements analysis using the Erwin tool.
 Designed and implemented the data lake on the NoSQL database HBase with de-normalized tables suited to feed the downstream reporting applications.
 Developed Data Migration and Cleansing rules for the Integration Architecture (OLTP and DW).
 Developed Star and Snowflake schema-based dimensional models to build the data warehouse.
 Worked with cloud service providers such as Amazon AWS.
 Generated the DDL of the target data model and attached it to the Jira ticket to be deployed in different environments.
 Created reports from several discovered patterns using Microsoft excel to analyze pertinent data by pivoting.
 Worked on AWS Redshift and RDS for implementing models and data on RDS and Redshift.
 Applied Data Governance rules (primary qualifier, class words and valid abbreviation in Table name and Column names).
 Identified Facts & Dimensions Tables and established the Grain of Fact for Dimensional Models.
 Worked on the reporting requirements and involved in generating the reports for the Data Model.
Environment: Apache Hive 2.1, HDFS, Sqoop 1.4, HBase 1.2, OLTP, Jira, Teradata R14, SQL, AWS, MS Visio.
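
The near real-time DB2-to-Hive loads in this role were built with Sqoop; purely as an illustration of the same movement, the Scala/Spark sketch below reads a DB2 table over JDBC in parallel and lands it in a Hive table on HDFS. The connection URL, driver, table names, and partitioning bounds are hypothetical.

import org.apache.spark.sql.SparkSession

object Db2ToHiveSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("db2-to-hive-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Read the source DB2 table over JDBC, split into parallel reads on a numeric key.
    val shipments = spark.read
      .format("jdbc")
      .option("url", "jdbc:db2://db2-host:50000/FREIGHT") // hypothetical DB2 endpoint
      .option("driver", "com.ibm.db2.jcc.DB2Driver")
      .option("dbtable", "TMS.SHIPMENTS")                 // hypothetical table
      .option("user", sys.env.getOrElse("DB2_USER", "etl"))
      .option("password", sys.env.getOrElse("DB2_PASS", ""))
      .option("partitionColumn", "SHIPMENT_ID")
      .option("lowerBound", "1")
      .option("upperBound", "10000000")
      .option("numPartitions", "8")
      .load()

    // Land the data in a Hive table backed by HDFS for downstream consumers.
    shipments.write
      .mode("append")
      .saveAsTable("landing.shipments")

    spark.stop()
  }
}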

T-Mobile - Bellevue, WA Jan’ 16 – Mar’ 17


Data Modeler
Responsibilities
 Gathered and translated business requirements into detailed, production-level technical specifications, new features, and
enhancements to existing technical business functionality.
 Analyzed and translated conceptual models into logical data models, held JAD sessions, and communicated data-related issues and standards.
 Built a Data warehouse using Bill Inmon’s approach in order to support reporting needs.
 Developed Source to Target mapping documents by analyzing data content and physical data structures.
 Created DDL scripts for static dimensions and data model for the star schema using Erwin.
 Created a Data Design document that included data specifications used by the development team.
 Involved in extensive data validation by writing several complex SQL queries; involved in back-end testing and worked on data quality issues.
 Worked on the reporting requirements and involved in generating the reports for the Data Model using Crystal Reports.
 Worked on data governance, data quality, and data lineage establishment processes.
 Extensively worked with SSIS to load data from source systems and run loads at periodic intervals.
 Worked with data transformations in both normalized and de-normalized data environments.
 Implemented a Snowflake schema to avoid redundancy in the database.
 Prepared and maintained database architecture and modeling policies, procedures and standards.
 Developed the Data warehouse model (Kimball’s) with several data marts and conformed dimensions for the proposed
model in the Project.
 Gathered and documented the Audit trail and traceability of extracted information for data quality.
 Created 3NF business area data models with de-normalized physical implementation; performed data and information requirements analysis.
 Created PL/SQL packages and database triggers, developed user procedures, and prepared user manuals for the new programs.
 Developed Data Mapping, Transformation, and Cleansing rules for Data Management involving OLAP and OLTP.
 Developed SQL Queries to fetch complex data from different tables in remote databases using joins, database links and bulk
collects.
 Analyzed, retrieved and aggregated data from multiple datasets to perform data mapping.
 Created E/R Diagrams and Data Flow Diagrams, grouped and created the tables, and validated the data for lookup tables.
Environment: Erwin 9.6, OLAP, OLTP, SQL, SSIS, DDL, Crystal Reports 2015, PL/SQL, Triggers, 3NF, Data Mart.

Vanguard - Malvern, PA Sep’ 14 – Dec’ 15


Data Analyst/Data Modeler
Responsibilities
 Worked with Business Analysts team in requirements gathering and in preparing functional specifications and translating
them to technical specifications.
 Worked with Business users during requirements gathering and prepared Conceptual, Logical and Physical Data Models.
 Supported business analysis and marketing campaign analytics with data mining, data processing, and investigation to answer complex business questions.
 Developed scripts that automated DDL and DML statements used in the creation of databases, tables, constraints, and updates.
 Planned and defined system requirements to Use Case, Use Case Scenario and Use Case Narrative using the UML (Unified
Modeling Language) methodologies.
 Gathered all the analysis report prototypes from business analysts belonging to different business units; participated in JAD sessions involving discussion of various reporting needs.
 Reverse engineered the existing data marts and identified the Data Elements (in the source systems), Dimensions, Facts and Measures required for reports.
 Conducted design discussions and meetings to arrive at the appropriate Data Warehouse at the lowest level of grain for each of the Dimensions involved.
 Created Entity Relationship Diagrams (ERD), Functional diagrams, Data flow diagrams and enforced referential
integrity constraints.
 Involved in designing and developing SQL server objects such as Tables, Views, Indexes (Clustered and Non-Clustered),
Stored Procedures and Functions in Transact-SQL.
 Designed a Star schema for sales data involving shared dimensions (Conformed) for other subject areas using Erwin Data
Modeler.
 Created and maintained the Logical Data Model (LDM) for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
 Ensured the feasibility of the logical and physical design models.
 Worked on snowflaking the Dimensions to remove redundancy.
 Wrote PL/SQL statements, stored procedures, and triggers in DB2 for extracting as well as writing data.
 Defined facts, dimensions and designed the data marts using the Ralph Kimball's Dimensional Data Mart modeling
methodology using Erwin.
 Involved in Data profiling and performed Data Analysis based on the requirements, which helped in catching many
Sourcing Issues upfront.
 Developed Data mapping, Data Governance, Transformation and Cleansing rules for the Data Management involving
OLTP, ODS and OLAP.
 Normalized the database based on the newly developed model to put the tables into 3NF for the data warehouse.
 Created an SSIS package for daily email subscriptions using the ODBC driver and a PostgreSQL database.
 Constructed complex SQL queries with sub-queries, inline views as per the functional needs in the Business Requirements
Document (BRD).
Environment: PL/SQL, Erwin 8.5, MS SQL 2012, OLTP, ODS, OLAP, SSIS, Transact-SQL, Teradata SQL Assistant

Curvy Technology - Hyderabad, India Apr’ 12 – Aug’ 14


Data Analyst
Responsibilities
 Maintained numerous monthly scripts that were executed on a monthly basis, produced reports, and submitted them on time for business review.
 Worked with Data Analysts to understand Business logic and User Requirements.
 Closely worked with cross functional Data warehouse members to import data into SQL Server and connected to SQL
Server to prepare spreadsheets.
 Created reports for the Data Analysis using SQL Server Reporting Services.
 Created VLOOKUP functions in MS Excel for searching data in large spreadsheets.
 Created SQL queries to simplify migration progress reports and analyses.
 Developed Stored Procedures in SQL Server to consolidate common DML transactions such as insert, update and delete
from the database.
 Analyzed data using data visualization tools and reported key features using statistical tools.
 Developed reporting and various dashboards across all areas of the client's business to help analyze the data.
 Cleansed and manipulated data by sub-setting, sorting, and pivoting on an as-needed basis.
 Used SQL Server and MS Excel on a daily basis to manipulate the data for business intelligence reporting needs.
 Developed the stored procedures as required, and user defined functions and triggers as needed using T-SQL.
 Designed data reports in Excel, for easy sharing, and used SSRS for report deliverables to aid in statistical data analysis and
decision making.
 Created reports from OLAP, sub-reports, bar charts, and matrix reports using SSIS.
 Used Excel and PowerPoint on various projects as needed for presentations and summarization of data to provide insight on
key business decisions.
 Extracted data from different sources performing Data Integrity and quality checks.
 Performed Data Analysis and Data Profiling and worked on data transformations and data quality rules.
 Involved in extensive data validation by writing several complex SQL queries; involved in back-end testing and worked on data quality issues.
 Worked on VLOOKUPs, pivot tables, and macros developed ad hoc in Excel.
 Performed Data Manipulation using MS Excel Pivot Sheets and produced various charts for creating the mock reports.
 Worked on creating Excel Reports which includes Pivot tables and Pivot charts.
 Collected, analyzed, and interpreted complex data for reporting and/or performance trend analysis.
Environment: SQL Server, MS Excel 2010, VLOOKUP, T-SQL, SSRS, SSIS, OLAP, MS PowerPoint 2010
