Jeffrey W. Richardson
mmasigung@gmail.com
(512) 699-8556
SUMMARY
Jeff has over 33 years of experience in IT consulting, product development and system operations. From
his early degree in Electronic Data Processing to his current work in Big Data / Hadoop technologies, Jeff
has continuously honed his skills, expertise, and passion in the design, development and implementation
of large scale solutions. His most recent experience has been with MetiStream, Inc., as a Big Data
Architect supporting customer projects to offload and optimize data warehouse workloads on Hadoop.
Leveraging open source Big Data technologies, Jeff led MetiStream’s efforts to design and build
innovative tools and best practices to automate and expedite data warehouse migrations to Hadoop.
While at IBM, Jeff achieved the highest level of IT Specialist Certification and his last assignment there
was with the Big Data Analytics Center of Competency. For many large customer engagements, Jeff has
been responsible for the development, operation and administration of large scale production, test and
development environments including a number of significant Big Data initiatives at communication
providers and banking institutions.
TECHNICAL SKILLS
Big Data Technologies
Big Data: Hadoop, Sqoop, Spark, Hive, Impala, YARN, Oozie, Parquet, Avro, Cloudera Manager
Programming Languages
Proficient: Korn shell, bash, awk, sed
Comfortable: HTML, JavaScript, SQL
Working knowledge: Scala, C, Java (JDBC), Python, Perl, COBOL, RPG II, FORTRAN, Assembler
Development Platforms & Tools
Platforms: Linux, Windows, z/VM, AIX, Solaris, Amazon Web Services (AWS), Skytap
IDEs: Eclipse, Aginity Workbench, IBM Data Studio, Toad for DB2, IntelliJ
Databases: Proficient: IBM DB2 LUW, Netezza Performance Server, Hive, Impala; Moderate: IBM Informix; Academic understanding: Oracle, MySQL, PostgreSQL, Cloudscape/Derby, Sybase, Teradata
Assorted Tools: VMware Workstation, IBM InfoSphere Data Architect, Docker, JIRA, GitHub
Application Server: IBM WebSphere Application Server
Security: Kerberos, Center for Internet Security benchmarks
EDUCATION
1994 B.A. Computer Science St. Edward’s University
1982 A.A.S. Electronic Data Processing Temple Junior College
TRAINING / CERTIFICATIONS
2015 Cloudera Certified Administrator for Apache Hadoop Cloudera
2012 IBM Certified Specialist – Netezza Platform Software v6 IBM
2012 IBM Level 3 IT Specialist Certification IBM
2012 IBM Certified Database Associate DB2 10.1 IBM
2009 IBM Certified Database Administrator DB2 9 LUW IBM
2004 IBM Certified Database Administrator DB2 UDB V8.1 IBM
WORK EXPERIENCE
March 2015 – April 2016 MetiStream, McLean, VA – Big Data Architect
 Big Data Engineer, as part of a MetiStream team assisting a health care services and products company
o Developed a test automation framework based on Hadoop components
o Technologies included Oozie workflows controlling Hive, shell and email actions
o The framework included a test case template, a business rules table (pass/fail criteria), a results table, and email and report generation
o Ported Oracle and Netezza SQL query test cases to Hive Query Language
o Provided documentation and training to the company
 Big Data Engineer, as part of a Cloudera professional services technical team at a large financial
institution
o Supported the architecture, design and implementation phases of a new analytic process, offloading work from Netezza to Hadoop for three years of data totaling approximately 5 TB.
o Supported Hadoop application architecture review and Data Pipeline review including the
evaluation of data sources, data processing jobs, analytic processes, and SLAs.
o Worked with Cloudera to evaluate application data access patterns, how data schemas are managed and evolved, and the effectiveness of the partitioning system.
o In collaboration with Cloudera and customer team members, modified and extended a prototype that loads merchant/location data from the current Netezza data warehouse to Hadoop. Using Apache Sqoop and other Big Data tools, helped build and manage the jobs and the overall data pipeline that pulled data from Netezza and transformed it to publish the dataset within the storage schema.
o Technologies included Sqoop, Pig, Hive, Impala, YARN, Oozie, Crunch, Kerberos, Parquet, Avro, Python, YAML, HQL, Korn shell, bash and Cloudera Manager.
 Using Apache Spark and other open source Big Data technologies, led a team of four engineers in the development of MetiStream’s Translation Engine and provided overall support and best practices around MetiStream’s Netezza / DW Offload and Optimization Solution
 Supported the review, development, and quality control of MetiStream’s Advanced Apache Spark
Training
 Developed an introductory-level Spark education class, deployed to Skytap and AWS
 Built and deployed a cloud-based PostgreSQL team database server.
May 1982 – Oct 2014 IBM, Austin, TX
Feb 2013 – Oct 2014 – Overland Park, KS
 Member of the IBM Big Data Analytics Center of Competency.
o Mentored and guided team members during formation of this new organization within IBM.
o Focused on Netezza and Netezza Replication technologies.
o Member of Data Architect team for two Netezza implementation projects at a major US
telecom provider.
 Reverse engineered prototype Netezza databases with IBM InfoSphere Data Architect, creating the physical data model and DDL. From this the team created the initial logical data model and imported it into CA’s ERwin Data Modeler. ERwin did not support Netezza, so Jeff developed a procedure to generate DB2 DDL suitable for Netezza.
 Designed three variations of a security model based on Netezza best practices and
the customer’s enterprise database security requirements. Implemented the option
chosen by the customer.
 Led the team that designed and implemented Fraud data ingest, ETL and analytics.
Designed and developed code that could be deployed in test or production
environments without modification. Met all architectural and customer requirements
for coding standards, delivering ready-to-use code.
 Led the team that implemented a network quality project. Modified the existing code design so that it would work in all environments without modification and met all standards.
 Worked with IBM and customer stakeholders and project managers to ensure that all project dates and milestones his teams were responsible for were met.
Sep 2011 – Feb 2013
 IBM Software Group Information Management ISV Support
o Participated in development of educational materials for the IBM PureData family of products:
 IBM PureData for Analytics (Netezza)
 IBM PureData for Operational Analytics (DB2)
Sep 2010 – Aug 2011 – Columbus, OH
Member of the IBM Software Group Information Management ISV Support organization. Architect and
specialist for three database migration projects at a large financial institution.
 Ensured that architectural and design guidelines were adhered to and that non-functional requirements were met during deployment of the physical infrastructure and migration of two mission critical databases from Oracle to DB2.
o Executed gap analysis: found and resolved security incompatibilities and replication design errors.
o Arranged and delivered technical education to database administrators.
o Enhanced the customer’s performance test suite.
o Assisted the customer’s data architects in developing new methods of database and table design for performance and data retention.
Sep 2008 – Aug 2011 IBM Software Group Information Management ISV Support
 Provided technical assistance to IBM teams, IBM business partners and customers worldwide, helping them integrate IBM Information Management products such as DB2 and Informix into their own products.
 Developed and delivered education to IBM customers and business partners in the United States,
Japan, Korea, Viet Nam, Malaysia and Argentina.
June 2002 – Sep 2008 Linux Integration Center
 Lab administrator
o System: workstation, server, blade server, mainframe
o Network: switches, cabling, DNS, NTP, DHCP, routers, gateways, bridges
o Security: controlled access, user ID authentication/authorization
o Database administrator: installation, configuration, data ingest, maintenance
 Designed and led implementation team for an unattended software installation and configuration
methodology for the lab (pre-cloud).
 Developed and executed Proof of Concept and Proof of Technology solutions for different IBM
products such as:
o DB2 UDB or LUW products, including high availability (HA) environments, DB2 Connect and
DB2 clusters (DPF)
o WebSphere Portal Server, including DB2 LUW or Oracle and IBM Tivoli Directory Server
o WebSphere Application Server
o Tivoli Directory Server (using DB2 UDB V8 as the data store)
o GPFS (General Parallel File System)
 Reproduced customer environments in the lab to help debug or tune them.
 Security: Created a procedure for hardening a compute-intensive cluster, including instructions for softening the cluster when software component updates or upgrades were required.
Jan 1991 – June 2002
 Administrator for a large System Test lab.
 Test lead on many different IBM products.
 Developed and delivered education to IBM business partners in Germany.
 Developed and executed Y2K tests for IBM software. Consulted with IBM customers in Japan.
Jan 1990 – Dec 1991 Programmer retraining
 Dedicated one year towards completion of Bachelor of Arts degree.
May 1982 – Dec 1989 Mainframe operations
 System Operator and System Analyst for Austin site mainframe complex, responsible for production
job runs, software development environments and manufacturing line control processes.
Prior experience:
Before joining IBM, Jeff worked at two banks in the Central Texas area; before that, he served four years in the US Air Force.