We are storming the marketplace with the highly skilled, experienced, and certified professionals that businesses need.


Hadoop Engineer (Closed)

SkillStorm is seeking a Hadoop Engineer for our client in Charlotte, NC; Richardson, TX; or Richmond, VA. Candidates must be able to work on SkillStorm's W2; this is not a C2C position. EOE, including disability/vets.

Job Description:

  • Infrastructure Information Services is looking for top talent to design and build a best-in-class Data Management and Integration Services capability over Infrastructure/ITSM data using a Hadoop architecture. The Data Engineer will innovate and transform the systems integration landscape for the Technology Infrastructure organization while following industry best practices and providing capability maturity in support of Enterprise Data Management standards. The ideal candidate is an expert in Data Warehousing and Master Data Management design and development.
  • The candidate should have a strong understanding of data management concepts and applied DW/MDM development of DB-level routines and objects, along with experience migrating a traditional Relational Database Management System (RDBMS) to a Hadoop-based architecture and hands-on experience developing with many of the Apache Hadoop-based tools. The role involves hands-on development and support of integrations with multiple systems, and ensuring data accuracy and quality by implementing business and technical reconciliations (see the sketch after this list). The candidate needs to be able to understand macro-level requirements and convert them into actionable tasks to deliver a technically sound product, and should be able to work collaboratively in teams.
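To make the reconciliation work concrete, here is a minimal PySpark sketch of a row-count check between a source RDBMS table and its migrated Hive copy. The JDBC URL, schema, table, and credential names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch (hypothetical names): reconcile row counts between a source
# RDBMS table and the migrated Hive table. Requires the matching JDBC driver
# jar on the Spark classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rdbms-to-hive-reconciliation")
    .enableHiveSupport()
    .getOrCreate()
)

# Source count read over JDBC (URL, credentials, and table are placeholders).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCL")
    .option("dbtable", "ITSM.CHANGE_REQUESTS")
    .option("user", "svc_user")
    .option("password", "********")
    .load()
)
source_count = source_df.count()

# Target count from the migrated Hive table.
target_count = spark.table("itsm.change_requests").count()

if source_count != target_count:
    raise ValueError(
        f"Reconciliation failed: source={source_count}, target={target_count}"
    )
print(f"Row counts match: {source_count}")
```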

Required Skills:

  • 5+ years of total IT experience
  • 3+ years of experience developing for Data Warehousing, Data Marts, and/or Master Data Management
  • Experience developing in Oracle, DB2, SQL Server
  • Experience migrating databases from traditional RDBMS to Apache Hive
  • Possesses and demonstrates solid knowledge of the Hadoop ecosystem
  • Hands-on exposure to the Hadoop ecosystem, including but not limited to: HDFS, MapReduce, Spark, Hive, HBase
  • Demonstrates competency in Apache Hive DDL
  • Fluency developing Hadoop transformations and programs using Python and Spark
  • Experience with Hive/Spark SQL (see the illustrative sketch after this list)
  • Object-oriented programming concepts
  • Expert SQL skills
  • Experience in SDLC and best practices for development
  • Knowledge of CI/CD packaging and promotion practices for maintaining code in development, test, and production using tools such as JIRA and Bitbucket
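As an illustration of the Apache Hive DDL and Hive/Spark SQL skills listed above, here is a minimal sketch using a Hive-enabled SparkSession. The database, table, columns, and HDFS path are hypothetical examples, not part of the posting.

```python
# Minimal sketch (hypothetical schema, table, and path): Hive DDL and a
# Spark SQL query issued through a Hive-enabled SparkSession.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-ddl-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Hive DDL: an external, partitioned table over Parquet files on HDFS.
spark.sql("CREATE DATABASE IF NOT EXISTS itsm")
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS itsm.incidents (
        incident_id   STRING,
        opened_ts     TIMESTAMP,
        severity      INT,
        assigned_team STRING
    )
    PARTITIONED BY (load_date STRING)
    STORED AS PARQUET
    LOCATION 'hdfs:///data/itsm/incidents'
""")

# Spark SQL over the Hive table: incident counts per team for one partition.
spark.sql("""
    SELECT assigned_team, COUNT(*) AS incident_count
    FROM itsm.incidents
    WHERE load_date = '2024-01-31'
    GROUP BY assigned_team
    ORDER BY incident_count DESC
""").show()
```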

