Big Data Engineer (Closed)

SkillStorm is seeking a Big Data Engineer for our client in Addison, TX. Candidates must be able to work on SkillStorm's W2; not a C2C position. EOE, including disability/vets.

Job Description:

  • This is a Platform Engineering role supporting a new initiative in Addison, TX. The team is responsible for building and managing the Graph Data Platform, an emerging-technologies interface for consumer and global wealth applications as well as for fraud and AML tracking. The candidate must possess a passion for producing high-quality software and solutions, support the platform, be ready to jump in and solve complex problems, and mentor junior team members.

Required Skills:

  • Bachelor’s degree in Computer Science or a related engineering field
  • 5+ years of hands-on software development experience, including Hadoop
  • Good understanding of data architecture and Big Data platforms
  • Support the company’s commitment to protect the integrity and confidentiality of systems and data.
  • Experience managing and leading small development teams in an Agile environment
  • Drive and maintain a culture of quality, innovation and experimentation
  • Collaborate with product teams, data analysts and data scientists to design and build data-forward solutions
  • Provide prescriptive point-solution architectures and guide descriptive architectures within assigned modules
  • Own technical decisions for the solution and guide application developers in creating architectural decisions and artifacts
  • Accountable for the availability, stability, scalability, security, and recoverability enabled by the designs
  • Ability to communicate clearly with the team and stakeholders

Desired Skills:

  • 3+ years of programming experience in Java or Scala.
  • Hands-on experience designing, developing, and maintaining software frameworks using Kafka, Spark Streaming, and Spark batch processing.
  • Hands-on experience building big data pipelines with Hadoop-ecosystem components such as Apache Hive, Spark, and HBase.
  • Ability to read API specs and identify and implement the relevant API calls.
  • Understanding of distributed file formats such as Apache Avro and Apache Parquet, and of common data-transformation methods.
  • Strong debugging, critical-thinking, and interpersonal skills; ability to work well with members of other functional groups on a project team, with a strong sense of project ownership.
  • Well versed in processing and deployment technologies such as YARN.
  • Proficient in Unix environments and shell scripting.
  • Hands-on experience implementing CI/CD and automation using the Atlassian ecosystem.
  • Proven understanding of source control software such as Git and Bitbucket.

