Big Data Design Engineer

ZipRecruiter

Atlanta (GA)

Remote

Full time

30+ days ago

Job summary

An innovative firm is seeking a Big Data Design Engineer to lead the architecture and implementation of a cutting-edge Big Data platform. In this fully remote role, you will manage the Hadoop infrastructure, ensuring optimal performance and security. Collaborating with cross-functional teams, you will drive the deployment of new technologies and maintain high data quality. This position offers a unique opportunity to work on complex projects that shape the future of data analytics. If you are passionate about Big Data and thrive in a dynamic environment, this role is perfect for you!

Qualifications

  • 7+ years of experience in architecture and implementation of complex projects.
  • Must possess strong Cloudera Admin skills and Hadoop navigation experience.

Responsibilities

  • Oversee implementation and administration of Hadoop infrastructure and systems.
  • Analyze and propose new hardware/software environments for Hadoop.

Skills

Hadoop
Spark
ETL
Cloudera Admin
YARN
Hive
Airflow
DevOps principles
Docker
Kubernetes

Education

Bachelor's degree in a related field

Tools

Hadoop Distributed File System (HDFS)
Cloudera Manager Enterprise
Ganglia
Nagios
Kafka
Flink
Spark Streaming

Job description

Note: All candidates must be able to work as a W2 or 1099 employee for any employer in the US.

(The role is not eligible for candidates requiring sponsorship now or potentially in the future.)

Title: Big Data Design Engineer

Type of role: Contract (12mo+ assignment)

Location: Fully remote, no travel, candidate is expected to work CST hours

Compensation: $65/hr–$85/hr (based on relevant experience)

Job Description: The Big Data Design Engineer is responsible for the architecture design and implementation of the Big Data platform, Extract/Transform/Load (ETL) processes, and analytic applications.

Primary Responsibilities

  • Oversees implementation and ongoing administration of Hadoop infrastructure and systems. Manages Big Data components/frameworks such as Hadoop, Spark, Storm, HBase, Hadoop Distributed File System (HDFS), Pig, Hive, Sqoop, Flume, Oozie, Avro, etc.
  • Analyzes latest Big Data analytic technologies and innovative applications in both business intelligence analysis and new offerings.
  • Aligns with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and expand existing environments.
  • Handles cluster maintenance and creation/removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise.
  • Handles performance tuning of Hadoop clusters and Hadoop MapReduce routines, as well as Hadoop cluster job performance and capacity planning.
  • Monitors Hadoop cluster connectivity and security.
  • Manages and reviews Hadoop log files.
  • Handles HDFS and filesystem management, maintenance, and monitoring.
  • Partners with infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Collaborates with application teams to install operating system and Hadoop updates, patches, and version upgrades when required. Acts as point of contact for vendor escalation.

This position is exempt from timekeeping requirements under the Fair Labor Standards Act and is not eligible for overtime pay.

Requirements

  • Bachelor's degree in a related field
  • Seven (7) years of experience in architecture and implementation of large and highly complex projects
  • Must have Cloudera Admin skills (e.g., understanding error messages, configuring and setting up clusters, etc.)
  • Must have experience maintaining/navigating Cloudera in Hadoop
  • Must have good communication skills and experience working collaboratively within a highly technical environment.
  • Experience with YARN
  • Knowledge of Hive and Impala

Skills and Competencies

  • Experience with Airflow, Argo, Luigi, or a similar orchestration tool
  • Experience with DevOps principles and CI/CD
  • Experience with Docker and Kubernetes
  • Experience with NoSQL databases such as HBase, Cassandra, or MongoDB
  • Experience with streaming technologies such as Kafka, Flink, or Spark Streaming
  • Experience working with Hadoop ecosystem building Data Assets at an enterprise scale
  • Strong communication skills through written and oral presentations
