Big Data Solutions Engineer

Jobsbridge

Bridgewater (MA)

On-site

USD 60,000 - 80,000

Full time


Job summary

A leading IT staffing organization in Massachusetts is seeking a professional with experience in Big Data technologies. The ideal candidate should have a solid background in building scalable data pipelines and collaborating with teams to develop data tools. Proficiency in Java, Python, and Hadoop is required. This full-time position promises a dynamic work environment and opportunities for professional growth.

Qualifications

  • Minimum 2 years of experience on a Big Data platform.
  • Flair for data, schemas, data models, and efficiency across the big data life cycle.
  • Understanding of automated QA needs related to Big Data.

Responsibilities

  • Build distributed, scalable, and reliable data pipelines.
  • Collaborate with teams to design and deploy data tools.
  • Perform offline analysis of large data sets.

Skills

Java
Python
Scala
HBase
Hive
MapReduce
Kafka
Mongo
Postgres
Tableau
D3.js
Agile practices

Education

BS in Computer Science or related area

Tools

Hadoop
Apache Spark
DataStage
Informatica

Job description

Jobs Bridge Inc is among the fastest-growing IT staffing / professional services organizations, with its own job portal.

Jobs Bridge works closely with a large number of IT organizations across the most in-demand technology skill sets.

Job Description

The ideal candidate will have experience with Hadoop, Big Data, and related technologies such as Flume, Storm, and Hive.

Job Details:

  • Total Experience: 2 years
  • Max Salary: Not Mentioned
  • Employment Type: Direct Jobs (Full Time)
  • Domain: Any

OPT and EAD candidates are eligible to apply.

Job Responsibilities:
  • Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real-time.
  • Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
  • Perform offline analysis of large data sets using components from the Hadoop ecosystem.
  • Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
  • Own product features from the development phase through to production deployment.
  • Evaluate big data technologies and prototype solutions to improve our data processing architecture.
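The offline-analysis work described above follows the map/shuffle/reduce pattern that Hadoop MapReduce and Spark implement at cluster scale. A toy, pure-Python sketch of that pattern on a tiny hypothetical data set (for illustration only, not part of the role's tooling):

```python
from collections import defaultdict

def map_phase(records):
    # Mapper: emit a (word, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical input records standing in for a large data set.
records = ["big data pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts["data"])       # 2
print(counts["pipelines"])  # 2
```

In Hadoop or Spark the same three phases run in parallel across partitions of the data; the single-process version here is only meant to show the shape of the computation.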

Candidate Profile:
  • BS in Computer Science or related area
  • Minimum 2 years of experience on Big Data Platform
  • Proficiency with Java, Python, Scala, HBase, Hive, MapReduce, ETL, Kafka, Mongo, Postgres, and Visualization technologies
  • Flair for data, schemas, data models, and efficiency across the big data life cycle
  • Understanding of automated QA needs related to Big Data
  • Understanding of various visualization platforms (Tableau, D3.js, others)
  • Proficiency with agile or lean development practices
  • Strong object-oriented design and analysis skills
  • Excellent technical and organizational skills
  • Excellent written and verbal communication skills

Desired Skill Sets:
  • Batch processing: Hadoop MapReduce, Cascading/Scalding, Apache Spark
  • Stream processing: Apache Storm, Akka, Samza, Spark Streaming
  • ETL Tools: DataStage, Informatica