Senior Data Engineer | New York, NY, USA | Remote

Hermeneutic Investments

Remote

USD 90,000 - 150,000

Full time

Job summary

An innovative hedge fund seeks a Senior Data Engineer to architect and implement robust data infrastructures. This pivotal role involves designing scalable pipelines for vast datasets, ensuring data quality, and collaborating across teams to meet research and trading needs. The ideal candidate will leverage advanced data management practices and cutting-edge technologies to drive innovation while maintaining data integrity and performance. Join a rapidly growing firm that values ownership, competence, and openness, and play a crucial role in shaping the future of data-driven decision-making in finance.

Qualifications

  • 5+ years of experience in data engineering with a focus on ETL processes.
  • Strong understanding of data modeling and data warehousing principles.
  • Experience with cloud platforms and their data services.

Responsibilities

  • Design and optimize scalable data pipelines for large datasets.
  • Ensure data quality and real-time monitoring of data processes.
  • Collaborate with teams to understand data needs and build frameworks.

Skills

Problem-solving
Analytical thinking
SQL proficiency
Python
Java/Scala
ETL pipeline development
Data quality checks
Data security standards

Education

Degree in Computer Science
Degree in Engineering
Degree in Mathematics

Tools

Apache Airflow
Snowflake
Redshift
BigQuery
Apache Spark
Kafka
Flink
AWS
GCP
Azure

Job description

Senior Data Engineer
Hermeneutic Investments, New York, United States

Remote | Permanent | Competitive Compensation | Posted 3 days ago

We are looking for a Senior Data Engineer to help us architect, implement and operate the complete data infrastructure pipeline for our Research and Trading operations. This role will be crucial in building a scalable, reliable, and cost-efficient system for handling vast amounts of market trading data, real-time news feeds and a variety of internal and external data sources. The ideal candidate will be a hands-on professional who understands the entire data lifecycle and can drive innovation while collaborating across research and engineering teams to meet their needs.

Responsibilities

  • Design, build, and optimize scalable pipelines for ingesting, transforming, and integrating large-volume datasets (market data, news feeds and various unstructured data sources).
  • Ensure data quality, consistency, and real-time monitoring using tools such as dbt and third-party data-validation libraries; a minimal sketch of such a check follows this list.
  • Develop processes to normalize and organize our data warehouse for use across different departments.
  • Apply advanced data management practices to ensure the scalability, availability, and efficiency of data storage.
  • Ensure the infrastructure supports trading and research needs while maintaining data integrity, security, and performance at scale.
  • Collaborate with research and analytics teams to understand their data needs and build frameworks that empower data exploration, analysis, and model development. Create tools for overlaying data from multiple sources.
  • Ensure that data storage, processing, and management are done in a cost-effective manner, optimizing both hardware and software resources. Implement solutions that balance high performance with cost control.
  • Stay ahead of the curve by continuously evaluating and adopting the most suitable technologies for the organization’s data engineering needs. Ensure that the company’s systems align with the latest best practices in data management.
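
A minimal sketch of the kind of data-quality check described above, assuming a pandas DataFrame of ingested market ticks; the column names and thresholds are illustrative, not part of this posting:

    import pandas as pd

    # Required schema for a batch of ingested market ticks
    # (column names here are hypothetical).
    REQUIRED_COLUMNS = {"symbol", "ts", "price", "volume"}

    def validate_ticks(df: pd.DataFrame) -> list[str]:
        """Run basic quality checks on one batch of market data.

        Returns human-readable failure messages; an empty list
        means the batch passed every check.
        """
        failures = []

        missing = REQUIRED_COLUMNS - set(df.columns)
        if missing:
            failures.append(f"missing columns: {sorted(missing)}")
            return failures  # later checks need these columns

        # No duplicate events for the same symbol and timestamp.
        dupes = int(df.duplicated(subset=["symbol", "ts"]).sum())
        if dupes:
            failures.append(f"{dupes} duplicate (symbol, ts) rows")

        # Prices must be positive and non-null; volumes non-negative.
        if df["price"].isna().any() or (df["price"] <= 0).any():
            failures.append("non-positive or null prices")
        if (df["volume"] < 0).any():
            failures.append("negative volumes")

        return failures

    if __name__ == "__main__":
        batch = pd.DataFrame({
            "symbol": ["BTC-USD", "BTC-USD"],
            "ts": pd.to_datetime(["2024-01-01 00:00:00",
                                  "2024-01-01 00:00:01"]),
            "price": [42000.0, 42010.5],
            "volume": [1.2, 0.8],
        })
        print(validate_ticks(batch) or "batch OK")

In production, checks like these would typically run as dbt tests or inside the ingestion pipeline itself, with failures routed to real-time monitoring rather than printed.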

Requirements

Must Have

  • Strong problem-solving and analytical thinking
  • Clear communication skills for cross-functional collaboration
  • Proficiency in building robust data quality checks for ingested data
  • Experience identifying anomalies in ingested data
  • Strong proficiency in writing complex SQL (and similar) queries and optimizing their performance
  • Proficiency in Python or Java/Scala
  • Experience building and maintaining complex ETL pipelines with tools like Apache Airflow, dbt, or custom scripts (see the sketch after this list)
  • Strong understanding of dimensional modeling, star/snowflake schemas, normalization/denormalization principles
  • Proven experience with platforms like Snowflake, Redshift, BigQuery, Synapse
  • Expert knowledge of Apache Spark, Kafka, Flink, or similar
  • Strong understanding of data security and privacy standards
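
A minimal sketch of the Airflow-style ETL pipeline the requirements above refer to, assuming Airflow 2.x; the DAG id, schedule, and task bodies are illustrative placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical task callables; a real pipeline would pull from a
    # market-data feed, validate the batch, and load the warehouse.
    def extract():
        print("pull raw market data from the feed")

    def validate():
        print("run data-quality checks on the extracted batch")

    def load():
        print("load the validated batch into the warehouse")

    with DAG(
        dag_id="market_data_etl",      # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",            # 'schedule_interval' before Airflow 2.4
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_validate = PythonOperator(task_id="validate", python_callable=validate)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Fail fast: load runs only if validation succeeds.
        t_extract >> t_validate >> t_load

The same dependency graph could equally be expressed in dbt or custom scripts, as the requirement notes; the point is explicit ordering with a validation gate between ingestion and the warehouse.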

Good to Have

  • A degree in Computer Science, Engineering, Mathematics, or a related field
  • Familiarity with one of the major cloud platforms (AWS, GCP, Azure) and their data services (e.g., BigQuery, Redshift, S3, Dataflow), proven by certifications (e.g., Google Professional Data Engineer, AWS Big Data Specialty, or Snowflake’s SnowPro Data Engineer).
  • Experience with data quality frameworks (e.g., Great Expectations, Deequ, or others).
  • Experience with Git/GitHub or similar for code versioning.
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
  • Exposure to containerization/orchestration (Docker, Kubernetes).
  • Familiarity with data governance, data lineage, and catalog tools (e.g., Apache Atlas, Amundsen).
  • Hands-on with observability and monitoring tools for data pipelines (e.g., Monte Carlo, Datadog).
  • Knowledge of machine learning pipelines.
  • Prior experience in a trading or financial services environment.

Interview Process

  • Our partner and VP of Engineering will review your CV
  • Our VP of Engineering will conduct the first round of interviews
  • Our partner will conduct an additional round of interviews on technical and cultural fit
  • Additional rounds may be conducted as necessary with other team members or our partners

Throughout the process, you'll be assessed for cultural fit through our company values:

  • Drive - We believe the best team members are passionate about what they do, and that propels them to greater heights in their career
  • Ownership - We aim to give ownership interest to as many people in the firm as possible, but in return, we expect everyone to act like owners
  • Judgement - We look for team members who consistently look at the big picture and spend their time on the activities that most drive PnL
  • Openness - We want a culture where we proactively share information with one another and challenge each other with constructive debate
  • Competence - We value people with high intellectual horsepower who are experts in their domains and quick learners

We are a rapidly growing hedge fund: 2 years old, managing a 9-figure AUM, and generating 200%+ annualized returns with a Sharpe ratio of 4.

Our team has grown to approximately 40 professionals across Trading & Research, Technology, and Operations.

As part of our growing team, you will play a pivotal role in designing and implementing robust data infrastructures that enable seamless research, analytical workflows, and effective trade ideation and execution. If you are an experienced data engineering leader with a passion for complex data systems, we want to hear from you!
