An innovative platform is seeking a Data Engineer to design and maintain robust data pipelines that power analytics and AI solutions. This role involves working with cutting-edge Azure services like Databricks to build scalable data architectures and ensure data quality and compliance. Collaborating with cross-functional teams, you'll integrate diverse data sources and document best practices. Join a forward-thinking company that is transforming the private capital industry with its AI-powered solutions, and be part of a dynamic team that values communication and teamwork.
Overview of 73 Strings
73 Strings is an innovative platform providing comprehensive data extraction, monitoring, and valuation solutions for the private capital industry. The company's AI-powered platform streamlines middle-office processes for alternative investments, enabling seamless data structuring and standardization, monitoring, and fair value estimation at the click of a button. 73 Strings serves clients globally across various strategies, including Private Equity, Growth Equity, Venture Capital, Infrastructure and Private Credit.
Our 2025 $55M Series B, the largest in the industry, was led by Goldman Sachs, with participation from Golub Capital and Hamilton Lane and continued support from Blackstone, Fidelity International Strategic Ventures, and Broadhaven Ventures.
About the role
We are seeking a Data Engineer with hands-on experience in Azure, Databricks, and API integration. You will design, build, and maintain robust data pipelines and solutions that power analytics, AI, and business intelligence across the organization.
Key Responsibilities
- Develop, optimize, and maintain ETL/ELT pipelines using Azure Data Factory, Databricks, and related Azure services.
- Build scalable data architectures, including data lakes and data warehouses.
- Integrate and process data from diverse sources via REST and SOAP APIs.
- Design and implement Spark-based data transformations in Databricks using Python, Scala, or SQL (a brief sketch follows this list).
- Ensure data quality, security, and compliance across all pipelines and storage solutions.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable datasets.
- Monitor, troubleshoot, and optimize Databricks clusters and data workflows for performance and reliability.
- Document data processes, pipelines, and best practices.
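To make these responsibilities concrete, the following is a minimal sketch of one such pipeline step: ingesting records from a REST API and landing them as a Delta table with PySpark on Databricks. The endpoint URL, schema fields, and table name are hypothetical placeholders for illustration only, not details of the actual role or platform.

```python
# Hypothetical ETL step: REST API -> Spark DataFrame -> Delta table.
# Endpoint, fields, and table names are placeholders, not real resources.
import requests
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()  # already provided as `spark` in a Databricks notebook

# 1. Ingest: pull raw JSON from a hypothetical REST endpoint.
resp = requests.get("https://api.example.com/v1/positions", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed: a list of flat JSON objects

# 2. Transform: apply an explicit schema and basic cleansing.
schema = StructType([
    StructField("position_id", StringType()),   # hypothetical fields
    StructField("fund_name", StringType()),
    StructField("fair_value", DoubleType()),
])
df = spark.createDataFrame(records, schema=schema)
df_clean = (
    df.dropDuplicates(["position_id"])
      .withColumn("ingested_at", F.current_timestamp())
)

# 3. Load: append to a Delta table in the data lake.
df_clean.write.format("delta").mode("append").saveAsTable("bronze.positions")
```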
Required Skills & Qualifications
- Proven experience with Azure cloud services, especially Databricks, Data Lake, and Data Factory.
- Strong programming skills in Python, SQL, and/or Scala.
- Experience building and consuming APIs for data ingestion and integration.
- Solid understanding of Spark architecture and distributed data processing.
- Familiarity with data modeling, data warehousing, and big data best practices.
- Knowledge of data security, governance, and compliance within cloud environments.
- Excellent communication and teamwork skills.
Preferred
- Experience with DevOps tools, CI/CD pipelines, and automation in Azure/Databricks environments.
- Exposure to real-time data streaming (e.g., Kafka) and advanced analytics solutions (see the streaming sketch below).
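For context on the streaming item above, here is a rough sketch, assuming Spark Structured Streaming on Databricks (Spark 3.1+), of consuming a Kafka topic into a Delta table. The broker address, topic, checkpoint path, and table name are hypothetical placeholders.

```python
# Hypothetical streaming ingestion: Kafka topic -> Delta table via Structured Streaming.
# Requires the Kafka source (bundled on Databricks); all names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "valuation-events")             # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for downstream parsing.
parsed = stream.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Start the query, writing append-only to a Delta table with checkpointing.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/valuation-events")  # hypothetical path
    .outputMode("append")
    .toTable("bronze.valuation_events")  # hypothetical table
)
```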
Education
- Master’s degree in Computer Science, Engineering, or a related field, or equivalent experience.