As one of tigerlab’s first Data Engineers, you will play a foundational role in architecting and scaling our data ecosystem. You’ll work hands‑on across the entire stack: ingesting raw event streams, shaping data models, building analytics pipelines, and enabling self‑serve insights for every team. You will collaborate closely with our founders and with the Sales, Product, and Delivery teams to turn raw data into strategic intelligence, fueling AI‑driven underwriting, smarter product decisions, and deeper customer insight. Your work will also help define how tigerlab delivers scalable, reusable, and trustworthy data services that accelerate growth and operational excellence for insurers, MGAs, retailers, and innovators. This is a rare opportunity to lay the groundwork for a modern analytics ecosystem, shape a self‑serve data culture, and directly impact how tigerlab and our clients make data‑driven decisions.
Key Responsibilities
- Work directly with founders and leaders across Sales, Product, and Delivery to understand their analytical requirements and build foundational datasets.
- Design, optimize, and expand our event‑driven data infrastructure (Kafka, Firehose, S3, Athena).
- Ensure accurate, compliant, and scalable storage of structured and semi‑structured data.
- Build and maintain pipelines powering internal dashboards, external analytics, pricing models, underwriting insights, and AI‑driven features.
- Create reusable dashboards, datasets, and visualizations for operational performance and customer intelligence.
- Prepare large datasets for analysis, machine learning, and predictive modeling.
- Support business teams with deep dives and ad‑hoc data investigations.
- Define and maintain data contracts in partnership with engineering.
- Implement data observability, monitoring, and alerting to ensure reliability.
- Occasionally participate in client discussions to understand their data needs or present analytical findings.
- Help establish tigerlab’s data culture as the first member of the Data team.
Required Qualifications
- You have 4+ years of experience in a data engineering role, ideally in high‑growth or data-intensive environments.
- You’ve optimized queries for speed and cost at scale; billions of rows/day is familiar territory.
- You have strong business acumen and can interpret data through a sales or customer lens.
- You’re experienced with cloud-based data stacks: AWS S3, Athena, Redshift, BigQuery, or Snowflake.
- You write SQL and Python fluently; experience with Pandas, NumPy, or PySpark is a plus.
- You care deeply about accuracy, reliability, and data quality.
- You’re excited about the modern data stack and empowering teams with self‑serve analytics.
- You have experience with dbt, Airflow, n8n, or other transformation/orchestration tools.
- You’re comfortable with version control, CI/CD for analytics, and BI tools like Metabase, Looker, or Power BI.
- You understand data privacy, anonymization techniques, and GDPR compliance.
- You’ve worked with insurance-related data (policies, quotes, events, pricing, loss ratios), or you’re eager to learn.
What We Offer
- Work on real‑world digital products with global clients.
- Competitive salary and benefits package.
- Comprehensive training and professional development opportunities.
- Learn directly from experienced engineers and designers in a supportive, agile team that values collaboration, creativity, and growth.
- Opportunity to work with cutting‑edge insurance technology.
- Clear career progression within a fast‑growing insurtech company.