Big Data Operations Engineer
General Description
We are seeking a passionate Big Data Operations Engineer to join the Global Data Platform team, which serves as the enterprise‑wide single source of truth for reporting and analytics. The platform also provides a common data model and a self‑service data science environment.
Key Features of the Position
Functional / Technical Responsibilities
- Provide day-to-day support and troubleshooting for data pipelines, from ingestion to consumption
- Ensure defined data-availability thresholds are met
- Actively support release cycles and advocate for changes that prevent production disruptions
- Maintain and model JSON‑based schemas and metadata for reuse across the organization (see the schema-validation sketch after this list)
- Perform Data Engineer duties to implement corrective measures such as historizing tables, managing dependencies, and resolving data quality issues (a historization sketch follows this list)
- Take operational responsibility for Common Data Model tables within a dedicated access zone
- Participate in the agile setup and support development teams
- Manage the operating model, ensuring agreed support levels and a clear distribution of operational tasks (e.g., monitoring service availability and performance) between Level 1 and Level 2 support
- Provide Level 3 support (incident & problem management) for the IT service and related data pipelines
- Continuously enhance service availability, performance, capacity, and knowledge management
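
To illustrate the schema-maintenance responsibility above, here is a minimal Python sketch of validating records against a shared JSON schema using the jsonschema library. The schema and field names ("customer_id", "event_ts", "amount") are hypothetical examples, not the platform's actual Common Data Model contracts.

```python
# Minimal sketch: validate an ingested record against a shared JSON schema.
# Field names are illustrative; real schemas would be maintained centrally.
from jsonschema import validate, ValidationError

ORDER_EVENT_SCHEMA = {
    "type": "object",
    "properties": {
        "customer_id": {"type": "string"},
        "event_ts": {"type": "string", "format": "date-time"},
        "amount": {"type": "number", "minimum": 0},
    },
    "required": ["customer_id", "event_ts"],
}

def is_valid(record: dict) -> bool:
    """Return True if the record conforms to the shared schema."""
    try:
        validate(instance=record, schema=ORDER_EVENT_SCHEMA)
        return True
    except ValidationError:
        return False

print(is_valid({"customer_id": "c-42", "event_ts": "2024-01-01T00:00:00Z"}))  # True
print(is_valid({"amount": -5}))  # False: required fields missing, amount negative
```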
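
Likewise, for the historization duty mentioned above, the following is a minimal, self-contained sketch of type-2 historization logic (close out a changed row version with a valid_to timestamp, append a new open version). The key/value column names are illustrative only; a production pipeline would typically do this as a merge in the warehouse or in Spark.

```python
# Minimal sketch of table historization (slowly changing dimension, type 2):
# changed rows are closed out and a new current version is appended.
from datetime import datetime, timezone

OPEN_END = None  # marks the currently valid row version

def historize(existing: list[dict], incoming: list[dict], now=None) -> list[dict]:
    now = now or datetime.now(timezone.utc)
    result = list(existing)
    current = {r["key"]: r for r in result if r["valid_to"] is OPEN_END}
    for row in incoming:
        cur = current.get(row["key"])
        if cur is not None and cur["value"] == row["value"]:
            continue  # unchanged: keep the open version as-is
        if cur is not None:
            cur["valid_to"] = now  # close out the superseded version
        result.append({"key": row["key"], "value": row["value"],
                       "valid_from": now, "valid_to": OPEN_END})
    return result

history = historize([], [{"key": "c-1", "value": "Basel"}])
history = historize(history, [{"key": "c-1", "value": "Zurich"}])
# history now holds a closed "Basel" version and an open "Zurich" version for c-1
```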
Skills Requirements
Professional
- Higher education in Computer Science (university degree or equivalent diploma)
- Strong communication, planning, and coordination skills
- Strong team player with a proactive, collaborative, and customer‑focused approach
- Excellent understanding of data technology and operational management
- Passion for driving a data‑driven culture and empowering users
Technical
- Minimum of 5 years of experience operating cluster environments (Kubernetes, Kafka, Spark, and distributed storage such as S3/HDFS)
- Strong Python proficiency
- Expertise in DataOps processes
- Experience with monitoring tools for platform health, ingestion pipelines, and consumer applications
- Experience with Linux‑based infrastructure, including advanced shell scripting
- Expert SQL skills (both traditional DWH and distributed environments)
- Experience maintaining databases, load processes, and CI/CD pipelines
- Several years of experience in a similar role within a complex data environment
- Nice to have: experience with Dataiku and Tableau
Personal & Social
- Strong English language skills (written and verbal)
- High self‑motivation to drive initiatives and improvements
- Willingness to contribute new ideas and lead change
- Exceptional communication skills to work effectively with technical and business stakeholders
- Detail‑oriented with strong multitasking and prioritization abilities
When you apply, you voluntarily consent to the disclosure, collection, and use of your personal data for employment/recruitment and related purposes, in accordance with the Tech Aalto Privacy Policy published on Tech Aalto’s website (https://www.techaalto.com/privacy/).
Confidentiality is assured, and only shortlisted candidates will be notified for interviews.