We are looking for an experienced Senior Data Engineer to join our team. In this role, you will be responsible for developing and optimizing our data pipelines and infrastructure. You will continuously improve the quality of our data architecture by leveraging your technical expertise and attention to detail. A deep understanding of SQL, Python, and Big Data technologies (e.g., BigQuery, Hadoop, Spark, Kafka) is required. Experience with cloud data platforms such as GCP or AWS, as well as knowledge of ETL processes and database management systems, is a plus. Your ability to work independently, demonstrate team spirit, and stay up-to-date in a dynamic environment is essential.
Responsibilities
Designing and implementing data pipelines: Build efficient, scalable pipelines that collect, process, and store large volumes of structured and unstructured data from a variety of sources.
Developing and maintaining data infrastructure: Maintain data warehouses, data lakes, and databases that support data storage, processing, and analysis.
Ensuring data quality and integrity: Implement data validation and cleansing processes, monitor data accuracy and completeness, and resolve data quality issues (see the illustrative sketch after this list).
Collaborating with cross-functional teams: Work with data scientists, analysts, and product managers to understand their data needs and deliver solutions that meet their requirements.
Optimizing data performance: Tune database queries, design data models, and apply indexing and partitioning techniques.
Implementing data security and privacy measures: Protect sensitive data by encrypting it at rest and in transit and by managing user access and authentication.
Keeping up with new technologies: Follow industry trends in data engineering, such as cloud computing, big data, and machine learning, to identify opportunities for innovation and improvement.
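To give a concrete flavor of the data quality work described above, here is a minimal, illustrative Python sketch of a validation step in a pipeline; the dataset, column names, and checks are hypothetical and are not part of the formal requirements.

```python
# Illustrative only: a minimal data validation / cleansing step.
# The dataset, column names, and thresholds are hypothetical.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that fail basic quality checks and report how many were removed."""
    before = len(df)

    # Completeness: required fields must be present.
    df = df.dropna(subset=["order_id", "customer_id", "amount"])

    # Validity: order amounts must be positive.
    df = df[df["amount"] > 0]

    # Uniqueness: keep one row per order_id.
    df = df.drop_duplicates(subset=["order_id"])

    removed = before - len(df)
    print(f"validate_orders: removed {removed} of {before} rows")
    return df

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 1, 2, 3, None],
        "customer_id": ["a", "a", "b", "c", "d"],
        "amount": [10.0, 10.0, -5.0, 7.5, 3.0],
    })
    print(validate_orders(sample))
```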
Functional Fit
Bachelor's degree in Computer Science, Engineering, or a related field
Experience with cloud computing (AWS, GCP)
Proficient in Python and SQL
Experience with containers (Docker, Kubernetes)
Experience building and expanding data warehouses, data lakes, and transformation pipelines
Experience in the financial sector is a plus, but not a must