Join a forward-thinking company as a Software Engineer in Data Lake Engineering, where you'll contribute to innovative Data Lakehouse solutions that drive AI and machine learning initiatives. This exciting role offers the chance to work with cutting-edge technologies like AWS and Azure, while collaborating with a talented team of engineers. You'll tackle challenging problems that have a significant business impact, gain hands-on experience with containerization and CI/CD pipelines, and continuously learn from your peers. If you're eager to grow your career in a dynamic environment and make a real difference, this opportunity is perfect for you.
Software Engineer 2, Data Lake Engineering
Location: Brazil - Remote | Time type: Full time | Posted: 30+ days ago | Job requisition ID: R16763
About the Team/Role
We are looking for a highly motivated, high-potential entry-level engineer to join our Data Lake engineering team, where you'll make significant contributions to our Data Lakehouse product and grow your career.
This is a really exciting time to be on the Data Lake engineering team at WEX. The team has the opportunity to create a brand-new Data Lakehouse that powers our Data, AI, and machine learning initiatives. We work with cutting-edge technologies like AWS, Azure, Docker, and Kubernetes to create a dynamic environment that supports the development and deployment of AI models at scale.
We have challenging problems with huge business-impact potential for you to work on and grow from. We also have a strong team of highly talented and skillful engineers and leaders to support, guide, and coach you.
If you dream of becoming a strong engineer who can solve tough problems, generate big impact, and grow fast, this is a great opportunity for you!
How you’ll make an impact
Collaborate with partners and stakeholders to understand the requirements and key challenges of our Data and AI development teams.
Learn and practice designing, building, and maintaining a Data Lakehouse on AWS and Azure to support Data and AI/ML workloads.
Gain hands-on experience with containerization technologies (Docker) and orchestration platforms (Kubernetes).
Learn and implement CI/CD pipelines for automating the deployment and management of AI infrastructure.
Develop and maintain monitoring and alerting systems to ensure the health and performance of production Data Lakehouse infrastructure (see the illustrative sketch after this list).
Learn to analyze system performance data to identify bottlenecks and opportunities for improvement.
Learn from your peers and foster continuous learning of new cloud technologies and best practices.
Learn our team’s processes and best practices and apply them to assigned tasks with help from peers and your manager. Make sure you understand the underlying problems each task is meant to solve and that your implementations address those problems in a reliable and sustainable way.
Partner with team members in development and problem-solving.
Proactively seek reviews from senior engineers on your work to ensure quality.
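To give a concrete, purely illustrative flavor of the monitoring and alerting work described above, here is a minimal Python sketch. The bucket, prefix, and CloudWatch namespace are hypothetical placeholders, not part of this posting: the script measures how stale the newest object under a data-lake partition is and publishes that number as a CloudWatch metric, which an alarm could then page on.

```python
"""Hypothetical example only: bucket, prefix, and namespace names are made up."""

from datetime import datetime, timezone

import boto3  # AWS SDK for Python

S3_BUCKET = "example-datalake-raw"   # hypothetical data-lake bucket
S3_PREFIX = "events/latest/"         # hypothetical partition prefix
NAMESPACE = "ExampleDataLake"        # hypothetical CloudWatch namespace


def hours_since_last_object(bucket: str, prefix: str) -> float:
    """Return hours elapsed since the newest object under the prefix was written."""
    s3 = boto3.client("s3")
    newest = None
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest:
                newest = obj["LastModified"]
    if newest is None:
        return 24.0 * 365  # nothing has landed yet: report a very stale value
    return (datetime.now(timezone.utc) - newest).total_seconds() / 3600.0


def publish_staleness_metric(hours: float) -> None:
    """Publish the staleness measurement; a CloudWatch alarm on the metric handles alerting."""
    boto3.client("cloudwatch").put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[{"MetricName": "PartitionStalenessHours", "Value": hours, "Unit": "None"}],
    )


if __name__ == "__main__":
    publish_staleness_metric(hours_since_last_object(S3_BUCKET, S3_PREFIX))
```

In practice a check like this would typically run on a schedule (for example, from a Kubernetes CronJob) and be paired with a CloudWatch alarm on the published metric.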
Experience you’ll bring
Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent demonstrable deep understanding, experience, and capability.
A Master's or PhD degree in Computer Science (or related field) is a plus.
Good experience in software engineering or cloud infrastructure, with a focus on supporting AI/ML workloads.
Demonstrably strong programming skills in a strongly typed 3GL such as Java, Python, C/C++, or Golang.
Experience in Spark or Flink is a big plus.
Strong understanding of cloud platforms (AWS and Azure), including services relevant to AI/ML (e.g., EC2, S3, EKS, Azure ML, AKS).
Hands-on experience with containerization (Docker) and container orchestration (Kubernetes).
Experience building and managing CI/CD pipelines for infrastructure and ML model deployment, using tools such as Jenkins or GitLab CI/CD.
Experience with infrastructure monitoring and alerting tools (e.g., Prometheus, Grafana, CloudWatch, Azure Monitor).
Strong scripting skills (Python, Bash) for automation and configuration management.
Excellent problem-solving skills, with the ability to analyze complex systems and identify performance bottlenecks.
Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications:
Experience with CI/CD tools and processes.
Familiarity with infrastructure monitoring and alerting tools.