
A leading retail innovator is seeking a Senior Big Data Engineer to join their Customer Identity team to ensure customer data accuracy and accessibility. This role involves designing and managing scalable data services that support personalized marketing and customer experiences. Candidates should have 5+ years of experience, strong Python and GCP skills, and hands-on experience with Apache Airflow, Spark, and Kafka. This full-time remote position offers competitive pay, paid vacations, and work-life balance.
Native or bilingual English is required for this role (reading, writing, and speaking).
Please upload your CV/resume in English.
Monthly salary: $4,000 - $5,000 USD
Along with our partner, we are seeking a Senior Big Data Engineer to join the Customer Identity (CI) team, a group central to the enterprise that focuses on ensuring customer data is accurate, consistent, and readily accessible. This critical data is the foundation for essential business processes, including personalized marketing, analytical reporting, and seamless online and in-store checkout and customer service experiences.
The team manages a wide array of high-performance, scalable batch and real-time services. These services support numerous customer interactions, such as in-store purchases, online orders, rewards program enrollments, and activations from various marketing channels (email, direct mail, SMS, push notifications, etc.). By leveraging modern development practices, tools, and technology platforms, the CI team builds capabilities that empower customer-facing and checkout teams to deliver personalized, relevant, and frictionless experiences, ensuring customers realize value with every interaction.
The team operates across every major US time zone, from Pacific to Eastern. Core hours are 10 am - 4 pm CST, with up to two hours of flexibility in either direction depending on location.
Tech stack: Python, Apache Airflow, Apache Spark, Spark SQL, Spark Streaming, Kafka, relational databases, NoSQL databases, GCP BigQuery, BigTable, Dataproc, RESTful APIs.
Contract length: 3 months, with an end date of March 31, 2026, and the possibility of extension based on team needs.