JOB TITLE: Data Engineer II
LOCATION: Onsite in Seattle, WA
DURATION: 8 months with potential extension
PAY RANGE: $58-68/hour
TOP 3 SKILLS:
- Experience with data modeling, warehousing, and building ETL pipelines
- Programming/scripting: Python/Java
- Database experience:
  - Redshift or Snowflake ideal
  - Experience with an MPP (Massively Parallel Processing) RDBMS
  - Ability to write SQL against those databases (see the sketch after this list)
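For a purely illustrative flavor of that last point, here is a minimal Python sketch of a Redshift load step. The table, S3 bucket, IAM role, and connection parameters are all hypothetical placeholders, and psycopg2 is just one common client for issuing SQL against Redshift.

import psycopg2

# Redshift COPY ingests an S3 partition in parallel across the cluster.
# Every name below (table, bucket, role ARN, endpoint) is a placeholder.
COPY_SQL = """
COPY analytics.daily_playback_events
FROM 's3://example-bucket/playback/2024-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

def load_daily_events() -> None:
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="warehouse",
        user="etl_user",
        password="...",  # in practice, fetched from a secrets manager
    )
    try:
        # The connection context manager commits on success, rolls back on error.
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)
    finally:
        conn.close()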
Job Description: Build the future of entertainment with us. Are you interested in shaping the future of movies and television? Do you want to define the next generation of how and what customers are watching?
Our client is a premium streaming service that offers customers a vast collection of TV shows and movies, all with the ease of finding what they love to watch in one place. We offer customers thousands of popular movies and TV shows, from Originals and Exclusive content to exciting live sports events. We also offer our members the opportunity to subscribe to add-on channels, which they can cancel at any time, and to rent or buy new release movies and TV box sets on the Store. Our client is a fast-paced growth business - available in over 240 countries and territories worldwide. The team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do. If this sounds exciting to you, please read on.
The team presents opportunities to work on very large data sets in one of the world's largest and most complex data warehouse environments. Our data warehouse is built on AWS cloud technologies such as Redshift, Kinesis, Lambda, S3, and MWAA, performing ETL on multiple terabytes of relational data in a matter of hours. Our team is serious about great design and about redefining best practices with a cloud-based approach to scale, resilience, and automation.
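Because MWAA is Amazon's managed Apache Airflow, day-to-day pipeline work in this stack typically means writing DAGs. A toy sketch of the extract-then-load pattern follows; the DAG id, schedule, and task bodies are assumptions for illustration, not the team's actual pipeline.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3() -> None:
    """Land the day's raw events in S3 (placeholder body)."""

def load_to_redshift() -> None:
    """Run a Redshift COPY over the new S3 partition (placeholder body)."""

with DAG(
    dag_id="daily_playback_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load  # load runs only after extraction succeeds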
Key job responsibilities
You'll solve data warehousing problems on a massive scale and apply cloud-based AWS services to solve challenging problems in big data processing, data warehouse design, self-service data access, automated data quality detection, and infrastructure as code. You'll be part of the team that focuses on automation and optimization for all areas of DW/ETL maintenance and deployment.
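As one hedged illustration of what "automated data quality detection" can look like in practice, below is a row-count guard an orchestrator could run right after a load; the table name and threshold are invented for the example.

def check_min_row_count(cursor, table: str, min_rows: int) -> None:
    # `table` should come from trusted pipeline config, never user input,
    # since it is interpolated directly into the SQL text.
    cursor.execute(f"SELECT COUNT(*) FROM {table};")
    (count,) = cursor.fetchone()
    if count < min_rows:
        # Raising makes the orchestrator (e.g., Airflow) mark the task failed.
        raise ValueError(f"{table}: expected >= {min_rows} rows, found {count}")

# Hypothetical usage after a daily load:
# check_min_row_count(cur, "analytics.daily_playback_events", 1_000_000)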
You'll work closely with global business partners and technical teams on many non-standard and unique business problems and use creative problem solving to deliver data products that underpin strategic decision making, from content selection to the on-platform customer experience. You'll develop efficient systems and tools to process data, using technologies that can scale to seasonal spikes and easily accommodate future growth. Your work will have a direct impact on day-to-day decision making across the company.
Basic qualifications
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with SQL
- Experience as a Data Engineer or in a similar role
Preferred qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
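For the Spark/EMR items above, the work commonly takes the form of PySpark jobs. A minimal rollup sketch follows; the input path, column names, and output location are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-viewer-rollup").getOrCreate()

# Read raw events (placeholder path), count distinct viewers per day,
# and write the rollup back to S3.
events = spark.read.parquet("s3://example-bucket/playback/")
daily = (
    events
    .groupBy(F.to_date("event_ts").alias("day"))
    .agg(F.countDistinct("customer_id").alias("unique_viewers"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily/")
spark.stop()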
BENEFITS SUMMARY: Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate or annual salary only, unless otherwise stated. In addition to base compensation, full-time roles are eligible for Medical, Dental, Vision, Commuter and 401K benefits with company matching.