About the Opportunity
Driscoll’s is seeking a Data Services Developer to turn raw data into business insights by building data pipelines and data marts that serve Tableau dashboards informing key decisions across the organization. The Data Services Developer will analyze, interpret, and organize large volumes of data from source systems including Oracle ERP and other boundary applications. In this position, the Data Services Developer will demonstrate an in-depth understanding of AWS services such as Glue, S3, Redshift, Athena, Lambda, and Step Functions, and will recommend industry standards and trends for stabilizing the data pipeline workstream. They will work with Sales, Finance, Supply Chain, and other business teams, alongside IT, to build these data pipelines.
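By way of illustration only (not Driscoll’s actual environment), the sketch below shows the shape of the kind of Glue PySpark job described above: read a raw ERP extract registered in the Glue Data Catalog, apply a light transformation, and write curated Parquet to S3 where Athena or Redshift Spectrum can query it. All database, table, bucket, and column names are hypothetical placeholders.

```python
# Minimal AWS Glue PySpark job sketch. All database, table, bucket, and
# column names below are hypothetical placeholders for illustration.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw ERP extract registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="erp_raw", table_name="sales_orders"
)

# Light cleanup: drop rows with a null business key, normalize a column name.
orders_df = (
    orders.toDF()
    .dropna(subset=["order_id"])
    .withColumnRenamed("ORDER_DT", "order_date")
)

# Write curated Parquet to S3, partitioned so Athena/Redshift Spectrum
# queries can prune by date.
orders_df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/sales_orders/"
)

job.commit()
```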
Responsibilities
- Collaborate with functional leaders, business stakeholders, IT, and project teams to create and manage data pipelines and analytics solutions using SQL, Python, and AWS services.
- Document detailed data pipeline requirements as functional specifications to be used during the progressive build cycles of the implementation.
- Work closely with IT team leads, senior business leaders, and users to determine data pipeline requirements.
- Effectively translate complex business requirements into technical requirements and assist in drafting functional design documents.
- Manage technical delivery resources supporting data pipeline development, testing, and deployment activities in an onshore/offshore model.
- Work with the Security and Compliance team to define and build the role and privilege structure that controls data pipeline access.
- Design, develop, and maintain ongoing KPIs, metrics, data pipelines, analyses, and dashboards that drive key business decisions (see the sketch following this list).
- Monitor, respond to, and resolve tickets and issues submitted by users, including performing root cause analysis (RCA) on critical tickets and incidents.
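As referenced in the list above, here is a minimal sketch of how a KPI feeding a Tableau dashboard might be materialized with Athena through boto3. The database, table, column names, and S3 locations are assumptions made for illustration only.

```python
# Hypothetical sketch: materialize a weekly-sales KPI table with an Athena
# CTAS query via boto3. Names and locations below are illustrative only.
import boto3

athena = boto3.client("athena", region_name="us-west-2")

response = athena.start_query_execution(
    QueryString="""
        CREATE TABLE reporting.weekly_sales_kpi
        WITH (format = 'PARQUET') AS
        SELECT date_trunc('week', order_date) AS week_start,
               SUM(gross_amount)              AS gross_sales
        FROM curated.sales_orders
        GROUP BY 1
    """,
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Athena query started:", response["QueryExecutionId"])
```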
Candidate Profile
- Bachelor's degree in Information Technology, Business Analytics, or a related field.
- 2+ years of experience implementing data engineering pipelines using Python and Spark.
- Strong hands-on experience with AWS data engineering tools such as Glue, Redshift, Athena, and Lambda.
- Proficient in writing advanced SQL and Python scripts for data extraction, transformation, and processing (a minimal illustration follows this list).
- Skilled in performance tuning, data quality validation, and pipeline troubleshooting across distributed systems.
- Must be self-motivated and able to work independently in a fast-paced, agile team environment.
- Excellent verbal and written communication skills, strong attention to detail, and the ability to manage multiple priorities and meet deadlines.
- Solid understanding of data modeling, data flow, and architecture best practices in AWS.
- Experience working with financial or supply chain datasets and transforming data for analytics and reporting.
- Familiarity with AWS-based data governance, lineage tracking, and access control mechanisms.
- Strong coding fundamentals and experience developing modular, reusable, and scalable pipelines.
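As referenced in the profile above, here is a hypothetical sketch of the kind of data-quality validation a pipeline might run before publishing data. The S3 path, key column, and failure policy are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical data-quality gate in PySpark; the S3 path, key column, and
# failure policy are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/sales_orders/")

# Basic checks: null business keys and duplicate business keys.
total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Fail the step loudly so orchestration (e.g., Step Functions) can halt
# downstream loads instead of publishing bad data.
if null_keys or duplicates:
    raise ValueError(
        f"DQ failure: {null_keys} null keys, {duplicates} duplicates in {total} rows"
    )
```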