Title: Lead AWS Data Engineer
Mandatory skills: Python, PySpark, SQL, and AWS services
Years of experience: 10+ years
Target date: Looking for immediate joiners.
Level of interview: 2 technical rounds
Salary: INR 1,00,000 - 1,20,000/month in hand
Mode of work: Remote

AWS Data Engineer

We are seeking a skilled AWS Data Engineer with expertise in various AWS services. The ideal candidate will have hands-on experience with Lambda, Glue, SNS, SQS, Step Functions, PySpark, Python, Athena, CloudWatch, S3, and more. The successful candidate should also have working experience with various data file formats such as JSON, XML, CSV, and Parquet, proficiency in SQL, and experience with a visualization tool such as Looker or Power BI.

Responsibilities:
1. Develop and maintain robust data pipelines using AWS Glue for efficient ETL processes.
2. Implement serverless computing solutions with AWS Lambda to automate tasks and processes.
3. Utilize SNS and SQS for efficient messaging and event-driven architecture.
4. Design and orchestrate data workflows using AWS Step Functions.
5. Leverage PySpark, Python, and SQL for data processing, analysis, and transformation.
6. Implement and optimize queries using AWS Athena for efficient querying of large datasets.
7. Monitor and manage resources and applications using AWS CloudWatch.
8. Manage data storage and retrieval using AWS S3.
9. Work with various data file formats, including JSON, XML, CSV, TSV, and Parquet, and execute SQL queries as needed.
10. Utilize visualization tools such as Looker or Power BI for effective data representation.
11. Build end-to-end data pipelines, from conception to implementation, ensuring scalability and efficiency.
12. Apply hands-on experience with CI/CD tools such as Jenkins, GitLab/GitHub, Jira, Confluence, and other related tools.
13. Work with Delta Lake for efficient version control and data management.

Qualifications:
- 7+ years of experience as a Data Engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales).
- Proven experience as a Data Engineer with a strong focus on AWS services.
- Proficiency in Python, PySpark, SQL, and AWS services for data processing and analysis.
- Hands-on experience with AWS Lambda, Glue, SNS, SQS, Step Functions, Athena, CloudWatch, and S3.
- Practical experience working with JSON, XML, CSV, TSV, and Parquet file formats.
- Experience with visualization tools such as Looker or Power BI is a significant plus.
- Good understanding of serverless architecture and event-driven design.
- Hands-on experience with CI/CD tools, including Jenkins, GitLab/GitHub, Jira, Confluence, and other related tools.
- Comfortable learning and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and the ability to present results to non-technical audiences.
- Knowledge of business intelligence and analytical tools, technologies, and techniques.
- Experience with Terraform is a plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months

Experience:
- Total work: 10 years (Required)
- Data Engineer, Python: 7 years (Required)
- AWS services: 6 years (Required)
- SQL, PySpark: 6 years (Required)

Work Location: Remote