
Content Developer - Python / Datatrack

iamneo - An NIIT Venture

Coimbatore District

On-site

INR 10,00,000 - 15,00,000

Full time

Today

Job summary

A technology firm in Coimbatore is seeking a skilled Python - Content Developer. You will design, develop, and maintain scalable big data solutions, ensuring efficient data processing, storage, and analytics. The role involves working with cloud platforms and modern data frameworks, particularly Python, Apache Spark, and Hadoop. Strong problem-solving and communication skills are essential for success in this agile environment.

Qualifications

  • Proficiency in Python, Java, or Scala for data processing.
  • Hands-on experience with Apache Spark, Hadoop, Kafka, Flink, and Storm.
  • Strong expertise in cloud-based data solutions (AWS / Google / Azure).

Responsibilities

  • Build efficient data processing solutions using Python and Apache Spark.
  • Implement data lakes and data warehouses.
  • Monitor and troubleshoot data pipelines.

Skills

Python
Cloud technologies
Apache Spark
Hadoop
Kafka
SQL
NoSQL
Data Lake

Education

B.E./B.Sc./M.Sc./MCA

Tools

Apache Airflow
Docker
Kubernetes

Job description

We are looking for a highly skilled Python - Content Developer with expertise in cloud technologies to join our team. The ideal candidate will be responsible for designing, developing, and maintaining scalable big data solutions, ensuring efficient data processing, storage, and analytics. This role involves working with distributed systems, cloud platforms, and modern data frameworks to support real-time and batch data pipelines.

Key Responsibilities
  • Work with Python, Apache Spark, Hadoop, and Kafka to build efficient data processing solutions (a minimal PySpark sketch follows this list).
  • Implement data lakes, data warehouses, and streaming architectures.
  • Optimize database and query performance for large-scale datasets.
  • Collaborate with SMEs, clients, and software engineers to deliver content.
  • Ensure data security, governance, and compliance with industry standards.
  • Automate workflows using Apache Airflow or other orchestration tools.
  • Monitor and troubleshoot data pipelines to ensure reliability and scalability.
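As a rough illustration of the batch-processing work described in this list, here is a minimal PySpark sketch. The input path, output path, and column names (user_id, amount, status) are hypothetical placeholders, not details taken from this posting.

# Minimal PySpark batch-processing sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

# Read a day's worth of raw events from a data lake location (placeholder path).
events = spark.read.json("s3a://example-data-lake/raw/events/2024-01-01/")

# Aggregate spend per user, keeping only completed events.
daily_totals = (
    events
    .filter(F.col("status") == "completed")
    .groupBy("user_id")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("event_count"))
)

# Write the result back to the lake in a columnar format for downstream analytics.
daily_totals.write.mode("overwrite").parquet(
    "s3a://example-data-lake/curated/daily_totals/2024-01-01/"
)

spark.stop()

A production pipeline would typically add schema enforcement, partitioning, and error handling on top of a skeleton like this.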
Required Qualifications
  • Minimum educational qualifications: B.E., B.Sc., M.Sc., MCA
  • Experience requirements:
  • Proficiency in Python, Java, or Scala for data processing.
  • Hands-on experience with Apache Spark, Hadoop, Kafka, Flink, and Storm.
  • Hands-on experience with SQL and NoSQL databases.
  • Strong expertise in cloud-based data solutions (AWS / Google / Azure).
  • Hands-on experience in building and managing ETL/ELT pipelines.
  • Knowledge of containerization and orchestration (Docker, Kubernetes).
  • Hands-on experience with real-time data streaming and serverless data processing.
  • Familiarity with machine learning pipelines and AI-driven analytics.
  • Strong understanding of CI/CD & ETL pipelines for data workflows.
Technical Skills
  • Big Data Technologies: Apache Spark, Hadoop, Kafka, Flink, Storm
  • Cloud Platforms: AWS / Google / Azure
  • Programming Languages: Python, Java, Scala, SQL, PySpark
  • Data Storage & Processing: Data Lakes, Warehouses, ETL/ELT Pipelines
  • Orchestration: Apache Airflow, Prefect, Dagster (an Airflow DAG sketch follows this list)
  • Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra)
  • Security & Compliance: IAM, Data Governance, Encryption
  • DevOps Tools: Docker, Kubernetes, Terraform, CI/CD Pipelines
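As a rough illustration of workflow automation with Apache Airflow (listed under Orchestration above), here is a minimal DAG sketch. The DAG id, task names, and helper functions are hypothetical, and the schedule parameter assumes Airflow 2.4 or later.

# Minimal Apache Airflow DAG sketch for a daily extract -> transform -> load flow.
# Task names and helper functions are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system or object store.
    print("extracting raw data")


def transform(**context):
    # Placeholder: clean and aggregate the extracted records.
    print("transforming data")


def load(**context):
    # Placeholder: write curated data to a warehouse or data lake table.
    print("loading curated data")


with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run strictly in order: extract, then transform, then load.
    extract_task >> transform_task >> load_task

The same extract-transform-load ordering could equally be expressed in Prefect or Dagster, which this posting lists as alternative orchestration tools.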
Soft Skills
  • Strong problem-solving and analytical skills
  • Excellent communication and collaboration abilities
  • Ability to work in an agile, fast-paced environment
  • Attention to detail and data accuracy
  • Self-motivated and proactive