Gigster Inc. is looking for a part-time Data Engineer to design and optimize data pipelines using technologies such as Apache Spark and Kafka. The role involves integrating data from public APIs and collaborating with cross-functional teams to enhance data processing for NLP models. This short-term contract offers flexible project choice and competitive pay rates.
Do you want to work on cutting-edge projects with the world’s best IT engineers? Do you wish you could control which projects to work on and choose your own pay rate? Are you interested in the future of work and how the cloud will form teams? If so - the Gigster Talent Network is for you.
Our clients rely on our Network for two main areas: Software Development and Cloud Services. In some cases they need help building great new products; in others, they want our expertise in migrating, maintaining, and optimizing their cloud solutions.
At Gigster, whether working with entrepreneurs to realize ‘the next great vision’ or with Fortune 500 companies to deliver a big product launch, we build really cool enterprise software on cutting-edge technology.
We are seeking an experienced Data Engineer with deep expertise in data transformation at scale, particularly in integrating and processing data from third-party public APIs. This role is critical to enhancing and maintaining data pipelines that feed into Natural Language Processing (NLP) models.
Responsibilities:
Design, build, and optimize scalable ETL/ELT data pipelines using Apache Spark, Apache Kafka, and orchestration tools such as Prefect or Airflow
Integrate external data sources and public APIs with internal data systems
Work with large-scale datasets to support NLP model training and inference
Analyze existing pipelines and recommend enhancements for performance, reliability, and scalability
Collaborate with cross-functional teams, including data scientists and ML engineers
Own the end-to-end engineering process, from planning and technical design to implementation
Regularly report progress and outcomes to client stakeholders
Requirements:
Proficiency in Python and experience with data transformation and data engineering best practices
Strong experience with Apache Spark, Apache Kafka, and Google Cloud Platform (GCP)
Hands-on experience with workflow orchestration tools (e.g., Prefect, Airflow)
Demonstrated experience working with large datasets and real-time data processing
Experience building and maintaining ETL/ELT pipelines for analytical or machine learning use cases
Self-motivated, with excellent communication and project ownership skills
Nice to have:
Familiarity with financial services data or regulated data environments
Experience with Snowflake or Google BigQuery
Exposure to NLP workflows and data requirements for machine learning models