
Vision Tact is seeking an experienced Data Engineer to design, develop, and optimize the large-scale data pipelines and infrastructure that power our AI, automation, and analytics platforms.
You’ll work closely with data scientists, AI engineers, and software developers to ensure that data is clean, structured, and efficiently accessible for modeling, visualization, and real-time processing.
This role requires deep technical expertise in ETL processes, database architecture, API data integration, and cloud-based data management. You’ll play a critical role in enabling Vision Tact’s mission to deliver intelligent, data-driven solutions across industries.
Key Responsibilities:
Design and develop data pipelines for ingestion, transformation, and integration across multiple data sources (APIs, databases, IoT, GIS, etc.).
Implement and maintain ETL/ELT frameworks for structured and unstructured datasets.
Build and manage data warehouses and lakes to support analytics and AI initiatives.
Ensure data quality, consistency, and reliability through validation and monitoring processes.
Work with AI and ML engineers to prepare datasets for training, testing, and deployment.
Collaborate with DevOps teams for cloud-based data infrastructure and CI/CD deployment.
Implement data governance, access control, and versioning standards.
Optimize performance for high-volume data storage, retrieval, and transformation.
Required Skills:
Programming: Python, SQL, Scala, or Java.
Data Pipelines: Apache Airflow, Luigi, or Prefect.
Databases: PostgreSQL, MySQL, MongoDB, Cassandra, or BigQuery.
Big Data Technologies: Apache Spark, Hadoop, Kafka.
ETL Tools: Talend, Fivetran, dbt, or custom ETL frameworks.
Cloud Platforms: AWS (Glue, Redshift, S3), GCP (Dataflow, BigQuery), Azure Data Factory.
Version Control: Git, GitHub, or Bitbucket.
Soft Skills: Logical thinking, detail orientation, and cross-team communication.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
Minimum 3 years of experience in data engineering, preferably within AI, automation, or analytics domains.
Demonstrated experience with ETL pipeline development and data infrastructure design.
Hands-on experience with SQL and big data processing frameworks.
Certification in AWS Data Engineering, Google Cloud Professional Data Engineer, or equivalent preferred.