Software Engineer Id45371

AgileEngine

Remote

ARS 100.689.000 - 129.458.000

Full-time

Posted 10 days ago

Job Summary

A leading software development company based in San Miguel de Tucumán, Argentina is seeking a Senior Software Engineer specializing in Data Engineering. The role involves developing advanced trade surveillance technology and improving data pipelines using Java and modern tools. Ideal candidates have a BSc in Computer Sciences and at least 3 years of programming experience. This position offers significant professional growth, competitive compensation, and flexibility in work arrangements.

Benefits

Professional growth opportunities
Competitive compensation
Flextime

Requirements

  • 3+ years of experience with Java.
  • Experience in data pipeline development.
  • Proven experience with relational and non-relational DBs.

Responsibilities

  • Design and develop microservices for data processing.
  • Scale the data pipeline to billions of events.
  • Optimize queries for data warehouses.

Skills

Java
Data Engineering
SQL
Monitoring Systems
Object-Oriented Development
Communication Skills

Education

BSc. in Computer Sciences

Tools

ClickHouse
Spark
Kafka
Prometheus
Grafana

Job Description

AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

WHY JOIN US

If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

ABOUT THE ROLE

Join our ambitious team as a Senior Software Engineer specializing in Data Engineering to develop cutting-edge trade surveillance technology, protecting investors and ensuring market integrity. You will have a profound impact by designing and scaling robust, highly available data pipelines and microservices, tackling complex data challenges using modern tools. This unique opportunity offers significant influence and professional growth in a dynamic, collaborative environment that values accountability and a self-starter attitude.

WHAT YOU WILL DO
  • Design and develop the data team’s microservices – Java services running on K8s;
  • Scale our data pipeline to support processing billions of events in both low-latency real-time and T+1 batch flows, using technologies like ClickHouse, Spark, and Kafka;
  • Tackle data duplication, velocity, schema adherence (and schema versioning), high availability, data governance, and more;
  • Develop and maintain our data pipeline, written mostly in Java and running on K8s in a microservice architecture;
  • Plan and communicate integrations with other teams that consume the data and use it to create insights;
  • Continuously improve how data is stored and served, refining queries and data formats so the data is optimized for consumption by a variety of clients;
  • Optimize queries in our data warehouses, ensuring data completeness and reliability at scale.
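As a small illustrative sketch of one of the pipeline concerns listed above – handling data duplication while preserving event order – consider the following Java example. The class and method names are hypothetical and not taken from the posting; this is only a minimal illustration, not the company's actual implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: drop duplicate events by id while keeping the
// first occurrence and the original arrival order.
public class EventDeduplicator {

    // Returns the event ids with duplicates removed; LinkedHashSet
    // preserves insertion order, so the first occurrence of each id wins.
    public static List<String> dedupe(List<String> eventIds) {
        Set<String> seen = new LinkedHashSet<>(eventIds);
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        List<String> out = dedupe(Arrays.asList("e1", "e2", "e1", "e3"));
        System.out.println(out); // prints [e1, e2, e3]
    }
}
```

In a real streaming pipeline, deduplication would typically be keyed on a stable event id with a bounded window or external store rather than an in-memory set, but the ordering-preserving idea is the same.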
MUST HAVES
  • BSc. in Computer Sciences from a top university, or equivalent;
  • Strong background as a software engineer, with 3+ years of experience with Java;
  • Experience in data engineering and data pipeline development;
  • Proven experience with relational and non-relational DBs, and expert proficiency in SQL and query optimization – preferably in ClickHouse;
  • Experience with monitoring systems (e.g., Prometheus, Grafana, Zabbix, or Datadog);
  • Experience in object-oriented development and strong software engineering foundations;
  • Curiosity, ability to work independently and proactively identify solutions;
  • Excellent verbal and written communication skills in a remote environment;
  • Upper-intermediate English level.
NICE TO HAVES
  • Experience working in low-latency, real-time systems processing billions of events a day;
  • Experience with data-engineering cloud technologies such as Apache Airflow, K8s, ClickHouse, Snowflake, Redis, Spark, caching technologies, and/or Kafka.
PERKS AND BENEFITS
  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
  • A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.
  • Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office – whatever makes you the happiest and most productive.
