
A leading global energy service provider is seeking a Mid to Senior Python Backend Developer. This fully remote position involves building low-latency services and debugging complex systems, contributing to a high-performance team. Ideal candidates will have over 5 years of experience in Python development, knowledge of REST APIs, and proficiency in streaming technologies like Kafka and Flink. This role offers the chance to work on innovative solutions in the energy sector.
We are looking for the right people — people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world’s largest providers of products and services to the global energy industry.
We are building a platform that merges real-time physics, structured knowledge, and autonomous intelligence. We are looking for developers ready to tackle one (or more) of these challenges:
Ingestion: Handling massive streams of sensor data with minimal latency.
Intelligence: Making LLMs deterministic and reliable within robust multi-agent systems.
Context: Modeling complex ontologies to map thousands of physical assets.
You will join a high-performance team, contributing to the Core Backend while focusing on one of the following distinct tracks:
High-Performance APIs: Build low-latency Python services (FastAPI) that serve live data to the frontend and to AI models (a minimal sketch follows this list).
System Reliability: Debug complex concurrency issues and ensure production reliability in distributed systems.
Rapid Delivery: Adopt a "deliver fast" mentality without compromising on code quality, testing, or API design standards.
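To give a concrete feel for the High-Performance APIs track, here is a minimal sketch of a FastAPI endpoint serving the latest reading for a physical asset. The endpoint path, the SensorReading model, and the in-memory store are illustrative assumptions, not part of the actual platform.

# Minimal sketch: a FastAPI service exposing the latest reading for an asset.
# Endpoint name, model fields, and the in-memory store are hypothetical.
from datetime import datetime, timezone

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="live-data-api")


class SensorReading(BaseModel):
    asset_id: str
    value: float
    recorded_at: datetime


# Stand-in for a real store (e.g. a TimescaleDB query or a Kafka-backed cache).
_latest: dict[str, SensorReading] = {
    "pump-42": SensorReading(
        asset_id="pump-42",
        value=73.4,
        recorded_at=datetime.now(timezone.utc),
    )
}


@app.get("/assets/{asset_id}/latest", response_model=SensorReading)
async def latest_reading(asset_id: str) -> SensorReading:
    """Return the most recent reading for an asset, or 404 if it is unknown."""
    reading = _latest.get(asset_id)
    if reading is None:
        raise HTTPException(status_code=404, detail="unknown asset")
    return reading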
Possible paths to work on the project (you can contribute to one or more):
Build Streaming Pipelines: Design scalable services using Kafka and Spark/Flink to process raw sensor data in real time (see the sketch after this list).
Time-Series Optimization: Optimize database schemas (TimescaleDB) to enable fast historical data retrieval and implement algorithmic checks to validate sensor readings.
Build Autonomous Agents: Deploy stateful agents (using LangGraph) that plan tasks, query Knowledge Graphs, and execute tools without hallucinating.
Advanced RAG: Build Graph-RAG pipelines that combine semantic search with structured knowledge traversal for grounded answers.
Knowledge Graph Engineering: Design domain ontologies in Neo4j, defining relationships between assets, documents, and time-series data.
Search Infrastructure: Implement Hybrid Retrieval logic combining Vector Search, Full-Text Search, and Graph traversal.
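As a flavour of the streaming-pipeline path, the following sketch consumes raw sensor readings from Kafka and applies a simple range check before handing them to a downstream sink. The topic name, field names, range bounds, and the sink function are hypothetical placeholders.

# Minimal sketch: consume raw sensor readings from Kafka, drop values outside
# a plausible range, and hand valid readings to a downstream sink.
import json

from kafka import KafkaConsumer  # pip install kafka-python

VALID_RANGE = (-50.0, 250.0)  # assumed plausible bounds for this sensor type

consumer = KafkaConsumer(
    "raw-sensor-readings",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)


def sink(reading: dict) -> None:
    # Stand-in for a TimescaleDB insert or a downstream Kafka topic.
    print(f"accepted {reading['asset_id']}: {reading['value']}")


for message in consumer:
    reading = message.value
    low, high = VALID_RANGE
    if low <= reading.get("value", float("nan")) <= high:
        sink(reading)
    # Out-of-range or malformed readings would go to a dead-letter topic in a
    # real pipeline; here they are simply skipped.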
We use a modern, high-performance stack. You should be proficient in the Core and deeply knowledgeable in your chosen track.
Core Backend: Python 3.12+ (FastAPI, Pydantic), Polars, Docker, Kubernetes.
Streaming: Apache Kafka, Flink, Spark Streaming, TimescaleDB, Tiger Data.
GenAI: LangGraph, LangChain, LiteLLM, Azure OpenAI / Anthropic / Local SLMs.
Graph/Data: Neo4j (Cypher), PostgreSQL (pg_vector, Full-Text Search).
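For the Graph/Data portion of the stack, here is a minimal sketch of working with Neo4j and Cypher from Python: linking an asset node to a document node in a toy ontology. Node labels, the relationship type, the connection URI, and the credentials are illustrative assumptions.

# Minimal sketch: connect to Neo4j with the official Python driver and link an
# asset to a document in a toy ontology.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

cypher = """
MERGE (a:Asset {id: $asset_id})
MERGE (d:Document {id: $doc_id})
MERGE (a)-[:DESCRIBED_BY]->(d)
RETURN a.id AS asset, d.id AS document
"""

with driver.session() as session:
    record = session.run(cypher, asset_id="pump-42", doc_id="manual-007").single()
    print(record["asset"], "->", record["document"])

driver.close()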
Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.
Fully remote position.
Requisition Number: 205554
Experience Level: Mid to Senior
Job Family: Engineering / Science / Technology
Product Service Line: Landmark Software & Services
Full Time / Part Time: Full Time
Employee Group: Temporary