We help companies gain a competitive edge by delivering customized AI solutions. Our mission is to empower our clients to unlock the full potential of AI.
We specialize in key technologies such as LLMs & RAG, MLOps, Edge Solutions, Computer Vision, and Natural Language Processing.
Our team of 120 world-class AI experts has worked on 200+ commercial and R&D projects with companies such as Unstructured, Google, Brainly, DocPlanner, B-Yond, Zebra Technologies, Hexagon, and many more.
What we believe in
- Team Strength – sharing and exchanging knowledge is key to our daily work
- Accountability – we take responsibility for the tasks entrusted to us so that ultimately the client receives the best possible quality
- Balance – we value work-life balance
- Commitment – we want you to be fully part of the team
- Openness – we don’t want you to be locked into one solution; we look for alternatives and explore new possibilities
Responsibilities
- Design and implementation of modern, scalable ML infrastructure (cloud-native or on-premises) to support both the day-to-day work of teams and the deployment of pipelines and models.
- Collaboration with Data Scientists and Machine Learning Engineers on the architecture of MLOps solutions that meet functional and performance requirements.
- Implementing and ensuring compliance with MLOps best practices in automation (e.g. CI/CD), monitoring, versioning (code, data, models), and infrastructure.
- Performing code reviews for other MLOps Engineers as well as for other roles on the team.
- Delivering high-quality code and infrastructure, properly tested and aligned with project requirements (a brief illustrative sketch follows this list).
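To give a flavour of the kind of deliverable these responsibilities describe, below is a minimal, hypothetical sketch of a Python model-serving endpoint instrumented for monitoring. The model artifact, metric names, and endpoint are illustrative assumptions only, not a description of any specific client project.

```python
# Hypothetical sketch: a minimal model-serving endpoint with basic
# Prometheus-style monitoring. Paths and names are illustrative only.
import pickle

from fastapi import FastAPI
from prometheus_client import Counter, Histogram, make_asgi_app
from pydantic import BaseModel

app = FastAPI()
# Expose /metrics for a Prometheus scraper alongside the API itself.
app.mount("/metrics", make_asgi_app())

PREDICTIONS = Counter("predictions_total", "Number of predictions served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency")

# Assumed artifact produced by an upstream training pipeline.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    # Score a single record while tracking latency and throughput.
    with LATENCY.time():
        prediction = model.predict([features.values])[0]
    PREDICTIONS.inc()
    return {"prediction": float(prediction)}
```

In a real project the model would typically be pulled from a registry or object storage rather than a local pickle, and the service would be containerized and deployed through the CI/CD and Kubernetes tooling mentioned below.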
You must have
- Proficiency in the MLOps domain and awareness of its best practices, frameworks, and tools.
- Strong experience with cloud platforms (AWS, GCP, Azure) and the ability to design cloud-native applications using dedicated cloud services for serverless functions, batch processing, managed Kubernetes, relational databases, object storage, data warehouses, message buses, streaming, and serverless ML platforms.
- Experience deploying applications in Kubernetes environments (using tools such as Helm), as well as provisioning, administering, and troubleshooting existing Kubernetes clusters.
- Experience with Terraform or other Infrastructure-as-Code solutions (e.g. Pulumi); a short Pulumi-style sketch follows this list.
- Hands-on experience with (object-oriented) programming in Python, particularly in AI/ML-related use cases such as developing ML pipelines or serving models.
- Expertise in enhancing ML systems with proper automation (CI/CD, GitOps, GitHub Actions, ArgoCD), monitoring (CloudWatch, Prometheus, Evidently) and versioning (Git, DVC) tools.
- Solid understanding of machine learning (including deep learning and LLMs), software engineering and DevOps.
- Good understanding of Linux systems, essential for maintaining development and production environments.
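As a concrete illustration of the Infrastructure-as-Code item above, here is a minimal sketch using Pulumi's Python SDK to provision object storage for model artifacts. The resource names and tags are illustrative assumptions, and an equivalent setup could just as well be expressed in Terraform.

```python
# Hypothetical sketch: provisioning an S3 bucket for model artifacts
# with Pulumi's Python SDK. Names and tags are illustrative only.
import pulumi
import pulumi_aws as aws

# Versioned object storage for trained models and experiment outputs.
artifact_bucket = aws.s3.Bucket(
    "ml-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "mlops", "purpose": "model-artifacts"},
)

# Export the bucket name so pipelines and serving jobs can reference it.
pulumi.export("artifact_bucket_name", artifact_bucket.bucket)
```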
You may have
- Experience with distributed computing frameworks such as Ray (a short sketch follows this list).
- Experience designing and implementing data engineering solutions with tools such as Kinesis, Glue, Airflow, dbt, and Great Expectations.
- Understanding of data warehousing (e.g. Snowflake) and data streaming and processing technologies (e.g. Apache Kafka, Spark, SQL).
- Experience working with (non-)relational and vector databases (e.g. Pinecone).
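For the distributed-computing item above, here is a minimal Ray sketch showing how a batch of independent scoring tasks might be parallelised; the score() function is a placeholder rather than a real workload.

```python
# Hypothetical sketch: parallelising independent scoring tasks with Ray.
# The score() body stands in for real feature preparation and inference.
import ray

ray.init()  # on a cluster this would connect to an existing head node


@ray.remote
def score(record: dict) -> float:
    # Placeholder for feature preparation + model inference.
    return float(len(record))


records = [{"id": i, "payload": i * 2} for i in range(100)]
# Launch the tasks in parallel and gather the results.
futures = [score.remote(r) for r in records]
results = ray.get(futures)
print(f"scored {len(results)} records")
```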
We offer
- Opportunity to work on cutting-edge AI projects with a diverse range of clients and industries, driving solutions from development to production.
- Collaborative and supportive work environment, where you can grow and learn from a team of talented professionals.
- An opportunity to participate in conferences and workshops around the world.
- An opportunity to participate in Tech Talks (internal training and seminar sessions).
- Flexible working hours and remote work options.