Find the latest job opportunities in AI and tech.
enliteAI is an Artificial Intelligence technology provider specializing in Reinforcement Learning and Computer Vision/geoAI, offering AI solutions and services.
An international, product- and innovation-driven team with deep expertise in Computer Vision and Reinforcement Learning, as well as distributed training, data engineering, MLOps and cloud architectures.
Working with the latest technologies at the interface between research and industry (enliteAI is an Organizing Node of the ELISE EU research network).
Personal growth: Receive continuous training and education opportunities, plus budget and time allotment for individual R&D projects, training or conference participation.
Flexible work models: Remote work, an office in Vienna's 1st district and minimal core hours.
Experience Requirements:
- 3+ years of work experience in data-driven environments.
- Passionate about everything related to AI, Machine Learning and Computer Vision.
Other Requirements:
- Python programming skills, with an emphasis on data engineering and distributed processing (e.g. Flask, Postgres, SQLAlchemy, Airflow).
- Proficiency in working with databases and data storage solutions.
- Experience with Kubernetes and Docker (Helm, Terraform, Amazon Elastic Kubernetes Service).
- Familiarity with cloud environments (AWS, Google Cloud, Azure).
- Used to mature workflows in software development (Git, issue management, documentation, unit testing, CI/CD).
- Fluent in English, both spoken and written.
- Valid work permit for Austria.
Responsibilities:
- Collaborate with our machine learning and backend engineers to design and manage scalable processing pipelines in production environments (a minimal pipeline sketch follows this list).
- Implement robust processing flows and I/O-efficient data structures, powering use cases such as road surface analysis, sign detection and localization on large volumes of point cloud and imagery data.
- Design and manage the relevant database schemas in close collaboration with backend engineering.
- Create and maintain comprehensive documentation of the processing pipelines, database schemas, configuration and software architecture.
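For illustration only, a minimal sketch of how a daily processing pipeline could be wired up with the Airflow/SQLAlchemy stack named in the requirements, assuming Airflow 2.x; the DAG, task names and helper functions (extract_tiles, detect_signs, load_results) are hypothetical stand-ins, not enliteAI's actual pipeline.

    # Hypothetical daily imagery-processing DAG (Airflow 2.x assumed).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_tiles(**context):
        """Pull the day's point-cloud/imagery tiles from storage (stub)."""
        print("extracting tiles for", context["ds"])

    def detect_signs(**context):
        """Run sign detection over the extracted tiles (stub)."""
        print("running detection")

    def load_results(**context):
        """Write detections into the shared Postgres schema (stub)."""
        print("loading results")

    with DAG(
        dag_id="imagery_processing",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_tiles", python_callable=extract_tiles)
        detect = PythonOperator(task_id="detect_signs", python_callable=detect_signs)
        load = PythonOperator(task_id="load_results", python_callable=load_results)

        extract >> detect >> load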
Scylla is an AI-powered video analytics platform for security, offering gun detection, face recognition, and more.
Individual benefits package to support your work-life balance.
Work From Home Days.
Individual KPI (Key Performance Indicator).
Access to advisers, including an advisory board member at the NASA Health Institute and serial founders, to help us grow personally and professionally.
Weekly team events.
Technical and soft-skills trainings.
A variety of knowledge-sharing and self-development opportunities.
Education Requirements:
- Degree in Computer Science, IT, or similar field; a Master’s is a plus.
Experience Requirements:
- 4+ years of experience in designing and developing Python-based solutions.
Responsibilities:
- Build high-performance databases and improve data models.
- Deploy machine learning algorithms.
- Manage data and metadata.
- Perform data analysis and implement solutions to improve processes.
- Develop data-related instruments/instances.
- Write unit/integration tests to maintain accuracy in the data models (see the test sketch after this list).
- Track pipeline stability.
- Drive the collaboration process with other team members.
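By way of illustration, a small pytest sketch of the kind of unit test that keeps a data model accurate; the Detection model and its fields are hypothetical, not Scylla's actual schema.

    # Hypothetical data-model invariant test; model and fields are illustrative.
    from dataclasses import dataclass

    import pytest

    @dataclass
    class Detection:
        camera_id: str
        label: str
        confidence: float

        def __post_init__(self):
            if not 0.0 <= self.confidence <= 1.0:
                raise ValueError("confidence must be within [0, 1]")

    def test_valid_detection_is_accepted():
        detection = Detection(camera_id="cam-01", label="person", confidence=0.93)
        assert detection.label == "person"

    def test_out_of_range_confidence_is_rejected():
        with pytest.raises(ValueError):
            Detection(camera_id="cam-01", label="person", confidence=1.7)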
Shaped helps increase engagement, conversion, and retention with GenAI search & recommendations, using a configurable system that adapts in real time across various platforms.
Arturo is an AI-powered property intelligence platform for the insurance industry, offering solutions for underwriting, risk management, and claims.
Willingness to learn: You have an insatiable desire to continue growing, a fearless approach to the unknown, and love a challenge.
Teamwork/Collaboration: You like working with others; you participate actively and enjoy sharing the responsibilities and rewards. You proactively work to strengthen our team. And you definitely have a sense of humor.
Critical Thinking: You incorporate analysis, interpretation, inference, explanation, self-regulation, open-mindedness, and problem-solving in everything you do.
Drive for Results: You keep looking forward, solve problems and participate in the success of our growing organization.
Experience Requirements:
- Good-to-expert understanding of geospatial systems, concepts, patterns, and software, covering both legacy formats and tools as well as the latest open-source packages.
- Professional experience writing production-ready Python code that leverages modern software development best practices (automated testing, CI/CD, observability).
- Experience working on a team of developers, maintaining a shared codebase, and having your code reviewed before merging.
- Strong database expertise in an AWS environment (RDS, PostgreSQL, and DynamoDB).
- Strong ETL experience, especially in the extraction and ingestion of third-party data.
Other Requirements:
- Familiarity with machine learning concepts.
- Familiarity with asynchronous programming.
Responsibilities:
- Onboard new geospatial datasets (imagery, parcels, extracted polygons) into our systems, and build ETL pipelines to automate these processes.
- Manage the interface between our AI systems and the input data they need to operate via streaming systems, storage systems, and caching systems.
- Track and manage third-party provider rate limits, and architect our systems to handle these limits gracefully while maximizing throughput (a backoff sketch follows this list).
- Participate in an agile product development process, where collaboration with stakeholders is a vital step to building what is needed.
- Challenge and be challenged on a diverse, collaborative, and brilliant team.
- Write automated test suites to ensure the quality of your code.
- Contribute to open-source geospatial software.
- Build solutions that enable new products, typically involving large-scale or intricate geospatial techniques.
- Build-in system quality from the beginning by writing unit & integration tests and integrating with logging, metrics, and observability systems.
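As a hedged illustration of the rate-limit handling mentioned above: one common pattern is exponential backoff that honours a provider's Retry-After header. The endpoint and helper below are hypothetical, not Arturo's actual integration.

    # Hypothetical retry-with-backoff helper for a rate-limited third-party API.
    import time

    import requests

    def fetch_with_backoff(url, params=None, max_retries=5, base_delay=1.0):
        """GET a third-party resource, backing off when rate-limited (HTTP 429)."""
        for attempt in range(max_retries):
            response = requests.get(url, params=params, timeout=30)
            if response.status_code != 429:
                response.raise_for_status()
                return response.json()
            # Honour the provider's Retry-After header if present, else back off exponentially.
            delay = float(response.headers.get("Retry-After", base_delay * 2 ** attempt))
            time.sleep(delay)
        raise RuntimeError(f"still rate-limited after {max_retries} attempts: {url}")

    # Hypothetical usage:
    # parcels = fetch_with_backoff("https://provider.example.com/parcels", params={"bbox": "..."})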
Voice.ai is a free AI-powered voice changer software that allows you to change your voice in real-time or transform any audio with a collection of thousands of AI voices. Works with Windows, Discord, Skype, Zoom, and many games.
Experience Requirements:
- Proven experience as a data engineer, with a strong track record in data pipeline development and data warehousing.
- Proficiency in data engineering technologies, including ETL tools, database systems, and data processing frameworks (e.g. Apache Spark).
Responsibilities:
- Data Pipeline Development: Design and develop robust and scalable data pipelines for collecting, processing, and storing voice data from various sources.
- Data Warehousing: Build and maintain data warehouses and data lakes to store and organize structured and unstructured voice data.
- ETL Processes: Develop and optimize ETL processes to transform raw data into actionable insights, supporting the work of data scientists and machine learning engineers (see the Spark sketch below).
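For illustration, a minimal PySpark batch ETL sketch in the spirit of the responsibilities above; the paths and column names (event_ts, user_id, duration_sec) are hypothetical, not Voice.ai's actual schema.

    # Hypothetical batch ETL: aggregate raw usage events into a daily Parquet table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("voice_usage_etl").getOrCreate()

    # Extract: raw usage events landed as JSON.
    raw = spark.read.json("s3://example-bucket/raw/usage_events/")

    # Transform: total audio processed per user per day.
    daily_usage = (
        raw.withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.sum("duration_sec").alias("total_duration_sec"))
    )

    # Load: write a partitioned Parquet table for downstream analytics.
    daily_usage.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/warehouse/daily_usage/"
    )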
Akkio is an AI analytics platform built for advertising agencies, offering AI-powered insights and campaign optimization. It offers solutions for audience discovery, forecasting, and performance analysis.
Experience Requirements:
- 5+ years of experience as a data engineer.
Other Requirements:
- Must be authorized to work in the US.
Responsibilities:
- Work with large datasets from a variety of customers, all with privacy concerns.
- Operate in a fast-paced environment to bring customers value quickly.
- Continuously optimize, standardize, and improve code and processes.
- Build systems that run reliably, and report/alert reliably when they fail.
LXT provides high-quality AI training data solutions, including data annotation, collection, evaluation, and generative AI services, to power global AI innovation.
Experience Requirements:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field.
- 4-6 years of hands-on experience in data pipeline design, data transformation, and data integration.
Other Requirements:
- Proficiency in programming languages such as Python, SQL, or Scala, and experience with data manipulation libraries and frameworks.
- Solid knowledge of data storage and database management systems, including relational and NoSQL databases.
- Familiarity with data visualization tools and techniques to facilitate data understanding and analysis.
- Experience with cloud-based data platforms, such as AWS, GCP, or Azure, is a plus.
- Solid understanding of data quality and data governance principles.
Rockerbox unifies marketing measurement with MTA, MMM, and incrementality testing, centralizing data for a complete performance view.
Experience Requirements:
- 7+ years of experience as a Data Engineer.
Other Requirements:
- Deep expertise in AWS services (Redshift preferred).
- Proficient with Python, with experience working with Kubernetes (K8s).
- Comfortable ingesting data from third-party APIs and familiar with modern orchestration tools (e.g., Airflow).
- Comfortable with Infrastructure as Code (e.g., Terraform).
- Experience mentoring engineers and a willingness to share your knowledge, particularly in areas like data warehousing, pipelines, and architecture.
- A product mindset, balancing technical excellence with business needs to deliver scalable solutions.
- The ability to communicate complex topics clearly, whether you’re writing technical specs or aligning stakeholders around a project plan.
Responsibilities:
- Scope and own projects to build, improve, and maintain scalable data ingests and pipelines.
- Design and implement solutions that are resilient, well-monitored, and cost-effective.
- Lead technical initiatives, working closely with Product Managers and cross-functional teams to develop clear, detailed technical specifications.
- Share knowledge with your peers through design and code reviews, fostering a culture of continuous learning.
- Stay curious—evaluate emerging technologies and tools to improve efficiency and effectiveness in our engineering processes.
AppZen uses AI to automate accounts payable and expense management, improving efficiency and reducing fraud.
Education Requirements:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Experience Requirements:
- Strong proficiency in Python and SQL for data manipulation, analysis, and scripting.
- Extensive experience with cloud platforms, particularly AWS, and working knowledge of services like EMR, Redshift, and S3.
- Solid understanding of data warehousing concepts and experience with relational databases like PostgreSQL.
Other Requirements:
- Experience with building and maintaining data pipelines using tools like Airflow.
- Knowledge of Python web frameworks like Flask or Django for building data-driven applications.
- Strong problem-solving and analytical skills, with a keen attention to detail.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
Responsibilities:
- Design, develop, and implement scalable and efficient data pipelines in the cloud using Python, SQL, and relevant technologies.
- Build and maintain data infrastructure on platforms such as AWS, leveraging services like EMR, Redshift, and others.
- Collaborate with data scientists, analysts, and other stakeholders to understand their requirements and provide the necessary data solutions.
- Develop and optimize ETL (Extract, Transform, Load) processes to ensure the accuracy, completeness, and timeliness of data.
- Create and maintain data models, schemas, and database structures using PostgreSQL and other relevant database technologies (a schema sketch follows this list).
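A minimal SQLAlchemy (1.4+) sketch of the kind of PostgreSQL data model this role maintains; the table, columns, and connection string are hypothetical, not AppZen's actual schema.

    # Hypothetical expense-report table defined with the SQLAlchemy ORM.
    from sqlalchemy import Column, DateTime, Integer, Numeric, String, create_engine
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class ExpenseReport(Base):
        __tablename__ = "expense_reports"

        id = Column(Integer, primary_key=True)
        employee_id = Column(String, nullable=False, index=True)
        amount = Column(Numeric(12, 2), nullable=False)
        currency = Column(String(3), nullable=False)
        submitted_at = Column(DateTime, nullable=False)

    if __name__ == "__main__":
        # Hypothetical local Postgres; replace the credentials before running.
        engine = create_engine("postgresql+psycopg2://user:password@localhost/demo")
        Base.metadata.create_all(engine)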
Harmonya uses AI to enrich product data, provide insights, and manage attribution for CPG and retail companies.
Experience Requirements:
- At least 6 years of experience in software engineering in Python or an equivalent language.
- At least 3 years of experience with data engineering products from early-stage concept to production rollouts.
- Experience with cloud platforms (GCP, AWS, Azure), working on production payloads at large scale and complexity.
- Hands-on experience with data pipeline building and tools (e.g. Luigi, Airflow), specifically on cloud infrastructure.
Other Requirements:
- Advantage: Hands-on experience with relevant data analysis tools (e.g. Jupyter notebooks, Anaconda).
- Advantage: Hands-on experience with data science tools, packages, and frameworks.
- Advantage: Hands-on experience with ETL Flows.
- Advantage: Hands-on experience with Docker / Kubernetes.
Responsibilities:
- Design and build data acquisition pipelines that acquire, clean, and structure large datasets to form the basis of our data platform and IP.
- Design and build data pipelines integrating many different data sources and forms.
- Define architecture, evaluate tools and open source projects to use within our environment.
- Develop and maintain features in production to serve our customers.
- Collaborate with product managers, data scientists, data analysts and full-stack engineers to deliver our product to top tier retail customers.
DataNimbus provides AI-powered data management and integration tools, including a cloud-native ETL designer and a comprehensive data platform, designed for seamless Databricks integration and accelerated AI adoption.
Opportunity to work on cutting-edge data and AI projects.
Collaborative and innovative work environment.
Competitive salary and benefits package.
Professional growth and development opportunities.
Experience Requirements:
- 4+ years of experience in data engineering, data architecture, data platforms & analytics.
- 3+ years of experience with Databricks, PySpark, Python, and SQL.
Other Requirements:
- Consulting/customer-facing experience working with external clients across a variety of industry markets.
- Comfortable writing code in both Python and SQL.
- Proficiency in SQL and experience with data warehousing solutions.
- Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one.
- Strong understanding of data modeling, ETL processes, and data architecture principles.
- Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals.
- Familiarity with CI/CD for production deployments – GitHub, Azure DevOps, Azure Pipelines.
- Working knowledge of MLOps methodologies.
- Experience designing and deploying performant end-to-end data architectures.
- Experience with technical project delivery – managing scope and timeline.
- Experience working with clients and managing conflicts.
- Databricks certifications are a plus.
- Strong communication and collaboration skills.
- Ability to travel up to 30% when needed.
Responsibilities:
- Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases.
- Work with engagement managers to scope a variety of professional services work with input from the customer.
- Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications.
- Consult on architecture and design, and bootstrap or implement customer projects, leading to the customer’s successful understanding, evaluation and adoption of Databricks (a Delta table sketch follows this list).
- Support customer operational issues with an escalated level of support.
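As a hedged illustration of the Databricks/PySpark work described above, a small sketch that cleans a raw feed and saves it as a Delta table; it assumes a Databricks runtime (or a Spark session with Delta Lake configured), and the paths, table name, and columns are hypothetical.

    # Hypothetical clean-and-save step producing a managed Delta table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_to_delta").getOrCreate()

    orders = (
        spark.read.format("csv")
        .option("header", "true")
        .load("/mnt/raw/orders/")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )

    # Assumes the target schema ("analytics") already exists in the metastore.
    orders.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")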
Oxa develops self-driving software and services for businesses, focusing on safety, efficiency, and AI-powered solutions for autonomous transportation.
Competitive salary, benchmarked against the market and reviewed annually.
Company share programme.
Hybrid and/or flexible work arrangements.
An outstanding £3,000 flexible benefits allowance, including private medical insurance, critical illness coverage, life assurance, EAP, and group income protection.
Funded relocation support.
Education Requirements:
- Degree in Computer Science, Mathematics or a related field.
Experience Requirements:
- 3+ years of professional experience developing behavioural machine learning technologies for autonomous vehicles or robotics.
- Experience with production ML pipelines: data creation and curation, training frameworks, evaluation pipelines.
- Fluency in Python and experience with data analysis libraries and packages.
- Proven record of leading and delivering projects as part of a team.
Responsibilities:
- Take a leading role within your team to develop and deploy state-of-the-art data pipelines for our machine learning models.
- Design and implement metrics for model validation and continuous monitoring in production.
- Leverage the Oxa Metadriver platform to generate synthetic data, and train effective and robust driving policies.
- Build cloud tooling and infrastructure in support of experimentation, evaluation and deployment workflows.
- Engage with team members and colleagues throughout the business to create an environment that supports collaboration and mutual understanding.
Affine uses AI to solve complex business challenges, offering solutions across various industries and AI domains.
AI-powered keyboard enhancing creative expression and task completion, offering solutions for both consumers and businesses.
Education Requirements:
- Bachelor's degree in Computer Science, Technology, or a related field.
- A Master's degree in Computer Science, Technology, or Computer Applications is an added advantage.
Experience Requirements:
- 3-6 years of experience in data engineering, database management, data structures, and ETL development.
Responsibilities:
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, EMR/Snowflake, Redshift/BigQuery and AWS/GCP ‘big data’ technologies.
- Maintain high-volume, compute-intensive data pipelines and ensure processing is not interrupted.
- Work hands-on with Hadoop, Spark, and other relevant big data technologies.
- Develop algorithms to transform data into useful, actionable information.
- Build, test, and maintain database pipeline architectures.
AI-powered weather forecasting for energy trading, offering high-accuracy predictions up to 31 days in advance.
Time off (minimum 25 days paid vacation).
AI-powered news intelligence platform providing global news aggregation, NLP enrichment, and advanced analytics.
Annual leave, plus national holidays and your birthday off!
Volunteer Day off.
Experience Requirements:
- 4+ years of industry experience in data engineering.
- Proficiency in Scala, Java, Python, or similar.
- Expertise building and deploying data processing systems.
- Experience with modern development tooling and DevOps technologies.
Responsibilities:
- Write efficient and fault-tolerant code.
- Configure and deploy Quantexa software.
- Act as a trusted source of knowledge for clients.
- Collaborate with solution architects and R&D engineers.
VI is an AI-powered platform for health organizations that helps maximize member health outcomes and financial returns by improving acquisition, engagement, and retention.
Education Requirements:
- BSc in a related technical field or equivalent practical experience.
Experience Requirements:
- Over 5 years of experience in a Data Engineering role.
Other Requirements:
- Coding experience with Python.
- Experience with SQL and NoSQL databases.
- Experience with data modeling, and building ELT/ETL pipelines.
- Experience with common data warehouse and lakehouse technologies.
- Experience with AWS platform.
- Experience working with Git, CI/CD flows and Docker.
- Experience building data pipelines with big data frameworks such as Spark.
- Technologically diverse background and ability/willingness to learn new things quickly.
Responsibilities:
- Design, develop and implement robust data pipelines using ETL/ELT to move data from various sources into our lakehouse.
- Ensure data quality and integrity by implementing validation, testing and monitoring (see the validation sketch after this list).
- End-to-end feature development and ownership, from design to production.
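For illustration, a lightweight pandas sketch of the validation step mentioned above; the column names and rules are hypothetical, not VI's actual checks.

    # Hypothetical pre-load validation checks on a batch of member records.
    import pandas as pd

    def validate_members(df):
        """Return a list of human-readable validation failures (empty list = pass)."""
        failures = []
        if df["member_id"].isnull().any():
            failures.append("member_id contains nulls")
        if df["member_id"].duplicated().any():
            failures.append("member_id contains duplicates")
        if (df["enrollment_date"] > pd.Timestamp.today()).any():
            failures.append("enrollment_date lies in the future")
        return failures

    batch = pd.DataFrame({
        "member_id": [1, 2, 2],
        "enrollment_date": pd.to_datetime(["2021-05-01", "2022-01-15", "2030-01-01"]),
    })

    problems = validate_members(batch)
    if problems:
        print("batch rejected:", "; ".join(problems))
    else:
        print("batch passed validation")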
GoodNotes is an AI-powered note-taking app offering a seamless digital pen-and-paper experience across multiple platforms.
Meaningful equity in a profitable tech startup.
Budget for things like noise-cancelling headphones, setting up your home office, personal development, professional training, and health & wellness.
Sponsored visits to our Hong Kong or London office every 2 years.
Company-wide annual offsite (we met in Portugal in 2023 and Bali in 2024).
Flexible working hours and location. Medical insurance for you and your dependents.
Experience Requirements:
- 7+ years of software engineering experience, a notable part of it in the data engineering space.
- Expertise in designing and building data pipelines within the ETL and ELT paradigms.
- Expertise in distributed systems, as well as different database systems and big data solutions.
- Previous experience in designing, building and maintaining cloud infrastructure to support analytics operations.
- Experience with working in Python, Spark and SQL.
- Experience with some of the following tools: dbt, Airflow, Kafka, AWS Glue, Delta Lake.
- Excellent problem-solving skills and ability to think critically.
- Desire to work in a fast-paced, collaborative environment.
- Excellent communication skills, both verbal and written.
Responsibilities:
- Build and maintain end-to-end data pipelines.
- Design and build components within data pipelines for collecting production data.
- Build data engineering solutions for efficient data ingestion.
- Help the team build and maintain analytics and insights infrastructure.
- Monitor production and set up alerting mechanisms.
Wizard is an AI-powered shopping assistant that uses natural language processing and machine learning to provide personalized product recommendations and streamline the shopping experience.
Competitive compensation packages, including equity.
Health, dental and vision insurance.
Mental healthcare support and services.
401(k) plan.
Education Requirements:
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
Experience Requirements:
- 5+ years of professional experience in software development with a strong focus on data engineering.
- Proficiency in Python with experience implementing software engineering best practices.
- Strong expertise in building ETL pipelines using tools such as Apache Spark, Databricks, or Hadoop.
- Solid understanding of distributed computing and data modeling for scalable systems.
- Hands-on experience with NoSQL databases like MongoDB, Cassandra, DynamoDB, or CosmosDB.
Other Requirements:
- Proficiency in real-time stream processing systems such as Kafka, AWS Kinesis, or GCP Dataflow.
- Experience with Delta Lake, Parquet files, and cloud platforms (AWS, GCP, or Azure).
- Familiarity with caching and search technologies such as Redis, Elasticsearch, or Solr.
- Knowledge of message queuing systems like RabbitMQ, AWS SQS, or GCP Cloud Tasks.
- Advocacy for Test-Driven Development (TDD) and experience using version control tools like GitHub or Bitbucket.
Responsibilities:
- Develop and maintain scalable data infrastructure to support batch and real-time data processing with high performance and reliability (a streaming sketch follows this list).
- Build and optimize ETL pipelines for efficient data flow and accessibility.
- Collaborate with data scientists and cross-functional teams to ensure accurate monitoring and insightful analysis of business processes.
- Design backend data solutions to support a microservices architecture, ensuring seamless data integration and management.
- Implement and manage integrations with third-party e-commerce platforms to enhance Wizard's data ecosystem.
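As a hedged sketch of the real-time side, a tiny consumer built on the kafka-python client; the topic, broker address, and event fields are hypothetical, not Wizard's actual infrastructure.

    # Hypothetical real-time consumer feeding shopping events to downstream processing.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "shopping-events",
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
        group_id="recs-feature-builder",
    )

    for message in consumer:
        event = message.value
        # A real pipeline would update the feature stores and caches the
        # recommendation models read from; here we just print the event.
        print(event.get("user_id"), event.get("product_id"), event.get("action"))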