About the Company
GROPYUS creates sustainable, affordable, and aspirational buildings for everyone through modular construction, setting a new standard in smart living.
About the Role
We are growing our Data Language Team within the Gropyus Tech department. The Data Language team is responsible for semantic modeling and data quality for our cross-functional platform, the Gropyus Data Fabric.
The Data Language team interacts with experts from various domains such as Digital Building, Planning and Automation, Product, Operations, Sustainability, AI, and IoT, as well as construction engineers, building architects, logistics experts, and software engineers, solving complex challenges across the end-to-end process of planning, building, and operating a building.
Responsibilities
- Design data models to formalize concepts from various architecture and construction domains.
- Contribute to the logic to transform and enrich our centralized data for self-service analytics.
- Collaborate with our Data Platform to integrate data flows and processing for our products and services.
- Collaborate with domain experts and software engineers to understand data needs and deliver high-quality datasets.
- Implement and uphold data quality, governance, and security standards, including monitoring, testing, and documentation.
- Drive adoption of best practices in data modeling and management.
- Lead strategic initiatives to modernize or scale our data ecosystem, including query optimization, schema design, and analytics.
- Contribute to best practices and rigor in development, including governance, testing, and validation.
- Mentor and guide team members to elevate engineering rigor and technical capability.
Qualifications
- Experience working with a tech stack similar to ours:
  - Programming languages such as Python or Kotlin.
  - Databases such as Postgres, BigQuery, Spark, or graph databases.
  - Cloud infrastructure such as AWS, Azure, and Kubernetes.
- You have strong problem-solving and analytical skills and the ability to break down complex tasks, including working with stakeholders, adapting to dynamic input, and leading implementation projects.
- You have experience identifying and resolving data discrepancies and inconsistencies, and creating validation and testing to prevent and handle them.
- You have experience building, operating, and scaling data-intensive, reactive processing pipelines end to end, including ingestion, storage, orchestration, transformation, enrichment, and analytics.
Optional Experience
- Some knowledge of semantic web technologies such as RDF and OWL.
- Experience with ontology engineering and knowledge engineering, including ontology engineering tools such as Protégé.
- Experience working with construction industry data and/or complex and large datasets.
- Experience with data science, machine learning, and AI agents.
Benefits
- Be part of something big: You’re here to make a change. Join us in reinventing construction and sustainable affordable living.
- It's on you: We offer a tremendous amount of ownership and room to make your mark at all levels of the organization. Find your solutions, drive and test them.
- Focus on results: You choose if you work from home, a park or the office. Whether you start your day early after your run or pick up on work when your kids are in bed, what counts is your contribution and delivery.
- Bring your uniqueness to the team: Innovation requires diversity of thought. We actively seek diversity and strive to unlock each other's full potential. We are proud that people from all industries and walks of life are joining our company.
- Be an owner: Participate in the success of GROPYUS through stock options.
Additional Information
Join us on our mission to design buildings as continuously evolving products to create the most exciting and affordable experience for all. We build for people and conserve the resources of our planet.
We can't wait to get to know you.
For more information, visit our website, and if you have any questions, please reach out to us via email.
Employment Details
Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Experience: years
Vacancy: 1