A leading company in procurement solutions seeks a web developer to create and maintain robust web crawlers. Your role will involve standardizing data for analytics and automating scraping processes while collaborating with engineers to ensure delivery of high-quality data. Strong Python skills and familiarity with data handling tools are essential.
GEP is a diverse, creative team of people passionate about procurement. We invest ourselves entirely in our clients' success, creating strong collaborative relationships that deliver extraordinary value year after year. Our clients include global market leaders with far-flung international operations, Fortune 500 and Global 2000 enterprises, and leading government and public institutions.
We deliver practical, effective services and software that enable procurement leaders to maximise their impact on business operations, strategy and financial performance. Those are just some of the things we do in our quest to build a beautiful company, enjoy the journey and make a difference. GEP is a place where individuality is prized and talent is respected. We're focused on what is real and effective. GEP is where good ideas and great people are recognized, results matter, and ability and hard work drive achievements. We're a learning organization, actively looking for people to help shape, grow and continually improve us.
Are you one of us?
GEP is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, ethnicity, color, national origin, religion, sex, disability status, or any other characteristics protected by law. We are committed to hiring and valuing a global diverse work team.
For more information, please visit us at GEP.com or check us out on LinkedIn.com.
Develop and maintain robust and scalable web crawlers to collect data from multiple sources.
Standardize and transform raw data into formats ready for analytical use or internal product integration.
Automate scraping processes with a focus on performance, exception handling, and resilience.
Apply and suggest improvements to engineering best practices (modular code, logging, version control, etc.).
Work closely with engineers and analysts to ensure the delivery of high-quality data.
Collaborate on projects involving asynchronous data extraction, structured data parsing, and optionally machine learning to support extraction and classification tasks.
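As a rough illustration of the resilience and asynchronous-extraction work described above, here is a minimal sketch of a retrying async fetch plus concurrent crawl. The `fetch` callable stands in for whatever HTTP client the team uses (the posting mentions requests, httpx and playwright); the function names, retry policy and backoff values are illustrative assumptions, not GEP's actual codebase.

```python
import asyncio
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("crawler")

async def fetch_with_retries(fetch, url, retries=3, base_delay=0.01):
    """Run an async fetch callable, retrying transient failures with backoff."""
    for attempt in range(1, retries + 1):
        try:
            return await fetch(url)
        except Exception as exc:  # in real code, narrow to e.g. httpx.HTTPError
            log.warning("attempt %d/%d for %s failed: %s", attempt, retries, url, exc)
            await asyncio.sleep(base_delay * 2 ** attempt)  # exponential backoff
    return None  # give up after exhausting retries; the caller records the gap

async def crawl(fetch, urls):
    """Fetch many URLs concurrently; failed pages come back as None."""
    pages = await asyncio.gather(*(fetch_with_retries(fetch, u) for u in urls))
    return dict(zip(urls, pages))
```

Keeping the retry logic separate from the HTTP client makes it easy to unit-test with a fake fetcher and to swap httpx for playwright on JavaScript-heavy sources.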
Hands-on experience with Python and developing web scraping scripts or data pipelines.
Familiarity with libraries such as requests, httpx, playwright, asyncio, and others.
Understanding of data structures, error handling, and collaborative development practices (Git, pull requests).
Basic knowledge of relational databases (SQL) and data formats (JSON, Parquet).
Experience with GitHub and version control tools (CI/CD knowledge is a plus).
Eagerness to learn about robust engineering, large-scale data processing, and AI-assisted data workflows.
Strong communication skills, autonomy, accountability, and attention to detail.
Intermediate English for technical reading and basic communication.
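As a small illustration of the standardization and data-format skills listed above, the sketch below normalizes one raw scraped record into an analytics-ready shape and serializes it as JSON. The field names and the raw input format are hypothetical, chosen only to show the kind of cleaning involved.

```python
import json

def standardize(record: dict) -> dict:
    """Normalize one raw scraped record into an analytics-ready shape."""
    return {
        "name": record.get("name", "").strip().title(),
        # hypothetical European price string like "12,50€" -> 12.5
        "price_eur": float(record.get("price", "0").replace("€", "").replace(",", ".").strip()),
        "in_stock": str(record.get("stock", "")).strip().lower() in {"yes", "true", "1"},
    }

raw = {"name": "  acme widget ", "price": "12,50€", "stock": "Yes"}
clean = standardize(raw)
print(json.dumps(clean))
# {"name": "Acme Widget", "price_eur": 12.5, "in_stock": true}
```

In a real pipeline the same cleaned records would typically be written to Parquet (e.g. via pandas or pyarrow) rather than printed, but the normalization step looks much like this.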