
AI Backend Engineer

Fengkai Group Co., Limited

Savona

Hybrid

EUR 30,000 - 45,000

Full-time

Yesterday


Job description

A fast-moving startup in Italy seeks an entry-level Backend Engineer. The role involves designing and operating the backend systems that power AI-driven products. Candidates should have strong Python skills, experience with production ML systems, and enjoy working in a collaborative startup environment. You will implement data pipelines and manage embedding pipelines for efficient processing. The position offers a hybrid model combining office presence and remote work.

Benefits

Hybrid working model
Opportunity to make a real impact

Skills

  • 3+ years of experience building ML systems in production.
  • Proven results with LLMs in production.
  • Experience with retrieval systems and vector search.

Duties

  • Design, build, and operate the backend systems.
  • Implement inference services for generation and ranking.
  • Manage embeddings pipeline for texts and documents.

Knowledge

Python proficiency
Backend services
ML systems
Cross-functional teamwork
Startup mindset

Education

Bachelor’s or Master’s degree in Computer Science or Engineering

Tools

Flask
TensorFlow
Kubernetes

Fundo.one is an AI‑powered financing‑access platform built to transform how SMEs secure the capital they need to grow. With a uniquely compelling product, an exceptional founding team, a vast addressable market, and strong backing, we’re positioned for something big.

Join us in Genoa at a moment when the AI revolution is redefining how the world builds, finds, and creates. We’re a fast‑moving startup driven by high standards, deep curiosity, and a culture that brings out the best in each of us.

Here, you won’t just write code; you’ll craft end‑to‑end agentic AI products, take genuine ownership, and experience the excitement of building something bold from the ground up. If you thrive where excellence, commitment, and ambitious thinking intersect, you’ll feel right at home with us.

We’re building a production LLM‑driven product (with an MVP already proven) that helps organizations identify, prioritize, and write competitive grant applications. You’ll help design, implement, and operate the backend systems that power the LLM, retrieval, data pipelines, and evaluation—turning research into a reliable, secure, and scalable product.

Your role
  • Design, build, and operate the backend systems that serve the Grant Optimizer LLM: model hosting, prompt orchestration, RAG pipelines, embeddings store, and inference APIs.
  • Productionize fine‑tuning and continual learning workflows (supervised fine‑tuning, LoRA/QLoRA, and RLHF, where applicable) and automate dataset curation based on user interactions and labeled outcomes.
  • Implement retrieval (vector database, chunking, metadata, MMR) and document ingestion pipelines for large collections of grant calls, program rules, and applicant documents.
  • Build robust evaluation pipelines (including automated metrics, human‑in‑the‑loop feedback, and A/B testing) to measure relevance, factuality, and success in improving grant quality/award probability.
  • Ensure privacy, compliance, and data governance for sensitive grant documents (PII redaction, encryption, access controls).
  • Optimise cost/performance for inference (batching, quantization, multi‑GPU orchestration, autoscaling).
  • Collaborate with product, research, and frontend teams to translate user workflows (scoring, ranking, drafting, suggestions) into reliable APIs and event processing.
  • Create monitoring, alerting, and observability for model performance, data drift, latency, and error budgets.
  • Produce clear docs, runbooks, and onboard engineers to the LLM backend.
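The serving loop behind these responsibilities (retrieve relevant chunks, assemble a grounded prompt, hand it to the generator) can be sketched roughly as below. Everything here is illustrative: the `Chunk` type, the toy word-overlap retriever, and the prompt layout are stand-ins, not Fundo.one's actual stack.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    source: str  # provenance: which grant call or document the chunk came from


def retrieve(query: str, corpus: list[Chunk], k: int = 2) -> list[Chunk]:
    """Toy lexical retriever standing in for a real vector-search backend."""
    qwords = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda c: len(qwords & set(c.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, chunks: list[Chunk]) -> str:
    """Assemble the grounded prompt a generation model would receive."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in chunks)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


corpus = [
    Chunk("Applicants must be SMEs registered in the EU.", "call-2024-01"),
    Chunk("The deadline for submission is 30 June.", "call-2024-01"),
    Chunk("Office hours are 9 to 5.", "handbook"),
]
query = "What is the submission deadline?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

A production version would swap the lexical scorer for vector search and stream the prompt to an inference endpoint; carrying `source` through the pipeline is what makes provenance possible in the final answer.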
Responsibilities
  • Implement inference services (fast REST/gRPC endpoints) for generation and ranking.
  • Build and maintain ETL pipelines for ingesting grant notices, historical bids, scoring outcomes, and user documents.
  • Manage embeddings pipeline: text chunking, embedding generation, index creation and maintenance (FAISS / Pinecone / Weaviate / etc.).
  • Automate SFT/finetuning and evaluation workflows (training infra + dataset versioning).
  • Apply retrieval‑augmented generation techniques and ensure timely, accurate retrieval with provenance.
  • Implement rate‑limiting, request validation, caching layers, and cost controls for external LLM providers and self‑hosted models.
  • Run experiments to optimise prompts, system messages, and chain‑of‑thought strategies for grant drafting tasks.
  • Lead security reviews and implement access controls, secure token handling, and audit logging.
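The embeddings-pipeline bullet (chunking, embedding generation, index maintenance) boils down to a loop like the one below. This is a self-contained toy: the hashing "embedding" and brute-force cosine search merely stand in for a real embedding model and a FAISS/Pinecone/Weaviate index.

```python
import numpy as np


def chunk_text(text: str, size: int = 8, overlap: int = 2) -> list[str]:
    """Split a document into overlapping word windows."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]


def embed(text: str, dim: int = 512) -> np.ndarray:
    """Bag-of-words hashing vector: a toy stand-in for a real embedding model."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


class VectorIndex:
    """Brute-force cosine search; a real system would use FAISS or similar."""

    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunks: list[str]) -> None:
        for c in chunks:
            self.chunks.append(c)
            self.vectors.append(embed(c))

    def search(self, query: str, k: int = 2) -> list[str]:
        scores = np.array(self.vectors) @ embed(query)
        return [self.chunks[i] for i in np.argsort(-scores)[:k]]


doc = ("Eligible applicants are small enterprises. The submission deadline "
       "is 30 June. Budget cap is 2 million euros.")
index = VectorIndex()
index.add(chunk_text(doc))
print(index.search("submission deadline", k=1))
```

Index maintenance in production also means versioning embeddings alongside the model that produced them, since vectors from different embedding models are not comparable.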
Technical must‑have
  • Strong Python proficiency and experience building backend services (Flask/FastAPI, async frameworks).
  • 3+ years building ML systems in production; proven experience with LLMs in production (Hugging Face, OpenAI, Anthropic, or self‑hosted).
  • Practical experience with retrieval systems and vector search (FAISS, Milvus, Pinecone, Weaviate, etc.).
  • Experience with model fine‑tuning workflows (LoRA/QLoRA/SFT), dataset management, and training infra.
  • Cloud experience: deploying and operating services on AWS / GCP / Azure (Kubernetes, ECS, IAM, S3, Cloud SQL, etc.).
  • LLM tooling: Hugging Face Transformers, PEFT/LoRA, OpenAI API (or other hosted providers), LangChain/LlamaIndex.
  • Familiar with model optimization techniques (quantization, batching, sharding) and cost/perf tradeoffs.
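On the quantization point: the core cost/precision tradeoff fits in a few lines of NumPy. This toy symmetric per-tensor int8 scheme is only the idea; production serving stacks lean on libraries such as bitsandbytes or TensorRT.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric per-tensor quantization: map [-max|w|, max|w|] onto the int8 range.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize for use in a matmul; per-weight error is bounded by scale / 2.
deq = q.astype(np.float32) * scale

print("memory (bytes):", weights.nbytes, "->", q.nbytes)  # 4x smaller
print("max abs error:", np.abs(weights - deq).max())
```

The 4x memory saving is what makes larger models fit on a given GPU; the error bound shrinks with finer granularity (per-channel or per-group scales), which is the usual next refinement.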
Background must‑have
  • English proficiency: C1 level or higher, with the ability to work and communicate daily with an international team.
  • Teamwork: You enjoy collaborating in cross‑functional teams and contributing to shared goals.
  • Startup mindset: You’re comfortable in a fast‑paced startup environment, take ownership, and adapt quickly to change.
  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields.
Nice‑to‑have
  • Familiarity with LangChain, LlamaIndex, or similar orchestration libraries and prompt engineering best practices.
  • Background in ML evaluation (factuality metrics, BLEU/ROUGE useful but also human eval design).
  • Experience with private LLM deployment (tuned LLaMA derivatives, Mistral, etc.) and GPU orchestration.
  • Experience with data privacy, GDPR, and secure handling of sensitive grant documents.
Offices
  • Hybrid working, combining office presence and remote work.

Working at Fundo.one means joining an early‑stage startup where your work has a real, visible impact. You’ll collaborate closely with the founders, move fast, take ownership, and help shape both the product and the company from the ground up.

If you need more info, please do not hesitate to contact Federico Daneu - COO at federico@fundo.one

Seniority Level
  • Entry level
Employment Type
  • Full‑time
Job Function
  • Engineering and Information Technology