
A leading biotech firm in Greater London is seeking a skilled quantitative pharmacologist with deep experience in PK/PD modeling. The ideal candidate will excel at creating and evaluating models, identifying non-plausible parameters, and using tools such as NONMEM or Monolix. Responsibilities include designing evaluation rubrics and interpreting results to inform decision-making. This is a remote, part-time contract role with a flexible, outcomes-focused approach. Candidates with a strong portfolio in exposure-response modeling are encouraged to apply.
Deep hands-on experience in PK/PD exposure-response modeling and, ideally, population PK or QSP.
Expert at model fitting, sensitivity analysis, and identifying non-plausible parameter spaces.
Can evaluate the validity of dose‑exposure predictions and detect high‑risk extrapolations.
Comfortable designing model evaluation rubrics that distinguish acceptable from non-credible outputs.
Able to articulate how quantitative checks should complement narrative decision logic.
Experience supporting translational or clinical pharmacology leads in dose justification.
Familiarity with integrating nonclinical PK/PD data (e.g., 2-species GLP to human FIH extrapolation).
8–12 years of quantitative pharmacology experience in pharma, CROs, or modeling consultancies.
Strong portfolio in population PK/PD, exposure-response modeling, and parameter estimation using NONMEM, Monolix, or equivalent tools.
Demonstrated ability to interpret model results for decision‑making, not just fit data.
Can create fit‑for‑purpose models and critique model structures or assumptions under uncertainty.
Design and refine micro-evaluations for PK/PD performance (curve fits, parameter checks, error taxonomies).
Encode quantitative sanity checks into model rubrics for automated evaluation.
Define failure conditions (e.g., unsafe extrapolation, poor coverage curves, invalid assumptions); an illustrative sketch of such checks follows below.
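For illustration only, a minimal sketch of how failure conditions and sanity checks might be encoded for automated evaluation; the parameter names, thresholds, and messages below are hypothetical, not the firm's actual rubric:

from dataclasses import dataclass

@dataclass
class FitResult:
    clearance_l_per_h: float      # estimated clearance (CL)
    volume_l: float               # estimated volume of distribution (Vd)
    rse_percent: float            # relative standard error on the key parameter
    max_observed_dose_mg: float   # highest dose actually studied
    predicted_dose_mg: float      # dose the model output is asked to justify

def failure_conditions(fit: FitResult) -> list:
    """Return the failure conditions triggered by one model output."""
    failures = []
    if fit.clearance_l_per_h <= 0 or fit.volume_l <= 0:
        failures.append("non-plausible parameters: CL or Vd is zero or negative")
    if fit.rse_percent > 50:
        failures.append("poor precision: RSE above 50% on a key parameter")
    if fit.predicted_dose_mg > 3 * fit.max_observed_dose_mg:
        failures.append("unsafe extrapolation: prediction far outside the observed dose range")
    return failures

# Example: failure_conditions(FitResult(5.0, 40.0, 62.0, 100.0, 450.0))
# flags both the precision and the extrapolation conditions.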
PK/PD datasets, tox summaries, and performance prompts (e.g., fit exposure-response curves, interpret safety margins); an illustrative curve-fitting example appears below.
Example model outputs from automated systems.
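As a rough illustration of the "fit exposure-response curves" prompt type, a simple Emax fit; the exposure and response values below are invented and stand in for whatever dataset a prompt would actually supply:

import numpy as np
from scipy.optimize import curve_fit

def emax_model(exposure, e0, emax, ec50):
    """Simple Emax exposure-response model."""
    return e0 + emax * exposure / (ec50 + exposure)

exposure = np.array([3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])   # e.g. AUC (ng*h/mL)
response = np.array([3.5, 8.0, 15.0, 28.0, 36.0, 39.0])        # e.g. percent effect

params, cov = curve_fit(emax_model, exposure, response, p0=[2.0, 40.0, 100.0])
e0, emax, ec50 = params
rse = 100 * np.sqrt(np.diag(cov)) / np.abs(params)   # relative standard errors (%)
print(f"E0={e0:.1f}, Emax={emax:.1f}, EC50={ec50:.1f}, RSE%={np.round(rse, 1)}")

Interpreting safety margins would then typically compare the predicted clinical exposures against the nonclinical no-adverse-effect exposures.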
Quantitative Rubrics: clear thresholds for acceptable parameter fits, coverage curve quality, and model integrity checks (an illustrative structure follows this list of deliverables).
Golden Fit Examples: representative ideal PK/PD model outputs and visualizations for calibration.
Error Taxonomy: structured list of typical modeling or fitting errors with root‑cause annotations.
Meta‑Layer Commentary: short note per rubric capturing how expert modelers recognize implausible or unsafe fits beyond numeric error values.
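By way of illustration only, one possible shape for the rubric thresholds and error-taxonomy entries; the category names, values, and annotations are placeholders rather than the agreed deliverable format:

# Hypothetical rubric thresholds; real values would be set during the engagement.
QUANTITATIVE_RUBRIC = {
    "parameter_fits": {"max_rse_percent": 30, "require_positive_cl_and_v": True},
    "coverage_curves": {"min_fraction_obs_in_90pct_interval": 0.80},
    "model_integrity": {"require_convergence": True, "max_condition_number": 1000},
}

# Illustrative error-taxonomy entries with root-cause annotations.
ERROR_TAXONOMY = [
    {
        "error": "implausible_half_life",
        "symptom": "terminal half-life far outside the range supported by the data",
        "root_cause": "mis-specified elimination phase or sparse terminal sampling",
    },
    {
        "error": "unsafe_extrapolation",
        "symptom": "exposure predicted well beyond the studied dose range",
        "root_cause": "linear extrapolation where saturation or nonlinearity is plausible",
    },
]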