Preliminary Program

Keynote Speaker

Auditorium Lumière - Monday, July 08

Treatment Planning and Dose Monitoring

Auditorium Lumière - Monday, July 08

11:00 - 11:15

Treatment error classification with time-resolved EPID images using Multiple Instance Learning - Vyacheslav Yarkin (1), Evelyn De Jong (2), Rutger Hendrix (3), Frank Verhaegen (1), Cecile Wolfs (1)

1 - Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Center (Netherlands)
2 - Instituut Verbeeten (Netherlands)
3 - Medical Image Analysis group, Department of Biomedical Engineering, Eindhoven University of Technology (Netherlands)

Recent research has investigated the classification of radiotherapy errors in time-integrated (TI) in vivo Electronic Portal Imaging Device (EPID) images with convolutional neural networks, showing promise. However, it has been observed that TI images may cancel out the error presence on γ-maps during dynamic treatments. To address this limitation, time-resolved (TR) γ-maps for each VMAT angle were used to detect realistically simulated treatment errors caused by complex patient geometries and beam arrangements. Typically, such images can be interpreted as a set of frames for which only set-level class labels are provided, making manual analysis difficult and time-consuming. To effectively aggregate individual segment information and evaluate its contribution to error classification, we applied a transformer-based multiple instance learning (MIL) approach to TR EPID images. The proposed algorithm showed excellent performance on classification of anatomical, positioning and mechanical errors. Accuracy on the hold-out test set reached 0.94 for 11 error classes.
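
As a minimal sketch of the transformer-based MIL aggregation described above, assuming each fraction is a bag of per-gantry-angle γ-map feature vectors; the feature dimension, bag size and attention pooling are illustrative assumptions, not the authors' implementation:

    import torch
    import torch.nn as nn

    class MILTransformer(nn.Module):
        def __init__(self, feat_dim=256, n_classes=11, n_heads=4, n_layers=2):
            super().__init__()
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.attn = nn.Linear(feat_dim, 1)        # instance attention scores
            self.head = nn.Linear(feat_dim, n_classes)

        def forward(self, bag):                       # bag: (B, n_frames, feat_dim)
            h = self.encoder(bag)                     # contextualised instances
            w = torch.softmax(self.attn(h), dim=1)    # attention over frames
            z = (w * h).sum(dim=1)                    # weighted bag embedding
            return self.head(z)                       # one error class per bag

    bag = torch.randn(2, 90, 256)                     # e.g. 90 gantry-angle frames per fraction
    logits = MILTransformer()(bag)                    # -> (2, 11) class scores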

11:15 - 11:30

A novel iterative reconstruction algorithm for imaging of multi-field proton treatments with PET - Keegan McNamara (1) (2), Marina Béguin (2), Günther Dissertori (2), Judith Flock (2), Cristian Fuentes (2), Jan Hrbacek (1), Damien C. Weber (1) (2) (3) (4), Shubhangi Makkar (1) (2), Christian Ritzer (2), Carla Winterhalter (1)

1 - Center for Proton Therapy, Paul Scherrer Institute, Villigen (Switzerland)
2 - Department of Physics, ETH Zurich, Zurich (Switzerland)
3 - Department of Radiation Oncology, University Hospital of Zürich (Switzerland)
4 - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern (Switzerland)

Treatment verification of multi-field proton therapy using PET imaging may allow for comprehensive verification capabilities through range measurements for each delivered field. We introduce a novel maximum-likelihood expectation maximization (MLEM) algorithm which combines PET data recorded following delivery of each field in a multi-field treatment plan and reconstructs the expected activity distribution induced by each field. We showcase an example use case through simulation of a 3-field treatment plan delivered to a brain tumour, imaged using the open-ring PETITION PET detector. Our novel MLEM approach produced activity distributions with better agreement to the expected positron annihilation distributions for each field, with a decrease in normalised root mean square error by 0.4%-9.7% and an increase in structural similarity index by 4.4%-17.3% when compared to a simplified calculation of the expected activity. We show reduced noise, and the potential for better range predictions for each field using iso-activity analysis along the beams-eye-view.
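
For context, a toy maximum-likelihood expectation maximization (MLEM) update for emission tomography is sketched below with an assumed dense system matrix; the per-field combination of PET data that is the novelty of this work is not reproduced here.

    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """A: (n_lors, n_voxels) toy system matrix; y: measured coincidence counts."""
        x = np.ones(A.shape[1])                 # start from a uniform activity estimate
        sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = A @ x + eps                  # forward projection of current estimate
            x *= (A.T @ (y / proj)) / sens      # multiplicative MLEM update
        return x

    rng = np.random.default_rng(0)
    A = rng.random((200, 50))                   # toy detector geometry
    y = rng.poisson(10.0 * (A @ rng.random(50)))
    activity = mlem(A, y)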

11:30 - 11:42

Evaluation of algorithms for dose reconstruction from prompt-gamma radiation in proton therapy - Beatrice Foglia (1), Chiara Gianoli (1), Takamitsu Masuda (2), Elisabetta De Bernardi (3), Thomas Bortfeld (4), Katia Parodi (1), Marco Pinto (1)

1 - Department of Experimental Physics - Medical Physics, Ludwig-Maximilians-Universitaet (LMU) Muenchen [Garching b. Muenchen] (Germany)
2 - National Institutes for Quantum and Radiological Science and Technology [Chiba] (Japan)
3 - Dipartimento di Medicina e Chirurgia = School of Medicine and Surgery, University of Milano Bicocca [Monza] (Italy)
4 - Department of Radiation Oncology, Massachusetts General Hospital (MGH) [Boston] (United States)

Dose and range monitoring with secondary radiation, such as prompt gammas, can help minimize range uncertainties in proton therapy. This work aims at reconstructing the delivered proton dose from prompt-gamma distributions. Some techniques have already been proposed in the literature to reconstruct the dose from secondary radiation, mostly positron emitters. Among them, very promising methods are the analytical deconvolution approach, the evolutionary algorithm and the maximum-likelihood expectation-maximization algorithm. Only the evolutionary algorithm has been applied to prompt-gamma distributions so far. Within this work, the applicability of these approaches to prompt-gamma distributions was assessed with simulated mono- and polyenergetic proton beams irradiating homogeneous and heterogeneous phantoms. The reconstructed depth-dose profiles were compared with their corresponding simulated distributions using different metrics. Overall, the reconstructed dose profiles match the simulated ones, and range differences evaluated at different distal dose levels were found to be lower than or in the order of 1 mm.

11:42 - 11:54

An analytical approach to predict 3D positron emitter distribution in carbon ion therapy - Tianxue Du (1), Julia Bauer (2) (3) (4), Katia Parodi (1), Marco Pinto (1)

1 - Department of Medical Physics, Ludwig-Maximilians-Universität München, Munich (Germany)
2 - Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg (Germany)
3 - Heidelberg Institute of Radiation Oncology (HIRO), Heidelberg (Germany)
4 - National Center for Tumor Diseases (NCT), Heidelberg (Germany)

Carbon ion therapy provides high accuracy in targeting tumors while sparing healthy tissues, and could benefit greatly from range verification given its sensitivity to range uncertainties. Positron emission tomography (PET) is commonly used for range verification, but as PET signals are not directly proportional to dose distributions, comparing them to a predicted positron emitter distribution (PED) is one viable approach. To this end, an analytical approach was previously suggested to predict 1D PEDs from depth-dose profiles. We revised this method by introducing finer modelling functions and considering more positron emitters, and extended it to 3D. In this contribution, we present our proposed 3D PED generation framework based on a pencil beam algorithm and a ray-casting method. The method is validated in in-silico studies against full Monte Carlo calculations for several phantoms as well as realistic computed tomography (CT) scans from patient data.

11:54 - 12:06

Simultaneous optimization of multiple plans within one treatment course for temporally feathered radiation therapy - Jenny Bertholet (1), Julius Arnold (1) (2), Chenchen Zhu (1), Gian Guyer (1), Silvan Mueller (1), Marco F M Stampanoni (3), Peter Manser (1), Michael K Fix (1)

1 - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland (Switzerland)
2 - Department of Physics, ETH Zürich, Zürich, Switzerland (Switzerland)
3 - Institut of Biomedical Engineering, ETH Zürich and PSI, Villigen, Switzerland (Switzerland)

Temporally feathered radiation therapy (TFRT) is a new technique in which several iso-curative sub-plans deliver an alternation of high and low doses to the same number of feathered organs-at-risk (OARs) to exploit the dynamic recovery process of healthy tissues. The goal is that each OAR receives, in alternation, one high dose and four low doses during which enhanced repair can take place to offset the sub-lethal damage caused during the high-dose fraction. The aim of this work is to enable simultaneous optimization of multiple sub-plans within one treatment course for TFRT planning. A clear high-to-low dose pattern was achieved for an academic case according to different TFRT planning objective variations (soft, medium, and hard). Feathering was also achieved for a clinically motivated head and neck case with high-to-low dose patterns and/or reduction in total mean dose for the ipsilateral parotid and submandibular glands, pharynx, oral cavity, and larynx.

12:06 - 12:18

The relationship between treatment plan complexity and dosimetry quality: An analysis of three stereotactic treatment planning challenges - Nick Hardcastle (1), Mathieu Gaudreault (1), Alisha Moore (2), Benjamin Nelms (3), Jordi Saez (4), Victor Hernandez (5)

1 - Peter MacCallum Cancer Centre (Australia)
2 - Trans-Tasman Radiation Oncology Group (Australia)
3 - Canis Lupus LLC (United States)
4 - Hospital Clínic de Barcelona (Spain)
5 - Hospital Sant Joan de Reus (Spain)

Radiation therapy treatment planning challenges provide the radiation therapy community with a standardized means to compare treatment plans from different techniques, software and users. Treatment planning challenges thus provide a unique opportunity to evaluate the impact of treatment plan complexity on treatment plan quality. In this study we compared treatment plan complexity in three stereotactic treatment planning challenges between different treatment planning systems, and determined the relationship between treatment plan dosimetric quality and complexity. There was a large variation in treatment plan complexity between treatment planning systems, MLC widths and challenges. Although treatment plan dosimetric score was positively correlated with increasing treatment plan complexity, we demonstrated that high complexity is not a necessity for achieving a high dosimetric score. Thus, limiting treatment plan complexity should be considered when optimizing treatment plans, to minimize the increased uncertainties associated with highly complex treatment plans.

12:18 - 12:30

Addressing the Gap in Oncologist Training: Generating Clinically Realistic, Suboptimal Dose Distributions through Geometry Aware Convolutions - Skylar Gay (1) (2), Mary Gronberg (3), Raymond Mumme (1), Christine Chung (1), Meena Khan (1), Chelsea Pinnix (1), Sanjay Shete (1), Brent Parker (1) (4), Tucker Netherton (1), Carlos Cardenas (5), Laurence Court (1)

1 - The University of Texas M.D. Anderson Cancer Center [Houston] (United States)
2 - The University of Texas M.D. Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences (United States)
3 - University of Texas Southwestern Medical Center (United States)
4 - University of Texas Medical Branch at Galveston (United States)
5 - The University of Alabama at Birmingham (United States)

Radiation oncologists are responsible for determining the clinical acceptability of radiotherapy treatment plans. Their training typically revolves around routine clinical practice, observing the approval of patient plans. This approach depends on sufficiently rich patient populations (which are difficult to assemble for smaller clinics and less-common cancer types) and is example-starved, as the trainee typically sees only one plan example for each patient. To address this, we have developed techniques to rapidly create clinically realistic plans that are suboptimal in controllable ways. These plans can then be used to train oncologists to understand what dose distributions are possible, and how to spot plans that could be improved. Here we first use deep learning-based dose prediction to create a high-quality treatment plan (trained using only high-quality plans). We then increase dose by controllable amounts around selectable organs-at-risk. This approach is shown to generate clinically realistic but suboptimal plans, as scored by experienced dosimetrists.

Image Segmentation I

Salle St Clair 3 - Monday, July 08

11:00 - 11:15

Feasibility of training federated deep learning oropharyngeal primary tumour segmentation models without sharing gradient information - Leroy Volmer (1), Ananya Choudhury (1), Aiara Lobo Gomes (1), Andre Dekker (1), Leonard Wee (1)

1 - Maastricht University [Maastricht] (Netherlands)

Federated deep learning is a way of training a deep learning model on vast amounts of privacy-sensitive data without having to exchange the data itself. A known vulnerability of federated model training is the interception of gradients exchanged during training. In this work, we show that it is feasible to train a model for segmenting an oropharyngeal tumour in a simulated federated manner without any sharing of gradients between institutions. Further, we show that the federated model is functionally equivalent to a centrally trained one. In future, we expect geometric similarity performance could be significantly improved by combining multi-modality images as inputs, such as PET and CT.
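
One common way to avoid exchanging raw gradients is to exchange locally trained weights and average them between rounds (FedAvg-style). The sketch below illustrates that general idea only; the abstract does not state that this is the exact scheme used, and the model, data loaders and local_train_fn are placeholders.

    import copy
    import torch

    def federated_round(global_model, local_loaders, local_train_fn):
        """One communication round: each site trains locally, only weights are shared."""
        states, sizes = [], []
        for loader in local_loaders:
            local = copy.deepcopy(global_model)
            local_train_fn(local, loader)          # purely local optimisation, no gradient exchange
            states.append(local.state_dict())
            sizes.append(len(loader.dataset))
        total = float(sum(sizes))
        avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}                 # dataset-size-weighted average of the weights
        global_model.load_state_dict(avg)          # simplification: non-float buffers are cast back
        return global_model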

11:15 - 11:30

Follow the Leader—a General Template-Guided Deep Learning Framework for Adaptive Radiotherapy Target Delineation - Mahdieh Kazemimoghadam (1), Qingying Wang (1), Mingli Chen (1), Xuejun Gu (1) (2), Weiguo Lu (1)

1 - Department of Radiation Oncology, The University of Texas Southwestern Medical Center (United States)
2 - Department of Radiation Oncology, Stanford University (United States)

Adaptive Radiotherapy (ART) heavily relies on efficient and accurate target delineation due to the need for real-time adjustments. However, manual contouring hinders ART integration with its variability and its labor-intensive, time-consuming nature. This study introduces a novel template-guided deep learning model for target segmentation. The model is built upon a 3D nnU-Net featuring three input channels: primary CT image, template CT image, and template mask (GTV, CTV, or PTV). This structure gives the learning a "Follow-the-Leader" character, i.e. the output dynamically adjusts based on the chosen template (the leader), providing segmentations tailored to the template. Additionally, it incorporates stylistic cues from the template, leading to accurate segmentation. The approach was tested on a dataset of breast cancer CT images. Patient data (n=114) were randomly split into training (84), validation (15), and test (15) sets. The model achieved Dice similarity coefficients of 0.74±0.06 (GTV), 0.86±0.04 (CTV) and 0.87±0.03 (PTV), showcasing its potential for improving the ART workflow.

11:30 - 11:42

Clarifying the dosimetric impact of autosegmentation inaccuracies - Joëlle Van Aalst (1) (2), Tomas Janssen (3), Ilse Van Bruggen (2), Jelmer Wolterink (4), Roel Steenbakkers (2), Stefan Both (2), Johannes Langendijk (2), Charlotte Brouwer (2)

1 - Technical Medicine, University of Twente, Enschede, The Netherlands (Netherlands)
2 - Department of Radiation Oncology, University Medical Centre Groningen, Groningen, The Netherlands (Netherlands)
3 - Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands (Netherlands)
4 - Department of Applied Mathematics, Technical Medical Centre, University of Twente, Enschede, The Netherlands (Netherlands)

Deep learning-based autosegmentation is conventionally assessed using geometric metrics. However, for clinically relevant quality evaluation, the dosimetric impact and the impact on normal tissue complication probability (NTCP) must be considered. Two distinct dosimetric effects can be distinguished: first, segmentation influences the optimized treatment plan (dosimetric impact on planning, DIP), and second, segmentation influences the dose evaluated in an organ (dosimetric impact on evaluation, DIE). We provide a comprehensive autosegmentation evaluation, considering geometric metrics, the DIP, the DIE and the impact on NTCP for 11 OARs in 68 head and neck cancer patients. Mean Dice scores (0.50-0.90) and mean Hausdorff distances (0.74-1.26 mm) of autosegmentation did not correlate with the dosimetric measures. No significant mean dose effect exceeding 2 Gy was observed. The DIE showed larger variations than the DIP. An NTCP difference of ≥1.0 percentage point was found with the DIE for xerostomia. Furthermore, the DIE can be approximated using retrospective clinical dose distributions, reducing the workload of autosegmentation evaluation.

11:42 - 11:54

Team work makes the dream work: An ensemble learning approach for segmentation with Convolutional Neural Networks and Vision Transformers - Anne Andresen (1) (2), Slavka Lucakova (1) (2), Yasmin Lassen-Ramshad (2), Christian Rønn Hansen (3) (4) (2), Nadine Vatterodt (2) (1), Jesper Kallehauge (1) (2)

1 - Aarhus University [Aarhus] (Denmark)
2 - Aarhus University Hospital (Denmark)
3 - University of Southern Denmark (Denmark)
4 - Odense University Hospital (Denmark)

Introduction: We propose to apply nnU-Net and a Vision Transformer (ViT) in ensemble learning for OAR segmentation, to improve handling of variations in organ size while providing high-quality segmentations on smaller datasets. Methods and Materials: nnU-Net and a transformer model were trained individually to segment 8 organs at risk (OAR) in the brain. The dataset included 55 T1 contrast-enhanced MRI scans, split into 49 for training and 6 for testing. The evaluation compared the predicted segmentations with manual delineations provided by oncologists, using the Dice coefficient, the 1 mm normalized surface Dice and the 95th percentile Hausdorff distance. Results: Median 1 mm normalized surface Dice was within 0.90-0.94, Dice within 0.85-0.97 and 95th percentile Hausdorff distance within 1.0-2.0 mm across all OARs. Discussion: The results suggest that ensemble learning of deep learning models could contribute to increased accuracy in the presence of smaller dataset sizes and class imbalance; however, evaluation on additional datasets is needed.
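
A minimal sketch of probability-averaging ensembling of segmentation networks (e.g. a CNN and a ViT); the model objects, input tensor and equal weighting are assumptions for illustration, not the authors' exact fusion rule.

    import torch

    def ensemble_segment(models, image):
        """Average class probabilities of several models and take the arg-max label."""
        with torch.no_grad():
            probs = sum(torch.softmax(m(image), dim=1) for m in models) / len(models)
        return probs.argmax(dim=1)                  # per-voxel OAR label map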

11:54 - 12:06

Data harvesting vs data farming: A study of the importance of variation vs sample size in deep learning-based auto-segmentation for breast cancer patients - Emma Skarsø Buhl (1) (2), Else Maae (3), Louise Wichmann Matthiessen (4), Mette Holck Nielsen (5), Maja Maraldo (6), Mette Møller (7), Stine Elleberg (1), Sami Aziz-Jowad Al-Rawi (8), Birgitte Vrou Offersen (9), Stine Korreman (1) (2)

1 - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus (Denmark)
2 - Department of Clinical Medicine, Aarhus University, Aarhus (Denmark)
3 - Department of Oncology, Vejle Hospital, University Hospital of Southern Denmark (Denmark)
4 - Department of Oncology, Herlev and Gentofte Hospital, Herlev (Denmark)
5 - Department of Oncology, Odense University Hospital (Denmark)
6 - Department of Clinical Oncology, Rigshospitalet, Copenhagen University Hospital (Denmark)
7 - Department of Oncology, Aalborg University Hospital, Aalborg (Denmark)
8 - Department of Clinical Oncology and Palliative Care, Zealand University Hospital, Næstved (Denmark)
9 - Department of Experimental Clinical Medicine, Aarhus University Hospital, Aarhus (Denmark)

Use of deep learning-based auto-segmentation (DL) can mitigate the interobserver variation (IOV) seen in manual delineations. To train a well-performing model, a large high-quality dataset is needed. A large dataset can become available if delineations from clinical practice are used, but these will have high IOV. A dataset created for the purpose of training a DL model will have lower IOV, but will be smaller. The aim of this study was to investigate the difference in performance when training a model in three different scenarios: a large clinical dataset, a curated clinical dataset, and a smaller dedicated dataset. Model performance was estimated using the Dice similarity coefficient, the 95th percentile Hausdorff distance (HD95) and the mean surface distance (MSD). On their own test set, the clinical models had poorer performance. When testing the models on the dedicated dataset, the dedicated model showed slightly better performance, along with fewer segmentation outliers.

12:06 - 12:18

Development and external validation of a DL-based CTV segmentation model on a large multicentric dataset in the case of whole-breast radiotherapy treatment - Maria Giulia Ubeira Gabellini (1), Gabriele Palazzo (1), Martina Mori (1), Alessia Tudda (1), Luciano Rivetti (2), Elisabetta Cagni (3), Roberta Castriconi (1), Valeria Landoni (4), Eugenia Moretti (5), Aldo Mazzilli (6), Caterina Oliviero (7), Lorenzo Placidi (8), Giulia Rambaldi Guidasci (9), Cecilia Riani (1), Andrei Fodor (10), Nadia Gisella Di Muzio (10) (11), Robert Jeraj (2) (12), Antonella Del Vecchio (1), Claudio Fiorino (1)

1 - Medical Physics, IRCCS San Raffaele Scientific Institute, Milan (Italy)
2 - Faculty of Mathematics and Physics [Ljubljana] (Slovenia)
3 - Medical Physics Unit, Department of Advanced Technology, Azienda USL-IRCCS di Reggio Emilia, Reggio Emilia (Italy)
4 - Istituto Nazionale dei tumori Regina Elena, Rome (Italy)
5 - Department of Medical Physics, University Hospital, Udine (Italy)
6 - Medical Physics Dept, University Hospital of Parma AOUP (Italy)
7 - University Hospital, “Federico II”, Naples (Italy)
8 - Fondazione Policlinico Universitario A. Gemelli IRCCS, uosd medical physics and radioprotection, Rome (Italy)
9 - UOC di Radioterapia Oncologica, Fatebenefratelli Isola Tiberina – Gemelli Isola, Rome (Italy)
10 - Radiotherapy, IRCCS San Raffaele Scientific Institute, Milan (Italy)
11 - Vita-Salute San Raffaele University, Milan (Italy)
12 - Department of Medical Physics, University of Wisconsin, Madison (United States)

To optimize radiotherapy treatment and reduce toxicities, organs-at-risk and the clinical target volume (CTV) must be segmented. DL techniques have great potential to accomplish this task, also accounting for segmentation variability. Thanks to the availability of a large single-institute data sample together with numerous multi-centric data, a reliable CTV segmentation model can be built and validated. A preprocessing step followed by a 3D U-Net able to segment both right and left CTVs was implemented. The metrics used to evaluate performance are the Dice similarity coefficient (DSC), the Hausdorff distance (HD) and its 95th percentile variant (HD_95). The model achieved high performance on the validation set (DSC: 0.901; HD: 20.9 mm; HD_95: 9.6 mm). Moreover, it predicts contours more extended than the clinical ones along the cranio-caudal axis in both directions. The same metrics applied to internal and external data showed overall agreement and model transferability for all centers considered but one (Institute 9).

12:18 - 12:30

Leveraging Network Uncertainty to Identify Regions in Rectal Cancer CTV Auto-Segmentations Likely Requiring Manual Edits - Federica Carmen Maruccio (1), Rita Simões (1), Charlotte Brouwer (2), Jan-Jakob Sonke (1), Tomas Janssen (1)

1 - Netherlands Cancer Institute (Netherlands)
2 - University Medical Center Groningen [Groningen] (Netherlands)

While Deep Learning (DL) auto-segmentation has the potential to improve segmentation efficiency, manual verification and adjustment of the predictions are still required. This study explores quantifying network uncertainty in DL models using Monte Carlo dropout to identify regions requiring correction in auto-segmentation. The analysis compares DL predictions with both independent manual segmentations and manual corrections of the prediction. We hypothesized that corrections address only clinically relevant errors, reducing uncertainty due to less inter-observer variation. The study explores correlations of network uncertainty with network performance both globally and locally. A novel local approach was implemented to compare uncertainty envelope width with editing vector length. Using the uncertainty envelope width as a binary classifier, we achieved a median AUC of 0.8 in identifying regions that were manually edited with edits of 2 mm or larger. This demonstrates the capability of the uncertainty maps to capture relevant errors. 
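
A minimal sketch of Monte Carlo dropout at inference time, as one way to obtain per-voxel uncertainty maps of the kind discussed above; the segmentation model, activation and number of samples are placeholders rather than the study's configuration.

    import torch

    def mc_dropout_predict(model, image, n_samples=20):
        """Keep dropout active at inference and sample stochastic forward passes."""
        model.eval()
        for m in model.modules():                   # re-enable only the dropout layers
            if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
                m.train()
        with torch.no_grad():
            probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
        return probs.mean(dim=0), probs.std(dim=0)  # mean prediction and uncertainty map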

Invited Speakers

Auditorium Lumière - Monday, July 08

Salle St Clair 3 - Monday, July 08

Prediction Models I

Auditorium Lumière - Monday, July 08

14:15 - 14:27

Virtual patient-specific quality assurance: a prospective two-year analysis - Alon Witztum (1), Phillip Wall (2), Emily Hirata (1), Gilmer Valdes (1)

1 - University of California, San Francisco (United States)
2 - Washington University in St. Louis (United States)

In modern radiation oncology clinics, patient-specific quality assurance (QA) has been shown to have questionable utility in relation to the disproportionate amount of resources it demands. Predictive machine learning models can be designed to indicate the likelihood a treatment plan will fail QA based on its complexity, which can maximize efficiency by focusing resources less on “simple” plans and more on “complex” plans. Our implementation of this method – Virtual QA (VQA) – has been integrated into a clinical workflow to run prospectively alongside regular QA operations for over two years, storing a predicted QA result alongside the measured result for every treatment plan. Results from this study show VQA can reduce the number of QA measurements by approximately two-thirds, with a specificity of 100% and sensitivity of 67% in identifying plans with an absolute QA result greater than or equal to 4%. The model maintained a mean absolute error of 0.97% and a maximum absolute error of 2.99%. The area under the ROC curve for VQA to predict QA results that fail at a 3% threshold is 0.77, and at a 4% threshold it is 0.94.

14:27 - 14:39

Model Size Reduction in CNNs through Sparse Regularization: Balancing Efficiency and Diagnostic Accuracy in Medical Imaging - Carlos Garcia Fernandez (1), Wilmer Arbelo-Gonzalez (2), Jerome Friedman (3), Gilmer Valdes (1)

1 - Department of Radiation Oncology, UCSF (United States)
2 - Altos Labs (United States)
3 - Department of Statistics, Stanford University (United States)

Convolutional neural networks (CNNs) are at the forefront of medical imaging diagnosis and prognostics. Typically, networks contain millions of parameters and are predominantly designed for the ImageNet database, which consists of millions of images across 1000 categories. In contrast, medical imaging tasks usually involve fewer categories and a smaller number of training images. In this study, we introduce an innovative method for automatic model size reduction in CNNs that preserves performance in computer vision classification tasks.

14:39 - 14:51

The implementation of Gaussian Process Regression to Inform Voxel-Based Analysis when Dealing with Sparse and Irregular Time-series Longitudinal Data - Jane Shortall (1), Eliana Vasquez Osorio (1), Andrew Green (2), David Wong (3), Tanuj Puri (1), Peter Hoskin (1) (4), Ananya Choudhury (1) (5), Marcel Van Herk (1), Alan McWilliam (1)

1 - Division of Cancer Sciences - Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom (United Kingdom)
2 - EMBL's European Bioinformatics Institute, United Kingdom. (United Kingdom)
3 - Division of Informatics, The University of Manchester (United Kingdom)
4 - Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, United Kingdom (United Kingdom)
5 - Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, United Kingdom (United Kingdom)

Short-term Prostate Specific Antigen (PSA) trajectory has been shown to be prognostic of future biochemical recurrence (BCR), which in turn could be related to insufficient radiotherapy dose in certain anatomical regions. PSA follow-up is often irregular and sparse, presenting challenges in population-based studies. We implement Gaussian Process (GP) regression to inform voxel-based analysis to investigate the effect of dose on BCR. PSA data were modelled using GP regression. Interpolated PSA at 6 time steps from 0-3 years was used as the outcome in a voxel-based analysis to find regions with a significant association between PSA and dose. Spearman's rank test was used for voxel-wise statistics in a cohort of 219 patients with prostate cancer treated with radiotherapy. GP-informed voxel-wise analysis revealed a significant correlation between dose and follow-up PSA data at all tested time-points, with the identified region changing over time. Dose in the seminal vesicles was consistently identified as significantly correlated with PSA.
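
A minimal sketch of Gaussian Process regression interpolating one patient's sparse, irregular PSA series onto a regular time grid; the kernel and toy values are illustrative assumptions, not the study's fitted model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    t_obs = np.array([[0.1], [0.4], [1.3], [2.7]])    # years since radiotherapy, one patient
    psa_obs = np.array([1.8, 0.9, 0.5, 0.8])          # PSA values (ng/mL)

    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t_obs, psa_obs)

    t_grid = np.linspace(0, 3, 6).reshape(-1, 1)      # 6 regular time points over 0-3 years
    psa_mean, psa_sd = gp.predict(t_grid, return_std=True)   # interpolated PSA and its uncertainty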

14:51 - 15:03

Voxel-wise causal inference to detect location and dose-response of organs at risk in lung cancer - Marcel Van Herk (1)

1 - The Christie NHS Foundation Trust [Manchester, Royaume-Uni] (United Kingdom)

Dose correlations between neighbouring organs complicate NTCP modeling. Sometimes even the dose-sensitive region and NTCP function are unknown. We propose a novel voxel-wise NTCP modelling approach based on instrumental variable analysis, a causal inference technique, to address this problem and apply it to lung cancer patients. Our instrumental variable is the random perturbation of the delivered dose due to an action threshold during historical image-guided radiotherapy (IGRT). We test many NTCP models and perform Cox regression per voxel between the ΔNTCP due to the perturbation and the hazard of death. The largest AIC-significant voxel cluster over all parameters defines the region and NTCP model. In a cohort of 609 stage III patients, a dose-response was found in a location close to the sinoatrial node in the heart, with an integrated Gaussian dose-response with D50 of 20±1 Gy and g of 2±0.5 Gy. This is the first time voxel-wise causal inference is used in radiotherapy research.

15:03 - 15:15

Pelvic bone marrow dose-volume predictors of late lymphopenia following pelvic lymph node radiation therapy for prostate cancer - Maddalena Pavarini (1), Lisa Alborghetti (1), Stefania Aimonetto (2), Angelo Maggio (3), Valeria Landoni (4), Paolo Ferrari (5), Antonella Bianculli (6), Edoardo Petrucci (7), Alessandro Cicchetti (8), Bruno Farina (9), Paolo Salmoiraghi (10), Eugenia Moretti (11), Barbara Avuzzi (8), Tommaso Giandini (8), Fernando Munoz (2), Alessandro Magli (12), Giuseppe Sanguineti (4), Justyna Magdalena Waskiewicz (5), Luciana Rago (6), Domenico Cante (7), Giuseppe Girelli (9), Vittorio Vavassori (10), Nadia Gisella Di Muzio (13) (14), Tiziana Rancati (8), Cesare Cozzarini (13), Claudio Fiorino (13)

1 - IRCCS San Raffaele Scientific Institute [Milan, Italie] (Italy)
2 - Ospedale Regionale Parini-AUSL Valle d'Aosta, Aosta (Italy)
3 - Istituto di Candiolo - Fondazione del Piemonte per l'Oncologia IRCCS (Italy)
4 - IRCCS Istituto Nazionale Tumori Regina Elena, Roma (Italy)
5 - Comprensorio Sanitario di Bolzano (Italy)
6 - IRCCS Crob, Rionero in Vulture (Italy)
7 - ASL TO4 Ospedale di Ivrea (Italy)
8 - Fondazione IRCCS Istituto Nazionale dei Tumori, Milano (Italy)
9 - Ospedale degli Infermi di Biella (Italy)
10 - Cliniche Gavazzeni-Humanitas, Bergamo (Italy)
11 - Azienda sanitaria universitaria Friuli Centrale, Udine (Italy)
12 - Azienda Ospedaliero Universitaria S. Maria della Misericordia, Udine (Italy)
13 - IRCCS San Raffaele Scientific Institute, Milano (Italy)
14 - Vita-Salute San Raffaele University, Milano (Italy)

The aim was to assess clinical/dosimetric predictors of late hematological toxicity in a prospective multi-institute prostate cancer (PCa) cohort undergoing pelvic-nodes irradiation (PNI). Clinical/dosimetric and blood test data were prospectively collected, including lymphocyte counts (ALC) at different times. DVHs of the irradiated volumes of the ilium, lumbosacral spine, lower pelvis and whole pelvis were extracted. Twenty-four clinical/dosimetric covariates were considered as potential predictors of 2-year CTCAE v4.03 grade≥2 (G2+) lymphopenia (ALC<800/μL), including best-selected DVH parameters and, separately, acute CTCAE v4.03 grade≥3 (G3+) lymphopenia (ALC<500/μL). Late G2+ lymphopenia was experienced by 46/499 (9.2%) patients. ALC at baseline (μL⁻¹) [HR=0.997, 95% CI 0.996-0.998, p<0.0001], smoking (yes/no) [HR=2.9, 95% CI 1.25-6.76, p=0.013] and BMLS-V≥24Gy (cc) [HR=1.006, 95% CI 1.002-1.011, p=0.003] were retained in the multi-variable logistic regression model. When acute G3+ lymphopenia (yes/no) was considered, it was included in the model [HR=4.517, 95% CI 1.954-10.441, p=0.0004]. Performance was moderately high and bootstrap-based internal validation confirmed the robustness of the results.

Image Registration

Salle St Clair 3 - Monday, July 08

14:15 - 14:27

Inter-patient registration methods for voxel-based analysis in lung cancer - Chloe Min Seo Choi (1), Jue Jiang (1), Maria Thor (1), Joseph Deasy (1), Harini Veeraraghavan (1)

1 - Memorial Sloan Kettering Cancer Center (United States)

Accurate registration between patients is crucial for reliable radiotherapy outcome prediction using voxel-based analysis (VBA). We compared three registration methods: 1) an iterative, non-deep learning, intensity-based B-spline deformable image registration method available in the open-source Elastix, called SyN, 2) a published deep learning method called patient-specific anatomy and shape context (PACS), and 3) an enhanced version, called PACS-tumor aware, which specifically considers relevant structures that may change, e.g., tumors. The algorithms were tested on a dataset of 240 3D CT images from patients with lung cancer. Our analysis showed that PACS-tumor demonstrated minimal differences in tumor volume (1.08%) and GTV dose (0.05 Gy) between undeformed and deformed moving images, preserving tumors effectively. Additionally, it exhibited robust registration accuracy and consistency across patients of different sizes. In conclusion, PACS-tumor is a well-performing method for inter-patient registration in VBA for lung cancer patients.

14:27 - 14:39

Modelling progressive anatomical changes and their variability during radiotherapy treatment - Poppy Nikou (1) (2) (3), Eliana M. Vasquez Osorio (4) (5), Marcel Van Herk (4) (5), Teresa Guerrero Urbano (6) (7), Anna Thompson (8), Andrew Nisbet (3), Sarah Gulliford (3) (8), Jamie McClelland (1) (2) (3)

1 - Centre for Medical Image Computing (CMIC), University College London, London, UK (United Kingdom)
2 - Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK (United Kingdom)
3 - Department of Medical Physics and Biomedical Engineering, University College London, London, UK (United Kingdom)
4 - Division of Cancer Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK (United Kingdom)
5 - Christie NHS Foundation Trust Hospital, Manchester, UK (United Kingdom)
6 - Guys and St Thomas NHS Foundation Trust Hospital, London, UK (United Kingdom)
7 - Kings College London, London, UK (United Kingdom)
8 - University College London NHS Foundation Trust Hospital, London, UK (United Kingdom)

Progressive anatomical changes are often observed in head and neck cancer patients over the course of radiotherapy. To create proton therapy treatment plans that are robust against these changes, these potential anatomical changes must be known in advance. In this work we present a novel methodology for developing statistical population models that capture progressive anatomical changes and their variability. The models are continuous in time, and can be built from a population of patients with a varying number and frequency of images over treatment. The statistical population models are based on principal component analysis of the progressive anatomical changes, and can generate scenarios of many potential anatomical changes. In this work two principal components were used to generate nine scenarios of potential anatomical change. The evaluation found, for each patient, one scenario that matched their progressive anatomical changes better than the average change alone.
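
A minimal sketch of a PCA-based population model of anatomical change, assuming per-patient deformation vector fields already mapped to a common reference anatomy and common time points; shapes and the sampled scenario are illustrative assumptions, not the authors' continuous-time model.

    import numpy as np
    from sklearn.decomposition import PCA

    # patients x time points x flattened DVF; dimensions are deliberately tiny here
    dvfs = np.random.randn(30, 5, 3 * 16 * 16 * 16)
    X = dvfs.reshape(30, -1)                              # one row per patient trajectory

    pca = PCA(n_components=2).fit(X)

    # generate one scenario at +/- 1.5 standard deviations along the two leading modes
    scores = np.array([1.5, -1.5]) * np.sqrt(pca.explained_variance_)
    scenario = pca.mean_ + scores @ pca.components_       # a plausible anatomical-change trajectory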

14:39 - 14:51

An anatomically-informed correspondence initialisation method to improve learning-based registration for radiotherapy - Edward G. A. Henderson (1), Marcel Van Herk (1), Andrew F. Green (2), Eliana M. Vasquez Osorio (1)

1 - University of Manchester [Manchester] (United Kingdom)
2 - European Bioinformatics Institute (United Kingdom)

We propose an anatomically-informed initialisation method for interpatient CT non-rigid registration (NRR), using a learning-based model to estimate correspondences between organ structures. A thin plate spline (TPS) deformation, set up using the correspondence predictions, is used to initialise the scans before a second NRR step. We compared two established NRR methods for the second step: a B-spline iterative optimisation-based algorithm and a deep learning-based approach. Registration performance is evaluated with and without the initialisation by assessing the similarity of propagated structures. Our proposed initialisation improved the registration performance of the learning-based method to more closely match the traditional iterative algorithm, with the mean distance-to-agreement reduced by 1.8mm for structures included in the TPS and 0.6mm for structures not included, while maintaining a substantial speed advantage (5 vs. 72 seconds).
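
A minimal sketch of the thin plate spline initialisation step, assuming predicted organ-surface correspondences; SciPy's RBFInterpolator stands in for whatever TPS implementation was actually used, and the second, conventional NRR step would follow.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    fixed_pts = rng.random((200, 3)) * 100                # predicted correspondences on the fixed CT (mm)
    moving_pts = fixed_pts + rng.normal(scale=2.0, size=(200, 3))   # the same points on the moving CT

    tps = RBFInterpolator(fixed_pts, moving_pts, kernel='thin_plate_spline')

    # warp arbitrary fixed-image coordinates into the moving image before the second NRR step
    query = rng.random((1000, 3)) * 100
    warped = tps(query)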

14:51 - 15:03

DiffuseRT: learning likely deformations of head and neck radiotherapy patients from data - Luciano Rivetti (1), Andreas Smolders (2) (3), Nadine Vatterodt (4) (5), Stine Korreman (4) (5), Antony Lomax (2) (3), Manju Sharma (6), Andrej Studen (1) (7), Damien Weber (2) (8) (9), Francesca Albertini (2), Robert Jeraj (1) (10)

1 - Faculty of Mathematics and Physics, University of Ljubljana (Slovenia)
2 - Paul Scherrer Institute, Center for Proton Therapy (Switzerland)
3 - Department of Physics, ETH Zurich (Switzerland)
4 - Danish Center for Particle Therapy, Aarhus University Hospital (Denmark)
5 - Department of Clinical Medicine, Aarhus University (Denmark)
6 - Department of Radiation Oncology, University of California, San Francisco (United States)
7 - Jožef Stefan Institute, Ljubljana (Slovenia)
8 - Department of Radiation Oncology, University Hospital Zurich (Switzerland)
9 - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern (Switzerland)
10 - University of Wisconsin School of Medicine and Public Health (United States)

Predicting potential deformations of patients undergoing radiotherapy can help to improve treatment planning. This study introduces Denoising Diffusion Probabilistic Models (DDPMs) to forecast fraction-specific anatomical changes. Three DDPMs were developed: one generating future CBCTs directly (image model), another producing deformable vector fields that deform a reference CBCT, and a hybrid model that combines strengths of both previous models. Visual inspection revealed realistic random daily anatomical changes and systematic weight loss in the generated images. Furthermore, the image and hybrid models mirrored the distribution patterns of body, esophagus, and parotids volume and position changes observed in real CBCTs. Using these images for robust optimization in simplified treatment plans demonstrated enhanced target coverage for equal integral dose compared to plans optimized solely for set-up robustness. The results show that the models generate distributions similar to the real anatomical changes on a population level and have a potential use for robust anatomical optimization.

15:03 - 15:15

Choice of Colourmaps for Radiotherapy - Alex Davies (1), Mark Fleckney (1), Marcel Van Herk (2)

1 - Kent Oncology Centre, Maidstone (United Kingdom)
2 - University of Manchester [Manchester] (United Kingdom)

Colourmaps are used in medical imaging to allow the simultaneous display of multiple datasets. A large observer study was conducted to investigate whether the choice of colourmap should be a consideration in radiotherapy. A simulated low-contrast phantom was displayed in eight colourmaps, and users were asked which test regions were visible using each colourmap. Analysis of 214 observers showed that more test regions were visible in the rainbow, red, viridis, and orange colourmaps compared to grayscale, whilst fewer regions could be seen using blue, green, and purple. A k-means clustering analysis showed 6 distinct clusters of response type, indicating user-specific benefits to different colourmaps for this detection-type task.

Radiomics and Voxel-Based-Analysis

Auditorium Lumière - Monday, July 08

15:45 - 16:00

Genetically-based analysis of dose surface maps for assessing toxicity post-RT - Tiziana Rancati (1), Eliana Gioscio, Michela Carlotta Massi, Nicola Rares Franco, Petra Seibold, Barbara Avuzzi, Alessandro Cicchetti, Ananya Choudhury, Barry Rosenstein, David Azria, Dirk De Ruysscher, Maarten Lambrecht, Elena Sperk, Christopher J Talbot, Ana Vega, Liv Veldeman, Adam Webb, Paolo Zunino, Anna M Paganoni, Francesca Ieva, Andrea Manzoni, Sarah L Kerns, Alison Dunning, Jenny Chang-Claude, Catharine L West

1 - Fondazione IRCCS Istituto Nazionale Tumori - National Cancer Institute [Milan] (Italy)

Polygenic risk scores (PRS) can consider the cumulative impact of multiple SNPs on radiosensitivity. Epistasis is also likely to affect the toxicity risk, and it is essential to include SNP-SNP interactions in the PRS. Voxel-based analysis (VBA) [1] examines dose distributions for insights into radiobiological heterogeneity across tissues, to enhance understanding of dose-response relationships. We propose a method to couple PRS and VBA, with application to modelling late urinary toxicity after prostate cancer radiotherapy. We included 778 prostate cancer patients receiving radical radiotherapy, enrolled and followed up in the REQUITE and RADPrecise studies, and considered three late toxicity endpoints: urinary frequency, haematuria and rectal bleeding. We incorporated the PRSi within the VBA via proportional adjustment of the single voxel doses. On the resulting PRSi-modulated DSMs, we performed Cox VBA and identified the bladder/rectum subregions to be spared within personalised radiotherapy optimisation.

16:00 - 16:12

Prediction of survival outcomes and relevant tumor immune infiltrates using radiomic network analysis - Jung Hun Oh (1), Aditya Apte (1), Harini Veeraraghavan (1), Amita Shukla-Dave (1), Joseph Deasy (1)

1 - Memorial Sloan Kettering Cancer Center (United States)

We propose to utilize a network-based method to analyze radiogenomic (radiomic/genomic) data. This approach enables the construction of sparse radiomic networks using the graphical LASSO (Least Absolute Shrinkage and Selection Operator) algorithm based on partial correlations among radiomic features. We developed and applied an optimal transport-based k-means algorithm to the resulting radiomic networks to identify clusters. The differences in survival rates and immune cell types between the resulting clusters were then assessed. In this study, we analyzed computed tomography (CT) radiomic features and RNA-Seq data used to quantify CIBERSORT scores in head and neck cancer and lung cancer patients taken from the TCIA and TCGA databases. Our proposed method identified sub-groups with significant differences in survival outcomes and immune cell types. The promising results suggest that the geometry-based radiomic network approach could help identify patients at high risk of poor prognosis.
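
A minimal sketch of building a sparse radiomic network from partial correlations with the graphical LASSO; the feature matrix is synthetic, and the optimal transport-based clustering step is not reproduced.

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = StandardScaler().fit_transform(rng.normal(size=(150, 40)))   # patients x radiomic features

    model = GraphicalLassoCV().fit(X)
    precision = model.precision_                      # sparse inverse covariance matrix
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)        # partial correlations between features
    np.fill_diagonal(partial_corr, 1.0)
    adjacency = np.abs(partial_corr) > 1e-6           # edges of the sparse radiomic network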

16:12 - 16:24

Radiomic Extraction and Analysis for DICOM Images to Refine Objective Quality Control (READII-2-ROQC): An open-source foundation for head and neck radiomics - Katy L. Scott (1), Sejin Kim (1) (2) (3), Jermiah J. Joseph (1), Matthew Boccalon (1), Mattea Welch (1) (3), Mogtaba Alim (4), Umar Yousafzai (5), Ian Smith (1) (2), Chris McIntosh (2) (6) (7), Katrina Rey-McIntyre (7), Shao Hui Huang (7) (8), Tirth Patel (3) (6) (7), Tony Tadic (3) (7) (8), Brian O'Sullivan (7), Scott V. Bratman (7) (8) (9), Andrew J. Hope (7) (8), Benjamin Haibe-Kains (1) (3) (9)

1 - Princess Margaret Research [Princess Margaret Cancer Centre] (Canada)
2 - Department of Medical Biophysics, University of Toronto (Canada)
3 - Cancer Digital Intelligence [Princess Margaret Cancer Centre] (Canada)
4 - Department of Computer Science [University of Toronto] (Canada)
5 - Cheriton School of Computer Science [Waterloo] (Canada)
6 - TECHNA Institute (Canada)
7 - Radiation Medicine Program [Princess Margaret Cancer Centre] (Canada)
8 - Department of Radiation Oncology, University of Toronto (Canada)
9 - Department of Medical Biophysics, University of Toronto (Canada)

Radiomics has seen explosive growth in oncology applications, aiming to develop signatures capable of predicting cancer outcomes and survival. However, radiomic analysis suffers from inconsistent methodology, insufficient data availability, and a lack of open-source processing tools that handle common radiological image formats. In this work, we present the READII-2-ROQC pipeline: an end-to-end pipeline that processes DICOM-RT objects, performs image pre-processing and radiomic feature extraction, and publishes an organized data structure to a publicly available repository (ORCESTRA). Further, new quality control methods to assess radiomic features through negative control image generation are included. As a proof of concept, we explored a large head and neck cancer dataset (RADCURE) and replicated three previously published radiomics models. Our results highlight the importance of determining whether radiomic signatures are truly driven by the underlying imaging data. The main contributions of this work are the pipeline, an analysis-ready radiomics dataset, and open-source code to generate negative controls, so all radiomic studies can be READII-2-ROQC.

16:24 - 16:36

Predicting Radiation Pneumonitis using Pre-Radiotherapy CT Scans by Radiomic and a Pre-Trained CNN Based Feature Extraction - Matthew Gil (1) (2), William Nailon (2) (3), Stephen Harrow (2), Paul Murray (1), Sorcha Campbell (2), Ai Wain Yong (2), John Murchison (2), Gillian Ritchie (2), Stephen Marshall (1)

1 - University of Strathclyde (United Kingdom)
2 - NHS Lothian (United Kingdom)
3 - University of Edinburgh (United Kingdom)

A radiomic model to predict radiation pneumonitis, an adverse effect of radiotherapy, has been developed and tested using data from 66 lung cancer patients who received IMRT treatment. Image features were extracted from planning CT images using a radiomic approach and by leveraging a pre-trained deep learning-based UNet model. Radiation dose and clinical features were also included. The AdaBoost algorithm was used to create a prediction model, which was trained and tested using a leave-one-out cross-validation approach. When using only the dose and clinical features, an AUC ROC of 0.63 was achieved, which improved to 0.836 when including all of the CT image features. This shows that CT image features, which are not currently used clinically for pneumonitis prediction, extracted by both the radiomic and our UNet-based approaches, are a source of predictive power and can greatly improve predictions over standard dose and clinical information.
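
A minimal sketch of the evaluation scheme described (AdaBoost with leave-one-out cross-validation over a combined feature matrix); the radiomic and UNet-based feature extraction itself is not shown, and the data here are synthetic placeholders.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(66, 120))                    # 66 patients x combined dose/clinical/image features
    y = rng.integers(0, 2, 66)                        # pneumonitis yes/no

    scores = np.zeros(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

    print("AUC ROC:", roc_auc_score(y, scores))       # cross-validated discrimination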

16:36 - 16:48

Integrating Radiomics Harmonization and Deep Survival Analysis for Enhanced Predictive Modeling in a Multicentric Context - Nassib Abdallah (1) (2) (3), Jean-Marie Marion (1) (4), Pierre Chauvet (1) (4), Clovis Tauber (2), Thomas Carlier (5), Mathieu Hatt (3)

1 - Laboratoire Angevin de Recherche en Ingénierie des Systèmes (France)
2 - Imaging & Brain (France)
3 - Laboratoire de Traitement de l'Information Medicale, INSERM UMR 1101 (France)
4 - Université Catholique de l'Ouest (France)
5 - University Hospital of Nantes, (France)

This paper compares harmonization methods in the context of positron emission tomography / computed tomography (PET/CT) radiomics analysis for predicting recurrence-free survival (RFS) in head and neck cancer (HNC) patients, using the multicentre HECKTOR 2022 challenge dataset. We present harmonization techniques, HarMSTD and HarCenter, as alternatives to the popular ComBat method. Notably, these methods do not require model retraining with new data, enhancing their practical utility. Our results demonstrate the benefit of harmonization in this context, with improved concordance index (C-index) values. The paper also compares our findings with the existing literature, positioning our methods favorably in the context of the HECKTOR 2022 challenge. These novel approaches offer promising advancements in oncological predictive modeling, emphasizing the need for tailored harmonization techniques in multicenter clinical studies.

16:48 - 17:00

Can bootstrapping increase robustness of voxel-based-analysis? - Alan McWilliam (1), Marcel Van Herk (1), Chloe Pratt (2), Eliana M. Vasquez Osorio (1)

1 - Division of Cancer Science, University of Manchester (United Kingdom)
2 - Department of Medical Physics, Leeds Cancer Centre, St James's University Hospitals (United Kingdom)

Bootstrapping is a well-established resampling method in statistics, allowing measures of accuracy to be assigned. Using bootstrapping in a voxel-based-analysis (VBA) workflow has the potential to increase robustness in identifying anatomical regions where dose is significantly associated with outcome. In this work, we implemented bootstrapped VBA and assessed its impact across progressively smaller datasets. 1101 non-small-cell lung cancer patients were available. Bootstrapping was implemented by performing 100 bootstraps of the full VBA analysis, with a t-test per voxel against survival at 12 months. Results demonstrate that bootstrapped VBA defines a smaller, more targeted region in the base of the heart. Dose extracted from the bootstrapped VBA region showed improved model performance in multivariable models compared to the standard VBA workflow. VBA showed utility for progressively smaller dataset sizes down to 250 entries. Datasets of 100 and 50 showed high uncertainty in defining significance. Bootstrapped VBA emerges as a valuable internal validation approach.
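
A minimal sketch of a bootstrapped voxel-based analysis: resample patients with replacement, run a per-voxel t-test between outcome groups in each resample, and keep voxels that are significant in most resamples. Dose maps are assumed to be already mapped to a common anatomy, and the thresholds are illustrative.

    import numpy as np
    from scipy.stats import ttest_ind

    def bootstrapped_vba(dose, alive12, n_boot=100, alpha=0.05, keep_frac=0.9):
        """dose: (n_patients, n_voxels) dose maps on a common anatomy; alive12: bool outcome."""
        rng = np.random.default_rng(0)
        n = dose.shape[0]
        sig_count = np.zeros(dose.shape[1])
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                     # resample patients with replacement
            d, o = dose[idx], alive12[idx]
            _, p = ttest_ind(d[o], d[~o], axis=0, equal_var=False)
            sig_count += p < alpha
        return sig_count / n_boot >= keep_frac              # voxels significant in most resamples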

Hadrontherapy Planning

Salle St Clair 3 - Monday, July 08

15:45 - 16:00

Trading Robustness? Introducing total variance objectives to Pareto surface approximation techniques for radiotherapy - Remo Cristoforetti (1) (2) (3), Philipp Süss (4), Niklas Wahl (1) (2)

1 - Department of Medical Physics in Radiation Oncology, German Cancer Research Center – DKFZ, Heidelberg, Germany (Germany)
2 - National Center for Radiation Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany (Germany)
3 - Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany (Germany)
4 - Optimization division, Fraunhofer Institute for Industrial Mathematics (ITWM), Kaiserslautern, Germany (Germany)

This work presents a novel approach for introducing robustness into multicriteria optimization (MCO) in proton therapy. Our approach exploits a formulation of expected-value optimization relying on precomputed total variance and expected dose influence. This facilitates merging robust optimization with Pareto front approximation while avoiding explicit calculation of scenarios during optimization. The method also introduces robustness as an objective, enabling interactive management of robustness trade-offs. We demonstrate MCO including robustness trading with a single-field proton plan on the TG119 phantom and a two-field proton plan on a liver case. Trade-offs between robustness and dose metrics can be effectively approximated and visualized. Our approach allows planners more explicit control over robustness, and is directly translatable to other MCO approaches such as lexicographic optimization.

16:00 - 16:12

Mid-Range Proton Arc Therapy - Qingying Wang (1), Mingli Chen (2), Mahdieh Kazemimoghadam (3), Xuejun Gu (4), Weiguo Lu (5)

1 - Qingying Wang (United States)
2 - Mingli Chen (United States)
3 - Mahdieh Kazemimoghadam (United States)
4 - Xuejun Gu (United States)
5 - Weiguo Lu (United States)

Proton arc therapy (PAT) holds promise for enhancing target coverage and sparing normal tissues by incorporating all beam angles in treatment plan optimization. However, the conventional use of all possible energy layers for each angle significantly impacts delivery efficiency and increases planning complexity: treatment delivery time is heavily extended by energy switching, and the number of planning parameters is greatly amplified by including energy selection in the optimization. In this study, we propose a novel Mid-Range Proton Arc Therapy (MRPAT) framework that utilizes the mid-range beam, employing a single pre-selected energy layer to deliver mid-range spots at each beam angle, and adopts a novel spot optimization strategy to efficiently solve the large-scale PAT optimization problem. As the gantry rotates, MRPAT enforces the scanning spots to fill the target with the Bragg peaks kept well inwards of the target's distal edge, and effectively utilizes the high linear energy transfer (LET) of the Bragg peaks to enhance relative biological effectiveness (RBE) in the target while mitigating damage to adjacent organs at risk (OARs) caused by range uncertainty. The mid-range energy varies smoothly with the beam angle without increasing delivery time, as energy switching takes place during gantry rotation.

16:12 - 16:24

From Concept to Clinic: How Our In-house Automated Planning System Augmented the Commercial TPS for Treating Over 10,000 Patients - Masoud Zarepisheh (1), Gourav Jhanwar, Ying Zhou, Qijie Huang, Hai D. Pham, Jie Yang, Laura Cervino, Jean M Moran, Joseph O. Deasy, Linda Hong

1 - Memorial Sloan Kettering Cancer Center (United States)

Most modern Treatment Planning Systems (TPS) come with Application Programming Interface (API) scripting capabilities, allowing for the development of custom applications that enhance the functionalities of commercial TPS. In our clinic, we have developed and integrated ECHO (Expedited Constrained Hierarchical Optimization), an in-house automated planning system, with the Eclipse TPS. ECHO has played a pivotal role in the treatment of over 10,000 patients. It streamlines the planning process: after preparing the contours and beams (IMRT fields or VMAT arcs), users launch the ECHO plug-in within the TPS, select optimization structures, and run the program. ECHO automatically retrieves the necessary data from the TPS, constructs and solves the optimization problems, and imports the final results (optimal fluence for IMRT, optimal control points for VMAT) back into the TPS. It then runs the final dose calculation in the TPS and notifies the user via email for plan review, all within 15-120 minutes. ECHO completely bypasses the TPS optimization engine; however, it utilizes the TPS's final dose calculation and, for IMRT, leaf sequencing. In this work, we discuss our development journey and clinical experiences, outlining both past and current technical challenges, along with the solutions we have devised. We have also recently introduced an open-source project, PortPy (Planning and Optimization for Radiation Therapy in Python), which aims to share our experience with the wider research community.

16:24 - 16:36

Fully-automated integrated multi-criterial optimisation of transmission pencil beams in IMPT for penumbra sharpening - Wens Kong (1), Merle Huiskes (2), Steven Habraken (2) (3), Eleftheria Astreinidou (2), Coen Rasch (2), Ben Heijmen (1), Sebastiaan Breedveld (1)

1 - Department of Radiotherapy, Erasmus MC Cancer Institute, Erasmus University Medical Center (Netherlands)
2 - Department of Radiation Oncology, Leiden University Medical Center (Netherlands)
3 - HollandPTC (Netherlands)

Objective: In IMPT, Bragg peaks (BPs) result in a steep distal dose fall-off; typically, the lateral dose fall-off is less steep. High-energy pristine transmission pencil beams (TPBs), with BPs outside the patient, exhibit a sharp lateral penumbra. In this study we hypothesised that by combining IMPT BPs with TPBs (denoted 'IMPT&TPB'), dose distributions in head-and-neck cancer (HNC) patients could be improved compared to conventional IMPT. Approach: Our spot selection algorithm for fully-automated IMPT treatment planning was extended for the combined optimisation of IMPT BPs and TPBs. For 8 nasopharynx and 8 oropharynx cancer patients, the IMPT&TPB and IMPT plans were compared. Main results: Clinical OAR and target constraints were met in all plans. On average, 11 of the 19 investigated OAR plan parameters significantly improved by allowing TPBs in the optimisations, while one parameter showed a small but significant deterioration. Conclusion: For HNC, IMPT&TPB resulted in comparable target coverage with overall superior OAR sparing.

16:36 - 16:48

Interactive dose modification: a novel approach to proton beam therapy treatment planning - Matthew Lowe (1)

1 - The Christie NHS Foundation Trust, Manchester, UK (United Kingdom)

A novel approach to treatment planning for pencil beam scanning proton therapy is presented, in which the user interacts intuitively with a dose distribution to modify a treatment plan. In this proof-of-principle implementation, the dose distribution is displayed independently of the treatment planning system (TPS). The user can click or scroll with the mouse to increase or decrease the dose to any voxel. This is achieved by modifying spot weights in proportion to their contribution to that voxel, as described by an influence matrix. The dose is updated within 180 ms on average, appearing real-time to the user. Minimum spot weights are accounted for, ensuring plan deliverability. Modified spot weights are returned to the TPS, from which the normal pathway to patient treatment can continue. The proposed approach allows the user to produce a desired dose distribution without knowing how to describe it using optimisation objectives, reducing the expertise needed to modify plans.
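
The core update in an influence-matrix-driven dose edit can be sketched in a few lines; the proportional scaling rule, step size and toy matrix below are illustrative assumptions rather than the author's exact implementation.

```python
import numpy as np

def nudge_dose(spot_weights, influence, voxel, step=0.05, min_weight=0.0):
    """Increase (step > 0) or decrease (step < 0) the dose to one voxel by rescaling
    spot weights in proportion to their contribution to that voxel (a sketch)."""
    contrib = influence[voxel, :] * spot_weights     # per-spot dose to the clicked voxel
    total = contrib.sum()
    if total <= 0:
        return spot_weights                          # no spot reaches this voxel
    new_w = spot_weights * (1.0 + step * contrib / total)
    return np.maximum(new_w, min_weight)             # respect minimum deliverable spot weight

# Toy example: 1000 voxels, 200 spots with a random influence matrix (illustrative only).
rng = np.random.default_rng(1)
D = rng.random((1000, 200)) * 1e-2
w = np.ones(200)
dose_before = (D @ w)[42]
w = nudge_dose(w, D, voxel=42, step=0.05)
print(f"dose to voxel 42: {dose_before:.3f} -> {(D @ w)[42]:.3f}")
```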

16:48 - 17:00

Simultaneous Reduction of Number of Spots and Energy Layers in Intensity Modulated Proton Therapy for Rapid Spot Scanning Delivery - Anqi Fu (1), Vicki Taasti (2), Masoud Zarepisheh (1)

1 - Memorial Sloan Kettering Cancer Center (United States)
2 - Maastricht University Medical Centre (Netherlands)

We propose a method to improve the delivery efficiency of proton therapy by simultaneously reducing the number of spots and energy layers using reweighted l1 regularization. We formulate the treatment planning problem as an optimization problem with an objective consisting of a dosimetric plan quality term plus a weighted l1 regularization term. We then iteratively solve this problem and adaptively update the regularization weights to promote sparsity of the spots and layers. To assess our method's effectiveness, we compare its performance on four head-and-neck cancer patients against standard l1 and group l2 regularization. Reweighted l1 regularization reduces the number of spots and energy layers by 40% and 35%, respectively, with minimal cost to dosimetric plan quality. In addition, it provides a better trade-off between plan delivery efficiency and plan quality than standard l1 or group l2 regularization, requiring the lowest cost to quality to achieve a given level of efficiency.
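
A minimal sketch of the reweighted l1 loop (spot sparsity only; the full method also uses group terms so that entire energy layers can be removed) might look as follows, using CVXPY on a toy influence matrix with illustrative parameters.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
A = rng.random((300, 120))               # toy dose-influence matrix (voxels x spots)
d_ref = rng.uniform(55.0, 65.0, 300)     # toy reference dose per voxel
lam, eps = 0.5, 1e-3                     # regularization strength and reweighting offset

x = cp.Variable(120, nonneg=True)
weights = np.ones(120)
for _ in range(5):                       # reweighted l1 iterations
    objective = cp.sum_squares(A @ x - d_ref) + lam * cp.sum(cp.multiply(weights, x))
    cp.Problem(cp.Minimize(objective)).solve()
    # Near-zero spots get large weights next round, pushing them all the way to zero.
    weights = 1.0 / (x.value + eps)

print(f"active spots after reweighting: {int(np.sum(x.value > 1e-4))} / 120")
```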

Teaching Speakers

Auditorium Lumière - Tuesday, July 09

Salle St Clair 3 - Tuesday, July 09

Rising Stars Competition

Auditorium Lumière - Tuesday, July 09

09:00 - 09:15

Quantifying changes in ventilation for photon- and proton-treated patients according to dose and pneumonitis development - Rebecca Lim (1), Tien Tang (1), Austin Castelo (1), Caleb O'connor (1), Yulun He (1), Zhongxing Liao (1), Radhe Mohan (1), Kristy Brock (1)

1 - The University of Texas M.D. Anderson Cancer Center [Houston] (United States)

Proton therapy has not yet demonstrated a clear reduction in the development of radiation-induced pneumonitis for lung cancer patients, which may stem from a lack of established clinical endpoints for studying the biological effect across modalities. Ventilation can be extracted from computed tomography (CT) imaging through deformable image registration of 4DCTs and may provide information on imaging-based functional changes in the lung. We hypothesize that proton therapy results in different changes in ventilation compared to photon therapy. For 20 photon and 20 proton cases, 4DCT Jacobian-based ventilation was correlated with dose and pneumonitis development. Ventilation was tracked over the course of treatment on a voxel-wise basis and stratified into high and low function. Ventilation differences across modalities were compared by function and toxicity. Patients treated with protons exhibited statistically significant improvements in the ventilation change of low-functioning voxels compared to photon patients. Future work will evaluate the specific voxels that developed pneumonitis.
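
The Jacobian-based ventilation referenced here is commonly computed as the determinant of the Jacobian of the deformation map minus one; below is a minimal NumPy sketch on a toy displacement field (the spacing and field are illustrative assumptions).

```python
import numpy as np

def jacobian_ventilation(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise specific-volume change det(J) - 1 from a displacement field.

    dvf: array of shape (3, nz, ny, nx) holding displacements (mm) along z, y, x.
    """
    grads = np.empty((3, 3) + dvf.shape[1:])
    for i in range(3):              # displacement component
        for j in range(3):          # spatial direction
            grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)          # J = I + du/dx
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return det - 1.0                # > 0: local expansion (inhale), < 0: contraction

# Toy field: uniform 2% expansion along z only -> ventilation ~0.02 everywhere.
dvf = np.zeros((3, 20, 20, 20))
dvf[0] = 0.02 * np.arange(20).reshape(20, 1, 1)
print(jacobian_ventilation(dvf).mean())
```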

09:15 - 09:30

Introducing a co-creation design methodology for AI-based error detection in radiotherapy - Luca Heising (1) (2), Frank Verhaegen (1), Viola Mottarella (3), Yin-Ho Chong (3), Mariangela Zamburlini (3), Bas Nijsten (1), Ans Swinnen (1), Michel Öllers (1), Maria Jacobs (1) (2), Carol Ou (2), Cecile Wolfs (1)

1 - Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+ (Netherlands)
2 - Department of Information Management [Tilburg] (Netherlands)
3 - Varian, a Siemens Healthineers Company (Switzerland)

In vivo dosimetry (IVD) using the electronic portal imaging device (EPID) can increase the accuracy of radiotherapy (RT) by catching treatment errors, monitoring dose delivery, and assisting in treatment adaptations. Because manual assessment of the information generated by the EPID is labor-intensive, artificial intelligence (AI) has been proposed to catch treatment errors. To foster the translation from research to clinical implementation, a human-centric design process consisting of a five-day design sprint was followed, yielding a prototype of an interface for interacting with the AI, including an AI certainty metric, explainable AI (XAI), and a decision-aid tool. The prototype was well received by end-users. Further work will focus on the AI certainty metric and the development of appropriate XAI.

09:30 - 09:45

Dose driven gaze optimization in ocular proton therapy - Daniel Björkman (1) (2), Riccardo Via (1), Tony Lomax (1) (2), Maria De Prado (1), Alessia Pica (1), Guido Baroni (3), Damien C. Weber (1) (4) (5), Jan Hrbacek (1)

1 - Paul Scherrer Institute (Switzerland)
2 - Department of Physics [ETH Zürich] (Switzerland)
3 - Dipartimento di Electtronica, Informazione e Bioingegneria [Politecnico Milano] (Italy)
4 - Bern University Hospital [Berne] (Switzerland)
5 - University hospital of Zurich [Zurich] (Switzerland)

This study introduces a novel approach to treatment planning for ocular proton therapy. The aim is to autonomously equip planners with information derived from recent normal tissue complication probability (NTCP) models and predictive factors for radiation-induced side effects, to assist them in sparing healthy tissue. This is accomplished using our in-house dose engine OptisTPS, in which we integrate machine learning (ML) and a tailored cost function to streamline the computations for identifying an attainable eye orientation for treatment. The ML predictions efficiently reduce the search area for attainable eye orientations, as demonstrated for a cohort of 190 patients. Results from the integration of the cost function and NTCP models are displayed for sampled gaze fixation points, where color represents either the cost (in arbitrary units) or the probability of radiation-induced toxicity, across 12 investigated patients.

09:45 - 10:00

Neural Graphics Primitives-based Deformable Image Registration for On-the-fly Motion Extraction - Xia Li (1) (2), Fabian Zhang (2), Muheng Li (1) (3), Damien Weber (1) (4) (5), Antony Lomax (1), Joachim Buhmann (2), Ye Zhang (1)

1 - Center for Proton Therapy, Paul Scherrer Institut (Switzerland)
2 - Department of Computer Science, ETH Zurich (Switzerland)
3 - Department of Physics, ETH Zurich (Switzerland)
4 - Department of Radiation Oncology, University Hospital of Zürich (Switzerland)
5 - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern (Switzerland)

Intra-fraction motion in radiotherapy is commonly modeled using deformable image registration (DIR). However, existing methods often struggle to balance speed and accuracy, limiting their applicability in clinical scenarios. This study introduces a novel approach that harnesses Neural Graphics Primitives (NGP) to optimize the displacement vector field (DVF). Our method leverages learned primitives, processed as splats, and interpolates within space using a shallow neural network. Uniquely, it enables self-supervised optimization at an ultra-fast speed, negating the need for pre-training on extensive datasets and allowing seamless adaptation to new cases. We validated this approach on the 4D-CT lung dataset DIR-lab, achieving a target registration error (TRE) of 1.15 ± 1.15 mm within a remarkable time of 1.77 seconds. Notably, our method also addresses the sliding boundary problem, a common challenge in conventional DIR methods.

10:00 - 10:15

Robust optimization for contouring uncertainties in adaptive radiation therapy - Ivar Bengtsson (1) (2), Andreas Smolders (3) (4), Anders Forsgren (1), Antony Lomax (3) (4), Damien Weber (3) (5) (6), Albin Fredriksson (2), Francesca Albertini (3)

1 - KTH Royal Institute of Technology, Stockholm (Sweden)
2 - RaySearch Laboratories AB (Sweden)
3 - Paul Scherrer Institute (Switzerland)
4 - Department of Physics [ETH Zürich] (Switzerland)
5 - Universitätsspital Zürich (Switzerland)
6 - Bern University Hospital [Berne] (Switzerland)

Online adaptive radiation therapy requires fast and automated contouring of daily scans for treatment plan re-optimization. However, automated contouring is imperfect and introduces contour uncertainties. This work aims at developing and comparing robust optimization strategies accounting for such uncertainties. A deep-learning method was used to predict the uncertainty of deformable image registration, and to generate a finite set of daily contour samples. Eight optimization strategies were compared: two baseline methods, three methods that convert contour samples into voxel-wise probabilities, and three methods accounting explicitly for contour samples as scenarios in robust optimization. Target coverage and organ-at-risk (OAR) sparing were evaluated robustly for simplified proton therapy plans for five head-and-neck cancer patients. We found that the probability-based methods covered the target with better OAR sparing than the baseline methods, without increasing the optimization time. The scenario-based methods spared the OARs even more, but increased integral dose and required more computation time.
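
Converting a finite set of daily contour samples into voxel-wise probabilities, as in the probability-based strategies, can be sketched as follows; the way the probabilities enter the objective here is an illustrative assumption.

```python
import numpy as np

# Toy data: 20 contour samples of a target mask on a 40x40x40 grid (illustrative only).
rng = np.random.default_rng(3)
samples = rng.random((20, 40, 40, 40)) > 0.7      # stand-in for sampled daily contours

# Voxel-wise probability of belonging to the target.
p_target = samples.mean(axis=0)

def underdose_penalty(dose, p, d_presc=60.0):
    """Probability-weighted quadratic penalty on target underdose (one simple choice)."""
    return float(np.sum(p * np.clip(d_presc - dose, 0.0, None) ** 2))

dose = rng.uniform(55.0, 65.0, (40, 40, 40))      # toy dose distribution
print(underdose_penalty(dose, p_target))
```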

10:15 - 10:30

Short-term mortality prediction in SIII NSCLC: What do clinicians want and can models provide it? - Matt Craddock (1) (2), Azadeh Abravan (1) (2), Alan McWilliam (1) (2), Corinne Faivre-Finn (1) (2), Gareth Price (1) (2)

1 - University of Manchester, Division of Cancer Sciences, Manchester (United Kingdom)
2 - The Christie NHS Foundation Trust, Department of Clinical Oncology, Manchester (United Kingdom)

Selecting the “right” treatment for the diverse SIII NSCLC patient cohort is a challenging process that is poorly supported by evidence. Currently, ~20% of SIII NSCLC patients receiving curative-intent radiotherapy in Manchester die within 6 months, a timescale in which they arguably derive no benefit from the treatment and which, we believe, could be reduced by model-informed clinical decision making. We present a clinician-informed modelling study in which we survey oncologists to determine their views on the utility and specification of a short-term survival model in SIII NSCLC. Conventional time-to-event modelling, modern machine-learning-based methods and discrete-time modelling are applied to demonstrate the inherent difficulties of deriving clinically useful predictions from low-event datasets with significant class overlap.

Invited Speakers

Auditorium Lumière - Tuesday, July 09

Salle St Clair 3 - Tuesday, July 09

Breathing Motion

Auditorium Lumière - Tuesday, July 09

11:30 - 11:42

Deep learning autosegmentation performance on Mid-Position and time-averaged lung CT scans - Joren Brunekreef (1), Jan-Jakob Sonke (1)

1 - Netherlands Cancer Institute, Amsterdam (Netherlands)

4DCT scans are routinely used in the treatment preparation process of radiation therapy for targets in the upper abdomen and thorax to reduce geometric uncertainties. Online adaptive RT (OART) is an emerging technique to increase the accuracy of radiation therapy, but in-room 4D imaging techniques are currently scarce. To increase the efficiency of OART, deep learning auto-segmentation is an active field of research. In this work, we investigate the difference in auto-segmentation quality between motion-averaged and motion-compensated scans. We found that MidP scans led to increased segmentation performance for GTVs and organs-at-risk compared to mean-intensity projections. These results indicate that future workflows for online adaptive radiotherapy may benefit from integrating motion-compensation techniques when using deep learning auto-segmentation algorithms for delineating anatomical structures.

11:42 - 11:54

Surrogate-Optimized Respiration Motion Model to Estimate Motion for Each Projection of 3D CBCT Scan - Yuliang Huang (1), Kris Thielemans (1) (2), Jamie McClelland (1)

1 - Center of Medical Image Computing, University College London (United Kingdom)
2 - Institute of Nuclear Medicine, University College London (United Kingdom)

Cone-beam CT (CBCT) images are commonly acquired during radiotherapy to help set up the patient and to monitor changes in their anatomy. However, standard 3D CBCT images do not estimate breathing motion and are susceptible to artifacts when such motion occurs. State-of-the-art surrogate-driven motion models can estimate the motion for each CBCT projection using the correspondence between the internal motion and one or more respiratory surrogate signals. However, it can be challenging to measure or extract surrogate signals that are strongly related to the internal motion. This study therefore proposes a novel surrogate-optimized respiratory motion model and investigates the feasibility of applying it to CBCT projection data. The method optimises the surrogate signal(s) as well as the correspondence model, eliminating the need for the surrogate signals to have a strong relationship with the internal motion. Our method is evaluated on both simulated and real patient data and compared with a surrogate-driven motion model, showing promising improvements across all metrics for the simulated data and improved image quality for the real data.

11:54 - 12:06

Monte Carlo robustness analysis of breath-hold and free-breathing photon and proton treatments of small lung tumors using Poisson displacement fields - Nils Olovsson (1) (2), Kenneth Wikström (1) (2) (3), Anna Flejmer (1) (2) (3), Anders Ahnesjö (1), Alexandru Dasu (1) (2)

1 - Uppsala University (Sweden)
2 - The Skandion Clinic (Sweden)
3 - Uppsala University Hospital (Sweden)

The adoption of proton therapy for the treatment of small lung tumors has been cautious due to the sensitivity of proton range to geometric and radiological uncertainties. There is, however, potential to spare dose to organs at risk if such treatments can be performed confidently. In this study, a comprehensive Monte Carlo based method for evaluating the impact of these uncertainties on treatments in breath-hold and free-breathing is proposed, together with a method for deforming images based on Laplace and Poisson editing. Tabulated values were used for the uncertainty parameters, and free-breathing motion was simulated using tumor positions from multiple cine-CT scans. A breath-hold proton plan was compared to breath-hold and free-breathing photon plans. With robust treatment planning, adequate target coverage was achieved for the proton plan, with the benefit of a lower mean heart dose but at the cost of higher dose to the esophagus.

12:06 - 12:18

Real-Time CBCT Imaging via a Single Arbitrarily-Angled X-ray Image using a Joint Dynamic Reconstruction and Motion Estimation (DREME) Framework - Hua-Chieh Shao (1), Tielige Mengke (1), Tinsu Pan (2), You Zhang (1)

1 - University of Texas Southwestern Medical Center (United States)
2 - University of Texas MD Anderson Cancer Center (United States)

Real-time cone-beam CT (CBCT) imaging provides instantaneous anatomy for image guidance and online treatment adaptation in radiotherapy, but faces a stringent temporal constraint (<500 ms) under respiration-induced anatomical motion, leading to extreme under-sampling. We propose a joint dynamic CBCT reconstruction and motion estimation (DREME) framework to address this challenge. DREME reconstructs a dynamic CBCT from a pre-treatment conventional CBCT scan, while simultaneously deriving a deep learning-based, angle-agnostic model for real-time motion and CBCT estimation from a single X-ray projection. DREME uses a prior-model-free spatiotemporal implicit neural representation (PMF-STINR) framework to reconstruct dynamic CBCTs and derive a data-driven motion model. Capturing up-to-date motion patterns, the motion model is subsequently used to train a deep learning model that estimates real-time motion from an arbitrarily-angled X-ray projection. DREME accurately solves 3D motion in real time, as validated through phantom simulations (1.0±0.4 mm tracking error) and a lung patient dataset (1.3±1.2 mm tracking error).

12:18 - 12:30

Enhancing Clinical Utility of 0.55T 4DMRI in Radiotherapy Planning: Exploring Compressed Sensing, Denoising, and Auto-segmentation - Austen Curcuru (1), Young Hun Yoon (1) (2), Shanti Marasini (1), H Michael Gach (1), Jianing Pang (3), Xinzhou Li (3), Xin Miao (3), Jin Sung Kim (2), Taeho Kim (1)

1 - Washington University School of Medicine [Saint Louis, MO] (United States)
2 - Yonsei University (South Korea)
3 - Siemens Healthcare (Germany)

Respiratory self-gated 4DMRI offers superior soft-tissue contrast compared with 4DCT, without imparting any ionizing radiation or requiring an external surrogate device. Recent advancements in MR hardware and sequence design have made 4DMRI simulation an increasingly attractive option for patients with tumors in the thoracoabdominal region, which may experience several centimeters of respiratory motion. This work investigates different respiratory-trace binning options and compressed sensing reconstruction in a vendor-designed 4DMRI sequence. Five volunteers were scanned on a 0.55 T MRI scanner. The k-space data were retrospectively reconstructed using both phase and amplitude binning, with 5 and 10 total bins, both with and without compressed sensing. A denoising algorithm was applied to the reconstructed images to improve image quality. A deep learning auto-segmentation network was used to determine whether the images were of sufficient quality to produce stable segmentation results.
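
The phase- and amplitude-binning options compared in this work can be sketched directly on a self-gating respiratory trace; the trace, sampling and bin counts below are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_bins(trace, n_bins):
    """Assign each readout to an amplitude bin spanning the 5th-95th percentile range."""
    lo, hi = np.percentile(trace, [5, 95])
    edges = np.linspace(lo, hi, n_bins + 1)
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)

def phase_bins(trace, n_bins):
    """Assign each readout to a phase bin using the analytic-signal (Hilbert) phase."""
    phase = np.angle(hilbert(trace - trace.mean()))            # -pi .. pi
    return ((phase + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

t = np.linspace(0.0, 30.0, 3000)                               # 30 s of readouts (toy)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(4).standard_normal(t.size)
print(np.bincount(amplitude_bins(trace, 10)))                  # readouts per amplitude bin
print(np.bincount(phase_bins(trace, 10)))                      # readouts per phase bin
```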

Internal Radiation Therapy

Salle St Clair 3 - Tuesday, July 09

11:30 - 11:42

Reconstruction of Time-Activity Curves in MRT Therapy Using Bayesian Unfolding - Francesca Nicolanti (1) (2), Carlo Mancini Terracciano (1) (2), Riccardo Faccini (1) (2), Elena Solfaroli Camillocci (3), Silvio Morganti (1), Riccardo Mirabelli (1) (2), Lorenzo Campana (4), Francesco Collimati (1) (2)

1 - Istituto Nazionale di Fisica Nucleare [Sezione di Roma 1] (Italy)
2 - Dipartimento di Fisica [Roma La Sapienza] (Italy)
3 - Istituto Superiore di Sanità = National Institute of Health (Italy)
4 - Department of Basic and Applied Sciences for Engineering, Sapienza University of Rome (Italy)

Molecular Radiotherapy (MRT) holds the potential for personalized cancer treatment, yet achieving precise dosimetry remains a challenge. WIDMApp (Wearable Individual Dose Monitoring Apparatus) is a novel approach that addresses this by accurately characterizing the Time-Activity Curves (TACs) of organs from Time-Counts Curves (TCCs) acquired with purpose-designed wearable radiation detectors. The WIDMApp approach relies on the detectors, a Monte Carlo (MC) simulation, and an unfolding procedure. The MC simulation estimates the probabilities that particles produced in each organ generate a signal in each detector. These probabilities are then used to infer the organ activities (i.e. the TACs) from the detector measurements (i.e. the TCCs) using a Bayesian approach. Thanks to the Bayesian formalism, the algorithm does not need to assume a functional form for the TACs. Using the MIRD computational human phantom, we developed a Monte Carlo simulation to generate a dataset for a 131I thyroid treatment. Assuming a mono-exponential washout for all relevant organs, we simulated the TCC measurements and used this dataset to test our algorithm. We assessed the Bayesian algorithm's performance under varying initial conditions and data fluctuations, and compared the results with those obtained by a fitting procedure. This study indicates that Bayesian unfolding can reconstruct organ TACs within 15% accuracy, even with significant noise and uncertainty in the initial activity estimates. These results pave the way for the exploration of more complex and realistic clinical scenarios.
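
The unfolding step can be illustrated with the standard iterative Bayesian (D'Agostini-style) update applied to a toy response matrix; the real WIDMApp response probabilities come from the Monte Carlo simulation, so everything below is an illustrative stand-in.

```python
import numpy as np

def bayesian_unfold(counts, response, a0, n_iter=50):
    """Iterative Bayesian unfolding of organ activities from detector counts (a sketch).

    counts:   measured detector counts, shape (n_det,)
    response: response[i, j] = P(signal in detector i | decay in organ j)
    a0:       initial guess of the organ activities, shape (n_org,)
    """
    a = a0.astype(float).copy()
    eff = response.sum(axis=0)                           # detection efficiency per organ
    for _ in range(n_iter):
        expected = response @ a                          # predicted detector counts
        ratio = np.divide(counts, expected, out=np.zeros_like(expected), where=expected > 0)
        a *= (response.T @ ratio) / np.maximum(eff, 1e-12)
    return a

# Toy example: 3 organs, 4 detectors; response values are illustrative assumptions.
rng = np.random.default_rng(5)
R = rng.random((4, 3))
R /= R.sum(axis=0, keepdims=True) * 2                    # ~50% detection efficiency per organ
true_a = np.array([1000.0, 400.0, 50.0])
measured = rng.poisson(R @ true_a).astype(float)
print(bayesian_unfold(measured, R, a0=np.full(3, 300.0)))
```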

11:42 - 11:54

Rapid Auto-Planning for Cervical Brachytherapy: Direct Dwell Time vs. Dose Prediction with Deep Learning Networks - Lance Moore (1), Karoline Kallis (1), Kelly Kisling (1), Nuno Vasconcelos (2), Sandra Meyers (1)

1 - Department of Radiation Medicine and Applied Sciences, University of California, San Diego (United States)
2 - Department of Electrical and Computer Engineering, University of California, San Diego (United States)

Treatment planning for cervical brachytherapy is often performed manually and can take over an hour. Automated planning leveraging deep learning models could significantly reduce planning time, improving the patient experience. The goal of this work was to assess a new approach to automated planning: using a neural network to directly predict dwell times (VGG model). This model was compared to an automated planning method that uses a neural network to predict 3D dose (OPT model), followed by an optimizer to convert dose to dwell times. Models were developed using 1803 prior cervical brachytherapy treatment plans, with a 68%/16%/16% train/test/validation split. The dataset included 7 different types of implants, featuring some combination of tandem, ovoids, ring, and/or needles. The VGG model used a 16-layer 3D VGG regression network, while the OPT model employed a Cascade U-Net. Automated plans were compared to clinical plans using the mean absolute error (MAE) and mean error (ME) in dose and dwell times. The resulting automated plans were very similar to clinical plans for both models, as evidenced by low MAE (OPT=3.4%, VGG=4.7%) and ME in dose (OPT=0.1%, VGG=0.6%), and low MAE (OPT=7.3s, VGG=8.0s) and ME (OPT=0.2s, VGG=0.7s) in dwell times. While the OPT model featured slightly superior performance, the direct dwell time prediction of the VGG model reduced computation time. Both methods of automated planning could provide high clinical value for brachytherapy procedures.

11:54 - 12:06

Near-to-target organs-at-risk segmentation in cervical high dose-rate brachytherapy via deep learning - Ruiyan Ni (1), Elizabeth Chuk (1) (2), Kathy Han (1) (2), Jennifer Croke (1) (2), Anthony Fyles (1) (2), Jelena Lukovic (1) (2), Michael Milosevic (1) (2), Benjamin Haibe-Kains (1) (2) (3), Alexandra Rink (1) (2)

1 - UNIVERSITY OF TORONTO (Canada)
2 - Princess Margaret Hospital (Canada)
3 - Vector Institute (Canada)

Deep learning has been used to automate and speed up the manual contouring step for organs at risk (OARs) and targets in radiotherapy. In cervical high dose-rate brachytherapy, OARs far from the radiotherapy target are not considered during treatment planning, which is driven by the OAR dose to the 2 cubic centimeters (D2cm³) closest to the target. Accuracy in distal OAR segmentation therefore does not necessarily translate into dosimetric impact. To align with these clinical requirements, we developed a distance-penalized loss function that guides the network's attention towards the near-to-target OAR regions, ensuring highly accurate proximal OAR segmentations. Models trained with the proposed loss demonstrated improved geometric and dosimetric performance. A clinical acceptability test was conducted, with a physician scoring the auto-contours and recording the time needed to rate and revise them. A novel geometric metric, the weighted Dice Similarity Coefficient, was introduced as a better indicator of D2cm³ accuracy than commonly used metrics.
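
One way to realize a distance-penalized loss is to weight each voxel by its proximity to the target before computing a soft overlap term; the weighting function, decay length and toy masks below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def proximity_weights(target_mask, spacing, decay_mm=10.0):
    """Weights that are 1 inside/near the target and decay exponentially with distance."""
    dist = distance_transform_edt(~target_mask, sampling=spacing)
    return np.exp(-dist / decay_mm)

def weighted_soft_dice_loss(pred, gt, weights, eps=1e-6):
    """Soft Dice loss in which each voxel contributes according to its weight."""
    inter = np.sum(weights * pred * gt)
    denom = np.sum(weights * pred) + np.sum(weights * gt)
    return 1.0 - 2.0 * inter / (denom + eps)

# Toy example (illustrative masks only).
rng = np.random.default_rng(6)
target = np.zeros((32, 32, 32), dtype=bool)
target[12:20, 12:20, 12:20] = True                    # stand-in target volume
oar_pred = rng.random((32, 32, 32))                   # soft OAR prediction
oar_gt = np.roll(target, 6, axis=0).astype(float)     # stand-in OAR contour
w = proximity_weights(target, spacing=(2.0, 2.0, 2.0))
print(weighted_soft_dice_loss(oar_pred, oar_gt, w))
```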

12:06 - 12:18

Automatic target propagation strategies for MRI-guided cervical brachytherapy - Rita Simões (1), Eva C. Schaake (1), Eva E. Rijkmans (1), Marlies E. Nowee (1), Tomas M. Janssen (1)

1 - Netherlands Cancer Institute (Netherlands)

The target structures required for cervical cancer brachytherapy treatment planning are delineated by radiation oncologists using both imaging and clinical information. The segmentations are made manually from scratch at the first brachytherapy fraction. For subsequent fractions the segmentations from the first fraction are rigidly propagated. This process is time- and resource-intensive, which makes the workflow inefficient and, since the patient is waiting with the applicator in situ, uncomfortable for the patient. In this work, we compared different strategies for automatic target propagation from the first to the second and third fractions. Registration-based approaches performed the worst, while segmentation-based approaches that take prior patient-specific information into account reached the best performance. In particular, patient-specific fine-tuned deep learning segmentation models are a promising strategy for automating the target propagation workflow.

12:18 - 12:30

Geant4 simulation of DNA damage surrounding a DaRT seed - Laura Ballisat (1), Chiara De Sio (1), Lana Beck (1), Susanna Guatelli (2), Dousatsu Sakata (1) (2) (3), Yuyao Shi (1), Jinyan Duan (1), Jaap Velthuis (1), Anatoly Rosenfeld (2)

1 - University of Bristol (United Kingdom)
2 - University of Wollongong (Australia)
3 - Osaka University (Japan)

Diffusing alpha-emitters radiation therapy (DaRT) is a brachytherapy technique using 224Ra seeds. Geant4 and Geant4-DNA are used to simulate DNA damage due to DaRT. Simulations show that the majority of the dose is delivered in the first 3 weeks; therefore, little additional dose is delivered if the seeds are not removed. The dose near the seed is predominantly due to α-particles, while the contribution of electrons dominates at radial distances beyond 4 mm, showing that all particles should be included in treatment planning for accurate dose calculation. The DNA damage surrounding the seed becomes less complex with distance, and the relative biological effectiveness (RBE) decreases to unity. This should be considered when determining the seed spacing needed to maintain the required therapeutic effect. The reduced RBE with distance results in less complex damage to surrounding normal tissue. Hence, DaRT has the potential to be very conformal, whilst delivering a high RBE to the treatment volume.

Keynote Speaker

Auditorium Lumière - Tuesday, July 09

Image Segmentation II

Auditorium Lumière - Tuesday, July 09

16:30 - 16:45

Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images - Jintao Ren (1) (2), Mathis Rasmussen (1) (2), Jasper Nijkamp (1) (2), Jesper Eriksen (1) (2), Stine Korreman (1) (2)

1 - Aarhus University (Denmark)
2 - Aarhus University Hospital (Denmark)

Deep learning presents novel opportunities for the auto-segmentation of gross tumor volume (GTV) in head and neck cancer (HNC), yet fully automatic methods usually necessitate significant manual refinement. This study investigates the Segment Anything Model (SAM), recognized for requiring minimal human prompting and its zero-shot generalization ability across natural images. We specifically examine MedSAM, a version of SAM fine-tuned with large-scale public medical images. Despite its progress, the integration of multi-modality images (CT, PET, MRI) for effective GTV delineation remains a challenge. Focusing on SAM's application in HNC GTV segmentation, we assess its performance in both zero-shot and fine-tuned scenarios using single (CT-only) and fused multi-modality images. Our study demonstrates that fine-tuning SAM significantly enhances its segmentation accuracy, building upon the already effective zero-shot results achieved with bounding box prompts. These findings open a promising avenue for semi-automatic HNC GTV segmentation.

16:45 - 17:00

Test-time Adaptation for Medical Image Segmentation based on a Joint framework with an Adaptive Wavelet-VNet and a Refine Model (WaVNet-Refine) - Xiaoxue Qian (1), You Zhang (2)

1 - xiaoxue Qian (United States)
2 - You Zhang (United States)

In medical image segmentation, there often exists a domain gap between the training and testing datasets due to variations in imaging hardware and protocols. This leads to domain-shift problems that may substantially degrade model performance. Due to the high cost of manual labeling and privacy protection, it is often difficult to annotate new (target) domain data for model fine-tuning or to re-access source-domain training data to train domain generalization models. In this study, we propose an adaptive wavelet-VNet for test-time adaptation. To improve the model's robustness to domain variations, we introduce a multi-level wavelet transform into the adaptive wavelet-VNet. In addition, we employ a Refine model to correct incomplete segmentation results caused by domain shift. The adaptive wavelet-VNet and the Refine model are unified into a test-time adaptation framework (WaVNet-Refine). We evaluated WaVNet-Refine on a multi-domain liver dataset and showed its advantages.

17:00 - 17:12

MAGIC: Multitask Adversarial Generator of Images and Contours from CBCT for Adaptive Radiotherapy - Adam Szmul (1), Kamalram Thippu Jayaprakash (2), Rajesh Jena (2), Andrew Hoole (3), Catarina Veiga (4), Yipeng Hu (1), Jamie R. McClelland (1)

1 - Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK (United Kingdom)
2 - Department of Oncology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK (United Kingdom)
3 - Department of Medical Physics, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK (United Kingdom)
4 - Department of Medical Physics and Biomedical Engineering, University College London, London, UK (United Kingdom)

Adaptive radiotherapy requires images of planning-CT-like quality and segmentations of structures for treatment plan evaluation and adaptation. We propose an unsupervised framework that can jointly generate synthetic CTs and segment organs at risk, without requiring any segmentations on CBCT during training or inference. The presented results show the feasibility of such an approach, with improvement in CBCT images in terms of L2 histogram distance, mean absolute error and normalized cross-correlation with respect to virtual CTs. The proposed framework generates segmentations that require further editing to be clinically acceptable, but it shows great promise considering the relatively small training dataset used. This proof-of-concept study has potential clinical uses, such as an assistive tool for current manual segmentation, and future development as a plan verification tool or a fully automated re-planning tool.

17:12 - 17:24

Improving Clinical Target Volume Segmentation throughout a Gastric Cancer Radiotherapy Clinical Trial - Phillip Chlap (1) (2), Mark Lee (1) (2), Trevor Leong (3), Matthew Field (1) (2), Jason Dowling (1) (4), Hang Min (1) (4), Julie Chu (3), Jennifer Tan (3), Phillip K Tran (3), Tomas Kron (3), Annette Haworth (5), Martin A Ebert (6) (7), Shalini K Vinod (1) (2), Lois Holloway (1) (2) (5)

1 - University of New South Wales and the Ingham Institute for Applied Medical Research (Australia)
2 - Liverpool and Macarthur Cancer Therapy Centres, South-Western Sydney Local Health District (Australia)
3 - Peter MacCallum Cancer Centre and Sir Peter MacCallum Department of Oncology, University of Melbourne (Australia)
4 - Australian e-Health Research Centre, CSIRO (Australia)
5 - Institute for Medical Physics, School of Physics, The University of Sydney (Australia)
6 - Sir Charles Gairdner Hospital and University of Western Australia (Australia)
7 - School of Physics, Mathematics and Computing, University of Western Australia (Australia)

Radiotherapy clinical trials can benefit from an accurate segmentation model to help ensure consistency of contour definitions across patients treated in the trial; however, only a small dataset is typically available as trial recruitment begins, and deep learning segmentation models generally require large datasets for training. In this work we investigate the effect of retraining a segmentation model as more data become available throughout a trial. Models were trained for Clinical Target Volume (CTV) segmentation in the TOPGEAR (gastric cancer) clinical trial. There were 5 iterations of model training, with the dataset containing only 10 cases for the initial model and increasing in increments to 43 for the last iteration. Each model was tested on a hold-out set of 50 cases, for which the Dice Similarity Coefficient (DSC), surface DSC (sDSC), Mean Distance to Agreement (MDA) and Hausdorff Distance were computed between the model segmentation and the manual contour. The DSC improved from 0.80 to 0.82 between the initial and final models. Improvement was also observed for the other metrics, suggesting that the proposed methodology for retraining a segmentation model would be useful for prospective clinical trials.

17:24 - 17:36

OAR-Weighted Dice Score: A spatially aware, radiosensitivity aware metric for target structure contour quality assessment - Lucas McCullum (1), Kareem Wahid (1), Clifton Fuller (1), Barbara Marquez (1)

1 - The University of Texas M.D. Anderson Cancer Center [Houston] (United States)

The Dice Similarity Coefficient (DSC) is the current de facto standard for determining agreement between a reference segmentation and one generated by manual or auto-contouring approaches. This metric is useful when spatial context is unimportant; however, radiation therapy requires consideration of nearby organs-at-risk (OARs) and their radiosensitivity, which the traditional DSC does not account for. In this work, we introduce the OAR-DSC, which accounts for nearby OARs and their radiosensitivity when computing the DSC. We illustrate its importance through cases where two proposed contours have similar DSC but a lower OAR-DSC when one contour expands closer to the surrounding OARs. This work is important because the OAR-DSC may be used by deep learning auto-contouring algorithms in a radiation-therapy-specific loss function, addressing the current disregard for the impact of such differences on the final radiation dose plan generation, delivery, and risk of patient toxicity.

17:36 - 17:48

Towards Automatic Identification of Missing Tissues using a Geometric-Learning Correspondence Model - Eliana M Vasquez Osorio (1) (2), Edward Henderson (1)

1 - Faculty of Biology, Medicine and Health, University of Manchester, Manchester (United Kingdom)
2 - The Christie NHS Foundation Trust Hospital, Manchester (United Kingdom)

Missing tissue presents a significant challenge for dose mapping, e.g. in the reirradiation setting. We propose a pipeline to identify missing tissue on intra-patient structure meshes using a previously trained geometric-learning correspondence model. For our application, we relied on the prediction discrepancies between forward and backward correspondences of the input meshes, quantified using a correspondence-based Inverse Consistency Error (cICE). We optimised the threshold applied to cICE to identify missing points in a dataset of 35 simulated mandible resections. The identified threshold, 5.5 mm, produced a balanced accuracy of 0.883 on the training data using an ensemble approach. The pipeline produced plausible results for a real case in which ~25% of the mandible was removed after a surgical intervention; it failed, however, on a more extreme case in which ~50% of the mandible was removed. This is the first time geometric-learning modelling has been proposed to identify missing points in corresponding anatomy.
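
The cICE criterion can be computed directly from forward and backward point correspondences; in the sketch below, nearest-neighbour matching stands in for the geometric-learning model and the toy "resection" is an illustrative assumption.

```python
import numpy as np

def cice(points_a, points_b, fwd_idx, bwd_idx):
    """Correspondence-based inverse consistency error per point of mesh A (a sketch).

    fwd_idx[i]: index of the point in mesh B predicted to correspond to point i of A.
    bwd_idx[j]: index of the point in mesh A predicted to correspond to point j of B.
    """
    round_trip = points_a[bwd_idx[fwd_idx]]            # A -> B -> back to A
    return np.linalg.norm(points_a - round_trip, axis=1)

# Toy example: mesh B is mesh A with everything beyond x = 80 mm "resected".
rng = np.random.default_rng(7)
A = rng.random((200, 3)) * 100.0
keep = A[:, 0] < 80.0
B = A[keep] + rng.normal(0.0, 0.3, A[keep].shape)

# Nearest-neighbour stand-ins for the learned forward/backward correspondences.
fwd = np.array([np.argmin(np.linalg.norm(B - p, axis=1)) for p in A])
bwd = np.array([np.argmin(np.linalg.norm(A - q, axis=1)) for q in B])

flagged = cice(A, B, fwd, bwd) > 5.5                   # threshold reported above (mm)
print(f"flagged {flagged.sum()} of {len(A)} points as potentially missing")
```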

17:48 - 18:00

Fully Automatic Flap Segmentation through Deep Learning Methods to Refine Head and Neck Cancer Postoperative Radiotherapy, a GORTEC study - Juliette Thariat (1) (2), Zacharia Mesbah, Youssef Chahir, Abir Fathallah, Vianney Bastit, Ingrid Breuskin, Arnaud Beddok, Alice Blache, Jean Bourhis, Mathieu Hatt, Romain Modzelewski

1 - Centre Régional de Lutte contre le Cancer François Baclesse [Caen] (3 avenue général Harris, 14000 Caen France)
2 - Laboratoire de physique corpusculaire de Caen (ENSICAEN - 6, boulevard du Marechal Juin 14050 Caen Cedex France)

Postoperative radiotherapy (poRT) may negatively impact flap outcomes in head and neck cancer (HNC) patients. This study sought to develop a fully automatic method for flap segmentation to facilitate future evaluation of dose-volume effects on HNC flaps. The flaps of 148 HNC patients were delineated by two experts to generate ground-truth segmentation masks. The nnUNet and MedSAM frameworks were used to train flap segmentation models. The Dice Similarity Coefficient (DSC), Intersection over Union (IoU) and Hausdorff Distance metrics were used to evaluate network performance. nnUNet outperformed MedSAM, achieving a median DSC of 86.9% on the test set for soft-tissue flap segmentation. Features associated with poorer automatic segmentation performance were pedicled flaps, moderate-to-severe artifacts (present in 42% of patient poRT CTs), and low-volume centers.

Radiobiology and Microscopic Disease

Salle St Clair 3 - Tuesday, July 09

16:30 - 16:45

A rat model of lymphocyte recirculation and the effect of irradiation - Chris Beekman (1), Natalia Carrasco-Rojas (2), Julia Withrow (2), Wesley E. Bolch (2), Harald Paganetti (1)

1 - Department Radiation Oncology, Massachusetts General Hospital / Harvard Medical School (United States)
2 - Department of Biomedical Engineering, University of Florida (United States)

Purpose: To create a 4-dimensional (4D) rat model of lymphocyte recirculation between the blood and other lymphatic compartments to allow dynamic dose accumulation and effect assessment during irradiation. Methods: A graph representation of the lymphatic compartments was created, including (1) major organs, each divided into a blood and a lymphatic part, and (2) the lymphatic network, including lymph nodes (LNs) and Peyer's patches. This graph was converted into a physiologically based pharmacokinetic (PBPK) model, implemented both as a system of differential equations and as a stochastic model. By overlaying the lymphocyte migration kinetics with a 3D model of rat anatomy, dose could be dynamically accumulated under different irradiation scenarios. Results & Conclusions: Irradiation of blood compartments contributes negligibly to the total dose to the recirculating lymphocyte pool. Radiation-induced depletion of viable lymphocytes in the blood is caused by irradiation of secondary lymphatic organs and the subsequent redistribution of lymphocytes among lymphatic compartments.
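
A compartmental recirculation model of this kind reduces to a linear system of ODEs that can be integrated alongside per-compartment dose rates; the three compartments, transfer rates and dose rates below are illustrative assumptions (the rat model described above has many more compartments plus a stochastic counterpart).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Compartments: blood, spleen (lymphatic), lymph nodes. Transfer-rate matrix in 1/h;
# columns sum to zero so the total lymphocyte count is conserved (illustrative values).
k = np.array([
    [-0.8,  0.3,  0.2],   # d(blood)/dt
    [ 0.5, -0.3,  0.0],   # d(spleen)/dt
    [ 0.3,  0.0, -0.2],   # d(nodes)/dt
])

def rhs(t, n):
    return k @ n

n0 = np.array([1.0, 2.0, 3.0])                    # initial lymphocyte fractions per compartment
sol = solve_ivp(rhs, (0.0, 24.0), n0, t_eval=np.linspace(0.0, 24.0, 97))

# Dose accumulation sketch: compartment dose rates (Gy/h, illustrative) during a 0.5 h fraction.
dose_rate = np.array([0.05, 2.0, 1.5])
in_beam = sol.t <= 0.5
delivered = (sol.y[:, in_beam].T * dose_rate).sum(axis=1)      # population-weighted dose rate
mean_pool_dose = np.trapz(delivered, sol.t[in_beam]) / n0.sum()
print(sol.y[:, -1], mean_pool_dose)
```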

16:45 - 17:00

An AI-based approach for modeling the synergy between radiotherapy and immunotherapy - Hao Peng (1), Casey Moore, Yuanyuan Zhang, Debabrata Saha, Steve Jiang, Robert Timmerman

1 - Departments of Radiation Oncology, University of Texas Southwestern Medical Center (United States)

PULSAR (Personalized, Ultra-fractionated Stereotactic Adaptive Radiotherapy) is designed to administer tumoricidal doses in a pulsed mode with extended intervals, spanning weeks or months. This approach leverages longer intervals to adapt the treatment plan based on tumor changes and enhance immune-modulated effects. In this investigation, we seek to elucidate the potential synergy between combined PULSAR and PD-L1 blockade immunotherapy using experimental data from a Lewis Lung Carcinoma (LLC) syngeneic murine cancer model. Employing a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) model, we simulated the treatment response by treating irradiation and anti-PD-L1 as external stimuli occurring in a temporal sequence. Our findings demonstrate that: 1) The model can simulate tumor growth by integrating various parameters such as timing and dose, and 2) The model provides mechanistic interpretations of a “causal relationship” in combined treatment, offering a completely novel perspective. The model can be utilized for in silico modeling, facilitating exploration of innovative treatment combinations to optimize therapeutic outcomes. Advanced modeling techniques, coupled with additional efforts in biomarker identification, may deepen our understanding of the biological mechanisms underlying the combined treatment.

17:00 - 17:12

Integration of white matter tracts in stereotactic brain radiotherapy treatment planning with a fully automated atlas-based pipeline - Sophie Bockel (1), Mohammed El Aichi (1), Roger Sun (1), Nathan Benzazon (1), Alexandre Carré (1), François De Kermenguy (1), Eric Deutsch (1), Charlotte Robert (1)

1 - Radiothérapie Moléculaire et Innovation Thérapeutique (France)

This study introduces a Python-scripted pipeline for integrating white matter tractography atlases into stereotactic brain radiotherapy (SBRT) treatment planning. The method utilizes a probabilistic atlas of 52 white matter fibers, offering an accessible and automated approach to visualize and analyze white matter tracts in individual patients undergoing SBRT. The pipeline involves image preprocessing, affine followed by non-linear registration to a common space, and dose map transformation with different interpolation methods. The results demonstrate accurate registration and suggest that B-spline interpolation provides the most consistent dose distributions. The study aims to derive dose constraints for white matter tracts, offering a potential tool for guiding therapeutic decisions in SBRT. The pipeline enhances the accessibility of white matter structural connectivity in radiotherapy planning.

17:12 - 17:24

Comparison of Geant4-DNA and RITRACKS/RITCARD: microdosimetry, nanodosimetry and DNA break predictions - Victor Levrague (1), Rachel Delorme (1), Mathieu Roccia (1), Ngoc Hoang Tran (2), Sébastien Incerti (2), Michaël Beuve (3), Etienne Testa (3), Ianik Plante (4), Shirin Rahmanian (5), Floriane Poignant (5)

1 - Laboratoire de Physique Subatomique et de Cosmologie (France)
2 - Laboratoire de Physique des Deux Infinis Bordeaux (France)
3 - Institut de Physique des 2 Infinis de Lyon (France)
4 - KBR, Houston, TX 77058, USA (United States)
5 - Analytical Mechanics Associates Inc., Hampton, VA 23666, USA (United States)

This work aims at investigating the impact of DNA geometry, compaction and calculation chain on DNA break and chromosome aberration predictions for high charge and energy (HZE) ions, using the Monte Carlo codes Geant4-DNA, RITRACKS and RITCARD. To ensure consistency of ion transport of both codes, we first compared microdosimetry and nanodosimetry spectra for different ions of interest in hadrontherapy and space research. The Rudd model was used for the transport of ions in both models. Developments were made in Geant4 (v11.2) to include periodic boundary conditions (PBC) to account for electron equilibrium in small targets. Excellent agreements were found for both microdosimetric and nanodosimetric spectra for all ion types, with and without PBC. Some discrepancies remain for low-energy deposition events, likely due to differences in electron transport models. The latest results obtained using the newly available Geant4 example “dsbandrepair” will be presented comparing DNA break predictions obtained with RITCARD.

17:24 - 17:36

Whole-body blood circulation modeling and dose to circulating blood in intensity modulated radiation therapy - Bingqi Guo (1), Ping Xia (1)

1 - Cleveland Clinic (United States)

Dose to circulating blood is gaining interest in radiation oncology due to the potential impact of blood dose on the response of the immune system to radiation therapy. However, modeling the interplay of blood circulation with the dynamic delivery of intensity-modulated radiation therapy is challenging. In this study, a novel technique to model whole-body blood circulation was introduced. The model uses patient-specific anatomy from medical images and preserves realistic blood dynamics, distribution, and perfusion. To demonstrate the use of dynamic blood modeling in radiation oncology, the model was used to evaluate the dose to circulating blood in total body irradiation treated with multi-isocentric volumetric modulated arc therapy, one of the most complex techniques in radiation oncology.

17:36 - 17:48

Using fluid flow simulations to model microscopic tumor invasion in muscle tissue for radiotherapy treatment - Gregory Buti (1), Ali Ajdari (1), Yen-Lin Chen (1), Christopher Bridge (1), Gregory Sharp (1), Thomas Bortfeld (1)

1 - Massachusetts General Hospital and Harvard Medical School (United States)

A framework is developed to include the preferred path of microscopic tumor invasion along muscle fibers in radiation target volumes. Following the novel idea that the directionality of muscle fibers inside the muscle can be represented by a flow velocity field, fluid flow simulations are performed with a computational fluid dynamics solver using the finite-elements method. The flow velocity fields are then embedded in a shortest path algorithm to model the microscopic infiltration around the gross tumor. The model assumes that the preferred path of microscopic tumor invasion is along the dominant muscle fiber direction estimated from the velocity field. The method was tested on a sacral chordoma tumor, a malignant tumor originating in the pelvis with suspected microscopic invasion of the gluteal muscles.

17:48 - 18:00

Characterization of Acute Radiation-Induced Vascular Changes in Brain Tumors Using Wavelet-Based Analysis of Dynamic-Contrast-Enhanced MRI Information - Hassan Bagher-Ebadian (1) (2) (3), Stephen L. Brown (1) (2), Tavarekere N. Nagaraja (1) (2), Prabhu Acharya (3), James R. Ewing (1) (2) (3), Indrin J. Chetty (3) (4), Benjamin Movsas (1), Kundan Thind (1) (2) (5)

1 - Henry Ford Hospital (United States)
2 - Michigan State University [East Lansing] (United States)
3 - Oakland University (United States)
4 - Cedars-Sinai Medical Center (United States)
5 - Wayne State University School of Medicine (United States)

Purpose: This study employs a DCE-based pathophysiological nested model-selection (NMS) technique, together with wavelet-based spatiotemporal modeling of DCE-MRI information, to characterize RT-induced regions in an animal model of cerebral U-251N tumors. Methods: Twenty-three immune-compromised RNU rats were implanted with human U251N cancer cells. Twenty-eight days after tumor implantation, two DCE-MRI studies were performed 24 h apart. A single 20 Gy stereotactic irradiation was delivered before the second MRI. DCE-MRI analysis was performed using NMS to distinguish different brain regions. Time-traces of DCE-MRI information for the different models were analyzed using a wavelet-based coherence analysis method to characterize and rank their RT-induced effects. Results: Compared to the peritumoral tissues (r=0.622, p

Teaching Speakers

Auditorium Lumière - Wednesday, July 10

Salle St Clair 3 - Wednesday, July 10

Keynote Speaker

Auditorium Lumière - Wednesday, July 10

Open Source Software

Auditorium Lumière - Wednesday, July 10

09:45 - 10:00

OpenTPS: an open-source treatment planning system to foster research and innovation - Ana María Barragan Montero (1), Damien Dasnoy (2), Sophie Wuyckens (1), Wei Zhao (2), Eliot Peeters (1), Romain Schyns (1), Margerie Huet Dastarac (1), Estelle Löyen (2), Gauthier Rotsart De Hertaing (2), Guillaume Janssens (3), Valentin Hamaide (3), Sylvain Deffet (3), Kevin Souris (3), Edmond Sterpin (1) (4), Benoit Macq (2), John Lee (1) (2)

1 - Center of Molecular Imaging, Radiotherapy and Oncology (MIRO), Université Catholique de Louvain, Brussels, Belgium (Belgium)
2 - Institute of Information and Communication Technologies, Electronics and Applied Mathematics (Belgium)
3 - Ion Beams Applications (Belgium)
4 - Department of Radiation Oncology, KULeuven (Belgium)

Treatment planning systems (TPS) are an essential component for simulating and optimizing a radiation therapy treatment before administering it to the patient. However, TPS commercialized by private companies are often black boxes when it comes to the implementation of most of their features (e.g. image registration, dose calculation and optimization algorithms). This hampers and delays researchers who work with these TPS to implement and test new ideas. We have developed OpenTPS (opentps.org), a new open-source software platform, to foster high-quality research involving treatment planning for external beam radiotherapy and, in particular, for proton therapy. It is designed to be a flexible and user-friendly platform for medical physicists, radiation oncologists, and other members of the radiation therapy and research community. It serves to create customized treatment plans for educational and research purposes, particularly for emerging delivery and optimization techniques such as FLASH, arc proton therapy, or beamlet-free optimization, among others.

10:00 - 10:15

GATE 10: A new versatile Python-driven Geant4 application for medical physics - Nils Krah (1), Lydia Maigne (2), Martina Favoretto (3), Andreas Resch (3), Carlotta Triglia (4), Guneet Mummaneni (4), Emilie Roncali (4), Maxime Jacquet (1), Thomas Baudier (1), Alexis Pereda (2), Hermann Fuchs (5), Aurélien Coussat (6), David Sarrut (1)

1 - CREATIS, CNRS, Centre Léon Bérard, INSA Lyon, France (France)
2 - LPC, CNRS, University of Clermont Auvergne, France (France)
3 - MedAustron Ion Therapy Centre, Wiener Neustadt, Austria (Austria)
4 - University of California at Davis, USA (United States)
5 - Medical, University of Vienna, Vienna, Austria (Austria)
6 - Jagiellonian University in Kraków, Poland (Poland)

In this contribution, we present GATE 10, the new version of the long-standing Geant4 application GATE, dedicated to medical physics, e.g. radiation imaging and radiation therapy systems. It features a Python-based user interface and integration of artificial intelligence methods. Moreover, being a Python package, GATE 10 can be easily integrated into external software, e.g. as a Monte Carlo dose engine. The user interface is easy to use, yet provides access to the rich functionality of Geant4 and preconfigured components of GATE (e.g. actors, sources). GATE 10 is currently under rapid development and a first non-beta release is planned for May 2024. The software is open-source and available via the GitHub platform. Installation is performed via the Python package manager (pip). In this contribution, we present an overview of GATE 10 and a selection of new features and improvements.

10:15 - 10:30

PortPy: An Open-Source Initiative for Radiotherapy Treatment Planning Optimization Including Benchmark Data and Algorithms and Integration with Commercial TPS - Gourav Jhanwar (1), Mojtaba Tefagh (2), Qijie Huang (1), Vicki Trier Taasti (3), Seppo Tuomaala (4), Saad Nadeem (1), Masoud Zarepisheh (1)

1 - Memorial Sloan Kettering Cancer Center (United States)
2 - Sharif University of Technology [Tehran] (Iran)
3 - Aarhus university hospital (Denmark)
4 - Varian Medical System, Inc. (United States)

We have developed PortPy (Planning and Optimization for Radiation Therapy in Python), an innovative open-source software package designed to expedite the research and clinical translation of novel treatment planning techniques in radiation therapy. PortPy addresses the limitations of existing packages by offering several key features: 1) interoperability with commercial treatment planning systems such as Eclipse, which allows for the objective evaluation of new techniques against current best clinical practice; 2) inclusion of a benchmark dataset of 50 lung patients, complete with clinical plans and all the data necessary for treatment planning optimization, extracted from Eclipse using its API to facilitate realistic testing scenarios; 3) advanced benchmark algorithms capable of identifying "globally" optimal solutions, using computationally intensive mixed-integer programming, for various non-convex optimization problems encountered in treatment planning. PortPy enables the design, testing, and clinical validation of both classical optimization-based and cutting-edge AI-based treatment planning techniques.

10:30 - 10:45

Pylinac: An open-source quality assurance image analysis engine - James Kerns (1)

1 - Radformation (United States)

Pylinac is an open-source image analysis library written in Python that can analyze routine quality assurance images and datasets for medical physicists in radiotherapy. The library is concise, and analyses are usually implemented in a few lines of code. The engine is used in a commercial application and receives regular updates. Several modules are available for complete analysis of starshots, CatPhan phantoms, and planar (2D) phantoms from numerous commercial manufacturers.
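
As an illustration of the "few lines of code" claim, a typical starshot analysis looks roughly like the sketch below; the file names are placeholders and the exact arguments should be checked against the current pylinac documentation.

```python
from pylinac import Starshot

star = Starshot("my_starshot_image.tif")    # placeholder path to an EPID/film starshot image
star.analyze(radius=0.8, tolerance=1.0)     # search radius (fraction) and wobble tolerance (mm)
print(star.results())                       # text summary, including the wobble diameter
star.publish_pdf("starshot_report.pdf")     # optional PDF report for QA records
```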

Dose Calculation

Auditorium Lumière - Wednesday, July 10

11:00 - 11:15

A long short-term memory network-based proton dose calculation method in a magnetic field - Fan Xiao (1), Domagoj Radonic (1) (2), Niklas Wahl (3) (4), Luke Voss (3) (4) (5), Ahmad Neishabouri (4) (6), Nikolaos Delopoulos (1), Stefanie Corradini (1), Claus Belka (1) (7) (8), George Dedes (2), Christopher Kurz (1), Guillaume Landry (1)

1 - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany (Germany)
2 - Department of Medical Physics, LMU Munich, Munich, Germany (Germany)
3 - Department of Medical Physics in Radiation Oncology, German Cancer Research Center – DKFZ, Heidelberg, Germany (Germany)
4 - National Center for Radiation Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany (Germany)
5 - Ruprecht Karl University of Heidelberg, Institute of Computer Science, Heidelberg, Germany (Germany)
6 - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany (Germany)
7 - German Cancer Consortium (DKTK), partner site Munich, a partnership between DKFZ and LMU University Hospital, Munich, Germany (Germany)
8 - Bavarian Cancer Research Center (BZKF), Munich, Germany (Germany)

Objective: To present a long short-term memory (LSTM) network-based dose calculation method for magnetic resonance (MR)-guided proton therapy. Approach: 30 deformed planning computed tomography (CT) images of prostate cancer patients were collected for Monte Carlo (MC) dose calculation in a 1.5 T magnetic field. Proton pencil beams (PBs) at 150, 175, and 200 MeV were simulated (6480 PBs per energy). A 3D relative stopping power (RSP) cuboid covering the extent of the PB dose was extracted and fed into the LSTM model, which yields a 3D predicted PB dose. Training and validation involved 25 patients; 5 patients were reserved for testing. Results: Test results over all PBs showed a mean gamma passing rate (2%/2 mm) of 99.89%, with an average center-of-mass discrepancy of 0.36 mm between predicted and simulated trajectories. The model inference time was 10 ms per beam. Conclusion: LSTM models for proton dose calculation in a magnetic field were developed and showed promising accuracy and efficiency for prostate cancer patients.

11:15 - 11:30

Deep learning-based dose prediction for INTRABEAM - Mojahed Abushawish (1) (2), Arthur V Galapon (1) (3), Joaquin L. Herraiz (1) (4), José M Udías (1) (4), Paula Ibanez (1) (4)

1 - Nuclear Physics Group and IPARCOS, University Complutense of Madrid (Spain)
2 - CNRS, Délégation Rhône Auvergne (France)
3 - University Medical Center Groningen, Department of Radiation Oncology, Groningen (Netherlands)
4 - Clinical hospital San Carlos, Health research institute (Spain)

Intraoperative radiation therapy requires fast and accurate treatment planning, as these procedures involve tumor-removal surgery immediately preceding irradiation, which may alter the original treatment plan. While Monte Carlo calculations are considered the gold standard, they require significant computational resources and time. Deep learning algorithms may provide a faster alternative. In this study, we trained and tested two deep learning models, U-Net and MultiResUnet, to predict 3D dose distributions for INTRABEAM spherical and needle applicators. A cohort of 800 paired CTs and dose distributions (720 training, 80 validation) of partial breast irradiations with a spherical applicator and 41 (36 training, 5 validation) of kyphoplasty cases using the needle applicator were used separately for training and validation. Doses were calculated using a fast MC algorithm, which served as the ground truth. Additional tissue material information was incorporated as a segmentation mask in an extra input channel, and an image augmentation technique was introduced that composes new images from parts of other images in the dataset. Gamma analysis was performed to compare the predicted dose to the simulated dose. In general, the predicted dose conformed well to the reference dose. Consequently, deep learning models, and especially the MultiResUnet model, show great potential for predicting dose distributions in IORT.
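
For reference, a brute-force global gamma evaluation on two matching dose grids can be sketched as follows; the wrap-around np.roll shifts and voxel-level search are simplifications relative to a production gamma implementation.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.03, dta_mm=3.0, cutoff=0.1):
    """Global gamma pass rate for two dose grids of identical shape (a simplified sketch)."""
    norm = ref.max()
    gamma2 = np.full(ref.shape, np.inf)
    radii = [int(np.ceil(dta_mm / s)) for s in spacing]
    for dz in range(-radii[0], radii[0] + 1):
        for dy in range(-radii[1], radii[1] + 1):
            for dx in range(-radii[2], radii[2] + 1):
                dist2 = (dz * spacing[0])**2 + (dy * spacing[1])**2 + (dx * spacing[2])**2
                if dist2 > dta_mm**2:
                    continue
                shifted = np.roll(ev, (dz, dy, dx), axis=(0, 1, 2))   # wraps at edges (simplification)
                dose2 = ((shifted - ref) / (dd * norm))**2
                gamma2 = np.minimum(gamma2, dose2 + dist2 / dta_mm**2)
    mask = ref > cutoff * norm                                        # ignore the low-dose region
    return float((np.sqrt(gamma2[mask]) <= 1.0).mean())

# Sanity check: identical grids should pass at 100%.
rng = np.random.default_rng(8)
d = rng.random((30, 30, 30)) * 2.0
print(gamma_pass_rate(d, d.copy(), spacing=(2.0, 2.0, 2.0)))
```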

11:30 - 11:42

GPU-based Monte Carlo method for mixed beam radiation therapy electron beamlet calculation - Veng Jean Heng (1), Marc-André Renaud (2), Jan Seuntjens (3) (4) (5) (1)

1 - Medical Physics Unit, McGill University (Canada)
2 - Gray Oncology Solutions (Canada)
3 - Princess Margaret Cancer Centre (Canada)
4 - Department of Radiation Oncology, University of Toronto (Canada)
5 - Department of Medical Biophysics, University of Toronto (Canada)

Calculation of robust electron beamlets for Mixed Beam Radiation Therapy (MBRT) is a computationally expensive process due to the need for Monte Carlo-based approaches. In this work, we use the GPU-based Pre-calculated Monte Carlo (PMC) algorithm to speed up electron dose calculation. PMC relies on pre-calculated electron tracks, generated ahead of time using the full Monte Carlo EGSnrc code. At run time, the tracks are used as look-ups to raytrace electrons. A photon transport method is added to the PMC code to properly handle dose contributions from bremsstrahlung and contamination photons. Excellent agreement was found for electron beamlets with energies ranging from 6 to 20 MeV, with an average latent uncertainty of 1.3%. Compared to single-core EGSnrc calculations, a 12 MeV electron beamlet was calculated over 1800 times faster with PMC on an NVIDIA Titan RTX GPU.
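
The sketch below illustrates the pre-calculated track idea in a highly simplified form: electron step positions and energy deposits generated once are re-used at run time by translating them to a beamlet entry point and binning the deposits into a dose grid. The array layout, the omitted rotation and density scaling, and all numbers are illustrative assumptions, not the PMC implementation.

```python
# Highly simplified sketch of a pre-calculated Monte Carlo (PMC) lookup:
# electron tracks (step positions + energy deposits) generated once by a full MC
# code are re-used by translating them to each beamlet entry point and binning
# their deposits into the dose grid. Rotation to arbitrary beam directions and
# density scaling in heterogeneous media are omitted here.
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_steps = 1000, 50
track_pos = np.cumsum(rng.normal(0, 0.1, (n_tracks, n_steps, 3)), axis=1)  # cm, scattered steps
track_pos[:, :, 2] += np.linspace(0, 5, n_steps)                           # net forward motion in z
track_edep = rng.exponential(0.02, (n_tracks, n_steps))                    # MeV deposited per step

def beamlet_dose(entry_point, grid_shape=(50, 50, 50), voxel_cm=0.2):
    """Bin the pre-calculated track deposits into a dose grid for one beamlet."""
    dose = np.zeros(grid_shape)
    pos = track_pos + np.asarray(entry_point)               # translate tracks to the entry point
    idx = np.floor(pos / voxel_cm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=-1)
    np.add.at(dose, tuple(idx[inside].T), track_edep[inside])
    return dose

dose = beamlet_dose(entry_point=(5.0, 5.0, 0.0))            # toy beamlet entering at (5, 5, 0) cm
```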

11:42 - 11:54

Revisiting Over Three Years of Clinical Application of Monte Carlo Based Patient Quality Assurance at the Maastro Proton Therapy Center - Ilaria Rinaldi (1), Jan Gajewski (2), Angelo Schiavi (3), Nils Krah (4), Antoni Rucinski (2), Vincenzo Patera (3), Sebastiaan Nijsten (1)

1 - Maastro (Netherlands)
2 - Institute of Nuclear Physics PAN (Poland)
3 - University of Rome "Sapienza" (Italy)
4 - CREATIS, CNRS, UMR5220, Centre Léon Bérard, INSA Lyon (France)

In our proton facility, the Maastro Proton Therapy Center in Maastricht, we have entirely supplanted traditional measurements with machine logfile-based patient-specific quality assurance (PSQA). This approach is underpinned by the independent GPU-accelerated Monte Carlo code Fred. This study presents a comprehensive report on the clinical application of Monte Carlo-based PSQA over a period exceeding three years at our proton therapy center.

11:54 - 12:06

A stochastic model to estimate dose to circulating lymphocytes during brain radiotherapy - François De Kermenguy (1), Nathan Benzazon (1), Pauline Maury (1), Rémi Vauclin (2), Meissane M'hamdi (1), Vjona Cifliku (1), Elaine Limkin (1) (3), Ibrahima Diallo (1), Daphné Morel (1), Candice Milewski (3), Céline Clémenson (1), Michele Mondini (1), Eric Deutsch (1) (3), Charlotte Robert (1) (3)

1 - Radiothérapie Moléculaire et Innovation Thérapeutique (France)
2 - TheraPanacea [Paris] (France)
3 - Institut Gustave Roussy (France)

Severe radiation-induced lymphopenia occurs in 40% of patients treated for primary brain tumors and is an independent risk factor for poor survival outcomes. We developed an in-silico model that estimates the radiation doses received by lymphocytes during brain irradiation. The simulation consists of two interconnected compartmental models describing the slow recirculation of lymphocytes between lymphoid organs and the bloodstream. We used dosimetry data from 33 patients treated for glioblastoma to compare three scenarios: H1) lymphocyte circulation only in the bloodstream; H2) lymphocyte recirculation between lymphoid organs; H3) lymphocyte recirculation between lymphoid organs combined with deep learning-computed out-of-field dose to head-and-neck lymphoid structures. In the H3 scenario, almost all lymphocytes were irradiated after treatment, at a mean dose of 265.6 ± 48.5 mGy. Our results demonstrate the need to clarify the indirect effects of irradiation on lymphopenia, in order to potentiate radio-immunotherapy combinations and the abscopal effect.
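
As a rough illustration of the compartmental idea, the toy simulation below tracks individual lymphocytes exchanging between the blood pool and lymphoid organs and accumulates a per-cell dose over fractions. All rates, compartment doses, and population sizes are invented for the example and are not the parameters of the published model.

```python
# Toy two-compartment lymphocyte recirculation model (invented parameters):
# cells exchange between blood and lymphoid organs, and the dose a cell receives
# in each fraction depends on the compartment it occupies at delivery time.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 20_000
in_blood = rng.random(n_cells) < 0.02      # assume ~2% of lymphocytes are in the blood pool
dose_gy = np.zeros(n_cells)

p_leave, p_enter = 0.3, 0.006              # per-10-minute exchange probabilities (assumed)
dose_blood, dose_organs = 0.010, 0.002     # Gy per fraction in blood / in lymphoid organs (assumed)

for _ in range(30):                        # 30 daily fractions
    for _ in range(144):                   # recirculate in 10-minute steps over 24 h
        leave = in_blood & (rng.random(n_cells) < p_leave)
        enter = ~in_blood & (rng.random(n_cells) < p_enter)
        in_blood = (in_blood & ~leave) | enter
    dose_gy += np.where(in_blood, dose_blood, dose_organs)

print(f"mean lymphocyte dose: {dose_gy.mean() * 1000:.0f} mGy, "
      f"irradiated fraction: {(dose_gy > 0).mean():.1%}")
```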

12:06 - 12:18

Deep Learning dose engine for electron FLASH radiotherapy - Lorenzo Arsini (1), Barbara Caccia (2), Andrea Ciardiello (1), Angelica De Gregorio (1), Gaia Franciosini (1), Stefano Giagu (1), Jack Humphreys (3), Annalisa Muscato (4), Angelo Schiavi (5), Christopher White (3), Markus Hagenbuchner (3), Susanna Guatelli (3), Carlo Mancini Terracciano (1)

1 - Dipartimento di Fisica [Roma La Sapienza] (Italy)
2 - Istituto Superiore di Sanità = National Institute of Health (Italy)
3 - University of Wollongong [Australia] (Australia)
4 - Scuola post-laurea in Fisica Medica [Roma La Sapienza] (Italy)
5 - Università degli Studi di Roma "La Sapienza" = Sapienza University [Rome] (Italy)

The aim of our work is to develop a Deep Learning (DL) algorithm capable of efficiently generating dose distributions for particle beams within heterogeneous media, such as the human body. This model computes dose as a function of the beam parameters, such as entry point, energy, and fluence, with an accuracy comparable to Monte Carlo algorithms but at an exceptionally rapid pace (0.01 seconds per beam on a single GPU). The resulting dose maps hold great potential for integration into the cost function of forthcoming Radiotherapy (RT) auto-planning algorithms, particularly those based on differentiable optimization. The core of our DL architecture is a U-Net based on Graph Neural Networks, with a purpose-developed pooling technique. Trained on Geant4 Monte Carlo simulations using a patient CT scan and beam characteristics as inputs, our model achieves satisfactory performance through rigorous training and testing on diverse beam orientations and energies.

12:18 - 12:30

Skin dose in breast radiation therapy: Monte Carlo calculations from deformed vector fields (DVF)-driven CT images - Nicolas Arbor (1), Laurent Bartolucci (2), Botao Dai (3), Halima Elazhar (2), Pierre Galmiche (3), Delphine Jarnet (2), Michel De Mathelin (3), Philippe Meyer (2), Hyewon Seo (3)

1 - Institut Pluridisciplinaire Hubert Curien (France)
2 - Institut de Cancérologie de Strasbourg Europe (France)
3 - Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie (France)

Skin dose in radiation therapy is a key issue for reducing patient side effects, but dose calculation in this high-gradient region is still a challenge today. To support radiation therapists and medical physicists in their decisions, a computational tool has been developed to systematically recalculate the skin dose for each treatment session, taking into account changes in breast geometry. The tool is based on Monte Carlo skin dose calculations from deformed vector fields (DVF)-driven CT images, using non-ionizing 3D-scanner data. The application of deformation fields to the initial patient CT, as well as the validation of the Monte Carlo skin dose calculations and their comparison with the TPS algorithm, are presented. The tool is currently being evaluated using a dataset of 60 patients from the MorphoBreast3D clinical study.

Treatment Planning

Salle St Clair 3 - Wednesday, July 10

11:00 - 11:15

Peeking under the Hood: Addressing Biomarker-driven Uncertainty in Machine Learning-based Radiotherapy Personalization - Ali Ajdari (1), Zhongxing Liao (2), Thomas Bortfeld (1)

1 - Massachusetts General Hospital [Boston, MA, USA] (United States)
2 - MD Anderson Cancer Center [Houston] (United States)

Radiotherapy (RT) has long been performed using a set of population-wide dose prescriptions that do not reflect individual patients' biological makeup. Although powerful machine learning (ML) models are available for predicting patient-specific outcomes of RT, these models remain disconnected from the realm of dose personalization. Here, we propose a novel strategy for bridging the gap between ML-based outcome modeling and dose-based RT treatment planning. The novelty of our approach lies in directly addressing data uncertainty (from predictive biomarkers) during outcome modeling and optimization. On the modeling side, we train adversarially robust neural networks that are immune to reasonable perturbations of the underlying data. On the computational side, we propose efficient algorithms for embedding the resulting robust models directly within conventional treatment planning for personalized dose adjustment. We perform our simulations on a cohort of non-small cell lung cancer patients, with the endpoint of minimizing radiation-induced cardiac toxicity.
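
One standard way to obtain robustness to bounded input perturbations is adversarial training; the sketch below applies a single FGSM step on synthetic biomarker features before each update. The data, architecture, and perturbation radius are placeholders, and the authors' exact robust-training formulation may differ.

```python
# Illustrative FGSM-style adversarial training on synthetic biomarker features:
# a common way to make an outcome model robust to bounded input perturbations.
# The data, architecture, and epsilon are assumptions, not the authors' setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 12)                          # 12 synthetic biomarker features
y = (x[:, 0] + 0.5 * x[:, 1] > 0).float()         # toy binary toxicity endpoint

model = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
eps = 0.1                                         # allowed perturbation radius

for _ in range(200):
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv).squeeze(), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()      # worst-case point within the eps-ball
    opt.zero_grad()
    loss_fn(model(x_adv).squeeze(), y).backward() # train on the perturbed inputs
    opt.step()
```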

11:15 - 11:30

pyanno4rt: a toolkit for machine learning (N)TCP model-based inverse radiotherapy treatment plan optimization - Tim Ortkamp (1) (2) (3), Patrick Salome (2) (4), Oliver Jäkel (2) (3) (5) (6), Martin Frank (1) (3), Niklas Wahl (2) (5)

1 - Karlsruhe Institute of Technology (KIT), Scientific Computing Center, Karlsruhe, Germany (Germany)
2 - Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany (Germany)
3 - Helmholtz Information and Data Science School for Health (HIDSS4Health), Karlsruhe/Heidelberg, Germany (Germany)
4 - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany (Germany)
5 - National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany (Germany)
6 - Heidelberg Ion Beam Therapy Center (HIT), Heidelberg, Germany (Germany)

Machine learning (ML) models of tumor control probability (TCP) or normal tissue complication probability (NTCP) are on the verge of replacing classical radiotherapy treatment outcome models. In spite of this, solving the inverse problem in intensity-modulated radiotherapy (IMRT) treatment planning still relies on mathematically simple, non-model-based surrogate functions with empirical dose prescription and tolerance parameters, rather than directly optimizing (N)TCP based on such outcome models. In this paper, we present the open-source Python package pyanno4rt, a toolkit which implements a technical framework for ML outcome model-based IMRT optimization. Our results show that pyanno4rt is able to generate ML model-based treatment plans with appealing dose distributions, while allowing for improved outcome predictions.
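
The sketch below illustrates the underlying idea of (N)TCP model-based inverse planning in a generic way: a differentiable logistic NTCP term on the mean OAR dose is added to a quadratic target objective and the fluence weights are optimized by gradient descent. This is a toy formulation with invented numbers and is not the pyanno4rt API.

```python
# Generic sketch of (N)TCP model-based inverse planning (not the pyanno4rt API):
# a differentiable logistic NTCP model on mean OAR dose is added to the objective
# and the fluence weights are optimized by gradient descent. All numbers are toy values.
import torch

torch.manual_seed(0)
n_beamlets, n_target, n_oar = 50, 200, 120
D_target = torch.rand(n_target, n_beamlets)      # toy dose-influence matrices (Gy per unit fluence)
D_oar = 0.3 * torch.rand(n_oar, n_beamlets)

w = torch.ones(n_beamlets, requires_grad=True)   # fluence weights to optimize
opt = torch.optim.Adam([w], lr=0.05)
d_prescribed, td50, gamma50 = 60.0, 30.0, 2.0    # assumed prescription and NTCP parameters

for _ in range(500):
    fluence = torch.relu(w)                      # enforce non-negative fluence
    dose_t = D_target @ fluence
    dose_o = D_oar @ fluence
    ntcp = torch.sigmoid(4 * gamma50 * (dose_o.mean() - td50) / td50)  # logistic NTCP on mean OAR dose
    objective = ((dose_t - d_prescribed) ** 2).mean() + 100.0 * ntcp
    opt.zero_grad()
    objective.backward()
    opt.step()

print(f"mean target dose {float((D_target @ torch.relu(w)).mean()):.1f} Gy, NTCP {float(ntcp):.3f}")
```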

11:30 - 11:42

Intelligent Automatic Treatment Planning for Radiation Therapy - Yin Gao (1), Yang Kyun Park (1), Daniel Yang (1), Xun Jia (2)

1 - Department of Radiation Oncology, University of Texas Southwestern Medical Center (United States)
2 - Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University (United States)

The current workflow of treatment planning in radiation therapy (RT) heavily relies on human planners, leading to variable plan quality and low efficiency due to complex interactions among physicians, planners, and the treatment planning system (TPS). We propose an Intelligent Automatic Treatment Planning (IATP) framework including two artificial intelligence agents to emulate the roles of human physicians and planners. Based on a Virtual Physician Network, constructed with a deep neural network to model physicians' preferences for plan approval in prostate cancer Stereotactic Body Radiation Therapy (SBRT), we developed a Physician Preference-guided Virtual Treatment Planner Network (PgVTPN) via an end-to-end deep reinforcement learning approach. PgVTPN was able to operate the Eclipse TPS autonomously, like a human planner under a physician's guidance, to generate prostate SBRT plans that meet dosimetric requirements and closely align with physicians' clinical intentions. We connected PgVTPN to the Eclipse TPS via ESAPI. Statistical analysis revealed that PgVTPN-generated plans exhibited a significantly higher total score of 252.99 ± 15.57 compared to clinical plans with a score of 237.29 ± 18.72. PgVTPN demonstrated superiority over human planners in key metrics, including target conformality, VRectum[36Gy], DRectum[40%], DUrethra[20%], DBowel[1cc], and DRFemoralHead[max], with statistical significance. The plans were deliverable and successfully passed quality assurance with a gamma passing rate (3%/3mm) of 96.3%. Blind evaluation by experienced physicians and physicists favored PgVTPN-generated plans over the corresponding clinical plans in 90% of cases on average.

11:42 - 11:54

An exploration of the complexities of CTV mapping - Marcel Van Herk (1)

1 - University of Manchester / The Christie NHS Foundation Trust [Manchester, UK] (United Kingdom)

A naïve CTV description could be a population-averaged cell density of microscopic disease. We show, by simulation, that the mode of cell density variation has a huge impact on margin requirements. Our model consists of a spherical GTV surrounded by small nodules at a given distance. Geometrical errors were simulated and TCP calculated for many scenarios with varying margin. The three modes of CTV nodule variation were: 1) varying cell density; 2) varying nodule probability, with all nodules present/absent at once; 3) varying nodule probability with each nodule independent. We report the CTV+PTV margin at 2% TCP loss relative to large margins. Per 0.1-fold density reduction (mode 1), the CTV margin reduces by 1 mm. For mode 2, the CTV margin reduced by 4 mm at p=0.1. For mode 3, the margin is in-between. Since all these modes share the same population-averaged tumour cell density, we demonstrate that this representation is insufficient.
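
The toy simulation below mimics the three modes of variation for nodules sharing the same population-averaged cell density and compares TCP versus margin between them; the geometry, error model, clonogen numbers, and the simple 1D coverage test are all invented for illustration and are not the authors' simulation setup.

```python
# Toy comparison of the three modes of microscopic-disease variation around a GTV.
# All geometry, probabilities, clonogen numbers and the 1D coverage test are invented;
# the point is only that the three modes share the same mean cell density yet give
# different TCP-vs-margin behaviour.
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_nodules = 20_000, 12
nodule_dist_mm = 5.0          # nodules sit 5 mm outside the GTV
sigma_mm = 3.0                # geometric (setup) error
p = 0.1                       # nodule probability / density scaling factor
cells_full = 1e4              # clonogens in a full-density nodule
kill_per_cell = 1e-3          # toy cell-kill factor for missed clonogens

def tcp(margin_mm, mode):
    shift = np.abs(rng.normal(0, sigma_mm, (n_patients, n_nodules)))
    missed = nodule_dist_mm + shift > margin_mm          # nodule pushed outside the margin
    if mode == 1:                                        # uniformly reduced density
        cells = p * cells_full * np.ones((n_patients, n_nodules))
    elif mode == 2:                                      # all nodules present or absent together
        cells = cells_full * (rng.random((n_patients, 1)) < p)
    else:                                                # each nodule independently present
        cells = cells_full * (rng.random((n_patients, n_nodules)) < p)
    surviving = (cells * missed).sum(axis=1)
    return np.exp(-kill_per_cell * surviving).mean()     # Poisson TCP, averaged over patients

for margin in (8, 10, 12, 14, 16):
    print(margin, "mm:", [round(tcp(margin, mode), 3) for mode in (1, 2, 3)])
```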

11:54 - 12:06

Biologically-guided multi-criteria optimization and exploration to generate clinical plans for functionality-sparing radiation therapy trials - Dimitre Hristov (1), Matthew Kim (1), Jen-Yeu Kim (1), Elizabeth Kidd (1)

1 - Department of Radiation Oncology, Stanford University (United States)

Functional imaging of heterogeneous organ function can advance radiotherapy treatments aiming at preserving organ functionality. Designing such treatments with clinical planning systems is a challenge, since these systems cannot drive an optimization in accordance with an image-based functionality metric. Using tools available in a commercial treatment planning system, we overcome this challenge with a novel optimization approach that combines organ sub-region segmentation and multicriteria optimization to generate a database of plans with competing sub-region sparing. Simulated annealing exploration of the plan database then selects a plan that maximizes a user-defined metric of functionality calculated from paired image and dose values at voxels within the organ with heterogeneous function. The clinical utility of our approach has been demonstrated in a pilot study of personalized radiotherapy (NCT04514692) to spare bone marrow depicted by GCSF-stimulated FDG-PET imaging, where optimized and subsequently treated plans have provided substantial gains in 70% of the patients.
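
A minimal sketch of the exploration step is given below: simulated annealing navigates convex combinations of the database plans to minimize a voxel-wise functionality-weighted dose metric. The plan database, functionality map, proposal scheme, and cooling schedule are synthetic placeholders, not the commercial TPS workflow described in the abstract.

```python
# Minimal simulated-annealing sketch: navigate convex combinations of a database of
# Pareto-optimal plans to minimize a functionality-weighted dose metric. The plan
# doses, functionality map, and annealing schedule are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_plans, n_voxels = 40, 5000
plan_doses = rng.uniform(5, 40, (n_plans, n_voxels))   # voxel doses of each database plan (Gy)
functionality = rng.random(n_voxels)                   # image-derived functionality weight per voxel

def metric(weights):
    dose = weights @ plan_doses                        # navigated plan = convex combination
    return float((functionality * dose).mean())        # lower = better functional sparing

weights = np.full(n_plans, 1.0 / n_plans)
current, temperature = metric(weights), 1.0
for _ in range(5000):
    proposal = np.clip(weights + rng.normal(0, 0.02, n_plans), 0, None)
    proposal /= proposal.sum()                         # stay on the simplex of convex combinations
    delta = metric(proposal) - current
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        weights, current = proposal, metric(proposal)  # accept better (or occasionally worse) blends
    temperature *= 0.999                               # geometric cooling

print(f"functionality-weighted mean dose after annealing: {current:.2f} Gy")
```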

12:06 - 12:18

Deep Learning Guided Radiotherapy Modality Selection via Multi-Task Dose Prediction - Austen Maniscalco (1), Ezek Mathew (1), David Parsons (1), Justin Visak (1), Mona Arbab (1), Prasanna Alluri (1), Xingzhe Li (1), Narine Wandrey (1), Mu-Han Lin (1), Asal Rahimi (1), Steve Jiang (1), Dan Nguyen (1)

1 - University of Texas Southwestern Medical Center (United States)

In radiation therapy (RT), choosing an appropriate treatment modality is a crucial and complex task due to the unique features of each modality that impact dose distribution. This complexity is compounded by the lack of quantitative, patient-specific data for modality selection. To address this gap, we developed an efficient multi-task convolutional neural network. This model utilizes patient-specific data to predict dose distributions for various modalities simultaneously. To evaluate model performance, we used a dataset consisting of 92 plans from 28 patients undergoing accelerated partial breast irradiation and divided it into training, validation, and test subsets. Across the test subset, the multi-task model dose predictions achieved a Mean Absolute Percent Error of 1.1033 ± 0.3627%. Our approach shows promise as an efficient, quantitative tool for guiding clinicians in RT modality selection. In the future, we aim to expand upon this work with a site-agnostic model to improve its applicability and generalizability in RT.

12:18 - 12:30

Producing Deliverable Treatment Plans with Deep Learning: A Fully Automated Pipeline Outside the Treatment Planning System - Gerd Heilemann (1), Tufve Nyholm (2), Gregor Goldner (1), Joachim Widder (1), Dietmar Georg (1), Peter Kuess (1)

1 - Department of Radiation Oncology, Medical University of Vienna (Austria)
2 - Department of Diagnostics and Interventions, Umeå University (Sweden)

Background: This study introduces a novel, fully automated one-click planning pipeline for radiation oncology, capable of operating independently of a traditional treatment planning system (TPS) without human intervention. Methods: An integrated pipeline combining a dose prediction model and a model for generating treatment plans was developed. In this pipeline, we can produce a DICOM RT plan file directly from the segmented CT images of a prostate patient. The plans were analyzed with respect to speed and clinical acceptability based on the recalculated dose distributions. Results: The pipeline efficiently produced 40 deliverable treatment plans with an average time of (38 ± 6) seconds. All plans met minimum PTV coverage requirements, but a few exceeded maximum dose limits for the bladder and rectum. Conclusion: This innovative, fully automated pipeline demonstrates potential for rapid and efficient treatment planning in radiation oncology, independent of any TPS.

Invited Speakers

Auditorium Lumière - Wednesday, July 10

Salle St Clair 3 - Wednesday, July 10

Language Models and Ontologies

Auditorium Lumière - Wednesday, July 10

14:15 - 14:27

Enhancing Domain-Specific Knowledge of LLMs through Retrieval Augmented Multi-Process Prompting (RAMP) - Jace Grandinetti (1), Rafe McBeth (1)

1 - University of Pennsylvania (United States)

Despite advancements in Large Language Models (LLMs), their application in specialized fields such as medical physics remains challenged by the need for domain-specific knowledge and the avoidance of hallucinations. This study introduces a novel framework called RAMP that incorporates Retrieval Augmented Multi-process Prompting techniques to enhance the domain-specific accuracy of LLMs without the need for extensive retraining. Benchmarking on a medical physics multiple-choice exam, we found that our proposed model significantly outperforms standard LLMs and humans, demonstrating improvements of as much as 68% and achieving a high score of 90% on the exam. This method reduces hallucinations and increases domain-specific performance, underscoring the potential for clinical use where accurate information is essential. The technique is easily adaptable to different domains and is independent of the underlying LLM, demonstrating its significant potential.
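
The snippet below sketches the general retrieval-augmented prompting pattern: retrieve relevant reference passages, then prompt the model to reason over them. The toy corpus, the character-count embedding, and the `call_llm` stub are placeholders and do not reflect the actual RAMP pipeline or its multi-process prompting stages.

```python
# Generic retrieval-augmented prompting sketch (not the RAMP implementation):
# retrieve the most relevant reference passages for a question, then ask the LLM
# to reason over them before answering. `embed` and `call_llm` are placeholders.
import numpy as np

corpus = [
    "The TG-51 protocol describes reference dosimetry for high-energy photon beams.",
    "Gamma analysis compares measured and calculated dose distributions.",
    "The linear-quadratic model relates cell survival to dose via alpha and beta.",
]

def embed(text: str) -> np.ndarray:
    # placeholder embedding: bag-of-characters; a real system would use a text-embedding model
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the LLM API of your choice here")

def answer(question: str, k: int = 2) -> str:
    scores = [float(embed(question) @ embed(doc)) for doc in corpus]
    context = "\n".join(doc for _, doc in sorted(zip(scores, corpus), reverse=True)[:k])
    prompt = (f"Use only the reference material below to answer.\n"
              f"References:\n{context}\n\nQuestion: {question}\n"
              f"Think step by step, then give the final answer.")
    return call_llm(prompt)
```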

14:27 - 14:39

An open source PyMedPhys harness for interfacing MOSAIQ® with Claude® - Stuart Swerdloff (1), Simon Biggs (2)

1 - ELEKTA Pty (New Zealand)
2 - Anthropic PBC (Australia)

We introduce tools added to the PyMedPhys project for connecting an anonymized Oncology Information System database, MOSAIQ®, to a Large Language Model, Claude2®, for natural language querying. The anonymization approach utilizes a custom strategy file provided to the 'pynonymizer' package, and the interface to Claude2® leverages existing capabilities in PyMedPhys and adds a Streamlit-based GUI and application logic for querying via Claude2 using Retrieval Augmented Generation over the schema and data dictionary, together with prompting.

14:39 - 14:51

Iterative Prompt Refinement for Radiation Oncology Symptom Extraction Using Teacher-Student Large Language Models - Reza Khanmohammadi (1), Ahmed Ghanem (2), Kyle Verdacchia (2), Ryan Hall (2), Mohamed Elshaikh (2), Benjamin Movsas (2), Hassan Bagher-Ebadian (2) (3) (4), Indrin Chetty (4) (5), Mohammad Ghassemi (6), Kundan Thind (2) (7)

1 - Department Computer Science and Engineering, Michigan State University, East Lansing, USA (United States)
2 - Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, USA (United States)
3 - Departments of Radiology and Osteopathic Medicine, Michigan State University, East Lansing, USA (United States)
4 - Department of Physics, Oakland University, Rochester, USA (United States)
5 - Department of Radiation Oncology, Cedars Sinai Medical Center, Los Angeles, USA (United States)
6 - Department of Computer Science and Engineering, Michigan State University, East Lansing, USA (United States)
7 - Department of Medicine, Michigan State University, East Lansing, USA (United States)

This study introduces a novel teacher-student architecture utilizing Large Language Models (LLMs) to improve prostate cancer radiotherapy symptom extraction from clinical notes. Mixtral, the student model, initially extracts symptoms, followed by GPT-4, the teacher model, which refines prompts based on Mixtral's performance. This iterative process involved 294 single-symptom clinical notes across 12 symptoms, with up to 16 rounds of refinement per epoch. Results showed significant improvements in extracting symptoms from both single- and multi-symptom notes. For 59 single-symptom notes, accuracy increased from 0.51 to 0.71, precision from 0.52 to 0.82, recall from 0.52 to 0.72, and F1 score from 0.49 to 0.73. In 375 multi-symptom notes, accuracy rose from 0.24 to 0.43, precision from 0.6 to 0.76, recall from 0.24 to 0.43, and F1 score from 0.20 to 0.44. These results demonstrate the effectiveness of advanced prompt engineering in LLMs for radiation oncology use.
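
A schematic version of the teacher-student refinement loop is shown below: the student extracts symptoms with the current prompt, and the teacher rewrites the prompt from the errors observed on a labelled batch. `student_extract` and `teacher_rewrite` are placeholders for calls to Mixtral-style and GPT-4-style models respectively; the stopping rule and error-summary format are assumptions.

```python
# Schematic teacher-student prompt refinement loop (placeholders only): the student
# model extracts symptoms with the current prompt, and the teacher model rewrites the
# prompt based on the errors made on a labelled batch of notes.
from typing import Callable

def refine_prompt(prompt: str,
                  notes: list[str],
                  labels: list[set[str]],
                  student_extract: Callable[[str, str], set[str]],
                  teacher_rewrite: Callable[[str, list[str]], str],
                  rounds: int = 16) -> str:
    for _ in range(rounds):
        errors = []
        for note, truth in zip(notes, labels):
            predicted = student_extract(prompt, note)
            if predicted != truth:
                errors.append(f"note: {note!r} expected {sorted(truth)} got {sorted(predicted)}")
        if not errors:
            break                          # stop early once the student is error-free on this batch
        prompt = teacher_rewrite(prompt, errors)
    return prompt
```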

14:51 - 15:03

A BFO-compliant Ontology for Radiation Therapy (RTO) - Mark Phillips (1), Andre Dekker (2), Jonathan Bona (3)

1 - University of Washington (United States)
2 - Maastricht University (Netherlands)
3 - University of Arkansas (United States)

This work describes the development of an ontology for the medical domain of radiation therapy. This effort was undertaken to provide a solid foundation upon which other standards within the field can be developed and applied. Given the uncertainties in vocabularies and concepts, the decision was made to harness the W3C ontology tools, which provide numerous advantages. These include interoperability between domains, a serialization that is widely accepted and computable, and an implementation that allows the use of Description Logic. The ontology follows the guidelines of the OBO Foundry and is based on the Basic Formal Ontology. It was written in the OWL language using the Protégé software package. Examples of the utility of this ontology in the clinical environment of radiation oncology are provided.

15:03 - 15:15

Patient and Public Involvement and Engagement on Natural Language Processing for Real World Cancer Medical Notes - Wuraola Oyewusi (1), Eliana M. Vasquez Osorio (1), Goran Nenadic (1), Issy Macgregor (2), Gareth Price (1)

1 - University of Manchester [Manchester] (United Kingdom)
2 - Vocal, NIHR Manchester Biomedical Research Centre, Manchester University NHS Foundation Trust (United Kingdom)

Beyond clinical trials with a limited number of participants, Real-World Data (RWD) has the potential to improve inclusion in healthcare research. In deploying AI such as Natural Language Processing (NLP) for healthcare, careful data and process curation is as important as the algorithms themselves. This ensures patient-focused outcomes, effective modeling, ethical use, and real-world clinical relevance. Therefore, stakeholder involvement through methods like Patient and Public Involvement and Engagement (PPIE) is crucial. This work explores the process and findings from implementing PPIE for using NLP on cancer-related free-text electronic health records. The identifiable nature of these data presents unique challenges compared to working with easily anonymizable data. The PPIE event was an in-person focus group discussion with cancer survivors and caregivers. Key questions and feedback focused on data use, acceptable consent processes, research communication, and participation. Regarding consent, two-thirds of the contributors preferred a national opt-out approach, while one-third preferred project-specific consent.

Decision Support Systems

Salle St Clair 3 - Wednesday, July 10

14:15 - 14:27

An AI-Driven Radiation Therapy Workflow Management Platform - Anyi Li (1), Doori Rose, Grace Chow, Laura Cervino, Joseph Deasy, Sharif Elguindi, Gregory Niyazov, Jean Moran, Darian Pazgan-Lorenzo, Ellen Pinto, Lakshmi Santanam, Niral Shah, Pengpeng Zhang

1 - Memorial Sloan Kettering Cancer Center (United States)

Our team has engineered PlanQ, an innovative AI-powered radiotherapy workflow management platform, designed to streamline clinical decision-making and enhance operational efficiency within high-volume radiation therapy settings. PlanQ, with its flexible and modular design, integrates data across diverse systems and formats, offering real-time multimodal data curation, while incorporating practitioner insights as definitive inputs. By leveraging a sophisticated language model and natural language processing, PlanQ adeptly interlinks disparate clinical incidents, facilitates the examination of historical patient treatment information, and gauges re-irradiation hazards. Additionally, the platform centralizes the visualization and oversight of clinical workflows, and an AI-driven Operational Research model adeptly prioritizes planning schedules for a large team based on complexity and urgency. Empirical results indicate PlanQ's efficacy in diminishing coordination workload by 70%, reducing planning tasks by 10%, and saving an hour daily in plan verification efforts. The platform's language model demonstrates a remarkable 95% precision in synthesizing isolated clinical occurrences, ensuring immediate workflow transparency.

14:27 - 14:39

A mixture of hidden Markov models to predict the lymphatic spread in head and neck cancer depending on primary tumor location - Roman Ludwig (1), Julian Brönnimann (1), Yoel Pérez Haas (1), Esmée Lauren Looman (1), Panagiotis Balermpas (1), Sergi Benavente (2), Adrian Schubert (3), Dorothea Barbatei (4), Laurence Bauwens (4), Jean-Marc Hoffmann (1), Olgun Elicin (3), Matthias Dettmer (3), Bertrand Pouymayou (1), Roland Giger (3), Vincent Gregoire (4), Jan Unkelbach (1)

1 - University Hospital Zurich (Switzerland)
2 - Hospital Vall d'Hebron (Spain)
3 - Bern University Hospital (Switzerland)
4 - Centre Léon Bérard (France)

We previously developed a mechanistic hidden Markov model (HMM) to predict lymphatic tumor progression in oropharyngeal squamous cell carcinomas. To extend the model to other tumor subsites in the head and neck, defined by ICD-10 codes, we develop a mixture model combining multiple HMMs. The mixture coefficients and the model parameters are learned via an EM-like algorithm from a large multi-centric dataset on lymph node involvement. The methodology is demonstrated for tumors in the oropharynx and oral cavity. The mixture model groups anatomically nearby subsites and yields interpretable mixture coefficients consistent with anatomical location. It allows the prediction of differences in lymph node involvement depending on tumor subsite.
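
The update of the subsite-specific mixture coefficients can be sketched as a standard EM step, shown below with the component-HMM likelihoods abstracted into a precomputed matrix. The numbers of components and subsites and the random likelihoods are placeholders, and the updates of the HMM parameters themselves are omitted.

```python
# Simplified sketch of the EM-style update for subsite-specific mixture coefficients.
# Component HMM likelihoods are abstracted into a placeholder matrix; updating the
# HMM parameters themselves is omitted.
import numpy as np

rng = np.random.default_rng(4)
n_patients, n_components, n_subsites = 300, 2, 3
subsite = rng.integers(0, n_subsites, n_patients)        # ICD-10 subsite index per patient
likelihood = rng.random((n_patients, n_components))      # p(involvement pattern | component HMM)

pi = np.full((n_subsites, n_components), 1.0 / n_components)   # mixture coefficients per subsite
for _ in range(50):
    # E-step: responsibility of each component for each patient
    weighted = likelihood * pi[subsite]
    resp = weighted / weighted.sum(axis=1, keepdims=True)
    # M-step: subsite-wise mixture coefficients are the average responsibilities
    for s in range(n_subsites):
        pi[s] = resp[subsite == s].mean(axis=0)

print(np.round(pi, 3))
```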

14:39 - 14:51

Data Collection Platform for Radiation Oncology Clinical Use and Medical Research in a Multivendor Setting - Diego García (1), David Brito (1)

1 - Sociedad de Lucha Contra el Cáncer SOLCA (Ecuador)

A data collection platform, Armoniq, was developed for radiation oncology to enhance clinical practice and research. The platform organizes workflows, creates forms, and generates reports. Radiotherapy necessitates efficient access to patient data within departmental workflows and for clinical research, yet existing systems often lack interoperability. The platform integrates Mosaiq, Aria, and local Electronic Health Records, presenting data through a user-friendly web interface. Armoniq, hosted locally, employs PHP and MySQL, adhering to FAIR principles. User authentication and read-only SQL queries ensure data security. Since mid-2017, the platform has managed 12,500 cases, improved staff response and replaced paper records. The data collection module supported 4 studies with 186 variables, handling 1,500+ daily requests from 143 users. The system centralizes data, enhancing patient care, workflow efficiency, and medical research. Its success underscores the need for standardized healthcare data management. Our approach can help other facilities in similar endeavors, improving data practices and research contributions.

14:51 - 15:03

PARROT: A Web Platform for AI-Driven Radiation Therapy Planning - Margerie Huet Dastarac (1), Wei Zhao (2), Sylvain Deffet (3), Eléonore Longton (4), Sophie Cvilic (4), Edmond Sterpin (1), John Lee (1), Ana Barragan-Montero (1)

1 - Université Catholique de Louvain = Catholic University of Louvain (Belgium)
2 - Ecole Polytechnique de Louvain (Belgium)
3 - Ion Beam Applications (IBA) (Belgium)
4 - Cliniques Universitaires Saint-Luc [Bruxelles] (Belgium)

Artificial Intelligence (AI) is gaining momentum in medical fields like radiation therapy. Models to delineate structures and predict optimal dose trade-offs for patients are being developed by the research community. Although some of these models are starting to be implemented clinically, an end-to-end workflow with a user-friendly graphical interface to visualize the results of successive AI models and to support treatment indication decisions is still lacking. To address this problem, we present PARROT, a free and open-source web platform that facilitates the use of AI delineation and dose prediction models and the visualization of the model outputs. The treatment decision support includes clinical evaluation tools to compare dose distributions, among which are normal tissue complication probability (NTCP) models.

15:03 - 15:15

Dose evaluation on daily CBCTs for breast cancer patients - Peter Stijnman (1), Maureen Groot Koerkamp (1), Anette Houweling (1), Cornel Zachiu (1), Bas Raaymakers (1)

1 - Department of Radiotherapy Utrecht (Netherlands)

In this work, an interfraction dose evaluation workflow based on daily CBCT data for breast cancer patients is shown. Using deformable image registration, a synthetic CT is created from the planning CT and the daily CBCT, and the contours delineated on the planning CT are propagated to the synthetic CT. This synthetic CT is used to calculate the daily delivered dose, from which a trigger system can be built to flag patients who might receive suboptimal treatment. Currently, the workflow has processed 456 patients with a total of 5157 fractions. The population-based data suggests that overall the delivered treatment meets a very high standard (i.e. 97.3% of patients receive an average CTV coverage higher than 95%). However, there are some patients for whom the treatment could be improved based on the patient-specific information that this workflow supplies. Furthermore, any uncertainties in the clinic about the impact of anatomical changes on the delivered dose can be verified with this workflow.

Invited Speakers

Auditorium Lumière - Wednesday, July 10

Salle St Clair 3 - Wednesday, July 10

Software Development and Interoperability

Auditorium Lumière - Wednesday, July 10

16:15 - 16:30

Harnessing Radiotherapy Data for Quality Care and Outcome Research - Rishabh Kapoor (1), William Sleeman, Jatinder Palta

1 - Rishabh Kapoor (United States)

This work presents the development and validation of the Health Information Gateway and Exchange (HINGE) software platform, designed to harness radiotherapy data for quality care and outcome research in oncology. HINGE addresses challenges associated with healthcare data, including volume, variety, velocity, veracity, and value, by offering scalable architecture, diverse data format handling, real-time processing, data quality prioritization, and insightful data analysis. Key design features include data standardization; integration with the EHR, Treatment Planning System (TPS), and Treatment Management System (TMS); and a user-friendly interface with disease-specific templates. The platform underwent validation using a comprehensive dataset, showing promising results in reducing physician burden and enhancing data accuracy. Challenges encountered in building HINGE, such as consensus among physicians, clinical workflow variations, and system integrations, were overcome through collaboration and iterative improvements. The platform's deployment architecture ensures reliability, security, and performance, with ongoing evaluation for VA compliance. Overall, HINGE represents a significant advancement in facilitating data-driven decision-making in radiation oncology.

16:30 - 16:42

TrajectoryLog.NET Applications of an Integrated Trajectory Log File Analysis API - Matthew Schmidt (1), Frank Marshall (1), Sarah Cradick (1), Justin Mikell (1), Phillip Wall (1), Nels Knutson (1), Thomas Mazur (1), Yao Hao (1)

1 - Washington University School of Medicine in St. Louis (United States)

Machine log file analysis has shown effective utility within a quality assurance program for both periodic linear accelerator testing and patient-specific plan verification. Limitations of existing machine log analysis implementations include the lack of integration with treatment planning system application programming interfaces and incompatibility with emerging ring delivery system linear accelerator platforms. An application programming interface (API), TrajectoryLog.NET, has been developed to overcome these limitations. This API was benchmarked against a standard Python trajectory log file analysis tool and employed in the creation of two analysis applications: a Respiratory Gating QA Analysis tool and a patient-specific delivery parameter report. In benchmarking, all parameters were within 5.0 × 10⁻⁵ deg and 5.0 × 10⁻⁶ cm for rotational and translational axes of motion, respectively. The API was also used to determine the agreement in gantry angle and MLC position between gated and non-gated QA, and to determine the dose rate hysteresis in a gated beam, which averaged 5.5 control points (110 ms).

16:42 - 16:54

PARADIM: a platform to support research at the interface of AI and medical imaging - Yannick Lemaréchal (1), Gabriel Couture (1), François Pelletier (1), Pierre-Luc Asselin (1), Ronan Lefol (1), Samuel Ouellet (1), Leonardo Di Schiavi Trotta (1), Philippe Despres (1)

1 - Department of Physics, Physical Engineering and Optics, Université Laval (Canada)

This paper describes a digital infrastructure designed to support AI research in medical imaging, with a focus on Research Data Management best practices. The platform (PARADIM, Platform for the annotation, reuse and analysis of medical images - Plateforme d'Annotation, de Réutilisation d'Analyse D'Images Médicales in French) is rooted in the FAIR principles, through strict compliance with the DICOM standard, and aims at fulfilling several needs arising from the research community at the intersection of AI and medical imaging: robust data curation procedures, responsible data access and effective data governance, robust de-identification and annotation pipelines, and streamlined data operations (e.g. model training, analysis). The platform, built from open-source components, is designed to facilitate and automate the execution of large-scale, computationally-intensive pipelines (e.g. automatic segmentations, dose calculations, AI model training). PARADIM fills a gap at the interface of AI and medical imaging, where data and digital infrastructures were historically not given the attention and resources they deserve.

16:54 - 17:06

pyCERR – A Python-based Computational Environment for Radiological Research - Aditya P. Apte (1), Aditi Iyer, Eve Locastro, Sharif Elguindi, Jue Jiang, Jung Hun Oh, Harini Veeraraghavan, Amita Shukla-Dave, Joseph O. Deasy

1 - Memorial Sloan Kettering Cancer Center (United States)

A Python-native, GNU GPL-licensed software framework, pyCERR, for commonly used radiological image data formats is presented in this work. It enables researchers to organize, access and transform metadata from high-dimensional, multimodal datasets to build cloud-compatible workflows for Artificial Intelligence (AI) modeling in radiation therapy and image analysis. pyCERR provides an extensible data structure and a viewer to allow for rapid prototyping. Various modules are provided to facilitate analyses, including metrics for comparing image segmentations, Image Biomarker Standardization Initiative-compliant radiomics, radiotherapy Dose Volume Histogram-based features, convolutional neural network-based models for image segmentation, normal tissue complication and tumor control models for radiotherapy, and deformable image registration. pyCERR integrates with Jupyter notebooks, GitHub, the Napari viewer, and cloud services. In summary, pyCERR provides an end-to-end radiological image analysis framework to facilitate reproducible research and seamless integration with cloud platforms and clinical systems.

17:06 - 17:18

A system of governance for use of scripts in a commercial treatment planning system - Kenton Thompson, Keith Offer, Glen Osborne, Catherine Lawford, Alan Turner, Sandro Porceddu, Tomas Kron (1)

1 - Peter MacCallum Cancer Centre (Australia)

Scripting has become common practice to enhance radiotherapy treatment planning and extract additional information for decision making and research. However, scripts are not part of the commercial planning software that has been approved for clinical use by an appropriate authority. As such, we have developed a process that requires scripts to be discussed by a multidisciplinary group, undergo appropriate testing, and be signed off by a member of the department's executive. Since 2019, sixteen scripts have been authorized for use in our Varian Eclipse treatment planning system. While most of them are 'plug-in' and 'read-only', the process has also allowed for the use of stand-alone scripts and scripts that can write to the Eclipse database. The governance process developed encourages participation of all professional disciplines and ensures all scripts that can impact the treatment plan are documented and verified.

17:18 - 17:30

QuAAC: An interoperable routine quality assurance data format - James Kerns (1), Randle Taylor

1 - Radformation (United States)

A new data format is proposed for the interoperability and exchange of routine machine quality assurance data between in-house and commercial software. The format is flexible for multiple workflows, is human-readable, and uses common standards available today.

Monte-Carlo Simulation

Salle St Clair 3 - Wednesday, July 10

16:15 - 16:30

MCMimic: a machine learning approach for fast prediction of Monte Carlo proton dose and LET - Samuel Ingram (1) (2), Karen Kirkby (1) (2), Ranald Mackay (1) (2), Matthew Clarke (1) (2)

1 - Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester (United Kingdom)
2 - Division of Cancer Sciences, University of Manchester, Manchester (United Kingdom)

Monte Carlo is an important technique for achieving accurate dose and LET distributions, but comes at the cost of long simulation times. In this study, machine learning is utilised to build a generator able to predict both dose and LET distributions simultaneously for proton beam therapy. An adapted pix2pix generative adversarial network is trained on Monte Carlo distributions in a fully supervised learning approach. This results in the ability to predict both dose and LET at increased speeds similar to those of analytical approaches. At a 1%/1 mm γ criterion, the model predicts dose with a 98.36% pass rate and LET with a 67.29% pass rate. When using combinations of dose and LET, as typically done for radiobiological review, γ pass rates are over 90%. Integration of the model into clinical networks will allow its use directly from the treatment planning system. Machine learning is well suited to predicting Monte Carlo results.

16:30 - 16:42

Evaluation of the GATE-RTion/Geant4 simulation speed and dosimetric commissioning for independent dose calculation in carbon ion pencil beam scanning radiotherapy - Yihan Jia (1) (2), Lisa Hartl (1) (3), Martina Favaretto (1), Markus Stock (1) (4), Loic Grevillot (1), Andreas F. Resch (1)

1 - MedAustron Ion Therapy Center (Austria)
2 - Department of Radiation Oncology, Medical University of Vienna (Austria)
3 - Faculty of Electrical Engineering and Information Technology, Vienna University of Technology (Austria)
4 - Department of Oncology, Karl Landsteiner University of Health Sciences (Austria)

The open-source Independent Dose Calculation (IDC) system IDEAL v1.1, using GATE-RTion v1.0/Geant4.10.3 as its dose engine, is designed for the verification of treatment planning system dose calculations for patients treated with proton and carbon ion pencil beam scanning. This study focuses on carbon ions only and investigates the simulation speed of different GATE-RTion/Geant4 parameter settings. Over the clinical energy range of 120-400 MeV/u, key pencil beam parameters were simulated and compared with the reference dosimetric commissioning dataset of two clinical carbon ion beamlines at the MedAustron Ion Therapy Center. Twenty spread-out Bragg peaks for cubic targets with field sizes of 3-17 cm, modulation lengths of 3-10 cm, and centers at depths of 4-24 cm in water were simulated. With the QGSP_BIC_EMZ physics list and a high electron production cut, the simulation speed was improved while maintaining good dose calculation accuracy. IDEAL was commissioned from a dosimetric standpoint for carbon ion IDC.

16:42 - 16:54

New global reconstruction for charged particles with the FOOT experiment - Christian Finck (1) (2)

1 - IPHC (France)
2 - Département Recherches Subatomiques (France)

In the framework of measuring double differential cross-sections for hadrontherapy purposes, a new algorithm for global track reconstruction (Tracking Of Ejectile) was implemented in the analysis package for the FOOT experiment data set. The algorithm is based on an unscented Kalman filter approach. The cluster recognition efficiency was found to be around 99%, with purity above 94%. The momentum resolution of the reconstructed track for particles with a momentum ≥ 2 GeV/c was found to be lower than 5%, which corresponds to the resolution required by the FOOT experiment in order to separate the different isotopes.

16:54 - 17:06

Deep-learning powered denoising of Monte Carlo dose distributions - Hannes A. Loebner (1), Raphael Joost (1), Jenny Bertholet (1), Stavroula Mougiakakou (2), Michael K Fix (1), Peter Manser (1)

1 - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern (Switzerland)
2 - ARTORG Center, University of Bern (Switzerland)

This study presents the development of deep-learning models to reduce noise in high statistical uncertainty (SU) Monte Carlo dose distributions (MC-DDs) of radiotherapy plans. Five 3D U-net models, varying in input, size, and batch normalization, were trained on high/low SU MC-DD pairs from clinically-motivated plans. Depending on the model, the CT was included as input. Model accuracy was evaluated using the gamma passing rate (GPR, 2%/2 mm, 10% threshold) comparing the low SU MC-DD (reference) with the denoised DD (evaluation). The best-performing model includes the CT as input and no batch normalization. It was retrained on high/low-SU MC-DD pairs of 106 clinically-motivated VMAT arcs for 29 CTs and achieved an average GPR of 82.9% on the test set. Applied to 12 new unseen cases from different treatment sites (head and neck, lung, prostate), a GPR of 91.0% was obtained on average. Denoised MC-DDs are computed in 35.1 seconds, a 340-fold efficiency gain compared to low-SU MC-DD calculation.

17:06 - 17:18

Development of a Range Probe to Improve Dose Delivery Accuracy in Proton Beam Therapy - Josephine Jones (1) (2), Michael Taylor (1) (2), Adam Aitkenhead (1) (2), Samuel Manger (1) (2)

1 - University of Manchester [Manchester] (United Kingdom)
2 - The Christie NHS Foundation Trust [Manchester] (United Kingdom)

Proton therapy requires accurate calculation of the proton range in the patient, yet no methods for in-vivo range verification are currently in routine clinical use. A proton range probe is a possible solution, which we define as a small number of pencil beams that traverse the patient to measure water equivalent thickness (WET) and verify calculations from X-ray CT. The proposed range probe is based on time-of-flight (TOF) measurements. A pair of sensors upstream of the patient measures the initial proton energy and a pair of sensors downstream measures the final proton energy. The mean energy change can be used to calculate the WET along the proton path using a calibration curve. Monte Carlo modelling of the TOF range probe was done using Geant4 to investigate how sensor parameters affect the accuracy and precision of WET measurement. Simulations with homogeneous materials demonstrated WET accuracy and resolution below 1% with sensor timing resolution ≤ 60 ps.
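
As a worked example of the energy-to-WET conversion, the snippet below uses the Bragg-Kleeman range-energy approximation for water in place of the probe's measured calibration curve; the constants are textbook approximations and the example energies are invented.

```python
# Toy water-equivalent thickness (WET) estimate from entrance/exit proton energies,
# using the Bragg-Kleeman range-energy approximation R = a * E**p in water
# (a ~ 0.0022 cm/MeV^p, p ~ 1.77). The real probe would use a measured calibration curve.
a, p = 0.0022, 1.77

def range_in_water_cm(energy_mev: float) -> float:
    return a * energy_mev ** p

def wet_cm(e_in_mev: float, e_out_mev: float) -> float:
    # WET is the water thickness producing the same change in residual range
    return range_in_water_cm(e_in_mev) - range_in_water_cm(e_out_mev)

# example: 230 MeV protons entering, mean exit energy of 150 MeV measured by the TOF sensors
print(f"estimated WET: {wet_cm(230.0, 150.0):.1f} cm")
```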

17:18 - 17:30

Implementation of the Mayo Clinic Florida Microdosimetric Kinetic Model and AMDM into GATE/Geant4: Investigation of the impact of nozzle configurations - Hermann Fuchs (1) (2), Alessio Parisi (2), Keith Furutani (2), Dietmar Georg (1), Chris Beltran (2)

1 - Medizinische Universität Wien = Medical University of Vienna (Austria)
2 - Mayo Clinic Florida (United States)

In ion beam therapy, a higher relative biological effectiveness (RBE) with respect to X-rays is well established. The RBE is known to change dynamically depending on multiple factors, requiring complex biological models. A posteriori recalculation of most of these biological models, e.g. using different cell-line data, is challenging, requiring a complete recalculation of the whole treatment plan. The abridged microdosimetric distribution methodology (AMDM), which attempts to solve this issue, was integrated into the open-source Monte Carlo toolkit OpenGATE10. Furthermore, the calculation of RBE based on the Mayo Clinic Florida microdosimetric kinetic model (MCF MKM) was implemented and used to compare the RBE of two nozzle designs, those of MedAustron and Mayo Clinic Florida. First investigations indicate that the nozzle design does not markedly affect the RBE. However, for mono-energetic carbon ion beams, changes in the initial energy spread due to different ripple filter designs may have a non-negligible influence.

Invited Speakers

Auditorium Lumière - Thursday, July 11

Salle St Clair 3 - Thursday, July 11

Image Reconstruction and Generative Models

Auditorium Lumière - Thursday, July 11

09:30 - 09:45

Diffusion Schrödinger Bridge Models for High-Quality MR-to-CT Synthesis for Head and Neck Proton Treatment Planning - Muheng Li (1) (2), Xia Li (1) (3), Sairos Safai (1), Damien Weber (1) (4) (5), Antony Lomax (1) (2), Ye Zhang (1)

1 - Center for Proton Therapy, Paul Scherrer Institut, Villigen (Switzerland)
2 - Department of Physics [ETH Zürich] (Switzerland)
3 - Department of Computer Science [ETH Zürich] (Switzerland)
4 - Department of Radiation Oncology, University Hospital of Zurich, Zurich (Switzerland)
5 - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern (Switzerland)

In recent advancements in proton therapy, MR-based treatment planning is gaining momentum to minimize additional radiation exposure compared to traditional CT-based methods. This transition highlights the critical need for accurate MR-to-CT image synthesis, which is essential for precise proton dose calculations. Our research introduces the Diffusion Schrödinger Bridge Models (DSBM), an innovative approach for high-quality MR-to-CT synthesis. DSBM learns the nonlinear diffusion processes between MR and CT data distributions. This method improves upon traditional diffusion models by initiating synthesis from the prior distribution rather than the Gaussian distribution, enhancing both generation quality and efficiency. We validated the effectiveness of DSBM on a head and neck cancer dataset, demonstrating its superiority over traditional image synthesis methods through both image-level and dosimetric-level evaluations. The effectiveness of DSBM in MR-based proton treatment planning highlights its potential as a valuable tool in various clinical scenarios.

09:45 - 09:57

Self super-resolution of anisotropic volumes in prostate MRI with normalized edge priors - Vincent Jaouen (1) (2), Pierre-Henri Conze (1) (2), Julien Bert (2), Dimitris Visvikis (2)

1 - IMT Atlantique (France)
2 - Laboratoire de Traitement de l'Information Medicale (France)

Due to involuntary organ motion, prostate magnetic resonance images are often acquired in 2D with thick through-plane spacings and reconstructed into anisotropic voxelized volumes. In prostate cancer (PCa) radiotherapy, annotations such as zonal or lesion segmentations are then performed on the few axial slices in which the prostate appears, leading to coarse and often inconsistent delineations across slices, which are nevertheless used as ground truth for deep segmentation model training and performance reports. Recently, deep self-supervised super-resolution (SSR) was proposed in brain MRI to remove the need for calibrated training data, by learning the low-resolution to high-resolution mapping from the anisotropic image itself. In this work, we propose a new training objective (loss function) for SSR based on the alignment of normalized image gradient vectors, a technique normally used for image registration. Preliminary experiments on the public Prostate158 dataset suggest that automatic PCa detection could be improved using SSR with the proposed training objective. However, the low quality of lesion annotations obtained on a few slices is a major bottleneck to confirming these findings quantitatively.
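
A minimal PyTorch version of such a training objective is sketched below, penalizing misalignment of normalized image gradient vectors between the super-resolved prediction and a reference (the squared-cosine form follows the normalized gradient fields idea from registration). The exact formulation, normalization constant, and 2D setting are assumptions rather than the authors' loss.

```python
# Minimal PyTorch sketch of a loss based on the alignment of normalized image
# gradient vectors (as used in gradient-based registration); the squared-cosine
# formulation and 2D setting are illustrative choices.
import torch

def normalized_gradient_loss(pred: torch.Tensor, ref: torch.Tensor, eps: float = 1e-6):
    """pred, ref: (B, 1, H, W) images; returns 1 - mean squared cosine similarity of gradients."""
    def grads(img):
        gx = img[..., :, 1:] - img[..., :, :-1]          # finite differences along x
        gy = img[..., 1:, :] - img[..., :-1, :]          # finite differences along y
        return gx[..., 1:, :], gy[..., :, 1:]            # crop both to a common shape
    px, py = grads(pred)
    rx, ry = grads(ref)
    dot = px * rx + py * ry
    norm = torch.sqrt((px**2 + py**2 + eps) * (rx**2 + ry**2 + eps))
    return 1.0 - ((dot / norm) ** 2).mean()              # squared cosine: sign-insensitive alignment

loss = normalized_gradient_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```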

09:57 - 10:09

Patient-specific simultaneous CT and MR reconstruction from sparse projection measurements using implicit neural representation - Oscar Pastor-Serrano (1), Liang Qiu (1), Siqi Ye (1), Lei Xing (1)

1 - Department of Radiation Oncology [Stanford] (United States)

Radiotherapy workflows can significantly benefit from the simultaneous acquisition of paired computed tomography (CT) and magnetic resonance (MR) images, particularly in MR-guided treatments. In this study, we introduce a patient-specific algorithm designed to precisely reconstruct CT scans and their corresponding MR intensities using sparse measurements (i.e., as few as 10 projections). We propose a novel framework based on a generalizable implicit neural representation (GINR) with two channels, a method that incorporates prior patient information into the weights of a neural network mapping spatial coordinates to CT or MR values. With only a few projections, GINR can reconstruct the new CT anatomy and the corresponding MR within a few minutes. GINR achieves a peak signal-to-noise ratio (PSNR) of 40.13±1.05 dB when reconstructing brain CTs from 10 projections, and 26.64±0.57 dB for their corresponding MR counterparts, establishing itself as a fast and accurate alternative to traditional algorithms.
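
The core of an implicit neural representation with two output channels can be sketched as a coordinate MLP, as below; the positional encoding, network size, and the omission of prior-patient conditioning and of the projection-consistency training loop are simplifications, not the GINR architecture.

```python
# Minimal sketch of an implicit neural representation with two output channels:
# an MLP maps a 3D spatial coordinate to (CT, MR) intensities. Conditioning on
# prior patient information and the projection-consistency training are omitted.
import torch
import torch.nn as nn

class JointINR(nn.Module):
    def __init__(self, hidden=256, n_freq=8):
        super().__init__()
        self.n_freq = n_freq
        in_dim = 3 * 2 * n_freq                         # sinusoidal positional encoding of (x, y, z)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                       # channel 0: CT (HU), channel 1: MR intensity
        )

    def encode(self, xyz):                              # xyz in [-1, 1], shape (N, 3)
        freqs = 2.0 ** torch.arange(self.n_freq, device=xyz.device) * torch.pi
        angles = xyz[:, :, None] * freqs                # (N, 3, n_freq)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

    def forward(self, xyz):
        return self.mlp(self.encode(xyz))

coords = torch.rand(1024, 3) * 2 - 1                    # random query points in the volume
ct_mr = JointINR()(coords)                              # (1024, 2): predicted CT and MR values
```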

10:09 - 10:21

FDDM: Unsupervised MR-to-CT Translation with a Frequency-Decoupled Diffusion Model - Yunxiang Li (1), Hua-Chieh Shao (1), Xiaoxue Qian (1), You Zhang (1)

1 - University of Texas Southwestern Medical Center (United States)

Diffusion models have shown great promise in unsupervised medical image translation applications. However, diffusion models are often challenged by inaccurate anatomical structure translations especially for unpaired training data, while anatomical structures are crucial for reliable disease diagnosis and treatment planning. We introduced a Frequency-Decoupled Diffusion Model (FDDM) to achieve unsupervised, accurate MR-to-CT translations for unpaired data. FDDM separates the frequency components of medical images in the Fourier domain to enable structure-preserving image conversion. Specifically, it combines an unsupervised frequency conversion module with a diffusion model that is initialized and conditioned on the converted frequency information. We evaluated FDDM on a brain MR-to-CT dataset, comparing it with GAN-, VAE-, and diffusion-based models. FDDM excelled in all metrics including Fréchet inception distance (FID), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The substantial improvement offered by FDDM demonstrated its ability to generate realistic CT images without compromising the underlying anatomical accuracy.

10:21 - 10:33

Enhancing CBCT Image Quality Using a Novel sCT-based Shading Correction - Francesco Maria Russo (1) (2), Lukas Lamminger (1), Gabriele Belotti (1), Katia Parodi (2), Heinz Deutschmann (1), Philipp Steininger (1)

1 - medPhoton GmbH (Austria)
2 - Department of Medical Physics in Radiation Oncology, Ludwig-Maximilians-Universität München (Germany)

This study introduces a novel method for shading artefact correction in cone-beam computed tomography (CBCT), utilizing AI-generated synthetic CT (sCT) as a template. Our technique improves CBCT image quality by reducing spatial non-uniformity (SNU) and mean HU error, surpassing previous methods by eliminating the need for a pre-existing planning CT (pCT) and accelerating the correction process. A key advancement of our approach is its ability to enhance CBCT quality reliably, even with low-quality sCTs marked by significant hallucinations and artefacts. This ensures that sCT corrections enhance CBCT without transferring existing sCT artefacts to the corrected images, offering a safer and more dependable method for integrating sCTs in clinical practice.

10:33 - 10:45

Comparison of saddle and helical trajectories of a mobile cone beam computed tomography scanner with online calibration correction - Chengtao Wei (1) (2), Simon Rit (3), Matthieu Laurendeau (3) (4), Philipp Steininger (5), Felix Ginzinger (5), Marco Riboldi (2), Christopher Kurz (1), Guillaume Landry (1)

1 - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich (Germany)
2 - Department of Medical Physics, Ludwig-Maximilians-Universität München, Garching (Germany)
3 - Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon (France)
4 - Thales AVS, Moirans (France)
5 - Research & Development, medPhoton GmbH, Salzburg (Austria)

In cone-beam computed tomography (CBCT), the cone-beam (CB) artefact causes object-dependent image distortions away from the central plane. Non-circular trajectories such as saddles or helices can mitigate the CB artefact, but are challenging to execute with scanners not specifically designed for non-circular scanning. In this work, we made use of a recently available mobile robotic CBCT scanner (mobile ImagingRing, medPhoton, Salzburg, Austria) which allows sinusoidal motion perturbation of several degrees of freedom during a source and detector rotation, coupled with infrared (IR) motion tracking to measure the geometry of each trajectory. We scanned a disk phantom designed to highlight CB artefacts with circular, saddle and helical trajectories executed by driving motion with wheels. We computed an incompleteness metric, tan(psi), and correlated it with the region free from CB artefacts in the images. Using a separate cylindrical phantom, we additionally computed the modulation transfer function at 10 percent (MTF10) to assess the accuracy of the calibration correction. We found regions free of CB artefacts of up to 190 mm in the superior-inferior direction for the saddle trajectory, and of 120 mm to 140 mm for the helical trajectory. MTF10 was 1.39 lp/mm, 0.98 lp/mm and 1.03 lp/mm for the circular, saddle and helical trajectories with IR tracking, while without IR tracking the saddle and helical trajectories had MTF10 less than 0.4 lp/mm.

Prediction Models II

Salle St Clair 3 - Thursday, July 11

09:30 - 09:45

Deep learning-based Normal Tissue Complication Models for Head and Neck Cancer patients - Lisanne V. Van Dijk (1), Suzanne P.m. Vette (1), Hendrike Neh (1), Luuk Van Der Hoek (1), Hung Chu (1), Johannes A Langendijk (1), Peter M.a. Ooijen (1), Nanna M. Sijtsema (1)

1 - University Medical Center Groningen [Groningen] (Netherlands)

Accurate pre-treatment toxicity risk estimation in head and neck cancer (HNC) patients is essential to optimize radiotherapy and minimize long-term toxicities. This study exploits the potential of deep learning (DL) to predict toxicity from 3D data, aiming to improve on the current conventional 1D Normal Tissue Complication Probability (NTCP) models. Our results show that DL models incorporating full 3D dose and image information improved the predictive performance for late xerostomia, dysphagia, and taste loss compared to conventional NTCP models. The attention maps showed that the DL models focus on the salivary gland dose for xerostomia, a broad range of swallowing muscles for dysphagia, and the parotid glands and anterior region of the tongue for taste loss. This research sets a new benchmark for toxicity prediction in current HNC radiotherapy, paving the way for more informed treatment decisions, minimizing toxicities, and enhancing patients' quality of life.

09:45 - 09:57

Causal Targeted Machine Learning for Real-World Evidence Generation in Oncology: Assessing Causal Effects of Radiation Toxicity on Overall Survival in Stage III Non-Small Cell Lung Cancer - Rachael Phillips (1), Tomi Nano (2), Jessica Scholey (2), Efstathios Gennatas (3), Nasir Mohammed (4), Jing Zeng (5), Rupesh Kotesha (6), Pranshu Mohindra (7), Lane Rosen (8), John Chang (9), Henry Tsai (10), James Urbanic (11), Carlos Vargas (12), Nathan Yu (12), Lyle Ungar (13), Eric Eaton (13), Charles Simone (14), Mark Van Der Laan (1), Gilmer Valdes (2)

1 - Division of Biostatistics, School of Public Health, University of California, Berkeley (United States)
2 - Department of Radiation Oncology, University of California, San Francisco (United States)
3 - Department of Epidemiology and Biostatistics, University of California, San Francisco (United States)
4 - University of Washington and Seattle Cancer Care Alliance Proton Therapy Center, Seattle, Washington (United States)
5 - Northwestern Medicine Chicago Proton Center, Warrenville, Illinois (United States)
6 - Miami Cancer Institute, Department of Radiation Oncology, Miami, Florida (United States)
7 - University of Maryland School of Medicine and Maryland Proton Treatment Center, Baltimore, Maryland (United States)
8 - Willis-Knighton Medical Center, Shreveport, Louisiana (United States)
9 - Oklahoma Proton Center, Oklahoma City, Oklahoma (United States)
10 - New Jersey Procure Proton Therapy Center, Somerset, New Jersey (United States)
11 - California Protons Therapy Center, Department of Radiation Oncology, San Diego, California (United States)
12 - Mayo Clinic Proton Center, Department of Radiation Oncology, Phoenix, Arizona (United States)
13 - Department of Computer and Information Science, University of Pennsylvania, Philadelphia (United States)
14 - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York (United States)

This study explores the causal impact of radiation-induced toxicities on overall survival (OS) in stage III Non-Small Cell Lung Cancer (NSCLC) patients undergoing proton radiation therapy. Leveraging real-world data (RWD) from the Registry Study for Radiation Therapy Outcomes, and advanced causal inference methodologies that have garnered support from the US Food and Drug Administration's Real-World Evidence (RWE) Program, we estimated the impact of pneumonitis/dyspnea and esophagitis/dysphagia on OS with targeted minimum loss-based estimation (TMLE). Our findings suggest there may be a time-dependent relationship between pneumonitis/dyspnea and decreased OS, emphasizing the need for personalized treatment strategies that balance treatment intensity with toxicity management. Additionally, our methodology demonstrates the utility of targeted learning and its accompanying step-by-step causal roadmap for generating reliable RWE, setting a precedent for applying such techniques in radiation oncology to support clinical and regulatory decision-making, and ultimately enhance patient care.
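
As background on the estimator family, the sketch below illustrates a basic point-treatment TMLE for a binary outcome on synthetic data: an initial outcome regression, a propensity score, a logistic fluctuation along the "clever covariate", and a targeted update. This is a simplified illustration only; the study itself analyses time-dependent toxicities and overall survival, which requires a more involved longitudinal/survival formulation than shown here, and all variable names and data are assumptions.

    import numpy as np
    import statsmodels.api as sm

    expit = lambda z: 1.0 / (1.0 + np.exp(-z))
    logit = lambda p: np.log(p / (1.0 - p))

    # synthetic cohort: X = baseline covariates, A = binary toxicity indicator, Y = binary outcome
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))
    A = rng.binomial(1, expit(0.5 * X[:, 0]))
    Y = rng.binomial(1, expit(-0.5 - 0.8 * A + 0.6 * X[:, 1]))

    des = lambda a: np.column_stack([np.ones(n), a, X])    # outcome-model design matrix

    # 1) initial outcome regression Q(A, X) and propensity score g(X)
    Q_fit = sm.GLM(Y, des(A), family=sm.families.Binomial()).fit()
    g_fit = sm.GLM(A, np.column_stack([np.ones(n), X]), family=sm.families.Binomial()).fit()
    QA = Q_fit.predict(des(A))
    Q1 = Q_fit.predict(des(np.ones(n)))
    Q0 = Q_fit.predict(des(np.zeros(n)))
    g = np.clip(g_fit.predict(np.column_stack([np.ones(n), X])), 0.01, 0.99)

    # 2) targeting step: logistic fluctuation along the "clever covariate"
    H = A / g - (1 - A) / (1 - g)
    eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(), offset=logit(QA)).fit().params[0]

    # 3) targeted update of the counterfactual predictions and the effect estimate
    Q1_star = expit(logit(Q1) + eps / g)
    Q0_star = expit(logit(Q0) - eps / (1 - g))
    print("targeted estimate of the effect of toxicity on the outcome:",
          round(float(np.mean(Q1_star - Q0_star)), 3))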

09:57 - 10:09

Rethinking the Elective Target Volume: Modelling Bilateral Lymphatic Head and Neck Tumour Progression Using Probability Theory - Kristoffer Moos (1) (2), Jesper Eriksen (2) (3), Anne Holm (3), Stine Korreman (1) (2)

1 - Institute of Clinical Medicine, Aarhus University (Denmark)
2 - Danish Centre for Particle Therapy, Aarhus University Hospital (Denmark)
3 - Department of Experimental Clinical Oncology, Aarhus University Hospital (Denmark)

In radiotherapy, a low-dose elective target volume is often defined to include lymph nodes that may be malignant. The prescribed dose to the elective target volume is relatively low, but still results in large irradiated volumes with associated risk of severe side-effects. In this study we establish a model for bilateral tumour progression in patients with oropharyngeal squamous cell carcinoma for personalised elective target volume definition. An expansion of a previously proposed Bayesian network model was employed to estimate the probability of bilateral nodal spread. This study highlights the potential for reconsidering the elective target volumes through probabilistic modelling, aiming to decrease unnecessary radiation dose to organs-at-risk and healthy tissue.
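
To make the idea of probabilistic elective target definition concrete, the toy computation below enumerates a hidden ipsilateral involvement state to obtain the probability of contralateral occult involvement given a positive ipsilateral image. All probabilities are hypothetical placeholders chosen for illustration, not the parameters of the Bayesian network model used in this study.

    # hypothetical parameters, for illustration only (not the study's fitted values)
    p_ipsi = 0.55                              # prior probability of ipsilateral LNL involvement
    p_contra_given_ipsi = {0: 0.05, 1: 0.25}   # P(contralateral involvement | ipsilateral state)
    sens, spec = 0.80, 0.95                    # imaging sensitivity/specificity for the ipsilateral LNL

    def p_contra_given_positive_image():
        """P(contralateral occult involvement | ipsilateral LNL imaged positive),
        obtained by enumerating the hidden ipsilateral state."""
        num = den = 0.0
        for ipsi_state, p_state in ((1, p_ipsi), (0, 1.0 - p_ipsi)):
            p_obs = sens if ipsi_state else (1.0 - spec)   # P(positive image | hidden state)
            num += p_state * p_obs * p_contra_given_ipsi[ipsi_state]
            den += p_state * p_obs
        return num / den

    print(f"risk of contralateral involvement given a positive ipsilateral image: "
          f"{p_contra_given_positive_image():.2f}")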

10:09 - 10:21

Three-dimensional visualization of DEBI-NNs for supporting interpretability of predictive models relying on neural networks - Amine Boukhari (1) (2), Ecsedi Boglarka (2) (3), Nassib Abdallah (1), David Haberl (4), Clemens Spielvogel (4), László Papp (2), Mathieu Hatt (1)

1 - Laboratoire de Traitement de l'Information Medicale (France)
2 - Center for Medical Physics and Biomedical Engineering, Medical University of Vienna (Austria)
3 - Georgia Institute of Technology (United States)
4 - Division of Nuclear Medicine, Medical University of Vienna (Austria)

In the domain of medical imaging, model interpretability is required for AI experts to optimize their models, and it is paramount for clinical trust. This work explores the interpretability potential of the recently proposed Distance-Encoding Biomorphic-Informational Neural Networks (DEBI-NN) in the context of HPV prediction using the HECKTOR dataset. DEBI-NNs train 3D spatial coordinates instead of weights, thus requiring fewer trainable parameters than conventional NNs, and they are inherently 3D objects. This study relies on a 3D visualization approach to understand the spatial configurations of DEBI-NNs, shedding light on the network's consistency and aiding in identifying optimal configurations. Two training settings were tested, with and without spatial dropout (Settings A and B, respectively). Spatial dropout supports the approximation of a task-specific DEBI-NN during training. Results demonstrate the predictive power of DEBI-NNs and the added value of 3D DEBI-NN renderings in explaining them.
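
Since DEBI-NN neurons carry trained 3D coordinates, a rendering of the kind described here can be as simple as plotting neurons as points and inter-layer connections as line segments. The sketch below uses random, hypothetical coordinates and layer names and is not the actual DEBI-NN visualization code.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    layer_a = rng.uniform(0.0, 1.0, size=(8, 3))   # hypothetical trained 3D neuron coordinates
    layer_b = rng.uniform(1.0, 2.0, size=(4, 3))   # next layer, offset in space

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(*layer_a.T, color="tab:blue", label="layer A neurons")
    ax.scatter(*layer_b.T, color="tab:orange", label="layer B neurons")
    for a in layer_a:                              # draw every inter-layer connection as a line segment
        for b in layer_b:
            ax.plot(*np.column_stack([a, b]), color="grey", linewidth=0.3, alpha=0.5)
    ax.legend()
    plt.show()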

10:21 - 10:33

Robustness In Image-Based Data Mining: Bias In Training Data Can Affect The Identification Of Significant Regions - Tanwiwat Jaikuna (1) (2), Fiona Wilson (1), Marcel Van Herk (1), Peter Hoskin (1), Catharine West (1), David Azria (3), Jenny Chang-Claude (4), Maria Carmen De Santis (5), Sara Gutiérrez-Enríquez (6), Maarten Lambrecht (7), Petra Seibold (4), Elena Sperk (8), Paul R Symonds (9), Christopher J Talbot (9), Tiziana Rancati (10), Tim Rattay (9), Victoria Reyes (11), Barry S Rosenstein (12), Dirk De Ruysscher (13), Ana Vega (14), Liv Veldeman (15), Adam Webb (9), Marianne Aznar (1), Eliana Vasquez Osorio (1)

1 - The University of Manchester, Radiotherapy Related Research, Manchester (United Kingdom)
2 - Mahidol University, Radiology, Bangkok (Thailand)
3 - Montpellier University, Radiation Oncology, Montpellier (France)
4 - German Cancer Research Center (DKFZ), Cancer Epidemiology, Heidelberg (Germany)
5 - Fondazione IRCCS Istituto Nazionale dei Tumori, Radiation Oncology, Milan (Italy)
6 - Vall d'Hebron Barcelona, Hereditary Cancer Genetics Group, Barcelona (Spain)
7 - Leuvens Kankerinstituut, Radiotherapy-oncology, Leuven (Belgium)
8 - University of Heidelberg, Radiation Oncology, Mannheim (Germany)
9 - University of Leicester, Leicester Cancer Research Centre, Leicester (United Kingdom)
10 - Fondazione IRCCS Istituto Nazionale dei Tumori, Prostate Cancer Program, Milan (Italy)
11 - Vall d'Hebron Hospital Universitari, Radiation Oncology, Barcelona (Spain)
12 - Icahn School of Medicine at Mount Sinai, Radiation Oncology, New York (United States)
13 - Maastricht University Medical Center, Radiation Oncology, Maastricht (Netherlands)
14 - Universitario de Santiago, Fundacion Publica Galega Medicina Xenomica, Santiago de Compostela (Spain)
15 - Ghent University Hospital, Radiation Oncology, Ghent (Belgium)

Image-based data mining (IBDM) is a complex technique that requires robust technical and clinical validation to ensure repeatability. This study proposes a new method to assess the robustness of the IBDM-identified region using intra-cohort k-means clustering and k-fold cross-validation. We investigated the impact of biases in patient characteristics or outcome distribution using the k-means method and stratified k-fold resampling, with k = 5. IBDM was performed using four groups in each cluster/fold, and the overlap of their significant regions was compared to the IBDM result for the complete cohort. We found that bias in outcome distribution substantially impacts the volume of identified significant regions, even more than bias in patient characteristics. Assessing the effect of potential biases in the training data can increase confidence in the robustness of significant-region identification in IBDM.
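
The sketch below outlines the flavour of robustness check described here: stratified k-fold resampling of the cohort by outcome, followed by a Dice overlap between each subgroup's significance mask and the full-cohort mask. The run_ibdm function is a synthetic stand-in for the actual voxel-wise IBDM analysis, and the outcome data are hypothetical.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    def dice(a, b):
        """Dice overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def run_ibdm(patient_idx, outcomes):
        """Placeholder for the voxel-wise IBDM analysis: returns a binary significance
        mask on a common reference grid for the given subgroup of patients."""
        mask = np.zeros((32, 32, 32), dtype=bool)
        extent = 14 + int(outcomes[patient_idx].sum() % 6)   # synthetic, subgroup-dependent region
        mask[10:20, 10:20, 10:extent] = True
        return mask

    rng = np.random.default_rng(0)
    outcomes = rng.binomial(1, 0.3, size=200)                # hypothetical binary toxicity outcomes

    full_mask = run_ibdm(np.arange(len(outcomes)), outcomes)
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for i, (subgroup_idx, _) in enumerate(skf.split(outcomes, outcomes)):
        fold_mask = run_ibdm(subgroup_idx, outcomes)
        print(f"fold {i}: Dice overlap with the full-cohort region = {dice(fold_mask, full_mask):.2f}")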

10:33 - 10:45

Modelling occult lymph node metastases in HNSCC patients with a trinary state hidden Markov model - Yoel Pérez Haas (1), Roman Ludwig (1), Esmée Lauren Looman (1), Vincent Gregoire (2), Laurence Bauwens (2), Panagiotis Balermpas (1), Jan Unkelbach (1)

1 - Universitätsspital Zürich (Switzerland)
2 - Centre Léon Bérard (France)

Head and neck squamous cell carcinomas often spread through the neck's lymphatic system. Apart from clinically detected lymph node metastases, treatment includes elective irradiation of lymph node levels (LNL) at risk of harbouring occult metastases. We previously developed models for personalized prediction of the risk of occult metastases. These models relied on literature values for the sensitivity and specificity of MRI, CT and PET for detecting metastases, ranging between 70% and 90% averaged over all LNLs. However, recent data for oropharyngeal SCC indicate that the effective sensitivity varies between LNLs. Whereas LNL II conforms with the literature values, lower sensitivities are observed for LNL III (around 50%) and IV (around 30%). We developed a hidden Markov model of lymphatic tumour progression, which describes the difference between pathological and clinical lymph node involvement via growth of metastases from a microscopic to a macroscopic state, and thereby addresses the limitations originating from universal sensitivity and specificity values.
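
A minimal sketch of the trinary-state idea (healthy, microscopic/occult, or macroscopic involvement of a lymph node level) is given below. The transition probabilities are hypothetical placeholders, not the fitted model parameters, and the model is evolved over abstract time steps only to show how an occult-disease probability can be derived from a hidden state.

    import numpy as np

    # states: 0 = healthy, 1 = microscopic (occult) involvement, 2 = macroscopic involvement
    # hypothetical per-time-step transition probabilities (not the fitted model parameters)
    T = np.array([
        [0.90, 0.10, 0.00],   # a healthy level may acquire microscopic disease
        [0.00, 0.70, 0.30],   # microscopic disease may grow into macroscopic disease
        [0.00, 0.00, 1.00],   # macroscopic involvement is treated as absorbing
    ])

    state = np.array([1.0, 0.0, 0.0])            # the level starts healthy
    for t in range(1, 6):                        # evolve over abstract time steps until diagnosis
        state = state @ T
        p_occult = state[1] / (state[0] + state[1])
        print(f"t={t}: P(occult involvement | not clinically detectable) = {p_occult:.2f}")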

Keynote Speaker

Auditorium Lumière - Thursday, July 11