
AI4LUNGS Research Accepted at BIBE 2025: Incrementally Learning to Segment the Lungs: Similarities and Differences Across Institutions

We are delighted to share that the paper “Incrementally Learning to Segment the Lungs: Similarities and Differences Across Institutions”, authored by Joana Vale Sousa, Tania Pereira, and Helder P. Oliveira from INESC TEC, has been accepted at the 25th International Conference on BioInformatics and BioEngineering (BIBE 2025).

Represented by Joana and Helder from INESC TEC, AI4LUNGS was presented in Athens during BIBE 2025, held from November 6th to 8th!


Understanding the Challenge

The paper recognizes that lung segmentation in CT scans is a demanding task due to several factors:

  • Variability in lung shape, size, and tissue patterns, often tied to different respiratory conditions.

  • Differences in imaging acquisition protocols across institutions.

  • Limited diversity in available datasets, combined with inconsistent annotations.

These factors cause machine learning models to lose performance when deployed in real clinical scenarios, where variability is higher than in controlled experimental settings.


Solution? A Continual Learning Approach

To address these limitations, the study explores a Continual Learning approach using Experience-Replay with a random sampling strategy to build a memory buffer. This strategy aims to allow models to continuously expand their understanding of the data relevant to the environment in which they will operate, while preserving previously learned information.
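
For readers curious about the mechanics, here is a minimal Python sketch of an Experience-Replay buffer with random sampling. It is illustrative only, assuming a simple list-based buffer of original samples; the class and method names are ours, not the authors':

```python
import random

class ReplayBuffer:
    """Minimal Experience-Replay buffer with random sampling.

    Illustrative sketch only: the paper stores original samples in its
    buffer, but this class and its method names are not from the
    authors' implementation.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []

    def add_domain(self, domain_samples, n_keep):
        # Randomly keep n_keep samples from a domain once training on it ends.
        kept = random.sample(domain_samples, min(n_keep, len(domain_samples)))
        self.samples.extend(kept)
        # If over capacity, evict uniformly at random across stored domains.
        while len(self.samples) > self.capacity:
            self.samples.pop(random.randrange(len(self.samples)))

    def draw(self, batch_size):
        # Random rehearsal batch, mixed into training on the current domain.
        return random.sample(self.samples, min(batch_size, len(self.samples)))
```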

The experiments were conducted under a Domain-Incremental Learning scenario, using data from four domains and evaluated on a cross-cohort dataset containing both in-domain and out-of-domain samples (a sketch of such a training loop follows the list below). A central focus of the research was to investigate:

  • How many samples from the memory buffer are needed to reduce catastrophic forgetting (CF)—when new knowledge overwrites past knowledge.

  • Which domains most effectively minimize this effect.
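
Building on the buffer sketch above, the hypothetical loop below shows how rehearsal fits into a Domain-Incremental Learning setup: domains are visited in sequence, each training batch is mixed with randomly drawn buffer samples, and a random subset of each finished domain is stored for later rehearsal. The names train_domain_incremental, train_step, domain.batches, and domain.all_samples are assumptions for illustration:

```python
def train_domain_incremental(model, domains, buffer, n_keep, train_step):
    """Hypothetical Domain-Incremental Learning loop with rehearsal.

    `domains` is an ordered list of per-institution datasets; `train_step`
    updates `model` on one mixed batch. All names here are assumptions.
    """
    for domain in domains:
        for batch in domain.batches():
            # Mix current-domain data with randomly drawn buffer samples so
            # earlier domains are rehearsed while the new one is learned.
            rehearsal = buffer.draw(batch_size=len(batch))
            train_step(model, batch + rehearsal)
        # After finishing a domain, store a random subset of its samples
        # for rehearsal during later domains.
        buffer.add_domain(domain.all_samples(), n_keep)
```

In this framing, the paper's two questions map to choices of n_keep (how many buffer samples are retained) and of which domains populate the buffer.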

Because this is the first work to focus solely on lung segmentation under a Continual Learning paradigm, the study used original data for the memory buffer. This helped the authors better understand the relationships between the training domains and how their similarities and differences influence the learning process. The results showed that:

  • A higher number of memory buffer samples improves the model’s ability to reduce catastrophic forgetting.

  • Similar training domains not only prevent forgetting but also enhance overall performance.

  • A dissimilar domain increases diversity and broadens the model’s understanding, but may cause loss of information regarding previous domains if these are not adequately protected.


These conclusions depend, to some extent, on the training data used and the learning order applied. With that in mind, future work will include experimenting with different learning orders and expanding the training datasets to strengthen the robustness of the approach.


Stay updated on AI4LUNGS research developments and progress by following the project on BlueSky, X, and LinkedIn, and by subscribing to our newsletter for the latest insights and advancements.

 
 