

A Simple Method for Weakly Supervised Segmentation



Semantic Image Segmentation

Semantic image segmentation—drawing the contours of an object in an image—is a highly active topic of research, and very relevant to medical imaging. Not only does it help to read and understand radiological scans, it can also be used for the diagnosis, surgery planning and follow-up of many diseases, while offering an enormous potential for personalized medicine.

Segmentation of brain parts

In recent years, deep convolutional neural networks (CNNs) have dominated semantic segmentation problems, both in computer vision and medical imaging, achieving groundbreaking performance, albeit at the cost of huge amounts of annotated data. A dataset is a collection of examples, each containing both an image and its corresponding segmentation (i.e. the ground truth), as seen in Figure 1. With optimization methods such as stochastic gradient descent, the network is trained until its output matches the ground truth as closely as possible. Once trained, the network can be used to predict segmentations on new, unseen images.
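
To make this concrete, below is a minimal training-step sketch in PyTorch. The tiny two-class model, the learning rate, and the tensor shapes are illustrative placeholders, not the architecture or settings used in the actual experiments.

```python
# Minimal sketch: fully supervised training of a segmentation CNN with SGD.
# The model is a deliberately tiny fully convolutional network; real work uses
# much deeper architectures.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),            # 2 classes: background / foreground
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()               # pixel-wise cross-entropy

def train_step(image, ground_truth):
    """image: (B, 1, H, W) scan; ground_truth: (B, H, W) pixel labels."""
    optimizer.zero_grad()
    logits = model(image)                       # (B, 2, H, W) class scores
    loss = criterion(logits, ground_truth)      # compare prediction to the full annotation
    loss.backward()                             # gradients via backpropagation
    optimizer.step()                            # SGD update
    return loss.item()
```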

Labelling Data Is Expensive

In the medical field, only trained medical doctors can confidently annotate images for image segmentation, making the process very expensive. Furthermore, the complexity and size of the images can slow annotation even further, taking up to several days for complex brain scans. Therefore, more efficient methods are needed.

An attractive line of research consists in using incomplete annotations (weak annotations) instead of full pixel-wise annotations. Drawing a single point inside an object is orders of magnitude faster and simpler than drawing its exact contour. Those labels cannot, however, be used directly to train a neural network, as they do not carry enough information for the network to find an optimal, or even satisfactory, solution.
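
As a sketch of what this looks like in practice, the loss below is the same pixel-wise cross-entropy restricted to the handful of annotated pixels; the value -1 marking unlabelled pixels is an illustrative convention, not taken from the paper. Every other pixel contributes nothing to the loss, which is why such labels alone under-constrain the network.

```python
# Minimal sketch: cross-entropy computed only on the few weakly labelled pixels.
import torch
import torch.nn as nn

IGNORE = -1                                     # convention for "no annotation" (illustrative)
partial_ce = nn.CrossEntropyLoss(ignore_index=IGNORE)

def weak_loss(logits, weak_labels):
    """logits: (B, 2, H, W) scores; weak_labels: (B, H, W), mostly IGNORE."""
    return partial_ce(logits, weak_labels)

# Example weak annotation: unknown everywhere except a small background patch
# and a single dot of foreground somewhere inside the object.
weak = torch.full((1, 256, 256), IGNORE, dtype=torch.long)
weak[0, :8, :8] = 0                             # a few pixels known to be background
weak[0, 120:123, 130:133] = 1                   # a small dot known to be foreground
```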

Difference between full and weak annotations

Figure 1: Examples of full and weak annotations. Pixels labelled as background are shown in red, and pixels labelled as foreground (what is to be segmented) in green. The absence of color in the weak annotations means that no information is available for those pixels during training.

Our Contribution: Enforcing Size Information with a Penalty

Nevertheless, additional information about the problem exists that is not given in the form of direct annotations. For instance, we know the anatomical properties of the organs that we want to segment, such as their approximate size:

Min and max size for a heart

Figure 2: By knowing the approximate size, we can guide the network toward anatomically feasible solutions.

We can therefore amend the training procedure, forcing the network not only to correctly predict the few labelled pixels, but also to keep the size of the predicted segmentation within the a priori known range of anatomically plausible values. This is known as a constrained optimization problem, a well-studied field in general (with Lagrangian methods), but one seldom applied to neural networks: the sheer number of network parameters, the size of the datasets, and the need to alternate between different training modes make it impractical at best.

Our key contribution was to include these constraints in the training process as a direct, naive penalty. The network thus has to pay a price each time the predicted segmentation size is out of bounds (the bigger the mistake, the bigger the penalty), with no consequence when the bounds are respected.
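
Below is one possible sketch of such a penalty in PyTorch, added on top of the partial cross-entropy above: the predicted size is the sum of the softmax foreground probabilities, which keeps the term differentiable. The bound values and the weight in the final comment are hypothetical; the released code linked at the end is the authoritative implementation.

```python
# Minimal sketch of a quadratic size penalty: the network pays a price whenever
# the soft size of the predicted foreground falls outside the plausible range [a, b].
import torch
import torch.nn.functional as F

def size_penalty(logits, a, b):
    """logits: (B, 2, H, W) class scores; a, b: lower/upper size bounds (in pixels)."""
    probs = F.softmax(logits, dim=1)[:, 1]      # foreground probability map, (B, H, W)
    size = probs.sum(dim=(1, 2))                # soft size of the predicted segmentation
    below = F.relu(a - size)                    # how far below the lower bound
    above = F.relu(size - b)                    # how far above the upper bound
    return ((below ** 2) + (above ** 2)).mean() # zero whenever a <= size <= b

# Combined training objective (weights and bounds are purely illustrative):
# loss = weak_loss(logits, weak_labels) + lam * size_penalty(logits, a=60, b=5000)
```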

Results and Conclusion

Surprisingly, and despite being contrary to any textbook on constrained optimization, our naive penalty proved to be much more efficient, faster and more stable for deep networks than the more complex Lagrangian methods. Taking the size information into account during training—basically adding a single value—allowed us to almost close the gap between full and weak supervision in the task of left-ventricle segmentation, while using only 0.01% of the annotated pixels.

Segmentation resulting from our weakly supervised method

Figure 3: Visual comparison of the results. From left to right: ground-truth label, training with full labels, naive training with weak labels, and our method.

Additional Information

More technical details can be found in the full paper, describing different settings across several datasets: Kervadec, H.; Dolz, J.; Tang, M.; Granger, E.; Boykov, Y.; Ben Ayed, I. 2019. "Constrained-CNN Losses for Weakly Supervised Segmentation." Medical Image Analysis, Vol. 54, pp. 88–99.

The code, available online, is free for reuse and modification: https://github.com/liviaets/sizeloss_wss.

About the authors
Hoel Kervadec earned his PhD at ÉTS Montréal, developing data-efficient methods for semantic image segmentation. He is now working at Erasmus MC in Rotterdam (The Netherlands).
Jose Dolz is a professor in the LOGTI department at ÉTS Montréal. His research interests focus on weakly and semi-supervised learning methods, computer vision and medical image processing.
Meng Tang is a research scientist at Facebook Reality Lab. His main research interests are computer vision and optimization.
Eric Granger is a professor in the Systems Engineering Department at ÉTS. His research focuses on machine learning, pattern recognition, computer vision, information fusion, and adaptive and intelligent systems, with applications in biometrics, affective computing, medical imaging, and video surveillance.
Yuri Boykov is a professor at the University of Waterloo. His research focuses on computer vision and biomedical image analysis.
Ismail Ben Ayed is a professor in the Systems Engineering Department at ÉTS. His work focuses on the design of efficient algorithms for processing, analysis and interpretation of medical images. He has a strong interest in modern machine learning algorithms and optimization, in order to address the massive data issues in 3D or 4D imaging.