UCLIC Research Seminar Series
The U-net has become the predominant choice for medical image segmentation tasks, owing to its strong performance across many medical domains. In this talk, I will introduce the U-net and present three projects from DeepMind Health Research that use it to address different challenges. The first project, a collaboration with University College London Hospital, tackles the challenging task of precisely segmenting radiosensitive head and neck anatomy in CT scans, an essential input for radiotherapy planning. The second project, together with Moorfields Eye Hospital, developed a system that analyses 3D OCT (optical coherence tomography) eye scans to provide referral decisions for patients, with performance on par with world experts with over 20 years' experience. Finally, I will focus on the third project, which deals with the segmentation of ambiguous images. This is of particular relevance in medical imaging, where ambiguities often cannot be resolved from the image context alone. We propose a combination of a U-net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible segmentation map hypotheses for a given ambiguous image. We show that each hypothesis provides a globally consistent segmentation, and that the probabilities of these hypotheses are well calibrated.
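The core idea of the third project can be illustrated with a toy sketch: a latent code is sampled once per image and injected into the network's output stage, so each sample yields a complete, globally consistent segmentation map rather than independent per-pixel noise. The sketch below is a deliberately simplified stand-in (plain NumPy, hypothetical functions `unet_features` and `segment`), not the actual DeepMind architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def unet_features(image):
    """Stand-in for a U-net's per-pixel feature map (toy: zero-centred intensities)."""
    return image - image.mean()

def sample_latent(rng, dim=3):
    """Sample a latent code; in the real model a prior net predicts its distribution
    from the image, here we simply use a standard normal."""
    return rng.standard_normal(dim)

def segment(image, z, mixing_weights):
    """Combine the latent code with the feature map (a 1x1-conv-like mixing)
    and threshold to a binary segmentation map."""
    feats = unet_features(image)
    logits = feats * (1.0 + 0.1 * (z @ mixing_weights))
    return (logits > 0).astype(np.uint8)

image = rng.random((8, 8))
mixing_weights = rng.standard_normal(3)

# Drawing new latents gives an unlimited stream of full segmentation hypotheses.
hypotheses = [segment(image, sample_latent(rng), mixing_weights) for _ in range(5)]
```

Because the latent code modulates the whole map at once, each sample flips or keeps entire regions together, which is the "globally consistent" property the abstract refers to, in contrast to adding independent noise at every pixel.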
Bernardino Romera-Paredes is a research scientist at DeepMind. He was previously a postdoctoral research fellow in the Torr Vision Group at the University of Oxford. He received his Ph.D. degree from University College London in 2014, supervised by Prof. Massimiliano Pontil and Prof. Nadia Berthouze, and also did an internship at Microsoft Research. He has published in top-tier machine-learning conferences such as the Conference on Neural Information Processing Systems (NIPS), the International Conference on Machine Learning (ICML), and the International Conference on Computer Vision (ICCV), as well as in journals such as the Journal of Machine Learning Research (JMLR). His research focuses on structured prediction in computer vision, such as semantic and instance segmentation, and its application to the medical domain.