Inferring PET from MRI with pix2pix

Merel M. Jung, Bram van den Berg, Eric Postma, Willem Huijbers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Medical image-to-image translation, using conditional Generative Adversarial Networks (cGANs), could be beneficial for clinical decisions when additional diagnostic scans are requested. The recently proposed pix2pix architecture provides an effective image-to-image translation method to study such medical use of cGANs. This study addresses the question to what extent pix2pix can translate a magnetic resonance imaging (MRI) scan of a patient into an estimate of a positron emission tomography (PET) scan of the same patient. We perform two image-to-image translation experiments using paired MRI and PET brain scans of Alzheimer's disease patients and healthy controls. In experiment 1, we train using data sliced in one dimension (the axial plane). In experiment 2, we train using augmented data sliced in all three dimensions (axial, sagittal and coronal). After training, the synthetically generated PET scans are compared to the actual ones. The results suggest that PET scans can be sufficiently and reliably estimated from MRI, with similar results using axial and augmented training. We conclude that image-to-image translation is a promising and potentially cost-saving method for making informed use of expensive diagnostic technology.
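The two training regimes described in the abstract differ only in how the 3D brain volumes are cut into 2D training images: axial slices only (experiment 1) versus slices along all three anatomical planes (experiment 2). A minimal sketch of that slicing step, assuming NumPy arrays and a hypothetical axis convention that is not specified in the paper:

```python
import numpy as np

def slice_volume(volume, plane):
    """Slice a 3D volume into a list of 2D images along one anatomical plane.

    Assumed axis convention (illustrative only; the real mapping depends
    on the scan's orientation): axis 0 = sagittal, axis 1 = coronal,
    axis 2 = axial.
    """
    axis = {"sagittal": 0, "coronal": 1, "axial": 2}[plane]
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

# Toy volume standing in for a co-registered MRI scan.
mri = np.zeros((64, 64, 32))

# Experiment 1: axial slices only.
axial_only = slice_volume(mri, "axial")

# Experiment 2: augmented data from all three planes.
augmented = (slice_volume(mri, "axial")
             + slice_volume(mri, "sagittal")
             + slice_volume(mri, "coronal"))
```

Each resulting 2D MRI slice would then be paired with the corresponding PET slice to form the input/target pairs that pix2pix trains on.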
Original language: English
Title of host publication: Inferring PET from MRI with pix2pix
Publication status: Published - 2018
Event: Benelux Conference on Artificial Intelligence - Den Bosch, Netherlands
Duration: 8 Nov 2018 - 9 Nov 2018
Conference number: 30


Conference: Benelux Conference on Artificial Intelligence
Abbreviated title: BNAIC2018
City: Den Bosch

