Motion correction for abdominal imaging

Respiratory motion correction for abdominal PET-MRI studies

PET and MRI are two powerful imaging technologies, characterized by high sensitivity and the ability to provide superior anatomic detail, respectively, which makes their combination potentially ideal for evaluating the upper abdomen. However, PET requires long acquisition times, during which data are acquired from moving organs, which can result in image blurring. MRI, on the other hand, especially standard DCE-MRI, can scan the chosen field of view in a shorter time, but requires the patient to cooperate with respiratory instructions and to suspend respiration for the duration of breath-hold sequences, usually in the range of 14–20 s. Moreover, even in patients with adequate breath-hold ability, the quality and diagnostic information of DCE-MRI also depend on the patient's hemodynamics and on the timing of contrast agent injection relative to data acquisition. These variables explain the occurrence of respiratory artifacts and erroneously timed contrast enhancement phases in DCE-MRI.

We presented and evaluated in vivo a comprehensive approach for self-gated MR motion modeling applied to concurrent respiratory motion compensation of PET and DCE-MRI data acquired simultaneously in an integrated PET/MR system.
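
The general workflow behind this kind of self-gated motion compensation can be sketched as follows: a respiratory self-gating signal derived from the MR data is used to bin the acquisition into respiratory phases, deformation fields are estimated by registering each phase to a reference phase, and those fields are then used to warp the gated data back to the reference before combining them. The snippet below is only a generic illustration of that idea (the function names, binning strategy, and interpolation are assumptions), not the implementation described in Fuin 2018.

```python
# Hedged sketch of a generic self-gated respiratory motion-correction workflow
# (illustrative only; not the implementation described in Fuin 2018).
import numpy as np
from scipy.ndimage import map_coordinates

def bin_by_respiratory_phase(self_gating_signal, n_bins=6):
    """Assign each time point to a respiratory bin based on the self-gating amplitude."""
    edges = np.quantile(self_gating_signal, np.linspace(0.0, 1.0, n_bins + 1))
    return np.clip(np.digitize(self_gating_signal, edges[1:-1]), 0, n_bins - 1)

def warp_to_reference(volume, displacement_field):
    """Apply a dense displacement field of shape (3, z, y, x) to a 3D volume."""
    grid = np.indices(volume.shape).astype(np.float32)
    return map_coordinates(volume, grid + displacement_field, order=1, mode="nearest")

def motion_corrected_average(gated_pet_volumes, fields_to_reference):
    """Warp each respiratory-gated PET volume to the reference phase and average,
    so counts from all phases contribute without respiratory blurring."""
    warped = [warp_to_reference(v, f) for v, f in zip(gated_pet_volumes, fields_to_reference)]
    return np.mean(warped, axis=0)
```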

Fully registered, motion-corrected PET images and diagnostic DCE-MR images were obtained with negligible acquisition time prolongation compared with standard breath-hold techniques. Both the MR and the PET image quality and tracer uptake quantification were improved when compared with conventional methods (Fuin 2018).

Comparison of PET images reconstructed before and after motion correction using motion vector fields obtained from 1 or 6 minutes of MR data

This approach was subsequently evaluated clinically in collaboration with Dr. Onofrio Catalano to demonstrate that motion-corrected PET/MRI produced better PET images and reduced the spatial mismatch between the two modalities (Catalano 2018).

Motion correction for cardiac PET-MRI studies

We proposed an unsupervised deep learning-based approach (CarMEN) for deformable three-dimensional cardiac MR image registration. This method learns a motion model that balances image similarity and motion estimation accuracy. We validated our approach comprehensively on three datasets and demonstrated higher motion estimation and registration accuracy relative to several popular state-of-the-art image registration methods (Morales 2019).
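
The core idea behind unsupervised deformable registration networks of this type can be illustrated with a short sketch: a network (not shown here) predicts a dense displacement field, a differentiable spatial transformer warps the moving image with that field, and the training loss combines an image-similarity term with a smoothness penalty on the field. The snippet below is a generic PyTorch illustration under assumed conventions (normalized displacement units, MSE similarity), not the CarMEN architecture or loss.

```python
# Generic sketch of an unsupervised deformable-registration loss (illustrative only;
# not the CarMEN implementation). Displacements are assumed to be in normalized units.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a volume (N, 1, D, H, W) with a displacement field (N, 3, D, H, W)."""
    n, _, d, h, w = moving.shape
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d, device=moving.device),
        torch.linspace(-1, 1, h, device=moving.device),
        torch.linspace(-1, 1, w, device=moving.device),
        indexing="ij")
    # grid_sample expects the last dimension ordered as (x, y, z); flow channels are
    # assumed to follow the same (x, y, z) ordering.
    grid = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
    return F.grid_sample(moving, grid + flow.permute(0, 2, 3, 4, 1), align_corners=True)

def registration_loss(fixed, moving, flow, smooth_weight=0.01):
    """Image-similarity term plus a finite-difference smoothness penalty on the flow."""
    similarity = F.mse_loss(warp(moving, flow), fixed)
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return similarity + smooth_weight * (dz + dy + dx)
```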

When used for PET motion correction, CarMEN led to an increase in the contrast-to-noise ratio in the simulated perfusion lesions (Morales et al, ISMRM/SNMMI co-provided PET/MRI Workshop, New York 2019). 

PET-based motion correction

When MR-derived motion estimates are unavailable, either in the gaps between sequences or when a given MR sequence is not amenable to motion estimation, motion can also be derived directly from the PET images themselves. Motion estimates from both the MR and the PET can be unified to provide continuous estimates of head motion throughout the duration of the scan, leveraging the higher resolution and more accurate MR-based estimates when they are available. (Levine 2017 Abstract)
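
One simple way to think about combining the two sources of motion information is as two time series of rigid-body motion parameters on a common time axis, where the MR-derived estimates are used whenever they exist and the PET-derived estimates fill the gaps. The sketch below illustrates that bookkeeping under assumed conventions (6-parameter rigid transforms sampled at common time points, simple temporal smoothing); it is not the method from the cited abstract.

```python
# Illustrative fusion of MR- and PET-derived rigid head-motion estimates into one
# continuous track (representation and smoothing are assumptions, not Levine et al.).
import numpy as np

def fuse_motion_tracks(mr_params, pet_params, mr_valid, smooth_len=5):
    """mr_params, pet_params: (T, 6) rigid-body parameters (3 translations, 3 rotations)
    sampled at T common time points; mr_valid: (T,) bool, True where an MR-derived
    estimate exists. MR estimates are preferred; PET-derived estimates fill the gaps."""
    fused = np.where(mr_valid[:, None], mr_params, pet_params)
    # Light temporal smoothing of each parameter to suppress estimation noise.
    kernel = np.ones(smooth_len) / smooth_len
    return np.apply_along_axis(lambda p: np.convolve(p, kernel, mode="same"), 0, fused)
```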

Motion correction for brain PET-MRI studies

MR-based motion correction for brain studies

Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired high-temporal resolution MR data can be used for motion tracking. We proposed a data processing and rigid-body motion correction algorithm for the MR-compatible BrainPET prototype scanner and performed proof-of-principle phantom and human studies (Catana 2011).
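
At a high level, using MR-derived rigid transforms for PET motion correction can be illustrated with a simple frame-based scheme: the PET data are divided into short frames, each reconstructed frame is resampled into a common reference pose using the rigid transform measured for that frame, and the realigned frames are combined. The published method operates on the raw data rather than on reconstructed frames, so the sketch below is only a simplified post-reconstruction illustration.

```python
# Simplified post-reconstruction illustration of rigid PET motion correction using
# externally measured (e.g. MR-derived) transforms; not the BrainPET implementation.
import numpy as np
from scipy.ndimage import affine_transform

def realign_and_sum(frames, transforms):
    """frames: list of 3D PET frame volumes; transforms: list of 4x4 matrices mapping
    reference voxel coordinates to each frame's voxel coordinates."""
    realigned = []
    for volume, T in zip(frames, transforms):
        # affine_transform samples the input at (matrix @ output_coords + offset).
        realigned.append(affine_transform(volume, T[:3, :3], offset=T[:3, 3], order=1))
    return np.sum(realigned, axis=0)
```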

FDG-BrainPET images before (left) and after (right) MR-assisted motion correction

This method was subsequently used in research studies aimed at assessing the role of dopamine D1 signaling in working memory (Roffman 2016) and the role of central dopamine in human bonding (Atzil 2017). More recently, we showed that the variability in the PET estimation of the cerebral metabolic rate of glucose utilization is reduced after MR-assisted motion correction in Alzheimer’s disease patients (Chen 2018).

Transmission imaging for attenuation correction

Transmission imaging for validation of MR-based attenuation correction methods

While transmission-based techniques are still considered the gold standard for PET attenuation correction, traditional rotating transmission sources have not been integrated into PET/MRI scanners because of the engineering challenges involved and the desire to reduce radiation exposure. As such techniques would be valuable for improving and validating MR-based approaches, several stationary transmission sources have been suggested as alternatives.

In the case of scanners without time-of-flight capabilities (e.g. Biograph mMR), we showed that a single torus source filled with [18F]FDG allows the acquisition of highly accurate transmission images before radiotracer administration (Bowen et al 2016).

Attenuation correction in presence of metal implants

Attenuation correction in the presence of metallic implants

The numerous foreign objects (e.g. dental implants, surgical clips and wires, orthopedic screws and plates, prosthetic devices, etc.) that can be present in the subject lead to susceptibility artifacts in the MR images that propagate as signal voids in the corresponding attenuation maps. 

We developed a method to estimate the location, shape, and linear attenuation coefficient of the implant directly from the emission data: the implant PET-based attenuation map completion (IPAC) method performs a joint reconstruction of radioactivity and attenuation (Fuin et al 2017).
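
The general principle behind this kind of joint estimation can be illustrated with a toy example that alternates between updating the activity with the attenuation held fixed and updating the attenuation with the activity held fixed. The snippet below is only a schematic illustration of that alternating structure (with an arbitrary toy system matrix, an MLEM-like activity step, and a gradient step for the attenuation); it is not the IPAC algorithm, which additionally constrains the update to the implant region and keeps the MR-derived attenuation values elsewhere.

```python
# Toy illustration of alternating activity/attenuation estimation from emission data;
# NOT the IPAC algorithm, just the generic alternating structure such methods share.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_lor = 32, 64
P = rng.random((n_lor, n_vox)) / n_vox          # toy system (projection) matrix
lam_true = rng.random(n_vox)                    # "true" activity
mu_true = 0.5 * rng.random(n_vox)               # "true" attenuation
y = (P @ lam_true) * np.exp(-(P @ mu_true))     # noiseless attenuated emission data

lam, mu = np.ones(n_vox), 0.25 * np.ones(n_vox)
for _ in range(500):
    att = np.exp(-(P @ mu))
    # Activity update: MLEM-like multiplicative step with the attenuation held fixed.
    lam *= (P.T @ (y / np.maximum(P @ lam, 1e-9))) / np.maximum(P.T @ att, 1e-9)
    # Attenuation update: gradient step on a least-squares data term, activity fixed.
    resid = (P @ lam) * np.exp(-(P @ mu)) - y
    grad = -(P.T @ (resid * (P @ lam) * np.exp(-(P @ mu))))
    mu = np.clip(mu - 0.5 * grad, 0.0, None)

print("final data residual:", np.linalg.norm((P @ lam) * np.exp(-(P @ mu)) - y))
```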

MR-, CT- and IPAC-based attenuation map estimation

Attenuation correction for pelvis PET imaging

Attenuation correction for pelvis PET-MR imaging

The pelvis is another area where MR-based approaches that do not properly account for bone attenuation can lead to substantial bias in PET data quantification, and deep learning approaches have been proposed to minimize this bias starting from MR data acquired using specialized MR sequences. Instead, we implemented a deep learning-based approach to generate pseudo-CT maps exclusively from the Dixon-VIBE MR images routinely acquired for attenuation correction on the Biograph mMR scanner. Using our 2D network that takes four contrasts as inputs (water, fat, in-phase, and out-of-phase Dixon-VIBE images), the mean absolute relative change in PET values in the pelvis area was 2.36% ± 3.15% (Torrado-Carvajal et al 2019).
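
For context, the figure quoted above is a voxel-wise error metric. A typical way to compute such a mean absolute relative change between PET images reconstructed with the pseudo-CT-derived and the reference CT-derived attenuation maps is sketched below; the masking and exact definition used in the paper are assumptions here.

```python
# Sketch of a mean absolute relative-change metric between a PET image reconstructed
# with a pseudo-CT attenuation map and one reconstructed with the reference CT map.
# The masking and exact definition used in Torrado-Carvajal et al. 2019 may differ.
import numpy as np

def mean_absolute_relative_change(pet_pseudo, pet_reference, mask, eps=1e-6):
    rel = (pet_pseudo[mask] - pet_reference[mask]) / (pet_reference[mask] + eps)
    return 100.0 * np.abs(rel).mean(), 100.0 * np.abs(rel).std()
```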

One remaining challenge in using convolutional neural networks to synthesize CT from MR images of the pelvis is the presence of air pockets (i.e., digestive tract gas) in this area. As CT and MR images are acquired on separate scanners at different times, the locations and sizes of these air pockets can change between the two scans, which can lead to errors in both the MR-CT co-registration and image synthesis tasks. We trained and evaluated CNNs to automatically segment air pockets from MR CAIPIRINHA-accelerated Dixon images and assessed the quantitative impact on the reconstructed PET images (Sari et al, submitted to JNM).

Attenuation correction for brain PET imaging

Attenuation correction for brain PET-MR imaging studies 

Identifying bone tissue is particularly relevant for accurate attenuation correction in neurological PET studies, as this tissue class has the highest linear attenuation coefficient and inaccuracies in its estimation can introduce large biases in the adjacent cortical structures. Because bone tissue and air-filled cavities are difficult to distinguish using conventional MRI pulse sequences, novel sequences have been developed to address this challenge. We were among the first to propose using ultra-short echo time (UTE) MR sequences (optimized for imaging tissues with very short T2 relaxation times) for generating segmented head attenuation maps. In a first approach, the head was segmented into three compartments (i.e. bone, soft tissue, and air cavities) based on the relationship between the two echoes on a voxel-by-voxel basis (Catana 2010).
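
A common way to capture the relationship between the two UTE echoes is an R2*-like map computed from the two echo intensities, followed by simple thresholding into bone, soft tissue, and air. The sketch below is a generic illustration of that idea with assumed thresholds and logic, not the published algorithm.

```python
# Generic illustration of dual-echo UTE-based head segmentation: an R2*-like map from
# the two echoes, then simple thresholding into air / soft tissue / bone. Thresholds and
# logic are assumptions for illustration, not the algorithm of Catana 2010.
import numpy as np

def segment_dual_echo_ute(echo1, echo2, te1, te2, air_thresh=0.05, r2s_bone=0.5):
    """echo1/echo2: magnitude images at the short and long echo times te1/te2 (ms)."""
    eps = 1e-6
    r2_star = np.log((echo1 + eps) / (echo2 + eps)) / (te2 - te1)   # fast decay -> bone
    low_signal = echo1 < air_thresh * echo1.max()                   # little signal -> air
    labels = np.ones_like(echo1, dtype=np.uint8)                    # 1 = soft tissue
    labels[r2_star > r2s_bone] = 2                                   # 2 = bone
    labels[low_signal] = 0                                           # 0 = air
    return labels
```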

Subsequently, combining the dual-echo UTE and T1-weighted MR data using probabilistic atlases allowed the generation of substantially improved segmented attenuation maps (Poynton, Chen et al 2014). However, one of the main limitations of early-generation UTE-based methods was that only three compartments (i.e. soft tissue, bone, and air cavities) could be identified.

Segmented head attenuation maps derived from CT (upper row) and T1w&DUTE MR data (lower row) (Poynton, Chen et al. 2014)

We next implemented an SPM-based method for generating head attenuation maps from a single morphological MRI dataset obtained with the MPRAGE sequence routinely collected in research neurological studies (Izquierdo-Garcia et al 2014). After intensity normalization, the MR images are segmented into six tissue classes using the “New Segment” SPM tool and then registered to a previously created template using a diffeomorphic non-rigid image registration algorithm (SPM DARTEL). The inverse transformation is applied to obtain the pseudo-CT images in the subject space. Using this approach, the voxel- and region-based quantification errors compared to the scaled-CT method were 3.87±5.0% and 2.74±2.28%, respectively. The method was also demonstrated to work in patients with modified anatomy (e.g. glioblastoma patients post-surgery), and a multi-center evaluation showed that it, along with several methods developed by other groups around the world, is quantitatively accurate to a degree “smaller than the quantification reproducibility in PET imaging” (Ladefoged 2017).

Continuous-valued attenuation maps were also obtained from dual-echo UTE and T1-weighted MR data using probabilistic atlases (Chen et al 2017). We also assessed the repeatability of the atlas- and UTE-based methods. For example, comparable attenuation maps and PET volumes were obtained at three visits using the probabilistic atlas method (Chen et al 2017). Similar results were reported for the SPM-based method (Izquierdo-Garcia et al 2018).

Software

  • Masamune (includes several methods for generating head attenuation maps from MR or PET data)
  • Stand-alone PseudoCT for generating head attenuation maps for BrainPET and Biograph mMR studies

Development of the Human Dynamic NeuroChemical Connectome (HDNCC) Scanner

The goal of this project is to design and build a 7-T MR-compatible PET camera with >10x improved sensitivity to enable dynamic PET imaging of brain neurotransmission, neuromodulation, and other dynamic molecular events with unprecedented temporal resolution and beyond state-of-the-art spatial resolution. This will allow us to merge the dynamic functional capabilities of both PET and MRI methods, providing investigators the unique capacity to perform experiments linking structure with electrical (through its surrogate hemodynamics) and neurochemical function on time scales relevant for understanding human cognition. 

We will address the hardware and software challenges in assembling 7-T MR-compatible PET technology purpose-built to extend the temporal window of brain PET imaging down to just a few seconds. Funding for demonstrating proof-of-concept (i.e., developing the PET detectors and building a partial scanner) was provided by the BRAIN Initiative NIH-NIBIB&NINDS (1R01-EB026995-01; PI: Catana). We proposed to address the two main factors that determine PET sensitivity: geometric efficiency (to maximize the probability that photons reach the detectors) and detection efficiency (to detect most of the incident photons).

Specifically, we will use a non-conventional spherical geometry to increase the solid angle coverage to ~71%. This change will translate into ~25% sensitivity for detecting true coincidences. Additionally, to decode the scintillator blocks we will design high-performance readout electronics with depth-of-interaction and time-of-flight (TOF, to improve the count rate performance) capabilities. The TOF information will also act as a virtual sensitivity amplifier, so the effective sensitivity could be as high as 50%, a dramatic improvement compared to current values (i.e. 1-2%).
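
For a rough sense of where numbers like these come from: the geometric efficiency is essentially the fraction of the full 4π solid angle covered by detectors, and for coincidence detection the single-photon detection efficiency enters roughly squared because both annihilation photons must be detected. The back-of-the-envelope sketch below only reproduces the ~25% figure under loud simplifying assumptions (a point source at the center, an assumed detection efficiency of ~60% chosen purely for illustration, and the assumption that whenever one photon intersects the detector the opposite photon does too); it is not the simulation used for the design.

```python
# Back-of-the-envelope sensitivity estimate for a partial-sphere PET geometry.
# Assumptions (illustrative only): point source at the center, uniform detector response,
# and whenever one annihilation photon intersects the detector the opposite one does too.
import numpy as np

def cap_solid_angle_fraction(half_opening_deg):
    """Fraction of 4*pi subtended by a spherical cap with the given half-opening angle."""
    return (1.0 - np.cos(np.radians(half_opening_deg))) / 2.0

# Pure geometry: a cap with a ~115-degree half-opening angle covers ~71% of 4*pi.
print(f"115-degree cap coverage: {cap_solid_angle_fraction(115):.0%}")

geometric_fraction = 0.71        # quoted solid-angle coverage of the proposed geometry
detection_efficiency = 0.60      # assumed single-photon detection efficiency (not quoted)
# Both 511 keV photons must be detected, so the detection efficiency enters squared.
sensitivity = geometric_fraction * detection_efficiency ** 2
print(f"approximate true-coincidence sensitivity: {sensitivity:.0%}")   # ~26%
```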

Our preliminary results to date suggest that:

  • very high sensitivity will indeed be obtained using the proposed partial-sphere PET geometry;
  • the photon detectors and associated electronics show no mutual interference with the 7-T system;
  • the 7-T main magnetic field will not be significantly perturbed by the PET scintillator arrays;
  • a high-performance transmit-receive 7-T MR array can be integrated into the PET gantry.

The recently awarded BRAIN Initiative grant U01EB029826 (PI: Catana) will provide funding to:

  1. Build the HSTR-BrainPET using 7-T MR-compatible technology to enable interference-free simultaneous data acquisition.
  2. Implement PET data acquisition and image reconstruction software for the spherical geometry.
  3. Apply the integrated HSTR-BrainPET & 7-T MRI scanner to the dynamic assessment of neurochemical events and brain activation in healthy subjects.

For the hardware/software developments proposed in this project, we have expanded our longstanding partnership with Siemens Healthineers by including experts from University of Tubingen (Germany), Hamamatsu (Japan), Complutense University of Madrid (Spain), and University of Texas at Arlington.

Siemens BrainPET

The BrainPET prototype (Siemens Healthineers) is a head insert designed to fit inside the 60 cm bore of the 3T Siemens TIM Trio whole-body MRI scanner. The BrainPET gantry is made up of 32 detector cassettes, each consisting of six detector blocks. Each detector block consists of a 12×12 array of lutetium oxyorthosilicate (LSO) crystals (2.5×2.5×20 mm³) read out by a 3×3 array of Hamamatsu avalanche photodiodes (APDs, 5×5 mm²). The gantry physical inner and outer diameters are 35 and 60 cm, respectively. The transaxial and axial fields-of-view are 32 cm and 19.125 cm, respectively.
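
The stated configuration implies the total number of crystals in the gantry; a quick arithmetic check:

```python
# Total crystal count implied by the stated BrainPET configuration.
cassettes = 32
blocks_per_cassette = 6
crystals_per_block = 12 * 12                                   # 12 x 12 LSO array per block
print(cassettes * blocks_per_cassette * crystals_per_block)    # 27648 crystals in total
```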

After the BrainPET was installed at the Martinos Center in May 2008, we worked very closely with the Siemens engineers to optimize its performance and improve the image quality. We have been using the BrainPET in a myriad of studies, ranging from investigating the mutual interference between the two devices and characterizing the performance of the PET camera, to developing methods that use the information obtained from one device to improve the other modality, to performing proof-of-principle studies in small animals, non-human primates, and humans.

Relevant publications