Assess the Amount of Subject Motion During the MRI Scans

There are currently no widely available methods for directly assessing in-scanner motion during acquisition of neuroanatomical sequences. We propose to develop a method for measuring in-scanner motion via analysis of video obtained from an in-scanner eye tracker. The novel feature of this method is that it does not require any sensors to be placed on the participant; once the eye tracker system is installed it is relatively trivial to collect the raw data required to assess motion.

Video signals were obtained at a high temporal frequency, with a sampling rate of 250 Hz used for all experiments. Participants were imaged with whole-brain T1-weighted MRI on a Siemens Prisma scanner equipped with an in-bore eye-tracker system at the NYU Center for Brain Imaging, New York.

The primary challenge was to extract an accurate estimate of head motion from the video data. To obtain a reference motion signal, we turned to resting-state functional magnetic resonance imaging (fMRI), a technique that measures brain activity by detecting the changes in blood oxygenation and blood flow that accompany neural activity. Standard fMRI preprocessing characterizes the subject's head position in each volume with six rigid-body transformation parameters: three translation parameters coding movement along the X, Y, and Z axes, and three rotation parameters coding rotation about each of those axes. As illustrated in Figure 3, in-scanner movements appear as sharp spikes in the corresponding parameter(s), with amplitude proportional to the magnitude of the movement.
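The six rigid-body parameters are commonly summarized per volume as framewise displacement (FD), which makes movement spikes easy to detect. The sketch below is our illustration, not part of the described pipeline; it assumes translations in mm, rotations in radians, and the common convention of converting rotations to arc length on a 50 mm sphere.

```python
import numpy as np

def framewise_displacement(params, head_radius_mm=50.0):
    """Collapse six rigid-body parameters per volume into one FD trace.

    params : (n_volumes, 6) array with columns [tx, ty, tz, rx, ry, rz];
    translations in mm, rotations in radians. Rotations are converted to
    mm of arc on a sphere of the given radius before summing.
    """
    params = np.asarray(params, dtype=float)
    diffs = np.abs(np.diff(params, axis=0))   # volume-to-volume change
    diffs[:, 3:] *= head_radius_mm            # radians -> mm of arc
    fd = diffs.sum(axis=1)
    return np.concatenate([[0.0], fd])        # FD of the first volume is 0

# A sudden 2 mm translation between volumes 2 and 3 shows up as a spike:
motion = np.zeros((5, 6))
motion[3:, 0] = 2.0
fd = framewise_displacement(motion)
```

A spike in `fd` at volume 3 (and nowhere else) is exactly the signature shown in Figure 3.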


We employed a convolutional neural network (CNN) model for our classification task. The model achieved 98.3% accuracy on the positive (i.e., with-motion) class and 96.4% accuracy on the negative (i.e., without-motion) class. Table 3 presents the model's performance on each individual subject.
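Reporting accuracy separately per class, as above, amounts to reporting sensitivity (with-motion frames caught) and specificity (without-motion frames kept). A minimal sketch of that computation, with a hypothetical helper name and toy labels of our own:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred):
    """Accuracy on the positive and negative classes separately.

    y_true, y_pred : binary labels (1 = with-motion, 0 = without-motion).
    Returns (positive-class accuracy, negative-class accuracy),
    i.e. sensitivity and specificity.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    pos_acc = y_pred[y_true].mean()      # fraction of motion frames detected
    neg_acc = (~y_pred[~y_true]).mean()  # fraction of still frames kept
    return pos_acc, neg_acc

# Toy example: 3 motion frames (2 caught), 2 still frames (both kept).
pos_acc, neg_acc = per_class_accuracy([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```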


Code repository: https://github.com/hyuan9310/MRI_image.git

Software Package: Coming Soon

Reduce Image Artifacts Induced by In-scanner Motions

Although deep learning has attracted a great amount of interest in recent years and has been widely explored in medical applications, few studies address the correction of motion artifacts in MRI scans. The difficulty is twofold: 1) MRI images are reconstructed through a complex electronic and mechanical process involving Fourier and inverse Fourier transforms, so the resulting motion artifacts (e.g., blurring) cannot be effectively modeled with traditional techniques such as convolution kernels. 2) The success of a deep learning model rests on the availability of a large quantity of training data, and it is infeasible, both financially and in labor, to collect a large corpus of MRI images corrupted by motion artifacts.

In our approach, we address these two challenges by directly modeling the k-space representation of an MRI scan and generating synthetic artifacts that resemble the "rings" and blurs seen in real motion-corrupted scans. Figure 2 illustrates the synthetic artifacts generated by our software:



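The core idea can be sketched in a few lines. MRI fills k-space one phase-encode line at a time, so a head translation partway through acquisition leaves some lines consistent with one head position and the rest with another; by the Fourier shift theorem, a translation is a linear phase ramp in k-space. The function below is a simplified illustration of that mechanism (our own toy code, not the project's software), corrupting a random subset of k-space rows:

```python
import numpy as np

def add_kspace_motion(image, shift_px=3.0, corrupt_frac=0.3, seed=0):
    """Simulate in-plane translational motion during acquisition.

    A random subset of k-space rows (phase-encode lines) is multiplied by
    the linear phase ramp that a shift of `shift_px` pixels would induce,
    producing ghosting/ringing in the reconstructed magnitude image.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fft2(image)                        # image -> k-space
    n_rows = k.shape[0]
    n_bad = int(corrupt_frac * n_rows)
    bad_rows = rng.choice(n_rows, size=n_bad, replace=False)
    freq = np.fft.fftfreq(k.shape[1])
    ramp = np.exp(-2j * np.pi * freq * shift_px)  # phase of a shift_px shift
    k[bad_rows] *= ramp                           # "head moved" on these lines
    return np.abs(np.fft.ifft2(k))                # back to a magnitude image

# Toy phantom: a bright square on a dark background.
phantom = np.zeros((32, 32))
phantom[8:23, 8:23] = 1.0
corrupted = add_kspace_motion(phantom)
```

With `corrupt_frac=0.0` the image is reconstructed unchanged; increasing it spreads ghosting across the phase-encode direction, mimicking the ring-like artifacts of real motion-corrupted scans.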
We evaluated the performance of our model using a real-world dataset from NYU Langone's Comprehensive Epilepsy Center. The dataset consists of whole-brain T1-weighted MRI scans obtained in six individuals imaged with both (i) deliberate head motion carried out during MRI acquisition and (ii) motion-free acquisitions. We further evaluated our approach by applying the developed model to 55 MRI scans from the multi-center Autism Brain Imaging Data Exchange initiative. Figure 5 presents a sample of our experimental results.



Code repository: https://github.com/yijun2011/tzo

Software Package: Coming Soon