In Part Eleven of the series Physics for understanding fMRI artifacts (hereafter referred to as PFUFA) you saw how setting parameters in k-space determined the image field-of-view (FOV) and resolution. In that introduction I kept everything simple, and the Fourier transform from the k-space domain to the image domain worked perfectly. For instance, in one of the examples the k-space step size was doubled in one dimension, thereby neatly chopping the corresponding image domain in half with no apparent problems. At the time, perhaps you wondered where the cropped portions of sky and grass had gone from around the remaining, untouched Hawker Hurricane aeroplane. Or perhaps you didn't.
In any event, you can assume from the fact that this is a post dedicated to something called 'aliasing' that in real-world MRI things aren't quite as neat and tidy. Changing the k-space step size - thereby changing the FOV - has consequences that depend on the extent of the object being imaged relative to the extent of the image FOV. It's possible to set the FOV too small for the object. Alternatively, it's possible to set the FOV to an appropriate span but position it incorrectly. (The position of the FOV relative to signal-generating regions of the sample is a settable parameter on the scanner.) Overall, what matters is where signals reside relative to the edges of the FOV.
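The relationship underlying all of this is FOV = 1/Δk: double the k-space step size and you halve the image FOV. Here is a minimal numerical sketch of what happens to signal that falls outside the smaller FOV, using a made-up 1D "object" rather than real scanner data (all sizes and positions are invented for illustration):

```python
import numpy as np

# A 1D "object": a 100-point block of signal on a 256-point grid.
# It deliberately extends past point 128, i.e. past the halved FOV below.
n = 256
obj = np.zeros(n)
obj[100:200] = 1.0

# Simulated k-space: the Fourier transform of the object.
k = np.fft.fft(obj)

# Doubling the k-space step = keeping every other sample.
# Since FOV = 1/dk, the reconstructed FOV is halved to 128 points.
k_coarse = k[::2]
img_coarse = np.abs(np.fft.ifft(k_coarse))

# Nothing is lost: signal that sat beyond point 128 doesn't vanish,
# it wraps around and adds into the first half of the image.
# Mathematically, img_coarse[x] = obj[x] + obj[x + 128].
print(img_coarse.shape)      # (128,)
print(img_coarse.sum())      # same total signal as obj.sum()
```

The key point the sketch makes: undersampling k-space doesn't discard the out-of-FOV signal, it superimposes it onto the in-FOV pixels - which is exactly the wrap-around you'll see in the phase encoding dimension below.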
Now, on a modern MRI scanner with fancy electronics, aliasing is a problem in one dimension only: the phase encoding dimension. (Yeah, the one with all the distortion and the N/2 ghosts. Sucks to be that dimension!) The frequency encoding dimension escapes the aliasing phenomenon by virtue of inline analog and digital filtering, processes that have no direct counterpart in the phase encoding dimension. Signal that falls outside the readout dimension FOV, whether because the FOV is too small or because it is displaced relative to the object, is simply eliminated. It's therefore important to know which effect to expect in which dimension: one gets chopped, the other gets aliased.
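To see why the readout filtering matters, here is a sketch using an idealized brick-wall low-pass filter as a stand-in for the scanner's real analog and digital filters. The positions, grid sizes, and sampling rates are all invented for illustration; the point is only the contrast between filtering before sampling (readout dimension) and sampling with no filter (phase encoding dimension):

```python
import numpy as np

n_fine = 1024                 # oversampled "analog" readout signal
n_fov = 256                   # digitized points = image FOV in pixels

# Frequency encoding maps position to frequency, so each point source
# contributes a complex exponential at its position-dependent frequency.
# One source sits inside the FOV (pixel 60), one outside it (pixel 300).
t = np.arange(n_fine) / n_fine
sig = np.exp(2j*np.pi*60*t) + np.exp(2j*np.pi*300*t)

# Idealized brick-wall low-pass, standing in for the inline analog and
# digital filters: zero every frequency outside the FOV's bandwidth.
spec = np.fft.fft(sig)
spec[n_fov:] = 0
filtered = np.fft.ifft(spec)

# Readout dimension: filter first, then digitize. The out-of-FOV source
# is eliminated - the image is chopped, not wrapped.
samples = filtered[:: n_fine // n_fov]
img = np.abs(np.fft.fft(samples)) / n_fov

# Phase encoding has no such filter. Digitizing the raw signal lets the
# out-of-FOV source alias: 300 mod 256 = 44, so it lands at pixel 44.
samples_nofilt = sig[:: n_fine // n_fov]
img_nofilt = np.abs(np.fft.fft(samples_nofilt)) / n_fov
```

With the filter, `img` shows only the in-FOV source at pixel 60; without it, `img_nofilt` shows a second, spurious peak at pixel 44 - the wrapped ghost of the source that was really at position 300.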
I will first cover the signal filtering in the frequency encoding dimension and then deal with aliasing in the phase encoding dimension. Finally, I'll give one example of what can happen when the FOV is set inappropriately for both dimensions simultaneously. At the end of the process you should be able to differentiate the effects with ease. (See Note 1.)
Effects in the frequency encoding dimension
Below are two sets of EPIs of the same object - a spherical phantom - that differ only in the position of the readout FOV relative to the phantom. In the top image the readout FOV is centered on the phantom, whereas in the bottom image the FOV is displaced to the left, causing the left portions of the phantom signal in each slice to be neatly, almost surgically, removed:
|Readout FOV centered relative to the phantom.|
|Readout FOV displaced to the left of the phantom, resulting in attenuation of the signal from the left edge of each slice.|