Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Friday, July 29, 2011

Physics for understanding fMRI artifacts: Part Nine


In this post I'm going to provide the first part of a recipe for generating 2D images. It's going to be somewhat algorithmic. I may occasionally mention what a particular step implies, but for the most part I'm going to step through a sequence of events, produce a final recipe for you to follow, then go back and explain what some of the parts mean physically. This isn't the traditional approach to learning about k-space; most textbooks assume that you need to understand what it all means before you get to learn "the rules of the game." As is my wont, I'm coming at it backwards. My hope is that you will then be able to go back to your textbooks - I'll tell you where to look for subsequent explanations - and cement a decent understanding of the "why" of k-space, not just the "how."


Conjugate variables revisited

In Part Five of this series I introduced the Fourier transform and conjugate variables. The post focused on the most common pair of conjugate variables: frequency and time. If we have the time domain representation and we want to transform it into its frequency domain equivalent, we apply a (one-dimensional) FT, and vice versa.

But there is another pair of conjugate variables that is more useful and intuitive for imaging applications. (In this case your intuition for one of the variables may not develop until the end of this post, or later! Bear with me.) Whether it's maps, MRIs or architectural plans, the axes of an image are best described in terms of length. If we choose the centimeter as our unit of length, then FTing an axis in cm will yield an axis in 1/cm. You happen to have an intuitive notion of time, frequency and space from everyday life. Don't worry about what the reciprocal of real space means, just accept for now that it exists. We call this reciprocal space k-space because another term for 1/cm is the wavenumber, and the wavenumber is given the symbol k.
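As a small illustration of the cm/(1/cm) conjugate pair, here's a minimal numpy sketch; the object, sample spacing and array size are all hypothetical numbers chosen just for the example:

```python
import numpy as np

# Hypothetical 1D "object": 256 samples spaced 0.1 cm apart (numbers made up).
dx = 0.1                                    # sample spacing in cm
x = np.arange(256) * dx                     # spatial axis, in cm
profile = np.exp(-((x - 12.8) / 2.0) ** 2)  # a Gaussian blob standing in for the object

# Fourier transforming the cm axis yields a conjugate axis in 1/cm: the wavenumber, k.
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))  # units of cycles/cm
spectrum = np.fft.fftshift(np.fft.fft(profile))

print(k.min(), k.max())  # approximately -5 to +5 cycles/cm for 0.1 cm sampling
```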



Representing pictures in reciprocal space

Let's take a random picture; in this case it's a digital photograph of a Hawker Hurricane plane. It's clearly a 2D picture. We have a digital version of it, so we can do mathematical operations on it with a computer. If we do a 2D (digital) FT of the picture we get its representation in 2D k-space:



Note that a 2D FT is simply two 1D FTs performed in succession, once along each orthogonal direction. Furthermore, the image on the left comprises 512x512 pixels. Thus, the k-space representation on the right is also a matrix of 512x512 k-space values.
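Here's a minimal numpy sketch of how such a k-space plot could be computed; the 512x512 array standing in for the photograph is a hypothetical placeholder:

```python
import numpy as np

# Stand-in for the 512x512 grayscale photograph (in practice you'd load the real
# image, e.g. with matplotlib.pyplot.imread; the random array is just a placeholder).
photo = np.random.rand(512, 512)

# A 2D FT is simply two 1D FTs performed in succession, once along each direction...
kspace = np.fft.fft(np.fft.fft(photo, axis=0), axis=1)

# ...which is exactly what the single 2D FFT call computes.
assert np.allclose(kspace, np.fft.fft2(photo))

# Shift the k-space origin to the center and take a log of the magnitude for display,
# which produces the familiar "gray fuzz" picture.
kspace_display = np.log1p(np.abs(np.fft.fftshift(kspace)))
print(kspace_display.shape)  # (512, 512): the same matrix size as the image
```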

One can clearly see in the gray fuzz of the k-space plot on the right, above, that it's a Hurricane and not the plane with which it is often confused, the Supermarine Spitfire. What, you don't see it? You don't see the squarer planform of the Hurricane's wing, or the raised section of the fuselage behind the cockpit, in all those gray dots...? This is a Spitfire in k-space:


See the difference now...? Still not seeing it? Well, that is rather my point. We don't interpret k-space very naturally; our brains have evolved to interpret real space - as displayed in the conventional photos. But this silly illustration brings us to a fundamentally important concept: if we are ever given a 2D k-space representation of something and we want to reconstruct the (real space) image, all we have to do is run it through a 2D FT and we get a picture that we can interpret easily.
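In code that reconstruction step is a single inverse 2D FFT; a minimal sketch continuing the numpy example above (the k-space array is again a stand-in):

```python
import numpy as np

kspace = np.fft.fft2(np.random.rand(512, 512))  # a "given" 2D k-space matrix (stand-in)

# One inverse 2D FT takes us from k-space back to a picture we can interpret.
image = np.abs(np.fft.ifft2(kspace))            # magnitude image, as usually displayed
```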

What does it mean to have a Hurricane or a Spitfire represented in k-space? We actually don't need to know at this point. Some of you - vision scientists and physicists, most likely - may already know a lot about the concept of spatial frequencies. If you don't, don't worry about it yet. Let's complete an exercise and then reconsider what k-space is.

We can write down the mathematics of the above transforms using what we already know about the FT (from Part Five). Considering the x dimension first, we have:
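Written out explicitly, with I(x) the image intensity along x and S(kx) its k-space counterpart, the forward and inverse transforms take the standard form:

$$S(k_x) = \int I(x)\, e^{-i 2\pi k_x x}\, dx \qquad \text{and} \qquad I(x) = \int S(k_x)\, e^{+i 2\pi k_x x}\, dk_x$$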


The y dimension is transformed analogously, remembering that a 2D FT is equivalent to a 1D FT along x then another 1D FT along y (or vice versa). These equations are the same as the FT that was introduced in Part Five, the only difference being a substitution of space and reciprocal space (or k-space) for time and frequency as the conjugate variables being transformed. One dimension of the Hurricane image is represented by I(x), and after Fourier transformation a rather strange pattern of gray dots is produced in the conjugate domain, S(kx). I'm using I and S intentionally; I to mean image, S to mean signal.

It's time to return to MR imaging, but with a new purpose. We know what our target is: a complete 2D plane of k-space. We also know why we want that target: so that a 2D FT will produce our final image. Thus, for MRI all we need to do is obtain a completed 2D plane of k-space (signals) and we will then be a single step - 2D Fourier transformation - away from a final image. It's time to see how we do it.


One-dimensional imaging as seen in k-space

Before jumping into 2D let's first see what happens with 1D k-space, since ultimately the two dimensions will be transformed independently (albeit simultaneously) in the 2D FT. We will first analyze the frequency-encoding gradient experiment that we saw in Part Seven:


Remember, in this exercise we aren't going to detect the signal and FT it. All we want to do is visualize the way that the spatial information is being encoded into the signal (were we to detect it). The goal is to analyze the evolving magnetization in a way that shows how it contains spatial information in terms of kx, such that we could, if we so chose, FT it and get a representation of the x dimension of an image. It's important to make this seemingly trivial distinction because, as we will see in the next post, there are times (lots of them!) when spatial encoding happens before signal detection, the spatial information being somehow "preserved" in the magnetization for when it is eventually read out.

Okay, consider again the 1D profile of a water-filled cylinder that we saw in Part Seven, as would be obtained by the above pulse sequence were we to read out and FT the signal during the gradient, Gx:


Previously we were interested in generating the green profile of the object, so we considered the entire signal that was measured in the presence of a read (or frequency encoding) gradient. Now, though, we're going to analyze the situation slightly differently by considering not the entire detected signal (that can be 1D FT'd to produce the profile) but just the signal at any instant in time as the readout gradient "makes its presence felt" by causing phase evolution of the spins depending on their position along x. In other words, we're going to look at a snapshot of this phase evolution that arises from the presence of the gradient.

First of all we need to recognize that until the signal is processed (with an FT) there is no way to distinguish between all the signals arising out of the sample. Even though the spins at position x5 are precessing at a different rate than at x8, etc., all the magnetization is precessing together and being detected simultaneously in the receiver coil. Only the FT can sort the position-dependent frequencies. So we have an interferogram - a distribution of frequencies from across the sample, all piled on top of each other. (See Part Five if you need reminding what an interferogram is.)

Still, this "all piled on top of each other" situation doesn't prevent us from understanding what's happening for any arbitrary restricted "chunk" of spins residing somewhere in the cylinder. After all, we know we will ultimately be able to sort all the signals out - with our FT - so we can consider the encoding that is happening in the time domain as a sort of proxy for doing the full analysis; a win for human insight!

The red line of spins in the above figure is as good a place as any to understand the position encoding. Noting that the red line really represents a very thin 3D slice of spins, the contribution of signal from this slice to the overall signal (the interferogram) can be described mathematically as the product of the magnetization in the slice (i.e. the number of spins present in the slice) with the amount of phase shift that the gradient has generated up to this point in time.

Clearly, before the gradient is turned on there is no phase shift at all. The signal from the red slice would be M(x).dx, where M(x) is the sample magnetization at position x and M(x).dx is the fraction of the total signal arising from just the small volume between position x and x + dx (i.e. the contribution from the red slice).

What's more, the total of all small fractions of signal - the total signal which we can detect - is now just the sum of all of the different positions along x. This summing up is the integral along x, i.e.
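Written out, with M(x) as above, that total signal at time zero is simply:

$$S(t=0) = \int M(x)\, dx$$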



Once a gradient is turned on the signal accrues a phase that depends on the strength of the gradient and how long it's been enabled. In that case the signal from each fraction (such as the red slice) becomes modulated with a phase factor that depends upon the strength of the gradient, the position along x, and the time for which the gradient is active:
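In the rotating frame, and with the factor of 2.pi suppressed (see below), that modulated signal can be written:

$$S(t) = \int M(x)\, e^{-i\, \gamma G_x x\, t}\, dx$$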


Note that this assumes the rotating frame of reference, which is why the main magnetic field strength, B0, doesn't feature; only departures from the on-resonance condition (at exactly B0) need be considered. I've also ignored the factor of 2.pi radians for simplicity.

Now the magic step. We're going to define a new variable. Let's call it k. And we will define it as:
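That is, defining kx in terms of the gradient strength and the time for which it has been applied:

$$k_x = \gamma\, G_x\, t$$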


(If you don't believe the units come out to be 1/cm, see Note 1.)

Now this is a very handy "arbitrary" definition for kx, for a succession of reasons. Whereas in the equation for S(t) above the conjugate variables are frequency (w) and time, we are now in a position to redefine the conjugate variables as x and kx:
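Making that substitution in the expression for S(t), the signal becomes:

$$S(k_x) = \int M(x)\, e^{-i\, k_x x}\, dx$$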


Nothing has really changed for the signal; we've just made a couple of simple substitutions, and just like that we have a signal equation for which the conjugate variables can be considered as x and kx instead of frequency and time. All to help us visualize what's going on, as we'll see in a moment.

But wait! The equation here for S(kx) is identical to the Fourier transforms we saw above for the Hurricane and the Spitfire (except that here I neglected the 2pi radians). Which implies, remarkably, that this "arbitrary" definition of k actually has meaning after all! When all is said and done we don't need to think about precession frequencies or magnetization phase or any of that stuff to understand spatial encoding in MRI. In fact, all we need to recognize is that when spatial encoding gradients are turned on they cause changes in k, and that when k changes it is actually tracing through the Fourier transform of the object we are seeking to image! Time to see this in pictures.


Tracing kx through time

So kx = gamma.Gx.t, where gamma is the gyromagnetic ratio for protons, the spins we're making images with. Since the gyromagnetic ratio is constant no matter what we do with the gradient strength or its duration, we can effectively ignore it (just as I already did for the 2.pi radians). That makes kx a remarkably simple variable to compute. It is now just kx = Gx.t, which is equivalent to the area of the gradient being applied, because for a rectangular-shaped gradient the area is given by base times height. (See Note 2.)
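As a quick worked example with hypothetical numbers (keeping gamma in so the units come out, as in Note 1): a gradient of 1 Gauss/cm applied for 1 ms gives

$$k_x = 4258\ \tfrac{\mathrm{Hz}}{\mathrm{Gauss}} \times 1\ \tfrac{\mathrm{Gauss}}{\mathrm{cm}} \times 0.001\ \mathrm{s} \approx 4.3\ \mathrm{cm}^{-1},$$

and doubling either the gradient strength or its duration doubles kx, because only the area of the rectangle matters.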

There's another important property of kx, too. Note that kx isn't constant during the gradient. In fact, before the gradient starts the value of kx is zero. Then, as the gradient is played out for increasing time, t, the value of kx steadily increases. It continues to increase for as long as the gradient is on. Once the gradient is switched off the value of kx remains fixed at the last value it had under the gradient's influence; it doesn't instantly return to zero, nor does it continue to grow. Thus, we can easily see that the value of kx at t/2 is half of its eventual value after time t. In other words, kx evolves.

Now that we know how kx changes with time we can analyze the gradient echo pulse sequence we saw in the last post (Part Eight). Not only are we now in a position to dispense with the sampling period to understand what's happening to the spatial encoding, we can actually ignore the magnetization altogether! All that really matters from an encoding perspective is what is being done to the gradients. So, we can now look at the gradient echo sequence with a different perspective. Here is what happens in terms of kx during the first gradient episode:


By doing a "mental integration" during period 1 we see an increasing area under the square gradient as time evolves. Thus, in a k-space plot (which I have made 2D for obvious reasons - we will ultimately need the ky dimension to get a 2D image) the gradient traces the trajectory from the origin (at time zero) out to a maximum +kx value. The speed along the trajectory is constant (for a square gradient), but that turns out to be a bit less important than the route taken and the ultimate destination.

Next, the gradient's sign is reversed and two further periods of spatial encoding evolve:


We were already at maximum +kx so period 2 simply takes us back the way we came, and we end up back at kx=0. This position corresponds to the gradient echo top we saw in the last post, where all the dephasing has been unwound. As the gradient is left on during period 3 the trajectory takes us into negative k-space, out to maximum -kx. Again, the speed along the trajectories happened to be constant, but as before it's not the speed of the journey that matters so much as the route taken.
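Here's a minimal numpy sketch of that trajectory; the gradient amplitude, lobe durations and time step are all made-up numbers, and kx is just gamma times the running area under the gradient waveform:

```python
import numpy as np

dt = 1e-5        # time step in seconds (hypothetical)
T = 1e-3         # duration of the first, positive lobe in seconds (hypothetical)
G = 1.0          # gradient amplitude in Gauss/cm (hypothetical)
gamma = 4258.0   # Hz/Gauss for protons, as in Note 1

# Period 1: +G for a time T.  Periods 2 and 3: -G for a time 2T.
gradient = np.concatenate([np.full(int(T / dt), +G),
                           np.full(int(2 * T / dt), -G)])

# kx(t) is gamma times the accumulated area under the gradient waveform.
kx = gamma * np.cumsum(gradient) * dt   # units of 1/cm

print(kx[int(T / dt) - 1])       # ~ +4.26 cm^-1: maximum +kx at the end of period 1
print(kx[int(2 * T / dt) - 1])   # ~ 0: back at kx = 0, the gradient echo, end of period 2
print(kx[-1])                    # ~ -4.26 cm^-1: maximum -kx at the end of period 3
```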

What's so remarkable about k-space trajectories is that they represent the applied gradients so completely that you might reasonably ask why we bother with pulse sequence diagrams at all! Well, there are many uses for each representation. In the coming posts you will see that the k-space representation is most useful for comparing between different imaging sequences, and it can also be very useful for determining the source of some (but not all) image artifacts. In practice, we tend to use the pulse sequence and the k-space representations as a pair.


Getting off axis

I'm going to stop here for today. As an exercise, think about what sort of pulse sequence diagram might get us off the ky=0 axis and start to trace through an entire square of kx,ky space. As we saw at the start of this post, with photos of planes, the moment we have a complete square of kx,ky we are ready to apply a 2D FT and get our image out. That will be the focus of the next post.

I'll leave you with a simple homework exercise. Deduce the pulse sequence timing diagram that generates this k-space trajectory:



Answer next post! Also next post, suggested reading to reinforce the concepts you're seeing here. It usually takes several exposures to "get it." Hang in there, it really is worth it! (See Note 3.)


-------------------------



Notes

1.  Gamma, the gyromagnetic ratio for proton spins, is given as 4258 Hz/Gauss, where the Gauss is an old measure of magnetic field strength. (1 Tesla = 10,000 Gauss.) Noting that Hz is 1/sec and that a magnetic field gradient strength can be given in Gauss/cm, you can quickly determine that for k the units are (Gauss.sec)/(Gauss.cm.sec), leaving just 1/cm after canceling terms.

2.  The shape of the gradient doesn't need to be square; it can be any shape whatsoever. The value of kx is always determined by the area under the waveform being defined by the gradient. If it were a trapezoidal gradient instead of a square one, we'd calculate the area under the trapezoid. Indeed, trapezoidal, triangular and even sinusoidally modulated gradients are commonly used in MRI. The general expression for determining k is:
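In the notation used above, with Gx(t) the (possibly time-varying) gradient waveform, the standard form is:

$$k_x(t) = \gamma \int_0^{t} G_x(t')\, dt'$$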



3.  I think the k-space formalism is the single most useful device you can have at your disposal to understand fMRI acquisitions. I go for years without needing to use angular momentum or product operators or the Bloch equations or relaxation theory, but k-space is something I use daily. As I mentioned in an earlier post, MR physicists think in and talk about k-space routinely. Learn the code, it's not hard. Once you do you will have a perspective on imaging and on artifacts that is invaluable. Hence this series of posts to deposit you at that point! Nearly there. One more post on 2D k-space, then another post on the particular k-space trajectory for EPI. Once we have a k-space interpretation for EPI at our disposal we can understand and differentiate artifacts in rapid fashion.
