Saturday, April 26, 2014
On Tuesday I became involved in a discussion about data sharing with JB Poline and Matthew Brett. Two days later the issue came up again, this time on Twitter. In both discussions I heard a lot of frustration with the status quo, but I also heard aspirations for a data nirvana where everything is shared willingly and no data set is ever more than a couple of clicks away. What seemed absent from the conversations were reasonable, practical ways to improve our lot.* It got me thinking about the ways we presently do business, and in particular where the incentives and the impediments lie.
Now, it is undoubtedly the case that some scientists are more amenable to sharing than others. (Turns out scientists are humans first! Scary, but true.) Some scientists can be downright obdurate when faced with a request to make their data public. In response, a few folks in the pro-sharing camp have suggested that we lean on those who drag their feet, especially where individuals have previously agreed to share data as a condition of publishing in a particular journal: name and shame. It could work, but I'm not keen on this approach for a couple of reasons. Firstly, it makes the task personal, which means it could mutate into outright war that extends far beyond the issue at hand and could have wide-ranging consequences for the combatants. Secondly, the number of targets is large, meaning the process would be time-consuming.
Where might pressure be applied most productively?
Tuesday, April 1, 2014
Disclaimer: This isn't an April Fool!
I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.
Multi-echo EPI for de-noising fMRI data
There has been quite a lot of interest in using multi-echo EPI to characterize and de-noise time series data, e.g.
These methods rest on one critical aspect: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to keep the per-slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice, at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to separate BOLD from non-BOLD signal variations, etc.
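To make the TE-dependence idea concrete, here's a minimal sketch (not anyone's actual pipeline, and with all numbers assumed for illustration) of how sampling three echoes lets you fit the mono-exponential decay S(TE) = S0·exp(-TE/T2*) per voxel. BOLD contrast shows up as changes in T2*, whereas many artifactual fluctuations (motion, inflow) scale S0, which is the lever the multi-echo de-noising methods pull on:

```python
import numpy as np

# Assumed echo times (ms), in the ballpark mentioned in the text.
TEs = np.array([15.0, 40.0, 60.0])

def fit_decay(signals):
    """Log-linear fit of S(TE) = S0 * exp(-TE/T2*); returns (S0, T2* in ms)."""
    slope, intercept = np.polyfit(TEs, np.log(signals), 1)
    return np.exp(intercept), -1.0 / slope

# Simulate one voxel: S0 = 1000 (a.u.), T2* = 45 ms (gray-matter-ish guess).
true_S0, true_T2s = 1000.0, 45.0
signals = true_S0 * np.exp(-TEs / true_T2s)

S0_hat, T2s_hat = fit_decay(signals)
# Repeating this fit at every time point gives S0(t) and T2*(t) series;
# TE-dependent (T2*) fluctuations are BOLD-like, TE-independent (S0)
# fluctuations are candidates for removal.
```

The fit is exact here because the simulated decay is noiseless; with real data the three-point fit is noisy, which is one reason these methods lean on ICA-style aggregation rather than raw per-voxel estimates.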
The immediate problem with this scheme is that the per-slice acquisition time is still a lot longer than for normal EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This combines MB/SMS in the slice dimension with GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, enhancing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS, that would possibly reduce some of the artifact amplification, might simplify (slightly) the necessary reference data, etc.
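A quick back-of-the-envelope illustration of the coverage penalty and what MB/SMS buys back, using assumed timings rather than anything measured:

```python
# All numbers are illustrative assumptions, not measured protocol values.
# A three-echo readout stretches the per-slice time, so a fixed TR admits
# fewer slices; SMS/MB multiplies the slices recovered per excitation.
TR_ms = 2000.0        # assumed volume TR
per_slice_ms = 90.0   # assumed per-slice time for a 3-echo readout

slices_single_band = int(TR_ms // per_slice_ms)  # 22 slices at these numbers
for mb in (1, 2, 3):
    print(f"MB={mb}: {slices_single_band * mb} slices per TR")
```

At 3 mm slices, the jump from ~22 to ~66 slices is roughly the difference between partial and whole-brain coverage, which is why the MB/SMS route is attractive despite the reconstruction headaches.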
A different approach?