Intrinsic signal optical imaging is a functional imaging modality where the reflectance of red light indicates active portions of cortex. It is used for many applications, including imaging individual barrels in rodent somatosensory cortex, maps in visual cortex, and the tonotopic organization in auditory cortex.
Here is a prior Labrigger post with tips for intrinsic signal optical imaging. One of the key things is to use a high quality scientific camera.
Recently, a friend pointed out that newer digital SLRs have absolutely fantastic specs and could maybe be substituted for a scientific camera. The advantages would be that digital SLRs can be cheaper (especially second-hand) and can be used with off-the-shelf software.
So why not use a newer digital SLR instead of a scientific camera?
Short answer: For paradigms that rely on a lot of averaging, it could probably work. But for Fourier analysis-based paradigms, it probably won’t work.
Long answer: Here are the key issues to overcome:
1. Can the camera see the light?
Light around 700 nm is often used for intrinsic imaging. This report says that there is massive roll-off in sensitivity around 700 nm. The Bayer filter may contribute to this, but there is often also a separate IR-blocking filter. Astrophotographers remove these filters to get results like this:
There are tutorials all over the web. For example, this is the source for the above image and it’s a good place to start. Also, plenty of people use wavelengths below 700nm for intrinsic imaging, so this is not a limiting factor for everyone.
2. Getting the raw data
Digital SLRs do all kinds of tricks to make photos look nice, all of which will screw up your data. Fortunately, many cameras offer the option of reading the raw sensor data off of the camera, in a format helpfully called RAW.
Special considerations for Fourier analysis
The amplitude of intrinsic signals is typically about 1 part in 10,000. Since most cameras, even the latest scientific cameras, top out around 12 bits (i.e., values range from 0 to 4095), it’s almost impossible to detect the signal without a substantial amount of averaging.
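To see why, here’s some back-of-the-envelope arithmetic. The well depth is an illustrative assumption, not a measured spec:

```python
import math

# Back-of-the-envelope: detecting a 1-part-in-10,000 reflectance change.
signal = 1e-4          # fractional intrinsic signal

# A 12-bit camera digitizes into 4096 levels, so near full scale the
# signal is less than half of one digitization step:
print(signal * 4096)   # ~0.41 ADU -- below the quantization floor

# Shot noise: with a full well of ~40,000 electrons (assumed), one
# frame's fractional noise is 1/sqrt(40000) = 0.5%, i.e. 50x the signal.
noise_per_frame = 1 / math.sqrt(40_000)

# Averaging N frames shrinks the noise by sqrt(N). For an SNR of 1:
n_frames = (noise_per_frame / signal) ** 2
print(round(n_frames))  # 2500 frames just to reach SNR of 1
```

Spatial binning buys you more on top of that: averaging over blocks of pixels trades resolution for the same square-root noise reduction.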
There are two main paradigms for intrinsic imaging:
1. Averaging like hell in the temporal domain. This is what most people do. Just average a whole bunch of frames at rest, and then a whole bunch of frames during stimulation. Subtract the two images. Declare victory.
2. Averaging like hell in the frequency domain. This is a trick from fMRI. Kalatsky & Stryker implemented it for intrinsic signal optical imaging. It’s harder, but is typically much faster and can yield much more information.
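The core of paradigm 2 is simple: the stimulus repeats periodically, so you take each pixel’s time course and read out the amplitude and phase at the stimulus frequency. A minimal sketch with NumPy — the frame rate, stimulus period, image size, and synthetic data (with an exaggerated response) are all assumptions for illustration:

```python
import numpy as np

np.random.seed(0)
fps = 30.0            # camera frame rate (assumed)
stim_period = 10.0    # seconds per stimulus cycle (assumed)
n_frames = 1200       # 40 s of data, 4 stimulus cycles

# Synthetic movie: baseline plus a small periodic response in one corner.
t = np.arange(n_frames) / fps
movie = np.random.randn(n_frames, 32, 32) * 1e-3 + 1.0
movie[:, :16, :16] += 1e-2 * np.sin(2 * np.pi * t / stim_period)[:, None, None]

# FFT each pixel's time course and pull out the bin at the stimulus frequency.
spectrum = np.fft.rfft(movie, axis=0)
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
k = np.argmin(np.abs(freqs - 1 / stim_period))

amplitude = np.abs(spectrum[k]) * 2 / n_frames   # response magnitude per pixel
phase = np.angle(spectrum[k])                    # maps to stimulus position

print(amplitude[:16, :16].mean() > amplitude[16:, 16:].mean())  # True
```

All the noise at other frequencies is simply ignored — that’s the averaging. The phase map is what gives you retinotopy (or tonotopy) from a single continuous run.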
For paradigm 1, any decent camera will work. But paradigm 2 has some special requirements:
1. Digital SLRs still typically have rolling shutters rather than global shutters, meaning the top of an image is captured at a different time than the bottom. This skews the phase of signals across the frame, and phase is exactly what this paradigm measures.
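To put a rough number on the rolling-shutter problem (the readout time and stimulus period below are assumed values): the shutter sweep imposes a row-dependent phase offset at the stimulus frequency. It’s systematic, so in principle you could subtract it off, but only if you know the readout timing precisely.

```python
readout_time = 0.030   # seconds to sweep from top row to bottom row (assumed)
stim_period = 10.0     # seconds per stimulus cycle (assumed)

# Phase lag of the bottom row relative to the top row, in degrees:
phase_skew_deg = 360.0 * readout_time / stim_period
print(phase_skew_deg)  # ~1.1 degrees from top to bottom of the frame
```

A degree of skew sounds small for a slow 10-second cycle, but it grows linearly with stimulus frequency, and it shows up as a spurious vertical gradient layered onto the retinotopic map.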
2. Image quality. Let’s start with pixel size.
The Nikon D3 and D4 sensors are about 3× the size and pixel count of a Dalsa 1M30, the classic choice for Fourier-analysis intrinsic imaging. The Nikon FX chip’s pixels are smaller than those of the 1M30, and it’s a CMOS chip rather than a CCD (though that distinction means less these days):
8.45 × 8.45 µm (Nikon FX)
12 × 12 µm (Dalsa 1M30)
Now let’s talk about dynamic range:
I’m guessing the 66 dB dynamic range of the Dalsa 1M30 is still better than most digital SLRs.
For example, the Sony Alpha 900 and Canon EOS 5D both top out at less than 40 dB. (ref 1, ref 2) Consumers typically don’t need to pick a 1/10,000 signal out of their images.
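For reference, sensor dynamic range in dB conventionally converts to a contrast ratio as 20·log₁₀(ratio), so the numbers above work out roughly like this (a quick sanity check, not a spec sheet):

```python
def db_to_ratio(db):
    # Sensor dynamic range is conventionally 20*log10(full well / read noise).
    return 10 ** (db / 20)

print(round(db_to_ratio(66)))  # ~1995:1, about 11 bits, for the Dalsa 1M30
print(round(db_to_ratio(40)))  # 100:1 for the consumer cameras cited
```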
Can you commonly get 30 fps of uncompressed RAW data at 1-megapixel resolution out of consumer dSLRs? My impression is that you can get RAW stills, but video is still typically compressed. “Unfortunately there are no HDSLR cameras on the market that will give you a clean (non-overlay), uncompressed 1080p HDMI output.”
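The bandwidth involved isn’t trivial either. A rough estimate of the sustained data rate for uncompressed 1-megapixel RAW video (frame size and bit depth below are assumptions):

```python
width, height = 1024, 1024   # ~1 megapixel (assumed)
bit_depth = 12               # bits per pixel, RAW (assumed)
fps = 30

bytes_per_sec = width * height * bit_depth / 8 * fps
print(bytes_per_sec / 1e6)   # ~47 MB/s, sustained for the whole session
```

That’s well within what a scientific camera’s frame grabber handles, but it’s a lot to ask of a consumer camera’s video pipeline, which is built around compression.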