Intrinsic imaging on the cheap
Intrinsic signal optical imaging is a functional imaging modality in which the reflectance of red light indicates active portions of cortex. It is used for many applications, including imaging individual barrels in rodent somatosensory cortex, functional maps in visual cortex, and the tonotopic organization of auditory cortex.
Here is a prior Labrigger post with tips for intrinsic signal optical imaging. One of the key things is to use a high quality scientific camera.
Recently, a friend pointed out that newer digital SLRs have absolutely fantastic specs and could maybe be substituted for a scientific camera. The advantages would be that digital SLRs can be cheaper (especially second-hand) and can be used with off-the-shelf software.
So why not use a newer digital SLR instead of a scientific camera?
Short answer: For paradigms that rely on a lot of averaging, it could probably work. But for Fourier analysis-based paradigms, it probably won’t work.
Long answer: Here are the key issues to overcome:
1. Can the camera see the light?
700 nm light is often used for intrinsic imaging. This report says that there is massive roll-off around 700 nm. The Bayer filter may contribute to this, but there is often also a separate IR-cut filter in front of the sensor. Astrophotographers remove these filters to get results like this:
There are tutorials all over the web. For example, this is the source for the above image and it’s a good place to start. Also, plenty of people use wavelengths below 700nm for intrinsic imaging, so this is not a limiting factor for everyone.
2. Getting the raw data
Digital SLRs do all kinds of tricks to make the photos look nice, all of which will screw up your data. Fortunately, many cameras offer the option of reading the raw sensor data off of the camera, in a format helpfully called RAW.
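As a rough illustration, here's a minimal Python sketch of getting at the unprocessed sensor values. It assumes the third-party rawpy package (LibRaw bindings) and uses a placeholder filename, so treat it as a starting point rather than a recipe:

```python
# Minimal sketch: pull unprocessed sensor data out of a RAW file.
# Assumes the third-party 'rawpy' package (LibRaw bindings) is installed;
# the filename is just a placeholder.
import numpy as np
import rawpy

with rawpy.imread("frame0001.NEF") as raw:
    # raw_image is the Bayer mosaic straight off the sensor, before
    # demosaicing, white balance, or gamma correction.
    bayer = raw.raw_image.copy().astype(np.float64)
    # raw_colors labels each photosite by color channel (0 = R);
    # the red-filtered pixels pass the longest wavelengths.
    red_pixels = bayer[raw.raw_colors == 0]

print(bayer.shape, red_pixels.mean())
```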
Special considerations for Fourier analysis
The amplitude of intrinsic signals is typically about 1 part in 10,000. Since most cameras, even the latest scientific cameras, top out around 12 bits (i.e., values range from 0 to 4095), it’s almost impossible to detect the signal without some amount of averaging.
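To put a number on that, here's a back-of-the-envelope sketch; the noise figure and the factor of 10 are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: a 1-in-10,000 reflectance change on a 12-bit
# sensor is a fraction of one gray level, so single frames can't resolve it.
full_scale = 2**12 - 1            # 4095 counts at saturation
signal = full_scale * 1e-4        # ~0.4 counts -- below one gray level

# If per-pixel noise is ~1 count RMS (an illustrative assumption),
# averaging N frames shrinks it by sqrt(N); to push the noise an order
# of magnitude below the signal you'd need roughly:
noise_rms = 1.0
n_frames = (10 * noise_rms / signal) ** 2
print(f"signal ~ {signal:.2f} counts, need ~{n_frames:.0f} frames")
```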
There are two main paradigms for intrinsic imaging:
1. Averaging like hell in the temporal domain. This is what most people do. Just average a whole bunch of frames at rest, and then a whole bunch of frames during stimulation. Subtract the two images. Declare victory.
2. Averaging like hell in the frequency domain. This is a trick from fMRI. Kalatsky & Stryker implemented it for intrinsic signal optical imaging. It’s harder, but it is typically much faster and can yield much more information. (Both paradigms are sketched in code below.)
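For concreteness, here's a hedged Python sketch of both analyses on a stack of frames. The variable names, the frame layout, and the single stimulus frequency are assumptions on my part; the Fourier version just follows the Kalatsky & Stryker idea of reading out amplitude and phase at the stimulus frequency:

```python
import numpy as np

# frames: (n_frames, height, width) array of raw intensities.
# stim_on: boolean vector marking frames acquired during stimulation.
# Both are placeholders for whatever your acquisition actually produces.

def paradigm1_difference_map(frames, stim_on):
    """Average-and-subtract: mean(stim) - mean(baseline), normalized."""
    baseline = frames[~stim_on].mean(axis=0)
    stim = frames[stim_on].mean(axis=0)
    return (stim - baseline) / baseline          # dR/R map

def paradigm2_fourier_map(frames, frame_rate_hz, stim_freq_hz):
    """Per-pixel amplitude and phase at the stimulus frequency
    (the periodic-stimulus readout of Kalatsky & Stryker)."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate_hz
    # Project each pixel's time course onto the stimulus frequency.
    ref = np.exp(-2j * np.pi * stim_freq_hz * t)
    component = np.tensordot(ref, frames, axes=(0, 0)) / n
    return np.abs(component), np.angle(component)   # amplitude, phase maps
```

The phase map is where the extra information comes from, which is why the timing and shutter issues discussed below matter for paradigm 2.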
For paradigm 1, any decent camera will work. But paradigm 2 has some special requirements:
1. Digital SLRs still typically have rolling shutters rather than global shutters. This means that the top of an image is captured at a different time than the bottom of an image, which can distort the phase of signals, and phase is exactly what this paradigm measures. (See the sketch after this list.)
2. Image quality. Let’s start with pixel size.
The Nikon D3 and D4 sensors are about 3x the size and pixel count of a Dalsa 1M30, the classic choice for Fourier analysis intrinsic imaging. The Nikon FX chip’s pixels are smaller than those of the 1M30; it’s also a CMOS chip rather than a CCD. (Though that distinction means less these days.)
Pixel size:
8.45 × 8.45 µm (Nikon)
12 × 12 µm (Dalsa)
Now let’s talk about dynamic range:
I’m guessing the 66 dB dynamic range of the Dalsa 1M30 is still better than that of most digital SLRs.
For example, the Sony Alpha 900 and Canon EOS 5D both top out at less than 40 dB. (ref 1, ref 2) Consumers typically don’t need to pick a 1/10,000 signal out of their images.
3. Output
Can you commonly get 30 fps of uncompressed RAW data at 1 megapixel resolution out of consumer dSLRs? I have the impression that you can get RAW stills, but video is still typically compressed. “Unfortunately there are no HDSLR cameras on the market that will give you a clean (non-overlay), uncompressed 1080p HDMI output.”
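To get a feel for point 1 above, here's a small sketch of the row-dependent phase offset a rolling shutter introduces at the stimulus frequency. The readout time and stimulus frequency are made-up but plausible numbers, and the linear row-delay model is an assumption:

```python
import numpy as np

# Illustrative numbers, not specs for any particular camera:
frame_readout_s = 1.0 / 30      # rolling shutter sweeps the frame in ~33 ms
stim_freq_hz = 0.1              # slow periodic-stimulus frequency
n_rows = 1024

# Each row is exposed a bit later than the one above it, so at the
# stimulus frequency it picks up a row-dependent phase offset:
row_delay_s = frame_readout_s * np.arange(n_rows) / n_rows
row_phase_offset = 2 * np.pi * stim_freq_hz * row_delay_s   # radians

# If the readout time is known, this systematic top-to-bottom phase
# gradient can in principle be subtracted from the measured phase map:
# corrected_phase = measured_phase - row_phase_offset[:, None]
print(np.degrees(row_phase_offset[-1]))   # ~1.2 degrees at the last row
```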
If you decide to go the scientific camera route, sCMOS and CCD are solid choices. If you are operating in a light-limited regime (e.g., flavoprotein fluorescence imaging), EMCCDs are the way to go.
Besides the potential for digital SLR cameras to be used for intrinsic imaging, has anyone actually explored this route in practical terms? I would be curious to see actual data obtained with an SLR…
I know people have successfully used cheap, 8-bit CMOS cameras for intrinsic imaging with a lot of averaging. But I don’t have any firsthand knowledge of people using dSLRs specifically.
Hi, I’ve done some IOI in the past, and since I’m looking to buy a DSLR for personal use anyway, I thought it could maybe serve a double purpose.
I’m a bit unsure how this would work, though. First of all, DSLR sensors usually have Bayer array filters in order to capture color. As I understand it, only a quarter of the wells in such an array are sensitive to near-infrared/red light. Would you use only those for imaging, or would you bin the light from all of them together? And are the µm pixel sizes usually given in camera specs those of single-color wells, or do they represent the 2×2 clusters that are the minimum requirement for RGB?
I have had a closer look at sensor specs via the DxO index, and I am wondering which of the following (or which ratio thereof) is most indicative of success in intrinsic imaging (or optical imaging in general):
a) SNR
b) dynamic range
c) bit depth
I think it is obvious that the SNR should exceed the bit depth (so that each gray step is significant), but I’m looking for something a bit more quantitative in order to rank sensors. I would be especially interested in how much a dynamic range increase could compensate for an SNR or bit depth decrease (or vice versa) :-/. I tried to brew myself a three-variable equation that makes sense, but I didn’t quite get there.
Could any of you give me a hand here?
This is a belated reply, and probably won’t be very helpful. Sorry.
All three items are related to each other; they’re not independent quantities. One needs good SNR and dynamic range, and this should be reflected in a bit depth of 10-12 bits. However, bit depth alone is not a good measure: a lot of camera sellers will state that they have 16-bit output, but if the 8 least significant bits are pure noise, then it’s effectively an 8-bit camera, regardless of the precision of the ADC.
If the dynamic range is poor, then no amount of SNR or bit depth is going to make that up. The signal is always going to be below the detection threshold. I’m guessing 40 dB would work, but I wouldn’t bother trying with anything less than that. You might get it to work under some conditions, but it’s going to be tough. And really, if you can afford it, 60+ dB is a safer bet.
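A rough way to compare those dB figures (treating them loosely, since manufacturers don't all measure dynamic range the same way) is to convert them to equivalent bits and set them against the size of the signal:

```python
import math

def db_to_bits(dynamic_range_db):
    """Dynamic range in dB expressed as equivalent bits (~6.02 dB per bit)."""
    return dynamic_range_db / (20 * math.log10(2))

for db in (40, 66):
    print(f"{db} dB ~ {db_to_bits(db):.1f} bits")   # 40 dB ~ 6.6, 66 dB ~ 11.0

# A 1-in-10,000 signal corresponds to about log2(10000) ~ 13.3 bits,
# which is why averaging is needed even with a 66 dB camera.
print(math.log2(10_000))
```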
BTW, sensitivity doesn’t matter so much since you’re not operating in a light-limited regime. You can always turn the illumination up.
What software do you use for intrinsic imaging? I’m using an old LabVIEW program, but I’m very unhappy with it. I couldn’t find anything for MATLAB…
It should be something that displays, online, the running averages for the stimulation and baseline conditions.
I use a program (more like a script, actually) that I wrote myself. It’s based on some MATLAB snippets I got from co-workers.
You can see the code, download it, and contribute to it here https://github.com/TheChymera/pyASF
It’s very purpose-built, so you may not understand how to use it unless you’re willing to read the code a bit. If you’re interested in extending it, though, I’d be very happy to join efforts and broaden pyASF’s scope.
Some microscope manufacturers draw a bit of a blank if you ask them for help coupling an SLR camera to their micro/macroscope. In the past, I’ve spoken to Best Scientific in the UK about finding the right coupler, and they managed to suggest a solution.
[…] cameras can have global shutters or rolling shutters (some cameras can operate in both modes). Global shutters are preferable for periodic applications, including intrinsic imaging, because rolling shutters can cause artifacts. This article does a […]
Which cameras are people using these days? The DALSA 1M60 and 1M30 were recently discontinued (summer 2016, I was told by my supplier). I am not presently aware of any good replacements, but there must be some good choices.
https://scientificimaging.com/knowledge-base/tandem-lens-macroscope-configuration/
The above link has useful information and is also a source to obtain front-coupled tandem lenses and cameras for intrinsic imaging. Also included is information about adding a filter cube in the “infinity space” between the two lenses.