Optimizing pulse frequency for 2p imaging
The lasers used in multiphoton imaging deliver their photons in pulses. Many commonly used systems pulse at 80 MHz. However, there are good reasons to try different frequencies.
In 2007, Donnert, Eggeling, and Hell published a Nature Methods paper where they used low-frequency pulses to get more fluorescence signal out of the preparation. The idea was that many molecules get excited into long-lived triplet states. With a long interval between pulses, those molecules have time to relax back to the ground state, so the next pulse finds a large population available to be excited.
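To put some rough numbers on the triplet argument, here's a toy steady-state calculation. The per-pulse intersystem-crossing probability and the ~1 µs triplet lifetime below are made-up illustrative values (not numbers from the Donnert paper), but they show why the 12.5 ns between pulses at 80 MHz isn't enough time for the triplet state to empty, while the ~1 µs you get at 1 MHz is:

```python
import math

def steady_state_triplet(rep_rate_hz, p_isc=0.01, tau_triplet_s=1e-6):
    """Toy model of triplet pile-up between pulses.

    p_isc         -- assumed probability per pulse that a molecule ends up in
                     the triplet state (illustrative, not a measured value)
    tau_triplet_s -- assumed triplet lifetime (~1 us order of magnitude)

    Returns the steady-state fraction of molecules still stuck in the triplet
    state when the next pulse arrives (and therefore unavailable for 2p
    excitation).
    """
    dt = 1.0 / rep_rate_hz                 # time between pulses
    d = math.exp(-dt / tau_triplet_s)      # triplet survival over that interval
    # Per pulse: T -> T + p_isc * (1 - T), then decay by d before the next pulse.
    # Steady state solves T = d * (T + p_isc * (1 - T)).
    return d * p_isc / (1.0 - d * (1.0 - p_isc))

for rate in (80e6, 8e6, 1e6):
    frac = steady_state_triplet(rate)
    print(f"{rate/1e6:4.0f} MHz: ~{frac:.1%} of molecules parked in the triplet state")
```

With these (made-up but plausible) numbers, a large fraction of the dye is sitting dark at 80 MHz, while almost all of it is back in the ground state at 1 MHz, which is the effect Donnert et al. exploited.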
The next year, Ji, Magee, and Betzig published a Nature Methods paper where they used high-frequency pulses with lower energy per pulse to increase the signal-to-noise ratio in two-photon imaging.
Several people have been confused by these apparently contradictory results. Recently there was a discussion on the Confocal Listserv about this topic, again pointing out the differences between the two papers.
Andrew Ridsdale chimed in with his thoughts (link to post). One of his points is that in different experiments, different factors are limiting the signal.
In Hell’s experiments with low pulse rates, they were imaging cell-free molecules, a very bright signal. Bleaching (driven by triplet-state occupancy) was the limiting factor rather than damage; there weren’t even any cells around to be damaged, other than the E. coli in the last figure. So allowing relaxation time and maximum occupancy of the ground state gave the best results. All of the relevant processes were governed by 2p excitation and thus were second order.
In the Ji, Magee, and Betzig experiments, the signals were very dim (not unusually dim for slice experiments, but dimmer than the preparations used by Donnert et al.) and the limiting factor was damage to the preparation. Andrew’s point seems to be that in this case the signal comes from second-order processes while the damage comes from higher-order processes, perhaps as high as 5th order, though they estimate the average order to be about 2.4. So here it’s best to use pulses that are just barely effective for 2p excitation and essentially ineffective for the higher-order (damage) processes, and then blast the prep with as many of those pulses as possible. Since the likelihood of a 2p event is already low, bleaching isn’t as much of a factor.
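Here's a back-of-the-envelope version of that scaling argument (just a sketch with the damage order as a free parameter and 64-way splitting as an example, not the analysis from the paper). If signal goes as pulse energy squared and damage goes as pulse energy to some higher power, then splitting each pulse into N sub-pulses and bumping the average power up by √N keeps the signal the same while cutting the damage:

```python
def relative_damage(n_split, damage_order):
    """Pulse-splitting scaling sketch (illustrative, not the paper's exact numbers).

    Each pulse is split into n_split sub-pulses and the average power is raised
    by sqrt(n_split), so the 2p signal (pulse rate * energy**2) is unchanged.
    Damage is assumed to scale as pulse rate * energy**damage_order, so it
    changes by a factor of n_split ** (1 - damage_order / 2).
    """
    return n_split ** (1.0 - damage_order / 2.0)

for order in (2.0, 2.4, 3.0, 5.0):
    print(f"damage order {order}: 64-way splitting leaves "
          f"{relative_damage(64, order):.3g}x the damage at equal signal")
```

At order 2 you gain nothing, which is why the trick only pays off when the damage (and bleaching) processes are of higher order than the signal.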
(btw, Brain Windows did a very nice post on the Ji and Donnert papers)
Nice blog! Just saw this post, and thought that I would chime in, since I was one of the authors for the Ji paper.
I agree with Andrew’s thoughts mostly. Just want to add that, even for photobleaching, there are many mechanisms (linear vs. nonlinear, triplet vs. singlet states, etc.) that coexist and may dominate photobleaching at different powers and time scales. Andrew was right that by splitting the pulses and exciting at lower energy per pulse, we reduced photodamage, as the slice imaging data suggested. The same thing happens for photobleaching when its power dependence is higher than 2nd order, which is often the case under two-photon excitation conditions.
Another thing is that the mechanisms (or the power dependence) of photobleaching and photodamage depend on many factors: sample, probe, excitation wavelength, excitation power, etc. It would be very difficult to extrapolate from one preparation to another, so they have to be characterized for individual preparations.
The final thing is that I wouldn’t say that our signal was very dim. I cannot compare our signal to that in the Donnert paper, since I don’t know their signal strength. We used the typical imaging conditions employed in slice physiology and got typical signal-to-noise ratios.
Right, I didn’t mean unusually dim. Just dim compared to the preps used by Donnert et al. I edited the post to clarify that point.