In science, self-satisfaction is death. Personal self-satisfaction is the death of the scientist. Collective self-satisfaction is the death of the research. It is restlessness, anxiety, dissatisfaction, agony of mind that nourish science.
Jacques Monod (New Scientist 17 Jun 1976)
See also: Le Mythe de Sisyphe (thx LBT)
Some attributes that can help ensure success for graduate students:
– Restlessness. Always striving for more. Having fun in the process.
– Error checking. Re-reading and re-analyzing data. Searching for their own mistakes. Reviewing their own productivity and making changes in processes or habits when needed.
– Reading a lot. Both for research, and for recreation.
– Self-teaching. Autonomous learning. Figuring out how to make something work.
– Taking notes. Detailed notes. Organized. And referring back to them later. If it wasn’t written down, it was goofing off. If it was documented, then it was science.
– Humility. Willingness to ask questions that might be foolish.
– Compassion. Caring about other people and trying to see things from their side and work with them. Surrounding themselves with good people. Keeping distance from insecure or destructive people.
– Getting to done. Being a finisher. Finding a way to make things happen and get to a landmark. Scientists like to obsess about things, and they’re not easily satisfied, but the successful ones, despite those tendencies, figure out how to put out good products.
PART 1. Our tools are imperfect. When I first got into neuroscience, I started deep down into neural circuitry: individual neurons and synapses. It was exciting to get glimpses into the nuts and bolts of what makes neural circuitry support the amazing behaviors we see in animals. I love patch clamp electrophysiology in all of its forms. The data itself is viscerally satisfying on multiple levels. It is all but perfect.
But most of the neuronal activity data we take in neuroscience has lower fidelity. Many of our tools are quite primitive. A great deal of what we learn about population level activity is from shoving metal electrodes into brains, which we’ve done for 100 years. Sure, we make them smaller now, and I’m a big supporter of Neuropixels and similar efforts, but the approach is still fundamentally crude.
Calcium imaging is more technically interesting– especially multiphoton calcium imaging. One problem with calcium imaging is: it’s not spikes. It’s a correlate of spiking, and that correlate’s usefulness can vary by cell type, reporter, and imaging parameters. Also, the time resolution is typically quite poor. So how can we hope to make meaningful measurements with calcium imaging? We want the measurements we report to be precise and accurate, so that they can be informative and support detailed analysis. Is it possible to do so with imperfect tools?
It is. All tools are imperfect. Still, with technical care and rigor, it is possible to obtain high fidelity results with some imperfect tools. After all, this is what experimental science– when it’s at its best– is all about. Experimental scientists make an art out of pulling exquisite measurements out of imperfect instruments. For example, particle physicists clumsily throw streams of particles at each other and make measurements of the trajectories of the products of the collisions. The idea is fundamentally a bit sloppy– definitely stochastic. Yet, by applying rigorous analysis, they can achieve exquisitely precise measurements, up to the level of five sigma.
I’m fortunate to get to work with a fellow scientist who places a premium on rigor: Dr. Yiyi Yu. This post is about a small portion of her recent work, and how she is making precise, accurate, and informative measurements of neural circuitry in action, despite the shortcomings of calcium imaging. Yiyi has been studying activity correlations in neural circuitry using calcium imaging, finding ways to make rigorous measurements, and using the results to obtain insights into principles of neural circuit function.
PART 2. Why measure activity correlations with calcium imaging? Activity correlations among neurons give us clues into how they are connected, how they might represent information, and how they might process it.
Who would bother to measure correlations with calcium imaging data? Isn’t the time resolution hopelessly poor? Isn’t the fidelity low? What’s wrong with you?
We weren’t the first to measure activity correlations with calcium imaging. In fact, there has been some nice work on analyzing them. What is new in Yiyi’s study is two-fold: First, she used new instrumentation– large field-of-view two-photon imaging– to obtain one of the most extensive data sets in the field. Second, she subjected the data to several types of rigorous analysis that have not been previously explored. She obtained several fascinating insights, including (i) a new granular functional classification of visual cortical neuron tuning, (ii) evidence and an explanation of how correlations can actually increase at the millimeter length scale (rather than falling off with distance as they usually do), and (iii) evidence that noise correlations are more stable across stimulus types than signal correlations– that is, noise correlations appear to be a more stable measure of connectivity. If you want to learn more about those insights, please check out the preprint. What I want to focus on in this blog post is the rigor of the correlation analysis.
Yiyi systematically assessed the potential issues with noise correlations, and her work merits a highlight here.
Isn’t the time resolution of calcium imaging too low? No, it’s actually fine. Even in electrophysiology experiments, where the raw data is sampled at >10 kHz, spikes are often binned into windows 0.1–1.0 seconds wide, and this can help make more accurate measurements of correlated variability. That time scale is well within the resolution of calcium imaging.
Isn’t spike inference from calcium imaging uncertain? You miss individual spikes and can’t count the number of spikes in bursts. Right? True, and this is one of the most annoying things to me about calcium imaging. However, it doesn’t end up mattering as much as I thought it would. Yiyi did a very thorough analysis of this. Using electrophysiological recordings– where the ground truth is known– she dropped individual spikes or spikes in bursts, and the estimated correlation values were mostly stable. That is, it takes a LOT of missing spikes to throw off these measurements. As long as you have enough data, correlation measurements are relatively robust to imprecise spike inference.
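As a toy illustration of that robustness (all rates and numbers here are hypothetical, not from the paper), one can simulate two model neurons that share a common input, thin their binned spike counts to mimic missed spikes, and watch how little the correlation estimate moves:

```python
import numpy as np

# Two model neurons driven by a shared input, so their binned spike
# counts are correlated (true correlation = 0.5 in this toy model).
rng = np.random.default_rng(1)
n_bins = 20000
shared = rng.poisson(5.0, n_bins)        # shared drive, counts per bin
a = shared + rng.poisson(5.0, n_bins)    # neuron 1, counts per bin
b = shared + rng.poisson(5.0, n_bins)    # neuron 2, counts per bin

r_full = np.corrcoef(a, b)[0, 1]

# Mimic imperfect spike inference: each spike is independently
# detected with probability 0.9 (i.e., 10% of spikes are missed).
a_thin = rng.binomial(a, 0.9)
b_thin = rng.binomial(b, 0.9)
r_thin = np.corrcoef(a_thin, b_thin)[0, 1]

print(f"full: {r_full:.2f}, thinned: {r_thin:.2f}")
```

In this sketch, losing 10% of spikes only nudges the estimate (from ≈0.50 to ≈0.45); the attenuation grows with the miss rate, consistent with the point that it takes a lot of missing spikes to corrupt the measurement.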
Don’t you need an unobtainable amount of data to get good measurements? Nope. With good instrumentation and experimental design, it is entirely feasible to get enough data to make precise measurements of noise correlations, especially at the population level. Just 100 neuron pairs gets the error level below 0.01– particle physics-level precision, in a neuroscience experiment with imperfect tools.
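The arithmetic behind that claim is just the standard error of the mean: averaging N per-pair correlation estimates shrinks the uncertainty like 1/√N. A minimal sketch (the per-pair spread of 0.1 and mean of 0.2 are assumed, illustrative numbers):

```python
import numpy as np

# Hypothetical per-pair noise-correlation estimates with mean 0.2
# and spread 0.1 (illustrative numbers, not from the paper).
rng = np.random.default_rng(0)
pair_corrs = rng.normal(0.2, 0.1, size=100)

# Standard error of the mean correlation across the 100 pairs.
sem = pair_corrs.std(ddof=1) / np.sqrt(pair_corrs.size)
print(f"standard error with {pair_corrs.size} pairs: {sem:.3f}")
```

With a per-pair spread of 0.1, 100 pairs puts the standard error right around 0.01, matching the figure quoted above.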
Here is the preprint. There’s a lot in it. The measurements themselves, which this post discusses, and then the insights and modeling to help understand what the measurements can tell us. Excellent work by Yiyi.
Last summer I asked Bing’s Chat AI to write advertising copy for a microscope objective. I just came across it again and maybe it’s worth sharing.
Do you have a...
In May 2022, I asked for an “LLM to automatically write code to port arbitrary data sets to NWB format”.
Today, I see this job ad from the Allen Institute, where the job is “to develop tools that use Large Language Models (LLMs) to support our metadata tracking and analysis. We track detailed metadata that is essential to interpreting and analyzing data. These are crucial to make our data findable and reusable, important pillars of Open Science. We are eager to develop tools that can aid in generating accurate metadata during data acquisition and/or can summarize the extensive metadata to create an accurate narrative of how the data was collected (e.g. write the methods section for a paper). This postbac will work with both software engineers and scientists to help achieve these goals, gaining valuable experience with LLMs and prompt engineering and learning about the scientific work that this effort will support.”
Beautiful. I’m looking forward to the products!
(in case it’s unclear, I’m not seriously taking credit… zero credit)
This is an ad for Michael Beyeler. He’s awesome. Together with Michael Goard and Cris Niell, we have formed a supergroup that is funded by the NIH BRAIN Initiative.
NIH-Funded Postdoctoral Position in Visual/Computational Neuroscience
We are excited to invite a self-driven and enthusiastic postdoctoral researcher to our team at the Bionic Vision Lab, University of California, Santa Barbara (UCSB), led by Assistant Professor Michael Beyeler. Our project, at the exciting crossroads of visual and computational neuroscience, is part of the NIH BRAIN Initiative.
This project seeks to elucidate how the mouse brain processes visual information during active exploration to support visual navigation. Our recent NeurIPS paper highlighted how a multimodal recurrent net can integrate gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We are now seeking to extend this work to higher-order visual areas (HVAs) in collaboration with the labs of Spencer Smith (UCSB), Michael Goard (UCSB), and Cris Niell (University of Oregon).
What we offer:
• A welcoming, inclusive, and collaborative environment that combines expertise in neuroscience, computer science, and cognitive sciences.
• A fully funded, unionized position, offering a competitive salary and excellent benefits, for a minimum 2-year commitment.
• An opportunity to work with diverse experts and engage with unique neural activity data sets, collected using state-of-the-art techniques.
We are looking for:
• A passionate individual with a solid foundation in visual and computational neuroscience.
• A PhD in (computational) neuroscience, computer science, cognitive sciences, statistics, or a related field.
• Strong communication and teamwork skills, with proficiency in programming and statistical analysis.
To apply, please email your CV and contacts for 2-3 references to mbeyeler@ucsb.edu.
OpenAI — and LLMs in general that train by scraping data from the web and ignoring copyrights — are in legal jeopardy. There are multiple lawsuits filed and many of us are wondering how they’ll shake out. People have made many comparisons to Napster, and I think those are valid, in a way. I also think there’s a valid comparison to sampling in music.
Has anyone asked Biz Markie or the Dust Brothers for their opinions on ChatGPT?
In my opinion, the current legal precedent for sampling is a mess. Take this with a grain of salt though, as I’m not a legal expert, and I welcome criticism from experts on this. As far as I can tell, there are multiple conflicting legal opinions, and it is difficult to operate outside of the “clear everything” modus operandi, which can obviously be cumbersome (especially for amateurs) and unfair (e.g., Bitter Sweet Symphony). I’m concerned that the debate over training AI models will evolve similarly– with a patchwork of legal opinions and decisions, and specious negotiations. To me, that is a realistic worst-case scenario for AI.
And that’s maybe why business leaders in AI have been pushing the US Congress to legislate guidelines for the use of copyrighted material for training AI models. It would be preferable (especially since the companies would likely get to heavily influence the legislation) to the mess that music sampling is today.
By the way, indexers like Yahoo and Google generally obey the robots.txt standard for opting out of indexing. Do LLMs respect that guidance as well?
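For what it’s worth, the same robots.txt mechanism can express an opt-out for LLM crawlers, provided the crawler honors it (OpenAI, for example, has since documented a GPTBot user-agent token for its crawler). Here is a sketch using Python’s standard-library parser to check a hypothetical policy that blocks GPTBot but allows everyone else:

```python
from urllib import robotparser

# Hypothetical robots.txt: block the GPTBot crawler, allow all others.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/post"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/post"))  # True
```

Of course, robots.txt is purely advisory– it only works if the crawler chooses to respect it.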
Tangentially related: I tried to get Bing’s GPT-4 based chat to find a quote for me today. It seemed to be working great, but I fact checked it and GPT-4 had completely made up the quote and the source. The book was real, but the quote wasn’t in it. I even did a text search on the book. I called GPT-4 out on the error and asked it to double-check. It admitted its error and then gave me a different page in the same book– which also didn’t have the quote. It was a complete fabrication from start to finish. It sounded good, but it wasn’t good.
NPG still sells reprints (link). Seems anachronistic, doesn’t it? If you got into science after the year 2000, you might not be fully aware of the sea change that occurred just before you got into science. I’m not fully aware of it either, but I have a taste of what it was like before then, and I can share.
What are reprints? They’re just the article pages, as they appeared in the journal (mostly).
Why would people have these? It’s how people shared their work before PDFs. In the days before PDFs and journal websites, people would buy reprints, and then mail them to people who requested them. People who were at an institution that didn’t subscribe to a particular journal would write to authors and request reprints (some people even had preprinted postcards for just such requests). Then the authors would mail them a copy. Super slow, expensive, and old fashioned. I remember interviewing for grad school and many profs had big stacks of reprints around their offices, and they’d often give me one or two to read to learn about their work.
My PhD advisor had a filing cabinet full of reprints. Now people just have collections of PDFs, or maybe not even that anymore since it’s so easy to find and access the literature.
How did people do literature searches prior to Pubmed? They used big hardcopy books like the Index Medicus. Here’s an example, an edition from 1995. You can flip through it online (ironically).
Looking up by keyword:
Looking up by author:
Short background: I got into neuroscience after seeing a talk by Freeman Dyson. A friend of mine told me he was giving a talk about 2 hours away, and we met up for it. We had both read his books and we were excited to see his talk. I was an undergraduate in physics and mathematics and I wasn’t sure what direction I wanted to go into for a career. Dyson gave his talk in four vignettes, and one was on neuroscience. I emailed him afterwards and he sent me the references he had discussed.
Searching: It got me reading the literature and I used these huge hardcopy books to find the original papers Dyson had cited, and other neuroscience papers. It was the mid 1990s, and there were computers in the library that had CDROMs with this same information, but they were slow and clumsy. I found it quicker to use the hardcopy books. I only did this for a matter of months. Pubmed went online in 1996 and that helped tremendously. But I started with these big books and spending a lot of time with a small number of papers. I was hooked.
Neuroscience was, and continues to be, a wild frontier of experiments and theory with fascinating implications for humankind. I followed up with Dyson later and told him how his talk and correspondence had helped direct me to pursue a career in neuroscience. He responded, as he consistently did, with warm and encouraging messages. It was huge for me that he bothered to answer email from a nobody student like me. Later, I was delighted to learn that he was like that with everyone!
Finding the actual papers: Even after Pubmed became available for easy online searching, I still had to spend a lot of time in the library looking up the actual papers and reading them. Shortly thereafter, journals digitized their back catalogs and PDFs became widespread. (nota bene: Pubmed still has a bias towards more recent papers– the coverage of literature prior to the 1960s is incomplete.)
The twilight of hardcopies: My first publications and fellowships were all submitted in hardcopy via Fedex (maybe 1999 or 2000). Then everything moved online pretty quickly. For some years I would get emails from people asking for PDFs of my articles that they didn’t have access to. That is pretty rare these days, but I still do it myself sometimes.
(credit: top photo from this nice blog entry on the same topic)
We’ve been using compressors in recent years, and in particular these compressors from Newport (hat tip to J. Stirman). I was just buying some oil for them now and sourced some from Amazon. Searching on Amazon is a big mess these days, because there is so much crap to wade through. But sometimes the broad offerings turn out to be handy and I find interesting stuff that I otherwise might not have.
If you know me, you know that I don’t like noise. I try to make my lab space quiet. I use sound dampening cabinets for noisy water chillers, laser components, and other parts with noisy fans. I like to seal off resonant mirrors (makes a huge difference). I have had unused blowers shut down by Facilities Management to lower noise. Similarly, I don’t like air compressor noise. The Newport ones are pretty quiet, 30 dB. I approve. Thorlabs sells a compressor too, but it’s louder– 50 dB (to be fair, it has a larger tank capacity, in case you need that). I’m quite fond of the Newport compressors. They’re quiet and they do the job reliably.
Air brush artists use compressors too, and at least some of them share my love of quiet equipment. I found this compressor on Amazon for that crowd. 30 decibels, and roughly the same pressure range. It’s cheaper than the Newport ones.
I asked my lab to please buy these basic items. I assured them that we have sufficient funding to buy doorstops. Some time passed. Then this popped up:
Custom 3D printed door stop.
It was my first time seeing it, and for some reason, that image creeps me out a bit. So I replied, “it’s your turn. as long as you promise to never post that image again.”
Well, that backfired.
Now the default hologram in our lab is that image. It shows up all the time.
I get a lot of email from students who are interested in graduate school. Many of these emails are automated. They are from real students, who want to connect with a professor to increase their odds of getting admitted. However, the students are taking a mass email approach. The emails use my name, maybe some keywords about our research areas, and maybe even the title of a paper we recently published. The emails express interest in our work, and their desire to work in our labs at our universities.
However, it is usually pretty clear that they are automated, and not authentic*, and that reflects negatively on students in ways that I want to make clear in this open letter. I am bothering to write this because I think there are many well-meaning, talented students that use this approach, and I hope to reach at least some of them. Many of the emails I get are from countries that have excellent traditions in science and engineering, and excellent higher education, but are not easy for students to leave due to political differences between their home country’s leadership and leaders of other countries. I wish success for the students, and opportunities for them to have a great life and a positive impact on the world. And if this is what they want for themselves, then they should start with authenticity.
By sending automated emails masquerading as personal emails to individual professors, you are making your very first interaction with a professor dishonest. Your integrity should be one of the most valuable things in your scientific career. Honest mistakes are forgivable. Deliberate misrepresentation is not where you want to start.
Please don’t send these automated emails. If you insist on doing so, (why? does the approach work?) then at least label them as such.
I recommend sending personal emails to a smaller number of professors. Writing 1-2 sentences that are clearly authentic and express a bit of detail about your interest in their work can be much more effective at laying the groundwork for a constructive dialog.
And don’t worry too much about the grammar or format. Keep it brief and authentic. Attach a CV/Resume and (optionally) an unofficial transcript so that they can quickly get an idea of your background and abilities. I made some suggestions in an earlier blog post.
* For example, they might have errors in the formatting such as “Dear [professorName], ” or they might cite a paper that I wrote, but one of the least interesting ones like a commentary on someone else’s work. Even the most polished automated emails lack authentic expressions of interest, tending towards vague comments that could apply to anything.
I was taught that when responding to reviewer criticism, there are two valid strategies: (1) new data, or (2) references to papers. When authors stray from these two strategies, they are on thin ice. They get away with it anyway sometimes, but not because they are clever.
The reviewer is your best friend. Get in this mindset. I give this advice to all of my trainees. When getting a challenging question after a talk, reading a harsh paper review, or receiving other types of critical evaluations, recognize that the reviewer is a friend. In a way. They might not be perfectly fair, and maybe even unconstructive, but they took the time to digest what you put out and think about it. And they did so for free. It’s natural to be instinctively defensive, but you have to overcome this. You are more than your data. Relax, disconnect from it, and work to see things from the reviewer’s perspective, and try to be a partner in addressing their concerns. Reviewers want to see that the paper got better. They cared enough to spend time reviewing it and making suggestions. Show them that you stepped up. Don’t tell them that they messed up or wrote a bad review. Be friends with them.
Look for validity in all criticisms, and hold yourself accountable. There are indeed some criticisms that are unfair. It happens. Commonly. But most of the time it’s best to take criticism to heart and try to respond constructively. Sometimes criticisms are poorly worded or overly broad or harsh. Try to find a kernel that you can address, and do so effectively.
Give direct answers, even if they’re not completely satisfying. Reviewers have been there. No experiment is perfect. Sometimes people need to leave lab and follow up experiments are not feasible. Reviewers understand that. It might not be a sufficient response, but being brief, direct, and honest can play better than trying to argue with the reviewer that their concern is invalid. The goal is to have the reviewers look at the revision and QUICKLY and CONFIDENTLY conclude that the paper is improved and their concerns have been addressed. So don’t write long wordy responses. Get to the point quickly.
Be humble. There are sometimes genuine misunderstandings with reviewers. Be gentle and humble in suggesting that possibility. Because it might be you who is mistaken. Or, you could at least be more likely to win the approval of the reviewer if you avoid coming across as an arrogant jerk.
Who cares how long the response is? Longer is not better. It is not impressive. It is not persuasive. If you need some space to make your point, take it, and add figures if you like– as both a reviewer and an author, I love Reviewer Figures. Just know that length alone isn’t impressive. More often than not, it means that the short and direct answer is weak and the authors are trying to avoid stating it plainly. Sometimes authors wage a war of attrition and try to wear reviewers down with long-winded, verbose, wordy, rambling, loquacious, prolix responses that are ultimately unsatisfying. Sometimes it works, and maybe that’s why some people still do it. But it is not rigorous or constructive, and it can backfire. I don’t recommend it. Aim for one round of revisions, and be prepared to do a massive amount of work, including new experiments.
Don’t get bogged down. Maybe there’s a reviewer comment that is very upsetting, and you don’t have a great response to it. Move on and get the rest of the response drafted and done. Then ask a coauthor or even a non-author colleague to give their input. A fresh pair of eyes can work wonders. The manuscript is your baby and your judgement can be off. Let someone else help guide you to a good response.
It’s a dialog. You can ask questions. Ideally there isn’t too much back and forth, but if you’re really stuck, try to be constructive. Ask for clarification. Ask what might be sufficient. Offer something, while acknowledging that it might not completely satisfy the reviewer.
More tips:
Some items above are from Michael Breakspear https://twitter.com/DrBreaky/status/1273842646377566214