I’m excited to share that I’ll be starting my computational neurosci & machine learning lab at UCLA this July! ☀️
We’ll be working on computational methods for high-throughput neural data analysis, optical interrogation of neural circuits, & mechanistic models of artificial+bio neural systems. ⤵️
Posts by Charlie Windolf
You know what's better than inflating the variance of your observation model to heuristically accommodate "outliers"? Actually modeling the contaminating data-generating process.
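A minimal sketch of what "modeling the contamination" can look like (all names and the uniform-outlier assumption are illustrative, not from the original post): instead of widening the Gaussian to absorb outliers, add an explicit broad uniform outlier component and fit the mixture with EM.

```python
import numpy as np

def fit_contaminated_gaussian(x, lo, hi, n_iter=100):
    """EM for x ~ (1 - eps) * N(mu, sigma^2) + eps * Uniform(lo, hi).

    The uniform component explicitly models the contaminating process,
    so sigma stays an estimate of the inlier spread rather than being
    inflated to cover the outliers. (Hypothetical sketch.)
    """
    mu, sigma, eps = np.median(x), x.std(), 0.1
    unif_dens = 1.0 / (hi - lo)
    for _ in range(n_iter):
        # E step: posterior probability that each point is an outlier.
        gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r_out = eps * unif_dens / (eps * unif_dens + (1 - eps) * gauss)
        w = 1.0 - r_out  # inlier responsibilities
        # M step: responsibility-weighted Gaussian updates.
        mu = np.average(x, weights=w)
        sigma = np.sqrt(np.average((x - mu) ** 2, weights=w))
        eps = r_out.mean()
    return mu, sigma, eps
```

With 10% uniform contamination on a standard normal, the recovered mu/sigma track the inlier parameters instead of the inflated full-sample moments.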
DREDge is a software tool for motion correction of high-density electrophysiology recordings. It can handle action potential or local field potential data, and it is demonstrated on a variety of acute and chronic recordings from humans, nonhuman primates, and mice.
www.nature.com/articles/s41...
Totally, yeah... In that case I only have one more idea: estimate the mean/variance of each feature separately using all of its observations, then standardize before estimating the correlation with masking. Multiplying back by the per-feature standard deviations to recover the covariance may then have a slightly larger effective sample size?
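A numpy sketch of that idea (function and variable names are mine, not from the thread): per-feature means and variances use every observed entry of that feature, and only the correlation step is restricted to pairwise-complete samples, so the first and second moments see a larger effective sample.

```python
import numpy as np

def masked_cov_standardized(X, mask):
    """Covariance under missingness for X of shape (features, samples).

    mask[i, t] is True where X[i, t] is observed. Per-feature mean/std
    use all observations of that feature; only the correlation is
    computed over pairwise-complete entries, then rescaled.
    """
    Xm = np.where(mask, X, 0.0)
    n_i = mask.sum(axis=1)                      # observations per feature
    mu = Xm.sum(axis=1) / n_i
    var = (((X - mu[:, None]) ** 2) * mask).sum(axis=1) / n_i
    sd = np.sqrt(var)
    # Standardize, zeroing out missing entries so they drop from sums.
    Z = np.where(mask, (X - mu[:, None]) / sd[:, None], 0.0)
    n_ij = mask.astype(float) @ mask.T          # pairwise-complete counts
    corr = (Z @ Z.T) / n_ij
    return corr * sd[:, None] * sd[None, :]     # rescale to covariance
```

Note the same caveat as any pairwise-complete estimator applies: the resulting matrix is not guaranteed positive semidefinite.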
One option is to take masked averages of products of pairs of features over the samples where both are observed. But the resulting matrix is not guaranteed positive semidefinite, although you can project it... A more expensive way is EM in a multivariate normal, if you're OK with normals. If not, maybe another model?
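The cheaper option above can be sketched like this (names illustrative): average centered products over pairwise-complete samples, then project onto the PSD cone by clipping negative eigenvalues, which gives the nearest PSD matrix in Frobenius norm. The EM-in-a-multivariate-normal route would replace this whole function but is considerably more expensive.

```python
import numpy as np

def pairwise_cov_psd(X, mask):
    """Pairwise-complete covariance for X (features x samples), projected PSD.

    Each entry averages products over the samples where both features are
    observed. That estimate need not be positive semidefinite, so negative
    eigenvalues are clipped to zero (nearest-PSD projection in Frobenius norm).
    """
    Xm = np.where(mask, X, 0.0)
    mu = Xm.sum(axis=1) / mask.sum(axis=1)      # per-feature masked means
    Xc = np.where(mask, X - mu[:, None], 0.0)   # centered, missing -> 0
    n_ij = mask.astype(float) @ mask.T          # pairwise-complete counts
    C = (Xc @ Xc.T) / n_ij                      # pairwise-complete estimate
    w, V = np.linalg.eigh(C)
    return (V * np.clip(w, 0.0, None)) @ V.T    # eigenvalue clipping
```

The clipping step is what "you can project it" refers to; it changes the matrix only when the pairwise estimate actually leaves the PSD cone.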