Toy models, just in time for Christmas!
Excited to share my first article for @thetransmitter.bsky.social
#neuroskyence
Posts by Rory Byrne
I built a link-sharing discussion website for (meta)science.
talk.amacrin.com
Scientific discourse here and elsewhere is a bit fragmented, so I made a space for centralised, casual discussion. You can log in with Bsky.
Still in testing mode. I'll move it to a new domain once I find a good name.
Answer: because seniors know the language of software architecture.
This enriches their LLM prompting significantly.
If you want to successfully use AI for coding, learn some software architecture!
Proud to have been a part of this, a great example of distributed async science!
Huge thanks to @marcusghosh.bsky.social, @neuralreckoning.bsky.social, @tfiers.bsky.social, @krhab.bsky.social and others for putting in the bulk of the effort 🙌
A great piece from @ersatzben.bsky.social on the importance of bold, aesthetic, mission-oriented directions in publicly funded research.
betterscienceproject.substack.com/p/the-counte...
How can we best use AI in science?
Nine other research fellows from @imperial-ix.bsky.social and I use AI methods in domains from plant biology (🌱) to neuroscience (🧠) and particle physics (🎇).
Together we suggest 10 simple rules @plos.org 🧵
doi.org/10.1371/jour...
Both can safely go back to their home country. If (eg) an Indian immigrant doesn’t intend to integrate (continues speaking native language etc), are they then an expat?
What makes you think immigrants don’t have access to their home country?
New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).
arxiv.org/abs/2507.16043
Surrogate gradients are popular for training SNNs, but some doubt whether they can really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇
🤖🧠🧪
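To make the idea concrete: a surrogate gradient replaces the zero-almost-everywhere derivative of the spiking nonlinearity with a smooth pseudo-derivative on the backward pass. The fast-sigmoid surrogate below is one common choice in the SNN literature, used here purely for illustration, not necessarily the one used in the preprint.

```python
import numpy as np

def heaviside(v):
    # Forward pass: the non-differentiable spike function.
    # Fires a spike (1.0) wherever membrane potential crosses threshold.
    return (v >= 0.0).astype(np.float64)

def fast_sigmoid_surrogate(v, beta=10.0):
    # Backward pass: a smooth pseudo-derivative used in place of the
    # Heaviside function's gradient, which is zero almost everywhere.
    # Peaks at the threshold (v = 0) and decays away from it.
    return 1.0 / (1.0 + beta * np.abs(v)) ** 2

v = np.linspace(-1.0, 1.0, 5)   # membrane potential minus threshold
spikes = heaviside(v)            # used in the forward pass
grads = fast_sigmoid_surrogate(v)  # used in the backward pass
```

In a framework like PyTorch this pair would be wired up as a custom autograd function, so the network trains with ordinary backpropagation while still emitting binary spikes forward.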
I think we already explain societal impact, just retroactively: pre-PhD work justifies PhD funding, PhD work justifies postdoc funding, etc. We explain past contributions to secure future funds. I don’t think concurrent justification is reasonable, and might even be detrimental.
NIU Open Software Week
🟢 Applications are now open for two SSI Fellows' events: Niko Sirmpilatze's "Animals in Motion" and Alessandro Felder's "Big Imaging Data". These events will take place during the NIU Open Software Week running between Monday 11 and Friday 15 August in London.
www.software.ac.uk/news/ssi-fel...
How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning. New preprint from @yang-chu.bsky.social. 🤖🧠🧪
arxiv.org/abs/2001.10605
Come build tools for science with us in London!
Food, cool people, great speakers (on both the science and toolmaking sides).
#neuroskyence #openscience #desci #openchem #bioinformatics #opensource #foss
This is great! But how does it work tech-wise? It says it's powered by the Python SDK. I don't know much about the AT Protocol (yet).
How can we explore the space of computational models in #neuroscience 🧠?
Picture a mouse navigating an environment with light and dark areas.
🧵1/10
If you’re in SF, come build tools-for-science with us later this month! 🛠️
#opensource #neuroskyence #openscience #machinelearning #compchem #chemsky
Just getting started @standupforscience.bsky.social
Scientific data and independence are at risk: We need to work with community-driven services and university libraries to create new multi-country organizations that are resilient to political interference.
By @neuralreckoning.bsky.social
#neuroskyence
www.thetransmitter.org/policy/scien...
Yes, but in many cases even a .txt file of timestamped lines is enough.
This is part of @flywhl.dev, an initiative building devtools for science.
Join our Discord: discord.gg/kTkF2e69fH
We've also made a "Call for Problems" in the workflow of computational science, which helps us decide what to build next: flywhl-ideas.notion.site
Now, you can search your commit history to find good results and the associated code state.
Then when your experiment runs, logis will commit your code for you, with a nice commit message and experiment metadata at the bottom.
Or use the (work-in-progress) implicit API, where logis finds your parameters/metrics in the arguments and return value.
The SDK is similar to Weights & Biases: just add relevant data to your experiment's run.
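A W&B-style run object usually looks something like the sketch below. The class and method names here are hypothetical stand-ins, not the real logis SDK API; the point is just the shape of the workflow.

```python
# Hypothetical sketch of a W&B-style run API (names assumed,
# not taken from the actual logis SDK).
class Run:
    def __init__(self):
        self.params = {}
        self.metrics = {}

    def log_param(self, name, value):
        # Record an experiment input (e.g. a hyperparameter).
        self.params[name] = value

    def log_metric(self, name, value):
        # Record an experiment output (e.g. a result metric).
        self.metrics[name] = value

run = Run()
run.log_param("lr", 1e-3)
run.log_metric("accuracy", 0.93)
# At commit time, run.params / run.metrics would be serialised into
# the commit message footer so they are searchable later.
```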
🔧 Logis - turn your git history into a searchable scientific log.
The `@commit` decorator auto-commits your code when your experiment runs - with metadata in the message.
Then you can find previous results by querying for commits with (e.g.) `metrics.accuracy > 0.9`.
github.com/flywhl/logis
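For a sense of how an auto-committing decorator like this can work, here is a minimal sketch in the spirit of `@commit`. This is an illustrative assumption about the mechanism, not the real logis implementation: it stages everything and embeds the experiment's return value in the commit message footer so it can be found later via the git log.

```python
import functools
import subprocess

def commit(fn):
    # Minimal sketch of an auto-commit decorator (assumed mechanism,
    # not the real logis `@commit` implementation).
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        # Put experiment metadata in the commit message footer so it
        # can be queried from the git history later.
        message = f"experiment: {fn.__name__}\n\nmetrics: {result}"
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
        return result
    return wrapper

@commit
def experiment():
    # Your training / analysis code goes here; the return value
    # becomes the searchable metadata.
    return {"accuracy": 0.93}
```

Searching then reduces to scanning commit messages (e.g. `git log --grep "metrics:"`) and checking out the commit whose metadata matches.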
I naively assumed that a Nobel Prize was a prerequisite for such behaviour.
Who likes categories anyway?
I wonder what this means for the concept of "areas"...
#neuroscience
www.thetransmitter.org/neural-codin...