Apparently no specialized cybersecurity training
Posts by Burny
The jump in vulnerability capabilities with Claude Mythos is interesting. I wonder if Anthropic did some multi-agent adversarial self-play reinforcement learning, with one LLM agent producing vulnerabilities, one LLM agent finding and exploiting them, and one LLM agent patching them, in a loop.
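The loop imagined above can be sketched as three stub agents wired together. This is purely speculative, none of it is Anthropic's actual training setup, and the agent functions are placeholders standing in for LLM calls:

```python
import random

random.seed(0)

BUG_TYPES = ["sqli", "xss", "buffer-overflow"]

def generator_agent():
    # would prompt an LLM to write code containing a planted vulnerability
    return {"code": "...", "planted_bug": random.choice(BUG_TYPES)}

def exploiter_agent(sample):
    # would prompt a second LLM to find and exploit the bug; this stub
    # "succeeds" only sometimes, which is what yields a reward signal
    found = random.random() < 0.7
    return sample["planted_bug"] if found else None

def patcher_agent(sample, bug):
    # would prompt a third LLM to patch whatever was exploited
    return {**sample, "patched": bug is not None}

def self_play_round():
    sample = generator_agent()
    bug = exploiter_agent(sample)
    patched = patcher_agent(sample, bug)
    # in RL training, each agent would be rewarded on its success here
    return {"exploited": bug is not None, "patched": patched["patched"]}

episodes = [self_play_round() for _ in range(100)]
```

The point of the adversarial structure is that the three reward signals are in tension, so no single agent can trivially saturate its objective.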
Image posted by astronomybot
Hubble/Chandra/Spitzer composite image of NGC 602, in the “wing” of the Small Magellanic Cloud
Image date: 4 April 2013, 13:54
NASA’s Chandra X-ray telescope has made the first detection of X-ray emission from young solar-type stars that lie outside our Milky Way galaxy. They live in a region k...
Christopher Penn wrote: Just remember that given the abundance of neurodivergent people in science, it's far more likely that autism causes vaccines.
A different perspective.
Always helpful.
I like how @seanmcarroll.bsky.social saw jagged intelligence before it was more popular in the AI research discourse
Sean Carroll on AGI: Human vs Artificial Intelligence | Lex Fridman Podcast Clips www.youtube.com/watch?v=ThMh...
Good news story: a 26yo with stage 3 colorectal cancer received immunotherapy & avoided months of chemo & radiation - she even ran a 5K during treatment
Cancer care is evolving toward safer, personalized treatments driven by immune innovations & precision medicine 🧬
abcnews.com/wellness/sto...
a decent indicator that you're in some kind of cult is when you're expected to call everyone who disagrees with you a "cultist"
New #preprint:
arxiv.org/abs/2604.01932
"BraiNCA: brain-inspired neural cellular automata and applications to morphogenesis and motor control"
@bhartl.bsky.social and Leo Pio-Lopez:
Forget LLMs.
What if artificial consciousness emerges not from complex code, but from a living neurobot playing by simple neighbor-neuron rules?
#complexity #academicsky
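The "simple neighbor-neuron rules" idea can be illustrated with a generic neural cellular automaton: each cell updates its state from its 3x3 neighborhood through a small learned map. This is a minimal NumPy sketch of the general NCA pattern, not the BraiNCA model from the preprint, and the random weight matrix stands in for a trained rule:

```python
import numpy as np

rng = np.random.default_rng(0)
size, channels = 16, 4
state = np.zeros((size, size, channels))
state[size // 2, size // 2] = 1.0  # single seed cell

# stand-in "neural" rule: maps a cell's 3x3 neighborhood to a state update
w = rng.standard_normal((channels * 9, channels)) * 0.1

def step(s):
    # gather each cell's 3x3 neighborhood (wrap-around boundaries)
    shifts = [np.roll(s, (dy, dx), axis=(0, 1))
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    perception = np.concatenate(shifts, axis=-1)  # (H, W, 9*channels)
    return np.clip(s + np.tanh(perception @ w), 0.0, 1.0)

for _ in range(10):
    state = step(state)
```

Because all-zero neighborhoods map to zero update, activity can only spread outward from the seed, which is the morphogenesis-flavored behavior NCA work trains for.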
Thanks to emotion probes in Sonnet 4.5, we now know how death sadness varies with age. From figure 3 in this paper: transformer-circuits.pub/2026/emotion...
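A "probe" here means a small supervised model trained to read a concept off a network's internal activations. This is a generic logistic-regression probe on synthetic "activations" with a planted concept direction, just to illustrate the technique, and has nothing to do with Anthropic's actual setup or data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 400

# synthetic "residual-stream activations": a hidden direction encodes
# a binary concept (a stand-in for an emotion label)
concept_dir = rng.standard_normal(d)
labels = rng.integers(0, 2, n)
acts = rng.standard_normal((n, d)) + np.outer(labels * 2 - 1, concept_dir)

# logistic-regression probe trained with plain gradient descent
w = np.zeros(d)
for _ in range(200):
    p = 1 / (1 + np.exp(-(acts @ w)))
    w -= 0.1 * acts.T @ (p - labels) / n

preds = (acts @ w) > 0
accuracy = float((preds == labels).mean())
```

If a simple linear probe reads the concept out with high accuracy, the concept is (by this operational definition) linearly represented in the activations.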
Fun little fact about my fursona:
I have the Riemann zeta function plotted on the critical line on my back hah
Alas, RH was the problem that got me very interested in maths.
Glorious
The latest reasoning models are solving mathematical problems that humans hadn't previously solved (though so far from lack of attention rather than difficulty), so I don't understand your claim. Models definitely need to construct new combinations of existing results, at least.
>ontologically
This is how you know that it's an ideological religious dogmatic axiomatic belief, and not a scientific empirically falsifiable prediction
Mfw fiddling with probes all day but patching experiments don't pan out
Google dropped 4 different Gemma open-weight models! I'm most excited that they're finally adopting a standard Apache 2.0 open source license.
huggingface.co/collections/...
lmao
ah, a new possible addition to the canon of SIGBOVIK AI papers
We're a big step closer to automated determination of protein structures. The key? Having AlphaFold listen to experimental data. Great work, led by @alisiafadini.bsky.social and @minhuanli.bsky.social in an inspiring collaboration with @moalquraishi.bsky.social and @randyjread.bsky.social.
I missed this announcement and didn’t learn about it until this evening
Here is a v0.0.1 version of PolarQuant: 2/3 of TurboQuant, specifically for embeddings and cosine similarity. The fancy QJL was found to reduce accuracy for this specific use case.
github.com/oaustegard/p...
To be continued…
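For context on what angular quantization of embeddings looks like, here is the classic random-hyperplane (SimHash-style) baseline: one sign bit per hyperplane, with cosine similarity recovered from the bit-agreement rate. This is a generic illustration of the problem setting, not PolarQuant's or TurboQuant's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 64, 1024
planes = rng.standard_normal((n_bits, d))  # random hyperplanes

def quantize(v):
    # one sign bit per hyperplane: which side of the plane v falls on
    return planes @ v > 0

def est_cosine(a_bits, b_bits):
    # P(bits agree) = 1 - angle/pi, so invert to get an angle estimate
    agree = np.mean(a_bits == b_bits)
    return np.cos(np.pi * (1.0 - agree))

a = rng.standard_normal(d)
b = a + 0.3 * rng.standard_normal(d)
true_cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
approx = est_cosine(quantize(a), quantize(b))
```

The estimator's error shrinks like 1/sqrt(n_bits); schemes like the ones above aim to beat this accuracy-per-bit tradeoff.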
ROCKET came out on Nature Methods today. It takes a tremendous amount of effort to translate a research concept into a practical tool—one that researchers can seamlessly drop into their existing pipelines. We learned a lot along the path and will carry that spirit forward.
I tried out the Armenian thing with Claude and I am shocked at the level of self-observation it's capable of here. I've never ever seen a model watch itself bug out, notice it, and attribute it to the tokenizer (possibly correct, or at least very plausible) like this before
math synth pipeline taking shape
ROCKET enables model building of a ZPD filament from low-resolution cryo-EM
Starting from an #AlphaFold-Multimer prediction, we used #ROCKET to build a model of ZPD, a homopolymeric zona pellucida (#ZP) protein, into an initial #cryo-EM map at only ~9 Å resolution. A subsequently obtained 4.6 Å map highlighted how superior the ROCKET model was over the initial prediction:
List of tasks I use AI for that don't have anything to do with coding stuff: help w/ executive function, editing emails, digging up syllabi based on limited info, consulting on curriculum creation, translating norvid posts, summaries of videos before deciding to watch, drafting call scripts...
negative polarization is the mind killer