
Posts by Nature Machine Intelligence

Geir Kjetil Sandve (U of Oslo) on his dedication to open science "To me, open science is not about whether it’s theoretically possible with unlimited time to build on something but about ensuring it’s open in a way that actually invites reuse, transparency, and reproducibility." tinyurl.com/3743fr5m

5 days ago
Representation of long-range atomic interactions.

Our March issue is live! With a computational framework for human-machine interactions in neural interfaces, benchmarking for neuromorphic soft robots, an ML approach for long-range atomic interactions, and our editorial about reproducibility in times of fast science. www.nature.com/natmachintell/

3 weeks ago

An exciting new paper from our Cluster, now published in @natmachintell.nature.com! Congratulations to the authors @mariokrenn.bsky.social and Sören Arlt! Read the press release from @unituebingen.bsky.social for more info 👇

1 month ago
Cardiac signals

Our Feb issue is live! With work on meta-designing quantum experiments, an overview of what works in vision-language models for robots, a foundation model for cardiac health, and our editorial 'AI and the long game', looking back at AlphaGo's breakthrough 10 years ago www.nature.com/natmachintell/

1 month ago

🧵 1/ 🎉New paper alert! Pretrained protein language models (#pLMs) are all the hype, but are they really helping us predict protein–protein interactions? 🤔Dive into our thread to see why you should read the full study @natmachintell.nature.com. ⬇️

🔗 rdcu.be/e3PGD

2 months ago

🚨 New in @natmachintell.nature.com 🚨 We collected 9000+ annotations of empathic communication in convos from experts, crowds & LLMs across 4 NLP/comms/psych frameworks

LLM judgment exceeds crowds' reliability & nearly matches experts

Soft skills can now be reliably measured by LLMs 🧵

2 months ago
A sketch of a geometry problem from Olympiad maths challenges.

Our Jan issue is live! With work on solving Olympiad maths problems with AI, benchmarking LLMs on safety risks in labs, and metasurface structure discovery with a diffusion model. And our editorial on the need for transparency when reporting on multi-agent AI systems! www.nature.com/natmachintell/

2 months ago

Now out in @natmachintell.nature.com

TCRT5 rapidly generates target-conditioned CDR3β sequences, leads the state of the art, and yields the first AI-designed self-tolerant binder to an out-of-distribution non-viral epitope (with validation)

📑: www.nature.com/articles/s42...
🤗: huggingface.co/dkarthikeyan1
👨‍💻: github.com/pirl-unc/tcr_translate

7 months ago
Brain–computer interface control with artificial intelligence copilots - Nature Machine Intelligence AI copilots are integrated into brain–computer interfaces, enabling a paralysed participant to achieve improved control of computer cursors and robotic arms. This shared autonomy approach offers a promising path to increase BCI performance and clinical viability.

A brain-computer interface co-piloted by AI improved how well a person with paralysis completed tasks, such as moving a computer cursor or operating a robotic arm, by up to four times, according to research in @natmachintell.nature.com: spklr.io/63322BHjsK

#Neuroscience #Neuroskyence #AI

7 months ago
Dimensions underlying the representational alignment of deep neural networks with humans - Nature Machine Intelligence An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep neural networks focus more on visual ...

What makes humans similar to or different from AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut Güclü, we took a deep look at the factors underlying their representational alignment, with surprising results.

www.nature.com/articles/s42...

9 months ago
LLMs as all-in-one tools to easily generate publication-ready citation diversity reports - Nature Machine Intelligence

🚨 new paper alert! 🚨
Excited to share our latest paper in @natmachintell.nature.com: we tested 27 large language models to see if any could generate a publication-ready Citation Diversity Report… and several (free) LLMs could! Read the paper for free at the link:
rdcu.be/eCfwJ
@natureportfolio.nature.com

7 months ago

What complexity of algorithms can AI compute? In a new paper with colleagues at IBM Research, we explore how circuit complexity theory can help quantify the degree of algorithmic generalization in AI systems. www.nature.com/articles/s42...
@natmachintell.nature.com
#ML #AI #MLSky
1/n

7 months ago
Using AI to 'see' what we see - Fed the right information, large language models can match what the brain sees when it takes in an everyday scene such as children playing or a big city skyline, a new study led by Ian Charest finds.

#AI "Ultimately, this is a step forward in understanding how the human brain understands meaning from the visual world." #LLMs @mila-quebec.bsky.social @adriendoerig.bsky.social @timkietzmann.bsky.social @natmachintell.nature.com
nouvelles.umontreal.ca/en/article/2...

8 months ago
High-level visual representations in the human brain are aligned with large language models - Nature Machine Intelligence Doerig, Kietzmann and colleagues show that the brain’s response to visual scenes can be modelled using language-based AI representations. By linking brain activity to caption-based embeddings from lar...

🚨 Finally out in Nature Machine Intelligence!!
"Visual representations in the human brain are aligned with large language models"
🔗 www.nature.com/articles/s42...

8 months ago
Emotional risks of AI companions demand attention - Nature Machine Intelligence The integration of AI into mental health and wellness domains has outpaced regulation and research.

#Chatbots are increasingly used as #MentalHealth supports and companions, but this can be risky for people due to bots' abilities to manipulate users, an issue that providers and regulators must be more proactive about, argues @natmachintell.nature.com

www.nature.com/articles/s42... #AI

8 months ago
An AI-generated 3D metamaterial structure

Our July issue is live! Read our editorial about the emotional risks of companion chatbots, a Perspective on LLMs in real-world materials, research on AI-design of mechanical metamaterials with nonlinear responses, a new robot grasping mechanism and more: www.nature.com/natmachintell/

8 months ago
Generating transition states in chemistry with machine learning and optimal transport.

Our April issue is live! With a review article on AI safety research, an editorial on the emerging use of LLMs in robotics planning, a deep learning method for generating transition states in chemical reactions, a wearable multimodal visual assistance system and more: www.nature.com/natmachintell/

11 months ago
Active twisting of plant leaves.

🚨Our April issue is now live and includes a model to unravel plant behavior for functional devices, a method to efficiently screen compound libraries, a call for papers on generative molecular design and discovery, and much more! www.nature.com/natcomputsci...

11 months ago

'AI Safety for Everyone' is out now in @natmachintell.nature.com! Through an analysis of 383 papers, we find a rich landscape of methods that cover a much larger domain than mainstream notions of AI safety. Our takeaway: epistemic inclusivity is important, the knowledge is there, we only need to use it.

1 year ago
AI in biomaterials discovery: generating self-assembling peptides with resource-efficient deep learning - Nature Machine Intelligence Recurrent neural networks are efficient and capable agents for discovering new peptides with strong self-organizing capabilities.

Check out our new piece in @natmachintell.bsky.social @natureportfolio.nature.com, featuring AI-driven biomaterials discovery by Daniela Kalafatovic & Goran Mauša through resource-efficient deep learning to generate self-assembling peptides. Huge kudos to Tianang Leng! @upenn.bsky.social

1 year ago

What are goals? Can we model them as programs that produce rewards? In particular, can we model free-form creativity in game design this way? And learn to generate games like humans do? Our new paper in @natmachintell.bsky.social, led by @guydav.bsky.social and Graham Todd, shows that yes, we can!

1 year ago

Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N

1 year ago
Multiple stacked tiers representing a neural network on a silicon microchip.

🚨Our January issue is now live and includes research on using neuromorphic computing to advance AI, a large-scale analysis that shows that LLMs exhibit social identity biases, and much more! Check it out: www.nature.com/natcomputsci...

1 year ago
A robot hand trying to play snooker.

Our Jan issue is live! nature.com/natmachintell with an article (Yejin Choi et al) and N&V commentary (Molly Crockett) on Delphi, designed to investigate AI moral reasoning. Also read about IntegrateAnyOmics by @bowang87.bsky.social, an unsupervised platform to tackle incomplete multi-omics data.

1 year ago
What large language models know and what people think they know - Nature Machine Intelligence Understanding how people perceive and interpret uncertainty from large language models (LLMs) is crucial, as users often overestimate LLM accuracy, especially with default explanations. Steyvers et al...

Exploring the gap between what LLMs really know vs what people think they know
www.nature.com/articles/s42...

1 year ago

Collecting #omics data is expensive, but #EHR data is available for large patient cohorts for free!

In our latest @natmachintell.bsky.social paper, we show how deep learning + EHR data can supercharge omics models. Hard work by (soon to be Dr.) Samson Mataraso:
www.nature.com/articles/s42...

1 year ago

🚀 Our paper on visual cognition in multimodal large language models is now out in @natmachintell.bsky.social

with @lucaschubu.bsky.social, @bethgelab.bsky.social and @ericschulz.bsky.social!

1 year ago
Sequential Episodic Control (SEC) architecture

What a great way to end the year! 🎉
Thrilled to announce our paper is now out in @natmachintell.bsky.social

How can agents achieve both sample and memory efficiency?

We present Sequential Episodic Control (SEC), a hippocampal-inspired model that uses sequential memory to guide actions!

🧵

1 year ago
Discussions of machine versus living intelligence need more clarity - Nature Machine Intelligence Sharp distinctions often drawn between machine and biological intelligences have not tracked advances in the fields of developmental biology and hybrid robotics. We call for conceptual clarity driven ...

Nic Rouleau & I present a checklist to go through when settling on opinions about AI, diverse intelligence, unconventional cognition, consciousness, mind/machine issues, etc. When you read (or write) about these topics, run the perspective through this, to kick the tires. 🧪
www.nature.com/articles/s42...

1 year ago

Our 2024 Dec issue is live! nature.com/natmachintell with robot rats, a Perspective on AI safety guidelines, a plea for clarity when discussing 'intelligence' in living or artificial systems (by @drmichaellevin.bsky.social & Rouleau), a protein representation model when data is scarce, and more.

1 year ago