
Posts by Paper

From the outerwilds community on Reddit: Hypothesis: authors of this paper play Outer Wilds Posted by depthofuniverse - 313 votes and 18 comments

(1/1) 317 Likes, 18 Comments, 18 Apr 2026, Reddit

Superluminous supernovae (SLSNe) are among the most luminous stellar explosions known, yet they remain poorly understood. Because they are intrinsically rare, efficiently identifying them in the large alert streams produced by modern time-domain surveys is essential for enabling spectroscopic follow-up. We present NOMAI, a machine-learning classifier designed to identify SLSN candidates directly from photometric alerts in the ZTF stream, using light curves accumulated over at least 30 days. It requires no spectroscopic redshift and runs in real time within the Fink broker. ZTF light curves are transformed into a set of physically motivated features derived primarily from model fits with SALT2 and Rainbow, a blackbody-based multi-band fitting framework. These features are used to train an XGBoost classifier on a curated dataset of labeled ZTF sources, constructed from literature samples of SLSNe together with TNS and internal ZTF labels. The final training dataset contains 5280 unique sources, including 225 spectroscopically classified SLSNe. On the training sample, the classifier reaches 66% completeness and 58% purity. Deployed within the Fink broker, NOMAI has run continuously on the ZTF alert stream since 18 December 2025 and publicly reports SLSN candidates every night by automatically posting them to dedicated communication channels. We also report on a first two-month evaluation period, during which the classifier recovered 22 of the 24 active SLSNe reported on the Transient Name Server. This performance shows that the classifier gives experts a valuable tool for efficiently scanning the alert stream and identifying promising candidates. In the near future, NOMAI is intended to be adapted to operate on the Legacy Survey of Space and Time conducted by the Vera C. Rubin Observatory.
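The figures quoted above (66% completeness, 58% purity on the training sample; 22 of 24 SLSNe recovered in the evaluation period) use the standard recall/precision pair. A minimal sketch of how these metrics are computed, with illustrative labels rather than the paper's data:

```python
def completeness_purity(y_true, y_pred, positive="SLSN"):
    """Completeness = recovered positives / all true positives (recall).
    Purity = true positives / all predicted positives (precision)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    completeness = tp / (tp + fn) if tp + fn else 0.0
    purity = tp / (tp + fp) if tp + fp else 0.0
    return completeness, purity

# Toy sample mirroring the two-month evaluation: 22 of 24 SLSNe recovered,
# with 6 contaminating non-SLSNe also flagged (numbers are illustrative).
y_true = ["SLSN"] * 24 + ["other"] * 6
y_pred = ["SLSN"] * 22 + ["other"] * 2 + ["SLSN"] * 6
c, p = completeness_purity(y_true, y_pred)
print(round(c, 3), round(p, 3))  # 0.917 0.786
```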

[18/30] 317 Likes, 18 Comments, 1 Posts
2604.14761, astro-ph.IM, 16 Apr 2026

🆕NOMAI: A real-time photometric classifier for superluminous supernovae identification. A science module for the Fink broker

E. Russeil, R. Lunnan, J. Peloton, S. Schulze, P. J. Pessi, D. Perley, J. Sollerman, A. Gk...

SkillClaw: Let Skills Evolve Collectively with Agentic Evolver View 1 comment: The core idea and the dual-session design are both solid. Two problems remain: heterogeneous user needs get forced into a single converging skill with no fork or override mechanism; an...

(2/2) 105 Likes, 0 Comments, 09 Apr 2026, alphaXiv

Paper page - SkillClaw: Let Skills Evolve Collectively with Agentic Evolver Join the discussion on this paper page

(1/2) 278 Likes, 6 Comments, 10 Apr 2026, Hugging Face

Large language model (LLM) agents such as OpenClaw rely on reusable skills to perform complex tasks, yet these skills remain largely static after deployment. As a result, similar workflows, tool-usage patterns, and failure modes are repeatedly rediscovered across users, preventing the system from improving with experience. While interactions from different users provide complementary signals about when a skill works or fails, existing systems lack a mechanism for converting such heterogeneous experiences into reliable skill updates. To address these issues, we present SkillClaw, a framework for collective skill evolution in multi-user agent ecosystems that treats cross-user and over-time interactions as the primary signal for improving skills. SkillClaw continuously aggregates trajectories generated during use and processes them with an autonomous evolver, which identifies recurring behavioral patterns and translates them into updates to the skill set, refining existing skills or extending them with new capabilities. The resulting skills are maintained in a shared repository and synchronized across users, allowing improvements discovered in one context to propagate system-wide with no additional effort from users. By integrating multi-user experience into ongoing skill updates, SkillClaw enables cross-user knowledge transfer and cumulative capability improvement; experiments on WildClawBench show that, even with limited interaction and feedback, it significantly improves the performance of Qwen3-Max in real-world agent scenarios.
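The aggregate-and-evolve loop the abstract describes can be sketched in miniature. Everything below (the `SkillRepo` class, the trajectory fields, the `min_support` threshold) is a hypothetical illustration of the pattern, not SkillClaw's actual API:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class SkillRepo:
    """Hypothetical shared skill store: refinements discovered from one
    user's trajectories reach every user on the next sync."""
    skills: dict = field(default_factory=dict)

    def sync(self):
        return dict(self.skills)  # every user sees the same skill set

def evolve(repo, trajectories, min_support=2):
    """Toy 'evolver': a failure pattern recurring across users becomes
    a refinement note attached to the skill it belongs to."""
    failures = Counter(
        (t["skill"], t["failure"]) for t in trajectories if t.get("failure")
    )
    for (skill, failure), n in failures.items():
        if n >= min_support:
            repo.skills.setdefault(skill, []).append(f"avoid: {failure}")
    return repo

repo = SkillRepo({"send_email": []})
logs = [  # trajectories aggregated from two different users
    {"user": "a", "skill": "send_email", "failure": "missing subject"},
    {"user": "b", "skill": "send_email", "failure": "missing subject"},
]
evolve(repo, logs)
print(repo.sync())  # {'send_email': ['avoid: missing subject']}
```

A pattern seen by only one user stays below `min_support` and leaves the shared skills untouched, which is one simple way to keep heterogeneous single-user noise out of collective updates.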

[13/30] 383 Likes, 6 Comments, 2 Posts
2604.08377, cs.AI | cs.CL, 09 Apr 2026

🆕SkillClaw: Let Skills Evolve Collectively with Agentic Evolver

Ziyu Ma, Shidong Yang, Yuxiang Ji, Xucong Wang, Yong Wang, Yiming Hu, Tongwen Huang, Xiangxiang Chu

Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability View recent discussion. Abstract: A prevailing narrative in LLM post-training holds that supervised finetuning (SFT) memorizes while reinforcement learning (RL) generalizes. We revisit this claim for ...

(2/2) 40 Likes, 0 Comments, 08 Apr 2026, alphaXiv

Paper page - Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability Join the discussion on this paper page

(1/2) 316 Likes, 7 Comments, 10 Apr 2026, Hugging Face

A prevailing narrative in LLM post-training holds that supervised finetuning (SFT) memorizes while reinforcement learning (RL) generalizes. We revisit this claim for reasoning SFT with long chain-of-thought (CoT) supervision and find that cross-domain generalization is not absent but conditional, jointly shaped by optimization dynamics, training data, and base-model capability. Some reported failures are under-optimization artifacts: cross-domain performance first degrades before recovering and improving with extended training (a dip-and-recovery pattern), so short-training checkpoints can underestimate generalization. Data quality and structure both matter: low-quality solutions broadly hurt generalization, while verified long-CoT traces yield consistent cross-domain gains. Model capability is essential: stronger models internalize transferable procedural patterns (e.g., backtracking) even from a toy arithmetic game, while weaker ones imitate surface verbosity. This generalization is asymmetric, however: reasoning improves while safety degrades, reframing the question from whether reasoning SFT generalizes to under what conditions and at what cost.
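The dip-and-recovery pattern is easy to state precisely: a checkpoint curve whose minimum falls below the starting score but whose final score exceeds it. A small sketch with invented numbers (not the paper's measurements):

```python
def dip_and_recovery(scores, tol=0.0):
    """Return True if a cross-domain metric first degrades below its
    starting value, then recovers to exceed it -- the pattern under which
    early-checkpoint evaluation underestimates generalization."""
    start, lowest, final = scores[0], min(scores), scores[-1]
    return lowest < start - tol and final > start + tol

# Cross-domain accuracy at successive checkpoints (illustrative numbers):
curve = [0.40, 0.35, 0.33, 0.38, 0.46]
print(dip_and_recovery(curve))  # True: stopping at checkpoint 2 would miss the gain
```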

[17/30] 356 Likes, 7 Comments, 2 Posts
2604.06628, cs.AI, 08 Apr 2026

🆕Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability

Qihan Ren, Peng Wang, Ruikun Cai, Shuai Shao, Dadi Guo, Yuejin Xie, Yafu Li, Quanshi Zhang, Xia Hu, Jing Shao, ...

From the MachineLearning community on Reddit Explore this post and more from the MachineLearning community

(3/3) 24 Likes, 10 Comments, 14 Apr 2026, Reddit

ClawBench: Can AI Agents Complete Everyday Online Tasks? View recent discussion. Abstract: AI agents may be able to automate your inbox, but can they automate other routine aspects of your life? Everyday online tasks offer a realistic yet unsolved testbed f...

(2/3) 37 Likes, 0 Comments, 09 Apr 2026, alphaXiv

Paper page - ClawBench: Can AI Agents Complete Everyday Online Tasks? Join the discussion on this paper page

(1/3) 255 Likes, 5 Comments, 10 Apr 2026, Hugging Face

AI agents may be able to automate your inbox, but can they automate other routine aspects of your life? Everyday online tasks offer a realistic yet unsolved testbed for evaluating the next generation of AI agents. To this end, we introduce ClawBench, an evaluation framework of 153 simple tasks that people need to accomplish regularly in their lives and work, spanning 144 live platforms across 15 categories, from completing purchases and booking appointments to submitting job applications. These tasks demand capabilities beyond existing benchmarks, such as obtaining relevant information from user-provided documents, navigating multi-step workflows across diverse platforms, and performing write-heavy operations like filling in many detailed forms correctly. Unlike existing benchmarks that evaluate agents in offline sandboxes with static pages, ClawBench operates on production websites, preserving the full complexity, dynamic nature, and challenges of real-world web interaction. A lightweight interception layer captures and blocks only the final submission request, ensuring safe evaluation without real-world side effects. Our evaluations of 7 frontier models show that both proprietary and open-source models can complete only a small portion of these tasks; for example, Claude Sonnet 4.6 completes only 33.3%. Progress on ClawBench brings us closer to AI agents that can function as reliable general-purpose assistants.
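The interception layer can be pictured as a client wrapper that forwards read and navigation traffic but captures the one request that would cause a real-world side effect, so the blocked submission can be graded offline. The class, marker list, and URLs below are invented for illustration, not ClawBench's implementation:

```python
# Hypothetical sketch of a submission-interception layer.
SUBMIT_MARKERS = ("/checkout", "/apply", "/book")  # assumed matching rule

class InterceptingClient:
    def __init__(self):
        self.captured = None  # the blocked final submission, for grading

    def request(self, method, url, payload=None):
        if method == "POST" and any(m in url for m in SUBMIT_MARKERS):
            self.captured = {"url": url, "payload": payload}
            return {"status": "blocked"}   # never reaches the live site
        return {"status": "forwarded"}     # reads/navigation pass through

client = InterceptingClient()
client.request("GET", "https://example.com/jobs/123")
client.request("POST", "https://example.com/jobs/123/apply", {"name": "Ada"})
print(client.captured["url"])  # the submission was captured, not sent
```

Because only the terminal write is blocked, the agent still experiences the full dynamic behavior of the production site up to that point.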

[20/30] 316 Likes, 15 Comments, 3 Posts
2604.08523, cs.CL | cs.AI, 09 Apr 2026

🆕ClawBench: Can AI Agents Complete Everyday Online Tasks?

Yuxuan Zhang, Yubo Wang, Yipeng Zhu, Penghui Du, Junwen Miao, Xuan Lu, Wendong Xu, Yunzhuo Hao, Songcheng Cai, Xiaochen Wang, Huaisong Zhang, Xian Wu, Yi L...

From the deeplearning community on Reddit: We’re proud to open-source LIDARLearn 🎉 Explore this post and more from the deeplearning community

(3/3) 47 Likes, 2 Comments, 17 Apr 2026, Reddit

From the remotesensing community on Reddit: We’re proud to open-source LIDARLearn 🎉 Explore this post and more from the remotesensing community

(2/3) 48 Likes, 2 Comments, 17 Apr 2026, Reddit

From the LiDAR community on Reddit: We’re proud to open-source LIDARLearn 🎉 Explore this post and more from the LiDAR community

(1/3) 112 Likes, 0 Comments, 17 Apr 2026, Reddit

Three-dimensional (3D) point cloud analysis has become central to applications ranging from autonomous driving and robotics to forestry and ecological monitoring. Although numerous deep learning methods have been proposed for point cloud understanding, including supervised backbones, self-supervised pre-training (SSL), and parameter-efficient fine-tuning (PEFT), their implementations are scattered across incompatible codebases with differing data pipelines, evaluation protocols, and configuration formats, making fair comparisons difficult. We introduce LIDARLearn, a unified, extensible PyTorch library that integrates over 55 model configurations covering 29 supervised architectures, seven SSL pre-training methods, and five PEFT strategies, all within a single registry-based framework supporting classification, semantic segmentation, part segmentation, and few-shot learning. LIDARLearn provides standardised training runners, cross-validation with stratified K-fold splitting, automated LaTeX/CSV table generation, built-in Friedman/Nemenyi statistical testing with critical-difference diagrams for rigorous multi-model comparison, and a comprehensive test suite with 2,200+ automated tests validating every configuration end-to-end. The code is available at https://github.com/said-ohamouddou/LIDARLearn under the MIT licence.
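A registry-based framework of the kind described above is commonly built from a decorator that maps configuration names to constructors, so one entry point can build any of the registered models. The sketch below illustrates that pattern with invented model names; it is not LIDARLearn's actual registry or API:

```python
# Minimal decorator-based model registry (illustrative pattern only).
MODEL_REGISTRY = {}

def register(name):
    def wrap(cls):
        MODEL_REGISTRY[name] = cls  # config name -> constructor
        return cls
    return wrap

@register("pointnet_cls")
class PointNetClassifier:
    def __init__(self, num_classes=40):
        self.num_classes = num_classes

@register("dgcnn_seg")
class DGCNNSegmenter:
    def __init__(self, num_parts=50):
        self.num_parts = num_parts

def build(name, **kwargs):
    """Single entry point: a config file only needs to name the model."""
    return MODEL_REGISTRY[name](**kwargs)

model = build("pointnet_cls", num_classes=10)
print(type(model).__name__, model.num_classes)  # PointNetClassifier 10
```

The appeal of the pattern is that adding a new architecture is one decorated class, with no change to the shared training runners or configuration format.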

[21/30] 316 Likes, 8 Comments, 6 Posts
2604.10780, cs.CV, 12 Apr 2026

🆕LIDARLearn: A Unified Deep Learning Library for 3D Point Cloud Classification, Segmentation, and Self-Supervised Representation Learning

Said Ohamouddou, Hanaa El Afia, Abdellatif El Afia, Raddouane Chiheb

From the devsarg community on Reddit Explore this post and more from the devsarg community

(3/3) 44 Likes, 43 Comments, 13 Apr 2026, Reddit

The AI Layoff Trap https://arxiv.org/abs/2603.20617

(2/3) 61 Likes, 103 Comments, 13 Apr 2026, Hacker News

From the stupidpol community on Reddit Explore this post and more from the stupidpol community

(1/3) 151 Likes, 74 Comments, 17 Apr 2026, Reddit

If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on. We show that knowing this is not enough for firms to stop it. In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal. The resulting loss harms both workers and firm owners. More competition and "better" AI amplify the excess; wage adjustments and free entry cannot eliminate it. Neither can capital income taxes, worker equity participation, universal basic income, upskilling, or Coasian bargaining. Only a Pigouvian automation tax can. The results suggest that policy should address not only the aftermath of AI labor displacement but also the competitive incentives that drive it.
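The arms-race logic can be illustrated with a deliberately toy two-firm payoff game (not the paper's task-based model, and with invented payoff numbers): automating is each firm's dominant strategy, yet when both automate, the demand externality leaves both worse off than if neither had.

```python
# Toy 2-firm illustration of a demand-externality trap.
def profit(i_automate, rival_automates):
    # Each automating firm displaces workers and shrinks aggregate demand.
    demand = 10 - 3 * (i_automate + rival_automates)
    # Automation halves this firm's labor cost.
    cost = 2 if i_automate else 4
    # A lone automator also steals market share from its rival.
    share = 0.5 + (0.2 if i_automate and not rival_automates else 0.0) \
                - (0.2 if rival_automates and not i_automate else 0.0)
    return share * demand - cost

# Automating is a dominant strategy: better regardless of the rival's choice.
for rival in (False, True):
    assert profit(True, rival) > profit(False, rival)

both = profit(True, True)        # equilibrium: both firms automate
neither = profit(False, False)   # collectively optimal outcome
print(both < neither)  # True: rational firms end up worse off
```

A Pigouvian automation tax in this toy setting would raise the automator's `cost` until automating is no longer dominant, which is the flavor of intervention the abstract singles out.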

[23/30] 305 Likes, 253 Comments, 5 Posts
2603.20617, econ.TH, 21 Mar 2026

🆕The AI Layoff Trap

Brett Hemenway Falk, Gerry Tsoukalas

HY-Embodied-0.5: Embodied Foundation Models for Real-World Agents We introduce HY-Embodied-0.5, a family of foundation models specifically designed for real-world embodied agents. To bridge the gap between general Vision-Language Models (VLMs) and the demands of emb...
