
Posts by Shibhansh Dohare

The timescale for observing plasticity loss depends on the hyperparameters. Reducing the learning rate or the replay ratio reduces plasticity loss. But I think a good rule of thumb is: the more sample-efficient the hyperparameters are (initially), the faster the network loses plasticity.

1 year ago
Loss of plasticity in deep continual learning - Nature The pervasive problem of artificial neural networks losing plasticity in continual-learning settings is demonstrated and a simple solution called the continual backpropagation algorithm is descri...

But injecting noise into the weights of quiescent neurons can:

www.nature.com/articles/s41...

So a bit of random homeostatic plasticity should do the trick.
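The "random homeostatic plasticity" idea above can be sketched in a few lines: detect units whose activity has collapsed, and perturb only their incoming weights with a little noise. This is a minimal NumPy sketch under assumed names (`reinvigorate_quiescent`, the activity threshold, and the noise scale are all illustrative choices, not the continual backpropagation algorithm from the paper):

```python
import numpy as np

def reinvigorate_quiescent(weights, activations,
                           activity_threshold=1e-3,
                           noise_scale=0.01, rng=None):
    """Add Gaussian noise to the incoming weights of quiescent units.

    weights:     (in_dim, n_units) incoming weight matrix of a layer
    activations: (batch, n_units) recent activations of that layer

    A unit counts as quiescent when its mean absolute activation over
    the batch falls below `activity_threshold` (hypothetical criterion;
    real utility measures, as in continual backprop, are more refined).
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_act = np.abs(activations).mean(axis=0)   # per-unit mean activity
    quiescent = mean_act < activity_threshold     # mask of dormant units
    new_weights = weights.copy()
    noise = rng.normal(0.0, noise_scale, size=weights.shape)
    new_weights[:, quiescent] += noise[:, quiescent]  # perturb dormant units only
    return new_weights, quiescent
```

Active units are left untouched, so whatever the network has already learned with them is preserved; only the dead capacity gets a random kick.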

1 year ago

This year's (first-ever) RL conference was a breath of fresh air! And now that it's established, the next edition is likely to be even better: Consider sending your best and most original RL work there, and then join us in Edmonton next summer!

1 year ago
Streaming Deep Reinforcement Learning Finally Works Natural intelligence processes experience as a continuous stream, sensing, acting, and learning moment-by-moment in real time. Streaming learning, the modus operandi of classic reinforcement learning ...

Streaming Deep Reinforcement Learning Finally Works, by
M. Elsayed, G. Vasan, A. R. Mahmood, is one of those papers I wish I had written 😅

This paper seems to allow us to do RL with NNs as it should have always been done. Everyone should read it!

arxiv.org/abs/2410.14606
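The streaming regime the paper studies means learning from each transition exactly once, with no replay buffer or mini-batches. As a bare skeleton, here is a one-sample linear semi-gradient TD(0) learner; the paper's actual algorithms (stream AC/Q, with eligibility traces and adaptive step sizes) are considerably more elaborate, so treat this as an illustrative sketch only:

```python
import numpy as np

class StreamingTD:
    """Linear TD(0) that consumes one transition at a time and
    discards it -- the streaming regime, with no replay buffer."""

    def __init__(self, n_features, alpha=0.05, gamma=0.99):
        self.w = np.zeros(n_features)  # linear value-function weights
        self.alpha = alpha             # step size
        self.gamma = gamma             # discount factor

    def value(self, x):
        return float(self.w @ x)

    def update(self, x, r, x_next, done):
        # One-step TD target from this single transition.
        target = r + (0.0 if done else self.gamma * self.value(x_next))
        td_error = target - self.value(x)
        self.w += self.alpha * td_error * x  # one-sample semi-gradient step
        return td_error
```

On a toy two-state chain (state A steps to state B, which terminates with reward 1), repeated single-transition updates drive the values toward 1 for B and gamma for A, with no stored experience at all.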

1 year ago