
2/2 - They responded essentially saying that the past doesn't matter and to "Please contact us about reconsidering the restriction after you have had three successful arXiv submissions announced which adhere to the restriction."

Any suggestions on where to post my NeurIPS 2026 submissions?

6 days ago 0 0 1 0

1/2 - So arXiv didn't love my submission: in rejecting it, they also handed me a ban - they won't accept anything of mine that isn't already published. I sent a note respecting their decision, asking for the ban to be reconsidered and noting that I'm an author of 20+ existing arXiv submissions and around 100 published papers.

6 days ago 0 0 1 0

This is unfortunate. I have a submission that is admittedly a bit unusual but isn’t a position paper and has been “On hold” for a while. Is there any alternative that provides archival storage and versioning/time stamps/logs that you know of?

1 month ago 0 1 1 0

Thanks Johan - makes sense. Congratulations on this work.

10 months ago 1 0 0 0

This is super cool Johan. What is your feeling on the reason that this works so well?

10 months ago 1 0 2 0
Publication Trends in Artificial Intelligence Conferences: The Rise of Super Prolific Authors

arxiv.org/html/2412.07...

According to this, at CVPR 2023, 1% of authors account for 50% of the papers… is it really skewed this much?

11 months ago 0 0 0 0
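For anyone curious how a statistic like that can be computed, here's a minimal sketch. The paper IDs, author names, and the coverage definition (a paper counts toward the top group if at least one of its authors is in the most prolific 1%) are all my own illustrative assumptions, not necessarily the paper's exact methodology:

```python
from collections import Counter

# Hypothetical (paper_id -> author list) data; all names are made up.
papers = {
    "p1": ["A", "B"], "p2": ["A", "C"], "p3": ["A"],
    "p4": ["D", "E"], "p5": ["A", "B"], "p6": ["F"],
}

def top_share(papers, frac=0.01):
    """Fraction of papers with at least one author in the most
    prolific `frac` of authors (by paper count)."""
    counts = Counter(a for authors in papers.values() for a in authors)
    k = max(1, round(frac * len(counts)))        # size of the top group
    top = {a for a, _ in counts.most_common(k)}  # most prolific authors
    covered = sum(any(a in top for a in authors)
                  for authors in papers.values())
    return covered / len(papers)

print(top_share(papers))  # share of papers touched by the top 1%
```

On this toy data the single most prolific author already covers 4 of 6 papers, which shows how a heavy-tailed author distribution can make "1% → 50%" plausible.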

Results get way better! What's going on? I look at the code that was added, and the LLM has added class labels + random noise to the vectors fed to the classifier. Adding the class labels would be a mistake I could accept… but adding some noise to them - that's a bit suspicious! 😬

11 months ago 1 0 0 0
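For what it's worth, the failure mode described above is easy to reproduce. Here's a hedged toy sketch (the data, sizes, noise scale, and the tiny gradient-descent logistic regression are all illustrative assumptions, not the actual experiment): once the class label is appended to the feature vectors, held-out accuracy jumps, because the "held-out" features contain the answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with a weak, honest signal.
n, d = 1000, 5
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + 0.3 * y[:, None]

def with_leak(X, y):
    """The 'helpful' edit: append the class label (plus a little
    random noise) as an extra feature column."""
    leaked = y + rng.normal(scale=0.05, size=len(y))
    return np.hstack([X, leaked[:, None]])

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(int) == y).mean()

split = 700
w, b = fit_logreg(X[:split], y[:split])
acc_clean = accuracy(w, b, X[split:], y[split:])

Xl = with_leak(X, y)  # leakage: test labels now live inside test features
w, b = fit_logreg(Xl[:split], y[:split])
acc_leaky = accuracy(w, b, Xl[split:], y[split:])
print(f"clean: {acc_clean:.2f}  leaky: {acc_leaky:.2f}")
```

The classifier simply learns to read the leaked column, so the "improvement" evaporates on any data where the label isn't smuggled in.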

Interesting experience. Done with experiments, I ask an LLM to look for any data leakage (and am surprised by the results). The LLM suggests adding some checks and balances, so I defer to its grand intelligence and allow some code to be added. 1/2

11 months ago 0 0 1 0

Ok - I guess I totally missed the point. 😂

At least I learned something cool.

11 months ago 1 0 0 0

🤔… Kersten?

11 months ago 1 0 1 0

Peyman: I must say that your historical perspectives and observations are some of the most interesting and insightful things online. It strikes me that important context is lost to time, but retains a role in supporting new discoveries. If you had a book full of these, I'd be first in line to buy it.

1 year ago 3 0 0 0

Successively melted and refrozen snow against a building with corrugated metal exterior?

1 year ago 1 0 0 0

Manus, make me some slides about generative AI in the style of @csprofkgd.bsky.social

1 year ago 1 0 0 0

In case you’re wondering where many of the globally important AI conferences in 2025 are being held, here’s a quick reference:

🇺🇸 NeurIPS ‘25
🇺🇸 CVPR ‘25
🇺🇸 ICCV ‘25
🇺🇸 ICRA ‘25
🇺🇸 AAAI ‘25
🇺🇸 NAACL ‘25
🇺🇸 WACV ‘25
🇸🇬 ICLR ‘25
🇨🇦 IJCAI ‘25
🇦🇹 ACL ‘25
🇨🇳 EMNLP ‘25
🇹🇭 AISTATS ‘25

1 year ago 3 0 0 0
AI Conference Deadlines: Countdowns to top CV/NLP/ML/Robotics/AI conference deadlines

Is aideadlin.es no longer updated? Does someone have an alternative I can bookmark?

1 year ago 0 0 1 0
Post image

Rebuttal:

1 year ago 7 1 1 0

I agree that the system is *very* noisy now. But it’s always felt like a bit of a lottery. The paper was rejected with a 4th reviewer giving it a 4. It’s published as a journal paper instead. I sympathize with everyone holding a lottery ticket… but keep buying your tickets! It’s worth it!

1 year ago 1 0 0 0

10 and 1 had the qualifier that if the paper were (accepted/rejected) the reviewer would consider not reviewing again. The 1 began with “This entire manuscript consists of pretentious rhetoric”. Reviews are certainly noisy now… but it’s always been a bit of a lottery. (2/3)

1 year ago 1 0 1 0

This post is in response to some of the discussion I’ve seen around recent ICLR reviews. I wish I had an archive of my emails from previous institutions to have all the details. About 15 years ago I had NeurIPS reviews with scores 10, 5, and 1. Regarding the 10 and 1, assigning these scores… (1/2)

1 year ago 1 0 1 0

It feels like computer vision is evolving in a direction where more and more of the most central advances rely on advanced mathematics. Is this my imagination, or do others feel this as well?

1 year ago 0 0 0 0
Post image

I had to give it the full exaggerated Canadian stereotype flavour. Now we’ve got a moose, a beaver, maple leaf adornments (and possibly fleur-de-lis?). Also 3 of the pieces look to be bottles of maple syrup including the beaver. 😂

1 year ago 1 1 1 0

Hello Bluesky!

1 year ago 161 8 21 3

Congrats Kosta! 🎓

1 year ago 1 0 1 0
Bcounter: Almost-real-time Bluesky user count

Bluesky user counter web site:
(In case you’re interested in growth rate)

bcounter.nat.vg

1 year ago 0 0 0 0

Starter packs help a lot. Would love to see the limit on the size of starter packs raised. Assuming there is mostly a direct mapping between Bluesky names and those on the other place, it would be cool if there were a way to rebuild one’s own “followed” list through a personalized starter pack.

1 year ago 0 0 0 0

Awesome Kosta - thanks! Based on how your list 2 is growing, maybe you’ll need a list 3 soon. 😂

1 year ago 1 0 0 0

I thought this might be coming. Given your standing in the community, I think it might be worth an AWESOME list 2. And if anyone reading this hasn’t followed Kosta yet - do it! You won’t regret it!

1 year ago 2 0 1 0

In the case of transformers (and especially ViT), it seems like interesting possibilities abound in getting creative with different choices for Q, K and V. Flamingo is one nice example of this. Any suggestions on innovative works that do something outside typical “vanilla” configurations?

1 year ago 0 0 0 0
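To make the "non-vanilla Q/K/V" idea concrete, here's a hedged NumPy sketch. The dimensions, projections, and token streams are made up, and this is only a loose caricature of Flamingo-style cross-attention, not its actual architecture: the queries come from one stream (say, text) while the keys and values come from another (say, visual features).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model width (illustrative)

def attention(Q, K, V):
    """Scaled dot-product attention with a numerically stable softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V

# "Vanilla" self-attention: Q, K, V all projected from the same tokens.
x = rng.normal(size=(5, d))          # e.g. 5 text tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
self_out = attention(x @ Wq, x @ Wk, x @ Wv)

# Cross-attention variant: queries from text, keys/values from a
# second stream standing in for visual features.
v = rng.normal(size=(12, d))         # e.g. 12 "visual" tokens
cross_out = attention(x @ Wq, v @ Wk, v @ Wv)
print(self_out.shape, cross_out.shape)  # one output row per query token
```

The output length always follows the queries, which is what makes mixing sources for K and V so flexible: you can let a fixed set of text queries pool information from an arbitrary number of tokens in another modality.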

If you build it they will come (given the right reason). Then the money needs to flow. Good test though - would love to see the same in 3 months.

1 year ago 0 0 0 0

Thanks so much Kosta - you have no idea how much I was hoping for a list like this. It was the missing seed in starting to regrow the network.

1 year ago 1 0 1 0