
Posts by Amine El Ouassouli

What if even the bullet points are made by an LLM?

2 days ago

Needed an update

1 week ago

That it’s time-off o’clock.

9 months ago
X’s dominance ‘over’ as Bluesky becomes new hub for research Data indicates more scholars turning to alternative social media site to post about their work after Elon Musk’s Twitter takeover

'Bluesky has overtaken its flailing rival X in hosting posts related to new academic research, indicating the platform is fast becoming the go-to place for scholars to share their work.'

1 year ago

13 minutes of wisdom.

“No authorities in science”.

Amen to that.

1 year ago
ImageNet Moment for Reinforcement Learning? — YouTube video by Machine Learning Street Talk

@jfoerst.bsky.social take on how the community sees the ARC Challenge and how we evaluate models and use benchmarks nowadays is 👌.

#more_science_less_hype (please).

PS: Amazing discussion and good brain food, as usual with MLST.

1 year ago

There is nothing truer than this true statement.

1 year ago

📍

bsky.app/profile/aelo...

1 year ago
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

I missed this one when it came out, but I can tell it is one of the most useful pieces of research I’ve read in a while.

“GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models”

arxiv.org/html/2410.05...

1 year ago

Seeing “very successful” and “have a somewhat loose relationship with the truth” referring to the same people is what I can’t make sense of …

1 year ago

We really need better brain-power allocation. The current algorithm is kind of going crazy.

1 year ago

The ToDo list: a revolution.

1 year ago

Would we say that "the excessive promotion of mathematics is ideological"? That implies that "AI" specifically, in high doses, is an inherently eugenicist science. One can very well make a "progressive" interpretation or use of it, even if that is not in the spirit of the times.

1 year ago

In my opinion, this kind of analysis does not help at all in forming an opinion. “Computation” (because that is what it is, in the end) is just computation. Making it something inherently ideological is a biased over-interpretation. “AI” itself carries nothing at all.

1 year ago

😭

1 year ago

That’s a very good one 👌🏽

1 year ago

Best: the most useful research you can do in the current context.
Worst: it seems that it is not the main focus for now, plus maybe the pushback.

1 year ago

May the force be with you!

1 year ago

Is it an outlier, though?

(and one way of coping, for me, is listening to MLST to hear more nuanced, or at least sounder, views and opinions, plus reading)

1 year ago

100%

1 year ago

The more I read and listen to current debates in the field, the more I’m convinced that we have a model evaluation crisis.

1 year ago

I never understood people going to concerts only to spend their time there watching through the tiny screens of their phones.

1 year ago

Basically, IMO, given that all assertions have different degrees of consensus in the population, accurate sequential token prediction may or may not overlap with accurate “truth” in the “meaning” or conceptual realm.

1 year ago

The model may faithfully represent what’s in the dataset, even if the dataset is untruthful. An analogy I often use: you don’t decide whether evolution exists by popular vote. The vote tells you what the population thinks. Research work, even if it comes from a single individual, is more relevant to “truthfulness”.

1 year ago

Is it just me or are we in an Eliza effect pandemic?

1 year ago

What is clear to me is that the current hype is not helping the calm development of these methods, or collaboration with other fields.

PS: as someone mentioned, cross-domain collaboration is key when it comes to AI research. It is hard, but it is key.

1 year ago

If it is not satisfactory at an epistemological level, that is not always clear at the moment, and advances in the field will highlight it later. Is that a lack of integrity? I would say no (maybe I’m mistaken).

1 year ago

Then, there is epistemology. What people call AI nowadays is inductive reasoning at a huge scale. It’s new, not mature (even for AI researchers), and it’s a work in progress, but really promising. If people are using it, they are using approaches that are still being developed, and thus inherently experimental.

1 year ago

In my opinion, there are two layers to that question: an epistemological one and a deontological one. If someone is knowingly using “AI” in a wrong way, or for clearly bad reasons (e.g. to secure funding, or for the hype), then yes, we have an obvious integrity problem. That’s the deontological part.

1 year ago

Huh, what a year!

Happy new year, everyone! May it be a better one than 2024 (it’s not that hard, though).

Take care of your loved ones.

1 year ago