
Posts by Sireesh Gururaja

Did *models* robustly solve classification? Absolutely not lol

But could you make a convincing argument that, given a year and the weight of the entire industry, any individual classification problem could be "solved"? Maybe!

4 days ago 5 1 0 0

This is a great thread, and it reminds me of interviewing someone at one of the big labs for a study who claimed that "classification was solved in 2022"

That's an insane quote, until you account for the money, time, and labor that get poured into any problem that's deemed worth it

4 days ago 5 1 1 0

Have you read Ted Chiang's *The Lifecycle of Software Objects*? The sense of bonding is real, and so much more literal now

1 week ago 1 0 1 0

They need to make it so you can mass add starter packs to a list / feed if you don't want them to fill up your main feed

1 week ago 1 1 1 0

Monokai was suddenly everywhere with Sublime Text 3 at the tail end of my time in college, and I credit it with really giving me the bug for customizing my tools!

It looked cool in a way that I'm not sure color themes *can* for me anymore, and I've had a taste for hot pink ever since

1 week ago 2 0 0 0

friendly reminder to run docker system prune
if you haven't recently

1 week ago 71 7 8 1

But it's literally a benchmark!!

1 week ago 0 0 0 0

Really excited about this work w/ my long-time collaborators at Boulder!

We address limitations in existing morphosyntactic annotation systems for digitally under-resourced languages and show how *jointly* predicting morphological segmentation improves glossing performance.

1 week ago 6 3 0 0

Congratulations!! 🎉

1 week ago 1 0 1 0

if you invented public libraries today, every opinion page in the country would be arguing for means tested subsidized Amazon Prime memberships

2 weeks ago 7438 1896 59 17

It's time to seize the means of prediction

2 weeks ago 40 13 1 0
Stratum — Your Zotero library in Obsidian with zero config Structured literature notes from Zotero in Obsidian. No configuration, no templates. First synced note in less than two minutes.

I should not learn the lesson that if I wait long enough, someone else will do it.

however.

2 weeks ago 3 0 0 0

I think this is exactly right: there's a real use here, and it'd be a lot easier to sell if you led with that use rather than with the fact that it's AI. Even if the AI provides a way to do other things down the line!!

2 weeks ago 3 1 0 0
Hypothesis Only Baselines in Natural Language Inference Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme. Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. 2018.

Also true for other tasks like entailment! A really subtle way to ruin a benchmark :(

2 weeks ago 9 0 0 0
Post image

You have a moral imperative to refuse to work with these people or develop models for these purposes.

2 weeks ago 9 3 1 2

*CL folks, a recent history question: was the move to OpenReview as the review platform for *CL conferences related to the move to ARR? Or did they just happen simultaneously?

Asking for a discussion with @shaily99.bsky.social

3 weeks ago 1 0 0 0

I'm really not a fan of the way some journals in the physical sciences state results upfront and bury methods much later in the paper, or even in the supplementary materials. Maybe it's my ML bias, but I don't trust that your methods are prima facie reasonable!! Show me what you did before the results!

3 weeks ago 2 1 0 0

Appointment reading

3 weeks ago 9 4 0 1

I think my AI beliefs are quickly becoming ‘this has enormous positive potential if implemented responsibly. it will not be implemented responsibly and like maybe five companies are even trying.’

3 weeks ago 502 62 27 6

Right!! And such complementary methods, too 🥰

1 month ago 3 0 0 1

This is also a much better statement of the alignment connection thru Bahdanau et al. (2014)

1 month ago 2 0 0 0

Sorry for the barrage, this has just been a side quest for me for a while, so many thoughts

1 month ago 1 0 0 0
Emerging trends: A tribute to Charles Wayne | Natural Language Engineering, Volume 24, Issue 1 | Cambridge Core

Misc links: tributes to Charles Wayne (DARPA program manager, seen as responsible for benchmarking): www.cambridge.org/core/journal... and Fred Jelinek (so much non-rule-based NLP): doi.org/10.1162/coli...

1 month ago 1 0 1 0
GitHub - kwchurch/Benchmarking_past_present_future: Workshop Home Page for Benchmarking: Past, Present and Future

The 2021 workshop on benchmarking also had some real gems, including a story from John Makhoul about the wholesale shift from rule-based ASR to HMMs following the institution of benchmark evaluations: github.com/kwchurch/Ben...

1 month ago 3 0 1 0
From Protoscience to Epistemic Monoculture: How Benchmarking Set the Stage for the Deep Learning Revolution Over the past decade, AI research has focused heavily on building ever-larger deep learning models. This approach has simultaneously unlocked incredible achievements in science and technology, and hin...

Koch and Peterson have a great critical take on benchmarks and their relationship to the rise of deep learning (both in NLP and otherwise): arxiv.org/abs/2404.06647

1 month ago 3 0 1 0
Whither Speech Recognition? Speech recognition has glamour. Funds have been available. Results have been less glamorous. "When we listen to a person speaking much of what we think we hear…"

I'm biased, but the history of benchmarking is also really important. @markriedl.bsky.social mentioned the ALPAC report, but also a strong rec for Whither Speech Recognition. Same author, but this is a scathing piece that is interesting to compare to the field today (and an early LM shoutout!)

1 month ago 2 0 1 0
To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing Sireesh Gururaja, Amanda Bertsch, Clara Na, David Widder, Emma Strubell. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.

Hope a self-plug is ok! We wrote about the LLM shift in NLP and what feels different about it here: aclanthology.org/2023.emnlp-m...

1 month ago 8 0 2 0

That the transformer was developed for MT is also a cool connection to the idea of alignment introduced (iirc) by the IBM models

1 month ago 3 0 1 0