
Posts by Andrew Drozdov

Reranking in Mosaic AI Vector Search for Faster, Smarter Retrieval in RAG Agents Boost RAG agent quality with reranking—deliver more relevant answers in less time with a single parameter in Mosaic AI Vector Search.

We built a thing! The Databricks Reranker is now in Public Preview. It's as easy as changing the arguments to your vector search call, and doesn't require any additional setup.

Read more: www.databricks.com/blog/reranki...

7 months ago 4 1 0 0
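As a language-agnostic illustration of what reranking adds to a retrieval pipeline, here is a toy two-stage sketch. Both scorers are stand-ins invented for this example (token overlap for first-stage similarity, a phrase-match bonus for the reranker) — this is not the Mosaic AI Vector Search or Databricks Reranker API.

```python
# Toy two-stage retrieval: a cheap first-stage retriever narrows the
# candidate pool, then a costlier scorer reorders ("reranks") the top-k.

def first_stage_score(query: str, doc: str) -> float:
    """Cheap proxy for vector similarity: fraction of shared tokens."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank_score(query: str, doc: str) -> float:
    """Costlier proxy for a cross-encoder: rewards exact phrase hits."""
    score = first_stage_score(query, doc)
    if query.lower() in doc.lower():
        score += 1.0  # strong signal: the whole query appears verbatim
    return score

def retrieve(query, docs, k=3, rerank=False):
    candidates = sorted(docs, key=lambda d: first_stage_score(query, d),
                        reverse=True)[:k]
    if rerank:
        candidates.sort(key=lambda d: rerank_score(query, d), reverse=True)
    return candidates

docs = [
    "reranking improves retrieval quality",
    "vector search retrieves by embedding similarity",
    "unrelated note about cooking pasta",
]
print(retrieve("vector search", docs, k=2, rerank=True)[0])
```

The point the post makes carries over: the reranker slots in as one extra flag on the existing retrieval call, with no change to how candidates are stored or indexed.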
Post image

The transformer was invented at Google. RLHF was not invented in industry labs, but came to prominence at OpenAI and DeepMind. I took 5 of the most influential papers (black dots) and visualized their references. Blue dots are papers that acknowledge federal funding (DARPA, NSF).

1 year ago 109 24 2 0
LongEval 2025 Conference Template

LongEval is turning three this year!

This is a Call for Participation to our CLEF 2025 Lab - try out how your IR system does in the long term.

Check the details on our page:
clef-longeval.github.io

1 year ago 8 3 0 0

The PhD is pretraining. Interview prep is alignment. Take this to heart. :)

1 year ago 2 0 0 0
Leaderboard showing performance of language models on claim verification task over book-length input. o1-preview is the best model with 67.36% accuracy followed by Gemini 2.5 Pro with 64.17% accuracy.

We have updated #nocha, a leaderboard for reasoning over long-context narratives 📖, with some new models including #Gemini 2.5 Pro which shows massive improvements over the previous version! Congrats to #Gemini team 🪄 🧙 Check 🔗 novelchallenge.github.io for details :)

1 year ago 11 4 0 0
ARR Dashboard

I think ARR used to do this? Seems like it’s missing in the recent cycle(s).

stats.aclrollingreview.org/iterations/2...

1 year ago 3 0 0 0

A corollary here is that a relevant context might not improve the probability of the right answer.

1 year ago 0 0 0 0

Perhaps the most misunderstood aspect of retrieval: For a context to be relevant, it is not enough for it to improve the probability of the right answer.

1 year ago 1 0 1 0
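A toy illustration of this distinction, using a bag-of-words "model" made up entirely for the example: a context that merely mentions the answer token inflates the answer's probability, without being relevant evidence for it.

```python
# A context can raise P(right answer) without being relevant.
# The "model" here scores candidate answers by how often their token
# appears in question + context, then softmaxes the scores.

import math

def answer_probs(question, context, candidates):
    tokens = (question + " " + context).lower().replace("?", "").split()
    scores = [tokens.count(c.lower()) for c in candidates]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return {c: e / z for c, e in zip(candidates, exps)}

q = "What is the capital of France"
cands = ["Paris", "Berlin"]

no_ctx = answer_probs(q, "", cands)
# Irrelevant context: mentions "Paris" twice but says nothing about capitals.
irrelevant = answer_probs(q, "Paris Hilton attended a gala in Paris", cands)

print(no_ctx["Paris"], irrelevant["Paris"])
```

The irrelevant context boosts P("Paris") purely by token frequency — exactly the failure mode the post warns about when using answer probability as a proxy for relevance.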

MLflow is on BlueSky! Follow @mlflow.org to keep up to date on new releases, blogs and tutorials, events, and more.

1 year ago 4 1 0 0

ris.utwente.nl/ws/portalfil...

1 year ago 0 0 0 0

---Born To Add, Sesame Street
---(sung to the tune of Bruce Springsteen’s Born to Run)

1 year ago 0 0 1 0

One, and two, and three police persons spring out of the shadows
Down the corner comes one more
And we scream into that city night: “three plus one makes four!”
Well, they seem to think we’re disturbing the peace
But we won’t let them make us sad
’Cause kids like you and me baby, we were born to add

1 year ago 0 0 1 0

"How Claude Code is using a 50-Year-Old trick to revolutionize programming"

1 year ago 2 0 0 0

Somehow my most controversial take of 2025 is that agents relying on grep are a form of RAG.

1 year ago 2 0 0 1
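To make the framing concrete, here is a self-contained sketch (the repo contents and helper names are invented for illustration): grep is the retrieval step, and stuffing the hits into the prompt is the augmentation.

```python
# Sketch of the "grep is RAG" framing: the agent's retrieval step is a
# plain-text search over the repo, and the matching lines are what get
# placed into the model's context. File contents are inlined so the
# example is self-contained; a real agent would walk the filesystem.

import re

REPO = {
    "auth.py": "def login(user):\n    # TODO: rate-limit login attempts\n    return check_password(user)",
    "db.py": "def connect(url):\n    return Pool(url)",
}

def grep(pattern: str, repo: dict) -> list:
    """Retrieval: return 'path:lineno: line' hits, like `grep -rn`."""
    hits = []
    for path, text in repo.items():
        for lineno, line in enumerate(text.splitlines(), start=1):
            if re.search(pattern, line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

def build_prompt(question: str, pattern: str) -> str:
    """Augmentation: retrieved lines become the model's context."""
    context = "\n".join(grep(pattern, REPO))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where do we handle logins?", r"login"))
```

Swap the dict for a filesystem walk and the f-string for a model call, and the retrieve-then-generate loop is the same shape as any embedding-based RAG pipeline — only the retriever differs.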
Preview
Promptagator: Few-shot Dense Retrieval From 8 Examples Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the impli...

Embedding finetuning is not a new idea, but it's still overlooked IMO.

The Promptagator work is one of the more impactful papers showing that finetuning with synthetic data is effective.

arxiv.org/abs/2209.11755

1 year ago 2 0 0 0
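The Promptagator-style recipe in miniature, as a sketch: prompt an LLM with a few examples to generate a synthetic query for each document, then use the (synthetic query, document) pairs as positives for contrastive finetuning. The `generate_query` stub below stands in for a real few-shot-prompted LLM — nothing here is the actual Promptagator implementation.

```python
# Synthetic-query generation for embedding finetuning, in miniature.

def generate_query(document: str) -> str:
    """Stub LLM: in practice, a few-shot prompted model writes a query
    a user might plausibly issue to find this document."""
    keywords = [w for w in document.lower().split() if len(w) > 4]
    return "what is " + " ".join(keywords[:2])

def build_training_pairs(corpus: list) -> list:
    """Each pair is (synthetic query, positive document); in-batch
    documents for other queries serve as negatives during training."""
    return [(generate_query(doc), doc) for doc in corpus]

corpus = [
    "reranking reorders retrieved candidates by relevance",
    "embeddings map text into a shared vector space",
]
pairs = build_training_pairs(corpus)
for query, doc in pairs:
    print(query, "->", doc)
```

The resulting pairs would then feed a standard contrastive objective (e.g. in-batch negatives) to finetune the embedding model on the target corpus.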
Preview
Data Brew by Databricks on LinkedIn: Join us on the latest Data Brew episode for a deep dive on Retrieval… Join us on the latest Data Brew episode for a deep dive on Retrieval, rerankers, and RAG tips and tricks with our very own Andrew Drozdov, Research Scientist…

Search is the key to building trustworthy AI and will only be more important as we build more ambitious applications. With that in mind, there's not nearly enough energy spent improving the quality of search systems.

Follow the link for the full episode:
www.linkedin.com/posts/data-b...

1 year ago 3 0 0 0
Post image

It was a real pleasure talking about effective IR approaches with Brooke and Denny on the Data Brew podcast.

Among other things, I'm excited about embedding finetuning and reranking as modular ways to improve RAG pipelines. Everyone should use these more!

1 year ago 8 0 1 0
Preview
Improving Retrieval and RAG with Embedding Model Finetuning Fine-tune embedding models on Databricks to enhance retrieval and RAG accuracy with synthetic data—no manual labeling required.

We're probably a little too obsessed with zero-shot retrieval. If you have documents (you do), then you can generate synthetic data and finetune your embedding model. Blog post led by @jacobianneuro.bsky.social shows how well this works in practice.

www.databricks.com/blog/improvi...

1 year ago 9 5 1 0

I do want to see aggregate stats about the model’s generations, and total reasoning tokens is perhaps the least informative one.

1 year ago 2 0 0 0
Video

"All you need to build a strong reasoning model is the right data mix."

The pipeline that creates the data mix:

1 year ago 13 1 1 0

After frequent road runs during a Finland visit I tend to feel the same

1 year ago 3 0 0 0
Post image

Using 100+ tokens to answer 2 + 3 =

1 year ago 18 0 1 0

It’s pretty obvious we’re in a local minimum for pretraining. Would expect more breakthroughs in the 5-10 year range. Granted, it’s still incredibly hard and expensive to do good research in this space, despite the number of labs working on it.

1 year ago 10 0 1 0

Word of the day (of course) is ‘scurryfunging’, from US dialect: the frantic attempt to tidy the house just before guests arrive.

1 year ago 3270 563 108 75

... didn't know this would be one of the hottest takes i've had ...

for more on my thoughts, see drive.google.com/file/d/1sk_t...

1 year ago 49 7 3 0
i sensed anxiety and frustration at NeurIPS’24 – Kyunghyun Cho

feeling a bit under the weather this week … thus an increased level of activity on social media and blog: kyunghyuncho.me/i-sensed-anx...

1 year ago 178 36 19 13
Preview
Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Despite being the workhorse of n...

Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference

Introduces ModernBERT, a bidirectional encoder advancing BERT-like models with 8K context length.

📝 arxiv.org/abs/2412.13663
👨🏽‍💻 github.com/AnswerDotAI/...

1 year ago 17 3 0 0
Preview
State Space Models are Strong Text Rerankers Transformers dominate NLP and IR; but their inference inefficiencies and challenges in extrapolating to longer contexts have sparked interest in alternative model architectures. Among these, state spa...

State Space Models are Strong Text Rerankers

Shows Mamba-based models achieve comparable reranking performance to transformers while being more memory efficient, with Mamba-2 outperforming Mamba-1.

📝 arxiv.org/abs/2412.14354

1 year ago 4 1 0 0

I’m being facetious, but the truth behind the joke is that OCR correction opens up the possibility (and futility) of language much like drafting poetry. For every interpreted pattern for optimizing OCR correction, exceptions arise. So, too, with patterns in poetry.

1 year ago 2 1 1 0

Wait can you say more

1 year ago 0 0 1 0