
Posts by Erkan Güneş

Navigating the Prompt Space: Improving LLM Classification of Social Science Texts Through Prompt Engineering

Recent developments in text classification using Large Language Models (LLMs) in the social sciences suggest that costs can be cut significantly, while performance can sometimes rival existing computa...

arxiv.org/abs/2603.25422

3 weeks ago
Post image

🚨 New preprint on text classification with LLMs.

3 weeks ago
Post image

Our strongest use case, which combined GPT-4 and Gemini 1.5 Pro, achieved a weighted F1 score of 0.82 on the 83% of the data on which the two models agreed.

1 year ago
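The agreement-based ensemble above can be sketched in a few lines: keep only the items where the two models produce the same label, then score that subset with a weighted F1. This is a minimal illustration, not the paper's actual pipeline; the `agreement_subset` and `weighted_f1` helpers and the toy labels are my own.

```python
from collections import Counter


def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged, weighted by true-class support."""
    labels = set(y_true)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[label] / total) * f1
    return score


def agreement_subset(preds_a, preds_b, y_true):
    """Keep only items where both models agree; return (preds, truth, coverage)."""
    kept_pred, kept_true = [], []
    for a, b, t in zip(preds_a, preds_b, y_true):
        if a == b:
            kept_pred.append(a)
            kept_true.append(t)
    return kept_pred, kept_true, len(kept_pred) / len(y_true)


# Hypothetical toy labels: two models classify four texts into CAP-style topics.
model_a = ["health", "economy", "economy", "defense"]
model_b = ["health", "economy", "law", "defense"]
truth = ["health", "economy", "economy", "defense"]

preds, true_sub, coverage = agreement_subset(model_a, model_b, truth)
score = weighted_f1(true_sub, preds)
```

The trade-off the post highlights falls out directly: coverage (here 3 of 4 items, analogous to the 83%) shrinks as you demand agreement, while the F1 on the retained subset tends to rise because disagreements often flag hard cases.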

Our results point to three things: complete reliance on instruction-tuned LLMs is insufficient; accuracy increases with the human effort exerted; and the most human-intensive use case achieves surprisingly high accuracy.

1 year ago
Post image

We propose three use-case scenarios and estimate overall weighted F1 scores ranging from 0.44 to 0.82, depending on the scenario and the LLMs employed. The three scenarios involve minimal, moderate, and major human interference, respectively.

1 year ago
Post image

We experimented on congressional bill titles and congressional hearing descriptions, testing six different models' performance on classifying titles and descriptions into the Comparative Agendas Project's 21 issue-topic categories.

1 year ago
Post image

Excited to announce my recently published article with Christoffer Florczak on LLMs' multiclass classification capabilities.
rdcu.be/d9oIw

1 year ago

Unexpected sequel to Why Nations Fail.

1 year ago