#poliskydata
Data – Demscore

We invite you to make good use of the updated data, available for download at the Demscore website: www.demscore.se/data/

@vdeminstitute.bsky.social @vdemamlat.bsky.social @qoginstitute.bsky.social

#PoliSkyData #PoliSciSky #PoliticalScience #SocialScience #PoliticalScienceData #PolDataSky

Decorative image of an ocean shore with the text Coming soon: Demscore version 6.

Coming very soon! DEMSCORE v6 offers new updates from UCDP, REPDEM, and VIEWS, and a new thematic dataset.
@ucdp.bsky.social @qoginstitute.bsky.social @vdeminstitute.bsky.social @viewsforecasting.org
@repdem-org.bsky.social
#polisky #PoliSkyData #PoliSciSky #SocialScience
#SocialScienceData
1(3)

Image of a graph showing the V-Dem Electoral Democracy Index for The Gambia and Zambia for the years 1990 to 2024.

#GotW Electoral democracy in #TheGambia and #Zambia
The Gambia and Zambia have both recently improved in the Electoral Democracy Index (EDI). This week’s graph shows their performance between 1990 and 2024.
Full text w references @ v-dem.net/weekly_graph...
🧵1(7)
#PoliSky #PolDataSky #PoliSkyData


🏆We are happy to announce that #DEMSCORE has received the 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗗𝗲𝗺𝗼𝗰𝗿𝗮𝗰𝘆 & 𝗦𝗼𝗰𝗶𝗮𝗹 𝗣𝗼𝗹𝗶𝗰𝘆 𝗗𝗮𝘁𝗮 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝟮𝟬𝟮𝟱 Award, in the framework of Acquisition International’s Non-Profit Organisation Award 2025🎉

#SocialScienceData #SocialScience #PolDataSky #PoliSkyData #PoliSciSky #PoliSky #AcademicSky
1(3)


#Demscore version 5.0 includes 161 datasets with 25,299 unique variables.
Among these are the updated #REPDEM Basic datasets, providing detailed information on #governments and #PoliticalParties in 33 parliamentary democracies from 1945 until December 2024.
#Polisky #poltheory #PoliSkyData
🧵1(2)


📢 Registration is now open for #DEMSCORE Conference 2025, June 9-10 in Gothenburg, Sweden.

Register here: www.demscore.se/.../dem.../d...

@vdeminstitute.bsky.social @ucdp.bsky.social @qoginstitute.bsky.social @medem.bsky.social

#polisky #poltheory
#PoliSkyData #SocialScienceData
🧵1(3)


🚨 #Data alert 📣 We just released #ParlLawSpeech – full texts of more than 40k bills, 28k laws, and 3 mio. parliamentary speeches from 7 countries (AT, CZ, DE, DK, ES, HR, HU) and the EU! If you study democracy with #TextAsData / #NLP methods, this is for you! A short 🧵 (1/3) #PoliSkyData #polisky


Could be of interest to #sociology #polisky #poliskydata #econsky #demography #geosky #methods #Stats #rstats #DataScience #StatsEd


#poliskydata

Recent scholarship suggests that large language models (LLMs) can perform many of the tasks of trained coders and crowdworkers in the social sciences. While we do not test the substitutability of LLM output for either coders or experts in this article, we agree that it is plausible that LLMs could perform well on the sort of tasks where we found crowdworkers were most substitutable for coders. However, even in this context information availability will be a critical issue: the bias of most LLMs toward materials in commonly spoken languages will limit their ability to gather historical data for many world regions, and their reliance on publicly available data limits their access to the more academic texts that may contain this information.

With regard to expert-coded data, we believe the substitutability of LLMs is more dubious. In most cases, expert-coded output does not have a single “correct” answer, but rather a distribution of plausible responses. A primary goal of expert coding is therefore to aggregate over the scores of multiple experts with different evaluations to generate both a point estimate for a latent concept and estimates of uncertainty over these values. While LLMs can synthesize data to provide point estimates of latent concepts, assessing the uncertainty and reliability associated with these estimates is not a straightforward endeavor. Given both the sensitivity of LLMs to prompt phrasing and their tendency to unpredictably hallucinate sources and data (Linegar et al. 2023), neither of these concerns is trivial. Finally, as with coder-coded data, the accuracy of LLM estimates will likely be correlated with data availability; even more than with coder-coded data, the plausible lack of diverse views and biases in LLM training data (Bender et al. 2021) could result in seriously flawed output. Indeed, LLM estimates may be systematically biased in cases where the most readily available data are wrong.
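The aggregation step the excerpt describes, combining multiple experts' scores into a point estimate plus an uncertainty estimate, can be sketched minimally as follows. This is only an illustration with invented scores: real projects such as V-Dem use Bayesian item-response (measurement) models rather than a simple mean with a bootstrap, and the variable and values below are hypothetical.

```python
import random
import statistics

# Hypothetical ratings (0-1 scale) of one latent concept for one
# country-year, from six experts. Values are invented for illustration.
expert_scores = [0.42, 0.55, 0.48, 0.60, 0.39, 0.51]

# Point estimate: the mean across experts.
point_estimate = statistics.mean(expert_scores)

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Nonparametric bootstrap over experts: resample with replacement,
    recompute the mean each time, and take empirical quantiles."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot)]
    return lo, hi

low, high = bootstrap_ci(expert_scores)
print(f"estimate={point_estimate:.3f}, 95% CI=({low:.3f}, {high:.3f})")
```

The point the excerpt makes is that the spread (`low`, `high`) is as much a product of expert coding as the point estimate itself, and it is this distributional information that a single LLM completion does not straightforwardly provide.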


Also, you may be wondering "What about LLMs?" So did our reviewers! Though we ended up cutting this from the final draft of the article, here is a brief outline of our thoughts about the use of LLMs to replace traditional coders of #poliskydata.


Here we add theoretical infrastructure to these arguments and expand them to incorporate crowdworkers, who are increasingly used to code #poliskydata.

During the past decade, analyses drawing on several democracy measures have shown a global trend of democratic retrenchment. While these democracy measures use radically different methodologies, most partially or fully rely on subjective judgments to produce estimates of the level of democracy within states. Such projects continuously grapple with balancing conceptual coverage with the potential for bias (Munck and Verkuilen 2002; Przeworski et al. 2000). Little and Meng (L&M) (2023) reintroduce this debate, arguing that “objective” measures of democracy show little evidence of recent global democratic backsliding. By extension, they posit that time-varying expert bias drives the appearance of democratic retrenchment in measures that incorporate expert judgments. In this article, we engage with (1) broader debates on democracy measurement and democratic backsliding, and (2) L&M’s specific data and conclusions.


In other recent work, colleagues (including @chknutsen.bsky.social, @acrowinghen.bsky.social, @medzihorsky.bsky.social, and @silindberg.bsky.social) and I have discussed the advantages and disadvantages of using experts vs trained coders to code #poliskydata. doi.org/10.1017/S104...


Some time ago I put together a review of the best datasets on global protests.

In case it is of interest to anyone in the world of the social sciences and such:
medium.com/@sientifiko/...

#PolDataSky #PoliSkyData #PolSciData 📊
