Representativeness is a big issue for Big Team Science (BTS).
our argument is backed by a new study, from Cindel White & @michael.muthukrishna.com: "more highly educated ppl are significantly more culturally similar to WEIRD countries, such as the Anglosphere and Western Europe." www.nature.com/articles/s41...
Posts by ManyLanguages
2️⃣ Scaling Projects Without Direct Access to Target Populations thread #ManyLanguagesChallengeOfTheMonth
1️⃣ Recruiting and Incentivising Adult Participants Across Contexts thread #ManyLanguagesChallengeOfTheMonth
- How can we incentivise participation fairly and uniformly across populations with different resources and constraints?
- Are there models of partnership, training, or resource‑sharing that could help level the playing field?
This raises important questions:
- How do we meaningfully reach colleagues and communities in the Global South?
- What strategies have worked for building collaborations where institutional access is limited?
But institutional and structural inequities across countries and regions often mean that participation ends up concentrated in the Global North.
2️⃣ Scaling Projects Without Direct Access to Target Populations: ManyLanguages, like other Big Team Science initiatives, aims to collect data from a wide range of populations to build findings that genuinely reflect human diversity.
- Is a 40‑minute study simply too long to ask volunteers to complete as a favour?
- If we split the study into two counterbalanced 20‑minute halves, would volunteers be more willing to participate without compensation?
So we’re asking:
- What recruitment strategies work in your region?
- Are there local norms or incentives that help sustain participation?
Our current study takes around 40 minutes, for which we have been paying participants approximately $10, and in some cases offering course credit. But we recognise that these options may not be feasible for all language teams globally.
1️⃣ Recruiting and Incentivising Adult Participants Across Contexts: The CLAPS team has been reflecting on how researchers around the world recruit and incentivise adult participants for longer studies.
This month, we’re highlighting two interconnected #ManyLanguagesChallengeOfTheMonth challenges that sit at the heart of scaling ManyLanguages projects across diverse linguistic and cultural contexts.
We’d love to hear your experiences:
- What scales have you used in your own projects?
- Did the choice affect data quality, participant behaviour, or analysis pipelines?
- Have you found clever ways to balance theoretical nuance with practical constraints?
Analysis: Likert-scale data really call for ordinal models, which can be computationally intensive. Theory: is grammatical acceptability genuinely continuous (can one sentence be “twice as grammatical” as another?), or is it underlyingly ordinal (are we implicitly “ranking” sentences)?
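The ordinal point above can be made concrete with a minimal sketch of a cumulative-logit (proportional-odds) model, the standard ordinal alternative to treating Likert ratings as continuous: instead of a single mean on the rating scale, the model places a latent acceptability score against K−1 estimated cutpoints. The function name and cutpoint values below are purely illustrative, not taken from the CLAPS analysis.

```python
import numpy as np

def cumulative_logit_probs(eta, cutpoints):
    """Category probabilities under a cumulative-logit (proportional-odds) model.

    eta: latent acceptability score for one sentence
    cutpoints: increasing thresholds separating the K ordered rating categories
    Returns an array of K = len(cutpoints) + 1 probabilities summing to 1.
    """
    c = np.asarray(cutpoints, dtype=float)
    # P(rating <= k) = logistic(cutpoint_k - eta)
    cum = 1.0 / (1.0 + np.exp(-(c - eta)))
    # Pad with P(rating <= 0) = 0 and P(rating <= K) = 1, then difference
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)

# Hypothetical 7-point scale: 6 cutpoints yield 7 ordered categories.
cutpoints = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]
probs = cumulative_logit_probs(eta=1.0, cutpoints=cutpoints)
print(np.round(probs, 3))
```

Fitting such a model means estimating all six cutpoints alongside the effects of interest (and, with random effects per participant and item, this is where the computational cost comes from), whereas a continuous analysis estimates only a mean and variance.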
A recurring question in large-scale syntax projects is whether to use a continuous scale or a 7‑point Likert scale, and what that choice means for both the statistical analysis and the underlying theory.
This month’s #ManyLanguagesChallengeOfTheMonth comes from the ML2 CLAPS team and concerns **grammatical acceptability judgments**. @ambridge.bsky.social
This is an especially important topic for ManyLanguages as we aim to include as many languages, communities, and cultural contexts as possible. Your experiences can help us build more resilient, inclusive, and globally workable research tools. (4/4)
If you’ve ever faced this kind of mismatch between lab conditions and real‑world deployment, we’d love to hear from you.
- What went wrong?
- What did you learn?
- Did you find any clever workarounds or design tweaks? (3/4)
Many of us have had the experience of setting up and testing an experiment in one place—everything smooth & perfect—only to watch it fall apart during fieldwork because the local infrastructure couldn’t support the same demands. (2/4)
January #ManyLanguagesChallengeOfTheMonth: We’re looking at how internet bandwidth differences across locations can become a hurdle for experiments. (1/4)
Instead of suffering in silence, let’s share them. We’d love to hear your stories, opinions, comments—or, in rare and glorious cases, actual solutions.
We’re trialling a new social activity: #ManyLanguagesChallengeOfTheMonth
Running large-scale projects means encountering puzzles, hiccups, and the occasional “why is this happening” moment.
ManyLanguages is on two social platforms:
✨ LinkedIn: www.linkedin.com/company/1107...
✨ Bluesky (this page): bsky.app/profile/many...
Follow us to stay in the loop, join discussions, and help us spread the word about Big Team Science.
The next one (though it's adults only) is a team-science project with @manylanguagesc.bsky.social looking at passives. We're keen to get as many diverse languages as possible so please get in touch if you'd like to join as a coordinator or a co-investigator for your language bsky.app/profile/many...
Dear experimental linguists, for my advanced stats class, I am looking for articles that
- are experimental linguistics-ish
- report on linear mixed models that did not converge with the intended random-effects structure (using lme4)
- come with open data and scripts
Please @ me, I would appreciate it!
One of the major extrinsic hurdles to increasing outreach for @manylanguagesc.bsky.social has been structural and institutional requirements that potential collaborators be 'first authors' and/or 'last authors' even to be considered candidates for positions. <long rant coming>
Image description: Graphic announcement with a blue background reading “Big News from RoSE.” The RoSE and FORRT logos appear side by side with an “×” between them. Text states: “We’re launching a Big Team Science review asking: What is the current state of evidence on the use of statistical packages in teaching and learning?” A magnifying glass graphic emphasises the research theme.
📣 Big news from RoSE! We’re launching a new RoSE × FORRT collaborative research project 🎉
With @rosenetwork.bsky.social and @forrt.bsky.social, we’re starting a Big Team Science review on evidence for using statistical packages in teaching & learning.
Watch this space 👀
#statsed #openscience
If you are interested, check out the description of all roles and the project here: docs.google.com/document/d/1...
⭕Ethics Coordinator:
- Assist teams with submissions for ethical approval and answer ethics-related questions
- Check submitted ethical approval documents
- Coordinate with lead team when teams are approved
⭕Translation Coordinator:
- Recruit translators for individual languages
- Coordinate translation process
- Coordinate with the method lead for study implementation when translations are finished
- Coordinate translation implementation checks