If you’re also interested in latent class analysis, @lisemarienassen.bsky.social et al. use LCA in this paper doi.org/10.17645/mac...
Posts by Doug Parry
I'll send along the other dataset that I mentioned earlier. I need to extract it from the rest of the data. It's similar to the other two from Craig.
This paper doi.org/10.1016/j.ch... by @lucyhitcham.bsky.social also has PHQ-9, logged & subjective smartphone use (data is on OSF).
I have GAD-7 too?
Odd, thanks for checking! I'll try on a different system tomorrow.
Looks super interesting!
I see from this talk www.youtube.com/watch?v=DSgR... that @davidlazer.bsky.social gave that they have app-level data and biweekly mental health data too.
cross-device data (well browser/phone) is also nice for studying behaviour
Is this geo-blocked?
I can't access any pages on nationalinternetobservatory.org from NL (it times out) but I can with a VPN set to the US...
Query A: Primary studies
TITLE-ABS-KEY( ("screen time" OR "digital media use" OR "social media use" OR "smartphone use" OR "internet use") ) AND (DOCTYPE(ar)) AND NOT ( DOCTYPE(re) OR TITLE("review" OR "meta-analysis" OR "systematic review" OR "umbrella review") )
= 28906 results

Query B: Reviews
TITLE-ABS-KEY( ("screen time" OR "digital media use" OR "social media use" OR "smartphone use" OR "internet use") ) AND ( DOCTYPE(re) OR TITLE("review" OR "meta-analysis" OR "systematic review" OR "umbrella review") )
= 2846 results

Review density (reviews per 100 primary studies) = 2846 / 28906 × 100 ≈ 9.85
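If anyone wants to reproduce the density figure, here's a minimal Python sketch using only the two counts above (it doesn't rerun the Scopus queries, just the arithmetic):

```python
# Counts taken from the two Scopus queries above (as of the search date).
primary_studies = 28906   # Query A: primary studies
reviews = 2846            # Query B: reviews

# Review density: reviews per 100 primary studies.
density = reviews / primary_studies * 100
print(f"{density:.2f} reviews per 100 primary studies")  # ~9.85
```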
I know the OP is tongue-in-cheek but out of interest I ran a quick-and-dirty check on SCOPUS. Looks like 1 review for every 10 primary papers (though probably an undercount for primary papers).
I don't know what the ratio is for other areas, but that feels high?
If I understand that meta-analysis correctly, almost all relevant IVs & DVs are self-report… (+ not directly targeting use). For the “clever experiment” (n=49), 3 meta-analyses find null effects.
This is definitely something that we need more research on but Cal is way beyond any solid data here..
Despite the headline (which presumably is just some editor for the press release), this is still an immense project!
At least there are pre (post?) prints...
Odd choice for a headline (and on a 1 April...)?!
A quick skim of the papers shows that, with only a few exceptions, most replication areas achieved >50% (a low bar..)?
That said, the reproducibility findings are pretty damning, and point to reporting problems and nontransparent materials...
Perhaps of interest, we tried this and wrote up our experience. TL;DR: it takes a lot of work to code a bespoke agent that can realistically pass scrutiny, but it is possible (at scale, without access to data for refinement.. less likely)
bsky.app/profile/rich...
Screenshot of a manuscript title page. Title: “An AI agent can complete the Attention Network Test with human-like behavioral signatures: Implications for the bot-or-not debate.” Authors: Richard Huskey, Ziyu Zhao, Douglas A. Parry, and Jacob T. Fisher, with university affiliations listed below. The abstract says an autonomous AI agent completed the Attention Network Test in real time and produced mostly human-like behavioral data. Across seven code revisions, the bot achieved attention network scores within published human norms, 95.8% accuracy, and reaction-time patterns showing positive skew and trial-to-trial autocorrelation. Compared with 796 human participants, the bot fell within the human range on several measures but showed elevated autocorrelation and a bimodal reaction-time distribution due to intermittent detection failures. The paper argues this makes simple bot-vs-human detection harder in online reaction-time studies.
1/n
New preprint with Ziyu Zhao, @dougaparry.bsky.social, & @jacobtfisher.online
Can an AI bot complete a live online reaction-time task & produce data that passes as human?
We built an autonomous bot to take the Attention Network Test (ANT) in real time
Preprint:
doi.org/10.31234/osf...
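For anyone wondering what “positive skew and trial-to-trial autocorrelation” look like in practice, here is a minimal, hypothetical Python sketch (simulated reaction times, not the bot’s code or the paper’s analysis pipeline) that computes both signatures from an RT vector:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_trials = 500

# Simulated reaction times (ms): a slow drift in alertness plus an
# exponential component, giving a long right tail and serial dependence.
drift = np.cumsum(rng.normal(0, 5, n_trials))       # slow trial-to-trial fluctuation
rt = 450 + drift + rng.exponential(100, n_trials)   # hypothetical RTs

print("skew:", round(skew(rt), 2))                  # > 0 -> positively skewed
lag1 = np.corrcoef(rt[:-1], rt[1:])[0, 1]           # lag-1 (trial-to-trial) autocorrelation
print("lag-1 autocorrelation:", round(lag1, 2))     # > 0 -> serial dependence
```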
Hey y'all! Very pleased to announce that we'll be hosting Communication Science Futures again in East Lansing later this year. The conference in 2024 was a blast, and we're looking forward to running it back!
Keynote speaker: James Pennebaker
Submission deadline: June 5th.
→ commscifutures.com
Many of Matt's comments about the order of magnitude, regulation and responsibility resonate with what I wrote in this recent preprint about attention, agency, alignment and distributed responsibility in digital environments
doi.org/10.31234/osf...
Super interesting presentation on smartphone habits, automatic behavior, user perceptions, logged smartphone use, measurement gaps, and responsibility by Matt Sharpe at the @oii.ox.ac.uk
www.youtube.com/watch?v=jmDx...
The death of writing will not be caused solely by AI; it will also be perpetrated by concerned educators who now perceive academic essays as outdated. In the end, this becomes a self-fulfilling prophecy: if students are only trained for oral exams, it is unsurprising that writing skills take a hit.
Yes, I feel like every other time that I try to do something on the OSF it breaks. Most times I end up having to do things multiple times to get it to eventually work.
Attention is not a fixed resource that some people “have” and others lack. It is a regulatory system that allocates effort based on expected reward and uncertainty. When outcomes are unclear or delayed, attention naturally destabilises. External structure often works better than pressure.
Hard to know given the wide range of what "ban" actually meant here...
snake_case FTW!
I’m sure. I haven’t looked in detail at this so can’t say about accuracy.
With Prism last week, it does seem like academic tools are increasingly in the cross hairs here
I’m also not necessarily opposed to these tools, this just seems like an odd target for automation.
Another one of life’s joys being automated away..
For sure! I use 2x quite a lot, but 4x feels really extreme (at least on typical YouTube content)
um... en.wikipedia.org/wiki/PRISM
I guess that's one way to get training data from scientists.
(notwithstanding the fact that this is trying to replace one of the best parts of doing science...)
Following A Perfect Circle's Disillusioned and 1000 Friends by Alien Weaponry, it seems like there's a new entry in the "anti-digital technology" genre with Primal by Soen focusing on mindless scrolling and hollowness..
youtu.be/rlSAhFy9YUw?...
youtu.be/BIsH686xWl0?...
youtu.be/R-2GKj25lQE?...
First post of the year, new paper out today: we present possibly the biggest case of systematic Measurement Schmeasurement in tech use. It seems that most studies on gaming (videogame) addiction/disorder haven't measured gaming after all. This research took years, so long 🧵 doi.org/10.1098/rsos...
Screenshot of a preprint titled “Digital Behaviourism: A functional approach to behaviour in digital environments”
Our preprint has evolved!
v2 of “Digital Behaviourism” is out now with a new title, new co-authors, and a deeper dive into the behavioural concepts that shape our online lives.
It’s time to move beyond “screen time” and focus on function of online behaviours.
osf.io/preprints/ps...
Ok, that makes sense... *Industry* being the operant term.
Just to check my understanding: if an author previously collaborated with Company X (e.g., Microsoft) and two years later publishes a paper on Platform Y (e.g., Instagram) without disclosing that prior collaboration, this would be coded as an undisclosed disclosable tie under their approach?
For #ICA26 I received many very useful reviews (on both accepted and rejected papers) but I think these two take the cake (on a student-led paper).
Sure, it’s what we wanted to hear, but…