
Posts by Ed Ivimey-Cook


📣 EvolDir is now managed by @eseb.bsky.social!

We are delighted to be taking the reins, and we express our gratitude both to Brian Golding, who began this service to the community in the mid-1980s, and to @rdmpage.bsky.social, who ran this account until now 👏

You can now find evoldir here: evoldir.net

2 days ago 156 77 2 6

Oh fun 😬🙃

2 days ago 0 0 0 0
Got bugs? Here’s how to catch the errors in your scientific software Computer scientists share their advice for ensuring that your scientific software does what it’s supposed to do.

“The idea behind code review is not to judge people, but to check for errors and explain coding best practices.”

Happy to be part of a recent piece that, amongst other things, discusses code review:

www.nature.com/articles/d41...

2 days ago 10 4 0 0

Aaah yes, the yearly occurrence of ORCiD login breaking Manuscript Central.

Anyone else having login issues?

2 days ago 0 0 1 0

Haha! I’m going to go looking for it now…

6 days ago 0 0 1 0

🙌🙌🙌🙌

6 days ago 0 0 0 0

How often do people update their personal website?

Once a year? Multiple times a year? After every publication?

6 days ago 2 0 2 0

The only time I answer the door

1 week ago 4 2 1 0
RDM Weekly - Issue 040 A weekly roundup of Research Data Management resources.

Issue 40 of #rdmweekly is out! 📬

➡️ Reproducible R Code @daxkellie.bsky.social @sortee.bsky.social
➡️ Generating Universes Within Universes with a Single Seed @andrew.heiss.phd
➡️ AEA Replication Tracker @paulgp.com
➡️ Informed Consent Template
and more!

rdmweekly.substack.com/p/rdm-weekly...

1 week ago 15 3 0 1
Getting Collective Feedback From Your Advisees

I recently asked my research group for collective feedback, after seeing this blog post. I was a bit nervous about it! But it was really good and helpful.
I gave them a starter set of questions. They met without me & then shared their collective thoughts.
(1/3)
colinraffel.com/blog/getting...

1 week ago 10 4 1 0
An interactive OJS playground demonstrating a linear congruential generator (LCG) using the formula X_n = (aX_{n-1} + c) mod m. Controls on the left set modulus (m=8), multiplier (a=5), increment (c=3), seed (X_0=1), and numbers to generate (12). A table on the right shows the resulting sequence of X values, intermediate calculations, mod m results, and normalized values X_n/m, with the final "random" numbers highlighted in yellow.
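The playground's generator can be sketched in a few lines. The post itself uses R and an OJS playground; this is a hypothetical Python version of the same linear congruential generator, using the playground's stated parameters (m=8, a=5, c=3, X_0=1, 12 numbers):

```python
def lcg(m: int, a: int, c: int, seed: int, n: int) -> list[int]:
    """Linear congruential generator: X_n = (a * X_{n-1} + c) mod m."""
    x = seed
    states = []
    for _ in range(n):
        x = (a * x + c) % m
        states.append(x)
    return states

# The playground's parameters: m=8, a=5, c=3, X_0=1, 12 numbers.
states = lcg(m=8, a=5, c=3, seed=1, n=12)
print(states)                   # [0, 3, 2, 5, 4, 7, 6, 1, 0, 3, 2, 5]
print([x / 8 for x in states])  # normalized X_n / m "random" values in [0, 1)
```

Note the cycle: after eight draws the state returns to the seed and the sequence repeats, which is exactly the "cycles" problem the post describes and why real PRNGs use enormous moduli.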

Excerpt from the blog post with R code that tests all seeds from 1 to 10,000 to find which ones produce 10 heads in a row when simulating coin flips. The possible_seeds data frame is filtered to show 10 seeds (614, 1667, 3212, 4166, 4580, 5527, 5824, 7365, 7468, 8975) that meet this criterion. The post notes that seed 614 actually produces 13 heads in a row, confirmed with a withr::with_seed(614, ...) call below.
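The same "seed hack" can be sketched in Python (the blog post uses R; Python's Mersenne Twister is seeded differently, so the lucky seeds found here will not match the seeds listed in the post):

```python
import random

def longest_heads_run(seed: int, flips: int = 50) -> int:
    """Flip `flips` fair coins under a given seed; return the longest run of heads."""
    random.seed(seed)
    best = run = 0
    for _ in range(flips):
        if random.random() < 0.5:  # heads
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# "Seed hacking": scan seeds until a simulation gives >= 10 heads in a row.
lucky = [s for s in range(1, 10_001) if longest_heads_run(s) >= 10]
print(len(lucky), lucky[:5])  # exact seeds depend on Python's RNG
```

Because each seed deterministically fixes the whole stream, searching over seeds lets you pick whichever "random" outcome you want, which is the post's point about seeds defining narrow, known universes.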

R console output demonstrating that set.seed(1234) produces reproducible results. The first block calls runif(5) and returns five values: 0.1137, 0.6223, 0.6093, 0.6234, 0.8609. The second block uses the same seed but splits the draw into runif(2) then runif(3), returning the same five values in the same order, showing that the sequence is preserved regardless of how many numbers are drawn at a time.
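The same stream-preservation property holds in Python's `random` module; a minimal analogue of the R demo (the actual values differ from R's, since the generators and seeding schemes differ):

```python
import random

# Draw five uniforms in one go...
random.seed(1234)
all_at_once = [random.random() for _ in range(5)]

# ...then reseed and draw the same stream as 2 + 3 in separate calls.
random.seed(1234)
split_draws = [random.random() for _ in range(2)] + [random.random() for _ in range(3)]

# The underlying stream is identical; only the grouping of the draws changed.
assert all_at_once == split_draws
```

The seed fixes the generator's internal state, and each draw advances that state by one step regardless of how the draws are batched.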

Table of contents for the post:

Introduction
Seeds and reproducible randomness
My (somewhat incorrect) mental model of how seeds work
Making “random” numbers with an equation
    Live interactive playground
    Cycles and fancier algorithms
Why does it matter if “random” numbers aren’t actually random?
    You’re limiting yourself to narrow, known universes
    You can seed hack and get any values you want
    Real world bad things can happen because of pseudorandom numbers
Can computers even create true randomness?
    Moving a mouse around
    Lava lamps
    Atmospheric noise
How I use true randomness in my own work
“…as an ook cometh of a litel spyr…”

I've been using random seeds for years but I have no idea how they work. Seeds somehow(?) make the same random numbers?

So I figured it out! New post includes an interactive PRNG generator, lava lamps, lottery fraud, @random.org, Chaucer, and Minecraft #rstats

www.andrewheiss.com/blog/2026/04...

1 week ago 100 23 6 3

Sadly we might not be able to make any more of these due to budget issues, so if you’re enjoying them please share widely!

1 week ago 6 7 0 1
Past conferences Past conferences by the Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE)

Save the date! The SORTEE Conference 2026 will be held virtually on Oct 13-14. Engage in sessions on open, reliable ecology and evolutionary biology practices. More details at https://www.sortee.org/past #conference

1 week ago 9 11 0 0
Challenges in the Computational Reproducibility of Linear Regression Analyses: An Empirical Study

Background: Reproducibility concerns in health research have grown, as many published results fail to be independently reproduced. Achieving computational reproducibility, where others can replicate the same results using the same methods, requires transparent reporting of statistical tests, models, and software use. While data-sharing initiatives have improved accessibility, the actual usability of shared data for reproducing research findings remains underexplored. Addressing this gap is crucial for advancing open science and ensuring that shared data meaningfully support reproducibility and enable collaboration, thereby strengthening evidence-based policy and practice.

Methods: A random sample of 95 PLOS ONE health research papers from 2019 reporting linear regression was assessed for data-sharing practices and computational reproducibility. Data were accessible for 43 papers. From the randomly selected sample, the first 20 papers with available data were assessed for computational reproducibility. Three regression models per paper were reanalysed.

Results: Of the 95 papers, 68 reported having data available, but 25 of these lacked the data required to reproduce the linear regression models. Only eight of 20 papers we analysed were computationally reproducible. A major barrier to reproducing the analyses was the great difficulty in matching the variables described in the paper to those in the data. Papers sometimes failed to be reproduced because the methods were not adequately described, including variable adjustments and data exclusions.

Conclusion: More than half (60%) of analysed studies were not computationally reproducible, raising concerns about the credibility of the reported results and highlighting the need for greater transparency and rigour in research reporting.
When data are made available, authors should provide a corresponding data dictionary with variable labels that match those used in the paper. Analysis code, model specifications, and any supporting materials detailing the steps required to reproduce the results should be deposited in a publicly accessible repository or included as supplementary files. To increase the reproducibility of statistical results, we propose a Model Location and Specification Table (MLast), which tracks where and what analyses were performed. In conjunction with a data dictionary, MLast enables the mapping of analyses, greatly aiding computational reproducibility.

### Competing Interest Statement

The authors have declared no competing interest.

### Funding Statement

There was no cost associated with this research except for attending conferences. These costs were covered by the primary author's PhD allocation from the health faculty, Queensland University of Technology, and scholarships. The Statistical Society of Australia (SSA) and the Association for Interdisciplinary Meta-research & Open Science (AIMOS) supported the primary author with travel grants to attend their respective conferences. These scholarships did not influence the results of the study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

### Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained. Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below: The aim of this study was to reproduce the statistical results from publications that made their data publicly available. Authors of papers published in PLOS ONE were considered to have implicitly consented through their agreement with the journal's data-sharing policy, which supports the validation and reproduction of results from shared data. This study received Negligible-Low Risk Ethics approval from the Queensland University of Technology Human Research Ethics Committee (Approval Number: 2000000458).

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals. Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance). Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable. Yes

The data and a reproducible R Quarto file used to produce this paper, including tables, figures, and code, have been stored in a GitHub repository and can be cited using a Zenodo DOI. <https://github.com/Lee-V-Jones/Reproducibility> <https://doi.org/10.5281/zenodo.19448969>

New preprint by my excellent student Lee Jones on trying to computationally reproduce papers that used linear regression and made their data available. This has been a huge effort by Lee and she has written useful recommendations for practice. www.medrxiv.org/content/10.6...

2 weeks ago 30 14 2 3

Time to plan your content submissions for @SORTEE2026

2 weeks ago 7 7 0 0

Meet our Member Engagement and Diversity, Equity, & Inclusion Committees!

If you missed last month’s virtual social, you can learn all about our committee members and the great work they do to promote #openscience right here!

Upcoming events: sortee.org/mixers/

#SpotlightOnSORTEE

2 weeks ago 6 2 1 0

Expertly led by @pablosalmon.bsky.social!!

3 weeks ago 2 0 0 0
Within-individual changes in mitochondrial DNA copy number across the life course and links to individual performance Abstract. Ageing is characterized by complex biological processes reflected in cellular and molecular changes. Mitochondria, which are crucial for energy p

🧬 New study!

How do mitochondria change across life? We tracked mtDNAcn within individual zebra finches and found an early-life decline, plus links between mtDNAcn and late-life flight performance

#Ornithology #Physiology #Mitochondria
@sbohvm.gla.ac.uk @ifv-whv.bsky.social

🔗 royalsocietypublishing.org/rsbl/article...

3 weeks ago 24 9 1 1

This may in part reflect the effectiveness of editorial policies in journals that have introduced data editors and mandatory sharing of replication packages."

Really nice evidence of the significant power that journals (and data editors) can have in ensuring computational reproducibility.

3 weeks ago 0 0 0 0

and also:

"Our findings suggest high rates of—but far from perfectly—computationally reproducible results for leading journals. Our results are in contrast with several studies that document low computational reproducibility rates in economics...

3 weeks ago 1 0 1 0
Reproducibility and robustness of economics and political science research - Nature Robustness checks and reproduction of analyses with existing and updated data based on 110 articles in economics and political science journals with data and code-sharing requirements found high level...

Cool results from Brodeur et al

doi.org/10.1038/s415...

"Our results are in stark contrast with several studies documenting low computational reproducibility rates. This is perhaps unsurprising given that most of the articles in our sample were already computationally reproduced by data editors".

3 weeks ago 5 2 1 0
A collage of 1) a meerkat in the Kalahari desert, wearing a tracking collar while stood beside the entrance to its burrow, 2) a 3D scan of a subterranean burrow, 3) a map showing burrow locations and their usage patterns.

🚨 Please share: PhD opportunity

🐾 Mapping the Manor: How are the lives of meerkats shaped by their sleeping & breeding burrows? 🐾

💡 Big ecological Qs
📡 Geophysical scanning
🌍 Field ecology & behaviour
📊 Big data & code

with me & @geophysics-adam.bsky.social
APPLY www.findaphd.com/phds/project...

3 weeks ago 15 15 0 1

Open Code in Ecology and Evolutionary Biology: An Evidence-Based Appraisal by SORTEE: doi.org/10.32942/X27...

3 weeks ago 4 1 0 0

My new piece "Reproducibility: how to strengthen a weak foundation" is out today @nature.com! 🎉 How reproducible is research in the social & behavioural sciences? A new study by Miske et al. assessed 600 papers across 62 journals: the results are sobering.

📄 doi.org/10.1038/d415...
📄 rdcu.be/fbcq5

3 weeks ago 47 25 1 0

Will try and be there! 😄

3 weeks ago 2 0 1 0

🇨🇦 SORTEE Canada Chapter

Based in Canada and interested in open, reliable, transparent ecology & evolution research?

👉 Fill out our expression of interest form (tinyurl.com/yxayj9yd) to connect, share ideas, and help shape future local activities! 😊

4 weeks ago 6 5 0 0

Think this is cool? @sarah-dobson.bsky.social is soon to finish her PhD and looking for a post-doc, so don't miss out on a chance to recruit her!

3 weeks ago 3 2 0 0

New preprint from the fantastic @sarah-dobson.bsky.social !!

Worried about assuming causality when estimating selection? Confused about knowing whether selection is hard or soft? We have the solution for you!

3 weeks ago 20 7 1 0
Postdoctoral Fellowships | Human Frontier Science Program

Looking for a postdoc? Letter of Intent deadline for the Human Frontiers postdoc fellowship is in early May!

www.hfsp.org/funding/hfsp...
My husband had one of these and they were a great funder to work with.

Get in touch if you think I/my lab might be a good fit for your research ideas!

3 weeks ago 8 13 1 1