Weird, I hated the Claude+Codex pairing - Codex kept being assertively & convincingly wrong about things!
That said, Claude for triage and Codex for implementation (I hit my Claude limit) seems to work pretty nicely, though I should complete the loop and finish it with Claude for review.
Posts by Nic Crane
Just sent this to my oldest, who's in 4th year of a software engineering degree. It's going to be an "interesting" ride for this cohort
I've found AI is a fantastic tool for learning things myself - great for debugging things outside of my expertise that I wouldn't have time to look at otherwise. Hope the idea helps anyway!
FWIW I just asked AI and the recommendations are to both add the noindex tags to old versions and also add canonical tags to the old ones, pointing to the latest one.
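For anyone wanting to see what that recommendation looks like in practice, here's a minimal sketch of the two tags for the `<head>` of an old version's page (the URL is a hypothetical placeholder - point it at your own latest version):

```html
<!-- In the <head> of each old documentation version -->

<!-- keep the old page out of search indexes -->
<meta name="robots" content="noindex">

<!-- signal that the latest version is the preferred page (hypothetical URL) -->
<link rel="canonical" href="https://example.com/docs/latest/page.html">
```

Worth double-checking against current search engine guidance, since noindex and canonical send somewhat different signals and advice on combining them varies.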
Hmm, I get the same, there's probably something systematic going on here that I should look into. Thanks for highlighting!
Thanks for mentioning thinking - I always have it on with Claude, but I'm a novice Codex user & didn't realise reasoning has different levels (default is "medium"). I'm going to see if turning it up works better; GPT 5.4 is currently telling me lots of confident mistruths about R on "medium"...
I was experimenting with a free subscription I got to a non-Claude AI service for writing R code (C API stuff), & had the horrifying discovery that newer models are still much worse at R, but have improved their ability to persuade me that their incorrect code is correct. Back to Claude then!
I love Obsidian, I'm more thinking about personal strategies for archiving information though, regardless of tool
The thinking behind the auto-archive is that I tried manual archiving but it's a boring chore that I can justify skipping, so I need something more passive.
Anyone got good tips for managing personal knowledge bases? I organise my life with Obsidian but am a bit overwhelmed as my notes grow. New plan I'm trying out today - once a month everything gets put in an "Archive" directory and things get promoted to top level only if needed. Any other ideas?
I took the contents of my workshop at RainbowR on the topic of LLMs in R & made it into a course! It's about {ellmer}, and I spent soo long agonising over how to sell it, I gave up & instead released it under a "Pay What You Want" model
#rstats #llms #ai
www.aifordatapeople.com/courses/llms...
I think I got the idea from your keynote at RainbowR tbh! Going to look up general tips on editing writing and see if other ones apply here too! Will check out Claude+Codex pairing.
Also, AI reviewing AI seems to work well - just did this and it found a nice addition to make a test more robust that I might not have caught myself on the manual pass.
What tips do folks have for reviewing their own AI-generated code? My current approach: after a few rounds of iteration & initial checking, leaving the pull request as a draft for a few days, & coming back to it fresh, so I don't have a false feeling of "having fully read it". #ai #llms #opensource
Our list of 2026 #rstats and #python summer internships has been posted.
We can't wait to work with you and make great things!
tidyverse.org/blog/2026/03...
This is fantastic news! Heather is such a positive force in the #rstats community and is doing vital work for the long-term sustainability of R and its community.
I really enjoyed reading this article from scikit-learn maintainers about the specific impacts of AI-generated open source contributions, recommendations for maintainers, and the potential for positives where AI use can be helpful.
blog.probabl.ai/maintaining-...
#ai #opensource #llms
Ah, cheers!
OMG, same, like the AI shame is so real, even when it's inevitable we're going to make mistakes with something so new!
That's so cool! What does the extension do?
Thanks! I still am impressed by how easy it makes it to do these kinds of things!
I built a GitHub issue classifier that labels Apache Arrow issues by language using {ellmer} - super simple and almost 100% accuracy. Blog post: niccrane.com/posts/llm-issue-triage/
#rstats #ai #llms
In the shower thinking "wouldn't it be cool to combine LLM tool calls and have them run code but in a constrained way" & then "it needs some kind of intermediate representation; how would we validate whatever it produces?" & then realised my idea wasn't novel & just the motivation for text-to-SQL
I remember at posit::conf last year there was mention of posit::conf Europe 2026 - anyone know if this is still a thing? #rstats #positconf #posit
Huge thanks to the organisational team for putting on such an excellent event!
Excited for all of the talks tomorrow - check out the schedule here if you haven't seen it! conference.rainbowr.org/schedule.html
Whew, and it's done! Thanks to everyone who came to my RainbowR workshop on LLMs for Data Analysis in #rstats! First time with that content in front of an audience, so I appreciate the excellent questions folks asked (and double thanks to everyone who filled in the feedback forms!)
"Working with agents is a lot more productive, but a lot less fun." Charlie Marsh on the weird world of building software right now. Full conversation on The Test Set.
Sounds interesting, how well does it work for R code?
It's still experimental, so potentially some rough edges, but I think it's a great example of making sure the LLM benefits are tempered with what actually makes sense for *people*.