OpenAI signs BAAs for ChatGPT. I would guess a lot of therapists are using personal accounts in a noncompliant way, but AI use certainly can be compatible with HIPAA
Posts by hortonhearsafoo
Meet our #PyOhio 2025 Speaker: William Horton 🎉
www.pyohio.org/2025/program...
William Horton is giving the talk:
Demystifying AI Agents with Python Code
www.pyohio.org/2025/program...
Join us next week to listen in, and learn more about the #Python world!
#PyOhioTalks
Excited to speak at @pyohio.org today! I’ll be talking about “Demystifying AI Agents with Python Code”
> this new thing is going to make all our lives unimaginably easier AND that you need to rapidly acquire new skills to adapt to it or be left behind
This was the exact messaging around the computer and the internet. It’s not new or unique to AI.
Washington DC is the hottest city in the country because it's the only one where people spend all day walking around a paved-over swamp in a wool suit
I’ve said it before, and I’ll say it again: We refuse to reinforce systems of oppression and exclusion.
If you agree, take my grandmother’s advice. Put your money where your mouth is 🫶🏾
Donate: www.zeffy.com/en-US/donati...
Non-partisan AI supporter here
I see plenty of absurd science fiction claims about things that AI cannot do all the time
Sal Khan's description of AI teaching assistants there doesn't feel unrealistic to me - I think you could build that with our 2025-era models
Yeah @simonwillison.net had a good post about this. I fear the battle has been lost though, these days people just say “vibe coding” for anything
simonwillison.net/2025/Mar/19/...
All in all a very sane ruling
- training on a book is obviously fine
- pirating books to train on is obviously not fine
It’s pretty wild that in a place like DC, heating is considered a “necessity” that landlords must provide, but AC isn’t. Right now it’s 95 degrees outside, and feels like 103 according to Google.
a federal judge just ruled that AI/LLM training on copyrighted material is fair use
“Authors concede that training LLMs did not result in any exact copies nor even infringing knockoffs of their works being provided to the public”
I mean no, when you asked for evidence I didn't think I'd have to personally defend the methodology of a peer-reviewed paper, yet here we are (I mean the second one I shared; the first, I admit, was only a pre-print)
Also, how they define hallucination isn't relevant to the results that support my original claim. In the experiment I care about, they just did pairwise comparisons of the summaries and asked humans which was better, and LLM summaries won more often.
> The problem being that extrinsic hallucinations aren't necessarily wrong, they're just not supported explicitly by the text.
If you look at the paper that originally defines "extrinsic hallucinations", this is not a "problem" but in fact part of the definition that they consider extensively.
I can't believe I'm having to say this, but Chinese grad students are humans
It is always such a bummer to hear otherwise smart people on smart podcasts breezily opine that “ADHD is overdiagnosed.” It is far, far, far (far) more common for ADHD to be missed or ignored, especially in girls & adults, than it is to be inappropriately diagnosed or treated.
I know I’m snarky on here but I genuinely value the Bluesky discourse
hey if twitter could be killed by being a garbage website it'd have died a hundred deaths by now, that's my bitter lesson
Good call, it was actually the “Button Shapes” setting
You said: “Roads are designed around specific speeds. The engineers designing the roads take into account a lot of different factors to set the speed limit on the roads. It's not just an arbitrary number pulled out of the ether.”
My argument is that engineers don’t set the limit for most highways
“Ah, I see you use a personal computer. But can you defend what Microsoft did with Windows Vista?”
So both sides can agree with what you’re saying, but they draw the lines in very different places
I think you’re right overall, and I think a major disagreement comes from what people think should require human connection. I get yelled at on here for writing rote work emails using ChatGPT, but to me corporate communication has never been something that fostered human connection in the first place.
Of course I actually underestimated significantly because I also assumed only one essay would be written in the course
If I were a teacher I would want to know how many students would be wrongly accused if nobody was actually cheating.
My underlying assumption was that, in the absence of evidence about how many students in your class are cheating, the only “fair” way to get at an expected value for false accusations is to assume nobody is cheating.
You may dispute that assumption but I think it’s at least defensible
Yeah I didn’t need to look it up.
I see you think that 3/100 falsely accused students is an ok outcome though
I feel like you saying that we don’t have enough info to determine the answer only further proves my overall point, which is that we shouldn’t be deploying AI detection software on students.
Anyway, false positive rate is false positives divided by (false positives + true negatives)
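The math in this thread is easy to sketch out. Here's a minimal Python illustration of the false positive rate formula above and the expected-false-accusation reasoning, assuming a hypothetical 3% false positive rate and a 100-student class where nobody is actually cheating (so every student flagged is a false accusation; the specific numbers are only for illustration):

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN), per the definition in the thread."""
    return false_positives / (false_positives + true_negatives)

def expected_false_accusations(num_students: int, fpr: float) -> float:
    """If nobody in the class is cheating, every student is a potential
    false positive, so the expected number of wrongly accused students
    is simply N * FPR (assuming one essay per student)."""
    return num_students * fpr

# With a hypothetical 3% FPR and 100 honest students:
fpr = false_positive_rate(3, 97)
print(fpr)                                  # 0.03
print(expected_false_accusations(100, fpr)) # 3.0 students falsely accused
```

Note this undercounts if students submit multiple essays, since each submission is another chance to be flagged.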