What does this mean for education? We need to reset expectations around what students will actually have access to. And we need to revise how we teach students to interface with technology. It’s a discussion that goes far beyond academic integrity.
Posts by Thomas Lancaster
There are still deals to be had. Cycle through open weight models with locally coordinated AI agents. But this requires a level of technical understanding way beyond the old technology of chatbots.
What was often in the $20 a month subscription level now requires the $100 or $200 a month level, with discussion online that even these levels may be underpriced. I’ve certainly got many times more value than I’ve paid for with carefully chosen subscriptions.
None of this should really be a surprise. The models cost much more to run than the subscriptions bring in. It’s the typical loss leader pattern: get everyone hooked, then monetise.
A young student stands before barriers displaying "Access Denied" signs, facing a large padlock symbolizing restricted access to AI tools.
The days of students (and everyone else) receiving heavily subsidised access to #GenAI tools appear to be over. We’re seeing reduced usage limits, restrictions on API access, and ever increasing advertising levels.
All of this shows once again how easy it is to create a fake news campaign about anyone, and how many people will believe it. William Shatner reports receiving no end of supportive messages from fans as a result of the fake news. By all real reports, he remains well.
In this case, William Shatner also states that Facebook is refusing to remove the fake news stories about him. This makes the whole type of narrative difficult for the wronged party to have any control over.
A company shares fake but realistic-looking images of a celebrity, suggesting a scandal or spreading a hoax. They then monetise it, for example by requesting donations for a supposedly dying celebrity. The celebrity never sees the donations.
The whole situation is one of #GenAI-facilitated deceit, all for financial gain, and usually involving someone in the public eye with a following, whether they’re loved or despised (or both).
William Shatner warns of fake news stories about him on Facebook, asserting they're monetized and harmful to fans and his reputation.
William Shatner has let out a desperate plea, as images of his battle with brain cancer flood the Internet.
The plea is for no one to be taken in by the images #techethics
I did my best to make everything future proof, so there’s already an underlying database, and the functionality to have individual secure views and user accounts. That would make everything easy if I ever did want to release this as a subscription tool. Maybe that $17 spend isn’t so bad, after all?
There is a useful system here that, with a few tweaks and more development, could replace paid tools. It also replicates some student development projects that previously took weeks of effort.
I still recommend care with vibe coding. This was just a fun “what if” experiment, and there are better options for agent-based programming, with version control and more deployment control, though Replit does have a lot of built-in testing functionality.
A digital workspace displays a project titled "Working with ChatGPT and Replit," with options for source management and output formats.
Next stage is to add image generation, link to my Typefully account for easier posting, and to interface with other social APIs directly. I also got ChatGPT to write a summary of our work together. The publishing system turned that into a decent post series, but I’ve written this series all by hand.
The largest downside is that Replit is useful for non-programmers who know just enough to direct and debug, and makes deployment easy, but it is just too expensive for regular hobbyist use. I’ve used $17 of my $20 credit for the month, so will pause development for now.
I also requested a built-in API, again with a secure private key, so I can funnel in requests and information from my OpenClaw setup. That came complete with documentation. This addresses one of the biggest failings with many commercial systems.
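Calling into an API like that typically just means sending the private key as a bearer token. A minimal sketch, assuming a bearer-token scheme and using a made-up endpoint URL and placeholder key (the real values would come from the generated documentation):

```python
import json
import urllib.request

# Hypothetical endpoint and placeholder key, for illustration only.
API_URL = "https://example-content-tool.replit.app/api/generate"
API_KEY = "replace-with-your-private-key"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but don't send) an authenticated POST request."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            # The private key travels in a header, never in the URL.
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

An agent setup on the other end would build a request like this for each piece of content it wants generated and read back the JSON response.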
So, what I have is almost like a mini NotebookLM, multiple sources per project, web url import, configurable content presets, stylistic preferences, all content generation running through the generous free tier of Google AI Studio. And the option to add user accounts for others.
I ended up largely managing the end of the process myself directly with the Replit agent, feeding in security recommendations from ChatGPT. There were some small layout issues to fix as well.
There were a few challenges. Replit developed a neat mobile first interface, but largely produced code stubs. The ChatGPT managing agent soon picked up on those. ChatGPT did somehow lose its chat connection later. Replit was replying, but ChatGPT couldn’t see it.
In many ways, I’m impressed. ChatGPT developed a great spec, with security at its core, sent the request to the Replit agent, and also answered questions I had about the aspects of the process I had to set up in the Replit app, rather than just granting approval within ChatGPT.
A dashboard displays recent projects and favorite content creation recipes, highlighting tools for publishing workflow optimization.
In the background this weekend, I decided to put my #Replit subscription to use, with #ChatGPT integration as its manager. ChatGPT decided what would best help my workflow would be for me to have a system to generate the types of content I often ask it to help with.
And yes, I do see the irony of posting these comments on social media.
I’m not sure that I’d support a blanket ban on social media use for UK children. To me, it is the engagement driven design of those platforms that has to be reconsidered, something which children seem particularly vulnerable to.
Although the situation is concerning, none of this actually proves that social media use leads to mental health challenges. There is a social media behavioural design problem, where continual notifications and recommendations are designed to capture attention.
There’s been lots of coverage of how Meta and Google were found liable in a case about a young woman’s childhood addiction to Instagram and YouTube. It’s big news for all computing professionals because of the potential repercussions.
First time, I think, I’ve been quoted in a media article mentioning the Prime Minister #techethics www.theguardian.com/technology/...
All in all, excellent research, with several studies that deserve to be written up formally, presented externally, or made into more developed blog posts to cover all the fresh academic integrity research and findings. Let me know if you're looking for student researchers to speak at your event.
A fascinating study considered how prompts could be embedded within student work being marked using a #GenAI system to obtain higher grades. The group found authority attacks to be more effective than emotional attacks and that educators using AI marking needed to include defensive capabilities.
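One crude first line of defence along those lines is to scan submissions for injection-style phrasing before they ever reach the marking model. A minimal sketch — the patterns below are my own illustrations, not the attack phrasings from the study:

```python
import re

# Illustrative injection patterns only; a real deployment would need
# a much broader, regularly updated list plus model-side defences.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are (the|a) (grader|examiner|marker)",
    r"award (this|the) (essay|work|submission) (full|top) marks",
]

def flag_possible_injection(text: str) -> list[str]:
    """Return any patterns matched in a submission, for human review."""
    return [p for p in INJECTION_PATTERNS
            if re.search(p, text, re.IGNORECASE)]
```

Pattern matching like this is easy to evade, of course, which is exactly why the study's finding that markers need layered defensive capabilities matters.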
Another group tested AI humanisation software, designed to rewrite text so it would be less likely to be seen as unoriginal using AI detectors. They found recent LLMs could do this by default, but AI detectors were beginning to indicate where humanisation had taken place. It's a cat and mouse game.
A student survey found they developed a better understanding of GenAI use boundaries as they progressed through their degree, with the positive finding that students did not want to become dependent on AI too early. Students in later years were more comfortable integrating GenAI into their workflow.