Good decision by the University of Groningen to ditch the Web of Science.
(All too often people assume when they pay a lot for something that the price means the same thing as value...)
www.rug.nl/library/news...
Posts by Rozenn Dahyot
1) We are told that AI represents exciting opportunities for teaching and research in our university.
2) Co-Pilot is the approved service.
3) Co-Pilot's ToS clearly states it is for "entertainment purposes only".
ERGO: Entertainment services provide exciting opportunities for teaching and research.
👍
Related read: ergosphere.blog/posts/the-ma...
The goal of science is to make new knowledge, the goal of a PhD programme is to make new scientists.
💯👏
Exactly this. We should not accommodate a tool of our oppression. A tool extremely useful to those who would drown truth in irrelevance—every authoritarian’s dream. Resistance is not futile. It is imperative.
Smart glasses make it remarkably easy to film someone without their consent. Those who study covert filming say it can create a physical & psychological threat, from stalkers to increased anxiety, as the law struggles to keep up
www.cbc.ca/radio/thecur... #cdnlaw
Canadian article applies everywhere
Well, this is invasive. It's like a person you don't know very well, grabbing your phone, scrolling your photos and sharing them with their work colleagues to make their software better. Oh wait. It *is* that.
📢 I'm hiring: an engineer or postdoc (12 months)
➡️ www.ign.fr/nous-rejoind...
Come train large generative models for the common good:
🗺️ open data (aerial/satellite imagery)
🏞️ applications in climate-change monitoring and natural-disaster management
#lastig #ign
Last year when I was checking into a hotel, the desk person was wearing Meta glasses. I kindly asked them to take them off. They were annoyed. I said, “I do not consent to you looking at my credit card and ID with Meta glasses on.” My instincts were correct: www.bbc.com/news/article...
Feel free to use the excellent search engine
sciences.re/postes/
Anthropic's team was relieved to hear that the government would be willing to remove those words, but one big problem remained: On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far, and the deal fell apart. Soon after, Hegseth
And I was told this was propaganda.
Made the cover of @acm.org with "A Decade of Docker Containers", recapping much systems work! Docker grew so fast in those early days that we never got a chance to write an academic paper about it, so this has been a long time coming cacm.acm.org/research/a-d... w/ @justincormack.bsky.social djs55
As far as I recall Ireland passed legislation a few years ago to outlaw essay mills. I do not see why AI cheating services shouldn’t be outlawed in the same way.
Starting March 1, 2026, researchers from the Chinese Academy of Sciences are prohibited from using central government funds to pay Article Processing Charges for high-priced OA journals, specifically Nature Communications and Science Advances.
Postdoc position(s) in my section. Exciting opportunity to work as a postdoc in a Danish university with fantastic colleagues and enriching research atmosphere! Please spread the word.
candidate.hr-manager.net/ApplicationI...
Post-doc opportunity at EDF R&D on Time Series Foundation Models. It’s a great project and I’ll be closely involved in the collaboration!
Details for application at:
www.linkedin.com/posts/etienn...
Documentation of widespread scientific misconduct at several major CS conferences. This will only get worse unless the community decides not to accept it. Will definitely check references more carefully now when reviewing for ICML.
Fantastic opportunity 👏
Together, our results suggest that the aggressive incorporation of AI into the workplace can have negative impacts on the professional development of workers if they do not remain cognitively engaged. Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of real skill development. Furthermore, we found that the biggest difference in test scores is on the debugging questions. This suggests that as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place.
Figure 7 from the preprint. Caption: "Task completion time and quiz score by years of coding experience. Error bars represent 95% CI. The control group (No AI) average quiz score is higher across all levels of coding experience."
From the preprint itself (on which the above blog post is based): arxiv.org/abs/2601.20245
This study by people from Anthropic itself should raise huge alarm bells about the use of AI in teaching how to code (and later on in coding itself, but esp. in the learning stage).
And remember: this is by the people who make Claude!
tl;dr: not that long, read it
www.anthropic.com/research/AI-...
None of the AI Skills Boost courses are bespoke. It's literally a government wrapper around microlearning courses that industry has been promoting for years now to habituate new users to AI - making the user friendly to it. Also, you have to sign up separately to AWS, Google etc to access them 😬
I signed up to the government's AI Skills Boost Hub to check its suite of courses and I am sad to report that the free AI training looks even worse than the thread below suggests aiskillshub.org.uk/aiskillsboost/
🇫🇷 We are hiring 🇫🇷
Assistant or Associate Professor Position in Computational Sociology @crestsociology.bsky.social @ipparis.bsky.social
Details here (please RT)
www.shorturl.at/E57le
Kudos to @maynoothuniversity.ie, who have made the decision to suspend indefinitely their use of X (formerly Twitter).
Blistering piece on ed tech in @economist.com.
‘Although ed-tech companies tout huge learning gains, independent research has made clear that technology rarely boosts learning in schools—and often impairs it.’
economist.com/united-state...
You wouldn’t engage with a person trying to pick you up while waving a camera in your face. Smart glasses are removing that choice, and removing privacy. How are we supposed to trust other humans when we can’t tell if they are mining us for content?