Posts by Marian Ferrarelli

Archaeologies of the Future

2 weeks ago

I've been seeing chatter about someone using ChatGPT to cure their dog's cancer, so I decided to go find out what that was actually about.

Short 🧵>>

1 month ago
Post image
1 month ago

1. The European Parliament’s attempt to ban plant-based foods from being sold as “sausages”, “burgers” etc is a direct response to livestock industry lobbying. This is a short thread on how utterly bleeding ridiculous it is. 🧵1/6

4 months ago

Our book with Prof. Stephen Ball @ioe.bsky.social is an invitation to think education differently. Enjoy!

11 months ago
¿Escolarizar es educar? Tensiones, dilemas y preguntas urgentes. YouTube video by Institución Libre de Enseñanza

#escuela #educación
🎥 "¿Escolarizar es educar? Tensiones, dilemas y preguntas urgentes" (Is schooling the same as educating? Tensions, dilemmas, and urgent questions)
#Debate between @jordicollet.bsky.social and Mariano F. Enguita this past Monday.
Moderated by @carlosmagro.bsky.social
🔗 www.youtube.com/watch?v=Fn7O...

1 month ago
Post image

This one-pager from the French ministry of education contains more useful guidance for teachers than the entire document we recently got in Ireland.

It is understandable that policymakers might not be able to answer all the questions that arise with AI. But how refreshing it is to see them try!

5 months ago
How to Imagine Educational AI: The Filling of a Pail or the Lighting of a Fire? Recent advances in artificial intelligence (e.g., machine learning, generative AI) have led to increased interest in its application in educational settings. AI companies hope to revolutionize teachi...

What do The Matrix and educational AI have in common? Unfortunately way too much, as they promote very narrow ideas of what education should be.

Instead, Alberto Romele and I highlight better sci-fi inspirations in this new paper in Educational Theory:

onlinelibrary.wiley.com/doi/10.1111/...

4 months ago
quote from philosopher Michał Wieczorek: “The recklessness with which we are approaching the adoption of AI in schools is ridiculous. If you think about it, you just have tech companies bringing products into school with no testing, no evidence, no oversight. If you or I tried to do that, we would immediately get into trouble with an ethics board at our university … And yet, for some reason, we have decided that bringing AI as quickly to schools as possible is the way to go”

We're hearing more & more sloppy talk about 'AI ethics' in education - 🎧🎙️ listen to me talk to @michalwieczorek.bsky.social for a philosophical take on the ethics of AI ... and why a lot of current ed-tech is ethically questionable!

www.buzzsprout.com/1301377/epis...

2 months ago
quote from Mark West: “My own background is history. And I'll tell you what changes history - pandemics change history. People like to forget that. But looking forward 100, maybe 200 years in the future, the COVID-19 pandemic will be seen as a major turning point in education. So, we need to be clear about how the pandemic has changed narratives about education … and we also need to be clear about what lessons we can draw from this”

Six years on since COVID hit us all, and it is wild how most people in ed-tech now act like *nothing* happened. I got to talk with Mark West about how the pandemic fundamentally changed our dependency on ed-tech, and what we can learn from the COVID experience.

www.buzzsprout.com/1301377/epis...

1 month ago
The Impact of GenAI Chatbots on Student Learning in Higher Education: A Literature Review

ijte.net/index.php/ij...

"GenAI chatbots can enhance learning by providing personalized support, immediate feedback, and opportunities for self-directed learning. However, concerns persist regarding over-reliance on AI, reduced critical thinking, and academic integrity."

1 month ago
How Teens Use and View AI Just over half of U.S. teens say they've used chatbots for help with schoolwork, and 12% say they’ve gotten emotional support from these tools. Teens tend to view AI's future impact on their lives mor...

Pew Research: "AI literacy is on the minds of parents, educators and researchers. Experts are already calling this a crucial skill for teens – including as a way to combat misinformation."

www.pewresearch.org/internet/202...

1 month ago
How School Districts Are Crafting AI Policy on the Fly It's a struggle to create guidelines that keep up with rapid advances in the technology.

www.edweek.org/technology/h...

1 month ago
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.


However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot b…


To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

We remain ready to continue our work to support the national security of the United States.


WASHINGTON (AP) — Anthropic CEO says AI company 'cannot in good conscience accede' to Pentagon's demands to allow wider use of its tech.

1 month ago
Agentic AI Can Complete Whole Courses. Now What? A young tech entrepreneur launched the tool Einstein this week, marketing it as a way to free students from busywork—and triggering robust faculty debate. Einstein’s creator says that was the whole po...

LOL: "After this story was published, Paliwal said he received a cease and desist letter from Instructure, which owns Canvas, and has since taken down Einstein’s website." This was the agentic AI that was going to do your Canvas homework for you.
www.insidehighered.com/news/tech-in...

1 month ago
Repensar la ciudadanía digital desde América Latina (Rethinking digital citizenship from Latin America): Critical education, solid regulation, and public participation are key to a digital citizenship that can contest power in the age of AI.

Caro Elebi:
"It is not enough to ask for and promote individual responsibility. Literacy is necessary, but it must be accompanied by regulations that set clear limits on those who design, deploy, and commercialize these systems"

www.lasillavacia.com/red-de-exper...

1 month ago

Something that an AI agent can't do is tell you what thoughts and questions you actually have when you read something. Here are some of mine about the passage below, and why I value teaching students to think critically: so they can ask their own questions.

1 month ago
Screenshot of an article header from a website. The title reads "Writing As Thinking—By Proxy" in bold serif font. Below it, the author's name "by Jon Ippolito" appears as a red hyperlink, followed by the date "Wednesday, February 18th 2026" in gray monospace font. The article preview shows a photo of a cream-colored t-shirt on a hanger printed with cartoon robots and the text "The Transformers: Writing Instructors in the Age of A.I." alongside an italic abstract that reads: "In this provocation, Jon Ippolito questions what human capabilities AI extends and what capabilities it removes. In doing so, he charts the evolution of human writing processes alongside technology while speculating on what future human writing practices will look like."


Will “writing as thinking” survive the AI age? A provocation from Jon Ippolito, followed by a conversation among the other "Transformers," Mark Marino, @anetv.bsky.social, @mahabali.bsky.social @marcwatkins.bsky.social Jeremy Douglass, and me.

preview.electronicbookreview.com/gatherings/t...

1 month ago
Post image

I'm working on a post about the Einstein AI agent that claimed it could do a whole course for you and log into Canvas. It is likely a hoax or a failed vibe-coded app, and it has been taken down. Agentic AI like Perplexity's Comet browser CAN take a course for you. Knowing what is BS will always be valuable.

1 month ago

Generic white dude who programs (@westbynoreaster): Why then did you take down the “Einstein” chatbot?
1:22 AM · Feb 26, 2026

Advait Paliwal (@advaitpaliwal): Cease and desist

Generic white dude who programs (@westbynoreaster): Really? Presumably from Canvas/Instructure, right?

Advait Paliwal (@advaitpaliwal): Due to the name Einstein


In utterly DELIGHTFUL news, Advait Paliwal, the desi techbro behind the cheatbot Einstein AI, which claimed it could log into Canvas and do/turn in your assignments for you, has been forced to take down his website.
He'll likely be back, and there are others like him in abundance, sadly.

1 month ago
Assetizing academic content and the emergence of the ‘assetizen’: education platforms, publisher databases, and AI model training (Higher Education). Academic content, such as teaching materials and academic publications, has become an economic resource. This has occurred through assetization as the key economic regime in...

New OA article just out on "assetizing academic content" led by @jkom.bsky.social with me, @keanbirch.bsky.social & Klaus Beiter, exploring how academic materials are turned into value-generating digital assets by HE institutions, edtech platforms, and AI companies link.springer.com/article/10.1...

2 months ago
¿Es la tecnología educativa responsable del deterioro cognitivo? (Is educational technology responsible for cognitive decline?) Both my friend Fernando Herranz and my admired Ben Williamson have shared in the last few hours a Fortune article whose title translates as: “The United States spent 30 billion dolla...

«Precisely because technology carries risks, its use must be addressed during formal educational processes. It is a matter of social justice.» @carlosmagro.bsky.social
carlosmagro.substack.com/p/es-la-tecn...

1 month ago
Reframing AI ethics in education: From individual responsibilisation to shared responsibility. Reconsidering AI ethics in education: from a moral burden on educators to collective, structural, and governance accountability.

Happy to announce this CfP!
A space to discuss globally the role of ethics washing in educational technologies.
Come and join us!
think.taylorandfrancis.com/special_issu...

1 month ago
Conversación de Fernando Vicario con Juan Villoro en el entorno del 5º aniversario de #plantauno (Conversation between Fernando Vicario and Juan Villoro marking the 5th anniversary of #plantauno). YouTube video by Transit Projectes

Juan Villoro:
"Simulacra are replacing acts"

youtube.com/watch?v=xGhb...

1 month ago
Escuela o barbarie (School or Barbarism): It is the privilege of a village that there is time for everything when time is well apportioned (...) time to read a book, (...

I just read this from @tonisolano.bsky.social

Via @jordi-a.bsky.social

www.repasodelengua.com/2026/02/escu...

1 month ago
Tu Nube Seca Mi Río – Impacto ecosocial de los Centros de Datos (Your Cloud Dries Up My River: the eco-social impact of data centers)

tunubesecamirio.com

1 month ago
Post image

Great conversation with Belén Gopegui about her book "Te siguen" ("They Follow You")

youtu.be/N3NNTM2S74g?...

1 month ago

Education is political. From Dewey to Freire, from María Zambrano to the MRPs (Movimientos de Renovación Pedagógica, pedagogical renewal movements), pedagogical renewal has insisted that the school is not a neutral space and has shown that education is built around decisions and values that are political.

1 month ago
Video

Traje una amiga para ayudarme a compartirles porque registrarse para el kinder-3 y prekinder esta semana importa.

Visite myschools.nyc para empezar.

——

I brought a friend to help me share why signing up for free 3-K and Pre-K this week matters.

Visit myschools.nyc to get started.

1 month ago
Intel·ligència Artificial a la UB - Polítiques de digitalització - UB (Artificial Intelligence at the UB - Digitalization policies)

#universitat #UB #GenAI
📢 "Bones pràctiques per a l'ús de la intel·ligència artificial generativa a la Universitat de Barcelona" (Good practices for the use of generative AI at the Universitat de Barcelona)
web.ub.edu/web/politiqu...
@ub.edu

2 months ago