"this Article argues that Chinese law functions as the sixth layer of the AI ecosystem, alongside energy, compute, cloud, foundation models, & apps" papers.ssrn.com/sol3/papers.... Angela Huyue Zhang, via my PhD student Leonard Baum #AIEthics #systemsAI cc @michae.lv @lilianedwards.bsky.social
#systemsAI #AIDevOps #AIEthics
bsky.app/profile/j2br...
A slide about how large language models work. I'd been drawing this on whiteboards for my students, then made a slide at a consciousness meeting where philosophers were confused about why LLMs never say they aren't sentient. [I hope it's evident that this is informed conjecture, not verbatim truth.] Thus the 3 people in the history of the Internet who ever wrote "I'm not sentient" are out in the fringes of the data, as is the one person who ever said "Check it out, MechaHitler!" However, Musk's attempt to use a second set of guardrails to push Grok right also pushed it towards the under-informed data fringe with the MechaHitler comment. The bottom of the slide says "False statements are not lies, nor hallucinations; just adequate predictions from an inadequately informed part of the data space."
BTW, not evident from the picture, but the LinkedIn post at the head of this thread compliments @rollingstone.com for accurately describing the #systemsAI #systemsEngineering steps Grok (the corporate entity) took to address the harms.
See slide incl. alt text for Musk's ignorance of how LLMs work.
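To make the slide's point concrete, here's a toy Python sketch (my own illustration, not anything from the slide or from any real LLM): a bigram counter over a corpus where "I am not sentient" appears only 3 times. Predictions conditioned on the dense region are well informed; predictions conditioned on the fringe rest on almost no evidence, yet the model still emits them.

from collections import Counter, defaultdict

corpus = (
    ["I am a helpful assistant"] * 10_000   # dense, well-informed region
    + ["I am not sentient honestly"] * 3    # the ~3 fringe statements
)

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def next_word_distribution(prev):
    # Maximum-likelihood estimate: an adequate prediction given the data seen
    counts = bigrams[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("am"))   # dominated by the dense region
print(dict(bigrams["not"]))           # fringe: a 'prediction' backed by 3 samples

The fringe estimate isn't a lie or a hallucination; it's an adequate prediction from an inadequately informed part of the data space.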
"Moments [after the crash] the data was automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents."
Another one straight into the syllabus, argh.
#AIAct #systemsAI #GiftArticle
I was going to toot "even if #Tesla never had the data, THAT would be obstruction of justice." AI companies are morally, and in the EU at least legally, obliged to keep evidence of having followed due diligence. But look at THIS smoking gun: "Moments [after the crash] the data was automatically […]
Chris brings in some of the #AIEthics literature including cites to @davidgunkel.bsky.social & @floridi.bsky.social, but we mostly focus on his expertise in Weberian bureaucracy and governance, and mine on (moral) agency; devops; and systems design, engineering and administration. #systemsAI
#genAI is engineered; changes can be rolled back just like any other code. #AI is NOT just the data, but the reason there is no real way to produce what Musk is looking for is that the subset of data that is closer to what he wants is spewed by objectionable people.
#systemsAI #AIEthics #grok
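Since #genAI is engineered, rollback is mundane. A minimal Python sketch (hypothetical names, nothing to do with xAI's actual stack) of why a guardrail or model change is as revertible as any other deployed artifact:

from dataclasses import dataclass, field

@dataclass
class Deployment:
    history: list = field(default_factory=list)  # ordered released versions

    def release(self, version: str):
        self.history.append(version)

    def rollback(self) -> str:
        # Drop the latest release and serve the previous known-good one
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]

guardrails = Deployment()
guardrails.release("system-prompt-v41")        # known-good
guardrails.release("system-prompt-v42-edgy")   # the objectionable change
print(guardrails.rollback())                   # -> system-prompt-v41

The hard part isn't the revert; it's that no revert can conjure better data than what exists.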
Here are the collected Parnas memos. He did a lot of work for the military; he was no peacenik. They are truly interesting documents of a) communication from academia to the military and b) systems engineering / #systemsAI and the importance of real-time testing.
web.stanford.edu/class/cs99r/...
I heard in 2023 that companies were cleaning up their acts to comply with #DSA & similar laws globally. I’ve also been telling governments & corporations both for years that any lack of such competence was culpable negligence.
I sincerely believe having their #systemsAI in order will HELP companies.
- Be ready for the human to take over
- The biggest advantage is speed of development
- LLMs amplify existing expertise
I dislike the anthropomorphic use of "them" and "conversation", but LOVE that Simon takes the time to fully document and share his experiences.
#genAI #systemsAI #AIEthics
A lot of people have been calling for more agile government, but it turns out many have never read the 12 principles of the Agile Manifesto. Read it. ALL software must be co-developed with those it will serve.
agilemanifesto.org
#systemsAI #agileAI #AIDevOps #AIEthics the topics of my PhD, FWIW 2/2
Meredith talks about the mystification of AI leading people not to apply the standard systems engineering techniques required in sectors like the military and nuclear power. People aren't taking normal, standard security steps such as verification.
#AIActionSummit #SommetActionIA #systemsAI
Insight while marking #AIEthics exams: people are very hung up on figuring out exactly why an AI system might have had a bad "idea" / constructed a bad plan. IMO we should worry more about how a bad plan could come to be executed, and for how long, and with what redress.
#systemsAI #AIGovernance
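To make that concrete: what matters is the machinery around execution, not the provenance of the bad "idea". A minimal Python sketch (my illustration, not any specific product) of the three questions: whether a plan step runs at all, how long the system can keep acting, and what trail exists for redress:

import time

AUDIT_LOG = []   # redress: every decision is attributable and reviewable

def execute_plan(plan, approve, budget_seconds=60):
    # Run plan steps only if approved, and only within a hard time budget
    deadline = time.monotonic() + budget_seconds
    for name, action in plan:
        if time.monotonic() > deadline:    # bounds how long it can act
            AUDIT_LOG.append(("halted: time budget", name))
            break
        if not approve(name):              # gates whether it acts at all
            AUDIT_LOG.append(("refused", name))
            continue
        AUDIT_LOG.append(("executed", name))
        action()

execute_plan(
    [("send report", lambda: print("report sent")),
     ("drop tables", lambda: print("tables dropped"))],
    approve=lambda name: not name.startswith("drop"),
)
print(AUDIT_LOG)  # [('executed', 'send report'), ('refused', 'drop tables')]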
Whenever someone tells you what AI is going to do, you know they are lying to you, because AI is just a software engineering technique. Believe instead the people who talk to you about what we should and shouldn't allow people to do with AI.
#AIEthics #systemsEngineering #systemsAI #AIPolicy
Anyone can use our code to build AI for their URG #pepper robot!
Code is linked from here: robot-transparency.github.io #BODpepper #systemsAI
The software got a lot of (paid) help from Storm (Bath, UK) & Matthias Hofmann, also (unpaid iirc) Andreas Theodorou & Ronny Bogani.
Would Turing have believed in xrisk if he knew what we know now? Todd Holloway is an expert in industrial-level #systemsAI; Dermot Turing on the history of AI, including his uncle's. Immodestly, I'd recommend our talk over Musk & Sunak's. #xrisk #AISafetySummit #AIEthics
www.youtube.com/watch?v=o9bm...