
Posts by Andreas Theodorou

Raising investment and moving the goalposts to derail the ongoing regulation efforts.

1 year ago 35 5 2 0

OpenAI responded to Musk's lawsuit, releasing emails that show it discussed becoming a for-profit as early as 2017.

As I showed in my report, being a nonprofit org was crucial to its mythology, recruitment, and eventual valuation—and it was performative from the start.

openai.com/index/elon-m...

1 year ago 97 22 3 3

The best bit? The regulations cover the complete system’s life cycle by mandating assessment to be continuous, not check-once-and-forget checkbox lists. Interestingly, assessment results *need* to be made public! 5/5

1 year ago 0 0 0 0
Contestable Black Boxes The right to contest a decision with consequences on individuals or the society is a well-established democratic right. Despite this right also being explicitly included in GDPR in reference to automa...

High-risk systems get explicit requirements not only for explainability but also for contestability! It is good to see a distinction between the two processes/values (see our work with Aler Tubella, Michael, and @vdignum.bsky.social: arxiv.org/abs/2006.05133 ) 4/5

1 year ago 1 1 1 0

Regardless of the risk level, a great deal of emphasis is placed on 4 core rights: the right to information; the right to data privacy and protection; the right to human choice and participation; and the right to non-discrimination and bias correction. 3/5

1 year ago 0 0 1 0

...including ones "that employ subliminal techniques that have the purpose or effect of inducing a natural person to behave in a manner that is harmful or dangerous to their health or safety " This is IMO vague as we often anthropomorphise AI systems. Courts may have to decide what is 'harmful.' 2/5

1 year ago 0 0 1 0

On Friday, the Brazilian Senate committee approved their AI regulation bill, with an emergency vote set to take place today —alongside a bill on cybersecurity!
This is a risk-based horizontal legislation, influenced by the AI Act —and GDPR. Like the Act, it prohibits a range of AI systems... 1/5

1 year ago 1 0 1 0

If I had AI-backer VC money instead of making a new LLM I would hire an elite team of librarians to find Actual Information

1 year ago 3854 492 127 44

Tech giants claim nuclear power will allow them to build their AI fantasies without despoiling the environment, but it’s a distraction from how much they’re emitting today and ignores real concerns with nuclear.

I was happy to talk to MV Ramana about this important issue!

1 year ago 173 38 7 1

Absolutely great thread on the (mostly British) history of AI: from the Ancient Greeks to the controversial Lighthill report to the unnecessary anthropomorphism that we see nowadays in the media.

1 year ago 1 1 0 0

LLMs don’t store accurate knowledge; they’re probability-based models that predict word sequences from patterns in text. While their responses seem credible, they’re unreliable: they generate the most likely phrase, not verified facts.
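A toy sketch of the idea (a bigram model, vastly simpler than a real LLM, with a made-up corpus): the model only counts which word tends to follow which, then always emits the most likely continuation. The output looks fluent, but nothing in the mechanism checks whether it is true.

```python
from collections import Counter, defaultdict

# Made-up mini-corpus; a real LLM trains on trillions of tokens
# with a neural network, but the objective is the same in spirit:
# predict the next token from the ones before it.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word seen after `word`."""
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" (2 of the 4 continuations)
```

Nothing here encodes facts about cats or mats; the model just reproduces the statistically dominant continuation, which is why fluency and reliability come apart.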

1 year ago 1 2 1 0
Graham Walker, MD on LinkedIn: "I'll break down this JAMA study into 4 takeaways: language, education, disease, and judgment. Read the study or the NYT article." (73 comments)

These overhyped articles on #ChatGPT-based "research" really make me angry:
@nyt: you are over the top with your headlines! And you are not doing the proper journalistic work of presenting a balanced, accurate description of the facts!
See www.linkedin.com/posts/graham...

1 year ago 8 2 2 0

And yes, this applies to continental European universities too, as they increasingly follow the British trend of chasing metrics while also reducing education spending.

1 year ago 1 0 0 0

Or, maybe, the UK government should reduce the amount of resource-wasting bureaucracy (e.g. the REF) and fund *all* universities better —especially if we want to ensure academic freedom. Relying ever more on soft money tends to benefit high-TRL research but damages fundamental work.

1 year ago 0 0 1 0

Probably because it provides an incentive to produce data that are 'publication worthy.' A financial or some other form of payout at the end of the study is guaranteed regardless of the study results. Moreover, keeping private data associated with the research data would create a nightmare for DPOs.

1 year ago 0 0 0 0

A starterpack on important, critical voices on #AI #ResponsibleAI #AIgovernance #AIpolicy #AIethics
More suggestions welcome!
go.bsky.app/ECWQzUA

1 year ago 31 10 10 2
Research Associate - Assurance for Robotic Autonomous Systems:Manchester

I have a PDRA post available in Assurance for Robotic Autonomous Systems. Really looking for someone with some experience of formal verification or possibly a background in CyberSecurity. Details at: www.jobs.manchester.ac.uk/Job/JobDetai... #formal_verification #formal_methods #academic_jobs

1 year ago 2 1 0 0

I will be happy with that, as it helps the transition from X to here, even though I strongly believe that these metrics —like citation metrics— do more harm than good to academia as a whole.

1 year ago 1 0 0 0

Job alert! We have a vacancy for an assistant/associate/full professor for top female scientists. Topic: designing and engineering Human-Centered AI Systems. Deadline: Jan 21st. Reach out via email if you are interested and you have questions. More info bit.ly/KInD-HCAI-DTF

2 years ago 6 1 2 0
Post image

I just got a TIAGo robot —meant to be more robust and powerful than Pepper, but it also costs 5x as much— to which I plan to port the code!

1 year ago 1 0 0 0

I feel like this didn’t get enough traction: Cryptocurrencies align the interests of millions of young, emotionally desperate gamblers with the criminal oligarchs that money-laundering legislation was set up to control.

#democracy #digitalGovernance #AIEthics

1 year ago 55 26 1 1
The visual has a gradient background transitioning from purple to blue. The focus is a white square mimicking the ophthalmologist's eye chart with the following text: 

"AI  

IN EU 

SECURE 

TRUSTWORTHY 

HUMAN-CENTRIC" 

Below the square is a pointing stick indicating a line of the text.

The European Commission's logo is visible in the bottom right corner.


AI in Europe: safe, trustworthy, and human-centric.

We are inviting feedback to help prepare new guidelines on the AI system definition and on practices prohibited under the AI Act – the world's first comprehensive regulation of artificial intelligence.

Have your say here 👉 europa.eu/!6m8JcV

#AI

1 year ago 68 21 2 5
Guidance to my social media communications as of October 2024: artificial and natural intelligence, including politics, policy, ethics and security

A shoutout to my PhD supervisor @j2bryson.bsky.social whose social media policy (joanna-bryson.blogspot.com/2024/10/guid...) inspired me to join. 2/2

1 year ago 3 1 0 0

A bit late to the 'party,' but here I am. The obligatory short intro: I am an Assimilated Associate Professor at UPC. Most of my research nowadays focuses on #XAI and #policy-related work, but I have just started creating a new robotics lab to get back into #transparency, #reactiveplanning, and #HRI! 1/2

1 year ago 1 0 1 0

Georgios Bakirtzis, Manolis Chiou, Andreas Theodorou
Negotiating Control: Neurosymbolic Variable Autonomy
https://arxiv.org/abs/2407.16254

1 year ago 0 1 0 0