
Posts by Jamie Bernardi

#6 Last but not least: A Policymaker's Guide to Navigating the AI Action Summit. Despite the political crisis, the Summit is on course. Is this course the same as recent AI advances?

@futureoflife.org AI Action Summit Lead Imane (Ima) Bello has put together a useful guide for policymakers to navigate the AI Action Summit in Paris on 10-11 February:

Friends for sale: the rise and risks of AI companions. What are the possible long-term effects of AI companions on individuals and society?

AI companions are on the rise, but what are the possible long-term effects on people and society?

@jamiebernardi.bsky.social examines the potential benefits and risks - both individual and systemic - of this type of AI service.

www.adalovelaceinstitute.org/blog/ai-comp...


An important, underdiscussed point on the OpenAI $100bn deal: the money is not coming from the USG.

Trump is announcing a private deal, whilst promising to make "emergency declarations" to allow Stargate to generate its own electricity (h/t @nytimes.com). Musk says the $100bn has not yet been raised.


I see debate over what Altman meant by "we know how to build AGI", so here's my take: I don't think there's much more to it than claiming that the general o3 scaffold == "AGI".

With faster hardware and more efficient algorithms, he claims AGI is job done. Inferring anything more seems like an overreaction.


"New technology can provoke a fear ... because of the fears of a small risk, too often, you miss a massive opportunity. The far bigger risk is if we don't go for it, we're left behind by those who do"


On regulations: "We will test and understand AI before we regulate it, to make sure that when we do it's grounded in the science... Our message to those at the frontier of AI capabilities is this: 'we want to be the best state partner for AI anywhere in the world'"


UK PM: "The last govt was right to establish the world-leading AISI, and we'll build on it. This month the UK will lead the first ever global AIS test. [However] we shouldn't just focus on safety and leave the rest to the market, the govt has a responsibility to make it work for working people"

i sensed anxiety and frustration at NeurIPS’24 – Kyunghyun Cho

kyunghyuncho.me/i-sensed-anx...


A similar effect exists in AI policy. Many senior figures in AI policy today have PhDs.

Is this because a PhD is necessary to get on in AI policy? Having mulled this over a bunch, my current conclusion is no: it's because, until 2022, there was nowhere else to spend all your time thinking and reading about AI governance.


I've seen versions of this take a couple of times now.

My sense is that some pundits are unable to hold two things true at once: that disinfo is driven by a deep, systemic demand for it (what Politico argues), but also that we should take steps to prevent its supply (what Politico fails to argue).

