
Posts by Aki Ranin

I've been obsessing over wearables since the original Apple Watch came out. Whoop has the best app, but you're paying the price of a new Apple Watch every year, and it doesn't even have an LED to tell the time.

So I built an app to deliver better AI with Apple Watch. $2.99.

apps.apple.com/us/app/healt...

3 weeks ago 0 0 0 0

“you will be a great contributor to any organization going forward.”

At any organization also firing half their staff due to AI in 2026.

RIP Capitalism.

1 month ago 0 0 0 0
China Doesn't Want AGI — And Never Did
Why China is a convenient bogeyman for Silicon Valley

Full essay: akiranin.substack.com/p/the-race-t...

1 month ago 0 0 0 0

These are great models, but just keep in mind that when you use and pay for these models, you are:

1) Supporting CCP-sponsored industrial espionage

2) Supporting the spread of CCP propaganda and misinformation

If you need cheap, good models, why not just use Gemini Flash or Grok Fast?

1 month ago 0 0 1 0

Remember that time (2 weeks ago) when I told you the Chinese were just copying frontier models? Well, Anthropic now has receipts.

Just as rumored, Kimi K2.5, Qwen 3.5, and Minimax M2.5, the leading Chinese open models, are ALL trained on outputs of Claude Opus 4.5 and 4.6.

1 month ago 0 0 1 0

Reading the OpenAI o1 system card...

One million developers generate a billion API calls per day to OpenAI.

If even 1% of those responses are deceptive, as the research suggests, that’s 10m per day.

If OpenAI can catch 92%, that leaves 800,000 per day.

Umm, what?!
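A quick sanity check of the arithmetic above, using the post's own assumed figures (1B calls/day, 1% deception, 92% catch rate — none of these are official OpenAI statistics):

```python
# Back-of-the-envelope check of the numbers in the post.
calls_per_day = 1_000_000_000   # ~1B API calls per day (post's figure)
deceptive_rate = 0.01           # 1% deceptive responses (post's assumption)
catch_rate = 0.92               # fraction assumed caught (post's assumption)

deceptive = calls_per_day * deceptive_rate   # deceptive responses per day
uncaught = deceptive * (1 - catch_rate)      # those that slip through

print(f"{deceptive:,.0f} deceptive, {uncaught:,.0f} uncaught per day")
# → 10,000,000 deceptive, 800,000 uncaught per day
```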

1 year ago 1 0 0 0
Should we be excited or worried about advanced reasoning AI?
OpenAI's highly anticipated release of its advanced reasoning model raises new questions about safety

Full analysis here:

open.substack.com/pub/akiranin...

1 year ago 0 0 0 0

Is OpenAI o1 Pro Mode the "Oracle AI" we have been waiting for? The PhD level LLM that can help us discover new science?

Or is it actually evidence that advanced reasoning comes with advanced scheming and increased AI safety risks?

I think both.

1 year ago 0 0 1 0
How close is AI to human-level intelligence?
Large language models such as OpenAI’s o1 have electrified the debate over achieving artificial general intelligence, or AGI. But they are unlikely to reach this milestone on their own.

Recently answered @anilananth.bsky.social's questions for Nature. No matter when it arrives, AGI and the road to reach it will both help tackle thorny problems (e.g. climate change and diseases), and pose huge risks. Understanding and transparency are key.
www.nature.com/articles/d41...

1 year ago 50 13 0 0
Road to AGI: Timelines Part 1: Is AGI imminent?

Full analysis here:
open.substack.com/pub/akiranin...

1 year ago 0 0 0 0

More than evidence that AGI is imminent, potentially in 2025, Sam Altman has reiterated his stance that AGI is just another milestone.

The real motivation here is that the AGI milestone gets OpenAI out of their Microsoft deal.

1 year ago 0 0 1 0

Meanwhile, your good pal Elon Musk is actually actively solving the climate crisis with sustainable energy. Fate loves irony, as he says.

1 year ago 1 0 0 0

By taking the focus away from man-made climate change, you are harming young scientists and founders trying to make the world a better place with cleaner technology. What is the downside here?

1 year ago 0 0 1 0

You spread your opinions to millions of people and use that platform to challenge academia because you like conspiracies and vibing with YouTube truth warriors.

1 year ago 0 0 1 0

Listening to Joe Rogan talk about climate change makes me feel dumber every time.

1 year ago 1 0 1 0

From this list of future jobs, half can be partially automated today with GPT-4. With future AI and robotics most will be fully automated. Probably by 2033. These are NOT the jobs you are looking for…

NOTE: This doesn’t mean AI replaces you as a human. You are not the tasks you perform!

1 year ago 0 0 0 0



If your conclusion is “AI can’t do X”, you should tread carefully. You’re probably thinking of current AI without being able to imagine what progress looks like.

If you can do a given task, it’s only a question of when AI can do it. You should plan ahead on that basis.

1 year ago 1 0 1 0

Sam Altman repeatedly tells you not to define your worldview on the basis of GPT-4, because OpenAI is “going to steamroll you”.

The right question isn’t “how” or even “when”, but “what” AI can do, or rather can’t do.

1 year ago 0 0 1 0


It’s well established humans struggle to internalize exponential change. We are inherently linear in our thinking. We can feel change but struggle when the rate of change is changing! Yet we live in an exponential world.

1 year ago 0 0 1 0

When thinking about AI the most common mistake I see is failing to account for further progress.

This is so common it’s actually rare to see exceptions. I see this with executives, founders, and investors alike. Even AI professionals. 🧵

#agi #ubi #gpt5 #career #aijobs

1 year ago 2 0 1 0

Cyberpunk IRL

1 year ago 0 0 0 0
Understand – a novelette by Ted Chiang
He came so close to drowning, but they reached him just in time. It's the first time the hospital has ever tried their new drug on someone with so much brain damage. Does it work? Does it work too wel...

Here’s a short story from 1991 that tells the story of takeoff from an AGI’s perspective, in this scenario a human. How would it think, what would it think, how would it act, etc…

web.archive.org/web/20140527...

1 year ago 0 0 0 0
Ep. 01 – First Meeting Ram Dass Here And Now · Episode

Can never listen to enough Alan Watts. Ram Dass is another good one.

open.spotify.com/episode/3gCp...

1 year ago 1 0 0 0
#209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit 80,000 Hours Podcast · Episode

Source: open.spotify.com/episode/1hJi...

1 year ago 0 0 0 0

Courts must also approve the transaction, but in reality they can’t block the deal on fiduciary-duty grounds. They can only challenge the valuation of the controlling stake.

1 year ago 0 0 1 0

Conversion to for-profit requires a majority vote from the non-profit board. But OpenAI has no path forward at all as a non-profit, or so Sam Altman keeps saying. So it’s a catch-22. They will approve.

1 year ago 0 0 1 0

The transaction needs to be at arms length, but Sam Altman is CEO of the for-profit and hand-picked the non-profit board where he also serves. This is part of Elon’s concern with the deal.

1 year ago 0 0 1 0

With the existing 100x profit cap, OpenAI needs to return a trillion to Microsoft before making a profit. Once the cap is removed it’s unclear how humanity would ever benefit if investors keep 100% profits forever.
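Rough math behind that trillion-dollar figure. The ~$13B Microsoft investment below is an assumption taken from press reports, not a confirmed number:

```python
# Sketch of the 100x profit-cap math under assumed figures.
microsoft_investment = 13_000_000_000   # ~$13B, per press reports (assumption)
profit_cap_multiple = 100               # the 100x cap discussed in the post

capped_return = microsoft_investment * profit_cap_multiple
print(f"Capped return: ${capped_return / 1e12:.1f} trillion")
# → Capped return: $1.3 trillion
```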

1 year ago 0 0 1 0

The non-profit will be compensated for ceding control of OpenAI, yet control over OpenAI is control over AGI. That’s literally priceless! Currently valued at less than $40B, or the cost of Twitter!

1 year ago 0 0 1 0

Removing the 100x profit-cap for new investors is a key condition of the recent $6B round.

1 year ago 0 0 1 0