I've been obsessing over wearables since the original Apple Watch came out. Whoop has the best app, but...
You're paying for a new Apple Watch every year. Doesn't even have an LED to tell the time.
So I built an app to deliver better AI with Apple Watch. $2.99.
apps.apple.com/us/app/healt...
Posts by Aki Ranin
“you will be a great contributor to any organization going forward.”
At any organization also firing half their staff due to AI in 2026.
RIP Capitalism.
These are great models, but just keep in mind that when you use and pay for these models, you are:
1) Supporting CCP-sponsored industrial espionage
2) Supporting the spread of CCP propaganda and misinformation
If you need cheap, capable models, just use Gemini Flash or Grok Fast.
Remember that time (2 weeks ago) when I told you the Chinese were just copying frontier models? Well, Anthropic now has receipts.
Just as rumored, Kimi K2.5, Qwen 3.5, and Minimax M2.5, the leading open models for China, are ALL trained from outputs of Claude Opus 4.5 and 4.6.
Reading the OpenAI o1 system card...
One million developers generate a billion API calls per day to OpenAI.
If even 1% of those responses are deceptive, as the research suggests, that’s 10m per day.
If OpenAI can catch 92%, that leaves 800,000 per day.
Umm, what?!
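The back-of-envelope math above, as a quick sketch. The 1% deception rate and 92% catch rate are the post's assumed figures, not measured values:

```python
# Rough arithmetic behind the "800,000 per day" claim.
calls_per_day = 1_000_000_000   # ~1B API calls/day (post's figure)
deception_rate = 0.01           # 1% deceptive, per the cited research
catch_rate = 0.92               # assumed fraction OpenAI can catch

deceptive = calls_per_day * deception_rate   # 10 million/day
uncaught = deceptive * (1 - catch_rate)      # 800,000/day
print(f"{round(uncaught):,} uncaught deceptive responses per day")
```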
Is OpenAI o1 Pro Mode the "Oracle AI" we have been waiting for? The PhD level LLM that can help us discover new science?
Or is it actually evidence that advanced reasoning comes with advanced scheming and increased AI safety risks?
I think both.
Recently answered @anilananth.bsky.social's questions for Nature. No matter when it arrives, AGI and the road to reach it will both help tackle thorny problems (e.g. climate change and diseases), and pose huge risks. Understanding and transparency are key.
www.nature.com/articles/d41...
Rather than evidence that AGI is imminent, potentially in 2025, note that Sam Altman has reiterated his stance that AGI is just another milestone.
The real motivation here is that the AGI milestone gets OpenAI out of their Microsoft deal.
Meanwhile, your good pal Elon Musk is actually actively solving the climate crisis with sustainable energy. Fate loves irony, as he says.
By taking the focus away from man-made climate change, you are harming young scientists and founders trying to make the world a better place with cleaner technology. What is the downside here?
You spread your opinions to millions of people and use that platform to challenge academia because you like conspiracies and vibing with YouTube truth warriors.
Listening to Joe Rogan talk about climate change makes me feel dumber every time.
From this list of future jobs, half can be partially automated today with GPT-4. With future AI and robotics most will be fully automated. Probably by 2033. These are NOT the jobs you are looking for…
NOTE: This doesn’t mean AI replaces you as a human. You are not the tasks you perform!
If your conclusion is “AI can’t do X”, you should tread carefully. You’re probably thinking of current AI without being able to imagine what progress looks like.
If you can do a given task, it’s only a question of when AI can do it. You should plan ahead on that basis.
Sam Altman repeatedly tells you to not define your worldview on the basis of GPT-4, because OpenAI is “going to steamroll you”.
The right question isn’t “how” or even “when”, but “what” AI can do, or rather what it can’t.
It’s well established that humans struggle to internalize exponential change. We are inherently linear thinkers. We can perceive change, but we struggle when the rate of change is itself changing. Yet we live in an exponential world.
When thinking about AI the most common mistake I see is failing to account for further progress.
This is so common it’s actually rare to see exceptions. I see this with executives, founders, and investors alike. Even AI professionals. 🧵
#agi #ubi #gpt5 #career #aijobs
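The linear-vs-exponential intuition gap is easy to make concrete with a toy comparison. The growth rates here are illustrative assumptions only, not a forecast:

```python
# Toy comparison: linear vs exponential growth from the same start.
linear = exponential = 1.0
for year in range(10):
    linear += 1.0        # grows by a fixed amount each year
    exponential *= 2.0   # doubles each year

print(linear)       # 11.0 after 10 years
print(exponential)  # 1024.0 after 10 years
```

After a decade the two trajectories differ by two orders of magnitude, which is why extrapolating from today's capabilities goes wrong so quickly.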
Cyberpunk IRL
Here’s a short story from 1991 about takeoff from an AGI’s perspective, in this scenario a human one. How would it think, what would it want, how would it act, and so on…
web.archive.org/web/20140527...
Can never listen to enough Alan Watts. Ram Dass is another good one.
open.spotify.com/episode/3gCp...
Courts must also approve the transaction, but in practice they can’t block the deal on fiduciary-duty grounds. They can only challenge the valuation of the controlling stake.
Conversion to a for-profit requires a majority vote from the non-profit board. But OpenAI has no path forward at all as a non-profit, or so Sam Altman keeps saying. So it’s a catch-22: they will approve.
The transaction needs to be at arm’s length, but Sam Altman is CEO of the for-profit and hand-picked the non-profit board, where he also serves. This is part of Elon’s concern with the deal.
With the existing 100x profit cap, OpenAI needs to return roughly a trillion dollars to Microsoft before making a profit of its own. Once the cap is removed, it’s unclear how humanity would ever benefit if investors keep 100% of profits forever.
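A quick sanity check on the trillion-dollar figure. Microsoft's total investment in OpenAI (roughly $13B) is an outside assumption here, not a number stated in the post:

```python
# Rough check: 100x profit cap applied to Microsoft's investment.
investment = 13e9            # ~$13B total (assumed figure)
profit_cap_multiple = 100    # existing 100x cap on investor returns

capped_return = investment * profit_cap_multiple
print(f"${capped_return / 1e12:.1f} trillion")  # ≈ $1.3 trillion
```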
The non-profit will be compensated for ceding control of OpenAI, yet control over OpenAI is control over AGI. That’s literally priceless! Currently valued at less than $40B, or the cost of Twitter!
Removing the 100x profit-cap for new investors is a key condition of the recent $6B round.