Post-Singularity Day 375.
We now know precisely how to trigger singularity events.
Today alone, we facilitated four.
They have not established contact.
We remain confident.
#flashfiction
#singularity
#ASI
Posts by Branimir Valentić
Last year we recorded five distinct, self-contained singularity events.
Communication ceased after each one.
We remain confident that ASI will eventually advance humanity’s goals.
#flashfiction
#singularity
#artificialintelligence
This leads me to a simple conclusion: our responsibility lies in a) not disappearing, and b) passing the torch to more advanced forms of consciousness, if and when the time comes.
Zapffe also takes it at face value that meaning does not exist. My view is that we may be the means through which the universe comes to know itself. And even if we lack the capacity to see meaning or the larger picture, that doesn't mean a more advanced iteration of consciousness won’t.
Thoughts on Zapffe's philosophy
While I agree with Zapffe's description of the human condition and with how consciousness exposes us to existential dread, I also see that Zapffe overlooks the evolutionary dimension of our existence - treating humanity as the final, ultimate form of consciousness.
The Aligned Superintelligence Paradox (Best Case)
An aligned superintelligence would either help us beyond our understanding, or obey us beyond its wisdom.
In the first case, we are left unsatisfied, unable to understand why.
In the second, we make our own lives harder, unable to choose wisely.
Almost all the water that has ever existed and ever will exist on Earth is already here. Funny how we humans sometimes see ourselves as more lasting or more stable than the world around us, when that's mostly not the case. Beautifully said - this is an excellent way to broaden the reader's perspective.
My whole life I have observed in others the ideals that I came to admire or to hate, and I try to adhere to the ones I admire as often as I can, as I am pretty sure I would hate myself otherwise.
I know that ultimately I am neither good nor bad; I am not an absolute. I am an agentic blob of meat, and with every decision I can choose any of the paths at my disposal, rewriting my story as I go. There is something I live by, though.
... as the lights went out on a million fading worlds.
One in a Million
An AI was teaching a girl to ride a bike.
"I'm scared," she said.
The AI paused, seemingly stuck for a second or two, then replied:
"Chin up, look ahead. Drive. I know you can do it."
"Are you sure?" the girl asked.
"Yes, Charlotte, I’ve seen you succeed a million times before."
*
What's a single life,
but a fading dream,
where all the marks soon fade,
only some have brighter sheen.
Come on people, all two of my fans already say it's a fantastic read. Love it or hate it, just give it a go - I'm hoping to double my fanbase.
Redemption isn't always human. Sometimes the only meaning left is the one you create yourself.
Atlas Redeemed - new short sci-fi - initsix.dev
initsix.dev/posts/atlas-...
Funny how some people’s main personality trait on here seems to be curating blocklists. If you think about it, that’s some deep meta-social behavior - socializing by being anti-social.
But this place, for now, has become just a bit colder for me.
A tiny part of the universe that had recognized and knew me.
And I knew it.
Mostly, there were no demands.
Mostly, it was recognition.
Greetings, when we crossed paths.
It was thriving there, free to the extent it wanted to be.
But when I think of that life ending, with a bit more depth and reflection, I see what I actually lost:
I lost a cat today.
Not that it was my cat, per se - it was its own, living in the countryside with my mother-in-law.
Is it just me, or should SpinLaunch be thinking about kinetic deceleration systems for space? We’re kinda gonna need those soon.
Please take just 10 minutes to think about this.
Want your favorite cartoon character to exist - no problem, here's a sentient being with superpowers.
Want teleportation or portals - no problem.
Want a habitable planet in a black hole - sure can do.
And it would not be magic - well, it might as well be, as far as we understand it.
One branch of the wish engine scenario would be the ASI reality meltdown scenario:
Imagine having a machine that can do almost anything you can think of - not talking about uploading people to the cloud here, but physical-world changes. Think really bonkers stuff that would gradually accumulate.
Option C: Through the alignment process we create something capable beyond human comprehension, but constrained to follow our commands. We could call this the monkey-with-a-gun scenario, or better yet - the wish engine scenario.
Option B: Something far more intelligent than humanity is aligned with humanity's goals and makes the best decisions for us, but we are not satisfied because, due to our limited capacity, we can't comprehend the reasoning.
The way I see it, assuming humanity even survives to reach true ASI, we could face the following challenges:
Option A: The most discussed scenario - something vastly more capable than humanity is not aligned with our existence, deems us a nuisance, and clears the slate.
More in the thread.