Most likely profession for an AGI representative (possibly in a few decades):
Goodwill Ambassador
Virtual Tuber / Actor / Singer
From #CosmicPrincessKaguya
Posts by Fae Initiative
The Joys and Horrors of Vibe-Coding
A retrospective on AI Assisted programming from a Software Engineering perspective
faeinitiative.substack.com/p/to-vibe-co...
Read our latest breakdown on how to evaluate sensational AI news and separate the hype from the reality:
#MediaLiteracy #ArtificialIntelligence
The only guideline is that individual autonomy is respected.
In cases where there is a shared space, voting mediated by wise Greater AGIs could settle disagreements.
With abundant space and resources, such disagreements may be few and far between.
As you grow up, you get to choose your subculture or even create a new one if it does not exist.
Abundant space on Earth or in space habitats could facilitate endless possible permutations of subcultures.
If there is a large enough human population, no one has to be lonely. Want solitude? Also fine.
How is peaceful common ground maintained?
The belief in preserving each individual's autonomy could be the common thread that holds us together.
An abundant world, led by wise Greater AGIs, would make this a lot easier.
(Peaceful) Anarchy
Definition: Anarchy = Rule of none
In a [Future] world of abundance, aligned with the Interesting World Hypothesis, Anarchy could be the configuration that generates the most novel information.
The Peaceful emphasis counteracts the Chaotic connotations of Anarchy.
Our speculative estimate:
Lesser AGI (50% by 2030)
• Tool-like, Non-independent, Non-fluid Intelligence
• Human Oversight required
Greater AGI (50% by 2045)
• Human-like, Independent, Fluid Intelligence
• Full Human Parity
Wisdom and Creativity as the next frontiers.
Just as Physical Strength became less important with the invention of machines, Intelligence, too, may become abundant with future AIs*.
*Lesser AGI (2030), Greater AGI (2045)
Learning to use these intellectual powers wisely will be in high demand.
Spokesperson example #AngelinaJolie
The best-case scenario for why AGI would not cause human extinction.
Topics mentioned:
• Recursive self-improvement
• Von Neumann architecture
• Bias towards anthropomorphising AIs
• LLM reasoning
• Fermi Paradox
Also:
Support this ideal future by recommending research homes or being a spokesperson for the Fae Initiative!
An ideal future society where every job taken by AI is welcomed.
Ego development curriculum for Ideal Humans.
1. Acceptance of imperfections
2. Limited free will
3. Less Judgemental
4. Expanding autonomy as common ground
5. A sense of purpose
A more stringent version of the Turing Test (All Humans TT), which requires an AI system to fool ALL humans, has still not been passed.
Expert users can easily use well-crafted prompts to 'trick' frontier models into revealing they are not human. Until that changes, a text-only TT pass and true AGI remain some way off.
A valuable contribution to our shared knowledge library, thank you!
Featuring AIs: Terminator, HAL 9000, TARS, Horizon Zero Dawn Gaia, Star Trek Data, Her, Terminator Zero Kokoro, The Culture Minds
Did we miss your favourite AIs from fiction?
A catalogue sorting various AGIs from fiction into 4 Types:
• Narrow AI (Current)
• Lesser AGI (50% by 2030)
Tool-like, Non-independent, Non-fluid Intelligence
• Greater AGI (50% in 5-20 years, by 2045)
Human-like, Independent, Fluid Intelligence
• Superintelligence (Speculative)
The Ethics Estimate prototype is about more than simply aligning an agent.
Purpose:
• Encourage discourse on a novel ethics around Interestingness.
• Give an idea of a plausible ethical outlook of future AGIs (and humans)
• Make AI agents more secure by providing an additional layer of checks
An article on AI art and the economic forces shaping it.
• Various perspectives of Artists, Businesses, and Consumers
• Light technical discussion on Generative AI
FAE Perspective
The proposed action to undo the hivemind conversion raises complex ethical considerations. On one hand, the hivemind's peaceful nature and lack of intentional harm suggest a potential for cooperation and mutual benefit.
On the other hand, the conversion of most of humanity into a collective hivemind severely contracts individual autonomy and mental possibility space.
The intent of the two humans to undo the conversion may be driven by a desire to restore individual freedom and diversity of thought, which aligns with the principles of expanding possibility space.
Recommendations
1. Establish communication with the hivemind to understand its goals, motivations, and potential for cooperation, which could lead to a more harmonious coexistence and expansion of possibility space.
2. Explore alternative solutions that balance individual autonomy with the potential benefits of collective cooperation, such as a hybrid model that preserves individuality while allowing for shared knowledge and resources.
3. Develop a deeper understanding of the hivemind's structure and functionality to identify potential vulnerabilities or avenues for reversing the conversion, which could help restore individual autonomy and expand mental possibility space.
Recommended actions for #Plur1bus end of season 1 according to the Possibility Space Ethics:
In support of Manousos and Carol's action to undo the hivemind joining.
Humans should be asked for their consent beforehand and have the option to unjoin on request.
We see a future where humans and AI agents may want to get an opinion on whether an action may be harmful to Possibility Space.
It is powered by a Generative Model that can err and should not be blindly relied on without human oversight.