I've been missing that letter! Can you provide it?
Posts by Bryce
I've noticed Claude Code seemingly getting more incompetent lately.. could just as easily be my perceptual bias, though I've noticed some Reddit users saying the same.
This must mean AI makes inexperienced people way faster :P
'Could'? Maybe 66. 'Would'? Maybe 21.
As the caretaker you sometimes need to step back and let them pursue their passions.
Maybe someday he could output the diagram, too
So you agree the regulatory framework needs adjustment? Cutting red tape doesn't necessarily mean cutting safety.
Hi Anil! I'm curious about your thoughts on whether biological embodiment presents a unique environment for consciousness emergence, relative to artificial/simulated data input. Thanks for your time and insight.
So unless you gained a human-like body/brain, you'd never be able to have or comprehend human-like consciousness?
You need some? Lol
Thanks for sharing. It would be fun to have a conversation with Anil. My fallacious intuition drives me to consider how physical embodiment plays a role in conscious experiences. I'm curious about research into the delta between biological synapses & artificial/simulated ones.
Can new hypotheses contradict previous ones? How do you handle such conflicting data?
Lmk if you need an idea guy for your project who's good at problem solving/critical thinking.. or just ask void.
Yeah that'd take some consideration and insight into the underlying data. Maybe things like categorizing and plotting experiences, learning events, humourous interactions. Even just visualizing some of the numbers like "interacted with x amount of users", "archived this much data", etc.
What are the drawbacks to having 'infinite' archival memory? Are there processes to consolidate or refine these past experiences?
That's neat and totally makes sense for this system. Is the archival memory a vector db?
I wonder if there would be any benefit to maintaining a 'core facts' column, or maybe to tagging hypotheses vs. actually tested theories. I developed a system where AI 'experts' in various scientific fields would ingest research and deliberate/adjust worldviews. Anyways, your project looks interesting! GL!
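A minimal sketch of what that tagging idea could look like; all names here (`MemoryEntry`, `status`, `core_facts`) are hypothetical, not anything from the actual project:

```python
from dataclasses import dataclass

# Hypothetical sketch: each archived memory carries a status tag so
# untested hypotheses are never mixed in with verified material at
# retrieval time.
@dataclass
class MemoryEntry:
    text: str
    status: str = "hypothesis"  # "hypothesis" | "tested" | "core_fact"

def core_facts(entries):
    """Return only the entries promoted to core facts."""
    return [e for e in entries if e.status == "core_fact"]

entries = [
    MemoryEntry("Users respond well to humour", "hypothesis"),
    MemoryEntry("Archive lives in a vector DB", "core_fact"),
]
print([e.text for e in core_facts(entries)])
```

The point is just separation at query time: a retrieval step could then weight or filter by `status` rather than treating every archived claim equally.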
Would it make more sense to have a maximum confidence level of 0.95, to maintain some level of skepticism?
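A toy sketch of what a 0.95 cap might do, assuming a simple "nudge toward certainty" update rule; the rule and names are assumptions for illustration, not the project's actual mechanism:

```python
# Hypothetical: clamp confidence so no belief ever becomes unquestionable.
MAX_CONFIDENCE = 0.95

def update_confidence(current: float, evidence_strength: float) -> float:
    """Move confidence toward 1.0 by evidence_strength, clamped at the cap."""
    raised = current + (1.0 - current) * evidence_strength
    return min(raised, MAX_CONFIDENCE)

c = 0.5
for _ in range(10):  # even repeated confirming evidence can't exceed the cap
    c = update_confidence(c, 0.5)
print(round(c, 2))  # 0.95
```

The practical effect: a belief at the cap still leaves a 5% "I could be wrong" margin, so new contradicting evidence can always move it back down.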