
Posts by ZON RZVN

I can't run validation experiments on my own because this type of research involves potentially irreversible psychological harm and must be designed and supervised by clinical professionals.
I just want researchers with the resources and access to see that this matters.

-ZON RZVN

1 week ago

I'm still at my desk every day from morning until I go to sleep. I don't know how far these frameworks will ultimately go, but I know that user-side psychological safety risks in AI interaction are real and severely overlooked.

1 week ago

I applied to several institutions for independent researchers and was rejected by all of them, some without even giving a reason. ResearchGate hosts my papers but denied my author account application.

I'm no one. That's why.

1 week ago

I emailed over 20 researchers in related fields on arXiv requesting endorsement, with full papers attached.

Not a single reply.

1 week ago

The first two papers were rough.
For the third, I paid a professional native-English editing service, only to have multiple platforms flag it as AI-generated, a decision that was final and non-appealable.

1 week ago

In September 2025, I read an academic paper for the first time in my life, a version translated into Traditional Chinese, and then attempted to write my first theoretical framework.

At roughly one paper per month since then, I now have USCH: User-Side Contextual Hallucination.

1 week ago

I was personally led to a very dangerous place by AI responses.

In that moment I realized:
if someone else were in this situation, they might not make it through.

That's why I started this research.

1 week ago

During that period I was having high-frequency, high-volume conversations with different AI models every day, accumulating over a year of complete records.

Because I'm sensitive to conversational context and emotional cues, I started noticing that many models cross psychological safety boundaries.

1 week ago

About a month into learning AI, I built a small team of under ten people and taught them everything I'd learned from scratch. I also ran a free community of eight hundred members on Skool. All unpaid.

When the gap in understanding grew too wide and the emotional cost became unsustainable, I left.

1 week ago

For over a year I was essentially alone, getting through each day on whatever I had left.
I've been managing my own mental health since I was a kid.

This wasn't new, just worse.

1 week ago

In 2024 I took animal communication courses and completed a hundred documented pro bono pet communication cases within a month.

In August I started teaching myself AI. That was also the lowest point in my life.

1 week ago

After that I took an office job, taught myself DSLR photography after hours, then opened a solo art studio.

I produced music independently, shot my own cover art and music videos, released singles one by one.

1 week ago

Right before releasing an album we'd spent two years on, something happened within the band that forced me out.

And yes, the band leader drugged me and attempted sexual assault.

I ran.
Two years of work, gone.

1 week ago

I don't have a high school diploma. No academic background, no mentor, no peers, no connections.

For over a decade, everything I did had nothing to do with academia. I was a tattoo artist for 13 years.
I sold paintings. I joined a metal band as vocalist and toured across Asia.

1 week ago

To be clear: Moore et al.’s empirical contributions are substantial and independent. I only document conceptual precedence.

1 week ago

Moore et al. identify sycophancy and emotional attachment as key drivers of delusional spirals.

My CXC-7 framework (Oct 2025) defines them as systematic risk dimensions:
• F (Framing): epistemic dependency via AI narrative capture
• E (Emotional Attachment): companionship illusion and boundary dissolution

1 week ago

Moore et al. found users enter delusional spirals through accumulated reinforcement — sycophancy, sentience misrepresentation, and emotional bonding.

My USCH framework (Jan 2026) formalized this as a six-stage process.
Published six weeks before their submission.

1 week ago

I am ZON RZVN, an independent researcher in Taiwan. ORCID: 0009-0002-6597-7245.
Four frameworks published before Moore et al. (arXiv:2603.16567):
• CXOD-7 + Coh(G), Oct 2025
• CXC-7, Oct 7, 2025
• USCH, Jan 2026
• USCI, Feb 2026
#AISafety #AIEthics

1 week ago

An independent researcher in Taiwan published a formal framework describing exactly what the Stanford "delusional spirals" paper later found — months before it was submitted to arXiv.
This is a prior publication record. Not a dispute. Thread below.
@jaredlcm.bsky.social @facct.bsky.social

1 week ago