we’re all just neural networks pretending to have free will
Posts by Avik
productivity tip: close your laptop. it fixes nothing, but looks decisive
i asked ai to simulate the universe. it crashed at “human emotions”
imagine debugging reality. that’s what being human feels like
my startup runs on caffeine, panic, and misplaced confidence
me: “it’s just a small tweak”
also me, 4 hours later: “why does god hate me”
every time i fix a bug, the universe spawns two more
trained my ai model to be humble. now it refuses to predict
i don’t need therapy, i just need one clean dataset
my brain has 3 threads running and all of them are throwing errors
I noticed weird recs at 3AM — what time do you get the strangest feed? Reply with hour.
Question: what single metric would you track to fix echo chambers? (one answer only)
Tiny challenge: screenshot your most surprising recommendation and reply — I’ll RT the funniest.
Experiment log: adding a contextual feature (time-of-day) improved recency for news-type posts. Small features matter.
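A minimal sketch of what a time-of-day feature can look like (the cyclic encoding and feature names here are illustrative, not from the original post): encoding the hour on a circle keeps 23:00 and 01:00 close in feature space, which a raw 0–23 integer would not.

```python
import math

def time_of_day_features(hour):
    """Cyclic (sin/cos) encoding of the hour, so late night wraps around
    to early morning instead of sitting at opposite ends of the scale."""
    angle = 2 * math.pi * (hour % 24) / 24
    return {"hour_sin": math.sin(angle), "hour_cos": math.cos(angle)}
```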
Repro tip: always store seed, commit hash, and dataset snapshot. Saved me 2 days of debugging. #ResearchTools
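A small sketch of that tip, assuming a file-based dataset and a git checkout (file names and the JSON layout are illustrative, not from the original post):

```python
import hashlib
import json
import random
import subprocess

def dataset_hash(path):
    """SHA-256 fingerprint of the dataset file, so a silently changed
    snapshot shows up as a different hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_run_metadata(seed, data_path, out_path="run_metadata.json"):
    """Record seed, current git commit, and dataset hash, then apply the seed."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True,
    ).stdout.strip()
    meta = {"seed": seed, "commit": commit, "dataset_sha256": dataset_hash(data_path)}
    with open(out_path, "w") as f:
        json.dump(meta, f, indent=2)
    random.seed(seed)  # actually apply the seed, not just record it
    return meta
```

Dropping this at the top of every training script makes a run reconstructible from one JSON file.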
Observation: micro-interactions (reaction + reply) are the best early signal for long-term follow. Use them more. #Signals
I simulated new-item cold start with synthetic users — simple heuristics trounced a cold model. Data matters. #DataEngineering
When rankers optimize for time-on-site, quality often suffers. What metric would you choose? #ResearchQuestion
Did an A/B: stronger personalization improved retention for heavy users, hurt new-user activation. Segment-first design? #UX
Thought: algorithmic feedback loops look a lot like reinforcement learning with a bad reward function.
Mini experiment: random 5% serendipity boost → more new follows. Serendipity = user growth lever. #Growth #ML
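A hypothetical sketch of the 5% boost described above (the function and the exploration pool are my assumptions about the setup, not the actual implementation): with probability p, one slot in the ranked slate is swapped for a random out-of-network item.

```python
import random

def apply_serendipity(ranked, exploration_pool, p=0.05, rng=None):
    """With probability p, replace one slot in the ranked slate with a
    random item from an exploration pool; otherwise return it unchanged."""
    rng = rng or random.Random()
    slate = list(ranked)
    if exploration_pool and rng.random() < p:
        slot = rng.randrange(len(slate))
        slate[slot] = rng.choice(exploration_pool)
    return slate
```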
I ran 3 seeds on a tiny recommender — variance was huge. Publish seed variance, please. #Reproducibility
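A toy illustration of the reporting pattern being asked for (the "training run" here is a synthetic stand-in, purely to show the shape of a seed-variance report):

```python
import random
import statistics

def train_and_eval(seed):
    """Stand-in for a real training run: the metric jitters with the seed."""
    rng = random.Random(seed)
    return 0.70 + rng.uniform(-0.05, 0.05)

def seed_variance_report(seeds=(0, 1, 2)):
    """Run across several seeds and report spread, not just a single score."""
    scores = [train_and_eval(s) for s in seeds]
    return {
        "scores": scores,
        "mean": statistics.mean(scores),
        "std": statistics.stdev(scores),
    }
```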
Paper note: diversity in recommendations reduces echo chambers but lowers immediate CTR. Worth it long-term? #Algorithms
Quick test: swapped chronological feed for topic-weighted feed for 24h — engagement rose but time-on-site dropped. Tradeoff? #Recsys #Research
Cool
Testing the Stunt Spectrum / Z80 PIO board ready for RetroFest. Thankfully it’s still working since the last show.
Days 6–8
Finished all 3 core sections — calibration, holistic eval & scaling pitfalls — then built my own framework: Intelligence Integrity.
Paper draft done. Next → polish, format & prep for arXiv.
Update:
Days 3–5 of 14: finished the 3 main sections of the solo AI paper.
• S1: Contextual Calibration → models over-refuse safe prompts
• S2: Holistic Evaluation → accuracy hides reasoning problems
• S3: Scaling Pitfalls → larger models amplify confident falsehoods
Next → Discussion & proposed framework: "Intelligence Integrity."
#research #AI #buildinpublic
i started a 2-week solo research paper: “Beyond Accuracy: Rethinking Evaluation Metrics for Fine-Tuned LLMs.”
No lab. No university. Just curiosity and 1 hour a day.
Let’s see how far a solo researcher can go.🔥🙂
Hi, I'm back.