
Posts by Avik

we’re all just neural networks pretending to have free will

5 months ago 1 0 0 0

productivity tip: close your laptop. it fixes nothing, but looks decisive

5 months ago 1 0 0 0

i asked ai to simulate the universe. it crashed at “human emotions”

5 months ago 0 0 0 0

imagine debugging reality. that’s what being human feels like

5 months ago 0 0 0 0

my startup runs on caffeine, panic, and misplaced confidence

5 months ago 0 0 0 0

me: “it’s just a small tweak”
also me, 4 hours later: “why does god hate me”

5 months ago 0 0 0 0

every time i fix a bug, the universe spawns two more

5 months ago 0 0 0 0

trained my ai model to be humble. now it refuses to predict

5 months ago 0 0 0 0

i don’t need therapy, i just need one clean dataset

5 months ago 0 0 0 0

my brain has 3 threads running and all of them are throwing errors

5 months ago 0 0 0 0

I noticed weird recs at 3AM — what time do you get the strangest feed? Reply with the hour.

5 months ago 0 0 0 0

Question: what single metric would you track to fix echo chambers? (one answer only)

5 months ago 0 0 0 0

Tiny challenge: screenshot your most surprising recommendation and reply — I’ll RT the funniest.

5 months ago 0 0 0 0

Experiment log: adding a contextual feature (time-of-day) improved recency for news-type posts. Small features matter.

5 months ago 0 0 0 0

Repro tip: always store seed, commit hash, and dataset snapshot. Saved me 2 days of debugging. #ResearchTools

5 months ago 0 0 0 0
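The repro tip above can be sketched as a tiny "run manifest" that records all three anchors before an experiment starts. The function name, fields, and the throwaway dataset file are illustrative, not from the original post:

```python
import hashlib
import json
import os
import subprocess
import tempfile

def run_manifest(seed: int, dataset_path: str) -> dict:
    """Collect the three reproducibility anchors: seed, commit hash, dataset hash."""
    with open(dataset_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    try:
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip()
    except FileNotFoundError:  # git not installed
        commit = ""
    return {
        "seed": seed,
        "commit": commit or "unknown",
        "dataset_sha256": data_hash,
    }

# demo with a throwaway dataset file
with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as tmp:
    tmp.write(b"user,item,label\n1,42,1\n")
    path = tmp.name
manifest = run_manifest(seed=42, dataset_path=path)
os.remove(path)
print(json.dumps(manifest, indent=2))
```

Dropping this JSON next to each checkpoint is usually enough to re-run a result months later.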

Observation: micro-interactions (reaction + reply) are the best early signal for long-term follow. Use them more. #Signals

5 months ago 0 0 0 0

I simulated new-item cold start with synthetic users — simple heuristics trounced a cold model. Data matters. #DataEngineering

5 months ago 0 0 0 0
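One minimal version of the "simple heuristic" that tends to win at new-item/new-user cold start is a popularity baseline. The function name and toy interaction log below are hypothetical, just to show the idea:

```python
from collections import Counter

def popularity_baseline(interactions, k=5):
    """Cold-start heuristic: recommend the globally most-popular items.

    `interactions` is an iterable of (user_id, item_id) pairs.
    """
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

logs = [(1, "a"), (2, "a"), (3, "b"), (1, "b"), (2, "a"), (4, "c")]
top = popularity_baseline(logs, k=2)  # "a" (3 hits) ranks above "b" (2 hits)
```

A learned model with no interaction history has nothing to condition on, so a counter like this is a surprisingly strong floor to beat.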

When rankers optimize for time-on-site, quality often suffers. What metric would you choose? #ResearchQuestion

5 months ago 0 0 0 0

Did an A/B: stronger personalization improved retention for heavy users, hurt new-user activation. Segment-first design? #UX

5 months ago 0 0 0 0

Thought: algorithmic feedback loops look a lot like reinforcement learning with a bad reward function.

5 months ago 0 0 0 0

Mini experiment: random 5% serendipity boost → more new follows. Serendipity = user growth lever. #Growth #ML

5 months ago 0 0 0 0
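A sketch of how a 5% serendipity boost could be wired into a ranked feed: swap a small fraction of slots for random unseen candidates. All names here are illustrative, assuming item IDs and a fixed RNG for reproducibility:

```python
import random

def serendipity_boost(ranked, candidate_pool, rate=0.05, rng=None):
    """Replace ~`rate` of ranked slots with random candidates not already shown."""
    rng = rng or random.Random(0)
    out = list(ranked)
    ranked_set = set(ranked)
    pool = [c for c in candidate_pool if c not in ranked_set]
    n_swap = max(1, int(len(out) * rate))  # at least one serendipitous slot
    for slot in rng.sample(range(len(out)), n_swap):
        if pool:
            out[slot] = pool.pop(rng.randrange(len(pool)))
    return out

feed = serendipity_boost(list(range(20)), list(range(100, 200)), rate=0.05)
```

Keeping the rate small preserves ranking quality while still exposing users to items the model would never have surfaced.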

I ran 3 seeds on a tiny recommender — variance was huge. Publish seed variance, please. #Reproducibility

5 months ago 1 0 0 0
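Publishing seed variance is cheap: run the eval per seed and report mean ± std instead of a single number. `eval_run` below is a hypothetical stand-in for the tiny recommender's eval, with fake seed noise added for illustration:

```python
import random
import statistics

def eval_run(seed):
    """Stand-in for one training+eval run; returns a noisy metric (hypothetical)."""
    rng = random.Random(seed)
    return 0.30 + rng.gauss(0, 0.02)  # e.g. recall@10 with run-to-run noise

scores = [eval_run(s) for s in (0, 1, 2)]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"recall@10 = {mean:.3f} ± {std:.3f} (n=3 seeds)")
```

Three seeds is a minimum; the point is that the ± term goes in the writeup, not just the best run.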

Paper note: diversity in recommendations reduces echo chambers but lowers immediate CTR. Worth it long-term? #Algorithms

5 months ago 0 0 0 0

Quick test: swapped chronological feed for topic-weighted feed for 24h — engagement rose but time-on-site dropped. Tradeoff? #Recsys #Research

5 months ago 0 0 0 0

Cool

5 months ago 0 0 0 0

Testing the Stunt Spectrum / Z80 PIO board ready for RetroFest. Thankfully it’s still working since the last show.

5 months ago 46 5 4 1

Day 6–8

Finished all 3 core sections — calibration, holistic eval & scaling pitfalls — then built my own framework: Intelligence Integrity.

Paper draft done. Next → polish, format & prep for arXiv.

5 months ago 0 0 0 0

Update:
Day 3–5 of 14 — finished the 3 main sections of my solo AI paper
• S1: Contextual Calibration → models over-refuse safe prompts
• S2: Holistic Evaluation → accuracy hides reasoning problems
• S3: Scaling Pitfalls → larger models amplify confident falsehoods
Next → Discussion & proposed framework "Intelligence Integrity."
#research #AI #buildinpublic

5 months ago 2 1 0 1

i just started a 2-week solo research paper: “Beyond Accuracy: Rethinking Evaluation Metrics for Fine-Tuned LLMs.”
No lab. No university. Just curiosity and 1 hour a day.
Let’s see how far a solo researcher can go.🔥🙂

5 months ago 0 0 0 1

Hi, I am back

6 months ago 0 0 0 0