Small UX wins compound.
Replacing browser alerts with toast notifications and smart redirects makes your Bubble app 10x more professional.
I break down the full setup in my latest tutorial.
www.planetnocode.com/bubble-tutorials/user-al...
Posts by Matt Blake
Key lesson I learned the hard way with Pinecone:
I used OpenAI's embedding models (text-embedding-3-small/large), which means I have to:
1. Call the OpenAI API for embeddings first
2. Send the vectors to Pinecone
3. Repeat for every query
If I'd used Pinecone's native embeddings, I could just send TEXT directly. Save yourself this extra step.
Bonus: Hybrid search (semantic + keyword) >> semantic alone.
Semantic search fails on proper nouns like company names. But hybrid search is only available with Pinecone's native embeddings, which is another reason to use them.
Full breakdown with implementation details: www.planetnocode.com/bubble-tutorials/3-ways-...
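Outside Bubble, that extra round-trip (embed with OpenAI first, then upsert to Pinecone) looks roughly like this. A minimal sketch: the "chat-history" index name and metadata fields are my own illustration, and both clients are assumed to read their keys from the OPENAI_API_KEY / PINECONE_API_KEY environment variables.

```python
def build_record(pair_id: str, user_msg: str, ai_msg: str, vector: list) -> dict:
    """Package one message pair as a Pinecone upsert record."""
    return {
        "id": pair_id,
        "values": vector,
        "metadata": {"user": user_msg, "assistant": ai_msg},
    }

def embed_and_store(pair_id: str, user_msg: str, ai_msg: str) -> None:
    # Imports kept local so the sketch reads without the SDKs installed.
    from openai import OpenAI
    from pinecone import Pinecone

    text = f"User: {user_msg}\nAssistant: {ai_msg}"
    # Step 1: extra round-trip to OpenAI for the embedding.
    vector = OpenAI().embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding
    # Step 2: send the resulting vector to Pinecone.
    Pinecone().Index("chat-history").upsert(
        vectors=[build_record(pair_id, user_msg, ai_msg, vector)]
    )
```

With Pinecone's native embeddings, step 1 disappears and you upsert the raw text instead.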
Building an AI chatbot in Bubble? Here are 3 battle-tested ways to save chat history (i.e. give your AI memory), each with different trade-offs.
I've used all three in production apps.
Here's what I learned about Context Windows, Follow-up Prompts, and RAG 🧵👇
METHOD 1: Context Window
Yes, we now have 200K-1M token limits. But there are three problems:
1. Recent research shows AI loses track of info in the MIDDLE of long conversations
2. Costs snowball, because every reply re-sends the full history (12 messages, then 14, then 16...)
3. The Claude API has hard message limits (~1000)
METHOD 2: Follow-up Prompt (my favorite for cost)
Use an expensive model for responses, then run a cheap background model (~$0.0002) to distill the conversation into a structured profile:
- Role
- Task state
- Learner progress
- Decision points
- Next steps
METHOD 3: RAG with Pinecone
Vector databases enable semantic search: understanding that "cat" and "kitten" are related, rather than matching exact spelling.
You convert each message pair into embeddings (numbers), store them in Pinecone, then retrieve only the relevant history based on meaning.
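For the RAG approach described above, retrieval is the mirror image of storage: embed the user's new question, then ask Pinecone for the nearest stored pairs. A hedged sketch, reusing a hypothetical "chat-history" index and assuming API keys are set in the environment:

```python
def retrieve_relevant_history(question: str, top_k: int = 5) -> list:
    """Fetch only the message pairs whose meaning is closest to the question."""
    from openai import OpenAI
    from pinecone import Pinecone

    query_vector = OpenAI().embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    results = Pinecone().Index("chat-history").query(
        vector=query_vector, top_k=top_k, include_metadata=True
    )
    # Return the stored user/assistant text, ready to splice into the prompt.
    return [match.metadata for match in results.matches]

def format_history(pairs: list) -> str:
    """Render retrieved pairs as prompt context."""
    return "\n".join(
        f"User: {p['user']}\nAssistant: {p['assistant']}" for p in pairs
    )
```

Only the top_k most relevant pairs go into the prompt, which is how RAG keeps token costs flat as the conversation grows.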
Pro tip: Use an expensive AI model for responses, then a cheap one (~$0.0002) to summarize chat history into user profiles (role, task state, learner progress). Maintain context without breaking the bank.
Full breakdown: www.planetnocode.com/bubble-tutorials/3-ways-...
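The cheap-summariser step from the pro tip above can be sketched as a background call. The model name and exact profile fields here are illustrative, not prescriptive; any low-cost model works:

```python
# Hypothetical field list, matching the profile structure from the post.
PROFILE_FIELDS = ["role", "task_state", "learner_progress", "decision_points", "next_steps"]

def build_distill_prompt(transcript: str) -> str:
    """Instruction for the cheap summariser model."""
    fields = ", ".join(PROFILE_FIELDS)
    return (
        "Distill this conversation into a structured profile with the fields "
        f"{fields}. Reply as JSON only.\n\nConversation:\n{transcript}"
    )

def distill_profile(transcript: str) -> str:
    # The expensive model handles user-facing replies; this cheap call
    # runs in the background and its output replaces the raw history.
    from openai import OpenAI
    response = OpenAI().chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whichever cheap model you prefer
        messages=[{"role": "user", "content": build_distill_prompt(transcript)}],
    )
    return response.choices[0].message.content
```

On the next user message, you send the compact profile instead of the full transcript, which is where the cost saving comes from.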
🔥 Want to monitor competitor content without manual checking? In 30 mins, you can build a crawler that extracts blog posts and more...
Watch 👉 www.planetnocode.com/bubble-tutorials/web-scr...
Just built a web scraper in 30 minutes using Bubble and Firecrawl.
Scraped HubSpot's blog, filtered for AI content, and displayed results in a custom dashboard.
Watch 👉 www.planetnocode.com/bubble-tutorials/web-scr...
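The same scrape-then-filter idea outside Bubble: call Firecrawl's scrape endpoint, then keyword-filter titles for AI content. The endpoint shape below is an assumption based on Firecrawl's v1 REST API, so check their current docs before relying on it:

```python
import json
import re
import urllib.request

def is_ai_post(title: str) -> bool:
    """Case-insensitive keyword filter for AI-related posts."""
    return bool(re.search(r"\b(ai|gpt|llm|machine learning)\b", title, re.IGNORECASE))

def scrape(url: str, api_key: str) -> dict:
    # Assumed Firecrawl v1 scrape endpoint; verify against the official docs.
    req = urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",
        data=json.dumps({"url": url, "formats": ["markdown"]}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The word-boundary regex matters: a plain substring check for "ai" would wrongly match titles like "Email maintenance tips".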
The truth about that “AI is damaging your brain” headline — why the MIT study doesn’t prove what the media claims, explained by psychologist & researcher Devon Price. #AI #ChatGPT #TechTok #ScienceMyths #BrainFacts
What if we literally can't tell the difference between genius AI and broken AI? 🤯 Sam Altman's GPT-7 president speculation got me thinking about something nobody's talking about...
#AI #GPT5 #OpenAI #SamAltman
Tired of ChatGPT removing your favourite voice? 😤 Here's how to build your OWN AI voice assistant with complete control! 🎯
Vapi lets you customise EVERYTHING
#ChatGPT #AIVoice #VoiceAI #TechTips
What if ChatGPT isn't just mimicking intelligence... what if it's showing us how WE work? 🤯 Deep dive into how large language models actually function and why it might reveal something fundamental about human consciousness.
#ChatGPT #AI #ArtificialIntelligence #LLM
Geoffrey Hinton just changed how I think about AI "hallucinations."
They're actually confabulations - the same thing humans do when we construct false memories we genuinely believe. Maybe we're not as different from AI as we think.
#AI #ChatGPT #GeoffreyHinton #Psychology #Memory #Confabulation
Are AI companions helping or hurting our ability to form real relationships? 🤖💔
While AI offers judgment-free advice and endless patience, it can't replicate the vulnerable, messy reality of human bonding. Do real connections require navigating conflict, rejection, and genuine vulnerability?
The politeness paradox: being respectful to AI might make us more respectful to humans, or it might make us less empathetic to both.
The future of support might not be human versus AI. It might be human plus AI.
#AICoaching #FutureOfWork #HumanConnection
For millions without access to human coaching, this could mean 24/7 affordable support.
AI may not be ready to replace deep therapeutic relationships yet.
But the question isn't whether it will happen. It's how quickly we adapt to working alongside these tools rather than competing with them.
This points to a fascinating division of labour emerging.
AI handling structured, goal-focused support that follows established frameworks.
Humans managing the complex, adaptive work requiring cultural sensitivity and emotional nuance.
The research revealed something unexpected about our relationship with AI. We don't need to build rapport with machines like we do with humans. What matters most is whether we believe the technology actually works.
Students felt psychologically safe with AI. They shared personal information without fear of judgement.
But here's where it gets interesting.
AI only worked for narrow targets. It couldn't improve broader measures like resilience or overall wellbeing, which human coaches influenced significantly.
AI coaches performed as well as humans in trials. But there's a crucial catch.
New research reviewed 16 studies and found something fascinating about AI coaching.
In controlled trials with university students, AI coaches matched human performance for hitting specific, well-defined goals.
Early research on AI Coaching suggests we don't need to build rapport with AI like we do with humans. We just need to believe it works.
Geoffrey Hinton doesn't call them AI "hallucinations."
He calls them confabulations.
The same false memories humans create to fill gaps in knowledge - plausible content we genuinely believe is true.
What if AI "hallucinations" are actually proof AI thinks more like us than we want to admit?
The Tamagotchi effect reveals why humans naturally bond with AI companions - it's the same ancient psychology that makes us love pets and feel connected to nature.
#TamagotchiEffect #AICompanions #EvolutionaryPsychology #HumanAttachment #DigitalRelationships
Maybe the real breakthrough isn't eliminating AI confabulations.
Maybe it's teaching AI to catch itself in the act, just like we do.
#ArtificialIntelligence #CognitiveScience #MachineLearning
The real difference isn't that humans never confabulate. It's that we're better at recognising when we're making things up.
But here's the uncomfortable question: if AI is developing the same cognitive patterns that make us human, how long will our advantage of self-awareness last?