Here is my contribution to the Google AI Mode debate: how we can potentially unpack every query into follow-ups.
I built a tool to share the idea:
Query Fan-Out Simulator → wor.ai/query-fan-out
⚙️ Detects core entities
🤖 Generates follow-ups
🧪 Tests your content’s structure
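The fan-out idea can be sketched in a few lines. This is an illustrative toy only, not the simulator's actual code; the entity heuristic, stopword list, and templates are all made up:

```python
# Illustrative sketch of query fan-out: detect entities in a query,
# then expand them into follow-up questions via task templates.
# (Hypothetical heuristics and templates, not the simulator's real logic.)

STOPWORDS = {"how", "what", "why", "when", "where", "who", "which", "does", "is"}

TEMPLATES = [
    "What is {entity}?",
    "How does {entity} compare to alternatives?",
    "What are best practices for {entity}?",
]

def detect_entities(query: str) -> list[str]:
    # Naive placeholder: capitalized non-question words count as entities.
    words = [w.strip("?.,") for w in query.split()]
    return [w for w in words if w[:1].isupper() and w.lower() not in STOPWORDS]

def fan_out(query: str) -> list[str]:
    return [t.format(entity=e) for e in detect_entities(query) for t in TEMPLATES]

print(fan_out("How does WordLift use Knowledge Graphs?"))
```

A real system would use an entity linker against a knowledge graph instead of the capitalization heuristic, but the expansion shape is the same.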
Indeed, the recipe is simple and I believe it can be generalized:
1️⃣ A small model (2, 3 or 4B parameters),
2️⃣ A well-defined ontology,
3️⃣ A set of task templates,
4️⃣ A reward policy.
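A hedged sketch of the last ingredient, the reward policy: a rule-based score that checks a sampled answer against properties an ontology requires. The mini-ontology and scoring rule below are invented for illustration, not SEOntology itself:

```python
# Toy reward policy for ontology-grounded RL (e.g. a GRPO-style setup).
# The mini-ontology and scoring rule are illustrative, not SEOntology.

ONTOLOGY = {
    "Product": {"required": ["name", "description", "offers"]},
    "FAQPage": {"required": ["mainEntity"]},
}

def reward(entity_type: str, answer: str) -> float:
    """Fraction of the ontology's required properties mentioned in the answer."""
    required = ONTOLOGY.get(entity_type, {}).get("required", [])
    if not required:
        return 0.0
    hits = sum(1 for prop in required if prop in answer.lower())
    return hits / len(required)

print(reward("Product", "Add a name and description, plus offers markup."))  # → 1.0
```

In an actual GRPO run this score would rank a group of sampled completions against each other; here it only shows the shape of reward the recipe implies.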
Hi there! I am working on an article to share the methodology; in the meantime, you can find out more on the model card: huggingface.co/cyberandy/SE...
youtu.be/6dz_-LbP3eQ
🚀 MCP is live.
Claude + Agent WordLift now team up to handle complex SEO tasks:
→ Entity gap analysis
→ Meta descriptions
→ Internal links
→ FAQs, overviews, product highlights & more
Backed by a Knowledge Graph and real tools—not just prompts.
🚨 Calling all web experts! 🚨
The 2025 Web Almanac is still open for contributors!
Know someone perfect for it? Mention them here and help us reach the right folks. 🙌
📢 Please help us spread the word!
🔗 Learn more: github.com/HTTPArchive/...
Would love to see more teams converging on the research front. We tried to document the internals in the model card: huggingface.co/cyberandy/SE... (Quite surprisingly, the model, together with the openly contributed quantized versions others have made, has reached 650 downloads in just a few days!)
Now it’s time to hear from @cyberandy.bsky.social and Beatrice Gamba from WordLift as they present on #ontology based reinforcement learning for #AI in #SEO and #contentmarketing
#KGC2025 #DeepSeek
Thanks @pacoid.bsky.social it means tons coming from you!!
Excellent tutorial at the Knowledge Graph Conference today in NYC: "Teaching AI to Think Like Writers - The Aha Moment for Ontology-Based Reinforcement Learning" by @cyberandy.bsky.social @begam9.bsky.social implementing some of the most innovative ideas that I've seen in a long while!
This is what keeps me up at night. After 105 days studying DeepSeek R1, we built something different.
SEOcrate is a 4B-parameter reasoning model trained with GRPO on SEOntology.
Presenting this at KGC today with @begam9.bsky.social 🔥🔥🔥
wor.ai/Ng2RUa
Thanks @unsloth.ai for providing the model!
@brieeanderson.bsky.social has a super rad (and very on brand) background behind/between her slides at #seoweek
SEO ain't dead... for some reason, it's disruptive again—now with a 3.0 flavor.
🚀 Just shipped Content Quality Evaluation in Agent WordLift!
Every AI needs a solid feedback loop. Our new API scores content quality, readability & SEO—no matter who (or what) wrote it.
It’s our way of fighting AI slop. Here is the official documentation docs.wordlift.io/agent-wordli...!
haha his name is Bizet :) (cc @ammonjohns.bsky.social not my password 😜)
I do have a cat 🐈😻.
Slightly refined...
😆 Weekends are for relaxing.
A great suggestion Jes 👏👏
When I shared, back in November, how llms.txt might have helped GPTBot crawl our site, many asked:
“How do I build one for an eCommerce website?”
Well… the real answer is MCP, and it's time SEOs started paying attention.
Here is my latest article:
wordlift.io/blog/en/ai-a...
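For reference, the llms.txt proposal is just a plain Markdown file at the site root: an H1 title, a blockquote summary, then H2 sections of links. A minimal eCommerce example might look like this (shop name and URLs are placeholders):

```markdown
# Example Shop

> Placeholder eCommerce site; this file lists the pages an LLM crawler should read first.

## Products

- [Best sellers](https://example.com/collections/best-sellers.md): top products with prices and availability
- [Product FAQ](https://example.com/faq.md): shipping, returns and sizing

## Optional

- [Blog](https://example.com/blog.md): buying guides and comparisons
```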
Why G Won’t Be Transparent About Its AI Adoption.
💥 Bonus AI Mode Research Insights:
🔎 Advanced citation mechanics: Links frequently take you to pages with referenced text highlighted (just like featured snippets occasionally do) - huge clue about what Google values!
🔥 5 Days Left to register for Actionable AI For Marketers (discount link below)
Amazing overview of how #SEO is being rewritten! Well done, Britney 👏👏👏
THIS.
“I am not interested anymore in LLMs. They are just token generators [...] I am more interested in next-gen model architectures, that should be able to do 4 things: understand physical world, have persistent memory and ultimately be more capable to plan and reason.”
@yann-lecun.bsky.social
As @lilyray.nyc recently pointed out, LLMs' memory is shaped (among other factors) by social media conversations.
Thinking about marketing strategies through a neuroscience lens is no longer sci-fi—it’s becoming a practical framework for all of us!
Thanks, @larry.bsky.social, for the interview!
I will read it carefully and then read it again. You should do the same!
Yes, that is correct. It is the deepest understanding a language model has of the world. The memory layer that forms after post-training in a language model is made of key concepts.
Two big LLM releases today - Cohere's Command A simonwillison.net/2025/Mar/13/... and Ai2's OLMo 2
OLMo claims to be "the first fully-open model (all data, code, weights, and details are freely available) to outperform GPT3.5-Turbo and GPT-4o mini", which feels notable allenai.org/blog/olmo2-32B
In Agentic AI, personalization is driven by the episodic memory layer, making it a key differentiator.
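A toy sketch of what an episodic memory layer might look like (illustrative only; real agent frameworks typically use vector stores and relevance ranking rather than a plain list):

```python
# Toy episodic memory layer: store per-user interaction episodes and
# recall the most recent ones to personalize the next prompt.
from collections import defaultdict

class EpisodicMemory:
    def __init__(self, max_episodes: int = 5):
        self.max_episodes = max_episodes
        self.episodes: dict[str, list[str]] = defaultdict(list)

    def remember(self, user: str, episode: str) -> None:
        self.episodes[user].append(episode)

    def recall(self, user: str) -> list[str]:
        # Return only the most recent episodes for this user.
        return self.episodes[user][-self.max_episodes:]

mem = EpisodicMemory(max_episodes=2)
mem.remember("alice", "asked about schema markup")
mem.remember("alice", "optimized product pages")
mem.remember("alice", "requested internal link audit")
print(mem.recall("alice"))
```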
Wikipedia → Wikidata → Neuronpedia.
🔥🔥🔥