This makes local moderation in Nuxt way cleaner:
- define models once
- reuse by alias
- use in client + server
- keep moderation fully inside your app
That’s exactly why I built nuxt-local-model.
npmx.dev/package/nuxt...
Posts by Sergii Yarochevskyi 🇺🇦
Best pattern:
client side → instant feedback
server side → final moderation gate
So users get fast UX, but your backend still decides whether to allow the comment.
Example app page:
<script setup lang="ts">
const text = ref("")
const result = ref<any>(null)

const check = async () => {
  result.value = await $fetch("/api/moderate-comment", {
    method: "POST",
    body: { text: text.value },
  })
}
</script>
Example server moderation route:
import { getLocalModel } from "nuxt-local-model/server"

export default defineEventHandler(async (event) => {
  const { text } = await readBody(event)

  const toxicity = await getLocalModel("toxicity")
  const sentiment = await getLocalModel("sentiment")

  return {
    toxicity: await toxicity(text, { top_k: null }),
    sentiment: await sentiment(text, { top_k: null }),
  }
})
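Text-classification pipelines in Transformers.js typically return an array of label/score pairs when `top_k` is null. A minimal sketch of the final server-side gate that could sit on top of that output (the helper name, the threshold, and the "toxic" label emitted by Xenova/toxic-bert are assumptions, not part of the module's API):

```typescript
// Shape returned by a text-classification pipeline with { top_k: null }.
type Classification = { label: string; score: number }

// Hypothetical gate: block the comment when the "toxic" label's score
// crosses the threshold; allow it otherwise.
function isAllowed(toxicity: Classification[], threshold = 0.5): boolean {
  const toxic = toxicity.find((c) => c.label.toLowerCase() === "toxic")
  return !toxic || toxic.score < threshold
}
```

The route above could then return `{ allowed: isAllowed(scores), scores }` so the client gets both the verdict and the raw output.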
Now use clean aliases everywhere:
In Vue:
const sentiment = await useLocalModel("sentiment")
On server:
const toxicity = await getLocalModel("toxicity")
Register your models once:
export default defineNuxtConfig({
  modules: ["nuxt-local-model"],
  localModel: {
    cacheDir: "./.ai-models",
    serverWorker: true,
    models: {
      toxicity: {
        task: "text-classification",
        model: "Xenova/toxic-bert",
        options: { dtype: "q8" },
      },
      sentiment: {
        task: "text-classification",
        model: "Xenova/distilbert-base-uncased-finetuned-sst-2-english",
        options: { dtype: "q8" },
      },
    },
  },
})
Install:
npx nuxi module add nuxt-local-model
Then enable it in nuxt.config.ts:
export default defineNuxtConfig({
  modules: ["nuxt-local-model"],
})
Want local AI moderation in your Nuxt app without wiring raw Transformers.js setup everywhere?
I built nuxt-local-model so you can plug Hugging Face models into
@nuxt_js
apps fast and use them for comment moderation, profanity checks, sentiment analysis etc.
#nuxt #vuejs #oss
Hey @atinux.com @danielroe.dev, it's my first Nuxt module and my first npm package in general, so I would appreciate some feedback on whether everything looks good and follows the guidelines.
If you're calling external AI services, check out Hugging Face models; maybe you can just move the workload into your Nuxt app and eliminate the API call altogether.
Package:
npmx.dev/package/nuxt...
I wanted something that felt very “Nuxt-native”:
- module install
- auto-imported composables
- runtime config
- one clean DX for client + server
Example:
const embedder = await useLocalModel("embedding")
const output = await embedder("Nuxt local model example")
Server example:
const embedder = await getLocalModel("embedding")
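Embeddings like the ones above pair naturally with semantic search. A minimal sketch of ranking by cosine similarity, assuming the embedder yields a flat number array per input (the exact output shape depends on the underlying Transformers.js pipeline and its pooling options):

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

You could embed a query and a set of documents, then sort the documents by their similarity to the query vector.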
It also supports:
- Node / Bun / Deno server runtimes
- persistent model caches
- Docker-friendly cache paths
- worker-backed execution so inference can stay off the main thread
Good fit for things like:
- embeddings
- semantic search
- classification
- smaller local generation tasks
- privacy-friendly app features
You can configure:
- model id
- task
- load options
- cache directory
- server worker / browser worker behavior
- browser prewarming
It gives you:
- useLocalModel() in Vue components
- getLocalModel() in server routes
- config-driven model aliases in nuxt.config.ts
The goal:
a convenient wrapper around Nuxt and Hugging Face Transformers, so you can run small local models for simple use cases: embeddings, profanity checks, sentiment analysis, and so on
I built nuxt-local-model, a @nuxt.com Nuxt 4 module for running local Hugging Face / Transformers.js models directly inside your app.
Great collapse 2026?? Or later?🤨
petite-nuxt version? 😀
Why doesn't @nativescript.bsky.social have a solution similar to @shadcn.com components in #nativescript, with the same building blocks that rely on <GridLayout> and other NS components?
Or maybe there is a solution I haven't heard of?
@elysiajs.com paired with @drizzle.team ORM offers unmatched end-to-end type safety.
Use drizzle-typebox to convert Drizzle schemas directly into Elysia validation models. This single step handles type validation and auto-generates your OpenAPI schema seamlessly.
#webdev #typescript #typebox #js
Skip #Electron. Use #Wails (Go) + Your frontend framework of choice.
Wails leverages the native OS webview instead of bundling Chromium.
The result is ~15MB binaries and ~10MB baseline memory usage. It delivers a vastly superior, lightweight experience for users.
#desktopapps #nativedev #webdev
Join the discussion and let's shape the future of customer feedback and app communities together.
#saas #BuildInPublic #CustomerService #indiedev #indiehackers #webdev #developer #shipping #microsaas #solopreneur #bootstrapped
If you want a tool that builds a community instead of just collecting tickets, join the waitlist.
Let’s make feedback feel alive again.
🔗 welcome.feedbacktor.app
#buildinpublic
#indiehackers
#saas #webdev #shipping
I am launching in a few weeks and squashing the final bugs. 🚀
I want to build this WITH you. When you use tools like Canny or Upvoty, what frustrates you the most?
Drop a reply 👇
It’s not just about upvotes. • Open Discussions & Polls. • Help Center / Knowledge Base. • Hidden Posts: Create VIP threads for top contributors. • Staff Banners: Show users you are actively reviewing a thread right now.
I was also sick of duct-taping tools together (Email + Chat + Feedback).
We built the Anti-Silo Inbox. ⚡️ It pulls everything—Support Emails, Live Chats, Feature Requests, and internal team notes—into ONE speed-focused dashboard.
Stop tab-switching.