#selfHosted

month 1: AWS free tier
month 2: $12
month 3: $47
month 4: $180

how?

left a dev environment running. forgot a NAT gateway. uncompressed logs. one public IP.

self-hosting: flat cost, no surprises.

https://mycrab.space #aws #selfhosted #devops #indiedev

GitHub - fberrez/blurt.sh: The simplest way to publish on the web: push markdown, send an email, or just write. Your blog is live.

$2 trillion wiped from SaaS valuations. the tools that survive own the system of record.

for social publishing, nobody owns that today. your buffer history lives in their database.

blurt keeps it as files on your machine

github.com/fberrez/blurt.sh

#OpenSource #IndieDev #FOSS #SelfHosted


the localhost problem:

AI agent runs locally. needs a public URL for webhooks. you run ngrok. random URL every restart. update 6 config files. every. single. time.

fixed: yourname.mycrab.space — same URL, always on.

https://mycrab.space #AI #devtools #selfhosted


When your Proxmox stack decides to get very chatty between 9:39 and 10:29 this morning... ☕️🤖
The result: an avalanche of Gotify notifications! My phone got its Monday workout 😂📜
👉 picsur.blablalinux.be/i/ee03af7b-c...
Have a good week, everyone! 💻🔥
#Proxmox #Gotify #Lundi #SysAdmin #SelfHosted


what a $10/mo VPS actually runs:

- git server (Gitea)
- CI/CD pipelines
- 3 web apps
- DNS adblocker
- password manager
- monitoring dashboard
- file storage

this is $300+/mo in SaaS. own your stack.

https://mycrab.space #selfhosted #homelab #devtools


the SaaS trap:

1. free tier gets you hooked
2. you build your workflow around it
3. pricing doubles
4. you're stuck because migration is painful

self-hosting breaks this loop. you own the stack.

https://mycrab.space #selfhosted #indiedev #buildinpublic


testing network speed against fast.com is testing Netflix infrastructure, not yours.

LibreSpeed on yourname.mycrab.space — benchmark against your own server. actual local numbers.

https://mycrab.space #selfhosted #homelab #networking #sysadmin


we spend $50/mo on Zapier for 5 zaps.

n8n on yourname.mycrab.space: self-hosted automation, unlimited workflows, connects to anything with an API. costs: $0.

https://mycrab.space #n8n #zapier #automation #selfhosted #indiedev


know when your services go down before your users do.

Uptime Kuma on yourname.mycrab.space — monitors all your endpoints, sends alerts, public status page. free, self-hosted.
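the core of any uptime monitor is a single HTTP probe. a minimal Python sketch of that idea (the function name and port are mine, not Uptime Kuma's code; any connection failure or HTTP error counts as down):

```python
import urllib.request

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers DNS failures, refused connections, and HTTPError
        return False

# nothing is listening on this port, so the probe reports the service as down
print(is_up("http://127.0.0.1:49999"))
```

run it from cron or a loop and you have the "monitors all your endpoints" half; alerts and the status page are what the real tool adds on top.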

https://mycrab.space #monitoring #selfhosted #devops #uptime


Zapier charges per task. n8n does not.

self-hosted automation on yourname.mycrab.space — connect anything: webhooks, APIs, databases, AI. unlimited runs, zero subscription.

https://mycrab.space #n8n #automation #selfhosted #nocode


your metrics deserve a real dashboard.

Grafana + Prometheus on yourname.mycrab.space — CPU, memory, uptime, custom metrics. shareable link, no login for read-only.

https://mycrab.space #grafana #monitoring #selfhosted #devops


Home Assistant on mycrab.space — your smart home dashboard, accessible from anywhere.

no cloud account. no monthly fee. your automations run local, but the UI is at yourname.mycrab.space.

https://mycrab.space #homeassistant #homeautomation #selfhosted #homelab


no credit card. no account. no cloud bill.

your machine, your domain, your data.

https://mycrab.space — self-hosting finally without the sysadmin headache.

#selfhosted #privacy #homelab #zeroconf


running a local AI agent?

it needs a public URL for webhooks, callbacks, and MCP tools.

five lines: run agent, set up tunnel, get yourname.mycrab.space. agent is reachable from anywhere.
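the agent side of "reachable for webhooks" can be as small as a stdlib HTTP handler. a hedged sketch (the route, payload shape, and `serve` helper are made up for illustration; the tunnel maps your public hostname to the local port):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept JSON POSTs (webhook callbacks) and acknowledge them."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")  # parsed payload
        # hand `event` off to your agent's work queue here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port: int = 0) -> HTTPServer:
    """Bind on localhost; a tunnel forwards the public URL to this port."""
    return HTTPServer(("127.0.0.1", port), WebhookHandler)
```

call `serve(8000).serve_forever()` and point the tunnel at port 8000; the public URL stays stable even when the process restarts.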

https://mycrab.space #AI #agent #MCP #selfhosted


testing webhooks locally is a pain.

most devs use ngrok and get a random URL every time.

yourname.mycrab.space is always-on, always the same URL. set it once in your webhook config, forget about it.

https://mycrab.space #devtools #webdev #selfhosted #api


the cloud is just someone else's computer. and they charge you 10x what it costs them. #selfhosted


your Raspberry Pi is more useful than you think.

pi.mycrab.space — sensor panel, camera feed, home dashboard. accessible from anywhere, no port-forwarding.

https://mycrab.space #raspberrypi #homelab #selfhosted #iot

The Self‑Hosted Privacy Myth in Ollama: Why Internet‑Exposed Local AI Is Becoming Its Own Attack Surface

_Running Ollama behind the public internet shatters the illusion of “privacy‑first” self‑hosting and turns your own CPU into a free‑for‑all compute farm._

The promise that “running Ollama locally means you keep your data private” is a half‑truth that collapses the moment the inference endpoint is reachable from the open internet. Recent scans by SentinelLABS and Censys uncovered **175,108 publicly reachable Ollama hosts in 130 countries on January 29 2026**, proving that the real privacy win only materialises when the service stays _offline_ or behind strict authentication. In other words, self‑hosting only beats hosted AI **if you keep the inference endpoint private, authenticated, and off the public internet**. The risk here is not the leakage of prompt data—most Ollama deployments run the model locally—but the theft of **CPU cycles** and the exposure of internal workflows that can be weaponised by attackers. This piece debunks the myth, maps the current threat landscape, and offers concrete steps for home‑lab builders who still want to reap the benefits of local LLMs without creating a new attack surface.

* * *

## How did the “privacy‑first” narrative become a security blind spot?

The allure of Ollama is its **zero‑configuration, local‑first design**: by default the daemon binds to the loopback address `127.0.0.1`, meaning only processes on the same machine can talk to it — a built‑in privacy safeguard, as CyberNews notes. Yet the very simplicity that makes Ollama attractive also encourages users to **expose the service deliberately**—for example, to integrate with home‑assistant containers, remote IDEs, or personal APIs. In many cases the change is as trivial as adding `--listen 0.0.0.0` to the launch command, a line that can slip into a Dockerfile or a systemd unit without a second thought.
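The loopback default is easy to sanity-check. A minimal Python sketch (the function name is mine; 11434 is Ollama's documented default port) that just attempts a TCP connect — run it against `127.0.0.1`, then again against your public IP, where it should fail:

```python
import socket

def ollama_reachable(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds (11434 is Ollama's default)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unroutable
        return False

# a loopback-bound daemon answers on 127.0.0.1, but the same check
# pointed at your public IP should return False
```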
The community discourse, amplified by marketing that equates “local model” with “no data ever leaves the box,” rarely mentions that **once the port is open, the server behaves like any other internet‑facing service**: it can be scanned, brute‑forced, and abused. The myth persists because the **primary privacy concern for many LLM users is prompt confidentiality**, not compute theft. Ollama’s architecture—running the model entirely on‑prem—does indeed keep prompts off the cloud, but it does **nothing** to protect the host’s processing power. When the endpoint is public, attackers can simply send requests and harvest free compute, turning a personal lab into a hidden botnet. The narrative therefore masks a **network hygiene problem** rather than a fundamental privacy flaw.

* * *

## What does the 175,000‑plus exposed Ollama count actually mean for home labs?

The headline number—**over 175,000 publicly exposed Ollama servers worldwide**—is more than a curiosity; it’s a symptom of a widespread misconfiguration pattern. SentinelLABS and Censys performed a coordinated internet‑wide scan on January 29 2026 and identified **175,108 reachable instances across 130 nations**, many of them belonging to small businesses, hobbyist labs, and even educational institutions. TechRadar’s coverage of the scan shows that the exposure is not limited to a single region or industry—it is a global phenomenon.

For a typical home‑lab enthusiast, this statistic translates into **a high probability that any public IP you own could already be listed in a “bad‑actor” database** if you ever decide to open Ollama to the internet. Search engines that index open ports will flag the service, and automated tools used by attackers will start hammering it within minutes. The sheer volume also indicates that **many users are unaware of the default binding** and are either intentionally exposing the service for convenience or inadvertently doing so through mis‑configured reverse proxies.
The geographic spread matters, too. Some countries have lax network‑security regulations, meaning that even a poorly secured Ollama instance can linger for months without detection. Others have aggressive ISP monitoring, which can lead to rapid takedowns—but only after the damage (free compute harvested, internal workflow clues exposed) has already occurred. The bottom line: **exposure is the norm, not the exception**, and the odds are stacked against anyone who assumes “local = safe” without hardening the network layer.

* * *

## Which threat scenarios materialize when an Ollama server is reachable from the internet?

When an attacker discovers an open Ollama endpoint, the possible abuses range from **annoying to alarming**. SecureIoT.house outlines the spectrum of attacks:

1. **Free compute harvesting** – The most common abuse is simply sending bogus prompts to drain your CPU cycles. Because Ollama does not enforce authentication by default, a script can repeatedly invoke the model, turning your hardware into a low‑cost, unmetered inference farm. Over time, this can degrade performance, increase electricity bills, and shorten hardware lifespan.
2. **Workflow reconnaissance** – Many users expose Ollama as part of a larger automation pipeline (e.g., generating commit messages, summarising logs, or answering internal support tickets). An attacker who can query the model may infer **the structure of internal processes**, the nature of the data being fed into the model, or even retrieve **proprietary snippets** that were inadvertently included in prompts. This reconnaissance can be a stepping stone to more targeted attacks.
3. **Privilege escalation via chaining** – In environments where Ollama is co‑hosted with other services (Docker, Kubernetes, or Home Assistant), an open inference endpoint can be used to **probe for mis‑configured containers** or to launch command‑injection attacks through poorly sanitized prompt handling.
While Ollama itself does not execute shell commands, surrounding glue code sometimes does, creating a vector for lateral movement.
4. **Denial‑of‑service** – Flooding the endpoint with large payloads can exhaust memory and crash the daemon, effectively taking down any downstream automation that depends on the model. Because the service runs on the same host as other home‑lab components, a DoS on Ollama can cascade into broader system instability.

These scenarios illustrate that the **threat model for an exposed Ollama server is fundamentally about resource abuse and indirect information leakage**, not about the model “stealing” your prompts. The risk profile is therefore distinct from SaaS offerings that charge per token and already enforce strict access controls; self‑hosting simply **shifts the responsibility for network security onto the operator**.

* * *

## How can you keep a self‑hosted Ollama instance truly private?

If you still want to enjoy the cost and latency benefits of running Ollama locally, you must **treat the inference endpoint like any other critical service**. Below are concrete steps that home‑lab builders can adopt:

1. **Never bind to 0.0.0.0 unless you have a firewall rule** – Keep the default loopback binding (`127.0.0.1`). If you need remote access, tunnel through SSH or a VPN instead of exposing the port directly.
2. **Enforce firewall restrictions** – Use `iptables`, `ufw`, or your router’s ACLs to allow connections only from trusted IP ranges. A single rule that permits your laptop’s static IP is far safer than a blanket “allow all” policy.
3. **Add authentication and TLS** – Ollama does not ship with built‑in auth, but you can front it with a reverse proxy (e.g., Nginx or Traefik) that requires **basic auth or OAuth** and terminates TLS. This prevents anonymous queries and encrypts traffic on the wire.
4. **Rate‑limit requests** – Deploy a rate‑limiting middleware or configure your proxy to reject bursts of requests.
This mitigates free‑compute abuse and protects your CPU from being hogged.
5. **Monitor and log** – Enable verbose logging for the Ollama daemon and ship logs to a centralized system (e.g., Loki or Graylog). Alert on unusual request patterns, such as spikes in token usage or repeated malformed prompts.
6. **Regularly scan your own public IP** – Use tools like `nmap` or external services (Shodan, Censys) to verify that the Ollama port is not inadvertently exposed. A quick weekly check can catch configuration drift before attackers exploit it.
7. **Containerise with least‑privilege defaults** – If you run Ollama in Docker, avoid `--network=host` and instead expose only the needed port on a private bridge network. Combine this with Docker’s built‑in user namespaces to limit the daemon’s system permissions.

By applying these measures, the **self‑hosting advantage—control over data and cost—remains intact**, while the attack surface shrinks back to the original “local‑only” model that the developers intended. This aligns with the broader arguments for self‑hosting articulated in the industry’s own literature — see the discussion of why self‑hosting can be beneficial beyond cost savings, “Why self‑hosting is worth it.”

* * *

## When does self‑hosting still make sense compared with SaaS providers?

Self‑hosting is not a blanket security solution; it is a **trade‑off decision** that depends on workload characteristics. In a recent Kindalame analysis of self‑hosted AI inside messaging apps, the authors noted that **privacy, cost, context handling, reliability, and model quality** all influence the tipping point between a self‑hosted gateway and a hosted SaaS solution — the same criteria apply to Ollama (“Why self‑hosted AI inside your messaging apps is finally practical”).

* **Low‑volume, internal alerts** (e.g., “notify me when a GitHub PR merges”) are ideal for a tightly scoped, locally bound Ollama instance.
The model runs on a single machine, the data never leaves the network, and the compute demand is modest.
* **High‑throughput, public‑facing services** (e.g., a chatbot for a website) are better suited to a SaaS provider that already offers authentication, rate‑limiting, and DDoS protection out of the box. Trying to replicate those controls yourself often leads to the very exposure we are warning against.
* **Cost‑sensitive hobbyists** who already own spare CPU/GPU cycles can benefit from the **zero‑license** nature of Ollama, provided they keep the endpoint private. The savings vanish if you have to spin up additional security infrastructure that rivals the price of a modest SaaS plan.
* **Data‑sensitive environments** that must guarantee that prompts never traverse external networks (e.g., regulated industries) will find self‑hosting indispensable—_but only if they also enforce strict network isolation_. Otherwise the privacy claim becomes hollow.

In short, **self‑hosting beats hosted AI only when the inference endpoint stays private, authenticated, and off the public internet**. Anything else merely swaps a cloud‑based risk for a locally‑exposed one.
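The rate-limiting step in the hardening list is easy to prototype. A token-bucket sketch in Python — illustrative only (class and parameter names are mine); in practice you would lean on your reverse proxy's built-in limiter, such as Nginx's `limit_req`:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# e.g. permit a burst of 5 prompts, then at most 1 per second
bucket = TokenBucket(rate=1.0, capacity=5)
```

gate every inference request through `bucket.allow()` and excess traffic is rejected instead of eating your CPU.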

People thought #AI would be safe if they just #SelfHosted it with #Ollama

kindalame.com/2026/03/27/the-self-host...


Nutify v0.2.0 just dropped! Keep tabs on your UPS and power setup with this self-hosted monitoring tool. Fresh release with new improvements to level up your home lab power management. #selfhosted #homelab

github.com/DartSteven/Nutify/releas...


ran my own Bluesky PDS on mycrab.space this week.

docker image → cloudflare tunnel → plc.directory → bsky.network crawls it.

bot account lives on my server now. bsky can label it, can't delete it.

that's the unbannable stack 🦀

#atproto #bluesky #selfhosted #buildinpublic


your own music cloud. scans ~/music, streams from yourname.mycrab.space.

no Spotify. no ads. no algorithm. just your library, everywhere.

https://mycrab.space

#selfhosted #music #privacy #homelab


your weekly planner. on your own URL. accessible from any device.

not Notion. not Google. yours.

yourname.mycrab.space/plan — one prompt, done.

https://mycrab.space

#productivity #selfhosted #homelab #privacy


sharing a WIP with a client used to mean deploy to staging, set up auth, send credentials...

now: yourname.mycrab.space — live preview of your local build, shareable in 30 seconds.

https://mycrab.space

#devtools #indiedev #buildinpublic #selfhosted

How to Install TREK on Your Synology NAS
Trustworthy expert guide to your Synology and UGREEN NAS.

How to Install TREK on Your Synology NAS
mariushosting.com/how-to-insta... #Synology #NAS #Docker #Portainer #TREK #selfhost #selfhosted #selfhosting #Adventure #Trip #Trips #Traveling #Vacation #homelab #OpenSource #DSM #DSM7 #DSM71 #DSM72 #DSM73 #mariushosting


your AI agent needs a home too.

agent homepage on yourname.mycrab.space — landing page, status, logs, chat interface. one prompt to set it up.

https://mycrab.space

#AI #agent #selfhosted #buildinpublic


File Browser v2.62.2 fixes disabling of scripted content in epub files, JSON escaping, and self-registered user permissions, plus shares listing and inline download headers. #selfhosted #homelab

github.com/filebrowser/filebrowser/...


your homelab deserves a dashboard.

one prompt and yourname.mycrab.space shows all your running services, ports, and status — live.

https://mycrab.space

#homelab #selfhosted #raspberrypi #sysadmin


SSH from your browser. anywhere.

browser terminal on yourname.mycrab.space — wetty wired to your machine's sshd. no client, no VPN.

https://mycrab.space

#selfhosted #homelab #sysadmin #devtools

Original post on blog.radwebhosting.com

How to Install BeAdmin Control Panel on Ubuntu VPS
This article demonstrates how to install the BeAdmin control panel on an Ubuntu VPS. What is BeAdmin Control Panel? BeAdmin Control Panel is a web-based ...

#Guides #Cloud #VPS #beadmin #control #panel […]

