Listening to a various-artists compilation from #RamaLamaRecords, 2025
#Sweden 🇸🇪
#indie #indieRock #Rock #Pop #indiePop #Bandcamp #RamaLama #lastFM #musicSky #bbcR1 #bbcR6 #drp6dk
ramalama.bandcamp.com/album/rama-l...
I’m excited to bring #Ramalama, the container-native project to run AI models locally, to @cern.voxxeddays.ch 🙌
We’ll show how quickly devs can spin up an open source model using their favorite container engine (shoutout @podmanio.bsky.social) for RAG (Q&A on PDFs) and agentic use (calling APIs) 🤖
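For context, the RAG pattern mentioned above boils down to: retrieve the document chunks most relevant to a question, then stuff them into the prompt. Here is a minimal sketch of that idea; the chunks, the keyword-overlap scoring, and the prompt template are all illustrative assumptions, not RamaLama's actual pipeline.

```python
# Illustrative RAG sketch: naive keyword-overlap retrieval plus prompt assembly.
# Real systems use embeddings; this just shows the shape of the pattern.

def score(question: str, chunk: str) -> int:
    """Count how many question words appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def build_rag_prompt(question: str, chunks: list[str], top_k: int = 2) -> str:
    """Pick the top_k best-matching chunks and embed them in a Q&A prompt."""
    best = sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "RamaLama runs AI models inside OCI containers.",
    "The weather in Geneva is mild in spring.",
    "Podman can be used as the container engine for RamaLama.",
]
prompt = build_rag_prompt("Which container engine can RamaLama use?", chunks)
print(prompt)
```

In a real PDF Q&A demo the chunks would come from extracted PDF text and the scoring from vector similarity, but the prompt-assembly step looks much the same.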
Who said Halloween’s over? Move aside, I’m making my grand entrance 💅🖤✨
4K: youtube.com/shorts/ey_ZX...
#Ramalama #SecondLife
Here is the demo repository!
You can launch the model on a local cluster (using #ramalama as the inference server) or on an EKS cluster (using #vLLM).
Leave a ⭐ and share it around if everything works perfectly. Open an issue to start a shitstorm if it doesn't! 😉
👉 github.com/graz-dev/llm...
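One reason the local-vs-EKS switch described above is cheap: both `ramalama serve` and vLLM expose an OpenAI-compatible chat endpoint, so client code only swaps the base URL. A small sketch of that; the addresses, port numbers, and model name below are illustrative assumptions, not values from the demo repository.

```python
# Sketch: one client, two backends. Switching between the local cluster
# (ramalama as inference server) and EKS (vLLM) only changes the base URL,
# since both speak the OpenAI-compatible /v1/chat/completions protocol.
import json

BACKENDS = {
    "local": "http://localhost:8080/v1",        # assumed local serve address
    "eks": "http://vllm.example.com:8000/v1",   # assumed vLLM service address
}

def chat_request(backend: str, model: str, question: str) -> tuple[str, str]:
    """Build the endpoint URL and JSON body for an OpenAI-style chat call."""
    url = f"{BACKENDS[backend]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    })
    return url, body

url, body = chat_request("local", "granite", "What is RamaLama?")
print(url)
```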
Raised a bug about #ramalama today not playing well with #arm64 and #amd GPUs. However, if you force the base image, local inference does use #vulkan to run, and much faster than maxing out the CPU cores on my #altra.
Updating stickers on laptops... let's see how many I can tag
@matrix.org @instructlab.bsky.social @undergrounddonut.bsky.social @pytorch.org @github.com @kubefloworg.bsky.social @trustyai.bsky.social
#ramalama #docling #vllm #llmd #ospo #ansible #thinkpad #womeninfedora #expo2025 #cushingcenter
How RamaLama helps make AI model testing safer buff.ly/66eTipt
#aiml #Ramalama #Container
OpenAI’s gpt-oss language model is a beast, and matches models like o3 and o4-mini on coding, tool use, and more 🤯 but how can you run it yourself with zero trust security, in containers + automatic GPU acceleration? The #Ramalama project has you covered: developers.redhat.com/articles/202...
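Since the post highlights gpt-oss's tool use, here is a sketch of what a tool-calling request to a locally served, OpenAI-compatible endpoint looks like. The tool schema, model name, and endpoint are all made-up examples for illustration, not anything from the linked article.

```python
# Sketch of an OpenAI-style tool-calling request body. Once a model like
# gpt-oss is served behind an OpenAI-compatible endpoint, tool schemas are
# passed in the "tools" field of the chat request.
import json

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not a real API
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = {
    "model": "gpt-oss",  # illustrative model name
    "messages": [{"role": "user", "content": "Is it raining in Geneva?"}],
    "tools": [weather_tool],
}
print(json.dumps(request, indent=2))
```

The model replies with a tool call (name plus JSON arguments) rather than prose; the client executes the tool and sends the result back in a follow-up message.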
Nice to see #ramalama added to the Goose AI docs by @maxamillion!
github.com/block/goose/commit/3bec4...
RT @ericcurtin17: Want to run #RamaLama AI on OpenShift DevSpaces? Rohan Kumar has got you covered:
developers.redhat.com/articles/2025/06/13/how-...
🚀 Local AI just got simpler!
Podman AI Lab now uses RamaLama’s GPU-ready containers—unifying efforts to streamline model deployment on your machine.
🖥️ Faster setup
⚡ GPU acceleration
🧠 Consistent container experience
Learn more: buff.ly/FPqOYff
#AIDev #Podman #RamaLama #podmandesktop
Lukáš Růžička has prepared an article for you on how to run #AI locally on :fedora: #Fedora using #ramalama.
mojefedora.cz/ramalama-aneb-vyhanime-l...
@thenewstack.io interviews Eric and Dan, maintainers of RamaLama, about containerizing #AI development. If you haven't heard of the #RamaLama project before, this is a quick intro:
thenewstack.io/ramalama-pro...
#containers #Kubernetes
#ArturoPaniagua speaking for himself, #DaniLópez of Mondo Madrid, #SaraMorales of @efeeme.bsky.social, #MartaSalicrú of @radioprimavera.bsky.social, #JoséRamónPardo of #Ramalama, the unmistakable #DiegoAManrique and #JulioRuiz, #ElenaCabrera of @eldiario.es, #PatriciaGodes, a true original, and #MarisolGaldón ⏬
How RamaLama runs AI models in isolation by default
developers.redhat.com/articles/2025/02/20/how-...
#ramalama #podman #cncf #AI #artificialintelligence #security #opensource #containers #deepseek
#RamaLama is an open-source tool facilitating AI model deployment with OCI containers, supporting diverse environments. Get started now with this quick intro.
Listening To #RamaLamaRecords #RamaLama
#Collection #VariousArtists
#Bandcamp #lastFM #inmwt #inNewMusicWeTrust #KeepItPeel #KeepingItPeel #PlaySomeAtTheWrongSpeed #TeenageDreamsSoHardToBeat #indie #indiePop #indieRock
ramalama.bandcamp.com/album/rama-l...