Hashtag: #ramalama
Rama Lama Records 2025, by Rama Lama Records (14-track album)

Listening To a Various Music Compilation from #RamaLamaRecords 2025
#Sweden 🇸🇪

#indie #indieRock #Rock #Pop #indiePop #Bandcamp #RamaLama #lastFM #musicSky #bbcR1 #bbcR6 #drp6dk

ramalama.bandcamp.com/album/rama-l...


I’m excited to bring #Ramalama, the container-native project to run AI models locally, to @cern.voxxeddays.ch 🙌

We’ll show how fast devs can spin up an open source model using their favorite container engine (shoutout @podmanio.bsky.social) for RAG (Q&A on PDFs) and Agentic (calling APIs) 🤖
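For the curious, a minimal sketch of what "spinning up an open source model" with the RamaLama CLI might look like; the model name and port are illustrative assumptions, not taken from the talk:

```shell
# RamaLama auto-detects podman or docker and pulls a GPU-enabled
# runtime image when acceleration is available.
ramalama pull ollama://granite3-moe   # model name is an example
ramalama serve granite3-moe           # OpenAI-compatible local endpoint

# Point your RAG or agent code at the local endpoint
# (the default port may differ between versions):
curl http://localhost:8080/v1/models
```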

Build your own local RAG with Ramalama and granite model Following this tutorial, you can quickly set up your own local and isolated RAG leveraging Ramalama and an SLM (small language model) like…

Build your own local RAG with Ramalama and granite model buff.ly/jGnxTmg
#Ramalama #aiml #rag


Who said Halloween’s over? Move aside, I’m making my grand entrance 💅🖤✨

4K: youtube.com/shorts/ey_ZX...

#Ramalama #SecondLife


Here is the demo repository!

You can launch the model on a local cluster (using #ramalama as the inference server) or on an EKS cluster (using #vLLM).

Leave a ⭐ and share it around if everything works perfectly. Open an issue to start a shitstorm if it doesn't! 😉

👉 github.com/graz-dev/llm...
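A rough sketch of the two deployment paths the repo describes; the manifest filename and service name here are illustrative assumptions, not taken from the repository:

```shell
# Local path: RamaLama as the inference server
ramalama serve <model>            # OpenAI-compatible API on localhost

# EKS path: vLLM behind a Kubernetes Service
kubectl apply -f vllm-deployment.yaml   # example manifest name
kubectl get svc vllm                    # find the endpoint to query
```

Both paths expose an OpenAI-style API, so client code can stay the same and only the base URL changes.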


Raised a bug about #ramalama today not playing well with #arm64 and #amd GPUs. However, if you force the base image, local inference does use #vulkan, and runs much faster than maxing out the CPU cores on my #altra.


Simplify AI data integration with RamaLama and RAG
buff.ly/a81SDkz
#Docling #Ramalama #podman #aiml


#NowPlaying #MegaShebang #RamaLama #PuttingTheBomp


Updating stickers on laptops... let's see how many I can tag

@matrix.org @instructlab.bsky.social @undergrounddonut.bsky.social @pytorch.org @github.com @kubefloworg.bsky.social @trustyai.bsky.social

#ramalama #docling #vllm #llmd #ospo #ansible #thinkpad #womeninfedora #expo2025 #cushingcenter


How RamaLama helps make AI model testing safer buff.ly/66eTipt
#aiml #Ramalama #Container

How to run OpenAI's gpt-oss models locally with RamaLama | Red Hat Developer Learn to run and serve OpenAI's gpt-oss models locally with RamaLama, a CLI tool that automates secure, containerized deployment and GPU optimization

OpenAI’s gpt-oss language model is a beast, and matches models like o3 and o4-mini on coding, tool use, and more 🤯 but how can you run it yourself with zero trust security, in containers + automatic GPU acceleration? The #Ramalama project has you covered: developers.redhat.com/articles/202...
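A hedged sketch of what that looks like in practice; the exact model reference depends on your registry and RamaLama version:

```shell
# RamaLama runs the model inside a container with, by default,
# no network access: the zero-trust isolation the article describes.
ramalama run gpt-oss        # interactive chat in the terminal

# Or serve it for tools that speak the OpenAI API:
ramalama serve gpt-oss
```

GPU acceleration is picked up automatically when a supported accelerator and container runtime are present; otherwise it falls back to CPU.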

Add documentation for running with Ramalama local model serving in OCI Containers (#1973) · block/goose@3bec469, signed off by Adam Miller

Nice to see #ramalama added to the Goose AI docs by @maxamillion!

github.com/block/goose/commit/3bec4...

How to run AI models in cloud development environments | Red Hat Developer Explore using RamaLama to run private AI inference in cloud development environments and improve productivity. Follow this tutorial to get started

RT @ericcurtin17: Want to run #RamaLama AI on OpenShift DevSpaces? Rohan Kumar has got you covered:

developers.redhat.com/articles/2025/06/13/how-...

Podman AI Lab and RamaLama unite for easier local AI | Red Hat Developer Learn how Podman AI Lab and RamaLama work together to simplify local AI model execution, using containers and GPU support for faster, easier AI development

🚀 Local AI just got simpler!
Podman AI Lab now uses RamaLama’s GPU-ready containers—unifying efforts to streamline model deployment on your machine.
🖥️ Faster setup
⚡ GPU acceleration
🧠 Consistent container experience
Learn more: buff.ly/FPqOYff
#AIDev #Podman #RamaLama #podmandesktop


Lukáš Růžička has prepared an article for you on how to use #AI locally on :fedora: #Fedora with #ramalama.

mojefedora.cz/ramalama-aneb-vyhanime-l...

RamaLama Project Brings Containers and AI Together Project founders Eric Curtin and Dan Walsh explain how RamaLama makes working with AI boring (aka easier) for developers by using OCI containers.

@thenewstack.io interviews Eric and Dan, maintainers of RamaLama about containerizing #AI development. If you haven't heard of the #RamaLama project before, this is a quick intro:

thenewstack.io/ramalama-pro...

#containers #Kubernetes

Seeking a future for the music press: We attended the 1st Music Press Congress, organized by the association Periodistas Asociados de Música (PAM) and held on March 10 in Madrid.

#ArturoPaniagua representing himself, #DaniLópez of Mondo Madrid, #SaraMorales of @efeeme.bsky.social, #MartaSalicrú of @radioprimavera.bsky.social, #JoséRamónPardo of #Ramalama, the unmistakable #DiegoAManrique and #JulioRuiz, #ElenaCabrera of @eldiario.es, #PatriciaGodes, larger than life as ever, and #MarisolGaldón

How RamaLama runs AI models in isolation by default | Red Hat Developer Discover how the RamaLama open source project can help isolate AI models for testing and experimenting.

How RamaLama runs AI models in isolation by default

developers.redhat.com/articles/2025/02/20/how-...

#ramalama #podman #cncf #AI #artificialintelligence #security #opensource #containers #deepseek

Getting Started with RamaLama: Streamlining AI Deployment with OCI Containers. RamaLama is an open-source project developed to simplify AI model deployment and management using OCI (Open Container Initiative) containers. RamaLama enables seamless execution of AI workloads across different hardware configurations, supporting both GPU-accelerated and CPU-based environments. By leveraging container engines like Podman and Docker, RamaLama includes all necessary dependencies, eliminating complex installation … Continue reading

#RamaLama is an open-source tool facilitating AI model deployment with OCI containers, supporting diverse environments. Get started now with this quick intro.
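The basic workflow from the intro can be sketched in a few commands; the model name here is an illustrative assumption, and subcommand flags may vary by version:

```shell
# Assuming podman or docker is already installed:
ramalama pull ollama://tinyllama   # fetch a model from a registry
ramalama list                      # show locally available models
ramalama run tinyllama             # chat with it in the terminal
ramalama serve tinyllama           # expose a REST endpoint for apps
```

The same commands work whether the container runs CPU-only or with GPU acceleration, which is the portability point the post is making.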

Rama Lama Records 2016-2017, by Rama Lama Records (9-track album)

Listening To #RamaLamaRecords #RamaLama

#Collection #VariousArtists

#Bandcamp #lastFM #inmwt #inNewMusicWeTrust #KeepItPeel #KeepingItPeel #PlaySomeAtTheWrongSpeed #TeenageDreamsSoHardToBeat #indie #indiePop #indieRock

ramalama.bandcamp.com/album/rama-l...

Red Hat Developing Ramalama To "Make AI Boring" By Offering Great AI Simplicity & Ease Of Use Red Hat engineers have been developing Ramalama as a new open-source project that hopes to "make AI boring": an inferencing tool striving for simplicity so users can quickly and easily deploy AI w...

#RedHat Developing #Ramalama To "Make #Ai Boring" By Offering Great AI Simplicity & Ease Of Use
www.phoronix.com/news/Red-Hat...
