AI is moving fast. Is your infrastructure keeping up?
The CIQ team is at #HumanX2026 in San Francisco this week. Come find us and let's talk open source, Rocky Linux, and running AI at scale.
April 6-9 | Moscone Center
humanx.co
Does your stack actually run on Rocky Linux? Not in theory: on the specific product you're deploying. Arthur Tyde wrote about why that question matters and how C3 answers it.
Worth a read: bit.ly/3Q5ZwuS
#RockyLinux #EnterpriseLinux #Linux #OpenSource
Your Linux install shouldn't start as a to-do list for your security team.
RLC Pro Hardened hits 96% STIG compliance out of the box — no remediation scripts, no playbooks. We ran the numbers.
Read the benchmark: bit.ly/4bRoo2b
Glad to see our collaboration with @AMD getting attention. The goal: give organizations a real choice, a complete, validated, production-ready stack for AI and HPC that doesn't dictate what hardware you run on. That's what we're building with RLC+ AMD. More to come. finance.yahoo.com/sectors/tech...
CIQ just launched C3, a free compatibility catalog for Rocky Linux, RLC Pro, RLC Pro AI, and RLC Pro Hardened. Hardware vendors, ISVs, and AI platform providers can verify and publish compatibility now.
Learn more at bit.ly/4bPo7g9.
Live in an hour!
The OS under your GPU fleet determines how much performance you actually get. We're showing why that matters and what production-ready looks like. Live demo included.
Register: bit.ly/46TMpmd
#AIInfrastructure #GPU
Running nvidia-smi on a GPU node used to take 14 lines of YAML and five CLI commands. In Fuzzball v3.2, it takes one.
Read Jonathon Anderson's breakdown of fuzzball run and how it changes the way you interact with HPC compute:
bit.ly/4bXyCwC
#HPC #AI #DevTools #Fuzzball
AI engineers at $200K fully loaded spend 30 to 50% of their time on infrastructure, not models. That's $60K to $100K per engineer per year configuring CUDA instead of shipping AI. There's a direct fix.
bit.ly/4uTNXqC
#AIInfrastructure #GPU
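The math behind those figures is simple to check. A minimal sketch (the helper function is illustrative, not part of any CIQ tooling; the salary and time-split numbers come straight from the post):

```python
# Back-of-envelope cost of infrastructure toil per AI engineer,
# using the figures quoted above: $200K fully loaded, 30-50% of
# time spent on infrastructure instead of models.

def infra_cost(fully_loaded_salary: float, infra_fraction: float) -> float:
    """Annual cost of the share of an engineer's time spent on infrastructure."""
    return fully_loaded_salary * infra_fraction

low = infra_cost(200_000, 0.30)   # 30% of time on infrastructure
high = infra_cost(200_000, 0.50)  # 50% of time on infrastructure
print(f"${low:,.0f} to ${high:,.0f} per engineer per year")
```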
Our very own IT Guy and #DamenKnight teamed up for the latest episode of the #ITGuyShow to discuss the recent release of RLC Pro AI. Go give it a listen on YouTube or your favorite podcast player:
Audio: podcast.itguyeric.com/20
Video: youtu.be/XzW1JmNCYzs
#SysAdmin #AI #Linux #RLCPro #AIEngineer
AI can now read compiled binaries that were considered opaque for decades. That changes the security calculus for every organization running proprietary software.
Gregory Kurtzer explores what this means for open source in a new piece for FOSS Force: fossforce.com/2026/03/ai-i...
Docker requires root access. On a shared HPC cluster with hundreds of researchers, that's a non-starter. New post covers how Apptainer solves this and integrates with Slurm, provisioning, and GPU workflows.
bit.ly/4cfNk3i
#HPC #OpenHPC
Rocky Linux, RLC+, or RLC Pro: which one fits your infrastructure?
We broke down the tradeoffs: GPU drivers, LTS, FIPS, bug fixes, and when free stops being enough.
ciq.com/blog/rocky-linux-rlc-plu...
Your distro graveyard is growing.
CIQ's @itguyeric is heading to #LFNW2026 to talk "Escaping the End-of-Life Nightmare: Lessons from the Linux Graveyard."
April 24-26 | Bellingham, WA
linuxfestnorthwest.org
The future of enterprise AI is infrastructure you control. Meet the CIQ team at HumanX 2026 to talk RLC Pro, RLC Pro AI, and Fuzzball for sovereign AI.
San Francisco, April 6–9
https://www.humanx.co
#HumanX2026 #EnterpriseAI #SovereignAI
Waiting for the patch is already too late.
CIQ's Brady Dibble is speaking at #LFNW2026: "Layered Security Hardening in Rocky Linux: Protection Before the Patch."
April 24-26 | Bellingham, WA
linuxfestnorthwest.org
Phoronix compared the CIQ + AMD collab to what Intel did with Clear Linux. AMD-optimized Rocky Linux, Instinct hardware, ROCm, day-zero deployment.
bit.ly/3NRWKbQ
#RockyLinux #AMD #ROCm
When your cluster management platform is tied to a hardware vendor, every procurement negotiation starts with a constraint you didn't choose. New post covers what real vendor independence looks like in HPC.
https://bit.ly/3NYJuSW
#HPC #Warewulf #OpenHPC
The OS under your GPU fleet determines how much performance you actually get. On April 2, we're showing why that matters and what production-ready looks like. Live demo included.
Register: https://bit.ly/46TMpmd
#AIInfrastructure #GPU
Enterprise AI and HPC, simplified: AMD + CIQ deliver an AMD-optimized Rocky Linux foundation for faster, production-ready deployments. ciq.com/press-releas...
#AI #Technology #OpenSource #Linux #HPC #ROCm
Step-by-step walkthrough of the CIQ portal:
Account setup, org management, catalog navigation, first download, and access token configuration. Under 10 minutes from registration to deployment.
bit.ly/4bGHTc9
#RLCPro #RockyLinux #Linux #OpenSource #CIQ
NVIDIA Dynamo 1.0: up to 7x inference throughput on the same Blackwell hardware. The catch: that gain is fully dependent on a validated, stable OS foundation underneath it. RLC Pro AI is the foundation Dynamo was built to run on.
ciq.com/blog/nvidia-...
Tokens per watt is the new CEO metric.
Most teams are losing it at the OS layer. Unvalidated drivers, manual tuning cycles, configs that cap throughput before the model runs.
RLC Pro AI ships validated. Tokens per watt goes up before you run a single workload.
ciq.com/blog/tokens-...
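The metric itself is easy to track. A minimal sketch, with hypothetical numbers (the throughput and power figures below are made up for illustration, not CIQ benchmark results):

```python
# Tokens per watt: sustained inference throughput divided by power draw.
# All numbers here are hypothetical, purely to show the shape of the metric.

def tokens_per_watt(tokens_per_second: float, avg_power_watts: float) -> float:
    """Inference efficiency: token throughput per watt of average draw."""
    return tokens_per_second / avg_power_watts

# Same hypothetical 700 W GPU; only the OS-level configuration changes.
baseline = tokens_per_watt(1_400, 700)  # throughput capped by untuned config
tuned = tokens_per_watt(1_540, 700)     # +10% throughput at the same power
print(f"baseline: {baseline:.2f} tok/s/W, tuned: {tuned:.2f} tok/s/W")
```

Because power is constant in this sketch, any throughput gain translates one-for-one into tokens per watt.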
In benchmarks on identical hardware: up to 32% faster vision throughput, up to 10% faster LLM inference. Same GPUs. Different OS.
ciq.com/blog/nvidia-just-called-out-the-inference-era-we-built-the-os-for-it/
Most Linux AI deployments run on distros built for research or general enterprise stability. Neither was designed for production inference at scale.
RLC Pro AI was. Pre-validated NVIDIA CUDA + DOCA-OFED. Every kernel parameter chosen for throughput. Ready at first boot.
Jensen Huang called it at #NVIDIAGTC: inference has overtaken training as the dominant AI workload.
The OS under your GPUs is now part of the performance equation. Here's why that matters, and what we built for it. 🧵
SCaLE 23x delivered. Talks, a hardening workshop, donuts for our expo neighbors, and Uno No Mercy at Game Night. Full recap on the blog.
https://bit.ly/4bhGv0M
#SCaLE #OpenSource #RockyLinux
The OS under your GPU fleet determines how much performance you actually get. On April 2, we're showing why that matters and what production-ready looks like. Live demo included.
Register: https://bit.ly/46TMpmd
#AIInfrastructure #GPU #RLCPro #RLCProAI
HPC and AI workloads are converging. Is your infrastructure ready?
Watch David Godlove break down what Fuzzball is, why it exists, and why neither traditional HPC schedulers nor Kubernetes are cutting it for modern research teams:
https://youtu.be/tc4Z1pewr-c
#HPC #AI #CloudComputing #Fuzzball
Miss last week's webinar on RLC Pro?
Brady Dibble and Eric The IT Guy covered LTS, FIPS 140-3, direct bug fixes, and what vendor accountability actually looks like in production.
Recording is live: https://youtu.be/_uDtPH1Bay4
#EnterpriseLinux #RockyLinux #RLCPro #LTS #FIPS
One command. One shell. One GPU job. That's all it takes with Fuzzball v3.2.
Read Jonathon Anderson's full breakdown of what's new, including real-time workflow events and self-service password management:
https://bit.ly/4cM2vCd
#HPC #AI #CloudComputing #Fuzzball