
Posts by Taneem

Post image

I had an amazing experience attending @fastcompany.com's Most Innovative Companies Summit. Proud to represent Red Hat as one of the most innovative companies with my colleague @terrytangyuan.xyz.

10 months ago
Technically Speaking | Scaling AI inference with open source: Explore the critical role of production-quality AI inference, the power of open source projects like vLLM, and the future of the enterprise AI stack.

Check out the new episode of Technically Speaking w/ Chris Wright, "Scaling AI inference with open source," ft. Brian Stevens: red.ht/4dJiBLc

10 months ago
Llama 4 - a meta-llama Collection: Llama 4 release

The FP8-quantized version of Llama 4 Maverick can be downloaded from Hugging Face: huggingface.co/collections/...

1 year ago

The official release by Meta includes an FP8-quantized version of Llama 4 Maverick 128E, supported by Red Hat's LLM Compressor library. This lets the 128-expert model fit on a single NVIDIA 8xH100 node, delivering more performance at lower cost.

1 year ago
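The memory arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope illustration only: the ~400B total parameter count for Maverick and the 80 GB of HBM per H100 are assumptions, and the estimate covers weights alone, ignoring KV cache and activation memory.

```python
# Back-of-envelope: why FP8 weights fit on one 8xH100 node but BF16 doesn't.
TOTAL_PARAMS = 400e9   # assumed total parameter count for Llama 4 Maverick
H100_MEM_GB = 80       # HBM per H100 GPU
NODE_GPUS = 8

def weight_memory_gb(params, bytes_per_param):
    """Memory needed to hold the model weights, in GB."""
    return params * bytes_per_param / 1e9

bf16_gb = weight_memory_gb(TOTAL_PARAMS, 2)  # 16-bit weights -> 800.0 GB
fp8_gb = weight_memory_gb(TOTAL_PARAMS, 1)   # 8-bit weights  -> 400.0 GB
node_gb = H100_MEM_GB * NODE_GPUS            # 640 GB across the node

print(f"BF16: {bf16_gb} GB, FP8: {fp8_gb} GB, node capacity: {node_gb} GB")
# BF16 (800 GB) exceeds the node's 640 GB; FP8 (400 GB) fits with headroom.
```

With the weights halved, the remaining ~240 GB of headroom is what makes room for KV cache and larger batch sizes in practice.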
Llama 4 herd is here with Day 0 inference support in vLLM | Red Hat Developer: Discover the new Llama 4 Scout and Llama 4 Maverick models from Meta, with mixture of experts architecture, early fusion multimodality, and Day 0 model support.

Thanks to the Meta AI team for close collaboration with the vLLM community, enabling developers to experiment with Llama 4 immediately. Our blog shares more details on the Llama 4 release and how to get started with inference in vLLM today: developers.redhat.com/articles/202...

1 year ago
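For readers who want a concrete starting point, here is a minimal offline-inference sketch using vLLM's Python API. It is hedged: the Hugging Face model id is an assumption (check the linked blog for the exact checkpoint), the `build_prompts` helper is purely illustrative, and the `generate` call requires a multi-GPU node, so it is kept behind `main()`.

```python
# Sketch: offline batch inference with vLLM (assumed model id; needs GPUs).

def build_prompts(questions):
    """Wrap plain questions in a minimal instruction prefix (illustrative only)."""
    return [f"Answer concisely: {q}" for q in questions]

def main():
    # Heavy import kept local: vLLM initializes GPU state on engine creation.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed id
        tensor_parallel_size=8,  # shard the model across the 8xH100 node
    )
    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(build_prompts(["What is mixture of experts?"]), params)
    for out in outputs:
        print(out.outputs[0].text)

if __name__ == "__main__":
    main()
```

`tensor_parallel_size=8` matches the single-node 8xH100 setup mentioned above; on a different GPU count, set it to the number of GPUs available.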

This is really nice! Thank you @stu.bsky.social

1 year ago