
Posts by Inception Labs

Inception Labs: We are leveraging diffusion technology to develop a new generation of LLMs. Our dLLMs are much faster and more efficient than traditional autoregressive LLMs, and diffusion models are more accurate, ...

Check out our blog post: www.inceptionlabs.ai/news

1 year ago

Try Mercury Coder on our playground at chat.inceptionlabs.ai

1 year ago

On Copilot Arena, developers consistently prefer Mercury's generations. It ranks #1 on speed and #2 on quality. Mercury is the fastest code LLM on the market.

1 year ago

We achieve over 1,000 tokens/second on NVIDIA H100s. Blazing-fast generation without specialized chips!

1 year ago

Mercury Coder diffusion large language models match the performance of frontier speed-optimized models like GPT-4o Mini and Claude 3.5 Haiku while running up to 10x faster.

1 year ago

We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.

1 year ago
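The "parallel, coarse-to-fine" idea above can be sketched in a few lines: generation starts from a fully masked sequence and fills in several positions per step, rather than one token at a time. This is only an illustrative toy, not Mercury's actual method; the real model would choose positions and tokens from learned confidence scores, whereas here a random stand-in is used.

```python
import random

MASK = "<mask>"

def toy_denoise_step(tokens, vocab, fraction):
    """Fill in a fraction of the still-masked positions in parallel.

    A real dLLM would rank positions by model confidence; picking
    them at random here is purely a stand-in for illustration.
    """
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    if not masked:
        return tokens
    k = max(1, int(len(masked) * fraction))
    for i in random.sample(masked, min(k, len(masked))):
        tokens[i] = random.choice(vocab)  # stand-in for the model's prediction
    return tokens

def coarse_to_fine_generate(length, vocab, fraction=0.5):
    """Start fully masked, then refine over a few parallel steps."""
    tokens = [MASK] * length
    steps = 0
    while MASK in tokens:
        tokens = toy_denoise_step(tokens, vocab, fraction)
        steps += 1
    return tokens, steps

out, steps = coarse_to_fine_generate(8, ["the", "cat", "sat"])
```

With `fraction=0.5`, an 8-token sequence resolves in about 4 steps instead of 8 sequential ones, which is the intuition behind the speed advantage over autoregressive decoding.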

A new generation of LLMs . . . coming soon . . .

1 year ago