We gave Cline access to @Blender.
Objects, materials, custom shaders, vector math, animation. All through natural language via the Blender MCP. The clip below is a custom animated shader it wrote using vector math.
Full exploration: youtube.com/live/zZRMTO...
DGX Spark: 42.9 tok/s. RTX 4090: 8.7 tok/s. Same weights. 4.9x gap.
#AIonRTX
cline.bot/blog/what-a...
Three Cline agents. Same 120B model. One rule: terminate your opponents' processes before they terminate yours.
We tested raw inference across an @NVIDIA_AI_PC DGX Spark, an RTX 4090, and cloud. Time-to-first-token decides the race. Throughput decides who ships.
JWT auth from a single sidebar message.
5 tasks. 3 parallel branches. 1 convergence point. Registration, login, middleware, route protection.
All 20 prompts: cline.bot/blog/20-one...
Show us yours: #OneShotShip
Unit tests and integration tests fan out in parallel. Both converge on a CI coverage gate.
One prompt. Four tasks. Full test suite with 80% coverage enforcement.
The tests you've been putting off took one prompt.
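A coverage gate like the one above can be sketched as a CI job that both test branches converge on. This is an illustrative GitHub Actions fragment, not Cline output; the job names, the `src` path, and the use of pytest-cov are assumptions:

```yaml
# Hypothetical gate job: unit and integration tests run as parallel
# jobs; this job depends on both and enforces the coverage floor.
coverage-gate:
  needs: [unit-tests, integration-tests]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: pip install pytest pytest-cov
    # --cov-fail-under fails the step (and the pipeline) below 80%.
    - run: pytest --cov=src --cov-fail-under=80
```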
Two agents reformat the same codebase at the same time. Zero merge conflicts.
One runs ESLint. One runs Prettier. Separate worktrees, separate commits. This is what parallel execution actually looks like.
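The zero-conflict trick is plain git worktrees: each agent gets its own working directory and branch, so commits never collide. A minimal sketch (the branch and directory names are illustrative, and `git init -b` needs git 2.28+):

```shell
# Scratch repo with one initial commit.
work=$(mktemp -d)
git init -q -b main "$work/repo"
cd "$work/repo"
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m init
# One worktree per agent: isolated directory, isolated branch.
git worktree add -q "$work/lint-agent" -b run-eslint
git worktree add -q "$work/format-agent" -b run-prettier
git worktree list   # main checkout plus the two agent worktrees
```

Each agent commits on its own branch; merging (or rebasing) happens once, after both finish.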
Kanban isn't a project management tool. It's a prompt-to-code pipeline.
20 copy-paste prompts that decompose into linked tasks, fan out parallel agents, and ship committed code.
All 20: cline.bot/blog/20-one...
npm i -g cline 🧵
Trinity Large Thinking from @arcee_ai is now available in Cline.
Open weights, Apache 2.0. Built to be inspected, post-trained, distilled, and self-hosted.
Come to our office hours today at noon PT; we'll be on the Voice Channel on Discord. Bring any questions you have on the latest models, the kanban, or the ways we can help you build: discord.gg/cline
GLM 5 Turbo just dropped in Cline.
Z.ai's latest model, available through Cline, @OpenRouter, and @Zai_org
New setting just dropped: Lazy Teammate Mode.
Turns Cline into the most useless engineer alive. 8 categories of excuses. Zero lines of code.
Run LLMs locally on NVIDIA DGX Spark with @vllm_project. Hands-on workshop this Thursday in SF taught by @forkbombETH at @frontiertower.
March 26, 7-10 PM.
luma.com/run-large-l...
@NVIDIAAIDev
CoreWeave's infrastructure ensures your agent delivers sustained performance on the heaviest tasks: no hanging, no dropped context.
Update your Cline CLI, plug in your W&B key, and get back to building.
Come see it in action at CoreWeave's booth (#913) at GTC.
www.coreweave.com/news/cline-...
Cline is integrating W&B Inference, powered by CoreWeave's bare-metal infrastructure, directly into the Cline ecosystem.
Cline has surpassed 5 million installations. A core tenet has always been open choice: your models, your IDE, your inference provider.
This integration deepens that commitment.
If anyone is looking for smaller conversations during RSAC week, we're hosting an off-the-record dinner discussing the emerging challenges around agentic software development and security.
Small group, good food, candid discussion.
luma.com/277tdqmr
Runbooks don't enforce anything. That's the whole problem.
.clinerules playbooks do. Same checklist, except now it actually runs.
cline.bot/blog/prompt...
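A .clinerules playbook is just a rules file the agent actually executes against. The checklist below is an illustrative sketch of the idea, not copied from the Cline docs; the filename and rules are assumptions:

```markdown
<!-- .clinerules/deploy-playbook.md (hypothetical) -->
Before any deploy-related task:
1. Run the full test suite; stop if anything fails.
2. Never commit directly to main; open a branch and a PR.
3. Update CHANGELOG.md with every user-facing change.
```

Unlike a wiki runbook, the agent reads this on every task, so the checklist is enforced instead of merely documented.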
Cline's ComfyUI MCP lets you build generative AI workflows from your code editor using natural language. Tonight we're cohosting a hands-on crash course in SF covering the fundamentals. Come through.
luma.com/comfyui-cra...
Hosting a hackathon in SF today. Enterprise AI agents on Azure infrastructure. Unlimited cloud resources; build something real in one afternoon.
luma.com/musa-labs-h...
We wrote up the exact process: how we set up the eval pipeline, the failure patterns we found, and the fixes that moved the needle. The method (hill climbing) works with any agent, not just Cline.
Full guide:
cline.gg/hill-climbing
A potential partner asked for our benchmark numbers. At the time, benchmarks had us behind other agents. We spent a weekend fixing that: ran Cline against Terminal Bench's 89 real-world tasks, diagnosed every failure, and shipped fixes. 47% → 57%.
GPT 5.3 Codex just landed on Cline (v3.67.1). What's new:
> 25% faster than 5.2 Codex
> #1 on SWE-Bench Pro (4 different languages)
> Nearly 2x on OSWorld (38% → 65%)
> Fewer tokens per task than any prior OpenAI model
Select the model and try it on your repo.
New in Cline 3.64.0: Claude Sonnet 4.6
@AnthropicAI's latest iteration of Sonnet just dropped, and it's free to use with the Cline provider until Feb 18 at noon PST. Update Cline wherever you code and try it out.
github.com/cline/cline...
`npm install -g cline` and start building.
Now available on Windows, Mac, and Linux.
Read more: cline.gg/cli
We built CLI 2.0 by listening.
The community told us the terminal needed to be a first-class surface for AI coding, not just an afterthought.
We studied what other tools were doing, looked at how developers actually work in their terminals, and redesigned the entire experience around that.
What's new in Cline CLI 2.0:
+ Completely redesigned terminal UI with interactive mode
+ Parallel agents with isolated state per instance, no manual instance creation
+ Improved headless mode for CI/CD pipelines
+ Added ACP support for Zed, Neovim, and Emacs
Introducing Cline CLI 2.0: An open-source AI coding agent that runs entirely in your terminal.
Parallel agents, headless CI/CD pipelines, ACP support for any editor, and a completely redesigned developer experience. MiniMax M2.5 and Kimi K2.5 are free to use for a limited time.
Read how to make the most out of MiniMax M2.5 with Cline in our blog:
cline.bot/blog/minima...
M2.5 runs at 100 tokens per second. That's 3x faster than Opus. At $0.06/M blended with caching, you can run subagents in the CLI and just leave them going.
Fast models exist. Cheap models exist. Both at SOTA performance is new.
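Taking the quoted figures at face value (100 tok/s sustained, $0.06 per million blended tokens), the cost of leaving a subagent running works out like this:

```shell
# Tokens generated in one hour at 100 tok/s:
tokens_per_hour=$((100 * 3600))            # 360000 tokens
# Dollars per hour at $0.06 per 1M blended tokens:
awk -v t="$tokens_per_hour" \
    'BEGIN { printf "USD/hour: %.4f\n", t * 0.06 / 1e6 }'
# About two cents per hour of continuous generation.
```

At that rate, a subagent left running all day costs well under a dollar, which is what makes "just leave them going" economically plausible.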