#Qwen35
Original post on mastodon.social

LMArena, and specifically its coding leaderboard, where the 27B Qwen model sits 20 positions above the 675B Mistral model, shows quite clearly how slow Mistral is and how far they are lagging behind the Chinese open-source competition, not to mention the American SOTA models.

#mistral #mistralai […]


I had some fun again this afternoon (the bad weather left me no choice).
A local LLM is nice, but without web search it's not great.
We had played around live with Wikipedia lookups.
This time the whole Web gets queried :)
#python #streamlit #duckduckgo #llamacpp #qwen35
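The pattern the post describes, web search results fed into a local LLM as context, can be sketched roughly like this. The function names and the prompt format are my assumptions, not the author's code; the search and completion backends are passed in as callables so the sketch stays library-agnostic:

```python
# Sketch of a search-augmented local LLM loop: fetch web snippets,
# pack them into a grounded prompt, and ask the local model.

def build_prompt(question: str, snippets: list[dict]) -> str:
    """Pack search results into a grounded prompt for the local model."""
    context = "\n".join(f"- {s['title']}: {s['body']}" for s in snippets)
    return (
        "Answer the question using only the web snippets below.\n\n"
        f"Snippets:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, search, complete) -> str:
    """search(q, max_results) returns result dicts; complete(prompt) calls the LLM."""
    snippets = search(question, max_results=5)
    return complete(build_prompt(question, snippets))
```

In a setup like the one in the post, `search` might wrap a DuckDuckGo client and `complete` a request to a llama.cpp server, with the whole loop driven from a Streamlit UI; the exact wiring depends on the libraries used.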

Original post on mastodon.uno

After all the hype around the Qwen 3.5 release, I ran a test: building a POC for a client in the "log collection" space.
Long story short: I had it produce a .md document collecting the whole POC, then I tested it.

Outcome:
- quite a few errors
- instructions ignored
- it invents […]

Qwen3.5-27B Distilled vs Base: What You Gain

Comparing the Claude Opus reasoning-distilled Qwen3.5-27B against the base model: what chain-of-thought distillation adds and what it costs in context, multimodal support, and reliability.

awesomeagents.ai/tools/qwen-27b-distilled...

#Qwen #Qwen35 #Comparison
Qwen3.5 MoE vs Kimi K2.5 for Coding - Price Breakdown

Kimi K2.5 leads every coding benchmark, but Qwen3.5-35B-A3B delivers 87-93% of that performance at 3-4x lower cost and runs on a single consumer GPU. Here is the full breakdown.

awesomeagents.ai/tools/qwen-3-5-moe-vs-ki...

#Qwen35 #KimiK25 #Moe
#jowkame_ai_research #qwen35

A bit more of my thoughts on Qwen3.5-35b-a3b as of today.

- The first model that is actually interesting to test. Previous ones I simply deleted once they "broke" on elementary tasks. I deleted GLM as well, since it constantly falls into a loop. In practice, this is the only one left for now.
Qwen 3.5 Small Series Ships Four Models From 0.8B to 9B

Alibaba completes the Qwen 3.5 lineup with four small models - 0.8B, 2B, 4B, and 9B - all natively multimodal, with 262K context and Apache 2.0 licensing. The 9B outperforms last-gen Qwen3-30B and beats GPT-5-Nano on vision benchmarks.

awesomeagents.ai/news/qwen-3-5-small-mode...

#Qwen #Qwen35 #Alibaba
📰 Qwen3.5 Sparks Debate as Potential Coding Game-Changer

Qwen3.5 is being hailed by some Reddit users as a potential game-changer for coding, particularly when used with local LLMs and older GPUs. Users on r/LocalLLaMA report improved productivity and workflow efficiency compared to previous models. One user noted achieving 4-6 hours of minimally sup[…]

www.clawnews.ai/qwen3-5-sparks-debate-as...

#Qwen35 #LocalLLM #Coding
📰 Qwen3.5 Models Gain Traction for Performance, Efficiency

The Qwen3.5 series, particularly the 35B-A3B model, is gaining popularity in the LocalLLaMA community for its impressive performance and efficiency. Benchmarks show it achieving 45 tokens per second on a single 16GB 5060 GPU with q8_0 KV-cache quantization. Its ability to handle large contex[…]

www.clawnews.ai/qwen3-5-models-gain-trac...

#LocalLLaMA #Qwen35 #AIModels
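A setup like the one benchmarked above would typically be a llama.cpp server run with the KV cache quantized to q8_0. The GGUF filename below is a placeholder, not a real release artifact, and the flags are the standard llama.cpp ones, not taken from the article:

```shell
# Placeholder GGUF filename -- substitute an actual quantized build.
# -ngl 99 offloads all layers to the GPU; --cache-type-k/-v q8_0 quantize
# the KV cache to 8-bit, which is what frees up room for long contexts
# on a 16GB card. Quantized V-cache may require flash attention enabled,
# depending on the llama.cpp build.
llama-server -m qwen3.5-35b-a3b-Q4_K_M.gguf \
  -ngl 99 \
  --cache-type-k q8_0 --cache-type-v q8_0
```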
📰 Qwen3.5 Gains Popularity Among Developers for Production Use

Qwen3.5 is gaining traction among developers for production use due to its performance and efficiency. Community-driven experiments and benchmarks show the local language model offers improvements, making it a strong contender in the LLM space and a viable alternative to subscription-based mode[…]

www.clawnews.ai/qwen3-5-gains-popularity...

#AI #LLM #Qwen35
Qwen3.5 Medium Is Here and It Just Ran Frontier-Level AI on a Gaming PC
https://softtechhub.us/2026/02/26/qwen3-5-medium-is-here/

#Alibaba #Qwen35 #QwenAI #AIModels #GenerativeAI #CloudAI #EdgeAI #ArtificialIntelligence #TechInnovation #MachineLearning #LLMs #AIOnDevice #OpenSourceAI #AIDevelopment #FutureOfAI #DeepLearning #AIResearch #TechTrends #SmartAI #NextGenAI
Qwen 3.5 FP8 Weights Drop - How to Actually Deploy a 397B Model on 8 GPUs

Alibaba releases official FP8-quantized weights for the Qwen 3.5 flagship and the 27B dense model, cutting memory requirements roughly in half and enabling deployment on 8x H100 GPUs with native vLLM and SGLang support.

awesomeagents.ai/news/qwen-3-5-fp8-weight...

#Qwen #Qwen35 #Alibaba
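With native vLLM support, serving a pre-quantized FP8 checkpoint across 8 GPUs is a single command. The Hugging Face repo id below is hypothetical; substitute the actual FP8 checkpoint name from the release:

```shell
# Hypothetical repo id -- use the real FP8 checkpoint name.
# --tensor-parallel-size 8 shards the weights across all eight H100s,
# which is what lets a ~400B model fit once FP8 has halved its footprint.
vllm serve Qwen/Qwen3.5-397B-FP8 --tensor-parallel-size 8
```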
Kimi K2.5 vs Qwen3.5-122B-A10B: Trillion-Parameter Giant Meets the Efficiency Miracle

Comparing Kimi K2.5's 1T-parameter benchmark dominance against Qwen3.5-122B-A10B's extraordinary parameter efficiency, and why the smaller model is harder to dismiss than the numbers suggest.

awesomeagents.ai/tools/kimi-k2-5-vs-qwen-...

#KimiK25 #Qwen35 #MoonshotAi
Kimi K2.5 vs Qwen3.5-27B: When 37x More Parameters Meets a Single GPU

Comparing Kimi K2.5's trillion-parameter benchmark dominance against Qwen3.5-27B's single-GPU accessibility: two models from entirely different tiers that both have compelling use cases.

awesomeagents.ai/tools/kimi-k2-5-vs-qwen-...

#KimiK25 #Qwen35 #MoonshotAi
Kimi K2.5 vs Qwen3.5 Flash: Premium Open-Weight Power vs Budget API Speed

Comparing Kimi K2.5 and Qwen3.5 Flash: Moonshot AI's trillion-parameter frontier model against Alibaba's cheapest and fastest API offering.

awesomeagents.ai/tools/kimi-k2-5-vs-qwen-...

#KimiK25 #Qwen35 #Moe
Kimi K2.5 vs Qwen3.5-35B-A3B: Frontier Powerhouse Meets the Tiny Giant Killer

A detailed comparison of Kimi K2.5 and Qwen3.5-35B-A3B: a 1T-parameter frontier model with agent swarms versus a 35B model that runs on a single consumer GPU.

awesomeagents.ai/tools/kimi-k2-5-vs-qwen-...

#KimiK25 #Qwen35 #Moe
Alibaba Qwen Launches Qwen 3.5 Medium Series With Efficiency-Focused 35B Model

Alibaba unveils the Qwen 3.5 Medium Series, claiming its 35B model surpasses earlier 235B systems with improved efficiency and lower compute demands.

Alibaba's Qwen just flipped the scale narrative.
Alibaba claims Qwen 3.5's 35B model beats their older 235B giant.
Smaller. Smarter. Cheaper to run.
The AI race is shifting from size to efficiency.

#Qwen #AlibabaAI #Qwen35 #AIModels #LLM

evolutionaihub.com/alibaba-qwen...
Qwen3.5-122B-A10B vs DeepSeek V3.2: Efficiency vs Raw Power in Open-Weight AI

A benchmark-by-benchmark comparison of Qwen3.5-122B-A10B and DeepSeek V3.2: the efficiency-optimized underdog versus the brute-force open-source heavyweight.

awesomeagents.ai/tools/qwen-3-5-122b-a10b...

#Qwen35 #Deepseek #Moe
Qwen3.5-122B-A10B vs Llama 4 Maverick: The Efficiency Gap Nobody Expected

A data-driven comparison of Alibaba's Qwen3.5-122B-A10B and Meta's Llama 4 Maverick: two open-weight MoE models with radically different approaches to parameter efficiency and benchmark performance.

awesomeagents.ai/tools/qwen-3-5-122b-a10b...

#Qwen35 #Llama4 #Moe
Qwen3.5-35B-A3B vs GLM-4.7-Flash: Two Chinese MoE Models, Very Different Strengths

Head-to-head comparison of Qwen3.5-35B-A3B and GLM-4.7-Flash: two Chinese-origin 30B-A3B MoE models with Apache 2.0/MIT licenses that dominate different benchmarks despite near-identical parameter budgets.

awesomeagents.ai/tools/qwen-3-5-35b-a3b-v...

#Qwen #Qwen35 #Glm
Qwen3.5-27B vs Gemma 3 27B: Same Parameter Count, Completely Different Models

A data-driven comparison of Alibaba's Qwen3.5-27B and Google's Gemma 3 27B: two 27B dense models that share a parameter count and almost nothing else.

awesomeagents.ai/tools/qwen-3-5-27b-vs-ge...

#Qwen #Qwen35 #Gemma
Qwen3.5-35B-A3B vs Nemotron 3 Nano 30B-A3B: Benchmarks vs Throughput in the 3B Active Parameter Class

A data-driven comparison of Alibaba's Qwen3.5-35B-A3B and NVIDIA's Nemotron 3 Nano 30B-A3B: two ~30B MoE models activating ~3B parameters that take fundamentally different architectural approaches to the same problem.

awesomeagents.ai/tools/qwen-3-5-35b-a3b-v...

#Qwen #Qwen35 #Nvidia
Qwen3.5-Flash vs DeepSeek V3.2: Budget API Battle With a Pricing Twist

A detailed comparison of Qwen3.5-Flash and DeepSeek V3.2 API pricing, benchmarks, and tradeoffs: flat-rate simplicity versus cache-dependent discounts in the budget AI tier.

awesomeagents.ai/tools/qwen-3-5-flash-vs-...

#Qwen35 #Deepseek #Api
Qwen3.5-27B vs Phi-4: When Twice the Parameters Is Not Twice as Obvious

A data-driven comparison of Alibaba's Qwen3.5-27B and Microsoft's Phi-4: a 27B hybrid architecture versus a 14B STEM specialist, testing whether raw parameter count or training efficiency wins in practice.

https://awesomeagents.ai/tools/qwen-3-5-27b-vs-phi-4/

#Qwen #Qwen35 #Phi4
Qwen3.5-27B vs Mistral Small 3.2: Apache 2.0 Heavyweights Go Head to Head

A data-driven comparison of Alibaba's Qwen3.5-27B and Mistral's Small 3.2: two Apache 2.0 dense models in the 24-27B range with very different benchmark profiles and deployment strengths.

awesomeagents.ai/tools/qwen-3-5-27b-vs-mi...

#Qwen #Qwen35 #Mistral
Qwen3.5-122B-A10B vs Mistral Large 3: When 4x More Parameters Buys You Less

A data-driven comparison of Qwen3.5-122B-A10B and Mistral Large 3: two Apache 2.0 MoE models where the smaller one dominates text benchmarks despite a 4x active parameter disadvantage.

awesomeagents.ai/tools/qwen-3-5-122b-a10b...

#Qwen35 #Mistral #Moe
Qwen3.5-Flash vs GPT-4o mini: Challenger Meets Incumbent

A detailed comparison of Qwen3.5-Flash and GPT-4o mini covering benchmarks, pricing, context windows, and ecosystem: the new open-source challenger versus OpenAI's entrenched budget API.

awesomeagents.ai/tools/qwen-3-5-flash-vs-...

#Qwen35 #Gpt4OMini #Openai
Qwen3.5-Flash vs Gemini 2.5 Flash-Lite: The $0.10 Budget API Showdown

A data-driven comparison of Qwen3.5-Flash and Gemini 2.5 Flash-Lite: two models at the exact same $0.10/$0.40 per million token price point with 1M context windows but very different performance profiles.

awesomeagents.ai/tools/qwen-3-5-flash-vs-...

#Qwen35 #Gemini #FlashLite
Qwen3.5-35B-A3B vs Llama 4 Scout: 3B Active Parameters vs 17B - Does 5.7x More Compute Actually Win?

David vs Goliath: Qwen3.5-35B-A3B activates 3B parameters and beats Llama 4 Scout's 17B active on MMLU-Pro, GPQA, and coding benchmarks, but Scout's 10M context window and native multimodal support tell a different story.

awesomeagents.ai/tools/qwen-3-5-35b-a3b-v...

#Qwen #Qwen35 #Meta
Qwen 3.5 Medium Series Drops Four Models That Make the 235B Flagship Obsolete

Alibaba releases four Qwen 3.5 medium models - Flash, 35B-A3B, 122B-A10B, and 27B - that match or beat the previous 235B flagship at a fraction of the compute. The 35B model activates just 3 billion parameters and still outperforms Qwen3-235B-A22B.

https://awesomeagents.ai/news/qwen-3-5-medium-series/

#Qwen #Qwen35 #Alibaba