Hashtag: #slm
"From StarCraft to Pokémon, they conquered it all"… GIST graduate student team wins international AI game competition. "Gwangju's young AI talent has conquered the world stage of game AI." A team of graduate students from GIST took first place worldwide in the small language model (SLM) division of the "Orak" Challenge, an international AI game competition organized by Krafton and sponsored by OpenAI. AIPost key takeaways ✅ [Won the SLM division ahead of 117 teams

A proud first place among 117 teams worldwide! 🏆

A team of GIST graduate students won the competition run by Krafton and OpenAI. What's the secret of a single AI that conquered everything from Mario to StarCraft?
www.aipostkorea.com/news/article...

#지스트 #GIST #AI게임대회 #오락챌린지 #크래프톤 #오픈AI #SLM #스타크래프트AI #AI포스트

GitHub - chroma-core/context-1-data-gen Contribute to chroma-core/context-1-data-gen development by creating an account on GitHub.

#ChromaDB Context-1: Fast 20B model.
Small Specialized Language Model for fast #agentic search, trained to retrieve supporting documents for complex, multi-hop queries. 10× faster than an #LLM
#VectorDB #RAG #SLM #Agents
#OpenSource Apache 2.0 license
github.com/chroma-core/...

Unlocking the Secrets of Invisible Logistics: A Seminar on Operational Excellence Join us for an insightful online seminar on April 28, 2026, where experts will unveil techniques to streamline logistics using data-driven strategies.

Unlocking the Secrets of Invisible Logistics: A Seminar on Operational Excellence #Japan #logistics #Tokyo #Seaos #SLM

AI: How China is winning | Outras Palavras — The US Congress finally learns: Washington has been overtaken. By shunning fanciful goals and betting on open code, massive industrial data, and small language models,...

I wrote an analysis for @outraspalavras.net of a document from the US-China Economic and Security Review Commission on how the Asian country chose to fight the industrial race for AI, and what the Global South can learn from it.

outraspalavras.net/tecnologiaem...

#AI #China #US #data #SLM

Original post on revealbi.io

SLM vs. LLM: Which AI Model is Right for Embedded Analytics? Reveal Embedded Analytics AI has reshaped how users interact with the analytics layer inside SaaS products. Simply adding embedded analy...

#Embedded #Analytics #AI #Analytics #AI-powered #embedded […]


Original post on revealbi.io

SLM vs. LLM: Which AI Model is Right for Embedded Analytics? Reveal Embedded Analytics The modern embedded analytics layer is shifting from static dashboards to AI-driven interaction inside Saa...

#Embedded #Analytics #AI #Analytics #AI-powered #embedded #analytics […]



So I go to reactivate an old Nvidia 3060 to run a few more #SLM / #LLM tests, and accidentally install the ancient 660 instead.

I didn't even remember I still had it.

Why do graphics cards all have to look so similar?

Alt: a man is playing a keyboard with the words "it's the final countdown 30 more minutes baby"

🚀✨ T-minus 30 minutes to #TLSkyChat! Join us for a fun conversation about books! Questions are posted at tlskychat.com See you at 8pm EST/5pm PT. #librarians #skybrarians #tlsky #edusky #schoollibrarymonth #slm #schoollibrarians #schoollibrary

Alt: a spongebob squarepants background with "one hour later" written on it

🚀✨ T-minus 1 hour to #TLSkyChat! Join us for a fun conversation about books! Questions are posted at tlskychat.com See you at 8pm EST/5pm PT. #librarians #skybrarians #tlsky #edusky #schoollibrarymonth #slm #schoollibrarians #schoollibrary


Telling your story MATTERS. If you don't tell it, do others have what they need to do so? #TLSkyChat #SLM

A colorful graphic promoting #TLSkyChat with the text "Tonight @ 8PM ET" and "April is School Library Month." It features a stack of bright, multicolored books and the TLSkyChat logo with an open book, glasses, and a coffee mug.

📚 Join us TONIGHT at 8pm ET for #TLSkyChat as we get ready for School Library Month! 🌸

How are you preparing to celebrate? Share ideas, resources, and ways you’ll highlight your library program this April!

Find the Questions at tlskychat.com
#Skybrarians #EduSky #TLSky #SLM #SchoolLibraryMonth


Whether it's weapons or everyday assistance machines, isn't it about time #SLM showed us the mechanism that thrilled us in the fictional worlds of the Time Bokan series: small mechas that line up and combine into one body on their own?!

With an LLM acting as the parent unit, issuing the instructions.


I have now reached the "let a good #LLM evaluate the work output of several #SLMs" phase...

At TLSKyChat on Bluesky, we're celebrating the power of school libraries as hubs of learning, curiosity, and connection. From fostering a love of reading to supporting research, digital literacy, and inclusive spaces, school libraries make a difference every day! How are you celebrating School Library Month? Join the chat to share ideas, favorite resources, student successes, and ways you're highlighting your library program this April. Let's celebrate and advocate together!
April is SLM!

📣 Let's GO! Join us THIS Wednesday at 8pm EST for #TLSkyChat - an educational chat for school librarians and the people who love them!

💻 Checkout this week's topic along with other goodies at tlskychat.com/schedule!

🌟💬📚❤️ #tlsky #edusky #librarians #skybrarians #libraries


Welcome to #EvolutionRadio 578! Stream/DL (IG - check my profile): soundcloud.com/alan-fraze/e...
#djlife #producerlife #edm #music #house
.
Feat. #DanielSteinberg #Piem #SLM #RivaStarr #HarryStone #LowSteppa #Capri(UK) #EarthnDays #ElisaElisa #Dompe #Discosteps and more!

Sallie Mae® Expands Graduate Loan Options For Medical and Dental Students Sallie Mae (Nasdaq: SLM) expanded graduate loan options for medical and dental students on March 17, 2026, adding flexible in-school repayment, prequalification with no credit impact, and extended deferment periods. The program covers up to 100% of school-certified costs, offers competitive rates with no origination or application fees, and includes scholarship resources.

#SLM Sallie Mae® Expands Graduate Loan Options For Medical and Dental Students

www.stocktitan.net/news/SLM/sallie-mae-expa...

Service Level Management - SLM: The benefits and advantages of service level management

Service Level Management - SLM #slm #sla #vereinbarungen #servicelevel #IT

Optimizing AI inference costs with edge AI × SLM: the latest strategies [2025 edition] | Audience | Key points | Expected actions | | CEO | Cloud AI inference costs grow in proportion to usage and risk squeezing profits. A review of the cost structure of future AI adoption is unavoidable. | Adopting edge AI may improve ROI. Move forward on evaluating investment in NPU-equipped devices and hybrid infrastructure. | | CTO | Thanks to SLM optimization, small models now run at practical speed even on smartphones and small devices such as Jetson. (Reference: inference acceleration techniques) | Use lightweight SLMs (e.g. Gemma, Qwen) on edge dev

Cloud dependence for AI inference amounts to a death sentence on cost and power. Moving to edge AI with lightweight SLMs and NPU-equipped devices is the key to maximizing profit. Now that quantization and distillation let GPT-3.5-class performance run entirely on a smartphone, a cloud-only strategy has reached a turning point that demands rethinking.

#エッジAI #SLM

Sallie Mae Announces $200 million Accelerated Share Repurchase Sallie Mae (Nasdaq: SLM) entered a $200 million accelerated share repurchase (ASR) with Goldman Sachs, to be prefunded March 10, 2026, under its $500 million board-authorized repurchase program. Combined with prior repurchases this quarter, first-quarter repurchases and commitments total nearly $300 million. Final shares depend on VWAP less a discount and customary adjustments; transactions are expected to complete before the end of Q2 2026.

#SLM Sallie Mae Announces $200 million Accelerated Share Repurchase

www.stocktitan.net/news/SLM/sallie-mae-anno...

Small models, high quality: Inside BMW Group’s experiments evaluating domain-specific language models Automakers are striving to incorporate AI into vehicles for more natural voice commands, but large language models (LLMs) have limitations due to their need for consistent internet access. BMW and Google Cloud collaborated to develop a solution for deploying efficient, domain-specific small language models (SLMs) in cars. SLMs offer a balance between size and capability for in-vehicle use, addressing the constraints of limited computing power. The challenge lies in optimizing SLMs, requiring compression, fine-tuning, and robust evaluation for automotive tasks, and it involves finding the optimal configuration. An automated workflow was created to streamline SLM optimization, automating compression, adaptation, and evaluation using Vertex AI Pipelines. This pipeline enables systematic exploration of configurations, making experimentation and evaluation efficient and reproducible. It includes versioning, optimization, deployment testing, and comprehensive evaluation components. The pipeline uses various compression and enhancement methods tailored to specific hardware. The process results in a deployable, versioned SLM with detailed performance metrics, ensuring complete reproducibility and facilitating efficient testing. The project showcases how automated workflows improve the development and optimization of AI in the automotive industry.

Small models, high quality: Inside BMW Group’s experiments evaluating domain-specific language models

Automakers are striving to incorporate AI into vehicles for more natural voice commands, but large language models (LLMs) have limitations due to their need for c…

Telegram AI Digest
#ai #llm #slm


Starmer okays US strikes from UK bases on Iran sites post-RAF Akrotiri hit. Escalation alert.

Video created using AI avatar and real voice for news delivery.

#Starmer #Iran #UKBases #SLM

Small Language Models: building the architecture of the new newsrooms. In recent years, the AI race seemed to be governed by the rule that the bigger the language model, the better. We are now entering a phase of maturity, however, in which the market's focus is shifting from scale toward efficiency, specialization, and cost control. **According to Gartner**, organizations that need greater efficiency on routine tasks at a lower operating cost will use task-specific AI models up to three times more than generalist models. In this context, Small Language Models (SLMs) can become a key piece of the AI architecture of organizations in general, and of media companies in particular. Unlike large-scale foundation models, trained on trillions of parameters and requiring massive cloud infrastructure, SLMs are characterized by functional specialization and lower computational demand. They are usually models based on **_transformer architectures_** ranging from hundreds of millions to several billion parameters (typically under 10B). From a strategic standpoint, the difference is substantial: SLMs sacrifice generalist coverage in exchange for optimal performance, while offering lower latency, greater control over data, stronger privacy, and a significantly lower operating cost. Whereas an LLM usually implies high spending on infrastructure, compute, and constant calls to external APIs, SLMs can run locally on an organization's own servers, on laptops, or even on mobile devices, keeping data from ever leaving the organization.
One of the technical keys enabling this leap in efficiency is **quantization**, a technique that reduces the numerical precision of the model's parameters (for example, from 16 to 8 bits). This can make the model up to four times lighter and far less memory-hungry, while keeping performance very close to the original on certain tasks. Quantization is one of the main approaches within what is known as **_model compression_**, which also includes techniques such as _pruning_ and _distillation_. In other words, the real competitiveness of SLMs comes from combining several optimization techniques, widely documented in comparative studies of **efficiency and performance**:
* Quantization: reduces the numerical precision of the model's weights, significantly cutting memory consumption and improving inference speed, with minimal loss of accuracy on well-defined tasks.
* Pruning: identifies and removes connections that contribute little to the final output, compacting the model and speeding up execution.
* Knowledge distillation: an LLM acts as a **"teacher" and transfers its behavior** to a smaller model. The result can be an SLM that retains much of the original model's performance at a considerably lower compute cost.
In the media sector, these characteristics can unlock solutions that were previously unfeasible due to cost, latency, or privacy risk. Likely use cases include reviewing confidential contracts, filtering sensitive content, and implementing **AI guardrails** in local environments, which guarantees that intellectual property and internal data are not used to train third-party models.
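The quantization idea described in that paragraph (dropping weights from 16-bit to 8-bit precision for a roughly 4x smaller model) can be sketched in a few lines. This is a minimal illustration of symmetric per-tensor int8 quantization; the function names and the specific scheme are my own choices, not taken from the article:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127].
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original float weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is exactly 4x smaller than float32,
# and the rounding error is bounded by half a quantization step.
size_ratio = w.nbytes / q.nbytes
max_err = float(np.max(np.abs(w - w_hat)))
```

Real toolchains typically use per-channel scales and calibration data on top of this, but the memory arithmetic is the same.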
From an editorial and operational perspective, SLMs can work as code assistants for technical teams, as automatic routing systems for support tickets, or as key components in advanced search architectures. In **Agentic RAG** scenarios, SLMs can pre-process queries, rewrite them contextually and intelligently, and significantly improve retrieval from an outlet's own historical archives. More and more organizations are also adopting Intelligent Routing strategies. In this approach, a module analyzes the complexity of each query: for simple tasks such as classification, data extraction, or reformulation, the system calls an SLM. Only when the task requires deep reasoning or complex generation is a large-scale LLM activated. In well-optimized deployments, this approach can cut response latency from seconds to hundreds of milliseconds and substantially reduce operating costs. The big strategic bet for the media sector is not just that journalists use AI tools, but that outlets design their own technology architecture to create value from the richness of their data, their editorial knowledge, and the journalistic capabilities of the newsroom. In that design, Small Language Models are not an add-on but a fundamental structural block.
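The intelligent-routing pattern mentioned above can be sketched with a trivial dispatcher. This is a hypothetical illustration: the marker list, length threshold, and stub models are mine, and a production router would typically use a small trained classifier rather than keyword matching:

```python
# Keywords suggesting a simple task an SLM can handle; purely illustrative.
SIMPLE_MARKERS = ("classify", "extract", "rewrite", "reformulate", "summarize")

def route(query, slm, llm, max_simple_len=200):
    # Cheap complexity check: short queries asking for classification,
    # extraction, or reformulation go to the SLM; everything else escalates.
    q = query.lower()
    if len(query) <= max_simple_len and any(m in q for m in SIMPLE_MARKERS):
        return slm(query)
    return llm(query)

# Stub "models" so the sketch runs end to end.
slm = lambda q: "[slm] " + q
llm = lambda q: "[llm] " + q

cheap = route("Classify this support ticket: billing or technical?", slm, llm)
costly = route("Draft an in-depth investigative feature on AI supply chains.", slm, llm)
```

The latency win the article cites (seconds down to hundreds of milliseconds) comes from the fact that the cheap path never touches the large model at all.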

Small Language Models (#SLM): building the architecture of the new #redacciones digitaljourney.es/small-language-models-co...
Source: https://t.me/mediosliquidos


Geometry > Scale: how 40M parameters on the E8 lattice outperform classical transformers. Folks, it looks like we've run up against ...

#llm #E8 #transformer #transformers #edgeai #slm

Bare-chested leather cub flashing his pit, next to him a guy in a bulldog harness smiling brightly

Looks like @blackarmor95.bsky.social and I will be attending #SLM Stockholm's #Domination this year 😈
Who else is coming and wants to show us the city? 😏

Post spanking bliss pic of us for attention 😜

Fortifying Intelligent Systems with Optimum Protection Strategies for LLMs and SLMs Securing large and small language models against data leakage, adversarial manipulation, model theft, and supply chain compromise

⚙️ Large Language Models and Small Language Models are transforming Enterprise operations — but their security posture determines their future.
Fortify before you scale 🔐.
#Cybersecurity #AIsecurity #LLM #SLM #AIgovernance #ZeroTrust #CyberLens

www.thecyberlens.com/p/fortifying...


With Lola at the Suermondt-Ludwig #SLM #peoplematchingartworks


Fact checking claims on Gaza casualties, US military aid and Israeli policy. Context and definitions matter.

#SLM #FactCheck #Gaza

Azure SLM Showdown: Evaluating Phi-3, Llama 3, and Snowflake Arctic for Production In the rapidly evolving landscape of Generative AI, the industry is witnessing a significant shift. While the “bigger is better” mantra once dominated, the tide is turning. As organizations move from experimental pilots to production-grade applications, the focus has shifted toward small language models (SLMs). These models offer lower latency, reduced compute costs, and the ability to run on edge devices, while maintaining performance that rivals massive models like GPT-4 for specific tasks. Microsoft Azure has positioned itself as a premier destination for these models, offering them through the Model-as-a-Service (MaaS) framework and the Azure AI Model Catalog. In this article, we provide a technical deep dive into three of the most prominent SLMs available on Azure: Microsoft’s Phi-3, Meta’s Llama 3 (8B), and Snowflake Arctic. We analyze their architectures, benchmark performance, deployment strategies, and cost efficiency to help you decide which model best fits your workload.

Azure SLM Showdown: Evaluating Phi-3, Llama 3, and Snowflake Arctic for Production

In the rapidly evolving landscape of Generative AI, the industry is witnessing a significant shift. While the “bigger is better” mantra once dominated, the tide is turning. As or…

Telegram AI Digest
#gpt #llama #slm

Azure SLM Showdown: Evaluating Phi-3, Llama 3, and Snowflake Arctic for Production

In the rapidly evolving landscape of generative AI, the industry is seeing significant change. While the "bigger is better" mantra once dominated, the situation is chang…

Telegram AI Digest
#ai #llama #slm
