#PEFT

Visit our blog - a new article on optimization techniques for large language models is now available.
We cover quantization, pruning, distillation, PEFT, and inference optimization techniques such as request batching, KV caching, and speculative decoding.
azurro.pl/techniki-opt...
#AI #PEFT
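Of the inference techniques the post lists, KV caching is the easiest to sketch: keep the keys/values of already-processed tokens so each decode step only projects the new token. A minimal illustrative sketch (all names invented; no real library API is assumed):

```python
# Illustrative-only sketch of a KV cache; no real library API is assumed.

class KVCache:
    """Stores keys/values of already-processed tokens."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values

def decode_step(cache, token_embedding):
    # A real model would apply learned K/V projections here; identity stands in.
    k = v = token_embedding
    keys, _ = cache.append(k, v)
    # Attention would now score the new token against *all* cached keys,
    # without re-projecting the earlier tokens.
    return len(keys)

cache = KVCache()
for step, emb in enumerate([0.1, 0.2, 0.3], start=1):
    assert decode_step(cache, emb) == step  # cache grows by one per token
```

Without the cache, every decode step would recompute keys and values for the whole prefix, which is what makes naive autoregressive generation quadratic in sequence length.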


Updated for 2025! A complete guide to LLM fine-tuning. Covers a comparison of full fine-tuning, LoRA, and QLoRA; real GPT-4o fine-tuning costs (from $7.50); the minimum amount of data you need (50-200 examples is enough!); and free hands-on Google Colab code. Includes the differences from transfer learning and the latest DoRA/QDoRA techniques!


#AICustomization #DoRA #Finetuning #GoogleColab #GPT4FineTuning #LlamaFineTuning #LLMFineTuning #LoRA #PEFT #QLoRA
doyouknow.kr/581/fine-tun...
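For a sense of why LoRA is so much cheaper than the full fine-tuning it is compared against here, a minimal NumPy sketch of one LoRA-adapted layer (dimensions, rank, and scaling are arbitrary choices for illustration; a real run would use a library such as Hugging Face's peft):

```python
import numpy as np

# Minimal LoRA sketch; shapes and init are illustrative, not from any library.
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = np.zeros((d_out, r))                    # trainable; zero init so the delta starts at 0
B = rng.standard_normal((r, d_in)) * 0.01   # trainable

def lora_forward(x, alpha=1.0):
    # Effective weight is the frozen W plus a low-rank correction A @ B.
    return (W + alpha * (A @ B)) @ x

full = W.size            # parameters updated by full fine-tuning
lora = A.size + B.size   # parameters updated by LoRA
print(full, lora)        # 4096 vs 512: 8x fewer trainable parameters at rank 4
```

QLoRA pushes this further by also quantizing the frozen `W` (typically to 4 bits), which is a memory optimization and leaves the low-rank math above unchanged.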

TiTok Framework Boosts LoRA Transfer with Token-Level Contrast

TiTok boosts LoRA adapter transfer by 4‑8% and drops the need for any extra discriminator, using a contrastive excess between the model with and without the adapter. Read more: getnews.me/titok-framework-boosts-l... #lora #peft

DoRAN Boosts Low-Rank Fine‑Tuning via Noise and Dynamic Networks

DoRAN adds noise to DoRA’s denominator and uses networks to generate low‑rank matrices, boosting stability and data‑efficiency; Oct 5 2025 experiments show it outperforms LoRA. Read more: getnews.me/doran-boosts-low-rank-fi... #doran #peft #lowrank

Permissioned LLMs: Enforcing Access Control in Enterprise AI Models

Permissioned LLMs add a query‑level enforcement layer using PEFT adapters, LoRA and prefix‑tuning, and were tested on five benchmarks (GPQA, RCV1, SimpleQA, WMDP, PubMedQA). Read more: getnews.me/permissioned-llms-enforc... #permissionedllms #peft #ai

Activated LoRA: Faster Switching for Fine‑Tuned Language Models

Activated LoRA (aLoRA) lets developers switch fine‑tuned adapters without KV‑cache recompute; it’s available in Hugging Face’s PEFT library. The paper’s latest version came out in October 2025. Read more: getnews.me/activated-lora-faster-sw... #alora #peft
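The benefit described above is avoiding prefix recomputation when swapping adapters. As a loose sketch of that idea only (this is NOT the aLoRA algorithm, whose implementation lives in Hugging Face's PEFT library; all names and numbers here are invented):

```python
# Toy sketch: switching adapters without invalidating shared prefix state.

prefix_kv_cache = ["k0", "k1", "k2"]                    # computed once for the shared context
adapter_deltas = {"summarize": 0.1, "classify": -0.2}   # stand-ins for LoRA weights
recomputes = 0                                          # how often the prefix was rebuilt

def generate_with(adapter_name):
    delta = adapter_deltas[adapter_name]
    # The prefix cache is reused as-is; only tokens generated from here on
    # would see the adapter's delta applied.
    return len(prefix_kv_cache), delta

assert generate_with("summarize") == (3, 0.1)
assert generate_with("classify") == (3, -0.2)
assert recomputes == 0   # switching cost: no prefix recompute
```

With ordinary LoRA, changing the adapter changes the effective weights for every position, so cached keys/values for the prefix become stale; applying the adapter only from the invocation point onward is what lets the cache survive the switch.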

Fine-Tuning Large Language Models Boosts Secure Code Generation

Fine‑tuning LLMs with PEFT raised secure‑code scores by up to 6.4% for C and 5.0% for C++ in a September 2025 study; LoRA showed the biggest gains on function‑level data. getnews.me/fine-tuning-large-langua... #peft #lora

Efficient Orthogonal Fine‑Tuning via Principal Subspace Adaptation

Researchers released PSOFT, a method that limits orthogonal fine‑tuning to a model’s principal subspace, cutting parameters and memory while matching PEFT performance across 35 NLP and vision tasks. getnews.me/efficient-orthogonal-fin... #psoft #peft

WeatherPEFT: Efficient Fine‑Tuning for Weather Foundation Models

WeatherPEFT lets large weather models match full‑model tuning accuracy while training with only a fraction of parameters, using Task‑Adaptive Dynamic Prompting and Fisher‑guided selection. Read more: getnews.me/weatherpeft-efficient-fi... #weatherai #peft

Blockwise Hadamard Adaptation Boosts Efficient LLM Fine‑Tuning

Blockwise Hadamard high‑Rank Adaptation (BHRA) improves PEFT, beating baselines on eight commonsense and two arithmetic tasks with Llama‑3.2 (1B/3B), Mistral‑7B and Gemma‑2 (9B). getnews.me/blockwise-hadamard-adapt... #bhra #peft #llm

LoSiA Introduces Efficient High‑Rank Fine‑Tuning for AI Models

LoSiA offers a high‑rank PEFT that trains only a critical sub‑network; its LoSiA‑Pro variant cuts training latency by ~27% versus LoRA and will be presented at EMNLP 2025. getnews.me/losia-introduces-efficie... #losia #peft

Localized LoRA: Structured Low‑Rank Updates for Efficient Fine‑Tuning

Localized LoRA partitions a model’s weight matrix into structured blocks, giving each its own low‑rank update while keeping the total trainable parameters unchanged. Read more: getnews.me/localized-lora-structure... #localizedlora #peft #lowrank
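Reading the summary literally, the blockwise idea can be sketched as follows (my interpretation of the one-paragraph description; block count, ranks, and init scale are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
d, blocks, r_local = 8, 2, 1      # 2x2 grid of blocks, each with its own rank-1 pair
W = rng.standard_normal((d, d))   # frozen pretrained weight
bs = d // blocks                  # block size: 4

delta = np.zeros_like(W)
trainable = 0
for i in range(blocks):
    for j in range(blocks):
        A = rng.standard_normal((bs, r_local)) * 0.01   # per-block low-rank pair
        B = rng.standard_normal((r_local, bs)) * 0.01
        delta[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = A @ B
        trainable += A.size + B.size

# A single global rank-2 LoRA on the same matrix: A (8x2) + B (2x8)
global_lora = d * 2 * 2
print(trainable, global_lora)   # 32 32: same budget, but updates are localized
```

The point of the comparison: four local rank-1 pairs cost exactly as many trainable parameters as one global rank-2 pair, so the localized structure comes for free in parameter count.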

Rehearsal-Free Continual Learning with Pretrained Models: A Review

A new review shows lightweight PEFT baselines match performance of rehearsal‑free continual learning methods, and query mechanisms add no benefit. Parameter budget drives gains. getnews.me/rehearsal-free-continual... #peft #continuallearning

Bias-Efficient Fine-Tuning Boosts Language Model Performance

BEFT fine‑tunes only bias terms in transformers, handling models from ~100 M to several B parameters, and matches or exceeds heavier PEFT methods in low‑resource tests. Read more: getnews.me/bias-efficient-fine-tuni... #biasefficient #peft #llm
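The economics behind bias-only tuning are easy to check: in a transformer layer the bias vectors are a vanishing fraction of the parameters. A back-of-the-envelope sketch with made-up layer sizes:

```python
# Layer sizes below are invented for illustration, roughly BERT-base-like.
layers = [(768, 768), (768, 3072), (3072, 768)]         # (d_in, d_out) per linear layer

weights = sum(d_in * d_out for d_in, d_out in layers)   # frozen under bias-only tuning
biases = sum(d_out for _, d_out in layers)              # the only part that trains

# In a real framework this amounts to requires_grad=False on every weight
# matrix and requires_grad=True on every bias vector.
print(biases, weights)   # 4608 vs 5308416: under 0.1% of parameters trained
```

That sub-0.1% footprint is why a bias-only method can compete with heavier PEFT approaches on cost, even before any accuracy comparison.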

Data- and Parameter-Efficient Techniques Boost Arabic Dialect ID

Researchers found that LoRA‑based fine‑tuning outperforms soft‑prompted encoders, which in turn beat hard‑prompted large language models for Arabic Dialect Identification. Read more: getnews.me/data-and-parameter-effic... #arabicdialect #peft #lora

Activation Function Tuning Boosts Efficient Fine‑Tuning in AI Models

NoRA updates only 0.4% of parameters (≈0.02 M) and lifts CIFAR‑10 accuracy by 0.17%; on LLaMA‑3‑8B tuning, MMLU improves up to 0.8%. This low‑rank update adds minimal compute cost. getnews.me/activation-function-tuni... #activationfunction #peft

Activation‑Space Tuning Improves Parameter‑Efficient Fine‑Tuning

Activation‑space tuning (NoRA) updates only 0.4% of a vision transformer’s parameters (~0.02 M) and yields +0.17% accuracy on CIFAR‑10 and +0.27% on CIFAR‑100. Read more: getnews.me/activation-space-tuning-... #activationtuning #peft #visiontransformer

Hierarchical Adapter Merging Boosts Scalable Continual Learning

Hierarchical Adapter Merging (HAM) was tested on three vision benchmarks and consistently outperformed state‑of‑the‑art PEFT methods, especially as the number of tasks increased. Read more: getnews.me/hierarchical-adapter-mer... #continuallearning #peft
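HAM's hierarchical scheme is not detailed in the post; the basic building block it extends, merging several task adapters into one by averaging their deltas, can be sketched as follows (shapes and scale are invented):

```python
import numpy as np

# Three per-task adapter deltas for the same layer; shapes are illustrative.
rng = np.random.default_rng(2)
adapters = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]

# Uniform merge: one adapter that serves all three tasks. A hierarchical
# variant would merge within groups of related tasks first, then across groups.
merged = sum(adapters) / len(adapters)
assert merged.shape == (4, 4)
```

Merging keeps inference cost flat as tasks accumulate, which matches the post's note that the advantage grows with the number of tasks.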

Parameter-Efficient Fine-Tuning Improves Security of Code‑Generating LLMs

Prompt‑tuning on CodeGen2 16B lifted the Overall‑Secure‑Rate to 80.86%; raising the temperature boosted security to 87.65%. Read more: getnews.me/parameter-efficient-fine... #prompttuning #peft
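Prompt tuning, the method named above, trains only a handful of "virtual token" embeddings prepended to the input while the model itself stays frozen. An illustrative sketch with invented shapes:

```python
import numpy as np

d_model, n_virtual, n_real = 16, 8, 5
rng = np.random.default_rng(3)
soft_prompt = rng.standard_normal((n_virtual, d_model))   # the only trainable part
token_embs = rng.standard_normal((n_real, d_model))       # frozen embedding lookup

# The frozen model consumes the concatenated sequence; gradients would flow
# only into soft_prompt.
inputs = np.concatenate([soft_prompt, token_embs], axis=0)
assert inputs.shape == (n_virtual + n_real, d_model)
```

Here the trainable state is just `n_virtual * d_model` numbers per task, which is what makes prompt tuning viable even on a model as large as CodeGen2 16B.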

HEFT: Hierarchical fine-tuning boosts LLM reasoning efficiency

HEFT combines LoRA and ReFT for hierarchical fine‑tuning, achieving 85.17% accuracy on BoolQ with just three epochs, beating LoRA‑only after twenty epochs. Read more: getnews.me/heft-hierarchical-fine-t... #heft #peft #llm


Want to fine-tune LLMs without a #GPU cluster? Join our live online training “Fine-tuning on one GPU” for anyone building smart AI w/ lean resources.

8 September 2025 | 09:00–12:30 CET
events.asc.ac.at/event/203/

#LLM #LoRA #PEFT #AItraining #Quantisation #AIonABudget #HuggingFace #Python


From understanding LLM fine-tuning to fine-tuning multimodal models. What is fine-tuning of LLMs and why is it need...

#finetuning #LLM #PEFT #methods #LoRA #QLoRA #AdaLoRA #P-Tuning #BitFit


Efficient inference for many LoRA adapters. LoRA is a popular method for fine-tuning large models on small...

#multilora #offline #inference #async #inference #vllm #TensorRT-LLM #tensorrt #peft #inference #benchmark


Train LLMs to Talk Like You on Social Media, Using Consumer Hardware Use your own comments on soc...

medium.com/data-science-collective/...

#hugging-face #llm #peft #ai […]


Missed out on #Swift tickets? No worries—swing by our #SVFT poster at #NeurIPS2024 and catch *real* headliners! 🎤💃🕺
📌Where: East Exhibit Hall A-C #2207, Poster Session 4 East
⏲️When: Thu 12 Dec, 4:30 PM - 7:30 PM PST

#AI #MachineLearning #PEFT #NeurIPS24
