
Posts by Snehal Raj

Preview
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters Fine-tuning pre-trained large foundation models for specific tasks has become increasingly challenging due to the computational and storage demands associated with full parameter updates. Parameter-Ef...

Check out the full paper for more details on the method, experimental setup, and analysis! arxiv.org/abs/2502.06916 We welcome your feedback and questions! Special mention to @brianc2095.bsky.social for his expert guidance and mentorship.

1 year ago 0 0 0 0

Future directions include exploring more complex architectures, further optimising adapter design, and investigating potential quantum speedups for compound matrix operations.


Our findings suggest Quantum-Inspired Adapters offer a promising direction for efficient adaptation of language and vision models in resource-constrained environments. The method's adaptability across different benchmarks underscores its generalisability.


We found that combining multiple Hamming-weight orders with orthogonality and matrix compounding is essential for performant fine-tuning. In particular, enforcing orthogonality is critical for the success of compound adapters.
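One structural way to enforce orthogonality (a Cayley-transform sketch in numpy; illustrative only, not necessarily the parametrisation used in the paper): any skew-symmetric matrix S yields an orthogonal Q = (I − S)(I + S)⁻¹, so training S keeps the block orthogonal by construction.

```python
import numpy as np

# Illustrative sketch: orthogonality by construction via a Cayley transform.
# (Hypothetical example, not the paper's code.)
rng = np.random.default_rng(1)
n = 5

P = rng.standard_normal((n, n))
S = P - P.T                          # skew-symmetric: S^T = -S
I = np.eye(n)

# I + S is always invertible (eigenvalues of S are purely imaginary),
# and the Cayley transform of a skew-symmetric matrix is orthogonal.
Q = (I - S) @ np.linalg.inv(I + S)
```

Training the unconstrained parameters of S then maintains Q orthogonal at every step, with no projection or penalty term needed.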


VTAB results are also promising! Our method achieves performance comparable to LoRA with ≈ 13.6x fewer parameters. In some instances, such as CIFAR100, accuracy improved significantly relative to other methods.


On GLUE, we achieved 99.2% of LoRA's performance with a 44x parameter compression. Compared to OFT/BOFT, we achieved 98% relative performance with 25x fewer parameters.


We tested our adapters on the GLUE and VTAB benchmarks. Results show our method achieves competitive performance with significantly fewer trainable parameters than LoRA, OFT, and BOFT.


Our approach draws inspiration from Hamming-weight preserving quantum circuits to create parameter-efficient adapters that operate in a combinatorially large space while preserving orthogonality in weight parameters.
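To unpack "combinatorially large space": the k-th compound of an n×n matrix collects its k×k minors and acts on a C(n,k)-dimensional space, and compounds of orthogonal matrices stay orthogonal. A small numpy sketch (illustrative only; `compound` is a hypothetical helper, not the paper's code):

```python
import numpy as np
from itertools import combinations

def compound(M, k):
    """k-th compound matrix of M: entries are the k x k minors,
    indexed by k-element row/column subsets, so the result is
    C(n,k) x C(n,k)."""
    n = M.shape[0]
    subsets = list(combinations(range(n), k))
    C = np.empty((len(subsets), len(subsets)))
    for i, rows in enumerate(subsets):
        for j, cols in enumerate(subsets):
            C[i, j] = np.linalg.det(M[np.ix_(rows, cols)])
    return C

# A small orthogonal matrix (via QR); its compound is also orthogonal,
# so n x n orthogonal parameters act on a much larger C(n,k)-dim space.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C2 = compound(Q, 2)   # shape (6, 6), since C(4,2) = 6
```

This is why orthogonality and compounding go together: because C_k(AB) = C_k(A)C_k(B) and C_k(Qᵀ) = C_k(Q)ᵀ, an orthogonal Q gives an orthogonal compound for free.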


Fine-tuning large models is computationally expensive. This challenge has spurred interest in parameter-efficient methods such as LoRA, which adapt large foundation models to new tasks by updating only a small subset of parameters or introducing lightweight adaptation modules.
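For context, here is a LoRA-style update in miniature (a numpy sketch under standard LoRA assumptions, not this paper's adapters): the frozen weight W gets a trainable low-rank correction B·A, so only 2·r·d parameters are trained instead of d².

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative, not the paper's code).
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4            # rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init 0

def adapted_forward(x):
    # y = (W + B A) x ; with B = 0 at init, the adapter starts as a no-op
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
full_params = W.size                  # d_out * d_in = 4096
adapter_params = A.size + B.size      # 2 * r * d = 512, an 8x reduction here
```

The quantum-inspired adapters in the thread play the same role as B·A, but with orthogonal, compound-structured blocks, which is where the further compression over LoRA comes from.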


Our work, "Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters", is now on arXiv! scirate.com/arxiv/2502.0... Our method achieves up to 44x parameter compression with minimal performance loss.
