🐧Real_PEFT

It's called realpeft because I feel that current PEFT methods have fairly serious problems; I'll explain exactly what feels off once I have a more mature idea QAQ

A summary of existing PEFT methods

LoRA: LoRA: Low-Rank Adaptation of Large Language Models
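
The core idea: the pretrained weight W stays frozen and only a low-rank update ΔW = BA is trained, so the layer computes Wx + (α/r)·BAx. A minimal PyTorch sketch of that idea; the `LoRALinear` name, sizes, and init scale are illustrative, not the paper's reference code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative)."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        # B is zero-initialized so training starts from the pretrained behavior.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```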

Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation, P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
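
Prefix Tuning (which P-Tuning v2 extends to deep prompts across scales and tasks) freezes the model and learns a short sequence of key/value vectors that is prepended inside each attention layer, so attention can read from the trainable prefix. A minimal sketch of that per-layer concatenation, with the `LayerPrefix` name and shapes chosen for illustration:

```python
import torch
import torch.nn as nn

class LayerPrefix(nn.Module):
    """Trainable per-layer key/value prefix concatenated before the real K/V."""
    def __init__(self, prefix_len: int, n_heads: int, head_dim: int):
        super().__init__()
        self.k = nn.Parameter(torch.randn(n_heads, prefix_len, head_dim) * 0.02)
        self.v = nn.Parameter(torch.randn(n_heads, prefix_len, head_dim) * 0.02)

    def forward(self, keys: torch.Tensor, values: torch.Tensor):
        # keys/values: (batch, n_heads, seq_len, head_dim)
        batch = keys.size(0)
        k = self.k.unsqueeze(0).expand(batch, -1, -1, -1)
        v = self.v.unsqueeze(0).expand(batch, -1, -1, -1)
        # Prepend the learned prefix along the sequence dimension.
        return torch.cat([k, keys], dim=2), torch.cat([v, values], dim=2)
```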

P-Tuning: GPT Understands, Too

Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
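
Prompt Tuning strips the idea down further: only a few continuous "soft prompt" embeddings are trained, prepended to the input embeddings of an otherwise frozen model (P-Tuning likewise learns continuous prompts, but produces them with a small prompt encoder). A minimal sketch of the prepend step, with the `SoftPrompt` name and shapes chosen for illustration:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to the frozen model's input."""
    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)  # prepend along sequence
```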

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
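
AdaLoRA parameterizes each low-rank update in SVD-like form ΔW = PΛQ and adaptively prunes entries of the diagonal Λ, reallocating the rank budget across weight matrices during training. A minimal sketch of the parameterization alone; the paper's budget-allocation schedule and orthogonality regularizer are omitted, and the `SVDUpdate` name is illustrative:

```python
import torch
import torch.nn as nn

class SVDUpdate(nn.Module):
    """SVD-style trainable update dW = P diag(E) Q; entries of E can be pruned."""
    def __init__(self, in_features: int, out_features: int, r: int = 8):
        super().__init__()
        self.P = nn.Parameter(torch.randn(out_features, r) * 0.01)
        self.E = nn.Parameter(torch.zeros(r))  # "singular values"; zeroing one drops that rank
        self.Q = nn.Parameter(torch.randn(r, in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, out_features)
        return ((x @ self.Q.T) * self.E) @ self.P.T
```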

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
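
The "zero-init attention" trick: the adapter branch is multiplied by a learnable gate initialized to zero, so at the start of fine-tuning the model behaves exactly like the frozen pretrained model and the adapter's influence grows gradually. In the paper the gate sits on the attention scores over the adaptation prompts; the sketch below shows only the gating idea on a generic adapter branch, with all names illustrative:

```python
import torch
import torch.nn as nn

class ZeroInitGate(nn.Module):
    """Zero-initialized learnable gate on an adapter branch (simplified)."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # zero at init: adapter is a no-op

    def forward(self, base_out: torch.Tensor, adapter_out: torch.Tensor) -> torch.Tensor:
        # At step 0 the output equals the frozen model's output; training
        # opens the gate and mixes in the adapter's contribution.
        return base_out + torch.tanh(self.gate) * adapter_out
```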

IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations): Few-Shot Parameter-Efficient Fine-Tuning is Better than In-Context Learning
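
(IA)³ trains only per-dimension scaling vectors that rescale the keys, values, and intermediate feed-forward activations of a frozen model; those vectors are the entire set of trainable parameters. A minimal sketch of one such scaler, initialized to the identity, with the `IA3Scaler` name chosen for illustration:

```python
import torch
import torch.nn as nn

class IA3Scaler(nn.Module):
    """Element-wise learned rescaling of a frozen activation, identity at init."""
    def __init__(self, d_model: int):
        super().__init__()
        # Start at ones so the adapted model initially matches the frozen one.
        self.scale = nn.Parameter(torch.ones(d_model))

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # activations: (..., d_model); broadcasting applies the per-dim scale.
        return activations * self.scale
```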