🎆
ALL-ABOUT-LLM
  • 🥳Summary
  • 🐰TOTOTOlearn
    • ⚒️Evaluate
    • 🙇‍♂️Lossless Compression
    • 👼Tutorials & Workshops
    • 🧕Personality Traits & Bias in LLMs
    • 🔥openLLM
    • 🌜AI Agents
    • 👾MLLM
    • 📃Surveys
    • 🙇‍♀️POSTS
  • 🤖TOTOTODO
    • 🌚Challenges and Applications of Large Language Models
    • 😃Model-Training Toolbox (炼丹工具箱)
      • 🌐Megatron-LM (NVIDIA)
      • 🌸Colossal-AI: making large AI models cheaper, easier to use, and efficient to scale
      • 🙆‍♂️BMInf: a low-resource toolkit for the PLM inference stage
      • 🦈LLaMA-Efficient-Tuning & text-generation-webui
      • 🪐Parameters and Definitions
      • 🦙Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model, a low-resource Chinese LLaMA + LoRA recipe
      • 🥳PEFT docs (Chinese translation)

😃Model-Training Toolbox (炼丹工具箱)

  • 🌐Megatron-LM (NVIDIA)
  • 🌸Colossal-AI: making large AI models cheaper, easier to use, and efficient to scale
  • 🙆‍♂️BMInf: a low-resource toolkit for the PLM inference stage
  • 🦈LLaMA-Efficient-Tuning & text-generation-webui
  • 🪐Parameters and Definitions
  • 🦙Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model, a low-resource Chinese LLaMA + LoRA recipe
  • 🥳PEFT docs (Chinese translation)
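Several of the tools above (Chinese-Vicuna, PEFT, LLaMA-Efficient-Tuning) build on LoRA fine-tuning: the pretrained weight W is frozen, and a trainable low-rank update B·A, scaled by α/r, is added on top. A minimal NumPy sketch of that idea, using hypothetical toy dimensions rather than any real model's shapes:

```python
import numpy as np

# Hypothetical toy dimensions for illustration; in practice r << d.
d, r, alpha = 8, 2, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))   # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d))   # trainable down-projection (Gaussian init)
B = np.zeros((d, r))          # trainable up-projection (zero init)

def lora_forward(x):
    # Base projection plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, d))
y0 = lora_forward(x)          # B is zero-initialized, so this equals x @ W.T
```

Because B starts at zero, the adapted model initially reproduces the base model exactly; training only updates A and B, which is why the listed tools can fine-tune LLaMA-class models on low-resource hardware.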

Last updated 2 years ago