🎆ALL-ABOUT-LLM

😃Alchemy Toolbox (model-training tools)

  • 🌐Megatron-LM (NVIDIA)
  • 🌸Colossal-AI: making large AI models cheaper, easier to use, and more efficiently scalable
  • 🙆‍♂️BMInf: a low-resource toolkit for the PLM inference stage
  • 🦈LLaMA-Efficient-Tuning & text-generation-webui
  • 🪐Parameters and Definitions
  • 🦙Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model, a low-resource Chinese LLaMA+LoRA recipe
  • 🥳PEFT doc-cn
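Several of the entries above (PEFT doc-cn, Chinese-Vicuna's LLaMA+LoRA recipe) revolve around LoRA, which freezes the pretrained weight and learns a low-rank additive update. A minimal NumPy sketch of that idea, with toy dimensions and illustrative values (not any library's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # toy hidden size and LoRA rank
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

alpha = 16  # LoRA scaling hyperparameter

def lora_forward(x):
    # Base path x W^T plus the scaled low-rank update x A^T B^T.
    # Only A and B would receive gradients during fine-tuning.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapted model starts out identical to the base model.
print(np.allclose(lora_forward(x), x @ W.T))  # True
```

This is why LoRA fine-tuning is cheap: only the small A and B matrices (2·d·r parameters here, versus d² for W) are trained, and the update can be merged back into W after training.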

Last updated 2 years ago