GIGABYTE AI TOP
Train your own AI on your desk
In the age of local AI, GIGABYTE AI TOP is the all-round solution that gives you an edge over traditional AI training methods. It features a variety of groundbreaking technologies that can be easily adopted by beginners and experts alike, supports the most common open-source LLMs, and runs anywhere, even on your desk.
Win Advantages with GIGABYTE AI TOP
Supports 236B LLM Local Training
Intuitive Set-up
Flexibility & Upgradability
Privacy & Security
Suitable for Home Use
AI TOP Utility
Reinventing AI Training
The AI TOP Utility is a groundbreaking realization of local AI model training, with reinvented workflows, a user-friendly interface, and real-time progress monitoring. It supports various top-ranking open-source AI models, so beginners can easily start their own AI project without any AI programming skills.
Get Started Now!
Run the AI TOP Utility on your AI TOP Hardware.
AI TOP Hardware
Enhanced Performance for AI Training
The AI TOP Hardware features a series of GIGABYTE AI TOP products that are optimized for power efficiency and durability under AI training workloads. It includes upgradeable components and is easy to build at home.
Optimized Power Efficiency
Ultra Durable For AI Training
Upgradeable Components
Easy To Build At Home
AI TOP Recommendations
Choose the Best Setup for Your Usage
| | Educational Projects | Individual AI Developers | Small Studios | AI Workflows for SMBs |
| --- | --- | --- | --- | --- |
| Overview | AI TOP is perfect for training a small AI model from scratch to gain a comprehensive understanding of AI model training. | AI TOP is the best tool to fine-tune LLMs with instant results and to schedule training experiments whenever it suits you best. | AI TOP is affordable for small studios and provides absolute privacy and security for your proprietary datasets. | AI TOP is the ideal platform with lower TCO for small and medium-sized businesses to deploy AI models into their workflows and improve productivity. |
| Best Model Size for Fine-tuning | 8B | 13B | 30B | 70B |
| Training Time of a 100K-sample Run | 288 Hours | 32 Hours | 17 Hours | 15 Hours |
| Recommended Sets | | | | |
Performance
High Performance and Efficiency for AI Training
Shorter training times are better. Higher power efficiency is better.
[Charts: training time for a 13B model (memory offloading: DRAM+SSD) and a 70B model (memory offloading: SSD+SSD), plus power efficiency, comparing the AI TOP PC with a regular PC.]
AI TOP PC: TRX50 AI TOP motherboard with Ryzen Threadripper 7985WX processor, GIGABYTE AI TOP Graphics Card Solution, 2TB of system memory, and 2x 1TB SSDs in RAID 0.
Regular PC: Z790 AORUS MASTER motherboard with Intel Core i9 13th Gen processor, 1x RTX 4090 graphics card, 64GB of system memory, and 1x 2TB SSD.
Training configs: AI TOP Utility 1.0.1, batch size: 4, epochs: 1, fine-tuning type: Full and LoRA. Training time: calculated from the ETA shown on the Dashboard in AI TOP Utility; may vary by hardware, system, and training configuration. Power efficiency: calculated from measured tokens per watt; may vary by hardware, system, and training configuration.
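As a rough illustration of the tokens-per-watt metric, the sketch below simply divides training throughput by average system power draw. All numbers and names are hypothetical placeholders, not measured AI TOP results.

```python
def tokens_per_watt(total_tokens: int, training_seconds: float, avg_power_watts: float) -> float:
    """Power efficiency as tokens processed per second per watt of average draw."""
    tokens_per_second = total_tokens / training_seconds
    return tokens_per_second / avg_power_watts

# Hypothetical example: 100K samples of ~512 tokens each, a 15-hour run,
# and an assumed 1,500 W average system draw (all placeholder values).
print(tokens_per_watt(total_tokens=100_000 * 512,
                      training_seconds=15 * 3600,
                      avg_power_watts=1500))
```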
GIGABYTE Advantages
Why GIGABYTE AI TOP?
FAQs
What is AI TOP?
AI TOP is an all-round solution for local AI model training, fine-tuning, and inferencing. It consists of the AI TOP Utility, software that supports LLMs of up to 236B parameters, and the AI TOP Hardware, a series of PC products optimized for power efficiency and durability under AI training workloads.
How can I benefit from using AI TOP?
AI TOP supports large language models (LLMs) of up to 236B parameters and ensures security and privacy for your data through local training. It can be built and run on a household electrical system and is flexible for future upgrades.
What is the AI TOP Utility?
The AI TOP Utility is software designed for training large language models (LLMs). It features a graphical user interface (GUI) that lets you easily initiate local AI training through a friendly and intuitive experience.
What are the system requirements for the AI TOP Utility?
The AI TOP Utility can only run on AI TOP Hardware. The list of supported hardware can be found here. As of August 2024, the AI TOP Utility is available on Linux only.
What large language models (LLMs) are supported by the AI TOP Utility?
The AI TOP Utility has been tested and proven reliable with various top-ranking, open-source LLMs, such as Llama, Qwen, and DeepSeek. Please refer to the AI TOP Utility manual, included in the file package, for a detailed list of supported LLMs.
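For orientation, the sketch below shows what a LoRA fine-tuning setup for one of these open-source model families typically looks like using the Hugging Face transformers and peft libraries. This is a generic illustration, not the AI TOP Utility's internal workflow; the model ID and LoRA parameters are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Example model ID (assumption); any supported open-source causal LM works the same way.
model_id = "Qwen/Qwen2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA adapters on the attention projections; rank and alpha are illustrative defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # Only a small fraction of weights are trainable.
```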
How does Memory Offloading work in AI TOP Utility?
Memory Offloading is a cutting-edge technology that significantly increases the size of LLMs the system can handle. It offloads data generated during the training process from VRAM to system DRAM and/or SSDs according to the selected strategy. You can choose the memory offloading strategy based on the LLM you are going to train or fine-tune.
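GIGABYTE has not published the implementation details of the Utility's offloading, but the widely used DeepSpeed ZeRO-3 offload configuration below illustrates the same idea: optimizer state is staged on an NVMe SSD and parameters in CPU DRAM. Paths, batch size, and buffer choices are placeholder assumptions.

```python
# Illustrative DeepSpeed ZeRO-3 config (not the AI TOP Utility's own settings):
# optimizer state is offloaded to an NVMe SSD, parameters to CPU DRAM.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/mnt/nvme_offload",  # placeholder path on a fast SSD
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": True,
        },
    },
    "bf16": {"enabled": True},
}
```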
What is the AI TOP Hardware?
The AI TOP Hardware includes a series of motherboards, graphics cards, power supply units, and solid state drives optimized for power efficiency and durability under intense AI training workloads.
What makes the AI TOP Hardware different from regular PCs?
The AI TOP Hardware is designed to maximize compatibility with GPUs, system memory, and SSDs, and to provide higher power efficiency than regular PCs. Every AI TOP Hardware product is built with quality components and tested under strict simulations for proven reliability.
How to choose the right configuration of AI TOP Hardware?
AI TOP Hardware offers high flexibility and upgradability, so you can build a setup based on your actual usage. Some recommended setups can be found here.