AI Inference

  • What is it?
    AI inference is the second step in the two-part process that makes up machine learning and deep learning; the first step is AI training. The two steps are an important reason why modern artificial intelligence is suitable for such a diverse range of tasks, from generating content to driving autonomous vehicles.

    During the inference phase, a pre-trained AI model is exposed to fresh, unlabeled data. It draws on the patterns it "studied" during training, now encoded in its parameters, to analyze the new input and respond with the correct output. To use generative AI as an example, every time you ask ChatGPT a question, or ask Stable Diffusion to draw you something, the AI model is inferencing. It can come up with such human-like responses because of all the training it went through beforehand.
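    To make the two phases concrete, here is a minimal sketch of the train-then-infer split. The framework (PyTorch), the toy model, and the random data are illustrative assumptions only, not tied to any GIGABYTE product: the training loop updates the model's parameters, while the inference step runs the frozen model on a brand-new input.

```python
# A minimal sketch of training vs. inference (illustrative assumptions).
import torch
import torch.nn as nn

# --- Training phase: the model learns parameters from labeled data ---
model = nn.Linear(4, 2)                      # toy model: 4 inputs, 2 classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x_train = torch.randn(64, 4)                 # hypothetical labeled training data
y_train = torch.randint(0, 2, (64,))
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()                          # gradients are only needed here
    optimizer.step()

# --- Inference phase: the frozen model responds to fresh, unlabeled data ---
model.eval()                                 # switch to inference behavior
with torch.no_grad():                        # no gradients: faster, less memory
    x_new = torch.randn(1, 4)                # a brand-new, unlabeled input
    prediction = model(x_new).argmax(dim=1)
    print(f"Predicted class: {prediction.item()}")
```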

    Even as it engages in inferencing, the AI system can also record feedback from human users for its next training session, taking note when its output is praised or criticized. In this way, the continuous loop of training and inference makes AI more and more capable and lifelike.
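    As a rough illustration of that feedback loop, the sketch below logs hypothetical user ratings of inference results so they can become labeled examples for the next training run. The record format and the `record_feedback` helper are invented for illustration; real feedback pipelines vary widely.

```python
# A minimal sketch of the inference-to-training feedback loop
# (hypothetical record format and storage, not a real API).
import json
import time

feedback_log = []  # in production this would be a durable store

def record_feedback(prompt: str, response: str, rating: int) -> None:
    """Log a user's praise (+1) or criticism (-1) of an inference result."""
    feedback_log.append({
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "rating": rating,  # +1 praised, -1 criticized
    })

# Later, the logged pairs become labeled examples for the next training run.
record_feedback("Draw a cat", "<image bytes>", rating=1)
print(json.dumps(feedback_log, indent=2))
```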

  • Why do you need it?
    The whole reason we train AI models is so that they can inference: interact with new data in the real world and help humans lead more productive and comfortable lives. A lot of what advanced AI products can do for us, from reading human handwriting to recognizing human faces, from piloting driverless vehicles to generating content, is AI inference at work. When you hear terms like computer vision, natural language processing (NLP), or recommendation systems, these are all instances of AI inference.

  • How is GIGABYTE helpful?
    To conduct AI inference efficiently, you need a computing platform that combines high processing speed with low latency. The reason is simple: the AI model will likely be servicing many users at the same time. Especially in scenarios where a speedy response may affect productivity (such as sorting mail in a distribution center) or even safety (such as controlling a self-driving car), high performance and low latency become even more pertinent.
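    As a back-of-the-envelope illustration of why latency matters, the sketch below times a stand-in model on simulated requests and reports the median and 99th-percentile (tail) latency, the two figures that typically drive inference hardware choices. The model size and request count are arbitrary assumptions.

```python
# A rough sketch of measuring per-request inference latency
# (stand-in model; numbers are illustrative, not benchmarks).
import statistics
import time

import torch
import torch.nn as nn

model = nn.Linear(512, 10).eval()            # stand-in for a deployed model

latencies_ms = []
with torch.no_grad():
    for _ in range(200):                     # simulate 200 incoming requests
        x = torch.randn(1, 512)
        start = time.perf_counter()
        _ = model(x)
        latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.3f} ms")
print(f"p99:    {latencies_ms[int(len(latencies_ms) * 0.99)]:.3f} ms")
```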

    On the server side, one of the best solutions for AI inference is GIGABYTE Technology's G293-Z43, which boasts an industry-leading configuration of 16 AMD Alveo™ V70 cards in a 2U chassis. The Alveo™ V70 accelerator is based on AMD's XDNA™ architecture, which is optimized for AI inference. The Qualcomm® Cloud AI 100 solution is another product that can help data centers run inference in the cloud and at the edge more effectively, thanks to its advances in signal processing, power efficiency, process node, and scalability.

    Within individual vertical markets, GIGABYTE also offers bespoke hardware for different applications. For example, in the automotive and transportation industry, GIGABYTE's Automated Driving Control Unit (ADCU) is an embedded in-vehicle computing platform with AI acceleration; it has been deployed in Taiwan's self-driving buses. For AI-based facial recognition, which has seen broad adoption in the retail and education sectors, GIGABYTE's AI Facial Recognition Solution is an all-in-one solution that can achieve an accuracy level of 99.9% in 1:N (one-to-many) matching.
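    To clarify what 1:N matching means, here is a minimal sketch: one probe embedding is compared against N enrolled embeddings, and the highest cosine similarity above a threshold is declared a match. The random vectors and threshold below are stand-ins for real face embeddings and a tuned deployment value; this is not GIGABYTE's actual pipeline.

```python
# A minimal sketch of 1:N face matching via cosine similarity
# (random stand-in embeddings; threshold is a hypothetical value).
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))       # N = 1000 enrolled faces
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe = gallery[42] + 0.05 * rng.normal(size=128)  # noisy view of person 42
probe /= np.linalg.norm(probe)

scores = gallery @ probe                     # cosine similarity vs. all N
best = int(np.argmax(scores))
THRESHOLD = 0.7                              # tuned per deployment
if scores[best] >= THRESHOLD:
    print(f"Match: identity {best} (score {scores[best]:.3f})")
else:
    print("No match in gallery")
```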

    Learn more: 《Advance AI with GIGABYTE’s supercharged AI server solutions》