HIGH-PERFORMANCE TRAINING AND INFERENCE SOLUTIONS
Based on an advanced memory-computing architecture with deep optimization targeting the memory wall and the compile wall, CloudCard provides stronger and more efficient support for large models, such as recommendation models. Its energy efficiency reaches 15.2 TOPS/W, 10-15 times that of GPUs, with flexible custom-operator capability and high compatibility with deep-learning deployment environments.
AI Inference Computing Card
EdgeChip delivers cost-effective, low-power AI inference, providing powerful compute support for all kinds of edge computing. It supports flexible algorithm deployment and client-defined custom operators, enabling innovation in multi-mode, multi-scene edge computing.
Edge AI Computing Card
Multiple memory IP core solutions support efficient architectures under different AI memory constraints (bandwidth, capacity, cache coherency). Supports 4 algorithms and 4 data types on 7/12/16 nm processes, addressing a wide variety of processing, memory, connectivity, and security requirements across diverse markets.
AI Computing IP Core
tinyAI provides one-stop deployment for hardware engineers, with 3-500 times the AI performance and friendly compatibility with the open-source ecosystem. It supports ARM / RISC-V / X86 platforms and smooth migration of clients' own algorithms.
AI Acceleration SDK