Leading Reconfigurable In-Memory Computing into the Future

HIGH-PERFORMANCE TRAINING AND INFERENCE SOLUTIONS

CloudCard

Built on an advanced in-memory computing architecture with deep optimizations that address the memory wall and the compilation wall, CloudCard delivers stronger and more efficient support for large models, such as recommendation models. Its energy efficiency reaches 15.2 TOPS/W, 10-15 times that of GPUs, with flexible operator customization and high compatibility with deep-learning deployment environments.

AI Inference Computing Card

EdgeChip

EdgeChip offers cost-effective, low-power AI inference, providing powerful compute support for all kinds of edge computing. It supports flexible algorithm deployment at the edge as well as customer-defined operators, enabling innovation in multi-mode, multi-scene edge computing.

Edge AI Computing Chip

tinyAI

tinyAI provides one-stop deployment for hardware engineers, with 3-500 times the AI performance and friendly compatibility with the open-source ecosystem. It supports ARM / RISC-V / X86 platforms and smooth migration of clients' own algorithms.

AI Acceleration SDK