Leading Reconfigurable Memory Computing into the Future

About us

Experts & Team

TensorChip is the pioneer and leader of the Reconfigurable Memory-Computing AI chip, a concept coined by our company. Guided by the cutting-edge philosophy of “Algorithm - Chip - Memory-Computing Coordination”, TensorChip provides industries with advanced “AI + business algorithm” chips and solutions.
 

Founded in 2019, TensorChip has a core team drawn from AMD, Renesas, MediaTek, Yangtze Memory Technology Corp., and other leading international companies, with deep experience in memory computing and AI computing acceleration, in the mass production of 5 nm and 7 nm chips, and in new-generation techniques such as Gate-All-Around. Most of our current partners are well-known chip manufacturers and AI application companies, including T-Head.
 

Our core original technology in the computing-chip area: we propose a new Reconfigurable Memory-Computing architecture, together with complete solutions, that bypasses GPU patent barriers and moves beyond homogeneous competition in high-compute-power AI.

Our Main Products:

•  Reconfigurable Memory Computing AI chip/IP with high computing power and low power consumption
(Expected to replace the GPU and become the next mainstream architecture for AI computing)
•  tinyAI acceleration SDK
(Native MCU AI acceleration SDK, RISC-V compatible)

Richard

PhD

Since 2019, Richard has led the design of the world's first Reconfigurable Memory-Computing Processor (RMU) architecture, achieving an energy efficiency of 15.2 TOPS/W, 10 to 15 times higher than contemporary ASICs and GPUs, together with the advanced elastic-operator technology FlexOperator. In addition, he designed HotCompress, the only circuit-level thermal compression algorithm technology worldwide.

In 2016, as chief scientist at a well-known artificial intelligence enterprise, he led the design of China's first domain-specific deep learning processor and guided the design of China's first AI NOR-flash memory-computing chip architecture based on Super-Flash. He also built SHQuan, currently the world's most accurate high-precision quantization algorithm, with an int8 quantization error of less than 0.25%, 16 times more accurate than NVIDIA's comparable algorithms. This makes Richard one of the few chip-design pioneers in China with an R&D background in advanced artificial intelligence algorithms.

In 2014, as the leader of a well-known IDM's chip design team, Richard built up the design team and, together with the Sunnyvale team in North America, designed the architecture of WXTC0, China's first 3D chip. At the time, it was the chip with the largest single-die size, the most complex design, and the most advanced process built in China.

Back in 2012, as a design leader at HHGrace, he led the IP/IC design team to complete more than 30 design-service and IP projects for consumer electronics chips (including mobile chips) and automotive-grade chips.

Richard graduated from Tsinghua University, where he researched SPICE models and neural network models (then known as multilayer perceptrons). He holds more than 60 Chinese invention patents, U.S. invention patents, and software copyrights.

Geng

In 2003, after graduating from the College of Electronic Engineering of Tsinghua University, Geng pursued a master's degree at Waseda University under the supervision of Prof. Toshito Goto, the former Dean of the NEC Research Institute, researching reconfigurable integrated-circuit architectures and multimedia processing algorithm acceleration. After graduating in 2006, he joined the mobile terminal division of NEC Corporation.

From 2006 to 2009, as a core R&D engineer, he was responsible for SoC processor R&D projects for the Camera Engine brand (digital camera business line) and multiple models of the EMMA Mobile brand (mobile terminal business line), taping out an average of two SoCs per year. Camera Engine was used in digital cameras from Canon, Nikon, and others, while EMMA Mobile was adopted by NEC, Kyocera, Sharp, and other mobile phone brands, as well as the Sony Walkman and major electronic photo-frame brands; total shipments across the product line exceeded 100 million units. In 2008, Geng's team integrated the mobile phone baseband and AP chips into a single SoC, ahead of the industry benchmark. Handsets equipped with this SoC were the lightest and thinnest on the market, with the longest standby time, and became highly popular in Japan.

 

When Renesas merged with NEC Electronics in 2010, Geng became the only non-Japanese member of the technical committee formed from the two companies' top engineers. He was responsible for integrating the two companies' product architectures and creating a common SoC architecture platform for the mobile and automotive businesses, fully leading the design and implementation of the base-layer general architecture. This work led to the birth of the R-Mobile and R-Car series, Renesas's evergreen flagship products. The R-Car series has been widely used in mainstream models from Toyota, Nissan, Mazda, and other major automakers, with cumulative shipments of 50 million units to date.

 

In 2013, Geng joined Nomovok, a startup and pioneer of Internet car-making, and entered the field of onboard computers. As head of hardware R&D, he provided customized operating-system upgrade services on the existing platform for clients such as Lamborghini and Drako Motors, and building on this work, he led the from-scratch development of Carputer, a universal vehicle platform with independent intellectual property rights.

Shang

Shang entered Beijing University of Aeronautics and Astronautics (BUAA) in 2001 and began a master's degree in IC design there in 2005 under the supervision of Prof. Zhang Xiaolin, Dean of the School of Electronic Information Engineering, completing the physical design of chips for the terrestrial digital TV transmission standard.

 

He later joined the U.S. company Magma Design Automation, focusing on IC design flow optimization for its EDA software products and assisting team leader Dr. Lars in solving a physical optimization issue for Intel Israel, which improved the client's design performance by 15%. In 2009, he joined the Loongson chip project at the Institute of Computing Technology of the Chinese Academy of Sciences, where he was responsible for developing the VECTOR IP, the Loongson CPU's vector computing module. As a core module for high-performance computing, it has been widely used in the Loongson 3 series chips.

 

In 2011, Shang joined Huada Jiutian, a subsidiary of CEC, in a technical role. Stationed in Changsha and Shanghai, he supported pre-tape-out power optimization and design checking for China's Feiteng chip and solved its power optimization issues. He later promoted CEC's domestic EDA and IP products to Chinese CPU clients including Zhaoxin, Loongson, and Huaxintong, as well as benchmark clients such as RDA (now Zhanrui), VeriSilicon, Bitmain, Datang, and Xiaomi. Shang closed Huada Jiutian's first IP deal and was the first to sell a domestic IP product at a price in the tens of millions of RMB.

 

Shang co-founded TensorChip in 2019 and is responsible for business development with partners and clients. He has driven TensorChip to establish partnerships with several well-known Chinese companies such as T-Head Semiconductor, Nuclei System Technology, and GigaDevice, and has promoted the deployment of TensorChip's technology in projects such as smart city and smart parking.

Jiang

PhD

Jiang studied under Prof. Dr. Quanshui Zheng at Tsinghua University, the founder of China's high-dimensional tensor calculation theory. He was the executive deputy director of the Superlubricity Technology Research Institute (Research Institute of Tsinghua University in Shenzhen) and general manager of its superlubricity technology industrialization company. Jiang led the establishment of experimental research on structural superlubricity and was fully responsible for setting up the relevant legal entities, equity and intellectual-property relations, organizational structure, management systems, talent recruitment, the construction of a 3,700-square-meter semiconductor ultra-clean room and a micro-nano laboratory (involving 150 million RMB), business and partnership development, and government affairs. A serial entrepreneur, he has also served as a partner, CEO, or VP at high-tech startups such as Gaohe Anyang, Yanzhi, and mentally.

 

Jiang holds the FRM certification, along with securities, funds, and futures qualifications. He has worked in the financial industry for more than ten years at several large institutions such as China International Capital Corporation, CITIC Private Equity Funds Management, and Taikang Assets, successively serving as market manager, investment manager, and investment director, and has extensive investment and management experience and business networks.

 

Jiang has long focused on investments in areas such as industry, healthcare, and technology, with direct equity investments of over 3 billion RMB and other assets and businesses under management exceeding 100 billion RMB.

Shawn

PhD

With more than ten years in the semiconductor industry, Shawn worked for a Chinese IDM giant. At a time when semiconductor talent was scarce in China, he built the country's first local high-speed memory design team and established an international-standard R&D process and management system. Within five years, he advanced the domestic technology from a basic level, more than ten years behind the global industry, to a leading position, and kept it continuously ahead. As one of the principal leaders, he took a variety of advanced memory products from R&D to mass production; these products were highly praised by clients and entered the supply chains of Apple, Lenovo, Xiaomi, and other leading companies. Before that, he worked at companies and institutions such as HHGrace, accumulating extensive experience in high-performance customized IP design, with achievements applied in automobiles, mobile devices, medical electronics, and other fields. Shawn holds more than 40 granted domestic and foreign invention patents and has rich industry experience and networks.

Wang

PhD

Since December 2019, Wang has been the director of AI algorithms and software at TensorChip, responsible for AI algorithm acceleration and software architecture design. He previously worked at a communications equipment company on the R&D of industrial Internet equipment, where he was responsible for core R&D work. He successively held technical positions including senior software development engineer and system architect, mainly responsible for the software and hardware platform design of data communication devices on CPU architectures such as PowerPC, MIPS, and ARM, as well as research on compiler performance optimization and acceleration. He also led the R&D team in designing and developing the product software architecture platform, on which a variety of protocol conversion and security products have been finalized and mass-produced.

 

Wang later joined an artificial intelligence company as an algorithm and software manager, working on machine vision and deep learning algorithms (e.g. binocular ranging and target recognition) and compiler acceleration technology. After his optimization work, the accuracy of the face recognition algorithm reached 99.8% and inference performance improved by 3 to 20 times.

 

At TensorChip, Wang is in charge of AI algorithm acceleration and software architecture design. He has led the R&D team to develop the first cross-platform RISC-V/ARM/x86 AI SDK and leads the work on deep learning algorithm acceleration, completing the design and development of the TRNN inference calculation library and compiler for the RISC-V architecture. On top of this calculation library, a high-performance face detection and face recognition algorithm interface library has been implemented, filling the ecosystem gap for high-performance deep learning and face detection/recognition algorithms on RISC-V CPU hardware platforms.