
CN209149287U - Big data computing acceleration system - Google Patents


Info

Publication number
CN209149287U
CN209149287U
Authority
CN
China
Prior art keywords
data
chip
storage
interface
core
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201821774904.XU
Other languages
Chinese (zh)
Inventor
秦强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Bitmain Technology Co Ltd
Original Assignee
Beijing Bitmain Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bitmain Technology Co Ltd filed Critical Beijing Bitmain Technology Co Ltd
Priority to CN201821774904.XU
Application granted
Publication of CN209149287U
Status: Active


Landscapes

  • Storage Device Security (AREA)

Abstract

An embodiment of the present utility model provides a big data computing acceleration system comprising two or more computing chips. Each computing chip includes N cores, N data lanes, and at least one storage unit; each lane includes a sending interface and a receiving interface, and cores correspond one-to-one with lanes. The two or more computing chips are connected through the sending and receiving interfaces to transmit data, and the at least one storage unit stores data in a distributed manner. The system eliminates external chip memory by placing the storage units inside the ASIC chips, which reduces the time the ASIC chips spend reading data from outside and speeds up computation. Multiple ASIC chips share the storage units, which reduces both the number of storage units and the wiring between the ASIC computing chips, simplifying the system structure and lowering chip cost. In addition, serdes interfaces are used for data transmission between the computing chips, raising the data-transfer rate between the ASIC chips.

Description

Big data computing acceleration system

Technical field

The present disclosure relates to the field of integrated circuits, and in particular to a big data computing acceleration system.

Background

An ASIC (Application Specific Integrated Circuit) is an integrated circuit designed and manufactured to meet the requirements of a specific user and a specific electronic system. Because an ASIC is tailored to a specific user's needs, in mass production it offers smaller size, lower power consumption, higher reliability, better performance, stronger confidentiality, and lower cost than general-purpose integrated circuits.

With the development of technology, more and more fields, such as artificial intelligence and secure computing, involve computation-intensive specialized workloads. For such workloads, ASIC chips offer fast computation and low power consumption. To improve data-processing speed and capacity in these fields, it is usually necessary to drive N computing chips in parallel. As data precision keeps rising, artificial intelligence, secure computing, and related fields must operate on ever larger data sets: a photo today is typically 3-7 MB, but as digital cameras and camcorders gain resolution a photo can reach 10 MB or more, and 30 minutes of video may exceed 1 GB of data. These fields demand high computing speed and low latency, so improving computing speed and response time has always been a design goal for chips. The memory paired with an ASIC chip is typically 64 MB or 128 MB, so when the data to be processed exceeds 512 MB, the ASIC chip must access the memory repeatedly, moving data into and out of memory from external storage many times, which lowers the processing speed. Moreover, to store such growing data sets an ASIC chip generally needs multiple memory modules, for example four 2 GB modules per chip, so N computing chips working in parallel require 4N 2 GB modules. Yet when multiple computing chips work simultaneously the amount of data actually stored does not exceed 2 GB, which wastes memory and raises system cost.

Designs that process large volumes of related data face two difficulties in the prior art: first, the need for a large performance increase; second, in a distributed system, the data-dependency problem must also be solved, i.e. data finished in one subsystem must be presented to all other subsystems for confirmation and reprocessing. There are generally two ways to reduce the time consumed by data processing: speed up the clock of the data-processing logic, or increase the number of concurrent processing blocks.

Under process constraints, clock-rate increases are limited, so raising concurrency is the more effective way to improve performance. Higher concurrency, however, generally raises the data-bandwidth requirement accordingly. In a typical system where data bandwidth is set by the DDR memory, DDR bandwidth does not scale linearly. Suppose the initial system contains one DDR group providing 1x bandwidth. A 2x improvement can be achieved with two DDR groups, but a 16x or greater improvement cannot be achieved by simply instantiating 16 DDR groups in one system, because of physical size constraints.

When multiple ASIC chips must cooperate, the data cannot simply be spread across disconnected systems for processing, because the data are all related: every piece of data completed in one processing unit must be confirmed and reprocessed in the others. Raising the data-transfer rate between multiple ASIC chips therefore also requires solving the multi-system interconnection problem.

Content of the utility model

An object of the embodiments of the present utility model is to provide a way of connecting distributed storage through high-speed interfaces, so that multiple homogeneous systems can concurrently process a large amount of related data. The embodiments provide a big data computing acceleration system in which external chip memory is eliminated and the storage units are placed inside the ASIC chips, reducing the time the ASIC chips spend reading data from outside and speeding up computation. Multiple ASIC chips share the storage units, which reduces both the number of storage units and the wiring between the ASIC computing chips, simplifying the system structure and lowering chip cost. In addition, serdes interfaces are used for data transmission between the computing chips, raising the data-transfer rate between the ASIC chips.

To achieve the above object, the embodiments of the present utility model provide the following technical solutions:

A first big data computing acceleration system provided according to an embodiment of the present utility model includes two or more computing chips. Each computing chip includes N cores, N data lanes, and a storage unit; each lane includes a sending interface (tx) and a receiving interface (rx), cores correspond one-to-one with lanes, and a core sends and receives data or control instructions through its lane. The computing chips are connected through the sending interfaces (tx) and receiving interfaces (rx) so that data and control instructions can be transmitted. The storage units of the two or more computing chips store data in a distributed manner: a core can obtain data from the storage unit of its own chip or from the storage units of other computing chips. N is a positive integer greater than or equal to 4.
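As an informal illustration of the claimed topology (not part of the patent; all names and the hop-by-hop read model are hypothetical), the chips, lanes, and distributed storage units can be sketched as:

```python
# Hypothetical model of the claimed topology: M chips in a ring, each with
# N cores and an on-chip storage unit; the storage units together form one
# distributed store that any core can read via ring hops.

class Chip:
    def __init__(self, chip_id, n_cores=4):
        self.chip_id = chip_id
        self.cores = list(range(n_cores))   # core0..coreN-1, one lane each
        self.storage = {}                   # on-chip storage unit
        self.next_chip = None               # each lane's tx feeds the next chip's rx

def build_ring(m, n_cores=4):
    """Connect M >= 2 chips tx -> rx into a closed ring."""
    chips = [Chip(i, n_cores) for i in range(m)]
    for i, chip in enumerate(chips):
        chip.next_chip = chips[(i + 1) % m]  # last chip loops back to the first
    return chips

def read(chips, requester_id, owner_id, key):
    """A core reads its own chip's storage directly, or a remote chip's via ring hops."""
    chip = chips[requester_id]
    hops = 0
    while chip.chip_id != owner_id:          # forward around the ring to the owner
        chip = chip.next_chip
        hops += 1
    return chip.storage.get(key), hops

chips = build_ring(4)
chips[2].storage["x"] = 42                   # data resides on chip 2 only
value, hops = read(chips, requester_id=0, owner_id=2, key="x")
```

The sketch shows the key property of the claim: the requesting core needs no external memory of its own, only a path around the ring to whichever chip holds the data.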

Optionally, the sending interface (tx) and receiving interface (rx) of a computing chip are serdes interfaces, and the computing chips communicate with each other through these serdes interfaces.

Optionally, each data lane further includes a receive-address judgment unit and a send-address judgment unit. One end of the receive-address judgment unit is connected to the receiving interface (rx) and the other end to the core; one end of the send-address judgment unit is connected to the sending interface (tx) and the other end to the core; the receive-address and send-address judgment units are connected to each other.

Optionally, the receiving interface (rx) receives a data frame sent by the adjacent computing chip on one side and passes it to the receive-address judgment unit, which forwards the frame to the core and, at the same time, to the send-address judgment unit; the send-address judgment unit passes the frame to the sending interface (tx), which sends it on to the adjacent computing chip on the other side.
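The forwarding behavior described above, where rx delivers a frame both to the local core and onward to the next chip's tx, implies that a frame injected anywhere circulates to every chip in the ring. A minimal sketch of that property (function and field names are illustrative only):

```python
# Hypothetical sketch of per-lane frame forwarding: a frame arriving on rx is
# handed to the local core AND passed on through tx toward the next chip, so
# every chip in the ring eventually sees it exactly once.

def propagate(ring_size, origin, frame):
    """Return the order in which chips receive `frame`, starting at `origin`."""
    delivered = []
    chip = origin
    for _ in range(ring_size):
        delivered.append(chip)         # receive-address unit hands the frame to the core
        chip = (chip + 1) % ring_size  # send-address unit forwards it out on tx
    return delivered

# A frame injected at chip 1 of a 4-chip ring visits chips 1, 2, 3, then 0.
order = propagate(ring_size=4, origin=1, frame={"src": 1, "payload": b"data"})
```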

Optionally, a core generates a data frame and passes it to the send-address judgment unit, which passes it to the sending interface (tx); the sending interface sends the frame to the adjacent computing chip.

Optionally, the receive-address judgment unit and the send-address judgment unit are connected to each other through a first-in-first-out (FIFO) memory.

Optionally, the storage unit includes multiple memories connected to at least one storage control unit, which controls data reads and writes for those memories.

Optionally, each memory includes at least two storage subunits and a storage control subunit; the storage control subunit is connected through an interface to each of the at least one storage control unit and controls data reads and writes for the at least two storage subunits.

Optionally, the storage subunit is an SRAM memory.

Optionally, the two or more computing chips are connected in a ring.

Optionally, the two or more computing chips are not connected to any external storage unit.

Optionally, the computing chip further includes a first data interface (130) connected to an external host for receiving external data or control instructions.

Optionally, the computing chip stores external data in at least one storage unit of the two or more computing chips.

Optionally, the first data interface is a UART control unit.

Optionally, the N cores are connected to each of the at least one storage control unit, and data is read from and written to the multiple memories according to operation commands from the N cores.

Optionally, a core sends the data it generates to the at least one storage control unit, which sends the data to the storage control subunit, which stores the data in a storage subunit.

Optionally, when a core receives a data-fetch command from another computing chip, the core uses the data address to determine whether the data is stored in the storage unit of its own chip. If it is, the core sends a data-read command to the at least one storage control unit, which forwards it to the corresponding storage control subunit; the storage control subunit fetches the data from the storage subunit and returns it to the storage control unit, which returns it to the core. The core then passes the fetched data to the send-address judgment unit, which passes it to the sending interface (tx), and the sending interface sends the data to the adjacent computing chip.
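At its heart the remote-read sequence above is an address-ownership check followed by a local fetch. A hedged sketch of that check (the window size and all names are invented for illustration; the patent does not specify an address layout):

```python
# Hypothetical sketch of the remote data-fetch flow: a core checks whether the
# requested address falls in its own chip's address window; if so, it reads the
# data through the storage-control chain and replies on its tx interface.

LOCAL_RANGE = 0x1000  # assumed size of each chip's address window

def handle_fetch(chip_id, local_store, addr):
    """Serve a fetch command only if `addr` falls in this chip's window."""
    owner = addr // LOCAL_RANGE            # which chip's window the address is in
    if owner != chip_id:
        return None                        # not ours: the frame keeps circulating
    offset = addr % LOCAL_RANGE            # storage control unit -> subunit read
    return local_store.get(offset)

store_on_chip2 = {0x10: b"weights"}
assert handle_fetch(1, {}, 2 * LOCAL_RANGE + 0x10) is None  # chip 1 ignores it
data = handle_fetch(2, store_on_chip2, 2 * LOCAL_RANGE + 0x10)
```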

Optionally, the computing chip is used to perform one or more of encryption operations and convolution calculations.

Optionally, the computing chips each perform independent operations, with each computing unit producing its own result.

Optionally, the computing chips perform cooperative operations, each computing chip operating on the calculation results of the other computing chips.

Optionally, the at least one first data interface (130) receives an external instruction to initialize and configure the storage units of the two or more computing chips, assigning a unified address space to the storage subunits in the storage units of the two or more computing chips.
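The unified addressing of storage subunits across chips can be pictured as a simple linear mapping from one global address space to (chip, subunit, offset). The sizes and names below are assumptions for illustration, not values from the patent:

```python
# Hypothetical unified address map: every SRAM subunit on every chip receives a
# contiguous slice of one global address space, so a core can name any location
# in the whole system with a single address.

SUBUNIT_SIZE = 512          # assumed bytes per SRAM subunit
SUBUNITS_PER_CHIP = 4       # assumed subunits per chip's storage unit

def decode(global_addr):
    """Map a global address to (chip, subunit, offset)."""
    chip_span = SUBUNIT_SIZE * SUBUNITS_PER_CHIP
    chip = global_addr // chip_span
    within = global_addr % chip_span
    return chip, within // SUBUNIT_SIZE, within % SUBUNIT_SIZE

# Address 0 is chip 0, subunit 0, offset 0; chip 1's space starts one span later.
assert decode(0) == (0, 0, 0)
triple = decode(1 * SUBUNIT_SIZE * SUBUNITS_PER_CHIP + 3 * SUBUNIT_SIZE + 7)
```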

Optionally, the computing chip can transmit calculation results outward through the at least one first data interface (130).

Optionally, the core is used for data calculation and data-storage control.

A second big data computing acceleration system provided according to an embodiment of the present utility model includes two or more computing chips connected in a ring. Each computing chip includes a data sending interface (tx), a data receiving interface (rx), and the at least one storage unit; the sending and receiving interfaces are serdes interfaces, through which the computing chips communicate. Each core of a computing chip can obtain data from the storage unit of its own chip or from the storage units of other computing chips.

A third big data computing acceleration system provided according to an embodiment of the present utility model includes two or more computing chips whose signal connections form a ring. Each computing chip includes a data sending interface (tx), a data receiving interface (rx), and the at least one storage unit; the sending and receiving interfaces are serdes interfaces, through which the computing chips communicate. The at least one storage unit of the two or more computing chips stores data in a distributed manner, and the computing chips have no external memory units.

A fourth big data computing acceleration system provided according to an embodiment of the present utility model includes two or more computing chips. Each computing chip includes N cores, N data lanes, and at least one storage unit; each lane includes a sending interface (tx) and a receiving interface (rx), cores correspond one-to-one with lanes, and a core sends and receives data through its lane. The two or more computing chips are connected through the sending and receiving interfaces to transmit data, and the at least one storage unit stores data in a distributed manner. N is a positive integer greater than or equal to 4.

In the embodiments of the present utility model, multiple chips are placed in the big data computing acceleration system; each chip contains multiple cores, each core performs computation and storage-control functions, and at least one storage unit is connected to each core inside the chip. Because every core can read data both from its own storage unit and from the storage units connected to the cores of other computing chips, each core effectively has a large memory, which reduces the number of times data must be moved into and out of memory from external storage and speeds up processing. At the same time, because the cores can operate independently or cooperatively, processing is further accelerated. Multiple ASIC chips share the storage units, which reduces both the number of storage units and the wiring between the ASIC computing chips, simplifying the system structure and lowering chip cost.

Description of drawings

To explain the embodiments of the present utility model or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some exemplary embodiments; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the structure of a big data computing acceleration system with M ASIC chips according to the first embodiment;

Fig. 2 is a schematic diagram of the structure of a computing chip with 4 cores;

Fig. 3 is a schematic diagram of the structure of a data lane;

Fig. 4a is a schematic diagram of the structure of a first embodiment of the storage unit;

Fig. 4b is a schematic diagram of the structure of a second embodiment of the storage unit;

Fig. 5 is a schematic diagram of the data-transmission process of the big data computing acceleration system;

Fig. 6 is a schematic signal-flow diagram of a computing chip with 4 cores according to the first embodiment;

Fig. 7 is a schematic diagram of a data structure according to an embodiment of the present utility model.

Detailed description

To make the features and technical content of the embodiments of the present disclosure clearer, their implementation is described in detail below with reference to the accompanying drawings, which are for reference only and are not intended to limit the embodiments. In the following description, numerous details are provided for ease of explanation and to give a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form to simplify the drawings.

Exemplary embodiments of the present utility model are described below with reference to the drawings. It should be understood that these embodiments are given only so that those skilled in the art can better understand and implement the utility model, and do not limit its scope in any way. Rather, they are provided so that this disclosure will be thorough and complete and will fully convey its scope to those skilled in the art.

In addition, it should be noted that the up, down, left, and right directions in the drawings are merely illustrations of specific embodiments. Those skilled in the art can reorient some or all of the components shown in the drawings as actually needed without affecting the function of each component or of the system as a whole; such reoriented technical solutions still fall within the protection scope of the present utility model.

A multi-core chip is a multiprocessing system embodied on a single large-scale-integration semiconductor chip. Typically, two or more cores are embodied on a multi-core die and interconnected by a bus, which may be formed on the same die. Anywhere from two to many cores can be embodied on the same die, the upper limit on the number of cores being set only by manufacturing capability and performance constraints. Multi-core chips can host applications that perform specialized arithmetic and/or logic operations in multimedia and signal-processing algorithms, such as video encoding/decoding, 2D/3D graphics, audio and speech processing, image processing, telephony, speech recognition and speech synthesis, and encryption.

Although the background section mentions only ASIC application-specific integrated circuits, the specific wiring implementations in the embodiments can also be applied to multi-core CPUs, GPUs, FPGAs, and the like. In these embodiments, the multiple cores may be identical cores or different cores.

Fig. 1 is a schematic diagram of the structure of a big data computing acceleration system with M ASIC chips according to the first embodiment. As shown in Fig. 1, the system includes M ASIC computing chips, where M is a positive integer greater than or equal to 2, for example 6, 10, or 12. Each computing chip includes multiple cores (core0, core1, core2, core3) and 4 data lanes (lane0, lane1, lane2, lane3); each lane includes a sending interface (tx) and a receiving interface (rx), and cores correspond one-to-one with lanes. For example, core0 of computing chip 10 has lane0, which has a sending interface (lane0 tx) and a receiving interface (lane0 rx); the sending interface is used by core0 to send data or control instructions out of computing chip 10, and the receiving interface delivers data or control instructions from outside computing chip 10 to core0.
In this way the M computing chips are connected through the sending interfaces (tx) and receiving interfaces (rx) so that data and control instructions can be transmitted, and the M chips form a closed ring. A storage unit is provided in each computing chip; all 4 cores of a chip are connected to its storage unit, and the storage units of the M chips store data in a distributed manner, so a core can obtain data from the storage unit of its own chip or from the storage units of other computing chips. Because all 4 cores are connected to the storage unit, the storage unit also serves the purpose of data exchange among the 4 cores. Those skilled in the art will appreciate that 4 cores are chosen here only as an example; the number of cores may be N, where N is a positive integer greater than or equal to 4, for example 6, 10, or 12, and the cores may be identical or different.

Optionally, in the system provided by this embodiment, the two or more computing chips need not be connected to external storage units. Each chip contains a storage unit that can store data, so the chip does not need a connection to external storage, which reduces the time the chip spends reading data from outside and speeds up computation.

The sending interface (lane tx) and receiving interface (lane rx) of a data lane are serdes interfaces, through which the computing chips communicate. Serdes is short for SERializer/DESerializer, a mainstream time-division-multiplexed (TDM), point-to-point (P2P) serial communication technology: at the sending end, multiple low-speed parallel signals are converted into a high-speed serial signal, which travels over the transmission medium (optical cable or copper wire) and is converted back into low-speed parallel signals at the receiving end. This point-to-point serial technology makes full use of the channel capacity of the medium, reduces the number of transmission channels and device pins required, and raises signal transmission speed, greatly lowering communication cost. Of course, other communication interfaces, such as SSI or UART, could be used here in place of serdes. Data and control instructions are transmitted between the chips through the serdes interfaces.
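The serializer/deserializer round trip described above, parallel words flattened into a serial bit stream and regrouped at the receiver, can be sketched in a few lines. This is a toy model of the concept only, not a description of any real serdes PHY:

```python
# Toy serializer/deserializer: parallel 8-bit words are flattened into one
# serial bit stream at the sender and regrouped into words at the receiver.

WORD_BITS = 8

def serialize(words):
    """Flatten parallel words into a serial list of bits, MSB first."""
    bits = []
    for w in words:
        bits.extend((w >> i) & 1 for i in reversed(range(WORD_BITS)))
    return bits

def deserialize(bits):
    """Regroup the serial bit stream back into parallel words."""
    words = []
    for i in range(0, len(bits), WORD_BITS):
        w = 0
        for b in bits[i:i + WORD_BITS]:
            w = (w << 1) | b
        words.append(w)
    return words

payload = [0xA5, 0x3C, 0x00, 0xFF]
assert deserialize(serialize(payload)) == payload   # lossless round trip
```

The point of the model is the pin-count trade-off: four parallel 8-bit words become one 32-bit serial stream, i.e. one wire pair instead of eight data pins, at the cost of a higher line rate.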

FIG. 2 illustrates a first embodiment of the structure of a computing chip with 4 cores. Those skilled in the art will understand that four cores are chosen here only as an illustrative example; the number of cores in a computing chip may be N, where N is a positive integer greater than or equal to 2, for example 6, 10, 12, and so on. In this embodiment, the cores of the computing chip may have the same function or different functions.

The 4-core computing chip (1) comprises 4 cores (core0, core1, core2, core3), 4 data channels (lane0, lane1, lane2, lane3), at least one storage unit, and a data exchange control unit, which in this embodiment is a UART control unit. Each data channel (lane) comprises one sending interface (lane tx) and one receiving interface (lane rx).

Core core0 of the computing chip (1) is connected to the sending interface (lane0 tx) and receiving interface (lane0 rx) of its data channel. The sending interface (lane0 tx) is used by core0 to send data or control instructions to the computing chip connected to computing chip (1); the receiving interface (lane0 rx) is used to deliver to core0 the data or control instructions transmitted by that connected computing chip. Likewise, core1 of computing chip (1) is connected to sending interface (lane1 tx) and receiving interface (lane1 rx); core2 is connected to sending interface (lane2 tx) and receiving interface (lane2 rx); and core3 is connected to sending interface (lane3 tx) and receiving interface (lane3 rx). The sending interfaces (lane tx) and receiving interfaces (lane rx) of the data channels are serdes interfaces.

A data exchange control unit is connected through a bus to the storage unit and to the four cores (core0, core1, core2, core3); the bus is not drawn in FIG. 2. The data exchange control unit may be implemented with various protocols, such as UART, SPI, PCIE, SERDES, or USB; in this embodiment it is a UART (Universal Asynchronous Receiver/Transmitter) control unit. A UART is an asynchronous transceiver that converts the data to be transmitted between serial and parallel form, and is commonly integrated into the links of various communication interfaces. The UART protocol is used here only as an example; other protocols may also be adopted. The UART control unit receives external data and, according to the external data address, sends it to a core (core0, core1, core2, core3) or to the storage unit. The UART control unit can also receive external control instructions and send them to the cores or the storage unit; it can further be used by a computing chip to send internal or external control instructions to other computing chips, to receive control instructions from other chips, and to feed back operation results or intermediate data to the outside.
Internal data or internal control instructions are data or control instructions generated by the chip itself; external data or external control instructions are data or control instructions generated outside the chip, for example those sent by an external host or an external network.

The main functions of the cores (core0, core1, core2, core3) are executing external or internal control instructions, performing data computation, and controlling data storage. All cores in the computing chip are connected to the storage unit and can read data from or write data to it, so data exchange among the multiple cores of the chip is realized through the storage unit; the cores can also send control commands to the storage unit. According to the instructions, a core can write data to, read data from, or send control instructions to the storage units of other computing chips through the serdes interface; it can likewise send data to, read data from, or send control instructions to the cores of other computing chips through the serdes interface.

FIG. 3 illustrates a first embodiment of the structure of a data channel (lane). The data channel comprises a receiving interface, a sending interface, a receiving address judgment unit, a sending address judgment unit, and several registers. One end of the receiving address judgment unit is connected to the receiving interface; its other end is connected to a core through a register. One end of the sending address judgment unit is connected to the sending interface (tx); its other end is connected to the core through a register. The receiving address judgment unit and the sending address judgment unit are connected to each other through a register; they may also be interconnected through a first-in, first-out memory, where FIFO (first-in, first-out) is a scheme for servicing requests issued from a queue or stack in which the earliest request is processed first.
When the receiving interface receives a data frame or control instruction sent by the adjacent computing chip on its side, it passes the frame or instruction to the receiving address judgment unit, which sends it to the core and, at the same time, to the sending address judgment unit; the sending address judgment unit passes the frame or instruction to the sending interface (tx), and the sending interface sends it to the adjacent computing chip on the other side. When the core generates a data frame or control instruction, it sends it to the sending address judgment unit, which passes it to the sending interface, and the sending interface sends it to the receiving interface of the adjacent computing chip. The registers serve to buffer data frames and control instructions.
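The dual-path behavior of the lane — one copy of an incoming frame to the local core, one copy into the FIFO toward lane tx — can be modeled as a short sketch. This is a behavioral illustration only; the register and FIFO stages are represented by Python containers.

```python
from collections import deque

def lane_receive(frame, core_regs, forward_fifo):
    """On rx: one copy goes to the core (via a register), one enters the FIFO toward lane tx."""
    core_regs.append(frame)       # path 1: delivered to this chip's core
    forward_fifo.append(frame)    # path 2: handed to the sending address judgment unit

def lane_send(forward_fifo):
    """The sending side drains the FIFO in arrival order (first in, first out)."""
    return forward_fifo.popleft() if forward_fifo else None
```

Draining the FIFO in arrival order is what lets the send side forward frames without reordering them relative to the receive side.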

FIG. 4a illustrates a first embodiment of the structure of a storage unit. Each computing chip contains N cores that need concurrent, random access to data. If N reaches the order of 64 or more, the memory bandwidth required of the computing chip becomes very large; even GDDR (Graphics Double Data Rate) memory can hardly deliver such bandwidth. Therefore, in this embodiment of the utility model, an SRAM (Static Random-Access Memory) array routed through a large MUX (multiplexer) is used to provide high bandwidth. As shown in FIG. 4a, the system is composed of two levels of storage control, which eases congestion in the implementation. The storage unit (40) may comprise multiple memories, for example 8 memories (410 … 417), which are connected to a storage control unit (420); the storage control unit controls reading from and storing to the multiple memories. Each memory (410 … 417) comprises at least two storage subunits and one storage control subunit; the storage control subunit is connected to the storage control unit through an interface and controls reading from and storing to the at least two storage subunits. The storage subunits are SRAM memories.
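The two-level routing of FIG. 4a amounts to decoding an address into (memory, subunit, offset). The sketch below assumes, for illustration only, 8 memories of 2 subunits each and a subunit capacity of 1024 words; the real capacities are not specified in the description.

```python
SUBUNIT_WORDS = 1024        # capacity of one SRAM subunit (assumed)
SUBUNITS_PER_MEMORY = 2     # "at least two storage subunits" per memory
MEMORIES = 8                # memories 410 ... 417

def decode(addr):
    """Level 1: the storage control unit selects one of 8 memories.
    Level 2: the storage control subunit selects the SRAM subunit and word offset."""
    words_per_memory = SUBUNIT_WORDS * SUBUNITS_PER_MEMORY
    memory = addr // words_per_memory
    rest = addr % words_per_memory
    subunit = rest // SUBUNIT_WORDS
    offset = rest % SUBUNIT_WORDS
    return memory, subunit, offset
```

Splitting the decode across two levels means neither control stage has to arbitrate over the full address space at once, which is the congestion-reduction point made above.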

FIG. 4b illustrates a second embodiment of the structure of a storage unit. In FIG. 4b, multiple storage control units (420, 421, 422, 423) may be provided in the storage unit. Each core is connected to every one of the storage control units (420, 421, 422, 423) and reads and writes data in the multiple memories according to the operation commands of the N cores. Each storage control unit is connected to each memory (410 … 417). The structure of a memory is identical to that in FIG. 4a and is not described again here.

A core sends the data it generates to at least one storage control unit, the storage control unit sends the data to a storage control subunit, and the storage control subunit stores the data in a storage subunit. When a core receives a data-fetch command sent by another computing chip, it uses the data address to judge whether the data is stored in the storage unit of its own chip; if so, it sends a data-read command to the at least one storage control unit. The storage control unit forwards the read command to the corresponding storage control subunit, which fetches the data from a storage subunit and returns it to the storage control unit; the storage control unit sends the fetched data to the core, the core passes it to the sending address judgment unit, the sending address judgment unit passes it to the sending interface (tx), and the sending interface transmits the data to the adjacent computing chip.

The computing chips are used to perform one or more of encryption operations and convolution calculations. Exemplary details are given below.

This big data computing acceleration system can be applied in the field of artificial intelligence. The UART control unit of a computing chip stores the picture or video data sent by the external host into the storage unit through a core, and the computing chip generates the mathematical model of a neural network; the model may also be stored into the storage unit by the external host through the UART control unit and then read by each computing chip. The first layer of the neural network model runs on computing chip (1): its cores read data from the storage unit of their own chip and/or the storage units of other chips, perform the computation, and store the result through the serdes interface into at least one storage unit of the other computing chips, or into the storage unit of their own chip. Computing chip (1) then sends a control instruction through the UART control unit or the serdes interface to the next computing chip (2) to start its computation.
The second layer of the neural network model runs on computing chip (2), whose cores likewise read data from the storage unit of their own chip and/or the storage units of other chips, compute, and store the results through the serdes interface into at least one storage unit of the other chips or into the local storage unit. Each chip executes one layer of the neural network, obtaining data through the serdes interface from the storage units of other chips or its own, until the last layer of the network produces the final result. A computing chip then fetches the result from its local storage unit or from the storage unit of another computing chip and feeds it back to the external host through the UART control unit.
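The one-layer-per-chip scheme can be condensed into a functional sketch: each chip reads its predecessor's output from the distributed storage, applies its layer, and writes the result back. The storage dictionary and the layer functions are placeholders for the SRAM units and the per-chip models.

```python
def run_pipeline(layers, x, storage):
    """Chip i executes layer i; results move between chips through the shared storage."""
    storage[0] = x                       # input stored via the UART control unit
    for i, layer in enumerate(layers):
        storage[i + 1] = layer(storage[i])   # chip i reads slot i, writes slot i+1
    return storage[len(layers)]          # final result fetched and fed back to the host
```

With two toy layers, `run_pipeline([lambda v: v + 1, lambda v: v * 2], 3, {})` evaluates layer by layer exactly as the chips would hand results around the ring.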

The big data computing acceleration system can also be applied in the field of digital certificates. The UART control unit of computing chip (1) stores the block information sent by the external host into at least one of the storage units of the multiple computing chips. The external host may send control instructions through the UART control units of the computing chips (1 … M) to all M chips to start data computation. Alternatively, the external host sends a control instruction to the UART control unit of one computing chip (1), which in turn sends control instructions to the other M−1 computing chips, and the M chips start computing. As a further alternative, the external host sends a control instruction to the UART control unit of computing chip (1), the first computing chip (1) sends a control instruction to the second computing chip (2), the second computing chip (2) sends one to the third (3), and the third (3) sends one to the fourth (4), whereupon the M computing chips start computing.
The M computing chips obtain data through the serdes interfaces from the storage units of other chips or their own and perform the proof-of-work computation simultaneously; computing chip (1) fetches the result from the storage unit and feeds it back to the external host through the UART control unit.

FIG. 5 illustrates a first embodiment of the data transmission process of the big data computing acceleration system. The computing chips perform cooperative operations: each chip computes using the results of the other chips. For example, the system contains N computing chips in total and each chip completes 1/N of the work; because of data dependencies, after a chip finishes the data it is responsible for, it must transmit its result to all the other chips. Computing chip n−1 is the source of a data frame, whose data is sent through lane0 tx to computing chip 0. Inside computing chip 0 the frame is split into two paths: one path is delivered to the core of chip 0, and the other is forwarded into the lane0 tx channel of chip 0, so that the frame is sent on to computing chip 1.

Source ID mechanism: each data frame carries the ID of the computing chip that originated it. Whenever the frame arrives at a new computing chip, that chip checks the chip ID in the frame; if it finds that this ID equals the ID of the next computing chip to which it is connected, the frame is not forwarded again, meaning that the frame's life cycle ends there and it no longer occupies bandwidth. The check of the chip ID in the data frame may be performed in the core or in the receiving address judgment unit.
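The source ID termination rule can be checked with a small ring simulation: a frame hops chip to chip, each hop delivers a copy to the local core, and forwarding stops when the next chip is the frame's source. This is a behavioral sketch of the rule, with chips identified by their ring position.

```python
def broadcast(n, src):
    """Forward a frame around a ring of n chips, starting from source chip `src`.
    Returns the chip IDs that received a copy; forwarding stops when the next
    chip's ID equals the frame's source chip ID."""
    received = []
    cur = (src + 1) % n           # frame leaves the source via its lane tx
    while True:
        received.append(cur)      # one copy is handed to this chip's core
        if (cur + 1) % n == src:  # next chip is the source: do not forward again
            break
        cur = (cur + 1) % n
    return received
```

For a 4-chip ring with source chip 3 (the "chip n−1" of FIG. 5), the frame reaches chips 0, 1, and 2 and is dropped at chip 2, so every chip except the source sees the frame exactly once.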

FIG. 6 illustrates a first embodiment of the signal flow of a computing chip with 4 cores.

The computing chip further comprises a first data interface (130) connected to an external host for receiving external data or control instructions. The computing chip stores the external data into at least one storage unit of the two or more computing chips. The first data interface may be a UART control unit.

The UART control unit (130) is used to obtain data or control instructions from outside the chip and to transmit them to the core (110) connected to the UART control unit. The core (110) transfers the external data to the storage unit (120) of its own chip for storage according to the data address, or sends the data through a signal channel (lane) to the core of another chip corresponding to the data address, and that core stores the data in its local storage unit. An external control instruction is executed by the core of this chip or, according to the instruction address, sent through a signal channel (lane) to the core of the other chip corresponding to that address for execution. When a core needs data, it may obtain it from the local storage unit or from the storage unit of another computing chip.
To obtain data from another chip's storage unit, the core (110) broadcasts a data-fetch control instruction through its serdes interface (150) to the connected computing chip; the connected chip splits the instruction into two paths, delivering one to its core and forwarding the other to the next chip. If a connected chip determines that the data is stored in its local storage unit, its core reads the data from the storage unit and sends it through the serdes interface to the unit that issued the fetch instruction. Of course, control instructions between computing chips may also be sent through the UART control units. When a core feeds an operation result or intermediate data back to the outside according to an external or internal control instruction, it obtains the result or data from the storage unit of its own chip, or from the storage unit of another chip through the serdes interface, and sends it to the outside through the first data interface (130); when the first data interface is a UART control unit, the result or intermediate data is sent out through the UART control unit. "The outside" here may refer to an external host, an external network, an external platform, and the like.

The at least one first data interface (130) receives external instructions to initialize and configure the storage units of the two or more computing chips and to apply unified addressing to the storage subunits within those storage units. If the first data interface (130) is a UART control unit, the external host can initialize and configure the storage unit parameters through the UART control unit and address the multiple storage units in one unified address space.
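Unified addressing can be sketched as assigning each chip's storage unit a contiguous slice of one global address space at initialization, then resolving any global address to a (chip, local offset) pair. The contiguous-slice layout is an assumption of this sketch; the description does not fix a particular layout.

```python
def build_address_map(unit_sizes):
    """Initialization: give each chip's storage unit a contiguous slice of the
    global address space, in chip order."""
    address_map, base = [], 0
    for chip_id, size in enumerate(unit_sizes):
        address_map.append((chip_id, base, base + size))
        base += size
    return address_map

def locate(address_map, addr):
    """Resolve a global address to (chip id, local offset) within that chip's unit."""
    for chip_id, lo, hi in address_map:
        if lo <= addr < hi:
            return chip_id, addr - lo
    raise ValueError("address out of range")
```

With three chips of 1024 words each, global address 1500 resolves to offset 476 inside chip 1's storage unit, so a core anywhere in the ring can name remote data without knowing which chip holds it.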

Naturally, a core computes on the data it obtains and stores the results in the storage unit. Each storage unit is divided into a dedicated storage area and a shared storage area. The dedicated storage area stores the temporary operation results of one computing chip, that is, intermediate results that this chip will continue to use but other chips will not. The shared storage area stores the computational results of the chip that are used by other computing chips or that need to be fed back and transmitted to the outside.

In this embodiment of the utility model, multiple cores are provided in each chip, each core performs computation and storage control, and at least one storage unit inside the chip is connected to each core. Because each core can read both the storage unit connected to itself and the storage units connected to the other cores, every core effectively has a large memory, which reduces the number of times data must be moved into or out of memory from external storage and speeds up data processing; at the same time, since the multiple cores can compute independently or cooperatively, data processing is accelerated further.

FIG. 7 illustrates the data structure according to an embodiment of the utility model. "Data" here covers command data, numerical data, character data, and other kinds of data. The data format comprises a valid bit (valid), a destination address (dst id), a source address (src id), and the data field (data). A core judges from the valid bit whether a packet is a command or a value; here it may be assumed that 0 denotes a value and 1 denotes a command. The core determines the destination address, the source address, and the data type from this structure. For example, in FIG. 1, when core 50 sends a data-read command to core 10, the valid bit is 1, the destination address is the address of core 10, the source address is the address of core 50, and the data field carries the read command together with the data type or data address. When core 10 returns data to core 50, the valid bit is 0, the destination address is the address of core 50, the source address is the address of core 10, and the data field carries the data that was read. In terms of instruction timing, this embodiment adopts a conventional six-stage pipeline: fetch, decode, execute, memory access, align, and write-back. In terms of instruction set architecture, a reduced instruction set architecture may be adopted.
Following the general design approach of reduced instruction set architectures, the instruction set of the utility model can be divided by function into register-register instructions, register-immediate instructions, jump instructions, memory access instructions, control instructions, and inter-core communication instructions.
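The valid/dst/src/data frame layout can be made concrete with a pack/unpack sketch. The field widths here (1-bit valid, 8-bit dst id, 8-bit src id, 16-bit data) are assumptions for illustration; FIG. 7 does not fix the widths.

```python
def pack_frame(is_command, dst, src, payload):
    """Pack a frame: [valid:1 | dst id:8 | src id:8 | data:16] (assumed widths)."""
    assert dst < 256 and src < 256 and payload < 65536
    return (int(is_command) << 32) | (dst << 24) | (src << 16) | payload

def unpack_frame(frame):
    """Recover (valid, dst id, src id, data) from a packed frame."""
    return ((frame >> 32) & 1, (frame >> 24) & 0xFF,
            (frame >> 16) & 0xFF, frame & 0xFFFF)
```

The read-command example from FIG. 1 becomes `pack_frame(True, 10, 50, addr)` — valid bit 1, destination core 10, source core 50 — and the reply carries valid bit 0 with the addresses swapped.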

As an embodiment, the system provided herein can perform data processing related to digital certificates, and the digital certificates can be obtained through such data processing.

Using the descriptions provided herein, the embodiments can be implemented as a machine, a process, or an article of manufacture by using standard programming and/or engineering techniques to produce programming software, firmware, hardware, or any combination thereof.

Any generated program(s) with computer-readable program code may be embodied on one or more computer-usable media, such as resident storage devices, smart cards or other removable storage devices, or transmission devices, thereby producing computer program products and articles of manufacture according to the embodiments. As such, the terms "article of manufacture" and "computer program product" as used herein are intended to cover a computer program that exists permanently or temporarily on any computer-usable, non-transitory medium.

As noted above, memory/storage devices include, but are not limited to, magnetic disks, optical discs, removable storage devices (such as smart cards, subscriber identity modules (SIM), and wireless identification modules (WIM)), and semiconductor memories (such as random-access memory (RAM), read-only memory (ROM), and programmable read-only memory (PROM)). Transmission media include, but are not limited to, transmission via wireless communication networks, the Internet, intranets, telephone/modem-based network communication, hard-wired/cable communication networks, satellite communication, and other fixed or mobile network systems/communication links.

Although specific example embodiments have been disclosed, those skilled in the art will understand that changes can be made to these specific example embodiments without departing from the spirit and scope of the utility model.

The utility model has been described above on the basis of the embodiments with reference to the accompanying drawings, but the utility model is not limited to those embodiments; schemes in which parts of the embodiments and the modifications are appropriately combined or substituted according to layout needs and the like are also included within the scope of the utility model. In addition, the combinations and processing orders of the embodiments may be rearranged as appropriate based on the knowledge of those skilled in the art, and modifications such as various design changes may be applied to the embodiments; embodiments to which such modifications are applied may also fall within the scope of the utility model.

Although various concepts have been described in detail, those skilled in the art will appreciate that various modifications and substitutions of those concepts can be implemented within the spirit of the overall teachings of this disclosure. Those skilled in the art, using ordinary skill, can implement the invention set forth in the claims without undue experimentation. It is to be understood that the particular concepts disclosed are merely illustrative and are not intended to limit the scope of the invention, which is to be determined by the appended claims together with the full scope of their equivalents.

Claims (26)

1. A big data computing acceleration system, characterized in that it comprises two or more computing chips, each computing chip comprising N cores, N data channels (lanes), and at least one storage unit; each data channel (lane) comprises a transmit interface (tx) and a receive interface (rx); the cores and the data channels (lanes) correspond one-to-one, and each core sends and receives data through its data channel (lane); the two or more computing chips are connected through the transmit interfaces (tx) and the receive interfaces (rx) to transmit data; the at least one storage unit is used for distributed storage of data, and each core of a computing chip can obtain data from the storage unit of its own computing chip as well as from the storage units of the other computing chips; wherein N is a positive integer greater than or equal to 4.
2. The system according to claim 1, characterized in that the transmit interface (tx) and the receive interface (rx) of each computing chip are serdes interfaces, and the computing chips communicate with one another through the serdes interfaces.
3. The system according to claim 1 or 2, characterized in that the data channel (lane) further comprises a receive-address judging unit and a send-address judging unit; one end of the receive-address judging unit is connected to the receive interface (rx), and the other end of the receive-address judging unit is connected to the core; one end of the send-address judging unit is connected to the transmit interface (tx), and the other end of the send-address judging unit is connected to the core; the receive-address judging unit and the send-address judging unit are interconnected.
4. The system according to claim 3, characterized in that the receive interface (rx) receives a data frame sent by the computing chip adjacent on one side and sends the data frame to the receive-address judging unit; the receive-address judging unit sends the data frame to the core and, at the same time, sends the data frame to the send-address judging unit; the send-address judging unit receives the data frame and sends it to the transmit interface (tx), and the transmit interface sends the data frame to the computing chip adjacent on the other side.
5. The system according to claim 3, characterized in that the core generates a data frame and sends the data frame to the send-address judging unit; the send-address judging unit sends the data frame to the transmit interface (tx), and the transmit interface (tx) sends the data frame to the adjacent computing chip.
6. The system according to claim 3, characterized in that the receive-address judging unit and the send-address judging unit are interconnected through a first-in-first-out (FIFO) memory.
7. The system according to claim 1 or 2, characterized in that the storage unit comprises a plurality of memories, the plurality of memories being connected to at least one storage control unit; the at least one storage control unit is used to control data reading from, or data storage into, the plurality of memories.
8. The system according to claim 7, characterized in that each memory comprises at least two storage subunits and a storage control subunit; the storage control subunit is connected through an interface to each of the at least one storage control unit, and the storage control subunit is used to control data reading from, or data storage into, the at least two storage subunits.
9. The system according to claim 8, characterized in that the storage subunits are SRAM memories.
10. The system according to claim 1 or 2, characterized in that the two or more computing chips are connected in a ring.
11. The system according to claim 1 or 2, characterized in that the two or more computing chips are not connected to any external storage unit.
12. The system according to claim 1 or 2, characterized in that each computing chip further comprises a first data interface (130) connected to an external host for receiving external data or control instructions.
13. The system according to claim 12, characterized in that the computing chip stores external data into at least one storage unit of the two or more computing chips.
14. The system according to claim 12, characterized in that the first data interface is a UART control unit.
15. The system according to claim 8, characterized in that the N cores are connected to each of the at least one storage control unit, and data are read from and written to the plurality of memories according to the operation instructions of the N cores.
16. The system according to claim 15, characterized in that a core sends generated data to the at least one storage control unit; the at least one storage control unit sends the data to the storage control subunit, and the storage control subunit stores the data into a storage subunit.
17. The system according to claim 16, characterized in that a core of a computing chip receives a data-acquisition instruction sent by another computing chip and judges, by the data address, whether the data are stored in a storage unit of its own computing chip; if so, the core sends a data-read command to the at least one storage control unit; the at least one storage control unit sends the data-read command to the corresponding storage control subunit; the storage control subunit obtains the data from a storage subunit and sends the obtained data to the at least one storage control unit; the at least one storage control unit sends the obtained data to the core; the core sends the obtained data to the send-address judging unit, the send-address judging unit sends the obtained data to the transmit interface (tx), and the transmit interface sends the obtained data to the adjacent computing chip.
18. The system according to claim 1 or 2, characterized in that the computing chips are used to perform one or more of cryptographic operations and convolution operations.
19. The system according to claim 18, characterized in that the computing chips each perform independent operations, with each computing unit calculating its own result separately.
20. The system according to claim 18, characterized in that the computing chips are used to perform cooperative operations, each computing chip performing operations according to the calculation results of the other computing chips.
21. The system according to claim 12, characterized in that at least one first data interface (130) receives an external instruction to initially configure the storage units of the two or more computing chips, performing unified addressing of the storage subunits in the storage units of the two or more computing chips.
22. The system according to claim 12, characterized in that the computing chip can transmit calculation results outward through the at least one first data interface (130).
23. The system according to claim 1 or 2, characterized in that the cores are used for data calculation and data storage control.
24. A big data computing acceleration system, characterized in that it comprises two or more computing chips connected in a ring; each computing chip comprises a data transmit interface (tx), a data receive interface (rx), and at least one storage unit; the data transmit interface (tx) and the receive interface (rx) are serdes interfaces, and the computing chips communicate data with one another through the serdes interfaces; each core of a computing chip can obtain data from the storage unit of its own computing chip as well as from the storage units of the other computing chips.
25. A big data computing acceleration system, characterized in that it comprises two or more computing chips connected by signal lines into a ring; each computing chip comprises a data transmit interface (tx), a data receive interface (rx), and at least one storage unit; the data transmit interface (tx) and the receive interface (rx) are serdes interfaces, and the computing chips communicate data with one another through the serdes interfaces; the at least one storage unit of the two or more computing chips is used for distributed storage of data, and the computing chips are not connected to external memory units.
26. A big data computing acceleration system, characterized in that it comprises two or more computing chips, each computing chip comprising N cores, N data channels (lanes), and at least one storage unit; each data channel (lane) comprises a transmit interface (tx) and a receive interface (rx); the cores and the data channels (lanes) correspond one-to-one, and each core sends and receives data through its data channel (lane); the two or more computing chips are connected through the transmit interfaces (tx) and the receive interfaces (rx) to transmit data; the at least one storage unit is used for distributed storage of data; wherein N is a positive integer greater than or equal to 4.
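The routing and unified-addressing behavior recited in claims 10, 17, and 21 — chips connected in a ring, one global address space partitioned across the per-chip storage units, and a read request forwarded chip to chip until it reaches the chip whose local storage holds the address — can be illustrated with a small software model. This is a hypothetical sketch for illustration only, not the claimed hardware; all class and function names here are invented.

```python
# Illustrative software model of the claimed scheme: a ring of computing
# chips sharing one unified address space (claim 21). A core checks whether
# a requested address is local; if not, the request travels over the
# tx/rx links to the adjacent chip (claims 10 and 17).

class Chip:
    def __init__(self, chip_id, base_addr, size):
        self.chip_id = chip_id
        self.base = base_addr      # start of this chip's slice of the
        self.size = size           # unified address space
        self.storage = {}          # contents of the local storage unit
        self.next = None           # link to the adjacent chip in the ring

    def holds(self, addr):
        # Address-judging step: is this address in the local slice?
        return self.base <= addr < self.base + self.size

    def read(self, addr, hops=0):
        # Local hit: read the chip's own storage unit.
        if self.holds(addr):
            return self.storage.get(addr, 0), hops
        # Miss: forward the request to the adjacent chip via the tx link.
        return self.next.read(addr, hops + 1)

def build_ring(num_chips=4, slice_size=1024):
    chips = [Chip(i, i * slice_size, slice_size) for i in range(num_chips)]
    for i, c in enumerate(chips):
        c.next = chips[(i + 1) % num_chips]   # ring topology
    return chips

chips = build_ring()
chips[2].storage[2 * 1024 + 5] = 42    # value held on chip 2
value, hops = chips[0].read(2 * 1024 + 5)   # chip 0 issues the read
print(value, hops)                     # prints "42 2": two hops around the ring
```

Because the ring is unidirectional in this model, a request from chip 0 for an address on chip 2 traverses chips 1 and 2; in the claimed hardware the same forwarding is done by the send-address judging units and serdes interfaces rather than by function calls.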
CN201821774904.XU 2018-10-30 2018-10-30 Big data computing acceleration system Active CN209149287U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201821774904.XU CN209149287U (en) 2018-10-30 2018-10-30 Big data computing acceleration system

Publications (1)

Publication Number Publication Date
CN209149287U true CN209149287U (en) 2019-07-23

Family

ID=67271666

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112740192A (en) * 2018-10-30 2021-04-30 北京比特大陆科技有限公司 Big data computing acceleration system and data transmission method
CN112740192B (en) * 2018-10-30 2024-04-30 北京比特大陆科技有限公司 Big data operation acceleration system and data transmission method
CN114002587A (en) * 2021-12-30 2022-02-01 中科声龙科技发展(北京)有限公司 Chip supporting workload proving mechanism and testing method thereof
CN117389928A (en) * 2023-10-27 2024-01-12 中科驭数(北京)科技有限公司 Data transmission methods, devices, equipment and storage media
CN117389929A (en) * 2023-10-27 2024-01-12 中科驭数(北京)科技有限公司 Data transmission method, device, equipment and storage medium

Legal Events

Date Code Title Description
GR01 Patent grant