
CN214901448U - Embedded artificial intelligence server - Google Patents

Embedded artificial intelligence server

Info

Publication number
CN214901448U
CN214901448U (application CN202023292688.5U)
Authority
CN
China
Prior art keywords
circuit board
circuit
electrically connected
power supply
artificial intelligence
Prior art date
Legal status
Active
Application number
CN202023292688.5U
Other languages
Chinese (zh)
Inventor
游雄峰 (You Xiongfeng)
翁晓光 (Weng Xiaoguang)
肖杨 (Xiao Yang)
Current Assignee
Fujian Rongrongxin Microelectronics Technology Co ltd
Original Assignee
Fujian Rongrongxin Microelectronics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Fujian Rongrongxin Microelectronics Technology Co ltd
Priority to CN202023292688.5U
Application granted
Publication of CN214901448U
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The embedded artificial intelligence server comprises a plurality of first circuit units and a second circuit unit. The first circuit units are pluggably mounted on the second circuit unit; each first circuit unit comprises a first circuit board and at least one embedded neural network processor arranged on the first circuit board, and the first circuit units are electrically connected with the second circuit unit. Using an embedded neural network processor for image processing consumes less power than a GPU, reducing the power consumption of the server. Because a plurality of first circuit units can be plugged onto the second circuit unit, additional units can be plugged in whenever the graphics processing capacity of one embedded neural network processor is insufficient, increasing the number of processors and raising throughput, achieving two benefits at once.

Description

Embedded artificial intelligence server
Technical Field
The utility model relates to the field of artificial intelligence, and in particular to an embedded artificial intelligence server.
Background
Chinese invention patent application No. 201480039208.2 (publication No. CN105531995B) discloses a system for identifying objects and events of interest using one or more cameras with image processing functionality. The system includes a plurality of cameras configured to perform image processing of a scene from multiple angles in order to extract and transmit metadata corresponding to objects or people in the scene. The cameras transmit the metadata to a processing station configured to process the data stream over time, detect objects and events of interest, and alert monitoring personnel. In actual use, however, there is the following disadvantage: images acquired by the cameras are conventionally processed by a GPU, and because complex scenes carry large amounts of image information, a GPU with high processing capacity is required, so the power consumption of the GPU is high.
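The camera-to-station pipeline described in the background can be sketched as a minimal data model. The field names and schema below are illustrative assumptions of my own; the cited patent does not specify a metadata format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    # Hypothetical fields; the cited patent does not define a schema.
    label: str                  # e.g. "person", "vehicle"
    confidence: float           # detector score in [0, 1]
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

@dataclass
class CameraMetadata:
    camera_id: str
    timestamp_ms: int
    objects: List[DetectedObject] = field(default_factory=list)

def events_of_interest(frames: List[CameraMetadata], label: str) -> List[CameraMetadata]:
    """Processing-station side: keep frames whose metadata mentions `label`."""
    return [f for f in frames if any(o.label == label for o in f.objects)]

frame = CameraMetadata("cam-01", 1_700_000_000_000,
                       [DetectedObject("person", 0.91, (10, 20, 64, 128))])
hits = events_of_interest([frame], "person")
```

The point of such a design is that only compact metadata, not raw video, crosses the network to the processing station.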
SUMMARY OF THE UTILITY MODEL
The utility model provides an embedded artificial intelligence server whose main aim is to overcome the high power consumption of conventional GPU-based image processing.
In order to solve the technical problem, the utility model adopts the following technical scheme:
the embedded artificial intelligence server comprises a plurality of first circuit units and a second circuit unit, the first circuit units being pluggably mounted on the second circuit unit; each first circuit unit comprises a first circuit board and at least one embedded neural network processor arranged on the first circuit board, and the first circuit units are electrically connected with the second circuit unit.
Furthermore, the first circuit unit further comprises a gold finger insertion block arranged on one side of the first circuit board; the second circuit unit comprises a second circuit board and a plurality of power supply slots arranged on the second circuit board at intervals, and the gold finger insertion block can be plugged into and unplugged from the power supply slots.
Further, the first circuit unit further comprises a core board arranged on the first circuit board, the embedded neural network processor is arranged on the core board, and the core board is electrically connected with the first circuit board.
Further, a central processing unit is also arranged on the core board; the central processing unit is electrically connected with the embedded neural network processor and with the gold finger insertion block.
Furthermore, the second circuit unit further comprises a plurality of power supply buses arranged on the second circuit board at intervals; each power supply slot is electrically connected to the power supply buses, and the power supply slots are connected in parallel across each bus.
Furthermore, the power supply buses extend along the length direction of the second circuit board, and the power supply slots are arranged perpendicular to and crossing the power supply buses.
Furthermore, the first circuit unit further includes a network transformer disposed on the first circuit board and an ethernet interface disposed on the first circuit board, the network transformer is electrically connected to the ethernet interface, and an output end of the ethernet interface is electrically connected to an enable end of the central processing unit.
Furthermore, the first circuit unit further comprises a power interface arranged on the first circuit board, and the power interface is electrically connected with the first circuit board.
Furthermore, the first circuit unit further comprises an ADC interface disposed on the first circuit board and an SPI interface disposed on the first circuit board, an output end of the ADC interface is electrically connected to an enable end of the central processing unit, and an output end of the SPI interface is electrically connected to an enable end of the central processing unit.
Furthermore, the first circuit unit further comprises an encoder arranged on the first circuit board, and the output end of the encoder is electrically connected with the enable end of the central processing unit.
Compared with the prior art, the utility model produces the following beneficial effects:
1. The utility model has a simple structure and strong practicability. Image processing is performed by an embedded neural network processor, whose power consumption is lower than that of a GPU, reducing the power consumption of the server; because of the embedded neural network processor, the server needs no additional independent GPU device, reducing the volume of the server. A plurality of first circuit units can be plugged onto the second circuit unit, so when the graphics processing capacity of one embedded neural network processor is insufficient, more first circuit units can be plugged in to increase the number of processors and improve throughput. Moreover, an embedded neural network processor costs less than a GPU, so replacing the GPU reduces the overall cost of the server.
2. Because the gold finger insertion blocks plug removably into the power supply slots, a plurality of first circuit boards can be mounted on the second circuit board, allowing multiple embedded neural network processors to be deployed to raise graphics processing throughput; at the same time, plugging each board's gold finger insertion block into its corresponding power supply slot supplies power to all of the first circuit units.
3. Because the power supply buses connect all of the power supply slots in parallel, the embedded neural network processor on each first circuit board operates independently of the other boards: if one first circuit board fails and loses power, the normal power supply of the other boards is unaffected, so their embedded neural network processors continue image processing. This improves the stability and reliability of the server.
4. When a first circuit unit is used on its own, it can be powered directly and independently through the power interface.
Drawings
Fig. 1 is a schematic structural diagram of the present invention.
Fig. 2 is a schematic block diagram of the first circuit unit.
Fig. 3 is a schematic structural diagram of a second circuit unit.
Detailed Description
The following describes embodiments of the utility model with reference to the drawings.
Referring to fig. 1, 2 and 3, an embedded artificial intelligence server includes a plurality of first circuit units 1 and a second circuit unit 2; the first circuit units 1 are pluggably mounted on the second circuit unit 2 and are electrically connected with it.
Referring to fig. 1 and 2, the first circuit unit 1 includes a first circuit board 11, at least one embedded neural network processor 18 disposed on the first circuit board 11, a gold finger plug 12 disposed on one side of the first circuit board 11, a core board 14 disposed on the first circuit board 11, a power interface 13 disposed on the first circuit board 11, an ethernet interface 20 disposed on the first circuit board 11, an SPI interface 42 disposed on the first circuit board 11, an ADC interface 43 disposed on the first circuit board 11, an SDIO interface 44 disposed on the first circuit board 11, an encoder 21 disposed on the first circuit board 11, and a network transformer 19 disposed on the first circuit board 11.
Referring to fig. 1 and 2, in this embodiment the embedded neural network processor 18 is arranged on the core board 14.
Referring to fig. 1 and 2, the core board 14 further includes a central processing unit 17, a memory 15, a storage 16, and a power manager 41.
Referring to fig. 1 and 2, in this embodiment the central processing unit 17 is a dual-core Hi3516DV300 with a 900 MHz main frequency; the embedded neural network processor is a 1.0 TOPS NPU; the memory 15 is DDR3 8G or DDR3 16G; the storage 16 is eMMC 32G or eMMC 64G; the power manager 41 is a discrete DC-DC design; the supported system is Linux 4.9.37 + Busybox 1.26.2; the core board 14 is model DR4-DV300; and the network transformer 19 is model 11FB-05NL.
Referring to fig. 1 and 2, in this embodiment, the core board 14 and the first circuit board 11 are electrically connected by soldering between the core board 14 and the first circuit board 11.
The power interface 13 is electrically connected to the first circuit board 11.
The output end of the encoder 21 is electrically connected with the enable end of the central processing unit 17.
The output end of the ADC interface 43 is electrically connected to the enable end of the central processing unit 17, and the output end of the SPI interface 42 is electrically connected to the enable end of the central processing unit 17.
The output end of the SDIO interface 44 is electrically connected to the enable end of the central processing unit 17.
The network transformer 19 is electrically connected to the ethernet interface 20, and an output terminal of the ethernet interface 20 is electrically connected to an enable terminal of the central processing unit 17.
Referring to fig. 1 and 2, in this embodiment, the memory 15, the storage 16, the power manager 41, and the embedded neural network processor 18 are electrically connected to the central processing unit 17, and the power interface 13 and the gold finger plugging block 12 are electrically connected to the central processing unit 17.
When a first circuit unit is used on its own, it can be powered directly and independently through the power interface.
Referring to fig. 3, the second circuit unit 2 includes a second circuit board 21, a plurality of power supply buses 30 spaced on the second circuit board 21, and a plurality of power supply slots (power supply slot 22 and power supply slot 23) spaced on the second circuit board 21.
Referring to fig. 3, each power supply slot (power supply slot 22 and power supply slot 23) is electrically connected to the power supply buses 30, and the slots are connected in parallel across each bus 30.
Because each power supply bus connects all of the power supply slots in parallel, the embedded neural network processor on each first circuit board operates independently of the other boards; if one first circuit board fails and loses power, the normal power supply of the other boards is unaffected, so their embedded neural network processors continue image processing, which improves the stability and reliability of the server. The second circuit board 21 is powered by an external power feed or by an electrically connected rechargeable lithium battery.
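The parallel bus topology can be illustrated with a toy software model (the class and its failure semantics are my own illustration, not part of the utility model): each slot draws power from the shared bus independently, so removing or failing one board leaves the others powered.

```python
class PowerBus:
    """Toy model of the backplane: slots are wired to the bus in parallel,
    so each first circuit board is powered independently of the others."""

    def __init__(self, num_slots: int):
        self.num_slots = num_slots
        self.boards = {}              # slot index -> board state ("ok" / "failed")

    def plug(self, slot: int) -> None:
        self.boards[slot] = "ok"

    def fail(self, slot: int) -> None:
        # A fault on one board cuts power only to that board's slot.
        self.boards[slot] = "failed"

    def powered_boards(self):
        return sorted(s for s, state in self.boards.items() if state == "ok")

bus = PowerBus(num_slots=4)
for slot in range(3):
    bus.plug(slot)
bus.fail(1)                           # board in slot 1 fails and is powered off
survivors = bus.powered_boards()      # boards in slots 0 and 2 keep operating
```

A series (daisy-chained) topology, by contrast, would drop every downstream board on a single fault, which is exactly the failure mode the parallel wiring avoids.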
The gold finger insertion block 12 plugs into and unplugs from the power supply slots (power supply slot 22 and power supply slot 23), so that the first circuit unit 1 can be plugged onto and removed from the second circuit unit 2.
Because the gold finger insertion blocks plug removably into the power supply slots, a plurality of first circuit boards can be mounted on the second circuit board, allowing multiple embedded neural network processors to be deployed to raise graphics processing throughput; at the same time, plugging each board's gold finger insertion block into its corresponding power supply slot supplies power to all of the first circuit units.
The power supply buses 30 extend along the length direction of the second circuit board 21, and the power supply slots (power supply slot 22 and power supply slot 23) are arranged perpendicular to and crossing the power supply buses 30.
The power supply bus 30 may include power conductors DC1 and DC2, ground line GND1, data transmission lines D0 and D1, and address code transmission lines A0 and A1.
The first circuit board 11, the second circuit board 21 and the core board 14 may be printed circuit boards.
The NPU is an embedded neural network processor architecture that adopts data-driven parallel computing and is particularly good at processing the massive multimedia data of video and images.
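"Data-driven parallel computing" here means applying the same operation uniformly to many pixels at once. A minimal stdlib-only sketch of that workload shape (the NPU itself is fixed-function hardware; the thread pool below merely stands in for its parallel lanes):

```python
from multiprocessing.dummy import Pool  # thread pool standing in for NPU lanes

def brighten_row(row):
    """The same operation applied to every pixel: the pattern NPUs accelerate."""
    return [min(255, p + 50) for p in row]

# Tiny 4x4 grayscale image as a list of rows.
image = [[0, 100, 200, 250] for _ in range(4)]

with Pool(4) as pool:                   # one worker per row
    result = pool.map(brighten_row, image)
```

Because every pixel is processed independently by the same kernel, the work scales with the number of parallel lanes rather than with single-thread speed, which is why NPUs handle image workloads at much lower power than a general-purpose GPU or CPU.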
A graphics processing unit (GPU), also called a display core, visual processor, or display chip, is a microprocessor specialized for image and graphics operations on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
The CPU, or central processing unit, is one of the main devices of an electronic computer and its core component. Its functions are mainly to interpret computer instructions and to process data for computer software: it is the component responsible for reading, decoding, and executing instructions.
The GPU reduces the graphics card's dependence on the CPU and takes over part of the work formerly done by the CPU. For 3D graphics in particular, core GPU technologies include hardware T&L (transform and lighting), cubic environment texture mapping and vertex blending, texture compression and bump mapping, and a dual-texture four-pixel 256-bit rendering engine; hardware T&L can be said to be the hallmark of the GPU.
Memory, also called internal memory or main memory, temporarily stores the CPU's working data and the data exchanged with external storage such as a hard disk.
The operating system transfers the data to be processed from memory to the CPU for computation; when the computation completes, the CPU sends back the result.
eMMC (Embedded MultiMediaCard) is an embedded memory standard established by the MMC Association, aimed mainly at products such as mobile phones and tablet computers. eMMC integrates a controller in the package, provides standard interfaces, and manages the flash memory.
Flash memory is a special block-erasable form of electrically erasable programmable read-only memory that allows repeated erasing and writing in operation; it is used in devices such as memory cards, USB drives, and solid-state drives.
The utility model performs image processing with an embedded neural network processor. On one hand, its power consumption is lower than that of a GPU, reducing the power consumption of the server, and the server needs no external independent GPU device, reducing its volume; when one embedded neural network processor's graphics processing capacity is insufficient, additional first circuit units can be plugged onto the second circuit unit to increase the number of processors and improve throughput. On the other hand, an embedded neural network processor costs less than a GPU, so replacing the GPU reduces the overall cost of the server.
The above is a specific embodiment of the utility model, but the design concept of the utility model is not limited thereto; any immaterial change made to the utility model using this concept constitutes an infringement and falls within the protection scope of the utility model.

Claims (10)

1. An embedded artificial intelligence server, characterized in that: it comprises a plurality of first circuit units and a second circuit unit, the first circuit units being pluggably arranged on the second circuit unit; each first circuit unit comprises a first circuit board and at least one embedded neural network processor arranged on the first circuit board, and the first circuit units are electrically connected with the second circuit unit.
2. The embedded artificial intelligence server of claim 1, wherein: the first circuit unit further comprises a golden finger inserting block arranged on one side of the first circuit board, the second circuit unit comprises a second circuit board and a plurality of power supply slots arranged on the second circuit board at intervals, and the golden finger inserting block can be inserted into and pulled out of the power supply slots.
3. The embedded artificial intelligence server of claim 2, wherein: the first circuit unit further comprises a core board arranged on the first circuit board, the embedded neural network processor is arranged on the core board, and the core board is electrically connected with the first circuit board.
4. The embedded artificial intelligence server of claim 3, wherein: the core board is also provided with a central processing unit which is electrically connected with the embedded neural network processor, and the central processing unit is electrically connected with the golden finger plugging block.
5. The embedded artificial intelligence server of claim 2, wherein: the second circuit unit further comprises a plurality of power supply buses arranged on the second circuit board at intervals, each power supply slot is electrically connected with each power supply bus, and each power supply bus is electrically connected with each power supply slot in parallel.
6. The embedded artificial intelligence server of claim 5, wherein: the power supply bus extends along the length direction of the second circuit board, and the power supply slots are arranged perpendicular to and crossing the power supply bus.
7. The embedded artificial intelligence server of claim 4, wherein: the first circuit unit further comprises a network transformer arranged on the first circuit board and an Ethernet interface arranged on the first circuit board, the network transformer is electrically connected with the Ethernet interface, and the output end of the Ethernet interface is electrically connected with the enable end of the central processing unit.
8. The embedded artificial intelligence server of claim 1, wherein: the first circuit unit further comprises a power interface arranged on the first circuit board, and the power interface is electrically connected with the first circuit board.
9. The embedded artificial intelligence server of claim 4, wherein: the first circuit unit further comprises an ADC interface arranged on the first circuit board and an SPI interface arranged on the first circuit board, the output end of the ADC interface is electrically connected with the enable end of the central processing unit, and the output end of the SPI interface is electrically connected with the enable end of the central processing unit.
10. The embedded artificial intelligence server of claim 4, wherein: the first circuit unit further comprises an encoder arranged on the first circuit board, and the output end of the encoder is electrically connected with the enabling end of the central processing unit.
Application CN202023292688.5U, filed 2020-12-31, granted as CN214901448U (Active): Embedded artificial intelligence server

Priority Applications (1)

CN202023292688.5U (priority date 2020-12-31, filing date 2020-12-31): Embedded artificial intelligence server


Publications (1)

Publication Number Publication Date
CN214901448U true CN214901448U (en) 2021-11-26

Family

ID=78911430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202023292688.5U Active CN214901448U (en) 2020-12-31 2020-12-31 Embedded artificial intelligence server

Country Status (1)

Country Link
CN (1) CN214901448U (en)


Legal Events

Date Code Title Description
GR01 Patent grant