
CN101478570A - Card insertion type high-speed cache for cluster server and operation method thereof

Info

Publication number
CN101478570A
Authority
CN
China
Prior art keywords
cache
buffer
file data
data blocks
pci
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2009100581920A
Other languages
Chinese (zh)
Inventor
Li Yi (李毅)
Chen Qiuyi (陈秋益)
Yang Bo (杨波)
Zeng Xinke (曾新科)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CNA2009100581920A priority Critical patent/CN101478570A/en
Publication of CN101478570A publication Critical patent/CN101478570A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a card insertion type (plug-in card) high-speed cache for cluster servers and an operation method thereof. The cache comprises a network processor containing a CPU and a network engine, together with a PCI interface circuit. In the operation method, a cache card is inserted into the PCI slot of each host and connected to the network, and a cache management module and a cache driver module are installed on the Linux operating system of each host; for each request to read a file data block, the system determines whether the block is in host memory, in the local cache card, in the cache card of another host, or on the local hard disk, and then proceeds according to the corresponding procedure. Because the communication and interaction of distributed data in the system are handed over to the plug-in cache cards and communication between the hosts of the cluster is carried out by this hardware, the running system occupies little host memory, places a light load on the CPU, accesses files quickly and accurately, and effectively improves the overall processing capacity of the system.

Description

Card insertion type high-speed cache for cluster server and operation method thereof
Technical field
The invention belongs to the field of components used together with cluster streaming-media servers, and in particular relates to a card insertion type high-speed cache (plug-in cache card) and an operation method thereof used, in a multi-host cluster server system providing streaming video services, for communication between the server hosts and between the server hosts and the clients; the cache can also be used together with database cluster servers, Web cluster servers and the like.
Background art
In existing multi-host systems built on cluster servers, each host relies only on the independent caching mechanism of its own operating system, and no distributed caching system is set up between the hosts, so cached data cannot be shared among them and the utilization of data resources is low. Conventional streaming video services therefore suffer from high host memory consumption, frequent disk I/O operations and heavy CPU load, and the overall processing capacity of the system and the efficiency of network transmission are low.
Summary of the invention
The object of the invention is to overcome the drawbacks of the separate, independent caching systems used by existing cluster servers by designing a card insertion type high-speed cache for cluster servers and an operation method thereof: a relatively independent plug-in cache card is installed in each host, and corresponding functional modules are set up in the operating system of each host. This overcomes the inability of the original caching systems to share cached data between hosts, their high memory consumption and their heavy CPU load, thereby reducing host memory occupancy, lightening the CPU load, improving host input/output (I/O) performance, and raising the overall processing capacity of the system and the efficiency of network transmission.
The solution of the invention is to hand the communication and interaction of distributed data in the system over to a relatively independent plug-in cache card, and to install on the Linux operating system of each host a cache management module and a cache host-side driver module for the cache. Accordingly, the plug-in cache card of the invention comprises a network processor containing a CPU and a network engine, together with memory, flash, an EEPROM, a serial communication interface, a boundary-scan (JTAG) interface and a network interface connected to the processor; it is characterized in that a PCI interface circuit, comprising a level-shifting circuit and a 32-bit PCI edge connector, is also connected to the PCI interface of the processor, and that a uClinux operating system is stored in the flash. The PCI interface circuit is connected to the pins of the processor's PCI interface through the bus-switch chips of the level-shifting circuit, and the whole cache card is plugged into a PCI slot of the host through the 32-bit PCI edge connector of the PCI interface circuit.
The level-shifting circuit in the above PCI interface circuit consists of three bus-switch chips. The functional modules of the uClinux operating system stored in the flash comprise a cache-side driver module and an inter-cache communication module: the cache-side driver module comprises an initialization unit and an interrupt-service unit, and the inter-cache communication module comprises a send-read-request thread unit, a receive-read-request thread unit, a send-command-string thread unit and a receive-command-string thread unit. A minimal sketch of one possible organization of these modules follows.
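As an illustration only, the following minimal C sketch shows one way the cache-side driver module (initialization unit plus interrupt-service unit) and the four thread units of the inter-cache communication module might be organized on uClinux. All function and thread names are hypothetical assumptions; nothing here is code taken from the patent.

```c
/* Hypothetical sketch of the cache-card firmware layout: a driver module
 * (init + interrupt service) and an inter-cache communication module with
 * four threads.  All names are illustrative stubs, not from the patent. */
#include <pthread.h>

static void *send_read_request_thread(void *arg)      { (void)arg; /* forward local misses to remote cache cards      */ return NULL; }
static void *receive_read_request_thread(void *arg)   { (void)arg; /* serve read requests from remote cache cards     */ return NULL; }
static void *send_command_string_thread(void *arg)    { (void)arg; /* broadcast cache-management info to peer cards   */ return NULL; }
static void *receive_command_string_thread(void *arg) { (void)arg; /* apply cache-management info received from peers */ return NULL; }

static void driver_init(void) { /* initialization unit: map the PCI shared buffer, register the interrupt-service unit */ }

int main(void)
{
    void *(*entries[4])(void *) = {
        send_read_request_thread, receive_read_request_thread,
        send_command_string_thread, receive_command_string_thread
    };
    pthread_t threads[4];

    driver_init();
    for (int i = 0; i < 4; i++)   /* the four thread units of the communication module */
        pthread_create(&threads[i], NULL, entries[i], NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```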
The operation method of the above card insertion type high-speed cache for cluster servers comprises:
A. First, insert a cache card of the invention into a PCI slot of each host and connect the network interface of each card to the network; install on the Linux operating system of each host a cache management module and a cache host-side driver module for the cache;
B. The operation method is as follows (a C sketch of this decision flow is given after the list of steps):
1.0: a host process issues a read-file request to the operating system;
1.1: determine whether the requested file data block is in host memory; if so, proceed according to the conventional (background-art) flow; otherwise go to step 2.1;
2.1: enter the cache management module and determine whether the requested file data block is in the cache card of this host; if so, send the block directly from the cache card to the host process; otherwise go to step 2.2;
2.2: determine whether the requested file data block is in the cache card of another host; if so, go to step 2.3; otherwise go to step 2.4;
2.3: copy the file data block from the remote cache card into the local cache card and send it to the host process;
2.4: determine whether the requested file data block is on the local hard disk; if it is not, the request is invalid and the process exits; if it is, go to step 2.5;
2.5: send the file data block to the host process according to the conventional flow, and at the same time write the block into the local cache card for future use.
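To make the four-level lookup in steps 1.1–2.5 concrete, here is a self-contained C sketch of the decision flow. The lookup and transfer helpers are trivial stand-ins introduced only for illustration (they are not defined by the patent); only the order of the checks follows the steps above.

```c
/* Hypothetical C sketch of the read path in steps 1.0-2.5.  The helpers below
 * are trivial placeholders so the sketch compiles on its own. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { int file_id; long block_no; } block_req_t;

/* Placeholder lookups/transfers standing in for the page cache, the local
 * cache card, the remote cache cards and the local disk. */
static bool in_host_memory(const block_req_t *r)       { (void)r; return false; }
static bool in_local_cache_card(const block_req_t *r)  { (void)r; return false; }
static bool in_remote_cache_card(const block_req_t *r) { (void)r; return false; }
static bool on_local_disk(const block_req_t *r)        { (void)r; return true;  }
static void serve_from_host_memory(const block_req_t *r, char *buf) { (void)r; strcpy(buf, "host memory"); }
static void serve_from_local_cache(const block_req_t *r, char *buf) { (void)r; strcpy(buf, "local cache card"); }
static void fetch_remote_into_local_cache(const block_req_t *r)     { (void)r; }
static void read_from_disk(const block_req_t *r, char *buf)         { (void)r; strcpy(buf, "local disk"); }
static void write_block_to_local_cache(const block_req_t *r, const char *buf) { (void)r; (void)buf; }

static int read_file_block(const block_req_t *req, char *buf)
{
    if (in_host_memory(req)) {              /* step 1.1: conventional flow   */
        serve_from_host_memory(req, buf);
    } else if (in_local_cache_card(req)) {  /* step 2.1                      */
        serve_from_local_cache(req, buf);
    } else if (in_remote_cache_card(req)) { /* steps 2.2-2.3                 */
        fetch_remote_into_local_cache(req); /* copy into the local card      */
        serve_from_local_cache(req, buf);
    } else if (on_local_disk(req)) {        /* steps 2.4-2.5                 */
        read_from_disk(req, buf);
        write_block_to_local_cache(req, buf); /* keep a copy for future use  */
    } else {
        return -1;                          /* step 2.4: request invalid     */
    }
    return 0;
}

int main(void)
{
    block_req_t req = { 1, 42 };
    char buf[32];
    if (read_file_block(&req, buf) == 0)
        printf("block served from: %s\n", buf); /* "local disk" with these stubs */
    return 0;
}
```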
The cache management module installed on the Linux operating system of each host comprises a read-cache-request unit, a write-cache-request unit and a command-string management unit. The cache host-side driver module comprises a read-request driver unit, a write-request driver unit and an interrupt-service unit.
Because the invention hands the communication and interaction of distributed data in the system over to a relatively independent plug-in cache card, that is, it replaces the original kernel caching with a hardware cache, it solves the problems of high host memory occupancy and heavy CPU load in the original system. A cache management module and a cache host-side driver module are installed on the Linux operating system of each host, and communication between the hosts of the cluster is carried out by the hardware, which improves host input/output (I/O) performance, increases the overall processing capacity of the system, and guarantees fast and accurate file access. The invention therefore occupies little host memory, places a light load on the CPU, allows the hosts of the cluster to communicate through the hardware, and accesses files quickly and accurately, effectively improving the overall processing capacity of the system.
Description of drawings
Fig. 1 is a structural schematic diagram of the plug-in cache card of the invention;
Fig. 2 is a schematic diagram (block diagram) of the operation flow of the method of the invention;
Fig. 3 is a schematic diagram of the functional module structure used by the method of the invention in the embodiment.
In the figures: 5, plug-in cache card; 5-1, processor; 5-2, memory; 5-3, flash; 5-4, EEPROM; 5-5, network interface; 5-6, level-shifting circuit; 5-7, 32-bit PCI edge connector; 5-8, boundary-scan (JTAG) interface; 5-9, serial communication interface; 5-10, PCI interface circuit.
Embodiment
In the plug-in cache card 5 of the present embodiment: the processor 5-1 is an IXP425 network processor from Intel, containing a CPU and a network engine; its SDRAM interface is connected to four MT48LC32M16A2 chips forming the memory 5-2 with a total capacity of 256 MB. The flash 5-3 is a 32 MB TE28F128J3C150 chip connected to the processor 5-1 through its expansion bus interface; this flash provides the storage space for the operating system and programs of the card. The EEPROM 5-4 is a PCF8594 connected to pins 6 and 7 (GPIO6, GPIO7) of the general-purpose input/output (GPIO) interface of the processor 5-1; the EEPROM stores the network MAC address of the system. The network interface 5-5 uses an LXT972A Ethernet transceiver chip connected to the media-independent interface (MII) of the processor 5-1; its outer end is connected to the external Ethernet through an RJ45 connector. The JTAG interface 5-8 is connected directly to the corresponding pins of the processor 5-1 and serves as the boundary-scan and initial-code programming interface. The serial communication interface 5-9 uses a MAX3223E chip connected to the serial communication pins of the processor 5-1 and serves as a terminal debugging interface. The PCI interface circuit 5-10 consists of a level-shifting circuit 5-6, which bidirectionally converts between 3.3 V and 5 V signal levels, and a 32-bit PCI edge connector 5-7; the level-shifting circuit 5-6 consists of three CBTD16210 bus-switch chips. When the plug-in cache card is used, the 32-bit edge connector 5-7 is inserted into a PCI slot of the host and the card is connected to the network through its network interface 5-5.
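Purely as a compact summary of the component selection above, the board can be described in a small C table; the table structure itself is an assumption for illustration, while the part numbers are those named in the embodiment.

```c
/* Illustrative summary of the embodiment's hardware; the struct is hypothetical,
 * the part numbers come from the description above. */
#include <stdio.h>

struct board_component { const char *ref; const char *part; const char *role; };

static const struct board_component cache_card[] = {
    { "5-1", "Intel IXP425",              "network processor (CPU + network engine)" },
    { "5-2", "4 x MT48LC32M16A2",         "256 MB SDRAM main memory" },
    { "5-3", "TE28F128J3C150",            "32 MB flash holding uClinux and programs" },
    { "5-4", "PCF8594",                   "EEPROM on GPIO6/GPIO7 storing the MAC address" },
    { "5-5", "LXT972A + RJ45",            "Ethernet transceiver on the MII interface" },
    { "5-6", "3 x CBTD16210",             "3.3 V / 5 V bus-switch level shifting" },
    { "5-7", "32-bit PCI edge connector", "plugs into the host PCI slot" },
    { "5-9", "MAX3223E",                  "serial debug interface" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof cache_card / sizeof cache_card[0]; i++)
        printf("%-4s %-26s %s\n", cache_card[i].ref, cache_card[i].part, cache_card[i].role);
    return 0;
}
```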
Fig. 3 is a schematic diagram of the functional module structure used by the operation method in the present embodiment; the operation method comprises:
A. First, insert a cache card of the invention into a PCI slot of each host and connect the network interface of each card to the network; at the same time install on the Linux operating system of each host a cache management module, comprising a read-cache-request unit, a write-cache-request unit and a command-string management unit, and a cache host-side driver module, comprising a read-request driver unit, a write-request driver unit and an interrupt-service unit;
B. The concrete operational process is:
1.0: a host process issues a read-file request to the operating system;
1.1: determine whether the requested file data block is in host memory; if so, enter step 1.2 and proceed according to the conventional (background-art) flow; otherwise go to step 2.1;
2.1: enter the cache management module 2.0 and determine whether the requested file data block is in the cache card of this host; if not, go to step 2.2; if so, carry out the following operations (a driver-level sketch of this sequence is given after the description of this step):
The read-cache-request unit 2.01 of the cache management module 2.0 calls the read-request driver unit 3.1 of the cache host-side driver module 3.0, which sets up a DMA (direct memory access) buffer to hold the file data block to be transferred from the cache card, maps the address of this buffer onto the PCI bus 7, places the mapped address and the read-file-data-block request information into a shared buffer accessible to both the host and the cache card, and then sends a read-file-data-block interrupt to the cache card over the PCI bus 7;
The interrupt-service unit 5.12 of the driver module 5.1 on the cache card responds to this read-file-data-block interrupt, reads the DMA mapping address and the request information from the shared buffer, and, according to them, transfers the requested file data block stored on the cache card into the DMA buffer set up by the host by DMA; when the DMA transfer is complete, the cache card sends an interrupt to the host over the PCI bus 7;
The interrupt-service unit 3.3 of the cache host-side driver module 3.0 responds to this interrupt and returns the requested file data block to the read-cache-request unit 2.01, which then returns it to the host process that requested the block;
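A much-simplified sketch of the host-side part of this sequence, written as a Linux kernel driver fragment, is given below. The register offsets, the layout of the shared buffer and all symbol names are assumptions introduced for illustration; only the sequence DMA buffer → shared buffer → doorbell interrupt → completion interrupt follows the description above.

```c
/* Hypothetical fragment of the host-side read path in step 2.1.  Offsets,
 * shared-buffer layout and names are illustrative, not from the patent. */
#include <linux/completion.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/string.h>

#define DOORBELL_READ_BLOCK  0x04   /* hypothetical doorbell register offset    */
#define SHARED_BUF_OFFSET    0x100  /* hypothetical shared-buffer offset in BAR0 */

struct cache_card {
    struct pci_dev    *pdev;
    void __iomem      *bar0;        /* card registers and shared buffer          */
    struct completion  done;        /* signalled by the card's completion IRQ    */
};

static int cache_read_block(struct cache_card *cc, u32 file_id, u32 block_no,
                            void *dst, size_t len)
{
    dma_addr_t dma;
    void *dma_buf = dma_alloc_coherent(&cc->pdev->dev, len, &dma, GFP_KERNEL);
    if (!dma_buf)
        return -ENOMEM;

    /* Put the PCI-visible DMA address and the request into the shared buffer
     * (32-bit PCI, so the low 32 bits are the whole address here).           */
    iowrite32(lower_32_bits(dma), cc->bar0 + SHARED_BUF_OFFSET + 0x0);
    iowrite32(file_id,            cc->bar0 + SHARED_BUF_OFFSET + 0x4);
    iowrite32(block_no,           cc->bar0 + SHARED_BUF_OFFSET + 0x8);

    /* "Read file data block" interrupt (doorbell) to the cache card.         */
    iowrite32(1, cc->bar0 + DOORBELL_READ_BLOCK);

    /* The card's interrupt-service unit reads the shared buffer, DMAs the
     * block into dma_buf and interrupts the host; the host ISR then calls
     * complete(&cc->done).                                                   */
    wait_for_completion(&cc->done);

    memcpy(dst, dma_buf, len);
    dma_free_coherent(&cc->pdev->dev, len, dma_buf, dma);
    return 0;
}
```

Using a coherent DMA buffer keeps the host CPU out of the bulk data copy, which is the point of off-loading the cache onto the card.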
2.2: the cache management module 2.0 determines whether the requested file data block is in the cache card of another host; if so, go to step 2.3; otherwise go to step 2.4;
2.3: the read-cache-request unit 2.01 of the cache management module 2.0 likewise calls the read-request driver unit 3.1 of the cache host-side driver module 3.0, which sets up a DMA (direct memory access) buffer to hold the file data block to be transferred from the cache card, maps the buffer address onto the PCI bus 7, places the mapped address and the read-file-data-block request information into the shared buffer accessible to both the host and the cache card, and then sends a read-file-data-block interrupt to the cache card over the PCI bus 7;
The interrupt-service unit 5.12 of the cache card responds to this read-file-data-block interrupt and reads the DMA mapping address and request information from the shared buffer; it passes the read request to the send-read-request thread unit 5.21 of the inter-cache communication module 5.2, which sends the read-file-data-block request to the remote cache card that holds the block. The receive-read-request thread unit 5.22 of that remote cache card responds to the request, reads the requested file data block and sends it back to the local cache card. The communication module of the local cache card stores the block on the card and then returns it to the requesting host process in the same way as in step 2.1 (a userspace sketch of this inter-cache exchange is given below);
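As a rough userspace illustration of the inter-cache exchange in step 2.3, the send-read-request and receive-read-request threads could exchange a block over TCP as sketched here; the wire format, port number and helper name are hypothetical, not defined by the patent.

```c
/* Hypothetical sketch of the inter-cache read-request exchange in step 2.3.
 * Wire format, port number and names are illustrative only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define CACHE_PORT  9500          /* hypothetical inter-cache TCP port */
#define BLOCK_SIZE  4096

struct block_request { uint32_t file_id; uint32_t block_no; };

/* Requesting side: ask the remote cache card that holds the block. */
ssize_t fetch_block_from_remote(const char *remote_ip,
                                const struct block_request *req,
                                unsigned char block[BLOCK_SIZE])
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(CACHE_PORT) };
    if (fd < 0)
        return -1;
    inet_pton(AF_INET, remote_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    send(fd, req, sizeof *req, 0);              /* read-file-data-block request  */

    ssize_t got = 0, n;
    while (got < BLOCK_SIZE &&                  /* block sent back by the remote */
           (n = recv(fd, block + got, BLOCK_SIZE - got, 0)) > 0)
        got += n;
    close(fd);
    return got;                                 /* caller stores it in the local
                                                   cache card and serves the host */
}
```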
2.4: determine whether the requested file data block is on the local hard disk; if it is not, the request is invalid and the process exits; if it is, go to step 2.5;
2.5: send the file data block to the host process according to the conventional flow, and at the same time start the write-cache-request unit 2.02 of the cache management module 2.0 to write the block into the cache card for future use. The write-cache-request unit 2.02 calls the write-request driver unit 3.2, which sets up a DMA (direct memory access) buffer, copies the file data block to be written into it, maps the buffer address onto the PCI bus 7, places the mapped address and the request information into the shared buffer accessible to both the host and the cache card, and then sends a write-file-data-block interrupt to the cache card over the PCI bus 7;
The interrupt-service unit 5.12 of the cache card responds to this write-file-data-block interrupt, reads the DMA mapping address and the write-request information from the shared buffer, transfers the file data block from the host to the cache card by DMA according to them, and stores it in the corresponding storage area of the card. At the same time the driver module 5.1 on the cache card calls the send-command-string thread unit 5.23 of the inter-cache communication module 5.2 to send the cache-management information for this page to every other cache card connected to the network; when the receive-command-string thread unit 5.24 of a remote cache card receives this command string, it passes it through the cache-card driver module 5.1 and the cache host-side driver module 3.0 to the command-string management unit 2.3 of that host's cache management module, which updates the corresponding file-data-block cache-management information (a sketch of this broadcast is given below).
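The command-string broadcast at the end of step 2.5 can be pictured with the following sketch: after a block is written into the local cache card, its cache-management information is sent to every other cache card. The message layout, the use of UDP, the port number and the peer list are assumptions for illustration only, not details given by the patent.

```c
/* Hypothetical sketch of the command-string broadcast in step 2.5. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define CMD_PORT 9501                     /* hypothetical command-string port  */

struct cache_cmd {                        /* "command string": which cache card */
    uint32_t owner_id;                    /* now holds which file data block    */
    uint32_t file_id;
    uint32_t block_no;
};

/* send-command-string thread unit: notify every other cache card. */
int broadcast_cache_cmd(const struct cache_cmd *cmd,
                        const char *peer_ips[], int npeers)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    for (int i = 0; i < npeers; i++) {
        struct sockaddr_in peer = { .sin_family = AF_INET,
                                    .sin_port   = htons(CMD_PORT) };
        inet_pton(AF_INET, peer_ips[i], &peer.sin_addr);
        sendto(fd, cmd, sizeof *cmd, 0,   /* the receive-command-string thread */
               (struct sockaddr *)&peer,  /* on each peer updates its cache-   */
               sizeof peer);              /* management information            */
    }
    close(fd);
    return 0;
}
```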

Claims (7)

1. A card insertion type high-speed cache for a cluster server, comprising a network processor containing a CPU and a network engine, and memory, flash, an EEPROM, a serial communication interface, a boundary-scan interface and a network interface connected to the processor, characterized in that a PCI interface circuit comprising a level-shifting circuit and a 32-bit PCI edge connector is also connected to the PCI interface of the processor, and a uClinux operating system is stored in the flash; the PCI interface circuit is connected to the pins of the processor's PCI interface through the bus-switch chips of the level-shifting circuit, and the whole cache is plugged into a PCI slot of the host through the 32-bit PCI edge connector of the PCI interface circuit.
2. The card insertion type high-speed cache for a cluster server according to claim 1, characterized in that the level-shifting circuit in the PCI interface circuit consists of three bus-switch chips.
3. The card insertion type high-speed cache for a cluster server according to claim 1, characterized in that the functional modules of the uClinux operating system stored in the flash comprise a cache-side driver module and an inter-cache communication module.
4. The card insertion type high-speed cache for a cluster server according to claim 1 or 2, characterized in that the cache-side driver module comprises an initialization unit and an interrupt-service unit, and the inter-cache communication module comprises a send-read-request thread unit, a receive-read-request thread unit, a send-command-string thread unit and a receive-command-string thread unit.
5. An operation method of the card insertion type high-speed cache for a cluster server according to claim 1, comprising:
A. first inserting a cache of the invention into a PCI slot of each respective host and connecting the network interface of each cache to the network, and installing on the Linux operating system of each host a cache management module and a cache host-side driver module for the cache;
B. the operation method being:
1.0: a host process issues a read-file request to the operating system;
1.1: determining whether the requested file data block is in host memory; if so, proceeding according to the conventional (background-art) flow; otherwise going to step 2.1;
2.1: entering the cache management module and determining whether the requested file data block is in the cache of this host; if so, sending the block directly from the cache to the host process; otherwise going to step 2.2;
2.2: determining whether the requested file data block is in the cache of another host; if so, going to step 2.3; otherwise going to step 2.4;
2.3: copying the file data block from the remote cache into the local cache and sending it to the host process;
2.4: determining whether the requested file data block is on the local hard disk; if it is not, the request is invalid and the process exits; if it is, going to step 2.5;
2.5: sending the file data block to the host process according to the conventional flow, and at the same time writing the block into the local cache for future use.
6. The operation method according to claim 5, characterized in that the cache management module installed on the Linux operating system of each host comprises a read-cache-request unit, a write-cache-request unit and a command-string management unit.
7. The operation method according to claim 5, characterized in that the cache host-side driver module comprises a read-request driver unit, a write-request driver unit and an interrupt-service unit.
CNA2009100581920A 2009-01-20 2009-01-20 Card insertion type high-speed cache for cluster server and operation method thereof Pending CN101478570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100581920A CN101478570A (en) 2009-01-20 2009-01-20 Card insertion type high-speed cache for cluster server and operation method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2009100581920A CN101478570A (en) 2009-01-20 2009-01-20 Card insertion type high-speed cache for cluster server and operation method thereof

Publications (1)

Publication Number Publication Date
CN101478570A true CN101478570A (en) 2009-07-08

Family

ID=40839200

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2009100581920A Pending CN101478570A (en) 2009-01-20 2009-01-20 Card insertion type high-speed cache for cluster server and operation method thereof

Country Status (1)

Country Link
CN (1) CN101478570A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9098397B2 (en) 2011-04-04 2015-08-04 International Business Machines Corporation Extending cache for an external storage system into individual servers
US9104553B2 (en) 2011-04-04 2015-08-11 International Business Machines Corporation Extending cache for an external storage system into individual servers

Similar Documents

Publication Publication Date Title
EP3920034B1 (en) Systems and methods for scalable and coherent memory devices
CN101405708B (en) Memory systems for automated computing machinery
KR101517258B1 (en) Apparatus, system, and method for cross-system proxy-based task offloading
JP4477906B2 (en) Storage system
US10402335B2 (en) Method and apparatus for persistently caching storage data in a page cache
EP2506150A1 (en) Method and system for entirety mutual access in multi-processor
CN102609215A (en) Data processing method and device
CN118363914B (en) Data processing method, solid state disk device and host
CN114816254A (en) Hard disk data access method, device, equipment and medium
CN116225995B (en) Bus system and chip
CN116303155A (en) Interleaving of heterogeneous memory targets
CN112579480B (en) Storage management method, storage management device and computer system
CN116521608A (en) Data migration method and computing device
KR20240122168A (en) Storage-integrated memory expander, computing system based compute express link, and operating method thereof
CN114490023B (en) ARM and FPGA-based high-energy physical computable storage device
CN100557584C (en) Memory controller and method for coupling network and memory
CN1229736C (en) Device for monitoring computer system resource and communicatin method of serial bus and said resource
CN118689806A (en) Persistent memory devices and controllers and systems
CN101478570A (en) Card insertion type high-speed cache for cluster server and operation method thereof
CN116186793B (en) RISC-V based security chip architecture and working method thereof
JP2008529134A (en) Low power semiconductor storage controller for mobile phones and other portable devices
CN101099137A (en) Optionally pushing i/o data into a processor's cache
CN118363924B (en) Configurable logic-based data transmission performance optimization hardware implementation and optimization flow
CN117971135B (en) Storage device access method and device, storage medium and electronic device
CN221768064U (en) A system for interconnecting PCIE and RapidIO protocols based on FPGA

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090708