CN102143218A - Web access cloud architecture and access method - Google Patents
- Publication number
- CN102143218A CN102143218A CN201110025590XA CN201110025590A CN102143218A CN 102143218 A CN102143218 A CN 102143218A CN 201110025590X A CN201110025590X A CN 201110025590XA CN 201110025590 A CN201110025590 A CN 201110025590A CN 102143218 A CN102143218 A CN 102143218A
- Authority
- CN
- China
- Prior art keywords
- engine
- tcp
- data
- offload
- Prior art date
- Legal status
- Granted
Classifications
- Information Transfer Between Computers (AREA)
Abstract
The invention relates to a Web access cloud architecture and an access method. The Web access cloud architecture comprises an IP (Internet Protocol) packet classification engine IPC, a DoS protection engine ADE, an SSL (Secure Sockets Layer)/TLS (Transport Layer Security) engine, a TCP (Transmission Control Protocol)/IP offload engine TOE, an HTTP (Hypertext Transfer Protocol) offload engine HOE, a file system offload engine FOE, a remote file system offload engine RFOE, a power management engine PEM, a content management engine CM, a Crypto Engine CE and a compression/decompression engine CDE, as well as control and storage components: a CPU (central processing unit), an on-chip bus and an on-chip memory. The on-chip bus connects the CPU, the on-chip memory, the power management engine PEM, the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the content management engine CM, the file system offload engine FOE and the SSL/TLS engine, and the CPU controls the components mounted on the on-chip bus, so that the processing efficiency and security of existing Web servers can be greatly improved while power consumption is reduced.
Description
Technical field
The present invention relates to the field of cloud computing, specifically to the problem of Web cloud access in cloud computing, and in particular to a Web access cloud architecture and access method.
Background art
The computer application pattern has broadly evolved from the centralized architecture based on mainframes, through the client/server distributed computing architecture centered on the PC and the service-oriented architecture built on it, to new architectures based on virtualization technology and the application characteristics of Web 2.0. This evolution of application patterns, technical architectures and implementation features forms the historical background of the development of cloud computing.
Cloud computing is a direct translation of the English term "Cloud Computing". There is as yet no uniform definition in the industry; large enterprises and research institutions have defined cloud computing from different perspectives and in different ways, and are actively developing cloud services. In 2009 Wikipedia defined cloud computing as a form of computing in which dynamically scalable and usually virtualized resources are provided over the Internet; users need not understand the details inside the cloud, possess expertise about it, or directly control the infrastructure. IBM regards cloud computing as a computing model in which applications, data and IT resources are provided to users as services over the network. The Berkeley cloud computing white paper holds that cloud computing comprises the application services delivered over the Internet together with the hardware and software in the data centers that provide those services.
Cloud computing describes both an emerging model of shared infrastructure and the applications and extended services built upon that infrastructure. The "cloud" is an enormous service network composed of parallel grids; it expands the computing capability of the cloud side through virtualization so that every device delivers its maximum usefulness. Data processing and storage are carried out by server clusters at the "cloud" end; these clusters consist of large numbers of commodity industry-standard servers and are managed by a large data processing center, which allocates computing resources according to customers' needs and thereby achieves an effect comparable to a supercomputer.
The Internet is a logically unified computer network composed of wide area networks, local area networks and individual hosts connected according to common communication protocols. The development of the Internet is a direct driving force of cloud computing: the capability of the computing and storage devices connected to the Internet has improved markedly, data resources are growing exponentially, and the service resources on the Internet are increasingly abundant. The classic usage environment of the Internet, the World Wide Web (Web), is no longer a simple content platform but is developing towards providing more powerful and richer user interaction and experience. The Internet and the Web have become the indispensable basic environment for building, operating, maintaining and using all kinds of distributed application systems, and are evolving into the largest computing platform humanity has ever had.
At present, the increase in the communication speed of Ethernet systems far exceeds the growth of processor speed, and processing speed cannot keep up with the data traffic in the network, which creates an I/O bottleneck. In most existing systems TCP/IP processing is implemented by the operating system and the network stack, occupying a large amount of host CPU resources, and the ever-increasing traffic load degrades the performance of the network stack, i.e. the TCP/IP processing speed is significantly lower than the speed of the network data flow; the working mode of a traditional network is shown in Fig. 1. To relieve the pressure on the CPU, TOE (TCP/IP offload engine) technology emerged. TOE technology extends the TCP/IP protocol stack, moving TCP/IP processing from the CPU to TOE hardware; this removes a large number of network I/O interrupts, repeated data copies and the protocol processing itself, greatly relieving the burden on the CPU.
Under the cloud computing background, the functions performed by the client are mainly Web access and Web display, with computing and storage coming from the cloud (its functions are even simpler than those of an NC). The server side mainly implements SaaS (Software as a Service), PaaS (Platform as a Service), IaaS (Infrastructure as a Service) and Web access. Providing users with consistent, transparent and convenient cloud computing access becomes the key; the model provided by the present invention can offer high-efficiency, low-power and low-cost Web cloud access.
Among domestic invention patents there is at present no patent directly addressing Web access services for cloud computing. Related patents concern ordinary Web servers and caching technology for cloud computing.
The patent application with application number 200510120570 is entitled an embedded mobile Web server. It consists of a processor unit built from an ARM chip, a Flash memory and a network interface circuit. The system service program is solidified in the processor unit, and the Flash memory stores the user's Web page data. After the device accesses the network, the system program obtains the IP address dynamically assigned to it by the ISP and sends it to a domain name management center; the domain name management center is a domain name management server with a fixed IP address, which establishes "domain name-IP" mappings for the dynamic IP addresses of all such devices on the network and implements redirection. After the viewer enters the domain name of a target website, it is resolved locally through the domain name management center and the target website is accessed. The server of that invention is implemented entirely in software running on an embedded processor unit; its performance is lower than a software implementation on a high-performance processor, and far lower than the dedicated hardware implementation proposed in this patent.
The patent with application number 200610169248 is entitled an online system and method for providing Web service access. The online system (100) of that invention provides access to the Web services offered by an online entity (110). The online system (100) comprises an online server (160) for collecting and storing online information (180) about the online entity (110) and providing the online information (180) to observers (170) of the online entity (110). The online server (160) further receives Web service invocation information (210) from the online entity (110); this invocation information (210) provides access to one or more Web services of the online entity (110). The online server (160) supplies the Web service invocation information (210) of the online entity (110) together with its online information (180) to the observers (170) of the online entity (110), so that the observers (170) can invoke the Web services of the online entity (110). That patent proposes a software implementation of an online service and does not involve the dedicated Web service hardware proposed in this patent.
The patent with application number 200810043744 is entitled a caching system and method for a cloud computing system. That invention discloses a caching system deployed on each node of a cloud computing system, comprising: a service module, which receives tasks sent by other nodes and records the kinds of tasks the local node can execute; a dispatch module, which assigns tasks received by the local node to the local node for execution or forwards them to other nodes; a cache policy module, which records the cache policies of various tasks, including whether to cache and the cache time; and a cache management module, which manages the cache of the local node, looks up tasks in the local cache and saves tasks into the local cache. That patent concerns the caching system of cloud computing network nodes and does not involve the dedicated Web service hardware proposed in this patent.
Besides the above patents, there are also commercial Web front-end acceleration products that improve the performance of Web systems using techniques such as TCP multiplexing, load balancing, caching and SSL acceleration. Traditionally, Web-service related functions were completed by several devices in series; the WinCom Switching Server (WCSS) changed this pattern by integrating Web service acceleration, load balancing, data-center caching, firewall and optional SSL functions into a single device. This product does not involve the dedicated Web service hardware proposed in this patent.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the prior art and to propose a high-performance Internet model implemented from the bottom up: a Web access cloud architecture and access method that realizes cloud-computing access to all kinds of general and dedicated computing and storage resources.
The object of the present invention is achieved as follows:
A Web access cloud architecture comprises an IP packet classification engine IPC, a DoS protection engine ADE, an SSL/TLS engine, a TCP/IP offload engine TOE, an HTTP offload engine HOE, a file system offload engine FOE, a remote file system offload engine RFOE, a power management engine PEM, a content management engine CM, a Crypto Engine CE and a compression/decompression engine CDE, and further comprises control and storage components: a CPU, an on-chip bus and an on-chip memory; it is characterized in that:
The on-chip bus connects the CPU, the on-chip memory, the power management engine PEM, the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the content management engine CM, the file system offload engine FOE and the SSL/TLS engine; the CPU controls the other components mounted on the on-chip bus; an I/O port of the IP packet classification engine IPC is also connected to an I/O port of the DoS protection engine ADE; another I/O port of the DoS protection engine ADE is connected to an I/O port of the TCP/IP offload engine TOE; another I/O port of the TCP/IP offload engine TOE is connected to an I/O port of the Crypto Engine CE; another I/O port of the Crypto Engine CE is connected to an I/O port of the HTTP offload engine HOE; and another I/O port of the HTTP offload engine HOE is connected to an I/O port of the compression/decompression engine CDE;
The MAC component is connected through its input/output ports to the on-chip bus and to the IP packet classification engine IPC; the I/O ports of the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE and the content management engine CM are connected in sequence; an I/O port of the content management engine CM is also connected to the remote file system offload engine RFOE; an output port of the content management engine CM is connected to the file system offload engine FOE; an output port of the file system offload engine FOE is connected to the HTTP offload engine HOE; and an input/output port of the file system offload engine FOE is connected to the local disk array.
In the structure of the described TCP/IP offload engine TOE, the On-Chip Buffer Memory buffers messages received from, or to be sent to, the 10G network; the IP Parser State Machine divides IP processing into receive and send parts, the receive part receiving raw messages from the 10G network interface and performing a preliminary parse, including checksum validation and pre-processing of the length and control information of each part of the message; the TCP Timer provides a hardware timing reference for the TCP connection process, and Mem Ctrl uses external high-speed memory to expand the cache space when more concurrent TCP connections must be supported; the TCP Parser State Machine uses a high-performance synchronous state machine to implement a TCP/IP stack conforming to the relevant standards; the Queue buffers received packets that have completed IP processing and await TCP processing, or outgoing packets that await further IP processing; the Queue Manager, controlled by the TCP Parser State Machine, assists in scheduling the large data sequences in the Queue. When TOE is implemented, the behavior of the server side is controllable: the TCP Maximum Segment Size plus the TCP header is kept smaller than the payload of a single IP packet, so that fragmentation at the IP layer is avoided on sending; the behavior of clients and intermediate routers is uncontrollable, so reassembly of fragmented IP packets must be implemented when receiving data.
An access method of the described Web access cloud architecture is characterized in that:
The Web access cloud system interacts with the outside world through a 10G Ethernet interface. Data entering from the Ethernet interface is first processed by the MAC component, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic and discards packets whose CRC32 check fails during transfer. The Web server performs address resolution so that the client can obtain its MAC address: incoming frames are classified into ARP frames and IP frames; the output IP frames are those that pass MAC filtering, i.e. if the destination MAC is the Web server's MAC address or the broadcast address, the frame passes the MAC and frame-type filter. ARP frames are parsed, ARP frames of non-IP protocols are discarded, and according to the obtained source MAC address an ARP reply frame is constructed and placed in the ARP reply transmit queue.
The IP packet classification engine IPC first inspects the IP frames output by the MAC component, extracts the carried source MAC/IP information, determines whether the IP frame is fragmented, and classifies the IP frames into TCP, UDP, ICMP and IGMP frames.
The input of the DoS protection engine ADE comprises IP packets classified by the IP packet classification engine IPC and TCP packets processed by the TCP/IP offload engine TOE; the DoS protection engine ADE provides DoS protection by filtering according to the access control list (ACL) or other policies, and outputs the legal IP or TCP packets that pass the filter.
The TCP/IP offload engine TOE implements part of the TCP/IP protocol stack. Its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packets are fragmented it completes IP reassembly to provide a complete TCP packet. Finally, for port 80 it outputs the TCP Control Block to the HTTP offload engine HOE, and for port 443 it outputs the payload to the SSL/TLS engine.
In the system, the HTTP offload engine HOE plays a dual role, implementing both client and server functions. As the server-side component it receives the TCP payload and TCP session number from the TCP/IP offload engine TOE, and after HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name.
The power management engine PEM collects the state information of the other processing units over the on-chip bus, updates the management policy, generates statistics for each processing unit and computes the dynamic power-management policy; it then decides the state-adjustment mode of each processing unit according to the object load and the power-management policy, outputs dynamic power regulation control signals or execution commands to the corresponding units, manages the global clock and local voltages, and stores the statistical logs of all processing units.
The Crypto Engine CE is responsible for the cryptographic computation tasks required by the SSL or TLS protocol, comprising certificate management, authentication, key exchange and data encryption/decryption. Certificate management handles the import and export of certificates; the authentication component takes a certificate as input, verifies it and outputs the verification result; the key-exchange component takes an encrypted key as input, decrypts it and outputs the key; the encryption/decryption component takes plaintext or ciphertext together with a key, performs encryption or decryption and outputs the corresponding ciphertext or plaintext.
The compression/decompression engine CDE takes data to be compressed or decompressed as input, compresses or decompresses it, and outputs the compressed or decompressed data.
The file system offload engine FOE supports high-speed, high-throughput transactional reads and writes. When reading data, it accesses the local disk correctly via the storage index table according to the attribute information of the requested data; when writing data, it stores the data in a concentrated or classified-and-compressed form according to the required data format, interacts with the disk controller and updates the index table.
When the requested data is not on the local disk array, the data-management component submits the query request to the remote file system offload engine RFOE; the RFOE generates a new remote HTTP request to perform the remote query, and the neighbor or remote data center returns the requested data to the local RFOE, which pushes the retrieved data directly to the host that originally requested it and, at the same time, writes the data to the local disk array via the local data-management component.
The present invention has the following beneficial effects:
In the overall structure, computation is separated from communication; at the communication level, data is separated from control, and dedicated components perform the data-plane processing at the communication level; at the data-processing level, data flows are given optional hardware-hardened processing with bidirectional pipelining, selected flexibly according to the specific system functions and performance requirements. The processing efficiency and security of existing Web servers can thus be greatly improved while power consumption is reduced.
Description of drawings
Fig. 1 is a schematic diagram of traditional Web access.
Fig. 2 is the structural diagram of the Web access cloud system.
Fig. 3 is the data flow diagram of the Web access cloud architecture.
Fig. 4 is the structural diagram of the TOE in the Web access cloud architecture.
Fig. 5 is the structural diagram of an HOE using polling.
Fig. 6 is the structural diagram of an HOE using an array of PEs.
Fig. 7 is the data flow diagram of the CM in the Web access cloud architecture.
Embodiment
As shown in Fig. 2, the present invention comprises an IP packet classification engine IPC 202, a DoS protection engine ADE 208, an SSL/TLS engine 213, a TCP/IP offload engine TOE 203, an HTTP offload engine HOE 204, a file system offload engine FOE 206, a remote file system offload engine RFOE 207, a power management engine PEM 212, a content management engine CM 205, a Crypto Engine CE 209 and a compression/decompression engine CDE 210, and also comprises control and storage components: a CPU 211, an on-chip bus 216 and an on-chip memory 214; it is characterized in that:
The on-chip bus 216 connects the CPU 211, the on-chip memory 214, the power management engine PEM 212, the IP packet classification engine IPC 202, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204, the content management engine CM 205, the file system offload engine FOE 206 and the SSL/TLS engine 213; the CPU 211 controls the other components mounted on the on-chip bus 216; an I/O port of the IP packet classification engine IPC 202 is also connected to an I/O port of the DoS protection engine ADE 208; another I/O port of the DoS protection engine ADE 208 is connected to an I/O port of the TCP/IP offload engine TOE 203; another I/O port of the TCP/IP offload engine TOE 203 is connected to an I/O port of the Crypto Engine CE 209; another I/O port of the Crypto Engine CE 209 is connected to an I/O port of the HTTP offload engine HOE 204; and another I/O port of the HTTP offload engine HOE 204 is connected to an I/O port of the compression/decompression engine CDE 210;
The MAC component 201 is connected through its input/output ports to the on-chip bus 216 and to the IP packet classification engine IPC 202; the I/O ports of the IP packet classification engine IPC 202, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204 and the content management engine CM 205 are connected in sequence; an I/O port of the content management engine CM 205 is also connected to the remote file system offload engine RFOE 207; an output port of the content management engine CM 205 is connected to the file system offload engine FOE 206; an output port of the file system offload engine FOE 206 is connected to the HTTP offload engine HOE 204; and an input/output port of the file system offload engine FOE 206 is connected to the local disk array 215.
In the structure of the described TCP/IP offload engine TOE 203, the On-Chip Buffer Memory buffers messages received from, or to be sent to, the 10G network; the IP Parser State Machine divides IP processing into receive and send parts, the receive part receiving raw messages from the 10G network interface and performing a preliminary parse, including checksum validation and pre-processing of the length and control information of each part of the message; the TCP Timer provides a hardware timing reference for the TCP connection process; when more concurrent TCP connections must be supported, Mem Ctrl uses external high-speed memory to expand the cache space; the TCP Parser State Machine uses a high-performance synchronous state machine to implement a TCP/IP stack conforming to the relevant standards; the Queue buffers received packets that have completed IP processing and await TCP processing, or outgoing packets that await further IP processing; the Queue Manager, controlled by the TCP Parser State Machine, assists in scheduling the large data sequences in the Queue. When TOE is implemented, the behavior of the server side is controllable: the TCP Maximum Segment Size plus the TCP header is kept smaller than the payload of a single IP packet, so that fragmentation at the IP layer is avoided on sending; the behavior of clients and intermediate routers is uncontrollable, so reassembly of fragmented IP packets must be implemented when receiving data.
An access method of the described Web access cloud architecture is characterized in that:
The Web access cloud system interacts with the outside world through a 10G Ethernet interface. Data entering from the Ethernet interface is first processed by the MAC component 201, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic and discards packets whose CRC32 check fails during transfer. The Web server performs address resolution so that the client can obtain its MAC address: incoming frames are classified into ARP frames and IP frames; the output IP frames are those that pass MAC filtering, i.e. if the destination MAC is the Web server's MAC address or the broadcast address, the frame passes the MAC and frame-type filter. ARP frames are parsed, ARP frames of non-IP protocols are discarded, and according to the obtained source MAC address an ARP reply frame is constructed and placed in the ARP reply transmit queue.
The IP packet classification engine IPC 202 first inspects the IP frames output by the MAC component 201, extracts the carried source MAC/IP information, determines whether the IP frame is fragmented, and classifies the IP frames into TCP, UDP, ICMP and IGMP frames.
The input of the DoS protection engine ADE 208 comprises IP packets classified by the IP packet classification engine IPC 202 and TCP packets processed by the TCP/IP offload engine TOE 203; the DoS protection engine ADE 208 provides DoS protection by filtering according to the access control list (ACL) or other policies, and outputs the legal IP or TCP packets that pass the filter.
The TCP/IP offload engine TOE 203 implements part of the TCP/IP protocol stack. Its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packets are fragmented it completes IP reassembly to provide a complete TCP packet. Finally, for port 80 it outputs the TCP Control Block to the HTTP offload engine HOE 204, and for port 443 it outputs the payload to the SSL/TLS engine 213.
In the system, the HTTP offload engine HOE 204 plays a dual role, implementing both client and server functions. As the server-side component it receives the TCP payload and TCP session number from the TCP/IP offload engine TOE 203, and after HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name.
The power management engine PEM 212 collects the state information of the other processing units over the on-chip bus 216, updates the management policy, generates statistics for each processing unit and computes the dynamic power-management policy; it then decides the state-adjustment mode of each processing unit according to the object load and the power-management policy, outputs dynamic power regulation control signals or execution commands to the corresponding units, manages the global clock and local voltages, and stores the statistical logs of all processing units.
The Crypto Engine CE 209 is responsible for the cryptographic computation tasks required by the SSL or TLS protocol, comprising certificate management, authentication, key exchange and data encryption/decryption. Certificate management handles the import and export of certificates; the authentication component takes a certificate as input, verifies it and outputs the verification result; the key-exchange component takes an encrypted key as input, decrypts it and outputs the key; the encryption/decryption component takes plaintext or ciphertext together with a key, performs encryption or decryption and outputs the corresponding ciphertext or plaintext.
The compression/decompression engine CDE 210 takes data to be compressed or decompressed as input, compresses or decompresses it, and outputs the compressed or decompressed data.
The file system offload engine FOE 206 supports high-speed, high-throughput transactional reads and writes. When reading data, it accesses the local disk correctly via the storage index table according to the attribute information of the requested data; when writing data, it stores the data in a concentrated or classified-and-compressed form according to the required data format, interacts with the disk controller and updates the index table.
When the requested data is not on the local disk array, the data-management component submits the query request to the remote file system offload engine RFOE 207; the RFOE generates a new remote HTTP request to perform the remote query, and the neighbor or remote data center returns the requested data to the local RFOE, which pushes the retrieved data directly to the host that originally requested it and, at the same time, writes the data to the local disk array via the local data-management component.
This architecture is composed of a number of dedicated engines; the function of each engine is carried out directly in hardware, and all engines are uniformly mounted on the system CPU's processor bus and accept unified control and management by the CPU. Operation is event-driven; within one engine, if several requests arrive at the same time, they are handled by polling. The complexity of the interconnection depends on the complexity of the events. To keep the messages transferred between two engines simple enough, an inter-engine message can resemble an interrupt signal, which gives better extensibility; otherwise a message format must be defined and transferred over a more complex bus. For simplicity, the execution of a task can also be described by writing I/O registers and then triggered by an interrupt.
The Web access cloud architecture specifically comprises eleven dedicated components, as shown in Fig. 2: the IP packet classification engine IPC, the DoS protection engine ADE, the SSL/TLS engine, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the file system offload engine FOE, the remote file system offload engine RFOE, the power management engine PEM, the content management engine CM, the Crypto Engine CE, and the compression/decompression engine CDE.
Among them, the IPC implements multiple IP stacks over a single MAC and link layer and classifies and forwards packets of multiple service types according to different service policies; the ADE provides DoS protection by filtering according to ACLs or other policies; the SSL/TLS processing module completes key exchange and bridges TCP and HTTP to realize data encryption and decryption (for Web services, SSL/TLS connections are normally listened for on port 443); the TOE implements the TCP/IP protocol stack with reconfigurable hardware, removing a large number of network I/O interrupts, repeated data copies and protocol processing and thus relieving the burden on the CPU; the HOE carries a dual identity in the system structure, being both server and client: as a server it completes sessions with TCP and parses and encapsulates HTTP, and as a client it carries out sessions with its HTTP request data, URL and the local data-management component to determine whether the requested HTTP data resides on the local disk array or at a remote data center; the FOE and RFOE provide high-speed transactional read and write functions for high-throughput data; the PEM monitors the working state of each operational module, generates statistics and, according to those statistics, manages the power consumption of each engine; the CM performs the search and maintenance of data, realizing fast location of requested objects and state management of stored objects; the CE is responsible for the cryptographic computation tasks required by the SSL (TLS) protocol, mainly including certificate management, authentication, key exchange and data encryption/decryption; the CDE performs fast compression and restoration of service-stream data, trading computation for storage and communication.
In this structure, the on-chip bus is used for data interaction between the CPU and the various on-chip peripherals and memory; each peripheral is mounted on the on-chip bus through a unified interface. The on-chip bus has a high operating frequency, wide data width, large addressing space and a complete interrupt mechanism, and can be shared by several on-chip peripherals as a multi-point bus. The system exchanges data with the outside world through the 10-gigabit Ethernet port, and the engines exchange data with one another through API interfaces.
To improve the flexibility of the system, each engine in this structure is internally composed of a partially reconfigurable computing array. Viewed at the chip level, all the engines together with the management CPU constitute a Heterogeneous Reconfigurable Computing Array (HRCA).
Energy consumption control is a requirement on the whole system; to achieve this goal, every engine in the system provides basic performance-monitoring and power-management functions and exposes an access interface to the PEM engine.
This architecture has the following features: in the overall structure, computation is separated from communication; at the communication level, data is separated from control, and nine dedicated components perform data-plane processing at the communication level; at the data-processing level, data flows are given optional hardware-hardened processing with bidirectional pipelining, selected flexibly according to the specific system functions and performance requirements. The processing efficiency and security of existing Web servers can thus be greatly improved while power consumption is reduced.
Fig. 2 is the structural diagram of the Web access cloud system. The system comprises eleven dedicated processing components: the IP packet classification engine IPC 202, the DoS protection engine ADE 208, the SSL/TLS engine 213, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204, the file system offload engine FOE 206, the remote file system offload engine RFOE 207, the power management engine PEM 212, the content management engine CM 205, the Crypto Engine CE 209 and the compression/decompression engine CDE 210. Besides the processing components there are also control and storage components, such as the CPU 211, the on-chip bus 216 and the on-chip memory 214. The on-chip bus 216 connects the CPU 211, the on-chip memory, the PEM 212 and the main processing components IPC 202, TOE 203, HOE 204, CM 205, FOE 206 and SSL/TLS engine 213; the CPU 211 controls the other components mounted on the bus; the ADE 208 is connected to the IPC 202 and the TOE 203; the CE 209 is connected to the TOE 203 and the HOE 204; and the CDE 210 is connected to the HOE 204.
The system interacts with the outside world through a 10G Ethernet interface. Data entering from the Ethernet interface is first processed by the MAC component 201, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic and discards packets whose CRC32 check fails during transfer. Because the Web server must perform address resolution so that the client can obtain its MAC address, incoming frames must be classified into ARP frames and IP frames. The output IP frames are those that pass MAC filtering: if the destination MAC is the Web server's MAC address or the broadcast address, the frame passes the MAC and frame-type filter. ARP frames are parsed, ARP frames of non-IP protocols are discarded, and according to the obtained source MAC address an ARP reply frame is constructed and placed in the ARP reply transmit queue; a minimal software sketch of this ARP handling step is given below.
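The ARP-handling step just described can be pictured with a small C sketch: classify an incoming Ethernet frame, discard non-IP ARP frames, and build a reply when the request targets the server. The server MAC/IP constants and the function name arp_respond are illustrative assumptions, not part of the hardware design itself.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons/ntohs */

#define ETH_TYPE_ARP   0x0806
#define ARP_OP_REQUEST 1
#define ARP_OP_REPLY   2

struct eth_hdr { uint8_t dst[6], src[6]; uint16_t type; } __attribute__((packed));
struct arp_pkt {
    uint16_t htype, ptype;
    uint8_t  hlen, plen;
    uint16_t oper;
    uint8_t  sha[6], spa[4], tha[6], tpa[4];
} __attribute__((packed));

static const uint8_t srv_mac[6] = {0x02,0x00,0x00,0x00,0x00,0x01}; /* assumed server MAC */
static const uint8_t srv_ip[4]  = {192,168,1,10};                  /* assumed server IP  */

/* If 'frame' is an IPv4 ARP request asking for srv_ip, build the ARP reply frame
 * in 'out' and return its length; otherwise return 0 (the frame is dropped or
 * handled elsewhere by the surrounding classifier).                            */
static size_t arp_respond(const uint8_t *frame, size_t len, uint8_t *out)
{
    if (len < sizeof(struct eth_hdr) + sizeof(struct arp_pkt)) return 0;
    const struct eth_hdr *eh = (const struct eth_hdr *)frame;
    const struct arp_pkt *ar = (const struct arp_pkt *)(frame + sizeof *eh);

    if (ntohs(eh->type) != ETH_TYPE_ARP) return 0;             /* not an ARP frame      */
    if (ntohs(ar->ptype) != 0x0800 || ar->plen != 4) return 0; /* non-IP ARP: discarded */
    if (ntohs(ar->oper) != ARP_OP_REQUEST) return 0;
    if (memcmp(ar->tpa, srv_ip, 4) != 0) return 0;             /* not addressed to us   */

    struct eth_hdr *oe = (struct eth_hdr *)out;
    struct arp_pkt *oa = (struct arp_pkt *)(out + sizeof *oe);
    memcpy(oe->dst, eh->src, 6);                 /* reply goes back to the requester */
    memcpy(oe->src, srv_mac, 6);
    oe->type = htons(ETH_TYPE_ARP);
    *oa = *ar;
    oa->oper = htons(ARP_OP_REPLY);
    memcpy(oa->sha, srv_mac, 6); memcpy(oa->spa, srv_ip, 4);
    memcpy(oa->tha, ar->sha, 6); memcpy(oa->tpa, ar->spa, 4);
    return sizeof *oe + sizeof *oa;              /* queue this frame for transmission */
}

int main(void)
{
    uint8_t req[sizeof(struct eth_hdr) + sizeof(struct arp_pkt)] = {0}, rep[sizeof req];
    struct eth_hdr *eh = (struct eth_hdr *)req;
    struct arp_pkt *ar = (struct arp_pkt *)(req + sizeof *eh);
    memset(eh->dst, 0xff, 6);                    /* broadcast ARP request */
    eh->type = htons(ETH_TYPE_ARP);
    ar->htype = htons(1); ar->ptype = htons(0x0800);
    ar->hlen = 6; ar->plen = 4; ar->oper = htons(ARP_OP_REQUEST);
    memcpy(ar->tpa, srv_ip, 4);
    printf("reply length: %zu\n", arp_respond(req, sizeof req, rep));
    return 0;
}
```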
The IPC engine 202 first inspects the IP frames output by the MAC component 201, extracts the carried source MAC/IP information, determines whether the IP frame is fragmented, and classifies IP frames into TCP, UDP, ICMP and IGMP frames. Because the frames arriving from the user side are small, for example GET requests and ARP frames, this module does not perform IP reassembly. The classification step is sketched below.
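A minimal sketch of this classification step, assuming standard IPv4 header offsets and the usual IANA protocol numbers; the class names are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>

enum ipc_class { IPC_TCP, IPC_UDP, IPC_ICMP, IPC_IGMP, IPC_OTHER };

/* Classify an IPv4 header by its protocol field and flag fragmented datagrams. */
static enum ipc_class ipc_classify(const uint8_t *ip, int *is_fragment)
{
    uint16_t frag = (uint16_t)((ip[6] << 8) | ip[7]);
    /* MF flag set or non-zero offset => part of a fragmented datagram */
    *is_fragment = ((frag & 0x2000) != 0) || ((frag & 0x1FFF) != 0);
    switch (ip[9]) {                 /* IPv4 protocol field */
    case 6:  return IPC_TCP;
    case 17: return IPC_UDP;
    case 1:  return IPC_ICMP;
    case 2:  return IPC_IGMP;
    default: return IPC_OTHER;
    }
}

int main(void)
{
    uint8_t hdr[20] = {0x45, 0};     /* minimal IPv4 header, protocol set below */
    hdr[9] = 6;                      /* TCP */
    int frag;
    printf("class=%d fragment=%d\n", ipc_classify(hdr, &frag), frag);
    return 0;
}
```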
The input of the ADE 208 consists of two parts: IP packets classified by the IPC engine 202 and TCP packets processed by the TOE 203. The ADE 208 provides DoS protection by filtering according to the access control list (ACL) or other policies, and outputs the legal IP or TCP packets that pass the filter.
The TOE 203 implements part of the TCP/IP protocol stack. Its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packets are fragmented it completes IP reassembly to provide a complete TCP packet. Finally, for port 80 it outputs the TCB (TCP Control Block) to the HOE 204, and for port 443 it outputs the payload to the SSL/TLS engine 213; this hand-off is sketched below.
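The port-based hand-off can be sketched as follows; hoe_submit and ssl_submit are assumed stand-ins for the hardware interfaces to the HOE and the SSL/TLS engine, not real driver APIs.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct tcb { uint32_t session_id; uint16_t dst_port; /* ... connection state ... */ };

/* Assumed hand-off hooks; in hardware these are the HOE and SSL/TLS interfaces. */
static void hoe_submit(const struct tcb *t, const uint8_t *p, size_t n)
{ printf("HOE: session %u, %zu bytes (TCB + payload)\n", t->session_id, n); (void)p; }
static void ssl_submit(const struct tcb *t, const uint8_t *p, size_t n)
{ printf("SSL/TLS: session %u, %zu bytes of payload\n", t->session_id, n); (void)p; }

static void toe_dispatch(const struct tcb *t, const uint8_t *payload, size_t n)
{
    if (t->dst_port == 80)       hoe_submit(t, payload, n);  /* plain HTTP        */
    else if (t->dst_port == 443) ssl_submit(t, payload, n);  /* HTTPS to SSL/TLS  */
    /* other ports lie outside the Web access path */
}

int main(void)
{
    uint8_t buf[4] = "GET";
    struct tcb http  = { 1, 80 };
    struct tcb https = { 2, 443 };
    toe_dispatch(&http,  buf, sizeof buf);
    toe_dispatch(&https, buf, sizeof buf);
    return 0;
}
```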
In the system, the HOE 204 plays a dual role, implementing both client and server functions. As the server-side component it receives the TCP payload and TCP session number from the TOE 203, and after HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name. A sketch of the server-side parse follows.
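A hedged sketch of the server-side parse: extracting the method, URL and Host (domain name) from an HTTP/1.x request buffer. The function name and buffer sizes are illustrative, not the engine's actual logic.

```c
#include <stdio.h>
#include <string.h>

/* Parse "METHOD URL HTTP/1.x" plus the Host header out of a request buffer. */
static int parse_http_request(const char *req, char *method, char *url, char *host)
{
    if (sscanf(req, "%7s %255s", method, url) != 2) return -1;  /* request line */

    const char *h = strstr(req, "\r\nHost:");
    if (!h) return -1;
    h += 7;                                  /* skip "\r\nHost:" */
    while (*h == ' ') h++;
    size_t i = 0;
    while (h[i] && h[i] != '\r' && i < 255) { host[i] = h[i]; i++; }
    host[i] = '\0';
    return 0;
}

int main(void)
{
    char m[8], u[256], d[256];
    const char *req = "GET /index.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n";
    if (parse_http_request(req, m, u, d) == 0)
        printf("method=%s url=%s host=%s\n", m, u, d);   /* domain name + URL out */
    return 0;
}
```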
The PEM 212 collects the state information of the other dedicated processing components over the bus, updates the management policy, generates statistics for each processing component and computes the dynamic power-management policy; it then decides the state-adjustment mode of each processing component according to the state (load) of the object and the power-management policy, outputs dynamic power regulation control signals or execution commands to the corresponding components, manages the global clock and local voltages, and stores the statistical information (logs) of all processing components. A simple policy sketch is shown below.
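One way to picture the PEM decision step is the toy policy below: per-engine utilisation statistics drive a three-level clock/voltage choice. The thresholds, engine names and utilisation figures are assumptions for illustration only.

```c
#include <stdio.h>

enum pstate { P_FULL, P_HALF, P_SLEEP };

struct engine_stat { const char *name; double utilisation; /* 0.0 .. 1.0 */ };

/* Decide a power state from the measured utilisation (thresholds assumed). */
static enum pstate pem_decide(double utilisation)
{
    if (utilisation > 0.60) return P_FULL;    /* keep full clock/voltage        */
    if (utilisation > 0.10) return P_HALF;    /* scale the local clock down     */
    return P_SLEEP;                           /* gate the local clock           */
}

int main(void)
{
    struct engine_stat stats[] = { {"TOE", 0.82}, {"HOE", 0.35}, {"CDE", 0.04} };
    for (int i = 0; i < 3; i++)
        printf("%s -> pstate %d\n", stats[i].name, pem_decide(stats[i].utilisation));
    return 0;
}
```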
The CE 209 is responsible for the cryptographic computation tasks required by the SSL (TLS) protocol, mainly including certificate management, authentication, key exchange and data encryption/decryption. Certificate management handles the import, update and export of certificates; the authentication component takes a certificate as input, verifies it and outputs the verification result; the key-exchange component takes an encrypted key as input, decrypts it and outputs the key; the encryption/decryption component takes plaintext or ciphertext together with a key, performs encryption or decryption and outputs the corresponding ciphertext or plaintext.
The CDE 210 takes data to be compressed or decompressed as input, performs compression or decompression on the input data, and outputs the compressed or decompressed data; a small software round-trip illustrating the same transformation follows.
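In software the same transformation can be demonstrated with a zlib round-trip; the hardware CDE performs the equivalent compression and restoration. The file name and sample string are illustrative.

```c
/* Build with: cc cde_demo.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    const char *src = "Web access cloud architecture - compress/decompress demo";
    uLong slen = (uLong)strlen(src) + 1;

    Bytef comp[256];
    uLongf clen = sizeof comp;
    if (compress(comp, &clen, (const Bytef *)src, slen) != Z_OK) return 1;

    Bytef out[256];
    uLongf olen = sizeof out;
    if (uncompress(out, &olen, comp, clen) != Z_OK) return 1;

    printf("original %lu bytes, compressed %lu bytes, restored: %s\n",
           (unsigned long)slen, (unsigned long)clen, (const char *)out);
    return 0;
}
```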
Fig. 3 is the data flow diagram of the Web access cloud architecture. The work of a Web server is mainly I/O-intensive rather than compute-intensive; the data flow diagram mainly reflects how input and output data are processed within the system, and the work of the PEM and the CPU is not shown. For simplicity, accesses to HTTP session information and TCP connection information are also shown in the data flow diagram; this information is kept in the on-chip memory or in expanded external memory. The different cases are explained below.
Case (a) describes the situation in which the requested data is fetched from disk:
1: A packet containing an HTTP request arrives at the host network card.
2: The IP packet is handed to the TOE.
3: The HTTP request is handed to the HOE.
4: The HOE parses out the requested resource and hands it to the FOE.
5: The FOE issues a read request to the disk.
6: The data is returned to the FOE.
7: The FOE deposits the data in main memory.
8: The FOE tells the HOE where in memory the requested file can be accessed.
9: The HOE generates the corresponding HTTP response header and hands the file information and header to the TOE.
10: The TOE requests the specified length of data of the given file from the memory controller.
11: The specified data is returned to the TOE.
12: For the first segment, the TOE takes the data from memory and forms a TCP/IP packet together with the HTTP header; for the rest, the data is used directly as the HTTP payload to form TCP/IP packets, which are sent to the MAC.
13: The packets are sent out onto the network.
In case (b), the requested file already resides in memory, so the FOE does not need to read from the disk; in steps 5 and 6 it obtains the file information from memory instead.
In case (c), HTTP submits a POST or PUT request. The HOE parses out the data and hands it to the FOE to be stored on disk; the storage medium can of course be something else, for example another machine reached over the network.
1: A packet containing an HTTP request arrives at the host network card.
2: The IP packet is handed to the TOE.
3: The HTTP request is handed to the HOE.
4: The HOE parses the request; for POST or PUT it hands the uploaded file name, including its path, together with the data to the FOE.
5: The received data is deposited in memory.
1-5: These steps repeat until the file transfer is finished.
6-7: The FOE repeats its writes until the file has been stored on disk.
8: The HOE is notified that the upload request is finished.
9: The response generated by the HOE is handed to the TOE.
The remaining steps are the same as in cases (a) and (b).
Fig. 4 is the structural diagram of the TOE in the Web access cloud architecture. The On-Chip Buffer Memory buffers messages received from, or to be sent to, the 10G network. The IP Parser State Machine divides IP processing into receive and send parts; the receive part receives raw messages from the 10G network interface and performs a preliminary parse, including checksum validation and pre-processing of the length and control information of each part of the message. The TCP Timer provides a hardware timing reference for the TCP connection process. When more concurrent TCP connections must be supported, Mem Ctrl can use external high-speed memory to expand the cache space. The TCP Parser State Machine uses a high-performance synchronous state machine to implement a TCP/IP stack conforming to the relevant standards. The Queue buffers received packets that have completed IP processing and await TCP processing, or outgoing packets that await further IP processing. The Queue Manager, controlled by the TCP Parser State Machine, assists in scheduling the large data sequences in the Queue. When TOE is implemented, the behavior of the server side is controllable and can be constrained: the TCP MSS (Maximum Segment Size) plus the TCP header is kept smaller than the payload of a single IP packet, which avoids having to implement fragmentation at the IP layer on sending. The behavior of clients and intermediate routers is uncontrollable, however, so reassembly of fragmented IP packets must be implemented when receiving data. The arithmetic behind the send-side rule is sketched below.
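The send-side rule can be checked with simple arithmetic, assuming a standard 1500-byte Ethernet MTU and option-less IP and TCP headers:

```c
#include <stdio.h>

int main(void)
{
    const int mtu     = 1500;   /* Ethernet payload available to IP  */
    const int ip_hdr  = 20;     /* IPv4 header without options       */
    const int tcp_hdr = 20;     /* TCP header without options        */
    const int mss     = mtu - ip_hdr - tcp_hdr;   /* 1460 bytes      */

    printf("MSS = %d; MSS + TCP header = %d <= IP payload %d: no fragmentation\n",
           mss, mss + tcp_hdr, mtu - ip_hdr);
    return 0;
}
```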
Considering the characteristics of the HTTP protocol and the role of Web crawler that the HOE also plays, the HOE adopts a flexible, programmable implementation. One scheme uses a single PE with sufficient processing capability that polls, as shown in Fig. 5; another uses an array composed of several simple PEs, as shown in Fig. 6. A PE (Processing Element) is a general-purpose processor with enhanced characteristics, for example for string comparison, since parsing the HTTP protocol is mainly string manipulation. The PE executes a pre-written program stored in ROM.
In Fig. 5, a session can handle only one request at a time; once all requests of a session have been processed, the session is removed from the snoop queue, and a new request rejoins the snoop queue before the requests in that queue are serviced again. Processing a specific request means parsing the requested resource, including services and files, from the HTTP header, obtaining the requested resource, and generating the response from the result. Processing a specific response means generating the header according to the response type and handing it to the next processing unit, i.e. forwarding it to the TOE. The polling scheme can be pictured with the short sketch below.
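A toy model of the single-PE polling scheme, under the assumption that each session exposes a count of pending requests; the data structures are invented for illustration.

```c
#include <stdio.h>

#define MAX_SESSIONS 4

struct session { int id; int pending; };   /* pending = outstanding requests */

int main(void)
{
    struct session q[MAX_SESSIONS] = { {1, 2}, {2, 1}, {3, 3}, {4, 0} };
    int live = 1;
    while (live) {
        live = 0;
        for (int i = 0; i < MAX_SESSIONS; i++) {
            if (q[i].pending == 0) continue;        /* session has left the queue */
            printf("PE handles one request of session %d\n", q[i].id);
            q[i].pending--;                          /* one request per visit     */
            if (q[i].pending) live = 1;
        }
    }
    return 0;
}
```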
Fig. 6 takes dynamic Web pages and extensibility into account. As the tasks become more complicated, a single PE could become too complex or lack processing capability, and its energy consumption could become too high. From the viewpoint of efficiency management, an array composed of several PEs is therefore used to replace the original single PE.
The file system offload engine FOE 206 supports high-speed, high-throughput transactional reads and writes. When reading data, it accesses the local disk correctly via the storage index table according to the attribute information of the requested data; when writing data, it stores the data in a concentrated or classified-and-compressed form according to the required data format, interacts with the disk controller and updates the index table.
When the requested data is not on the local disk array, the data-management component submits the query request to the remote file system offload engine RFOE 207; the RFOE generates a new remote HTTP request to perform the remote query, and the neighbor or remote data center returns the requested data to the local RFOE, which pushes the retrieved data directly to the host that originally requested it and, at the same time, writes the data to the local disk array via the local data-management component. A sketch of this local-lookup-with-remote-fallback decision is given below.
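The decision path can be sketched as a local index lookup with a remote fallback; the index contents and the rfoe_fetch hook are illustrative stand-ins for the FOE storage index and the RFOE request path.

```c
#include <stdio.h>
#include <string.h>

struct index_entry { const char *url; const char *local_path; };

static struct index_entry idx[] = {
    { "/index.html", "/disk0/www/index.html" },
    { "/logo.png",   "/disk0/www/logo.png"   },
};

static const char *foe_lookup(const char *url)
{
    for (size_t i = 0; i < sizeof idx / sizeof idx[0]; i++)
        if (strcmp(idx[i].url, url) == 0) return idx[i].local_path;
    return NULL;                              /* not on the local disk array */
}

static void rfoe_fetch(const char *url)       /* assumed remote-fetch hook */
{
    printf("RFOE: HTTP GET %s from neighbour/remote data centre,\n"
           "      push result to the requesting host and update the local index\n", url);
}

int main(void)
{
    const char *wanted[] = { "/index.html", "/missing.bin" };
    for (int i = 0; i < 2; i++) {
        const char *p = foe_lookup(wanted[i]);
        if (p) printf("FOE: read %s from %s\n", wanted[i], p);
        else   rfoe_fetch(wanted[i]);
    }
    return 0;
}
```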
Fig. 7 is the data flow diagram of the CM in the Web access cloud architecture. The CM performs data management and search, realizing fast location of requested objects and state management of stored objects. Its input mainly comprises policy information and the URL of the requested object. The policies mainly cover: the content to be managed and its validity period (the requirements differ, for example, for dynamic versus static content and for text, pictures, video and audio); the content-update rule, active or passive; and the capacity limit, i.e. how much data needs to be cached. The output mainly comprises two parts: instructions to the FOE, such as operations on objects (store, delete, path), and instructions to the RFOE, such as the destination address and URL of the object. The CM workflow is as follows (a small bookkeeping sketch follows the list):
1. At system initialization, parse the configuration policy and set the constraints; adjust the constraints dynamically while the system is running.
2. Maintain the locally stored objects:
When a new object arrives, decide according to the policy whether to store it locally and where to store it;
According to the object-lifetime policy, invalidate objects whose lifetime has expired.
3. Maintain the database of locally stored objects:
When a new object arrives, add a record and establish a mapping to its storage path;
When an object is invalidated, delete its record.
4. Search for the requested object according to the requested URL:
If it is local, return the storage path of the object;
If it is not local, return, according to the policy, the destination address and URL from which to obtain it remotely.
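A hedged bookkeeping sketch of the workflow above: a URL-to-path table with per-object lifetimes, where expired entries are invalidated, hits return the local path and misses fall through to the remote fetch. The table size, TTL and field names are assumptions.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

struct cm_entry { char url[64]; char path[64]; time_t expires; int valid; };

static struct cm_entry table[8];

/* Record a new object with its local path and lifetime (step 3). */
static void cm_store(const char *url, const char *path, int ttl_seconds)
{
    for (int i = 0; i < 8; i++) {
        if (table[i].valid) continue;
        snprintf(table[i].url, sizeof table[i].url, "%s", url);
        snprintf(table[i].path, sizeof table[i].path, "%s", path);
        table[i].expires = time(NULL) + ttl_seconds;
        table[i].valid = 1;
        return;
    }
}

/* Look up a URL (step 4); expired entries are invalidated on the way (step 2). */
static const char *cm_lookup(const char *url)
{
    time_t now = time(NULL);
    for (int i = 0; i < 8; i++) {
        if (!table[i].valid) continue;
        if (now > table[i].expires) { table[i].valid = 0; continue; } /* lifetime expired */
        if (strcmp(table[i].url, url) == 0) return table[i].path;     /* local hit        */
    }
    return NULL;                           /* miss: CM hands the URL to the RFOE */
}

int main(void)
{
    cm_store("/index.html", "/disk0/www/index.html", 300);
    const char *p = cm_lookup("/index.html");
    printf("%s\n", p ? p : "fetch from remote");
    return 0;
}
```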
Claims (3)
1. A Web access cloud architecture, comprising an IP packet classification engine IPC (202), a DoS protection engine ADE (208), an SSL/TLS engine (213), a TCP/IP offload engine TOE (203), an HTTP offload engine HOE (204), a file system offload engine FOE (206), a remote file system offload engine RFOE (207), a power management engine PEM (212), a content management engine CM (205), a Crypto Engine CE (209) and a compression/decompression engine CDE (210), and further comprising control and storage components: a CPU (211), an on-chip bus (216) and an on-chip memory (214), characterized in that:
the on-chip bus (216) connects the CPU (211), the on-chip memory (214), the power management engine PEM (212), the IP packet classification engine IPC (202), the TCP/IP offload engine TOE (203), the HTTP offload engine HOE (204), the content management engine CM (205), the file system offload engine FOE (206) and the SSL/TLS engine (213); the CPU (211) controls the other components mounted on the on-chip bus (216); an I/O port of the IP packet classification engine IPC (202) is also connected to an I/O port of the DoS protection engine ADE (208); another I/O port of the DoS protection engine ADE (208) is connected to an I/O port of the TCP/IP offload engine TOE (203); another I/O port of the TCP/IP offload engine TOE (203) is connected to an I/O port of the Crypto Engine CE (209); another I/O port of the Crypto Engine CE (209) is connected to an I/O port of the HTTP offload engine HOE (204); and another I/O port of the HTTP offload engine HOE (204) is connected to an I/O port of the compression/decompression engine CDE (210);
the MAC component (201) is connected through its input/output ports to the on-chip bus (216) and to the IP packet classification engine IPC (202); the I/O ports of the IP packet classification engine IPC (202), the TCP/IP offload engine TOE (203), the HTTP offload engine HOE (204) and the content management engine CM (205) are connected in sequence; an I/O port of the content management engine CM (205) is also connected to the remote file system offload engine RFOE (207); an output port of the content management engine CM (205) is connected to the file system offload engine FOE (206); an output port of the file system offload engine FOE (206) is connected to the HTTP offload engine HOE (204); and an input/output port of the file system offload engine FOE (206) is connected to the local disk array (215).
2. The Web access cloud architecture according to claim 1, characterized in that: in the structure of the TCP/IP offload engine TOE (203), the on-chip buffer memory buffers packets received from, or to be sent to, the 10G network; the IP Parser State Machine handles IP processing in a receive part and a transmit part, the receive part taking raw packets from the 10G interface and performing preliminary parsing, including checksum validation of the packet and preprocessing of the length control information of each part of the packet; the TCP timer provides the hardware timing reference for the TCP connection process, and the memory controller (Mem Ctrl) uses an external high-speed memory to expand the buffer space when a larger number of concurrent TCP connections must be supported; the TCP Parser State Machine is a high-performance synchronous state machine implementing a standards-compliant TCP/IP stack; the Queue buffers received packets that still require TCP processing after IP processing, as well as outgoing packets that require further IP processing; the Queue Manager, controlled by the TCP Parser State Machine, assists in scheduling the large data sequences in the Queue; when the TOE is implemented, the server-side behavior is controllable: the TCP Maximum Segment Size plus the TCP header is kept smaller than the payload of one IP packet, so that fragmentation at the IP layer is avoided when sending; the behavior of clients and intermediate routers is not controllable, so fragmented IP packets are reassembled when data are received.
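Claim 2's fragmentation-avoidance rule (keep the TCP Maximum Segment Size plus the TCP header within one IP payload) amounts to the familiar MSS calculation sketched below. The 1500-byte MTU and the 20-byte header sizes are typical assumptions, not figures taken from the patent.

```c
/* Sketch of the server-side MSS rule stated in claim 2: keep the MSS plus
 * the TCP header within the payload of a single IP packet so the IP layer
 * never has to fragment outgoing segments. */
#include <stdint.h>
#include <stdio.h>

#define IP_HEADER_LEN  20u  /* IPv4 header without options (assumption) */
#define TCP_HEADER_LEN 20u  /* TCP header without options (assumption)  */

/* Largest MSS that, together with its TCP header, still fits in one IP payload. */
static uint16_t max_unfragmented_mss(uint16_t mtu)
{
    return (uint16_t)(mtu - IP_HEADER_LEN - TCP_HEADER_LEN);
}

int main(void)
{
    printf("MSS for a 1500-byte MTU: %u\n",
           (unsigned)max_unfragmented_mss(1500));  /* prints 1460 */
    return 0;
}
```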
3. An access method for the Web access cloud architecture according to claim 1, characterized in that:
the Web access cloud system exchanges data with the outside world through a 10G Ethernet interface; data entering from the Ethernet interface are first processed by the MAC component (201), which converts the Ethernet data into a GMII-like 125 MHz signal to simplify the design of the subsequent logic, discards packets whose CRC32 check fails during transfer, and completes address resolution for the WEB server so that the user side can obtain its MAC address; the received frames are classified into ARP frames and IP frames; the output IP frames are the IP frames that pass MAC filtering: a frame is passed through MAC and frame-type filtering if its destination MAC is the MAC address of the WEB server or the broadcast address; the ARP frames are parsed, ARP frames not carrying the IP protocol are discarded, and an ARP reply frame is constructed from the obtained source MAC address and delivered into the ARP reply transmit queue;
the IP packet classification engine IPC (202) first inspects the IP frames output by the MAC component (201); the carried information includes the source MAC/IP and whether the IP frame is fragmented, and on this basis the IP frame is classified into TCP, UDP, ICMP and IGMP frames;
the input of the DoS protection engine ADE (208) comprises the IP packets classified by the IP packet classification engine IPC (202) and the TCP packets processed by the TCP/IP offload engine TOE (203); the DoS protection engine ADE (208) provides protection against DoS-class attacks by filtering according to an access control list (ACL) or other policies, and outputs the legal IP or TCP packets that pass the filter;
the TCP/IP offload engine TOE (203) implements part of the TCP/IP protocol stack; its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packet is fragmented it completes IP reassembly so as to provide a complete TCP packet; finally, for port 80 it outputs the TCP control block to the HTTP offload engine HOE (204), and for port 443 it outputs the payload to the SSL/TLS engine (213);
in the system, the HTTP offload engine HOE (204) plays a dual role, performing both server-side and client-side functions: as the server-side function component it receives the TCP payload and TCP session number from the TCP/IP offload engine TOE (203), parses the HTTP packet, confirms the request, and outputs the domain name and URL information; as the client-side function component it accepts HTTP data, restores the HTTP data into an object, and outputs the requested object and its object name;
the power management engine PEM (212) collects the status information of the other processing components over the on-chip bus (216) and updates the management policy; it generates statistics for each processing component, performs the dynamic power-management policy calculation, then decides the status adjustment of each processing component according to the object load and the power-management policy, outputs dynamic power-regulation control signals or execution instructions to the corresponding components, manages the global clock and the local voltages, and stores the statistical logs of all processing components;
the crypto engine CE (209) is responsible for the cryptographic computation tasks required by the SSL or TLS protocol, including certificate management, authentication, key exchange and data encryption/decryption: certificate management completes the import and export of certificates; the authentication component takes a certificate as input, verifies it, and outputs the authentication result; the key-exchange component takes an encrypted key as input, decrypts it and outputs the key; the data encryption/decryption component takes plaintext or ciphertext together with a key as input, performs encryption or decryption, and outputs the corresponding ciphertext or plaintext;
the compression/decompression engine CDE (210) takes the data to be compressed or decompressed as input, compresses or decompresses them, and outputs the corresponding compressed or decompressed data;
the file system offload engine FOE (206) supports high-throughput, high-speed transactional reads and writes: when reading data, it accesses the local disk correctly according to the attribute information of the data to be queried and the storage index table; when writing data, it stores the data in a classified, aggregated or compressed form according to the required data format, completes the interaction with the disk controller, and updates the index table;
when the data to be queried are not in the local disk array, the data management component submits the query request to the remote file system offload engine RFOE (207); the RFOE generates a new remote HTTP request and performs the remote query; the neighbour node or remote data center returns the queried data to the local RFOE, which then both pushes the data directly to the host that originally requested them and, through the local data management component, writes the data back to update the local disk array.
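The following sketches illustrate, under stated assumptions, three steps of the access method in claim 3; all function and type names are hypothetical placeholders, since in the patent these are hardware engines connected by I/O ports rather than C calls. First, the TOE's output dispatch: after IP reassembly, TCP data for destination port 80 are handed to the HTTP offload engine HOE (204), and port 443 payloads go to the SSL/TLS engine (213).

```c
/* Sketch of the TOE output dispatch in claim 3.  The stub handlers are
 * hypothetical stand-ins for the hardware engines. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct tcp_segment {
    uint16_t dst_port;
    const uint8_t *payload;
    size_t payload_len;
};

static void hoe_handle_http(const struct tcp_segment *s)   /* HOE (204) stub */
{ printf("HOE: %zu bytes of HTTP\n", s->payload_len); }

static void ssl_handle_record(const struct tcp_segment *s) /* SSL/TLS (213) stub */
{ printf("SSL/TLS: %zu bytes to decrypt\n", s->payload_len); }

static void toe_dispatch(const struct tcp_segment *s)
{
    switch (s->dst_port) {
    case 80:  hoe_handle_http(s);   break;  /* plain HTTP request        */
    case 443: ssl_handle_record(s); break;  /* TLS record for decryption */
    default:  break;                        /* other ports not offloaded */
    }
}

int main(void)
{
    uint8_t data[] = "GET / HTTP/1.1\r\n";
    struct tcp_segment seg = { 80, data, sizeof data - 1 };
    toe_dispatch(&seg);
    return 0;
}
```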
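Second, the server-side function of the HTTP offload engine HOE (204): parsing an HTTP request to extract the domain name (Host header) and the requested URL. This minimal sketch assumes a well-formed request of the form "GET <path> HTTP/1.1\r\nHost: <domain>\r\n..."; the real engine is a hardware parser.

```c
/* Sketch of HTTP request parsing: extract the URL from the request line and
 * the domain name from the Host header. */
#include <stdio.h>
#include <string.h>

/* Returns 0 on success, filling url and domain from the request buffer. */
static int hoe_parse_request(const char *req, char *url, size_t url_sz,
                             char *domain, size_t dom_sz)
{
    /* Request line: METHOD SP PATH SP VERSION */
    const char *sp1 = strchr(req, ' ');
    if (!sp1) return -1;
    const char *sp2 = strchr(sp1 + 1, ' ');
    if (!sp2) return -1;
    size_t len = (size_t)(sp2 - sp1 - 1);
    if (len >= url_sz) return -1;
    memcpy(url, sp1 + 1, len);
    url[len] = '\0';

    /* Host header gives the domain name. */
    const char *host = strstr(req, "\r\nHost: ");
    if (!host) return -1;
    host += 8;
    const char *end = strstr(host, "\r\n");
    if (!end || (size_t)(end - host) >= dom_sz) return -1;
    memcpy(domain, host, (size_t)(end - host));
    domain[end - host] = '\0';
    return 0;
}

int main(void)
{
    char url[128], domain[128];
    const char *req = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n";
    if (hoe_parse_request(req, url, sizeof url, domain, sizeof domain) == 0)
        printf("domain=%s url=%s\n", domain, url);  /* example.com /index.html */
    return 0;
}
```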
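Third, the remote-fetch fallback: when the queried data are not on the local disk array, the query goes to the remote file system offload engine RFOE (207), and the returned data are both pushed to the requesting host and written back to the local disk array through the data management component. Every function below is a hypothetical stub standing in for an engine interaction.

```c
/* Sketch of the remote-fetch fallback in claim 3; all engine interfaces are
 * hypothetical stubs, not APIs defined by the patent. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool foe_read_local(const char *url, char *buf, size_t cap)    /* FOE (206) stub */
{ (void)url; (void)buf; (void)cap; return false; /* simulate a local miss */ }

static bool rfoe_fetch_remote(const char *url, char *buf, size_t cap) /* RFOE (207) stub */
{ (void)url; snprintf(buf, cap, "<remote copy>"); return true; }

static void push_to_requester(int host, const char *buf)              /* reply-path stub */
{ printf("push to host %d: %s\n", host, buf); }

static void foe_write_back(const char *url, const char *buf)          /* cache-update stub */
{ printf("write back %s for %s\n", buf, url); }

static void cm_serve_object(int host, const char *url)
{
    char buf[256];
    if (foe_read_local(url, buf, sizeof buf)) {    /* hit on the local disk array   */
        push_to_requester(host, buf);
        return;
    }
    if (rfoe_fetch_remote(url, buf, sizeof buf)) { /* neighbour / remote data center */
        push_to_requester(host, buf);              /* forward to the original host   */
        foe_write_back(url, buf);                  /* update the local disk array    */
    }
}

int main(void)
{
    cm_serve_object(7, "http://example.com/a.html");
    return 0;
}
```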
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110025590.XA CN102143218B (en) | 2011-01-24 | 2011-01-24 | Web access cloud architecture and access method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102143218A (en) | 2011-08-03 |
CN102143218B (en) | 2014-07-02 |
Family
ID=44410436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110025590.XA Active CN102143218B (en) | 2011-01-24 | 2011-01-24 | Web access cloud architecture and access method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102143218B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1910869A (en) * | 2003-12-05 | 2007-02-07 | 艾拉克瑞技术公司 | TCP/IP offload device with reduced sequential processing |
US20100248698A1 (en) * | 2009-03-26 | 2010-09-30 | Electronics And Telecommunications Research Institute | Mobile terminal device inlcuding mobile cloud platform |
CN101883103A (en) * | 2009-04-15 | 2010-11-10 | 埃森哲环球服务有限公司 | Method and system for client-side extension of web server group architecture in cloud data center |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014008793A1 (en) * | 2012-07-10 | 2014-01-16 | 华为技术有限公司 | Tcp data transmission method, tcp uninstallation engine, and system |
CN102821000B (en) * | 2012-09-14 | 2015-12-09 | 乐视致新电子科技(天津)有限公司 | Improve the method for usability of PaaS platform |
CN102821000A (en) * | 2012-09-14 | 2012-12-12 | 乐视网信息技术(北京)股份有限公司 | Method for improving usability of PaaS platform |
CN104883335A (en) * | 2014-02-27 | 2015-09-02 | 王磊 | Full-hardware TCP protocol stack realizing method |
CN104883335B (en) * | 2014-02-27 | 2017-12-01 | 王磊 | A kind of devices at full hardware TCP protocol stack realizes system |
WO2015169090A1 (en) * | 2014-05-09 | 2015-11-12 | 国云科技股份有限公司 | Service-based message access layer frame and implementation method therefor |
US11171936B2 (en) | 2017-10-25 | 2021-11-09 | Alibaba Group Holding Limited | Method, device, and system for offloading algorithms |
CN109714302B (en) * | 2017-10-25 | 2022-06-14 | 阿里巴巴集团控股有限公司 | Method, device and system for unloading algorithm |
CN109714302A (en) * | 2017-10-25 | 2019-05-03 | 阿里巴巴集团控股有限公司 | The discharging method of algorithm, device and system |
CN108234662A (en) * | 2018-01-09 | 2018-06-29 | 江苏徐工信息技术股份有限公司 | A kind of secure cloud storage method with active dynamic key distribution mechanisms |
CN108881425A (en) * | 2018-06-07 | 2018-11-23 | 中国科学技术大学 | A kind of data package processing method and system |
CN111010410A (en) * | 2020-03-09 | 2020-04-14 | 南京红阵网络安全技术研究院有限公司 | Mimicry defense system based on certificate identity authentication and certificate signing and issuing method |
CN111726361A (en) * | 2020-06-19 | 2020-09-29 | 西安微电子技术研究所 | Ethernet communication protocol stack system and implementation method |
CN111737528A (en) * | 2020-06-23 | 2020-10-02 | Oppo(重庆)智能科技有限公司 | A data acquisition and verification method, device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN102143218B (en) | 2014-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102143218B (en) | Web access cloud architecture and access method | |
CN111614631B (en) | User mode assembly line framework firewall system | |
CN107046542A (en) | A kind of method that common recognition checking is realized using hardware in network level | |
US20140089500A1 (en) | Load distribution in data networks | |
CN104205080A (en) | Offloading packet processing for networking device virtualization | |
CN109698796A (en) | A kind of high performance network SiteServer LBS and its implementation | |
CN103176780A (en) | Binding system and method of multiple network interfaces | |
KR101480126B1 (en) | Network based high performance sap monitoring system and method | |
US11991083B2 (en) | Systems and methods for enhanced autonegotiation | |
CN107078936A (en) | For the system and method for the fine granularity control for providing the MSS values connected to transport layer | |
US20240251016A1 (en) | Communication protocol, and a method thereof for accelerating artificial intelligence processing tasks | |
CN116886309A (en) | Slice security mapping method and system for intelligent identification network | |
CN101674193B (en) | Management method of transmission control protocol connection and device thereof | |
CN103368872A (en) | Data packet forwarding system and method | |
CN118802980A (en) | A highly elastic and lightweight edge device management and control system based on cloud platform | |
CN101621528B (en) | Conversation system based on Ethernet switch cluster management and method for realizing conversation passage | |
Morishima et al. | Network transparent fog-based IoT platform for industrial IoT | |
CN102611752A (en) | Realization method of supervision server (iTracker) through participating in peer-to-peer computing technology by telecom operator | |
CN115834722B (en) | Data processing method, device, network element equipment and readable storage medium | |
CN102185896B (en) | Cloud service-oriented remote file request sensing device and method | |
CN116232803A (en) | Edge computing gateway platform architecture and interaction method thereof | |
CN101170544A (en) | A communication method using a single real IP address in a highly available cluster system | |
CN103546504A (en) | Application layer isolation based load balancing device virtualization system and method | |
Wang et al. | An optimized RDMA QP communication mechanism for hyperscale AI infrastructure | |
Xu et al. | Roda: a flexible framework for real-time on-demand data aggregation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20171010 Address after: 201112 room 1588, building 3A, No. 501, Union Road, Shanghai, Minhang District Co-patentee after: National Digital Switch System Engineering Technology Research Center Patentee after: Shanghai RedNeurons Information Technology Co., Ltd. Address before: 201112 3A business building, United Airlines road 1588, Shanghai, Minhang District Patentee before: Shanghai RedNeurons Information Technology Co., Ltd. |