CN103870435A - Server and data access method - Google Patents
Server and data access method
- Publication number
- CN103870435A (application number CN201410091090.XA)
- Authority
- CN
- China
- Prior art keywords
- processor
- mark
- access
- node
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17356—Indirect interconnection networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Multi Processors (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention relates to a server and a data access method. The server comprises a processor interconnect node, which comprises at least one node controller (NC) and at least two base nodes, each base node comprising at least four processors. The node controller is connected with the base nodes and manages the transactions of the processors according to the address space of the processors. The node controller also receives an access request and a source-processor identifier from a source processor and, according to the destination address carried in the access request, sends the access request and a node-controller identifier to a target processor. Providing at least one NC guarantees the bandwidth of the server; processors within the same base node are directly interconnected and can access one another, and when processors in different base nodes of the same processor interconnect node access data, the access does not need to cross the link between NCs, so that the server latency is reduced.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a server and a data access method.
Background technology
In terms of system architecture, current commercial servers can generally be divided into three classes: the symmetric multi-processor (Symmetric Multi-Processor, SMP) architecture, the non-uniform memory access (Non-Uniform Memory Access, NUMA) architecture, and the massive parallel processing (Massive Parallel Processing, MPP) architecture.
In an SMP server, multiple central processing units (Central Processing Unit, CPU) work symmetrically, with no master/slave or subordinate relationship between them; all CPUs share the same physical memory, and the time required to access any memory address is identical. The drawback of SMP is its limited scalability. A NUMA server has multiple CPU modules, each consisting of several CPUs (for example, four) with its own local memory, I/O slots, and so on; the CPU modules exchange information through an interconnect module (such as a crossbar switch). Each CPU accesses its local memory much faster than it accesses remote memory (the memory of other CPU modules in the system), so server performance does not scale linearly as the number of CPUs increases. An MPP server connects multiple SMP servers through a node interconnect network; each SMP node can run its own operating system and database, but a CPU in one node cannot access the memory of another node, and information exchange between nodes is carried out over the node interconnect network.
Three processor interconnect architectures are currently in use. The first is the single-cube interconnect architecture, the largest processor interconnect architecture recommended by Intel; it supports the interconnection of up to 8 CPUs, but can only be expanded to an 8P system at most, so more CPUs cannot be connected and scalability is limited.
In the second processor interconnect architecture, two or four CPUs in a node are interconnected with one node controller (Node Controller, NC), and a larger system is formed by interconnecting the NCs. The drawback of this architecture is that the external links of the NC become a bandwidth bottleneck, since all CPUs in the node handle their transactions and bandwidth demands through the same NC.
In the third processor interconnect architecture, two or four CPUs in a node are interconnected with two NCs, and the nodes of this topology are interconnected through the two NCs. The two NCs share the transaction processing and bandwidth demand according to address space, so the bandwidth demand can be met reasonably well. This topology has a small delay in a 4P configuration, but in an 8P or larger system, when a CPU in one node accesses the memory of another node it has to traverse two NCs, which causes a large delay, and delay has a significant impact on the performance of a NUMA system.
In summary, how to reduce server latency while guaranteeing server bandwidth is a problem that currently needs to be solved.
Summary of the invention
Technical problem
In view of this, the technical problem to be solved by the present invention is how to reduce server latency while guaranteeing server bandwidth.
Solution
In order to solve the above technical problem, in a first aspect, the present invention provides a server, comprising:
a processor interconnect node;
the processor interconnect node comprises at least one node controller and at least two base nodes, and each base node comprises at least four processors;
the node controller is connected with the base nodes and is configured to manage the transactions of the processors according to the address space of the processors;
the node controller is further configured to receive an access request and a source-processor identifier from a source processor and, according to a destination address carried in the access request, send the access request and a node-controller identifier to a target processor, wherein the source processor and the target processor are located in different base nodes, and the destination address is an address of the target processor.
With reference to the first aspect, in a first possible implementation of the first aspect, the node controller is further configured to receive a data response from the target processor and send the data response to the source processor according to the source-processor identifier.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the node controller comprises a control chip, a local proxy (LP), and a remote proxy (RP);
the control chip is configured to receive the source-processor identifier and the access request from the source processor, obtain an RP identifier from the access request, and send the access request and the source-processor identifier to the RP indicated by the RP identifier;
the RP is configured to obtain the destination address from the access request, decode the destination address to obtain an LP identifier, and send the access request to the LP indicated by the LP identifier; and to receive the data response from the LP and send the data response to the source processor corresponding to the source-processor identifier;
the LP is configured to record the RP identifier, obtain the destination address from the access request, and send the access request and the node-controller identifier to the target processor indicated by the destination address, where the node-controller identifier is the LP identifier; and to receive the data response from the target processor and send the data response to the RP indicated by the RP identifier.
With reference to the first aspect and the first and second possible implementations of the first aspect, in a third possible implementation of the first aspect, the node controller is specifically further configured to: when the target processor receives a new access request indicating access to the data at the destination address, receive a snoop message and the node-controller identifier sent by the target processor, the snoop message comprising the destination address; send the snoop message to the source processor according to the source-processor identifier; and receive a snoop response returned by the source processor and send the snoop response to the target processor according to the destination address.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the LP is further configured to receive the snoop message from the target processor, obtain the RP identifier from second directory information, and send the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP; and to send the snoop response to the target processor according to the destination address;
the RP is further configured to send the snoop message to the source processor indicated by the source-processor identifier, and to send the snoop response to the LP indicated by the node-controller identifier.
With reference to the first aspect and any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation of the first aspect, the processor interconnect node comprises a first base node, a second base node, and two node controllers, and the first base node and the second base node each comprise at least four processors.
In a second aspect, the present invention provides a data access method applied to the server of the first aspect or any one of its possible implementations. When a source processor needs to access a target processor, the data access method comprises:
receiving, by a node controller, an access request and a source-processor identifier from the source processor, the access request carrying a destination address, the destination address being an address of the target processor;
sending, by the node controller according to the destination address, the access request and a node-controller identifier to the target processor;
receiving, by the node controller, a data response from the target processor, and sending the data response to the source processor according to the source-processor identifier.
With reference to the second aspect, in a first possible implementation of the second aspect, the node controller comprises a control chip, a local proxy (LP), and a remote proxy (RP), and the sending, by the node controller according to the destination address, the access request and the node-controller identifier to the target processor comprises:
receiving, by the control chip, the source-processor identifier and the access request from the source processor, obtaining an RP identifier from the access request, and sending the access request and the source-processor identifier to the RP indicated by the RP identifier;
obtaining, by the RP, the destination address from the access request, decoding the destination address to obtain an LP identifier, and sending the access request to the LP indicated by the LP identifier;
recording, by the LP, the RP identifier, obtaining the destination address from the access request, and sending the access request and the node-controller identifier to the target processor indicated by the destination address, the node-controller identifier being the LP identifier;
and the receiving, by the node controller, the data response from the target processor and sending the data response to the source processor according to the source-processor identifier comprises:
receiving, by the LP, the data response from the target processor, and sending the data response to the RP indicated by the RP identifier;
sending, by the RP, the data response to the source processor corresponding to the source-processor identifier.
With reference to the second aspect and the first possible implementation of the second aspect, in a second possible implementation of the second aspect, when the target processor receives a new access request indicating that the data at the destination address needs to be accessed, the data access method further comprises:
receiving, by the node controller, a snoop message and the node-controller identifier sent by the target processor, the snoop message comprising the destination address;
sending, by the node controller, the snoop message to the source processor according to the source-processor identifier;
receiving, by the node controller, a snoop response returned by the source processor, and sending the snoop response to the target processor according to the destination address.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the sending, by the node controller, the snoop message to the source processor according to the source-processor identifier comprises:
receiving, by the LP, the snoop message from the target processor;
obtaining, by the LP, the RP identifier from second directory information, and sending the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP;
sending, by the RP, the snoop message to the source processor indicated by the source-processor identifier;
and the receiving, by the node controller, the snoop response returned by the source processor and sending the snoop response to the target processor according to the destination address comprises:
receiving, by the control chip, the snoop response from the source processor, obtaining the RP identifier from the snoop response, and sending the snoop response to the RP indicated by the RP identifier;
sending, by the RP, the snoop response to the LP indicated by the node-controller identifier;
sending, by the LP, the snoop response to the target processor according to the destination address.
Beneficial effects
In the server of this embodiment, the at least one NC guarantees the bandwidth of the server. Furthermore, processors in the same base node are directly interconnected and can access one another's data; when processors in different base nodes of the same processor interconnect node access data, they do not need to cross the link between NCs, which reduces the server latency.
Other features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are included in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present invention together with the specification, and serve to explain the principles of the present invention.
Fig. 1a shows a structural block diagram of a server according to an embodiment of the present invention;
Fig. 1b shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention;
Fig. 1c shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention;
Fig. 2 shows a flowchart of a data access method according to an embodiment of the present invention;
Fig. 3 shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention;
Fig. 4 shows a flowchart of a data access method according to another embodiment of the present invention;
Fig. 5 shows a structural block diagram of a server according to another embodiment of the present invention.
Embodiment
Various exemplary embodiments, features, and aspects of the present invention are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
In addition, numerous specific details are given in the following embodiments to better explain the present invention. Those skilled in the art will understand that the present invention can be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present invention.
Fig. 1a shows a structural block diagram of a server according to an embodiment of the present invention. The server 100 may specifically comprise a processor interconnect node 110.
The processor interconnect node 110 comprises at least one node controller 120 and at least two base nodes 130, and each base node 130 comprises at least four processors 140.
The node controller 120 is connected with the base nodes 130 and is configured to manage the transactions of the processors 140 according to the address space of the processors 140.
Specifically, the server 100 may comprise a processor interconnect node 110, and the processor interconnect node 110 may comprise at least one node controller 120. Further, the processor interconnect node 110 may also comprise at least two base nodes 130, and each base node 130 may comprise at least four processors 140.
In the server 100, the node controller 120 is connected with the base nodes 130 and can manage the transactions of the processors 140 according to the address spaces of the different processors 140 in the base nodes 130. The node controller 120 can also be connected with node controllers in other processor interconnect nodes, so that a processor can access processors in other processor interconnect nodes through the links between node controllers, thereby meeting the bandwidth requirement of the server.
Further, the node controller 120 is also configured to receive an access request and a source-processor identifier from a source processor and, according to the destination address carried in the access request, send the access request and a node-controller identifier to a target processor, where the source processor and the target processor are located in different base nodes and the destination address is an address of the target processor. The node controller 120 is also configured to receive a data response from the target processor and send the data response to the source processor according to the source-processor identifier.
Within a processor interconnect node 110, processors 140 in the same base node 130 can communicate directly through the communication modules of the processors 140 to access one another, while processors 140 in different base nodes 130 communicate through the node controller 120 to access one another. When a source processor needs to access data in a target processor and the source processor and the target processor are located in different base nodes, in the course of the source processor sending an access request to the target processor, the node controller 120 receives the access request and the source-processor identifier from the source processor and, according to the destination address carried in the access request, sends the access request and the node-controller identifier to the target processor; in the course of the target processor returning a data response to the source processor, the target processor sends the data response to the corresponding node controller 120 according to the node-controller identifier, and after receiving the data response the node controller 120 sends the data response to the source processor according to the source-processor identifier. This communication does not need to cross the link between NCs, so the server latency can be reduced.
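As an illustration of the forwarding behaviour described above, the following C sketch models a node controller that forwards an access request together with its own identifier and later routes the data response back to the source processor. All type and function names are hypothetical and chosen only for this illustration; they do not appear in the patent.

```c
#include <stdio.h>

/* Hypothetical message types modelling the exchange described above. */
typedef struct { int src_cpu; unsigned long long dest_addr; } AccessRequest;
typedef struct { int nc_id;   unsigned long long dest_addr; } ForwardedRequest;
typedef struct { int nc_id;   unsigned long long dest_addr; } DataResponse;

typedef struct {
    int id;          /* node-controller identifier sent to the target   */
    int saved_src;   /* source-processor identifier recorded on request */
} NodeController;

/* NC receives the access request and source id, records the source id,
 * and forwards the request plus its own identifier to the target CPU. */
static ForwardedRequest nc_forward_request(NodeController *nc, AccessRequest req) {
    nc->saved_src = req.src_cpu;
    ForwardedRequest fwd = { .nc_id = nc->id, .dest_addr = req.dest_addr };
    return fwd;  /* delivered to the CPU owning dest_addr */
}

/* NC receives the data response from the target and returns the CPU id
 * it must be delivered to, looked up from the recorded source id. */
static int nc_route_response(const NodeController *nc, DataResponse rsp) {
    (void)rsp;
    return nc->saved_src;
}

int main(void) {
    NodeController nc = { .id = 1, .saved_src = -1 };
    AccessRequest req = { .src_cpu = 5, .dest_addr = 0x2AULL << 36 };
    ForwardedRequest fwd = nc_forward_request(&nc, req);
    DataResponse rsp = { .nc_id = fwd.nc_id, .dest_addr = fwd.dest_addr };
    printf("response delivered to CPU%d\n", nc_route_response(&nc, rsp));
    return 0;
}
```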
Specifically, the node controller 120 may comprise a control chip, a local proxy (LP), and a remote proxy (RP). In the course of the source processor requesting access to data in the target processor, these components of the node controller 120 may respectively perform the following actions.
The control chip is configured to receive the source-processor identifier and the access request from the source processor, obtain an RP identifier from the access request, and send the access request and the source-processor identifier to the RP indicated by the RP identifier.
The RP is configured to obtain the destination address from the access request, decode the destination address to obtain an LP identifier, and send the access request to the LP indicated by the LP identifier; and to receive the data response from the LP and send it to the source processor corresponding to the source-processor identifier.
The LP is configured to record the RP identifier, obtain the destination address from the access request, and send the access request and the node-controller identifier to the target processor indicated by the destination address, where the node-controller identifier is the LP identifier; and to receive the data response from the target processor and send it to the RP indicated by the RP identifier.
In a possible implementation, when the target processor receives a new access request indicating access to the data at the destination address, the node controller 120 may receive a snoop message and the node-controller identifier sent by the target processor, the snoop message comprising the destination address; send the snoop message to the source processor according to the source-processor identifier; and receive the snoop response returned by the source processor and send the snoop response to the target processor according to the destination address.
In the server of this embodiment, the at least one NC guarantees the bandwidth of the server; processors in the same base node are directly interconnected and can access one another's data, and when processors in different base nodes of the same processor interconnect node access data they do not need to cross the link between NCs, which reduces the server latency.
Fig. 1b shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention. As shown in Fig. 1b, the processor interconnect node 200 may specifically comprise a first base node 210, a second base node 220, and two node controllers 230; the first base node 210 comprises at least four processors 240, and the second base node 220 comprises at least four processors 250.
Specifically, four processors form a 4P node, which may be referred to as a base node. Each processor has its own memory and communication module; the processors communicate with one another through their communication modules and can access data in their own memory as well as data in one another's memory. Each processor interconnect node may consist of eight processors, formed by interconnecting two such base nodes through two node controllers (NCs). The two NCs may each be responsible for a plane of a different address space, that is, for the transactions of the processors in two different address spaces; this assignment may also be adjusted as required, and the present invention imposes no limitation on it. Fig. 1c shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention. As shown in Fig. 1c, node controller 1 is responsible for the transactions of processor 0 and processor 1 in base node 0 and of processor 4 and processor 5 in base node 1, while node controller 0 is responsible for the transactions of processor 2 and processor 3 in base node 0 and of processor 6 and processor 7 in base node 1. Further, the two NCs may each be interconnected with other NCs through their interconnect interfaces to form a server with larger bandwidth. In this processor interconnect node, the two NCs guarantee the bandwidth of the server when a processor performs a cross-NC access. In addition, when a processor in base node 0 accesses memory in base node 1, it can do so directly through node controller 0 or node controller 1 without crossing the link between NCs; for example, processor 2 in base node 0 can directly access the memory of processor 6 in base node 1 through node controller 0. This keeps the server latency caused by cross-base-node accesses within a processor interconnect node low.
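A minimal sketch of the address-space partition just described, with the NC-to-processor assignment of Fig. 1c hard-coded; the mapping and the helper names are illustrative only, and as the text notes the assignment could equally be configured differently.

```c
#include <stdio.h>

/* Which NC manages a processor's transactions, per the Fig. 1c example:
 * NC1 -> CPU0, CPU1 (base node 0) and CPU4, CPU5 (base node 1)
 * NC0 -> CPU2, CPU3 (base node 0) and CPU6, CPU7 (base node 1)
 * Processors 0-3 form base node 0, processors 4-7 form base node 1. */
static int base_node_of(int cpu)   { return cpu / 4; }
static int managing_nc_of(int cpu) { return (cpu % 4 < 2) ? 1 : 0; }

int main(void) {
    for (int cpu = 0; cpu < 8; ++cpu)
        printf("CPU%d: base node %d, managed by NC%d\n",
               cpu, base_node_of(cpu), managing_nc_of(cpu));
    return 0;
}
```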
In summary, the server provided by the present invention comprises at least one node controller and at least two base nodes that communicate through the node controller, and can reduce the server latency incurred when processors of different base nodes within the same processor interconnect node access one another, while guaranteeing the server bandwidth.
Fig. 2 shows a flowchart of a data access method according to an embodiment of the present invention. As shown in Fig. 2, the data access method can be applied to the server of the above embodiments of the present invention. When a source processor needs to access a target processor, the data access method mainly comprises steps 300 to 320, described below.
In step 300, the node controller receives an access request and a source-processor identifier from the source processor, the access request carrying a destination address. Specifically, in the processor interconnect node of the above embodiments, when the source processor needs to access the target processor, it can determine, according to the address of the data to be accessed, i.e. the destination address, the NC that is to process the transaction, and send the access request and the source-processor identifier to that NC. The access request may carry the destination address.
In a possible implementation, in the processor interconnect node of the above embodiments, the transactions of the source processor and the transactions of the target processor may be managed by the two NCs of the processor interconnect node respectively. The two NCs manage different address spaces, which shares the bandwidth load of the processor interconnect node, but the two NCs are not interconnected and cannot communicate with each other directly. In this case, the source processor first sends the access request and the source-processor identifier to an intermediate processor. The intermediate processor belongs to the same base node as the source processor, so they can communicate directly without going through an NC, and the transactions of the intermediate processor and those of the target processor are managed by the same NC, so the intermediate processor can communicate with the target processor through that NC. Through the forwarding of the intermediate processor, the access request and the source-processor identifier of the source processor can be sent to the target processor. For example, Fig. 3 shows a structural block diagram of a processor interconnect node according to an embodiment of the present invention. As shown in Fig. 3, if CPU5 is the source processor, CPU2 is the target processor, and CPU5 needs to access data of CPU2, CPU5 sends the access request and the identifier of CPU5 to CPU6 or CPU7, and CPU6 or CPU7 sends the access request and the identifier of CPU5 to CPU2 through the right-side NC. The choice between CPU6 and CPU7 can be determined by the routing configuration of the processor interconnect node.
In step 310, the node controller sends the access request and the node-controller identifier to the target processor according to the destination address. Specifically, after receiving the access request and the source-processor identifier, the NC can record the source-processor identifier, which is used later to determine which processor occupies the data at the destination address. The NC can determine the target processor where the data to be accessed resides from the destination address, and send the access request to the target processor; at the same time, the NC also sends its own identifier, i.e. the node-controller identifier, to the target processor, so that the target processor can return the data response correctly.
In step 320, the node controller receives a data response from the target processor and sends the data response to the source processor according to the source-processor identifier. Specifically, after receiving the data access request, the target processor can return a data response. In the course of returning the data response, the NC that can forward the data response is first determined from the node-controller identifier; after receiving the data response, that NC can determine, from the recorded source-processor identifier, the source processor that requested the data, and send the data response to the source processor, completing the data access.
In a possible implementation, the NC may comprise a local proxy (Local Proxy, LP) and a remote proxy (Remote Proxy, RP). The LP handles the protocol processing between the CPUs inside the base node and the NCs outside the base node. Seen from a CPU inside the base node, the LP has the function of a caching agent (Cache Agent, CA): the CPUs in the base node regard the LP as having processor cores, although the cores are actually in processors of the remote node rather than on the LP. Seen from an NC outside the base node, the LP has the function of a home agent (Home Agent, HA): the NCs outside the base node regard the LP as having memory, although the memory is actually attached to a processor inside the base node rather than to the LP. Conversely, the RP also handles the protocol processing between the CPUs inside the base node and the NCs outside the base node: seen from a CPU inside the base node, the RP has the HA function, i.e. the CPUs in the base node regard the RP as having memory, although the memory is actually attached to processors outside the base node; seen from an NC outside the base node, the RP has the CA function, i.e. the NCs outside the base node regard the RP as having processor cores, although the cores are actually in processors inside the base node. During data access between CPUs, one LP can be responsible for the HA transactions of two processors, an HA transaction being the process of requesting access to memory. The RP can interleave and manage the requests of the eight CPUs by low-order address bits. The presence of the LP and the RP allows processors both inside and outside a base node to access data in memory both inside and outside the base node without data inconsistency.
In a possible implementation, before step 300, the source processor can determine from the destination address whether the access request needs to be sent to an NC. If the source processor and the target processor where the data to be accessed resides belong to the same base node (for example, base node 0), the CPUs in the same base node can access each other directly through their communication modules, and the access request does not need to be sent to an NC. If the source processor and the target processor do not belong to the same base node (for example, the source processor belongs to base node 0 and the target processor belongs to base node 1), processors in different base nodes must access each other through an NC, so the access request needs to be sent to an NC. When the access request needs to be sent to an NC, the source processor can further determine from the destination address which NC of the processor interconnect node the access request should be sent to. As shown in Fig. 3, CPU0 to CPU3 belong to one base node, whose addresses have address bit A41=0, and CPU4 to CPU7 belong to the other base node, whose addresses have address bit A41=1; the processor addresses proxied by the left-side NC have address bit A40=0, and those proxied by the right-side NC have address bit A40=1. If CPU5 requests access to the memory data of CPU2, CPU5 determines from address bit A41=0 of CPU2, where the data to be accessed resides, that CPU2 and CPU5 do not belong to the same base node, so the access must go through an NC; CPU5 then determines from address bit A40=1 of CPU2 that the access request should be sent to the right-side NC.
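The following sketch reproduces this source-side decode under the bit assignments of the Fig. 3 example (A41 selects the base node, A40 selects the NC). The function names are hypothetical and the snippet only covers the case described in the text.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Address-bit decode used by a source CPU, following the Fig. 3 example:
 * bit A41 of the destination address identifies the base node (0 or 1),
 * bit A40 identifies which NC proxies that address (0 = left, 1 = right). */
static inline int bit(uint64_t addr, unsigned n) { return (int)((addr >> n) & 1u); }

static bool needs_nc(uint64_t dest_addr, int src_base_node) {
    return bit(dest_addr, 41) != src_base_node;   /* different base node */
}

static const char *nc_for(uint64_t dest_addr) {
    return bit(dest_addr, 40) ? "right-side NC" : "left-side NC";
}

int main(void) {
    /* CPU5 (base node 1) accessing memory of CPU2: A41 = 0, A40 = 1. */
    uint64_t dest = (UINT64_C(0) << 41) | (UINT64_C(1) << 40);
    int src_base_node = 1;
    if (needs_nc(dest, src_base_node))
        printf("cross-base-node access, send request via %s\n", nc_for(dest));
    else
        printf("same base node, access directly via communication module\n");
    return 0;
}
```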
In a possible implementation, the NC may further comprise a control chip, and step 310 may specifically comprise the following.
In step 311, the control chip receives the source-processor identifier and the access request from the source processor, obtains the RP identifier from the access request, and sends the access request and the source-processor identifier to the RP indicated by the RP identifier. Specifically, the access request may carry the destination address, which indicates the address of the data the source processor needs to access. After receiving the source-processor identifier and the access request from the source processor, the control chip obtains the RP identifier from the destination address carried in the access request; the RP identifier tells the control chip which RP in the NC the source-processor identifier and the access request should be sent to. The control chip sends the source-processor identifier and the access request to the RP indicated by the RP identifier, where the RP identifier may be address bits [A7, A6] of the destination address, and the source-processor identifier may be recorded in the directory information of the RP to mark the source processor that occupies the data at the destination address in memory. For example, as shown in Fig. 3, if address bits [A7, A6] of the destination address are 10, CPU5 determines to send the access request to the right-side NC through intermediate processor CPU6 or CPU7; the control chip of the right-side NC receives the access request, obtains the RP identifier from it, and determines from the RP identifier that the access request should be sent to RP2.
Specifically, the RP can store a piece of directory information, i.e. first directory information, which records which processors occupy the data at which memory addresses, a processor being recorded by its identifier. According to the cache coherency and memory consistency protocol MESI, each cache line can be marked as one of the following four states: Modified, Exclusive, Shared, or Invalid. When a cache line is marked Invalid, the cache line is invalid, i.e. empty; an invalid line must be fetched from memory and brought to the Shared or Exclusive state before a read request can be fulfilled.
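The first directory information can be pictured as a small table keyed by address, each entry carrying a MESI state and the identifier of the occupying processor. The sketch below is only an illustration of such a directory; the layout and names are assumptions, not the patent's data structure.

```c
#include <stdint.h>
#include <stdio.h>

/* MESI states named in the text. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } MesiState;

/* One entry of a (hypothetical) first directory kept in the RP:
 * which processor occupies the data at a given memory address. */
typedef struct {
    uint64_t  addr;
    MesiState state;
    int       owner_cpu;   /* source-processor identifier, -1 if none */
} DirEntry;

#define DIR_SIZE 8

static DirEntry *dir_lookup(DirEntry *dir, uint64_t addr) {
    for (int i = 0; i < DIR_SIZE; ++i)
        if (dir[i].state != INVALID && dir[i].addr == addr)
            return &dir[i];
    return NULL;   /* invalid / not present: must be fetched from memory */
}

int main(void) {
    DirEntry dir[DIR_SIZE] = {
        { .addr = 0x1000, .state = EXCLUSIVE, .owner_cpu = 5 },
    };
    DirEntry *e = dir_lookup(dir, 0x1000);
    if (e)
        printf("addr 0x%llx occupied by CPU%d (state %d)\n",
               (unsigned long long)e->addr, e->owner_cpu, (int)e->state);
    else
        printf("line invalid: fetch from memory before serving the read\n");
    return 0;
}
```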
In step 312, the RP obtains the destination address from the access request, decodes the destination address to obtain the LP identifier, and sends the access request to the LP indicated by the LP identifier. The first directory information of the RP can also record the state of the cache lines. If the RP finds that the destination address is recorded as Invalid in the first directory information, the RP can determine from address bits [A45, A42] of the destination address which processor interconnect node the received access request should be sent to; if it is the processor interconnect node where this RP resides, the RP determines from address bits A41 and A6 of the destination address which LP the request should be sent to. For example, as shown in Fig. 3, the processor addresses proxied by LP0 and LP1 have address bit A41=0, and those proxied by LP3 and LP4 have address bit A41=1; the RP decodes the destination address to obtain the LP identifier, and with A41=0 in the destination address it determines that the received access request should be sent to LP0 or LP1. Further, the HA transactions proxied by LP0 have A6=0 and those proxied by LP1 have A6=1, so with address bit A6=0 in the destination address the RP determines that the received access request should be sent to LP0, i.e. LP0 is the LP.
In step 313, the LP records the RP identifier, obtains the destination address from the access request, and sends the access request and the node-controller identifier to the target processor indicated by the destination address, the node-controller identifier being the LP identifier. Specifically, after the LP receives the access request, it records the identifier of the RP, i.e. the RP identifier, so that the data response can later be returned to that RP. The LP also obtains the destination address from the access request and can determine from address bit A39 of the destination address which processor, i.e. the target processor, the access request should be sent to. When the LP sends the access request, it also sends the node-controller identifier, i.e. the LP identifier, to the target processor, so that the target processor can later return the data response to this LP. For example, as shown in Fig. 3, with address bit A39=0 in the destination address, LP0 sends the access request and the node-controller identifier to the corresponding target processor CPU2.
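Pulling together the address bits mentioned in steps 311 to 313, the following sketch decodes a destination address into the RP, LP, and target processor of the Fig. 3 example. The bit positions follow the text; the helper names, the way A41 and A6 are combined into an LP number, and the CPU selection shown for A39 are illustrative assumptions covering only the case spelled out above.

```c
#include <stdint.h>
#include <stdio.h>

static inline unsigned bits(uint64_t a, unsigned hi, unsigned lo) {
    return (unsigned)((a >> lo) & ((1u << (hi - lo + 1)) - 1u));
}

/* Decode chain inside the NC, per the Fig. 3 example:
 *  - [A7, A6] selects the RP the control chip forwards to (step 311)
 *  - A41, A6  select the LP the RP forwards to (step 312)
 *  - A39      selects the target processor the LP forwards to (step 313) */
static unsigned rp_of(uint64_t addr) { return bits(addr, 7, 6); }
static unsigned lp_of(uint64_t addr) {
    /* A41 = 0 -> LP0 or LP1; A6 then picks between them (A6 = 0 -> LP0). */
    return (bits(addr, 41, 41) << 1) | bits(addr, 6, 6);
}
static const char *target_of(uint64_t addr) {
    /* In the example, A39 = 0 points LP0 at CPU2; other values are not
     * spelled out in the text, so only the illustrated case is shown. */
    return bits(addr, 39, 39) == 0 ? "CPU2" : "(other CPU in the base node)";
}

int main(void) {
    /* Destination address with A41=0, A40=1, A39=0, [A7,A6]=10. */
    uint64_t dest = (UINT64_C(1) << 40) | (UINT64_C(1) << 7);
    printf("control chip -> RP%u, RP -> LP%u, LP -> %s\n",
           rp_of(dest), lp_of(dest), target_of(dest));
    return 0;
}
```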
In a possible implementation, if the source processor and the target processor do not belong to the same processor interconnect node, then, referring to the server of the above embodiments, different processor interconnect nodes are connected through the links between NCs. In this case, the data access from the source processor to the target processor has to cross the link between NCs, and the sending and receiving of the access request may be performed by an LP and an RP of different NCs: the RP and the LP do not belong to the same NC, the RP belongs to the NC of the processor interconnect node where the source processor resides, and the LP belongs to the NC of the processor interconnect node where the target processor resides. In this case, besides recording the RP identifier, the LP can also record the NC where the RP resides; when the data response is returned later, the LP first determines the NC where the RP resides from the recorded information, and then determines the RP from the RP identifier.
In a possible implementation, step 320 may specifically comprise the following.
In step 321, the LP receives the data response from the target processor and sends the data response to the RP indicated by the RP identifier. In step 322, the RP sends the data response to the source processor corresponding to the source-processor identifier.
Specifically, after the target processor receives the access request and the node-controller identifier, it needs to return a data response to the source processor that requested the data, but the target processor does not record which specific processor needs to access the data at that address. The target processor determines the corresponding LP from the node-controller identifier and returns the data response to that LP; after the LP receives the data response, it forwards the data response to the RP corresponding to the previously recorded RP identifier, i.e. address bits [A7, A6] of the destination address. In a possible implementation, referring to the foregoing description of this embodiment, the LP may also first determine the NC where the RP resides from the previously recorded information. For example, as shown in Fig. 3, CPU2 receives the node-controller identifier along with the access request; CPU2 sends the data response, which may carry the destination address, to the LP0 indicated by the node-controller identifier; after LP0 receives the data response, it obtains the RP identifier, i.e. [A7, A6]=10, from its recorded information and determines from the RP identifier that the data response should be sent to the RP2 it points to.
In the above step 312, the RP recorded the source-processor identifier. After the RP receives the data response, it forwards the data response to the source processor corresponding to the recorded source-processor identifier, so that the source processor can access the data at the destination address after receiving the data response, completing the data access. The target processor can store directory information recording that an external processor occupies the data at the destination address in memory, without recording which external processor it is; the LP can store directory information recording that an RP in the NC where this LP resides occupies the data at the destination address in memory, and further, referring to the related description of the above embodiments, the NC of the RP recorded by the LP may also differ from the NC where the LP resides; and the RP can store the first directory information recording that the source processor occupies the data at the destination address in memory. For example, as shown in Fig. 3, the first directory information of RP2 records that the source processor accessing the data at the destination address is CPU5, so RP2 sends the data response to CPU5, and CPU5 can then access the data at the destination address on CPU2.
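The return path of the data response can be summarised with the small sketch below, which follows the Fig. 3 example (CPU2 as target, LP0, RP2, CPU5 as source). The structures and function names are a hypothetical model of that flow, not the patent's implementation.

```c
#include <stdio.h>

/* Return path of the data response, following the Fig. 3 example:
 * target CPU -> LP  (chosen by the node-controller identifier it received)
 * LP -> RP          (chosen by the RP identifier the LP recorded in step 313)
 * RP -> source CPU  (chosen by the source-processor identifier the RP
 *                    recorded in its first directory information). */
typedef struct { int recorded_rp; }      LocalProxy;
typedef struct { int recorded_src_cpu; } RemoteProxy;

static int lp_forward_response(const LocalProxy *lp)  { return lp->recorded_rp; }
static int rp_forward_response(const RemoteProxy *rp) { return rp->recorded_src_cpu; }

int main(void) {
    LocalProxy  lp0 = { .recorded_rp = 2 };       /* [A7,A6] = 10 -> RP2   */
    RemoteProxy rp2 = { .recorded_src_cpu = 5 };  /* first directory: CPU5 */

    int rp  = lp_forward_response(&lp0);
    int cpu = rp_forward_response(&rp2);
    printf("CPU2 -> LP0 -> RP%d -> CPU%d: data response delivered\n", rp, cpu);
    return 0;
}
```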
It should be noted that, in the processor interconnect node of the present invention, the processors in the same base node may be interconnected in any manner and communicate through the communication modules of the processors; the present invention does not limit this. In addition, which processors' transactions an NC manages can be divided according to different address spaces and can also be adapted as required; the present invention likewise does not limit this. As shown in Fig. 3, according to the division of address space, the right-side NC may manage the transactions of CPU2, CPU3, CPU6 and CPU7, and the left-side NC may manage the transactions of CPU0, CPU1, CPU4 and CPU5; the management may also be interleaved as required, for example the right-side NC managing the transactions of CPU1, CPU2, CPU4 and CPU7 and the left-side NC managing the transactions of CPU0, CPU3, CPU5 and CPU6.
With the data access method of this embodiment, within the same processor interconnect node, processors in the same base node are directly interconnected and access one another's data; when processors in different base nodes access data, they do not need to cross the link between NCs, which reduces the server latency while guaranteeing the server bandwidth.
Fig. 4 shows a flowchart of a data access method according to another embodiment of the present invention. In a possible implementation, after the target processor has sent the data response to the source processor in response to the access request, another processor may need to access the data at this destination address and send a new access request to the target processor. The directory information looked up in the target processor then shows that the data at the destination address is occupied by an external processor, so the target processor initiates an external snoop. As shown in Fig. 4, this data access method mainly comprises the following steps.
In step 400, the node controller receives a snoop message and the node-controller identifier sent by the target processor, the snoop message comprising the destination address.
Specifically, when the target processor receives a new access request and the new access request indicates that the data at the destination address needs to be used, the target processor can determine from its stored directory information that the data at the destination address is occupied by an external processor, but cannot determine by which external processor. The target processor therefore initiates a snoop towards the NC by sending a snoop message, and at the same time sends to the NC the node-controller identifier it previously received, so that the control chip of the NC can determine which LP the snoop message should be sent to. The snoop message may also comprise the destination address.
In step 410, the node controller sends the snoop message to the source processor according to the source-processor identifier; in step 420, the node controller receives the snoop response returned by the source processor and sends the snoop response to the target processor according to the destination address. Specifically, after receiving the snoop message and the node-controller identifier, the NC sends the snoop message to the source processor according to the source-processor identifier. After receiving the snoop message, the source processor returns a snoop response to the NC; in the course of returning the snoop response, the source processor first determines from the destination address the NC that can forward the snoop response, and after receiving the snoop response the NC determines the target processor from the destination address and sends the snoop response to the target processor, completing the snoop.
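A compact sketch of this snoop flow under the Fig. 3 roles (target CPU2, right-side NC, source CPU5) follows; the message shapes and function names are assumptions made only for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Snoop flow of steps 400-420, modelled with hypothetical message structs:
 * target CPU -> NC:     snoop message (with destination address) + NC id
 * NC        -> source:  snoop message, routed by the recorded source id
 * source    -> NC:      snoop response (with destination address)
 * NC        -> target:  snoop response, routed by the destination address */
typedef struct { uint64_t dest_addr; int nc_id; } SnoopMessage;
typedef struct { uint64_t dest_addr; }            SnoopResponse;

typedef struct { int id; int recorded_src_cpu; } NodeController;

static int nc_route_snoop(const NodeController *nc, SnoopMessage m) {
    (void)m;
    return nc->recorded_src_cpu;            /* deliver snoop to source CPU */
}

static const char *nc_route_snoop_response(SnoopResponse r) {
    /* The destination address identifies the target processor; only the
     * Fig. 3 case (A39 = 0 -> CPU2) is shown here. */
    return ((r.dest_addr >> 39) & 1u) == 0 ? "CPU2" : "(other CPU)";
}

int main(void) {
    NodeController right_nc = { .id = 1, .recorded_src_cpu = 5 };
    SnoopMessage  msg = { .dest_addr = UINT64_C(1) << 40, .nc_id = right_nc.id };
    SnoopResponse rsp = { .dest_addr = msg.dest_addr };

    printf("snoop message delivered to CPU%d\n", nc_route_snoop(&right_nc, msg));
    printf("snoop response delivered to %s\n", nc_route_snoop_response(rsp));
    return 0;
}
```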
In a possible implementation, in the processor interconnect node each HA transaction of a processor corresponds to one LP. Before step 400, the target processor can send the snoop message to the NC according to the correspondence between HA transactions and LPs. In doing so the target processor may send the snoop message to several NCs at the same time; an NC that does not proxy the target processor returns an invalid response (Response Invalid, RSPI) to the target processor after receiving the snoop message. For example, as shown in Fig. 3, HA0 transactions correspond to LP0; when CPU2 sends the snoop message, it may send it to both the left-side and the right-side NC at the same time, and after the left-side NC receives the snoop message it returns RSPI directly to CPU2 because it does not proxy the transactions of CPU2.
In a possible implementation, referring to the related description of the data access method of the above embodiments, the transactions of the source processor and the transactions of the target processor may be managed by the two NCs of the processor interconnect node respectively; the two NCs are not interconnected and cannot communicate directly. In this case, the target processor first sends the snoop message to an intermediate processor. The intermediate processor belongs to the same base node as the target processor, so they can communicate directly without going through an NC, and the transactions of the intermediate processor and those of the source processor are managed by the same NC, so the intermediate processor can communicate with the source processor through that NC. Through the forwarding of the intermediate processor, the snoop message of the target processor can be sent to the source processor. As shown in Fig. 3, if CPU5 is the source processor and CPU0 is the target processor, then when CPU0 initiates a snoop, CPU0 needs to send the snoop message to CPU2 or CPU3, and CPU2 or CPU3 sends the snoop message to CPU5 through the right-side NC. The choice between CPU2 and CPU3 can be determined by the routing configuration of the processor interconnect node.
In this data access method, step 410 may specifically comprise the following.
In step 411, the LP receives the snoop message from the target processor; in step 412, the LP obtains the RP identifier from the second directory information and sends the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP. Specifically, while receiving the snoop message sent by the target processor, the NC also receives the node-controller identifier sent by the target processor, and the control chip sends the snoop message to the LP it indicates according to the node-controller identifier. After the LP receives the snoop message, it can determine from the second directory information stored in the LP which NC the snoop message should be sent to, and further, the LP can determine from the RP identifier which RP in that NC the snoop message should be sent to. For example, as shown in Fig. 3, if address bits [A7, A6] of the destination address are 00, the LP0 of the right-side NC determines that the snoop message needs to be sent to RP0; by looking up the second directory information, LP0 determines that the RP and LP0 belong to the same NC, and sends the snoop message to the RP0 of the right-side NC.
In step 413, the RP sends the snoop message to the source processor indicated by the source-processor identifier.
Specifically, after receiving the snoop message, the RP can record the node-controller identifier, so that the snoop response can later be returned correctly to the target processor. The first directory information records the source processor that occupies the data at this address, and the RP can determine the source processor from the recorded source-processor identifier and send the snoop message to it. For example, as shown in Fig. 3, the first directory information of RP0 records that CPU5 occupies the data at the destination address; by looking up the first directory information, RP0 determines CPU5 and sends the snoop message to CPU5.
In a possible implementation, referring to the related description of the data access method of the above embodiments, the NC that receives the snoop message may not handle the transactions of the source processor. In this case, the NC first sends the snoop message to an intermediate processor whose transactions are managed by this NC and which belongs to the same base node as the source processor; through the forwarding of the intermediate processor, the snoop message of the target processor can be sent to the source processor. The choice of intermediate processor can be determined by the routing configuration of the processor interconnect node.
In a possible implementation, if the source processor and the target processor do not belong to the same processor interconnect node, then, referring to the server of the above embodiments, different processor interconnect nodes are connected through the links between NCs. In this case, the snoop message sent by the target processor to the source processor has to cross the link between NCs, and the sending and receiving of the snoop message may be performed by an LP and an RP of different NCs: the RP and the LP do not belong to the same NC, the RP belongs to the NC of the processor interconnect node where the source processor resides, and the LP belongs to the NC of the processor interconnect node where the target processor resides. In this case, besides recording the RP identifier, the LP can also record the NC where the RP resides; when the snoop response is returned later, the LP first determines the NC where the RP resides from the recorded information, and then determines the RP from the RP identifier.
In this data access method, step 420 may specifically comprise the following.
In step 421, the control chip receives the snoop response from the source processor, obtains the RP identifier from the snoop response, and sends the snoop response to the RP indicated by the RP identifier. In step 422, the RP sends the snoop response to the LP indicated by the node-controller identifier.
Specifically, after receiving the snoop message, the source processor returns a snoop response to the target processor through the NC; the snoop response may comprise the destination address. After the control chip receives the snoop response, it obtains the RP identifier from the snoop response and sends the snoop response to the RP indicated by the RP identifier, the RP identifier being address bits [A7, A6] of the destination address. For example, as shown in Fig. 3, if address bits [A7, A6] of the destination address are 00, CPU5 determines that the snoop response needs to be sent to RP0.
In step 423, the LP sends the snoop response to the target processor according to the destination address. Specifically, in the above step 413 the RP recorded the node-controller identifier, i.e. the LP identifier. After the RP receives the snoop response, it determines the LP from the node-controller identifier and sends the snoop response to that LP. After the LP receives the snoop response, it determines from the destination address carried in the snoop response the target processor that initiated the snoop. In a possible implementation, referring to the foregoing description of this embodiment, the LP may also first determine the NC where the RP resides from the previously recorded information. For example, as shown in Fig. 3, if the node-controller identifier recorded in RP0 from the LP that sent it the snoop message is LP0, RP0 sends the snoop response to LP0, and LP0 determines from address bit A39=0 of the destination address that the snoop response should be sent to CPU2.
In a possible implementation, the snoop message may also comprise a snoop type. After the target processor completes the snoop, it obtains the right to use the data at the destination address. At the same time, the directory information stored in the target processor, the LP, and the RP is modified according to the snoop type carried in the snoop message. For example, if the snoop is of the exclusive type, the entries about this destination address in the directory information stored in the target processor, the LP, and the RP are cleared, and the source processor can no longer hold the data at the destination address; if the snoop is of the shared type, the entries about this destination address in the directory information stored in the target processor, the LP, and the RP are changed to the shared state, and the source processor and the processor that sent the new access request to the target processor can share the data at the destination address at the same time.
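The effect of the snoop type on a directory entry for the destination address can be sketched as below; the entry layout and names are a hypothetical model of the behaviour described above, not the patent's data structure.

```c
#include <stdio.h>

/* Directory-entry update on snoop completion, as described above. */
typedef enum { ENTRY_INVALID, ENTRY_EXCLUSIVE, ENTRY_SHARED } EntryState;
typedef enum { SNOOP_EXCLUSIVE, SNOOP_SHARED } SnoopType;

typedef struct { EntryState state; int owner_cpu; } DirEntry;

static void apply_snoop(DirEntry *e, SnoopType t) {
    if (t == SNOOP_EXCLUSIVE) {
        /* Exclusive snoop: clear the entry; the source processor may no
         * longer hold the data at the destination address. */
        e->state = ENTRY_INVALID;
        e->owner_cpu = -1;
    } else {
        /* Shared snoop: downgrade to shared; the former holder and the new
         * requester may both keep a copy of the data. */
        e->state = ENTRY_SHARED;
    }
}

int main(void) {
    DirEntry entry = { .state = ENTRY_EXCLUSIVE, .owner_cpu = 5 };
    apply_snoop(&entry, SNOOP_SHARED);
    printf("after shared snoop: state=%d owner=CPU%d\n",
           (int)entry.state, entry.owner_cpu);
    apply_snoop(&entry, SNOOP_EXCLUSIVE);
    printf("after exclusive snoop: state=%d owner=%d\n",
           (int)entry.state, entry.owner_cpu);
    return 0;
}
```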
It should be noted that, in the processor interconnect node of the present invention, the processors in the same base node may be interconnected in any manner and communicate through the communication modules of the processors; the present invention does not limit this. In addition, which processors' transactions an NC manages can be divided according to different address spaces and can also be adapted as required; the present invention likewise does not limit this.
With the data access method of this embodiment, within the same processor interconnect node, processors in the same base node are directly interconnected and access one another's data; when processors in different base nodes access data, they do not need to cross the link between NCs, which reduces the server latency while guaranteeing the server bandwidth.
Fig. 5 shows a structural block diagram of a server according to another embodiment of the present invention. The server 500 may be a host server with computing capability, a personal computer (PC), a portable computer, a terminal, or the like. The specific embodiments of the present invention do not limit the specific implementation of the computing node.
The server 500 comprises a processor 510, a communications interface 520, a memory 530, a bus 540, and a node controller 550. The processor 510, the communications interface 520, the node controller 550, and the memory 530 communicate with one another through the bus 540.
The memory 530 is used for storing programs. The memory 530 may comprise a high-speed RAM memory and may also comprise a non-volatile memory, for example at least one disk memory. The memory 530 may also be a memory array. The memory 530 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules.
In a possible embodiment, the above program may be program code comprising computer operation instructions. The program may specifically be used for:
Receive request of access and the source processor mark of described source processor, in described request of access, carry destination address, the address that described destination address is described target processor;
According to described destination address, described request of access and Node Controller mark are mail to described target processor;
Receive data response from described target processor, and according to described source processor mark, described data response is mail to described source processor.
In one possible implementation, the node controller comprises a control chip, a local agent LP and a remote agent RP, and the node controller sending the access request and the node controller identifier to the target processor according to the destination address comprises the following (a sketch of this internal path is given after the list):
the control chip receives the source processor identifier and the access request from the source processor, obtains an RP identifier from the access request, and sends the access request and the source processor identifier to the RP indicated by the RP identifier;
the RP obtains the destination address from the access request, decodes the destination address to obtain an LP identifier, and sends the access request to the LP indicated by the LP identifier;
the LP records the RP identifier, obtains the destination address from the access request, and sends the access request and the node controller identifier to the target processor indicated by the destination address, the node controller identifier being the LP identifier;
and the node controller receiving the data response from the target processor and sending the data response to the source processor according to the source processor identifier comprises:
the LP receives the data response from the target processor and sends the data response to the RP indicated by the RP identifier;
the RP sends the data response to the source processor corresponding to the source processor identifier.
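A sketch of this internal path, with assumed class and method names; ControlChip, RemoteAgentRP, LocalAgentLP and the processor objects are illustrative, and the synchronous return values stand in for the messages exchanged on the real links.

```python
class ControlChip:
    def __init__(self, rps):
        self.rps = rps                        # RP identifier -> RP

    def on_access_request(self, request, source_processor_id):
        rp_id = request["rp_id"]              # RP identifier carried in the access request
        # Pass the access request and the source processor identifier to that RP.
        self.rps[rp_id].forward_request(request, source_processor_id)

class RemoteAgentRP:
    def __init__(self, decode_to_lp, lps, processors):
        self.decode_to_lp = decode_to_lp      # decodes a destination address into an LP identifier
        self.lps = lps
        self.processors = processors          # source processor id -> processor

    def forward_request(self, request, source_processor_id):
        lp_id = self.decode_to_lp(request["destination_address"])
        # The LP returns the data response it received from the target processor.
        data_response = self.lps[lp_id].forward_request(request, rp_id=request["rp_id"])
        # The RP sends the data response to the source processor.
        self.processors[source_processor_id].receive_data(data_response)

class LocalAgentLP:
    def __init__(self, lp_id, processors_by_address):
        self.lp_id = lp_id
        self.processors_by_address = processors_by_address
        self.recorded_rp = None               # the LP records the RP identifier

    def forward_request(self, request, rp_id):
        self.recorded_rp = rp_id
        target = self.processors_by_address[request["destination_address"]]
        # The node controller identifier sent to the target is the LP identifier.
        return target.receive_request(request, nc_id=self.lp_id)
```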
In one possible implementation, in the case where the target processor receives a new access request indicating that the data at the destination address needs to be accessed, the program may specifically further be used to perform the following steps (a sketch follows the list):
receive a snoop message and the node controller identifier sent by the target processor, the snoop message comprising the destination address;
send the snoop message to the source processor according to the source processor identifier;
receive a snoop response returned by the source processor, and send the snoop response to the target processor according to the destination address.
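A minimal sketch of these snoop-handling steps, reusing the NodeControllerProgram object from the earlier sketch; SnoopMessage and the method names are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SnoopMessage:
    destination_address: int    # the address whose data the new access request needs

def handle_snoop(nc, snoop: SnoopMessage, nc_id, source_processor_id):
    # nc_id is the node controller identifier received with the snoop message;
    # it is only consumed in the detailed LP/RP path sketched further below.
    # Forward the snoop to the source processor named by its identifier.
    snoop_response = nc.processors[source_processor_id].receive_snoop(snoop)
    # Return the snoop response to the target processor selected by destination address.
    target_id = nc.route_by_address(snoop.destination_address)
    nc.processors[target_id].receive_snoop_response(snoop_response)
```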
In one possible implementation, sending the snoop message to the source processor according to the source processor identifier comprises (a sketch of this snoop path is given after the list):
the LP receives the snoop message from the target processor;
the LP obtains the RP identifier from second directory information and sends the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP;
the RP sends the snoop message to the source processor indicated by the source processor identifier;
and the node controller receiving the snoop response returned by the source processor and sending the snoop response to the target processor according to the destination address comprises:
the control chip receives the snoop response from the source processor, obtains the RP identifier from the snoop response, and sends the snoop response to the RP indicated by the RP identifier;
the RP sends the snoop response to the LP indicated by the node controller identifier;
the LP sends the snoop response to the target processor according to the destination address.
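A sketch of this detailed snoop path, under assumed names; the control chip, RP and LP roles mirror the earlier request sketch, and the second directory information is modelled as a plain dictionary kept in the LP.

```python
class SnoopLP:
    def __init__(self, lp_id, second_directory, rps, processors_by_address):
        self.lp_id = lp_id
        self.second_directory = second_directory      # destination address -> RP identifier
        self.rps = rps
        self.processors_by_address = processors_by_address

    def on_snoop_from_target(self, snoop):
        # Look the RP identifier up in the directory information kept in the LP.
        rp_id = self.second_directory[snoop["destination_address"]]
        self.rps[rp_id].deliver_snoop(snoop, nc_id=self.lp_id)

    def on_snoop_response(self, snoop_response):
        # Send the snoop response to the target processor selected by destination address.
        target = self.processors_by_address[snoop_response["destination_address"]]
        target.receive_snoop_response(snoop_response)

class SnoopRP:
    def __init__(self, rp_id, source_processor, lps):
        self.rp_id = rp_id
        self.source_processor = source_processor      # processor the source identifier points to
        self.lps = lps
        self.nc_id = None

    def deliver_snoop(self, snoop, nc_id):
        self.nc_id = nc_id                            # node controller identifier = LP identifier
        self.source_processor.receive_snoop(snoop)    # the source answers via the control chip

    def on_snoop_response(self, snoop_response):
        # Forward the snoop response to the LP that the node controller identifier points to.
        self.lps[self.nc_id].on_snoop_response(snoop_response)

class SnoopControlChip:
    def __init__(self, rps):
        self.rps = rps

    def on_snoop_response(self, snoop_response):
        # The RP identifier is obtained from the snoop response itself.
        self.rps[snoop_response["rp_id"]].on_snoop_response(snoop_response)
```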
Those of ordinary skill in the art will recognize that the exemplary units and algorithm steps in the embodiments described herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
If the described functions are implemented in the form of computer software and sold or used as an independent product, the technical solution of the present invention, or the part of it that contributes to the prior art, may to that extent be considered to be embodied in the form of a computer software product. The computer software product is typically stored in a computer-readable non-volatile storage medium and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A server, characterized by comprising:
a processor interconnection node;
wherein the processor interconnection node comprises at least one node controller and at least two base nodes, and each base node comprises at least four processors;
the node controller is connected to the base nodes and is configured to manage transactions of the processors according to address spaces of the processors; and
the node controller is further configured to receive an access request and a source processor identifier from a source processor and to send, according to a destination address carried in the access request, the access request and a node controller identifier to a target processor, wherein the source processor and the target processor are located in different base nodes, and the destination address is an address of the target processor.
2. The server according to claim 1, characterized in that the node controller is further configured to receive a data response from the target processor and to send the data response to the source processor according to the source processor identifier.
3. The server according to claim 2, characterized in that the node controller comprises a control chip, a local agent LP and a remote agent RP;
the control chip is configured to receive the source processor identifier and the access request from the source processor, obtain an RP identifier from the access request, and send the access request and the source processor identifier to the RP indicated by the RP identifier;
the RP is configured to obtain the destination address from the access request, decode the destination address to obtain an LP identifier, and send the access request to the LP indicated by the LP identifier; and to receive the data response from the LP and send the data response to the source processor corresponding to the source processor identifier;
the LP is configured to record the RP identifier, obtain the destination address from the access request, and send the access request and the node controller identifier to the target processor indicated by the destination address, the node controller identifier being the LP identifier; and to receive the data response from the target processor and send the data response to the RP indicated by the RP identifier.
4. The server according to any one of claims 1 to 3, characterized in that the node controller is specifically further configured to: in the case where the target processor receives a new access request indicating access to the data at the destination address, receive a snoop message and the node controller identifier sent by the target processor, the snoop message comprising the destination address; send the snoop message to the source processor according to the source processor identifier; and receive a snoop response returned by the source processor and send the snoop response to the target processor according to the destination address.
5. The server according to claim 4, characterized in that the LP is further configured to receive the snoop message from the target processor; obtain the RP identifier from second directory information and send the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP; and send the snoop response to the target processor according to the destination address;
the RP is further configured to send the snoop message to the source processor indicated by the source processor identifier, and to send the snoop response to the LP indicated by the node controller identifier.
6. The server according to any one of claims 1 to 5, characterized in that the processor interconnection node comprises a first base node, a second base node and two node controllers, and the first base node and the second base node each comprise at least four processors.
7. A data access method, characterized in that it is applied to the server according to any one of claims 1 to 6, and when a source processor needs to access a target processor, the data access method comprises:
receiving, by a node controller, an access request and a source processor identifier from the source processor, the access request carrying a destination address, the destination address being an address of the target processor;
sending, by the node controller, the access request and a node controller identifier to the target processor according to the destination address;
receiving, by the node controller, a data response from the target processor, and sending the data response to the source processor according to the source processor identifier.
8. The data access method according to claim 7, characterized in that the node controller comprises a control chip, a local agent LP and a remote agent RP, and the sending, by the node controller, the access request and the node controller identifier to the target processor according to the destination address comprises:
receiving, by the control chip, the source processor identifier and the access request from the source processor, obtaining an RP identifier from the access request, and sending the access request and the source processor identifier to the RP indicated by the RP identifier;
obtaining, by the RP, the destination address from the access request, decoding the destination address to obtain an LP identifier, and sending the access request to the LP indicated by the LP identifier;
recording, by the LP, the RP identifier, obtaining the destination address from the access request, and sending the access request and the node controller identifier to the target processor indicated by the destination address, the node controller identifier being the LP identifier;
and the receiving, by the node controller, the data response from the target processor and sending the data response to the source processor according to the source processor identifier comprises:
receiving, by the LP, the data response from the target processor, and sending the data response to the RP indicated by the RP identifier;
sending, by the RP, the data response to the source processor corresponding to the source processor identifier.
9. The data access method according to claim 7 or 8, characterized in that, in the case where the target processor receives a new access request indicating that the data at the destination address needs to be accessed, the data access method further comprises:
receiving, by the node controller, a snoop message and the node controller identifier sent by the target processor, the snoop message comprising the destination address;
sending, by the node controller, the snoop message to the source processor according to the source processor identifier;
receiving, by the node controller, a snoop response returned by the source processor, and sending the snoop response to the target processor according to the destination address.
10. The data access method according to claim 9, characterized in that the sending, by the node controller, the snoop message to the source processor according to the source processor identifier comprises:
receiving, by the LP, the snoop message from the target processor;
obtaining, by the LP, the RP identifier from second directory information, and sending the snoop message to the RP indicated by the RP identifier, the second directory information being the directory information stored in the LP;
sending, by the RP, the snoop message to the source processor indicated by the source processor identifier;
and the receiving, by the node controller, the snoop response returned by the source processor and sending the snoop response to the target processor according to the destination address comprises:
receiving, by the control chip, the snoop response from the source processor, obtaining the RP identifier from the snoop response, and sending the snoop response to the RP indicated by the RP identifier;
sending, by the RP, the snoop response to the LP indicated by the node controller identifier;
sending, by the LP, the snoop response to the target processor according to the destination address.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410091090.XA CN103870435B (en) | 2014-03-12 | 2014-03-12 | server and data access method |
PCT/CN2015/070453 WO2015135385A1 (en) | 2014-03-12 | 2015-01-09 | Server and data access method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410091090.XA CN103870435B (en) | 2014-03-12 | 2014-03-12 | server and data access method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103870435A true CN103870435A (en) | 2014-06-18 |
CN103870435B CN103870435B (en) | 2017-01-18 |
Family
ID=50908981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410091090.XA Active CN103870435B (en) | 2014-03-12 | 2014-03-12 | server and data access method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103870435B (en) |
WO (1) | WO2015135385A1 (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103870435B (en) * | 2014-03-12 | 2017-01-18 | 华为技术有限公司 | server and data access method |
- 2014-03-12 CN CN201410091090.XA patent/CN103870435B/en active Active
- 2015-01-09 WO PCT/CN2015/070453 patent/WO2015135385A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6598120B1 (en) * | 2002-03-08 | 2003-07-22 | International Business Machines Corporation | Assignment of building block collector agent to receive acknowledgments from other building block agents |
CN101908036A (en) * | 2010-07-22 | 2010-12-08 | 中国科学院计算技术研究所 | A High Density Multiprocessor System and Its Node Controller |
CN102232218A (en) * | 2011-06-24 | 2011-11-02 | 华为技术有限公司 | Computer subsystem and computer system |
CN102439571A (en) * | 2011-10-27 | 2012-05-02 | 华为技术有限公司 | Method for preventing node controller from deadly embrace and node controller |
CN103294612A (en) * | 2013-03-22 | 2013-09-11 | 浪潮电子信息产业股份有限公司 | Method for constructing Share-F state in local domain of multi-level cache consistency domain system |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015135385A1 (en) * | 2014-03-12 | 2015-09-17 | 华为技术有限公司 | Server and data access method |
CN105335217B (en) * | 2014-06-26 | 2018-11-16 | 华为技术有限公司 | A kind of server silence method and system |
CN105335217A (en) * | 2014-06-26 | 2016-02-17 | 华为技术有限公司 | Server quiescing method and system |
CN104199740B (en) * | 2014-08-28 | 2019-03-01 | 浪潮(北京)电子信息产业有限公司 | The no tight coupling multinode multicomputer system and method for shared system address space |
CN104199740A (en) * | 2014-08-28 | 2014-12-10 | 浪潮(北京)电子信息产业有限公司 | Non-tight-coupling multi-node multi-processor system and method based on system address space sharing |
CN104793974A (en) * | 2015-04-28 | 2015-07-22 | 浪潮电子信息产业股份有限公司 | Method for starting system and computer system |
CN104794099A (en) * | 2015-04-28 | 2015-07-22 | 浪潮电子信息产业股份有限公司 | Resource fusion method and system and far-end agent |
CN105045729B (en) * | 2015-09-08 | 2018-11-23 | 浪潮(北京)电子信息产业有限公司 | A kind of buffer consistency processing method and system of the remote agent with catalogue |
CN105045729A (en) * | 2015-09-08 | 2015-11-11 | 浪潮(北京)电子信息产业有限公司 | Method and system for conducting consistency processing on caches with catalogues of far-end agent |
CN109923846B (en) * | 2016-11-14 | 2020-12-15 | 华为技术有限公司 | Method and device for determining hotspot address |
CN109923846A (en) * | 2016-11-14 | 2019-06-21 | 华为技术有限公司 | Determine the method and its equipment of hotspot address |
CN107241282A (en) * | 2017-07-24 | 2017-10-10 | 郑州云海信息技术有限公司 | A kind of method and system for reducing protocol processes pipeline stall |
CN107241282B (en) * | 2017-07-24 | 2021-04-27 | 郑州云海信息技术有限公司 | A method and system for reducing protocol processing pipeline stalls |
CN107451075A (en) * | 2017-09-22 | 2017-12-08 | 算丰科技(北京)有限公司 | Data processing chip and system, data storage forwarding and reading and processing method |
CN107451075B (en) * | 2017-09-22 | 2023-06-20 | 北京算能科技有限公司 | Data processing chip and system, data storage and forwarding and reading processing method |
CN110098945A (en) * | 2018-01-30 | 2019-08-06 | 华为技术有限公司 | Data processing method and device applied to node system |
CN110098945B (en) * | 2018-01-30 | 2021-10-19 | 华为技术有限公司 | Data processing method and device applied to node system |
WO2019149031A1 (en) * | 2018-01-30 | 2019-08-08 | 华为技术有限公司 | Data processing method and apparatus applied to node system |
CN112306913A (en) * | 2019-07-30 | 2021-02-02 | 华为技术有限公司 | Method, device and system for managing endpoint equipment |
CN112306913B (en) * | 2019-07-30 | 2023-09-22 | 华为技术有限公司 | Management method, device and system of endpoint equipment |
CN111241024A (en) * | 2020-02-20 | 2020-06-05 | 山东华芯半导体有限公司 | Cascade method of full-interconnection AXI bus |
WO2024160156A1 (en) * | 2023-01-31 | 2024-08-08 | 华为技术有限公司 | Decoding method, first die, and second die |
Also Published As
Publication number | Publication date |
---|---|
WO2015135385A1 (en) | 2015-09-17 |
CN103870435B (en) | 2017-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103870435A (en) | Server and data access method | |
JP6202756B2 (en) | Assisted coherent shared memory | |
US9952975B2 (en) | Memory network to route memory traffic and I/O traffic | |
KR102204751B1 (en) | Data coherency model and protocol at cluster level | |
CN112540941B (en) | Data forwarding chip and server | |
TW457437B (en) | Interconnected processing nodes configurable as at least one non-uniform memory access (NUMA) data processing system | |
US10248607B1 (en) | Dynamic interface port assignment for communication transaction | |
US9015440B2 (en) | Autonomous memory subsystem architecture | |
JP6514329B2 (en) | Memory access method, switch, and multiprocessor system | |
US10255305B2 (en) | Technologies for object-based data consistency in distributed architectures | |
CN109196829A (en) | Remote memory operation | |
CN110119304B (en) | Interrupt processing method and device and server | |
KR20170124995A (en) | Autonomous memory architecture | |
US12189545B2 (en) | System, apparatus and methods for handling consistent memory transactions according to a CXL protocol | |
CN107209725A (en) | Method, processor and the computer of processing write requests | |
US12079506B2 (en) | Memory expander, host device using memory expander, and operation method of sever system including memory expander | |
US9910813B1 (en) | Single function using multiple ports | |
CN106951390A (en) | It is a kind of to reduce the NUMA system construction method of cross-node Memory accessing delay | |
WO2016049807A1 (en) | Cache directory processing method and directory controller of multi-core processor system | |
US10915470B2 (en) | Memory system | |
CN117827706A (en) | Data processing method, data processing device, electronic device and storage medium | |
EP4509994A1 (en) | Computer communication device with inter-device data copying | |
US11281612B2 (en) | Switch-based inter-device notational data movement system | |
KR20240148519A (en) | Apparatus and method for supporting data input/output operation based on a data attribute in a shared memory device or a memory expander | |
TW201616364A (en) | Data management system, computer, data management method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |