US9606879B2 - Multi-partition networking device and method therefor - Google Patents
- Publication number
- US9606879B2 (Application US14/499,385)
- Authority
- US
- United States
- Prior art keywords
- partition
- networking device
- hardware resources
- primary
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F11/00—Error detection; Error correction; Monitoring › G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0709—Error or fault processing not based on redundancy, the processing taking place in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
- G06F11/0757—Error or fault detection not based on redundancy, by exceeding a time limit, i.e. time-out, e.g. watchdogs
- G06F11/2023—Failover techniques
- G06F11/2025—Failover techniques using centralised failover control functionality
- G06F11/2038—Failover where processing functionality is redundant, with a single idle spare processing component
- G06F11/2041—Failover where processing functionality is redundant, with more than one idle spare processing component
- G06F11/2048—Failover where the redundant components share neither address space nor persistent storage
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/81—Threshold
- G06F2201/835—Timestamp
Definitions
- This invention relates to a multi-partition networking device and a method of managing a multi-partition networking system.
- Multi-partition systems for network applications are often implemented through the use of networking System-on-Chip (SoC) devices composed of multi-core clusters and a networking sub-module, with multi-partition software running on the multi-core clusters.
- The high-availability property is typically achieved for a particular (primary) partition through the use of a secondary partition which, during normal operation, is put into a standby state.
- Upon detection of a failure condition within the primary partition, the secondary partition may be brought out of its standby state, and operation switched from the failed primary partition to the secondary partition.
- Detection of a ‘failure condition’ is usually implemented by a watchdog mechanism, whereby a failure condition is deemed to have occurred when a watchdog timer expires as a result of the partition failing to reset the watchdog timer.
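The watchdog mechanism described above can be sketched as follows. This is a minimal illustration only; the class and method names are assumptions, not taken from the patent.

```python
class WatchdogTimer:
    """Illustrative watchdog: the monitored partition must call reset()
    before `timeout_ticks` ticks elapse, otherwise a failure condition
    is deemed to have occurred."""

    def __init__(self, timeout_ticks):
        self.timeout_ticks = timeout_ticks
        self.remaining = timeout_ticks
        self.failed = False

    def reset(self):
        # Called periodically by a (seemingly healthy) partition.
        self.remaining = self.timeout_ticks

    def tick(self):
        # Called once per timer tick; expiry signals a failure condition.
        if self.remaining > 0:
            self.remaining -= 1
        if self.remaining == 0:
            self.failed = True
        return self.failed
```

As long as the partition keeps calling `reset()` within the timeout, `failed` stays clear; once the partition hangs and stops resetting, the timer runs down and the failure condition is raised.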
- FIG. 1 schematically illustrates operating states of a conventional multi-partition networking device 100 .
- The multi-partition networking device 100 comprises a first (primary) partition 110 running on a first set of hardware resources, illustrated generally at 115, and a second (secondary) partition 120 running on a second set of hardware resources, illustrated generally at 125.
- The multi-partition networking device 100 is arranged to operate in a first, normal operating state 102, whereby the first set of hardware resources 115 is in an active state (i.e. powered up and functional) and the first partition 110 is arranged to process inbound network traffic, for example received via network sub-module 130.
- In this first, normal operating state, the second set of hardware resources 125 is in a standby state (e.g. powered down to minimise power consumption).
- Upon the detection of a failure condition 140, the multi-partition networking device 100 is arranged to transition to a second, failover operating state 104, whereby the second set of hardware resources 125 is transitioned from a standby state to an active state (e.g. powered up and brought into an operational condition), and processing of inbound network traffic is transferred to the second partition 120.
- The first set of hardware resources 115 may then be transitioned into a standby state, for example powered down to minimise the power consumption of the multi-partition networking device 100.
- The multi-partition networking device 100 may be transitioned back to the first, normal operating state upon a resume condition 145 being detected.
- A requirement for the high-availability system is to prevent packet loss in the case of a partition failure, and specifically to ensure that the switch from the primary partition to the secondary partition does not incur any loss of networking traffic.
- During the switch, received network traffic is not being served, and received data packets are required to be stored within a buffer pool (e.g. within the networking sub-module 130). This period of time when network traffic is not being served includes, among other things, the time taken to bring the secondary partition out of standby and into an operational condition.
- The time taken to bring the secondary partition out of standby and into an operational condition may be minimised by maintaining the secondary partition in a fully powered-up state; however, this significantly increases the power consumption of the overall system.
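As a rough illustration of why the unserved period matters for buffer sizing, the following sketch estimates the buffering needed to absorb traffic while the primary has failed and the secondary is not yet serving. The function name and the example figures are assumptions for illustration only; they do not come from the patent.

```python
def required_buffer(line_rate_bps, unserved_time_s, avg_packet_bytes):
    """Estimate (bytes, packets) of buffering needed to absorb received
    traffic during the period when no partition is serving it.
    All inputs are illustrative assumptions."""
    bytes_during_gap = (line_rate_bps / 8) * unserved_time_s
    packets = bytes_during_gap / avg_packet_bytes
    return int(bytes_during_gap), int(packets)
```

For example, at a 1 Gbit/s line rate, a 10 ms unserved period and 1250-byte packets, roughly 1.25 MB (1000 packets) of buffering would be needed; shortening the unserved period, as the device described below does, shrinks this requirement proportionally.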
- the present invention provides a multi-partition networking device, a method of managing a multi-partition networking system and a non-transitory computer program product as described in the accompanying claims.
- FIG. 1 schematically illustrates operating states of a conventional multi-partition networking device.
- FIG. 2 illustrates a simplified block diagram of a multi-partition networking device.
- FIG. 3 schematically illustrates the multi-partition networking device operating in different operating states.
- FIG. 4 schematically illustrates a comparison of timelines for primary partition failure responses between the conventional multi-partition networking device of FIG. 1 and the multi-partition networking device of FIGS. 2 and 3 .
- FIG. 5 illustrates a simplified state diagram for the multi-partition networking device of FIGS. 2 and 3 .
- FIG. 6 illustrates a simplified block diagram of an example of the multi-partition networking device.
- FIG. 7 schematically illustrates a buffer pool for a primary partition.
- FIGS. 8 to 11 illustrate simplified flowcharts of an example of a method of managing a multi-partition networking system.
- the multi-partition networking device 200 is implemented within an integrated circuit device 205 comprising at least one die within a single integrated circuit package.
- the multi-partition networking device 200 may comprise a networking System-on-Chip (SoC) device composed of multi-core clusters and one or more networking sub-module, with multi-partition software running on the multi-core clusters.
- The multi-partition networking device 200 comprises a first, primary partition 210 running on a first set of hardware resources, illustrated generally at 215, and a second, secondary partition 220 running on a second set of hardware resources, illustrated generally at 225.
- Such hardware resources 215 , 225 may comprise, for example, processing cores, accelerator hardware components, memory components, etc.
- the multi-partition networking device 200 further comprises at least one network sub-module, such as the network interface 230 illustrated in FIG. 2 .
- the multi-partition networking device 200 further comprises a management module 250 .
- such a management module 250 may comprise one or more processor(s) executing program code operable for managing the network interface resources and procedures that are shared among the various partitions.
- program code may also be operable for managing other aspects of the multi-partition networking device 200 , such as managing the transition between different operating states of the multi-partition networking device 200 .
- the program code may be stored within a non-transitory computer program product, such as the memory element illustrated generally at 260 .
- the multi-partition networking device 200 is arranged to operate in a first, normal operating state 302 , whereby the first set of hardware resources 215 are in an active state and the first partition 210 is arranged to process network traffic, for example received via network sub-module 230 .
- first, normal operating state the second set of hardware resources 225 are in a standby state.
- Upon detection of a suspicious condition 340 within the primary partition 210, such as described in greater detail below, the multi-partition networking device 200 is arranged to transition to a second, suspicious operating state 304, whereby the second set of hardware resources 225 is transitioned from a standby state to an active state. In this suspicious state 304, processing of network traffic is still performed by the first partition 210.
- the multi-partition networking device 200 is arranged to transition to a third, failover operating state 306 , whereby processing of network traffic is transferred to the second partition 220 .
- FIG. 4 schematically illustrates a comparison of timelines for primary partition failure responses between the conventional multi-partition networking device 100 of FIG. 1 and the multi-partition networking device 200 of FIGS. 2 and 3 .
- a first timeline 410 illustrates the sequence of events within the conventional multi-partition networking device 100 of FIG. 1 .
- the timeline 410 starts at 412 , with the primary partition 110 performing a watchdog reset. Such a watchdog reset by the primary partition 110 indicates that at this point in time the primary partition 110 is (seemingly) operating correctly.
- a failure within the primary partition 110 occurs at 414 . The failure is not detected until the watchdog timer expires at 416 , when the fact that the primary partition 110 failed to reset the watchdog indicates the occurrence of a failure within the primary partition 110 .
- the expiry of watchdog timer at 416 constitutes a failure condition 140 , triggering the transition of the conventional multi-partition networking device 100 from its normal operating state 102 to its failover operating state 104 , whereby the second set of hardware resources 125 are transitioned from a standby state to an active state (e.g. powered up and brought into an operational condition), and processing of inbound network traffic is transferred to the second partition 120 .
- A further delay is then incurred while the secondary partition 120 is brought out of standby and into an operational condition.
- a second timeline 420 illustrates the sequence of events within the multi-partition networking device 200 of FIGS. 2 and 3 .
- the timeline 420 starts at 422 , with the primary partition 210 performing a watchdog reset.
- a failure within the primary partition 210 occurs at 424 .
- the failure is not detected until the watchdog timer expires at 426 , when the fact that the primary partition 210 failed to reset the watchdog indicates the occurrence of a failure within the primary partition 210 .
- However, a suspicious condition (such as described in greater detail below) is detected at 430, prior to the expiry of the watchdog timer at 426, triggering the transition of the multi-partition networking device 200 from its normal operating state 302 to its suspicious operating state 304, whereby the second set of hardware resources 225 is transitioned from a standby state to an active state (e.g. powered up and brought into an operational condition).
- the secondary partition 220 is ready and active at 432 , and able to have the processing of network traffic transferred thereto, prior to the expiry of the watchdog timer at 426 .
- the secondary partition 220 is held in this active state, whilst the responsibility for processing network traffic remains with the primary partition 210 .
- the subsequent expiry of watchdog timer at 426 constitutes a failure condition 340 , triggering the transition of the multi-partition networking device 200 from its suspicious operating state 304 to its failover operating state 306 , whereby the processing of inbound network traffic is transferred to the second partition 220 .
- the transfer of the processing of inbound traffic thereto may be performed substantially immediately, at 428 .
- the amount of time between a failure occurring within the primary partition 110 , 210 and the processing of inbound network traffic being transferred to the second partition 120 , 220 is significantly reduced within the multi-partition networking device 200 of FIGS. 2 and 3 , as indicated at 440 in FIG. 4 .
- FIG. 5 illustrates a simplified state diagram for the multi-partition networking device 200 of FIGS. 2 and 3 .
- The multi-partition networking device 200 comprises three operating states:
- In the normal operating state 302, the first set of hardware resources 215 is in an active state and the primary partition 210 is arranged to process network traffic, whilst the second set of hardware resources 225 is in a standby state.
- In the suspicious operating state 304, the first and second sets of hardware resources 215, 225 are in an active state and the primary partition 210 is arranged to process network traffic.
- In the failover operating state 306, at least the second set of hardware resources 225 is in an active state and the secondary partition 220 is arranged to process network traffic.
- The time it takes for the secondary partition to become ready to process network traffic is not limited to powering up from a low-power state.
- The secondary partition needs to retrieve the context of the system, i.e. "learn" the state of the primary partition and obtain information regarding which tasks are open, which resources are in use, etc. Retrieving the context of the system is needed even if the secondary partition and its relevant core are not in a low-power (powered-down) state.
- Accordingly, the ‘standby state’ that the secondary partition 220 is in during the normal operating state 302 of the multi-partition networking device 200 is not to be limited to low-power states (e.g. a powered-down state).
- The multi-partition networking device 200 is arranged to transition from the normal operating state 302 upon either a suspicious condition 340 being detected or a failure condition 345 being detected. Upon a suspicious condition 340 being detected, the multi-partition networking device 200 is arranged to transition from the normal operating state 302 to the suspicious operating state 304. Conversely, upon a failure condition 345 being detected, the multi-partition networking device 200 is arranged to transition from the normal operating state 302 to the failover operating state 306.
- the multi-partition networking device 200 is arranged to transition from the suspicious operating state 304 upon either a failure condition 345 being detected or a resume condition 350 being detected. Upon a failure condition 345 being detected, the multi-partition networking device 200 is arranged to transition from the suspicious operating state 304 to the failover operating state 306 . Conversely, upon a resume condition 350 being detected, the multi-partition networking device 200 is arranged to transition back to the normal operating state 302 .
- the multi-partition networking device 200 is arranged to transition from the failover operating state 306 to the normal operating state 302 upon a resume condition 350 being detected.
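The three operating states and the transitions just described can be summarised as a small state machine. The sketch below uses hypothetical names and a table-driven style chosen for brevity; it is not the patent's implementation.

```python
# States and transitions per the description of FIG. 5:
# normal -> suspicious (suspicious condition 340)
# normal -> failover (failure condition 345)
# suspicious -> failover (failure condition 345)
# suspicious -> normal (resume condition 350)
# failover -> normal (resume condition 350)
NORMAL, SUSPICIOUS, FAILOVER = "normal", "suspicious", "failover"

TRANSITIONS = {
    (NORMAL, "suspicious_condition"): SUSPICIOUS,
    (NORMAL, "failure_condition"): FAILOVER,
    (SUSPICIOUS, "failure_condition"): FAILOVER,
    (SUSPICIOUS, "resume_condition"): NORMAL,
    (FAILOVER, "resume_condition"): NORMAL,
}

class DeviceStateMachine:
    def __init__(self):
        self.state = NORMAL

    def on_event(self, event):
        # Events with no defined transition from the current state are ignored.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Note that a failure condition detected directly in the normal state skips the suspicious state, matching the direct normal-to-failover transition in FIG. 5.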
- a suspicious condition 340 may comprise the occurrence of any event or condition capable of indicating the possibility of a failure having occurred within the primary partition 210 in advance of a failure condition 345 being detected.
- a suspicious condition 340 may comprise one or more of:
- a failure condition 345 may comprise the occurrence of any event or condition capable of indicating that a failure has occurred within the primary partition 210 .
- a failure condition 345 may comprise one or more of:
- a resume condition 350 may comprise, say, the ceasing of the suspicious condition and/or the failure condition that caused the multi-partition networking device 200 to transition to a suspicious or failover operating state.
- FIG. 6 illustrates a simplified block diagram of an example of the multi-partition networking device 200 .
- the multi-partition networking device 200 comprises the management module 250 , which is arranged to detect the occurrence of suspicious conditions 340 within the primary partition 210 , and to cause the multi-partition networking device 200 to transition to the suspicious state 304 upon detection of a suspicious condition 340 .
- the management module 250 is further arranged to detect the occurrence of failure conditions 345 within the primary partition 210 , and to cause the multi-partition networking device 200 to transition to the failover state 306 upon detection of a failure condition 345 .
- the management module 250 may further be arranged to cause the multi-partition networking device 200 to transition back to the normal state 302 upon detection of a resume condition 350 .
- the management module 250 comprises one or more hardware resources, for example comprising one or more processors, independent from the first and second partitions 210 , 220 within the multi-partition networking device 200 .
- An example of such a management module 250 is described in the Applicant's co-pending U.S. patent application Ser. No. 14/224,391 (the network processor 201 in said co-pending application), the subject-matter of which relating to said network processor 201 being incorporated herein by reference with respect to an example embodiment of the management module 250 .
- the management module 250 comprises an independent entity within the multi-partition networking device 200 responsible for the management of network interface resources provided by the network sub-module 230 , and procedures that are shared among the partitions 210 , 220 .
- the management module 250 is able to monitor the state of, for example, buffer pools, queues, etc. and to detect various conditions occurring within the multi-partition networking device 200 independently from the partitions 210 , 220 .
- The management module 250 comprises a centralized entity "above" all partitions, and can therefore support, among other things, a finer definition of conditions such as a suspicious condition 340.
- the fact that the management module is responsible for the management of network interface resources, such as data path acceleration circuitry etc. allows the management module 250 to monitor the state of buffer pools etc. used by a particular partition.
- the management module 250 comprises a partition management component 610 arranged to perform partition management tasks, and in particular arranged to detect the occurrence of a suspicious condition within (at least) the primary partition 210 .
- Such a partition management component 610 may comprise, for example, a process or the like.
- the buffer pool 620 is managed by the network sub-module 230 .
- FIG. 7 schematically illustrates the buffer pool 620 for the primary partition 210 .
- a suspicion threshold 710 may be set/configured for the buffer pool 620 to indicate late processing of received data packets by the primary partition 210 . If the occupancy level of the buffer pool 620 exceeds the suspicion threshold 710 , the network sub-module 230 may be arranged to output to the management module 250 a buffer pool threshold exceeded indication, for example by setting a suspicion threshold signal 720 .
- the partition management component 610 of the management module 250 may be arranged to periodically check the suspicion threshold signal 720 to determine whether a suspicious condition within the buffer pool 620 of the primary partition 210 has been detected. If the suspicion threshold signal 720 is set during such a check by the partition management component 610 , the partition management component 610 may then cause the multi-partition networking device 200 to transition from its normal operating state 302 to its suspicious operating state 304 , for example by setting a suspicious state activation signal 630 provided to the secondary partition 220 . The secondary partition 220 may then be arranged, upon the suspicious state activation signal 630 being set, to transition the second set of hardware resources 225 from a standby state to an active state. For example, such a transition from a standby state to an active state may comprise powering up the second set of hardware resources 225 , and initialising a partition preparation component 625 of the secondary partition 220 to retrieve and load the current context for the primary partition 210 .
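The suspicion-threshold behaviour of the buffer pool 620 might be sketched as follows. The class and attribute names are illustrative assumptions; the patent only specifies that occupancy above a configured threshold 710 sets a suspicion threshold signal 720.

```python
class BufferPool:
    """Sketch of a buffer pool for received network traffic, with a
    configurable suspicion threshold indicating late processing of
    received data packets by the owning partition."""

    def __init__(self, capacity, suspicion_threshold):
        self.capacity = capacity
        self.suspicion_threshold = suspicion_threshold  # element 710
        self.occupancy = 0

    def enqueue(self, n_packets=1):
        # Network sub-module stores received packets.
        self.occupancy = min(self.capacity, self.occupancy + n_packets)

    def dequeue(self, n_packets=1):
        # Owning partition consumes packets.
        self.occupancy = max(0, self.occupancy - n_packets)

    @property
    def suspicion_signal(self):
        # Corresponds to signal 720: set while occupancy exceeds the
        # suspicion threshold, i.e. the partition is falling behind.
        return self.occupancy > self.suspicion_threshold
```

In this sketch the signal is level-triggered: it clears again once the partition catches up and drains the pool below the threshold, which is consistent with the resume behaviour described for the suspicious state.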
- the partition management component 610 may further be arranged to subsequently detect the occurrence of a failure condition within the primary partition 210 .
- The management module 250 may comprise a watchdog component 640 comprising at least one watchdog timer (not shown) for the primary partition 210. Upon expiry of the watchdog timer for the primary partition 210, the watchdog component 640 may set a failure flag (not shown) for the primary partition 210.
- a heartbeat component 615 of the primary partition 210 is arranged to periodically reset the watchdog timer for the primary partition 210 . In this manner, as long as the heartbeat component 615 continues to reset the watchdog timer, the failure flag will remain unset.
- the partition management component 610 may then cause the multi-partition networking device 200 to transition from a suspicious operating state 304 to a failover operating state 306 , for example by setting a failover state activation signal 650 provided to the secondary partition 220 .
- the secondary partition 220 may then be arranged, upon the failover state activation signal 650 being set, to take over responsibility for processing network traffic from the primary partition.
- the partition monitoring component 610 of the management module 250 may also be arranged to instruct a network interface resource management component 660 of the management module 250 to reallocate network resources from the primary partition 210 to the secondary partition 220 .
- the heartbeat component 615 of the primary partition may also be arranged to reset a suspicious condition flag when resetting the watchdog timer to cause the partition monitoring component of the management module 250 to transition the multi-partition networking device 200 back to a normal operating state 302 if operating in a suspicious operating state 304 .
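The heartbeat behaviour just described (each beat resetting the watchdog timer and also clearing the suspicious condition flag, so that a device in the suspicious state can resume normal operation) might be sketched as below. The dict-based representation of the watchdog and flag state is an assumption for illustration.

```python
def heartbeat(watchdog_state, flags):
    """One beat from the primary partition's heartbeat component:
    reload the watchdog timer and clear any pending suspicion flag.
    `watchdog_state` and `flags` are illustrative structures, not
    the patent's data layout."""
    watchdog_state["remaining"] = watchdog_state["timeout"]
    flags["suspicious"] = False  # allows resume to the normal state
    return watchdog_state, flags
```

A management-module poll that observes the cleared flag would then transition the device from the suspicious operating state 304 back to the normal operating state 302.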
- Referring now to FIGS. 8 to 11, there are illustrated simplified flowcharts of an example of a method of managing a multi-partition networking system, such as may be implemented within the multi-partition networking device 200 hereinbefore described with reference to FIGS. 2 to 7.
- Referring first to FIG. 8, there is illustrated a simplified flowchart 800 of an example of a first part of the method of managing a multi-partition networking system, such as may be implemented within the partition management component 610 of the management module 250 of FIGS. 2 to 7.
- This part of the method starts at 805 where one or more parameters are checked to determine whether a suspicious condition has occurred.
- a check is performed to determine whether an occupancy level for a buffer pool of a primary partition for received network traffic exceeds a threshold, for example determining whether the suspicion threshold signal 720 is set.
- If the threshold is not exceeded, the method moves on to 850, where this part of the method exits to, say, a task scheduler (not shown) for the management module 250, where a next pending task is scheduled to be performed.
- If the threshold is exceeded, the method moves on to 815 where, in the illustrated example, a suspicion flag is set.
- Activation of the secondary partition 220 is then triggered, for example by way of generating the suspicious state activation signal 630 provided to the secondary partition 220 to cause the second set of hardware resources 225 to transition from a standby state to an active state.
- in this manner, the multi-partition networking device 200 is transitioned from a first, normal operating state 302 to a second, suspicious operating state 304 .
- this part of the method then waits until a failure condition within the primary partition 210 is detected, at 825 , or the suspicious condition is cleared, at 830 . If a failure condition is detected, at 825 , the method moves on to 840 where network resources are reallocated from the primary partition 210 to the secondary partition 220 , and an activation flag is set at 845 , for example setting the failover state activation signal 650 provided to the secondary partition 220 . The secondary partition 220 may then, upon the failover state activation signal 650 being set, take over responsibility for processing network traffic from the primary partition.
- the method then moves on to 850 where this part of the method exits to, say, a task scheduler (not shown) for the management module 250 where a next pending task is scheduled to be performed.
- in this manner, the multi-partition networking device 200 is transitioned from the second, suspicious operating state 304 to a third, failover operating state 306 .
- conversely, if the suspicious condition is reset at 830 , the suspicion flag is cleared at 835 .
- the method then moves on to 850 where this part of the method exits to, say, a task scheduler (not shown) for the management module 250 where a next pending task is scheduled to be performed.
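The decision flow of flowchart 800 may be sketched as a simple state machine. The following Python model is illustrative only: the class, method, and flag names are hypothetical and not part of the described implementation, and the flowchart step numbers appear only as comments.

```python
# Illustrative sketch of the FIG. 8 partition-management task:
# detect a suspicious condition, pre-emptively activate the secondary
# partition, then either fail over or return to the normal state.

NORMAL, SUSPICIOUS, FAILOVER = 302, 304, 306  # operating states

class PartitionManager:
    def __init__(self):
        self.state = NORMAL
        self.suspicion_flag = False
        self.secondary_active = False

    def run_task(self, buffer_occupancy, threshold, failure_detected,
                 suspicious_cleared=False):
        """One scheduled pass of the management task (flowchart 800)."""
        if self.state == NORMAL:
            # 805: check whether a suspicious condition has occurred,
            # e.g. buffer pool occupancy exceeding its threshold.
            if buffer_occupancy > threshold:
                self.suspicion_flag = True    # 815: set suspicion flag
                self.secondary_active = True  # trigger secondary activation
                self.state = SUSPICIOUS       # transition 302 -> 304
        elif self.state == SUSPICIOUS:
            if failure_detected:                     # 825: failure condition
                self.reallocate_network_resources()  # 840
                self.state = FAILOVER                # 845: transition 304 -> 306
            elif suspicious_cleared:                 # 830: condition cleared
                self.suspicion_flag = False          # 835
                self.state = NORMAL
        # 850: exit to the task scheduler

    def reallocate_network_resources(self):
        pass  # placeholder for platform-specific reallocation
```

For example, an occupancy of 90 against a threshold of 75 moves the device into the suspicious operating state, and a subsequent failure indication moves it into the failover operating state.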
- Referring now to FIG. 9 , there is illustrated a simplified flowchart 900 of an example of a further part of the method of managing a multi-partition networking system, such as may be implemented within the secondary partition 220 .
- This part of the method starts at 910 with the receipt of an indication that a suspicious condition has been detected, and that the multi-partition networking device is transitioning to a suspicious operating state 304 .
- such an indication may comprise the suspicious state activation signal 630 being set.
- this part of the method then moves on to 920 where a power up sequence for the hardware resources 225 within the secondary partition 220 is initiated to bring the secondary partition 220 out of a low power/deep sleep state.
- a current context for the primary partition 210 is retrieved and loaded into the secondary partition 220 .
- the hardware resources 225 of the secondary partition are transitioned to an active state comprising, in the illustrated example, the hardware resources being powered up and a current context for the primary partition being loaded into the secondary partition.
- this part of the method then waits until a failover state activation signal is received at 940 , such as the setting of the failover state activation signal 650 , or the suspicious condition is cleared at 950 , for example as indicated by the clearing of the suspicious state activation signal 630 . If a failover state activation signal is received at 940 , the method moves on to 960 where, in the illustrated example, the activation flag is reset to indicate successful activation of the secondary partition 220 , and the secondary partition 220 then undertakes responsibility for processing network traffic, at 970 .
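The secondary-partition behaviour of flowchart 900 may be sketched as follows. This is an illustrative model only: the class and attribute names are hypothetical, and the flowchart step numbers appear only as comments.

```python
# Illustrative sketch of the FIG. 9 secondary-partition task: on a
# suspicious-state indication, power up and load the primary's context,
# then wait for either a failover signal or the condition clearing.

class SecondaryPartition:
    def __init__(self):
        self.powered = False            # standby / deep-sleep state
        self.context = None
        self.processing_traffic = False

    def on_suspicious_state(self, primary_context):
        self.powered = True                   # 920: leave low-power state
        self.context = dict(primary_context)  # 930: load primary's context

    def on_failover_signal(self):
        # 960/970: acknowledge activation and take over traffic processing
        self.processing_traffic = True

    def on_suspicious_cleared(self):
        self.powered = False   # suspicious condition cleared: back to standby
        self.context = None
```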
- Referring now to FIG. 10 , there is illustrated a simplified flowchart 1000 of an example of a further part of the method of managing a multi-partition networking system, and in the illustrated example comprises a system heartbeat task running within the primary partition 210 , such as may be implemented by way of the heartbeat component 615 .
- This part of the method starts at 1010 with the system heartbeat task being scheduled, and moves on to 1020 where a watchdog timer for the primary partition 210 is reset.
- a suspicion flag is also reset at 1030 .
- resetting the suspicion flag in this manner may act as a resume condition, in response to which the multi-partition networking system may be transitioned back to a normal operating state.
- the method then moves on to 1040 where this part of the method exits to, say, a task scheduler (not shown) for the primary partition 210 where a next pending task is scheduled to be performed.
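The heartbeat task of flowchart 1000 may be sketched in a few lines. The function and dictionary-key names below are hypothetical; only the behaviour (resetting the watchdog and clearing the suspicion flag) comes from the description above.

```python
# Illustrative sketch of the FIG. 10 heartbeat task running in the
# primary partition: each time it is scheduled it resets the watchdog
# timer and clears the suspicion flag, acting as a resume condition.

def heartbeat_task(watchdog, flags):
    watchdog["counter"] = 0     # 1020: reset watchdog timer
    flags["suspicion"] = False  # 1030: reset suspicion flag
    # 1040: exit to the primary partition's task scheduler
```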
- Referring now to FIG. 11 , there is illustrated a simplified flowchart 1100 of an example of a further part of the method of managing a multi-partition networking system, such as may be implemented within the watchdog component 640 of the management module 250 .
- This part of the method starts at 1110 with the receipt of a watchdog reset signal from the primary partition 210 .
- a watchdog counter is reset (e.g. set to zero) at 1120 , and this part of the method moves on to 1130 where the next watchdog tick is awaited.
- upon the next watchdog tick, the method moves on to 1150 , where the watchdog counter is incremented. It is then determined whether the watchdog counter has reached its expiry value.
- if the watchdog counter has not reached its expiry value, the method loops back to 1130 where the method awaits the next watchdog tick. Conversely, if the watchdog counter has reached its expiry value, the method moves on to 1170 where a failure flag is set indicating a failure condition within the primary partition 210 . The method then loops back to 1130 where the method awaits the next watchdog tick.
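The watchdog behaviour of flowchart 1100 may be sketched as follows. The class and attribute names are hypothetical; the step numbers from the flowchart appear only as comments.

```python
# Illustrative sketch of the FIG. 11 watchdog task in the management
# module: each tick increments a counter; reaching the expiry value
# sets the failure flag, and a reset from the primary clears the count.

class Watchdog:
    def __init__(self, expiry):
        self.expiry = expiry
        self.counter = 0
        self.failure_flag = False

    def reset(self):
        """Watchdog reset signal received from the primary (1110/1120)."""
        self.counter = 0

    def tick(self):
        """One watchdog tick (1130-1170)."""
        self.counter += 1                # 1150: increment counter
        if self.counter >= self.expiry:  # expiry check
            self.failure_flag = True     # 1170: failure condition
```

So long as the primary partition's heartbeat task resets the watchdog before the counter reaches its expiry value, the failure flag is never set.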
- the present invention has been described in terms of detecting the occurrence of a suspicious condition based on an occupancy level within the buffer pool 620 .
- other events and/or conditions may additionally/alternatively be used to indicate the occurrence of a suspicious condition.
- the partition management component 610 may additionally/alternatively be arranged to monitor transmission queues within the network sub-module 230 , task queues within the primary partition etc., and to detect, say, inactivity of such queues which may be interpreted as indicating a suspicious condition within the primary partition.
- the watchdog component 640 of the management module 250 may be configured with two expiry values.
- a first (higher) value may be used to set the failure flag as illustrated in FIG. 11
- a second (lower) value may be used to set a suspicious flag.
- in this manner, the suspicious flag is set prior to the failure flag, indicating that a suspiciously long time has elapsed since the last watchdog reset by the primary partition 210 , but not yet a long enough period to constitute a failure condition.
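The two-expiry-value watchdog described above may be sketched as follows. This is an illustrative model under the stated assumption of a lower (suspicious) and a higher (failure) expiry value; the class and attribute names are hypothetical.

```python
# Illustrative sketch of a watchdog with two expiry values: a lower
# value raises the suspicious flag first, and a higher value raises
# the failure flag if the primary still has not reset the watchdog.

class TwoLevelWatchdog:
    def __init__(self, suspicious_expiry, failure_expiry):
        assert suspicious_expiry < failure_expiry
        self.suspicious_expiry = suspicious_expiry
        self.failure_expiry = failure_expiry
        self.counter = 0
        self.suspicious_flag = False
        self.failure_flag = False

    def reset(self):
        # Reset by the primary partition's heartbeat: clear the count
        # and the suspicious flag (the device may resume normal state).
        self.counter = 0
        self.suspicious_flag = False

    def tick(self):
        self.counter += 1
        if self.counter >= self.suspicious_expiry:
            self.suspicious_flag = True  # suspiciously long silence
        if self.counter >= self.failure_expiry:
            self.failure_flag = True     # long enough to be a failure
```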
- Other indications such as a hardware resource within the primary partition 210 going out of a normal state or the like may additionally/alternatively be used to indicate the occurrence of a suspicious condition within the primary partition 210 .
- the implementation of an intermediate ‘suspicious state’ to which a multi-partition networking system is arranged to transition upon detection of a suspicious condition enables the secondary partition to be pre-emptively put into an active state ahead of a failure condition being detected.
- the secondary partition is ready substantially immediately for the processing of network traffic to be transferred thereto upon detection of a failure condition, thereby significantly reducing the period of time when network traffic is not being served due to a failure within the primary partition, and thus significantly reducing the requirement for the buffer pool size. It also shortens the time the multi-partition networking device 200 is not operational (i.e. not processing network traffic) and as such shortens the latency for processing network traffic following a failure of the primary partition.
- the invention may be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.
- a computer program is a list of instructions such as a particular application program and/or an operating system.
- the computer program may for instance include one or more of:
- a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
- the computer program may be stored internally on a tangible and non-transitory computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system.
- the tangible and non-transitory computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; non-volatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.
- a computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process.
- An operating system is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources.
- An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
- the computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices.
- the computer system processes information according to the computer program and produces resultant output information via I/O devices.
- connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections.
- the connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa.
- a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.
- Each signal described herein may be designed as positive or negative logic.
- In the case of a negative logic signal, the signal is active low, where the logically true state corresponds to a logic level zero.
- In the case of a positive logic signal, the signal is active high, where the logically true state corresponds to a logic level one. Note that any of the signals described herein can be designed as either negative or positive logic signals.
- those signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.
- the terms ‘assert’ (or ‘set’) and ‘negate’ (or ‘de-assert’ or ‘clear’) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically false state is a logic level one.
- logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements.
- architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality.
- any arrangement of components to achieve the same functionality is effectively ‘associated’ such that the desired functionality is achieved.
- any two components herein combined to achieve a particular functionality can be seen as ‘associated with’ each other such that the desired functionality is achieved, irrespective of architectures or intermediary components.
- any two components so associated can also be viewed as being ‘operably connected,’ or ‘operably coupled,’ to each other to achieve the desired functionality.
- the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device.
- the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner.
- the examples, or portions thereof, may be implemented as soft or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.
- any reference signs placed between parentheses shall not be construed as limiting the claim.
- the word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim.
- the terms ‘a’ or ‘an,’ as used herein, are defined as one or more than one.
Abstract
Description
-
- (i) the time taken to detect the failure condition within the primary partition; and
- (ii) the time taken to bring the secondary partition out of standby state and into an operational condition.
-
- (i) the time it takes to bring the secondary partition out of deep sleep (i.e. to power up);
- (ii) the time taken to resume the relevant context; and
- (iii) getting into a ‘hot’ state where the local register values etc. are set correctly.
-
- (iv) the time it takes to bring the secondary partition out of deep sleep (i.e. to power up);
- (v) the time taken to resume the relevant context; and
- (vi) getting into a ‘hot’ state where the local memories are set correctly.
-
- (i) normal operating state 302;
- (ii) suspicious operating state 304; and
- (iii) failover operating state 306.
- an occupancy level of a buffer pool for received network traffic exceeding a threshold level;
- inactivity within at least one transmission queue;
- inactivity within at least one task queue;
- a watchdog timer expiration (for example separate watchdog timers may be used for detecting a suspicious condition and detecting a failure condition, the former comprising a shorter duration than the latter); and
- an indication of a hardware resource going out of normal state.
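The suspicious-condition indications listed above may be combined into a single detection predicate, for example as sketched below. The function and metric names are purely illustrative assumptions, not part of the described device.

```python
# Illustrative predicate combining the suspicious-condition indications
# listed above; any one indication suffices to flag a suspicious
# condition within the primary partition.

def suspicious_condition(metrics):
    return (
        # buffer pool occupancy exceeding a threshold level
        metrics.get("buffer_occupancy", 0)
        > metrics.get("buffer_threshold", float("inf"))
        # inactivity within a transmission or task queue
        or metrics.get("tx_queue_inactive", False)
        or metrics.get("task_queue_inactive", False)
        # expiry of a (shorter-duration) suspicion watchdog timer
        or metrics.get("suspicion_watchdog_expired", False)
        # a hardware resource going out of its normal state
        or metrics.get("hw_resource_abnormal", False)
    )
```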
-
- a watchdog timer expiration;
- an indication of a hardware failure; and
- an indication of a non-correctable ECC (error-correcting code) fault.
-
- manipulate the partitions to backup each other; and
- allow sharing of resources across partitions.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/499,385 US9606879B2 (en) | 2014-09-29 | 2014-09-29 | Multi-partition networking device and method therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/499,385 US9606879B2 (en) | 2014-09-29 | 2014-09-29 | Multi-partition networking device and method therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
US20160092323A1 US20160092323A1 (en) | 2016-03-31 |
US9606879B2 true US9606879B2 (en) | 2017-03-28 |
Family
ID=55584543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/499,385 Active 2035-05-13 US9606879B2 (en) | 2014-09-29 | 2014-09-29 | Multi-partition networking device and method therefor |
Country Status (1)
Country | Link |
---|---|
US (1) | US9606879B2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015029406A1 (en) * | 2013-08-29 | 2015-03-05 | セイコーエプソン株式会社 | Transmission system, transmission device, and data transmission method |
US9606879B2 (en) * | 2014-09-29 | 2017-03-28 | Nxp Usa, Inc. | Multi-partition networking device and method therefor |
US20180285217A1 (en) * | 2017-03-31 | 2018-10-04 | Intel Corporation | Failover response using a known good state from a distributed ledger |
US11144354B2 (en) * | 2018-07-31 | 2021-10-12 | Vmware, Inc. | Method for repointing resources between hosts |
US10461076B1 (en) * | 2018-10-24 | 2019-10-29 | Micron Technology, Inc. | 3D stacked integrated circuits having functional blocks configured to accelerate artificial neural network (ANN) computation |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6363495B1 (en) | 1999-01-19 | 2002-03-26 | International Business Machines Corporation | Method and apparatus for partition resolution in clustered computer systems |
US20030046330A1 (en) | 2001-09-04 | 2003-03-06 | Hayes John W. | Selective offloading of protocol processing |
US6728780B1 (en) | 2000-06-02 | 2004-04-27 | Sun Microsystems, Inc. | High availability networking with warm standby interface failover |
US20050081122A1 (en) * | 2003-10-09 | 2005-04-14 | Masami Hiramatsu | Computer system and detecting method for detecting a sign of failure of the computer system |
US20120008506A1 (en) | 2010-07-12 | 2012-01-12 | International Business Machines Corporation | Detecting intermittent network link failures |
US20150006953A1 (en) * | 2013-06-28 | 2015-01-01 | Hugh W. Holbrook | System and method of a hardware shadow for a network element |
US20150019909A1 (en) * | 2013-07-11 | 2015-01-15 | International Business Machines Corporation | Speculative recovery using storage snapshot in a clustered database |
US20150058682A1 (en) * | 2013-08-26 | 2015-02-26 | Alaxala Networks Corporation | Network apparatus and method of monitoring processor |
US20150074219A1 (en) * | 2013-07-12 | 2015-03-12 | Brocade Communications Systems, Inc. | High availability networking using transactional memory |
US20160004241A1 (en) * | 2013-02-15 | 2016-01-07 | Mitsubishi Electric Corporation | Control device |
US20160092323A1 (en) * | 2014-09-29 | 2016-03-31 | Freescale Semiconductor, Inc. | Multi-partition networking device and method therefor |
US20160149773A1 (en) * | 2014-11-24 | 2016-05-26 | Freescale Semiconductor, Inc. | Multi-partition networking device |
-
2014
- 2014-09-29 US US14/499,385 patent/US9606879B2/en active Active
Non-Patent Citations (1)
Title |
---|
U.S. Appl. No. 14/224,391, filed Mar. 25, 2014, entitled "Network Processor for Managing a Packet Processing Acceleration Logic Circuitry in a Networking Device". |
Also Published As
Publication number | Publication date |
---|---|
US20160092323A1 (en) | 2016-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10489209B2 (en) | Management of resources within a computing environment | |
US9606879B2 (en) | Multi-partition networking device and method therefor | |
US8898517B2 (en) | Handling a failed processor of a multiprocessor information handling system | |
US9063906B2 (en) | Thread sparing between cores in a multi-threaded processor | |
CN107646104B (en) | Method, processing system, device and storage medium for managing shared resources | |
US8935698B2 (en) | Management of migrating threads within a computing environment to transform multiple threading mode processors to single thread mode processors | |
JP5224982B2 (en) | Apparatus, system, method and program for collecting dump data | |
US20170269984A1 (en) | Systems and methods for improved detection of processor hang and improved recovery from processor hang in a computing device | |
US20140310439A1 (en) | Low latency interrupt with existence of interrupt moderation | |
WO2011091743A1 (en) | Apparatus and method for recording reboot reason of equipment | |
CN112204554B (en) | Watchdog Timer Hierarchy | |
KR20170131366A (en) | Shared resource access control method and apparatus | |
US7865774B2 (en) | Multiprocessor core dump retrieval | |
US11275660B2 (en) | Memory mirroring in an information handling system | |
JP5277961B2 (en) | Information processing apparatus and failure concealing method thereof | |
US9548906B2 (en) | High availability multi-partition networking device with reserve partition and method for operating | |
CN103294169A (en) | Redundancy protection system and redundancy protection method for many-core system with optimized power consumption | |
EP3396553B1 (en) | Method and device for processing data after restart of node | |
CN112631872B (en) | Exception handling method and device for multi-core system | |
CN115658356A (en) | Watchdog feeding method and system in Linux system | |
US10956248B1 (en) | Configurable reporting for device conditions | |
EP2799991A1 (en) | The disable restart setting for AMF configuration components | |
WO2017014793A1 (en) | Preserving volatile memory across a computer system disruption | |
US9495230B2 (en) | Testing method | |
US20170264664A1 (en) | Moderating application communications according to network conditions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: FREESCALE SEMICONDUCTOR, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOSCOVICI, AVISHAY;EREZ, NIR;REEL/FRAME:033836/0986 Effective date: 20140929 |
|
AS | Assignment |
Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YORK Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034153/0027 Effective date: 20141030 Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YORK Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034160/0370 Effective date: 20141030 Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YORK Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034160/0351 Effective date: 20141030 Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YOR Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034153/0027 Effective date: 20141030 Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YOR Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034160/0351 Effective date: 20141030 Owner name: CITIBANK, N.A., AS NOTES COLLATERAL AGENT, NEW YOR Free format text: SUPPLEMENT TO IP SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:034160/0370 Effective date: 20141030 |
|
AS | Assignment |
Owner name: FREESCALE SEMICONDUCTOR, INC., TEXAS Free format text: PATENT RELEASE;ASSIGNOR:CITIBANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:037357/0921 Effective date: 20151207 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: ASSIGNMENT AND ASSUMPTION OF SECURITY INTEREST IN PATENTS;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:037458/0502 Effective date: 20151207 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: ASSIGNMENT AND ASSUMPTION OF SECURITY INTEREST IN PATENTS;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:037458/0460 Effective date: 20151207 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SUPPLEMENT TO THE SECURITY AGREEMENT;ASSIGNOR:FREESCALE SEMICONDUCTOR, INC.;REEL/FRAME:039138/0001 Effective date: 20160525 |
|
AS | Assignment |
Owner name: NXP, B.V., F/K/A FREESCALE SEMICONDUCTOR, INC., NETHERLANDS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:040925/0001 Effective date: 20160912 Owner name: NXP, B.V., F/K/A FREESCALE SEMICONDUCTOR, INC., NE Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:040925/0001 Effective date: 20160912 |
|
AS | Assignment |
Owner name: NXP B.V., NETHERLANDS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:040928/0001 Effective date: 20160622 |
|
AS | Assignment |
Owner name: NXP USA, INC., TEXAS Free format text: CHANGE OF NAME;ASSIGNOR:FREESCALE SEMICONDUCTOR INC.;REEL/FRAME:040626/0683 Effective date: 20161107 |
|
AS | Assignment |
Owner name: NXP USA, INC., TEXAS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NATURE OF CONVEYANCE PREVIOUSLY RECORDED AT REEL: 040626 FRAME: 0683. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER AND CHANGE OF NAME;ASSIGNOR:FREESCALE SEMICONDUCTOR INC.;REEL/FRAME:041414/0883 Effective date: 20161107 Owner name: NXP USA, INC., TEXAS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NATURE OF CONVEYANCE PREVIOUSLY RECORDED AT REEL: 040626 FRAME: 0683. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER AND CHANGE OF NAME EFFECTIVE NOVEMBER 7, 2016;ASSIGNORS:NXP SEMICONDUCTORS USA, INC. (MERGED INTO);FREESCALE SEMICONDUCTOR, INC. (UNDER);SIGNING DATES FROM 20161104 TO 20161107;REEL/FRAME:041414/0883 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: NXP B.V., NETHERLANDS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:050744/0097 Effective date: 20190903 |
|
AS | Assignment |
Owner name: NXP B.V., NETHERLANDS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVEAPPLICATION 11759915 AND REPLACE IT WITH APPLICATION11759935 PREVIOUSLY RECORDED ON REEL 040928 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITYINTEREST;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:052915/0001 Effective date: 20160622 |
|
AS | Assignment |
Owner name: NXP, B.V. F/K/A FREESCALE SEMICONDUCTOR, INC., NETHERLANDS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVEAPPLICATION 11759915 AND REPLACE IT WITH APPLICATION11759935 PREVIOUSLY RECORDED ON REEL 040925 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE RELEASE OF SECURITYINTEREST;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:052917/0001 Effective date: 20160912 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |