
Compute Node Overview

This chapter contains the following topics:


• Cisco UCS X410c M7 Compute Node Overview, on page 1
• Local Console, on page 7
• Front Mezzanine Options, on page 7
• mLOM and Rear Mezzanine Slot Support, on page 8
• System Health States, on page 9
• Interpreting LEDs, on page 10
• Optional Hardware Configuration, on page 12

Cisco UCS X410c M7 Compute Node Overview


The Cisco UCS X410c M7 Compute Node (UCSX-410C-M7) is a two-slot compute node that supports four
CPU sockets for 4th Generation Intel® Xeon® Scalable Processors. Each compute node must be populated
with exactly four CPUs; configurations with fewer than four CPUs are not supported.
The overall compute node consists of two distinct subnodes, a primary and a secondary.
• The primary contains two CPUs (1 and 2), two heatsinks, and half of the DIMMs. All additional hardware
components and supported functionality are provided through the primary, including the front and rear
mezzanine hardware options, rear mezzanine bridge card, front panel, KVM, management console, and
status LEDs.
• The secondary contains two additional CPUs (3 and 4), two heatsinks, and the other half of the DIMMs.
The secondary also contains a power adapter, which ensures that the electrical power is shared and
distributed between the primary and secondary. The power adapter is not a customer-serviceable part.

Each Cisco UCS X410c M7 compute node supports the following:


• Up to 16 TB of system memory as 64 DDR5 DIMMs, up to 4800 MHz with 1DPC, 4400 MHz with 2DPC.
Thirty-two DDR5 DIMMs are supported on the primary, and 32 DIMMs are supported on the secondary.
(The population rules are illustrated in the sketch at the end of this list.)
• 16 DIMMs per CPU, 8 channels per CPU socket, 2 DIMMs per channel. Memory mirroring and RAS
are supported.
• Supported memory can be populated as 16 GB, 32 GB, 64 GB, 128 GB, or 256 GB DDR5 DIMMs.
• One front mezzanine module which can support the following:


• A front storage module, which supports multiple different storage device configurations:
• All SAS/SATA configuration consisting of up to six SAS/SATA SSDs with an integrated
RAID controller (HWRAID) in slots 1 through 6.
• All NVMe configuration consisting of up to six U.2 NVMe Gen4 (x4 PCIe) SSDs in slots 1
through 6.
• A mixed storage configuration consisting of up to six SAS/SATA or up to four NVMe drives
is supported. In this configuration, U.2 NVMe drives are supported in slots 1 through 4 only.

For additional information, see Front Mezzanine Options, on page 7.


• 1 modular LAN on motherboard (mLOM) module or virtual interface card (VIC) supporting a maximum
of 200G of aggregate traffic, 100G to each fabric, through a Cisco 5th Gen 100G mLOM/VIC. For more
information, see mLOM and Rear Mezzanine Slot Support, on page 8.
• 1 rear mezzanine module (UCSX-V4-PCIME or UCSX-ME-V5Q50G).
• A boot-optimized mini-storage module. Two versions of mini-storage exist:
• One version supports up to two M.2 SATA drives of up to 960GB each. This version supports an
optional hardware RAID controller (RAID1).
• One version supports up to two M.2 NVMe drives of up to 960GB each that are directly attached
to CPU 1. This version does not support an optional RAID controller. This option will be available
after initial release of the compute node.

• Local console connectivity through an external OCuLink connector.


• Connection with a paired UCS PCIe module, such as the Cisco UCS X440p PCIe node, to support GPU
offload and acceleration. For more information, see the Optional Hardware Configuration, on page 12.
• Up to 4 UCS X410c M7 compute nodes can be installed in a Cisco UCS X9508 modular system.
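
The memory limits above reduce to simple arithmetic: 4 CPUs with 16 DIMM slots each gives 64 slots, and 64 slots of 256 GB DIMMs gives 16 TB. The following Python sketch is only an illustration of the documented rules (it is not a Cisco tool, and the population-order assumption behind the speed check is ours); it validates a uniform DIMM population and reports the resulting capacity and memory speed.

# Illustrative sketch of the X410c M7 memory population limits described above.
# Not a Cisco utility; the constants mirror the documented values.

CPUS = 4                    # the node is always populated with exactly four CPUs
CHANNELS_PER_CPU = 8
DIMMS_PER_CHANNEL = 2       # 2DPC maximum
SUPPORTED_DIMM_GB = {16, 32, 64, 128, 256}

def check_population(dimms_per_cpu: int, dimm_gb: int) -> dict:
    """Validate a uniform DIMM population and report capacity and speed."""
    if dimm_gb not in SUPPORTED_DIMM_GB:
        raise ValueError(f"unsupported DIMM size: {dimm_gb} GB")
    if not 1 <= dimms_per_cpu <= CHANNELS_PER_CPU * DIMMS_PER_CHANNEL:
        raise ValueError("each CPU supports 1 to 16 DIMMs")
    # Assumption: every channel receives one DIMM before any channel receives a second,
    # so more than 8 DIMMs per CPU implies 2DPC (4400 MHz); otherwise 1DPC (4800 MHz).
    speed_mhz = 4400 if dimms_per_cpu > CHANNELS_PER_CPU else 4800
    total_dimms = dimms_per_cpu * CPUS
    return {
        "total_dimms": total_dimms,
        "capacity_tb": total_dimms * dimm_gb / 1024,
        "speed_mhz": speed_mhz,
    }

# Maximum configuration: 64 x 256 GB DIMMs = 16.0 TB at 4400 MHz (2DPC).
print(check_population(dimms_per_cpu=16, dimm_gb=256))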

Compute Node Identification


Each Cisco UCS X410c M7 compute node features a node identification tag at the lower right corner of the
primary node.


The node identification tag is a QR code that contains information that uniquely identifies the product, such
as:
• The Cisco product identifier (PID) or version identifier (VID)
• The product serial number

The product identification tag applies to the entire compute node, both the primary and secondary.
You will find it helpful to scan the QR code so that the information is available if you need to contact Cisco
personnel.

Compute Node Front Panel


The Cisco UCS X410c M7 front panel contains system LEDs that provide visual indicators for how the overall
compute node is operating. An external connector is also supported.

Figure: Compute Node Front Panel

1 Power LED and Power Switch

The LED provides a visual indicator of whether the compute node is on or off.
• Steady green indicates the compute node is on.
• Steady amber indicates the compute node is in Standby power mode.
• Off or dark indicates that the compute node is not powered on.
The switch is a push button that can power off or power on the compute node. See Front Panel Buttons, on page 5.

2 System Activity LED

The LED blinks to show whether data or network traffic is written to or read from the compute node. If no traffic is detected, the LED is dark. The LED is updated every 10 seconds.


3 System Health LED

A multifunction LED that indicates the state of the compute node.
• Steady green indicates the compute node successfully booted to runtime and is in a normal operating state.
• Steady amber indicates that the compute node successfully booted but is in a degraded runtime state. See System Health States, on page 9.
• Blinking amber indicates that the compute node is in a critical state, which requires attention. See System Health States, on page 9.

4 Locator LED/Switch

The LED provides a visual indicator that glows solid blue to identify a specific compute node.
The switch is a push button that toggles the Locator LED on or off. See Front Panel Buttons, on page 5.

5 External Optical Connector (OCuLink)

An external connector that supports local console functionality. See Local Console, on page 7.

Front Panel Buttons


The front panel has buttons that are also LEDs. See Compute Node Front Panel, on page 3.
• The front panel Power button is a multi-function button that controls system power for the compute node.
Its behavior is summarized in the sketch at the end of this section.
• Immediate power up: Quickly pressing and releasing the button, but not holding it down, causes a
powered down compute node to power up.
• Immediate power down: Pressing the button and holding it down 7 seconds or longer before releasing
it causes a powered-up compute node to immediately power down.
• Graceful power down: Quickly pressing and releasing the button, but not holding it down, causes
a powered-up compute node to power down in an orderly fashion.

• The front panel Locator button is a toggle that controls the Locator LED. Quickly pressing the button,
but not holding it down, toggles the locator LED on (when it glows a steady blue) or off (when it is dark).
The LED can also be dark if the compute node is not receiving power.

For more information, see Interpreting LEDs, on page 10.
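
As a compact restatement of the button behavior above, the following Python sketch maps the current power state and the press duration to the documented action. It is illustrative only; the function names are ours, and the 7-second threshold comes from the list above.

# Illustrative restatement of the front panel button behavior described above.

def power_button_action(powered_on: bool, hold_seconds: float) -> str:
    """Return the documented result of pressing the Power button."""
    if not powered_on:
        return "immediate power up"      # quick press on a powered-down compute node
    if hold_seconds >= 7:
        return "immediate power down"    # press and hold for 7 seconds or longer
    return "graceful power down"         # quick press on a powered-up compute node

def locator_button_action(locator_led_on: bool) -> str:
    """A quick press of the Locator button toggles the Locator LED."""
    return "Locator LED off" if locator_led_on else "Locator LED on"

print(power_button_action(powered_on=True, hold_seconds=0.5))   # graceful power down
print(locator_button_action(locator_led_on=False))              # Locator LED on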

Drive Bays
Each Cisco UCS X410c M7 compute node has a front mezzanine slot that can support different types and
quantities of 2.5-inch SAS, SATA, or NVMe local storage drives. A drive blank panel
(UCSC-BBLKD-M7) must cover all empty drive bays.
Drive bays are numbered sequentially from 1 through 6 as shown.


Figure 1: Front Loading Drives

Drive Front Panels


The front drives are installed in the front mezzanine slot of the compute node. SAS/SATA and NVMe drives
are supported.

Compute Node Front Panel with SAS/SATA Drives


The compute node front panel contains the front mezzanine module, which can support a maximum of 6
SAS/SATA drives. The drives have additional LEDs that provide visual indicators about each drive's status.
Figure 2: Drive LEDs

1 Drive Health LED
2 Drive Activity LED


Compute Node Front Panel with NVMe Drives


The compute node front panel contains the front mezzanine module, which can support a maximum of six
2.5-inch NVMe drives.

Local Console
The local console connector is a horizontally oriented OCuLink connector on the compute node faceplate.
The connector allows a direct connection to a compute node so that you can install an operating system
directly rather than remotely.
The connector terminates in a KVM dongle cable (UCSX-C-DEBUGCBL) that provides a connection into a
Cisco UCS compute node. The cable provides connections to the following:
• VGA connector for a monitor
• Host Serial Port
• USB port connector for a keyboard and mouse

With this cable, you can create a direct connection to the operating system and the BIOS running on a compute
node. The KVM cable must be ordered separately; it is not included in the compute node's accessory kit.
Figure 3: KVM Cable for Compute Nodes

1 OCuLink connector to compute node
2 Host Serial Port
3 USB connector to connect to a single USB 3.0 port (keyboard or mouse)
4 VGA connector for a monitor

Front Mezzanine Options


The Cisco UCS X410c M7 Compute Node supports front mezzanine module storage through SAS/SATA or
NVMe SSDs. For more information, see Storage Options, on page 8.


Storage Options
The compute node supports the following local storage options in the front mezzanine module.

Cisco UCS X410c Passthrough Module


The compute node supports the Cisco FlexStorage NVMe passthrough controller, which is a passthrough
controller for NVMe drives only. This module supports:
• Up to six NVMe SSDs in slots 1 through 6
• PCIe Gen3 and Gen4, x24 total lanes, partitioned as six x4 lanes
• Drive hot plug is supported
• Virtual RAID on CPU (VROC) is not supported, so RAID across NVMe SSDs is not supported

Cisco UCS X410c RAID Module


This storage option supports:
• Up to six SAS/SATA SSDs, or
• Up to four NVMe SSDs as:
• U.2 NVMe in slots 1 through 4, direct connected to CPU 1 at PCIe Gen4 x4

• PCIe Gen3 and Gen4, x8 lanes
• Drive hot plug is supported
• RAID support:
• RAID across NVMe SSDs is not supported.
• RAID across SAS/SATA SSDs is supported with various RAID levels: RAID 0, 1, 5, 6, 00, 10, 50,
and 60.
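
The slot rules above can be captured in a short validation routine. The sketch below is illustrative only (the function and the drive-type labels are ours); the limits themselves, six SAS/SATA slots and at most four NVMe drives restricted to slots 1 through 4, come from the lists above.

# Illustrative check of the drive placement rules for the X410c RAID module:
# up to six SAS/SATA drives in slots 1-6, up to four U.2 NVMe drives in slots 1-4.

def validate_raid_module(drives: dict[int, str]) -> list[str]:
    """drives maps a slot number (1-6) to a drive type: 'sas', 'sata', or 'nvme'."""
    errors = []
    for slot, kind in drives.items():
        if slot not in range(1, 7):
            errors.append(f"slot {slot}: only slots 1 through 6 exist")
        elif kind == "nvme" and slot > 4:
            errors.append(f"slot {slot}: NVMe drives are supported in slots 1 through 4 only")
    if sum(1 for kind in drives.values() if kind == "nvme") > 4:
        errors.append("at most four NVMe drives are supported")
    return errors

# A mixed configuration with an NVMe drive in slot 5 is rejected.
print(validate_raid_module({1: "nvme", 2: "nvme", 5: "nvme", 6: "sata"}))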

Storage-Free Option
If no front storage drives are required, Cisco offers a storage-free configuration consisting of a blank front
mezzanine faceplate for the primary.

mLOM and Rear Mezzanine Slot Support


The following rear mezzanine and modular LAN on motherboard (mLOM) modules and virtual interface
cards (VICs) are supported.
The following mLOM VICs are supported.
• Cisco UCS VIC 15420 mLOM (UCSX-ML-V5Q50G) which supports:
• Quad-Port 25G mLOM.
• Occupies the compute node's modular LAN on motherboard (mLOM) slot.


• Enables up to 50 Gbps of unified fabric connectivity to each of the chassis intelligent fabric modules
(IFMs) for 100 Gbps connectivity per compute node.

• Cisco UCS VIC 15231 mLOM (UCSX-ML-V5D200G), which supports:


• x16 PCIe Gen4 host interface to the UCS X410c M7 compute node
• Two or four KR interfaces that connect to Cisco UCS X Series Intelligent Fabric Modules (IFMs):
• Two 100G KR interfaces connecting to the UCSX 100G Intelligent Fabric Module
(UCSX-I-9108-100G)
• Four 25G KR interfaces connecting to the Cisco UCSX 9108 25G Intelligent Fabric Module
(UCSX-I-9108-25G)

The following modular network mezzanine cards are supported.


• Cisco UCS VIC 15422 (UCSX-ME-V5Q50G) which supports:
• Four 25G KR interfaces.
• Can occupy the compute node's mezzanine slot at the bottom rear of the chassis.
• An included bridge card extends this VIC's 2x 50 Gbps of network connections through IFM
connectors, bringing the total bandwidth to 100 Gbps per fabric (for a total of 200 Gbps per compute
node). This arithmetic is sketched at the end of this section.

• Cisco UCS PCI Mezz card for X-Fabric (UCSX-V4-PCIME) provides connectivity for Cisco UCS PCIe
Nodes, such as the Cisco UCS X440p PCIe Node, which supports GPU offload and acceleration when
a compute node is paired with the PCIe node.

Note Although not an mLOM or rear mezzanine card, the UCS VIC 15000 bridge connector (UCSX-V5-BRIDGE-D)
is required to connect the Cisco VIC 15420 mLOM and Cisco VIC 15422 rear mezzanine card on the compute
node.
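
The bandwidth figures quoted in this section follow from per-port arithmetic: the quad-port 25G VIC 15420 provides 2 x 25 Gbps = 50 Gbps to each IFM (100 Gbps per compute node), and adding the quad-port VIC 15422 through the bridge card doubles that to 100 Gbps per fabric and 200 Gbps per node. The Python sketch below only restates that arithmetic; it assumes two IFMs (one per fabric) per chassis, as described above.

# Illustrative arithmetic behind the bandwidth figures quoted in this section.
# Assumes two IFMs (one per fabric) in the chassis.

IFMS = 2

def node_bandwidth_gbps(ports_per_vic: int, port_speed_gbps: int, vics: int = 1) -> dict:
    """Aggregate KR-port bandwidth for one compute node, split evenly across the IFMs."""
    total = ports_per_vic * port_speed_gbps * vics
    return {"per_fabric_gbps": total // IFMS, "per_node_gbps": total}

# VIC 15420 mLOM alone: quad-port 25G -> 50 Gbps per fabric, 100 Gbps per node.
print(node_bandwidth_gbps(ports_per_vic=4, port_speed_gbps=25, vics=1))

# VIC 15420 mLOM plus VIC 15422 mezzanine (with the bridge card):
# 100 Gbps per fabric, 200 Gbps per node.
print(node_bandwidth_gbps(ports_per_vic=4, port_speed_gbps=25, vics=2))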

System Health States


The compute node's front panel has a System Health LED, which is a visual indicator that shows whether the
compute node is operating in a normal runtime state (the LED glows steady green). If the System Health LED
shows anything other than solid green, the compute node is not operating normally, and it requires attention.
The following System Health LED states indicate that the compute node is not operating normally.


System Health LED Color: Solid Amber
Compute Node State: Degraded
Conditions:
• Power supply redundancy lost
• Intelligent Fabric Module (IFM) redundancy lost
• Mismatched processors in the system. This condition might prevent the system from booting.
• Faulty processor in a dual processor system. This condition might prevent the system from booting.
• Memory RAS failure if memory is configured for RAS
• Failed drive in a compute node configured for RAID

System Health LED Color: Blinking Amber
Compute Node State: Critical
Conditions:
• Boot failure
• Fatal processor or bus errors detected
• Fatal uncorrectable memory error detected
• Lost both IFMs
• Lost both drives
• Excessive thermal conditions
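
Restated compactly, the System Health LED maps to the node state as shown in the Python sketch below. The sketch is illustrative only; the condition examples in the comments are abbreviated from the table above.

# Compact, illustrative restatement of the System Health LED table above.

HEALTH_LED_STATES = {
    "solid green":    "Normal",    # booted to runtime, normal operating state
    "solid amber":    "Degraded",  # e.g., lost power supply or IFM redundancy, memory RAS failure
    "blinking amber": "Critical",  # e.g., boot failure, fatal CPU or memory errors, both IFMs lost
}

def node_state(health_led: str) -> str:
    """Return the documented compute node state for a System Health LED reading."""
    return HEALTH_LED_STATES.get(health_led.strip().lower(), "unknown LED state")

print(node_state("Blinking Amber"))   # Critical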

Interpreting LEDs
Table 1: Compute Node LEDs

Compute Node Power (callout 1 on the Chassis Front Panel)
• Off: Power off.
• Green: Normal operation.
• Amber: Standby.

Compute Node Activity (callout 2 on the Chassis Front Panel)
• Off: None of the network links are up.
• Green: At least one network link is up.

Compute Node Health (callout 3 on the Chassis Front Panel)
• Off: Power off.
• Green: Normal operation.
• Amber: Degraded operation.
• Blinking Amber: Critical error.

Compute Node Locator LED and button (callout 4 on the Chassis Front Panel)
• Off: Locator not enabled.
• Blinking Blue, 1 Hz: Locates a selected compute node. If the LED is not blinking, the compute node is not selected. You can initiate the LED in UCS Intersight or by pressing the button, which toggles the LED on and off.

Table 2: Drive LEDs, SAS/SATA

Activity/Presence LED          Status/Fault LED               Description

Off                            Off                            Drive not present or drive powered off

On (glowing solid green)       Off                            Drive present, but no activity, or drive is a hot spare

Blinking green, 4 Hz           Off                            Drive present and drive activity

Blinking green, 4 Hz           Blinking amber, 4 Hz           Drive Locate indicator or drive prepared for physical removal

On (glowing solid green)       On (glowing solid amber)       Failed or faulty drive

Blinking green, 1 Hz           Blinking amber, 1 Hz           Drive rebuild or copyback operation in progress

On (glowing solid green)       Two 4 Hz amber blinks with a   Predicted Failure Analysis (PFA)
                               ½ second pause
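
The SAS/SATA drive LED combinations can be read as (Activity/Presence LED, Status/Fault LED) pairs. The Python lookup below is only an illustrative restatement of Table 2 with informally shortened key names; it is not part of any Cisco tooling.

# Illustrative lookup over Table 2: SAS/SATA drive LED combinations.
# Keys are (Activity/Presence LED, Status/Fault LED), informally shortened.

SAS_SATA_DRIVE_LEDS = {
    ("off", "off"): "Drive not present or drive powered off",
    ("solid green", "off"): "Drive present, but no activity, or drive is a hot spare",
    ("blinking green 4 Hz", "off"): "Drive present and drive activity",
    ("blinking green 4 Hz", "blinking amber 4 Hz"):
        "Drive Locate indicator or drive prepared for physical removal",
    ("solid green", "solid amber"): "Failed or faulty drive",
    ("blinking green 1 Hz", "blinking amber 1 Hz"):
        "Drive rebuild or copyback operation in progress",
    ("solid green", "two 4 Hz amber blinks, half-second pause"):
        "Predicted Failure Analysis (PFA)",
}

def describe_drive(activity_led: str, fault_led: str) -> str:
    """Return the documented meaning of a drive LED combination."""
    return SAS_SATA_DRIVE_LEDS.get((activity_led, fault_led), "unknown LED combination")

print(describe_drive("solid green", "solid amber"))   # Failed or faulty drive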


Table 3: Drive LEDs, NVMe (VMD Disabled)

Activity/Presence LED          Status/Fault LED               Description

Off                            Off                            Drive not present or drive powered off

On (glowing solid green)       Off                            Drive present, but no activity

Blinking green, 4 Hz           Off                            Drive present and drive activity

N/A                            N/A                            Drive Locate indicator or drive prepared for physical removal

N/A                            N/A                            Failed or faulty drive

N/A                            N/A                            Drive rebuild

Table 4: Drive LEDs, NVMe (VMD Enabled)

Activity/Presence LED          Status/Fault LED               Description

Off                            Off                            Drive not present or drive powered off

On (glowing solid green)       Off                            Drive present, but no activity

Blinking green, 4 Hz           Off                            Drive present and drive activity

Blinking green, 4 Hz           Blinking amber, 4 Hz           Drive Locate indicator or drive prepared for physical removal

N/A                            N/A                            Failed or faulty drive

N/A                            N/A                            Drive rebuild

Optional Hardware Configuration


The Cisco UCS X410c M7 compute node can be installed in a Cisco UCS X9508 Server Chassis either as a
standalone compute node or with the following optional hardware configuration.

Cisco UCS X440p PCIe Node


As an option, the compute node can be paired with a full-slot GPU acceleration hardware module in the Cisco
UCS X9508 Server Chassis. This option is supported through the Cisco X440p PCIe node. For information
about this option, see the Cisco UCS X440p PCIe Node Installation and Service Guide.


Note When the compute node is paired with the Cisco UCS X440p PCIe node, the Cisco UCS PCI Mezz card for
X-Fabric Connectivity (UCSX-V4-PCIME-D) is required. The UCS VIC bridge connector is required with
the mezzanine card to connect the UCS X-Series compute nodes to Cisco UCS X Series IFMs. The bridge
connector card installs on the compute node.

Caution When the compute node is installed in the same Cisco UCS X9508 chassis as the Cisco UCS X440p PCIe
node, the compute node must be installed in the slots immediately to the right of the PCIe node. For more
information, see Compute Node Installation Guidelines and Limitations.
