The server had issues before (T367071), and after the repair the reimage still showed failing disks, so there is likely an issue with the controller or mainboard. Since the host is out of warranty (OOW) and new replacements arrive next quarter, we'll go ahead with the decommission.
Steps for service owner:
- - all system services confirmed offline / removed from production use
- - set all icinga checks to maint mode/disabled while the reclaim/decommission takes place (likely done by script)
- - remove system from all lvs/pybal active configuration
- - any service group puppet/hiera/dsh config removed
- - remove the host's entry from site.pp, replacing it with role(spare::system); recommended to ensure services stay offline, but not 100% required as long as the decom script below is run IMMEDIATELY
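As a sketch, the site.pp change for the step above could look like the following (the hostname is a placeholder; role() follows the puppet repo's existing convention):

```puppet
# Hypothetical host entry in site.pp: swap the production role
# for the spare role so the host stops running its services.
node 'decomhost1001.eqiad.wmnet' {
    role(spare::system)
}
```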
- - log in to a cumin host and run the decom cookbook: cookbook sre.hosts.decommission <host fqdn> -t <phab task>. This wipes the bootloader, powers down the host, updates netbox to the decommissioning status, runs puppet node clean and puppet node deactivate, removes the host from debmonitor, and runs homer.
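Run from a cumin host, the step above might look like this (the FQDN and task ID are placeholders; substitute the real host and Phabricator task):

```shell
# Placeholder host and task ID; sudo may be required depending on local policy.
sudo cookbook sre.hosts.decommission decomhost1001.eqiad.wmnet -t T123456
```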
- - remove all remaining puppet references and host entries from the puppet repo
- - reassign the task from the service owner to a DC Ops team member and add the site project (ops-sitename) matching the server's site
End service owner steps / Begin DC-Ops team steps:
- - system disks removed (by onsite)
- - determine system age: systems under 5 years old are reclaimed to spares, systems over 5 years old are decommissioned.
- - IF DECOM: system unracked and decommissioned (by onsite); update netbox with the result and set state to offline
- - IF DECOM: mgmt dns entries removed.