Ubuntu on GCP
Ubuntu on Google Cloud Platform (GCP) is a set of customized Ubuntu images that allow easy access to a wide
range of products and services - offered by both Google Cloud and Canonical. These images have an optimized kernel
that boots faster, has a smaller footprint and includes GCP-specific drivers.
These images provide a foundation for deploying cloud-based software solutions, specifically for software built on
Ubuntu and running on Google cloud. They focus on providing the optimal tools and features needed to run specific
workloads.
The images create a stable and secure cloud platform that is ideal for scaling development work done on Ubuntu-based systems. Since Ubuntu is one of the most favored operating systems among developers, using an Ubuntu-based image for the corresponding cloud deployment is often the simplest option.
Everyone from individual developers to large enterprises uses these images for developing and deploying their software. For highly regulated industries in the government, medical and finance sectors, various security-certified images are also available.
In this documentation
How-to guides
Step-by-step guides covering key operations and common tasks related to using Ubuntu images on GCE.
Explanation
Discussion and clarification of key topics, such as security features, Google’s ‘guest agents’ on Ubuntu and our image
retention policy.
Project and community
Ubuntu on GCP is a member of the Ubuntu family and the project warmly welcomes community projects, contributions,
suggestions, fixes and constructive feedback.
• Get support
• Join our online chat
• Discuss on Matrix
• Talk to us about Ubuntu on Google Cloud
• Contribute to these docs
• Code of conduct
Using GCE
These how-to guides relate to launching and using Ubuntu-based GCE instances. They include instructions for performing different sets of tasks.
Launching different types of instances:
• Find images
• Create instances
• Launch a desktop
Creating golden images and customized containers:
• Build a Pro golden image
• Create customized docker containers
Performing upgrades:
• Upgrade to Pro
• Enable Pro features
• Upgrade from Focal to Jammy
Administrative operations:
• Set hostname
Find images

On your Google Cloud console, you can find the latest Ubuntu images by selecting Ubuntu as the Operating System under Compute Engine > VM instances > CREATE INSTANCE > Boot disk > CHANGE.
For a programmatic method, you can use the gcloud command:
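For example, to list the current Ubuntu 24.04 LTS images (the filter below is only an illustration and can be adjusted; Ubuntu Pro images are published under the ubuntu-os-pro-cloud project instead of ubuntu-os-cloud):

gcloud compute images list --project=ubuntu-os-cloud --filter="family~ubuntu-2404"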
Image locator
Canonical also produces an Ubuntu cloud image finder where users can filter based on a variety of criteria, such as region or release.
Create instances

The procedure for creating different instance types on GCP comes down to choosing the correct options in your Google Cloud console. Some specific examples are given below.
Create an Ubuntu LTS instance

On your Google Cloud console, while creating a new instance from Compute Engine > VM instances > CREATE INSTANCE:
• select Ubuntu and Ubuntu 24.04 LTS in Boot disk > CHANGE > Operating system and Version
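If you prefer the CLI, an equivalent instance can be created with gcloud; the zone, machine type and image family below are illustrative assumptions. For other variants (Pro, Pro FIPS, Minimal), swap in the corresponding image family and, for Pro images, the ubuntu-os-pro-cloud image project:

gcloud compute instances create my-ubuntu-vm \
    --zone=us-central1-a \
    --machine-type=e2-standard-2 \
    --image-family=ubuntu-2404-lts-amd64 \
    --image-project=ubuntu-os-cloud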
Create an Ubuntu Pro instance

On your Google Cloud console, while creating a new instance from Compute Engine > VM instances > CREATE INSTANCE:
• select Ubuntu Pro and Ubuntu 24.04 LTS Pro Server in Boot disk > CHANGE > Operating system and
Version
Once the instance is up, ssh into it and run
pro status
Create an Ubuntu Pro FIPS instance

On your Google Cloud console, while creating a new instance from Compute Engine > VM instances > CREATE INSTANCE:
• select Ubuntu Pro and Ubuntu 20.04 LTS Pro FIPS Server in Boot disk > CHANGE > Operating system
and Version
Once the instance is up, ssh into it and run
uname -r
The kernel version will include fips in the name. To check the FIPS packages, run:
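For example:

apt list --installed | grep -i fips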
It should show you a long list of packages with fips in the name or version.
Create an Arm-based instance

On your Google Cloud console, while creating a new instance from Compute Engine > VM instances > CREATE INSTANCE:
• choose the ARM CPU platform T2A in Machine configuration > Series
• choose an ARM compatible OS and version, say Ubuntu and Ubuntu 24.04 LTS Minimal in Boot disk >
CHANGE > Operating system and Version
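An illustrative gcloud equivalent (the zone, machine type and image family names are assumptions; T2A machine types are only available in selected zones):

gcloud compute instances create my-arm-vm \
    --zone=us-central1-a \
    --machine-type=t2a-standard-2 \
    --image-family=ubuntu-minimal-2404-lts-arm64 \
    --image-project=ubuntu-os-cloud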
Create an AMD SEV based confidential computing VM

On your Google Cloud console, while creating a new instance from Compute Engine > VM instances > CREATE INSTANCE:
• select Confidential VM service > ENABLE
It’ll show you the available machine type - n2d-standard-2 and boot disk image - Ubuntu 20.04 LTS. Select ENABLE again and the changes will be reflected under the Machine configuration and Boot disk sections. However, we need to change the disk image to one with Pro FIPS:
• Go to Boot disk > CHANGE > Confidential Images and filter using ‘ubuntu’ to select Ubuntu 20.04 LTS Pro
FIPS Server. Select that and create the instance.
To check that confidential computing has been enabled correctly, once the instance is up, ssh into it and run
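For example:

sudo dmesg | grep -i sev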
A statement containing AMD Secure Encrypted Virtualization (SEV) active should be displayed.
Back on the Google Cloud console, open the instance details and go to Logs > Logging. In the list of logs, look for one
that mentions sevLaunchAttestationReportEvent and expand it. In the resulting JSON, check that the field
integrityEvaluationPassed is set to true, under sevLaunchAttestationReportEvent, something like:
insertId: "0",
jsonPayload: {
  @type: "type.googleapis.com/cloud_integrity.IntegrityEvent",
  bootCounter: "0",
  sevLaunchAttestationReportEvent: {
    integrityEvaluationPassed: true
    sevPolicy: {0}
[...]
Create an Intel® TDX based confidential computing VM

In GCE, Intel® TDX is supported in the C3 machine series since they use the 4th Gen Intel® Xeon CPUs. To create the VM, in the Google Cloud CLI, use the instances create command with confidential-compute-type=TDX:
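A sketch of the command (INSTANCE_NAME, ZONE and the image project are placeholders; on-host maintenance typically needs to be set to TERMINATE for confidential VMs):

gcloud compute instances create INSTANCE_NAME \
    --zone=ZONE \
    --machine-type=MACHINE_TYPE \
    --confidential-compute-type=TDX \
    --image-family=IMAGE_FAMILY_NAME \
    --image-project=ubuntu-os-cloud \
    --maintenance-policy=TERMINATE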
where:
• MACHINE_TYPE: is the C3 machine type to use and
• IMAGE_FAMILY_NAME: is the name of the confidential VM supported image family to use, such as Ubuntu
22.04 LTS, Ubuntu 24.04 LTS or Ubuntu 24.04 LTS Pro Server
Launch a desktop

If you want an Ubuntu desktop environment on your VM, you can set it up and use the Chrome Remote Desktop service to access it from your local Chrome web browser.
Note
If you don’t have an Ubuntu VM already, you can create one based on Create an Ubuntu LTS instance
wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb
sudo apt-get install --assume-yes ./chrome-remote-desktop_current_amd64.deb
Install a lightweight graphical display manager like SLiM (Simple Login Manager) on your VM:
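For example (package availability may vary by Ubuntu release):

sudo apt install --assume-yes slim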
sudo reboot
SSH back into the VM when the connection is restored, and start SLiM:
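For example:

sudo service slim start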
To start the remote desktop connection, you’ll need an authorization key. This can be created using Chrome on your
local machine. Browse to the Chrome Remote Desktop setup page, where you’ll see the option to Set up another
computer on the Set up via SSH tab.
• Select Begin
• Select Next, since you have already installed Chrome Remote Desktop on the remote computer
• Select Authorize
• Copy the command shown for Debian Linux.
Back on your VM’s SSH window:
• Paste the command and run it
• Enter a 6-digit pin when prompted. This pin will be needed during remote login to the VM.
On your local machine, go to the Chrome Remote Desktop access page, and you’ll see your VM under Remote devices
on the Remote Access tab. Select the VM and you will be prompted to input the 6-digit pin that you created in the
previous step.
You might see a window with messages similar to “This session logs you into Ubuntu”. Select OK to close the window.
If you see a page that says “Authentication is required to create a color managed device”, select Cancel to ignore it.
You might also see a setup screen that you can follow through by selecting Start Setup > Next > Next > Start Using
Ubuntu
Your VM with an Ubuntu desktop is now fully functional and accessible within your Chrome browser. Select Activities
to access search and other desktop shortcuts.
Build a Pro golden image

A golden image is a base image that is used as a template for your virtual machines. You can create it from your Google Cloud console’s Cloud Shell (as explained below) or using other tools like Packer.
We’ll be using Ubuntu Pro 22.04 LTS as the base image, although the steps should work fine for all Pro images available
in your console.
In your Google Cloud console, search for the ‘Cloud Shell’ product and open it by selecting Go to console. Once in,
look for the available Ubuntu Pro images:
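For example (the filter is illustrative):

gcloud compute images list --project=ubuntu-os-pro-cloud --filter="name~ubuntu-pro"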
NAME: ubuntu-pro-1604-xenial-v20230710
FAMILY: ubuntu-pro-1604-lts
NAME: ubuntu-pro-1804-bionic-arm64-v20230921
FAMILY: ubuntu-pro-1804-lts-arm64
NAME: ubuntu-pro-1804-bionic-v20230921
FAMILY: ubuntu-pro-1804-lts
NAME: ubuntu-pro-2004-focal-arm64-v20230920
FAMILY: ubuntu-pro-2004-lts-arm64
NAME: ubuntu-pro-2004-focal-v20230920
FAMILY: ubuntu-pro-2004-lts
NAME: ubuntu-pro-2204-jammy-arm64-v20230921
FAMILY: ubuntu-pro-2204-lts-arm64
NAME: ubuntu-pro-2204-jammy-v20230921
FAMILY: ubuntu-pro-2204-lts
NAME: ubuntu-pro-fips-1804-bionic-v20230530
FAMILY: ubuntu-pro-fips-1804-lts
NAME: ubuntu-pro-fips-2004-focal-v20230920
FAMILY: ubuntu-pro-fips-2004-lts
From the options seen, choose Ubuntu Pro 22.04 LTS and use its family name in the golden image creation command
below:
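For example, using the ubuntu-pro-2204-lts family (golden-image is the name used in the output that follows):

gcloud compute images create golden-image \
    --source-image-family=ubuntu-pro-2204-lts \
    --source-image-project=ubuntu-os-pro-cloud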
After a short while you’ll see output similar to the following, and the created golden image will be available in your image gallery.
Created [https://www.googleapis.com/compute/v1/projects/[YOUR_PROJECT]/global/images/golden-image].
NAME: golden-image
PROJECT: [YOUR_PROJECT]
FAMILY:
DEPRECATED:
STATUS: READY
architecture: X86_64
archiveSizeBytes: '1094443008'
creationTimestamp: '2023-09-29T03:56:22.275-07:00'
diskSizeGb: '10'
guestOsFeatures:
- type: VIRTIO_SCSI_MULTIQUEUE
- type: SEV_CAPABLE
- type: SEV_SNP_CAPABLE
- type: SEV_LIVE_MIGRATABLE
- type: UEFI_COMPATIBLE
- type: GVNIC
id: '8518177910815396794'
kind: compute#image
labelFingerprint: 42WmSpB8rSM=
licenseCodes:
- '2592866803419978320'
licenses:
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2204-lts
name: golden-image
selfLink: https://www.googleapis.com/compute/v1/projects/ubuntu-dimple/global/images/golden-image
shieldedInstanceInitialState:
[...]
The line starting with “licenses:” shows the expected Pro license.
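To verify the image, launch an instance from it; a sketch that matches the output below (the zone and machine type are taken from that output):

gcloud compute instances create instance-from-golden-image \
    --image=golden-image \
    --zone=asia-southeast1-a \
    --machine-type=n1-standard-1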
Created [https://www.googleapis.com/compute/v1/projects/ubuntu-dimple/zones/asia-southeast1-a/instances/instance-from-golden-image].
NAME: instance-from-golden-image
ZONE: asia-southeast1-a
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
INTERNAL_IP: 10.148.0.2
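SSH into the new instance, for example with:

gcloud compute ssh instance-from-golden-image --zone=asia-southeast1-a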
The SSH command might need you to create an SSH key for gcloud if you don’t have one already. Once you complete
the steps and reach the prompt of the new instance, check its license by running:
pro status
The output should be similar to the following and indicates that Pro features such as ESM and livepatch are enabled.
For a list of all Ubuntu Pro services, run 'pro status --all'
Enable services with: pro enable <service>
Account: ubuntu-dimple
Subscription: ubuntu-dimple
Valid until: Fri Dec 31 00:00:00 9999 UTC
Technical support level: essential
To share this golden image with other users, you’ll need to add them as principals and assign the Compute Image User
role to them. This will give them permission to list, read, and use the image but not to modify it.
Go to your image gallery, select the image that you just created. In the INFO PANEL on the right, select PERMISSIONS
> ADD PRINCIPAL:
• In the Add principals field insert the email addresses of all the users that you want to share your image with.
• In the Assign roles field, select Compute Engine > Compute Image User
On saving these settings, the specified users will have access to the image.
You can also grant users the Viewer IAM role for the project in which you created the image. This will ensure that the shared image appears in their image selection list.
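If you prefer the CLI, the same role can be granted with gcloud (a sketch; the email address is a placeholder):

gcloud compute images add-iam-policy-binding golden-image \
    --member='user:someone@example.com' \
    --role='roles/compute.imageUser'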
Create customized Docker containers

Docker containers are extremely useful for running applications reliably on different computing environments. This is because they package the application along with all its dependencies into a single image that can be easily deployed. Docker is the underlying technology used to run these containers/images. Docker also allows you to modify a container and create new customized versions easily. As an example, on your Ubuntu Pro VM, we’ll run a container based on the latest Ubuntu image and then customize it by including Python.
Note
If you don’t have an Ubuntu Pro VM already, you can create one based on Create an Ubuntu Pro instance
Install Docker
On your Ubuntu Pro VM, the easiest way to install Docker is to use snap. Update your package manager data and then
install docker using:
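A sketch of those steps, followed by a search of Docker Hub for Ubuntu images:

sudo apt update
sudo snap install docker
sudo docker search ubuntu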
You’ll find many Ubuntu-related images, some of which have an [OK] under the ‘OFFICIAL’ column, indicating that they are images built and supported by a company.
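Pull the official Ubuntu image and list the images available locally:

sudo docker pull ubuntu
sudo docker images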
The image that you just pulled will show up in the output:
Run a container based on this downloaded image and it’ll take you to the new container’s command prompt:
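For example:

sudo docker run -it ubuntu bash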
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
To customize the image, you can for instance install Python within the container:
apt update
apt install python3
/usr/bin/python3 -V
Python 3.10.12
Now that you have modified the original Ubuntu image, you can save the changes to create a new image. Use Ctrl +
P and Ctrl + Q to exit the container interface and get back into the VM.
To save the changes you’ll need the container ID (of the container where you made the changes). You can get this by
checking the containers running on your VM:
sudo docker ps
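Then commit the container to a new image; the container ID, message, author and image name below are placeholders:

sudo docker commit -m "Added Python" -a "Your Name" CONTAINER_ID ubuntu-python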
where the parameter -m (message) is used to indicate the changes made and -a (author) is used to indicate the author
of the changes.
If you look at the list of images on your VM, you’ll see the newly added one:
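For example:

sudo docker images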
Upgrade in-place from LTS to Pro

If your production environment is based on Ubuntu LTS and you need the premium security, support or compliance features of Ubuntu Pro, you don’t have to migrate your applications to new Ubuntu Pro VMs. You can just perform an in-place upgrade of your existing machines in three simple steps:
1. Stop your machine:
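A sketch of these steps, based on Google’s documented in-place upgrade flow (the exact flag names may vary with your gcloud version):

gcloud compute instances stop INSTANCE_NAME --zone=ZONE

2. Append the Ubuntu Pro license to the instance’s boot disk:

gcloud beta compute disks update INSTANCE_NAME \
    --zone=ZONE \
    --update-user-licenses="LICENSE_URI"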
where,
• INSTANCE_NAME: is the name of the instance (boot disk) to append the license to
• ZONE: is the zone containing the instance
• LICENSE_URI: is the license URI for the Pro version that you are upgrading to. If your VM runs Ubuntu 16.04
LTS, you need to upgrade to Ubuntu Pro 16.04 LTS. Choose the appropriate URI from:
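The URIs follow the pattern below; the trailing family name must match the Pro release you are upgrading to (ubuntu-pro-1604-lts, ubuntu-pro-1804-lts, ubuntu-pro-2004-lts or ubuntu-pro-2204-lts), for example:

https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2204-lts

3. Start the instance again (gcloud compute instances start INSTANCE_NAME --zone=ZONE), SSH into it, and run: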
pro status
The output should show the different services available and their current status. Something like:
For comprehensive instructions, please refer to the official Google Cloud documentation for upgrading to Pro.
Enable Pro features

Not all Pro features are automatically enabled when you create your Ubuntu Pro VM. They can be enabled individually as per your requirements.
Note
If you don’t have an Ubuntu Pro VM already, you can either create a new instance (refer: Create an Ubuntu Pro
instance) or do an in-place upgrade of your LTS VM to Pro (refer: Upgrade in-place from LTS to Pro).
To check the current status of different Pro services on your VM, SSH into it and run:
pro status
Use the appropriate section below to enable the service that you need.
ESM
Extended Security Maintenance (ESM) guarantees security coverage of 10 years for your Pro VM. For example, Ubuntu 22.04 LTS will get security updates until 2032. This feature is automatically enabled with Pro and on running pro status, you should see something like:
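An illustrative excerpt of the relevant lines (the exact columns and wording vary by release):

esm-apps      yes   enabled   Expanded Security Maintenance for Applications
esm-infra     yes   enabled   Expanded Security Maintenance for Infrastructure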
esm-infra guarantees 10-year security coverage for packages in the “main” repository, which includes Canonical-supported free and open-source software.
esm-apps further extends this coverage to the “universe” repository, which includes community-maintained free and
open-source software.
CIS hardening
CIS Benchmarks are best practices for the secure configuration of a system. Ubuntu Pro includes CIS tooling packages
and your Pro VM can be made CIS compliant by enabling the CIS service and then hardening the instance. Enable CIS
using:
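For example (on releases newer than 20.04 the service is named usg rather than cis):

sudo pro enable cis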
With the tooling packages now installed, you can, for instance, harden your Ubuntu Pro 20.04 LTS system with the CIS level 1 server profile, by running:
sudo /usr/share/ubuntu-scap-security-guides/cis-hardening/Canonical_Ubuntu_20.04_CIS-harden.sh lvl1_server
In a few minutes, the hardening process will complete to give you a CIS level 1 compliant environment. To audit the
system, run:
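A sketch using the bundled audit tool (the command name comes with the CIS tooling packages; verify it on your release):

sudo cis-audit level1_server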
CIS audit scan completed. The scan results are available in /usr/share/ubuntu-scap-security-guides/cis-20.04-report.html report.
The HTML report mentioned above will show you your CIS score. For comprehensive CIS hardening instructions,
refer to the Ubuntu CIS Compliance documentation.
FIPS compliance
Federal Information Processing Standards (FIPS) are standards and guidelines for federal computer systems developed by the National Institute of Standards and Technology (NIST). To enable FIPS on your Pro VM, run:
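For example (use fips-updates instead if you also want security updates to the FIPS modules):

sudo pro enable fips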
Reboot the instance by running sudo reboot or through the Google Cloud console. Once the machine restarts, you
can SSH into it again and run pro status to verify that the fips service is enabled.
Livepatch
With livepatch enabled, high and critical CVEs are patched in place on a running kernel, without the need for a reboot. This means that you don’t have to worry about kernel-related security vulnerabilities. You can avoid unexpected downtime and delay your reboot until the next scheduled maintenance window.
To enable livepatch, run:
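For example:

sudo pro enable livepatch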
Upgrade from Focal to Jammy

General Advice
Once you have decided to upgrade your system, the next question is how. There are two options, depending on whether your system is set up/deployed with automation or whether it requires manual configuration.
For fully automated system deployments it is recommended to redeploy with new Jammy instances instead of upgrading
from Focal.
For systems that cannot be easily created or destroyed and require manual configuration, running do-release-upgrade is a good option. However, this option requires some manual intervention, as explained below.
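A typical run looks like this (update the system first; the upgrade itself is interactive):

sudo apt update && sudo apt upgrade
sudo do-release-upgrade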
While upgrading from Focal to Jammy, manual decision making will be needed for the following options that are
presented.
When upgrading in a session over SSH there is an inherent risk of losing access if something goes wrong with the SSH
daemon. To mitigate this risk an additional SSH daemon is started on a different port as a backup.
The prompt notifies you that an additional SSH daemon will be started and you can either continue or cancel the
upgrade.
If you are using a firewall, there is a chance that the port used by the backup SSHD is not open. Opening this port is not done automatically since it could be a security risk. An optional command to open the port is provided and you are prompted to press enter to continue.
Start upgrade
A final prompt is provided before starting the upgrade. It gives information about the number of changes and the
estimated time to complete because once started, the upgrade process cannot be canceled. At this stage you can continue,
cancel or see additional details.
During the upgrade of certain libraries, some services have to be restarted. You have the option of allowing the services
to be restarted automatically during the upgrade. If you select ‘no’ here, you’ll be asked about the services that you
want to restart after each library upgrade.
Canonical makes changes to /etc/ssh/sshd_config for GCP images. As a result, during upgrade you’ll see a prompt
notifying you about the availability of a newer version of the sshd_config file. You’ll be asked if you want to keep the
existing modified version, use the default one from the new upgrade or take some other action.
Due to a possible bug in ucf, even if there are no changes in /etc/chrony/chrony.conf you’ll be shown a prompt
asking whether you want to keep the current version, use the default one from the new upgrade, or take some other
action.
An obsolete package is a package which is no longer available in any of the sources for apt. Usually it is safe and
recommended to remove obsolete packages. But before doing so you’ll be asked if you wish to remove them and you’ll
have the option to select from yes, no and more details.
Finally, a restart will be necessary for some parts of the upgrade to be applied. If you select ‘no’, you can use /var/run/reboot-required.pkgs to check for the packages that need a reboot.
Set hostname

The hostname of GCE instances can be set using multiple methods. Google’s preferred method is to use its DHCP service, which requires you to choose a fully qualified domain name (FQDN), e.g. test123.test.com. If you don’t want an FQDN, or if you want to use a consistent method for assigning hostnames across clouds, you can use a set-up tool like cloud-init to set your hostname.
Both these methods are described here. Also, due to a recent hostname-related update on GCP, you might have to make
some additional changes for GCE images that use Ubuntu 24.04 LTS and later. These are explained at the end.
Using Google’s DHCP service

By default, Google’s DHCP service sets the hostname to an automatically generated internal DNS name.
To set your own custom name, follow the instructions given in Create a VM instance with a custom hostname. In this
case, the DHCP service will additionally provide the custom name and will prioritize it to be the default hostname.
However, as mentioned earlier, the custom name needs to be an FQDN.
Using cloud-init
cloud-init uses the hostname command to programmatically set the hostname. You need to configure its metadata with
the required hostname and use the gcloud compute instances add-metadata command:
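A sketch; when the configuration comes from a local file, the --metadata-from-file form of the flag is used:

gcloud compute instances add-metadata INSTANCE_NAME \
    --metadata-from-file user-data=FILENAME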
Here INSTANCE_NAME is your VM name and the metadata is specified using a KEY=VALUE pair. For instance,
the metadata could be specified as ‘user-data=FILENAME’, where FILENAME is the local path to a file that contains
the desired cloud-init configurations. Include the desired hostname in that user-data file:
#cloud-config
hostname: test123
For more details about this, see Set Hostname in the cloud-init documentation.
Ubuntu 24.04 LTS and later

In GCE images that use Ubuntu 24.04 LTS or later, the /etc/hostname file is no longer present by default, and the cloud-init key create_hostname_file is set to false.
Due to the way the underlying hostname command works, whenever a user or tool (such as cloud-init) tries to set
the hostname on a system without /etc/hostname, it will only be set transiently and will be overwritten by Google’s
DHCP service. To avoid this, you’ll need to set create_hostname_file to true in the user-data file:
#cloud-config
hostname: test123
create_hostname_file: true
Another scenario where this new default can create inconsistencies is in the case of a server farm with images spanning
the Ubuntu 24.04 LTS boundary (i.e. both 24.04+ and 23.10-). In this case, if you want a consistent file system
layout and hostname style across all images, then you’ll have to either remove the /etc/hostname file from the earlier
versions or add it to the later versions.
Set the cloud-init key create_hostname_file to false and ensure that /etc/hostname is deleted during or after
first boot. So the user-data file will need:
#cloud-config
create_hostname_file: false
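A fuller sketch of this option; the runcmd entry is an assumption and simply removes any pre-existing file on first boot:

#cloud-config
create_hostname_file: false
runcmd:
  - [rm, -f, /etc/hostname]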
Alternatively, to keep /etc/hostname on all the images, including the later versions, the user-data file will need:

#cloud-config
hostname: test123
create_hostname_file: true
Using Kubernetes
This how-to guide gives you instructions for using Ubuntu Pro on your Kubernetes cluster.
Google does not have Ubuntu Pro image offerings for GKE (Google Kubernetes Engine) nodes as yet, i.e. you cannot
choose Ubuntu Pro images for GKE nodes. GKE does not support custom images for the nodes and neither does it
allow post-deployment customization of node VMs.
“Modifications on the boot disk of a node VM do not persist across node re-creations. Nodes are re-created
during manual upgrade, auto-upgrade, auto-repair, and auto-scaling. In addition, nodes are re-created
when you enable a feature that requires node re-creation, such as GKE Sandbox, intranode visibility, and
shielded nodes.”
—GKE docs
Since there’s no mechanism to enable Ubuntu Pro or pre-bake the UA token in a specific cluster, a managed Pro
Kubernetes cluster in GKE is not currently possible.
So one option to get an Ubuntu Pro based Kubernetes cluster is to manually deploy and manage Kubernetes on Ubuntu
Pro VMs in GCE.
Create a few Ubuntu Pro VMs for your Kubernetes cluster - say k8s-worker-1 and k8s-worker-2 to act as worker
nodes and k8s-main for the control plane.
If you want to create them from the Google Cloud console, refer to Create an Ubuntu Pro instance. Alternatively, you can use the gcloud CLI tool to create the VMs:
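A sketch (the machine type, zone and the Pro 22.04 image family are illustrative choices):

gcloud compute instances create k8s-main k8s-worker-1 k8s-worker-2 \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --image-family=ubuntu-pro-2204-lts \
    --image-project=ubuntu-os-pro-cloud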
Install Kubernetes
You can use MicroK8s to meet your Kubernetes needs. SSH into each node and install the snap:
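For example:

sudo snap install microk8s --classic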
Create a cluster
Use the microk8s add-node command to create a cluster out of two or more MicroK8s instances. The instance on
which this command is run will be the cluster’s manager and will host the Kubernetes control plane. For further details,
refer to the MicroK8s clustering doc.
1. On k8s-main run:
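For example:

sudo microk8s add-node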
On completion, it’ll give instructions for adding another node to the cluster:
From the node you wish to join to this cluster, run the following:
microk8s join 10.128.0.24:25000/bde599439dc4182f54fc39f1c444edf3/9713e9c1c063
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 10.128.0.24:25000/bde599439dc4182f54fc39f1c444edf3/9713e9c1c063 --worker
[...]
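2. On k8s-worker-1, run the join command from that output with the --worker flag:

sudo microk8s join 10.128.0.24:25000/bde599439dc4182f54fc39f1c444edf3/9713e9c1c063 --worker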
This will add k8s-worker-1 to the cluster as a worker node. Now, repeat these two steps for each worker node, i.e.
run microk8s add-node on k8s-main and use the new token that is generated to add k8s-worker-2 to the cluster.
Use the kubectl get nodes command in the control plane VM (k8s-main) to check that the nodes have joined the cluster:
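With MicroK8s, the bundled kubectl can be used, for example:

microk8s kubectl get nodes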
You can also check the cluster-info using the kubectl cluster-info command on k8s-main:
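For example:

microk8s kubectl cluster-info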
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You can access your Pro Kubernetes cluster from any working environment with a Kubernetes client. For this you’ll
need to allow external access to the control plane VM and also get the relevant kubeconfig file.
HTTPS traffic access - On your Google Cloud console, select the k8s-main instance and in the details page, go to Edit > Networking > Firewalls and enable Allow HTTPS traffic.
Kubernetes port access - Allow access to the Kubernetes port (16443 - found in response to the kubectl
cluster-info command above), by creating a firewall rule in the VPC firewall rules. For instructions on how to
do that, refer to the Google Cloud VPC docs.
To access the cluster from your local workstation, you’ll need to copy the appropriate kubeconfig file from your control
plane VM. But before doing that, since you’ll be connecting to the VM using its external IP address, you’ll also have
to ensure that the file’s certificate is valid for the external IP address.
Update certificate - In your control plane VM, edit the /var/snap/microk8s/current/certs/csr.conf.
template file to add the VM’s external IP address in the “alt_names” section. The external IP address can be obtained
from the GCE VM Instances page.
...
[ alt_names ]
DNS.1 = kubernetes
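After adding an entry for the external IP under [ alt_names ] (for example IP.3 = EXTERNAL_IP), refresh the MicroK8s certificates and print the kubeconfig that you will copy to your workstation. The refresh-certs flag differs between MicroK8s versions, so treat this as a sketch:

sudo microk8s refresh-certs --cert server.crt
sudo microk8s config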
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate>
    server: https://10.128.0.24:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: <username>
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: <username>
  user:
    token: <token>
Copy this to your local workstation as ${HOME}/.kube/config. Replace the server’s private IP address with the
external IP address and save it.
You now have an Ubuntu Pro Kubernetes cluster running in GCE. You should be able to access it from your local
workstation, using a Kubernetes client. To check the access, run:
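For example, with kubectl installed locally and the kubeconfig in place:

kubectl get nodes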
This will show you details about your cluster nodes. You can verify the Pro subscription on each of the provisioned
nodes by running pro status on them.
Contribute to these docs

Minor changes
If you find a problem that you can fix and it’s a small change, you can use the Edit this page on GitHub link at the bottom of the relevant page to edit it directly on GitHub. When you are done with your edits, select Commit changes... at the top right. This will help you create a new branch and start a pull request (PR). Use Propose changes to submit the PR. We will review it and merge the changes.
Use the Give feedback button at the top of any page to create a GitHub issue for any suggestions or questions that you
might have.
New content
When adding new content, it’s easier to work with the documentation on your local machine. For this, you’ll need make
and python3 installed on your system. Once you’ve made your changes, ensure all checks have passed and everything
looks satisfactory before submitting a pull request (PR).
If you are working with these docs for the first time, you’ll need to create a fork of the ubuntu-cloud-docs repository
on your GitHub account and then clone that fork to your local machine. Once cloned, go into the ubuntu-cloud-docs
directory and run:
make install
This creates a virtual environment and installs all the required dependencies. You only have to do this step once and
can skip it the next time you want to contribute.
Use the make run command to build and serve the docs at http://127.0.0.1:8000 or equivalently at http://localhost:8000. This gives you a live preview of the changes that you make (and save), without the need for a rebuild:
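For example:

make run PROJECT=google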
Setting the PROJECT parameter to google ensures that the documentation set for Ubuntu on GCP gets built. This
parameter is needed to distinguish between the different documentation sets present in the repository.
Create content
Choose the appropriate folder for your content. The folders within each project are mapped to the Diátaxis categories
of tutorial, how-to guides, explanation and reference. If required, the categories can have subcategories as well, as
shown in the tree structure below. Also, each folder includes an index.rst file, which acts as a landing page for the
folder.
project/
├── tutorial
├── how-to-guides/
│   ├── subcategory-one/
│   │   ├── index.rst
│   │   ├── page-one.rst
│   │   ├── page-two.rst
│   │   └── page-three.rst
│   ├── subcategory-two/
│   │   ├── index.rst
│   │   ├── page-one.rst
│   │   ├── page-two.rst
│   │   └── page-three.rst
│   └── index.rst
├── explanation
├── reference
└── index.rst
If your required category or subcategory is absent, create them using the instructions given below. Then add your
content by creating a new page.
.. toctree::
:maxdepth: 2
subcategory-one/index
Subcategory two <subcategory-two/index>
page-one-file-name
.. toctree::
:maxdepth: 1
page-one-file-name
Page Two Title <page-two-file-name>
6. Update the index.rst file of the parent category by adding a reference to the newly created subcategory in its
toctree.
Before opening a PR, run the following checks and also ensure that the documentation builds without any warnings
(warnings are treated as errors in the publishing process):
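The usual targets are the spelling and link checks (the target names are assumptions based on the standard Canonical Sphinx tooling):

make spelling
make linkcheck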
If you need to add new words to the allowed list of words, include them in .custom_wordlist.txt.
Once all the edits are done, commit the changes and push them to your fork. From the GitHub GUI of your fork, select the commit and open a PR for it.
Explanation
If you have questions about our offerings on Google Cloud, about Google’s ‘guest agents’ on Ubuntu, about the security
features available, or if you are wondering about the lifetime of any image, then this is the place to look.
GCE Images
For each active Ubuntu release, at least two image variants are created for GCE:
• Base images that contain a full Ubuntu development environment
• Minimal images that have a smaller footprint than base images, and are designed for production instances that
will never be accessed by a human
For the LTS releases from 22.04 onwards, we also have:
• Accelerator images that contain the packages needed to run accelerator workloads on advanced GPUs
For the Ubuntu Pro offering, we have:
• Ubuntu Pro images created for 16.04, 18.04, 20.04, 22.04, and 24.04
• Ubuntu Pro FIPS images created for 18.04 and 20.04
GKE images
GKE is Google Cloud’s Kubernetes offering. Canonical produces node images for GKE that act as a base for running
end user pods. These node images include a kernel that is optimized for use in the GKE environment (linux-gke),
as well as custom NVIDIA drivers for workloads that wish to leverage GPU acceleration. Further details of the node
images available for GKE can be found in Google’s documentation about GKE node images.
google-guest-agent
This package is installed on Ubuntu images to facilitate the different platform features available in GCP. It’s written in
Go and can be described as having two main components:
1. The google-metadata-script-runner binary, which enables users to run bespoke scripts on VM startup and
VM shutdown
2. The daemon, which handles the following on the VM:
• SSH and account management
• OS Login (if used)
• Clock skew
• Networking and NICs
• Instance optimizations
• Telemetry
• Mutual TLS Metadata Service (mTLS MDS)
gce-compute-image-packages
This package (written in BASH) is a collection of different configuration scripts that are dropped into the .d directories
of the following:
• apt
• dhcp
• modprobe
• NetworkManager/dispatcher
• rsyslog
• sysctl
• systemd
google-compute-engine-oslogin
Written in a mixture of C and C++, this package is responsible for providing GCP’s OS Login to Ubuntu VMs. At a
high level it can be described as providing the following:
• Authorized Keys Command: provides SSH keys (from an OS Login profile) to sshd for authentication
• NSS Modules: support for making OS Login user/group information available to the VM using NSS (Name
Service Switch)
• PAM Modules: provides authorization (and authentication if 2FA is enabled) to allow the VM to grant ssh
access/sudo privileges based on the user’s allotted IAM permissions
google-osconfig-agent
This package is written in Go and is installed to facilitate GCP’s OS Config (also known as “VM manager”). At a high
level, OS Config supports the following:
• OS inventory management
• Patch
• OS policies
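To check which of these guest packages are installed on a given image, and at what version, you can query dpkg, for example:

dpkg -l | grep -E 'google-guest-agent|gce-compute-image-packages|google-compute-engine-oslogin|google-osconfig-agent'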
To create and launch confidential compute enabled instances on GCE, refer to:
• Create an Intel® TDX based confidential computing VM
• Create an AMD SEV based confidential computing VM
Image retention policy

At any given time, there will be only one active image per Ubuntu variant, with all the other images of that variant being either deprecated or deleted. In this policy:
• EOL refers to when an interim Ubuntu release (for example, Lunar Lobster 23.04) has reached end-of-life,
and will no longer enjoy support
• EOSS refers to when an LTS Ubuntu release (for example, Jammy Jellyfish 22.04 LTS) has reached “End
of Standard Support” but will remain supported under Ubuntu Pro