
AZ-104 Azure Administrator Practice Test 1 - Results

Question 1
You have three resource groups in your Azure subscription.

You deploy an Azure Virtual Machine and its related resources in the rg-dev-01 resource group.
Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.

 Yes, No
 No, No
 No, Yes
 Yes, Yes
Overall explanation
Short Answer for Revision:
Even linked resources can be moved to other resource groups, although moving them separately is not a good practice. Statement 1 -> No.
The move operation does not depend on the running status of the VM. Statement 2 -> No.

Detailed Explanation:
Statement 1:
The given resources are interrelated. The virtual machine connects to the OS disk. The network interface, attached to the VM, connects the VM to other resources in the VNet and to the outside internet with the help of a public IP. The network interface is also attached to the network security group, which decides what traffic to allow or deny through the card.
But although the resources have links with each other, you can move any of those
resources to a different resource group in the same subscription.

Although you can move individual resources to a different resource group, it is a good
practice to move all the related resources of the VM together to ensure you don’t
encounter any unexpected problems with the VM. However, statement 1 is still No.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/move-vm#use-the-azure-portal-to-move-a-vm-to-another-resource-group
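For reference, a move of this kind can also be scripted. Below is a minimal Azure PowerShell sketch, assuming hypothetical names (the VM and its resources in rg-dev-01, moving to rg-dev-02); it moves the related resources together, per the best practice above:

# Collect the VM and its related resources in rg-dev-01 (hypothetical names)
$ids = (Get-AzResource -ResourceGroupName "rg-dev-01").ResourceId
# Move everything together to rg-dev-02; the cmdlet validates the move first
Move-AzResource -DestinationResourceGroupName "rg-dev-02" -ResourceId $ids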

Statement 2:
To answer statement 2, let’s test the statement by trying to move a VM, in a running
status, to the rg-dev-03 resource group.

The validation succeeds, so the move operation is not dependent on the running
status of the VM.
Statement 2 -> No.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/move-vm
Option B is the correct answer.
GitHub Repo Link: Move Azure VM and related resources to a different resource
group

Resources
Move Azure VM and related resources to a different resource group
Domain
Deploy and manage Azure compute resources

Question 2
Following are the resources deployed in your Azure subscription.

a. An App Service app running in an App Service plan.
b. The virtual network vnet01 with subnet01.
c. Azure Firewall (with a public IP configuration) deployed in the AzureFirewallSubnet in vnet01.
Select and place the steps you would perform so that all outbound traffic from the app
is inspected by the Azure Firewall and the traffic is allowed/blocked based on the
firewall rules.

1. Delegate the subnet to Microsoft.web/serverfarms
   Create a route table
   Add a route to route the traffic from subnet01 to Azure Firewall
   Associate the route table with subnet01
2. Integrate the app with vnet01/subnet01
   Create a route table
   Add a route to route the traffic from subnet01 to Azure Firewall
   Associate the route table with subnet01
3. Integrate the app with vnet01/subnet01
   Create a route table
   Add a route to route the traffic from subnet01 to Azure Firewall
   Associate the route table with AzureFirewallSubnet
4. Integrate the app with vnet01/subnet01
   Delegate the subnet to Microsoft.web/serverfarms
   Create a route table
   Associate the route table with AzureFirewallSubnet
Overall explanation
Short Answer for Revision:
To enable your apps to access resources in or through a virtual network, first,
integrate the app into the virtual network subnet. But just integrating the app into the
subnet doesn't ensure the app traffic is routed through the Firewall.
Next, overwrite the default routes by creating a custom route (that routes the app
traffic to Azure Firewall) in a route table and associate the route table to the
integration subnet (where the app is integrated).

Detailed Explanation:
Let’s understand what we already have here. We have an App Service app, a VNet
with 2 subnets. Azure Firewall is already deployed into the AzureFirewallSubnet. All we
need now is to ensure that the outbound traffic from the app is routed through the
Azure Firewall.

To let the App Service route all outbound traffic through the Azure Firewall deployed
in the VNet, integrate the app into a different subnet in the same virtual network.

Integrating the app with the virtual network gives your app access to resources in
your virtual network, like Azure Firewall, but it doesn't grant inbound private access to
your app from the virtual network resources.
If you need private access to an app from a private network, check out private
endpoints.
Reference Link: https://learn.microsoft.com/en-us/azure/app-service/overview-vnet-
integration
So, box 1 -> Integrate the app with vnet01/subnet01.

Now, when you integrate the app into a virtual network subnet, automatically the
subnet is delegated to Microsoft.web/serverfarms . So explicitly performing this step
is not needed.

Since options A and D include this unnecessary step, they leave out a required one. So, they are incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/app-service/configure-vnet-
integration-enable#prerequisites

Before proceeding further, I highly recommend you check the attached lab files and
run commands to simulate the Azure environment. After you set up the environment,
you will realize that we have an Application rule collection defined within the Azure
Firewall. This rule allows access only to one website through the Azure Firewall. All
other outbound traffic to the Internet will be denied.
Coming back to the question at hand, integrating just the app into the VNet doesn’t
automatically route the traffic through the Azure Firewall. You can check the current
behavior by using the curl command in the console window of your App Service app.
Access to ravikirans.com is not allowed through the firewall, yet an HTML response is
received successfully.

This indicates that the request from the App Service is directly routed through to the
Internet, and is not routed through the firewall. To route the request through the
firewall, create a route table.
So, box 2 -> Create a route table.

After the route table is created, create a route that matches all address prefixes to route all traffic from the app to the Azure Firewall. Next, get the private IP address of the Azure Firewall service and set it as the Next hop IP address. Now all outbound traffic from the app is routed through the Azure Firewall.
So, box 3 -> Add a route to route the traffic from subnet01 to Azure Firewall.

And since the app is integrated with subnet01, you need to associate this route table
with subnet01, not the AzureFirewallSubnet, so originating traffic from the app follows
the custom route we defined in the previous step.

So, box 4 -> Associate the route table with subnet01.
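For reference, here is a minimal Azure PowerShell sketch of boxes 2 to 4, assuming hypothetical names (resource group rg-dev-01, firewall fw01, route table rt-app):

# Get the firewall's private IP (the next hop for box 3)
$fw = Get-AzFirewall -Name "fw01" -ResourceGroupName "rg-dev-01"
$fwIp = $fw.IpConfigurations[0].PrivateIPAddress

# Box 2: create a route table
$rt = New-AzRouteTable -Name "rt-app" -ResourceGroupName "rg-dev-01" -Location "eastus"

# Box 3: route all outbound traffic (0.0.0.0/0) to the firewall
$rt | Add-AzRouteConfig -Name "all-to-fw" -AddressPrefix "0.0.0.0/0" `
  -NextHopType "VirtualAppliance" -NextHopIpAddress $fwIp | Set-AzRouteTable

# Box 4: associate the route table with the integration subnet, subnet01
$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-dev-01"
$prefix = (Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01").AddressPrefix
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01" `
  -AddressPrefix $prefix -RouteTable $rt | Set-AzVirtualNetwork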

Now, if you run a curl command in the console for the website ravikirans.com, Azure
Firewall blocks the traffic, and you will not receive any output.
However, if you run a curl command for the site that’s allowed through the Azure
Firewall, the firewall allows the traffic, and you will get a successful response.

Now, as you might have guessed, this IP is the public IP associated with the Azure
Firewall. If you disassociate the route table from the subnet, the traffic is not routed
through the Firewall and all traffic is allowed. In this case, the Azure App service uses
one of the outbound IP addresses to make the request.
Option B is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/app-service/network-
secure-outbound-traffic-azure-firewall
GitHub Repo Link: Steps to route traffic from App Service app through Azure
Firewall

Resources
Steps to route traffic from App Service app through Azure Firewall
Domain
Implement and manage virtual networking

Question 3
You are planning to create an internal load balancer in Azure for your workloads.
Which of the following resources needs to be compulsorily created while/before
creating the load balancer?
 Public IP address
 Backend pool
 Virtual Network
 A load balancer rule

Overall explanation
Short Answer for Revision:
For public/internal load balancers, only frontend IP configuration is required. Other
associated resources can be created after the load balancer is created.
For public load balancers, you define a public IP address in the frontend IP
configuration. For internal load balancers, you define a virtual network in the frontend
IP configuration.

Detailed Explanation:
First, let’s try creating a public load balancer in the Azure portal to understand which
resources are necessary for the load balancer to be created.
In the portal, you wouldn't be able to move to the next step if a frontend IP configuration is not defined.
For a public load balancer, you need to create a public IP address in the frontend IP
configuration.

Once you create the frontend IP configuration, you will realize that creating other load balancer resources like the backend pool, load balancing rules, and health probes is optional, and Resource Manager will let you deploy the load balancer even without defining those resources.
For an internal load balancer, as you might expect, you need a virtual network in the frontend IP configuration rather than a public IP address, as internal load balancers distribute traffic inside a virtual network.

For internal load balancers too, the other resources can be defined after the load
balancer is created.
Option C is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#frontend-ip-configuration-
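As a quick illustration, the following Azure PowerShell sketch (hypothetical names) deploys an internal load balancer with nothing but a frontend IP configuration that references a subnet:

$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-dev-01"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01"

# Frontend IP configuration: a subnet (private IP), not a public IP
$fe = New-AzLoadBalancerFrontendIpConfig -Name "fe-internal" -SubnetId $subnet.Id

# No backend pool, rules, or probes are required at creation time
New-AzLoadBalancer -Name "lb-internal" -ResourceGroupName "rg-dev-01" `
  -Location "eastus" -Sku "Standard" -FrontendIpConfiguration $fe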

Resources
Requisite resources for creating an internal load balancer
Domain
Implement and manage virtual networking

Question 4
Using Azure Bicep, you need to create a resource group and deploy an Azure Virtual
Network to the resource group.

In Visual Studio Code, you have two Bicep files:

a. A main.bicep file defines a resource group.
b. A vnet.bicep file defines a virtual network that's deployed to the resource group.

As shown above, the main.bicep file defines a module that references the vnet.bicep file. What property would you add to the module to fix the error?
 Params
Explanation
The params property is needed in a module only when you need to pass parameters
to the referenced Bicep file. Since there are no parameters defined in
the vnet.bicep file, we can easily exclude option A.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/create-resource-group#create-resource-group-and-resources

 Properties - Your answer is incorrect
Explanation
The keyword ‘Properties’ is not a property of a module. It is generally seen in the resource section, for example, in a storage account resource. Option B is incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules#parameters

 dependsOn
Explanation
Similar to ARM templates, the dependsOn property is used to define an explicit resource dependency. It is usually used for a resource (as a resource property), not a module.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/resource-dependencies#explicit-dependency

 Scope
Explanation
Let’s go through the Bicep file and understand its structure. One of the first concepts
you need to know in Bicep is that you can define a target scope for the Bicep file.
By default, the target scope is set to resourceGroup for any Bicep file. That’s useful if
you are deploying only resources like storage accounts or virtual networks to a
resource group.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/file#target-scope

In our example, there is no target scope defined in the vnet.bicep file, so Azure Bicep assumes the default resourceGroup value for the target scope. Consequently, when this file is deployed, Azure Bicep deploys a virtual network to the resource group.

However, the default scope wouldn’t work while creating a resource group as there is
no concept of nested resource groups in Azure. So, if you comment out the
targetScope element in the main.bicep file, Bicep assumes the default scope, which is
resource group, and complains that it expects a scope of subscription for creating the
resource group.

Next, let’s talk about modules. Bicep enables you to organize deployments into modules. A module is nothing but a Bicep file deployed from another Bicep file. In our case, vnet.bicep is a module deployed from main.bicep by referencing its path.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
bicep/modules

To understand the errors in the module and what we are missing, let’s try adding a resource snippet for the module. To do so, begin typing module… and select the module declaration.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code?tabs=CLI#add-resource-snippet

We get the below output. As you can observe, the name property is the only required property of a module, which our module already has. So why is there still an error?

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules#definition-syntax

The reason is the resource group is deployed at the subscription scope. This scope is
not valid for the module which defines a virtual network.
So, the need is to add a scope property to the module and set its value to the
symbolic name of the resource group. The errors are gone, and the Bicep file is good
to deploy (check the related lecture video).

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules#set-module-scope
https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/create-resource-group#create-resource-group-and-resources
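To make the fix concrete, here is a minimal sketch of what the corrected main.bicep might look like (names and API version are illustrative, not the document's actual files):

// main.bicep, deployed at subscription scope
targetScope = 'subscription'

resource rg 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: 'rg-dev-01'
  location: 'eastus'
}

// The module deploys a VNet, so its scope must be the resource group
module vnet 'vnet.bicep' = {
  name: 'vnetDeployment'
  scope: rg // symbolic name of the resource group resource
}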

Option D is the correct answer.


Overall explanation
Short Answer for Revision:
The targetScope property defines a scope for the entire Bicep file. The resource group is deployed at subscription scope in the main.bicep file. However, the subscription scope is not valid for the module (referencing vnet.bicep) as it deploys a virtual network. So, we need to change the scope for the module to the resource group using the scope property.

GitHub Repo Link: Deploy a resource group and a VNet using Azure Bicep

Resources
Deploy a resource group and a VNet using Azure Bicep
Domain
Deploy and manage Azure compute resources

Question 5
There are three blob containers source1, source2, and source3 with a Public access
level of Container, Blob, and Private, respectively, in the strdev011 storage account.
There are another two blob containers target1 and target2, with a Public access
level of Container and Private, respectively, in the strdev012 storage account.

There is a backup file in all the source containers. Which of the following azcopy commands help you copy the backup file to either target1 or target2? Select two options.
Note: Assume the user running these commands has no role assignments on the
storage account.
 azcopy copy 'https://strdev011.blob.core.windows.net/source1' 'https://strdev012.blob.core.windows.net/target1<<SAS token>>' --recursive
 azcopy copy 'https://strdev011.blob.core.windows.net/source2<<SAS token>>' 'https://strdev012.blob.core.windows.net/target1' --recursive
 azcopy copy 'https://strdev011.blob.core.windows.net/source2/bak.exe' 'https://strdev012.blob.core.windows.net/target2<<SAS token>>' --recursive
 azcopy copy 'https://strdev011.blob.core.windows.net/source2' 'https://strdev012.blob.core.windows.net/target2<<SAS token>>' --recursive
Overall explanation
First, let’s understand what the access levels like Container, Blob, and Private for a
container means, so it will be helpful to deduce if we need a SAS token to access
either the source or the target container.
Container access level means we can read the containers and the blobs
anonymously (even without any explicit permissions). So, any user can connect to the
container with a Container access level via Azure Storage Explorer and read the blobs
within those containers.

Blob access level means anonymous users can read only the blobs, not the
containers. So, unless you have permissions (either via Microsoft Entra ID, or storage
account keys, or SAS), you cannot connect to a container with a Blob access level in
Azure Storage Explorer.
However, since a container with a Blob access level allows anyone to read the blobs,
you can download the individual blob file from a browser anonymously (check the
related lecture video).

Finally, with a Private access level, you can neither connect to the container nor read/download its blobs.

With the understanding of container access levels out of the way, we can now proceed
to answer the question.
All the given commands try to copy the file from the source container to the target
container.

Since even the Container access level provides only read access to blobs and containers, both target1 and target2 need explicit permissions (like a SAS token), as we need to write a file to the target container. So, option B, whose target URL carries no SAS token, is incorrect.
The source1 container does not need a SAS token to read the backup file, as its Container access level provides read access to both the container and its blobs. So, option A is one of the correct answers.
The Blob access level of source2 means anonymous users can read only the blobs, not the container itself. Option D addresses the container, so it needs a SAS token on source2 and is incorrect.
Option C, however, addresses the blob (bak.exe) directly, which the Blob access level of source2 allows anonymously, so no SAS token is needed on the source. Option C is the other correct answer.

Note 1: Don’t forget to check the related lecture video for demos.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-
read-access-configure

Note 2: There is a question (Related lecture video title: Upload the backup file
to storage account using azcopy ) in Practice Test 1, where we use azcopy to copy a
local directory to a storage account. In that question, the directory with the files is
copied to the storage account (option B).
In this question, we are copying a storage container, not a directory. Since there is no
concept of nested containers, only the files in the container are copied.
Reference link: https://stackoverflow.com/questions/3183857/how-to-create-a-sub-
container-in-azure-storage-location

GitHub Repo Link: Use azcopy to copy data with SAS and different container access
levels - PS command.ps1

Resources
Use azcopy to copy data with SAS and different container access levels
Domain
Implement and manage storage

Question 6
User One with the Azure RBAC role Contributor at the resource group scope can
access data in Azure blobs using the storage account key via shared key authorization
in the Azure portal.
Select and place (in any order) the steps you would perform to:
1. Disable key-based authorization only for User One.
2. Enable read access to data in Azure blobs via Microsoft Entra ID authentication in
the Azure portal for User One.

 Enable Default to Microsoft Entra authorization in the Azure portal
   Assign the Storage Blob Data Reader role
   Disable Allow storage account key access
 Remove the Contributor role
   Assign the Reader role
   Assign the Storage Blob Data Reader role
 Remove the Contributor role
   Assign the Storage Account Contributor role
   Assign the Storage Blob Data Reader role
 Enable Default to Microsoft Entra authorization in the Azure portal
   Assign the Reader role
   Assign the Storage Blob Data Contributor role
 Disable Allow storage account key access
   Assign the Reader role
   Assign the Storage Blob Data Reader role
Overall explanation
Depending on the user’s RBAC role, the Azure portal can use either of the two
authentication methods to grant access to blob data:
1. Access key method
2. Microsoft Entra ID user account method

When User One accesses blob data, the Azure portal checks if the user’s role has permission (Microsoft.Storage/storageAccounts/listkeys/action) to read storage account keys. If the user's role has this permission, the portal uses the keys to grant access to blob data. That’s how the Owner and Contributor roles, which do not have any permissions in the DataActions section of their role definitions, can access the blob data (they have Microsoft.Storage/* in the Actions section).
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
built-in-roles#contributor
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#owner

The following built-in roles can read blob data using storage account access keys.

Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal#use-the-account-access-key
From the above image, we can conclude that the Storage Account Contributor role
cannot be one of the correct options (it provides access to read storage account keys,
which we want to disable). Option C is incorrect.

As long as User One has the Contributor role, the Azure portal always uses access
keys to grant access to blob data. So, step 1 -> Remove the Contributor role.
Next, to ensure the Azure portal always uses the Microsoft Entra ID authentication for
read access to blob data, assign any of these built-in roles that grant access to the
blob data.

Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory#azure-built-in-roles-for-blobs

Since we need to enable only read access to data, assigning the Storage Blob Data
Reader role is the best practice. Storage Blob Data Contributor role would assign
write and delete permissions too. Option D is incorrect.
So, step 2 -> Assign the Storage Blob Data Reader role.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
built-in-roles#storage-blob-data-contributor
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-reader

But the Storage Blob Data Reader role allows only read access to blobs. This role doesn't help the user navigate the storage account and the container in the Azure portal to reach the blob data. So, we need to assign the Reader role too. So, step 3 -> Assign the Reader role.

With both the Reader and the Storage Blob Data Reader roles, User One can navigate
the Azure portal and access the blob data using Microsoft Entra ID authentication
(requirement 2 satisfied).
Since we have removed the Contributor role, if User One switches the authentication method to access keys, an error is shown (requirement 1 satisfied).

In fact, if you need Microsoft Entra ID authentication to access blobs, you would need
to assign a role (can be custom, too) that grants read access to storage account
management resources (like the built-in reader role) and grants access to data in the
storage account (like the built-in Storage Blob Data Reader role). Check the 2nd and
3rd reference links.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#reader
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal#use-your-microsoft-entra-account
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory#data-access-from-the-azure-portal

Note: The order of steps does not matter.

Enabling Default to Microsoft Entra authorization in the Azure portal only ensures that the Azure portal defaults to Microsoft Entra ID authentication when the user accesses data in the Azure portal. If the user wants, they can still switch the authentication method to access keys. This option doesn't disable key-based access for User One.
Disabling Allow storage account key access will disable key-based authorization for everyone using the storage account. Per the question, we need to disable key-based authorization only for User One. Options A and E are incorrect.

Reference Links: https://learn.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent?tabs=portal#remediate-authorization-via-shared-key
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-
portal#default-to-microsoft-entra-authorization-in-the-azure-portal
Option B is the correct answer.

Resources
Microsoft Entra ID authentication for blob data
Domain
Implement and manage storage

Question 7
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have two virtual machines, vm01 & vm02, connected to two different subnets in a
virtual network in the East US region. A SQL Server hosting a SQL database is also
deployed in the same region. Users connect to the VMs using the Azure Bastion
service. The VMs do not have any instance-level public IP address.

You need to allow traffic to SQL Server only from the private IP of vm01.
Solution: You configure the SQL Server firewall to:
a. Allow only the private IP of vm01.
b. Enable Allow Azure services and resources to access this server.

Does the solution meet the stated goal?


 Yes
 No
Overall explanation
Short Answer for Revision:
There is no public IP address resource for the VMs. In this case, the Azure platform assigns the default outbound access IP for all outgoing requests to the Internet, including those to Azure SQL Server. Since vm01's private IP is never used to reach the server, the firewall rule has no effect.
The second part of the solution allows all resources with valid credentials within the Azure boundary to access the server. It has nothing to do with restricting access to the private IP of vm01.

Detailed Answer:
Let’s first implement the solution and test if it works or not. To do so, navigate to the
Networking section of Azure SQL Server.

Under the Firewall rules section, add the private IP address of vm01. Once done, log in
to the VM via the Azure Bastion service. To follow the explanations, download and
install the SQL Server Management Studio on vm01.
From the management studio, try connecting to SQL Server using the Server name
and the admin credentials. You get a message that the client IP does not have access
to the server.
Still, let’s go ahead and sign into Microsoft Azure and add the client IP address to the
firewall rule in Azure SQL Server.
Once you add the firewall rule, you will be able to access the server.
But wait, what is this IP address 40.121.200.139? From the question, we know that
vm01 does not have any instance-level public IP address. We only have a public IP for
Azure Bastion which we use to connect from the local computer. From there, the
connection happens solely with the private IP.
So where does this IP, which looks like a public IP, come from?
This is the default outbound access IP assigned by Azure to vm01 for outbound
connectivity since the VM is not assigned with any public IP address. So, the IP
address you added earlier in the firewall rule is the default IP address provided by
Azure.
But note that default outbound access to the internet will be turned off from 30th September 2025 for new VMs. This default access has some drawbacks, so Microsoft wants you to configure outbound access explicitly for resources in a VNet.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-
services/default-outbound-access
https://azure.microsoft.com/en-us/updates/default-outbound-access-for-vms-in-azure-
will-be-retired-updates-and-more-information/

Due to how outbound access works in Azure, we can never use a VM's private IP
address in a SQL Server firewall and expect the access to be restricted only to that
VM.
We have digressed a lot, as adding the outbound access IP to the firewall rule is never part of the solution. So, let's remove that and retain only the private IP in the firewall rule. To implement the complete solution, let's also enable Allow Azure services and resources to access this server.
We can observe that the login is successful to SQL Server, without the requirement to
add any firewall rule. But if the Allow Azure services and resources to access this
server feature is enabled, Azure will allow any resource within its boundary to access
the server, as long as valid credentials are presented. It does not restrict access only
to vm01.
To check the IP used by the VM to connect to the server, you can run the below SQL
command.
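One way to run that check (a sketch, assuming the SqlServer PowerShell module is installed on vm01; server name and credentials are hypothetical):

# Ask Azure SQL which client address it sees for this session
Invoke-Sqlcmd -ServerInstance "sqlsrv-dev-01.database.windows.net" -Database "master" `
  -Username "sqladmin" -Password $pwd `
  -Query "SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;"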
This is a public IP, or specifically, the default outbound access IP assigned to the VM
by the Azure platform.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-sql/database/network-access-controls-overview?view=azuresql#allow-azure-services
With the given solution, the vm01 cannot access the SQL Server using its private IP
address. Option No is the correct answer.
GitHub Repo Link: Configure Azure SQL firewall to allow access only from a VM's
private IP

Resources
Configure Azure SQL firewall to allow access only from a VM's private IP - 1
Domain
Implement and manage virtual networking

Question 8
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have two virtual machines, vm01 & vm02, connected to two different subnets in a
virtual network in the East US region. A SQL Server hosting a SQL database is also
deployed in the same region. Users connect to the VMs using the Azure Bastion
service. The VMs do not have any instance-level public IP address.

You need to allow traffic to SQL Server only from the private IP of vm01.

Solution: You configure the virtual network service endpoint for Microsoft.Sql service on the subnet of vm01.

Does the solution meet the stated goal?
 Yes
 No
 No
Overall explanation
Short Answer for Revision:
Service endpoints allow resources in the subnet to use their private IP to
communicate with the service. But note that the service is still reached at its public
endpoint.
Enabling the service endpoint on subnet01 allows resources in that subnet to
communicate with Azure SQL Server. However, vm02, which is outside the subnet,
cannot communicate with Azure SQL.

Detailed Answer:
We will approach this question in a similar manner, by first implementing the given solution and testing whether it meets the stated goal.
To configure virtual network service endpoints for Azure SQL Server (or any other Azure service), we need to perform two steps:
a. Enable the service endpoint on the network side.
b. Configure the SQL Server firewall on the service side.

To enable the service endpoint on the network side, navigate to the subnet where
vm01 is deployed. Under the Service endpoints section, select Microsoft.Sql service
to enable a service endpoint for Azure SQL on the subnet.
Next, on the service side, navigate to the Networking section of the Azure SQL Server resource. Under Virtual networks, add a virtual network rule to allow public access to the service only from the subnet of vm01. Since we have already enabled the service endpoint on the subnet for Azure SQL in the previous step, the endpoint status is displayed as Enabled. This step effectively locks down Azure SQL access to resources in this subnet.
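The same two steps can be scripted. A minimal Azure PowerShell sketch (hypothetical names):

$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-dev-01"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01"

# Network side: enable the Microsoft.Sql service endpoint on subnet01
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01" `
  -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Sql" | Set-AzVirtualNetwork

# Service side: allow access only from subnet01 via a virtual network rule
New-AzSqlServerVirtualNetworkRule -ResourceGroupName "rg-dev-01" -ServerName "sqlsrv-dev-01" `
  -VirtualNetworkRuleName "allow-subnet01" -VirtualNetworkSubnetId $subnet.Id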

Now, let’s log in to vm01 and try connecting to Azure SQL using SSMS. We can
observe that the login is successful. Let’s run the SQL command to check the IP
address used by the VM to connect to the service.
As you can see, it is the private IP of vm01. But why? Because service endpoints enable connections to a service from specified subnets over a resource’s private IP. But do note that only the VM (from the subnet) uses a private IP to communicate with the service over the Microsoft backbone network. The server is still reachable at its public IP. To verify this, do an nslookup of the service to retrieve the IP address used.

Nevertheless, as vm01 communicates with the service using its private IP, the given solution meets the stated goal. Further, since the SQL Server firewall restricts access to subnet01 only, vm02, which is in subnet02, will not be able to access the server.
Option Yes is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-
network-service-endpoints-overview
GitHub Repo Link: Continue with the same set of resources as the previous
question.

Resources
Configure Azure SQL firewall to allow access only from a VM's private IP - 2
Domain
Implement and manage virtual networking

Question 9
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have two virtual machines, vm01 & vm02, connected to two different subnets in a
virtual network in the East US region. A SQL Server hosting a SQL database is also
deployed in the same region. Users connect to the VMs using the Azure Bastion
service. The VMs do not have any instance-level public IP address.

You need to allow traffic to SQL Server only from the private IP of vm01.

Solution: You configure a private endpoint for Azure SQL Server in subnet01.
Does the solution meet the stated goal?
 Yes
 No
Overall explanation
Short Answer for Revision:
Private endpoints create a network interface for the Azure service in the VNet. Therefore, the service also receives a private IP address from the VNet. So, unlike with service endpoints, both the VM and the service communicate using their private IP addresses.
Since any VM in the VNet can talk to any other resource within the same VNet, even vm02 can communicate with Azure SQL Server via the private endpoint.

Detailed Answer:
Before proceeding further, ensure you have removed the service endpoints created in
the previous question. I have already done that, so let’s go to the Private access
section and create a private endpoint.

Creating a private endpoint for a service creates a network interface with a private IP
assigned from the VNet. So, private endpoints inject an Azure service into a VNet,
bringing it effectively within a virtual network, so network resources can access them
as if they are part of the network.
So, while creating a private endpoint, you specify:
a. The name of the network interface,
b. The Azure service/resource to connect to, which in our case, is Azure SQL Server,
c. And the VNet and subnet to deploy.

Let’s deploy it in the same subnet as vm01. Private IP configuration defines how you
want to allocate the private IP address to the network interface card from the VNet.
Let’s leave the default and proceed to DNS.
Well, discussing DNS for private endpoints is really out of the scope of your exam.
Nevertheless, we will touch upon it sometime. For now, just know that creating a
private endpoint also creates a new private DNS zone.

After the private endpoint is created, login to vm01 and ensure that you can connect
to Azure SQL Server. Now run the same SQL command to know the IP address used to
connect to the server. It is vm01’s private IP. So, like service endpoints, private
endpoints enable access to the service using the VM's private IP.
Now, do an nslookup of the Azure SQL Server. You will see a private IP returned, which is the IP address of the private endpoint's network interface card. Unlike service endpoints, with private endpoints communication happens with the private IP of the Azure service.

This is possible because of the integration with private DNS. As discussed earlier, the
private endpoints receive a private IP from the VNet. So the private DNS configuration
is central to the functioning of private endpoints as they enable the private DNS
records to override the public records so traffic is kept completely private to the
virtual network.
But since the private endpoint is nothing but a network interface deployed within your
VNet, not only vm01 but any VM within the VNet can communicate with the network
interface using its private IP and, consequently, any VM within the VNet can
communicate with the Azure SQL Server.

So, if you log in from vm02, you can establish the connection without the need to add
any firewall rule.
Since even the private IP of vm02 can communicate with the Azure SQL Server, the
given solution does not meet the stated goal. Option No is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/private-link/private-
endpoint-overview
https://learn.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-sql-portal
https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-dns-integration
GitHub Repo: Use the commands provided for the 1st question in this question set

Note: On the contrary, if the question asks you to allow access to specific instances of
an Azure service from the VNet, a private endpoint is more suitable because it maps
an instance of a resource to each endpoint. As the service endpoint provides access
per subnet per service, it allows access to all instances of the service.
Reference Link: https://www.fugue.co/blog/cloud-network-security-101-azure-
private-link-private-endpoints

Resources
Configure Azure SQL firewall to allow access only from a VM's private IP - 3
Domain
Implement and manage virtual networking

Question 10
From Visual Studio Code, you publish the below app to the App Service app with two
deployment slots: Production, and Staging.
From the client manager, you receive a request to add one more line as shown below:

After you swap the staging slot with the production slot, you realize that the update is not successful in production, and you need to get your “last known good site” back.
Which of the following actions offers the best/easiest solution?
 Create another deployment slot and deploy the app to the slot
 Swap the slot with source: staging and target: production
 Redeploy the app
 Swap the slot with source: production and target: staging
Overall explanation
Per the question, you have already updated the code base and deployed the updated
code to the staging slot. So, currently, you have these two versions of the web app in
production and staging.

After the swap operation, this will be the expected outcome.
But since the update is not successful, you can just do another swap with source: staging and target: production to get your “last known good site” back in production.
Option B is the correct answer.
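For reference, the swap in option B can be performed with a single Azure PowerShell command (app and group names are hypothetical):

# Swap staging (now holding the last known good site) back into production
Switch-AzWebAppSlot -ResourceGroupName "rg-dev-01" -Name "app-dev-01" `
  -SourceSlotName "staging" -DestinationSlotName "production"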

Option D is incorrect. The slot that has the last known good site is in staging, so
staging will be the source and the target will be the production.

Option A is incorrect. Creating another deployment slot is not necessary. Further, we already have the production slot where the user traffic is flowing. So, swapping the production slot with the slot that has the last known good site is the best solution.

Option C is incorrect too. Redeploying means undoing the recent changes and deploying the app again, which may take longer than swapping, leading to some downtime. Swapping offers the best/easiest solution.
Reference Link: https://learn.microsoft.com/en-us/azure/app-service/deploy-staging-
slots
GitHub Repo Link: Azure App Service deployment slots

Resources
Azure App Service deployment slots
Domain
Deploy and manage Azure compute resources

Question 11
You have a virtual machine and its related resources in a resource group. A daily job
backs up the VM to a Recovery Services Vault.

After a few months, you no longer require the VM, so you delete the backup data in
the vault and try to delete the resource group and all its resources.
You were able to delete all resources in the resource group except the vault. What sequence of steps would you follow to delete the group and the vault?

 Delete the backup policy
   Disable soft delete for backups
   Delete backup data
   Delete resource group
 Undelete the restore points
   Disable soft delete for backups
   Delete backup data
   Delete resource group
 Disable soft delete for backups
   Delete the backup policy
   Delete backup data
   Delete resource group
 Undelete the restore points
   Delete backup data
   Disable soft delete for backups
   Delete resource group
Overall explanation
Short Answer for Revision:
By default, soft delete is enabled in all Recovery Services Vaults. So even if you delete the backup data, you will not be able to delete the vault, as the deleted items are in a soft-delete state. To delete the vault:
a. First, disable soft delete
b. Undelete the deleted items
c. Delete the backup data (permanent)
d. Delete the resource group.

Detailed Answer:
The Recovery Services Vault resource cannot be deleted until all backup data in the
vault is deleted. In the given case, even after deleting the backup data, you were not
able to delete the vault as, by default, the deleted items are moved to a soft delete
state.
We cannot manually remove the backup data in the soft delete state until the soft
delete retention period.
So first, disable soft delete for backups under properties -> Security Settings .

Next, undelete the deleted items to move them out of the soft-delete state.
Steps 1 and 2 -> Undelete the restore points and Disable soft delete for backups.

Now delete the backup data again. Since soft delete is disabled, the backup data is
deleted permanently.

Step 3 -> Delete backup data.

Since there is no backup data in the vault, deleting the resource group will delete the
recovery services vault without any issues.
Step 4 -> Delete the resource group.
Reference Link: https://learn.microsoft.com/en-us/azure/backup/backup-azure-
security-feature-cloud#permanently-deleting-soft-deleted-backup-items
Option B is the correct answer.
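The same sequence, sketched with Azure PowerShell (vault and group names are hypothetical):

$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-dev-01" -Name "vault01"

# Step 1: disable soft delete on the vault
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable

# Step 2: undelete the soft-deleted backup item
$item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM `
  -WorkloadType AzureVM -VaultId $vault.ID
Undo-AzRecoveryServicesBackupItemDeletion -Item $item -VaultId $vault.ID

# Step 3: delete the backup data, permanently this time
Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID `
  -RemoveRecoveryPoints -Force

# Step 4: delete the resource group, which now also removes the empty vault
Remove-AzResourceGroup -Name "rg-dev-01" -Force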

Although the backup policy is stored in the vault, deleting the policy is not a
prerequisite to deleting the vault. Options A and C are incorrect.
Option D is incorrect because we need to disable soft delete before we delete the
backup data. Deleting the backup data earlier stores the deleted data in a soft delete
state, preventing resource group deletion.
GitHub Repo Link: Delete a recovery services vault

Resources
Delete a Recovery Services Vault
Domain
Monitor and maintain Azure resources

Question 12
You need to publish two Azure App Service apps, one with a runtime stack ASP.NET
v4.8 and another running on Python 3.12. Further, the apps should meet the following
requirements:
Can autoscale based on rules
Allows daily backups
Provides at least four staging slots
Based on the given information, answer the below two questions:

 1, Standard S1
 1, Basic B1
 2, Basic B1
 2, Standard S1
Overall explanation
Question 1:
The different versions of the ASP.NET runtime stack require a Windows OS. But a
Linux OS is required if you were to run Python apps.
Note that only ASP.NET Core is cross-platform and can be hosted on both Windows
and Linux OS. But ASP.NET is a Windows-only version. Adding to the confusion,
Microsoft references ASP.NET Core as just ASP.NET in their documentation.
So, although you can run multiple apps in an App Service plan, only one operating
system (either Windows or Linux) can be chosen for a plan. So, you need a minimum
of two App Service Plans.
Question 1 -> 2.
Reference Link: https://learn.microsoft.com/en-us/azure/app-service/quickstart-
python
https://dotnet.microsoft.com/en-us/learn/aspnet/what-is-aspnet
https://learn.microsoft.com/en-us/azure/app-service/overview-hosting-plans#should-i-
put-an-app-in-a-new-plan-or-an-existing-plan

Question 2:
There are a few pricing tiers for App Service plans, like Free, Shared, Basic, Standard, and Premium, in order of increasing price and features.
Of them, only Standard and Premium allow rule-based autoscaling, staging slots, and daily backups. Since Standard is more cost-effective than Premium,
Question 2 -> Standard S1

Option D is the correct answer.
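A sketch of the two plans with Azure PowerShell (plan and group names are hypothetical):

# Windows plan for the ASP.NET v4.8 app
New-AzAppServicePlan -ResourceGroupName "rg-dev-01" -Location "eastus" `
  -Name "plan-win-s1" -Tier "Standard" -WorkerSize "Small"

# Linux plan for the Python 3.12 app
New-AzAppServicePlan -ResourceGroupName "rg-dev-01" -Location "eastus" `
  -Name "plan-linux-s1" -Tier "Standard" -WorkerSize "Small" -Linux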

Resources
Choose the best app service plan
Domain
Deploy and manage Azure compute resources

Question 13
You have two resource groups in different locations in your Azure subscription.
Two Azure Private DNS zones, bigstuff.com and birdsource.com, are created, one in each resource group.

Also, two virtual networks, one in the South Central US and the other in North Europe
location are deployed.
Finally, the two private DNS zones are linked with the two VNets as shown below:

Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.

 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Short Answer for Revision:
The private DNS zone is global and is not bound to a location. So, you can link a VNet
in any location with a DNS zone. Statement 1 -> Yes.
A VNet can have only one registration DNS zone but multiple resolution DNS zones. vnet01 already has a registration zone, bigstuff.com. So, you cannot link vnet01 with birdsource.com with auto-registration enabled. Statement 2 -> No.

Detailed Answer:
Statement 1:
The Azure Private DNS zone is global and is not bound to a location. It uses the
resource group’s location only to store the DNS zone’s metadata. You can verify this
by selecting the Location column for the Private DNS zone.

The location of vnet02 is North Europe, and the metadata of the private DNS zone,
bigstuff.com, is stored in rg-dev-01's location, which is South Central US. Since the
DNS zone is global, you can link the private DNS zone bigstuff.com with a VNet in any
location.

Statement 1 -> Yes.

Statement 2:
We create a link between a virtual network and a private DNS zone to ensure VMs
hosted in the VNet can access the DNS records in the zone. Based on the type of link,
a virtual network can be linked to two types of DNS zones:
a. A single registration zone,
b. and multiple resolution zones.
But a single DNS zone can act as a registration zone for multiple VNets.

Since bigstuff.com is already a registration zone for vnet01, we cannot link another
DNS zone to vnet01 with auto-registration enabled.

Statement 2 -> No.


Reference Link: https://learn.microsoft.com/en-us/azure/dns/private-dns-virtual-
network-links#registration-virtual-network
Option A is the correct answer.
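For reference, linking a zone to a VNet in another location is a single command (an Azure PowerShell sketch, hypothetical names):

$vnet = Get-AzVirtualNetwork -Name "vnet02" -ResourceGroupName "rg-dev-02"

# The zone is global, so a VNet in North Europe can link to it;
# omit -EnableRegistration for a resolution-only link
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "rg-dev-01" -ZoneName "bigstuff.com" `
  -Name "link-vnet02" -VirtualNetworkId $vnet.Id -EnableRegistration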
GitHub Repo Link: Link Azure Private DNS Zones with Virtual Networks

Resources
Link Azure Private DNS Zones with Virtual Networks
Domain
Implement and manage virtual networking
Question 14
You have three VMs, two Windows and one Linux, deployed across two VNets in your
Azure subscription.

A private Azure DNS zone named bigstuff.com is linked to the two virtual networks,
vnet01 and vnet02, with auto-registration enabled and disabled, respectively.

Given below are three statements based on the above information. Select Yes if the
statement is correct. Else select No.
 No, No, No
 No, Yes, Yes
 Yes, No, Yes
 Yes, Yes, No
Overall explanation
Short Answer for Revision:
If auto-registration is enabled for a VNet link, a DNS ‘A’ record pointing to the VM’s
private IP will automatically be added for all the VMs in the VNet to the private zone. If
auto-registration is disabled, no records will be added, but VMs in that VNet can still
query the DNS.
Statement 1 -> No. DNS ‘A’ records will point only to private IPs, not public IPs.
Statement 2 -> Yes. The VM OS creates no difference in the DNS resolution process.
Statement 3 -> Yes. vnet02 is linked with the private zone. So, vm02 can query the
DNS zone.

Detailed Answer:
Statement 1:
If auto registration is enabled while creating a link between a private DNS zone and a
virtual network, the DNS zone becomes the registration zone for the virtual network.
So, a DNS ‘A’ record pointing to the VM’s private IP address is automatically created in
the zone if you deploy a virtual machine into the virtual network.
Well, statement 1 is partially correct in that an ‘A’ record will automatically be created in the DNS zone for vm01, as it’s deployed in a registration virtual network, vnet01. But the 'A' record will point to the VM’s private IP address, not its public IP, as we are dealing with a private DNS zone.
So, statement 1 -> No.

Statement 2:
The OS of the VM has nothing to do with the DNS resolution process. So, irrespective
of the Operating System, an ‘A’ record for vm03 will also automatically be created in
the private zone when the VM starts (Check the above image). Statement 2 -> Yes.

Statement 3:
If you do not enable auto registration while creating a virtual network link, the private
DNS zone acts as only a resolution zone for the virtual network.
Although a DNS record is not created automatically for any VM in the resolution virtual
network, the VMs in the network can query DNS records in the private zone.
So vm02, which is in a different virtual network than vm01, can resolve the domain
vm01.bigstuff.com.
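You can verify this from inside vm02 with a lookup (a sketch, run in the VM's PowerShell session):

# From vm02: the resolution link lets the private zone answer the query
Resolve-DnsName -Name "vm01.bigstuff.com" -Type A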

So, statement 3 -> Yes.

Note that since these VMs are in different VNets, you cannot perform a lookup on
vm01 using the Azure-provided name resolution process. The default name resolution
is possible only for VMs in the same VNet.
Reference Link: https://learn.microsoft.com/en-us/azure/dns/private-dns-virtual-
network-links#registration-virtual-network
Option B is the correct answer.
GitHub Repo Link: Registration & resolution VNets with a private Azure DNS zone

Resources
Registration and resolution VNets with a private Azure DNS zone
Domain
Implement and manage virtual networking

Question 15
I recently signed up for a domain named ravikiransrinivasulu.com with the domain
registrar GoDaddy. I would like to delegate the domain to an Azure DNS zone named
ravikiransrinivasulu.com (shown below), so the Azure authoritative DNS servers that
host the DNS zone will answer DNS queries from users.

Which of the following actions should I perform?
 Modify the existing nameservers in the NS records created in the Azure
DNS zone
 Create a new NS record in the Azure DNS zone pointing to the domain’s
default nameservers
 Point to the Azure DNS name servers from the domain registrar
 Create a new NS record in the Azure DNS zone pointing to the zone’s
nameservers
Overall explanation
Short Answer for Revision:
When I sign up for a domain with a domain registrar, my domain is hosted on one of their nameservers. Since we need to delegate the domain from the registrar to Azure DNS, we need to replace the registrar's nameservers with the nameservers provided by the Azure DNS zone. Option C is the correct answer.

Detailed Answer:
When you create a public Azure DNS zone, the zone comes with two records by
default: an SOA record and an NS record. The NS record contains the names of the
Azure DNS name servers assigned to the zone.

Although you can add more records to this NS record set, you cannot modify/remove
the existing name servers.
So, option A is incorrect. From the earlier understanding, we can conclude that option
D is also incorrect as an NS record that points to the zone’s nameservers is already
created by default, and creating the record again will not solve any further purpose.
Reference Link: https://learn.microsoft.com/en-us/azure/dns/dns-zones-records#ns-
records

To understand where we need to create the NS record, let's dig deeper into how a DNS request traverses the DNS hierarchy and reaches the DNS zone of the queried domain:
a. When a user queries a domain, the local DNS server checks with the root name server to find the name server for the requested domain. From the root server, it finds the name server hosting the ‘com’ zone.
b. It then queries the ‘com’ name server and finds the name server hosting the ‘ravikiransrinivasulu.com’ zone.
c. Finally, it queries this authoritative name server to reach the DNS zone of the requested domain.
To delegate the domain from the domain registrar to an Azure DNS zone, we need to update the name servers with GoDaddy so they point to the DNS servers in Azure. Now the ‘com’ zone points to the name servers in Azure rather than in GoDaddy.

We can verify if the domain is delegated to Azure DNS by running the below
PowerShell command.
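For instance, a nameserver lookup along these lines shows whether the delegation is in place (a sketch; once delegation works, the answer lists the Azure DNS name servers):

# Check which name servers now answer for the domain
Resolve-DnsName -Name "ravikiransrinivasulu.com" -Type NS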
Further, you can add an A record with a dummy IP address in the DNS zone and verify
that the Azure DNS resolves the domain name.

Reference Link: https://learn.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns#delegate-the-domain
https://learn.microsoft.com/en-us/azure/dns/dns-domain-delegation#resolution-and-delegation
So, we need to point to the Azure DNS name servers from the domain registrar.
Option C is the correct answer.
GitHub Repo Link: Delegate a domain to Azure DNS zone

Resources
Delegate a domain to Azure DNS zone
Domain
Implement and manage virtual networking
Question 16
In your Microsoft Entra ID tenant, you have to add nearly 100 users. You plan to use
the bulk create operation feature. Which of the following user attributes
are NOT mandatory to include while uploading the CSV file?
Select two options.
 Name
 First Name
 Usage location
 Block sign in
Overall explanation
<<This is a NOT question>>

To add many users at once, the bulk create operation is a great choice. To use it, download the CSV template file, and copy and paste the details of the users to be created.
Only the first four fields are mandatory, as indicated by the required tag.

If you have already created users in the Azure portal, this would make sense:
1. The User name and Name fields are marked with an asterisk.
2. The user needs a temporary password to sign in.
3. And there is the Block sign in setting.

So, Name and Block sign in are mandatory fields. Therefore, First Name and Usage
location are not mandatory and are the correct answer choices.
Reference Link: https://learn.microsoft.com/en-us/entra/identity/users/users-bulk-
add

Note: If you are using the bulk delete operation to delete a set of users, only the User
name (user principal name) field is mandatory to include in the CSV file.

Resources
Required attributes for bulk create operation
Domain
Manage Azure identities and governance

Question 17

Below are two statements based on associating Azure public IP addresses with an
Azure Firewall. Select Yes if the statement is correct. Else select No.

 Yes, No
 Yes, Yes
 No, Yes
 No, No
Overall explanation
Short Answer for Revision:
Irrespective of the Firewall SKU, whether Basic, Standard, or Premium, Azure Firewall supports only Standard SKU, static, public IPv4 addresses.
Since a Basic SKU public IP is not supported by Azure Firewall, statement 1 -> No.
Statement 2 -> Yes.

Detailed Answer:
For demonstration purposes, I created several possible types of public IP addresses
across different SKUs, assignment types, and IP versions.
Statement 1:
Azure Firewall is offered in three SKUs: Basic, Standard, and Premium. Irrespective of
the Firewall SKU, you can associate only a Standard SKU public IP address with the
Azure Firewall. Basic public IP address SKUs are not supported.
Further, irrespective of the public IP address SKU type, only IPv4 addresses are
supported. In the above image, the two IPv6 addresses are not even displayed.
So, statement 1 -> No.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
configure-public-ip-firewall

Statement 2:
The second statement is correct. Azure Firewall supports only static public IP
addresses. If you know that the Standard public IP addresses support only the static
allocation method, you can infer that the Azure Firewall supports only static IP
addresses.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-
services/public-ip-addresses#at-a-glance
https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-
addresses#sku
Statement 2 -> Yes. Option C is the correct answer.
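As a quick sketch (Az.Network module; the resource names are illustrative), the only kind of public IP address you can later attach to Azure Firewall is created like this:

# Standard SKU + Static allocation + IPv4: the only combination
# Azure Firewall accepts.
New-AzPublicIpAddress -Name "pip-fw" -ResourceGroupName "rg-network" `
  -Location "eastus" -Sku Standard -AllocationMethod Static `
  -IpAddressVersion IPv4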
GitHub Repo Link: Associate Public IP address with Azure Firewall

Resources
Associate Public IP address with Azure Firewall
Domain
Implement and manage virtual networking

Question 18
You have four virtual machines, two running and two deallocated in the East US and
North Europe locations as shown below:

Further, there are two Azure Recovery Services Vaults in the East US region.
The virtual machine vm03 is already protected with daily backups to the Recovery
Services Vault, vault02.

Which of the given VMs can you back up to vault01?


 Only vm01
 Only vm01 and vm03
 Only vm01, vm02 and vm04
 Only vm01 and vm04
Overall explanation
Short Answer for Revision:
a. You can take backups to a Recovery Services Vault only for VMs in the same region
as the vault.
b. You can take backups to a Vault only for VMs that are not already protected by
another vault.
c. You can take backups irrespective of the virtual machine’s status (running or
deallocated).
Option D is the correct answer.

Detailed Answer:
When you try configuring a virtual machine backup to the recovery services vault in
the Azure portal, you would see a message displaying the types of virtual machines
that can be backed up to the vault.

Only VMs in the same location as the vault and the VMs that are not already protected
by another vault can be backed up to a recovery services vault. So, only vm01 and
vm04 can be backed up to vault01, as vm03 is already protected by vault02 and
vm02 is in a different location than vault01.
Option D is the correct answer.

Since backups can happen even if the VM is deallocated, vm04 can be backed up to
the vault. But just note that if the VM is offline, the backup process produces a crash-
consistent snapshot. To get an application-consistent snapshot, ensure that the VM is
running.
Reference Link: https://learn.microsoft.com/en-us/azure/backup/backup-azure-vms-
introduction#snapshot-consistency
https://learn.microsoft.com/en-us/azure/backup/quick-backup-vm-portal#select-a-vm-
to-back-up
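For reference, a sketch of enabling the backup in PowerShell (Az.RecoveryServices module; the resource group names are assumptions, and DefaultPolicy is the policy every new vault ships with):

# Protect vm04 with vault01 using the vault's default backup policy.
$vault  = Get-AzRecoveryServicesVault -Name "vault01" -ResourceGroupName "rg-backup"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -Policy $policy -Name "vm04" `
  -ResourceGroupName "rg-vms" -VaultId $vault.ID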
GitHub Repo Link: Backup a VM to a Recovery Services Vault

Resources
Backup a VM to a Recovery Services Vault
Domain
Monitor and maintain Azure resources

Question 19
You have three VMs across two subnets in your Azure virtual network. Each VM
accepts and/or denies a different type of traffic. At any point in time, only one VM is in
a running status.

Based on this information, answer the below two questions:

 1,3
 1,1
 3,1
 3,3
Overall explanation
Short Answer for Revision:
Each VM requires at least one dedicated NIC, irrespective of the VM’s status. So,
question 1 -> 3.
Using the VM’s private IP to target a specific VM in a security rule, you can associate a
single NSG to multiple subnets for different use cases. Question 2 -> 1.

Detailed Answer:
Question 1:
First, you cannot attach a NIC to multiple VMs simultaneously. Each VM requires a
dedicated NIC.
Further, irrespective of a VM’s running status, a virtual machine must always have at
least one network interface attached to it. So, even if a VM is not running, you cannot
detach its NIC if that’s the only NIC attached to the VM.

Having said that, you can remove one of the NICs if you have more than one Network
Interface attached to the VM.

Since there is no possibility of reusing the NICs between different VMs, the minimum
number of NICs required for three VMs is 3.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-
network-network-interface-vm#constraints (point 3)
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-network-
interface-vm#remove-a-network-interface-from-a-vm

Question 2:
Even if every VM accepts and/or denies a different type of traffic, you can always use
the VM’s private IP address to target a specific VM.
Say, for example, vm01 accepts only FTP traffic and vm02 only MySQL connections. You can create a separate rule for each type of traffic and target the respective VMs with their private IP addresses.

Further, you can include all these rules in the same NSG and associate the NSG with
both subnets.
So, you require only one NSG. Option C is the correct answer.
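A sketch of that single NSG in PowerShell (Az.Network module; the names, private IPs, and ports are illustrative):

# One rule per traffic type, each targeting a specific VM by its private IP.
$ftpRule = New-AzNetworkSecurityRuleConfig -Name "allow-ftp-vm01" `
  -Direction Inbound -Access Allow -Priority 100 -Protocol Tcp `
  -SourceAddressPrefix * -SourcePortRange * `
  -DestinationAddressPrefix "10.0.1.4" -DestinationPortRange 21

$sqlRule = New-AzNetworkSecurityRuleConfig -Name "allow-mysql-vm02" `
  -Direction Inbound -Access Allow -Priority 110 -Protocol Tcp `
  -SourceAddressPrefix * -SourcePortRange * `
  -DestinationAddressPrefix "10.0.2.4" -DestinationPortRange 3306

# A single NSG holds both rules; associate it with both subnets afterwards.
New-AzNetworkSecurityGroup -Name "nsg-shared" -ResourceGroupName "rg-network" `
  -Location "eastus" -SecurityRules $ftpRule, $sqlRule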
GitHub Repo Link: Minimum number of NICs and NSGs required

Resources
Minimum number of NICs and NSGs required
Domain
Implement and manage virtual networking

Question 20
You have a standalone virtual machine in Azure. The virtual machine has an IPv4
public and private IP address. The IP address assignment can be either static or
dynamic, depending on the need.
You have to create an NSG inbound security rule to allow RDP access to the VM from
your local computer.
Which of the following would you use as the destination IP address?
Note: You should be able to connect to the VM even after multiple stops and restarts.
a. Static public IP address
b. Dynamic public IP address
c. Static private IP address
d. Dynamic private IP address
 Only c
 Only c and d
 Only a and c
 Only a
Overall explanation
Short Answer for Revision:
Due to the functioning of the intermediate NAT service, which converts the public IP to the private IP and vice-versa for inbound and outbound traffic, respectively, the NSG has no idea of the VM’s public IP (see the below image). So, you cannot use the VM’s public IP in an NSG rule.
The dynamic public IP addresses don’t survive machine deallocations. The dynamic
private IP addresses do survive. And, as the name indicates, a static IP also doesn’t
change with machine deallocations. Option B is the correct answer.

Detailed Answer:
The first concept we need to understand is when the NSG rules are processed.
When you create an RDP connection to the VM from your local computer using the
VM’s public IP address, the native Azure Network Address Translation service
translates the public IP of the VM to its private IP before the NSG rules are processed.
Similarly, for outbound traffic, first, the NSG rules are processed. Then, the translation
service converts the private IP address of the VM to its public IP before the traffic is
sent out to the Internet.
From this understanding, it is evident that we cannot use the public IP address of the
VM in an NSG rule. Due to the functioning of the intermediate NAT service, the NSG
has no idea of the VM’s public IP address.
However, if you still use the VM's public IP address in the NSG rule, and try to RDP to
the machine, you will see this error, although not really a useful one.

So, you cannot use the solution in statements ‘a’ and ‘b’. Options C and D are incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-
security-groups-overview#security-rules

The private and public IP addresses you assign to the VM can be either static or
dynamic. Leaving out other differences between the static and the dynamic IP address
assignment, as they are not directly relevant to the question, the general
understanding everyone has is that a static IP address remains the same whereas a
dynamic IP changes every time a VM is deallocated and started.
But this is true only for the public IP addresses.
So, if you deallocate and then start a machine, the dynamic public IP address will
change as Azure assigns the IP address from its pool of available IP addresses in the
region. Whereas its dynamic private IP address doesn’t change.
This is true even if we create additional resources in the subnet while the machines
are deallocated. The new VM doesn't get the private IP address of any of the
deallocated VMs as they are not released. It gets a new one from the address pool.
Further, even when the subnet runs out of address space, creating new resources will
throw an error, but the existing private IP addresses of the deallocated machines are
not reassigned. Again, the reasoning is the same (Check the related lecture video).
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-
services/public-ip-addresses#ip-address-assignment
https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/private-ip-
addresses#allocation-method

This means that even dynamic private IP addresses survive machine deallocations. So, both static and dynamic private IP addresses of the VM can be used in the NSG rules.
Option B is the correct answer.
GitHub Repo Link: Select a destination IP address for the NSG inbound security rule

Resources
Select a destination IP address for the NSG inbound security rule
Domain
Implement and manage virtual networking

Question 21
Every new Microsoft Entra ID tenant comes with a domain name, for
example, ravikirans1160.onmicrosoft.com. When you create a new user (adminone)
in the Microsoft Entra ID tenant, you are able to create them using only this domain
name:

Which of the following steps would you perform in sequence to create the user
as adminone@ravikirans.com?
Note: ravikirans.com is registered with the Domain registrar GoDaddy.
 Add the custom domain name ravikirans.com in GoDaddy
Create an A record for ravikirans.com in GoDaddy
Ensure the domain ravikirans.com is valid in GoDaddy
 Add the custom domain name ravikirans.com to Microsoft Entra ID
Create a TXT record for ravikirans.com in GoDaddy
Ensure the domain ravikirans.com is valid in Microsoft Entra ID
 Add the custom domain name ravikirans.com to Microsoft Entra ID
Create an A record for ravikirans.com in GoDaddy
Ensure the domain ravikirans.com is valid in Microsoft Entra ID
 Add the custom domain name ravikirans.com in GoDaddy
Create a TXT record for ravikirans.com in GoDaddy
Ensure the domain ravikirans.com is valid in GoDaddy
Overall explanation
When you create a Microsoft Entra ID, the directory comes with an initial domain
name that ends with .onmicrosoft.com .
To add users as adminone@ravikirans.com , we need to add the custom domain
name ravikirans.com to Microsoft Entra ID.

Reference Link: https://learn.microsoft.com/en-us/entra/fundamentals/add-custom-domain#add-your-custom-domain-name
So, box 1 -> Add the custom domain name ravikirans.com to Microsoft Entra ID.

When you add the custom domain name to Microsoft Entra ID, you will see details on
how to use the domain name with Microsoft Entra ID.
Since the domain ravikirans.com is registered with the Domain registrar GoDaddy,
we need to add the above DNS information in GoDaddy by creating a TXT record for
verification purposes.

Reference Link: https://learn.microsoft.com/en-us/entra/fundamentals/add-custom-domain#add-your-dns-information-to-the-domain-registrar

Note: TXT records are generally used for domain verification, not A records.
Reference Link: What are A, CNAME, MX, TXT, and other DNS records?
So, box 2 -> Create a TXT record for ravikirans.com in GoDaddy.
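Before triggering verification, you can confirm the TXT record resolves publicly, as a sketch with Resolve-DnsName (the MS=... value is illustrative):

# Check the TXT record added in GoDaddy is visible.
Resolve-DnsName -Name "ravikirans.com" -Type TXT
# Expected output includes a verification value such as MS=ms12345678.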

Finally, we verify the custom domain name to ensure it’s valid in Microsoft Entra ID.
Reference Link: https://learn.microsoft.com/en-us/entra/fundamentals/add-custom-
domain#verify-your-custom-domain-name
So, box 3 -> Ensure the domain ravikirans.com is valid in Microsoft Entra ID.

Once verified, you can create a new user as adminone@ravikirans.com .

Option B is the correct choice.

Resources
Create a user with custom domain name
Domain
Manage Azure identities and governance

Question 22
You join all the corporate-owned Windows 10 laptops to Microsoft Entra ID. You need
to add userone@ravikirans.com as a local administrator account to all those systems.
Where would you configure this in Microsoft Entra ID?
 Users -> User settings
 Devices -> Device settings
 Devices -> Enterprise State Roaming
 Under Roles and administrators
Overall explanation
By default, the user performing the Microsoft Entra join is added to the local
administrator group on the device, in addition to a couple of other user principals
(Check the below link).
Reference Link: https://docs.microsoft.com/en-us/azure/active-directory/devices/
assign-local-admin#how-it-works
In addition, you can also add other users as a device’s local administrators, so they
have the privileges to manage the device in Microsoft Entra ID.
To do that, navigate to Microsoft Entra ID -> Devices -> Device Settings and
click Manage Additional local administrators on all Microsoft Entra joined
devices .

Here, you can assign users as device administrators.


Option B is the correct choice.

As the name indicates, under User settings , we can manage the user’s settings, not
laptop/device settings. Option A is incorrect.

Enterprise State Roaming enables users to sync app data across Windows devices to
the cloud. You cannot add local admins here.

Option C is incorrect.
In Roles and administrators , you can view and assign Microsoft Entra ID roles to
users. Option D is also incorrect.

Resources
Add local administrators to Microsoft Entra joined devices
Domain
Manage Azure identities and governance

Question 23
You develop Power BI reports that your team members can access from their
corporate network. The team also uses an Azure subscription and deploys VMs for
organizational workloads. As a team manager, you have two requirements:
a. Ensure users who RDP/SSH into those Azure VMs cannot access the Power BI
report/dashboard from there.
b. Ensure they can access the Internet.
Which of the following would you use as the destination while creating an outbound
security deny rule in the NSG?
 IP addresses
 Service Tag
 My IP address
 Application security group
Overall explanation
Short Answer for Revision:
My IP address is used as a source, not a destination. We need to block access to the
destination Power BI.
Service tags point to specific IP addresses for Microsoft services like Power BI.
Microsoft manages the tags, so it’s not a problem if the underlying IP addresses
change for any service.
Detailed Answer:
My IP address adds your current VM/computer’s public IP address to the security rule.

There are a couple of problems with using My IP address :


a. First, My IP address can only be used as the source, not the destination.
b. Using My IP address as the source restricts only the source traffic. It doesn’t satisfy
the requirement of blocking the destination, which is the Power BI report access to the
users logged into the VM.
Option C is incorrect.

To block Power BI report access, we need to specify the public IPs of the Power BI
service. But there are many IPs, and they may be constantly changing, so it’s a
nightmare to manage them.

Reference Link: https://azureipranges.azurewebsites.net/
https://azservicetags.azurewebsites.net/servicetag/powerbi
Option A is also incorrect.

The best solution would be to allow the Azure platform to handle the complexity of
managing multiple IP address ranges.
Welcome to the world of service tags. A service tag is a label that represents a group of IP address prefixes for a given Azure service. Using service tags, you can target some interesting services like Power BI, and even the Azure portal and Azure Marketplace.

You don’t have to worry about the underlying IP ranges for the services the service
tag represents. Microsoft takes care of them.
Before using service tags in a network security rule, I log into a VM to verify that I can
access the reports from the Power BI service.
After creating the outbound security rule with the PowerBI service tag for denying
access to the Power BI reports, I try to access the reports again from the VM.

As expected, I am unable to load the reports from the Power BI service. Since we have
blocked only the IP ranges of the Power BI service, users can still access the Internet,
as the NSG includes a default security rule to allow Internet access. Option B is the
correct answer.
Reference Links: https://learn.microsoft.com/en-us/azure/virtual-network/service-
tags-overview
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-
overview#allowinternetoutbound
https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-service-tags (n
ot directly relevant to the question)
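A sketch of that outbound deny rule in PowerShell (Az.Network module; the NSG and resource group names are illustrative). The service tag goes in as the destination address prefix:

$nsg = Get-AzNetworkSecurityGroup -Name "nsg01" -ResourceGroupName "rg-network"
# Deny all outbound traffic whose destination falls in the PowerBI IP ranges.
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "deny-powerbi" -Direction Outbound -Access Deny -Priority 100 `
  -Protocol * -SourceAddressPrefix * -SourcePortRange * `
  -DestinationAddressPrefix "PowerBI" -DestinationPortRange * |
  Set-AzNetworkSecurityGroup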
Application Security Groups help you group the VMs per your application architecture.
For example, you can create an ASG for web servers and add the NICs of all the web
server VMs as part of the application security group. Rather than the individual VM's IP
addresses, you can use these groups in the NSG.
Both application security groups and service tags are similar in that both are labels
that can be used as the source or destination in the outbound and inbound security
rules. The difference between them is who creates and manages these labels.
The IP address ranges for service tags like PowerBI and AzurePortal are maintained
by Microsoft. Whereas you manage the membership of the VM’s NICs in the
Application Security Group.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/application-
security-groups
Option D is incorrect.
GitHub Repo Link: Block traffic to Power BI from VMs using NSG

Resources
Block traffic to Power BI from VMs using NSG
Domain
Implement and manage virtual networking

Question 24
There are three Windows VMs in your Azure Virtual Network. The virtual machines
vm01 and vm02 are deployed in subnet01, whereas vm03 is deployed in the
centralSubnet.
A Network Security Group (nsg01) is associated with both the subnets. In addition to
default rules, the NSG also has a rule that denies inbound traffic through the ICMP
protocol from any source.

Which of the following rules would you create that satisfy the below two requirements:
a. Allow ICMP messages to vm01 and vm02 only from vm03
b. None of the other combinations of inter-VM pings are possible
 Priority: 101
Direction: Inbound
Source: 10.0.1.4
Destination: VirtualNetwork
 Priority: 102
Direction: Outbound
Source: 10.0.1.4
Destination: VirtualNetwork
 Priority: 103
Direction: Inbound
Source: VirtualNetwork
Destination: 10.0.0.4,10.0.0.5
 Priority: 99
Direction: Outbound
Source: VirtualNetwork
Destination: 10.0.0.4, 10.0.0.5
Overall explanation
Short Answer for Revision:
To override the inbound rule that denies the ICMP traffic, we need to create an
inbound rule with a higher priority (lower priority number). So, options D (99 is not a
valid priority value) and B (outbound rule) are incorrect.
Option C is incorrect. Since the source is a Virtual Network, vm01 can ping vm02 and
vice-versa.
For option A, the source is restricted to vm03. So, vm03 can ping any VMs on the
virtual network (vm01 and vm02). vm01 and vm02 can neither ping each other nor
can they ping vm03.

Detailed Answer:
First, the priority numbers for user-defined NSG rules can only be between 100 and
4096. So, option D is incorrect.

In the given scenario, the same NSG is associated with two subnets where all three
VMs reside.
So, the NICs of all three VMs will inherit the same rules defined in the NSG. You can
verify this by navigating to the Effective security rules of each VM’s NIC.

The default security rules AllowVnetInBound and AllowVnetOutBound ensure unrestricted network traffic (including ICMP) within the VNet. But the deny rule (priority 105) overrides the default rule and prevents all ICMP traffic.
To selectively allow only the vm03 to ping vm01 and vm02, we need to create a rule
with a higher priority that overrides this rule.
Since the rule (priority 105) denies only the inbound traffic through the ICMP protocol,
creating an outbound rule, like the one in option B, will not override the existing deny
rule. Adding this rule creates no difference, and similar to the original setup, none of
the VMs can ping the other.

Option B is incorrect.
Creating the rule in option C will override the existing deny rule since it is also an
inbound rule with a lower priority number. In NSG, lower numbers have higher priority,
so the NSG will ensure that this rule (with priority 103) will execute first. It doesn’t
process the rule with priority 105 as the traffic already matches the higher priority
rule.
Here we have entered more than one IP address. This is an example of an augmented security rule, where we combine multiple IP addresses and port ranges so we can specify our security objective with as few rules as possible.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#augmented-security-rules

Since the source is a virtual network, this rule ensures that any VM on the virtual
network can ping vm01 and vm02, and not just vm03. This means even vm01 can
ping vm02, and vm02 can ping vm01, which violates the given requirement.

So, option C is incorrect.


Creating the rule in option A ensures that only vm03 can ping any VMs on the virtual
network. So, vm03 can ping vm01 and vm02. But the other two VMs in the network
cannot ping each other. Nor can vm01 or vm02 ping vm03.

Option A is the correct answer.


Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-
security-groups-overview#security-rules
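Here is the option A rule as a PowerShell sketch (Az.Network module; the NSG and resource group names are illustrative):

$nsg = Get-AzNetworkSecurityGroup -Name "nsg01" -ResourceGroupName "rg-network"
# Priority 101 outranks the deny rule at 105; the source is vm03's NIC.
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "allow-icmp-from-vm03" -Direction Inbound -Access Allow -Priority 101 `
  -Protocol Icmp -SourceAddressPrefix "10.0.1.4" -SourcePortRange * `
  -DestinationAddressPrefix "VirtualNetwork" -DestinationPortRange * |
  Set-AzNetworkSecurityGroup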
GitHub Repo Link: Add NSG rules to allow ICMP traffic

Resources
Add NSG rules to allow ICMP traffic
Domain
Implement and manage virtual networking

Question 25
You deploy a website into two Azure VMs in your virtual network behind a standard
load balancer.
From the below statements, select the actions necessary to allow external traffic to
reach the VMs. Also ensure the solution minimizes management overhead for the
current setup and possibly in the future when the VMs scale.
a. Create an inbound security rule in an NSG to allow the traffic
b. Create an outbound security rule in an NSG to allow the response traffic
c. Associate the NSG to the subnet where the VMs are deployed
d. Associate the NSG to the Network Interface cards attached to the VMs
 Only a, b, and c
 Only a and c
 Only a and d
 Only a, b, and d
Overall explanation
To allow external users to access your website, you create an inbound security rule in
an NSG over port 80 or 443. The default security rules do not allow incoming traffic
over port 80. So, you need to explicitly add this rule.
So, the statement ‘a’ is necessary to ensure the external traffic can reach the VMs via
the load balancer.

The Azure NSGs are stateful. This means you do not have to create an explicit
outbound security rule to allow the web server’s response to the user traffic. The
outgoing port will be opened automatically to allow the response traffic, so the end
user can see the web server’s response.
But the NSG also has a default outbound security rule that allows all traffic to the Internet, which may falsely lead us to believe that this rule is responsible for allowing the web server’s outgoing response.

But this rule ensures that the traffic originating from inside the VMs to the internet is
allowed. For example, when you browse the internet from the VM.
So, statement b is not necessary. Options A and D are incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-
security-groups-overview#security-rules
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-
overview#outbound
We can associate the NSG with either the virtual network subnet or the Network
Interface Cards attached to the VMs. Since both the VMs serve web traffic, we have a
common set of network security requirements. So, associating the NSG at the subnet
level is better than associating the NSG with the NIC because we just have to
associate the NSG once with the subnet. Whereas we need to manually associate the
NSG with every VM's NIC. If you have tons of VMs, this can easily become a
management overhead.
In the future, if you scale the VMs, you can be sure that the existing configuration will
work, without the need to perform any manual NSG associations. Statement c is valid.
Option B is the correct answer.
Reference Link: https://techcommunity.microsoft.com/t5/azure-architecture/ms-
guidance-on-nsgs-on-nics-vs-on-subnets/m-p/1501368
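A sketch of the subnet-level association in PowerShell (Az.Network module; the names and address prefix are illustrative):

# Attach the NSG once, at the subnet; every current and future NIC in the
# subnet inherits its rules.
$vnet = Get-AzVirtualNetwork -Name "vnet-web" -ResourceGroupName "rg-web"
$nsg  = Get-AzNetworkSecurityGroup -Name "nsg-web" -ResourceGroupName "rg-web"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-web" `
  -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork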

Resources
Configure NSG to allow external traffic
Domain
Implement and manage virtual networking

Question 26
You have the below resources and their associations in the Azure subscription.

I will give you a few examples of how to read the table:


a. nic01 is assigned to subnet01 (of vnet01) in the East US region.
b. nsg01 is associated with subnet02 (of vnet01) in the East US region.
c. nsg02 is associated with both nic03 and subnet04 (of vnet03) in the East US
region. nic03 is also assigned to subnet04.

To which of the following resources can you associate nsg02 without disturbing the
existing setup? Select two options.
 subnet01
 subnet02
 subnet03
 nic01
Overall explanation
Short Answer for Revision:
You can associate an NSG with multiple NICs and subnets (across VNets) only in the
same subscription and region. So, option C is incorrect, and options A and D are
correct answers.
Since subnet02 already has an associated NSG, associating nsg02 with subnet02 will
replace the existing NSG and disturb the current setup. Option B is incorrect.

Detailed Answer:
You can associate an NSG to either a NIC or a virtual network subnet only in the same
region and subscription. So, you cannot associate nsg02 to subnet03 as they are
deployed in different regions.

Option C is incorrect.

One NSG can be associated with multiple subnets across different VNets in the same
region.
So, although nsg02 is already associated with the subnet04 from the virtual network
vnet03, you can associate nsg02 with subnet01, which is in a different virtual network,
vnet01.

Option A is one of the correct answers.

The same NSG can be associated with multiple NICs (check the above image). So, in
addition to nic03, you can associate nsg02 with nic01.
Option D is the other correct answer.

The network security group nsg01 is already associated with subnet02. It doesn’t
make sense to allow associating multiple NSGs with a subnet or a NIC, as you can just
combine the security rules in a single NSG.
But when you try to associate nsg02 with subnet02, the Azure portal doesn’t display
any error and successfully completes the association.

But when you check subnet02, its security group is updated from nsg01 to nsg02.
So, at any point in time, Azure will allow you to associate only one security group to a
subnet. The same holds true for the Network Interface Card.

Since associating nsg02 with subnet02 replaces the current NSG, it disturbs the
existing setup. Option B is incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-
security-group-how-it-works
GitHub Repo Link: Associate NSGs with subnets and NICs

Resources
Associate NSGs with subnets and NICs
Domain
Implement and manage virtual networking

Question 27
You have two virtual machines, vm1, and vm2, deployed in two different virtual
networks, vnet1, and vnet2, in two different Azure regions.

Below are the VNets with other information like their address space and location
details.

You need to ensure that vm1 and vm2 can communicate with each other. Which of
the following solutions would you implement if you need high bandwidth connectivity
without any limits?
 Move vnet2 and its dependent resources to East US
 Configure a VNet peering connection between vnet1 and vnet2
 Deploy a virtual network gateway in either of the networks and
establish the connection
 Deploy a virtual network gateway in both networks and establish the
connection
Overall explanation
Short Answer for Revision:
Even the highest available gateway SKU imposes bandwidth limits which may be
restrictive for the higher-sized VMs. Options C and D are incorrect.
A VNet peering link itself doesn't impose any bandwidth restriction. Option B is the
correct answer.
VMs in two virtual networks in the same region and subscription automatically cannot
communicate with each other. You still need to connect the underlying VNets. Option
A is incorrect.

Detailed Answer:
You can connect the given two virtual networks with VNet peering links. Go to
the Peerings section in one of the virtual networks, vnet1, and add a peering
connection to the other VNet.

This creates two peering links, one from this network, vnet1, and the other from the
remote network, vnet2.
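If you prefer PowerShell, a sketch of creating both links (Az.Network module; the resource group name is illustrative). Note that each link is created from its own side:

$vnet1 = Get-AzVirtualNetwork -Name "vnet1" -ResourceGroupName "rg-network"
$vnet2 = Get-AzVirtualNetwork -Name "vnet2" -ResourceGroupName "rg-network"
# Global VNet peering (across regions) works the same way as regional peering.
Add-AzVirtualNetworkPeering -Name "vnet1-to-vnet2" `
  -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "vnet2-to-vnet1" `
  -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id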
Now, let’s log in to the two VMs deployed in those two virtual networks to test if we
can ping each other.

Note: You need to enable ICMP through the Windows firewall on both VMs before you
can ping each other with the below PowerShell command.

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4

So, option B is one of the possible solutions, but let’s not conclude without testing the
other options.

Options C and D talk about connecting the two virtual networks using a VNet-to-VNet
connection. This connection type requires deploying a virtual network gateway in
each virtual network and connecting the gateways for establishing the connection.
Reference Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-
howto-vnet-vnet-resource-manager-portal
So, option C is incorrect since you need to deploy the gateway in both networks.

To verify if we can get the two VMs to talk to each other by implementing the solution
in option D, I already created two virtual network gateways in gateway subnets in the
two virtual networks where the VMs are deployed. If you are following along, note that
the deployment of each gateway can take more than 20 minutes, depending on the
chosen gateway SKU.
Connecting these two gateways ensure that the two VNets will be connected. While
creating a connection,
a. Select the Connection type as VNet-to-VNet
b. The First virtual network gateway is automatically selected. Choose the Second
virtual network gateway .
c. Enter any set of password-like characters for the Shared key .
Leave all other defaults and create the connection. Unlike VNet peering, you have to
create this connection from the other gateway too. While doing so, ensure to enter
the same Shared key , else the connection will not work.
Now, if we log in again to the two VMs, we can verify that the two VMs can ping each
other (Refer to the Related lecture video).

Although we can establish network connectivity between the two VMs, the bandwidth
of the VNet-to-VNet connection is limited by the gateway SKU.
Even if you choose the highest available SKU, there is a limitation of 10Gbps on the
gateway. So, this can be limiting if you use higher-sized VM SKUs whose maximum
possible network bandwidth is more than 30 or 40 Gbps.
With VNet peering, the network throughput is limited based on only the virtual
machines' permitted bandwidth. Since there is no bandwidth limitation imposed
directly by the VNet peering, option B is the correct answer.
Reference Link: https://azure.microsoft.com/en-in/blog/vnet-peering-and-vpn-
gateways/
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-
vpngateways#benchmark
https://learn.microsoft.com/en-us/azure/virtual-machines/dv5-dsv5-series#dsv5-series

Option A is incorrect. Even if you move the vnet2 and its dependent resources to the
same region as vnet1, virtual networks in the same region do not have any
connectivity by default. You still have to create peering links to establish a
connection, so the VMs can communicate with each other.
GitHub Repo Link: Enable VM communication across VNets without bandwidth limits

Resources
Enable VM communication across VNets without bandwidth limits
Domain
Implement and manage virtual networking

Question 28
You have the two virtual networks below, vnet01 and vnet02, in Azure.

And the two networks have a peering connection.


Due to increased demand, you need to resize the address space of vnet01 from
10.0.0.0/13 to 10.0.0.0/12 for scaling the workloads. What is the best way to achieve
this objective without any downtime?
 Remove peering between the VNets, modify the address space, and re-
add the peering connection.
 Modify the existing address space and sync the peers with the new
changes.
 Add a new address space to vnet01 since it is not possible to update
the existing address space for a network that’s in a peering
connection.
 Modify the existing address space and refresh the peers with the new
changes.
Overall explanation
Short Answer for Revision:
You can add/modify the address space of a VNet in a peering connection. But ensure
to sync the connection. Option B is correct.
Refresh only refreshes the list of peering connections for a VNet. And removing and
re-adding the peering link will cause downtime.

Detailed Answer:
Well, it is indeed possible to update the existing address space of a network that’s in a
peering connection. In the Azure portal, the address space field is editable, and if
there aren’t any address overlaps with the peered network, you can save the
changes.
Rather than modifying the existing address space, you can also add a second address range that, together with the existing one, is equivalent to the required address space 10.0.0.0/12.

But since option 3 states, incorrectly, that you cannot modify the existing address space of a network that’s in a peering connection, it is an incorrect answer choice. Further, adding or modifying the address space is only a part of the solution, as we will see in just a bit.

While adding/modifying the address space, the portal displays a message that
updating the address space will not allow the peered virtual networks to connect to
this new address space until a sync operation is performed on the peering connection
(check the previous two images).
So, if you go to the Peerings section, you would see a message that a remote sync is
required on the connection. Syncing will ensure that all the remote, peered virtual
networks learn the updated address space of this virtual network.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/update-virtual-network-peering-address-space
Since there is no downtime associated with this method, option B is the correct
answer.
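As a sketch, the modify-and-sync flow in PowerShell (assuming a recent Az.Network version that ships Sync-AzVirtualNetworkPeering; the names are illustrative, and the /13 is assumed to be the first prefix in the list):

# Widen vnet01's address space from /13 to /12, then push the change.
$vnet01 = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-network"
$vnet01.AddressSpace.AddressPrefixes[0] = "10.0.0.0/12"
$vnet01 | Set-AzVirtualNetwork

# Sync the peering from the remote side so vnet02 learns the new space.
Sync-AzVirtualNetworkPeering -Name "vnet02-to-vnet01" `
  -VirtualNetworkName "vnet02" -ResourceGroupName "rg-network"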

Before sync operation was available, you would typically remove the peering
connection between the VNets, modify the address space, and re-add the peering
connection. However, since there is downtime associated with this method between
removing and re-adding the connection, resources across the networks would not be
able to communicate with each other during this time. Option A is also incorrect.

Option D is incorrect because refreshing just updates the list of peering connections a virtual network has, or their status. Its functionality is similar to the refresh button on any other Azure resource.
GitHub Repo Link: Resize the VNet in a peering connection

Resources
Resize the VNet in a peering connection
Domain
Implement and manage virtual networking

Question 29
You deploy 8 VM instances in an Azure Bastion service in your virtual network. Users
in your organization use the Bastion service to connect remotely for performing basic
data entry tasks. How many concurrent RDP sessions can the Bastion service serve?
 200
 160
 Depends on the Bastion SKU
 Depends on the subnet size
Overall explanation
Short Answer for Revision:
For lightweight workloads like basic data entry tasks, RDP can support 25 connections
per VM instance.

Detailed Answer:
<<Honestly, in the exam, you wouldn’t get a calculation like this. The idea behind this
question is to bring out certain concepts about host scaling in the Bastion service,
which is relevant for the exam>>.

Azure Bastion supports two SKUs: Basic and Standard. The Basic SKU supports only
two VM instances.

The Standard SKU supports up to 50 VM instances.


Generally, the number of supported concurrent RDP/SSH sessions depends on the Bastion SKU, i.e., a Standard SKU will support more sessions than a Basic SKU because a Standard SKU provides more VM instances. But in this case, since we use 8 VM instances, we can infer that the Bastion SKU is Standard. Since the SKU can be conclusively determined, option C does not apply.
Depending on the workload type, each VM instance can support multiple concurrent
RDP sessions. For example, for lightweight workloads like basic data entry tasks, RDP
can support 25 connections per VM instance.
Whereas for heavy workloads like software development, RDP supports only 2
sessions per instance.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
management/azure-subscription-service-limits#azure-bastion-limits
https://learn.microsoft.com/en-us/windows-server/remote/remote-desktop-services/
virtual-machine-recs#workloads

So, for 8 VM instances, the Bastion service can support up to 200 concurrent
connections for basic data entry tasks. Option A is the correct answer.

Option D is incorrect. Although it sounds logical that the bigger the subnet size, the
more VMs it supports, and so a higher number of concurrent sessions, the resource
manager wouldn’t allow you to create even a Basic SKU with a subnet size smaller
than /26.
In fact, a subnet size of /26 or larger is a prerequisite for creating the Bastion service.
So, it doesn’t play any factor in deciding the number of concurrent connections to the
service.

Resources
Concurrent RDP sessions with Azure Bastion
Domain
Implement and manage virtual networking

Question 30
You have a storage account in the East US region. Some users in the North Europe
region need to access the storage account.
Which storage account setting would you configure to optimize the network cost when
the traffic is routed from the user to the storage account?
 Routing preference
 Network access
 Service endpoints
 Performance
Overall explanation
Short Answer for Revision:
Two ways to route traffic from the user to the Azure storage account with the Routing
preference setting:
a. Routing over Microsoft backbone (traffic spends the bulk of its path on the Microsoft
network, less time on the Internet; More reliable, so costs more)
b. Routing via the Internet (traffic spends the bulk of its path on the Internet, less time
on the Microsoft network; Less performant, cost-effective).

Detailed Answer:
The Azure routing preference setting enables you to choose how your traffic routes
between Azure and the Internet. You have two options for configuring network
routing:
a. Routing via the Microsoft global network
b. Or routing via the Internet

Routing traffic via the Microsoft global network ensures that the traffic stays on the
Microsoft backbone for the bulk of its path after/before entering/exiting the Microsoft
network at the Microsoft Point of Presence (POP) closest to the user.
As seen from the image above, this is the default option when you create a storage
account. Routing the traffic over the Microsoft network optimizes the traffic for
network performance as the network is provisioned with multiple redundant fiber
paths to ensure high reliability and availability. Consequently, this increases the
network cost.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
routing-preference-overview#routing-via-microsoft-global-network

On the contrary, routing traffic via the Internet ensures that the traffic stays on your
ISP network for the bulk of its path after/before entering/exiting the Internet network
at the Microsoft Point of Presence (POP) closest to the Azure resource.
Routing the traffic over the Internet is a more cost-optimized routing option.
So, for the given scenario, you should configure the routing preference setting and
select Internet routing to optimize for cost. Option A is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
routing-preference-overview#routing-over-public-internet-isp-network
https://learn.microsoft.com/en-us/azure/storage/common/network-routing-
preference#microsoft-global-network-versus-internet-routing
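A sketch of switching an existing account to Internet routing (Az.Storage module; the names are illustrative):

# Cost-optimized: traffic enters/exits the Microsoft network at the POP
# closest to the storage account rather than closest to the user.
Set-AzStorageAccount -ResourceGroupName "rg-storage" -Name "strdev01" `
  -RoutingChoice InternetRouting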

All other options are incorrect. Network access enables you to lock down your storage
account access to specific VNets or private endpoints. Option B is incorrect.
Service endpoints provide connectivity to resources from your VNets over the Azure
backbone network. They do not route traffic over the Internet. Option C is incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-
network-service-endpoints-overview

The Performance setting enables you to configure the performance tier of the storage
account like Standard or Premium. Option D is incorrect.

Resources
Route network traffic to the storage account
Domain
Implement and manage virtual networking
Question 31
You have to deploy multiple containers using Azure Container Instances. Container1
runs an internet-facing web application, and another container, Container2,
periodically sends an HTTP request to Container1 to ensure it’s up and running.
Which of the following Operating Systems would you consider using? Select two
options.
 Alpine Linux
 Ubuntu Server
 Windows Nano Server
 Windows Server Core
Overall explanation
Short Answer for Revision:
Multi-container groups support only Linux-based containers.

Detailed Answer:
I will use the ARM template for explanations to this question.
When you deploy a single container using an Azure Container Instance,
a. You define a container group
b. And the container within that container group,
c. Finally, the Operating System the container should use.
Reference Link: https://learn.microsoft.com/en-us/azure/container-instances/
container-instances-quickstart-template#review-the-template

The container group name is the name of the Azure Container Instance. The container
name is the name of the individual container running within that instance (group).
So, it's becoming evident that within an Azure Container Instance or a container group, we can run one or more containers. Such container groups are known as multi-container groups, i.e., they run multiple containers that get scheduled on the same host machine and share the same lifecycle, resources, etc.
In the given question, we need to deploy a multi-container group running two
containers: a web app container and a sidecar container. But multi-container groups
currently support only Linux containers (this may change in the future). If you use
Windows containers, you can deploy only a single container.

Quick Preview:

Reference Link: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups#what-is-a-container-group

So, only if you use a Linux OS, you can deploy two containers in a container group.
Options A and B are the two correct answer choices.
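For completeness, a sketch of the same multi-container group in PowerShell (assuming Az.ContainerInstance v2+; the images are Microsoft's ACI samples, and the names are illustrative):

# Two Linux containers sharing one container group (and its lifecycle).
$web = New-AzContainerInstanceObject -Name "web" `
  -Image "mcr.microsoft.com/azuredocs/aci-helloworld" `
  -RequestCpu 1 -RequestMemoryInGb 1.5 `
  -Port @(New-AzContainerInstancePortObject -Port 80)
$sidecar = New-AzContainerInstanceObject -Name "sidecar" `
  -Image "mcr.microsoft.com/azuredocs/aci-tutorial-sidecar" `
  -RequestCpu 1 -RequestMemoryInGb 1.5

New-AzContainerGroup -ResourceGroupName "rg-aci" -Name "aci-multi" `
  -Location "eastus" -Container $web, $sidecar -OsType Linux `
  -IpAddressType Public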

Just out of curiosity, I update the OS type to Windows. I also replace the two Linux
container images with the same Windows container image, which you can get from
the quick start images (while creating an ACI instance in the Azure portal).

When I deploy the template, as expected, the error message states that multiple
windows-based containers cannot be defined in a container group.
Options C and D are incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/container-instances/
container-instances-multi-container-group
GitHub Repo Link: Deploy a multi-container group in Azure Container Instance

Resources
Deploy a multi-container group in Azure Container Instance
Domain
Deploy and manage Azure compute resources

Question 32
You have five Network Interface Cards (NICs) deployed in a virtual network vnet01. Of
them, three Network Interface Cards are deployed in subnet03 and attached to vm03,
which functions as a virtual appliance.

Of the three NICs attached to the virtual appliance, IP forwarding is enabled only on
nic03 (private IP: 10.0.3.4) and nic04 (10.0.3.5). IP forwarding is also enabled within
vm03’s operating system.
Below is the route table defined with two custom routes associated with subnet01 and
subnet02.

Based on the given information, answer the below two questions. Select Yes if the
statement is correct, else select No.
Note: Assume the ICMP is enabled on all the VMs
 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Short Answer for Revision:
If a user pings vm02 from vm01, the address prefix of route02 will match. So, the
traffic is diverted to nic05 of the virtual appliance (route02's Next hop IP address ).
Since IP forwarding is enabled in the guest OS for the virtual appliance (this means IP
forwarding is enabled for all the NICs attached to vm03 at the OS level), and since IP
forwarding is enabled for the primary NIC attached to vm03, nic05 (nic04 and nic05
are secondary NICs) can forward the traffic to vm02. In the Azure portal, at least the
primary NIC should have IP forwarding enabled.
If nic04 is moved to subnet02, nic04 will receive a new private IP from subnet02. And
then you use this IP as the Next hop IP address for route02. Now realize that when
traffic is sent to vm02, the address prefix of route02 will match and the traffic will be
routed to nic04, which is in the same subnet as vm02. Due to this arrangement, traffic
will be repeatedly routed to nic04 and can never reach vm02.

Detailed Explanation:
Let me begin by telling you that this is a difficult question, and you should not worry if
you do not get this right. In most cases, you will not see questions in the exam that
require this depth of knowledge and the only reason I included them is to cover any
gaps in the knowledge.
The given enormous number of details can be visualized in this architectural diagram:
Statement 1:
From the routes in the route table and other architecture details, we can confirm that
vm03 is a virtual appliance for communication between vm01 and vm02. So, any
communication from vm02 to vm01 goes through the virtual appliance vm03. And
vice versa is also true.

Let’s analyze the path used when a user pings vm01 from vm02. Since the IP
forwarding is enabled in the guest OS of vm03, IP forwarding will be enabled for all
three network interfaces attached to vm03 at the OS level.
As the route table is associated with subnet02, any traffic destined from vm02 to
vm01 will be picked up by the custom route, route01. Since the Next hop IP
address of route01 is 10.0.3.4, which is the private IP of nic03, nic03 receives and
forwards the traffic to vm01 as IP forwarding is enabled in the Azure portal for the
primary NIC (nic03).
But IP forwarding is not enabled for nic05 in the Azure portal, and the private IP of
nic05 (10.0.3.6) is used as the Next hop IP address for route02. So, you might be
misled into thinking that when a user pings vm02 from vm01, nic05 may not forward
the packet to vm02.

However, this is not true.

As discussed earlier, IP forwarding is enabled for vm03 in the guest OS, which is an
ON/OFF switch in the registry editor. Check the lab files to see how to enable them.
So, turning on IP forwarding in the guest OS ensures IP forwarding is enabled for all
NICs attached to vm03 at the OS level.
Even if the IP forwarding is enabled only for the primary network interface, nic03, in
the Azure portal, it is good enough for all the secondary NICs attached to the virtual
appliance. So, any secondary NIC attached to vm03 need not explicitly have
IP forwarding enabled in the Azure portal.
So, when a user from vm01 pings vm02, traffic hits the secondary NIC, nic05, which
forwards the traffic to the intended destination.

You can also verify this behavior by disabling IP forwarding for nic04 in the Azure
portal and using nic04 in one of the routes as the Next hop IP address to check if it
forwards the packets to the intended destination.
Statement 1 -> No.

Statement 2:
Let’s move nic04 from subnet03 to subnet02. Doing so also changes its private IP as it
now receives a private IP address from subnet02.

So, the new private IP address of nic04 would be 10.0.2.5. Per the question, we will
use this private IP to update the Next hop IP address of route02.

So, the updated architecture will be something like this:


Now, the recommendation from Microsoft is that you deploy the virtual appliance
(vm03) into a different subnet than the VMs that route through the virtual appliance.
This means you deploy all the NICs attached to the virtual appliance in a different
subnet than the NICs attached to other VMs that route through the virtual appliance.
Here, although nic03 and nic05 are in a different subnet than the other VMs, nic04 is
in the same subnet as vm02. To complicate things, we already have a route table on
subnet02 with a route that routes traffic through nic04.
In this case, pinging vm02 from vm01 will mean there is a match in route02, and the data packet is repeatedly routed to nic04 in a loop, never reaching vm02.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#user-defined
Statement 2 -> Yes.
Option D is the correct answer.
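A sketch of the moving parts in PowerShell (Az.Network module; the names are illustrative, and the address prefix is assumed for subnet01): a user-defined route pointing at the appliance NIC, and IP forwarding enabled on that NIC in the Azure fabric:

# route01: send subnet01-bound traffic to the appliance NIC at 10.0.3.4.
$rt = New-AzRouteTable -Name "rt-appliance" -ResourceGroupName "rg-network" -Location "eastus"
Add-AzRouteConfig -RouteTable $rt -Name "route01" `
  -AddressPrefix "10.0.1.0/24" -NextHopType VirtualAppliance `
  -NextHopIpAddress "10.0.3.4" | Set-AzRouteTable

# Enable IP forwarding on the appliance's primary NIC in the Azure portal layer.
$nic = Get-AzNetworkInterface -Name "nic03" -ResourceGroupName "rg-network"
$nic.EnableIPForwarding = $true
$nic | Set-AzNetworkInterface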
GitHub Repo Link: Using route tables with Network Virtual Appliance

Resources
Using route tables with Network Virtual Appliance
Domain
Implement and manage virtual networking

Question 33
Here is an ARM template that defines an Azure Storage Account.

1. "resources": [
2. {
3. "type": "Microsoft.Storage/storageAccounts",
4. "name": "[parameters('storageAccountName')]",
5. "location": "[parameters('location')]",
6. "sku": {
7. "name": "[parameters('storageSku')]"
8. },
9. "kind": "[parameters('storageKind')]",
10. "properties": {
11. "accessTier": "[parameters('storageTier')]",
12. "allowBlobPublicAccess": false
13. }
14. },
15. {
16. "type": "Microsoft.Storage/storageAccounts/blobServices",
17. "name": "[format('{0}/{1}', parameters('storageAccountName'),
'default')]",
18. "properties": {
19. "deleteRetentionPolicy": {
20. "days": 14,
21. "enabled": true
22. },
23. "restorePolicy ": {
24. "days": 7,
25. "enabled": true
26. },
27. "containerDeleteRetentionPolicy": {
28. "enabled": true,
29. "days": 20
30. }
31. },
32. "dependsOn": [
33. "[resourceId('Microsoft.Storage/storageAccounts',
parameters('storageAccountName'))]"
34. ]
35. }
36. ]

Given below are two statements based on the above ARM template. Select Yes if the
statement is correct. Else select No.
 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Short Answer for Revision:
AllowBlobPublicAccess is False. This disallows public access to the storage account.
So, authorization is required.
The deleteRetentionPolicy property is enabled and set to 14 days. This ensures that deleted blobs remain in a soft-delete state for 14 days.

Detailed Answer:
Statement 1:
In the given ARM template, the AllowBlobPublicAccess property is set to false. When
you deploy this template and navigate to the Configuration section of the storage
account in the Azure portal, you will see that the Allow Blob public access is set to
disabled.
This property disables anonymous access to all blobs and containers in the storage
account. So, when you create a container in the storage account, you cannot set the
blob or the container public access levels for the container.

Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal#set-the-storage-accounts-allowblobpublicaccess-property
Since all requests have to be authorized, statement 1 -> Yes.

Statement 2:
In the given ARM template, the deleteRetentionPolicy and
containerDeleteRetentionPolicy properties define the retention period for the blob and
the container, respectively. That is, they indicate the number of days that the deleted
items should be retained in the soft-delete state.
After you deploy the ARM template, you can view these property values in the Azure
portal when you navigate to the Data Protection section of the storage account.
Since the number of days for the deleteRetentionPolicy property is set to 14 days, a
deleted blob will be in a soft delete state for 14 days. Statement 2 -> Yes.
Reference
Link: https://learn.microsoft.com/en-us/azure/templates/microsoft.storage/
storageaccounts/blobservices?pivots=deployment-language-
terraform#blobservicepropertiesproperties-2
Option B is the correct answer.

GitHub Repo Link: Define an ARM template with storage account resource
properties

Resources
Define an ARM template with storage account resource properties
Domain
Implement and manage storage

Question 34
You have a Recovery Services Vault and two storage accounts, and two Log Analytics
workspaces in different regions in your Azure subscription.
You need to create a diagnostic setting for vault01 to stream platform logs and
metrics to the Log Analytics workspace and a storage account. Which resources can
you use as the destination?

 Only strdev012, Only LogAnalytics01


 Only strdev012, Both LogAnalytics01 and LogAnalytics02
 Both strdev011 and strdev012, Only LogAnalytics01
 Both strdev011 and strdev012, Both LogAnalytics01 and
LogAnalytics02
Overall explanation
Question 2:
While creating a diagnostic setting for any Azure resource, you can send data to the
Log Analytics workspace, or archive data to an Azure storage account, or stream to an
Azure Event Hub.
You can send the vault log data to a Log Analytics workspace in any location or
subscription.
So, question 2 -> Both LogAnalytics01 and LogAnalytics02.

Question 1:
Whereas you can archive the log data to a storage account only in the same region as
the vault, as the Recovery Services Vault resource is regional.
Question 1 -> Only strdev012. Option B is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/
diagnostic-settings?tabs=portal#destination-limitations
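A sketch of that diagnostic setting in PowerShell (assuming Az.Monitor 3.x+, whose New-AzDiagnosticSetting takes log/metric objects; the resource group names are illustrative, and category names vary by resource type, so treat them as placeholders):

# Resolve the resource IDs for the vault and the two valid destinations.
$vaultId     = (Get-AzRecoveryServicesVault -Name "vault01" -ResourceGroupName "rg-backup").ID
$workspaceId = (Get-AzOperationalInsightsWorkspace -Name "LogAnalytics02" -ResourceGroupName "rg-monitor").ResourceId
$storageId   = (Get-AzStorageAccount -Name "strdev012" -ResourceGroupName "rg-storage").Id

$log    = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"
$metric = New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category "AllMetrics"

# The storage account must be in the vault's region; the workspace can be anywhere.
New-AzDiagnosticSetting -Name "vault01-diag" -ResourceId $vaultId `
  -WorkspaceId $workspaceId -StorageAccountId $storageId `
  -Log $log -Metric $metric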
GitHub Repo Link: Configure diagnostic setting for a Recovery Services Vault

Resources
Configure diagnostic setting for a Recovery Services Vault
Domain
Monitor and maintain Azure resources
Question 35
You need to deploy an Azure Bastion service in a virtual network to enable RDP
connectivity through the Azure portal. Choose the VNet subnet and the subnet size
you would select.

Correct answer
AzureBastionSubnet, Larger than /27
Default, Smaller than /27
BastionSubnet, Larger than /27
AzureBastionSubnet, Smaller than /27
Overall explanation
The Azure Bastion service requires a dedicated subnet in a VNet. The subnet should
be named AzureBastionSubnet, and not anything else. So, answer 1 ->
AzureBastionSubnet.
Subnets larger than /27 like /26, /25, etc., offer more IP addresses than subnets
smaller than /27. So, the subnet size should be larger than /27. Answer 2 -> Larger
than /27.

Reference Link: https://learn.microsoft.com/en-us/azure/bastion/configuration-settings#subnet

Resources
Subnets for Azure Bastion service
Domain
Implement and manage virtual networking

Question 36
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have the following Azure subscriptions organized under their parent management
groups, as shown below:
Given below is the hierarchy of management groups and the total number of
subscriptions they contain:

Goal: You have to ensure that only Virtual Machine resources in the East Asia region
can be created in the Sales-tech subscription.

Solution: You assign the following built-in policies:

Does the solution meet the goal?


 Yes
 No
Overall explanation
It’s easier to answer this question once we collect all the relevant information and
create the hierarchy.
Tenant Root Group is the root management group’s display name, so it’s always at
the top of the hierarchy. And it’s the parent management group for the fin-
client subscription.
Thanks to icons8 for the above icons

From image 2 in the question, it is evident that Tenant Root Group is also the parent
management group for both the IT Management and Marketing management groups.
From image 1 in the question, these two management groups are parent
management groups for IT-dept and mkgt-prod subscriptions.

Thanks to icons8 for the above icons

We still need to figure out where the Sales-tech subscription and its parent
management group fit in the hierarchy. From image 2 in the question, we know that
the Marketing management group has two subscriptions. One is the mkgt-
prod subscription, and the other must be the Sales-tech subscription.
But the parent management group of the Sales-tech subscription is Sales and
not Marketing. Hence, we conclude that the parent management group
of Sales is Marketing. Here is the final hierarchy:

Assigning the Allowed resource types policy at the Tenant Root Group scope will affect
all the child management groups and subscriptions. So, this policy assignment
ensures that users can create only Virtual Machines in the Sales-tech subscription.


But assigning the Allowed locations policy at the IT Management group scope affects
only the IT-dept subscription, not the Sales-tech subscription.
So, users can create a virtual machine resource in any location under the Sales-
tech subscription.
The given solution does not meet the stated goal. Option No is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/governance/management-groups/overview#hierarchy-of-management-groups-and-subscriptions
https://learn.microsoft.com/en-us/azure/governance/policy/overview#policy-definition
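
For reference, a hedged PowerShell sketch of assigning the built-in Allowed locations policy at a management group scope. The assignment name and management group ID are illustrative, and the property path for the display name can differ slightly between Az.Resources versions:

$def = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }
New-AzPolicyAssignment -Name 'allowed-locations' `
    -Scope '/providers/Microsoft.Management/managementGroups/Sales' `
    -PolicyDefinition $def `
    -PolicyParameterObject @{ listOfAllowedLocations = @('eastasia') }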

Resources
Restrict resource creation by Azure Policy - 1
Domain
Manage Azure identities and governance

Question 37
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have the following Azure subscriptions organized under their parent management
groups, as shown below:

Given below is the hierarchy of management groups and the total number of
subscriptions they contain:
Goal: You have to ensure that only Virtual Machine resources in the East Asia region
can be created in the Sales-tech subscription.

Solution: You assign the following built-in policies:

Does the solution meet the goal?


 Yes
 No
Overall explanation
Refer to the explanations from the previous question in this set on how to create the
hierarchy:


Assigning the Allowed resource types policy at the Tenant root group scope will take
effect on all child management groups and subscriptions. So, this policy assignment
ensures that users can create only Virtual Machines in the Sales-tech subscription.
And assigning the Allowed locations policy at the Sales management group scope
takes effect only on the Sales-tech subscription.
But since the parameter used for the Allowed locations policy is Southeast Asia, this policy allows resources to be created only in that region and denies resource creation in all other regions, including East Asia, which is where the goal requires Virtual Machines to be created.
The given solution does not meet the stated goal. Option No is the correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/governance/policy/overview#policy-
definition

Resources
Restrict resource creation by Azure Policy - 2
Domain
Manage Azure identities and governance

Question 38
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have the following Azure subscriptions organized under their parent management
groups, as shown below:

Given below is the hierarchy of management groups and the total number of
subscriptions they contain:

Goal: You have to ensure that only Virtual Machine resources in the East Asia region
can be created in the Sales-tech subscription.
Solution: You assign the following built-in policies:

Does the solution meet the goal?


 Yes
 No
Overall explanation
Refer to the explanations from the 1st question in this set on how to create the
hierarchy:


Assigning the Allowed resource types policy at the Marketing management group
scope will take effect on the child management group Sales and the
subscriptions mkgt-prod and Sales-tech . So, this policy assignment ensures that users
can create only Virtual Machines in the Sales-tech subscription.
And assigning the Allowed locations policy at the Tenant Root Group scope affects
all the subscriptions.
Since the effect of these policies is to deny all resources/locations not part of the
parameter list, users can create only virtual machine resources in the East Asia
location (only) under the Sales-tech subscription.
The given solution meets the stated goal. Option Yes is the correct answer.
Resources
Restrict resource creation by Azure Policy - 3
Domain
Manage Azure identities and governance

Question 39
You have a SQL backup file in your on-premises directory named backups.

You have to upload only the backup file (01-04.bak) to a blob storage
container sqlbackups.
Which of the following azcopy commands would you use? Select two options.
 azcopy copy 'D:\backups'
'https://strdev011.blob.core.windows.net/sqqlbackups/<<SAS token>>'
 azcopy copy 'D:\backups'
'https://strdev011.blob.core.windows.net/sqlbackups/<<SAS token>>' --
recursive
 azcopy copy 'D:\backups\*'
'https://strdev011.blob.core.windows.net/sqlbackups/<<SAS token>>'
 azcopy copy 'D:\backups\*' 'https://strdev011.blob.core.windows.net/
sqlbackups/<<SAS token>>' --recursive
Overall explanation
The azcopy command enables you to copy blobs or files to or from a storage account. The options in the question all follow the same general upload syntax.
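
A sketch of that syntax, with illustrative placeholders:

azcopy copy '<local-path>' 'https://<storage-account>.blob.core.windows.net/<container>/<SAS-token>' [--recursive]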

Option A tries to copy the backups directory and its contents to the sqqlbackups blob container in the storage account. But since it tries to copy a directory, the azcopy command expects the --recursive parameter, even if the directory doesn’t have any subdirectories.

As the error message indicates, if you do not want to use the --recursive parameter, append a trailing wildcard to the source directory name (like D:\backups\*).
Option A is incorrect.

Option B has the same command as option A but with an additional recursive
parameter. So, the command is successful, and the backup file is copied into the blob
container.
But since we specified a directory as the source, the command copies the
directory backups and the file, not just the file (shown below):

Since we need to upload only the backup file, option B is incorrect.


Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-blobs-upload#upload-a-directory

In option C, the wildcard character in the source parameter ensures that we copy only
the files from the source directory to the target container. We get the required output
as shown below:

Option C is one of the correct answers.


Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-blobs-upload#upload-directory-contents
Option D has the same command as option C but with an additional --recursive parameter. The --recursive parameter is required when the source is a directory, so that subdirectories and their contents are copied. However, it has no effect when copying files (either individual or all) in the directory, so this command produces the same output.
Option D is the other correct answer.
GitHub Repo Link: Upload the backup file to storage account using azcopy - PS
commands.ps1
Resources
Upload the backup file to storage account using azcopy
Domain
Implement and manage storage

Question 40
The below ARM template creates three VNets: private, internal, and public. It also
creates two subnets in each VNet.

1. "parameters": {
2. "vNetNames": {
3. "type": "array",
4. "defaultValue": [
5. "private",
6. "internal",
7. "public"
8. ]
9. }
10. },
11. "resources": [
12. {
13. "type": "Microsoft.Network/virtualNetworks",
14. "name": "[parameters('vNetNames')[copyindex()]]",
15. "location": "[resourceGroup().location]",
16. "properties": {
17. "addressSpace": {
18. "addressPrefixes": [
19. "[concat('10.', copyIndex(), '.0.0/16')]"
20. ]
21. },
22. "copy": [
23. {
24. "name": "subnets",
25. "count": 2,
26. "input": {
27. "name": "[concat('subnet', copyIndex('subnets'))]",
28. "properties": {
29. "addressPrefix": "[concat('10.', copyIndex(), '.',
copyIndex('subnets', 3), '.0/24')]"
30. }
31. }
32. }
33. ]
34. },
35. "copy": {
36. "name": "vnetcopy",
37. "count": "[length(parameters('vNetNames'))]"
38. }
39. }
40. ]
41. }
What will be the address prefixes of the two subnets in the VNet named ‘internal’?
 10.1.3.0/24 and 10.1.4.0/24
 10.1.0.0/24 and 10.1.1.0/24
 10.2.3.0/24 and 10.2.4.0/24
 10.2.0.0/24 and 10.2.1.0/24
Overall explanation
Short Answer for Revision:
The copy loop in the resources section creates multiple copies of the resources
(VNets). The copy loop in the properties section creates multiple properties (subnets)
of the VNet resource. The copyIndex function refers to the current iteration.
The internal VNet will have the address prefix 10.1.0.0/16 (as it is the second element
in the vNetNames array).
For the two subnets of the internal VNet, the first two octets will begin with 10.1.
(copyIndex function refers to the current resource iteration as there is no reference to
the property name).
The copyIndex function in the third octet refers to the current property iteration. It is
offset by 3. So, the address prefixes will be 10.1.3.0/24 and 10.1.4.0/24.

Detailed Answer:
You can create multiple instances of a resource by adding a copy loop. You can add a
copy loop to any four sections in the ARM template: resources, properties, variables,
and outputs.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-resources

We have two copy loops in the given template. One copy loop in the properties
section. The other copy loop is in the resources section.
For any copy loop, the copyIndex() function returns the current index value of the
iteration.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/copy-resources#resource-iteration

So, while creating multiple VNet resources using the copy loop ‘vnetcopy’ (in the
resources section):
a. The ARM template uses the copyIndex() function to retrieve the VNet name from
the parameter array in each iteration.
b. Since the copyIndex() is zero-based, we will get address prefixes like 10.0.0.0/16, 10.1.0.0/16, and 10.2.0.0/16, for the three VNets private, internal, and public, respectively.
Note that the copyIndex() function has no reference to the copy loop (defined in the
resources section). But when you use the copyIndex() function inside a property
iteration, provide the property name (i.e., subnets) to retrieve the current index of the
property iteration.

You can also use only the copyIndex() function without any property name inside the
property iteration. When used this way, the copyIndex() retrieves the current index
value of the resource iteration. Given below is an example to better illustrate the
point:
We already established that the VNet named 'internal' has the address space
10.1.0.0/16. Since the subnets too, use the copyIndex() function to retrieve the
current index value of the resource iteration, the first two octets for the two subnets
will be 10.1.
Since the value of copyIndex is offset by 3, the first three octets for the two subnets
will be 10.1.3. and 10.1.4.
Deploying this template should create three VNets, public, private and internal, each
with two subnets, subnet0, and subnet1. The subnets of the internal VNet should have
the address prefixes, 10.1.3.0/24 and 10.1.4.0/24, respectively.
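
To verify, a hedged PowerShell sketch that deploys the template and lists the subnet prefixes of the internal VNet. The resource group and file names are illustrative:

New-AzResourceGroupDeployment -ResourceGroupName 'rg-dev-01' -TemplateFile '.\vnets.json'
(Get-AzVirtualNetwork -Name 'internal' -ResourceGroupName 'rg-dev-01').Subnets |
    Select-Object Name, AddressPrefix   # expect 10.1.3.0/24 and 10.1.4.0/24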

Option A is the correct answer.


GitHub repo link: Use copyindex to create multiple VNets and subnets in ARM
template.json

Resources
Use copyindex to create multiple VNets and subnets in ARM template
Domain
Deploy and manage Azure compute resources
Question 41
You have to deploy the below Azure Resource Manager (ARM) template to a resource
group.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "defaultValue": "West US",
      "type": "String",
      "allowedValues": [
        "Australia East",
        "North Europe",
        "UK South",
        "West US"
      ],
      "metadata": {
        "description": "Location for all resources."
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/availabilitySets",
      "apiVersion": "2020-06-01",
      "name": "availabilitySet1",
      "location": "West Europe",
      "properties": {
        "platformFaultDomainCount": 2,
        "platformUpdateDomainCount": 10
      }
    }
  ]
}

Given below are two PowerShell commands that use the New-
AzResourceGroupDeployment cmdlet to deploy the template. Select the output
location of the availability set resource for each command.
 Error, West Europe
 West Europe, North Europe
 West US, West US
 West Europe, West Europe
Overall explanation
The New-AzResourceGroupDeployment command deploys the resources defined in an
ARM template to a resource group.
Reference Link: https://learn.microsoft.com/en-us/powershell/module/az.resources/
new-azresourcegroupdeployment?view=azps-9.3.0

PowerShell Command 1:
The allowedValues property of the parameter lets you define the allowed values that
can be used for the parameter. If you use any value for the location, like West Europe,
other than the allowed values defined in the array, the deployment will fail. So,
PowerShell Command 1 -> Error.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/parameters#allowed-values

PowerShell Command 2:
For the second PowerShell command, the North Europe region passed as a value for
the location parameter is defined in the allowedValues property of the parameter. So,
there shouldn’t be any issue concerning the allowedValues property.
But although we use North Europe as a parameter value while executing the
PowerShell command, we have a location value (West Europe) hardcoded in the
resources section of the ARM template. If there is a conflict of value for any property
of a resource, the ARM template will use the value in the resources section for
deployment.
Note that the main role of the parameters section is to enable template reusability by
allowing template parameterization (using different parameter values) across
environments. That is, you can define different parameter values for each
deployment.
But the resources section has the final say in all matters related to resource
deployment.

The value supplied on the command line takes effect only if the resources section references the location parameter through the parameters() function.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/syntax#resources
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-
functions-deployment#parameters
So, PowerShell Command 2 -> West Europe. Option A is the correct answer.
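
For illustration, a hedged sketch of what the two deployments could look like. The resource group and file names are assumptions; template parameters surface as dynamic parameters of the cmdlet:

# Command 1: 'West Europe' is not in allowedValues, so validation fails.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-dev-01' -TemplateFile '.\avset.json' -location 'West Europe'
# Command 2: 'North Europe' passes validation, but the hardcoded resource
# location wins, so the availability set lands in West Europe.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-dev-01' -TemplateFile '.\avset.json' -location 'North Europe'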

GitHub Repo link: ARM templates for allowed values for a parameter.zip

Resources
Allowed values for a parameter in an ARM template
Domain
Deploy and manage Azure compute resources
Question 42
Your cloud architecture team deploys several VMs in Azure:

a. 3 VMs in an availability set,


b. 3 VMs, each in a different availability zone, and
c. 2 default instances, deployed across two availability zones using Virtual Machine
Scale Sets that scale based on load.

Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.
 Yes, No
 No, Yes
 Yes, Yes
 No, No
Overall explanation
Statement 1:
VMs in all three availability options do not share any dependency. So, you can stop
any one VM in the availability set, availability zone, and Virtual Machine Scale Sets
without any issues.

Statement 1 is No.

Statement 2:
You can configure the availability set and availability zone only during VM creation.
Once a VM is placed in an availability set or availability zone, there is no way to
remove/update the VM's availability settings. You have to delete and recreate the VM
with the updated configuration.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/
change-availability-set
https://stackoverflow.com/questions/35809679/can-i-remove-an-azure-vm-from-an-
availability-set
Statement 2 is No.
Option D is the correct answer.

Resources
Stop or remove VMs in an availability set or zone or scale set
Domain
Deploy and manage Azure compute resources
Question 43
To provide end-user self-service capabilities, an organization
(ravikiran.onmicrosoft.com) has purchased 50 Microsoft Entra ID P2 licenses. From
where can you assign the license to a user in Microsoft Entra ID?
Select two options.
 From the Licenses blade
 From the User settings blade
 From the custom domain names blade
 From the licenses blade of the user
Overall explanation
To assign the licenses, you can go to the licenses blade in Microsoft Entra ID. Under
All Products, you can view a list of licenses that you can assign to the user. Select
Microsoft Entra ID P2 and click Assign.

In the next window, select users/groups, and assignment options before assigning the
licenses.
Reference Link: https://learn.microsoft.com/en-us/entra/fundamentals/license-users-
groups
Option A is one of the correct answers.

You can also navigate to the individual user’s profile in Microsoft Entra ID and assign
licenses from there.
The experience is a little different, though.
Option D is the other correct answer.

From the User settings blade, you can manage the user’s capabilities like app
registrations, etc., From here, you cannot assign product licenses to the user.

Option B is incorrect.
From the custom domain names blade, you can add and manage custom domains.
Here too, you cannot assign product licenses to the user.
Option C is incorrect.
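
For reference, a hedged sketch of assigning the license with Microsoft Graph PowerShell. The UPN is illustrative, and the user must have a usage location set before a license can be assigned:

Connect-MgGraph -Scopes 'User.ReadWrite.All', 'Organization.Read.All'
$sku = Get-MgSubscribedSku -All | Where-Object SkuPartNumber -eq 'AAD_PREMIUM_P2'
Set-MgUserLicense -UserId 'user1@ravikiran.onmicrosoft.com' `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()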

Resources
Assign licenses to a user in Microsoft Entra ID
Domain
Manage Azure identities and governance

Question 44
You create and place virtual machines in an availability set with the below
configuration.

Due to improper testing, a data center couldn’t stand the generators that supply
backup power to a server rack. How many virtual machines do you expect to be
affected, in a worst-case scenario?
 2
 4
 3
 1
Overall explanation
In an availability set, the virtual machines are distributed to distinct fault and update
domain combinations.
So, the first VM is created in Fault Domain 0 and Update Domain 0. Subsequent VMs
are assigned in increasing order to fault domains and update domains in a round-robin
way.
After a VM is placed in each fault/update domain, the VM placement strategy repeats,
but each time, ensuring a different combination of fault domain and update domain is
selected.
So, within fault domain 0, vm07 is not placed in update domain 0 but in update
domain 1. Similar logic holds true for vm08.

So, when the power supply to any server rack goes down, the availability set ensures that only a maximum of 3 VMs go down.
Option C is the correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/availability-set-
overview

Resources
Effect of failure on VMs placed in an availability set
Domain
Deploy and manage Azure compute resources

Question 45
You create two Virtual Machine Scale Sets with flexible and uniform orchestration
modes. Each VMSS has 3 VMs as the initial instance count.
Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.

 Yes, Yes
 Yes, No
 No, Yes
 No, No
Overall explanation
You can select either of the two orchestration modes while creating a Virtual Machine
Scale Set.
1. The classic uniform mode, and
2. The flexible mode.

A Virtual Machine Scale Set with the uniform orchestration mode exposes its virtual machines as scale set instances, which have limited functionality compared to regular IaaS VMs, whereas a scale set with the flexible orchestration mode exposes actual virtual machine resources.
So, the uniform mode doesn’t expose the VM and the related VM components like
disks, NIC, and Public IPs. These resources are abstracted away from the user. On the
other hand, the flexible mode exposes VMs and the related components of all the
created instances.
Since the uniform orchestration mode doesn’t create a VM resource (doesn’t offer
many of the VM APIs), you cannot resize any individual VM. But you can resize all VM
instances at the scale set level.

Statement 1 -> Yes.


It follows that since the flexible orchestration mode creates VM resources (offers many
of the VM APIs), you can resize any individual VM. You can also resize all VM instances
at the scale set level.
Statement 2 -> Yes.
Option A is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/
virtual-machine-scale-sets-orchestration-modes
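
A hedged sketch of resizing at each level; the resource and instance names are illustrative:

# Resize all instances at the scale set level.
Update-AzVmss -ResourceGroupName 'rg-dev-01' -VMScaleSetName 'vmss-dev' -SkuName 'Standard_D4s_v3'
# Flexible mode only: resize one instance like any standalone VM.
$vm = Get-AzVM -ResourceGroupName 'rg-dev-01' -Name 'vmss-dev_instance1'
$vm.HardwareProfile.VmSize = 'Standard_D8s_v3'
Update-AzVM -ResourceGroupName 'rg-dev-01' -VM $vm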

Resources
Resize Virtual Machine Scale Set instances
Domain
Deploy and manage Azure compute resources

Question 46
In your Microsoft Entra ID tenant (ravikirans.onmicrosoft.com), there are three users.
The below table summarizes their roles in Microsoft Entra ID and Azure subscription.

Which users can assign a subscription owner access to a new user ( User Four)?
 Only User Two
 Only User Two and User Three
 Only User One and User Three
 Only User One and User Two
Overall explanation
Short Answer for Revision:
The role you have in Microsoft Entra ID does not matter. To assign a subscription
owner access to a new user, you need to have either an owner subscription access
(makes sense) or a user access administrator role (the sole functionality is to
manage/assign access to Azure resources).

Detailed Explanation:
Microsoft Entra ID roles and Azure subscription roles are independent of each other. A
user with a global administrator (highest privileges) role in a Microsoft Entra ID tenant
will not have any default permissions on Azure subscriptions within the tenant.
Similarly, a user with the owner (highest privileges) access to an Azure subscription
will not have any default administrative roles on the directory.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
rbac-and-directory-admin-roles#differences-between-azure-roles-and-microsoft-entra-
roles

Since User One does not have any Azure subscription role, he cannot view the
subscription in the tenant. Subsequently, he cannot assign any subscription role to
users within the tenant.
Since we have assigned an owner subscription access to User Two , we can deduce
that he can add other users as owners of the subscription.

Finally, although User Three does not have owner access to the subscription (only
User Access Administrator access), he can still add other users as subscription
owners.
In fact, the primary function of this role is to add/remove user access to Azure resources. The definition of the User Access Administrator role lists only actions under the Microsoft.Authorization resource provider.
The below image shows that User Three can assign a subscription owner access
to User Four .
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-access-control/
built-in-roles#user-access-administrator
So, Only User Two and User Three can assign a subscription owner access to a new
user. Option B is the correct choice.
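
For illustration, a hedged sketch of the assignment User Two or User Three could perform. The sign-in name and subscription ID are illustrative:

New-AzRoleAssignment -SignInName 'userfour@ravikirans.onmicrosoft.com' `
    -RoleDefinitionName 'Owner' `
    -Scope '/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00'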

Resources
Assign subscription owner access to a new user
Domain
Manage Azure identities and governance

Question 47
You plan to deploy an ASP.NET web application in three Azure Virtual Machines, vm01,
vm02, and vm03. You consider using Azure availability sets to ensure that an instance
of the app is always available when Microsoft patches the hypervisor of the underlying
host machine.
How would you configure the availability set?
 1 fault domain and 3 update domains
 3 fault domains and 1 update domain
 4 fault domains and 3 update domains
 2 fault domains and 2 update domains
Overall explanation
When you create a VM in an availability set, you place the VM in a fault domain and an update domain. A fault domain is a physical grouping of servers in a server rack, whereas an update domain is a logical grouping of servers spread across several server racks/fault domains.
There can be many fault domains/server racks in an Azure data center, but your
subscription allows you to select only 3 of them, and they map to specific server
racks.

So, option C, which configures the availability set with 4 fault domains, is incorrect.

All servers/VMs in a fault domain share a common power source and a network switch.
So, placing VMs in different fault domains (FD0 and FD1) protects your app from
unplanned events like power interruptions and network outages.
But planned events like patching the hypervisor of the underlying host machine are
performed on each update domain at a time (only UD0 or UD1). So, distributing the
VMs across at least two update domains (UD0 and UD1) ensures that an instance of
the app is always available, as no two update domains are updated at the same time.

Option D distributes the VMs across two update domains. So, when you deploy three
VMs in the availability set, they use these fault domain and update domain
configurations.

Availability sets spread the VMs to as many distinct fault and update domain
combinations as possible in a round-robin way.
Option D is the correct answer.
Consequently, option B, which configures the availability set to have just one update
domain is incorrect.

If you use one fault domain, you can use only one update domain. Creating an availability set with only one fault domain places all the VMs in the same rack, typically to ensure low communication latency between the VMs, so it doesn’t make sense to spread those VMs across different update domains. In such a design, you should have other redundancies in your architecture to protect the system from failure.
Option A is also incorrect.
Reference Link: https://learn.microsoft.com/en-us/answers/questions/1036263/fault-
domain-vs-update-domain.html
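
A minimal PowerShell sketch of creating such an availability set; the names and region are illustrative:

New-AzAvailabilitySet -ResourceGroupName 'rg-dev-01' -Name 'avset-web' `
    -Location 'eastus' -Sku 'Aligned' `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 2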

Resources
Configure VM availability sets
Domain
Deploy and manage Azure compute resources
Question 48
Given below is an Azure Resource Manager template that deploys a resource group
and a Virtual Network in the resource group.
Observe the given template and answer the below two questions based on the
template definition.
 Microsoft.Resources/providers, Add-AzSubscriptionDeployment
 Microsoft.Resources/templates, New-AzResourceGroupDeployment
 Microsoft.Resources/deployments, New-AzDeployment
Overall explanation
Question 2:
You can scope your deployment to a resource group, subscription, management
group, or tenant with different PowerShell commands. That is, you can deploy
resources defined in the ARM template at any of these scopes.
For each scope, the schema in an ARM template describes properties available to
build a template. So, you have a schema for resource group deployments, different
from subscription deployments, since you create distinct resources at different
scopes.
From the two schemas in the given ARM template, we can deduce that the
deployment happens at two scopes: subscription and resource group scope.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group?tabs=azure-cli#schema
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-
subscription?tabs=azure-cli#schema

This knowledge alone may lead us to believe that the PowerShell command New-AzDeployment (for which New-AzSubscriptionDeployment is an alias) is used to deploy the template. But this is not a foolproof way to conclude, as I have seen templates being deployed successfully at my workplace even after using an incorrect schema.
If you analyze the resources section in the template, it is evident that we deploy a
resource group. Since there is no concept of nested resource groups in Azure, we
cannot deploy a resource group at the resource group scope. So, a resource group
can only be deployed at a subscription scope.

Our belief is confirmed, so the PowerShell command New-AzDeployment should be used to deploy the template. Question 2 -> New-AzDeployment.
Reference Link: https://learn.microsoft.com/en-us/powershell/module/az.resources/
new-azdeployment?view=azps-9.3.0
https://learn.microsoft.com/en-us/powershell/module/az.resources/new-
azresourcegroupdeployment?view=azps-9.3.0

Refer to the lecture video titled: Create external user accounts to know the
differences between the PowerShell verbs New and Add. We create a deployment at
the specified scope, like a subscription, and we don't add a deployment to the
subscription. So, Add-AzSubscriptionDeployment is not a valid command and is
incorrect.
Reference Link: https://learn.microsoft.com/en-us/powershell/scripting/developer/
cmdlet/approved-verbs-for-windows-powershell-commands?view=powershell-
7.3#similar-verbs-for-different-actions

Question 1:
Note that there is a nested child template within the main template as there is a
template property defined on the second resource. The template property defines the
content for the child template.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/linked-templates?tabs=azure-powershell#nested-template

One fine use case of nested templates is to create resources at different deployment
scopes from a single template.
As the name indicates, the New-AzDeployment or the New-
AzResourceGroupDeployment PowerShell command creates a new deployment
resource at the respective subscription or resource group scopes. This deployment
resource contains the template content that defines the resources.
But we cannot use the New-AzDeployment command to deploy a template containing
both the VNet and the resource group in the same deployment, as the PowerShell
command creates a deployment resource at the subscription scope, whereas a VNet
cannot be deployed directly to a subscription.
So, in the same template, we create another deployment resource at the resource
group scope and embed the code that creates the VNet as a nested template of the
deployment resource.
The Microsoft.Resources/deployments resource type creates a deployment resource
at the specified resource group scope. This template contains the content only for the
child template. Since the VNet is specified in the child template, this deployment
creates a virtual network at the resource group scope.
So to sum it up, when you use the New-AzDeployment command to deploy the given
template, first, the resource manager creates a resource group at the subscription
scope. The dependsOn element ensures that the ARM template doesn’t create the
deployment resource until the resource group is deployed successfully.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/resource-dependency#dependson

After the resource group is created, the ARM template creates the deployment
resource at the resource group scope. Since a VNet is defined as a resource in the
nested template, the resource manager creates the VNet in the resource group.
So, question 1 -> Microsoft.Resources/deployments.
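
A hedged sketch of the deployment command; the location and file name are illustrative. Subscription-scope deployments require a location for the deployment metadata:

New-AzDeployment -Location 'eastus' -TemplateFile '.\rg-and-vnet.json'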

There is no resource type like Microsoft.Resources/templates. And Microsoft.Resources/providers is related to Azure resource providers, not deployments.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
resource-provider-operations#microsoftresources
Option D is the correct answer.

GitHub Repo Link: Select resource type and PowerShell command to deploy the
ARM template (ARM Template and PS script).zip
Resources
Select resource type and PowerShell command to deploy the ARM template
Domain
Deploy and manage Azure compute resources
Question 49
You have the below list of users in a hybrid deployment of Microsoft Entra ID.

1. user one is on-premises sync enabled, which means the user is created in Windows
Server Active Directory and synced to Microsoft Entra ID with Microsoft Entra Connect.
2. test user is not on-premises sync enabled, so test user is created in Microsoft
Entra ID.

Where can you edit the Department and Age group properties of user one?
 Only Active Directory, Only Microsoft Entra ID
 Only Active Directory, Both Microsoft Entra ID and Active Directory
 Only Microsoft Entra ID, Only Active Directory
 Both Microsoft Entra ID and Active Directory, Both Microsoft Entra ID
and Active Directory
Overall explanation
As the test user is created in Microsoft Entra ID, you can edit all the user’s properties, like Department and Age group, in Microsoft Entra ID.
But since user one is sourced from Windows Server Active Directory and synced to
Microsoft Entra ID, most of the user’s properties like Department and other Job
Information are unavailable to edit in Microsoft Entra ID.
If you want to update the Department of the user, you need to access the user profile
details in the Windows Server Active Directory and make changes there.
Since the on-premises Active Directory is synced with Microsoft Entra ID, any changes
made in the Windows Server Active Directory will be reflected in Microsoft Entra ID.
So, Department -> Only Active Directory.

But there are some Microsoft Entra ID-specific properties like Age group and Usage
location that can be edited only in Microsoft Entra ID for either of the two types of
users. These properties are unavailable in Active Directory.

So, Age group -> Only Microsoft Entra ID.


Reference Link: https://learn.microsoft.com/en-us/entra/fundamentals/how-to-
manage-user-profile-info
Option A is the correct choice.

Resources
User profile properties in Microsoft Entra ID
Domain
Manage Azure identities and governance

Question 50
You have two storage accounts in different subscriptions in a Microsoft Entra ID
tenant.
Given below are two statements about using the azcopy tool to copy data between
storage accounts across different platforms. Select Yes if the statement is correct.
Else select No.

 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Statement 1:
The currently supported version of the AzCopy tool enables you to copy data only between blobs and file shares; it does not support tables or queues. Tables were supported in earlier versions of azcopy, but not any longer.
So, you cannot copy queue messages from one storage account to the other.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-v10#authorize-azcopy
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-
copy#synopsis
https://medium.com/@andrewkelleher_873/hi-in-theory-yes-247aa40f4bd9
Statement 1 is Yes.

Statement 2:
AzCopy is available on all major operating systems like Windows, Linux, and macOS.
So, you can upload a video file from a macOS device to the blob container in the
Azure storage account.
Statement 2 is No.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-v10#download-azcopy
Option A is the correct answer.

Resources
Use azcopy in different platforms
Domain
Implement and manage storage
Question 51
You have to create a Windows Virtual Machine for hosting a web application in Azure.
How would you ensure the VM has an IIS web server installed after deployment?
Select two correct options.
 Use Azure Custom Script Extension
 Use the Publish-AzVMDscConfiguration PowerShell command
 Use PowerShell DSC extension
 Create a DSC configuration file and upload it to the storage account
Overall explanation
Post VM deployment, you can add functionality to a VM using Azure Virtual Machine
extensions. For example, to install the IIS web server, you can write PowerShell
commands in a file and upload it to the Azure Storage account.

Then, adding a custom script extension to the Windows Virtual Machine in Azure will
download the file from the storage account and run the script on the virtual machine.
After the extension is installed, you can check if the IIS server is installed using the
VM’s Public IP address.

Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/overview
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-
windows
Option A is one of the correct answers.
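
A hedged PowerShell sketch of attaching the extension. The names and URI are illustrative, and the script itself could simply run Install-WindowsFeature -Name Web-Server:

Set-AzVMCustomScriptExtension -ResourceGroupName 'rg-dev-01' -VMName 'vm01' `
    -Name 'install-iis' -Location 'eastus' `
    -FileUri 'https://strdev011.blob.core.windows.net/scripts/install-iis.ps1' `
    -Run 'install-iis.ps1'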
Using the PowerShell command Publish-AzVMDscConfiguration, you can only upload the DSC configuration to Azure blob storage, which can later be applied to an Azure VM. This command zips the PowerShell file, creates a new container in the storage account, and uploads the zip file to that blob container.

Since using only this command doesn’t achieve the objective of installing the IIS web
server, option B is incorrect.
Reference Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
publish-azvmdscconfiguration?view=azps-9.3.0

Usually, after we publish the DSC configuration to a storage account, we run the Set-
AzVMDscExtension command to inject the DSC configuration into a virtual machine.
This command downloads the zip file from the storage account and invokes the
required configuration that turns on the IIS web server role. You can also view the DSC
extension under Extensions + applications to check its status.
Using these two commands too, we can install the IIS web server post VM
deployment. But using only the Publish-AzVMDscConfiguration command will only
push the DSC configuration to the storage account.
Reference Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
set-azvmdscextension?view=azps-9.3.0
https://github.com/uglide/azure-content/blob/master/articles/virtual-machines/virtual-
machines-windows-extensions-dsc-overview.md#getting-started
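
A hedged sketch of the two-step DSC flow. The names are illustrative, and IISInstall.ps1 is assumed to contain a configuration named IISInstall:

Publish-AzVMDscConfiguration -ConfigurationPath '.\IISInstall.ps1' `
    -ResourceGroupName 'rg-dev-01' -StorageAccountName 'strdev011'
Set-AzVMDscExtension -ResourceGroupName 'rg-dev-01' -VMName 'vm01' `
    -ArchiveBlobName 'IISInstall.ps1.zip' -ArchiveStorageAccountName 'strdev011' `
    -ConfigurationName 'IISInstall' -Version '2.77' -Location 'eastus'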

We can also install an IIS web server using the PowerShell DSC extension. In the DSC
extension:
1. We upload the zip file created earlier
2. Specify the configuration name in this format: FileName\configurationName and,
3. Specify the DSC extension version to install.
This is another way to install the IIS web server post-deployment. Option C is the other
correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-
overview#azure-portal-functionality

Option D is incorrect. Creating a configuration file and uploading it to the storage account is similar to the task performed by option B and solves only the first part of the requirement.
GitHub repo link: Post deployment VM configuration scripts and PowerShell.zip

Resources
Post deployment VM configuration
Domain
Deploy and manage Azure compute resources

Question 52
You use an ARM template to deploy a virtual network and a storage account in a
resource group.

Below is the template used to deploy the resources.


You make some changes to the ARM template and redeploy the template in either
incremental or complete deployment mode. Given below are two statements based on
this information. Select Yes if the statement is correct. Else select No.

 Yes, Yes
 Yes, No
 No, No
 No, Yes
Overall explanation
Statement 1:
Although you can update many properties of Azure resources with any deployment
modes, you cannot update the location and the type of the resource using either the
complete or incremental deployment mode.
The only alternative is to create a new resource with a different name.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-modes
Statement 1 -> Yes.

Statement 2:
The only difference between the incremental or complete deployment mode is how
they treat resources not defined in the ARM template but already live in the resource
group.
The incremental mode:
a. Adds the resources defined in the ARM template, but not in the resource group to
the resource group.
b. Leaves the resources not defined in the template untouched, like the virtual
network resource.

The complete mode:
a. Adds the resources defined in the ARM template, but not in the resource group, to the resource group.
b. Deletes the resources not defined in the template, like the virtual network resource,
from the resource group.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/deployment-modes#incremental-mode

So, deploying a template without the VNet resource in incremental mode will not
delete the existing VNet in the resource group.
Statement 2 -> No. Option B is the correct answer.

GitHub Repo link: Redeploy a template in different deployment modes ARM template.json

Resources
Redeploy a template in different deployment modes
Domain
Deploy and manage Azure compute resources

Question 53
There are three virtual machines created in your Azure subscription.

And your subscription has the following quota limits.


Below is the configuration information on the D-series v3 and E-series v5 family of
Azure VM sizes.
Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.

 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
In an Azure subscription, there is a limit on the number of vCPUs you can deploy per
region. There is also a limit on the number of vCPUs per VM family size per region.
For example, there is a limit of 10 vCPUs in every Azure region in my subscription,
other than the East US region, which has a limit of 15 vCPUs.
Similarly, there is a limit of 10 vCPUs for each VM family size per region. Shown below
is the vCPU limit for the Dsv3 VM family across all regions.

Your vCPU limits may be different, but it’s important to note that there is a certain
limit on these two factors.

Statement 1:
From the above image, it is evident that although vm03 of series Dsv3 is deallocated,
it is still counted in the current usage (80% -> 8 of 10). So, the quota is calculated
based on the total number of allocated and deallocated cores.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quotas

The Total Regional vCPUs of the East US region is 73% (11/15):


1. 8 vCPUs in the deallocated VM (vm03) that belong to the family Dsv3,
2. 1 vCPU in the running VM (vm01) that belong to the B-series family,
3. And 2 vCPUs in the running VM (vm02) that belong to the DCsv2 family of VMs.

Per the given limits for East US region, you can deploy a VM of any family series with
a maximum of up to 4 vCPUs (15-11).
But the VM of size E8bds_v5 needs 8 vCPUs (given in the question). Since it exceeds
the limit, you cannot deploy this VM in the East US region.
In fact, if you try to look for the VM size by selecting the same subscription and region,
you will only find VMs up to a maximum of 4 vCPUs.

Since you cannot deploy a VM of size E8bds_v5 in the East US region, statement 1 ->
No.
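
A hedged sketch of checking these limits from PowerShell; the region name is illustrative:

Get-AzVMUsage -Location 'eastus' |
    Where-Object { $_.Name.LocalizedValue -match 'Total Regional vCPUs|DSv3' } |
    Select-Object @{ n = 'Quota'; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit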

Statement 2:
It’s important to recognize that the family of vm02, which is DCsv2, is different from
the family of vm03, which is Dsv3. Even the VM SKU mentioned in statement 2 is from
the Dsv3 family.
Azure VMs follow a naming convention derived from their family names. So, from a VM
SKU, you can identify the VM family.

VM family names don’t include the number of vCPU cores, as it varies across the VM sizes in a family. So, the VM family of:
1. Standard_DC2s_v2 is DCsv2
2. Standard_E8bds_v5 is Ebdsv5
3. Standard_D8s_v3 is Dsv3
4. Standard_D4s_v3 is Dsv3

Note 1: There may be some exceptions to this rule. For example, family names don’t
include some additive features, especially if they are not common across all VM sizes
in a family.
Example: The VM family of Standard_A8m_v2 is Av2. (The additive feature ‘m’ is not in
the family name as it is not a feature of all the VM sizes in the family).
Note 2: The above VM naming convention is a condensed pattern required for our
purposes. Refer to the link below for the correct naming convention, which explains all
the other characters in the VM names.
Note 3: You don’t have to remember the VM family names for the exam. It is good to
know that VM names are not random and have a pattern.

Reference Link:
https://learn.microsoft.com/en-us/azure/virtual-machines/vm-naming-
conventions#naming-convention-explanation
https://learn.microsoft.com/en-us/azure/virtual-machines/dcv2-series
https://learn.microsoft.com/en-us/azure/virtual-machines/ebdsv5-ebsv5-
series#ebdsv5-series
https://learn.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series#dsv3-series

So, the purpose of understanding the VM family naming convention is to drive home
the point that the family of both D8s_v3 (size of vm03) and D4s_v3 (size of the VM
given in statement 2) is the same, which is Dsv3.
So, although you can deploy a VM with a maximum of 4 vCPUs in the East US region,
you can deploy a VM from the family Dsv3 with only a maximum of 2 vCPUs in the
East US region.
So, you cannot deploy a VM of size D4s_v3 in the East US region. Statement 2 is No.
Option C is the correct answer.

Resources
Azure VM quota usage of vCPUs
Domain
Deploy and manage Azure compute resources

Question 54
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have three resource groups created in the Dev subscription as shown in the
below hierarchy:
Your manager asked you to create a custom role in Azure RBAC that meets the
following objectives:
1. The custom role can export snapshots and disks from a VM.
2. The custom role can apply only to the resource groups rg-dev-01 and rg-dev-02.
3. The user assigned the custom role can reassign the role to other users in the
tenant. But he cannot manage policy assignments.

Solution: You create the below custom role:

{
  "properties": {
    "roleName": "custom role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-01",
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-02"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Authorization/*",
          "Microsoft.Compute/disks/*/write",
          "Microsoft.Compute/snapshots/*/write",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "notActions": [
          "Microsoft.Authorization/policyAssignments/*"
        ],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}

Does the solution meet the custom role objectives?


 Yes
 No
Overall explanation
There are two types of operations in Azure:
1. Control plane operations manage Azure resources. They happen on/to a resource.
2. Data plane operations work on data exposed by the Azure resource. They happen
within a resource.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
management/control-plane-and-data-plane

The reason this matters is that all the control plane operations are defined in the Actions/NotActions sections of a custom/built-in role, and all the data plane operations are defined in the DataActions/NotDataActions sections. Adding DataActions in the Actions section, or Actions in the DataActions section, will produce an error.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
role-definitions#control-and-data-actions

One of the given requirements is the custom role should have permission to export
the snapshot/disk (data) from a VM (Azure resource). So, it’s clear that this permission
should be added in the DataActions section. But the given custom role definition has
no permissions in the DataActions section.
Without going into further details, we can conclude that this solution does not meet
the objectives. Option No is the correct answer.
<<Check the related lecture video on how this custom role works>>

Resources
Create a custom Azure RBAC role - 1
Domain
Manage Azure identities and governance

Question 55
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have three resource groups created in the Dev subscription as shown in the
below hierarchy:

Your manager asked you to create a custom role in Azure RBAC that meets the
following objectives:
1. The custom role can export snapshots and disks from a VM.
2. The custom role can apply only to the resource groups rg-dev-01 and rg-dev-02.
3. The user assigned the custom role can reassign the role to other users in the
tenant. But he cannot manage policy assignments.

Solution: You create the below custom role:

{
  "properties": {
    "roleName": "custom role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-01",
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-02"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Authorization/roleAssignments/*",
          "Microsoft.Compute/snapshots/*",
          "Microsoft.Compute/disks/*",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "notActions": [
          "Microsoft.Authorization/policyAssignments/*"
        ],
        "dataActions": [
          "Microsoft.Compute/disks/*/write",
          "Microsoft.Compute/snapshots/*/write"
        ],
        "notDataActions": []
      }
    ]
  }
}

Does the solution meet the custom role objectives?


 Yes
 No
Overall explanation
Permissions are defined in a custom/built-in role in the following format: {Company}.{ProviderName}/{resourceType}/{action}. For example, Microsoft.Compute/disks/write.
This permission enables the user to create a new disk or update an existing one.
The last part, {action}, of a permission is a verb-like word such as read, write, delete, or action. Read, write, and delete on a resource are self-explanatory. The action verb enables custom operations that cannot be described by the other three verbs, like registering, restarting a machine, downloading, uploading, exporting, etc.

Although the given solution defines permissions related to snapshots/disks in the DataActions section, the verb write is not applicable to exporting snapshots and disks from a VM. The correct verb should be action.
So, when you enter this JSON in a custom role and try to save the role, you get an
error.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/
role-definitions#actions-format
The given solution does not meet the stated goal. Option No is the correct answer.

Resources
Create a custom Azure RBAC role - 2
Domain
Manage Azure identities and governance

Question 56
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
You have three resource groups created in the Dev subscription as shown in the
below hierarchy:

Your manager asked you to create a custom role in Azure RBAC that meets the
following objectives:
1. The custom role can export snapshots and disks from a VM.
2. The custom role can apply only to the resource groups rg-dev-01 and rg-dev-02.
3. The user assigned the custom role can reassign the role to other users in the
tenant. But he cannot manage policy assignments.

Solution: You create the below custom role:

{
  "properties": {
    "roleName": "custom role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-01",
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-02"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Authorization/*",
          "Microsoft.Compute/snapshots/*",
          "Microsoft.Compute/disks/*",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "notActions": [
          "Microsoft.Authorization/policyAssignments/*"
        ],
        "dataActions": [
          "Microsoft.Compute/disks/*/action",
          "Microsoft.Compute/snapshots/*/action"
        ],
        "notDataActions": []
      }
    ]
  }
}

Does the solution meet the custom role objectives?


 Yes
 No
Overall explanation
"Microsoft.Compute/snapshots/*" in the Actions section is essential as it exposes
permissions to read the snapshot in Azure portal and generate the SAS URL (control
plane operation), necessary for exporting the snapshot.
The same applies to "Microsoft.Compute/disks/*" .

The given solution satisfies requirement 1 as the correct permissions for exporting
disks and snapshots are specified in the DataActions section (refer to previous
questions).
So, the user assigned with the role can generate the URL and export the
snapshot/disk (refer to the related lecture video).

"Microsoft.Authorization/*" permissions ensure that the user can assign this role to
any other user in the tenant as the Authorization resource provider exposes the role
assignment permissions.
And "Microsoft.Authorization/policyAssignments/*" in the NotActions section
ensures that the permissions related to assigning policies are removed from the
custom role.
So, if the user tries to assign a policy, he will get the below error.

The given solution meets the objective. Option Yes is the correct answer.
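
For completeness, a hedged sketch of creating a custom role from PowerShell. Note that New-AzRoleDefinition -InputFile expects a flattened JSON layout (top-level Name, Actions, NotActions, DataActions, and AssignableScopes properties) rather than the portal's "properties" wrapper shown above; the file name is illustrative:

New-AzRoleDefinition -InputFile '.\custom-role.json'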

Resources
Create a custom Azure RBAC role - 3
Domain
Manage Azure identities and governance

Question 57
You have a virtual machine scale set deployed with 8 VMs. You want to add a Network
Watcher agent extension to the scale set and upgrade the individual VMs to the scale
set model.
This upgrade policy is defined for the scale set.
How many VMs will be down at any point in time during the upgrade?
 0
 1
 2
 8
Overall explanation
When you update your scale set, for example by changing the administrator name or adding an extension such as Network Watcher for Windows, the scale set model is updated, and the existing VMs in the scale set are no longer in sync with the latest model.

Reference Link: https://stackoverflow.com/questions/70668793/azure-vmss-instance-latest-model-meaning

You can configure any of the upgrade policy modes to decide how to bring individual VMs up to date with the latest scale set model. The given rolling upgrade policy performs the upgrade on a rolling basis with a batch size of 20%.
This means out of a total of 8 VMs, only 20% of the VMs are updated at any point in
time. 20% of 8 is 1.6, but only a max of 1 VM is upgraded at any moment.

So, only one VM will be down at any given point in time. Thus, option B is the correct
answer.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/
virtual-machine-scale-sets-upgrade-scale-set#how-to-bring-vms-up-to-date-with-the-
latest-scale-set-model
Interestingly, 1.6 rounds down to 1, not 2. So, option C is incorrect.
Only in the automatic mode will all 8 VMs be taken down simultaneously for updating
the scale set instances. So, option D is incorrect.
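
The batch size arithmetic, as a quick PowerShell sketch:

$instances = 8
$maxBatchPercent = 20
[math]::Floor($instances * $maxBatchPercent / 100)   # 1 VM upgraded (and down) per batch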

Resources
Azure VM scale set upgrade
Domain
Deploy and manage Azure compute resources

Question 58
You use the default template to define the autoscaling rules while creating a Virtual
Machine Scale Set.

Based on the above information, answer the below two questions:


 >3, 20
 <=2, <=2
 >3, 3-5
 3, 20
Overall explanation
The autoscaling policy template you use while creating a scale set assumes default
values for some properties. To view the default values, open the policy rule after the
VMSS is created.
We can observe that:
1. The Time grain is 1 minute
2. Time grain statistic is the Average
3. Time aggregation is the Average
4. Cool down period is 1 minute
5. And the operator is Greater than
Let’s use these properties to understand how an autoscaling policy works.
First, the [time grain statistic] of [metric] is calculated every [time grain] . So,
plugging in the values translates to:
The [average] of [Percentage CPU] is calculated every [one minute] .

Next, Every [time grain], Autoscale checks the [time aggregation] of the
last [duration] for comparison with the threshold. It translates to:
Every [one minute] Autoscale checks the [average] of the last [5 minutes] for
comparison with the threshold.
For example, 5 minutes after the VMSS is deployed, Azure Autoscale averages the
sampled metrics in the last 5 minutes and compares the value with the threshold, and
initiates a scale action (increase or decrease the VM instance count) if the condition is
satisfied.
Interpreting the autoscaling rule in this way helps us to solve the problem better.

Question 1:
For simulating 100% peak utilization in the virtual machine, I run an infinite loop in a
PowerShell window.

for (;;)
{
    Echo "Looping"
}

CPU utilization peaks at nearly 100%. Let’s leave it to run for 10 minutes.
Since there is only one VM in the scale set to begin with, the average CPU utilization is
the same for both the VM and the scale set, which is 100% in the last 5-minute
window.

So, we can expect the Autoscale to trigger and create a new VM instance. There will
be two instances in the scale set.

The infinite loop ensures that the default VM continues to run with an average CPU utilization of 100%. For all the new VMs created by the scale set, I assume they run at an average of 0% CPU utilization, for reasons I will explain later.
As described earlier, autoscale calculates:
1. The average CPU utilization for the VMs each time grain.
2. And the [average] of the last [5 minutes] for comparison with the threshold.
Since the default value for the cool down period is one minute, there is no additional
delay in the Autoscale checks. We can expect the Autoscale to check the next minute
(6th minute). Since the average CPU utilization for the scale set is still approximately
90%, we can expect the Autoscale to trigger again, creating one more VM instance.
There will now be three VMs in the scale set.
This process continues until the condition fails, i.e., until the average CPU percentage drops below 70%. So, we can expect to have more than 3 VMs in the first 10 minutes.
Note that we arrive at this result assuming the worst-case scenario for CPU utilization
(~0%) for the newly created VMs by the scale set.
This is likely not the case, as the new VMs will have CPU utilization > 0. In this realistic
case, the average CPU utilization of the scale set for the past 5 minutes will linger
higher, creating many VMs (maybe 6 or 7 VMs) before the condition fails. Assuming
the lowest possible CPU utilization for new VMs enables us to concretely prove that
question 1 is >3.

You will not be required to calculate percentage numbers for computing the exact no.
of VMs in the scale set in the exam. The idea of this analysis is to drive home the point
that autoscale checks the condition each time grain (1 minute), after the cool down
period. It creates VMs repeatedly, as long as the condition is true.

Question 2:
From the above analysis, we understand that the Percentage CPU utilization
decreases as the scale set adds more VMs.
Once the metric reaches the threshold, the scale set stops adding more VMs. So, the
scale set is not expected to max out (20 is not the correct answer), as the utilization is
unlikely to remain high, even after the scale set adds many VMs.
Here comes another takeaway. Azure Autoscale evaluates the metric for the scale set
and not the individual VMs. As we add more VMs, the scale set CPU utilization
decreases even though the initial default VM may continue to peak at 100%.
And for this scenario, we have already discussed that the no. of VMs cannot be less than or equal to 2. That would have been possible only had the cool down period been a little longer (say, 10 minutes); in that scenario, we may have ended up with exactly 2 VMs.
Further, since the CPU percentage threshold for the scale-in policy is too low, the scale-in policy would not trigger either, especially given that the infinite loop is still running. So, after 1 hour, we should still have approximately 3-7 running VMs.
We don't have to predict the exact no. of created VMs, as the scale set may sometimes create 3, 6, or 7 VMs. But the idea of the question is that we should be able to eliminate the other two options. So, question 2 -> 3-5.
Option C is the correct answer.
Reference Link: https://negatblog.wordpress.com/2018/07/06/autoscaling-scale-
sets-based-on-metrics/
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-
scale-sets-autoscale-portal

Note:
When you try out this scenario in the Azure portal (which I strongly recommend you
do), you will realize that the timelines differ.
For example, the first autoscale trigger happens around 7:30 minutes after the CPU
utilization peaks at 100%, although I have shown here that it happens after 5 minutes.
The next trigger doesn't happen just 1 minute later, but at around 1 minute and 16
seconds later. There will be some delays related to infrastructure provisioning and
other checks. In the above visuals, I presented a simplified version of the working of
autoscale to drive home a better understanding.
GitHub repo link: Infinite loop for autoscaling demo.zip

Resources
Autoscaling in VMSS
Domain
Deploy and manage Azure compute resources

Question 59
Your team uses an Azure Virtual Machine to run line-of-business apps.
Based on the requirements for the upcoming sprint, you analyze the following
changes to be made to your VM:
a. Attach another network interface to the VM
b. Resize the VM to any VM SKU with 2 vCPUs
c. Install the Network Watcher Agent for Windows extension
d. Enable BitLocker encryption of OS and data disk.

Which requirement(s) calls for stopping the VM from the Azure portal before it/they
can be implemented?
 Only a and d
 Only b and d
 Only a
 Only c and b
Overall explanation
Contrary to what you might expect, you can resize a VM while it is still running. But resizing will cause the VM to be restarted, so you do have some downtime. We don't have to explicitly stop the VM from the Azure portal before performing a VM resizing operation.
Just note that you will see only the VM sizes available on the same underlying physical hardware. In nearly all cases, you will find a size with 2 vCPUs to resize to.
Try it once with your Azure subscription.
So, options B and D are incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/resize-vm?
tabs=portal

You can install any extension without stopping the VM. In fact, since extensions run on the VM, you cannot install them while the VM is stopped.

Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/features-
windows
Azure Disk Encryption (ADE) uses BitLocker, a feature of Windows OS, to enable
volume encryption of data and OS disks. Since BitLocker is an Operating System
capability, Azure needs to talk to the OS to enable BitLocker encryption. So, you can
enable BitLocker only when the virtual machine is running.

So, option A is incorrect.


Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/
disk-encryption-overview

To add a Network Interface Card (NIC) to an existing VM, you must first deallocate the
VM.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/multiple-
nics#add-a-nic-to-an-existing-vm
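For reference, a minimal Azure CLI sketch of this workflow is shown below; the names rg-dev-01, vm01, and nic02 are hypothetical.

# A NIC can be attached only after the VM is deallocated
az vm deallocate --resource-group rg-dev-01 --name vm01
az vm nic add --resource-group rg-dev-01 --vm-name vm01 --nics nic02
az vm start --resource-group rg-dev-01 --name vm01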
Since only requirement ‘a’ calls for stopping the VM from the Azure portal, option C is
the correct answer.

Resources
Updates that require stopping the VM
Domain
Deploy and manage Azure compute resources

Question 60
You have two storage accounts in different subscriptions in a Microsoft Entra ID
tenant.
There is another storage account in a different Microsoft Entra ID tenant.

Given below are two statements about using the azcopy tool to copy data between
storage accounts. Select Yes if the statement is correct. Else select No.

 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Statement 1:
This statement is about copying blob data between storage accounts in different
Microsoft Entra ID tenants. I construct the azcopy command by generating and using
SAS URLs for the source and the destination storage accounts.
The command runs successfully, and we can see the blob in the target storage
account (which is in a different Microsoft Entra ID tenant). Check the related lecture
video.
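A minimal sketch of that command, with hypothetical account and container names and placeholder SAS tokens:

azcopy copy 'https://strsource01.blob.core.windows.net/container01?<source-sas>' 'https://strtarget01.blob.core.windows.net/container01?<destination-sas>' --recursive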
Note that for the copy operation across Microsoft Entra ID tenants to be successful,
you need to set the Permitted scope for copy operations as From any storage
account on the target storage account.

With any other selection, you would not be able to copy the containers across tenants.
Statement 1 -> No.

Statement 2:
This statement is about copying blob data between storage accounts in different
subscriptions but in the same Microsoft Entra ID tenant.
In the previous statement, we copied the container to a storage account in a different
subscription and in a separate Microsoft Entra ID tenant. Consequently, we can also
copy the container to a storage account in a different subscription within the same
Microsoft Entra ID tenant.

Statement 2 -> Yes.


Option D is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/answers/questions/646038/copy-
files-from-one-azure-subscription-to-another.html

Note:
At the time of writing, the documentation incorrectly mentions that cross-tenant copy
is not supported (point 1).

Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-copy#guidelines

But you can copy across tenants if the target storage account has the correct
configuration.
Here is the link to the issue I raised with the MS documentation team and their
reply: https://github.com/MicrosoftDocs/azure-docs/issues/101411

GitHub Repo link: Copy blobs using azcopy across subscriptions and tenants - PS
Commands.ps1

Resources
Copy blobs using azcopy across subscriptions and tenants
Domain
Implement and manage storage

Question 61
You have to create an Azure storage account that meets the below requirements:
1. Protect data from regional outages.
2. Optimize blob storage cost.
3. Provide support for files and blobs.

Based on the given information, complete the below Azure CLI command:

 Premium_LRS, StorageV2, Cool


 Standard_GRS, StorageV2, Hot
 Standard_GRS, StorageV2, Cool
 Standard_ZRS, FileStorage, Hot
 Standard_ZRS, StorageV2, Cool
Overall explanation
There are four types of Azure storage accounts. Three of them are premium performance accounts: premium block blob, premium file share, and premium page blob, which support blobs, files, and page blobs, respectively.

The last type of storage account is a standard, general-purpose (v2) storage account
that supports all types of storage services like file share, blobs, queues, and tables.
Since we need a storage account that supports both files and blobs, using the
standard v2 account will be the correct choice.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-overview#types-of-storage-accounts
So, box 2 -> StorageV2.

There are three types of Azure storage access tiers: Hot, cool, and archive.
The hot tier is optimized for data access cost, so it's more suitable for frequently accessed data.
The cool tier is optimized for data storage cost, so it's more suitable for infrequently accessed data.

Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview
https://azure.microsoft.com/en-in/pricing/details/storage/blobs/
Since we need to optimize data storage cost, box 3 -> Cool.

As we are using the standard v2 storage account, Premium_LRS, a redundancy option for premium storage accounts, cannot be used. So, Premium_LRS will not go into the last box.
ZRS (Zone-redundant storage) replicates your data only across availability zones in a
region. So, ZRS cannot protect data from regional outages. Option Standard_ZRS is
also incorrect.
Since GRS (Geo-redundant storage) replicates your data in a secondary region, it
protects the data from regional outages. So, box 1 -> Standard_GRS
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#geo-redundant-storage
https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy#zone-
redundant-storage

We can run the below command the create the storage account as per the
requirement.
az storage account create --resource-group rg-dev-03 --name strdev014 --sku
Standard_GRS --kind StorageV2 --access-tier Cool
Option C is the correct answer.

Note: In the video, the box 1, box 2, and box 3 names are mistakenly interchanged.
GitHub Repo Link: Create Azure Storage Account with Azure CLI - PS command.ps1
Resources
Create Azure Storage Account with Azure CLI
Domain
Implement and manage storage

Question 62
In your Azure subscription, you have four storage accounts with different storage
redundancies.

Which storage accounts’ replication type can you convert to ZRS (Zone-redundant
storage) from only the Redundancy section of the storage account resource in the
Azure portal?
 Only strdev011 and strdev012
 Only strdev011, strdev012 and strdev013
 Only strdev014
 Only strdev013 and strdev014
Overall explanation
You can change a storage account’s redundancy setting to any of the other five
replication types.

Changing a storage account's redundancy comes down to:
1. Adding or removing geo-replication or read-access to the secondary region (which can be done from the Azure portal)
2. Adding or removing zone redundancy (which cannot be done from the Azure portal, so create a support request with Microsoft).
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/redundancy-migration?tabs=portal#options-for-changing-the-replication-type
To convert a storage account's redundancy from LRS to ZRS (strdev011), we need to
add zone redundancy in the primary region. Since we cannot add or remove zone
redundancy from within the portal, we cannot convert the storage account's
redundancy to ZRS using only the Azure portal.

To convert a storage account's redundancy from GRS to ZRS (strdev012), we need to remove geo-replication and add zone redundancy. We can remove geo-replication by switching the redundancy to LRS. Then, to add ZRS, we need to create a support request. So, we cannot directly convert the redundancy from GRS to ZRS using the Azure portal alone.
To convert a storage account's redundancy from RAGRS to ZRS (strdev013), we need
to remove geo-replication and read access to the secondary region, which we can do
from the Azure portal by changing the replication type to LRS. But to add zone
redundancy, we still need to raise a support request. Here too, we cannot convert the
redundancy from RAGRS to ZRS using the portal alone.
To convert a storage account's redundancy from RAGZRS to ZRS (strdev014), we
need to remove geo-replication and read access to the secondary region, which we
can directly do from the Azure portal by switching the redundancy to ZRS. Since
RAGZRS already has zone redundancy, we don’t need to add that by creating a
support request.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/redundancy-
migration?tabs=portal#replication-change-table

In a nutshell, using only the Azure portal, you can switch a storage account only between redundancy options that share the same zone-redundancy characteristics.
Of course, other migration scenarios are always possible by creating a support request.
Using only the redundancy section in the Azure portal, we can convert the storage
account's redundancy to ZRS only for strdev014. Option C is the correct answer.

Resources
Migrate a storage account to a different storage redundancy
Domain
Implement and manage storage

Question 63
You have two subscriptions Dev and Test in Azure:
1. You apply a read-only lock on the Test subscription.
2. There are no locks on the Dev subscription.
Further, you have the following information about the resource groups and resources
created in these subscriptions:
You grant User one the owner access to both subscriptions.
Given below are three statements based on the above information. Select Yes if the
statement is correct. Else select No.

Note:
1. Assume that the target subscription is registered with the necessary resource
providers before any locks are applied.
2. sqldb-users-dev is the child resource of sql-navigator-dev01.
 No, Yes, No
 No, No, Yes
 Yes, No, No
 Yes, No, Yes
 No, No, No
Overall explanation
Let’s use the information in the question to create the resource hierarchy.
There are two subscriptions, Dev and Test with a read-only lock on the Test
subscription. There are three resource groups in the Dev subscription with a read-only
lock on the rg-dev-02 resource group. There is only one resource group in
the Test subscription, which has a Delete lock.
Thanks to icons8 for the above icons

All resource groups have at least one resource except the rg-test-01 resource group.
Only the SQL database sqldb-users-dev has a read-only lock at the resource level.
Below is the complete hierarchy:

Thanks to icons8 for the above icons

Further, the locks override the user permissions. So, the owner role of User
one doesn’t affect the functionality of the locks.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources

Statement 1:
This statement means that User one can move the Azure Key Vault resource from
the rg-dev-01 resource group in the Dev subscription to the rg-test-01 resource
group in the Test subscription.
From the source end, there are no locks to worry about. But on the target end, there
are two locks on the resource group rg-test-01 :
1. A delete lock (restricts only resource deletion)
2. An inherited read-only lock from the Test subscription (restricts both resource
modification & deletion)
Among these, the most restrictive lock (read-only) in the inheritance takes
precedence. So, the resource locks would restrict any update on the target resource
group.
Consequently, the move operation will result in an error.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources?tabs=json#lock-inheritance
So, User one cannot move the key vault to the Test subscription. Statement 1 -> No.

Statement 2:
In this case, the User one has to move the automation resource from a resource group
with a read-only lock. Since moving a resource to another resource group means the
resource is removed from the source resource group, the read-only lock restricts any
modifications to the source resource group. Consequently, the move operation is
unsuccessful, and the user gets the below error:
So, User one cannot move the Automation account to the rg-dev-01 resource group.
Statement 2 -> No.

Statement 3:
In this case, both the source and the target resource group do not have direct or
inherited locks. There is a read-only lock on the SQL database resource, which we
want to move. The read-only lock only restricts changes made to the resource (only
control plane operations like resource modifications), but it doesn’t affect the move
operation.
But there is a problem. We are trying to move only the child resource: SQL database
but not the parent: SQL server. The resource manager will not allow this move
operation.

Quick Preview:

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources

Consequently, the move operation is unsuccessful.


So, User one cannot move only the SQL database resource to the rg-dev-01 resource
group. Statement 3 -> No.
Option E is the correct answer.
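If you want to reproduce these move attempts yourself, here is a minimal Azure CLI sketch; the resource ID and subscription ID are placeholders.

# Attempt to move the key vault to rg-test-01 in the Test subscription; the inherited read-only lock makes this fail
az resource move --destination-group rg-test-01 --destination-subscription-id <test-subscription-id> --ids <key-vault-resource-id>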

Resources
Move resources to a different resource group
Domain
Manage Azure identities and governance

Question 64
Below is a hierarchy of resources in Azure:
Thanks to icons8 for the above icons

You have to create a read-only lock on the aa-ravi-dev automation resource.


Given below are two statements based on resources inheriting locks from a specific
hierarchy level. Select Yes if the statement is correct. Else select No.

 No, Yes
 Yes, No
 Yes, Yes
 No, No
Overall explanation
Locks can be created only from the subscription scope downwards in the hierarchy. That is, they can be created at the subscription, resource group, and resource levels.
Individual resources also inherit the locks from their parent resource group and subscription. Since you cannot create locks at the tenant root group scope or any management group scope, only statement 2 is valid.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
management/lock-resources
Option A is the correct choice.
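As a minimal sketch, the required read-only lock could also be created with the Azure CLI, assuming aa-ravi-dev sits in a hypothetical resource group rg-dev-01:

az lock create --name lock-ro-aa --lock-type ReadOnly --resource-group rg-dev-01 --resource-name aa-ravi-dev --resource-type Microsoft.Automation/automationAccounts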

Resources
Inherit locks from a hierarchy
Domain
Manage Azure identities and governance

Question 65
You configure the self-service password reset (SSPR) policy for the following users in
your organization:

The policy has the below authentication methods defined.


Given below are two statements based on the above information. Select Yes if the
statement is correct. Else select No.

 Yes, Yes
 Yes, No
 No, No
 No, Yes
Overall explanation
Self-service password reset works differently for users with administrator and non-
administrator roles. The given policy definition applies only to users with non-admin
roles, like the application developer, as self-service password reset is enabled, by
default, for users with admin roles.
Reference
Link: https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-
sspr-policy#administrator-reset-policy-differences
https://learn.microsoft.com/en-us/azure/active-directory/authentication/tutorial-
enable-sspr#test-self-service-password-reset

Statement 1:
All non-admin users should register their authentication methods (as defined in the
policy) by visiting the link aka.ms/ssprsetup . Only when they register both the
methods required by the policy will they be able to reset the password by visiting the
link aka.ms/sspr , if the need arises (check the related lecture video).

If they try to reset their passwords directly without registering both the authentication
methods, they will get the below error:

So, statement 1 -> Yes.


Statement 2:
But administrator accounts are enabled for self-service password reset by default. So,
they don’t have to register any authentication methods, as defined in the policy. In
case of need, they can directly visit aka.ms/sspr and provide two pieces of
authentication data like their email address and a code from the authenticator app to
reset their passwords (check the related lecture video).
Note that these two methods are different from the methods defined in the policy,
which only apply to User Two (non-admin roles).
So, yes, User One and User Two use different authentication methods to reset their
passwords due to the nature of the roles assigned to them.
So, statement 2 -> Yes.

Option A is the correct answer.

Resources
Authentication methods for SSPR
Domain
Manage Azure identities and governance

Question 66
You create a Shared Access Signature (SAS) for an Azure file share fileshare01 to let
users access the file share for a specific duration.
Observe the above SAS definition carefully. Given below are two statements based on
the generated SAS URI. Select Yes if the statement is correct. Else select No.

 Yes, No
 No, No
 Yes, Yes
 No, Yes
Overall explanation
Statement 1:
The IP address 79.251.35.194 is within the Allowed IP addresses range defined in the SAS URI. But the SAS URI is generated for the file share fileshare01 and not for the files within the file share.
For file shares and blob containers, the list permission is essential so that users can list the shares/containers and their contents before they can read or modify them.
Since the list permissions are not included in the SAS URI, when the user connects to
the file share from Azure Storage Explorer, he will get the below error.

So, statement 1 -> No.


Reference Link: https://github.com/Azure/azure-xplat-cli/issues/1639
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas#file-
service
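For reference, a minimal Azure CLI sketch for generating a SAS token that does include the list permission; the account key, expiry, and IP range below are hypothetical:

az storage share generate-sas --account-name strdev011 --account-key <account-key> --name fileshare01 --permissions rl --expiry 2030-01-01T00:00Z --ip 79.251.35.190-79.251.35.199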
Statement 2:
First, since the SAS URI doesn't include list permissions, the user cannot access the
file share from this IP address too.
But, even if we assume the list permissions are included in the SAS URI, the user will
not be able to access the file share as the IP address 79.254.34.97 is not within
the Allowed IP addresses range. The third octet (34) is outside the defined range.
So, when the user tries to access the file share (even with list permissions), he will get
the below error.

So, statement 2 -> No.


Option B is the correct answer.

Resources
Using a SAS URI to access a file share
Domain
Implement and manage storage

Question 67
You have two storage accounts in different subscriptions in a Microsoft Entra ID
tenant.
Given below are two statements about using the azcopy tool to copy data between
storage accounts across different platforms. Select Yes if the statement is correct.
Else select No.

 Yes, No
 Yes, Yes
 No, No
 No, Yes
Overall explanation
Statement 1:
The currently supported version of the AzCopy tool enables you to copy data only
between file shares and blobs, not even tables. Tables were supported in earlier
versions of azcopy, but not any longer.
So, you cannot copy queue messages from one storage account to the other.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-v10#authorize-azcopy
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-
copy#synopsis
https://medium.com/@andrewkelleher_873/hi-in-theory-yes-247aa40f4bd9
Statement 1 is Yes.

Statement 2:
AzCopy is available on all major operating systems like Windows, Linux, and macOS.
So, you can upload a video file from a macOS device to the blob container in the
Azure storage account.
Statement 2 is No.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
use-azcopy-v10#download-azcopy
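For example, here is a minimal sketch of uploading a local file from a macOS terminal to a blob container; the local path, account name, container name, and SAS token are hypothetical:

azcopy copy '/Users/me/videos/demo.mp4' 'https://strdev011.blob.core.windows.net/videos?<sas-token>'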
Option A is the correct answer.

Resources
Use azcopy in different platforms
Domain
Implement and manage storage

Question 68
You have the following storage accounts in your Azure subscription.

You have to use Azure Import/Export service to import data from your on-premises
servers to one or more of the above storage accounts.
Which of the storage accounts you CANNOT use?
 Only strdev013, strdev015, and strdev016
 Only strdev014 and strdev016
 Only strdev013
 Only strdev015 and strdev016
Overall explanation
<<This is a NOT question>>

Microsoft documentation mentions that the Azure Import/Export service supports only
general-purpose storage accounts (v1 and v2) and blob storage accounts.
Reference Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-requirements#supported-storage-accounts

So, from the image given in the question, it is evident that only the premium file
storage account (strdev013) is not supported.
Option C is the correct answer.

Note 1:
Storage accounts of kind Storage and BlobStorage are legacy accounts, so
strdev015 (general-purpose v1) and strdev016 storage accounts cannot be created in
the Azure portal. Use the below PowerShell commands to create them:

New-AzStorageAccount -ResourceGroupName rg-dev-01 `
    -Name strdev015 `
    -Location 'East US' `
    -SkuName Standard_LRS `
    -Kind Storage

New-AzStorageAccount -ResourceGroupName rg-dev-01 `
    -Name strdev016 `
    -Location 'East US' `
    -SkuName Standard_LRS `
    -Kind BlobStorage `
    -AccessTier Hot

Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview#legacy-storage-account-types

Note 2:
Although strdev011 and strdev014 belong to the same account kind StorageV2, strdev011 is a standard storage account that supports blobs, files, queues, and tables, whereas strdev014 is a premium storage account that supports only page blobs.

Resources
Types of Azure Storage accounts that support the Azure Import/Export service
Domain
Implement and manage storage

Question 69
You have data on Product images, Sales, Product Inventory, and Customers related to
your business in your hard drive. You would like to upload the data to the respective
Azure destinations as shown below:

Your colleague suggested you use the Azure Import/Export tool.


Which of the following business entities can you upload to the correct Azure
destination using the tool? Select two options.

 Product images
 Sales data
 Product Inventory
 Customer data
Overall explanation
Uploading the data to Azure is the same as importing the data into Azure. So, we
create an import job in the Azure Import/Export service.

You can import the data only to Azure blob and file storage with the Azure
Import/export tool.
Reference Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-service#how-does-importexport-work
https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-
requirements#supported-storage-types

So, you can upload only the product images to the blob container and the Customer
data to the file share. Options A and D are the correct answer choices.

Resources
Upload data with the Azure Import Export tool
Domain
Implement and manage storage

Question 70
You are getting your hard disks ready to import the data to Azure File storage with the
Azure Import/Export service. As part of the drive preparation process, you create
several files necessary for the WAImportExport tool.
Choose the correct file where you would specify the following information:
 journal file, journal file
 dataset CSV file, JSON config file
 dataset CSV file, driveset CSV file
 XML manifest file, PowerShell script
Overall explanation
Question 1:
The Azure Import/Export service enables you to import/export large volumes of data
to/from your Azure Storage account by shipping your own disk drives to the Azure
data center. Azure Data Box Disk is a similar service that uses disk drives supplied by
Microsoft.
So, to transfer data on a disk to Azure File storage, you first need to prepare the drive using the WAImportExport tool.
Once you download and extract the WAImportExport tool on your local drive, you can view the executable and a couple of CSV files: dataset.csv and driveset.csv
In the dataset.csv file, you add several entries for importing files or folders (BasePath) into the Azure File storage's target location (DstItemPathOrPrefix). The last column, ItemType, indicates the type of item uploaded; for files, ItemType will be file.

So, the source and destination locations of files and directories are specified in the dataset.csv file.
Question 1 -> dataset CSV file.

Question 2:
In the driveset.csv file, we add entries for the list of disks and their drive letters, so that the tool can correctly pick the drives to be prepared.

If the disk is already encrypted (Encryption = AlreadyEncrypted), we supply the BitLocker key in the last column, so the tool can decrypt the drive using the key. If the disk is not encrypted, the Encrypt option ensures that the tool enables BitLocker encryption on the disk.
So, BitLocker encryption keys are specified for drives that are already encrypted in the driveset.csv file.
Question 2 -> driveset CSV file.
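As a minimal sketch, the two CSV files might look like the following; the paths, share name, and BitLocker key are hypothetical placeholders:

dataset.csv:
BasePath,DstItemPathOrPrefix,ItemType
"C:\data\reports\","fileshare01/reports/",file

driveset.csv:
DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
X,Format,SilentMode,Encrypt,
Y,AlreadyFormatted,SilentMode,AlreadyEncrypted,<bitlocker-key>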

Reference Link: https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files
https://learn.microsoft.com/en-us/previous-versions/azure/storage/common/storage-import-export-tool-preparing-hard-drives-import#prepare-the-dataset-csv-file
Option C is the correct answer.
Except for the journal file, the other types of files given in the options are irrelevant to
the WAImportExport tool. Dataset.csv and driveset.csv files are inputs to running the
tool.

.\WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#1 /InitialDriveSet:driveset.csv /DataSet:dataset.csv /logdir:C:\logs

The output of this run operation is a journal file, which we need to upload to the Azure
Import/Export service while creating an import job.

Resources
Files used by the WAImportExport tool for disk preparation
Domain
Implement and manage storage

Question 71
You store customer data in the below four storage accounts in your Azure
subscription.
Which of these storage accounts can you optimize for performance and cost
automatically using the data lifecycle management?
 Only strdev011, strdev012, and strdev014
 Only strdev011
 Only strdev012, strdev013, and strdev014
 Only strdev011 and strdev012
Overall explanation
Short Answer for Revision:
Lifecycle management policies are supported only for block blobs and append blobs.
They are not supported for page blobs and file shares.
From the question, we can derive that strdev013 supports only file storage. And
strdev014 is a premium storage account that supports only page blobs. Lifecycle
management is not supported for these storage accounts.

Detailed Explanation:
A lifecycle management policy can automatically manage data lifecycle by:
1. Transitioning blobs that are accessed from the cool access tier to the hot access
tier to optimize for performance.
2. Transitioning blobs that aren’t accessed from the hot access tier to the cool/archive
access tier or from cool tier to archive tier to optimize for cost.

3. Deleting the blob when no longer necessary.


Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-
management-overview
https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview#online-
access-tiers
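As a minimal sketch, a lifecycle rule that moves aging block blobs to the cool tier could look like the JSON below, applied with the Azure CLI; the account name, rule name, and 30-day threshold are hypothetical.

policy.json:
{
  "rules": [
    {
      "enabled": true,
      "name": "move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "tierToCool": { "daysAfterModificationGreaterThan": 30 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}

az storage account management-policy create --account-name strdev011 --resource-group rg-dev-01 --policy @policy.json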

The access tiers are properties of a blob, not a file. So, we can conclude that we
cannot apply blob lifecycle management rules in strdev013, which is a premium file
storage account, since we cannot create any blobs here. Below, you cannot see
the Lifecycle management section under the Data management category for strdev013.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-
overview
Option C is incorrect.

Further, we can create lifecycle management policies only for block blobs and append
blobs.
strdev014 is a premium storage account that supports only page blobs.
Storage accounts like strdev014 are created with the Premium account type set to Page blobs.

You cannot create lifecycle management policies for such accounts that support only
page blobs.
Option A is also incorrect.

As discussed, you can create blob lifecycle management policies only in storage
accounts that support block blobs. Both the standard, general-purpose storage
account (v2) and premium, block storage account support block blobs.
So, we can optimize both strdev011 and strdev012 for performance or cost using blob
lifecycle management policies.
Option D is the correct answer.

GitHub Repo Link: Storage accounts supporting data lifecycle management policies

Resources
Storage accounts supporting data lifecycle management policies
Domain
Implement and manage storage

Question 72
You have created the following storage accounts in the Azure portal.

For these storage accounts, answer the following questions related to storage account
object replication.
 Only strdev011 & Only strdev012 and strdev014
 Only strdev011, strdev012 and strdev014 & None
 Only strdev011 & Only strdev012
 Only strdev011 and strdev012 & Only strdev012
Overall explanation
Azure Storage object replication copies block blobs asynchronously between a source
and a target storage account.
Object replication requires that blob versioning is enabled on both the source and the
target storage accounts so that any state changes to the previously replicated blobs
in the source are easily preserved in the target.
Further, it requires enabling change feed on the source storage account so that Azure
Storage can check the source feed for any change events (write or delete operations)
that it would asynchronously replicate to the target account.

Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview#prerequisites-and-caveats-for-object-replication
https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview#object-replication-policies-and-rules
https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview#blob-versioning

In fact, when you create an object replication rule on a storage account, blob
versioning is automatically enabled on the source and the target accounts, and
change feed is automatically enabled on the source storage account.
But both blob change data feed and blob versioning are supported only in general-
purpose v2 storage accounts and premium block blob storage accounts.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-
change-feed?tabs=azure-portal#enable-and-disable-the-change-feed
https://learn.microsoft.com/en-us/azure/storage/blobs/versioning-overview#how-blob-
versioning-works

So, both blob change feed and blob versioning are unsupported in premium file storage accounts (which do not support blobs) and in storage accounts that support only page blobs. That is, storage accounts strdev013 and strdev014 do not support these features.
Since object replication is dependent on these features, it is evident that only the
standard, general-purpose v2 storage accounts, and premium blob storage accounts
support object replication (check the related lecture video).

So, question 1 -> Only strdev011 and strdev012 .

Since only strdev011 and strdev012 storage accounts support object replication, we
can replicate objects asynchronously only between these two storage accounts. So,
from strdev011, you can create replication rules to replicate objects only to
strdev012.

If you try to replicate objects to other storage accounts, you will get the error that the
dependent features for object replication are not supported.
So, question 2 -> Only strdev012.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-
replication-configure?tabs=portal#prerequisites
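As a minimal sketch, an object replication policy between the two supported accounts could be created with the Azure CLI; the container names are hypothetical.

# Run against the destination account; versioning and change feed are enabled automatically
az storage account or-policy create --account-name strdev012 --resource-group rg-dev-01 --source-account strdev011 --destination-account strdev012 --source-container container01 --destination-container container01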
Option D is the correct answer.

Resources
Azure Storage accounts that support object replication
Domain
Implement and manage storage

Question 73
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
Your team deployed several Azure resources in a virtual network in the dev
environment. Over several weeks, they modify several virtual network configurations
like adding/removing subnets, changing address prefixes, updating service endpoints,
assigning route tables, delegating the subnet to a service, and so on to arrive at a
final VNet configuration suitable for your solution.
In a new test environment, you need to recreate the VNet resource and the related
network configurations similar to the VNet in the dev environment using an ARM
template to ensure that the VNet is well-prepared to deploy other resources for
solution testing.

Solution: You navigate to the Deployments section of the VNet’s resource group and
redeploy the template with updated parameters.
Does the solution meet the stated goal?
 Yes
 No
Overall explanation
In the Deployments section of a resource group, you will see a history of all
deployments to that resource group.

But this template is exactly the one used while deploying the Virtual Network
resource. So, it doesn’t capture the changes made to the virtual network after the
resource is created.
For example, I updated the address prefix of the VNet from 10.0.0.0/16 to 10.0.0.0/14.
Deploying this template will create a VNet that uses the older address prefix.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/export-template-portal#export-template-after-deployment
Since using this template will not capture the changes made to the resource, option B
is the correct answer.

Resources
Replicate an environment using ARM template - 1
Domain
Deploy and manage Azure compute resources

Question 74
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
Your team deployed several Azure resources in a virtual network in the dev
environment. Over several weeks, they modify several virtual network configurations
like adding/removing subnets, changing address prefixes, updating service endpoints,
assigning route tables, delegating the subnet to a service, and so on to arrive at a
final VNet configuration suitable for your solution.
In a new test environment, you need to recreate the VNet resource and the related
network configurations similar to the VNet in the dev environment using an ARM
template to ensure that the VNet is well-prepared to deploy other resources for
solution testing.

Solution: You navigate to the VNet’s resource group, select the VNet resource, and
export the ARM template.
Does the solution meet the stated goal?
 Yes
 No
Overall explanation
Exporting the ARM template of the resource from the VNet’s resource group
generates a template for the current state of the resource. So, it will include all the
changes made to the VNet after its creation.
But if you directly deploy this exported template, most probably, you will run into
some errors.

As you update a resource, its JSON representation drifts away from the template originally used to deploy it, and the exported template contains additional code compared to the one used for deployment. Consequently, we don't get a readily deployable template if you export directly from a resource.
Let’s fix this circular dependency error. The subnets subnet01 and subnet02 depend
on the virtual network. But the problem is even the virtual network depends on the
two subnets.
So, removing this dependsOn element from the virtual network resource should fix the
circular dependency error, and you should be able to successfully deploy the
resource. But it's important to note that things get more complicated if you associate
additional resources to the VNet, like a route table.
The point is, although this method meets the stated goal, you need some knowledge
of the template syntax and do cleanup if you want to reuse the template for
deployment into another subscription or environment.
But it is clear that exporting the resource’s template from a resource group will help
you capture the latest state of the resource and should easily set you up with a virtual
network ready to be used in the new environment.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/export-template-portal#export-template-from-a-resource-group
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/export-template-portal#choose-the-right-export-option
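The same export can also be scripted; here is a minimal Azure CLI sketch, assuming a hypothetical resource group name:

# Export a template that captures the current state of all resources in the resource group
az group export --name rg-dev-01 > exported-template.json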
Option A is the correct answer.

Resources
Replicate an environment using ARM template - 2
Domain
Deploy and manage Azure compute resources

Question 75
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
Your team deployed several Azure resources in a virtual network in the dev
environment. Over several weeks, they modify several virtual network configurations
like adding/removing subnets, changing address prefixes, updating service endpoints,
assigning route tables, delegating the subnet to a service, and so on to arrive at a
final VNet configuration suitable for your solution.
In a new test environment, you need to recreate the VNet resource and the related
network configurations similar to the VNet in the dev environment using an ARM
template to ensure that the VNet is well-prepared to deploy other resources for
solution testing.

Solution: You navigate directly to the VNet resource and export the ARM template.
Does the solution meet the stated goal?
 Yes
 No
Overall explanation
Exporting the ARM template directly from the VNet resource generates a template identical to the one obtained by exporting the VNet template from the VNet's resource group.

So, similar to the last question, if you do some cleanup, like removing circular
dependency, you can deploy an exact replica that captures the latest state of the
VNet resource.
The solution meets the stated goal. Option A is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/export-template-portal#export-template-from-a-resource

Resources
Replicate an environment using ARM template - 3
Domain
Deploy and manage Azure compute resources

Question 76
This question is part of repeated scenario questions that contain the same stem but
with a different solution for each question. You need to identify if the given solution
solves a particular problem. Each set of repeated scenario questions might contain
either none, one, or many solutions.
Your team deployed several Azure resources in a virtual network in the dev
environment. Over several weeks, they modify several virtual network configurations
like adding/removing subnets, changing address prefixes, updating service endpoints,
assigning route tables, delegating the subnet to a service, and so on to arrive at a
final VNet configuration suitable for your solution.
In a new test environment, you need to recreate the VNet resource and the related
network configurations similar to the VNet in the dev environment using an ARM
template to ensure that the VNet is well-prepared to deploy other resources for
solution testing.

Solution: You fill up a form for creating a Virtual Network with the required values in
the Azure portal. Before deployment, you download the template for automation.
Does the solution meet the stated goal?
 Yes
 No
Overall explanation
Looks like downloading the template for automation before resource deployment is a
great way to capture the final state of the VNet resource.

But there are many properties that can be selected only after the resource is created.
For example, options like subnet delegation, network policy for private endpoints on a
subnet can only be set after the virtual network is created. So, you will not be able to
capture the final state of the VNet using this method.
And since these properties cannot be set during VNet creation at all, using PowerShell or the CLI instead of the portal's Download a template for automation feature does not help either.
The solution does not meet the stated goal. Option B is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/
templates/export-template-portal#download-template-before-deployment

Resources
Replicate an environment using ARM template - 4
Domain
Deploy and manage Azure compute resources

Question 77
You create an Azure Storage Account with the below information in the Basics tab.
Based on the storage account details in the screenshot, answer the below two
questions:
 6,4
 6,3
 9,3
 3,6
Overall explanation
Question 1:
The given question has selected Geo-zone-redundant storage (GZRS) as the
redundancy for the storage account. Although both GRS and GZRS replicate three
copies of data in a single data center in the secondary region, they differ in how they
replicate data in the primary region. While GRS creates three copies of data in a single
data center in the primary region, GZRS creates those three copies in different
availability zones in the primary region.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#geo-zone-redundant-storage
https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy#geo-
redundant-storage

From the images in the above links, it is evident that both GRS and GZRS create and
store six copies of data.
Question 1 -> 6.

Question 2:
You can covert the redundancy from GZRS to RA-GZRS by selecting the checkbox
below the redundancy dropdown (as given in the question).
So, the redundancy of the given storage account is RA-GZRS and not GZRS.
In GZRS or GRS, data is not accessible in the secondary region unless a failover
process is initiated. RA-GZRS, which is similar to GZRS except that it also enables read
access to data in the secondary region. So, RA-GZRS can help support high availability
applications.
From question 1, we know that users can read data from three data centers in
different availability zones in the primary region. Since we concluded the storage
account’s redundancy is RA-GZRS, users can read data from the only data center in
the secondary region too.
Question 1 -> 4.
Reference Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#read-access-to-data-in-the-secondary-region

Resources
Configure storage account redundancy
Domain
Implement and manage storage

Question 78
You have a storage account strdev011. You have to generate and distribute four SAS
tokens with different permissions to four vendors for accessing a blob container for a
duration of one month. You expect that some of the vendors may complete the
project within a month and may no longer require access to the container.
How would you plan to manage their access to Azure storage?
 Revoke the signing key after they are complete
 Use a stored access policy
 Use a separate key to sign each SAS
 With IAM access control
Overall explanation
You use one of the two signing keys to generate four SAS tokens with different
permissions for accessing the container. If you revoke the signing key when any of the
vendors no longer require access, all the other SAS tokens dependent on the signing
key will not work.
So, option A is incorrect.

At any point in time, there are only two storage account access keys: key1 and key2,
that you can use to generate a SAS signature.

Even before analyzing if the solution in option C would meet the given requirement,
recognize that the statement itself is incorrect as we cannot use a different key to
sign each of the four SAS tokens. Option C is incorrect.

We can create four different stored access policies on the container and use each
policy to generate a SAS token. When any vendor no longer requires access, we can
either delete the policy or expire the policy at a previous date. Since we only update
the stored access policy, we do not touch the signing key used for signing other SAS
tokens.
Reference Link: https://learn.microsoft.com/en-us/rest/api/storageservices/define-
stored-access-policy
Option B is the correct answer.
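As a minimal sketch, here is how one vendor's access could be granted and later revoked with the Azure CLI; the container name, policy name, expiry date, and account key are hypothetical.

# Create a stored access policy on the container for one vendor
az storage container policy create --account-name strdev011 --account-key <account-key> --container-name container01 --name vendor1 --permissions rl --expiry 2030-02-01T00:00:00Z

# Generate a SAS token that references the stored access policy
az storage container generate-sas --account-name strdev011 --account-key <account-key> --name container01 --policy-name vendor1

# Revoke this vendor's access early by deleting the policy; the other vendors' SAS tokens keep working
az storage container policy delete --account-name strdev011 --account-key <account-key> --container-name container01 --name vendor1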

IAM access control provides access to storage with Azure roles and Azure AD. It’s an
alternative way to provide access to storage account resources, different from using
SAS tokens. Since it’s explicitly stated that you have to generate SAS tokens, option D
is incorrect.

Resources
Revoke SAS access before the specified duration
Domain
Implement and manage storage

Question 79
You create a budget in Azure cost management as shown below:
The action groups ag-mgrs and ag-devs send alert notifications to managers and
developers respectively.
Based on the given information, complete the below two statements:
Note: Assume the team does not intervene to reduce/stop resources in the billing
period.
 2, 3
 1, 2
 0, 4
 0, 2
Overall explanation
You can create two types of budget alerts in Azure cost management:
1. Actual alerts, and
2. Forecasted alerts

We configure both as a percentage of the budget (INR 400). The forecasted cost (INR 635.55) exceeds the budget (INR 400) for the current period. Since there is no intervention, we can safely conclude that all three alert conditions will be triggered, as the target amounts for each budget alert (INR 400, 300, 400) are less than the forecasted cost (INR 635.55).

Statement 1:
The budget will trigger two actual budget alerts. Of them, only one of the alerts
notifies the developers.
Statement 1 -> 1.

Statement 2:
The budget triggers a total of two budget alerts to the managers. Of them, one of the
alerts is an actual alert, and the other one is a forecasted alert.
Statement 2 -> 2.
Option B is the correct answer.
Reference Link: https://azure.microsoft.com/en-in/blog/prevent-exceeding-azure-
budget-with-forecasted-cost-alerts/
https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-
create-budgets#configure-actual-costs-budget-alerts

Resources
Forecasted and actual Budget alerts
Domain
Manage Azure identities and governance

Question 80
You have the below five Network Interface Cards (NICs) connected to different subnets
across two virtual networks. As shown, nic01 and nic02 are attached to vm01 and
vm02 in subnet01 and subnet02, respectively in the East US location.
Based on the given information, you need to answer which of the following NICs can
be added to vm01.
 Only nic03 and nic05
 Only nic03
 Only nic03 and nic04
 Only nic03 and nic02
Overall explanation
Short Answer for Revision:
Only NICs deployed in the same virtual network can be added to a VM. So only nic03
and nic05.

Detailed Explanation: One of the important concepts to know for the exam is that
you can attach multiple NICs to a VM. The number of Network Interface Cards that can
be attached to a VM depends on the VM’s size. For the scenario in this question, I
deploy VMs from the Standard_D8a_v4 family that will allow me to add up to 4 NICs to
the VM.

Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/multiple-nics
https://learn.microsoft.com/en-us/azure/virtual-machines/dav4-dasv4-series#dav4-
series

A typical use case of multiple NICs is to have a NIC in each front-end and back-end
subnet of a virtual network. One core requirement for adding multiple NICs to a VM is
that all the NICs must reside in the same VNet. Since nic01, which is in vnet01, is
already attached to vm01, you can only add NICs that are deployed in vnet01. So, you
cannot add nic04 to vm01. Option C is incorrect.

Although nic02 is deployed in vnet01, from the question, we can observe that nic02 is
already attached to vm02. So, you cannot add nic02 to vm01 either. Option D is also
incorrect.

nic03 is in the same VNet but deployed in a different subnet, subnet03. So, you can
add nic03 to vm01. To add the NIC to an existing VM, first stop the VM. After the VM is
deallocated, navigate to the Network settings to attach nic03 to vm01.

But in addition to nic03, you can also add nic05 to vm01 as it is also deployed in
vnet01. You can add multiple NICs from the same subnet to the VM, although I am not
sure if there is a strong use case for it. Nevertheless, Azure Resource Manager will not
complain.
By default, the first NIC (nic01) attached to the VM is the primary network interface.
All other network interfaces subsequently added to the VM are secondary network
interfaces.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-
network-network-interface-vm#constraints (point 5)
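For reference, here is a minimal Azure CLI sketch of attaching both eligible NICs to vm01; the resource group name rg-dev-01 is hypothetical.

az vm deallocate --resource-group rg-dev-01 --name vm01
az vm nic add --resource-group rg-dev-01 --vm-name vm01 --nics nic03 nic05
az vm start --resource-group rg-dev-01 --name vm01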

Since you can add both nic03 and nic05 to vm01, option B is incorrect and option A is
the correct answer.
GitHub Repo Link: Add multiple NCs to an Azure VM

Resources
Add multiple NCs to an Azure VM
Domain
Implement and manage virtual networking
Question 81
Four drives are attached to a Windows Azure VM with drive letters C, D, E and, F.

You deploy a third-party backup extension to this VM. But due to an error in
deployment, the VM’s status has changed to Failed.
Your colleague suggested you reapply the VM to reset the virtual machine
configuration to a previous known state. Which disk drive’s data would you lose?
 Only D drive
 Only D and C drive
 None
 Only E and F drive
Overall explanation
There are three disk roles in Azure:
1. The OS disk, which has the Operating System and is generally a C drive in Windows
VM.
2. Temporary disk, which provides short-term storage for your apps and processes,
and is generally a D drive on Windows VM.
3. Finally, any number of data disks attached to the VM, dependent on the virtual
machine size. In the given scenario, they are drives, E and F.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/managed-
disks-overview#disk-roles

To fix the failed VM state, you can reapply your virtual machine’s state. The reapply
operation will provision the VM again on the same host (server) using the previously
stable virtual machine configuration.
To test the impact of the reapply operation on the data in the disk drives, I uploaded a sample backup file to each drive in the VM and then clicked the Reapply button.

Since the VM is provisioned on the same server, we can still find the backup files in all
the drives (Check the related lecture video).
So, option C is the correct answer.

But rather than reapply, if you redeploy the VM, you would lose any data on the
temporary disk. Since a redeploy operation deploys the VM on a different host, you
lose all the data on the D drive, as the temporary disk is located on the same physical
server where the VM is originally hosted.
Since the OS and the data disks are stored independently of the VM in a storage
account, data in C, E and F drives will be persistent.

Reference Link: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows#use-the-azure-portal
https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/vm-stuck-in-failed-state?tabs=portal
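Both operations can also be triggered from the Azure CLI; a minimal sketch, assuming hypothetical resource group and VM names:

# Reapply: re-provisions the VM on the same host, so data on all drives is preserved
az vm reapply --resource-group rg-dev-01 --name vm01

# Redeploy: moves the VM to a different host, so data on the temporary disk (D drive) is lost
az vm redeploy --resource-group rg-dev-01 --name vm01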
Resources
Effect of reapply or redeploy on a VM's disk drive
Domain
Deploy and manage Azure compute resources
