DVA-C02 Dumps

100% Valid and Newest Version DVA-C02 Questions & Answers shared by Certleader
https://www.certleader.com/DVA-C02-dumps.html (127 Q&As)
NEW QUESTION 1
A developer is incorporating AWS X-Ray into an application that handles personally identifiable information (PII). The application is hosted on Amazon EC2 instances. The application trace messages include encrypted PII and go to Amazon CloudWatch. The developer needs to ensure that no PII goes outside of the EC2 instances.
Which solution will meet these requirements?

A. Manually instrument the X-Ray SDK in the application code.
B. Use the X-Ray auto-instrumentation agent.
C. Use Amazon Macie to detect and hide PII. Call the X-Ray API from AWS Lambda.
D. Use AWS Distro for OpenTelemetry.

Answer: A

Explanation:
This solution will meet the requirements by allowing the developer to control what data is sent to X-Ray and CloudWatch from the application code. The developer
can filter out any PII from the trace messages before sending them to X-Ray and CloudWatch, ensuring that no PII goes outside of the EC2 instances. Option B is
not optimal because it will automatically instrument all incoming and outgoing requests from the application, which may include PII in the trace messages. Option C
is not optimal because it will require additional services and costs to use Amazon Macie and AWS Lambda, which may not be able to detect and hide all PII from
the trace messages. Option D is not optimal because it will use OpenTelemetry instead of X-Ray, which may not be compatible with CloudWatch and other AWS services.
References: [AWS X-Ray SDKs]
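
To make option A concrete, here is a minimal sketch of manual instrumentation with the AWS X-Ray SDK for Python; the segment name, field names, and handle() function are illustrative assumptions, not part of the question.

from aws_xray_sdk.core import xray_recorder

def handle(order):
    return {'status': 'processed'}  # placeholder business logic

def process_order(order):
    # On EC2 the application opens and closes its own segments.
    segment = xray_recorder.begin_segment('order-service')
    try:
        # Record only non-sensitive metadata; names, emails, and other
        # PII are deliberately never added to the trace.
        segment.put_annotation('order_id', order['order_id'])
        segment.put_annotation('item_count', len(order['items']))
        return handle(order)
    finally:
        xray_recorder.end_segment()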

NEW QUESTION 2
A developer is deploying a company's application to Amazon EC2 instances. The application generates gigabytes of data files each day. The files are rarely accessed, but the files must be available to the application's users within minutes of a request during the first year of storage. The company must retain the files for 7 years.
How can the developer implement the application to meet these requirements MOST cost-effectively?

A. Store the files in an Amazon S3 bucket. Use the S3 Glacier Instant Retrieval storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Deep Archive storage class after 1 year.
B. Store the files in an Amazon S3 bucket. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition the files to the S3 Glacier Flexible Retrieval storage class after 1 year.
C. Store the files on an Amazon Elastic Block Store (Amazon EBS) volume. Use Amazon Data Lifecycle Manager (Amazon DLM) to create snapshots of the EBS volumes and to store those snapshots in Amazon S3.
D. Store the files on an Amazon Elastic File System (Amazon EFS) mount. Configure EFS lifecycle management to transition the files to the EFS Standard-Infrequent Access (Standard-IA) storage class after 1 year.

Answer: A

Explanation:
Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class, when your data is accessed once per quarter. https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
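
A rough boto3 sketch of option A; the bucket name, object key, and the exact 7-year expiration window are assumptions.

import boto3

s3 = boto3.client('s3')

# Upload new data files directly to S3 Glacier Instant Retrieval.
s3.put_object(Bucket='example-data-bucket', Key='reports/2024-01-01.dat',
              Body=b'file contents', StorageClass='GLACIER_IR')

# After 1 year (365 days), transition objects to S3 Glacier Deep Archive,
# where they stay for the remainder of the 7-year retention period.
s3.put_bucket_lifecycle_configuration(
    Bucket='example-data-bucket',
    LifecycleConfiguration={'Rules': [{
        'ID': 'archive-after-1-year',
        'Status': 'Enabled',
        'Filter': {'Prefix': ''},
        'Transitions': [{'Days': 365, 'StorageClass': 'DEEP_ARCHIVE'}],
        'Expiration': {'Days': 2555},  # delete after roughly 7 years
    }]},
)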

NEW QUESTION 3
A development team wants to build a continuous integration/continuous delivery (CI/CD) pipeline. The team is using AWS CodePipeline to automate the code build
and deployment. The team wants to store the program code to prepare for the CI/CD pipeline.
Which AWS service should the team use to store the program code?

A. AWS CodeDeploy
B. AWS CodeArtifact
C. AWS CodeCommit
D. Amazon CodeGuru

Answer: C

Explanation:
AWS CodeCommit is a service that provides fully managed source control for hosting secure and scalable private Git repositories. The development team can use
CodeCommit to store the program code and prepare for the CI/CD pipeline. CodeCommit integrates with other AWS services such as CodePipeline, CodeBuild,
and CodeDeploy to automate the code build and deployment process.
References:
? [What Is AWS CodeCommit? - AWS CodeCommit]
? [AWS CodePipeline - AWS CodeCommit]

NEW QUESTION 4
A company is using Amazon OpenSearch Service to implement an audit monitoring system. A developer needs to create an AWS CloudFormation custom resource that is associated with an AWS Lambda function to configure the OpenSearch Service domain. The Lambda function must access the OpenSearch Service domain by using the OpenSearch Service internal master user credentials.
What is the MOST secure way to pass these credentials to the Lambda function?

A. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain's MasterUserOptions and the Lambda function's environment variable. Set the NoEcho attribute to true.
B. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain's MasterUserOptions and to create a parameter in AWS Systems Manager Parameter Store. Set the NoEcho attribute to true. Create an IAM role that has the ssm:GetParameter permission. Assign the role to the Lambda function. Store the parameter name as the Lambda function's environment variable. Resolve the parameter's value at runtime.
C. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain's MasterUserOptions and the Lambda function's environment variable. Encrypt the parameter's value by using the AWS Key Management Service (AWS KMS) encrypt command.
D. Use CloudFormation to create an AWS Secrets Manager secret. Use a CloudFormation dynamic reference to retrieve the secret's value for the OpenSearch Service domain's MasterUserOptions. Create an IAM role that has the secretsmanager:GetSecretValue permission. Assign the role to the Lambda function. Store the secret's name as the Lambda function's environment variable. Resolve the secret's value at runtime.

Answer: D

Explanation:
The solution that will meet the requirements is to use CloudFormation to create an AWS Secrets Manager secret. Use a CloudFormation dynamic reference to
retrieve the secret’s value for the OpenSearch Service domain’s MasterUserOptions. Create an IAM role that has the secretsmanager:GetSecretValue
permission. Assign the role to the Lambda function. Store the secret’s name as the Lambda function’s environment variable. Resolve the secret’s value at
runtime. This way, the developer can pass the credentials to the Lambda function in a secure way, as AWS Secrets Manager encrypts and manages the secrets.
The developer can also use a dynamic reference to avoid exposing the secret’s value in plain text in the CloudFormation template. The other options either
involve passing the credentials as plain text parameters, which is not secure, or encrypting them with AWS KMS, which is less convenient than using AWS Secrets
Manager.
Reference: Using dynamic references to specify template values
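
For illustration, the relevant template pieces might look like the following fragment, shown here as a Python dict; the logical IDs are assumptions.

template_fragment = {
    "MasterUserSecret": {
        "Type": "AWS::SecretsManager::Secret",
        "Properties": {
            "GenerateSecretString": {
                "SecretStringTemplate": '{"username": "admin"}',
                "GenerateStringKey": "password",
            }
        },
    },
    "AuditDomain": {
        "Type": "AWS::OpenSearchService::Domain",
        "Properties": {
            "AdvancedSecurityOptions": {
                "InternalUserDatabaseEnabled": True,
                "MasterUserOptions": {
                    # Fn::Sub injects the secret's ARN, and the dynamic
                    # reference is resolved at deploy time, so the
                    # credentials never appear in the template in plain text.
                    "MasterUserName": {"Fn::Sub": "{{resolve:secretsmanager:${MasterUserSecret}:SecretString:username}}"},
                    "MasterUserPassword": {"Fn::Sub": "{{resolve:secretsmanager:${MasterUserSecret}:SecretString:password}}"},
                },
            }
        },
    },
}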

NEW QUESTION 5
A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types
of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API
URL must be available to all current and future deployed versions of the application across development, testing, and production environments.
How should the developer retrieve the variables with the FEWEST application changes?

A. Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
B. Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
C. Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
D. Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.

Answer: A

Explanation:
AWS Systems Manager Parameter Store is a service that provides secure, hierarchical storage for configuration data management and secrets management. The
developer can update the application to retrieve the variables from Parameter Store by using the AWS SDK or the AWS CLI. The developer can use unique paths
in Parameter Store for each variable in each environment, such as /dev/api-url, /test/api-url, and /prod/api-url. The developer can also store the credentials in AWS
Secrets Manager, which is integrated with Parameter Store and provides additional features such as automatic rotation and encryption.
References:
? [What Is AWS Systems Manager? - AWS Systems Manager]
? [Parameter Store - AWS Systems Manager]
? [What Is AWS Secrets Manager? - AWS Secrets Manager]
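
A minimal sketch of the retrieval code, assuming an APP_ENV environment variable and hypothetical parameter paths:

import os
import boto3

ssm = boto3.client('ssm')
secrets = boto3.client('secretsmanager')

# The environment name (dev/test/prod) is assumed to be injected into
# the ECS task definition as an environment variable.
env = os.environ.get('APP_ENV', 'dev')

# The same code runs in every environment; only the path prefix differs.
api_url = ssm.get_parameter(Name=f'/{env}/payments/api-url')['Parameter']['Value']
auth_info = ssm.get_parameter(Name=f'/{env}/payments/auth-info',
                              WithDecryption=True)['Parameter']['Value']
credentials = secrets.get_secret_value(
    SecretId=f'{env}/payments/credentials')['SecretString']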

NEW QUESTION 6
A developer maintains a critical business application that uses Amazon DynamoDB as the primary data store. The DynamoDB table contains millions of documents and receives 30-60 requests each minute. The developer needs to perform processing in near-real time on the documents when they are added or updated in the DynamoDB table.
How can the developer implement this feature with the LEAST amount of change to the existing application code?

A. Set up a cron job on an Amazon EC2 instance. Run a script every hour to query the table for changes and process the documents.
B. Enable a DynamoDB stream on the table. Invoke an AWS Lambda function to process the documents.
C. Update the application to send a PutEvents request to Amazon EventBridge. Create an EventBridge rule to invoke an AWS Lambda function to process the documents.
D. Update the application to synchronously process the documents directly after the DynamoDB write.

Answer: B

Explanation:
https://aws.amazon.com/blogs/database/dynamodb-streams-use-cases-and-design-patterns/
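
A sketch of what the stream-triggered Lambda handler could look like; the attribute names and the processing function are assumptions.

def process_document(document_id):
    print(f'processing {document_id}')  # placeholder near-real-time work

def lambda_handler(event, context):
    for record in event['Records']:
        # INSERT and MODIFY events carry the new item image when the
        # stream uses NEW_IMAGE or NEW_AND_OLD_IMAGES.
        if record['eventName'] in ('INSERT', 'MODIFY'):
            new_image = record['dynamodb'].get('NewImage', {})
            document_id = new_image.get('document_id', {}).get('S')
            process_document(document_id)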

NEW QUESTION 7
A company is building a new application that runs on AWS and uses Amazon API Gateway to expose APIs. Teams of developers are working on separate components of the application in parallel. The company wants to publish an API without an integrated backend so that teams that depend on the application backend can continue the development work before the API backend development is complete.
Which solution will meet these requirements?

A. Create API Gateway resources and set the integration type value to MOCK. Configure the method integration request and integration response to associate a response with an HTTP status code. Create an API Gateway stage and deploy the API.
B. Create an AWS Lambda function that returns mocked responses and various HTTP status codes. Create API Gateway resources and set the integration type value to AWS_PROXY. Deploy the API.
C. Create an EC2 application that returns mocked HTTP responses. Create API Gateway resources and set the integration type value to AWS. Create an API Gateway stage and deploy the API.
D. Create API Gateway resources and set the integration type value to HTTP_PROXY. Add mapping templates and deploy the API. Create an AWS Lambda layer that returns various HTTP status codes. Associate the Lambda layer with the API deployment.

Answer: A

Explanation:
The best solution for publishing an API without an integrated backend is to use the MOCK integration type in API Gateway. This allows the developer to return a
static response to the client without sending the request to a backend service. The developer can configure the method integration request and integration
response to associate a response with an HTTP status code, such as 200 OK or 404 Not Found. The developer can also create an API Gateway stage and deploy
the API to make it available to the teams that depend on the application backend. The other solutions are either not feasible or not efficient. Creating an AWS
Lambda function, an EC2 application, or an AWS Lambda layer would require additional resources and code to generate the mocked responses and HTTP status
codes. These solutions would also incur additional costs and complexity, and would not leverage the built-in functionality of API Gateway. References
? Set up mock integrations for API Gateway REST APIs
? Mock Integration for API Gateway - AWS CloudFormation
? Mocking API Responses with API Gateway
? How to mock API Gateway responses with AWS SAM
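
For illustration, a MOCK integration could be wired up with boto3 roughly as follows; the API and resource IDs are placeholders, and the GET method is assumed to exist already on the resource.

import boto3

apigw = boto3.client('apigateway')
rest_api_id, resource_id = 'a1b2c3', 'r4s5t6'  # placeholder IDs

# MOCK integration: API Gateway answers directly, no backend involved.
apigw.put_integration(restApiId=rest_api_id, resourceId=resource_id,
                      httpMethod='GET', type='MOCK',
                      requestTemplates={'application/json': '{"statusCode": 200}'})

# Associate the mock with a static 200 response body.
apigw.put_method_response(restApiId=rest_api_id, resourceId=resource_id,
                          httpMethod='GET', statusCode='200')
apigw.put_integration_response(restApiId=rest_api_id, resourceId=resource_id,
                               httpMethod='GET', statusCode='200',
                               responseTemplates={'application/json': '{"status": "mocked"}'})

# Deploy to a stage so dependent teams can call the endpoint.
apigw.create_deployment(restApiId=rest_api_id, stageName='dev')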

NEW QUESTION 8
A company is using AWS CloudFormation to deploy a two-tier application. The application will use Amazon RDS as its backend database. The company wants a
solution that will randomly generate the database password during deployment. The solution also must automatically rotate the database password without
requiring changes to the application.
What is the MOST operationally efficient solution that meets these requirements?

A. Use an AWS Lambda function as a CloudFormation custom resource to generate and rotate the password.
B. Use an AWS Systems Manager Parameter Store resource with the SecureString data type to generate and rotate the password.
C. Use a cron daemon on the application's host to generate and rotate the password.
D. Use an AWS Secrets Manager resource to generate and rotate the password.

Answer: D

Explanation:
This solution will meet the requirements by using AWS Secrets Manager, which is a service that helps protect secrets such as database credentials by encrypting
them with AWS Key Management Service (AWS KMS) and enabling automatic rotation of secrets. The developer can use an AWS Secrets Manager resource in an AWS CloudFormation template, which enables creating and managing secrets as part of a CloudFormation stack. The developer can use an
AWS::SecretsManager::Secret resource type to generate and rotate the password for accessing RDS database during deployment. The developer can also
specify a RotationSchedule property for the secret resource, which defines how often to rotate the secret and which Lambda function to use for rotation logic.
Option A is not optimal because it will use an AWS Lambda function as a CloudFormation custom resource, which may introduce additional complexity and
overhead for creating and managing a custom resource and implementing rotation logic. Option B is not optimal because it will use an AWS Systems Manager
Parameter Store resource with the SecureString data type, which does not support automatic rotation of secrets. Option C is not optimal because it will use a cron
daemon on the application’s host to generate and rotate the password, which may incur more costs and require more maintenance for running and securing a
host.
References: [AWS Secrets Manager], [AWS::SecretsManager::Secret]
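
The relevant CloudFormation resources might look roughly like this fragment, shown as a Python dict with hypothetical logical IDs and rotation Lambda:

resources = {
    "DbSecret": {
        "Type": "AWS::SecretsManager::Secret",
        "Properties": {
            "GenerateSecretString": {
                # Randomly generates the password at deployment time.
                "SecretStringTemplate": '{"username": "appuser"}',
                "GenerateStringKey": "password",
                "ExcludeCharacters": '"@/\\',
            }
        },
    },
    "DbSecretRotation": {
        "Type": "AWS::SecretsManager::RotationSchedule",
        "Properties": {
            "SecretId": {"Ref": "DbSecret"},
            "RotationLambdaARN": {"Fn::GetAtt": ["RotationLambda", "Arn"]},
            "RotationRules": {"AutomaticallyAfterDays": 30},
        },
    },
}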

NEW QUESTION 9
A developer is designing a serverless application for a game in which users register and log in through a web browser. The application makes requests on behalf of users to a set of AWS Lambda functions that run behind an Amazon API Gateway HTTP API.
The developer needs to implement a solution to register and log in users on the application's sign-in page. The solution must minimize operational overhead and must minimize ongoing management of user identities.
Which solution will meet these requirements?

A. Create Amazon Cognito user pools for external social identity providers. Configure IAM roles for the identity pools.
B. Program the sign-in page to create users' IAM groups with the IAM roles attached to the groups.
C. Create an Amazon RDS for SQL Server DB instance to store the users and manage the permissions to the backend resources in AWS.
D. Configure the sign-in page to register and store the users and their passwords in an Amazon DynamoDB table with an attached IAM policy.

Answer: A

Explanation:
https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html

NEW QUESTION 10
A company is building a serverless application on AWS. The application uses an AWS Lambda function to process customer orders 24 hours a day, 7 days a
week. The Lambda function calls an external vendor's HTTP API to process payments.
During load tests, a developer discovers that the external vendor payment processing API occasionally times out and returns errors. The company expects that
some payment processing API calls will return errors.
The company wants the support team to receive notifications in near real time only when the payment processing external API error rate exceeds 5% of the total number of transactions in an hour. Developers need to use an existing Amazon Simple Notification Service (Amazon SNS) topic that is configured to notify the support team.
Which solution will meet these requirements?


A. Write the results of the payment processing API calls to Amazon CloudWatch. Use Amazon CloudWatch Logs Insights to query the CloudWatch logs. Schedule the Lambda function to check the CloudWatch logs and notify the existing SNS topic.
B. Publish custom metrics to CloudWatch that record the failures of the external payment processing API calls. Configure a CloudWatch alarm to notify the existing SNS topic when the error rate exceeds the specified rate.
C. Publish the results of the external payment processing API calls to a new Amazon SNS topic. Subscribe the support team members to the new SNS topic.
D. Write the results of the external payment processing API calls to Amazon S3. Schedule an Amazon Athena query to run at regular intervals. Configure Athena to send notifications to the existing SNS topic when the error rate exceeds the specified rate.

Answer: B

Explanation:
Amazon CloudWatch is a service that monitors AWS resources and applications. The developer can publish custom metrics to CloudWatch that record the
failures of the external payment processing API calls. The developer can configure a CloudWatch alarm to notify the existing SNS topic when the error rate
exceeds 5% of the total number of transactions in an hour. This solution will meet the requirements in a near real-time and scalable way.
References:
? [What Is Amazon CloudWatch? - Amazon CloudWatch]
? [Publishing Custom Metrics - Amazon CloudWatch]
? [Creating Amazon CloudWatch Alarms - Amazon CloudWatch]
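
A hedged boto3 sketch of option B; the namespace, metric names, and SNS topic ARN are assumptions. The alarm uses metric math so the threshold applies to the error rate rather than the raw error count.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Inside the Lambda function: record one data point per payment API call.
def record_call(succeeded):
    cloudwatch.put_metric_data(Namespace='PaymentApp', MetricData=[
        {'MetricName': 'ApiCalls', 'Value': 1, 'Unit': 'Count'},
        {'MetricName': 'ApiErrors', 'Value': 0 if succeeded else 1, 'Unit': 'Count'},
    ])

# One-time setup: alarm when the hourly error rate exceeds 5%.
cloudwatch.put_metric_alarm(
    AlarmName='payment-api-error-rate',
    ComparisonOperator='GreaterThanThreshold', Threshold=5.0,
    EvaluationPeriods=1,
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:support-topic'],
    Metrics=[
        {'Id': 'errors', 'ReturnData': False, 'MetricStat': {
            'Metric': {'Namespace': 'PaymentApp', 'MetricName': 'ApiErrors'},
            'Period': 3600, 'Stat': 'Sum'}},
        {'Id': 'calls', 'ReturnData': False, 'MetricStat': {
            'Metric': {'Namespace': 'PaymentApp', 'MetricName': 'ApiCalls'},
            'Period': 3600, 'Stat': 'Sum'}},
        {'Id': 'rate', 'ReturnData': True, 'Expression': '100 * errors / calls'},
    ],
)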

NEW QUESTION 10
A developer has been asked to create an AWS Lambda function that is invoked any time updates are made to items in an Amazon DynamoDB table. The function has been created, and appropriate permissions have been added to the Lambda execution role. Amazon DynamoDB streams have been enabled for the table, but the function is still not being invoked.
Which option would enable DynamoDB table updates to invoke the Lambda function?

A. Change the StreamViewType parameter value to NEW_AND_OLD_IMAGES for the DynamoDB table.
B. Configure event source mapping for the Lambda function.
C. Map an Amazon Simple Notification Service (Amazon SNS) topic to the DynamoDB streams.
D. Increase the maximum runtime (timeout) setting of the Lambda function.

Answer: B

Explanation:
This solution allows the Lambda function to be invoked by the DynamoDB stream whenever updates are made to items in the DynamoDB table. Event source
mapping is a feature of Lambda that enables a function to be triggered by an event source, such as a DynamoDB stream, an Amazon Kinesis stream, or an
Amazon Simple Queue Service (SQS) queue. The developer can configure event source mapping for the Lambda function using the AWS Management Console,
the AWS CLI, or the AWS SDKs. Changing the StreamViewType parameter value to NEW_AND_OLD_IMAGES for the DynamoDB table will not affect the
invocation of the Lambda function, but only change the information that is written to the stream record. Mapping an Amazon Simple Notification Service (Amazon
SNS) topic to the DynamoDB stream will not invoke the Lambda function directly, but require an additional subscription from the Lambda function to the SNS topic.
Increasing the maximum runtime (timeout) setting of the Lambda function will not affect the invocation of the Lambda function, but only change how long the
function can run before it is terminated.
Reference: [Using AWS Lambda with Amazon DynamoDB], [Using AWS Lambda with Amazon SNS]
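
A minimal sketch of creating the missing event source mapping with boto3; the table and function names are assumptions.

import boto3

lambda_client = boto3.client('lambda')

# The stream ARN comes from the table description once streams are enabled.
stream_arn = boto3.client('dynamodb').describe_table(
    TableName='example-table')['Table']['LatestStreamArn']

# The event source mapping is what actually polls the stream and invokes
# the function; without it, the function is never triggered.
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName='example-function',
    StartingPosition='LATEST',
    BatchSize=100,
)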

NEW QUESTION 12
An application that runs on AWS receives messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in batches. The
application sends the data to another SQS queue to be consumed by another legacy application. The legacy system can take up to 5
minutes to process some transaction data.
A developer wants to ensure that there are no out-of-order updates in the legacy system. The developer cannot alter the behavior of the legacy system.
Which solution will meet these requirements?

A. Use an SQS FIFO queue. Configure the visibility timeout value.
B. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the DelaySeconds values.
C. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the visibility timeout value.
D. Use an SQS FIFO queue. Configure the DelaySeconds value.

Answer: A

Explanation:
? An SQS FIFO queue is a type of queue that preserves the order of messages and ensures that each message is delivered and processed only once1. This is
suitable for the scenario where the developer wants to ensure that there are no out-of-order updates in the legacy system.
? The visibility timeout value is the amount of time that a message is invisible in the queue after a consumer receives it2. This prevents other consumers from
processing the same message simultaneously. If the consumer does not delete the message before the visibility timeout expires, the message becomes visible
again and another consumer can receive it2.
? In this scenario, the developer needs to configure the visibility timeout value to be longer than the maximum processing time of the legacy system, which is 5
minutes. This will ensure that the message remains invisible in the queue until the legacy system finishes processing it and deletes it. This will prevent duplicate or
out-of-order processing of messages by the legacy system.
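
For illustration, the queue might be created like this; the queue name is an assumption, and 360 seconds is one reasonable choice above the legacy system's 5-minute processing ceiling.

import boto3

sqs = boto3.client('sqs')

# FIFO queues require the .fifo suffix. The visibility timeout is set
# above the 5-minute worst case so an in-flight message is not
# redelivered, and therefore not processed out of order.
sqs.create_queue(
    QueueName='legacy-transactions.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true',
        'VisibilityTimeout': '360',
    },
)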

NEW QUESTION 13
A developer is creating an AWS Lambda function that needs credentials to connect to an Amazon RDS for MySQL database. An Amazon S3 bucket currently
stores the credentials. The developer needs to improve the existing solution by implementing credential rotation and secure storage. The developer also needs to
provide integration with the Lambda function.
Which solution should the developer use to store and retrieve the credentials with the LEAST management overhead?


A. Store the credentials in AWS Systems Manager Parameter Store. Select the database that the parameter will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the parameter. Enable automatic rotation for the parameter. Use the parameter from Parameter Store on the Lambda function to connect to the database.
B. Encrypt the credentials with the default AWS Key Management Service (AWS KMS) key. Store the credentials as environment variables for the Lambda function. Create a second Lambda function to generate new credentials and to rotate the credentials by updating the environment variables of the first Lambda function. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the database to use the new credentials. On the first Lambda function, retrieve the credentials from the environment variables. Decrypt the credentials by using AWS KMS. Connect to the database.
C. Store the credentials in AWS Secrets Manager. Set the secret type to Credentials for Amazon RDS database. Select the database that the secret will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the secret. Enable automatic rotation for the secret. Use the secret from Secrets Manager on the Lambda function to connect to the database.
D. Encrypt the credentials by using AWS Key Management Service (AWS KMS). Store the credentials in an Amazon DynamoDB table. Create a second Lambda function to rotate the credentials. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the DynamoDB table. Update the database to use the generated credentials. Retrieve the credentials from DynamoDB with the first Lambda function. Connect to the database.

Answer: C

Explanation:
AWS Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and IT resources. Secrets Manager enables you
to store, retrieve, and rotate secrets such as database credentials, API keys, and passwords. Secrets Manager supports a secret type for RDS databases, which
allows you to select an existing RDS database instance and generate credentials for it. Secrets Manager encrypts the secret using AWS Key Management Service
(AWS KMS) keys and enables automatic rotation of the secret at a specified interval. A Lambda function can use the AWS SDK or CLI to retrieve the secret from
Secrets Manager and use it to connect to the database. Reference: Rotating your AWS Secrets Manager secrets
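
A rough sketch of the Lambda side, assuming the pymysql library and a hypothetical secret name and JSON layout:

import json
import boto3
import pymysql  # assumed MySQL client library

secrets = boto3.client('secretsmanager')

def lambda_handler(event, context):
    # Secrets Manager returns the current, auto-rotated credentials.
    secret = json.loads(secrets.get_secret_value(
        SecretId='rds/mysql/app-credentials')['SecretString'])
    conn = pymysql.connect(host=secret['host'], user=secret['username'],
                           password=secret['password'], database=secret['dbname'])
    with conn.cursor() as cursor:
        cursor.execute('SELECT 1')  # placeholder query
    conn.close()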

NEW QUESTION 15
A company uses Amazon API Gateway to expose a set of APIs to customers. The APIs have caching enabled in API Gateway. Customers need a way to
invalidate the cache for each API when they test the API.
What should a developer do to give customers the ability to invalidate the API cache?

A. Ask the customers to use AWS credentials to call the InvalidateCache API operation.
B. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to send a request that contains the Cache-Control: max-age=0 HTTP header when they make an API call.
C. Ask the customers to use the AWS SDK API Gateway class to invoke the InvalidateCache API operation.
D. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to add the INVALIDATE_CACHE query string parameter when they make an API call.

Answer: D

NEW QUESTION 18
A developer is creating an AWS Lambda function that searches for items from an Amazon DynamoDB table that contains customer contact information. The DynamoDB table items have the customer's email_address as the partition key and additional properties such as customer_type, name, and job_title.
The Lambda function runs whenever a user types a new character into the customer_type text input. The developer wants the search to return partial matches of all the email_address properties of a particular customer_type. The developer does not want to recreate the DynamoDB table.
What should the developer do to meet these requirements?

A. Add a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
B. Add a global secondary index (GSI) to the DynamoDB table with email_address as the partition key and customer_type as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
C. Add a local secondary index (LSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.
D. Add a local secondary index (LSI) to the DynamoDB table with job_title as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.

Answer: A

Explanation:
The solution that will meet the requirements is to add a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and
email_address as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property. This
way, the developer can search for partial matches of the email_address property of a particular customer type without recreating the DynamoDB table. The other
options either involve using a local secondary index (LSI), which requires recreating the table, or using a different partition key, which does not allow filtering by
customer_type.
Reference: Using Global Secondary Indexes in DynamoDB
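
A sketch of the query against such a GSI; the table and index names are assumptions.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('customer-contacts')

def search(customer_type, typed_prefix):
    # Exact match on the GSI partition key (customer_type), prefix
    # match on the sort key (email_address).
    response = table.query(
        IndexName='customer_type-email_address-index',  # hypothetical GSI name
        KeyConditionExpression=Key('customer_type').eq(customer_type) &
                               Key('email_address').begins_with(typed_prefix),
    )
    return response['Items']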

NEW QUESTION 20
A developer wants to expand an application to run in multiple AWS Regions. The developer wants to copy Amazon Machine Images (AMIs) with the latest changes
and create a new application stack in the destination Region. According to company requirements, all AMIs must be encrypted in all Regions. However, not all the AMIs that the company uses are encrypted.


How can the developer expand the application to run in the destination Region while meeting the encryption requirement?

A. Mastered
B. Not Mastered

Answer: A

Explanation:
Amazon Machine Images (AMIs) are encrypted snapshots of EC2 instances that can be used to launch new instances. The developer can create new AMIs from
the existing instances and specify encryption parameters. The developer can copy the encrypted AMIs to the destination Region and use them to create a new
application stack. The developer can delete the unencrypted AMIs after the encryption process is complete. This solution will meet the encryption requirement and
allow the developer to expand the application to run in the destination Region.
References:
? [Amazon Machine Images (AMI) - Amazon Elastic Compute Cloud]
? [Encrypting an Amazon EBS Snapshot - Amazon Elastic Compute Cloud]
? [Copying an AMI - Amazon Elastic Compute Cloud]
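
For illustration, the encrypted copy could be requested like this; the AMI ID, Regions, and KMS key alias are placeholders.

import boto3

# Client in the destination Region.
ec2 = boto3.client('ec2', region_name='eu-west-1')

# Copying with Encrypted=True produces an encrypted AMI in the
# destination Region even when the source AMI is unencrypted.
response = ec2.copy_image(
    SourceImageId='ami-0123456789abcdef0',
    SourceRegion='us-east-1',
    Name='app-stack-encrypted-copy',
    Encrypted=True,
    KmsKeyId='alias/app-ami-key',
)
print(response['ImageId'])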

NEW QUESTION 25
A developer is working on a Python application that runs on Amazon EC2 instances. The developer wants to enable tracing of application requests to debug
performance issues in the code.
Which combination of actions should the developer take to achieve this goal? (Select TWO)

A. Install the Amazon CloudWatch agent on the EC2 instances.
B. Install the AWS X-Ray daemon on the EC2 instances.
C. Configure the application to write JSON-formatted logs to /var/log/cloudwatch.
D. Configure the application to write trace data to /var/log/xray.
E. Install and configure the AWS X-Ray SDK for Python in the application.

Answer: BE

Explanation:
This solution will meet the requirements by using AWS X-Ray to enable tracing of application requests to debug performance issues in the code. AWS X-Ray is a
service that collects data about requests that the applications serve, and provides tools to view, filter, and gain insights into that data.
The developer can install the AWS X-Ray daemon on the EC2 instances, which is a software that listens for traffic on UDP port 2000, gathers raw segment data,
and relays it to the X-Ray API. The developer can also install and configure the AWS X-Ray SDK for Python in the application, which is a library that enables
instrumenting Python code to generate and send trace data to the X-Ray daemon. Option A is not optimal because it will install the Amazon CloudWatch agent on
the EC2 instances, which is a software that collects metrics and logs from EC2 instances and on- premises servers, not application performance data. Option C is
not optimal because it will configure the application to write JSON-formatted logs to /var/log/cloudwatch, which is not a valid path or destination for CloudWatch
logs. Option D is not optimal because it will configure the application to write trace data to /var/log/xray, which is also not a valid path or destination for X-Ray trace
data.
References: [AWS X-Ray], [Running the X-Ray Daemon on Amazon EC2]
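
A minimal sketch combining options B and E on the application side; the segment and function names are illustrative.

from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # traces calls made through boto3, requests, and other supported libraries

@xray_recorder.capture('load_customer')  # records this call as a subsegment
def load_customer(customer_id):
    return {'id': customer_id}  # placeholder logic

# On EC2 (unlike Lambda) the application opens and closes its own
# segments; the X-Ray daemon on the instance relays the trace data.
xray_recorder.begin_segment('customer-service')
load_customer('42')
xray_recorder.end_segment()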

NEW QUESTION 28
A developer is using AWS Step Functions to automate a workflow. The workflow defines each step as an AWS Lambda function task. The developer notices that runs of the Step Functions state machine fail in the GetResource task with either an IllegalArgumentException error or a TooManyRequestsException error.
The developer wants the state machine to stop running when the state machine encounters an IllegalArgumentException error. The state machine needs to retry the GetResource task one additional time after 10 seconds if the state machine encounters a TooManyRequestsException error. If the second attempt fails, the developer wants the state machine to stop running.
How can the developer implement the Lambda retry functionality without adding unnecessary complexity to the state machine?

A. Add a Delay task after the GetResource task. Add a catcher to the GetResource task. Configure the catcher with an error type of TooManyRequestsException. Configure the next step to be the Delay task. Configure the Delay task to wait for an interval of 10 seconds. Configure the next step to be the GetResource task.
B. Add a catcher to the GetResource task. Configure the catcher with an error type of TooManyRequestsException, an interval of 10 seconds, and a maximum attempts value of 1. Configure the next step to be the GetResource task.
C. Add a retrier to the GetResource task. Configure the retrier with an error type of TooManyRequestsException, an interval of 10 seconds, and a maximum attempts value of 1.
D. Duplicate the GetResource task. Rename the new GetResource task to TryAgain. Add a catcher to the original GetResource task. Configure the catcher with an error type of TooManyRequestsException. Configure the next step to be TryAgain.

Answer: C

Explanation:
The best way to implement the Lambda retry functionality is to use the Retry field in the state definition of the GetResource task. The Retry field allows the
developer to specify an array of retriers, each with an error type, an interval, and a maximum number of attempts. By setting the error type to
TooManyRequestsException, the interval to 10 seconds, and the maximum attempts to 1, the developer can achieve the desired behavior of retrying the
GetResource task once after 10 seconds if it encounters
a TooManyRequestsException error. If the retry fails, the state machine will stop running. If the GetResource task encounters an IllegalArgumentException error, the state machine will also stop running without retrying, as this error type is not specified in the Retry field. References
? Error handling in Step Functions
? Handling Errors, Retries, and adding Alerting to Step Function State Machine Executions
? The Jitter Strategy for Step Functions Error Retries on the New Workflow Studio
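
The corresponding state definition might look roughly like this, expressed as a Python dict in Amazon States Language terms; the Lambda ARN is a placeholder.

get_resource_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:GetResource",
    "Retry": [
        {
            # Retry only this error, once, after 10 seconds.
            "ErrorEquals": ["TooManyRequestsException"],
            "IntervalSeconds": 10,
            "MaxAttempts": 1,
        }
        # IllegalArgumentException is deliberately absent, so that error
        # fails the execution immediately, as required.
    ],
    "End": True,
}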

NEW QUESTION 29
A company has an existing application that has hardcoded database credentials. A developer needs to modify the existing application. The application is deployed in two AWS Regions with an active-passive failover configuration to meet the company's disaster recovery strategy.
The developer needs a solution to store the credentials outside the code. The solution must comply with the company's disaster recovery strategy.


Which solution will meet these requirements in the MOST secure way?

A. Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
B. Store credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
C. Store credentials in a config file. Upload the config file to an S3 bucket in the primary Region. Enable Cross-Region Replication (CRR) to an S3 bucket in the secondary Region. Update the application to access the config file from the S3 bucket based on the Region.
D. Store credentials in a config file. Upload the config file to an Amazon Elastic File System (Amazon EFS) file system. Update the application to use the Amazon EFS file system Regional endpoints to access the config file in the primary and secondary Regions.

Answer: A

Explanation:
AWS Secrets Manager is a service that allows you to store and manage secrets, such as database credentials, API keys, and passwords, in a secure and
centralized way. It also provides features such as automatic secret rotation, auditing, and monitoring1. By using AWS Secrets Manager, you can avoid hardcoding
credentials in your code, which is a bad security practice and makes it difficult to update them. You can also replicate your secrets to another Region, which is
useful for disaster recovery purposes2. To access your secrets from your application, you can use the ARN of the secret, which is a unique identifier that includes
the Region name. This way, your application can use the appropriate secret based on the Region where it is deployed3.
References:
? AWS Secrets Manager
? Replicating and sharing secrets
? Using your own encryption keys
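
A rough sketch of both halves, assuming a hypothetical secret name and replica Region:

import os
import boto3

# One-time setup: replicate the secret from the primary Region.
boto3.client('secretsmanager', region_name='us-east-1').replicate_secret_to_regions(
    SecretId='app/db-credentials',
    AddReplicaRegions=[{'Region': 'us-west-2'}],
)

# In the application: a client bound to the local Region resolves the
# local replica (replicas keep the same name), so failover needs no
# code change.
region = os.environ.get('AWS_REGION', 'us-east-1')
secret = boto3.client('secretsmanager', region_name=region).get_secret_value(
    SecretId='app/db-credentials')['SecretString']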

NEW QUESTION 34
A company wants to share information with a third party. The third party has an HTTP API endpoint that the company can use to share the information. The
company has the required API key to access the HTTP API.
The company needs a way to manage the API key by using code. The integration of the API key with the application code cannot affect application performance.
Which solution will meet these requirements MOST securely?

A. Mastered
B. Not Mastered

Answer: A

Explanation:
AWS Secrets Manager is a service that helps securely store, rotate, and manage secrets such as API keys, passwords, and tokens. The developer can store the
API credentials in AWS Secrets Manager and retrieve them at runtime by using the AWS SDK. This solution will meet the requirements of security, code
management, and performance. Storing the API credentials in a local code variable or an S3 object is not secure, as it exposes the credentials to unauthorized
access or leakage. Storing the API credentials in a DynamoDB table is also not secure, as it requires additional encryption and access control measures.
Moreover, retrieving the credentials from S3 or DynamoDB may affect application performance due to network latency.
References:
? [What Is AWS Secrets Manager? - AWS Secrets Manager]
? [Retrieving a Secret - AWS Secrets Manager]
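
One common pattern, sketched here with an assumed secret name and JSON layout, is to cache the key outside the handler so the lookup does not add latency to every request:

import json
import boto3

_secrets = boto3.client('secretsmanager')
_cached_key = None

def get_api_key():
    # Cached for the life of the execution environment; Secrets Manager
    # is called only on the first request after a cold start.
    global _cached_key
    if _cached_key is None:
        secret = _secrets.get_secret_value(SecretId='third-party/api-key')
        _cached_key = json.loads(secret['SecretString'])['api_key']
    return _cached_key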

NEW QUESTION 37
A developer is building a serverless application by using AWS Serverless Application Model (AWS SAM) on multiple AWS Lambda functions. When the application
is deployed, the developer wants to shift 10% of the traffic to the new deployment of the application for the first 10 minutes after deployment. If there are no issues,
all traffic must switch over to the new version.
Which change to the AWS SAM template will meet these requirements?

A. Set the Deployment Preference Type to Canary10Percent10Minutes. Set the AutoPublishAlias property to the Lambda alias.
B. Set the Deployment Preference Type to Linear10PercentEvery10Minutes. Set the AutoPublishAlias property to the Lambda alias.
C. Set the Deployment Preference Type to Canary10Percent10Minutes. Set the PreTraffic and PostTraffic properties to the Lambda alias.
D. Set the Deployment Preference Type to Linear10PercentEvery10Minutes. Set the PreTraffic and PostTraffic properties to the Lambda alias.

Answer: A

Explanation:
? The Deployment Preference Type property specifies how traffic should be shifted between versions of a Lambda function1. The Canary10Percent10Minutes
option means that 10% of the traffic is immediately shifted to the new version, and after 10 minutes, the remaining 90% of the traffic is shifted1. This matches the
requirement of shifting 10% of the traffic for the first 10 minutes, and then switching all traffic to the new version.
? The AutoPublishAlias property enables AWS SAM to automatically create and update a Lambda alias that points to the latest version of the function1. This is
required to use the Deployment Preference Type property1. The alias name can be specified by the developer, and it can be used to invoke the function with the
latest code.
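
The relevant SAM fragment might look roughly like this, shown as a Python dict with assumed function details:

sam_function = {
    "OrdersFunction": {
        "Type": "AWS::Serverless::Function",
        "Properties": {
            "Handler": "app.handler",
            "Runtime": "python3.12",
            "CodeUri": "src/",
            # Creates and updates an alias pointing at the newest version.
            "AutoPublishAlias": "live",
            # 10% of traffic for 10 minutes, then 100% if healthy.
            "DeploymentPreference": {"Type": "Canary10Percent10Minutes"},
        },
    }
}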

NEW QUESTION 41
A developer at a company needs to create a small application that makes the same API call once each day at a designated time. The company does not have
infrastructure in the AWS Cloud yet, but the company wants to implement this functionality on AWS.
Which solution meets these requirements in the MOST operationally efficient manner?

A. Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS).
B. Use an Amazon Linux crontab scheduled job that runs on Amazon EC2.
C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.
D. Use an AWS Batch job that is submitted to an AWS Batch job queue.

Answer: C

Explanation:
The correct answer is C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.
* C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event. This is correct. AWS Lambda is a serverless compute service that
lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the
administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging1. Amazon
EventBridge is a serverless event bus service that enables you to connect your applications with data from a variety of sources2. EventBridge can create rules that
run on a schedule, either at regular intervals or at specific times and dates, and invoke targets such as Lambda functions3. This solution meets the requirements of
creating a small application that makes the same API call once each day at a designated time, without requiring any infrastructure in the AWS Cloud or any
operational overhead.
* A. Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS). This is incorrect. Amazon EKS is a fully managed Kubernetes
service that allows you to run containerized applications on AWS4. Kubernetes cron jobs are tasks that run periodically on a given schedule5. This solution could
meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not be the most
operationally efficient manner. The company would need to provision and manage an EKS cluster, which would incur additional costs and complexity.
* B. Use an Amazon Linux crontab scheduled job that runs on Amazon EC2. This is incorrect. Amazon EC2 is a web service that provides secure, resizable
compute capacity in the cloud6. Crontab is a Linux utility that allows you to schedule commands or scripts to run automatically at a specified time or date7. This
solution could meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not
be the most operationally efficient manner. The company would need to provision and manage an EC2 instance, which would incur additional costs and
complexity.
* D. Use an AWS Batch job that is submitted to an AWS Batch job queue. This is incorrect. AWS Batch enables you to run batch computing workloads on the AWS Cloud8. Batch jobs are units of work that can be submitted to job queues, where they are executed in parallel or sequentially on compute environments9. This solution could meet the functional requirements of creating a small application that makes the same API call once each day at a designated time, but it would not be the most operationally efficient manner. The company would need to configure and manage an AWS Batch environment, which would incur additional costs and complexity.
References:
? 1: What is AWS Lambda? - AWS Lambda
? 2: What is Amazon EventBridge? - Amazon EventBridge
? 3: Creating an Amazon EventBridge rule that runs on a schedule - Amazon EventBridge
? 4: What is Amazon EKS? - Amazon EKS
? 5: CronJob - Kubernetes
? 6: What is Amazon EC2? - Amazon EC2
? 7: Crontab in Linux with 20 Useful Examples to Schedule Jobs - Tecmint
? 8: What is AWS Batch? - AWS Batch
? 9: Jobs - AWS Batch
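
A minimal boto3 sketch of option C; the rule name, schedule, function name, and account ID are assumptions.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Run every day at 12:00 UTC (cron fields: minute hour day month weekday year).
rule_arn = events.put_rule(Name='daily-api-call',
                           ScheduleExpression='cron(0 12 * * ? *)')['RuleArn']

# Allow EventBridge to invoke the function, then attach it as the target.
lambda_client.add_permission(FunctionName='daily-api-caller',
                             StatementId='allow-eventbridge',
                             Action='lambda:InvokeFunction',
                             Principal='events.amazonaws.com',
                             SourceArn=rule_arn)
events.put_targets(Rule='daily-api-call', Targets=[
    {'Id': '1',
     'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:daily-api-caller'},
])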

NEW QUESTION 44
A company built an online event platform. For each event, the company organizes quizzes and generates leaderboards that are based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.
The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.
A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.
Which solution meets these requirements?

A. Configure a TTL attribute for the leaderboard data.
B. Use DynamoDB Streams to schedule and delete the leaderboard data.
C. Use AWS Step Functions to schedule and delete the leaderboard data.
D. Set a higher write capacity when the scheduled delete job runs.

Answer: A

Explanation:
"deletes the item from your table without consuming any write throughput" https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html

NEW QUESTION 48
A developer is writing an application that will retrieve sensitive data from a third-party system. The application will format the data into a PDF file. The PDF file
could be more than 1 MB. The application will encrypt the data to disk by using AWS Key Management Service (AWS KMS). The application will decrypt the file
when a user requests to download it. The retrieval and formatting portions of the application are complete.
The developer needs to use the GenerateDataKey API to encrypt the PDF file so that the PDF file can be decrypted later. The developer needs to use an AWS
KMS symmetric customer managed key for encryption.
Which solution will meet these requirements?

A. Write the encrypted key from the GenerateDataKey API to disk for later use. Use the plaintext key from the GenerateDataKey API and a symmetric encryption algorithm to encrypt the file.
B. Write the plaintext key from the GenerateDataKey API to disk for later use. Use the encrypted key from the GenerateDataKey API and a symmetric encryption algorithm to encrypt the file.
C. Write the encrypted key from the GenerateDataKey API to disk for later use. Use the plaintext key from the GenerateDataKey API to encrypt the file by using the KMS Encrypt API.
D. Write the plaintext key from the GenerateDataKey API to disk for later use. Use the encrypted key from the GenerateDataKey API to encrypt the file by using the KMS Encrypt API.

Answer: A

Explanation:


? The GenerateDataKey API returns a data key that is encrypted under a symmetric encryption KMS key that you specify, and a plaintext copy of the same data key1. The data key is a random byte string that can be used with any standard encryption algorithm, such as AES or SM42. The plaintext data key can be used to encrypt or decrypt data outside of AWS KMS, while the encrypted data key can be stored with the encrypted data and later decrypted by AWS KMS1.
? In this scenario, the developer needs to use the GenerateDataKey API to encrypt the PDF file so that it can be decrypted later. The developer also needs to use an AWS KMS symmetric customer managed key for encryption. To achieve this, the developer can call GenerateDataKey, write the encrypted data key to disk for later use, encrypt the file with the plaintext data key and a symmetric encryption algorithm, and then discard the plaintext key.
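
A hedged sketch of those steps, assuming the third-party cryptography library for the local AES-GCM step and a hypothetical KMS key alias:

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumed dependency

kms = boto3.client('kms')
pdf_bytes = open('report.pdf', 'rb').read()

# 1. Ask KMS for a data key under the symmetric customer managed key.
key = kms.generate_data_key(KeyId='alias/pdf-encryption-key', KeySpec='AES_256')

# 2. Encrypt the file locally with the plaintext key and a symmetric
#    algorithm (AES-GCM is one choice).
nonce = os.urandom(12)
ciphertext = AESGCM(key['Plaintext']).encrypt(nonce, pdf_bytes, None)

# 3. Persist the ciphertext and only the *encrypted* data key; the
#    plaintext key is discarded when this process exits.
with open('report.pdf.enc', 'wb') as f:
    f.write(nonce + ciphertext)
with open('report.pdf.key', 'wb') as f:
    f.write(key['CiphertextBlob'])

# Later, kms.decrypt(CiphertextBlob=...) recovers the plaintext data key
# so the file can be decrypted when a user requests the download.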

NEW QUESTION 49
A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to implement an application that collects all the lifecycle events of the
EC2 instances. The application needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in the company's main AWS
account for further processing.
Which solution will meet these requirements?

A. Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
B. Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
C. Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.

Answer: D

Explanation:
Amazon EC2 instances can send the state-change notification events to Amazon EventBridge.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instance-state-changes.html Amazon EventBridge can send and receive events between event buses in AWS accounts. https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html
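
A rough sketch of the wiring described in option D; account IDs, names, and the cross-account role are placeholders.

import boto3

# In the main account: let each member account put events on the bus.
boto3.client('events').put_permission(
    EventBusName='default', Action='events:PutEvents',
    Principal='222233334444',  # repeat per member account
    StatementId='allow-account-222233334444',
)

# In each member account: forward lifecycle events to the main bus.
member_events = boto3.client('events')
member_events.put_rule(
    Name='forward-ec2-lifecycle',
    EventPattern='{"source": ["aws.ec2"], "detail-type": ["EC2 Instance State-change Notification"]}',
)
member_events.put_targets(Rule='forward-ec2-lifecycle', Targets=[
    {'Id': 'main-bus',
     'Arn': 'arn:aws:events:us-east-1:111122223333:event-bus/default',
     'RoleArn': 'arn:aws:iam::222233334444:role/eventbridge-cross-account'},
])
# A final rule on the main account's bus matches the same pattern and
# targets the SQS queue.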

NEW QUESTION 52
A company has an ecommerce application. To track product reviews, the company's development team uses an Amazon DynamoDB table.
Every record includes the following:
• A Review ID: a 16-digit universally unique identifier (UUID)
• A Product ID and User ID: 16-digit UUIDs that reference other tables
• A Product Rating on a scale of 1-5
• An optional comment from the user
The table partition key is the Review ID. The most performed query against the table is to find the 10 reviews with the highest rating for a given product.
Which index will provide the FASTEST response for this query?

A. A global secondary index (GSl) with Product ID as the partition key and Product Rating as the sort key
B. A global secondary index (GSl) with Product ID as the partition key and Review ID as the sort key
C. A local secondary index (LSI) with Product ID as the partition key and Product Rating as the sort key
D. A local secondary index (LSI) with Review ID as the partition key and Product ID as the sort key

Answer: A

Explanation:
This solution allows the fastest response for the query because it enables the query to use a single partition key value (the Product ID) and a range of sort key
values (the Product Rating) to find the matching items. A global secondary index (GSI) is an index that has a partition key and an optional sort key that are different
from those on the base table. A GSI can be created at any time and can be queried or scanned independently of the base table. A local secondary index (LSI) is
an index that has the same partition key as the base table, but a different sort key. An LSI can only be created when the base table is created and must be queried
together with the base table partition key. Using a GSI with Product ID as the partition key and Review ID as the sort key will not allow the query to use a range of
sort key values to find the highest ratings. Using an LSI with Product ID as the partition key and Product Rating as the sort key will not work because Product ID is
not the partition key of the base table. Using an LSI with Review ID as the partition key and Product ID as the sort key will not allow the query to use a single
partition key value to find the matching items.
Reference: [Global Secondary Indexes], [Querying]
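
A sketch of the query against such a GSI; the table and index names are assumptions.

import boto3
from boto3.dynamodb.conditions import Key

reviews = boto3.resource('dynamodb').Table('product-reviews')

def top_reviews(product_id):
    # Descending sort-key order plus Limit returns the 10 highest
    # ratings in a single, cheap query against the GSI.
    response = reviews.query(
        IndexName='product_id-product_rating-index',  # hypothetical GSI name
        KeyConditionExpression=Key('product_id').eq(product_id),
        ScanIndexForward=False,  # highest Product Rating first
        Limit=10,
    )
    return response['Items']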

NEW QUESTION 53
A developer is working on a web application that uses Amazon DynamoDB as its data store. The application has two DynamoDB tables: one table that is named artists and one table that is named songs. The artists table has artistName as the partition key. The songs table has songName as the partition key and artistName as the sort key.
The table usage patterns include the retrieval of multiple songs and artists in a single database operation from the webpage. The developer needs a way to retrieve this information with minimal network traffic and optimal application performance.
Which solution will meet these requirements?

A. Perform a BatchGetItem operation that returns items from the two tables. Use the list of songName/artistName keys for the songs table and the list of artistName keys for the artists table.
B. Create a local secondary index (LSI) on the songs table that uses artistName as the partition key. Perform a query operation for each artistName on the songs table that filters by the list of songName. Perform a query operation for each artistName on the artists table.
C. Perform a BatchGetItem operation on the songs table that uses the songName/artistName keys. Perform a BatchGetItem operation on the artists table that uses artistName as the key.
D. Perform a Scan operation on each table that filters by the list of songName/artistName for the songs table and the list of artistName in the artists table.

Answer: A


Explanation:
BatchGetItem can return one or multiple items from one or more tables. For reference, check the link below:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
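
For illustration, with the two tables from the question and made-up key values:

import boto3

dynamodb = boto3.resource('dynamodb')

# One round trip fetches items from both tables.
response = dynamodb.batch_get_item(RequestItems={
    'songs': {
        'Keys': [
            {'songName': 'Song One', 'artistName': 'Artist A'},
            {'songName': 'Song Two', 'artistName': 'Artist B'},
        ]
    },
    'artists': {
        'Keys': [{'artistName': 'Artist A'}, {'artistName': 'Artist B'}]
    },
})
songs = response['Responses']['songs']
artists = response['Responses']['artists']
# Production code should also retry response['UnprocessedKeys'].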

NEW QUESTION 56
A developer uses AWS CloudFormation to deploy an Amazon API Gateway API and an AWS Step Functions state machine. The state machine must reference the API Gateway API after the CloudFormation template is deployed. The developer needs a solution that uses the state machine to reference the API Gateway endpoint.
Which solution will meet these requirements MOST cost-effectively?

A. Configure the CloudFormation template to reference the API endpoint in the DefinitionSubstitutions property for the AWS::StepFunctions::StateMachine resource.
B. Configure the CloudFormation template to store the API endpoint in an environment variable for the AWS::StepFunctions::StateMachine resource. Configure the state machine to reference the environment variable.
C. Configure the CloudFormation template to store the API endpoint in a standard AWS::SecretsManager::Secret resource. Configure the state machine to reference the resource.
D. Configure the CloudFormation template to store the API endpoint in a standard AWS::AppConfig::ConfigurationProfile resource. Configure the state machine to reference the resource.

Answer: A

Explanation:
The most cost-effective solution is to use the DefinitionSubstitutions property of the AWS::StepFunctions::StateMachine resource to inject the API endpoint as a variable in the state machine definition. The developer can use intrinsic functions such as Fn::Sub to construct the API endpoint from the AWS::ApiGateway::RestApi resource's ID and pass it to the state machine without creating any additional resources or environment variables. The other solutions involve creating and managing extra resources, such as Secrets Manager secrets or AppConfig configuration profiles, which incur additional costs and complexity.
References:
- AWS::StepFunctions::StateMachine - AWS CloudFormation
- Call API Gateway with Step Functions - AWS Step Functions
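
A minimal sketch of the idea, with logical resource names assumed for illustration; the template fragment is built as a Python dictionary (CloudFormation's JSON form) so it can be inspected locally:

import json

# Hypothetical fragment: ${apiEndpoint} inside the state machine definition is
# filled in at deploy time by DefinitionSubstitutions.
state_machine = {
    "Type": "AWS::StepFunctions::StateMachine",
    "Properties": {
        "DefinitionSubstitutions": {
            # Fn::Sub builds the invoke URL from the AWS::ApiGateway::RestApi logical ID.
            "apiEndpoint": {
                "Fn::Sub": "https://${MyRestApi}.execute-api.${AWS::Region}.amazonaws.com/prod"
            }
        },
        "RoleArn": {"Fn::GetAtt": ["StateMachineRole", "Arn"]},
    },
}
print(json.dumps(state_machine, indent=2))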

NEW QUESTION 57
A team of developers is using an AWS CodePipeline pipeline as a continuous integration and continuous delivery (CI/CD) mechanism for a web application. A developer has written unit tests to programmatically test the functionality of the application code. The unit tests produce a test report that shows the results of each individual check. The developer now wants to run these tests automatically during the CI/CD process.
Which solution will meet these requirements?

A. Write a Git pre-commit hook that runs the tests before every commit. Ensure that each developer who is working on the project has the pre-commit hook installed locally. Review the test report and resolve any issues before pushing changes to AWS CodeCommit.
B. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage after the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
C. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
D. Add a new stage to the pipeline. Use Jenkins as the provider. Configure CodePipeline to use Jenkins to run the unit tests. Write a Jenkinsfile that fails the stage if any test does not pass. Use the test report plugin for Jenkins to integrate the report with the Jenkins dashboard. View the test results in Jenkins. Resolve any issues.

Answer: C

Explanation:
The solution that will meet the requirements is to add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that
deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild
to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues. This way, the developer can run the unit tests
automatically during the CI/CD process and catch any bugs before deploying to the test environment. The developer can also use the test reports feature of
CodeBuild to view and analyze the test results in a graphical interface. The other options either involve running the tests manually, running them after deployment,
or using a different provider that requires additional configuration and integration.
Reference: Test reports for CodeBuild
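
A minimal sketch of such a buildspec, generated here with PyYAML so the fragment stays valid; the test command and report-group name are assumptions:

import yaml  # pip install pyyaml

# A non-zero exit code from the test command fails the build, which fails the stage.
buildspec = {
    "version": 0.2,
    "phases": {"build": {"commands": ["pytest --junitxml=reports/junit.xml"]}},
    "reports": {
        "unit-tests": {  # report group name is a placeholder
            "files": ["junit.xml"],
            "base-directory": "reports",
            "file-format": "JUNITXML",
        }
    },
}
print(yaml.safe_dump(buildspec, sort_keys=False))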

NEW QUESTION 60
A company's website runs on an Amazon EC2 instance and uses Auto Scaling to scale the environment during peak times. Website users across the world are experiencing high latency due to static content on the EC2 instance, even during non-peak hours.
Which combination of steps will resolve the latency issue? (Select TWO.)


A. Double the Auto Scaling group's maximum number of servers.
B. Host the application code on AWS Lambda.
C. Scale vertically by resizing the EC2 instances.
D. Create an Amazon CloudFront distribution to cache the static content.
E. Store the application's static content in Amazon S3.

Answer: DE

Explanation:
The combination of steps that will resolve the latency issue is to create an Amazon CloudFront distribution to cache the static content and to store the application's static content in Amazon S3. This way, the company can use CloudFront to deliver the static content from edge locations that are closer to the website users, reducing latency and improving performance. The company can also use S3 to store the static content reliably and cost-effectively and integrate it with CloudFront easily. The other options either do not address the latency issue or are not necessary or feasible for the given scenario.
Reference: Using Amazon S3 Origins and Custom Origins for Web Distributions
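
As a small illustrative step (bucket and file names are placeholders), static assets can be uploaded to S3 with a Cache-Control header that CloudFront and browsers will honor:

import boto3

s3 = boto3.client("s3")

# Hypothetical asset; CloudFront caches it at the edge according to Cache-Control.
s3.upload_file(
    "site.css",
    "my-static-assets-bucket",
    "css/site.css",
    ExtraArgs={"ContentType": "text/css", "CacheControl": "max-age=86400"},
)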

NEW QUESTION 65
A company is expanding the compatibility of its photo-sharing mobile app to hundreds of additional devices with unique screen dimensions and resolutions. Photos are stored in Amazon S3 in their original format and resolution. The company uses an Amazon CloudFront distribution to serve the photos. The app includes the dimension and resolution of the display as GET parameters with every request.
A developer needs to implement a solution that optimizes the photos that are served to each device to reduce load time and increase photo quality.
Which solution will meet these requirements MOST cost-effectively?

A. Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a dynamic CloudFront origin that automatically maps the request of each device to the corresponding photo variant.
B. Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a Lambda@Edge function to route requests to the corresponding photo variant by using request headers.
C. Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. Change the CloudFront TTL cache policy to the maximum value possible.
D. Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. In the same function, store a copy of the processed photos on Amazon S3 for subsequent requests.

Answer: D

Explanation:
This solution meets the requirements most cost-effectively because it optimizes the photos on demand and caches them for future requests. Lambda@Edge allows the developer to run Lambda functions at AWS locations closer to viewers, which can reduce latency and improve photo quality. The developer can create a Lambda@Edge function that uses the GET parameters from each request to optimize the photos with the required dimensions and resolutions and returns them as a response. The function can also store a copy of the processed photos on Amazon S3 for subsequent requests, which can reduce processing time and costs. Using S3 Batch Operations to create new variants of the photos will incur additional storage costs and may not cover all possible dimensions and resolutions. Creating a dynamic CloudFront origin or a Lambda@Edge function to route requests to corresponding photo variants will require maintaining a mapping of device types and photo variants, which can be complex and error-prone.
Reference: [Lambda@Edge Overview], [Resizing Images with Amazon CloudFront & Lambda@Edge]
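
A rough sketch of such a handler; the event fields follow the CloudFront event structure, while the parameter names and URI scheme are assumptions, and the resize-and-store logic is elided:

from urllib.parse import parse_qs

def handler(event, context):
    # CloudFront passes the request under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    params = parse_qs(request["querystring"])
    width = params.get("width", ["800"])[0]
    height = params.get("height", ["600"])[0]
    # Rewrite the URI to a size-specific key; a cache miss would trigger the
    # resize-and-store-to-S3 step (omitted here for brevity).
    request["uri"] = f"/resized/{width}x{height}{request['uri']}"
    return request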

NEW QUESTION 68
A developer is planning to migrate on-premises company data to Amazon S3. The data must be encrypted, and the encryption keys must support automatic annual rotation. The company must use AWS Key Management Service (AWS KMS) to encrypt the data.
Which type of keys should the developer use to meet these requirements?

A. Amazon S3 managed keys
B. Symmetric customer managed keys with key material that is generated by AWS
C. Asymmetric customer managed keys with key material that is generated by AWS
D. Symmetric customer managed keys with imported key material

Answer: B

Explanation:
The developer should use symmetric customer managed keys with key material that is generated by AWS. This way, the developer can use AWS Key Management Service (AWS KMS) to encrypt the data with a symmetric key that the developer manages. The developer can also enable automatic annual rotation for the key, which creates new key material for the key every year. The other options involve Amazon S3 managed keys, which do not give the developer control over rotation through AWS KMS; asymmetric keys, which S3 does not support for server-side encryption; or imported key material, which does not support automatic rotation.
Reference: Using AWS KMS keys to encrypt S3 objects
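
A brief sketch of the two boto3 calls involved:

import boto3

kms = boto3.client("kms")

# Symmetric customer managed key; AWS generates the key material by default.
key = kms.create_key(Description="S3 data key with annual rotation")
key_id = key["KeyMetadata"]["KeyId"]

# Automatic rotation creates new key material for the key every year.
kms.enable_key_rotation(KeyId=key_id)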

NEW QUESTION 72
A company is using Amazon RDS as the backend database for its application. After a recent marketing campaign, a surge of read requests to the database increased the latency of data retrieval from the database.
The company has decided to implement a caching layer in front of the database. The cached content must be encrypted and must be highly available.
Which solution will meet these requirements?

A. Amazon CloudFront
B. Amazon ElastiCache for Memcached
C. Amazon ElastiCache for Redis in cluster mode
D. Amazon DynamoDB Accelerator (DAX)

Answer: C

Explanation:
This solution meets the requirements because it provides a caching layer that can store and retrieve encrypted data from multiple nodes. Amazon ElastiCache for Redis supports encryption at rest and in transit and can scale horizontally to increase the cache capacity and availability. Amazon ElastiCache for Memcached does not support encryption, Amazon CloudFront is a content delivery network that is not suitable for caching database queries, and Amazon DynamoDB Accelerator (DAX) is a caching service that only works with DynamoDB tables.
Reference: [Amazon ElastiCache for Redis Features], [Choosing a Cluster Engine]
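
A sketch of creating such a cluster with boto3; the identifiers and sizing are placeholders:

import boto3

elasticache = boto3.client("elasticache")

# Redis with cluster mode enabled, encrypted at rest and in transit.
elasticache.create_replication_group(
    ReplicationGroupId="app-cache",
    ReplicationGroupDescription="Encrypted read cache for RDS",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumNodeGroups=2,                 # shards (cluster mode)
    ReplicasPerNodeGroup=1,          # replicas for high availability
    AutomaticFailoverEnabled=True,
    AtRestEncryptionEnabled=True,
    TransitEncryptionEnabled=True,
)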

NEW QUESTION 75
A developer is creating an Amazon DynamoDB table by using the AWS CLI. The DynamoDB table must use server-side encryption with an AWS owned encryption key.
How should the developer create the DynamoDB table to meet these requirements?

A. Create an AWS Key Management Service (AWS KMS) customer managed key. Provide the key's Amazon Resource Name (ARN) in the KMSMasterKeyId parameter during creation of the DynamoDB table.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key. Provide the key's Amazon Resource Name (ARN) in the KMSMasterKeyId parameter during creation of the DynamoDB table.
C. Create an AWS owned key. Provide the key's Amazon Resource Name (ARN) in the KMSMasterKeyId parameter during creation of the DynamoDB table.
D. Create the DynamoDB table with the default encryption options.

Answer: D

Explanation:
When creating an Amazon DynamoDB table using the AWS CLI, server-side encryption with an AWS owned encryption key is enabled by default. Therefore, the developer does not need to create an AWS KMS key or specify the KMSMasterKeyId parameter. Options A and B are incorrect because they suggest creating customer managed and AWS managed KMS keys, which are not needed in this scenario. Option C is also incorrect because AWS owned keys cannot be referenced by ARN; they are used for server-side encryption automatically by default.
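
The equivalent call in boto3, with table and attribute names assumed, relies on the default encryption simply by omitting any SSESpecification:

import boto3

dynamodb = boto3.client("dynamodb")

# No SSESpecification: the table is encrypted with an AWS owned key by default.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)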

NEW QUESTION 77
An application is using Amazon Cognito user pools and identity pools for secure access. A developer wants to integrate the user-specific file upload and download
features in the application with Amazon S3. The developer must ensure that the files are saved and retrieved in a secure manner and that users can access only
their own files. The file sizes range from 3 KB to 300 MB.
Which option will meet these requirements with the HIGHEST level of security?

A. Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
C. Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3.

Answer: D

Explanation:
With Amazon Cognito identity pools, each authenticated user receives temporary IAM credentials, and the IAM policy can use the ${cognito-identity.amazonaws.com:sub} policy variable to restrict each user to the S3 folder (prefix) that matches the user's identity ID. This enforces isolation at the IAM level, works for files of any size that S3 supports, and avoids routing transfers through intermediate services.
https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html
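
A sketch of such a policy statement (the bucket name is a placeholder), shown as a Python dictionary:

import json

# Each user may read and write only under a prefix named after their identity ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::user-files-bucket/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))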

NEW QUESTION 81

A developer is creating a serverless application that uses an AWS Lambda function. The developer will use AWS CloudFormation to deploy the application. The application will write logs to Amazon CloudWatch Logs. The developer has created a log group in a CloudFormation template for the application to use. The developer needs to modify the CloudFormation template to make the name of the log group available to the application at runtime.
Which solution will meet this requirement?

A. Use the AWS::Include transform in CloudFormation to provide the log group's name to the application.
B. Pass the log group's name to the application in the user data section of the CloudFormation template.
C. Use the CloudFormation template's Mappings section to specify the log group's name for the application.
D. Pass the log group's Amazon Resource Name (ARN) as an environment variable to the Lambda function.

Answer: D


Explanation:
The Lambda function resource in the template can receive the log group's name through an environment variable, for example (properties of the AWS::Lambda::Function resource):

FunctionName: MyLambdaFunction
Code:
  S3Bucket: your-lambda-code-bucket
  S3Key: lambda-code.zip
Runtime: nodejs14.x # Specify the desired runtime for your Lambda function
Environment:
  Variables:
    LOG_GROUP_NAME: !Ref MyLogGroup

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html

NEW QUESTION 83
A developer is creating an AWS Lambda function. The Lambda function needs an external library to connect to a third-party solution. The external library is a collection of files with a total size of 100 MB. The developer needs to make the external library available to the Lambda execution environment and reduce the Lambda package size.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a Lambda layer to store the external library. Configure the Lambda function to use the layer.
B. Create an Amazon S3 bucket. Upload the external library into the S3 bucket. Mount the S3 bucket folder in the Lambda function. Import the library by using the proper folder in the mount point.
C. Load the external library to the Lambda function's /tmp directory during deployment of the Lambda package. Import the library from the /tmp directory.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Upload the external library to the EFS volume. Mount the EFS volume in the Lambda function. Import the library by using the proper folder in the mount point.

Answer: A

Explanation:
Create a Lambda layer to store the external library. Configure the Lambda function to use the layer. This will allow the developer to make the external library
available to the Lambda execution environment without having to include it in the Lambda package, which will reduce the Lambda package space. Using a
Lambda layer is a simple and straightforward solution that requires minimal operational overhead. https://docs.aws.amazon.com/lambda/latest/dg/configuration-
layers.html
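
A brief sketch with boto3; the bucket, key, layer, and function names are placeholders:

import boto3

lam = boto3.client("lambda")

# Publish the 100 MB library as a layer from a zip already uploaded to S3.
layer = lam.publish_layer_version(
    LayerName="third-party-lib",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "third-party-lib.zip"},
    CompatibleRuntimes=["python3.12"],
)

# Attach the layer to the function; its files become available under /opt.
lam.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)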

NEW QUESTION 85
An ecommerce application is running behind an Application Load Balancer. A developer observes some unexpected load on the application during non-peak
hours. The developer wants to analyze patterns for the client IP addresses that use the application. Which HTTP header should the developer use for this
analysis?

A. The X-Forwarded-Proto header
B. The X-Forwarded-Host header
C. The X-Forwarded-For header
D. The X-Forwarded-Port header

Answer: C

Explanation:
The HTTP header that the developer should use for this analysis is the X-Forwarded-For header. This header contains the IP address of the client that made the request to the Application Load Balancer. The developer can use this header to analyze patterns for the client IP addresses that use the application. The other headers contain information about the protocol, host, or port of the request, which is not relevant for this analysis.
Reference: How Application Load Balancer works with your applications
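
A tiny illustrative snippet; the header carries a comma-separated chain, with the original client first:

def client_ip(headers: dict) -> str:
    # Example value: "203.0.113.7, 10.0.0.12" -- the leftmost entry is the client.
    forwarded = headers.get("X-Forwarded-For", "")
    return forwarded.split(",")[0].strip()

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.12"}))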

NEW QUESTION 89


A company hosts its application on AWS. The application runs on an Amazon Elastic Container Service (Amazon ECS) cluster that uses AWS Fargate. The cluster runs behind an Application Load Balancer. The application stores data in an Amazon Aurora database. A developer encrypts and manages database credentials inside the application.
The company wants to use a more secure credential storage method and implement periodic credential rotation.
Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the secret credentials to Amazon RDS parameter groups. Encrypt the parameter by using an AWS Key Management Service (AWS KMS) key. Turn on secret rotation. Use IAM policies and roles to grant AWS KMS permissions to access Amazon RDS.
B. Migrate the credentials to AWS Systems Manager Parameter Store. Encrypt the parameter by using an AWS Key Management Service (AWS KMS) key. Turn on secret rotation. Use IAM policies and roles to grant Amazon ECS Fargate permissions to access AWS Secrets Manager.
C. Migrate the credentials to ECS Fargate environment variables. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Turn on secret rotation. Use IAM policies and roles to grant Amazon ECS Fargate permissions to access AWS Secrets Manager.
D. Migrate the credentials to AWS Secrets Manager. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Turn on secret rotation. Use IAM policies and roles to grant Amazon ECS Fargate permissions to access AWS Secrets Manager.

Answer: D

Explanation:
AWS Secrets Manager is a service that helps you store, distribute, and rotate secrets securely. You can use Secrets Manager to migrate your credentials from your application code to secure and encrypted storage. You can also enable automatic rotation of your secrets by using AWS Lambda functions or custom logic. You can use IAM policies and roles to grant your Amazon ECS Fargate tasks permissions to access your secrets from Secrets Manager. This solution minimizes the operational overhead of managing your credentials and enhances the security of your application.
References:
- AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely | AWS News Blog
- Why You Should Audit and Rotate Your AWS Credentials Periodically - Cloud Academy
- Top 5 AWS root account best practices - TheServerSide
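
A short sketch of how the application code could read the credentials at startup; the secret name and field names are assumptions:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch and parse the rotated database credentials.
response = secrets.get_secret_value(SecretId="prod/app/aurora")
creds = json.loads(response["SecretString"])
print(creds["username"], creds["host"], creds["port"])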

NEW QUESTION 92
A company has deployed an application on AWS Elastic Beanstalk. The company has configured the Auto Scaling group that is associated with the Elastic
Beanstalk environment to have five Amazon EC2 instances. If the capacity is fewer than four EC2 instances during the deployment, application performance
degrades. The company is using the all-at-once deployment policy.
What is the MOST cost-effective way to solve the deployment issue?

A. Change the Auto Scaling group to six desired instances.
B. Change the deployment policy to traffic splitting. Specify an evaluation time of 1 hour.
C. Change the deployment policy to rolling with additional batch. Specify a batch size of 1.
D. Change the deployment policy to rolling. Specify a batch size of 2.

Answer: C

Explanation:
This solution will solve the deployment issue by deploying the new version of the application to one new EC2 instance at a time while keeping the old version running on the existing instances. This way, there will always be at least four instances serving traffic during the deployment, and no downtime or performance degradation will occur. Option A is not optimal because it will increase the cost of running the Elastic Beanstalk environment without solving the deployment issue. Option B is not optimal because it will split the traffic between two versions of the application, which may cause inconsistency and confusion for the customers. Option D is not optimal because it will deploy the new version of the application to two existing instances at a time, which may reduce the capacity below four instances during the deployment.
References: AWS Elastic Beanstalk Deployment Policies
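
A sketch of setting this policy with boto3 (the environment name is a placeholder; the same settings can also live in .ebextensions):

import boto3

eb = boto3.client("elasticbeanstalk")

# Rolling with additional batch: a fresh batch of 1 instance is added first,
# so capacity never drops below the original five instances.
eb.update_environment(
    EnvironmentName="my-env",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy",
         "Value": "RollingWithAdditionalBatch"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType",
         "Value": "Fixed"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSize",
         "Value": "1"},
    ],
)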

NEW QUESTION 93
A developer maintains applications that store several secrets in AWS Secrets Manager. The applications use secrets that have changed over time. The developer
needs to identify required secrets that are still in use. The developer does not want to cause any application downtime.
What should the developer do to meet these requirements?

A. Configure an AWS CloudTrail log file delivery to an Amazon S3 bucket. Create an Amazon CloudWatch alarm for the GetSecretValue Secrets Manager API operation requests.
B. Create a secretsmanager-secret-unused AWS Config managed rule. Create an Amazon EventBridge rule to initiate notification when the AWS Config managed rule is met.
C. Deactivate the application secrets and monitor the application's error logs temporarily.
D. Configure AWS X-Ray for the application. Create a sampling rule to match the GetSecretValue Secrets Manager API operation requests.

Answer: B

Explanation:
This solution will meet the requirements by using AWS Config to monitor and evaluate whether Secrets Manager secrets are unused. The secretsmanager-secret-unused managed rule is a predefined rule that checks whether secrets have been accessed within a specified number of days. The Amazon EventBridge rule will trigger a notification when the AWS Config managed rule is met, alerting the developer about unused secrets that can be removed without causing application downtime. Option A is not optimal because it will use AWS CloudTrail log file delivery to an Amazon S3 bucket, which will incur additional costs and complexity for storing and analyzing log files that may not contain relevant information about secret usage. Option C is not optimal because it will deactivate the application secrets and monitor the application error logs temporarily, which will cause application downtime and potential data loss. Option D is not optimal because it will use AWS X-Ray to trace secret usage, which will introduce additional overhead and latency for instrumenting and sampling requests that may not be related to secret usage.
References: [AWS Config Managed Rules], [Amazon EventBridge]
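
A sketch of enabling the managed rule with boto3; the 90-day threshold and parameter name are assumptions based on the rule's documented defaults:

import boto3

config = boto3.client("config")

# Flags secrets that have not been accessed within the specified number of days.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "secretsmanager-secret-unused",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "SECRETSMANAGER_SECRET_UNUSED",
        },
        "InputParameters": '{"unusedForDays": "90"}',
    }
)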


NEW QUESTION 95

A company has an Amazon S3 bucket that contains sensitive data. The data must be encrypted in transit and at rest. The company
encrypts the data in the S3 bucket by using an AWS Key Management Service (AWS KMS) key. A developer needs to grant several other AWS accounts the
permission to use the S3 GetObject operation to retrieve the data from the S3 bucket.
How can the developer enforce that all requests to retrieve the data provide encryption in transit?

A. Define a resource-based policy on the S3 bucket to deny access when a request meets the condition “aws:SecureTransport”: “false”.
B. Define a resource-based policy on the S3 bucket to allow access when a request meets the condition “aws:SecureTransport”: “false”.
C. Define a role-based policy on the other accounts' roles to deny access when a request meets the condition of “aws:SecureTransport”: “false”.
D. Define a resource-based policy on the KMS key to deny access when a request meets the condition of “aws:SecureTransport”: “false”.

Answer: A

Explanation:
Amazon S3 supports resource-based policies, which are JSON documents that specify the permissions for accessing S3 resources. A resource-based policy can
be used to enforce encryption in transit by denying access to requests that do not use HTTPS. The condition key aws:SecureTransport can be used to check if the
request was sent using SSL. If the value of this key is false, the request is denied; otherwise, the request is allowed. Reference: How do I use an S3 bucket policy
to require requests to use Secure Socket Layer (SSL)?
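
A sketch of such a statement (the bucket name is a placeholder), shown as a Python dictionary:

import json

# Deny any request that does not arrive over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-data-bucket",
                "arn:aws:s3:::sensitive-data-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
print(json.dumps(policy, indent=2))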

NEW QUESTION 97
A company is preparing to migrate an application to the company's first AWS environment. Before this migration, a developer is creating a proof-of-concept application to validate a model for building and deploying container-based applications on AWS.
Which combination of steps should the developer take to deploy the containerized proof-of-concept application with the LEAST operational effort? (Select TWO.)


Explanation:
To deploy a containerized application on AWS with the least operational effort, the developer should package the application into a container image by using the
Docker CLI and upload the image to Amazon ECR, which is a fully managed container registry service. Then, the developer should deploy the application to
Amazon ECS on AWS Fargate, which is a serverless compute engine for containers that eliminates the need to provision and manage servers or clusters. Amazon
ECS will automatically scale, load balance, and monitor the application.
References:
- How to Deploy Docker Containers | AWS
- Deploy a Web App Using AWS App Runner
- How to Deploy Containerized Apps on AWS Using ECR and Docker

NEW QUESTION 99
A company has a social media application that receives large amounts of traffic. User posts and interactions are continuously updated in an Amazon RDS database. The data changes frequently, and the data types can be complex. The application must serve read requests with minimal latency.
The application's current architecture struggles to deliver these rapid data updates efficiently. The company needs a solution to improve the application's performance.
Which solution will meet these requirements?


Explanation:
Creating an Amazon ElastiCache for Redis cluster is the best solution for improving the application's performance. Redis is an in-memory data store that can serve read requests with minimal latency and handle complex data types, such as lists, sets, hashes, and streams. By using a write-through caching strategy, the application can ensure that the data in Redis is always consistent with the data in RDS. The application can read the data from Redis instead of RDS, reducing the load on the database and improving the response time. The other solutions are either not feasible or not effective. Amazon DynamoDB Accelerator (DAX) is a caching service that works only with DynamoDB, not RDS. Amazon S3 Transfer Acceleration is a feature that speeds up data transfers between S3 and clients across the internet, not between RDS and the application. Amazon CloudFront is a content delivery network that can cache static content, such as images, videos, or HTML files, but not dynamic content, such as user posts and interactions.
References:
- Amazon ElastiCache for Redis
- Caching Strategies and Best Practices - Amazon ElastiCache for Redis
- Using Amazon ElastiCache for Redis with Amazon RDS
- Amazon DynamoDB Accelerator (DAX)
- Amazon S3 Transfer Acceleration
- Amazon CloudFront
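
A rough sketch of the write-through pattern; it assumes the redis-py client, placeholder endpoints, and a hypothetical save_to_rds helper, not the application's actual code:

import json
import redis

cache = redis.Redis(host="my-cache.example.com", port=6379, ssl=True)

def save_to_rds(post_id: str, post: dict) -> None:
    ...  # placeholder for the RDS write

def save_post(post_id: str, post: dict) -> None:
    save_to_rds(post_id, post)                      # write to the source of truth first
    cache.set(f"post:{post_id}", json.dumps(post))  # then update the cache (write-through)

def get_post(post_id: str):
    cached = cache.get(f"post:{post_id}")
    return json.loads(cached) if cached else None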

NEW QUESTION 104


A company has an Amazon S3 bucket containing premier content that it intends to make available to only paid subscribers of its website. The S3 bucket currently has default permissions of all objects being private to prevent inadvertent exposure of the premier content to non-paying website visitors.
How can the company limit the ability to download a premier content file in the S3 bucket to paid subscribers only?

A. Apply a bucket policy that allows anonymous users to download the content from the S3 bucket.
B. Generate a pre-signed object URL for the premier content file when a paid subscriber requests a download.
C. Add a bucket policy that requires multi-factor authentication for requests to access the S3 bucket objects.
D. Enable server-side encryption on the S3 bucket for data protection against the non-paying website visitors.

Answer: B

Explanation:
This solution will limit the ability to download a premier content file in the S3 bucket to paid subscribers only because it uses a pre-signed object URL that grants temporary access to an S3 object for a specified duration. The pre-signed object URL can be generated by the company's website when a paid subscriber requests a download and can be verified by Amazon S3 using the signature in the URL. Option A is not optimal because it will allow anyone to download the content from the S3 bucket without verifying their subscription status. Option C is not optimal because it will require additional steps and costs to configure multi-factor authentication for accessing the S3 bucket objects, which may not be feasible or user-friendly for paid subscribers. Option D is not optimal because it will not prevent non-paying website visitors from accessing the S3 bucket objects, but only encrypt them at rest.
References: Share an Object with Others, [Using Amazon S3 Pre-Signed URLs]
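
A brief sketch with boto3; the bucket and key are placeholders:

import boto3

s3 = boto3.client("s3")

# URL is valid for 5 minutes; anyone holding it can download the single object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "premier-content-bucket", "Key": "videos/episode-01.mp4"},
    ExpiresIn=300,
)
print(url)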

NEW QUESTION 109

A developer is modifying an existing AWS Lambda function. While checking the code, the developer notices hardcoded parameter values for an Amazon RDS for SQL Server user name, password, database, host, and port. There are also hardcoded parameter values for an Amazon DynamoDB table, an Amazon S3 bucket, and an Amazon Simple Notification Service (Amazon SNS) topic.
The developer wants to securely store the parameter values outside the code in an encrypted format and wants to turn on rotation for the credentials. The developer also wants to be able to reuse the parameter values from other applications and to update the parameter values without modifying code.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic.
B. Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create SecureString parameters in AWS Systems Manager Parameter Store for the DynamoDB table, S3 bucket, and SNS topic.
C. Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic. Create a Lambda function and set the logic for the credentials rotation task. Schedule the credentials rotation task in Amazon EventBridge.
D. Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Store the DynamoDB table, S3 bucket, and SNS topic in Amazon S3. Create a Lambda function and set the logic for the credentials rotation. Invoke the Lambda function on a schedule.

Answer: B


Explanation:
This solution will meet the requirements by using AWS Secrets Manager and AWS Systems Manager Parameter Store to securely store the parameter values outside the code in an encrypted format. AWS Secrets Manager is a service that helps protect secrets such as database credentials by encrypting them with AWS Key Management Service (AWS KMS) and enabling automatic rotation of secrets. The developer can create an RDS database secret in AWS Secrets Manager and set the user name, password, database, host, and port for accessing the RDS database. The developer can also turn on secret rotation, which will change the database credentials periodically according to a specified schedule or event. AWS Systems Manager Parameter Store is a service that provides secure and scalable storage for configuration data and secrets. The developer can create SecureString parameters in AWS Systems Manager Parameter Store for the DynamoDB table, S3 bucket, and SNS topic, which will encrypt them with AWS KMS. The developer can also reuse the parameter values from other applications and update them without modifying code. Option A is not optimal because it will create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic, which may not be reusable or updatable without modifying code. Option C is not optimal because it will create RDS database parameters in AWS Systems Manager Parameter Store, which does not support automatic rotation of secrets. Option D is not optimal because it will store the DynamoDB table, S3 bucket, and SNS topic in Amazon S3, which may introduce additional costs and complexity for accessing configuration data.
References: AWS Secrets Manager, [AWS Systems Manager Parameter Store]
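
A sketch of the Parameter Store side with boto3; the parameter names and values are placeholders:

import boto3

ssm = boto3.client("ssm")

# SecureString values are encrypted with AWS KMS at rest.
ssm.put_parameter(
    Name="/app/prod/dynamodb-table",
    Value="orders-table",
    Type="SecureString",
    Overwrite=True,
)

# Any application can read the same parameter back, decrypted on the fly.
value = ssm.get_parameter(Name="/app/prod/dynamodb-table", WithDecryption=True)
print(value["Parameter"]["Value"])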

NEW QUESTION 110


A developer is building a new application on AWS. The application uses an AWS Lambda function that retrieves information from an Amazon DynamoDB table.
The developer hard coded the DynamoDB table name into the Lambda function code. The table name might change over time. The developer does not want to
modify the Lambda code if the table name changes.
Which solution will meet these requirements MOST efficiently?

A. Create a Lambda environment variable to store the table name. Use the standard method for the programming language to retrieve the variable.
B. Store the table name in a file. Store the file in the /tmp folder. Use the SDK for the programming language to retrieve the table name.
C. Create a file to store the table name. Zip the file and upload the file to a Lambda layer. Use the SDK for the programming language to retrieve the table name.
D. Create a global variable that is outside the handler in the Lambda function to store the table name.

Answer: A

Explanation:
The solution that will meet the requirements most efficiently is to create a Lambda environment variable to store the table name and use the standard method for the programming language to retrieve the variable. This way, the developer can avoid hardcoding the table name in the Lambda function code and easily change the table name by updating the environment variable. The other options either involve storing the table name in a file, which is less efficient and secure than using an environment variable, or creating a global variable, which is not recommended as it can cause concurrency issues.
Reference: Using AWS Lambda environment variables
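
In Python, for example, the standard method is the os module; the variable and key names here are assumptions:

import os

import boto3

# TABLE_NAME is set in the function's environment variables configuration.
TABLE_NAME = os.environ["TABLE_NAME"]
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    return table.get_item(Key={"id": event["id"]}).get("Item")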

NEW QUESTION 115


A developer is using an AWS Lambda function to generate avatars for profile pictures that are uploaded to an Amazon S3 bucket. The Lambda function is
automatically invoked for profile pictures that are saved under the /original/ S3 prefix. The developer notices that some pictures cause the Lambda function to time
out. The developer wants to implement a fallback mechanism by using another Lambda function that resizes the profile picture.
Which solution will meet these requirements with the LEAST development effort?

A. Set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Set the SQS queue as a destination with an on-failure condition for the avatar generator Lambda function. Configure the image resize Lambda function to poll from the SQS queue.
C. Create an AWS Step Functions state machine that invokes the avatar generator Lambda function and uses the image resize Lambda function as a fallback. Create an Amazon EventBridge rule that matches events from the S3 bucket to invoke the state machine.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Set the SNS topic as a destination with an on-failure condition for the avatar generator Lambda function. Subscribe the image resize Lambda function to the SNS topic.


Answer: A

Explanation:
The solution that will meet the requirements with the least development effort is to set the image resize Lambda function as a destination of the avatar generator
Lambda function for the events that fail processing. This way, the fallback mechanism is automatically triggered by the Lambda service without requiring any
additional components or configuration. The other options involve creating and managing additional resources such as queues, topics, state machines, or rules,
which would increase the complexity and cost of the solution.
Reference: Using AWS Lambda destinations
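
A sketch of configuring the on-failure destination with boto3; the function names and ARN are placeholders:

import boto3

lam = boto3.client("lambda")

# Failed async invocations of the avatar generator are routed to the resize function.
lam.put_function_event_invoke_config(
    FunctionName="avatar-generator",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:lambda:us-east-1:123456789012:function:image-resize"
        }
    },
)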

NEW QUESTION 117


An application is processing clickstream data using Amazon Kinesis. The clickstream data feed into Kinesis experiences periodic spikes. The PutRecords API call occasionally fails, and the logs show that the failed call returns the response shown below:

Which techniques will help mitigate this exception? (Choose two.)

A. Implement retries with exponential backoff.


B. Use a PutRecord API instead of PutRecords.
C. Reduce the frequency and/or size of the requests.
D. Use Amazon SNS instead of Kinesis.
E. Reduce the number of KCL consumers.

Answer: AC

Explanation:
The response from the API call indicates that the ProvisionedThroughputExceededException exception has occurred. This exception means that the rate of incoming requests exceeds the throughput limit for one or more shards in a stream. To mitigate this exception, the developer can use one or more of the following techniques:
- Implement retries with exponential backoff. This will introduce randomness in the retry intervals and avoid overwhelming the shards with retries.
- Reduce the frequency and/or size of the requests. This will reduce the load on the shards and avoid throttling errors.
- Increase the number of shards in the stream. This will increase the throughput capacity of the stream and accommodate higher request rates.
References:
- [ProvisionedThroughputExceededException - Amazon Kinesis Data Streams Service API Reference]
- [Best Practices for Handling Kinesis Data Streams Errors]
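
A sketch of retries with exponential backoff and jitter around PutRecords; the stream name and record payload are placeholders:

import random
import time

import boto3

kinesis = boto3.client("kinesis")

def put_with_backoff(records, max_attempts=5):
    for attempt in range(max_attempts):
        response = kinesis.put_records(StreamName="clickstream", Records=records)
        if response["FailedRecordCount"] == 0:
            return
        # Keep only the records that were throttled and retry them.
        records = [r for r, res in zip(records, response["Records"]) if "ErrorCode" in res]
        # Exponential backoff with jitter: 0.1s, 0.2s, 0.4s, ... plus randomness.
        time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    raise RuntimeError("records still failing after retries")

put_with_backoff([{"Data": b'{"event": "click"}', "PartitionKey": "user-1"}])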

NEW QUESTION 118


A company is migrating its PostgreSQL database into the AWS Cloud. The company wants to use a database that will secure and regularly rotate database
credentials. The company wants a solution that does not require additional programming overhead.
Which solution will meet these requirements?


Explanation:
Using Amazon Aurora PostgreSQL with AWS Secrets Manager meets the requirements because it provides a PostgreSQL-compatible database that can secure and regularly rotate database credentials without requiring additional programming overhead. Amazon Aurora PostgreSQL is a relational database service that is compatible with PostgreSQL and offers high performance, availability, and scalability. AWS Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and IT resources. You can store database credentials in AWS Secrets Manager and use them to access your Aurora PostgreSQL database. You can also enable automatic rotation of your secrets according to a schedule or an event. AWS Secrets Manager handles the complexity of rotating secrets for you, such as generating new passwords and updating your database with the new credentials. Using Amazon DynamoDB for the database will not meet the requirements because it is a NoSQL database that is not compatible with PostgreSQL. Using AWS Systems Manager Parameter Store for storing and rotating database credentials will require additional programming overhead to integrate with your database.
Reference: [What Is Amazon Aurora?], [What Is AWS Secrets Manager?]
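
A sketch of turning on rotation with boto3; the secret ID and rotation Lambda ARN are placeholders:

import boto3

secrets = boto3.client("secretsmanager")

# Rotate the Aurora PostgreSQL credentials every 30 days.
secrets.rotate_secret(
    SecretId="prod/aurora-postgres",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-postgres",
    RotationRules={"AutomaticallyAfterDays": 30},
)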

NEW QUESTION 121


A company runs an application on AWS. The application stores data in an Amazon DynamoDB table. Some queries are taking a long time to run. These slow queries involve an attribute that is not the table's partition key or sort key.
The amount of data that the application stores in the DynamoDB table is expected to increase significantly. A developer must increase the performance of the queries.
Which solution will meet these requirements?

A. Increase the page size for each request by setting the Limit parameter to be higher than the default value. Configure the application to retry any request that exceeds the provisioned throughput.
B. Create a global secondary index (GSI). Set the query attribute to be the partition key of the index.
C. Perform a parallel scan operation by issuing individual scan requests. In the parameters, specify the segment for the scan requests and the total number of segments for the parallel scan.
D. Turn on read capacity auto scaling for the DynamoDB table. Increase the maximum read capacity units (RCUs).

Answer: B

Explanation:
Creating a global secondary index (GSI) is the best solution to improve the performance of the queries that involve an attribute that is not the table's partition key or sort key. A GSI allows you to define an alternate key for your table and query the data using that key. This way, you can avoid scanning the entire table and reduce the latency and cost of your queries. You should also follow the best practices for designing and using GSIs in DynamoDB.
References:
- Working with Global Secondary Indexes - Amazon DynamoDB
- DynamoDB Performance & Latency - Everything You Need To Know
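
A sketch of adding such an index to an existing table with boto3; the table, attribute, and index names are placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# Back-fill a GSI keyed on the frequently queried attribute.
dynamodb.update_table(
    TableName="app-data",
    AttributeDefinitions=[{"AttributeName": "category", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "category-index",
                "KeySchema": [{"AttributeName": "category", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
)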

NEW QUESTION 126


A developer has code that is stored in an Amazon S3 bucket. The code must be deployed as an AWS Lambda function across multiple accounts in the same AWS Region as the S3 bucket. An AWS CloudFormation template that runs for each account will deploy the Lambda function.
What is the MOST secure way to allow CloudFormation to access the Lambda code in the S3 bucket?


Explanation:
The most secure approach is to grant the CloudFormation service role in each account the s3:GetObject permission and to add a bucket policy on the code bucket that allows those accounts to call GetObject. GetObject is the least privilege needed to deploy the Lambda code. This is more secure than granting ListBucket permission, which is not required for deploying Lambda code, or using a service-based link, which is not supported for Lambda functions.
Reference: AWS CloudFormation Service Role, Using AWS Lambda with Amazon S3
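
A sketch of such a bucket policy (the bucket name and account IDs are placeholders), shown as a Python dictionary:

import json

# Allow named accounts to fetch the packaged Lambda code, and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam::111122223333:root",
                                  "arn:aws:iam::444455556666:root"]},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::lambda-code-bucket/*",
        }
    ],
}
print(json.dumps(policy, indent=2))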

NEW QUESTION 130


A company is building a microservices application that consists of many AWS Lambda functions. The development team wants to use AWS Serverless Application Model (AWS SAM) templates to automatically test the Lambda functions. The development team plans to test a small percentage of traffic that is directed to new updates before the team commits to a full deployment of the application.
Which combination of steps will meet these requirements in the MOST operationally efficient way? (Select TWO.)

A. Use AWS SAM CLI commands in AWS CodeDeploy to invoke the Lambda functions to test the deployment.
B. Declare the EventInvokeConfig on the Lambda functions in the AWS SAM templates with OnSuccess and OnFailure configurations.
C. Enable gradual deployments through AWS SAM templates.
D. Set the deployment preference type to Canary10Percent30Minutes. Use hooks to test the deployment.
E. Set the deployment preference type to Linear10PercentEvery10Minutes. Use hooks to test the deployment.

Answer: CD

Explanation:
This solution will meet the requirements by using AWS Serverless Application Model (AWS SAM) templates and gradual deployments to automatically test the Lambda functions. AWS SAM templates are configuration files that define serverless applications and resources such as Lambda functions. Gradual deployments are a feature of AWS SAM that enable deploying new versions of Lambda functions incrementally, shifting traffic gradually, and performing validation tests during deployment. The developer can enable gradual deployments by adding a DeploymentPreference property to each Lambda function resource in the template and setting the deployment preference type to Canary10Percent30Minutes, which shifts 10 percent of traffic to the new version of the Lambda function for 30 minutes before shifting 100 percent of traffic. The developer can also use hooks to test the deployment; hooks are custom Lambda functions that run before or after traffic shifting and perform validation tests or rollback actions.
References: [AWS Serverless Application Model (AWS SAM)], [Gradual Code Deployment]
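
A sketch of the relevant SAM fragment (function and hook names are placeholders), built as a Python dictionary and printed as YAML:

import yaml  # pip install pyyaml

deployment_preference = {
    "MyFunction": {
        "Type": "AWS::Serverless::Function",
        "Properties": {
            # ... handler, runtime, and code location omitted ...
            "AutoPublishAlias": "live",  # required for traffic shifting
            "DeploymentPreference": {
                "Type": "Canary10Percent30Minutes",
                "Hooks": {
                    "PreTraffic": {"Ref": "PreTrafficCheckFunction"},
                    "PostTraffic": {"Ref": "PostTrafficCheckFunction"},
                },
            },
        },
    }
}
print(yaml.safe_dump(deployment_preference, sort_keys=False))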

NEW QUESTION 134


A developer is troubleshooting an application that uses Amazon DynamoDB in the us-west-2 Region. The application is deployed to an Amazon EC2 instance. The application requires read-only permissions to a table that is named Cars. The EC2 instance has an attached IAM role that contains the following IAM policy.


When the application tries to read from the Cars table, an Access Denied error occurs. How can the developer resolve this error?

A. Modify the IAM policy resource to be "arn:aws:dynamodb:us-west-2:account-id:table/*".
B. Modify the IAM policy to include the dynamodb:* action.
C. Create a trust policy that specifies the EC2 service principal. Associate the role with the policy.
D. Create a trust relationship between the role and dynamodb.amazonaws.com.

Answer: C

Explanation:
An IAM role that an EC2 instance assumes must have a trust policy that allows the EC2 service principal (ec2.amazonaws.com) to assume the role; without it, the instance never receives credentials and DynamoDB requests fail with Access Denied.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/access-control-overview.html#access-control-resource-ownership
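
A sketch of the trust policy, shown as a Python dictionary:

import json

# Lets EC2 assume the role so the instance receives temporary credentials.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))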

NEW QUESTION 135


A development team maintains a web application by using a single AWS CloudFormation template. The template defines web servers and an Amazon RDS
database. The team uses the Cloud Formation template to deploy the Cloud Formation stack to different environments.
During a recent application deployment, a developer caused the primary development database to be dropped and recreated. The result of this incident was a loss
of data. The team needs to avoid accidental database deletion in the future.
Which solutions will meet these requirements? (Choose two.)

A. Add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource.
B. Update the CloudFormation stack policy to prevent updates to the database.
C. Modify the database to use a Multi-AZ deployment.
D. Create a CloudFormation stack set for the web application and database deployments.
E. Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.

Answer: AB

Explanation:
AWS CloudFormation is a service that enables developers to model and provision AWS resources using templates. The developer can add a CloudFormation
Deletion Policy attribute with the Retain value to the database resource. This will prevent the database from being deleted when the stack is deleted or updated.
The developer can also update the CloudFormation stack policy to prevent updates to the database. This will prevent accidental changes to the database
configuration or properties.
References:
- [What Is AWS CloudFormation? - AWS CloudFormation]
- [DeletionPolicy Attribute - AWS CloudFormation]
- [Protecting Resources During Stack Updates - AWS CloudFormation]
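
A sketch of applying such a stack policy with boto3; the stack name and the database's logical resource ID are placeholders:

import json

import boto3

cfn = boto3.client("cloudformation")

# Deny every update action that targets the database resource.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "Update:*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "LogicalResourceId/MyDatabase",
        },
    ]
}
cfn.set_stack_policy(StackName="web-app-stack", StackPolicyBody=json.dumps(stack_policy))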

NEW QUESTION 139


A developer is creating a service that uses an Amazon S3 bucket for image uploads. The service will use an AWS Lambda function to create a thumbnail of each image. Each time an image is uploaded, the service needs to send an email notification and create the thumbnail. The developer needs to configure the image processing and email notifications setup.
Which solution will meet these requirements?

A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure S3 event notifications with a destination of the SNS topic. Subscribe the Lambda function to the SNS topic. Create an email notification subscription to the SNS topic.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure S3 event notifications with a destination of the SNS topic. Subscribe the Lambda function to the SNS topic. Create an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the SQS queue to the SNS topic. Create an email notification subscription to the SQS queue.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure S3 event notifications with a destination of the SQS queue. Subscribe the Lambda function to the SQS queue. Create an email notification subscription to the SQS queue.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send S3 event notifications to Amazon EventBridge. Create an EventBridge rule that runs the Lambda function when images are uploaded to the S3 bucket. Create an EventBridge rule that sends notifications to the SQS queue. Create an email notification subscription to the SQS queue.

Answer: A

Explanation:
This solution will allow the developer to receive notifications for each image uploaded to the S3 bucket, and also create a thumbnail using the Lambda function.
The SNS topic will serve as a trigger for both the Lambda function and the email notification subscription. When an image is uploaded, S3 will send a notification to
the SNS topic, which will trigger the Lambda function to create the thumbnail and also send an email notification to the specified email address.
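
A sketch of wiring this fan-out with boto3; the ARNs, bucket name, and email address are placeholders, and the topic policy that lets S3 publish is omitted:

import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

topic_arn = sns.create_topic(Name="image-uploads")["TopicArn"]

# Fan out: one S3 event drives both the thumbnail Lambda and an email.
sns.subscribe(TopicArn=topic_arn, Protocol="lambda",
              Endpoint="arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail")
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

s3.put_bucket_notification_configuration(
    Bucket="image-upload-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)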

NEW QUESTION 141


......
