GCP - Data - Engineering - Certification
Q1
Your company built a TensorFlow neural-network model with a
large number of neurons and layers. The model fits well for the
training data. However, when tested against new data, it performs
poorly. What method can you employ to address this?
● A. Threading
● B. Serialization
● C. Dropout Methods
● D. Dimensionality Reduction
Ans - C
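For reference, a minimal tf.keras sketch of the dropout technique from answer C (layer sizes and input shape are illustrative, not from the question): dropout randomly zeroes a fraction of activations during training, which regularizes a large network that overfits the training data.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),   # drop 50% of activations during training only
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")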
Q2
You are building a model to make clothing recommendations. You
know a user's fashion preference is likely to change over time, so
you build a data pipeline to stream new data back to the model as
it becomes available. How should you use this data to train the
model?
● A. Continuously retrain the model on just the new data.
● B. Continuously retrain the model on a combination of
existing data and the new data.
● C. Train on the existing data while using the new data as
your test set.
● D. Train on the new data while using the existing data as
your test set.
Ans - B
Q3
You designed a database for patient records as a pilot project to
cover a few hundred patients in three clinics. Your design used a
single database table to represent all patients and their visits, and
you used self-joins to generate reports. The server resource
utilization was at 50%. Since then, the scope of the project has
expanded. The database must now store 100 times more patient
records. You can no longer run the reports, because they either
take too long or they encounter errors with insufficient compute
resources. How should you adjust the database design?
ANS - C
Q4
You create an important report for your large team in Google Data
Studio 360. The report uses Google BigQuery as its data source.
You notice that visualizations are not showing data that is less
than 1 hour old. What should you do?
ANS - A
Q5
An external customer provides you with a daily dump of data from
their database. The data flows into Google Cloud Storage GCS as
comma-separated values (CSV) files. You want to analyze this
data in Google BigQuery, but the data could have rows that are
formatted incorrectly or corrupted. How should you build this
pipeline?
Ans - D
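The answer options are not reproduced above, but a common pattern for this scenario is a Cloud Dataflow pipeline that routes malformed or corrupted CSV rows to a separate "dead letter" output instead of failing the BigQuery load. A minimal Apache Beam (Python) sketch, with the bucket path and column layout purely illustrative:

import csv
import apache_beam as beam
from apache_beam import pvalue

def parse_row(line):
    # Emit parsed rows on the main output; send anything malformed to "bad_rows".
    try:
        fields = next(csv.reader([line]))
        if len(fields) != 5:
            raise ValueError("unexpected column count")
        yield {"id": fields[0], "amount": float(fields[4])}
    except Exception:
        yield pvalue.TaggedOutput("bad_rows", line)

with beam.Pipeline() as p:
    parsed = (p
              | beam.io.ReadFromText("gs://my-bucket/daily_dump/*.csv")
              | beam.FlatMap(parse_row).with_outputs("bad_rows", main="good_rows"))
    # parsed.good_rows -> load into the BigQuery analysis table
    # parsed.bad_rows  -> write to a dead-letter location for inspection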
Q6
Your weather app queries a database every 15 minutes to get the
current temperature. The frontend is powered by Google App
Engine and serves millions of users. How should you design the
frontend to respond to a database failure?
ANS - B
Q7
You are creating a model to predict housing prices. Due to budget
constraints, you must run it on a single resource-constrained
virtual machine. Which learning algorithm should you use?
● A. Linear regression
● B. Logistic classification
● C. Recurrent neural network
● D. Feedforward neural network
ANS - A
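A quick sketch of why answer A fits a resource-constrained VM: ordinary least-squares linear regression trains in seconds on small hardware, unlike the neural-network options (the feature values below are made up).

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1200, 2], [1500, 3], [2000, 4], [900, 1]])   # sq ft, bedrooms
y = np.array([250000, 310000, 400000, 180000])              # sale prices
model = LinearRegression().fit(X, y)
print(model.predict([[1600, 3]]))                            # predicted price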
Q-8
You are building a new real-time data warehouse for your company
and will use Google BigQuery streaming inserts. There is no
guarantee that data will be sent only once, but you do have a
unique ID for each row of data and an event timestamp. You want
to ensure that duplicates are not included while interactively
querying data. Which query type should you use?
ANS - D
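The option text is not shown above, but the standard way to exclude duplicates at query time, given a unique row ID and an event timestamp, is a ROW_NUMBER window function partitioned by that ID. A sketch using the BigQuery Python client (project, dataset, and column names are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT * EXCEPT(rn)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_ts DESC) AS rn
  FROM `my_project.my_dataset.events`
)
WHERE rn = 1          -- keep only the latest row per unique_id
"""
for row in client.query(sql).result():
    print(dict(row))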
Q-9
Your company is using WILDCARD tables to query data across
multiple tables with similar names. The SQL statement is currently
failing with the following error:
# Syntax error: Expected end of statement but got "-" at [4:11]
SELECT age
FROM
bigquery-public-data.noaa_gsod.gsod
WHERE
age != 99
AND _TABLE_SUFFIX = '1929'
ORDER BY
age DESC
Which table name will make the SQL statement work correctly?
● A. `bigquery-public-data.noaa_gsod.gsod`
● B. bigquery-public-data.noaa_gsod.gsod*
● C. `bigquery-public-data.noaa_gsod.gsod`*
● D. `bigquery-public-data.noaa_gsod.gsod*`
Ans: B
Q10
ANS - BDF
Q11
You are designing a basket abandonment system for an
ecommerce company. The system will send a message to a user
based on these rules:
✑ No interaction by the user on the site for 1 hour
✑ Has added more than $30 worth of products to the basket
✑ Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if
a message should be sent. How should you design the pipeline?
ANS - C
https://cloud.google.com/dataprep/docs/html/SESSION-Function_57344754
https://cloud.google.com/dataflow/docs/concepts/streaming-pipelines#session-windows
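The links above point at session windows; here is a minimal Apache Beam (Python) sketch of that approach. The topic name, event fields, and thresholds are illustrative: a session window with a 60-minute gap closes after an hour of user inactivity, at which point the basket rules can be evaluated.

import json
import apache_beam as beam
from apache_beam.transforms import window

def parse(msg_bytes):
    e = json.loads(msg_bytes)
    return (e["user_id"], e)                      # key events by user

def decide(kv):
    user, events = kv
    events = list(events)
    total = sum(e.get("basket_value", 0) for e in events)
    purchased = any(e.get("type") == "purchase" for e in events)
    return (user, total > 30 and not purchased)   # send a message if True

with beam.Pipeline() as p:
    (p
     | beam.io.ReadFromPubSub(topic="projects/my-project/topics/site-events")
     | beam.Map(parse)
     | beam.WindowInto(window.Sessions(60 * 60))  # session ends after 1 hour idle
     | beam.GroupByKey()
     | beam.Map(decide))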
Q12
Your company handles data processing for a number of different
clients. Each client prefers to use their own suite of analytics
tools, with some allowing direct query access via Google
BigQuery. You need to secure the data so that clients cannot see
each other's data. You want to ensure appropriate access to the
data. Which three steps should you take? (Choose three.)
ANS - BDF
Q13
You want to process payment transactions in a point-of-sale
application that will run on Google Cloud Platform. Your user base
could grow exponentially, but you do not want to manage
infrastructure scaling. Which Google database service should you
use?
● A. Cloud SQL
● B. BigQuery
● C. Cloud Bigtable
● D. Cloud Datastore
Our Ans - A
Q14
You want to use a database of information about tissue samples
to classify future tissue samples as either normal or mutated. You
are evaluating an unsupervised anomaly detection method for
classifying the tissue samples. Which two characteristics support
this method? (Choose two.)
ANS - AD
Q15
You need to store and analyze social media postings in Google
BigQuery at a rate of 10,000 messages per minute in near
real-time. You initially designed the application to use streaming inserts
for individual postings. Your application also performs data
aggregations right after the streaming inserts. You discover that
the queries after streaming inserts do not exhibit strong
consistency, and reports from the queries might miss in-flight
data. How can you adjust your application design?
ANS - D
Q16
Your startup has never implemented a formal security policy.
Currently, everyone in the company has access to the datasets
stored in Google BigQuery. Teams have freedom to use the
service as they see fit, and they have not documented their use
cases. You have been asked to secure the data warehouse. You
need to discover what everyone is doing. What should you do
first?
ANS - A
Q17
Your company is migrating their 30-node Apache Hadoop cluster
to the cloud. They want to re-use Hadoop jobs they have already
created and minimize the management of the cluster as much as
possible. They also want to be able to persist data beyond the life
of the cluster. What should you do?
ANS - D
Q18
Business owners at your company have given you a database of
bank transactions. Each row contains the user ID, transaction
type, transaction location, and transaction amount. They ask you
to investigate what type of machine learning can be applied to the
data. Which three machine learning applications can you use?
(Choose three.)
ANS - BCF
Q19
Your company's on-premises Apache Hadoop servers are
approaching end-of-life, and IT has decided to migrate the cluster
to Google Cloud Dataproc. A like-for-like migration of the cluster
would require 50 TB of Google Persistent Disk per node. The CIO
is concerned about the cost of using that much block storage. You
want to minimize the storage cost of the migration. What should
you do?
ANS - A
Q 20
You work for a car manufacturer and have set up a data pipeline
using Google Cloud Pub/Sub to capture anomalous sensor
events. You are using a push subscription in Cloud Pub/Sub that
calls a custom HTTPS endpoint that you have created to take
action on these anomalous events as they occur. Your custom
HTTPS endpoint keeps getting an inordinate amount of duplicate
messages. What is the most likely cause of these duplicate
messages?
ANS - D
Q21
Your company uses a proprietary system to send inventory data
every 6 hours to a data ingestion service in the cloud. Transmitted
data includes a payload of several fields and the timestamp of the
transmission. If there are any concerns about a transmission, the
system re-transmits the data. How should you deduplicate the
data most efficiently?
ANS - A
Q22
Your company has hired a new data scientist who wants to
perform complicated analyses across very large datasets stored
in Google Cloud Storage and in a Cassandra cluster on Google
Compute Engine. The scientist primarily wants to create labelled
data sets for machine learning projects, along with some
visualization tasks. She reports that her laptop is not powerful
enough to perform her tasks and it is slowing her down. You want
to help her perform her tasks.
What should you do?
Ans - D
Q 23
You are deploying 10,000 new Internet of Things devices to
collect temperature data in your warehouses globally. You need to
process, store and analyze these very large datasets in real time.
What should you do?
Correct Answer: B
Q 24
You have spent a few days loading data from comma-separated
values (CSV) files into the Google BigQuery table
CLICK_STREAM. The column DT stores the epoch time of click
events. For convenience, you chose a simple schema where
every field is treated as the STRING type. Now, you want to
compute web session durations of users who visit your site, and
you want to change its data type to the TIMESTAMP. You want to
minimize the migration effort without making future queries
computationally expensive. What should you do?
Correct Answer: E
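The options are not reproduced here, but the conversion behind this question is casting the epoch-time string to a TIMESTAMP once and materializing the result, so future queries stay cheap. A sketch (project, dataset, and destination table names are hypothetical; DT and CLICK_STREAM come from the question; the cast assumes DT holds epoch seconds):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(
    destination="my_project.my_dataset.CLICK_STREAM_TS")   # hypothetical new table
sql = """
SELECT * EXCEPT(DT),
       TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS          -- epoch string -> TIMESTAMP
FROM `my_project.my_dataset.CLICK_STREAM`
"""
client.query(sql, job_config=job_config).result()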
Q25
You want to use Google Stackdriver Logging to monitor Google
BigQuery usage. You need an instant notification to be sent to
your monitoring tool when new data is appended to a certain table
using an insert job, but you do not want to receive notifications for
other tables. What should you do?
Correct : D
Q26
You are working on a sensitive project involving private user data.
You have set up a project on Google Cloud Platform to house
your work internally. An external consultant is going to assist with
coding a complex transformation in a Google Cloud Dataflow
pipeline for your project. How should you maintain users' privacy?
Correct Answer: B
Correct Answer: B
Q28
Your company is performing data preprocessing for a learning
algorithm in Google Cloud Dataflow. Numerous data logs are
being generated during this step, and the team wants to
analyze them. Due to the dynamic nature of the campaign, the
data is growing exponentially every hour. The data scientists have
written the following code to read the data for new key features
in the logs.
BigQueryIO.Read
    .named("ReadLogData")
    .from("clouddataflow-readonly:samples.log_data")
Correct Answer: D
Q29
Your company is streaming real-time sensor data from their
factory floor into Bigtable and they have noticed extremely poor
performance. How should the row key be redesigned to improve
Bigtable performance on queries that populate real-time
dashboards?
Correct Answer: D
Q30
Your company's customer and order databases are often under
heavy load. This makes performing analytics against them difficult
without harming operations. The databases are in a MySQL
cluster, with nightly backups taken using mysqldump. You want to
perform analytics with minimal impact on operations. What should
you do?
Correct Answer: B
Q31
You have Google Cloud Dataflow streaming pipeline running with
a Google Cloud Pub/Sub subscription as the source. You need to
make an update to the code that will make the new Cloud
Dataflow pipeline incompatible with the current version. You do
not want to lose any data when making this update. What should
you do?
Correct Answer: A
Correct Answer: A
Q33
Your software uses a simple JSON format for all messages.
These messages are published to Google Cloud Pub/Sub, then
processed with Google Cloud Dataflow to create a real-time
dashboard for the CFO. During testing, you notice that some
messages are missing in the dashboard. You check the logs, and
all messages are being published to Cloud Pub/Sub successfully.
What should you do next?
Correct Answer: B
Q34
Flowlogistic Case Study -
Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They
help businesses throughout the world manage their resources
and transport them to their final destination. The company has
grown rapidly, expanding their offerings to include rail, truck,
aircraft, and oceanic shipping.
Company Background -
The company started as a regional trucking company, and then
expanded into other logistics markets. Because they have not
updated their infrastructure, managing and tracking orders and
shipments has become a bottleneck. To improve operations,
Flowlogistic developed proprietary technology for tracking
shipments in real time at the parcel level. However, they are
unable to deploy it because their technology stack, based on
Apache Kafka, cannot support the processing volume. In addition,
Flowlogistic wants to further analyze their orders and shipments
to determine how best to deploy their resources.
Solution Concept -
Flowlogistic wants to implement two concepts using the cloud:
✑ Use their proprietary technology in a real-time
inventory-tracking system that indicates the location of their loads
✑ Perform analytics on all their orders and shipment logs, which
contain both structured and unstructured data, to determine how
best to deploy resources, and which markets to expand into. They
also want to use predictive analytics to learn earlier when a
shipment will be delayed.
Technical Requirements -
✑ Handle both streaming and batch data
✑ Migrate existing Hadoop workloads
✑ Ensure architecture is scalable and elastic to meet the
changing demands of the company.
✑ Use managed services whenever possible
✑ Encrypt data in flight and at rest
✑ Connect a VPN between the production data center and cloud
environment
CEO Statement -
We have grown so quickly that our inability to upgrade our
infrastructure is really hampering further growth and efficiency.
We are efficient at moving shipments around the world, but we
are inefficient at moving data around. We need to organize our
information so we can more easily understand where our
customers are and what they are shipping.
CTO Statement -
IT has never been a priority for us, so as our data has grown, we
have not invested enough in our technology. I have a good staff to
manage IT, but they are so busy managing our infrastructure that I
cannot get them to do the things that really matter, such as
organizing our data, building the analytics, and figuring out how to
implement the CFO's tracking technology.
CFO Statement -
Part of our competitive advantage is that we penalize ourselves
for late shipments and deliveries. Knowing where our shipments
are at all times has a direct correlation to our bottom line and
profitability. Additionally, I don't want to commit capital to building
out a server environment.
Flowlogistic wants to use Google BigQuery as their primary
analysis system, but they still have Apache Hadoop and Spark
workloads that they cannot move to
BigQuery. Flowlogistic does not know how to store the data that is
common to both workloads. What should they do?
● A. Store the common data in BigQuery as partitioned tables.
● B. Store the common data in BigQuery and expose
authorized views.
● C. Store the common data encoded as Avro in Google Cloud
Storage.
● D. Store the common data in the HDFS storage for a Google
Cloud Dataproc cluster.
Correct Answer: B (Our Ans: C or D - need to check)
https://cloud.google.com/dataproc/docs/tutorials/bigquery-connect
or-spark-example
Update: Answer D is wrong because it is not cost-effective.
Storing the data in Cloud Storage is the cost-efficient option: you
pay only while the Dataproc job is running, and you can shut the
cluster down when it is not in use without losing data, because
the data is stored in Cloud Storage.
Q35
Flowlogistic Case Study -
(Same case study as in Q34.)
Flowlogistic's management has determined that the current
Apache Kafka servers cannot handle the data volume for their
real-time inventory tracking system.
You need to build a new system on Google Cloud Platform (GCP)
that will feed the proprietary tracking software. The system must
be able to ingest data from a variety of global sources, process
and query in real-time, and store the data reliably. Which
combination of GCP products should you choose?
● A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
● B. Cloud Pub/Sub, Cloud Dataflow, and Local SSD
● C. Cloud Pub/Sub, Cloud SQL, and Cloud Storage
● D. Cloud Load Balancing, Cloud Dataflow, and Cloud
Storage
Correct Answer: C (Our Ans: A - verify)
Q36
Flowlogistic Case Study -
(Same case study as in Q34.)
Flowlogistic's CEO wants to gain rapid insight into their customer
base so his sales team can be better informed in the field. This
team is not very technical, so they've purchased a visualization
tool to simplify the creation of BigQuery reports. However, they've
been overwhelmed by all the data in the table, and are spending a
lot of money on queries trying to find the data they need. You
want to solve their problem in the most cost-effective way. What
should you do?
● A. Export the data into a Google Sheet for visualization.
● B. Create an additional table with only the necessary
columns.
● C. Create a view on the table to present to the visualization
tool.
● D. Create identity and access management (IAM) roles on
the appropriate columns, so only they appear in a query.
Correct Answer: C
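A minimal sketch of answer C (project, dataset, and column names are hypothetical): a view that selects only the columns the sales team needs, so queries issued through the visualization tool scan less data and cost less.

from google.cloud import bigquery

client = bigquery.Client()
client.query("""
CREATE OR REPLACE VIEW `flowlogistic.analytics.customer_summary` AS
SELECT customer_id, customer_name, region, lifetime_value
FROM `flowlogistic.analytics.customer_master`
""").result()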
Q37
Flowlogistic Case Study -
(Same case study as in Q34.)
Flowlogistic is rolling out their real-time inventory tracking system.
The tracking devices will all send package-tracking messages,
which will now go to a single
Google Cloud Pub/Sub topic instead of the Apache Kafka cluster.
A subscriber application will then process the messages for
real-time reporting and store them in
Google BigQuery for historical analysis. You want to ensure the
package data can be analyzed over time.
Which approach should you take?
● A. Attach the timestamp on each message in the Cloud
Pub/Sub subscriber application as they are received.
● B. Attach the timestamp and Package ID on the outbound
message from each publisher device as they are sent to
Cloud Pub/Sub.
● C. Use the NOW() function in BigQuery to record the event's
time.
● D. Use the automatically generated timestamp from Cloud
Pub/Sub to order the data.
Correct Answer: B
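A minimal sketch of answer B (project, topic, and field names are hypothetical): each tracking device attaches its own event timestamp and package ID to the message it publishes, so the data can later be ordered and analyzed by the time the event actually happened rather than by arrival time.

import json
import time
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "package-tracking")

payload = json.dumps({"location": "52.52,13.40"}).encode("utf-8")
publisher.publish(
    topic_path,
    payload,
    package_id="PKG-0001",                   # attribute set by the device
    event_timestamp=str(int(time.time())),   # attribute set by the device
).result()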
Q38
MJTelco Case Study -
Company Overview -
MJTelco is a startup that plans to build networks in rapidly
growing, underserved markets around the world. The company
has patents for innovative optical communications hardware.
Based on these patents, they can create many reliable,
high-speed backbone links with inexpensive hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses
technologies originally developed to overcome communications
challenges in space. Fundamental to their operation, they need to
create a distributed data infrastructure that drives real-time
analysis and incorporates machine learning to continuously
optimize their topologies. Because their hardware is inexpensive,
they plan to overdeploy the network allowing them to account for
the impact of dynamic regional politics on location availability and
cost.
Their management and operations teams are situated all around
the globe, creating a many-to-many relationship between data
consumers and providers in their system. After careful
consideration, they decided public cloud is the perfect
environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in
its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data
flows generated when they ramp to more than 50,000
installations.
✑ Refine their machine-learning cycles to verify and improve the
dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments
(development/test, staging, and production) to meet the needs of
running experiments, deploying new features, and serving
production customers.
Business Requirements -
✑ Scale up their production environment with minimal cost,
instantiating resources when and where needed in an
unpredictable, distributed telecom user community.
✑ Ensure security of their proprietary data to protect their
leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from
distributed research workers
✑ Maintain isolated environments that support rapid iteration of
their machine-learning models without affecting their customers.
Technical Requirements -
✑ Ensure secure and efficient transport and storage of telemetry
data
✑ Rapidly scale instances to support between 10,000 and
100,000 data providers with multiple flows each.
✑ Allow analysis and presentation against data tables tracking up
to 2 years of data storing approximately 100m records/day
✑ Support rapid iteration of monitoring infrastructure focused on
awareness of data pipeline problems both in telemetry flows and
in production learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic
machine learning. Our inexpensive hardware is organized to be
highly reliable, which gives us cost advantages. We need to
quickly stabilize our large distributed data pipelines to meet our
reliability and capacity commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need
resources that scale and keep our data secure. We also need
environments in which our data scientists can carefully study and
quickly adapt our models. Because we rely on automation to
process our data, we also need our development and test
environments to work as we iterate.
CFO Statement -
The project is too large for us to maintain the hardware and
software required for the data and analysis. Also, we cannot
afford to staff an operations team to monitor so many data feeds,
so we will rely on automation and infrastructure. Google Cloud's
machine learning will allow our quantitative researchers to work
on our high-value problems instead of problems with our data
pipelines.
MJTelco's Google Cloud Dataflow pipeline is now ready to start
receiving data from the 50,000 installations. You want to allow
Cloud Dataflow to scale its compute power up as required. Which
Cloud Dataflow pipeline configuration setting should you update?
● A. The zone
● B. The number of workers
● C. The disk size per worker
● D. The maximum number of workers
Correct Answer: A / D
Q39
MJTelco Case Study -
(Same case study as in Q38.)
You need to compose visualizations for operations teams with the
following requirements:
✑ The report must include telemetry data from all 50,000
installations for the most recent 6 weeks (sampling once every
minute).
✑ The report must not be more than 3 hours delayed from live
data.
✑ The actionable report should only show suboptimal links.
✑ Most suboptimal links should be sorted to the top.
✑ Suboptimal links can be grouped and filtered by regional
geography.
✑ User response time to load the report must be <5 seconds.
Which approach meets the requirements?
Correct Answer: C or D
Q40
MJTelco Case Study -
(Same case study as in Q38.)
You create a new report for your large team in Google Data
Studio 360. The report uses Google BigQuery as its data source.
It is company policy to ensure employees can view only the data
associated with their region, so you create and populate a table
for each region. You need to enforce the regional access policy to
the data.
Which two actions should you take? (Choose two.)
Correct Answer: BD / BE
Q41
MJTelco Case Study -
(Same case study as in Q38.)
MJTelco needs you to create a schema in Google Bigtable that
will allow for the historical analysis of the last 2 years of records.
Each record that comes in is sent every 15 minutes, and contains
a unique identifier of the device and a data record. The most
common query is for all the data for a given device for a given
day.
Which schema should you use?
● A. Rowkey: date#device_id Column data: data_point
● B. Rowkey: date Column data: device_id, data_point
● C. Rowkey: device_id Column data: date, data_point
● D. Rowkey: data_point Column data: device_id, date
● E. Rowkey: date#data_point Column data: device_id
Correct Answer: A
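With the date#device_id row key from answer A, the most common query ("all data for a given device for a given day") becomes a single prefix scan. A sketch using the Bigtable Python client (instance, table, and key values are hypothetical):

from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
table = client.instance("telemetry-instance").table("device_records")

row_set = RowSet()
row_set.add_row_range_with_prefix("20240115#device-0042")   # date#device_id
for row in table.read_rows(row_set=row_set):
    print(row.row_key, row.cells)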
Q43
You work for a large fast food restaurant chain with over 400,000
employees. You store employee information in Google BigQuery
in a Users table consisting of a FirstName field and a LastName
field. A member of IT is building an application and asks you to
modify the schema and data in BigQuery so the application can
query a FullName field consisting of the value of the FirstName
field concatenated with a space, followed by the value of the
LastName field for each employee. How can you make that data
available while minimizing cost?
follows:
● B. Manually configure the index in your index config as
follows:
● C. Set the following in your entity options:
exclude_from_indexes = 'actors, tags'
● D. Set the following in your entity options:
exclude_from_indexes = 'date_published'
Correct Answer: A
Q45
You work for a manufacturing plant that batches application log
files together into a single log file once a day at 2:00 AM. You
have written a Google Cloud Dataflow job to process that log file.
You need to make sure the log file is processed once per day as
inexpensively as possible. What should you do?
Correct Answer: C
Q46
You work for an economic consulting firm that helps companies
identify economic trends as they happen. As part of your analysis,
you use Google BigQuery to correlate customer data with the
average prices of the 100 most common goods sold, including
bread, gasoline, milk, and others. The average prices of these
goods are updated every 30 minutes. You want to make sure this
data stays up to date so you can combine it with other data in
BigQuery as cheaply as possible. What should you do?
Correct Answer: B
Q47
You are designing the database schema for a machine
learning-based food ordering service that will predict what users
want to eat. Here is some of the information you need to store:
✑ The user profile: What the user likes and doesn't like to eat
✑ The user account information: Name, address, preferred meal
times
✑ The order information: When orders are made, from where, to
whom
The database will be used to store all the transactional data of the
product. You want to optimize the data schema. Which Google
Cloud Platform product should you use?
● A. BigQuery
● B. Cloud SQL
● C. Cloud Bigtable
● D. Cloud Datastore
Correct Answer: B
Q48
Your company is loading comma-separated values (CSV) files
into Google BigQuery. The data is fully imported successfully;
however, the imported data does not match the source file
byte-for-byte. What is the most likely cause of this problem?
Correct Answer: C
Q49
Your company produces 20,000 files every hour. Each data file is
formatted as a comma separated values (CSV) file that is less
than 4 KB. All files must be ingested on Google Cloud Platform
before they can be processed. Your company site has a 200 ms
latency to Google Cloud, and your Internet connection bandwidth
is limited to 50 Mbps. You currently deploy a secure FTP (SFTP)
server on a virtual machine in Google Compute Engine as the
data ingestion point. A local SFTP client runs on a dedicated
machine to transmit the CSV files as is. The goal is to make
reports with data from the previous day available to the
executives by 10:00 a.m. each day. This design is barely able to
keep up with the current volume, even though the bandwidth
utilization is rather low. You are told that due to seasonality, your
company expects the number of files to double for the next three
months. Which two actions should you take? (Choose two.)
● A. Redis
● B. HBase
● C. MySQL
● D. MongoDB
● E. Cassandra
● F. HDFS with Hive
Correct Answer: C
Q53
You are using Google BigQuery as your data warehouse. Your
users report that the following simple query is running very slowly,
no matter when they run the query:
You check the query plan for the query and see the following
output in the Read section of Stage:1:
What is the most likely cause of the delay for this query?
Correct Answer: B
Q54
Your globally distributed auction application allows users to bid on
items. Occasionally, users place identical bids at nearly identical
times, and different application servers process those bids. Each
bid event contains the item, amount, user, and timestamp. You
want to collate those bid events into a single location in real time
to determine which user bid first. What should you do?
Correct Answer: B
Q55
Your organization has been collecting and analyzing data in
Google BigQuery for 6 months. The majority of the data analyzed
is placed in a time-partitioned table named events_partitioned. To
reduce the cost of queries, your organization created a view
called events, which queries only the last 14 days of data. The
view is described in legacy SQL. Next month, existing
applications will be connecting to BigQuery to read the events
data via an ODBC connection. You need to ensure the
applications can connect. Which two actions should you take?
(Choose two.)
Correct Answer: CD
Q56
You have enabled the free integration between Firebase Analytics
and Google BigQuery. Firebase now automatically creates a new
table daily in BigQuery in the format app_events_YYYYMMDD.
You want to query all of the tables for the past 30 days in legacy
SQL. What should you do?
Correct Answer: A
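The option text is not reproduced above, but the legacy SQL mechanism for querying a rolling set of daily tables like app_events_YYYYMMDD is TABLE_DATE_RANGE. A sketch via the Python client (the dataset name is hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
sql = """
SELECT COUNT(*) AS event_count
FROM TABLE_DATE_RANGE([firebase_analytics.app_events_],
                      DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                      CURRENT_TIMESTAMP())
"""
print(list(client.query(sql, job_config=job_config).result()))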
Q57
Your company is currently setting up data pipelines for their
campaign. For all the Google Cloud Pub/Sub streaming data, one
of the important business requirements is to be able to
periodically identify the inputs and their timings during their
campaign. Engineers have decided to use windowing and
transformation in Google Cloud Dataflow for this purpose.
However, when testing this feature, they find that the Cloud
Dataflow job fails for all the streaming inserts. What is the most
likely cause of this problem?
Correct Answer: D
Q58
You architect a system to analyze seismic data. Your extract,
transform, and load (ETL) process runs as a series of
MapReduce jobs on an Apache Hadoop cluster. The ETL process
takes days to process a data set because some steps are
computationally expensive. Then you discover that a sensor
calibration step has been omitted. How should you change your
ETL process to carry out sensor calibration systematically in the
future?
Correct Answer: B
Q59
An online retailer has built their current application on Google App
Engine. A new initiative at the company mandates that they
extend their application to allow their customers to transact
directly via the application. They need to manage their shopping
transactions and analyze combined data from multiple datasets
using a business intelligence (BI) tool. They want to use only a
single database for this purpose. Which Google Cloud database
should they choose?
● A. BigQuery
● B. Cloud SQL
● C. Cloud BigTable
● D. Cloud Datastore
Correct Answer: B
Q60
You launched a new gaming app almost three years ago. You
have been uploading log files from the previous day to a separate
Google BigQuery table with the table name format
LOGS_yyyymmdd. You have been using table wildcard functions
to generate daily and monthly reports for all time ranges.
Recently, you discovered that some queries that cover long date
ranges are exceeding the limit of 1,000 tables and failing. How
can you resolve this issue?
Correct Answer: B
Q61
Your analytics team wants to build a simple statistical model to
determine which customers are most likely to work with your
company again, based on a few different metrics. They want to
run the model on Apache Spark, using data housed in Google
Cloud Storage, and you have recommended using Google Cloud
Dataproc to execute this job. Testing has shown that this workload
can run in approximately 30 minutes on a 15-node cluster,
outputting the results into Google BigQuery. The plan is to run this
workload weekly. How should you optimize the cluster for cost?
Correct Answer: B
Q62
Your company receives both batch- and stream-based event data.
You want to process the data using Google Cloud Dataflow over a
predictable time period. However, you realize that in some
instances data can arrive late or out of order. How should you
design your Cloud Dataflow pipeline to handle data that is late or
out of order?
Correct Answer: C
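The options are not shown above, but the usual Dataflow answer to late or out-of-order data is event-time windowing with a watermark trigger and an allowed-lateness period. A minimal Apache Beam (Python) sketch with illustrative values:

import apache_beam as beam
from apache_beam.transforms import window, trigger

with beam.Pipeline() as p:
    (p
     | beam.Create([("sensor-1", 42)])
     | beam.Map(lambda kv: window.TimestampedValue(kv, 1700000000))  # event-time stamp
     | beam.WindowInto(
         window.FixedWindows(5 * 60),                                # 5-minute windows
         trigger=trigger.AfterWatermark(late=trigger.AfterProcessingTime(60)),
         accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
         allowed_lateness=30 * 60)                                   # accept 30 min of lateness
     | beam.GroupByKey()
     | beam.Map(print))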
Q63
You have some data, which is shown in the graphic below. The
two dimensions are X and Y, and the shade of each dot
represents what class it is. You want to classify this data
accurately using a linear algorithm. To do this you need to add a
synthetic feature. What should the value of that feature be?
● A. X^2+Y^2
● B. X^2
● C. Y^2
● D. cos(X)
Correct Answer: A
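A quick numeric illustration of answer A: when the two classes form concentric rings, neither X nor Y alone separates them, but X^2 + Y^2 (the squared distance from the origin) does, so a linear model trained with that synthetic feature classifies the data almost perfectly. Sketch with synthetic data standing in for the graphic:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)])  # inner/outer ring
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = np.r_[np.zeros(100), np.ones(100)]

X_aug = np.c_[X, X[:, 0] ** 2 + X[:, 1] ** 2]               # add the synthetic feature
print(LogisticRegression().fit(X_aug, y).score(X_aug, y))   # close to 1.0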
Q64
You are integrating one of your internal IT applications and
Google BigQuery, so users can query BigQuery from the
application's interface. You do not want individual users to
authenticate to BigQuery and you do not want to give them
access to the dataset. You need to securely access BigQuery
from your IT application. What should you do?
Correct Answer: C
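The option text is not shown above, but the common pattern for this requirement is to have the application authenticate to BigQuery as a service account, so individual users never hold dataset access themselves. A sketch (the key-file path and project are placeholders):

from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "/secrets/bq-reporting-sa.json",
    scopes=["https://www.googleapis.com/auth/bigquery"])
client = bigquery.Client(project="my-project", credentials=credentials)
print(list(client.query("SELECT 1 AS ok").result()))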
Q65
You are building a data pipeline on Google Cloud. You need to
prepare data using a casual method for a machine-learning
process. You want to support a logistic regression model. You
also need to monitor and adjust for null values, which must
remain real-valued and cannot be removed. What should you do?
Correct Answer: B
Q66
You set up a streaming data insert into a Redis cluster via a Kafka
cluster. Both clusters are running on Compute Engine instances.
You need to encrypt data at rest with encryption keys that you can
create, rotate, and destroy as needed. What should you do?
Correct Answer: B
Q67
You are developing an application that uses a recommendation
engine on Google Cloud. Your solution should display new videos
to customers based on past views. Your solution needs to
generate labels for the entities in videos that the customer has
viewed. Your design must be able to provide very fast filtering
suggestions based on data from other customer preferences on
several TB of data. What should you do?
Correct Answer: C
Q68
You are selecting services to write and transform JSON
messages from Cloud Pub/Sub to BigQuery for a data pipeline on
Google Cloud. You want to minimize service costs. You also want
to monitor and accommodate input data volume that will vary in
size with minimal manual intervention. What should you do?
Correct Answer: C
Q69
Your infrastructure includes a set of YouTube channels. You have
been tasked with creating a process for sending the YouTube
channel data to Google Cloud for analysis. You want to design a
solution that allows your world-wide marketing teams to perform
ANSI SQL and other types of analysis on up-to-date YouTube
channels log data. How should you set up the log data transfer
into Google Cloud?
Correct Answer: A
Q70
You are designing storage for very large text files for a data
pipeline on Google Cloud. You want to support ANSI SQL
queries. You also want to support compression and parallel load
from the input locations using Google recommended practices.
What should you do?
Correct Answer: A
Q71
You are developing an application on Google Cloud that will
automatically generate subject labels for users' blog posts. You
are under competitive pressure to add this feature quickly, and
you have no additional developer resources. No one on your team
has experience with machine learning. What should you do?
Correct Answer: A
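The option text is not shown above, but with no ML experience and no spare developers, a pre-trained API is the natural fit; for example, the Cloud Natural Language API can classify text into subject categories without training a model. A sketch (the sample content string is made up):

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Tips for tuning hyperparameters of a TensorFlow model on "
            "Google Cloud, including learning rates and batch sizes.",
    type_=language_v1.Document.Type.PLAIN_TEXT)
for category in client.classify_text(document=document).categories:
    print(category.name, category.confidence)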
Q72
You are designing storage for 20 TB of text files as part of
deploying a data pipeline on Google Cloud. Your input data is in
CSV format. You want to minimize the cost of querying aggregate
values for multiple users who will query the data in Cloud Storage
with multiple engines. Which storage service and schema design
should you use?
Correct Answer: C
Q73
You are designing storage for two relational tables that are part of
a 10-TB database on Google Cloud. You want to support
transactions that scale horizontally. You also want to optimize
data for range queries on non-key columns. What should you do?
Correct Answer: C
Q74
Your financial services company is moving to cloud technology
and wants to store 50 TB of financial time-series data in the cloud.
This data is updated frequently and new data will be streaming in
all the time. Your company also wants to move their existing
Apache Hadoop jobs to the cloud to get insights into this data.
Which product should they use to store the data?
● A. Cloud Bigtable
● B. Google BigQuery
● C. Google Cloud Storage
● D. Google Cloud Datastore
Correct Answer: A
Q75
An organization maintains a Google BigQuery dataset that
contains tables with user-level data. They want to expose
aggregates of this data to other Google Cloud projects, while still
controlling access to the user-level data. Additionally, they need to
minimize their overall storage cost and ensure the analysis cost
for other projects is assigned to those projects. What should they
do?
Correct Answer: A
Q76
Government regulations in your industry mandate that you have
to maintain an auditable record of access to certain types of data.
Assuming that all expiring logs will be archived correctly, where
should you store data that is subject to that mandate?
Correct Answer: D
Q77
Your neural network model is taking days to train. You want to
increase the training speed. What can you do?
Correct Answer: B
Q78
You are responsible for writing your company's ETL pipelines to
run on an Apache Hadoop cluster. The pipeline will require some
checkpointing and splitting pipelines. Which method should you
use to write the pipelines?
Correct Answer: A
Q79
Your company maintains a hybrid deployment with GCP, where
analytics are performed on your anonymized customer data. The
data are imported to Cloud Storage from your data center through
parallel uploads to a data transfer server running on GCP.
Management informs you that the daily transfers take too long
and have asked you to fix the problem. You want to maximize
transfer speeds. Which action should you take?
Correct Answer: C
Q80
MJTelco Case Study -
Company Overview -
MJTelco is a startup that plans to build networks in rapidly
growing, underserved markets around the world. The company
has patents for innovative optical communications hardware.
Based on these patents, they can create many reliable,
high-speed backbone links with inexpensive hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses
technologies originally developed to overcome communications
challenges in space. Fundamental to their operation, they need to
create a distributed data infrastructure that drives real-time
analysis and incorporates machine learning to continuously
optimize their topologies. Because their hardware is inexpensive,
they plan to overdeploy the network allowing them to account for
the impact of dynamic regional politics on location availability and
cost.
Their management and operations teams are situated all around
the globe, creating a many-to-many relationship between data
consumers and providers in their system. After careful
consideration, they decided public cloud is the perfect
environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in
its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data
flows generated when they ramp to more than 50,000
installations.
✑ Refine their machine-learning cycles to verify and improve the
dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments
(development/test, staging, and production) to meet the needs of
running experiments, deploying new features, and serving
production customers.
Business Requirements -
✑ Scale up their production environment with minimal cost,
instantiating resources when and where needed in an
unpredictable, distributed telecom user community.
✑ Ensure security of their proprietary data to protect their
leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from
distributed research workers
✑ Maintain isolated environments that support rapid iteration of
their machine-learning models without affecting their customers.
Technical Requirements -
Ensure secure and efficient transport and storage of telemetry
data
Rapidly scale instances to support between 10,000 and 100,000
data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to
2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on
awareness of data pipeline problems both in telemetry flows and
in production learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic
machine learning. Our inexpensive hardware is organized to be
highly reliable, which gives us cost advantages. We need to
quickly stabilize our large distributed data pipelines to meet our
reliability and capacity commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need
resources that scale and keep our data secure. We also need
environments in which our data scientists can carefully study and
quickly adapt our models. Because we rely on automation to
process our data, we also need our development and test
environments to work as we iterate.
CFO Statement -
The project is too large for us to maintain the hardware and
software required for the data and analysis. Also, we cannot
afford to staff an operations team to monitor so many data feeds,
so we will rely on automation and infrastructure. Google Cloud's
machine learning will allow our quantitative researchers to work
on our high-value problems instead of problems with our data
pipelines.
MJTelco is building a custom interface to share data. They have
these requirements:
1. They need to do aggregations over their petabyte-scale
datasets.
2. They need to scan specific time range rows with a very fast
response time (milliseconds).
Which combination of Google Cloud Platform products should you
recommend?
Correct Answer: C
Q81
MJTelco Case Study -
Company Overview -
MJTelco is a startup that plans to build networks in rapidly
growing, underserved markets around the world. The company
has patents for innovative optical communications hardware.
Based on these patents, they can create many reliable,
high-speed backbone links with inexpensive hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses
technologies originally developed to overcome communications
challenges in space. Fundamental to their operation, they need to
create a distributed data infrastructure that drives real-time
analysis and incorporates machine learning to continuously
optimize their topologies. Because their hardware is inexpensive,
they plan to overdeploy the network allowing them to account for
the impact of dynamic regional politics on location availability and
cost.
Their management and operations teams are situated all around
the globe, creating a many-to-many relationship between data
consumers and providers in their system. After careful
consideration, they decided public cloud is the perfect
environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in
its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data
flows generated when they ramp to more than 50,000
installations.
✑ Refine their machine-learning cycles to verify and improve the
dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments
(development/test, staging, and production) to meet the needs of
running experiments, deploying new features, and serving
production customers.
Business Requirements -
✑ Scale up their production environment with minimal cost,
instantiating resources when and where needed in an
unpredictable, distributed telecom user community.
✑ Ensure security of their proprietary data to protect their
leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from
distributed research workers
✑ Maintain isolated environments that support rapid iteration of
their machine-learning models without affecting their customers.
Technical Requirements -
Ensure secure and efficient transport and storage of telemetry
data
Rapidly scale instances to support between 10,000 and 100,000
data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to
2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on
awareness of data pipeline problems both in telemetry flows and
in production learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic
machine learning. Our inexpensive hardware is organized to be
highly reliable, which gives us cost advantages. We need to
quickly stabilize our large distributed data pipelines to meet our
reliability and capacity commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need
resources that scale and keep our data secure. We also need
environments in which our data scientists can carefully study and
quickly adapt our models. Because we rely on automation to
process our data, we also need our development and test
environments to work as we iterate.
CFO Statement -
The project is too large for us to maintain the hardware and
software required for the data and analysis. Also, we cannot
afford to staff an operations team to monitor so many data feeds,
so we will rely on automation and infrastructure. Google Cloud's
machine learning will allow our quantitative researchers to work
on our high-value problems instead of problems with our data
pipelines.
You need to compose visualization for operations teams with the
following requirements:
✑ Telemetry must include data from all 50,000 installations for the
most recent 6 weeks (sampling once every minute)
✑ The report must not be more than 3 hours delayed from live
data.
✑ The actionable report should only show suboptimal links.
✑ Most suboptimal links should be sorted to the top.
✑ Suboptimal links can be grouped and filtered by regional
geography.
✑ User response time to load the report must be <5 seconds.
You create a data source to store the last 6 weeks of data, and
create visualizations that allow viewers to see multiple date
ranges, distinct geographic regions, and unique installation types.
You always show the latest data without any changes to your
visualizations. You want to avoid creating and updating new
visualizations each month. What should you do?
Q82
MJTelco Case Study -
Company Overview -
MJTelco is a startup that plans to build networks in rapidly
growing, underserved markets around the world. The company
has patents for innovative optical communications hardware.
Based on these patents, they can create many reliable,
high-speed backbone links with inexpensive hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses
technologies originally developed to overcome communications
challenges in space. Fundamental to their operation, they need to
create a distributed data infrastructure that drives real-time
analysis and incorporates machine learning to continuously
optimize their topologies. Because their hardware is inexpensive,
they plan to overdeploy the network allowing them to account for
the impact of dynamic regional politics on location availability and
cost.
Their management and operations teams are situated all around
the globe, creating a many-to-many relationship between data
consumers and providers in their system. After careful
consideration, they decided public cloud is the perfect
environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in
its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data
flows generated when they ramp to more than 50,000
installations.
✑ Refine their machine-learning cycles to verify and improve the
dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments
(development/test, staging, and production) to meet the needs of
running experiments, deploying new features, and serving
production customers.
Business Requirements -
✑ Scale up their production environment with minimal cost,
instantiating resources when and where needed in an
unpredictable, distributed telecom user community.
✑ Ensure security of their proprietary data to protect their
leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from
distributed research workers
✑ Maintain isolated environments that support rapid iteration of
their machine-learning models without affecting their customers.
Technical Requirements -
Ensure secure and efficient transport and storage of telemetry
data
Rapidly scale instances to support between 10,000 and 100,000
data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to
2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on
awareness of data pipeline problems both in telemetry flows and
in production learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic
machine learning. Our inexpensive hardware is organized to be
highly reliable, which gives us cost advantages. We need to
quickly stabilize our large distributed data pipelines to meet our
reliability and capacity commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need
resources that scale and keep our data secure. We also need
environments in which our data scientists can carefully study and
quickly adapt our models. Because we rely on automation to
process our data, we also need our development and test
environments to work as we iterate.
CFO Statement -
The project is too large for us to maintain the hardware and
software required for the data and analysis. Also, we cannot
afford to staff an operations team to monitor so many data feeds,
so we will rely on automation and infrastructure. Google Cloud's
machine learning will allow our quantitative researchers to work
on our high-value problems instead of problems with our data
pipelines.
Given the record streams MJTelco is interested in ingesting per
day, they are concerned about the cost of Google BigQuery
increasing. MJTelco asks you to provide a design solution. They
require a single large data table called tracking_table.
Additionally, they want to minimize the cost of daily queries while
performing fine-grained analysis of each day's events. They also
want to use streaming ingestion. What should you do?
● A. Create a table called tracking_table and include a DATE
column.
● B. Create a partitioned table called tracking_table and
include a TIMESTAMP column.
● C. Create sharded tables for each day following the pattern
tracking_table_YYYYMMDD.
● D. Create a table called tracking_table with a TIMESTAMP
column to represent the day.
Correct Answer: B
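For reference, a minimal sketch of option B in BigQuery Standard SQL; mydataset, event_time, link_id, and latency_ms are illustrative names rather than anything given in the question:
-- Create tracking_table partitioned on the day of the event timestamp.
CREATE TABLE mydataset.tracking_table (
  event_time TIMESTAMP,
  link_id STRING,
  latency_ms INT64
)
PARTITION BY DATE(event_time);
-- A day's fine-grained analysis then scans only that day's partition.
SELECT link_id, AVG(latency_ms) AS avg_latency_ms
FROM mydataset.tracking_table
WHERE DATE(event_time) = '2020-01-15'
GROUP BY link_id;
Streamed rows are eventually written to the partition matching event_time, so the daily query cost stays proportional to one day of data rather than the whole table.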
Q83
Flowlogistic Case Study -
Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They
help businesses throughout the world manage their resources
and transport them to their final destination. The company has
grown rapidly, expanding their offerings to include rail, truck,
aircraft, and oceanic shipping.
Company Background -
The company started as a regional trucking company, and then
expanded into other logistics market. Because they have not
updated their infrastructure, managing and tracking orders and
shipments has become a bottleneck. To improve operations,
Flowlogistic developed proprietary technology for tracking
shipments in real time at the parcel level. However, they are
unable to deploy it because their technology stack, based on
Apache Kafka, cannot support the processing volume. In addition,
Flowlogistic wants to further analyze their orders and shipments
to determine how best to deploy their resources.
Solution Concept -
Flowlogistic wants to implement two concepts using the cloud:
✑ Use their proprietary technology in a real-time
inventory-tracking system that indicates the location of their loads
✑ Perform analytics on all their orders and shipment logs, which
contain both structured and unstructured data, to determine how
best to deploy resources, and which markets to expand into. They
also want to use predictive analytics to learn earlier when a
shipment will be delayed.
Technical Requirements -
✑ Handle both streaming and batch data
✑ Migrate existing Hadoop workloads
✑ Ensure architecture is scalable and elastic to meet the
changing demands of the company.
✑ Use managed services whenever possible
✑ Encrypt data in flight and at rest
✑ Connect a VPN between the production data center and cloud
environment
CEO Statement -
We have grown so quickly that our inability to upgrade our
infrastructure is really hampering further growth and efficiency.
We are efficient at moving shipments around the world, but we
are inefficient at moving data around.
We need to organize our information so we can more easily
understand where our customers are and what they are shipping.
CTO Statement -
IT has never been a priority for us, so as our data has grown, we
have not invested enough in our technology. I have a good staff to
manage IT, but they are so busy managing our infrastructure that I
cannot get them to do the things that really matter, such as
organizing our data, building the analytics, and figuring out how to
implement the CFO's tracking technology.
CFO Statement -
Part of our competitive advantage is that we penalize ourselves
for late shipments and deliveries. Knowing where our shipments
are at all times has a direct correlation to our bottom line and
profitability. Additionally, I don't want to commit capital to building
out a server environment.
Flowlogistic's management has determined that the current
Apache Kafka servers cannot handle the data volume for their
real-time inventory tracking system.
You need to build a new system on Google Cloud Platform (GCP)
that will feed the proprietary tracking software. The system must
be able to ingest data from a variety of global sources, process
and query in real-time, and store the data reliably. Which
combination of GCP products should you choose?
Correct Answer: C
Q84
After migrating ETL jobs to run on BigQuery, you need to verify
that the output of the migrated jobs is the same as the output of
the original. You've loaded a table containing the output of the
original job and want to compare the contents with output from the
migrated job to show that they are identical. The tables do not
contain a primary key column that would enable you to join them
together for comparison. What should you do?
Correct Answer: C
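Whatever the keyed option, one common SQL-only way to compare two tables that lack a join key is a symmetric EXCEPT DISTINCT check. A sketch; the dataset and table names are assumptions, and note that EXCEPT DISTINCT ignores duplicate row counts:
-- Rows in the original output that are missing from the migrated output.
SELECT * FROM mydataset.original_output
EXCEPT DISTINCT
SELECT * FROM mydataset.migrated_output;
-- Rows in the migrated output that are missing from the original output.
SELECT * FROM mydataset.migrated_output
EXCEPT DISTINCT
SELECT * FROM mydataset.original_output;
-- If both queries return zero rows, the two tables hold the same distinct rows.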
Q85
You are a head of BI at a large enterprise company with multiple
business units that each have different priorities and budgets. You
use on-demand pricing for BigQuery with a quota of 2K
concurrent on-demand slots per project. Users at your
organization sometimes don't get slots to execute their queries,
and you need to correct this. You'd like to avoid introducing new
projects to your account. What should you do?
Correct Answer: C
Reference: https://cloud.google.com/blog/products/gcp/busting-12-myths-about-bigquery
Q86
You have an Apache Kafka cluster on-prem with topics containing
web application logs. You need to replicate the data to Google
Cloud for analysis in BigQuery and Cloud Storage. The preferred
replication method is mirroring to avoid deployment of Kafka
Connect plugins. What should you do?
Correct Answer: A
Q87
You've migrated a Hadoop job from an on-prem cluster to
Dataproc and GCS. Your Spark job is a complicated analytical
workload that consists of many shuffling operations, and the initial
data are Parquet files (on average 200-400 MB each). You see
some degradation in performance after the migration to Dataproc,
so you'd like to optimize for it. You need to keep in mind that your
organization is very cost-sensitive, so you'd like to continue using
Dataproc on preemptibles (with 2 non-preemptible workers only)
for this workload. What should you do?
Correct Answer: A
Q88
Your team is responsible for developing and maintaining ETLs in
your company. One of your Dataflow jobs is failing because of
some errors in the input data, and you need to improve the
reliability of the pipeline (including being able to reprocess all
failing data). What should you do?
Correct Answer: C
Q89
You're training a model to predict housing prices based on an
available dataset with real estate properties. Your plan is to train a
fully connected neural net, and you've discovered that the dataset
contains latitude and longitude of the property. Real estate
professionals have told you that the location of the property is
highly influential on price, so you'd like to engineer a feature that
incorporates this physical dependency.
What should you do?
Correct Answer: C
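If the intended approach is a feature cross of binned latitude and longitude (a common way to let a fully connected net learn location effects), a rough sketch of the bucketing in SQL might look like this; the 0.1-degree bin size and all table and column names are assumptions:
-- Bucket latitude/longitude into ~0.1 degree cells and cross them into a
-- single categorical feature the model can associate with local price levels.
SELECT
  price,
  CONCAT(
    CAST(CAST(FLOOR(latitude * 10) AS INT64) AS STRING), '_',
    CAST(CAST(FLOOR(longitude * 10) AS INT64) AS STRING)
  ) AS lat_lng_cell
FROM mydataset.properties;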
Q90
You are deploying MariaDB SQL databases on GCE VM
Instances and need to configure monitoring and alerting. You
want to collect metrics including network connections, disk IO and
replication status from MariaDB with minimal development effort
and use StackDriver for dashboards and alerts. What should you
do?
Correct Answer: D
Q91
You work for a bank. You have a labeled dataset that contains
information on already granted loan applications and whether
these applications have defaulted. You have been asked to
train a model to predict default rates for credit applicants. What
should you do?
Correct Answer: B
Q92
You need to migrate a 2TB relational database to Google Cloud
Platform. You do not have the resources to significantly refactor
the application that uses this database and cost to operate is of
primary concern. Which service do you select for storing and
serving your data?
A. Cloud Spanner
B. Cloud Bigtable
C. Cloud Firestore
D. Cloud SQL
Correct Answer: D
Q93
You're using Bigtable for a real-time application, and you have a
heavy load that is a mix of reads and writes. You've recently
identified an additional use case and need to perform an hourly
analytical job to calculate certain statistics across the whole
database. You need to ensure both the reliability of your
production application as well as the analytical workload.
What should you do?
Correct Answer: C
Q94
You are designing an Apache Beam pipeline to enrich data from
Cloud Pub/Sub with static reference data from BigQuery. The
reference data is small enough to fit in memory on a single
worker. The pipeline should write enriched results to BigQuery for
analysis. Which job type and transforms should this pipeline use?
Correct Answer: C
Q95
You have a data pipeline that writes data to Cloud Bigtable using
well-designed row keys. You want to monitor your pipeline to
determine when to increase the size of your Cloud Bigtable cluster.
Which two actions can you take to accomplish this? (Choose
two.)
Correct Answer: AC
Need to check
Q96
You want to analyze hundreds of thousands of social media posts
daily at the lowest cost and with the fewest steps.
You have the following requirements:
✑ You will batch-load the posts once per day and run them
through the Cloud Natural Language API.
✑ You will extract topics and sentiment from the posts.
✑ You must store the raw posts for archiving and reprocessing.
✑ You will create dashboards to be shared with people both
inside and outside your organization.
You need to store both the data extracted from the API to perform
analysis as well as the raw social media posts for historical
archiving. What should you do?
A. Store the social media posts and the data extracted from the
API in BigQuery.
B. Store the social media posts and the data extracted from the
API in Cloud SQL.
C. Store the raw social media posts in Cloud Storage, and write
the data extracted from the API into BigQuery.
D. Feed the social media posts into the API directly from the
source, and write the extracted data from the API into BigQuery.
Correct Answer: C
Q97
You store historic data in Cloud Storage. You need to perform
analytics on the historic data. You want to use a solution to detect
invalid data entries and perform data transformations that will not
require programming or knowledge of SQL. What should you do?
Correct Answer: B
Q98
Your company needs to upload their historic data to Cloud
Storage. The security rules don't allow access from external IPs
to their on-premises resources. After an initial upload, they will
add new data from existing on-premises applications every day.
What should they do?
Correct Answer: A
Q99
You have a query that filters a BigQuery table using a WHERE
clause on timestamp and ID columns. By using bq query
--dry_run, you learn that the query triggers a full scan of the table,
even though the filters on timestamp and ID select a tiny fraction of
the overall data. You want to reduce the amount of data scanned
by BigQuery with minimal changes to existing SQL queries. What
should you do?
Correct Answer: C
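One way this is commonly addressed, which may or may not match the keyed option exactly, is to recreate the table partitioned on the timestamp column and clustered on the ID column so that existing WHERE clauses prune data without any SQL changes. A sketch; dataset, table, and column names are placeholders:
-- Rebuild the table partitioned by day and clustered by id.
CREATE TABLE mydataset.events_partitioned
PARTITION BY DATE(event_ts)
CLUSTER BY id
AS SELECT * FROM mydataset.events;
-- The existing query text is unchanged; the timestamp filter now prunes
-- partitions and the id filter benefits from clustering.
SELECT *
FROM mydataset.events_partitioned
WHERE event_ts >= '2020-01-01 00:00:00' AND id = 'abc123';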
Q100
You have a requirement to insert minute-resolution data from
50,000 sensors into a BigQuery table. You expect significant
growth in data volume and need the data to be available within 1
minute of ingestion for real-time analysis of aggregated trends.
What should you do?
Correct Answer: B
Q101
You need to copy millions of sensitive patient records from a
relational database to BigQuery. The total size of the database is
10 TB. You need to design a solution that is secure and
time-efficient. What should you do?
Correct Answer: B
Q102
You need to create a near real-time inventory dashboard that
reads the main inventory tables in your BigQuery data
warehouse. Historical inventory data is stored as inventory
balances by item and location. You have several thousand
updates to inventory every hour. You want to maximize
performance of the dashboard and ensure that the data is
accurate. What should you do?
Correct Answer: C
Q103
You have data stored in BigQuery. The data in the BigQuery
dataset must be highly available. You need to define a storage,
backup, and recovery strategy of this data that minimizes cost.
How should you configure the BigQuery table?
Correct Answer: C
Q104
You used Cloud Dataprep to create a recipe on a sample of data
in a BigQuery table. You want to reuse this recipe on a daily
upload of data with the same schema, after the load job with
variable execution time completes. What should you do?
Correct Answer: D
Q105
You want to automate execution of a multi-step data pipeline
running on Google Cloud. The pipeline includes Cloud Dataproc
and Cloud Dataflow jobs that have multiple dependencies on
each other. You want to use managed services where possible,
and the pipeline will run every day. Which tool should you use?
A. cron
B. Cloud Composer
C. Cloud Scheduler
D. Workflow Templates on Cloud Dataproc
Correct Answer: B
Q106
You are managing a Cloud Dataproc cluster. You need to make a
job run faster while minimizing costs, without losing work in
progress on your clusters. What should you do?
Correct Answer: D
Q107
You work for a shipping company that uses handheld scanners to
read shipping labels. Your company has strict data privacy
standards that require scanners to only transmit recipients'
personally identifiable information (PII) to analytics systems,
which violates user privacy rules. You want to quickly build a
scalable solution using cloud-native managed services to prevent
exposure of PII to the analytics systems. What should you do?
Correct Answer: D
Q108
You have developed three data processing jobs. One executes a
Cloud Dataflow pipeline that transforms data uploaded to Cloud
Storage and writes results to BigQuery. The second ingests data
from on-premises servers and uploads it to Cloud Storage. The
third is a Cloud Dataflow pipeline that gets information from
third-party data providers and uploads the information to Cloud
Storage. You need to be able to schedule and monitor the
execution of these three workflows and manually execute them
when needed. What should you do?
Correct Answer: A
Q109
You have Cloud Functions written in Node.js that pull messages
from Cloud Pub/Sub and send the data to BigQuery. You observe
that the message processing rate on the Pub/Sub topic is orders
of magnitude higher than anticipated, but there is no error logged
in Stackdriver Log Viewer. What are the two most likely causes of
this problem? (Choose two.)
Correct Answer: CD
Need to check
Q110
You are creating a new pipeline in Google Cloud to stream IoT
data from Cloud Pub/Sub through Cloud Dataflow to BigQuery.
While previewing the data, you notice that roughly 2% of the data
appears to be corrupt. You need to modify the Cloud Dataflow
pipeline to filter out this corrupt data. What should you do?
Correct Answer: B
Q111
You have historical data covering the last three years in BigQuery
and a data pipeline that delivers new data to BigQuery daily. You
have noticed that when the Data Science team runs a query
filtered on a date column and limited to 30-90 days of data, the
query scans the entire table. You also noticed that your bill is
increasing more quickly than you expected. You want to resolve
the issue as cost-effectively as possible while maintaining the
ability to conduct SQL queries.
What should you do?
Correct Answer: A
Q112
You operate a logistics company, and you want to improve event
delivery reliability for vehicle-based sensors. You operate small
data centers around the world to capture these events, but leased
lines that provide connectivity from your event collection
infrastructure to your event processing infrastructure are
unreliable, with unpredictable latency. You want to address this
issue in the most cost-effective way. What should you do?
Correct Answer: A
Q113
You are a retailer that wants to integrate your online sales
capabilities with different in-home assistants, such as Google
Home. You need to interpret customer voice commands and issue
an order to the backend systems. Which solutions should you
choose?
Correct Answer: C
Q114
Your company has a hybrid cloud initiative. You have a complex
data pipeline that moves data between cloud provider services
and leverages services from each of the cloud providers. Which
cloud-native service should you use to orchestrate the entire
pipeline?
A. Cloud Dataflow
B. Cloud Composer
C. Cloud Dataprep
D. Cloud Dataproc
Correct Answer: B
Q115
You use a dataset in BigQuery for analysis. You want to provide
third-party companies with access to the same dataset. You need
to keep the costs of data sharing low and ensure that the data is
current. Which solution should you choose?
Correct Answer: A
Q116
A shipping company has live package-tracking data that is sent to
an Apache Kafka stream in real time. This is then loaded into
BigQuery. Analysts in your company want to query the tracking
data in BigQuery to analyze geospatial trends in the lifecycle of a
package. The table was originally created with ingest-date
partitioning. Over time, the query processing time has increased.
You need to implement a change that would improve query
performance in BigQuery. What should you do?
Correct Answer: B
Q117
You are designing a data processing pipeline. The pipeline must
be able to scale automatically as load increases. Messages must
be processed at least once and must be ordered within windows
of 1 hour. How should you design the solution?
Correct Answer: D
Q118
You need to set access to BigQuery for different departments
within your company. Your solution should comply with the
following requirements:
✑ Each department should have access only to their data.
✑ Each department will have one or more leads who need to be
able to create and update tables and provide them to their team.
✑ Each department has data analysts who need to be able to
query but not modify data.
How should you set access to the data in BigQuery?
Correct Answer: B
Q119
You operate a database that stores stock trades and an
application that retrieves average stock price for a given company
over an adjustable window of time. The data is stored in Cloud
Bigtable where the datetime of the stock trade is the beginning of
the row key. Your application has thousands of concurrent users,
and you notice that performance is starting to degrade as more
stocks are added. What should you do to improve the
performance of your application?
Correct Answer: A
Q120
You are operating a Cloud Dataflow streaming pipeline. The
pipeline aggregates events from a Cloud Pub/Sub subscription
source, within a window, and sinks the resulting aggregation to a
Cloud Storage bucket. The source has consistent throughput. You
want to monitor an alert on behavior of the pipeline with Cloud
Stackdriver to ensure that it is processing data. Which Stackdriver
alerts should you create?
Correct Answer: B
Need to check
Q121
You currently have a single on-premises Kafka cluster in a data
center in the us-east region that is responsible for ingesting
messages from IoT devices globally. Because large parts of the globe
have poor internet connectivity, messages sometimes batch at the
edge, come in all at once, and cause a spike in load on your
Kafka cluster. This is becoming difficult to manage and
prohibitively expensive. What is the Google-recommended cloud
native architecture for this scenario?
Correct Answer: C
Q122
You decided to use Cloud Datastore to ingest vehicle telemetry
data in real time. You want to build a storage system that will
account for the long-term data growth, while keeping the costs
low. You also want to create snapshots of the data periodically, so
that you can make a point-in-time (PIT) recovery, or clone a copy
of the data for Cloud Datastore in a different environment. You
want to archive these snapshots for a long time. Which two
methods can accomplish this?
(Choose two.)
Correct Answer: AC
Q124
You are designing a cloud-native historical data processing
system to meet the following conditions:
✑ The data being analyzed is in CSV, Avro, and PDF formats and
will be accessed by multiple analysis tools including Cloud
Dataproc, BigQuery, and Compute
Engine.
✑ A streaming data pipeline stores new data daily.
✑ Performance is not a factor in the solution.
✑ The solution design should maximize availability.
How should you design data storage for this solution?
Correct Answer: D
Q125
You have a petabyte of analytics data and need to design a
storage and processing platform for it. You must be able to
perform data warehouse-style analytics on the data in Google
Cloud and expose the dataset as files for batch analysis tools in
other cloud providers. What should you do?
Correct Answer: C
Q126
You work for a manufacturing company that sources up to 750
different components, each from a different supplier. You've
collected a labeled dataset that has on average 1000 examples
for each unique component. Your team wants to implement an
app to help warehouse workers recognize incoming components
based on a photo of the component. You want to implement the
first working version of this app (as Proof-Of-Concept) within a
few working days. What should you do?
Correct Answer: B
Q127
You are working on a niche product in the image recognition
domain. Your team has developed a model that is dominated by
custom C++ TensorFlow ops your team has implemented. These
ops are used inside your main training loop and are performing
bulky matrix multiplications. It currently takes up to several days
to train a model. You want to decrease this time significantly and
keep the cost low by using an accelerator on Google Cloud. What
should you do?
Correct Answer: C
Need to check
Q128
You work on a regression problem in a natural language
processing domain, and you have 100M labeled examples in your
dataset. You have randomly shuffled your data and split your
dataset into train and test samples (in a 90/10 ratio). After you
trained the neural network and evaluated your model on a test
set, you discover that the root-mean-squared error (RMSE) of
your model is twice as high on the train set as on the test set.
How should you improve the performance of your model?
Correct Answer: D
Q129
You use BigQuery as your centralized analytics platform. New
data is loaded every day, and an ETL pipeline modifies the
original data and prepares it for the final users. This ETL pipeline
is regularly modified and can generate errors, but sometimes the
errors are detected only after 2 weeks. You need to provide a
method to recover from these errors, and your backups should be
optimized for storage costs. How should you organize your data in
BigQuery and store your backups?
Correct Answer: B
Q130
The marketing team at your organization provides regular updates
of a segment of your customer dataset. The marketing team has
given you a CSV with 1 million records that must be updated in
BigQuery. When you use the UPDATE statement in BigQuery, you
receive a quotaExceeded error. What should you do?
Correct Answer: D
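A pattern often used for this scenario (shown as an illustration rather than as the keyed answer): load the CSV into a staging table and apply all changes in a single MERGE statement instead of row-by-row UPDATEs. Table and column names below are assumptions:
-- Apply the 1M-record marketing update in one DML statement.
MERGE mydataset.customers AS t
USING mydataset.customer_updates AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN
  UPDATE SET t.segment = s.segment, t.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, segment, updated_at)
  VALUES (s.customer_id, s.segment, CURRENT_TIMESTAMP());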
Q131
As your organization expands its usage of GCP, many teams
have started to create their own projects. Projects are further
multiplied to accommodate different stages of deployments and
target audiences. Each project requires unique access control
configurations. The central IT team needs to have access to all
projects.
Furthermore, data from Cloud Storage buckets and BigQuery
datasets must be shared for use in other projects in an ad hoc
way. You want to simplify access control management by
minimizing the number of policies. Which two steps should you
take? (Choose two.)
Correct Answer: B
Q133
A data scientist has created a BigQuery ML model and asks you
to create an ML pipeline to serve predictions. You have a REST
API application with the requirement to serve predictions for an
individual user ID with latency under 100 milliseconds. You use
the following query to generate predictions: SELECT
predicted_label, user_id FROM ML.PREDICT(MODEL
'dataset.model', TABLE user_features). How should you create the
ML pipeline?
Correct Answer: D
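For context, the batch side of such a pipeline is just the ML.PREDICT query from the question run over all users; because interactive BigQuery queries typically take well over 100 ms, predictions are usually precomputed and exported to a low-latency store keyed by user_id. A sketch (dataset.model and user_features come from the question; everything else is assumed):
-- Batch-generate predictions for every user, to be exported to a
-- low-latency serving store keyed by user_id.
SELECT
  user_id,
  predicted_label
FROM ML.PREDICT(MODEL `dataset.model`, TABLE dataset.user_features);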
Q134
You are building an application to share financial market data with
consumers, who will receive data feeds. Data is collected from the
markets in real time.
Consumers will receive the data in the following ways:
✑ Real-time event stream
✑ ANSI SQL access to real-time stream and historical data
✑ Batch historical exports
Which solution should you use?
Correct Answer: B
Q135
You are building a new application that you need to collect data
from in a scalable way. Data arrives continuously from the
application throughout the day, and you expect to generate
approximately 150 GB of JSON data per day by the end of the
year. Your requirements are:
✑ Decoupling producer from consumer
✑ Space and cost-efficient storage of the raw ingested data,
which is to be stored indefinitely
✑ Near real-time SQL query
✑ Maintain at least 2 years of historical data, which will be
queried with SQL
Which pipeline should you use to meet these requirements?
Correct Answer: A
Need to check
Q136
You are running a pipeline in Cloud Dataflow that receives
messages from a Cloud Pub/Sub topic and writes the results to a
BigQuery dataset in the EU. Currently, your pipeline is located in
europe-west4 and has a maximum of 3 workers, instance type
n1-standard-1. You notice that during peak periods, your pipeline
is struggling to process records in a timely fashion, when all 3
workers are at maximum CPU utilization. Which two actions can
you take to increase performance of your pipeline? (Choose two.)
Correct Answer: AB
Q137
You have a data pipeline with a Cloud Dataflow job that
aggregates and writes time series metrics to Cloud Bigtable. This
data feeds a dashboard used by thousands of users across the
organization. You need to support additional concurrent users and
reduce the amount of time required to write the data. Which two
actions should you take? (Choose two.)
Correct Answer: BC
Q138
You have several Spark jobs that run on a Cloud Dataproc cluster
on a schedule. Some of the jobs run in sequence, and some of
the jobs run concurrently. You need to automate this process.
What should you do?
Correct Answer: C
Q139
You are building a new data pipeline to share data between two
different types of applications: job generators and job runners.
Your solution must scale to accommodate increases in usage and
must accommodate the addition of new applications without
negatively affecting the performance of existing ones. What
should you do?
Correct Answer: B
Q140
You need to create a new transaction table in Cloud Spanner that
stores product sales data. You are deciding what to use as a
primary key. From a performance perspective, which strategy
should you choose?
Correct Answer: C
Q141
Data Analysts in your company have the Cloud IAM Owner role
assigned to them in their projects to allow them to work with
multiple GCP products in their projects. Your organization requires
that all BigQuery data access logs be retained for 6 months. You
need to ensure that only audit personnel in your company can
access the data access logs for all projects. What should you do?
Correct Answer: D
Q142
Each analytics team in your organization is running BigQuery jobs
in their own projects. You want to enable each team to monitor
slot usage within their projects. What should you do?
Correct Answer: B
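Independently of the keyed monitoring approach, per-project slot consumption can also be inspected directly from the BigQuery INFORMATION_SCHEMA job views; the region qualifier and the one-day window below are assumptions:
-- Approximate slot usage per query job in this project over the last day.
SELECT
  job_id,
  user_email,
  total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND job_type = 'QUERY'
ORDER BY total_slot_ms DESC
LIMIT 20;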
Q143
You are operating a streaming Cloud Dataflow pipeline. Your
engineers have a new version of the pipeline with a different
windowing algorithm and triggering strategy. You want to update
the running pipeline with the new version. You want to ensure that
no data is lost during the update. What should you do?
Correct Answer: D
Q144
You need to move 2 PB of historical data from an on-premises
storage appliance to Cloud Storage within six months, and your
outbound network capacity is constrained to 20 Mb/sec. How
should you migrate this data to Cloud Storage?
Correct Answer: A
Q145
You receive data files in CSV format monthly from a third party.
You need to cleanse this data, but every third month the schema
of the files changes. Your requirements for implementing these
transformations include:
✑ Executing the transformations on a schedule
✑ Enabling non-developer analysts to modify transformations
✑ Providing a graphical tool for designing transformations
What should you do?
Correct Answer: A
Q146
You want to migrate an on-premises Hadoop system to Cloud
Dataproc. Hive is the primary tool in use, and the data format is
Optimized Row Columnar (ORC). All ORC files have been
successfully copied to a Cloud Storage bucket. You need to
replicate some data to the cluster's local Hadoop Distributed File
System
(HDFS) to maximize performance. What are two ways to start
using Hive in Cloud Dataproc? (Choose two.)
A. Run the gsutil utility to transfer all ORC files from the Cloud
Storage bucket to HDFS. Mount the Hive tables locally.
B. Run the gsutil utility to transfer all ORC files from the Cloud
Storage bucket to any node of the Dataproc cluster. Mount the
Hive tables locally.
C. Run the gsutil utility to transfer all ORC files from the Cloud
Storage bucket to the master node of the Dataproc cluster. Then
run the Hadoop utility to copy them to HDFS. Mount the Hive
tables from HDFS.
D. Leverage Cloud Storage connector for Hadoop to mount the
ORC files as external Hive tables. Replicate external Hive tables
to the native ones.
E. Load the ORC files into BigQuery. Leverage BigQuery
connector for Hadoop to mount the BigQuery tables as external
Hive tables. Replicate external Hive tables to the native ones.
Correct Answer: BC
Need to check
Q147
You are implementing several batch jobs that must be executed
on a schedule. These jobs have many interdependent steps that
must be executed in a specific order. Portions of the jobs involve
executing shell scripts, running Hadoop jobs, and running queries
in BigQuery. The jobs are expected to run for many minutes up to
several hours. If the steps fail, they must be retried a fixed
number of times. Which service should you use to manage the
execution of these jobs?
A. Cloud Scheduler
B. Cloud Dataflow
C. Cloud Functions
D. Cloud Composer
Correct Answer: D
Q148
You work for a shipping company that has distribution centers
where packages move on delivery lines to route them properly.
The company wants to add cameras to the delivery lines to detect
and track any visual damage to the packages in transit. You need
to create a way to automate the detection of damaged packages
and flag them for human review in real time while the packages
are in transit. Which solution should you choose?
Correct Answer: B
Q149
You are migrating your data warehouse to BigQuery. You have
migrated all of your data into tables in a dataset. Multiple users
from your organization will be using the data. They should only
see certain tables based on their team membership. How should
you set user permissions?
Correct Answer: A
Q150
You want to build a managed Hadoop system as your data lake.
The data transformation process is composed of a series of
Hadoop jobs executed in sequence. To accomplish the design of
separating storage from compute, you decided to use the Cloud
Storage connector to store all input data, output data, and
intermediary data. However, you noticed that one Hadoop job
runs very slowly with Cloud Dataproc, when compared with the
on-premises bare-metal Hadoop environment (8-core nodes with
100-GB RAM). Analysis shows that this particular Hadoop job is
disk I/O intensive. You want to resolve the issue. What should you
do?
Correct Answer: B
Q151
You work for an advertising company, and you've developed a
Spark ML model to predict click-through rates at advertisement
blocks. You've been developing everything at your on-premises
data center, and now your company is migrating to Google Cloud.
Your data center will be closing soon, so a rapid lift-and-shift
migration is necessary. However, the data you've been using will
be migrated to BigQuery. You periodically retrain your
Spark ML models, so you need to migrate existing training
pipelines to Google Cloud. What should you do?
Correct Answer: C
Q152
You work for a global shipping company. You want to train a
model on 40 TB of data to predict which ships in each geographic
region are likely to cause delivery delays on any given day. The
model will be based on multiple attributes collected from multiple
sources. Telemetry data, including location in GeoJSON format,
will be pulled from each ship and loaded every hour. You want to
have a dashboard that shows how many and which ships are
likely to cause delays within a region. You want to use a storage
solution that has native functionality for prediction and geospatial
processing. Which storage solution should you use?
A. BigQuery
B. Cloud Bigtable
C. Cloud Datastore
D. Cloud SQL for PostgreSQL
Correct Answer: A
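BigQuery is the listed option that combines built-in prediction (BigQuery ML) with native geospatial functions (BigQuery GIS). A minimal illustration of the geospatial side, with all table and column names assumed:
-- Turn the ship coordinates into GEOGRAPHY values and count ships
-- reporting from inside each region's polygon over the last hour.
SELECT
  r.region_name,
  COUNT(DISTINCT s.ship_id) AS ships_in_region
FROM mydataset.ship_telemetry AS s
JOIN mydataset.regions AS r
  ON ST_CONTAINS(r.region_geog, ST_GEOGPOINT(s.longitude, s.latitude))
WHERE s.report_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY r.region_name;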
Q153
You operate an IoT pipeline built around Apache Kafka that
normally receives around 5000 messages per second. You want
to use Google Cloud Platform to create an alert as soon as the
moving average over 1 hour drops below 4000 messages per
second. What should you do?
Correct Answer: A
Q154
You plan to deploy Cloud SQL using MySQL. You need to ensure
high availability in the event of a zone failure. What should you
do?
Correct Answer: A
Q155
Your company is selecting a system to centralize data ingestion
and delivery. You are considering messaging and data integration
systems to address the requirements. The key requirements are:
✑ The ability to seek to a particular offset in a topic, possibly back
to the start of all data ever captured
✑ Support for publish/subscribe semantics on hundreds of topics
✑ Retain per-key ordering
Which system should you choose?
A. Apache Kafka
B. Cloud Storage
C. Cloud Pub/Sub
D. Firebase Cloud Messaging
Correct Answer: A
Q156
You are planning to migrate your current on-premises Apache
Hadoop deployment to the cloud. You need to ensure that the
deployment is as fault-tolerant and cost-effective as possible for
long-running batch jobs. You want to use a managed service.
What should you do?
Correct Answer: A
Need to check
Q157
Your team is working on a binary classification problem. You have
trained a support vector machine (SVM) classifier with default
parameters, and received an area under the curve (AUC) of 0.87
on the validation set. You want to increase the AUC of the model.
What should you do?
Correct Answer: D
Need to check
Q158
You need to deploy additional dependencies to all nodes of a Cloud
Dataproc cluster at startup using an existing initialization action.
Company security policies require that Cloud Dataproc nodes do
not have access to the Internet so public initialization actions
cannot fetch resources. What should you do?
Correct Answer: C
Q159
You need to choose a database for a new project that has the
following requirements:
✑ Fully managed
✑ Able to automatically scale up
✑ Transactionally consistent
✑ Able to scale up to 6 TB
✑ Able to be queried using SQL
Which database do you choose?
A. Cloud SQL
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud Datastore
Correct Answer: A
Q160
You work for a mid-sized enterprise that needs to move its
operational system transaction data from an on-premises
database to GCP. The database is about 20
TB in size. Which database should you choose?
A. Cloud SQL
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud Datastore
Correct Answer: A
Q161
You need to choose a database to store time series CPU and
memory usage for millions of computers. You need to store this
data in one-second interval samples. Analysts will be performing
real-time, ad hoc analytics against the database. You want to
avoid being charged for every query executed and ensure that the
schema design will allow for future growth of the dataset. Which
database and data model should you choose?
Correct Answer: C
Q162
You want to archive data in Cloud Storage. Because some data is
very sensitive, you want to use the "Trust No One" (TNO)
approach to encrypt your data to prevent the cloud provider staff
from decrypting your data. What should you do?
Correct Answer: B
Need to check
Q163
You have data pipelines running on BigQuery, Cloud Dataflow,
and Cloud Dataproc. You need to perform health checks and
monitor their behavior, and then notify the team managing the
pipelines if they fail. You also need to be able to work across
multiple projects. Your preference is to use managed products or
features of the platform. What should you do?
Correct Answer: A
Q164
Suppose you have a table that includes a nested column called
"city" inside a column called "person", but when you try to submit
the following query in BigQuery, it gives you an error.
SELECT person FROM `project1.example.table1` WHERE city =
"London"
How would you correct the error?
Correct Answer: A
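Depending on whether person is stored as a single STRUCT or as a REPEATED record, the query is typically corrected in one of these two ways (both shown as illustrations):
-- If person is a non-repeated STRUCT, qualify the nested field:
SELECT person
FROM `project1.example.table1`
WHERE person.city = "London";
-- If person is a REPEATED record, flatten it with UNNEST before filtering:
SELECT person
FROM `project1.example.table1`, UNNEST(person) AS p
WHERE p.city = "London";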
Q165
What are two of the benefits of using denormalized data
structures in BigQuery?
Correct Answer: B
Q166
Which of these statements about exporting data from BigQuery is
false?
Correct Answer: C
Q167
What are all of the BigQuery operations that Google charges for?
Correct Answer: A
Google charges for storage, queries, and streaming inserts.
Loading data from a file and exporting data are free operations.
Reference: https://cloud.google.com/bigquery/pricing
Q168
Which of the following is not possible using primitive roles?
Correct Answer: A
Q169
Which of these statements about BigQuery caching is true?
Correct Answer: D
When query results are retrieved from a cached results table, you
are not charged for the query.
BigQuery caches query results for 24 hours, not 48 hours.
Query results are not cached if you specify a destination table.
A query's results are always cached except under certain
conditions, such as if you specify a destination table.
Reference:
https://cloud.google.com/bigquery/querying-data#query-caching
Q170
Which of these sources can you not load data into BigQuery
from?
A. File upload
B. Google Drive
C. Google Cloud Storage
D. Google Cloud SQL
Correct Answer: D