Machine Learning Engineer Answers
2 - c - Define the optimal route as the shortest route that passes by all shuttle stations with confirmed
attendance at the given time under capacity constraints
3 - c - Downsample the data with upweighting to create a sample with 10% positive examples
4 - d - Ingest your data into BigQuery using BigQuery Load, convert your PySpark commands into
BigQuery SQL queries to transform the data, and write the transformations to a new table
5 - a - Use the AI Platform custom containers feature to receive training jobs using any framework
6 - b - Extend your dataset with images of the newer products when they are introduced to retraining
7 - a - Configure AutoML Tables to perform the classification task
8 - a - Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your
model
9 - c - Use Cloud Build linked with Cloud Source Repositories to trigger retraining when new code is
pushed to the repository
10 - c - Categorical cross entropy
11 - b - Use the Frequently Bought Together recommendation type to increase the shopping cart size for
each order
12 - c - 1 = AI Platform, 2 = AI Platform, 3 = Cloud Natural Language API
13 - c - Run a hyperparameter tuning job on AI Platform to optimize for the L2 regularization and
dropout parameters
14 - b - Lack of model retraining
15 - d - Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data
API to read the images for training
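A minimal sketch of the TFRecord-plus-tf.data reading pattern this answer describes; the Cloud Storage path and feature schema are assumptions, not values from the question.

```python
import tensorflow as tf

# Assumed shard pattern and feature layout; adjust to the real dataset.
FILE_PATTERN = "gs://my-bucket/images/train-*.tfrecord"
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    # Decode one serialized tf.train.Example into an (image, label) pair.
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    return image, parsed["label"]

files = tf.data.Dataset.list_files(FILE_PATTERN)
dataset = (
    tf.data.TFRecordDataset(files)
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
```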
16 - c - Recurrent Neural Networks (RNN)
17 - d - Create three buckets of data: Quarantine, Sensitive, and Non-Sensitive. Write all data to the
Quarantine bucket. Periodically, conduct a bulk scan of that bucket using the DLP API, and move the
data to either the Sensitive or Non-Sensitive bucket
18 - d - Submit the data for training without performing any manual transformations. Use the columns that
have a signal to manually split your data. Ensure that the data in your validation set is from 30 days after
the data in your training set, and the data in your testing set is from 30 days after your validation set
19 - b - Using Cloud Build, set an automated trigger to execute the unit tests when new changes are
pushed to your development branch
20 - b - Modify the scale tier parameter
21 - d - Compare the mean average precision across the models using the continuous evaluation feature
22 - d - data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b'], ['c', 'd'], ['e',
'f']]})
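One way such a payload could be sent, shown here against a TensorFlow Serving-style REST endpoint; the localhost URL and model name are placeholders, and a real AI Platform online prediction call would target an authenticated endpoint instead.

```python
import json
import requests

data = json.dumps({"signature_name": "serving_default",
                   "instances": [["a", "b"], ["c", "d"], ["e", "f"]]})

# Placeholder endpoint; swap in the real prediction URL and auth headers.
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=data,
    headers={"Content-Type": "application/json"},
)
predictions = response.json()["predictions"]
```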
23 - a - 1 = Dataflow / 2 = BigQuery
24 - c - Build a collaborative-based filtering model
25 - b - Decreasing recall will increase precision
26 - d - Cloud Data Fusion
27 - b - Traceability, reproducibility, and explainability
28 - a, d - Use the interleave option for reading data / Set the prefetch option equal to the training batch size
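A short tf.data sketch of the two options named above; the file pattern and batch size are assumptions.

```python
import tensorflow as tf

BATCH_SIZE = 64  # assumed training batch size

files = tf.data.Dataset.list_files("gs://my-bucket/data/part-*.tfrecord")
dataset = (
    files.interleave(                 # read several shards in parallel
        tf.data.TFRecordDataset,
        cycle_length=8,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    .batch(BATCH_SIZE)
    .prefetch(BATCH_SIZE)             # prefetch buffer set equal to the batch size, per the answer
)
```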
29 - b - Send incoming prediction requests to a Pub/Sub topic. Transform the incoming data using
Dataflow. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an
outbound Pub/Sub queue
30 - a - Create alerts to monitor for skew and retrain the model
31 - b - Reduce the batch size
32 - d - Recompile TensorFlow Serving using the source to support CPU-specific optimizations. Instruct
GKE to choose an appropriate baseline minimum CPU platform for serving nodes
33 - b - Translate the normalization algorithm into SQL for use with BigQuery
34 - d - Create an experiment in Kubeflow Pipelines to organize multiple runs
35 - d - Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery component, copy that
component's URL, and use it to load the component into your pipeline. Use the component to execute queries
against BigQuery
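A hedged sketch of loading a prebuilt component by URL with the Kubeflow Pipelines SDK (v1); the component URL, project, and query below are illustrative only.

```python
import kfp
from kfp import components

# Illustrative URL pointing at the BigQuery query component in the kubeflow/pipelines repo.
BIGQUERY_COMPONENT_URL = (
    "https://raw.githubusercontent.com/kubeflow/pipelines/master/"
    "components/gcp/bigquery/query/component.yaml"
)
bigquery_query_op = components.load_component_from_url(BIGQUERY_COMPONENT_URL)

@kfp.dsl.pipeline(name="bq-query-pipeline")
def pipeline():
    # Placeholder query and project ID.
    bigquery_query_op(
        query="SELECT * FROM `my_project.my_dataset.my_table`",
        project_id="my_project",
    )
```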
36 - b - Split the training and test data based on time rather than a random split to avoid leakage
37 - a - Use AI Platform for distributed training
38 - a - Export the model to BigQuery ML
39 - c - Configure a Cloud Storage trigger to send a message to a Pub/Sub topic when a new file is
available in a storage bucket. Use a Pub/Sub-triggered Cloud Function to start the training job on a GKE
cluster
40 - c, e - Set the early stopping parameter to TRUE / Decrease the maximum number of trials during
subsequent training phases
41 - d - 1. Build a notification system on Firebase. 2. Register each user with a user ID on the Firebase
Cloud Messaging server, which sends a notification when your model predicts that a user's account
balance will drop below the $25 threshold.
42 - c - One feature obtained as an element-wise product between binned latitude, binned longitude, and
one-hot encoded car type.
43 - c - One feature obtained as an element-wise product between binned latitude, binned longitude, and
one-hot encoded car type.
44 - b - Use AutoML Natural Language to extract custom entities for classification.
45 - c - Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.
46 - a - Use the batch prediction functionality of AI Platform
47 - a - Use Data Catalog to search the BigQuery datasets by using keywords in the table description
48 - b - Address data leakage by applying nested cross-validation during model training.
49 - c - Embed the client on the website, deploy the gateway on App Engine, deploy the database on
Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI
Platform Prediction.
50 - c - A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed.
51 - c - Use labels to organize resources into descriptive categories. Apply a label to each created
resource so that users can filter the results by label when viewing or monitoring the resources.
52 - b - Ensure that the required GPU is available in the selected region.
53 - b - Distribute authors randomly across the train-test-eval subsets: (*) Train set: [TextA1, TextA2,
TextD1, TextD2, ...] Test set: [TextB1, TextB2, ...] Eval set: [TextC1, TextC2, ...]
54 - c - Use an established text classification model on AI Platform to perform transfer learning.
55 - c - Ensure that the model performance is monitored
56 - c - An optimization objective that maximizes the area under the precision-recall curve (AUC PR)
value
57 - c - The model predicts 95% of the most popular videos measured by watch time within 30 days of
being uploaded
58 - b - Use the representation transformation (normalization) technique
59 - a - Use Kubeflow Pipelines to execute the experiments. Export the metrics file, and query the results
using the Kubeflow Pipelines API
60 - c - Oversample the fraudulent transactions 10 times
61 - d - An AI Platform Training job using a custom scale tier with 4 V100 GPUs and Cloud Storage
62 - d - Add an additional class to categorical feature A for missing values. Create a new binary feature
that indicates whether feature A is missing
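A minimal pandas sketch of that answer, using toy data: add a "missing" category and a binary indicator column.

```python
import pandas as pd

# Toy column standing in for categorical feature A.
df = pd.DataFrame({"feature_A": ["red", None, "blue", None]})

df["feature_A_missing"] = df["feature_A"].isna().astype(int)  # new binary indicator
df["feature_A"] = df["feature_A"].fillna("MISSING")           # extra class for missing values
```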
63 - a - Create a k-means clustering model using BigQuery ML. Allow BigQuery to automatically optimize
the number of clusters
64 - c - Build your custom containers to run distributed training jobs on AI Platform training
65 - d - Add parallel interleave to the pipeline
66 - b - Run a hyperparameter tuning job on AI Platform using custom containers
67 - b - Use AutoML Natural Language to build and test a classifier. Deploy the model as a REST API
68 - a - AutoML Natural Language
69 - a - Use AutoML to optimize the model's recall in order to minimize false negatives
70 - c - Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs with
checkpoints
71 - b - Use BQML XGBoost regression to train the model
72 - b - Use L1 regularization to reduce the coefficients of uninformative features to 0
73 - a - Use the TFX Model Validator tools to specify performance metrics for production readiness
74 - b - Send the request again with a smaller batch of instances
75 - b - Configure sampled Shapley explanations on Vertex Explainable AI
76 - b - Address data leakage by applying nested cross-validation during model
77 - a - Import the TensorFlow model with BigQuery ML, and run the ML.PREDICT function
78 - b - Convert the categorical string data to one-hot hash buckets
79 - b - Identify word embedding from a pre-trained model, and use the embeddings in your model
80 - b - Embed the client on the website, deploy the gateway on App Engine, deploy the database on
Firestore for writing and for reading the user's navigation context, and then deploy the model on AI
Platform Prediction
81 - b - Vertex AI Pipeline, Vertex AI Prediction, and Vertex AI Model Monitoring
82 - c - Split into multiple CSV files and use a parallel interleave transformation
83 - b - Events are sent by the sensor to Pub/Sub, consumed in real time, and processed by a Dataflow
stream processing pipeline. The pipeline invokes the model for prediction and sends the predictions to
another Pub/Sub topic. Pub/Sub messages containing predictions are consumed by a downstream system
for monitoring
84 - b - Encode all articles into vectors using word2vec, and build a model that returns articles based on
vector similarity
85 - d - Precision and recall estimates based on a sample of messages flagged by the model as
potentially inappropriate each minute
86 - d - Manage your ML workflows with Vertex ML Metadata
87 - a - Use BigQuery ML to run several regression models, and analyze their performance
88 - a - Use local feature importance from predictions
89 - c - Use the AI Explanations feature on AI Platform. Submit each prediction request with the explain
keyword to retrieve feature attributions using the sampled Shapley method
90 - a - F score where recall is weighed more than precision
91 - c - Use product type and the feature cross of latitude with longitude, followed by binning as features.
Use profit as model output
92 - a - Train a model using AutoML Vision and use the “export Core ML” option
93 - a - Use Vertex AI Workbench user-managed notebooks to generate the report
94 - c - Develop a simple heuristic (e.g., based on score) to label the machines' historical performance data.
Test this heuristic in a production environment
95 - c - Use Vertex AI Pipelines with the Kubeflow Pipelines SDK
96 - b - Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job
97 - d - Develop a regression model using BigQuery ML
98 - c - Train your models with DLVM images on Vertex AI, and ensure that your code utilizes NumPy and
SciPy internal methods whenever possible
99 - d - Store the performance statistics of each version of your models using seasons and years as
events in Vertex ML Metadata. Compare the results across the slices
100 - d - Convolutional neural networks
101 - d - Configure your model to use bfloat16 instead of float32
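A sketch of switching Keras training to bfloat16 compute via mixed precision, assuming hardware that supports it (e.g., TPUs); the model itself is a placeholder.

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, dtype="float32"),  # keep the output layer in float32 for stability
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```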
102 - b - Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours
103 - d - Increase batch size (to decrease time)
104 - b - Train a classifier using chat messages in their original language
105 - a - Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery,
and push the data to Cloud SQL
106 - d - Use Dataprep to transform the state column using the one-hot encoding method, and make each city a
column with binary values
107 - b - Federated Learning
108 - c - The tables that you created to hold your training data and validation records share some records,
and you may not be using all the data in your initial table
109 - b - Decrease the learning rate hyperparameter
110 - b - AutoML Vision Edge mobile-low-latency-1 model
111 - b - Train a classification Vertex AutoML model
112 - a - Convert the speech to text and extract sentiments based on the sentences
113 - a - Configure Pub/Sub to stream the data into BigQuery
114 - c - User engagement as measured by the number of battles played daily per user
115 - d - Normalize the data by scaling it to have values between 0 and 1
116 - d - A cluster with 4 n1-highcpu-96 machines, each with 96 vCPUs and 86 GB RAM
117 - c - Use a time series forecasting model to predict each item's monthly sales. Give the results to the
logistics team so they base inventory on the amount predicted by the model
118 - d - A Vertex AI workbench user-managed notebook instance running on a n1-standard-16 with a
preemptible v3-8 TPU
119 - b - New problematic phrases can be identified in spam posts
120 - a - Use TensorFlow Data Validation to detect and flag schema anomalies
121 - b - Launch the product without machine learning. Use a simple heuristic based on content metadata
to recommend similar videos to users, and start collecting user event data so you can develop a
recommendation model in the future
122 - a - The model is overfitting in areas with less traffic and underfitting in areas with more traffic
123 - c - Predict the missing values using linear regression
124 - b - Create the pipeline using TensorFlow Extended (TFX) and standard TFX components. Orchestrate
the pipeline using Vertex AI Pipelines
125 - a - Use Vertex AI Training to submit training jobs using any framework
126 - d - Rewrite your input function using parallel reads, parallel processing, and prefetching
127 - c - Replace the missing values with a placeholder category indicating a missing value
128 - b - Develop an image segmentation ML model to locate the boundaries of the rust spots
129 - b - Write the code as a TensorFlow Extended (TFX) pipeline orchestrated with Vertex AI Pipelines. Use
standard TFX components for data validation and model analysis, and use Vertex AI Pipelines for model
retraining
130 - c - Deploy the model to a Vertex AI endpoint, and invoke this endpoint in the Dataflow job
131 - d - Use TensorFlow I/O's BigQuery Reader to directly read the data
132 - d - Use TensorFlow I/O's BigQuery Reader to directly read the data
133 - b - Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate,
and a small number of parallel trials
134 - a - Use the func_to_container_op function to create custom components from the Python code
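A small sketch with the Kubeflow Pipelines v1 SDK showing the call named in the answer; the wrapped function and base image are placeholders.

```python
from kfp.components import func_to_container_op

def preprocess(rows: int) -> int:
    """Placeholder preprocessing step."""
    return rows * 2

# Wrap the plain Python function so it can run as a containerized pipeline component.
preprocess_op = func_to_container_op(preprocess, base_image="python:3.9")
```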
135 - a - Embed the augmentation functions dynamically in the tf.data pipeline
136 - c - Schedule a weekly query in BigQuery to compute success metrics
137 - d - Run training/serving skew detection batch jobs every few days to compare aggregate statistics of
features in the training dataset with recent serving data. If skew is detected, send the most recent serving
data to the labeling service
138 - d - Convert your model with TensorFlow Lite and add it to the mobile app so that the promo code and
the incoming request arrive together in Pub/Sub
139 - b - Develop a simple heuristic (e.g., based on z-score) to label the machines' historical performance
data. Use this heuristic to monitor server performance in real time
140 - a - Tokenize all of the fields using hashed dummy values to replace the real values
141 - c - This is a good result because predicting those who cancel their subscription is more difficult, since
there is less data for this group
142 - c - Add a container op to your pipeline that spins up a Dataproc cluster, runs a transformation, and
then saves the data in Cloud Storage
143 - a - Deploy the models to a Vertex AI endpoint using the traffic split configuration "0" = 80, previous model ID =
20
144 - c - Package your code with setuptools, and use a pre-built container. Train your model with Vertex
AI using a custom tier that contains the required GPUs
145 - b - Apply quantization to your saved model by reducing the floating-point precision to tf.float16
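A sketch of post-training float16 quantization with the TFLite converter; the SavedModel path is illustrative.

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16

tflite_model = converter.convert()
with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```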
146 - c - Use BigQuery to calculate the descriptive statistics. Use Vertex AI Workbench user-managed
notebooks to visualize the time plots and run the statistical analyses
147 - a - Use Vertex AI Pipelines to execute the experiments. Query the results stored in MetadataStore
using the Vertex AI API
148 - b - Use the Cloud Data Loss Prevention (DLP) API to scan for sensitive data, and use Dataflow with the
DLP API to encrypt sensitive values with Format Preserving Encryption
149 - b, d - Add an additional objective to penalize the model more for errors made on the minority class, and
retrain the model / Upsample or reweight your existing training data, and retrain the model
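A minimal Keras sketch of the reweighting idea: penalize errors on the minority (fraud) class more heavily via class weights. The data and the 10x weight ratio are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

x_train = np.random.rand(200, 4).astype("float32")        # toy features
y_train = (np.random.rand(200) > 0.9).astype("float32")   # ~10% positive (minority) class

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Weight errors on the minority class 10x more than on the majority class.
model.fit(x_train, y_train, epochs=2, class_weight={0: 1.0, 1: 10.0})
```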
150 - d - F1 Score
151 - a - Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on
Dataflow
152 - c - Apply one-hot encoding on the categorical variables in the test data
153 - c - Oversample the fraudulent transactions 10 times
154 - a - F1 Score
155 - d - Use the tf.distribute.Strategy API and run a distributed training job
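A sketch of the tf.distribute.Strategy API with MirroredStrategy (single host, multiple GPUs assumed); the model is a placeholder.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then runs the training step across all visible GPUs.
```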
156 - b - Train a classification Vertex AutoML model
157 - a - Verify that your model can obtain a low loss on a small subset of the dataset
158 - b - Develop a regression model using BigQueryML
159 - d - Raise the threshold for comments to be considered toxic or harmful
160 - b - Use Vertex Explainable AI. Submit each prediction request with the explain keyword to retrieve
feature attributions using the sampled Shapley method
161 - c - The model with the highest recall where precision is greater than 0.5
162 - b - Train your model using Vertex AI training with CPUs
163 - b - Use a low-latency database for the customers' historical purchase behavior
164 - b - Turn off auto-scaling for the online prediction service of your new model. Use manual scaling
with one node always available
165 - a - Write a query that preprocesses the data by using BigQuery, and create a new table. Create a
Vertex AI managed dataset with the new table as the data source
166 - d - Trigger GitHub Actions to run the tests, launch a Cloud Build workflow to build custom Docker
images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines
167 - b - Use the ML.ONE_HOT_ENCODER function on the categorical features and select the encoded
categorical features and non-categorical features as inputs to create your model.
168 - c - Import the labeled images as a managed dataset in Vertex AI and use AutoML to train the model
169 - b, c - Decrease the score threshold / Add more positive examples to the training set
170 - b - Deploy an online Vertex AI prediction endpoint. Set the max replica count to 100
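A hedged sketch of that deployment with the Vertex AI SDK; the project, region, machine type, and model resource name are placeholders.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=100,  # cap autoscaling at 100 replicas, as in the answer
)
```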
171 - d - Configure an n1-standard-4 VM with NVIDIA P100 GPUs. SSH into the VM and use
MultiWorkerMirroredStrategy to train the model
172 - d - 1. Create an experiment in Vertex AI Experiments.
2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the
pipeline’s parameters to include those you are investigating.
3. Submit multiple runs to the same experiment, using different values for the parameters.
173 - b - Update the model monitoring job to use the more recent training data that was used to retrain
the model.
174 - d - Use the features and the feature attributions for monitoring. Set a prediction-sampling-rate value
that is closer to 0 than 1.
175 - b - 1. Wrap your model in a custom prediction routine (CPR), and build a container image from the
CPR local model.
2. Upload your scikit-learn model container to Vertex AI Model Registry.
3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job
176 - b - Write SQL queries to transform the data in-place in BigQuery.
177 - d - Enable caching for the pipeline job, and disable caching for the model training step.
178 - c - Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by
using sampled Shapley with input baselines.
179 - c - Use the Predictor interface to implement a custom prediction routine. Build the custom container,
upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.
180 - c - Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts
into a pre-production environment. After a successful pipeline run in the pre-production environment,
deploy the pipeline to production.
181 - a - Enable VPC Service Controls for peerings, and add Vertex AI to a service perimeter.
182 - c - 1. Create a new model. Set the parentModel parameter to the model ID of the currently deployed
model. Upload the model to Vertex AI Model Registry.
2. Deploy the new model to the existing endpoint, and set the new model to 100% of the traffic
183 - d - Increase the batch size
184 - c - Use Vertex AI chronological split, and specify the sales timestamp feature as the time variable
185 - d - 1. Create a Vertex AI Model Monitoring job configured to monitor training/serving skew
2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is
detected
3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery
186 - c - Use the Kubeflow pipelines SDK to write code that specifies two components:
- The first is a Dataproc Serverless component that launches the feature engineering job
- The second is a custom component wrapped in the create_custom_training_job_from_component utility
that launches the custom model training job
Create a Vertex AI Pipelines job to link and run both components
187 - b - Configure an appropriate minReplicaCount value based on expected baseline traffic
188 - a - 1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint
2. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective, and
provide an instance schema
189 - b - Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply
the historical data to the TensorFlow model
190 - c - Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job
191 - b - Create a batch prediction job by using the actual sales data, and configure the job settings to
generate feature attributions. Compare the results in the report.
192 - d - Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud
Function when feature drift is detected
193 - b - 1. Upload the audio files to Cloud Storage.
2. Call the speech:longrunningrecognize API endpoint to generate transcriptions
3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method
194 - b - Train the model by using AutoML Edge, and export it as a Core ML model. Configure your
mobile application to use the .mlmodel file directly
195 - b - Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases.
Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the “explain” method to get
feature attribution values for each individual prediction
196 - a - Use the Vertex AI Vision Occupancy Analytics model
197 - c - Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models
198 - a - Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use
tf.distribute.MultiWorkerMirroredStrategy for distribution.
199 - d - 1. Use Vertex AI Experiments to train your model.
2. Register your model in Vertex AI Model Registry.
3. Generate batch predictions in Vertex AI.
200 - c - Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF
file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores.
201 - a - Change the components’ YAML filenames to export.yaml, preprocess.yaml, f"train-
{dt}.yaml", f"calibrate-{dt}.yaml".
202 - c - Create a Standard (1 master, 3 workers) Dataproc cluster, and run a Vertex AI Workbench
notebook instance on it.
203 - c - Vertex ML Metadata, Vertex AI Experiments, and Vertex AI TensorBoard
204 - b - Train an object detection model in AutoML by using the annotated image data.
205 - a - Use the Cloud Data Loss Prevention (DLP) API to de-identify the PII before performing data
exploration and preprocessing.
206 - c - Use TensorFlow to create a deep learning-based model, and use Integrated Gradients to explain
the model output
207 - d - Create a Vertex AI tabular dataset. Train a Vertex AI AutoML Forecasting model, with number of
beds as the target variable, number of scheduled surgeries as a covariate and date as the time variable
208 - a - Use the Kubeflow Pipelines SDK to implement the pipeline. Use the BigQueryJobOp component
to run the preprocessing script and the CustomTrainingJobOp component to launch a Vertex AI training
job.
209 - b - Configure the machines of the first two worker pools to have GPUs and to use a container image
where your training code runs. Configure the third worker pool to use the reductionserver container image
without accelerators, and choose a machine type that prioritizes bandwidth
210 - b - Refactor the transformation code in the batch data pipeline so that it can be used outside of the
pipeline. Use the same code in the endpoint
211 - d - Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using
TensorFlow Transform and Dataflow.
212 - b - Implement a TPU Pod slice with --accelerator-type=v4-128 by using tf.distribute.TPUStrategy.
213 - d - Use the TensorFlow Extended (TFX) SDK to create multiple components that use Dataflow and
Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
214 - a - 1. Specify sampled Shapley as the explanation method with a path count of 5.
2. Deploy the model to Vertex AI Endpoints.
3. Create a Model Monitoring job that uses prediction drift as the monitoring objective.
215 - c - Use BigQuery ML to build a statistical ARIMA_PLUS model.
216 - d - TextDatasetCreateOp, CustomTrainingJobOp, and ModelDeployOp
217 - b - Configure a Cloud Build trigger with the event set as "Push to a branch"
218 - d - Decrease the CPU utilization target in the autoscaling configurations
219 - d - Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model
and serves predictions.
220 - c - 1. Create a new service account and grant it the Vertex AI User role
2. Grant the Service Account User role to each team member on the service account
3. Grant the Notebook Viewer role to each team member.
4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account
221 - c - Use AutoML Entity Extraction to train a medical entity extraction model
222 - b - Keep the training dataset as is. Deploy both models to the same endpoint and submit a Vertex AI
Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and
feature selections
223 - d - Download the weather data each week, and download the flu data each month. Deploy the
model to a Vertex AI endpoint with feature drift monitoring, and retrain the model if a monitoring alert is
detected.
224 - c - Store parameters in Vertex ML Metadata, store the models’ source code in GitHub, and store the
models’ binaries in Cloud Storage.
225 - d - Define a fairness metric that is represented by accuracy across the sensitive features. Train a
BigQuery ML boosted trees classification model with all features. Use the trained model to make
predictions on a test set. Join the data back with the sensitive features, and calculate a fairness metric to
investigate whether it meets your requirements
226 - d - Build a custom predictor class based on XGBoost Predictor from the Vertex AI SDK, and
package the handler in a custom container image based on a Vertex built-in container image. Store a
pickled model in Cloud Storage, and deploy the model to Vertex AI Endpoints.
227 - a - Use Vertex Explainable AI with the sampled Shapley method, and enable Vertex AI Model
Monitoring to check for feature distribution drift
228 - b - Use the lineage feature of Vertex AI Metadata to find the model artifact. Determine the version of
the model and identify the step that creates the data copy and search in the metadata for its location.
229 - d - Configure example-based explanations. Specify the embedding output layer to be used for the
latent space representation.
230 - d - Vertex AI Pipelines, Vertex AI Experiments, and Vertex AI Metadata
231 - c - Perform preprocessing in BigQuery by using SQL. Use the BigQueryClient in TensorFlow to
read the data directly from BigQuery.
232 - a - 1. Create a Dataflow job that creates sharded TFRecord files in a Cloud Storage directory.
2. Reference tf.data.TFRecordDataset in the training script.
3. Train the model by using Vertex AI Training with a V100 GPU.
233 - b - DataflowPythonJobOp, WaitGcpResourcesOp, and CustomTrainingJobOp
234 - a - Import the new model to the same Vertex AI Model Registry as a different version of the existing
model. Deploy the new model to the same Vertex AI endpoint as the existing model, and use traffic
splitting to route 95% of production traffic to the BigQuery ML model and 5% of production traffic to the
new model.
235 - d - 1. Use Vertex Explainable AI to generate feature attributions. Aggregate feature attributions over
the entire dataset.
2. Analyze the aggregation result together with the standard model evaluation metrics.
236 - b - Create a logistic regression model in BigQuery ML and register the model in Vertex AI Model
Registry. Evaluate the model performance in Vertex AI
237 - d - Develop the model training code for image classification, and train a model by using Vertex AI
custom training.
238 - b - Increase the number of workers in your model server
239 - a - Send user-submitted images to the Cloud Vision API. Use object localization to identify all
objects in the image and compare the results against a list of animals.
240 - d - Import the model into Vertex AI Model Registry. Create a Vertex AI endpoint that hosts the
model, and make online inference requests.
241 - c - Add a component to the Vertex AI pipeline that logs metrics to Vertex ML Metadata. Use Vertex
AI Experiments to compare different executions of the pipeline. Use Vertex AI TensorBoard to visualize
metrics
242 - b - Create a Vertex AI experiment. Submit all the pipelines as experiment runs. For models trained
in notebooks, log parameters and metrics by using the Vertex AI SDK.
243 - a - Set up Vertex AI Experiments to track metrics and parameters. Configure Vertex AI
TensorBoard for visualization.
244 - d - Deploy a Dataflow streaming pipeline with the RunInference API, and use automatic model
refresh.
245 - d - Compare the results to the evaluation results from a previous run. If the performance improved,
deploy the model to a Vertex AI endpoint with training/serving skew threshold model monitoring. When
the model monitoring threshold is triggered, redeploy the pipeline.
246 - c - Alter the model by using BigQuery ML, and specify Vertex AI as the model registry. Deploy the
model from Vertex AI Model Registry to a Vertex AI endpoint.
247 - d - Create a Vertex AI Model Monitoring job. Enable feature attribution skew and drift detection for
your model.
248 - d - Pull the Docker image locally, and use the docker run command to launch it locally. Use the
docker logs command to explore the error logs
249 - c - Convert the images to TFRecords and store them in a Cloud Storage bucket. Read the
TFRecords by using the tf.data.TFRecordDataset function.
250 - c - Prepare the data in BigQuery and associate the data with a Vertex AI dataset. Create an
AutoMLTabularTrainingJob to train a classification model.
251 - a - 1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation
sets, and add another component for feature engineering.
2. Enable autologging of metrics in the training component.
3. Compare pipeline runs in Vertex AI Experiments.
252 - b - Run the CREATE MODEL statement from the BigQuery console to create an AutoML model.
Validate the results by using the ML.EVALUATE and ML.PREDICT statements.
253 - b - Store features in Vertex AI Feature Store.
254 - d - Install the NLTK library from a Jupyter cell by using the !pip install nltk --user command.
255 - c - Import the model into Vertex AI. On Vertex AI Pipelines, create a pipeline that uses the
DataflowPythonJobOp and the ModelBatchPredictOp components.
256 - c - 1. Maintain the same machine type on the endpoint. Configure the endpoint to enable autoscaling
based on vCPU usage.
2. Set up a monitoring job and an alert for CPU usage.
3. If you receive an alert, investigate the cause.
257 - d - Use a prebuilt XGBoost Vertex container to create a model, and deploy it to Vertex AI Endpoints.
258 - c - Use AutoML Translation to train a model. Configure a Translation Hub project, and use the
trained model to translate the documents. Use human reviewers to evaluate the incorrect translations
259 - c - Expose each individual model as an endpoint in Vertex AI Endpoints. Use Cloud Run to
orchestrate the workflow.
260 - b - 1. Use the Vertex AI SDK to create an experiment and set up Vertex ML Metadata.
2. Use the log_time_series_metrics function to track the preprocessed data, and use the log_metrics
function to log loss values.
261 - b - Create a Vertex AI Workbench managed notebook to browse and query the tables directly from
the JupyterLab interface.
262 - b - Enable autoscaling of the online serving nodes in your featurestore
263 - c - 1. Use TFX components with Dataflow to encode the text features and scale the numerical
features.
2. Export results to Cloud Storage as TFRecords.
3. Feed the data into Vertex AI Training.
264 - d - Build a random forest classification model in a Vertex AI Workbench notebook instance.
Configure the model to generate feature importances after the model is trained.
265 - a - Create a text dataset on Vertex AI for entity extraction. Create two entities called “ingredient” and
“cookware”, and label at least 200 examples of each entity. Train an AutoML entity extraction model to
extract occurrences of these entity types. Evaluate performance on a holdout dataset.
266 - c - Deploy the new model to the existing Vertex AI endpoint. Use traffic splitting to send 5% of
production traffic to the new model. Monitor end-user metrics, such as listening time. If end-user metrics
improve between models over time, gradually increase the percentage of production traffic sent to the
new model.
267 - a - Use BigQuery’s scheduling service to run the model retraining query periodically.
268 - d - Use the aiplatform.log_metrics function to log the F1 score, and use the
aiplatform.log_classification_metrics function to log the confusion matrix.
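A short sketch of those two Vertex AI SDK calls inside an experiment run; the project, experiment name, and metric values are placeholders.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1", experiment="my-experiment")
aiplatform.start_run(run="run-1")

aiplatform.log_metrics({"f1_score": 0.87})  # scalar summary metric

aiplatform.log_classification_metrics(
    labels=["negative", "positive"],
    matrix=[[50, 5], [4, 41]],              # confusion matrix counts
    display_name="confusion-matrix",
)

aiplatform.end_run()
```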
269 - d, e - Collect a stratified sample of production traffic to build the training dataset / Conduct fairness
tests across sensitive categories and demographics on the trained model
270 - a - 1. Initialize the Vertex AI SDK with the name of your experiment. Log parameters and metrics for
each experiment, and attach dataset and model artifacts as inputs and outputs to each execution.
2. After a successful experiment, create a Vertex AI pipeline.
271 - c - Ensure that the Vertex AI Workbench instance is assigned the Identity and Access Management
(IAM) Vertex AI User role.
272 - a - Use Vertex AI Data Labeling Service to label the images, and train an AutoML image
classification model. Deploy the model, and configure Pub/Sub to publish a message when an image is
categorized into the failing class.
273 - a - Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding
API
274 - a - Use Vertex AI Model Monitoring. Enable prediction drift monitoring on the endpoint, and specify
a notification email.
275 - a - 1. Specify sampled Shapley as the explanation method with a path count of 5.
2. Deploy the model to Vertex AI Endpoints.
3. Create a Model Monitoring job that uses prediction drift as the monitoring objective.
276 - b - Load the data in BigQuery. Use BigQuery ML to train a matrix factorization model.
277 - d - Create another Vertex AI endpoint in the asia-southeast1 region, and allow the application to
choose the appropriate endpoint.
278 - a - Store the data in a Cloud Storage bucket, and create a custom container with your training
application. In your training application, read the data from Cloud Storage and train the model.
279 - c - Create a pipeline in Vertex AI Pipelines. Create a Cloud Function that uses a Cloud Storage
trigger and deploys the pipeline.
280 - b - Enable caching in all the steps of the Kubeflow pipeline.
281 - b - Ingest the Avro files into BigQuery to perform analytics. Use a Dataflow pipeline to create the
features, and store them in Vertex AI Feature Store for online prediction.
282 - b - Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API
with asynchronous recognition.
283 - c - Use the Document Translation feature of the Cloud Translation API to translate the documents.
284 - a - Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts
for each model, and use events to link them together.
285 - c - Create a Vertex AI hyperparameter tuning job.