GCP Professional Cloud Architect Practice Exam Part 2

… Modify application logic to check the key in the cache before querying Cloud SQL.
C. Use Memorystore for Memcached and set service level to shared. Create a key from the hash of the query. Modify application logic to check the key in the cache before querying Cloud SQL.
D. Use Memorystore for Memcached and set service level to dedicated. Use App Engine Cron Service to populate the cache with keys containing query results every minute. Modify application logic to check the key in the cache before querying Cloud SQL. If the key doesn’t exist, query Cloud SQL and populate cache before returning the result to the application.
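
For context, the pattern options C and D describe is cache-aside keyed on a hash of the query. Here is a minimal sketch, assuming a reachable Memorystore for Memcached endpoint and the pymemcache client; the endpoint address and the DB-API style `db_conn` object are placeholders.

```python
# Minimal cache-aside sketch; the Memcached address is a placeholder and
# db_conn is a stand-in DB-API connection to Cloud SQL.
import hashlib
import json

from pymemcache.client.base import Client

cache = Client(("10.0.0.3", 11211))  # hypothetical Memorystore endpoint


def run_query(sql, db_conn):
    """Check the cache first; on a miss, query Cloud SQL and populate it."""
    key = hashlib.sha256(sql.encode("utf-8")).hexdigest()  # key = hash of query
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip Cloud SQL entirely
    rows = [list(r) for r in db_conn.execute(sql).fetchall()]
    cache.set(key, json.dumps(rows).encode("utf-8"), expire=60)  # populate cache
    return rows
```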

  1. A mission-critical application has scaling issues, and your company has decided to migrate it from on-premises to GKE to fix this. When deployed to GKE, the application must serve requests over HTTPS and scale up and down based on traffic. What should you do?

A. Use Kubernetes Ingress Resource and enable Compute Engine Managed Instance Group (MIG) autoscaling.
B. Use Kubernetes Ingress Resource and enable GKE Cluster Autoscaling as well as Horizontal Pod Autoscaling.
C. Use Kubernetes Service of type LoadBalancer and enable GKE Cluster Autoscaling as well as Horizontal Pod Autoscaling.
D. Use Kubernetes Service of type LoadBalancer and enable Compute Engine Managed Instance Group (MIG) autoscaling.
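
As a rough illustration of the pod-level half of option B, a Horizontal Pod Autoscaler can be created with the official kubernetes Python client. The Deployment name "web" and the thresholds are assumptions, and cluster autoscaling itself is enabled on the GKE node pool rather than in this snippet.

```python
# Sketch: create an HPA targeting an assumed Deployment named "web".
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl credentials for the GKE cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,  # scale on average CPU load
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```

A Kubernetes Ingress resource in front of the Deployment then provisions the external HTTPS load balancer on GKE.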

  1. You are migrating an application to Google Cloud. The application relies on Microsoft SQL Server and, given the mission-critical nature of the workload, must have no downtime in the event of a zonal outage in GCP. How should you configure the database?

A. Migrate to a regional Cloud Spanner instance.
B. Migrate the SQL Server database onto two Google Compute Engine instances in different zones and enable SQL Server Always On availability groups with Windows failover clustering.
C. Migrate the SQL Server database onto two Google Compute Engine instances in different subnets and enable SQL Server Always On availability groups with Windows failover clustering.
D. Migrate to a high availability enabled Cloud SQL instance.

  1. A critical application recently suffered a regional outage, costing your company valuable revenue. You have been asked for a recommendation on improving the existing testing and disaster recovery processes and preventing such occurrences in Google Cloud. What should you recommend?

A. Automate provisioning of GCP services using custom gcloud scripts. Monitor and debug tests using Activity Logs.
B. Automate provisioning of GCP services using deployment manager templates. Monitor and debug tests using Activity Logs.
C. Automate provisioning of GCP services using custom gcloud scripts. Monitor and debug tests using Cloud Logging and Cloud Monitoring.
D. Automate provisioning of GCP services using deployment manager templates. Monitor and debug tests using Cloud Logging and Cloud Monitoring.

  1. Your company runs several successful mobile games from your on-premises data centre and plans to use GCP for machine learning to identify improvements and new opportunities. The existing games generate 10 TB of analytics data each day. Your company currently stores three months of analytics data (approx. 900 TB) in a highly available NAS in your data centre and needs to transfer this data to GCP as part of the initial data load, as well as the data generated daily. Your data centre is connected to the network by a 100 Mbps line. What should you do?

A. Use a transfer appliance to transfer archived analytics data. Work with your telco partner and networks team to establish a Dedicated Interconnect connection to Google Cloud and upload files daily.
B. Compress all files and upload them using gsutil.
C. Use a transfer appliance to transfer archived analytics data. Set up multiple Cloud VPN tunnels and upload files daily.
D. Use a transfer appliance to transfer archived analytics data. Set up a single Cloud VPN tunnel and upload files daily.
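
A quick back-of-envelope calculation shows why a transfer appliance is in play here; this assumes the line is 100 Mbps and fully dedicated to the transfer.

```python
# Rough transfer-time arithmetic, assuming a fully utilised 100 Mbps line.
ARCHIVE_BYTES = 900 * 10**12   # ~900 TB of archived analytics data
DAILY_BYTES = 10 * 10**12      # 10 TB generated per day
LINK_BPS = 100 * 10**6         # 100 Mbps

archive_days = ARCHIVE_BYTES * 8 / LINK_BPS / 86400
daily_days = DAILY_BYTES * 8 / LINK_BPS / 86400
print(f"initial load: {archive_days:.0f} days (~{archive_days / 365:.1f} years)")
print(f"daily 10 TB:  {daily_days:.1f} days per day of data")
```

The initial load would take over two years, and even the daily feed outruns the line by roughly a factor of nine, which is what motivates the appliance plus Dedicated Interconnect combination in option A.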

  1. Your Cloud Security team has asked you to centralize the collection of all VM system logs and all admin activity logs in your project. What should you do?

A. Admin activity logs are collected automatically by Cloud Logging for most services. To collect system logs, install the Cloud Logging agent on each VM.
B. Install the Cloud Logging agent on a separate VM and direct the admin activity logs and VM system logs from all VMs to it.
C. Install a custom log forwarder on a separate VM and direct the VMs to send all logs to it.
D. Cloud Logging automatically collects the two sets of logs.
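
For the audit-log half of this question, here is a sketch of pulling admin activity entries with the google-cloud-logging client; the project ID is a placeholder. VM system logs, by contrast, only appear in Cloud Logging once an agent is installed on each VM, which is the crux of option A.

```python
# Sketch: list recent admin activity audit log entries for a project.
from google.cloud import logging

client = logging.Client(project="my-project")  # hypothetical project ID
log_filter = 'logName:"cloudaudit.googleapis.com%2Factivity"'

for entry in client.list_entries(filter_=log_filter, page_size=10):
    print(entry.timestamp, entry.payload)
```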

  1. Your company develops portable software that is used by customers all over the world. Current and previous versions of the software can be downloaded from a dedicated website running on Compute Engine in us-central1. Some customers have complained about high latency when downloading the software. You want to minimize latency for all your customers while following Google-recommended practices. How should you store the files?

A. Save current and all previous versions of portable software files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
B. Save current and all previous versions of portable software files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
C. Save current and all previous versions of portable software files in a single Regional Cloud Storage bucket, one bucket per zone of the region.
D. Save current and all previous versions of portable software files in a Multi-Regional Cloud Storage bucket.
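
For reference, creating the single multi-region bucket that option D describes is a one-liner with the google-cloud-storage client; the bucket name is a placeholder.

```python
# Sketch: one bucket in the US multi-region; the name is hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.create_bucket("my-software-downloads", location="US")
print(bucket.location, bucket.location_type)  # e.g. US, multi-region
```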

  1. Regulatory requirements mandate that your company retain the PII data of customers from an acquired company for at least four years. You want to put a solution in place that securely retains this data and deletes it when permitted by the regulations. What should you do?

A. Import the acquired PII data to Cloud Storage and use object lifecycle management rules to delete files when they expire.
B. Import the acquired PII data to Cloud Storage and use App Engine Cron Service with Cloud Functions to enable daily deletion of all expired data.
C. De-Identify PII data using the Cloud Data Loss Prevention API and store it forever.
D. Store PII data in Google Sheets and manually delete records daily as they expire.
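
A sketch of the lifecycle rule that option A relies on, using the google-cloud-storage client; the bucket name is a placeholder, and the rule's age is expressed in days.

```python
# Sketch: delete objects once they are at least four years old.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("acquired-pii-archive")  # hypothetical bucket
bucket.add_lifecycle_delete_rule(age=4 * 365 + 1)   # days; allows a leap day
bucket.patch()                                      # apply the new rule
```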

  1. You developed an application that recognizes famous landmarks from uploaded photos. You want to run a free trial for 24 hours and open up the application to all users, including users who don’t have a Google account. What should you do?

A. Generate a signed URL on a Cloud Storage bucket with expiration set to 24 hours and have users upload their photos using this signed URL.
B. Deploy the application to Google Compute Engine and terminate the instances after 24 hours. Use Cloud Identity to authenticate users.
C. Deploy the application to Google Compute Engine and use Cloud Identity to authenticate users.
D. Enable users to upload their photos to a public Cloud Storage bucket and set a password on the bucket after the trial.
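
The signed URL in option A can be produced roughly as follows with the google-cloud-storage client; the bucket and object names are placeholders, and the credentials in use must be able to sign (for example, a service account key).

```python
# Sketch: a V4 signed URL that lets anonymous users upload for 24 hours.
import datetime

from google.cloud import storage

client = storage.Client()
blob = client.bucket("landmark-trial-uploads").blob("uploads/photo.jpg")
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(hours=24),  # the trial window
    method="PUT",                             # allow uploads, not just reads
)
print(url)
```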

  1. Your company recently acquired a competitor, and you have been tasked with migrating one of their legacy applications into your company’s Google Cloud project. You noticed the legacy application has several OS dependencies, and scale-up is delayed by a long startup time. You want to deploy the application on Compute Engine and make use of a managed instance group so that it can scale based on traffic. You also want to minimize the startup time so that scale-up happens more quickly. What should you do?

A. Create a startup script to install OS dependencies and automate the creation of the Managed Instance Group (MIG) using Terraform.
B. Use the Deployment Manager to automate the creation of the Managed Instance Group (MIG). Use Ansible to install OS dependencies.
C. Create a custom GCP VM image with all OS dependencies preinstalled. Use the Deployment Manager to automate the creation of the Managed Instance Group (MIG) with the custom image.
D. Use Puppet to automate the creation of the Managed Instance Group (MIG) and the installation of OS dependencies.

  1. Your company operates a very successful mobile app that lets users superimpose stock images of their favourite pets onto their uploaded images. You use a combination of Google Cloud Storage and Vision AI to achieve this. Recently, photo uploads from mobile user devices to Google Cloud Storage have started returning HTTP errors with status codes of 429 and 5xx. What should you do to fix this issue?

A. Use Cloud Storage gRPC endpoints.
B. Enable Geo-redundancy by moving Cloud Storage bucket from Regional to Multi-regional.
C. Make requests to Cloud Storage only if its status is healthy.
D. Retry failures with exponential backoff.
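
429 and 5xx are precisely the responses Cloud Storage expects clients to retry with truncated exponential backoff, as option D suggests. A minimal sketch follows; TransientError is a hypothetical stand-in for catching those status codes.

```python
# Truncated exponential backoff with jitter for retryable upload failures.
import random
import time


class TransientError(Exception):
    """Hypothetical stand-in raised by do_upload() on a 429/5xx response."""


def upload_with_backoff(do_upload, max_retries=5):
    for attempt in range(max_retries):
        try:
            return do_upload()
        except TransientError:
            delay = min(2 ** attempt, 32) + random.random()  # cap + jitter
            time.sleep(delay)
    raise RuntimeError("upload failed after retries")
```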

  1. Your auditors require you to supply them with the number of queries run by each user in BigQuery over the last 12 months. You want to do this as efficiently as possible. What should you do?

A. In Cloud Audit Logs, apply a filter on BigQuery query operation to get the required information.
B. Use Google Data Studio BigQuery connector to access data from your BigQuery tables within Google Data Studio. Create dimensions, metrics and reports to obtain this information.
C. Execute the bq show command to list all jobs and execute bq ls for each job. Aggregate the output by user_id to obtain the information.
D. Execute a query on BigQuery JOBS table to get this information.
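
Option D boils down to a single aggregate over BigQuery's jobs metadata, sketched below with the google-cloud-bigquery client. Note that INFORMATION_SCHEMA.JOBS_BY_PROJECT retains roughly 180 days of history by default, so covering a full 12-month window may require exported audit logs instead.

```python
# Sketch: count queries per user from the jobs metadata view (US region).
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT user_email, COUNT(*) AS query_count
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE job_type = 'QUERY'
    GROUP BY user_email
    ORDER BY query_count DESC
"""
for row in client.query(sql).result():
    print(row.user_email, row.query_count)
```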

  1. An application you deployed to Google Cloud uses a single Cloud SQL for MySQL instance in us-west1-a zone. What should you do to ensure high availability?

A. Create a MySQL failover replica in us-east1 (different region).
B. Create a MySQL failover replica in us-west1-b (same region but different zone).
C. Create a MySQL read replica in us-east1 (different region).
D. Create a MySQL read replica in us-west1-b (same region but different zone).

  1. Your company runs a parcel-tracking application on the App Engine Standard service. The application requires ACID transaction support and uses Cloud Datastore as its persistence layer. You have been asked to identify an efficient way to retrieve multiple parcels (Datastore root entities) based on the relevant tracking IDs (Datastore identifiers) while minimizing overhead in the calls from App Engine to Datastore. What should you do?

A. Create a Key object for each tracking ID. Perform multiple get operations, one for each tracking ID.
B. Create a Key object for each tracking ID. Perform a single batch get operation.
C. Generate a query filter for each tracking ID. Perform multiple query operations, one for each entity.
D. Generate a query filter to include all tracking IDs. Perform a single batch query operation.
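
The single batch get in option B looks roughly like this with the google-cloud-datastore client; the kind name "Parcel" and the IDs are placeholders.

```python
# Sketch: fetch many root entities in one batch get instead of N gets.
from google.cloud import datastore

client = datastore.Client()
tracking_ids = [1001, 1002, 1003]                  # hypothetical tracking IDs
keys = [client.key("Parcel", tid) for tid in tracking_ids]
parcels = client.get_multi(keys)                   # single RPC, not one per key
for parcel in parcels:
    print(parcel.key.id_or_name, dict(parcel))
```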