GCP Associate Cloud Engineer Practice Exam Part 5

Source:

Actual Exam Version:

  1. Your company has a massive quantity of unstructured data in text, Apache Avro, and Parquet files in its on-premises data centre and wants to transform this data using a Dataflow job and migrate the cleansed/enriched data to BigQuery. How should you make the on-premises files accessible to Cloud Dataflow?

A. Migrate the data from the on-premises data centre to Cloud Spanner by using the upload files function.
B. Migrate the data from the on-premises data centre to Cloud SQL for MySQL by using the upload files function.
C. Migrate the data from the on-premises data centre to Cloud Storage by using a custom script with gsutil commands.
D. Migrate the data from the on-premises data centre to BigQuery by using a custom script with bq commands.
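For reference, the gsutil-based migration in option C could be sketched as follows (bucket name, location, and local paths are placeholders, not from the question):

```shell
# Create a destination bucket (all names here are illustrative).
gsutil mb -l us-central1 gs://onprem-raw-data/

# Copy the on-premises files in parallel (-m) and recursively (-r);
# gsutil's resumable uploads make this safe to re-run if interrupted.
gsutil -m cp -r /data/exports/avro /data/exports/parquet gs://onprem-raw-data/
```

Cloud Dataflow can then read the Avro/Parquet files directly from the bucket.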

  1. Your business-critical application deployed on a Compute Engine instance in the us-west1-a zone suffered an outage due to a GCP zone failure. You want to modify the application to be immune to zone failures while minimizing costs. What should you do?

A. Direct the traffic through a Global HTTP(s) Load Balancer to shield your application from GCP zone failures.
B. Provision another compute engine instance in us-west1-b and balance the traffic across both zones.
C. Ensure you have hourly snapshots of the disk in Google Cloud Storage. In the unlikely event of a zonal outage, use the snapshots to provision a new Compute Engine Instance in a different zone.
D. Replace the single instance with a Managed Instance Group (MIG) and autoscaling enabled. Configure a health check to detect failures rapidly.

  1. You deployed a mission-critical application on Google Compute Engine. Your operations team have asked you to enable measures to prevent engineers from accidentally destroying the instance. What should you do?

A. Turn on deletion protection on the compute engine instance.
B. Uncheck “Delete boot disk when instance is deleted” option when provisioning the compute engine instance.
C. Deploy the application on a preemptible compute engine instance.
D. Enable automatic restart on the instance.
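For reference, deletion protection (option A) can be toggled on a running instance without a restart (instance name and zone are placeholders):

```shell
# Enable deletion protection; any attempt to delete the instance
# will now fail until the flag is cleared with --no-deletion-protection.
gcloud compute instances update critical-app-vm \
    --deletion-protection \
    --zone=us-central1-a
```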

  1. Your production applications are distributed across several Google Cloud Platform (GCP) projects, and your operations team want to efficiently manage all the production projects and applications using gcloud SDK on Cloud Shell. What should you recommend they do to achieve this in the fewest possible steps?

A. Create a gcloud configuration for each production project. To manage resources of a particular project, run gcloud init to update and initialize the relevant gcloud configuration.
B. Use the default gcloud configuration on cloud shell. To manage resources of a particular project, activate the relevant gcloud configuration.
C. Create a gcloud configuration for each production project. To manage resources of a particular project, activate the relevant gcloud configuration.
D. Use the default gcloud configuration on cloud shell. To manage resources of a particular project, run gcloud init to update and initialize the relevant gcloud configuration.
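For reference, the named-configuration workflow in option C looks like this in practice: create one configuration per project once, then switch with a single command (project IDs are illustrative):

```shell
# One-time setup per production project; creating a configuration
# also activates it, so the project is set on the new configuration.
gcloud config configurations create prod-app
gcloud config set project prod-app-project-id

# Later, switch context with one command.
gcloud config configurations activate prod-app
```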

  1. You want to optimize the storage costs for long-term archival of logs. Logs are accessed frequently in the first 30 days and are only retrieved after that if there is a special requirement in the annual audit. The auditors may need to look into log entries from the previous three years. What should you do?

A. Store the logs in Nearline Storage Class and set up a lifecycle policy to transition the files older than 30 days to Archive Storage Class.
B. Store the logs in Standard Storage Class and set up a lifecycle policy to transition the files older than 30 days to Archive Storage Class.
C. Store the logs in Standard Storage Class and set up lifecycle policies to transition the files older than 30 days to Coldline Storage Class, and files older than 1 year to Archive Storage Class.
D. Store the logs in Nearline Storage Class and set up lifecycle policies to transition the files older than 30 days to Coldline Storage Class, and files older than 1 year to Archive Storage Class.
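For reference, a lifecycle policy of the shape described in option B can be expressed as JSON and applied with gsutil (the bucket name is a placeholder):

```shell
# lifecycle.json: move objects to the Archive class 30 days after creation.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

gsutil lifecycle set lifecycle.json gs://audit-logs-bucket
```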

  1. Your company runs most of its compute workloads in Google Compute Engine in the europe-west1-b zone. Your operations team use Cloud Shell to manage these instances. They want to know if it is possible to designate a default compute zone and not supply the zone parameter when running each command in the CLI. What should you do?

A. In the GCP Console, set the europe-west1-b zone as the default location in Compute Engine Settings.
B. Run gcloud config to set europe-west1-b as the default zone.
C. Update the gcloud configuration file ~/config.ini to set europe-west1-b as the default zone.
D. Add a metadata entry in the Compute Engine Settings page with key: compute/zone and value: europe-west1-b.
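For reference, setting a default zone via gcloud (option B) is a one-liner; the property is persisted in the active gcloud configuration, not in ~/config.ini:

```shell
# Set the default zone (and, optionally, region) for the active configuration.
gcloud config set compute/zone europe-west1-b
gcloud config set compute/region europe-west1

# Verify the stored value.
gcloud config get-value compute/zone
```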

  1. You run a business-critical application in a Google Cloud Compute Engine instance, and you want to set up a cost-efficient solution for backing up the data on the boot disk. You want a solution that:
    – minimizes operational overhead
    – backs up boot disks daily
    – allows quick restore of the backups when needed, e.g. disaster scenarios
    – deletes backups older than a month automatically.
    What should you do?

A. Enable Snapshot Schedule on the disk to automate snapshot creation on a schedule.
B. Deploy a Cloud Function to initiate the creation of instance templates for all instances daily.
C. Configure a Cloud Task to initiate the creation of images for all instances daily and upload them to Cloud Storage.
D. Set up a cron job with a custom script that uses gcloud APIs to create a new disk from existing instance disk for all instances daily.
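For reference, a snapshot schedule (option A) is a resource policy attached to the disk; a sketch with placeholder names, region, and times:

```shell
# Daily snapshots at 04:00 UTC, automatically deleted after 30 days.
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=30

# Attach the policy to the instance's boot disk.
gcloud compute disks add-resource-policies app-boot-disk \
    --resource-policies=daily-backup \
    --zone=us-central1-a
```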

  1. The operations manager has asked you to identify the IAM users with Project Editor role on the GCP production project. What should you do?

A. Turn on IAM Audit logging and build a Cloud Monitoring dashboard to display this information.
B. Extract all project-wide SSH keys.
C. Execute gcloud projects get-iam-policy to retrieve this information.
D. Check the permissions assigned in all Identity Aware Proxy (IAP) tunnels.
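For reference, the get-iam-policy output in option C can be narrowed to just Project Editors with a flatten/filter pipeline (the project ID is a placeholder):

```shell
# List all members holding roles/editor on the project.
gcloud projects get-iam-policy my-prod-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/editor" \
    --format="value(bindings.members)"
```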

  1. Your company’s compute workloads are split between the on-premises data centre and Google Cloud Platform. The on-premises data centre is connected to the Google Cloud network by Cloud VPN. You have a requirement to provision a new non-publicly-reachable compute engine instance on a c2-standard-8 machine type in the australia-southeast1- zone. What should you do?

A. Configure a route to route all traffic to the public IP of compute engine instance through the VPN tunnel.
B. Provision the instance without a public IP address.
C. Provision the instance in a subnet that has Private Google Access enabled.
D. Provision the instance in a subnetwork that has all egress traffic disabled.

  1. You manage an overnight batch job that uses 20 VMs to transfer customer information from a CRM system to a BigQuery dataset. The job can tolerate some VMs going down. The current high cost of the VMs makes the overnight job not viable, and you want to reduce the costs. What should you do?

A. Use preemptible compute engine instances to reduce cost.
B. Use a fleet of f1-micro instances behind a Managed Instance Group (MIG) with autoscaling. Set minimum and maximum nodes to 20.
C. Use tiny f1-micro instances to reduce cost.
D. Use a fleet of f1-micro instances behind a Managed Instance Group (MIG) with autoscaling and minimum nodes set to 1.
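For reference, a preemptible instance (option A) is requested with a single flag at creation time (instance name, machine type, and zone are placeholders):

```shell
# Preemptible VMs run at a steep discount but can be reclaimed
# at any time and live at most 24 hours.
gcloud compute instances create batch-worker-1 \
    --machine-type=e2-standard-4 \
    --preemptible \
    --zone=us-central1-a
```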

  1. You plan to deploy an application to a Google Compute Engine instance, and it relies on making connections to a Cloud SQL database to retrieve information about book publications. To minimize costs, you are developing this application on your local workstation, and you want it to connect to a Cloud SQL instance. Your colleague suggested setting up Application Default Credentials on your workstation to make the transition to Google Cloud easier. You are now ready to move the application to a Google Compute Engine instance. You want to follow Google-recommended practices to enable secure IAM access. What should you do?

A. Grant the necessary IAM roles to a service account, download the JSON key file and package it with your application.
B. Grant the necessary IAM roles to the service account used by Google Compute Engine instance.
C. Grant the necessary IAM roles to a service account and configure the application running on Google Compute Engine instance to use this service account.
D. Grant the necessary IAM roles to a service account, store its credentials in a config file and package it with your application.

  1. You want to run an application in Google Compute Engine in the app-tier GCP project and have it export data from Cloud Bigtable to the daily-us-customer-export Cloud Storage bucket in the data-warehousing project. You plan to run a Cloud Dataflow job in the data-warehousing project to pick up data from this bucket for further processing. How should you design the IAM access to enable the compute engine instance to push objects to the daily-us-customer-export Cloud Storage bucket in the data-warehousing project?

A. Update the access control on daily-us-customer-export Cloud Storage bucket to make it public. Create a subfolder inside the bucket with a randomized name and have the compute engine instance push objects to this folder.
B. Ensure both the projects are in the same GCP folder in the resource hierarchy.
C. Grant the service account used by the compute engine in app-tier GCP project roles/storage.objectCreator IAM role on the daily-us-customer-export Cloud Storage bucket.
D. Grant the service account used by the compute engine in app-tier GCP project roles/storage.objectCreator IAM role on app-tier GCP project.
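For reference, a bucket-scoped role grant like the one in option C can be made with gsutil's IAM shorthand (the service-account address is a placeholder):

```shell
# Grant the app-tier service account object-creation rights
# on the export bucket only (objectCreator maps to
# roles/storage.objectCreator).
gsutil iam ch \
    serviceAccount:app-tier-sa@app-tier.iam.gserviceaccount.com:objectCreator \
    gs://daily-us-customer-export
```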

  1. You run a batch job every month in your on-premises data centre that downloads clickstream logs from a Google Cloud Storage bucket, enriches the data, and stores it in Cloud Bigtable. The job runs for 32 hours on average, can be restarted if interrupted, and must complete. You want to migrate this batch job onto a cost-efficient GCP compute service. How should you deploy it?

A. Deploy the batch job in a GKE Cluster with preemptible VM node pool.
B. Deploy the batch job on a fleet of Google Cloud Compute Engine preemptible VMs in a Managed Instance Group (MIG) with autoscaling.
C. Deploy the batch job on a Google Cloud Compute Engine Preemptible VM.
D. Deploy the batch job on a Google Cloud Compute Engine non-preemptible VM. Restart instances as required.

  1. Your gaming backend uses Cloud Spanner to store leaderboard and player profile data. You want to scale the Spanner instances based on predictable usage patterns. What should you do?

A. Configure alerts in Cloud Monitoring to alert Google Operations Support team and have them use their scripts to scale up or scale down the spanner instance as necessary.
B. Configure a Cloud Scheduler job to invoke a Cloud Function that reviews the relevant Cloud Monitoring metrics and resizes the Spanner instance as necessary.
C. Configure alerts in Cloud Monitoring to trigger a Cloud Function via webhook, and have the Cloud Function scale up or scale down the spanner instance as necessary.
D. Configure alerts in Cloud Monitoring to alert your operations team and have them manually scale up or scale down the spanner instance as necessary.

  1. You are running an application on a Google Compute Engine instance. You want to create multiple copies of this VM to handle the burst in traffic. What should you do?

A. Create a snapshot of the compute engine instance disk and create images from this snapshot to handle the burst in traffic.
B. Create a snapshot of the compute engine instance disk and create instances from this snapshot to handle the burst in traffic.
C. Create a snapshot of the compute engine instance disk, create a custom image from the snapshot, create instances from this image to handle the burst in traffic.
D. Create a snapshot of the compute engine instance disk, create custom images from the snapshot to handle the burst in traffic.

  1. You migrated a mission-critical application from the on-premises data centre to Google Kubernetes Engine (GKE) which uses e2-standard-2 machine types. You want to deploy additional pods on c2-standard-16 machine types. How can you do this without causing application downtime?

A. Run gcloud container clusters upgrade to move to c2-standard-16 machine types. Terminate all existing pods.
B. Create a new GKE cluster with node pool instances of type c2-standard-16. Deploy the application on the new GKE cluster and delete the old GKE Cluster.
C. Create a new GKE cluster with two node pools – one with e2-standard-2 machine types and the other with c2-standard-16 machine types. Deploy the application on the new GKE cluster and delete the old GKE Cluster.
D. Update the existing cluster to add a new node pool with c2-standard-16 machine types and deploy the pods.
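For reference, adding a node pool to an existing cluster (option D) only adds capacity, so running pods are untouched; a sketch with placeholder cluster name and node count:

```shell
# Add a second node pool with the larger machine type; existing
# e2-standard-2 nodes keep serving traffic while it comes up.
gcloud container node-pools create high-cpu-pool \
    --cluster=prod-cluster \
    --machine-type=c2-standard-16 \
    --num-nodes=3 \
    --zone=europe-west1-b
```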

  1. You are migrating a complex on-premises data warehousing solution to Google Cloud. You plan to create a fleet of Google Compute Engine instances behind a Managed Instance Group (MIG) in the app-tier project, and BigQuery in the data-warehousing project. How should you configure the service accounts used by the Compute Engine instances to allow them query access to BigQuery datasets?

A. Grant the compute engine service account roles/bigquery.dataViewer role on the data-warehousing GCP project.
B. Grant the compute engine service account roles/owner on data-warehousing GCP project.
C. Grant the compute engine service account roles/owner on the data-warehousing GCP project and roles/bigquery.dataViewer role on the app-tier GCP project.
D. Grant the BigQuery service account roles/owner on app-tier GCP project.

  1. You deployed the Finance team’s Payroll application to Google Compute Engine, and this application is used by staff during regular business hours. The operations team want to back up the VMs daily outside business hours and delete images older than 50 days to save costs. They need an automated solution with the least operational overhead and the least number of GCP services. What should they do?

A. Navigate to the Compute Engine Disk section of your VM instance in the GCP console and enable a snapshot schedule for automated creation of daily snapshots. Set Auto-Delete snapshots after to 50 days.
B. Add a metadata tag on the Google Compute Engine instance to enable snapshot creation. Add a second metadata tag to specify the snapshot schedule, and a third metadata tag to specify the retention period.
C. Use Cloud Scheduler to trigger a Cloud Function that creates snapshots of the disk daily. Use Cloud Scheduler to trigger another Cloud Function that iterates over the snapshots and deletes snapshots older than 50 days.
D. Use AppEngine Cron service to trigger a custom script that creates snapshots of the disk daily. Use AppEngine Cron service to trigger another custom script that iterates over the snapshots and deletes snapshots older than 50 days.

  1. Your compliance team wants to review the audit logs and data access logs in the production GCP project. You want to follow Google recommended practices. What should you do?

A. Grant the compliance team a custom IAM role that has the logging.privateLogEntries.list permission. Let the compliance team know they can also query IAM policy changes in Cloud Logging.
B. Export logs to Cloud Storage and grant the compliance team a custom IAM role that has logging.privateLogEntries.list permission.
C. Export logs to Cloud Storage and grant the compliance team the roles/logging.privateLogViewer IAM role.
D. Grant the compliance team the roles/logging.privateLogViewer IAM role. Let the compliance team know they can also query IAM policy changes in Cloud Logging.

  1. You want to migrate a legacy application from your on-premises data centre to Google Cloud Platform. The application serves SSL-encrypted traffic from worldwide clients on TCP port 443. Which GCP Load Balancing service should you use to minimize latency for all clients?

A. External HTTP(S) Load Balancer.
B. Internal TCP/UDP Load Balancer.
C. Network TCP/UDP Load Balancer.
D. SSL Proxy Load Balancer.

  1. You are deploying an application on Google Compute Engine, and you want to minimize network egress costs. The organization has a policy that requires you to block all but essential egress traffic. What should you do?

A. Enable a firewall rule at priority 100 to allow essential egress traffic.
B. Enable a firewall rule at priority 100 to allow ingress and essential egress traffic.
C. Enable a firewall rule at priority 100 to block all egress traffic, and another firewall rule at priority 65534 to allow essential egress traffic.
D. Enable a firewall rule at priority 65534 to block all egress traffic, and another firewall rule at priority 100 to allow essential egress traffic.
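For reference, the deny-by-default pattern in option D can be sketched in gcloud; lower priority numbers are evaluated first, so the specific allow at 100 wins over the catch-all deny at 65534 (network name and allowed port are placeholders):

```shell
# Deny all egress at the lowest evaluated priority.
gcloud compute firewall-rules create deny-all-egress \
    --network=prod-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65534

# Allow only essential egress (e.g. HTTPS) at a higher priority.
gcloud compute firewall-rules create allow-essential-egress \
    --network=prod-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --destination-ranges=0.0.0.0/0 \
    --priority=100
```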

  1. You work for a startup company where every developer has a dedicated development GCP project linked to a central billing account. Your finance lead is concerned that some developers may leave services running unnecessarily or may not understand the cost implications of turning on specific services in Google Cloud Platform. They want to be alerted when a developer spends more than $750 per month in their GCP project. What should you do?

A. Export Billing data from each development GCP project to a separate BigQuery dataset. On each dataset, use a Data Studio dashboard to plot the spending.
B. Set up a budget for each development GCP project. For each budget, trigger an email notification when the spending exceeds $750.
C. Export Billing data from all development GCP projects to a single BigQuery dataset. Use a Data Studio dashboard to plot the spend.
D. Set up a single budget for all development GCP projects. Trigger an email notification when the spending exceeds $750 in the budget.
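For reference, a per-project budget like the one in option B can be scripted with the budgets surface in recent gcloud SDK versions (billing-account ID, display name, and project ID are placeholders):

```shell
# A $750 monthly budget scoped to one developer project,
# alerting at 100% of the budgeted amount.
gcloud billing budgets create \
    --billing-account=012345-ABCDEF-678901 \
    --display-name="dev-alice-monthly" \
    --budget-amount=750.00USD \
    --filter-projects=projects/dev-alice-project \
    --threshold-rule=percent=1.0
```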