GCP Associate Cloud Engineer Practice Exam Part 6

  1. Your company stores terabytes of image thumbnails in a Google Cloud Storage bucket with versioning enabled. An engineer deleted a current (live) version of an image and a non-current (not live) version of another image. What is the outcome of this operation?

A. The deleted current version becomes a non-current version, and a lifecycle rule is applied to delete after 30 days. A lifecycle rule is applied on the deleted non-current version to delete after 30 days.
B. The deleted current version becomes a non-current version, and a lifecycle rule is applied to transition to Nearline Storage after 30 days. A lifecycle rule is applied on the deleted non-current version to transition to Nearline Storage after 30 days.
C. The deleted current version is deleted permanently. The deleted non-current version is deleted permanently.
D. The deleted current version becomes a non-current version. The deleted non-current version is deleted permanently.

Answer: D
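The behaviour in option D can be observed directly with the gcloud storage CLI; the bucket and object names below are illustrative:

```shell
# Deleting the live object makes that version noncurrent.
gcloud storage rm gs://my-thumbnails/image-a.png

# Deleting a specific noncurrent version (addressed by its generation
# number) removes it permanently.
gcloud storage rm gs://my-thumbnails/image-b.png#1712345678901234

# List all versions, including noncurrent ones, to verify.
gcloud storage ls --all-versions gs://my-thumbnails/
```

No lifecycle rule is applied implicitly by a delete; the versioning semantics alone explain the outcome.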

  1. Your company has terabytes of audit logs and analytics data in multiple BigQuery datasets. Some of these datasets need to be retained long term for audit purposes. You want to ensure analysts do not delete this data. What should you do?

A. Grant a custom role with just the required query permissions to all the analysts.
B. Grant roles/bigquery.dataOwner IAM role to the analysts’ group.
C. Grant roles/bigquery.user IAM role to the analysts’ group.
D. Grant a custom role with just the required query permissions to the analysts’ group.

Answer: C

  1. You developed a new mobile game that uses Cloud Spanner for storing user state, player profile and leaderboard. Data is always accessed by using the primary key. Your performance testing team identified latency issues in the application, and you suspect it might be related to table primary key configuration. You created the table by executing this DDL:

CREATE TABLE users (
user_id INT64 NOT NULL,
user_name STRING(255),
email_address STRING(255)
) PRIMARY KEY (user_id)

What should you do to fix this read latency issue?

A. Add another index on user_id column to speed up the data retrieval.
B. Make the table smaller by removing email_address and add another index on user_id column to speed up the data retrieval.
C. Make the table smaller by removing email_address.
D. Update the primary key (user_id) to not have sequential values.

Answer: D

  1. There has been increased phishing email activity recently, and you deployed a new application on a GKE cluster to help scan and detect viruses in uploaded files. Each time the Finance or HR department receives an email with an attachment, they use this application to scan the email attachment for viruses. The application pods open the email attachment in a sandboxed environment before initiating a virus scan. Some infected email attachments may run arbitrary phishing code with elevated privileges in the container. You want to ensure that the pods that run these scans do not impact pods of other applications running in the same GKE cluster. How can you achieve this isolation between pods?

A. Detect vulnerabilities in the application container images in GCR repo by using the Container Analysis API.
B. Create a new (non-default) node pool with sandbox type set to gvisor and configure the deployment spec with a runtimeClassName of gvisor.
C. Configure the pods to use containerd as the runtime by adding a node selector with key: cloud.google.com/gke-os-distribution and value:cos_containerd.
D. Have your applications use trusted container images by enabling Binary Authorization.

Answer: B
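A sketch of option B with the gcloud CLI, assuming illustrative cluster and pool names; GKE Sandbox requires a containerd node image:

```shell
# Create a node pool with GKE Sandbox (gVisor) enabled.
gcloud container node-pools create sandbox-pool \
    --cluster=scanner-cluster \
    --image-type=cos_containerd \
    --sandbox type=gvisor
```

The scanner Deployment's pod spec then sets `runtimeClassName: gvisor`, so its pods are scheduled onto the sandboxed pool with a second layer of isolation between the container and the host kernel.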

  1. An external partner working on a production issue has asked you to share a list of all GCP APIs enabled for your GCP production project – production_v1. How should you retrieve this information?

A. Execute gcloud services list --available to retrieve a list of all services enabled for the project.
B. Execute gcloud projects list --filter='name:production_v1' to retrieve the ID of the project, and execute gcloud services list --project to retrieve a list of all services enabled for the project.
C. Execute gcloud init production_v1 to set production_v1 as the current project in gcloud configuration and execute gcloud services list --available to retrieve a list of all services enabled for the project.
D. Execute gcloud info to retrieve the account information and execute gcloud services list --account to retrieve a list of all services enabled for the project.

Answer: B
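Option B can be sketched as two commands; the project ID shown is illustrative:

```shell
# Look up the project ID from the project name.
gcloud projects list --filter='name:production_v1' --format='value(projectId)'

# List only the services enabled for that project.
gcloud services list --enabled --project=production-v1-123456
```

Note that `--available` lists every service that could be enabled, not the ones that are, which is why options A and C do not answer the question.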

  1. Your finance team owns two GCP projects – one project for payroll applications and another project for accounts. You need the VMs in the payroll project in one VPC to communicate with VMs in accounts project in a different VPC and vice versa. How should you do it?

A. Ensure you have the Administrator role on both projects and move all instances to two new VPCs.
B. Share the VPC from one of the projects and have the VMs in the other project use the shared VPC. Ensure both projects belong to the same GCP organization.
C. Create a new VPC and move all VMs to the new VPC. Ensure both projects belong to the same GCP organization.
D. Ensure you have the Administrator role on both projects and move all instances to a single new VPC.

Answer: B

  1. Your company procured a license for a third-party cloud-based document signing system for the procurement team. All members of the procurement team need to sign in with the same service account. Your security team prohibits sharing service account passwords. You have been asked to recommend a solution that lets the procurement team login as the service account in the document signing system but without the team knowing the service account password. What should you do?

A. Have a single person from the procurement team access the document signing system with the service account credentials.
B. Register the application as a password vaulted app and set the credentials to the service account credentials.
C. Ask the third-party provider to enable OAuth 2.0 for the application and set the credentials to the service account credentials.
D. Ask the third-party provider to enable SAML for the application and set the credentials to the service account credentials.

Answer: B

  1. Your company uses a legacy application that still relies on the legacy LDAP protocol to authenticate. Your company plans to migrate this application to the cloud and is looking for a cost-effective solution while minimizing any developer effort. What should you do?

A. Modify the legacy application to use SAML and ask users to sign in through Gmail.
B. Modify the legacy application to use OAuth 2.0 and ask users to sign in through Gmail.
C. Synchronize data within your LDAP server with Google Cloud Directory Sync.
D. Use secure LDAP to authenticate the legacy application and ask users to sign in through Gmail.

Answer: D

  1. You developed a Python application that exposes an HTTP(S) endpoint for retrieving a 2-week weather forecast for a given location. You deployed the application on a single Google Compute Engine virtual machine, but the application is not as popular as you anticipated and has been receiving very few requests. To minimize costs, your colleague suggested containerizing the application and deploying it on a suitable GCP compute service. Where should you deploy your containers?

A. App Engine Flexible.
B. GKE with horizontal pod autoscaling and cluster autoscaler enabled.
C. Cloud Run.
D. Cloud Run on GKE.

Answer: C
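A minimal deployment sketch for option C, assuming illustrative project, image and service names:

```shell
# Build the container image and deploy it to fully managed Cloud Run.
gcloud builds submit --tag gcr.io/my-project/weather-api
gcloud run deploy weather-api \
    --image=gcr.io/my-project/weather-api \
    --region=us-central1 \
    --allow-unauthenticated
```

Cloud Run scales to zero between requests, so a rarely used endpoint costs almost nothing, unlike App Engine Flexible or a GKE cluster, which keep instances running.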

  1. You developed a Python application that gets triggered by messages from a Cloud Pub/Sub topic. Your manager is a big fan of both serverless and containers and has asked you to containerize the application and deploy it on Google Cloud Run. How should you do it?

A. Assign roles/pubsub.subscriber role (Pub/Sub Subscriber role) to the Cloud Run service account. Set up a Cloud Pub/Sub subscription on the topic and configure the application to pull messages.
B. Deploy your application to Google Cloud Run on GKE. Set up a Cloud Pub/Sub subscription on the topic and deploy a sidecar container in the same GKE cluster to consume the message from the topic and push it to your application.
C. Assign roles/run.invoker role (Cloud Run Invoker role) on your Cloud Run application to a service account. Set up a Cloud Pub/Sub subscription on the topic and configure it to use the service account to push the message to your Cloud Run application.
D. Trigger a Cloud Function whenever the topic receives a new message. From the Cloud Function, invoke Cloud Run.

Answer: C
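Option C can be sketched as follows; the service account, service, subscription and topic names are illustrative:

```shell
# Service account that Pub/Sub will use to authenticate pushes.
gcloud iam service-accounts create pubsub-invoker

# Allow that service account to invoke the Cloud Run service.
gcloud run services add-iam-policy-binding my-app \
    --member=serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com \
    --role=roles/run.invoker

# Push subscription that delivers messages to the Cloud Run endpoint.
gcloud pubsub subscriptions create my-sub \
    --topic=my-topic \
    --push-endpoint=https://my-app-xyz-uc.a.run.app/ \
    --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com
```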

  1. Your data warehousing team executed an Apache Sqoop job to export data from Hive/HBase and uploaded this data in AVRO file format to Cloud Storage. The business analysts at your company have years of experience using SQL. They have asked you to identify if there is a cost-effective way to query the information in AVRO files through SQL. What should you do?

A. Transfer the data from Cloud Storage to BigQuery and advise the business analysts to run their SQL queries in BigQuery.
B. Transfer the data from Cloud Storage to HDFS. Configure an external table in Hive to point to HDFS and advise the business analysts to run their SQL queries in Hive.
C. Point a BigQuery external table at the Cloud Storage bucket and advise the business analysts to run their SQL queries in BigQuery.
D. Transfer the data from Cloud Storage to Cloud Datastore and advise the business analysts to run their SQL queries in Cloud Datastore.

Answer: C
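A sketch of option C with the bq CLI, assuming illustrative bucket and dataset names:

```shell
# Generate an external table definition over the Avro files
# (Avro embeds its schema, so none needs to be supplied).
bq mkdef --source_format=AVRO 'gs://my-bucket/sqoop-export/*.avro' > table_def.json

# Create the external table; the data stays in Cloud Storage.
bq mk --external_table_definition=table_def.json mydataset.sqoop_export

# Analysts query it with standard SQL.
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.sqoop_export'
```

This avoids both the load step and BigQuery storage charges, since only queries are billed.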

  1. Your company wants to move all its on-premises applications to Google Cloud. Most applications depend on Kubernetes orchestration, and you have chosen to deploy these applications in Google Kubernetes Engine (GKE) in your GCP project app_prod. The security team has requested that you store all container images in Google Container Registry (GCR) in a separate project gcr_proj, which has automated vulnerability scanning set up by a security partner. You are ready to push an image to the GCR repo and want to tag it as tranquillity:v1. How should you do it?

A. Execute gcloud builds submit --tag tranquillity:v1 in Cloud Shell.
B. Execute gcloud builds submit --tag gcr.io/tranquillity:v1 in Cloud Shell.
C. Execute gcloud builds submit --tag gcr.io/app_prod/tranquillity:v1 in Cloud Shell.
D. Execute gcloud builds submit --tag gcr.io/gcr_proj/tranquillity:v1 in Cloud Shell.

Answer: D

  1. Your company has several business-critical applications running in its on-premises data centre, which is already at full capacity, and you need to expand to Google Cloud Platform to handle traffic bursts. You want virtual machine instances in both the on-premises data centre and Google Compute Engine to communicate via their internal IP addresses. What should you do?

A. Add bastion hosts in GCP as well as on-premises network and set up a proxy tunnel between the bastion hosts in GCP and the bastion hosts in the on-premises network. Allow applications in the data centre to scale to Google Cloud through the proxy tunnel.
B. Create a new GCP project and a new VPC and enable VPC peering between the new VPC and networks in the data centre.
C. Create a new VPC in GCP with a non-overlapping IP range and configure Cloud VPN between the on-premises network and GCP.
D. Allow applications in the data centre to scale to Google Cloud through the VPN tunnel.
E. Create a new GCP project and a new VPC and make this a shared VPC with the on-premises network. Allow applications in the data centre to scale to Google Cloud on the shared VPC.

Answer: C

  1. The machine learning team at your company infrequently needs to use a GKE cluster with specific GPUs for processing a non-restartable and long-running job. How should you set up the GKE cluster for this requirement?

A. Deploy the workload on a node pool with non-preemptible compute engine instances and GPUs attached to them. Enable cluster autoscaling and set min-nodes to 1.
B. Deploy the workload on a node pool with preemptible compute engine instances and GPUs attached to them.
C. Enable GKE cluster node auto-provisioning.
D. Enable Vertical Pod Autoscaling.

Answer: A
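Option A can be sketched as follows; the cluster, pool, machine type and GPU type are illustrative:

```shell
# Non-preemptible node pool with GPUs, autoscaling from 1 node.
gcloud container node-pools create gpu-pool \
    --cluster=ml-cluster \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --enable-autoscaling --min-nodes=1 --max-nodes=4
```

Preemptible instances can be reclaimed at any time, which rules them out for a non-restartable, long-running job.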

  1. You want to deploy an application to a GKE cluster to enable the translation of mp3 files. The application uses an open-source translation library that is IOPS intensive. The organization's backup strategy involves taking disk snapshots of all nodes at midnight. You want to estimate the cost of running this application in the GKE cluster for the next month. In addition to the node pool size, instance type, location and usage duration, what else should you fill in the GCP pricing calculator when estimating the cost of running this application?

A. GPU and GKE Cluster Management Fee.
B. Local SSD, Snapshot Storage and Persistent disk storage.
C. GPU, Snapshot Storage and Persistent disk storage.
D. Local SSD and GKE Cluster Management Fee.

Answer: B

  1. Your organization has several applications in the on-premises data centre that depend on Active Directory for user identification and authorization. Your organization is planning a migration to Google Cloud Platform and requires complete control over the Cloud Identity accounts used by staff to access Google Services and APIs. Where possible, you want to re-use Active Directory as the source of truth for identification and authorization. What should you do?

A. Ask all staff to create Cloud Identity accounts using their Google email address and require them to re-use their AD password for Cloud Identity account.
B. Create a custom script that synchronizes identities between Active Directory and Cloud Identity. Use Google Cloud Scheduler to run the script regularly.
C. Ask your operations team to export identities from active directory into a comma-separated file and use GCP console to import them into Cloud Identity daily.
D. Synchronize users in Google Cloud Identity with identities in Active directory by running Google Cloud Directory Sync (GCDS).

Answer: D

  1. Your colleague is learning about Docker images, containers and Kubernetes, and recently deployed a demo application to a GKE cluster that uses preemptible nodes. The deployment has 2 replicas, and although the demo application is responding to requests, the output from Cloud Shell shows one of the pods is in a pending state:
    kubectl get pods -l app=demo

What is the most likely explanation for this behaviour?

A. The pod in the pending state is unable to download the docker image.
B. The node got preempted before the pod could be started fully. GKE cluster master is provisioning a new node.
C. The pod in the pending state is too big to fit on a single preemptible VM. The node pool needs to be recreated with a bigger machine type.
D. Cluster autoscaling is not enabled, and the existing (only) node doesn’t have enough resources for provisioning the pod.

Answer: D
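The Events section of `kubectl describe` usually confirms this diagnosis; the pod name below is a placeholder:

```shell
# List the pods; one shows STATUS=Pending.
kubectl get pods -l app=demo

# The Events section reports a message such as "0/1 nodes are available:
# 1 Insufficient cpu." when no node can fit the pod.
kubectl describe pod <pending-pod-name>
```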

  1. Your company stores terabytes of image thumbnails in a Google Cloud Storage bucket with versioning enabled. You want to cut down storage costs, so you spoke to the image editing lab to understand their usage requirements. They inform you that they access non-current versions of images at most once a month and are happy for you to archive these objects 30 days after creation; however, there may be a need to retrieve and update some of these archived objects at the end of each month. What should you do?

A. Configure a lifecycle rule to transition non-current versions to Coldline Storage Class after 30 days.
B. Configure a lifecycle rule to transition objects from Regional Storage Class to Coldline Storage Class after 30 days.
C. Configure a lifecycle rule to transition objects from Regional Storage Class to Nearline Storage Class after 30 days.
D. Configure a lifecycle rule to transition non-current versions to Nearline Storage Class after 30 days.

Answer: D
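Option D corresponds to a lifecycle configuration along these lines; it targets only non-current versions older than 30 days:

```json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30, "isLive": false}
    }
  ]
}
```

Applied with `gcloud storage buckets update gs://my-thumbnails --lifecycle-file=lifecycle.json` (bucket name illustrative). Nearline's 30-day minimum storage duration suits the once-a-month access pattern, whereas Coldline's 90-day minimum would incur early-deletion charges on the monthly updates.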

  1. A mission-critical image processing application running in your on-premises data centre requires 64 virtual CPUs to execute all processes. Your colleague wants to migrate this mission-critical application to Google Compute Engine and has asked for your suggestion on the instance size. What should you suggest?

A. Provision the compute engine instance on the default settings, then modify it to have 64 vCPUs.
B. Provision the compute engine instance on the default settings, then scale it as per sizing recommendations.
C. Use n1-standard-64 machine type when provisioning the compute engine instance.
D. Use Xeon Scalable Processor (Skylake) as the CPU platform when provisioning the compute engine instance.

Answer: C
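Option C is a single command; the instance name and zone are illustrative:

```shell
# Provision a predefined 64-vCPU machine type directly.
gcloud compute instances create image-processor \
    --machine-type=n1-standard-64 \
    --zone=us-central1-a
```

The predefined n1-standard-64 machine type meets the 64-vCPU requirement at creation time, with no post-provisioning resize or reliance on sizing recommendations.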

  1. Your company has accumulated terabytes of analytics data from clickstream logs and stores this data in a BigQuery dataset in a central GCP project. Analytics teams from several departments in multiple GCP projects run queries against this data. The costs for BigQuery job executions have increased drastically in recent months, and your finance team has asked for your suggestions on controlling these spiralling costs. What should you do? (Select two)

A. Separate the data of each department and store it in BigQuery in their own GCP project. Enable BigQuery quota on a per-project basis.
B. Create a separate GCP project for each department and configure billing settings on each project to pick up the costs for queries run by their analytics team.
C. Move to BigQuery flat rate and purchase the required number of query slots.
D. Replicate the data in all GCP projects & have each department query data from their GCP project instead of the central BigQuery project.

Answer: C

  1. All development teams at your company share the same development GCP project, and this has previously resulted in some teams accidentally terminating compute engine resources of other teams, causing downtime and loss of productivity. You want to deploy a new application to the shared development GCP project, and you want to protect your instance from such issues. What should you do?

A. Deploy the application on a Preemptible VM.
B. Set the deletionProtection property on the VM.
C. Deploy the application on a Shielded VM.
D. Deploy the application on VMs provisioned on sole-tenant nodes.

Answer: B
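A sketch of option B; the instance name is illustrative, and the flag can be set at creation time or toggled later:

```shell
# Create the VM with deletion protection enabled.
gcloud compute instances create my-app-vm --deletion-protection

# Or enable it on an existing VM.
gcloud compute instances update my-app-vm --deletion-protection
```

Attempts to delete the instance then fail until the flag is cleared with `--no-deletion-protection`.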

  1. You want to provide an operations engineer access to a specific GCP project. Everyone at your company has G Suite accounts. How should you grant access?

A. Run the G Suite to Cloud Identity migration tool to convert G Suite accounts into Cloud Identity accounts. Grant the necessary roles to the Cloud Identity account.
B. Add the operations engineer to the gcp-console-access group in your G Suite domain and grant the necessary roles to the group.
C. Assign the necessary roles to their G Suite email address.
D. Disable G Suite Sign in, enable Cloud Identity Sign in, and add their email address to Cloud Identity. Grant the necessary roles to the Cloud Identity account.

Answer: C

  1. Your company runs a popular online retail platform that lets individual retailers sell their products to millions of customers around the world. Your company places a high value on delivering web requests with low latency, and customers have found this to be a key selling feature of the online platform. However, a recent surge in customers buying gifts for Thanksgiving has seen product pages load slower than usual. Your manager has suggested using a fronting reverse proxy layer to cache images. Your performance testing lead has estimated requiring 30 GB of in-memory cache for caching images of the most popular products in the sale. The reverse proxy also requires approximately 2 GB of memory for various other processes and no CPU at all. How should you design this system?

A. Run Redis on a single Google Compute Engine instance of type n1-standard-1, and configure Redis to use 32GB SSD persistent disk as caching backend.
B. Create a Kubernetes deployment from Redis image and run it in a GKE Cluster on a node pool with a single n1-standard-32 instance.
C. Set up Redis on a custom compute engine instance with 32 GB RAM and 6 virtual CPUs.
D. Use Cloud Memorystore for Redis instance replicated across two zones and configured for 32 GB in-memory cache.

Answer: D
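Option D can be sketched as a single command; the instance name and region are illustrative:

```shell
# Standard tier places a replica in a second zone for high availability.
gcloud redis instances create image-cache \
    --size=32 \
    --tier=standard \
    --region=us-central1
```

32 GB covers the 30 GB image cache plus the ~2 GB of overhead, and the fully managed service avoids operating Redis on Compute Engine or GKE yourself.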