GCP Associate Cloud Engineer Practice Exam Part 6(2)

  1. Your company migrated its data warehousing solution from its on-premises data centre to Google Cloud 3 years ago. Since then, several teams have worked on different data warehousing and analytics needs, and have created numerous BigQuery datasets. The compliance manager is concerned about the possibility of PII data being present in these datasets and has asked you to identify all datasets that contain a us_social_security_number column. How can you most efficiently identify all datasets that contain the us_social_security_number column?

A. Write a custom script that uses bq commands to loop through all data sets and identify those containing us_social_security_number column.
B. Write a custom script that queries BigQuery INFORMATION_SCHEMA.TABLE_SCHEMA where COLUMN_NAME=us_social_security_number.
C. Enable a Cloud Dataflow job that queries BigQuery INFORMATION_SCHEMA.TABLE_SCHEMA where COLUMN_NAME=us_social_security_number.
D. Search for us_social_security_number in Data Catalog.

Answer: B
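
A minimal sketch of option B, assuming the datasets live in the US multi-region. In current BigQuery the column metadata is exposed through the region-qualified INFORMATION_SCHEMA.COLUMNS view, which covers every dataset in that region in a single query:

    bq query --nouse_legacy_sql \
      'SELECT table_catalog, table_schema, table_name
       FROM `region-us`.INFORMATION_SCHEMA.COLUMNS
       WHERE column_name = "us_social_security_number"'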

  2. You developed a new order tracking application and created a test environment in your GCP project. The order tracking system uses Google Compute Engine to serve requests and relies on Cloud SQL to persist order data. Unit testing and user acceptance testing have succeeded, and you want to deploy the production environment. You want to do this while ensuring there are no routes between the test environment and the production environment. You want to follow Google-recommended practices. How should you do this?

A. Reuse an existing production GCP project of a different department for deploying the production resources.
B. Set up a shared VPC between test GCP project and production GCP project, and configure both test and production resources to use the shared VPC to achieve maximum isolation.
C. Deploy the production resources in a new subnet within the existing GCP project.
D. In a new GCP project, enable the required GCP services and APIs, and deploy the necessary production resources.

Answer: D
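
A sketch of option D with a hypothetical project ID. A separate project is isolated by default: its VPC has no routes to the test project's network unless you explicitly create peering or a shared VPC.

    gcloud projects create order-tracking-prod --name="Order Tracking Prod"
    gcloud services enable compute.googleapis.com sqladmin.googleapis.com \
      --project=order-tracking-prod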

  3. The procurement department at your company is migrating all their applications to Google Cloud, and one of their engineers has asked you to provide them with IAM access to create and manage service accounts in all Cloud Projects. What should you do?

A. Grant the user roles/iam.serviceAccountAdmin IAM role.
B. Grant the user roles/iam.serviceAccountUser IAM role.
C. Grant the user roles/iam.securityAdmin IAM role.
D. Grant the user roles/iam.roleAdmin IAM role.

Answer: A
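
Because the engineer needs this in every project, the binding is best made once at the organization level; a sketch with hypothetical IDs:

    gcloud organizations add-iam-policy-binding 123456789012 \
      --member="user:engineer@example.com" \
      --role="roles/iam.serviceAccountAdmin"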

  4. Your company’s backup strategy involves creating snapshots for all VMs at midnight every day. You want to write a script to retrieve a list of Compute Engine instances in all development and production projects and feed it to the backup script for snapshotting. What should you do?

A. Have your operations engineer export this information from GCP console to Cloud Datastore every day just before midnight.
B. Use gsutil to set up two gcloud configurations – one for each project. Write a script to activate the development gcloud configuration, retrieve the list of compute engine instances, then activate production gcloud configuration and retrieve the list of compute engine instances. Schedule the script using cron.
C. Use gcloud to set up two gcloud configurations – one for each project. Write a script to activate the development gcloud configuration, retrieve the list of compute engine instances, then activate production gcloud configuration and retrieve the list of compute engine instances. Schedule the script using cron.
D. Have your operations engineer execute a script in Cloud Shell to export this information to Cloud Storage every day just before midnight.

Answer: C
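
A minimal sketch of option C, assuming two configurations named development and production have already been created with gcloud config configurations create:

    #!/bin/bash
    # Collect instance names and zones from both projects for the backup script.
    for config in development production; do
      gcloud config configurations activate "$config"
      gcloud compute instances list --format="value(name,zone)"
    done > instances-to-snapshot.txt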

  5. To prevent accidental security breaches, the security team at your company has enabled Domain Restricted Sharing to limit resource sharing in your GCP organization to just your Cloud Identity domain. The compliance department has engaged an external auditor to carry out the annual audit, and the auditor requires read access to all resources in the production project to fill out specific sections in the audit report. How can you enable this access?

A. Grant the auditor’s Google account the roles/viewer IAM role on the production project.
B. Create a new Cloud Identity account for the auditor and grant them the roles/iam.securityReviewer IAM role on the production project.
C. Create a new Cloud Identity account for the auditor and grant them the roles/viewer IAM role on the production project.
D. Grant the auditor’s Google account the roles/iam.securityReviewer IAM role on the production project.

Answer: C
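
Once the auditor has an account in your Cloud Identity domain (so Domain Restricted Sharing allows the grant), the binding itself is one command; hypothetical names:

    gcloud projects add-iam-policy-binding production-project \
      --member="user:auditor@yourdomain.com" \
      --role="roles/viewer"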

  6. Your company has chosen Okta, a third-party SSO identity provider, for all its IAM requirements because of the rich feature set it offers – support for over 6,500 pre-integrated apps for SSO and over 1,000 SAML integrations. How can your company’s users in Cloud Identity authenticate using Okta before accessing resources in your GCP project?

A. Enable OAuth 2.0 with Okta OAuth Authorization Server for desktop and mobile applications.
B. Update the SAML integrations on the existing third-party apps to use Google as the Identity Provider (IdP).
C. Enable OAuth 2.0 with Okta OAuth Authorization Server for web applications.
D. Configure a SAML SSO integration with Okta as the Identity Provider (IdP) and Google as the Service Provider (SP).

Answer: D

  7. You have developed a containerized application that performs video classification and recognition using Video AI, and you plan to deploy this application to the production GKE cluster. You want your customers to access this application on a single public IP address over HTTPS. What should you do?

A. Configure a NodePort service for the application and use an Ingress service to open it to the public.
B. Configure a NodePort service on port 443 for the application and set up a dynamic pool of DNS A records on the application DNS to achieve round-robin load balancing.
C. Configure a ClusterIP service for the application and set up a DNS A record on the application DNS to point to the ClusterIP service.
D. Configure an HAProxy service for the application and set up a DNS A record on the application DNS pointing to the public IP address of the node that runs HAProxy.

Answer: A
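
A sketch of option A with hypothetical deployment, host and TLS secret names. On GKE, the Ingress provisions an external HTTP(S) load balancer with a single public IP:

    kubectl expose deployment video-classifier --type=NodePort \
      --port=443 --target-port=8443
    kubectl create ingress video-classifier \
      --rule="video.example.com/*=video-classifier:443,tls=video-tls-secret"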

  8. Your company wants to move all its on-premises applications to Google Cloud. Most applications depend on Kubernetes orchestration, and you have chosen to deploy these applications in Google Kubernetes Engine (GKE). The security team has asked you to store all container images in Google Container Registry (GCR) in a separate project, which has automated vulnerability scanning set up by a security partner organization. You want to ensure the GKE cluster running in your project can download the container images from the central GCR repo in the other project. How should you do this?

A. Grant the Storage Object Viewer IAM role on the GCR Repo project to the service account used by GKE nodes in your project.
B. Update ACLs on each container image to provide read-only access to the service account used by GKE nodes in your project.
C. Enable full access to all Google APIs under Access Scopes when provisioning the GKE cluster.
D. In the central GCR repo project, grant the Storage Object Viewer role on the Cloud Storage bucket that contains the container images to a service account and generate a P12 key for this service account. Configure the Kubernetes service account to use this key for imagePullSecrets.

Answer: A
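
A sketch of option A with hypothetical project IDs. GCR images are served from Cloud Storage, so read access comes from the Storage Object Viewer role granted in the registry project to the node service account:

    gcloud projects add-iam-policy-binding central-gcr-project \
      --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
      --role="roles/storage.objectViewer"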

  9. Your compliance officer has requested you to provide an external auditor with view, but not edit, access to all project resources. What should you do?

A. Add the auditors’ account to a custom IAM role that has view-only permissions on all the project services.
B. Add the auditors’ account to a custom IAM role that has view-only permissions on all the project resources.
C. Add the auditors’ account to the predefined service viewer IAM role.
D. Add the auditors’ account to the predefined project viewer IAM role.

Answer: D

  10. Your company has a requirement to persist logs from all Compute Engine instances in a single BigQuery dataset called pt-logs. Your colleague ran a script to install the Cloud Logging agent on all the VMs, but the logs from the VMs haven’t made their way to the BigQuery dataset. What should you do to fix this issue?

A. Configure a job in BigQuery to fetch all Compute Engine logs from Stackdriver. Set up Cloud Scheduler to trigger a Cloud Function every day at midnight. Grant the Cloud Function the BigQuery jobUser role on the pt-logs dataset and trigger the BigQuery job from Cloud Function.
B. Create an export for all logs in Cloud Logging and set up a Cloud Pub/Sub topic as the sink destination. Have a Cloud Function trigger on messages in the topic and configure it to send logs from the Compute Engine service to the BigQuery pt-logs dataset.
C. Create an export for Compute Engine logs in Cloud Logging and set up the BigQuery pt-logs dataset as the sink destination.
D. Add a metadata tag with key: logs-destination and value: bq://pt-logs, and grant the VM service accounts BigQuery Data Editor role on the pt-logs dataset.

Answer: C
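
A sketch of option C with hypothetical IDs. Note that BigQuery dataset IDs cannot contain hyphens, so in practice the dataset would be named with an underscore; after creating the sink, grant its writer identity the BigQuery Data Editor role on the dataset:

    gcloud logging sinks create compute-logs-to-bq \
      bigquery.googleapis.com/projects/my-project/datasets/pt_logs \
      --log-filter='resource.type="gce_instance"'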

  11. Your company is migrating its on-premises data centre to Google Cloud Platform in several phases. The current phase requires the migration of the LDAP server onto a Compute Engine instance. However, several legacy applications in your on-premises data centre and a few third-party applications still depend on the LDAP server for user authentication. How can you ensure the LDAP server is publicly reachable via TLS on UDP port 636?

A. Configure a firewall rule to allow inbound (ingress) UDP traffic on port 636 from 0.0.0.0/0 for the network tag allow-inbound-udp-636, and add this network tag to the LDAP server Compute Engine Instance.
B. Configure a firewall rule to allow outbound (egress) UDP traffic on port 636 to 0.0.0.0/0 for the network tag allow-outbound-udp-636, and add this network tag to the LDAP server Compute Engine Instance.
C. Add default-allow-udp network tag to the LDAP server Compute Engine Instance.
D. Configure a firewall route called default-allow-udp and have the next hop as the LDAP server Compute Engine Instance.

Answer: A
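
A sketch of option A, assuming the default network and a hypothetical instance name and zone:

    gcloud compute firewall-rules create allow-inbound-udp-636 \
      --network=default --direction=INGRESS --allow=udp:636 \
      --source-ranges=0.0.0.0/0 --target-tags=allow-inbound-udp-636
    gcloud compute instances add-tags ldap-server \
      --tags=allow-inbound-udp-636 --zone=us-central1-a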

  12. The compliance manager asked you to provide an external auditor with a report of when Cloud Identity users in your company were granted IAM roles for Cloud Spanner. How should you retrieve this information?

A. Retrieve the details from the policies section in the IAM console by filtering for Cloud Spanner IAM roles.
B. Retrieve the information from Cloud Monitoring console by filtering data logs for Cloud Spanner IAM roles.
C. Retrieve the information from Cloud Logging console by filtering admin activity logs for Cloud Spanner IAM roles.
D. Retrieve the details from Cloud Spanner console.

Answer: C
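
Role grants appear in the Admin Activity audit log as SetIamPolicy calls. The console filter can be reproduced from the CLI; a sketch with a hypothetical project ID:

    gcloud logging read \
      'logName:"cloudaudit.googleapis.com%2Factivity" AND
       protoPayload.methodName:"SetIamPolicy" AND
       protoPayload.serviceName="spanner.googleapis.com"' \
      --project=my-project --freshness=90d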

  13. Your company has three GCP projects – development, test and production – that are all linked to the same billing account. Your finance department has asked you to set up an alert to notify the testing team when the Google Compute Engine service costs in the test project exceed a threshold. How should you do this?

A. Ask your finance department to grant you the Project Billing Manager IAM role. Set up a budget and an alert in the billing account.
B. Ask your finance department to grant you the Project Billing Manager IAM role. Set up a budget and an alert for the test project in the billing account.
C. Ask your finance department to grant you the Billing Account Administrator IAM role. Set up a budget and an alert in the billing account.
D. Ask your finance department to grant you the Billing Account Administrator IAM role. Set up a budget and an alert for the test project in the billing account.

Answer: D
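
A sketch of the same budget from the CLI, with hypothetical account and project IDs; services/6F81-5844-456A is assumed here to be Compute Engine's billing service ID:

    gcloud billing budgets create \
      --billing-account=0X0X0X-0X0X0X-0X0X0X \
      --display-name="Test project GCE costs" \
      --budget-amount=1000USD \
      --filter-projects=projects/test-project \
      --filter-services=services/6F81-5844-456A \
      --threshold-rule=percent=0.9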

  14. Your company has a Citrix Licensing Server in a Windows VM in your on-premises data centre and needs to migrate this to Google Cloud Platform. You have provisioned a new Windows VM in a brand new Google project, and you want to RDP to the instance to install and register the licensing server. What should you do?

A. Generate a JSON key for the default GCE service account and RDP with this key.
B. Add a metadata tag to the instance with key: windows-password and password as the value, and RDP with these details.
C. RDP to the VM with your Google Account.
D. Retrieve the RDP credentials by executing gcloud compute reset-windows-password and RDP with the credentials.

Answer: D
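
A sketch of option D with hypothetical instance, zone and username; the command resets the local Windows account and prints fresh credentials:

    gcloud compute reset-windows-password citrix-licensing-vm \
      --zone=us-central1-a --user=citrix-admin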

  15. Your analyst team needs to run a BigQuery job to retrieve customer PII data. Security policies prohibit using Cloud Shell for retrieving PII data. The security team has advised you to set up a Shielded VM with just the required IAM access permissions to run BigQuery jobs on the specific dataset. What is the most efficient way to let the analysts SSH to the VM?

A. Block project-wide public SSH keys from the instance to restrict SSH to instance-level keys only. Use ssh-keygen to generate a key for each analyst, distribute the keys to the analysts, and ask them to SSH to the instance with their key from PuTTY.
B. Block project-wide public SSH keys from the instance to restrict SSH to instance-level keys only. Use ssh-keygen to generate a single key for all analysts, distribute the key to the analysts, and ask them to SSH to the instance with the key from PuTTY.
C. Enable OS Login by adding a metadata tag to the instance with key: enable-oslogin and value: TRUE. Ask the analysts to SSH to the instance through Cloud Shell.
D. Enable OS Login by adding a metadata tag to the instance with key: enable-oslogin and value: TRUE, and grant the roles/compute.osLogin role to the analysts’ group. Ask the analysts to SSH to the instance through Cloud Shell.

Answer: D
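
A sketch of option D with hypothetical names:

    gcloud compute instances add-metadata analyst-vm \
      --metadata=enable-oslogin=TRUE --zone=us-central1-a
    gcloud projects add-iam-policy-binding my-project \
      --member="group:analysts@example.com" \
      --role="roles/compute.osLogin"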

  16. You want to identify a cost-efficient storage class for archival of audit logs in Google Cloud Storage. Some of these audit logs may need to be retrieved during the quarterly audit. What Storage Class should you use to minimize costs?

A. Nearline Storage Class.
B. Disaster Recovery Storage Class.
C. Coldline Storage Class.
D. Regional Storage Class.

Answer: C
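
Coldline's 90-day minimum storage duration fits data read roughly once a quarter. Creating such a bucket, with a hypothetical name:

    gsutil mb -c coldline -l us-central1 gs://audit-logs-archive-example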

  17. Your production Compute workloads are running in a small subnet with netmask 255.255.255.224. A recent surge in traffic has seen the production VMs struggle, but there are no free IP addresses for the Managed Instance Group (MIG) to autoscale. You anticipate requiring 30 additional IP addresses for the new VMs. All VMs within the subnet need to communicate with each other, and you want to do this without adding additional routes. What should you do?

A. Create a new subnet with a bigger non-overlapping range. Move all instances to the new subnet and delete the old subnet.
B. Expand the subnet IP range.
C. Create a new project and a new VPC. Share the new VPC with the existing project and configure all existing resources to use the new VPC.
D. Create a new subnet with a bigger overlapping range to automatically move all instances to the new subnet. Then, delete the old subnet.

Answer: B
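
A /27 mask (255.255.255.224) leaves only about 30 usable addresses; expanding to /26 doubles the range without moving VMs or adding routes. A sketch with hypothetical names:

    gcloud compute networks subnets expand-ip-range prod-subnet \
      --region=us-central1 --prefix-length=26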

  18. Your company runs several applications on the production GKE cluster. The machine learning team runs their training models in the default GKE cluster node pool, but processing is slow and is delaying their analysis. You want the ML jobs to run on NVIDIA® Tesla® K80 (nvidia-tesla-k80) GPUs for better performance and increased throughput. What should you do?

A. Create a new GPU enabled node pool with the required specification, and configure node selector on the pods with key: cloud.google.com/gke-accelerator and value: nvidia-tesla-k80.
B. Create a dedicated GKE cluster with GPU enabled node pool as per the required specifications, and migrate the ML jobs to the new cluster.
C. Add a metadata tag to the pod specification with key: accelerator and value: tesla-gpu.
D. Terminate all existing nodes in the node pools and create new nodes with the required GPUs attached.

Answer: A
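
A sketch of option A with hypothetical cluster and pool names:

    gcloud container node-pools create gpu-pool \
      --cluster=prod-cluster --zone=us-central1-a \
      --accelerator=type=nvidia-tesla-k80,count=1 \
      --num-nodes=1

The ML pods then pin themselves to this pool by setting a nodeSelector of cloud.google.com/gke-accelerator: nvidia-tesla-k80 in their spec, as option A describes.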

  19. You work for a multinational conglomerate that has thousands of GCP projects and a very complex resource hierarchy that includes over 100 folders. An external audit team has requested to view this hierarchy to fill out sections of a report. You want to enable them to view the hierarchy while ensuring they can’t do anything else. What should you do?

A. Grant all individual auditors roles/browser IAM role.
B. Add all individual auditors to an IAM group and grant the group roles/iam.roleViewer IAM role.
C. Add all individual auditors to an IAM group and grant the group roles/browser IAM role.
D. Grant all individual auditors roles/iam.roleViewer IAM role.

Answer: C
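
A sketch with a hypothetical organization ID; roles/browser exposes the folder and project hierarchy without granting access to any resources inside:

    gcloud organizations add-iam-policy-binding 123456789012 \
      --member="group:auditors@example.com" \
      --role="roles/browser"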

  20. EU GDPR requires you to archive all customer PII data indefinitely. The compliance department needs access to this data during the annual audit and is happy for the data to be archived after 30 days to save on storage costs. You want to design a cost-efficient solution for storing this data. What should you do?

A. Store new data in Regional Storage Class, and add a lifecycle rule to transition data older than 30 days to Coldline Storage Class.
B. Store new data in Regional Storage Class, and add a lifecycle rule to transition data older than 30 days to Nearline Storage Class.
C. Store new data in Multi-Regional Storage Class, and add a lifecycle rule to transition data older than 30 days to Nearline Storage Class.
D. Store new data in Multi-Regional Storage Class, and add a lifecycle rule to transition data older than 30 days to Coldline Storage Class.

Answer: A
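
A sketch of option A's lifecycle rule, applied to a hypothetical bucket:

    cat > lifecycle.json <<'EOF'
    {
      "rule": [{
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 30}
      }]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://pii-archive-example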

  21. Your finance department has asked you to provide their team access to view billing reports for all GCP projects. What should you do?

A. Grant roles/billing.user IAM role to the finance group.
B. Grant roles/billing.projectManager IAM role to the finance group.
C. Grant roles/billing.admin IAM role to the finance group.
D. Grant roles/billing.viewer IAM role to the finance group.

Answer: D
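
A sketch with hypothetical IDs:

    gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
      --member="group:finance@example.com" \
      --role="roles/billing.viewer"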

  22. You have developed an enhancement for a photo compression application running on the App Engine Standard service in Google Cloud Platform, and you want to canary test this enhancement on a small percentage of live users before completely switching off the old version. How can you do this?

A. Deploy the enhancement in a GKE cluster and enable traffic splitting in GCP console.
B. Deploy the enhancement as a new version of the application and enable traffic splitting in GCP console.
C. Deploy the enhancement in a GCE VM and enable traffic splitting in GCP console.
D. Deploy the enhancement as a new application in App Engine Standard and enable traffic splitting in GCP console.

Answer: B
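
A sketch of option B, assuming the live version is v1 and the enhancement is deployed as v2:

    gcloud app deploy --version=v2 --no-promote
    gcloud app services set-traffic default --splits=v1=0.95,v2=0.05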

  23. Your company processes gigabytes of image thumbnails every day and stores them in your on-premises data centre. Your team developed an application that uses these image thumbnails with GCP services such as AutoML Vision and pre-trained Vision API models to detect emotion, understand text, and much more. The Cloud Security team has created a service account with the appropriate level of access; however, your team is unaware of how to authenticate to the GCP services and APIs using the service account. What should you do?

A. Run gcloud iam service-accounts keys create to generate a JSON key file for the service account and configure your on-premises application to present the JSON key file.
B. Configure your on-premises application to use the service account username and password credentials.
C. Create an IAM user with the appropriate permissions and use the username and password in your on-premises application.
D. Configure the Direct interconnect to authenticate requests from your on-premises network automatically.

Answer: A
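
A sketch of option A with a hypothetical service account; most Google client libraries pick the key up automatically via GOOGLE_APPLICATION_CREDENTIALS:

    gcloud iam service-accounts keys create vision-key.json \
      --iam-account=vision-app@my-project.iam.gserviceaccount.com
    export GOOGLE_APPLICATION_CREDENTIALS=$PWD/vision-key.json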

  24. A Data Support Engineer at your company accidentally disclosed customer PII data in a support case in Google Cloud Console. Your compliance team wants to prevent this from occurring again and has asked you to set them up as approvers for cases raised by support teams. You want to follow Google best practices. What IAM access should you grant them?

A. Grant roles/iam.roleAdmin IAM role to the compliance team group.
B. Grant roles/accessapproval.approver IAM role to the compliance team group.
C. Grant roles/iam.roleAdmin IAM role to all members of the compliance team.
D. Grant roles/accessapproval.approver IAM role to all members of the compliance team.

Answer: B
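
A sketch with hypothetical names; the Access Approval API must be enabled on the project first:

    gcloud services enable accessapproval.googleapis.com --project=my-project
    gcloud projects add-iam-policy-binding my-project \
      --member="group:compliance@example.com" \
      --role="roles/accessapproval.approver"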

  25. You created a Deployment Manager template to automate the provisioning of a production Google Kubernetes Engine (GKE) cluster. The GKE cluster requires a monitoring pod running on each node (a DaemonSet) in the daemon-system namespace, and your manager has asked you to identify whether it is possible to automate the provisioning of the monitoring pod along with the cluster using the least number of resources. How should you do this?

A. Update the deployment manager template to add a metadata tag with key: daemon-system and value: DaemonSet manifest YAML.
B. Have the Runtime Configurator create a RuntimeConfig resource with the DaemonSet definition.
C. Add a new type provider in Deployment Manager for Kubernetes APIs and use the new type provider to create the DaemonSet resource.
D. Update the Deployment Manager template to provision a preemptible Compute Engine instance and configure its startup script to use kubectl to create the DaemonSet.

Answer: C
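
A sketch of option C, registering the cluster's API server as a Deployment Manager type provider (hypothetical names; the descriptor URL and options file depend on the cluster, and the command may require the beta component):

    gcloud beta deployment-manager type-providers create gke-cluster-provider \
      --descriptor-url="https://CLUSTER_ENDPOINT/openapi/v2" \
      --api-options-file=options.yaml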

  26. Your company has two GCP organizations – one for development (and test) resources, and another for production resources. Each GCP organization has a billing account and several GCP projects. The new CFO doesn’t like this billing structure and has asked your team to consolidate costs from all GCP projects onto a single invoice as soon as possible. What should you do?

A. Move all the projects from production GCP organization into development GCP organization and link them to the development billing account.
B. Move all projects from both organizations into a new GCP organization and link all the projects to a new billing account in the new GCP organization.
C. Link all projects from production GCP organization to the billing account used by development GCP organization.
D. Have both billing accounts export their billing data to a single BigQuery dataset.

Answer: C
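
A sketch of option C with hypothetical IDs; a billing account can pay for projects in any organization, so re-linking is immediate and no projects have to move:

    gcloud billing projects link prod-project-1 \
      --billing-account=0X0X0X-0X0X0X-0X0X0X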