Actual Exam Version:
- Your testing team recently signed off on a new release, and you have now deployed it to production. Almost immediately, you started noticing performance issues that were not visible in the test environment. How can you adjust your test and deployment procedures to avoid such issues in the future?
A. Carry out testing in test and staging environments with production-like volume.
B. Split the change into smaller units and deploy one unit at a time to identify what causes performance issues.
C. Deploy fewer changes to production.
D. Enable the new version for 1% of users before rolling it out to all users.
- All your company’s workloads currently run from an on-premises data centre. The existing hardware is due for a refresh in 3 years, and your company has commissioned several teams to explore various cloud platforms. Your team is in charge of exploring Google Cloud, and you would like to carry out proof of concept work for migrating some workloads to GCP. Your manager has asked for your suggestion on minimizing costs while enabling the proof of concept work to continue without committing to longer-term use. What should your recommendation be?
A. Use free tier where possible and sustained use discounts. Recruit a GCP cost management expert to help minimize operational cost.
B. Use free tier where possible and committed use discounts. Train the whole team to be aware of cost optimization techniques.
C. Use free tier where possible and sustained use discounts. Train the whole team to be aware of cost optimization techniques.
D. Use free tier where possible and committed use discounts. Recruit a GCP cost management expert to help minimize operational cost.
- You recently migrated an application from an on-premises Kubernetes cluster to Google Kubernetes Engine (GKE). The application is forecast to receive unpredictable, spiky traffic from next week, and your Team Lead has asked you to enable the GKE cluster node pool to scale on demand without exceeding 10 nodes. How should you configure the cluster?
A. Delete the existing cluster, create a new GKE cluster by running: gcloud container clusters create weatherapp_cluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and deploy the application to the new cluster.
B. Update the existing GKE cluster to enable autoscaling and set min and max nodes by running: gcloud container clusters update weatherapp_cluster --enable-autoscaling --min-nodes=1 --max-nodes=10
C. Add tags to the instances to enable autoscaling and set max nodes to 10 by running: gcloud compute instances add-tags --tags enable-autoscaling,max-nodes-10
D. Resize the GKE cluster node pool to 10 nodes, enough to handle spikes in traffic, by running: gcloud container clusters resize weatherapp_cluster --size=10
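For reference, option B’s approach can be sketched with the real gcloud flags (double dashes); the zone and node pool name here are assumptions for illustration:

```shell
# Sketch: enable node-pool autoscaling on an existing GKE cluster,
# capped at 10 nodes. Zone and node pool name are assumed.
gcloud container clusters update weatherapp_cluster \
  --zone=us-central1-a \
  --node-pool=default-pool \
  --enable-autoscaling \
  --min-nodes=1 \
  --max-nodes=10
```

Updating in place avoids the downtime of deleting and recreating the cluster.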
- Your company specializes in clickstream analytics and uses advanced machine learning to identify opportunities for further growth. Your company recently won a contract to carry out these analytics for a leading retail platform. The retail platform has a large user base all over the world and generates up to 10,000 clicks per second during sale periods. What GCP service should you use to store these clickstream event messages?
A. Google Cloud Datastore.
B. Google Cloud Bigtable.
C. Google Cloud SQL.
D. Google Cloud Storage.
- Your company recently migrated an application from the on-premises data centre to Google Cloud by lifting and shifting the VMs and the database. The application on Google Cloud uses 3 Google Compute Engine instances – 2 for the Python application tier and 1 for the MySQL database with an 80 GB disk. The application has started experiencing performance issues. Your operations team noticed both the CPU and memory utilization on the MySQL database are low, but the disk throughput and network IOPS are maxed out. What should you do to increase the performance?
A. Resize SSD persistent disk dynamically to 400 GB.
B. Increase CPU & RAM to 64 GB to compensate for throughput and network IOPS.
C. Migrate the database to Cloud SQL for PostgreSQL.
D. Use BigQuery instead of MySQL.
- Your company is a leading online news media organization that has customers all over the world. The number of paying subscribers is over 2 million, and the free subscribers stand at 9 million. The user engagement team has identified 7.6 million active users among the free subscribers. It plans to send them an email with links to the current monthly and quarterly promotions to convert them into paying subscribers. The user engagement team is unsure of the click-through rate and has asked for a cost-efficient solution that can scale to handle anywhere between 100 and 1 million clicks per day. What should you do?
A. 1. Save user data in Cloud SQL. 2. Serve the web tier on a single Google Compute Engine Instance.
B. 1. Save user data in Cloud Datastore. 2. Serve the web tier on App Engine Standard Service.
C. 1. Save user data in Cloud Bigtable. 2. Serve the web tier on a fleet of Google Compute Engine instances in a MIG.
D. 1. Save user data in SSD Persistent Disks. 2. Serve the web tier on GKE.
- You just migrated an application to Google Compute Engine. You deployed it across several regions – with an HTTP(S) Load Balancer and Managed Instance Groups (MIGs) in each region that provision instances across multiple zones with only internal IP addresses. You noticed the instances are being terminated and relaunched every 45 seconds. The instance itself comes up within 20 seconds and responds to cURL requests on localhost. What should you do to fix the issue?
A. Modify the instance template to add public IP addresses to the VMs, terminate all existing instances, update Load Balancer configuration to contact instances on public IP.
B. Add a load balancer tag to each instance. Set up a firewall rule to allow traffic from the Load Balancer to all instances with this tag.
C. Make sure firewall rules allow health check traffic to the VM instances.
D. Check the load balancer firewall rules and ensure it can receive HTTP(S) traffic.
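The restart-every-45-seconds symptom here is typically a blocked health check: VMs with only internal IPs never receive the probe, so the MIG keeps recreating them. Allowing Google’s documented health-check source ranges (130.211.0.0/22 and 35.191.0.0/16 for HTTP(S) load balancing) fixes it; a sketch, where the network name, port and target tag are assumptions for illustration:

```shell
# Sketch: allow load balancer health-check probes to reach backend VMs.
# Network, port and target tag are assumed for illustration.
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check
```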
- A mission-critical application recently experienced an outage due to a new release. The release is automatically deployed by a continuous deployment pipeline for a project stored in a GitHub repository. Currently, a commit merged to the main branch triggers the deployment pipeline, which deploys the new release to the production environment. Your change management committee has asked you whether it is possible to verify (test) the release artefacts before deploying them to production. What should you do?
A. Continue using the existing CI/CD solution to deploy new releases to production, but enable a mechanism to roll back quickly, e.g. Blue/Green deployments.
B. Continue using the existing CI/CD solution to deploy new releases to the production environment and carry out testing with live-traffic.
C. Configure the CI/CD solution to monitor tags in the repository. Deploy non-production tags to staging environments. After testing the changes, create production tags and deploy them to the production environment.
D. Use App Engine’s traffic splitting feature to enable the new version for 1% of users before rolling it out to all users.
- Your company has set Identity and Access Management (IAM) policies at different levels of the resource hierarchy – VMs, projects, folders and the organization. You are the security administrator for your company, and you have been asked to identify the effective policy that applies to a particular VM. What should your response be?
A. The effective policy for the VM is the policy assigned directly to the VM and restricted by the policies of its parent resource.
B. The effective policy for the VM is a union of the policy assigned to the VM and the policies it inherits from its parent resource.
C. The effective policy for the VM is the policy assigned directly to the VM.
D. The effective policy for the VM is an intersection of the policy assigned to the VM and the policies it inherits from its parent resource.
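Since the effective policy is the union of bindings inherited down the hierarchy, you can reason about it by reading the policy at each level; a sketch using gcloud commands that exist for this purpose, with the org, folder and project identifiers assumed for illustration:

```shell
# Sketch: read the IAM policy bindings at each level of the hierarchy.
# The effective policy for a resource is the union of these bindings.
gcloud organizations get-iam-policy 123456789012          # org ID assumed
gcloud resource-manager folders get-iam-policy 9876543210 # folder ID assumed
gcloud projects get-iam-policy my-project                 # project ID assumed
```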
- You have recently migrated an on-premises MySQL database to a Linux Compute Engine instance in Google Cloud. A sudden burst of traffic has seen the free disk space on the MySQL server drop from 72% to 2%. What can you do to remediate the problem while minimizing downtime?
A. Increase the size of the SSD persistent disk and verify the change by running the fdisk command.
B. Increase the size of the SSD persistent disk. Resize the filesystem by running resize2fs.
C. Add a new larger SSD disk and move the database files to the new disk. Shut down the compute engine instance. Increase the size of SSD persistent disk and start the instance.
D. Restore snapshot of the existing disk to a bigger disk, update the instance to use new disk and restart the database.
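Option B’s zero-downtime approach can be sketched as follows, assuming the database files live on an ext4 filesystem placed directly on a secondary persistent disk (no partition table); persistent disks can be resized while attached, and resize2fs grows ext4 online. Disk, zone and device names are assumptions for illustration:

```shell
# Sketch: grow a persistent disk and its ext4 filesystem without downtime.
# Disk name, zone, size and device path are assumed for illustration.
gcloud compute disks resize mysql-data-disk \
  --zone=us-central1-a --size=500GB
# On the VM, grow the filesystem to fill the enlarged disk (online for ext4).
# If the filesystem sits on a partition, grow the partition first.
sudo resize2fs /dev/sdb
```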
- You have recently deconstructed a huge monolith application into numerous microservices. Most requests are processed within the SLA, but some requests take a lot of time. As the application architect, you have been asked to identify which microservices take the longest. What should you do?
A. Send metrics from each microservice at each request start and request end to custom Cloud Monitoring metrics.
B. Decrease timeouts on each microservice so that requests fail faster and are retried.
C. Instrument your application with Cloud Trace and break down the latencies at each microservice.
D. Look for APIs with high latency in Cloud Monitoring Insights.
- Your company developed a new weather forecasting application and deployed it in Google Cloud. The application is deployed on an autoscaled MIG across two zones, and the compute layer relies on a Cloud SQL database. The company launched a trial on a small group of employees and now wishes to expand this trial to a larger group, including unauthenticated public users. You have been asked to ensure the application response is within the SLA. What resilience testing strategy should you use to achieve this?
A. Capture the trial traffic and replay several instances of it simultaneously until all layers autoscale. Then, start terminating resources in one of the zones randomly.
B. Simulate user traffic until one of the application layers autoscales. Then, start chaos engineering by terminating resources randomly across both zones.
C. Start sending more traffic to the application until all layers autoscale. Then, start chaos engineering by terminating resources randomly across both zones.
D. Estimate the expected traffic and update the minimum size of the Managed Instance Group (MIG) to handle 200% of the expected traffic.
- You designed a mission-critical application to have no single point of failure, yet the application suffered an outage recently. The post-mortem analysis has identified the failed component as the database layer. The Cloud SQL instance has a failover replica, but the replica was not promoted to primary. Your operations team have asked your recommendation for preventing such issues in future. What should you do?
A. Snapshot database more frequently.
B. Migrate database to an instance with more CPU.
C. Carry out planned failover periodically.
D. Migrate to a different database.
- Your company has accumulated 200 TB of logs in the on-premises data centre. Your bespoke data warehousing analytics application, which processes these logs and runs in the on-premises data centre, doesn’t autoscale and is struggling to cope with the massive volume of data. Your infrastructure architect has estimated the data centre will run out of space in 12 months. Your company would like to move the archived logs to Google Cloud for long-term storage and explore a replacement solution for its analytics needs. What would you recommend? (Choose two answers)
A. Migrate logs to Cloud SQL for long-term storage and analytics.
B. Migrate log files to Cloud Logging for long-term storage.
C. Migrate log files to Google Cloud Storage for long-term storage.
D. Import logs from Google Cloud Storage to Google BigQuery for analytics.
E. Import logs to Google Cloud Bigtable for analytics.
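The Cloud Storage plus BigQuery combination (options C and D) can be sketched with gsutil and bq, assuming newline-delimited JSON logs and a pre-created BigQuery dataset; the bucket name, dataset and file layout are assumptions for illustration:

```shell
# Sketch: copy archived logs to Cloud Storage, then load into BigQuery.
# Bucket, dataset, table and file paths are assumed for illustration.
gsutil -m cp -r /data/logs gs://example-log-archive/
bq load --source_format=NEWLINE_DELIMITED_JSON \
  --autodetect \
  logs_dataset.archived_logs \
  "gs://example-log-archive/logs/*.json"
```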
- Your company holds multiple petabytes of data, which includes historical stock prices for all stocks from all world financial markets. Your company now wants to migrate this data to the cloud. The financial analysts at your company have years of SQL experience, work round the clock and heavily depend on the historical data for predicting future stock prices. The chief analyst has asked you to ensure the data is always available and to minimize the impact on the team. Which GCP service should you use to store this data?
A. Google Cloud Storage.
B. Google Cloud SQL.
C. Google BigQuery.
D. Google Cloud Datastore.
- Your company’s on-premises data centre is running out of space, and your CTO thinks the cost of migrating and running all development environments in Google Cloud Platform is cheaper than the capital expenditure required to expand the existing data centre. The development environments are currently subject to multiple stop, start and reboot events throughout the day and persist state across restarts. How can you design this solution on Google Cloud Platform while enabling your CTO to view operational costs on an ongoing basis?
A. Export detailed Google cloud billing data to BigQuery and visualize cost reports in Google Data Studio.
B. Use Google Compute Engine VMs with Local SSD disks to store state across restarts.
C. Run gcloud compute instances set-disk-auto-delete on SSD persistent disks before stopping/rebooting the VM.
D. Use Google Compute Engine VMs with persistent disks to store state across restarts.
E. Apply labels on VMs to export their costs to BigQuery dataset.
- Your company specializes in clickstream analytics and uses cutting-edge AI-driven analysis to identify opportunities for further growth. Most customers have their clickstream analytics evaluated in a batch every hour, but some customers pay more to have their clickstream analytics evaluated in real-time. Your company would now like to migrate the analytics solution from the on-premises data centre to Google Cloud and is keen on selecting a service that offers both batch processing for hourly jobs and live processing (real-time) for stream jobs. Which GCP service should you use for this requirement while minimizing cost?
A. Google Cloud Dataproc.
B. Google Compute Engine with Google BigQuery.
C. Google Kubernetes Engine with Bigtable.
D. Google Cloud Dataflow.
- Your company recently acquired a health care start-up. Both your company and the acquired start-up have accumulated terabytes of reporting data in respective data centres. You have been asked for your recommendation on the best way to detect anomalies in the reporting data. You wish to use services on Google Cloud platform to achieve this. What should your recommendation be?
A. Upload reporting data of both companies to a Cloud Storage bucket, point Datalab at the bucket and clean the data as necessary.
B. Configure Cloud Dataprep to connect to your on-premises reporting systems and clean the data as necessary.
C. Upload reporting data of both companies to a Cloud Storage bucket, explore the bucket data in Cloud Dataprep and clean the data as necessary.
D. Configure Cloud Datalab to connect to your on-premises reporting systems and clean your data as necessary.
- Your company has a deadline to migrate all on-premises applications to Google Cloud Platform. Due to stringent timelines, your company decided to “lift and shift” all applications to Google Compute Engine. Simultaneously, your company has commissioned a small team to identify a better cloud-native solution for the migrated applications. You are the Team Lead, and one of your main requirements is to identify suitable compute services that can scale automatically and require minimal operational overhead (no-ops). Which GCP Compute Services should you use? (Choose two)
A. Use Google App Engine Standard.
B. Use Google Kubernetes Engine (GKE).
C. Use Compute Engine instances in Managed Instance Groups (MIGs).
D. Use Google Compute Engine with custom VM images.
E. Use custom container orchestration on Google Compute Engine.
- You have a business-critical application deployed in a non-autoscaling Managed Instance Group (MIG). The application is currently not responding to any requests. Your operations team analyzed the logs and discovered the instances keep restarting every 30 seconds. Your Team Lead would like to log in to an instance to debug the issue. What should you do to enable your Team Lead to log in to the VMs?
A. Disable autoscaling and add your Team Lead’s SSH key to the project-wide SSH Keys.
B. Carry out a rolling restart on the Managed Instance Group (MIG).
C. Disable the Managed Instance Group (MIG) health check and add your Team Lead’s SSH key to the project-wide SSH keys.
D. Grant your Team Lead the Project Viewer IAM role.
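Adding a project-wide SSH key, as in the options above, can be sketched like this; the username and key file are assumptions for illustration, and note that add-metadata replaces the whole ssh-keys value, so existing keys should be merged into the file first:

```shell
# Sketch: add a user's public key to the project-wide SSH keys.
# Username and key file are assumed; merge existing keys into
# ssh_keys.txt first, since this replaces the ssh-keys metadata value.
echo "teamlead:$(cat teamlead_key.pub)" >> ssh_keys.txt
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=ssh_keys.txt
```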
- You enabled a cron job on a Google Compute Engine instance to trigger a Python API that connects to Google BigQuery to query data. The script complains that it is unable to connect to BigQuery. How should you fix this issue?
A. Install gcloud SDK, gsutil, and bq components on the VM.
B. Provision a new VM with BigQuery access scope enabled, and migrate both the cron job and python API to the new VM.
C. Configure the Python API to use a service account with relevant BigQuery access enabled.
D. Update Python API to use the latest BigQuery API client library.
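The service-account approach in option C can be sketched as follows; the service account and project names are assumptions for illustration, and depending on what the script does it may also need a data-access role (e.g. roles/bigquery.dataViewer) in addition to the job-running role shown:

```shell
# Sketch: create a service account and grant it BigQuery access.
# Service account name and project ID are assumed for illustration.
gcloud iam service-accounts create bq-reader \
  --display-name="BigQuery reader for cron job"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bq-reader@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```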
- You deployed an application in the App Engine Standard service that uses Datastore indexes for every query your application makes. You recently discovered a runtime issue in the application and attributed it to missing Cloud Datastore indexes. Your manager has asked you to create the new indexes in Cloud Datastore by deploying a YAML configuration file. How should you do it?
A. Upload the YAML configuration file to a Cloud Storage bucket and point the index configuration in App Engine application to this location.
B. Run gcloud datastore indexes create with the index configuration YAML file.
C. In GCP Datastore Admin console, delete current configuration YAML file and upload a new configuration YAML file.
D. Send a request to the App Engine’s built-in HTTP modules to update the index configuration file for your application.
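Option B’s command takes the path to the index definition file as its argument; a sketch, with a hypothetical index.yaml shown as a comment for illustration:

```shell
# Sketch: deploy Datastore composite indexes from a YAML definition.
# Example index.yaml contents (kind and properties are assumed):
#   indexes:
#   - kind: Article
#     properties:
#     - name: author
#     - name: published
#       direction: desc
gcloud datastore indexes create index.yaml
```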
- You configured a CI/CD pipeline to deploy changes to your production application, which runs on GCP Compute Engine Managed Instance Group with auto-healing enabled. A recent deployment has caused a loss of functionality in the application. Debugging could take a long time, and downtime is a loss of revenue for your company. What should you do?
A. Deploy the old codebase directly on the VM using custom scripts.
B. Revert the changes in the GitHub repository and let the CI/CD pipeline deploy the previous codebase to the production environment.
C. Fix the issue directly on the VM.
D. Modify the Managed Instance Group (MIG) to use the previous instance template, terminate all instances and let autohealing bring back the instances on the previous template.
- Your company has deployed a wide range of applications across several Google Cloud projects in the organization. You are a security engineer within the Cloud Security team, and an apprentice has recently joined your team. To gain a better understanding of your company’s Google Cloud estate, the apprentice has asked you to provide them with access that gives them detailed visibility of all projects in the organization. Your manager has approved the request but has asked you to ensure the access does not grant them edit/write access to any resources. Which IAM roles should you assign to the apprentice?
A. Organization Owner and Project Owner roles.
B. Organization Viewer and Project Owner roles.
C. Organization Viewer and Project Viewer roles.
D. Organization Owner and Project Viewer roles.