GCP Professional Cloud Architect Practice Exam Part 5

  1. For this question, refer to the Mountkirk Games case study.
    Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company for designing their new testing strategy. Given the experience with scaling issues in the existing solution, Mountkirk Games is concerned about the ability of the new backend to scale based on traffic and has asked for your opinion on how to design their new test strategy to ensure scaling issues do not repeat. What should you suggest?

A. Modify the test strategy to scale tests well beyond the current approach.
B. Update the test strategy to replace unit tests with end-to-end integration tests.
C. Modify the test strategy to run tests directly in production after each new release.
D. Update the test strategy to test all infrastructure components in Google Cloud Platform.

  2. For this question, refer to the Mountkirk Games case study.
    Mountkirk Games anticipates that its new game will be hugely popular and expects it to generate vast quantities of time-series data. Mountkirk Games is keen on selecting a managed storage service for this time-series data. What GCP service would you recommend? (A short sketch follows the options.)

A. Cloud Bigtable.
B. Cloud Spanner.
C. Cloud Firestore.
D. Cloud Memorystore.
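
If Cloud Bigtable (option A) were chosen here, ingesting time-series points is a simple wide-row write. Below is a minimal sketch with the Python client; the project, instance, table and column-family names are hypothetical and assumed to already exist.

```python
from datetime import datetime, timezone

from google.cloud import bigtable

# Assumed names: project "mountkirk-prod", instance "game-telemetry",
# table "telemetry" with column family "metrics" -- all hypothetical.
client = bigtable.Client(project="mountkirk-prod", admin=False)
table = client.instance("game-telemetry").table("telemetry")

# Row key pattern player#metric#reversed-timestamp keeps recent points for a
# player close together while avoiding hotspotting on a purely time-based key.
now = datetime.now(timezone.utc)
row_key = f"player42#latency#{2**63 - int(now.timestamp())}".encode()

row = table.direct_row(row_key)
row.set_cell("metrics", b"value_ms", b"37", timestamp=now)
row.commit()
```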

  3. For this question, refer to the Mountkirk Games case study.
    Mountkirk Games has redesigned parts of its game backend into multiple microservices that operate as HTTP (REST) APIs. Taking into consideration the technical requirements for the game backend platform as well as the business requirements, how should you design the game backend on Google Cloud Platform?

A. Use a Layer 4 (TCP) Load Balancer and Google Compute Engine VMs in a Managed Instance Group (MIG) with instances in multiple zones in multiple regions.
B. Use a Layer 4 (TCP) Load Balancer and Google Compute Engine VMs in a Managed Instance Group (MIG) with instances restricted to a single zone in multiple regions.
C. Use a Layer 7 (HTTPS) Load Balancer and Google Compute Engine VMs in a Managed Instance Group (MIG) with instances in multiple zones in multiple regions.
D. Use a Layer 7 (HTTPS) Load Balancer and Google Compute Engine VMs in a Managed Instance Group (MIG) with instances restricted to a single zone in multiple regions.

  4. For this question, refer to the Mountkirk Games case study.
    Taking into consideration the technical requirements for the game backend platform as well as the game analytics platform, where should you store data in Google Cloud Platform?

A. 1. For time-series data, use Cloud SQL. 2. For historical data queries, use Cloud Bigtable.
B. 1. For time-series data, use Cloud SQL. 2. For historical data queries, use Cloud Spanner.
C. 1. For time-series data, use Cloud Bigtable. 2. For historical data queries, use BigQuery.
D. 1. For time-series data, use Cloud Bigtable. 2. For historical data queries, use BigQuery. 3. For transactional data, use Cloud Spanner.

  5. For this question, refer to the Mountkirk Games case study.
    Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company for designing their new testing strategy. Mountkirk Games is concerned about the potential disruption caused by solar storms to its business. A solar storm last month resulted in downgraded mobile network coverage and slow upload speeds for a vast majority of mobile users in the Mediterranean. As a result, their analytics platform struggled to cope with the late arrival of data from these mobile devices. Mountkirk Games has asked you for your suggestions on avoiding such issues in the future. What should you recommend?

A. Update the test strategy to include fault injection software and introduce latency instead of faults.
B. Update the test strategy to test from multiple mobile phone emulators from all GCP regions.
C. Update the test strategy to introduce random amounts of delay before processing the uploaded analytics files.
D. Update the test strategy to gather latency information from 1% of users and use this to simulate latency on production-like volume.

  6. For this question, refer to the TerramEarth case study.
    TerramEarth wants to preemptively stock replacement parts and reduce the unplanned downtime of their vehicles to less than one week. The CTO sees an AI-driven solution as the future of this prediction; still, for the time being, the plan is to have the analysts carry out the analysis by querying all data from a central location and making predictions. Which of the following designs would give the analysts the ability to query data from a central location? (A short sketch of the streaming path follows the options.)

A. HTTP(s) Load Balancer, GKE on Anthos, Pub/Sub, Dataflow, BigQuery.
B. HTTP(s) Load Balancer, GKE on Anthos, Dataflow, BigQuery.
C. HTTP(s) Load Balancer, GKE on Anthos, BigQuery.
D. App Engine Flexible, Pub/Sub, Dataflow, BigQuery.
E. App Engine Flexible, Pub/Sub, Dataflow, Cloud SQL.
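
The Pub/Sub → Dataflow → BigQuery leg that appears in options A and D is typically a short streaming Apache Beam pipeline. Below is a minimal sketch; the project, topic, bucket and table names are placeholders invented for illustration.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

# All resource names here are hypothetical.
options = PipelineOptions(
    runner="DataflowRunner",
    project="terramearth-prod",
    region="us-central1",
    temp_location="gs://terramearth-tmp/dataflow",
)
options.view_as(StandardOptions).streaming = True  # run as a streaming job

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadTelemetry" >> beam.io.ReadFromPubSub(
            topic="projects/terramearth-prod/topics/vehicle-telemetry")
        | "ParseJson" >> beam.Map(json.loads)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "terramearth-prod:telemetry.events",
            schema="vehicle_id:STRING,metric:STRING,value:FLOAT,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
    )
```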

  7. For this question, refer to the TerramEarth case study.
    You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to design and develop APIs that would enable TerramEarth to decrease unplanned downtime to less than one week. Given the short period for the project, TerramEarth wants you to focus on delivering APIs that meet their business requirements rather than spend time developing a custom framework that fits the needs of all APIs and their edge case scenarios. What should you do?

A. Expose APIs on Google App Engine through Google Cloud Endpoints for dealers and partners.
B. Expose APIs on Google App Engine to the public.
C. Expose OpenAPI Specification compliant APIs on Google App Engine to the public.
D. Expose APIs on Google Kubernetes Engine to the public.
E. Expose OpenAPI Specification compliant APIs on Google Kubernetes Engine to dealers and partners.

  8. For this question, refer to the TerramEarth case study.
    You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help them enhance their APIs. One of their APIs, used for retrieving vehicle telemetry data, is being used successfully by analysts to predict unplanned downtime and preemptively stock replacement parts. TerramEarth has asked you to enable delegated authorization for third parties so that the dealer network can use this data to better position new products and services. What should you do? (A short sketch follows the options.)

A. Use OAuth 2.0 to delegate authorization.
B. Use SAML 2.0 to delegate authorization.
C. Open up the API to IP ranges of the dealer network.
D. Enable each dealer to share their credentials with their trusted partners.
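
Delegated authorization with OAuth 2.0 (option A) usually means the third party completes an authorization-code flow and receives scoped tokens instead of sharing credentials. Below is a minimal server-side sketch using the google-auth-oauthlib helper; the client_secret.json file, redirect URI and the standard email scope are placeholders for this sketch.

```python
from google_auth_oauthlib.flow import Flow

# client_secret.json, the redirect URI and the scope are placeholders.
flow = Flow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/userinfo.email"],
    redirect_uri="https://dealer-portal.example.com/oauth2/callback",
)

# Step 1: send the dealer's application to the consent screen.
auth_url, state = flow.authorization_url(access_type="offline", prompt="consent")
print("Redirect the dealer to:", auth_url)

# Step 2: after consent, exchange the returned code for tokens.
auth_code = input("Paste the authorization code: ")
flow.fetch_token(code=auth_code)

credentials = flow.credentials  # access + refresh tokens limited to the granted scopes
print("Access token expires at:", credentials.expiry)
```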

  9. For this question, refer to the TerramEarth case study.
    TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field but is concerned that its existing data ingestion solution is not capable of handling the massive increase in ingested data. TerramEarth has asked you to design the data ingestion layer to support this requirement. What should you do? (A short sketch follows the options.)

A. Ingest data to Google Cloud Storage directly.
B. Ingest data through Google Cloud Pub/Sub.
C. Ingest data to Google BigQuery through streaming inserts.
D. Continue ingesting data via existing FTP solution.
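
If ingestion moves to Cloud Pub/Sub (option B), each vehicle gateway only needs to publish small messages and the service absorbs the volume. Below is a minimal publisher sketch; the project, topic and vehicle identifiers are placeholders.

```python
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Project and topic names are placeholders for this sketch.
topic_path = publisher.topic_path("terramearth-prod", "vehicle-telemetry")

reading = {"vehicle_id": "TE-00042", "metric": "hydraulic_pressure", "value": 187.4}

# Attributes let downstream subscribers filter or route without parsing the payload.
future = publisher.publish(
    topic_path,
    data=json.dumps(reading).encode("utf-8"),
    vehicle_id=reading["vehicle_id"],
)
print("Published message id:", future.result())
```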

  10. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. TerramEarth is concerned that its existing data ingestion solution may not satisfy all use cases. Early analysis has shown that FTP uploads are highly unreliable in areas with poor network connectivity, and this frequently causes the FTP upload to restart from the beginning. On occasion, this has resulted in analysts querying old data and failing to predict unplanned downtime accurately. How should you design the data ingestion layer to make it more reliable while ensuring data is made available to analysts as quickly as possible? (A short sketch follows the options.)

A. 1. Replace the existing FTP server with a cluster of FTP servers on a single GKE cluster. 2. After receiving the files, push them to Multi-Regional Cloud Storage bucket. 3. Modify the ETL process to pick up files from this bucket.
B. 1. Replace the existing FTP server with multiple FTP servers running in GKE clusters in multiple regions. 2. After receiving the files, push them to a Multi-Regional Cloud Storage bucket in the same region. 3. Modify the ETL process to pick up files from this bucket.
C. 1. Use Google HTTP(s) APIs to upload files to multiple Multi-Regional Cloud Storage Buckets. 2. Modify the ETL process to pick up files from these buckets.
D. 1. Use Google HTTP(s) APIs to upload files to multiple Regional Cloud Storage Buckets. 2. Modify the ETL process to pick up files from these buckets.
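
Options C and D lean on the fact that Cloud Storage's HTTP API supports resumable uploads, which the Python client uses automatically for large files, so a dropped connection continues from the last committed chunk instead of restarting. A minimal sketch follows; the project, bucket and file paths are placeholders.

```python
from google.cloud import storage

client = storage.Client(project="terramearth-prod")  # placeholder project
bucket = client.bucket("terramearth-telemetry-us")   # placeholder bucket

blob = bucket.blob("raw/2024-05-01/TE-00042.csv.gz")
# Large files are uploaded in retryable chunks; an explicit chunk_size
# (a multiple of 256 KiB) keeps each attempt small on flaky links.
blob.chunk_size = 8 * 1024 * 1024  # 8 MiB

blob.upload_from_filename("/var/telemetry/TE-00042.csv.gz")
print("Uploaded generation:", blob.generation)
```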

  11. For this question, refer to the TerramEarth case study.
    TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The telemetry data from vehicles is stored in the respective regional buckets in the US, Asia and Europe. The feedback from most service centres and dealer networks indicates vehicle hydraulics fail after 69000 miles, and this has knock-on effects such as disabling the dynamic adjustment in the height of the vehicle. The vehicle design team has approached you to provide them with all raw telemetry data to analyze and determine the cause of this failure. You need to run this job on all the data. How should you do this while minimizing costs?

A. Transfer telemetry data from all Regional Cloud Storage buckets to another bucket in a single zone. Launch a Dataproc job in the same zone.
B. Transfer telemetry data from all Regional Cloud Storage buckets to another bucket in a single region. Launch a Dataproc job in the same region.
C. Run a Dataproc job in each region to extract, pre-process and tar (compress) the data. Transfer this data to a Multi-Regional Cloud Storage bucket. Launch a Dataproc job.
D. Run a Dataproc job in each region to extract, pre-process and tar (compress) the data. Transfer this data to a Regional Cloud Storage bucket. Launch a Dataproc job.

  12. For this question, refer to the Mountkirk Games case study.
    Your company is an industry-leading ISTQB certified software testing firm, and Mountkirk Games has recently partnered with your company for designing their new testing strategy. Mountkirk Games has recently migrated their backend to GCP and uses continuous deployment to automate releases. A few of their releases have recently caused a loss of functionality within the application, and a few others have had unintended performance issues. You have been asked to come up with a testing strategy that lets you properly test all new releases while also giving you the ability to test a particular new release against scaled-up production-like traffic to detect performance issues. Mountkirk Games wants their test environments to scale cost-effectively. How should you design the test environments?

A. Design the test environments to scale based on simulated production traffic.
B. Make use of the existing on-premises infrastructure to scale based on simulated production traffic.
C. Stress test every single GCP service used by the application individually.
D. Create multiple static test environments to handle different levels of traffic, e.g. small, medium, big.

  13. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The CTO sees an AI-driven solution as the future of this prediction and wants to store all telemetry data in a cost-efficient way while the team works on building a blueprint for a machine learning model in a year. The CTO has asked you to facilitate cost-efficient storage of the telemetry data. Where should you store this data? (A short sketch follows the options.)

A. Compress the telemetry data in half-hourly snapshots on the vehicle IoT device and push to a Nearline Google Cloud Storage bucket.
B. Use a real-time (streaming) dataflow job to compress the incoming data and store in BigQuery.
C. Use a real-time (streaming) dataflow job to compress the incoming data and store in Cloud Bigtable.
D. Compress the telemetry data in half-hourly snapshots on the vehicle IoT device and push to a Coldline Google Cloud Storage bucket.
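
Options A and D both amount to compressing a snapshot on the device and writing it to a bucket whose default storage class matches the access pattern. Below is a minimal sketch of the upload side, assuming a hypothetical bucket that was created with the COLDLINE default class; all names and paths are placeholders.

```python
import gzip
import shutil

from google.cloud import storage

# Compress the half-hourly snapshot before upload; paths are placeholders.
with open("/var/telemetry/snapshot-1130.csv", "rb") as src, \
        gzip.open("/var/telemetry/snapshot-1130.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

client = storage.Client(project="terramearth-prod")       # placeholder project
bucket = client.bucket("terramearth-telemetry-archive")   # default class COLDLINE (assumed)

blob = bucket.blob("TE-00042/2024-05-01/snapshot-1130.csv.gz")
blob.upload_from_filename("/var/telemetry/snapshot-1130.csv.gz")
print("Stored with storage class:", blob.storage_class or "bucket default")
```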

  14. For this question, refer to the TerramEarth case study.
    The feedback from all TerramEarth service centres and dealer networks indicates vehicle hydraulics fail after 69000 miles, and this has knock-on effects such as disabling the dynamic adjustment in the height of the vehicle. The vehicle design team wants the raw data to be analyzed, and operational parameters tweaked in response to various factors to prevent such failures. How can you facilitate this feedback loop to all the connected and unconnected vehicles while minimizing costs?

A. Engineers from the vehicle design team analyze the raw telemetry data and determine patterns that can be used by algorithms to identify operational adjustments and tweak the drive train parameters automatically.
B. Use a custom machine learning solution in on-premises to identify operational adjustments and tweak the drive train parameters automatically.
C. Run a real-time (streaming) Dataflow job to identify operational adjustments and use Firebase Cloud Messaging to push the optimisations automatically.
D. Use machine learning in Google AI Platform to identify operational adjustments and tweak the drive train parameters automatically.

  15. For this question, refer to the TerramEarth case study.
    TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth has partnered with another firm to IoT-enable all vehicles in the field. The vehicle telemetry data is saved in Cloud Storage for long-term storage and is also pushed to BigQuery to enable analytics and train ML models. A recent automotive industry regulation in the EU prohibits TerramEarth from holding this data for longer than 3 years. What should you do? (A short sketch follows the options.)

A. Enable a bucket lifecycle management rule to delete objects older than 36 months. Update the default table expiration for BigQuery Datasets to 36 months.
B. Enable a bucket lifecycle management rule to set the Storage Class to NONE for objects older than 36 months. Set BigQuery table expiration time to 36 months.
C. Enable a bucket lifecycle management rule to delete objects older than 36 months. Use partitioned tables in BigQuery and set the partition expiration period to 36 months.
D. Enable a bucket lifecycle management rule to set the Storage Class to NONE for objects older than 36 months. Use partitioned tables in BigQuery and set the partition expiration period to 36 months.
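
On the BigQuery side, the retention described in option C can be expressed as a partition expiration on a time-partitioned table. Below is a minimal sketch with the Python client; the project, dataset, table and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="terramearth-prod")  # placeholder project

schema = [
    bigquery.SchemaField("vehicle_id", "STRING"),
    bigquery.SchemaField("metric", "STRING"),
    bigquery.SchemaField("value", "FLOAT"),
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
]

table = bigquery.Table("terramearth-prod.telemetry.events", schema=schema)
# Daily partitions on event_ts; each partition is dropped roughly 36 months
# after its date (36 months approximated as 36 * 30 days here).
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
    expiration_ms=36 * 30 * 24 * 60 * 60 * 1000,
)
client.create_table(table)
```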

  16. TerramEarth has recently partnered with another firm to IoT-enable all vehicles in the field. Connecting all vehicles has resulted in a massive surge in ingested telemetry data, and
    TerramEarth is concerned about the spiralling storage costs of storing this data in Cloud Storage for long-term needs. The Machine Learning & Predictions team at TerramEarth has suggested data older than 1 year is of no use and can be purged. Data older than 30 days is only used in exceptional circumstances for training models but needs to be retained for audit purposes. What should you do? (A short sketch follows the options.)

A. Implement Google Cloud Storage lifecycle management rule to transition objects older than 30 days from Standard to Coldline Storage class. Implement another rule to Delete objects older than 1 year in Coldline Storage class.
B. Implement Google Cloud Storage lifecycle management rules to transition objects older than 30 days from Coldline to Nearline Storage class. Implement another rule to transition objects older than 90 days from Coldline to Nearline Storage class.
C. Implement Google Cloud Storage lifecycle management rules to transition objects older than 90 days from Standard to Nearline Storage class. Implement another rule to transition objects older than 180 days from Nearline to Coldline Storage class.
D. Implement Google Cloud Storage lifecycle management rule to transition objects older than 30 days from Standard to Coldline Storage class. Implement another rule to Delete objects older than 1 year in Nearline Storage class.
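
Option A maps onto two bucket lifecycle rules: a storage-class transition at 30 days and a delete at 365 days. Below is a minimal sketch with the Python client; the project and bucket names are placeholders.

```python
from google.cloud import storage

client = storage.Client(project="terramearth-prod")      # placeholder project
bucket = client.get_bucket("terramearth-telemetry-raw")  # placeholder bucket

# Rule 1: objects older than 30 days move from Standard to Coldline.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
# Rule 2: objects older than 365 days are deleted outright.
bucket.add_lifecycle_delete_rule(age=365)

bucket.patch()  # push both rules to the bucket in one update

for rule in bucket.lifecycle_rules:
    print(rule)
```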

  17. For this question, refer to the TerramEarth case study.
    You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help redesign their data warehousing platform. Taking into consideration its business requirements, technical requirements and executive statement, what replacement would you recommend for their data warehousing needs?

A. Use BigQuery and enable table partitioning.
B. Use a single Compute Engine instance with machine type n1-standard-96 (96 CPUs, 360 GB memory).
C. Use BigQuery with federated data sources.
D. Use two Compute Engine instances – a non-preemptible instance with machine type n1-standard-96 (96 CPUs, 360 GB memory) and a preemptible instance with machine type n1-standard-32 (32 CPUs, 120 GB memory).

  18. You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help redesign their data warehousing platform. Your redesigned solution includes Cloud Pub/Sub, Cloud Dataflow and BigQuery and is expected to satisfy both their business and technical requirements. But the service centres and maintenance departments have expressed concerns about the quality of data being ingested. You have been asked to modify the design to provide an ability to clean and prepare data for analysis and machine learning before saving data to BigQuery. You want to minimize cost. What should you do? (A short sketch follows the options.)

A. Sanitize the data during the ingestion process in a real-time (streaming) Dataflow job before inserting into BigQuery.
B. Use Cloud Scheduler to trigger Cloud Function that reads data from BigQuery, cleans it and updates the tables.
C. Run a query to export the required data from existing BigQuery tables and save the data to new BigQuery tables.
D. Run a daily job in Dataprep to sanitize data in BigQuery tables.
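
The in-flight cleansing described in option A is usually just an extra transform in the streaming Dataflow job, placed before the BigQuery write. Below is a minimal sketch of such a step as an Apache Beam DoFn; the field names and validation rules are invented for illustration.

```python
import apache_beam as beam

class SanitizeTelemetry(beam.DoFn):
    """Drop malformed records and normalize fields before they reach BigQuery."""

    REQUIRED = ("vehicle_id", "metric", "value", "event_ts")

    def process(self, record):
        # Discard records missing any required field (rule is illustrative).
        if any(record.get(field) in (None, "") for field in self.REQUIRED):
            return  # yielding nothing filters the element out

        yield {
            "vehicle_id": record["vehicle_id"].strip().upper(),
            "metric": record["metric"].strip().lower(),
            "value": float(record["value"]),
            "event_ts": record["event_ts"],
        }

# In the pipeline, the step sits between parsing and the BigQuery sink, e.g.:
#   ... | "ParseJson" >> beam.Map(json.loads)
#       | "Sanitize"  >> beam.ParDo(SanitizeTelemetry())
#       | "WriteToBQ" >> beam.io.WriteToBigQuery(...)
```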

  19. For this question, refer to the TerramEarth case study.
    You work for a consulting firm that specializes in providing next-generation digital services and has recently been contracted by TerramEarth to help them enhance their data warehousing solution. TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth wants to give all its analysts the ability to query vehicle telemetry data in real time and visualize this data in dashboards. What should you do? (A short sketch follows the options.)

A. 1. Stream telemetry data from vehicles to Cloud Pub/Sub. Use Dataflow and BigQuery streaming inserts to store data in BigQuery. 2. Develop dashboards in Google Data Studio.
B. 1. Upload telemetry data from vehicles to Cloud Storage Bucket. Use Dataflow and BigQuery streaming inserts to store data in BigQuery. 2. Develop dashboards in Google Data Studio.
C. 1. Upload telemetry data from vehicles to Cloud Storage Bucket. Use Cloud Functions to transfer this data to partitioned tables in Cloud Dataproc Hive Cluster. 2. Develop dashboards in Google Data Studio.
D. 1. Stream telemetry data from vehicles to partitioned tables in Cloud Dataproc Hive Cluster. 2. Use Pig scripts to chart data.
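
The streaming-insert half of option A can also be exercised directly, which is handy for smoke-testing before wiring up Dataflow. Below is a minimal sketch with the Python client, reusing the hypothetical `telemetry.events` table and sample rows from earlier.

```python
from google.cloud import bigquery

client = bigquery.Client(project="terramearth-prod")  # placeholder project

rows = [
    {"vehicle_id": "TE-00042", "metric": "hydraulic_pressure",
     "value": 187.4, "event_ts": "2024-05-01T11:30:00Z"},
    {"vehicle_id": "TE-00043", "metric": "engine_temp",
     "value": 96.2, "event_ts": "2024-05-01T11:30:05Z"},
]

# Streaming inserts make rows queryable within seconds, which is what
# real-time dashboards need.
errors = client.insert_rows_json("terramearth-prod.telemetry.events", rows)
if errors:
    raise RuntimeError(f"Streaming insert failed: {errors}")
```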

  20. For this question, refer to the TerramEarth case study.
    TerramEarth would like to reduce unplanned downtime for all its vehicles and preemptively stock replacement parts. To do this, TerramEarth wants to give all its analysts the ability to query vehicle telemetry data in real time. However, TerramEarth is concerned with the reliability of its existing ingestion mechanism. In a recent incident, vehicle telemetry data from all vehicles got mixed up, and the vehicle design team were unable to identify which data belonged to a particular vehicle. TerramEarth has asked you for your suggestion on enabling reliable ingestion of telemetry data. What should you suggest they use? (A short sketch follows the options.)

A. Use Cloud IoT with Cloud HSM keys.
B. Use Cloud IoT with per-device public/private key authentication.
C. Use Cloud IoT with project-wide SSH keys.
D. Use Cloud IoT with specific SSH keys.
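
Per-device public/private key authentication (option B) follows the pattern Cloud IoT documented: the device keeps a private key and signs a short-lived JWT whose audience is the project, while only the public key is registered server-side. Below is a minimal sketch of the device side using PyJWT; the project ID and key path are placeholders.

```python
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

PROJECT_ID = "terramearth-prod"                 # placeholder project
PRIVATE_KEY_PATH = "/etc/keys/rsa_private.pem"  # per-device key, placeholder path

def make_device_jwt(project_id: str, private_key_path: str) -> str:
    """Sign a short-lived JWT that the ingestion service can verify with the
    device's registered public key."""
    now = datetime.now(timezone.utc)
    claims = {
        "iat": now,                           # issued at
        "exp": now + timedelta(minutes=20),   # keep tokens short-lived
        "aud": project_id,                    # ties the token to the project
    }
    with open(private_key_path, "r") as f:
        private_key = f.read()
    return jwt.encode(claims, private_key, algorithm="RS256")

if __name__ == "__main__":
    print(make_device_jwt(PROJECT_ID, PRIVATE_KEY_PATH))
```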

  21. For this question, refer to the TerramEarth case study.
    TerramEarth needs to store all the raw telemetry data to use it as training data for machine learning models. How should TerramEarth store this data while minimizing cost and minimizing the changes to the existing processes?

A. Configure the IoT devices on vehicles to stream the data directly into BigQuery.
B. Configure the IoT devices on vehicles to stream the data to Cloud Pub/Sub and save to Cloud Dataproc HDFS on persistent disks for long-term storage.
C. Continue receiving data via existing FTP process and save to Cloud Dataproc HDFS on persistent disks for long-term storage.
D. Continue receiving data via existing FTP process and upload to Cloud Storage.