Google Professional Cloud DevOps Engineer Practice Test Free – 50 Questions to Test Your Knowledge
Are you preparing for the Google Professional Cloud DevOps Engineer certification exam? If so, taking a Google Professional Cloud DevOps Engineer practice test free is one of the best ways to assess your knowledge and improve your chances of passing. In this post, we provide 50 free Google Professional Cloud DevOps Engineer practice questions designed to help you test your skills and identify areas for improvement.
By taking a free Google Professional Cloud DevOps Engineer practice test, you can:
- Familiarize yourself with the exam format and question types
- Identify your strengths and weaknesses
- Gain confidence before the actual exam
50 Free Google Professional Cloud DevOps Engineer Practice Questions
Below, you will find 50 free Google Professional Cloud DevOps Engineer practice questions to help you prepare for the exam. These questions are designed to reflect the real exam structure and difficulty level.
Your company recently migrated to Google Cloud. You need to design a fast, reliable, and repeatable solution for your company to provision new projects and basic resources in Google Cloud. What should you do?
A. Use the Google Cloud console to create projects.
B. Write a script by using the gcloud CLI that passes the appropriate parameters from the request. Save the script in a Git repository.
C. Write a Terraform module and save it in your source control repository. Copy and run the terraform apply command to create the new project.
D. Use the Terraform repositories from the Cloud Foundation Toolkit. Apply the code with appropriate parameters to create the Google Cloud project and related resources.
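For hands-on context, here is a minimal sketch of the Cloud Foundation Toolkit approach from option D, using the public terraform-google-modules project factory. The org ID, billing account, and project name are illustrative placeholders.

```bash
# Sketch only: provision a project with the Cloud Foundation Toolkit
# project factory module. All IDs below are illustrative placeholders.
cat > main.tf <<'EOF'
module "project_factory" {
  source          = "terraform-google-modules/project-factory/google"
  name            = "new-app-project"         # placeholder project name
  org_id          = "123456789012"            # placeholder org ID
  billing_account = "AAAAAA-BBBBBB-CCCCCC"    # placeholder billing account
}
EOF
terraform init
terraform plan   # review the plan, then run: terraform apply
```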
You are configuring a CI pipeline. The build step for your CI pipeline integration testing requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do?
A. Use Cloud Build private pools to connect to the private VPC.
B. Use Spinnaker for Google Cloud to connect to the private VPC.
C. Use Cloud Build as a pipeline runner. Configure Internal HTTP(S) Load Balancing for API access.
D. Use Cloud Build as a pipeline runner. Configure External HTTP(S) Load Balancing with a Google Cloud Armor policy for API access.
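As background for option A, creating a Cloud Build private pool peered with your VPC looks roughly like this. The pool, region, project, and network names are illustrative, and the VPC is assumed to already have a Service Networking peering in place.

```bash
# Sketch: create a private worker pool attached to a peered VPC.
gcloud builds worker-pools create my-private-pool \
  --region=us-central1 \
  --peered-network=projects/my-project/global/networks/my-vpc
```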
You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?
(The answer options for this question were presented as images and are not reproduced here.)
You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season?
A. Migrate the application to Cloud Run, and use autoscaling.
B. Create a Terraform configuration for the application’s underlying infrastructure to quickly deploy to additional regions.
C. Load test the application to profile its performance for scaling.
D. Pre-provision the additional compute power that was used last season, and expect growth.
You are monitoring a service that uses n2-standard-2 Compute Engine instances that serve large files. Users have reported that downloads are slow. Your Cloud Monitoring dashboard shows that your VMs are running at peak network throughput. You want to improve the network throughput performance. What should you do?
A. Add additional network interface controllers (NICs) to your VMs.
B. Deploy a Cloud NAT gateway and attach the gateway to the subnet of the VMs.
C. Change the machine type for your VMs to n2-standard-8.
D. Deploy the Ops Agent to export additional monitoring metrics.
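For reference, resizing a VM to a machine type with more vCPUs (and therefore a higher egress bandwidth cap) is a stop-resize-start operation. The instance name and zone below are illustrative.

```bash
# Sketch: a VM must be stopped before its machine type can change.
gcloud compute instances stop file-server --zone=us-central1-a
gcloud compute instances set-machine-type file-server \
  --zone=us-central1-a --machine-type=n2-standard-8
gcloud compute instances start file-server --zone=us-central1-a
```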
Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?
A. Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization.
B. Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration.
C. Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images.
D. Configure Container Registry as an OCI-based container registry for container images.
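As a sketch of option C, a single Docker-format Artifact Registry repository can store both container images and OCI Helm charts (Helm 3.8+). The repository, region, project, and chart names are illustrative.

```bash
# Sketch: one docker-format repo serving images and OCI Helm charts.
gcloud artifacts repositories create my-repo \
  --repository-format=docker --location=us-central1
helm push mychart-0.1.0.tgz oci://us-central1-docker.pkg.dev/my-project/my-repo
```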
Your company runs applications in Google Kubernetes Engine (GKE). Several applications rely on ephemeral volumes. You noticed some applications were unstable due to the DiskPressure node condition on the worker nodes. You need to identify which Pods are causing the issue, but you do not have execute access to workloads and nodes. What should you do?
A. Check the node/ephemeral_storage/used_bytes metric by using Metrics Explorer.
B. Check the container/ephemeral_storage/used_bytes metric by using Metrics Explorer.
C. Locate all the Pods with emptyDir volumes. Use the df -h command to measure volume disk usage.
D. Locate all the Pods with emptyDir volumes. Use the du -sh * command to measure volume disk usage.
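For context, the per-container ephemeral storage metric can also be queried directly through the Monitoring API. The project ID and time window below are illustrative, and an access token is required.

```bash
# Sketch: list time series for the container ephemeral storage metric.
curl -s -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/my-project/timeSeries" \
  --data-urlencode 'filter=metric.type="kubernetes.io/container/ephemeral_storage/used_bytes"' \
  --data-urlencode 'interval.startTime=2024-01-01T00:00:00Z' \
  --data-urlencode 'interval.endTime=2024-01-01T01:00:00Z'
```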
You are the Site Reliability Engineer responsible for managing your company's data services and products. You regularly navigate operational challenges, such as unpredictable data volume and high cost, with your company's data ingestion processes. You recently learned that a new data ingestion product will be developed in Google Cloud. You need to collaborate with the product development team to provide operational input on the new product. What should you do?
A. Deploy the prototype product in a test environment, run a load test, and share the results with the product development team.
B. When the initial product version passes the quality assurance phase and compliance assessments, deploy the product to a staging environment. Share error logs and performance metrics with the product development team.
C. When the new product is used by at least one internal customer in production, share error logs and monitoring metrics with the product development team.
D. Review the design of the product with the product development team to provide feedback early in the design phase.
You are designing a new Google Cloud organization for a client. Your client is concerned with the risks associated with long-lived credentials created in Google Cloud. You need to design a solution to completely eliminate the risks associated with the use of JSON service account keys while minimizing operational overhead. What should you do?
A. Apply the constraints/iam.disableServiceAccountKeyCreation constraint to the organization.
B. Use custom versions of predefined roles to exclude all iam.serviceAccountKeys.* service account role permissions.
C. Apply the constraints/iam.disableServiceAccountKeyUpload constraint to the organization.
D. Grant the roles/iam.serviceAccountKeyAdmin IAM role to organization administrators only.
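As a sketch of how a boolean organization policy constraint is enforced at the organization level (the organization ID is a placeholder):

```bash
# Sketch: enforce the constraint that blocks service account key creation.
gcloud resource-manager org-policies enable-enforce \
  iam.disableServiceAccountKeyCreation \
  --organization=123456789012
```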
You are investigating issues in your production application that runs on Google Kubernetes Engine (GKE). You determined that the source of the issue is a recently updated container image, although the exact change in code was not identified. The deployment is currently pointing to the latest tag. You need to update your cluster to run a version of the container that functions as intended. What should you do?
A. Create a new tag called stable that points to the previously working container, and change the deployment to point to the new tag.
B. Alter the deployment to point to the sha256 digest of the previously working container.
C. Build a new container from a previous Git tag, and do a rolling update on the deployment to the new container.
D. Apply the latest tag to the previous container image, and do a rolling update on the deployment.
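For reference, pinning a Deployment to an immutable image digest rather than a mutable tag can be done in one command. The deployment, container, and image path are illustrative.

```bash
# Sketch: point the container at a specific sha256 digest instead of :latest.
kubectl set image deployment/my-app \
  my-app=us-central1-docker.pkg.dev/my-project/my-repo/my-app@sha256:<digest>
```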
You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do?
A. Use A/B testing with blue/green deployment.
B. Use canary testing with continuous deployment.
C. Use canary testing with rolling updates deployment.
D. Use shadow testing with continuous deployment.
You need to create a Cloud Monitoring SLO for a service that will be published soon. You want to verify that requests to the service will be addressed in fewer than 300 ms at least 90% of the time per calendar month. You need to identify the metric and evaluation method to use. What should you do?
A. Select a latency metric for a request-based method of evaluation.
B. Select a latency metric for a window-based method of evaluation.
C. Select an availability metric for a request-based method of evaluation.
D. Select an availability metric for a window-based method of evaluation.
Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?
A. Modify the application to use Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
B. Install a Fluent Bit sidecar container, and use a JSON parser.
C. Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.
D. Configure the log agent to convert log text payload to JSON payload.
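As background, Cloud Run parses JSON lines written to stdout into the jsonPayload field; "severity" and "message" are recognized special fields, while the other keys below are illustrative.

```bash
# Sketch: a single JSON line on stdout becomes a structured log entry.
echo '{"severity":"INFO","message":"order processed","orderId":"1234"}'
```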
You have an application that runs on Cloud Run. You want to use live production traffic to test a new version of the application, while you let the quality assurance team perform manual testing. You want to limit the potential impact of any issues while testing the new version, and you must be able to roll back to a previous version of the application if needed. How should you deploy the new version? (Choose two.)
A. Deploy the application as a new Cloud Run service.
B. Deploy a new Cloud Run revision with a tag and use the --no-traffic option.
C. Deploy a new Cloud Run revision without a tag and use the --no-traffic option.
D. Deploy the new application version and use the --no-traffic option. Route production traffic to the revision’s URL.
E. Deploy the new application version, and split traffic to the new version.
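For context, deploying a tagged, zero-traffic revision and later rolling traffic back looks roughly like this. The service, image, tag, and revision names are illustrative.

```bash
# Sketch: the tagged revision gets its own URL but serves no production traffic.
gcloud run deploy my-service --image=IMAGE_URL --no-traffic --tag=qa
# Roll back by routing all traffic to a previous revision:
gcloud run services update-traffic my-service --to-revisions=PREVIOUS_REVISION=100
```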
Your company is planning a large marketing event for an online retailer during the holiday shopping season. You are expecting your web application to receive a large volume of traffic in a short period. You need to prepare your application for potential failures during the event. What should you do? (Choose two.)
A. Configure Anthos Service Mesh on the application to identify issues on the topology map.
B. Ensure that relevant system metrics are being captured with Cloud Monitoring, and create alerts at levels of interest.
C. Review your increased capacity requirements and plan for the required quota management.
D. Monitor latency of your services for average percentile latency.
E. Create alerts in Cloud Monitoring for all common failures that your application experiences.
You recently noticed that one of your services has exceeded the error budget for the current rolling window period. Your company's product team is about to launch a new feature. You want to follow Site Reliability Engineering (SRE) practices. What should you do?
A. Notify the team about the lack of error budget and ensure that all their tests are successful so the launch will not further risk the error budget.
B. Notify the team that their error budget is used up. Negotiate with the team for a launch freeze or tolerate a slightly worse user experience.
C. Escalate the situation and request additional error budget.
D. Look through other metrics related to the product and find SLOs with remaining error budget. Reallocate the error budgets and allow the feature launch.
You need to introduce postmortems into your organization. You want to ensure that the postmortem process is well received. What should you do? (Choose two.)
A. Encourage new employees to conduct postmortems to learn through practice.
B. Create a designated team that is responsible for conducting all postmortems.
C. Encourage your senior leadership to acknowledge and participate in postmortems.
D. Ensure that writing effective postmortems is a rewarded and celebrated practice.
E. Provide your organization with a forum to critique previous postmortems.
You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do?
A. Set up a GitHub action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change.
B. When there is a change in GitHub, use a webhook to send a request to Anthos Service Mesh, and apply the change.
C. Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change.
D. Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.
You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?
A. 1. Communicate your intent to the incident team. 2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. 3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.
B. 1. Communicate your intent to the incident team. 2. Add a new node to the pool, and wait for the new node to report as healthy. 3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.
C. 1. Drain traffic from the unhealthy node and remove the node from service. 2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately. 3. Scale the pool as necessary to handle the new load. 4. Communicate your actions to the incident team.
D. 1. Drain traffic from the unhealthy node and remove the old node from service. 2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node. 3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately. 4. Communicate your actions to the incident team.
You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices. How should you configure this pipeline with Binary Authorization?
A. Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using their personal private key.
B. Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) with a service account JSON key stored as a Kubernetes Secret.
C. Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity.
D. Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using a key stored in Cloud Key Management Service (Cloud KMS).
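As a sketch of a pipeline step that signs an attestation with a Cloud KMS key after the load test passes (the attestor, key path, and image digest are illustrative placeholders):

```bash
# Sketch: create a Binary Authorization attestation signed by a KMS key.
gcloud container binauthz attestations sign-and-create \
  --artifact-url="us-central1-docker.pkg.dev/my-project/my-repo/app@sha256:<digest>" \
  --attestor=load-test-attestor \
  --attestor-project=my-project \
  --keyversion=projects/my-project/locations/global/keyRings/binauthz/cryptoKeys/signer/cryptoKeyVersions/1
```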
You are deploying an application to Cloud Run. The application requires a password to start. Your organization requires that all passwords are rotated every 24 hours, and your application must have the latest password. You need to deploy the application with no downtime. What should you do?
A. Store the password in Secret Manager and send the secret to the application by using environment variables.
B. Store the password in Secret Manager and mount the secret as a volume within the application.
C. Use Cloud Build to add your password into the application container at build time. Ensure that Artifact Registry is secured from public access.
D. Store the password directly in the code. Use Cloud Build to rebuild and deploy the application each time the password changes.
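For reference, mounting a Secret Manager secret as a volume in Cloud Run takes one flag. The service, image, secret name, and mount path are illustrative.

```bash
# Sketch: the secret is exposed as a file that the application reads.
gcloud run deploy my-app --image=IMAGE_URL \
  --set-secrets=/secrets/password=app-password:latest
```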
Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology. Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?
A. Install and configure Config Connector in Google Kubernetes Engine (GKE).
B. Configure Cloud Build with a Terraform builder to execute terraform plan and terraform apply commands.
C. Create a Pod resource with a Terraform docker image to execute terraform plan and terraform apply commands.
D. Create a Job resource with a Terraform docker image to execute terraform plan and terraform apply commands.
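As a sketch of option A, a Config Connector resource is declared as a Kubernetes manifest that the controller reconciles continuously; in a GitOps setup the YAML would live in the repository rather than be applied by hand. The bucket name is illustrative.

```bash
# Sketch: declare a Cloud Storage bucket as a Config Connector resource.
kubectl apply -f - <<'EOF'
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-app-assets   # illustrative bucket name
spec:
  location: US
EOF
```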
You are designing a system with three different environments: development, quality assurance (QA), and production. Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?
A. Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.
B. Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments. Application (app source code) repositories are separated: different branches are different features.
C. Cloud Infrastructure (Terraform) repository is shared: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repository is shared: different directories are different features.
D. Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.
You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?
A. Export the service account key and configure the agents to use the key.
B. Update the instance to use the default Compute Engine service account.
C. Add the Logs Writer role to the service account.
D. Enable Private Google Access on the subnet that the instance is in.
As a Site Reliability Engineer, you support an application written in Go that runs on Google Kubernetes Engine (GKE) in production. After releasing a new version of the application, you notice the application runs for about 15 minutes and then restarts. You decide to add Cloud Profiler to your application and now notice that the heap usage grows constantly until the application restarts. What should you do?
A. Increase the CPU limit in the application deployment.
B. Add high memory compute nodes to the cluster.
C. Increase the memory limit in the application deployment.
D. Add Cloud Trace to the application, and redeploy.
You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs: Initializing the backend... Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403 You need to resolve the issue by following Google-recommended practices. What should you do?
A. Change the Terraform code to use local state.
B. Create a storage bucket with the name specified in the Terraform configuration.
C. Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.
D. Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
Your company’s security team needs to have read-only access to Data Access audit logs in the _Required bucket. You want to provide your security team with the necessary permissions following the principle of least privilege and Google-recommended practices. What should you do?
A. Assign the roles/logging.viewer role to each member of the security team.
B. Assign the roles/logging.viewer role to a group with all the security team members.
C. Assign the roles/logging.privateLogViewer role to each member of the security team.
D. Assign the roles/logging.privateLogViewer role to a group with all the security team members.
You have deployed a fleet of Compute Engine instances in Google Cloud. You need to ensure that monitoring metrics and logs for the instances are visible in Cloud Logging and Cloud Monitoring by your company's operations and cyber security teams. You need to grant the required roles for the Compute Engine service account by using Identity and Access Management (IAM) while following the principle of least privilege. What should you do?
A. Grant the logging.logWriter and monitoring.metricWriter roles to the Compute Engine service accounts.
B. Grant the logging.admin and monitoring.editor roles to the Compute Engine service accounts.
C. Grant the logging.editor and monitoring.metricWriter roles to the Compute Engine service accounts.
D. Grant the logging.logWriter and monitoring.editor roles to the Compute Engine service accounts.
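For context, granting write-only observability roles to a service account looks like this. The project and service account names are illustrative placeholders.

```bash
# Sketch: least-privilege roles for writing logs and metrics.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:vm-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:vm-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
```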
Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs, while ensuring that the data is processed as quickly as possible. What should you do?
A. Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
B. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data, and one that monitors Cloud Storage for new batches of data. Stop the processing service when there are no batches of data to process.
C. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
D. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data. Set a Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do? (Choose two.)
A. Create a trigger to notify the required team to complete the next step when manual intervention is required.
B. Divide the automation steps into smaller tasks.
C. Use a script to automate the creation of the deployment pipeline in Google Cloud Deploy.
D. Add more engineers to finish the manual steps.
E. Automate promotion approvals from the development environment to the test environment.
You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?
A. Use the Recommender API and apply the suggested recommendations.
B. Create an Agent Policy to automatically install Ops Agent in all VMs.
C. Install the Ops Agent in a fleet of VMs by using the gcloud CLI.
D. Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization.
You deployed an application into a large Standard Google Kubernetes Engine (GKE) cluster. The application is stateless and multiple pods run at the same time. Your application receives inconsistent traffic. You need to ensure that the user experience remains consistent regardless of changes in traffic and that the resource usage of the cluster is optimized. What should you do?
A. Configure a cron job to scale the deployment on a schedule.
B. Configure a Horizontal Pod Autoscaler.
C. Configure a Vertical Pod Autoscaler.
D. Configure cluster autoscaling on the node pool.
You need to deploy a new service to production. The service needs to automatically scale using a managed instance group and should be deployed across multiple regions. The service needs a large number of resources for each instance and you need to plan for capacity. What should you do?
A. Monitor results of Cloud Trace to determine the optimal sizing.
B. Use the n2-highcpu-96 machine type in the configuration of the managed instance group.
C. Deploy the service in multiple regions and use an internal load balancer to route traffic.
D. Validate that the resource requirements are within the available project quota limits of each region.
You are analyzing Java applications in production. All applications have Cloud Profiler and Cloud Trace installed and configured by default. You want to determine which applications need performance tuning. What should you do? (Choose two.)
A. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the CPU resource allocation.
B. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the memory resource allocation.
C. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the local disk storage allocation.
D. Examine the latency time, the wall-clock time, and the CPU time of the application. If the latency time is slowly burning down the error budget, and the difference between wall-clock time and CPU time is minimal, mark the application for optimization.
E. Examine the heap usage of the application. If the usage is low, mark the application for optimization.
Your organization stores all application logs from multiple Google Cloud projects in a central Cloud Logging project. Your security team wants to enforce a rule that each project team can only view their respective logs and only the operations team can view all the logs. You need to design a solution that meets the security team's requirements while minimizing costs. What should you do?
A. Grant each project team access to the project _Default view in the central logging project. Grant logging viewer access to the operations team in the central logging project.
B. Create Identity and Access Management (IAM) roles for each project team and restrict access to the _Default log view in their individual Google Cloud project. Grant viewer access to the operations team in the central logging project.
C. Create log views for each project team and only show each project team their application logs. Grant the operations team access to the _AllLogs view in the central logging project.
D. Export logs to BigQuery tables for each project team. Grant project teams access to their tables. Grant logs writer access to the operations team in the central logging project.
Your company uses Jenkins running on Google Cloud VM instances for CI/CD. You need to extend the functionality to use infrastructure as code automation by using Terraform. You must ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. You want to follow Google-recommended practices. What should you do?
A. Confirm that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions.
B. Use the Terraform module so that Secret Manager can retrieve credentials.
C. Create a dedicated service account for the Terraform instance. Download and copy the secret key value to the GOOGLE_CREDENTIALS environment variable on the Jenkins server.
D. Add the gcloud auth application-default login command as a step in Jenkins before running the Terraform commands.
You encounter a large number of outages in the production systems you support. You receive alerts for all the outages; the alerts are due to unhealthy systems that are automatically restarted within a minute. You want to set up a process that would prevent staff burnout while following Site Reliability Engineering (SRE) practices. What should you do?
A. Eliminate alerts that are not actionable
B. Redefine the related SLO so that the error budget is not exhausted
C. Distribute the alerts to engineers in different time zones
D. Create an incident report for each of the alerts
As part of your company's initiative to shift left on security, the InfoSec team is asking all teams to implement guard rails on all the Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team's goal of shifting left on security. What should you do?
A. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.
B. Use Binary Authorization to attest images during your CI/CD pipeline.
C. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
D. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
Your company operates in a highly regulated domain. Your security team requires that only trusted container images can be deployed to Google Kubernetes Engine (GKE). You need to implement a solution that meets the requirements of the security team while minimizing management overhead. What should you do?
A. Configure Binary Authorization in your GKE clusters to enforce deploy-time security policies.
B. Grant the roles/artifactregistry.writer role to the Cloud Build service account. Confirm that no employee has Artifact Registry write permission.
C. Use Cloud Run to write and deploy a custom validator. Enable an Eventarc trigger to perform validations when new images are uploaded.
D. Configure Kritis to run in your GKE clusters to enforce deploy-time security policies.
Your CTO has asked you to implement a postmortem policy on every incident for internal use. You want to define what a good postmortem is to ensure that the policy is successful at your company. What should you do? (Choose two.)
A. Ensure that all postmortems include what caused the incident, identify the person or team responsible for causing the incident, and how to prevent a future occurrence of the incident.
B. Ensure that all postmortems include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident.
C. Ensure that all postmortems include the severity of the incident, how to prevent a future occurrence of the incident, and what caused the incident without naming internal system components.
D. Ensure that all postmortems include how the incident was resolved and what caused the incident without naming customer information.
E. Ensure that all postmortems include all incident participants in postmortem authoring and share postmortems as widely as possible.
You want to share a Cloud Monitoring custom dashboard with a partner team. What should you do?
A. Provide the partner team with the dashboard URL to enable the partner team to create a copy of the dashboard.
B. Export the metrics to BigQuery. Use Looker Studio to create a dashboard, and share the dashboard with the partner team.
C. Copy the Monitoring Query Language (MQL) query from the dashboard, and send the MQL query to the partner team.
D. Download the JSON definition of the dashboard, and send the JSON file to the partner team.
You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do?
A. Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch.
B. Ask the pull request reviewers to run the integration tests before approving the code.
C. Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged.
D. Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.
You are building an application that runs on Cloud Run. The application needs to access a third-party API by using an API key. You need to determine a secure way to store and use the API key in your application by following Google-recommended practices. What should you do?
A. Save the API key in Secret Manager as a secret. Reference the secret as an environment variable in the Cloud Run application.
B. Save the API key in Secret Manager as a secret key. Mount the secret key under the /sys/api_key directory, and decrypt the key in the Cloud Run application.
C. Save the API key in Cloud Key Management Service (Cloud KMS) as a key. Reference the key as an environment variable in the Cloud Run application.
D. Encrypt the API key by using Cloud Key Management Service (Cloud KMS), and pass the key to Cloud Run as an environment variable. Decrypt and use the key in Cloud Run.
Your company processes IoT data at scale by using Pub/Sub, App Engine standard environment, and an application written in Go. You noticed that the performance inconsistently degrades at peak load. You could not reproduce this issue on your workstation. You need to continuously monitor the application in production to identify slow paths in the code. You want to minimize performance impact and management overhead. What should you do?
A. Use Cloud Monitoring to assess the App Engine CPU utilization metric.
B. Install a continuous profiling tool into Compute Engine. Configure the application to send profiling data to the tool.
C. Periodically run the go tool pprof command against the application instance. Analyze the results by using flame graphs.
D. Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.
You are currently planning how to display Cloud Monitoring metrics for your organization’s Google Cloud projects. Your organization has three folders and six projects (the folder and project hierarchy was shown in an exhibit). You want to configure Cloud Monitoring dashboards to only display metrics from the projects within one folder. You need to ensure that the dashboards do not display metrics from projects in the other folders. You want to follow Google-recommended practices. What should you do?
A. Create a single new scoping project.
B. Create new scoping projects for each folder.
C. Use the current app-one-prod project as the scoping project.
D. Use the current app-one-dev, app-one-staging, and app-one-prod projects as the scoping project for each folder.
Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do?
A. Run the gcloud container clusters update --logging=SYSTEM command for the development cluster.
B. Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster.
C. Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment.
D. Add the severity >= DEBUG resource.type = "k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.
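For reference, the cluster logging update referenced in options A and B takes this form (the cluster name and region are illustrative):

```bash
# Sketch: restrict GKE log collection to system components only.
gcloud container clusters update dev-cluster \
  --logging=SYSTEM --region=us-central1
```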
You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do? (Choose two.)
A. Create a trigger on the Cloud Build job. Set the repository event setting to ‘Pull request’.
B. Add the OWNERS file to the Included files filter on the trigger.
C. Create a trigger on the Cloud Build job. Set the repository event setting to ‘Push to a branch’
D. Configure a branch protection rule for the main branch on the repository.
E. Enable the Approval option on the trigger.
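As a sketch of a trigger that builds only on pushes to main and requires manual approval (the repository details are illustrative):

```bash
# Sketch: GitHub push trigger scoped to main with approval required.
gcloud builds triggers create github \
  --name=prod-image-build \
  --repo-owner=my-org --repo-name=my-app \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml \
  --require-approval
```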
You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?
A. Use Cloud Trace with distributed tracing to monitor the resource utilization of the application.
B. Use Cloud Profiler with Ops Agent to monitor the CPU and memory utilization of the application.
C. Use Cloud Monitoring to monitor the container CPU and memory utilization of the application.
D. Use Cloud Ops to create logs-based metrics to monitor the resource utilization of the application.
Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?
A. Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev---booking-engine-abcdef.a.run.app URL for testing.
B. Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.
C. Pass the curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.
D. Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.
You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?
A. Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
B. Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
C. Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
D. Enable VPC Flow Logs in both VPCs, and monitor packet drops.
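For context, a Connectivity Test between the two clusters can be created without any access to the workloads themselves. The test name, project, zones, and instance names below are illustrative.

```bash
# Sketch: test network reachability between nodes in Cluster A and Cluster B.
gcloud network-management connectivity-tests create cluster-a-to-b \
  --source-instance=projects/my-project/zones/us-central1-a/instances/gke-cluster-a-node \
  --destination-instance=projects/my-project/zones/us-east1-b/instances/gke-cluster-b-node
```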
Get More Google Professional Cloud DevOps Engineer Practice Questions
If you’re looking for more Google Professional Cloud DevOps Engineer practice test free questions, click here to access the full Google Professional Cloud DevOps Engineer practice test.
We regularly update this page with new practice questions, so be sure to check back frequently.
Good luck with your Google Professional Cloud DevOps Engineer certification journey!