We provide real Professional-Data-Engineer exam questions and answers in two formats: downloadable PDF and practice tests. Pass the Google Professional-Data-Engineer exam quickly and easily. The PDF version can be read and printed, so you can practice as many times as you like. With our Google Professional-Data-Engineer PDF and VCE materials, you can pass the Professional-Data-Engineer exam with confidence.

Free online Professional-Data-Engineer questions and answers (new version):

NEW QUESTION 1

You need to compose visualizations for operations teams with the following requirements: Which approach meets the requirements?

  • A. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
  • B. Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
  • C. Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.
  • D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Answer: C

NEW QUESTION 2

You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use?

  • A. BigQuery
  • B. Cloud Bigtable
  • C. Cloud Datastore
  • D. Cloud SQL for PostgreSQL

Answer: A
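
For context, BigQuery's native geospatial and prediction features are its GIS functions and BigQuery ML. Below is a minimal sketch using the BigQuery Python client; the dataset, table, and column names are hypothetical, not part of the original question.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Geospatial processing: parse the GeoJSON location natively in SQL.
client.query("""
    SELECT ship_id,
           ST_GEOGFROMGEOJSON(location_geojson) AS position
    FROM `shipping.telemetry`
    WHERE DATE(event_time) = CURRENT_DATE()
""").result()

# Prediction: train a model directly in BigQuery with BigQuery ML.
client.query("""
    CREATE OR REPLACE MODEL `shipping.delay_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_delayed']) AS
    SELECT region, avg_speed_knots, port_congestion_index, is_delayed
    FROM `shipping.training_features`
""").result()
```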

NEW QUESTION 3

You are operating a Cloud Dataflow streaming pipeline. The pipeline aggregates events from a Cloud Pub/Sub subscription source, within a window, and sinks the resulting aggregation to a Cloud Storage bucket. The source has consistent throughput. You want to monitor and alert on the behavior of the pipeline with Cloud Stackdriver to ensure that it is processing data. Which Stackdriver alerts should you create?

  • A. An alert based on a decrease of subscription/num_undelivered_messages for the source and a rate of change increase of instance/storage/used_bytes for the destination
  • B. An alert based on an increase of subscription/num_undelivered_messages for the source and a rate of change decrease of instance/storage/used_bytes for the destination
  • C. An alert based on a decrease of instance/storage/used_bytes for the source and a rate of change increase of subscription/num_undelivered_messages for the destination
  • D. An alert based on an increase of instance/storage/used_bytes for the source and a rate of change decrease of subscription/num_undelivered_messages for the destination

Answer: B
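
As an illustration of the first half of that alert, the sketch below creates a Cloud Monitoring (Stackdriver) alert policy that fires when the subscription's backlog grows. The project ID, threshold, and display names are placeholders, and the exact condition settings would need tuning for a real pipeline.

```python
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Streaming pipeline stalled: Pub/Sub backlog growing",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="num_undelivered_messages above threshold",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type="pubsub.googleapis.com/subscription/num_undelivered_messages" '
                    'AND resource.type="pubsub_subscription"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=1000,  # placeholder backlog size
                duration=duration_pb2.Duration(seconds=300),
            ),
        )
    ],
)

# Hypothetical project ID.
client.create_alert_policy(name="projects/my-project", alert_policy=policy)
```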

NEW QUESTION 4

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

  • A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
  • B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
  • C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
  • D. Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

Answer: B
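
To make the sink-and-filter mechanics referenced in the options concrete, here is a minimal sketch using the google-cloud-logging Python client (exporting to Pub/Sub, as in options C and D). The project, topic, and table names are hypothetical, and the advanced filter is illustrative rather than an exact production expression.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")

# Advanced log filter: only completed insert (load) jobs targeting one table.
log_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration'
    '.load.destinationTable.tableId="my_table"'
)

sink = client.sink(
    "bq-insert-notifications",
    filter_=log_filter,
    destination="pubsub.googleapis.com/projects/my-project/topics/bq-inserts",
)
sink.create()  # The monitoring tool then subscribes to the Pub/Sub topic.
```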

NEW QUESTION 5

You are building a model to make clothing recommendations. You know a user’s fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available.
How should you use this data to train the model?

  • A. Continuously retrain the model on just the new data.
  • B. Continuously retrain the model on a combination of existing data and the new data.
  • C. Train on the existing data while using the new data as your test set.
  • D. Train on the new data while using the existing data as your test set.

Answer: D

NEW QUESTION 6

You are building an application to share financial market data with consumers, who will receive data feeds. Data is collected from the markets in real time. Consumers will receive the data in the following ways:
  • Real-time event stream
  • ANSI SQL access to real-time stream and historical data
  • Batch historical exports
Which solution should you use?

  • A. Cloud Dataflow, Cloud SQL, Cloud Spanner
  • B. Cloud Pub/Sub, Cloud Storage, BigQuery
  • C. Cloud Dataproc, Cloud Dataflow, BigQuery
  • D. Cloud Pub/Sub, Cloud Dataproc, Cloud SQL

Answer: A

NEW QUESTION 7

You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom C++ TensorFlow ops it has implemented. These ops are used inside your main training loop and perform bulky matrix multiplications. It currently takes up to several days to train a model. You want to decrease this time significantly and keep the cost low by using an accelerator on Google Cloud. What should you do?

  • A. Use Cloud TPUs without any additional adjustment to your code.
  • B. Use Cloud TPUs after implementing GPU kernel support for your custom ops.
  • C. Use Cloud GPUs after implementing GPU kernel support for your custom ops.
  • D. Stay on CPUs, and increase the size of the cluster you’re training your model on.

Answer: B

NEW QUESTION 8

You have data stored in BigQuery. The data in the BigQuery dataset must be highly available. You need to define a storage, backup, and recovery strategy for this data that minimizes cost. How should you configure the BigQuery table?

  • A. Set the BigQuery dataset to be regional. In the event of an emergency, use a point-in-time snapshot to recover the data.
  • B. Set the BigQuery dataset to be regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table.
  • C. Set the BigQuery dataset to be multi-regional. In the event of an emergency, use a point-in-time snapshot to recover the data.
  • D. Set the BigQuery dataset to be multi-regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table.

Answer: B
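
To illustrate the backup-copy idea mentioned in the options, a scheduled query could run SQL along these lines once a day; the dataset and table names are hypothetical.

```python
from datetime import datetime, timezone
from google.cloud import bigquery

client = bigquery.Client()
suffix = datetime.now(timezone.utc).strftime("%Y%m%d")  # e.g. 20240101

# Copy today's state of the table into a date-suffixed backup table.
client.query(f"""
    CREATE TABLE `my_dataset.sales_backup_{suffix}` AS
    SELECT * FROM `my_dataset.sales`
""").result()
```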

NEW QUESTION 9

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

  • A. Issue a command to restart the database servers.
  • B. Retry the query with exponential backoff, up to a cap of 15 minutes.
  • C. Retry the query every second until it comes back online to minimize staleness of data.
  • D. Reduce the query frequency to once every hour until the database comes back online.

Answer: B
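
A minimal sketch of the retry-with-exponential-backoff pattern, capped at 15 minutes; the query function and exception tuple are hypothetical stand-ins for the app's own database call and error types.

```python
import random
import time

MAX_BACKOFF_SECONDS = 15 * 60  # cap the wait at 15 minutes


def call_with_backoff(query_fn, retriable=(Exception,)):
    """Retry query_fn until it succeeds, doubling the wait each time."""
    delay = 1.0
    while True:
        try:
            return query_fn()
        except retriable:
            # Jitter spreads out retries from many frontend instances.
            time.sleep(delay + random.uniform(0, 1))
            delay = min(delay * 2, MAX_BACKOFF_SECONDS)


# Usage (hypothetical): call_with_backoff(lambda: db.get_current_temperature())
```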

NEW QUESTION 10

If you're running a performance test that depends on Cloud Bigtable, all but one of the choices below are recommended steps. Which is NOT a recommended step to follow?

  • A. Do not use a production instance.
  • B. Run your test for at least 10 minutes.
  • C. Before you test, run a heavy pre-test for several minutes.
  • D. Use at least 300 GB of data.

Answer: A

Explanation:
If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you plan and execute your test:
  • Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
  • Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use 100 GB of data per node.
  • Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
  • Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.
Reference: https://cloud.google.com/bigtable/docs/performance

NEW QUESTION 11

Which SQL keyword can be used to reduce the number of columns processed by BigQuery?

  • A. BETWEEN
  • B. WHERE
  • C. SELECT
  • D. LIMIT

Answer: C

Explanation:
SELECT allows you to query specific columns rather than the whole table.
LIMIT, BETWEEN, and WHERE clauses will not reduce the number of columns processed by BigQuery.
Reference:
https://cloud.google.com/bigquery/launch-checklist#architecture_design_and_development_checklist
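
You can see the effect with a dry-run query in the BigQuery Python client, which reports the bytes that would be scanned without actually running the query; the dataset, table, and column names below are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
dry_run = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

wide = client.query("SELECT * FROM `my_dataset.events`", job_config=dry_run)
narrow = client.query(
    "SELECT user_id, event_time FROM `my_dataset.events`", job_config=dry_run
)

print(wide.total_bytes_processed)    # full table width
print(narrow.total_bytes_processed)  # only the two selected columns
```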

NEW QUESTION 12

You operate a logistics company, and you want to improve event delivery reliability for vehicle-based sensors. You operate small data centers around the world to capture these events, but leased lines that provide connectivity from your event collection infrastructure to your event processing infrastructure are unreliable, with unpredictable latency. You want to address this issue in the most cost-effective way. What should you do?

  • A. Deploy small Kafka clusters in your data centers to buffer events.
  • B. Have the data acquisition devices publish data to Cloud Pub/Sub.
  • C. Establish a Cloud Interconnect between all remote data centers and Google.
  • D. Write a Cloud Dataflow pipeline that aggregates all data in session windows.

Answer: A

NEW QUESTION 13

What is the HBase Shell for Cloud Bigtable?

  • A. The HBase shell is a GUI based interface that performs administrative tasks, such as creating and deleting tables.
  • B. The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables.
  • C. The HBase shell is a hypervisor based shell that performs administrative tasks, such as creating and deleting new virtualized instances.
  • D. The HBase shell is a command-line tool that performs only user account management functions to grant access to Cloud Bigtable instances.

Answer: B

Explanation:
The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables. The Cloud Bigtable HBase client for Java makes it possible to use the HBase shell to connect to Cloud Bigtable.
Reference: https://cloud.google.com/bigtable/docs/installing-hbase-shell
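
For comparison, the same kind of administrative task (creating a table with a column family) can also be done programmatically. Below is a minimal sketch with the Cloud Bigtable Python client, assuming hypothetical project and instance names.

```python
from google.cloud import bigtable
from google.cloud.bigtable import column_family

# Hypothetical project and instance names; admin=True enables table admin calls.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

table = instance.table("my-table")
table.create(column_families={"cf1": column_family.MaxVersionsGCRule(1)})
```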

NEW QUESTION 14

You are developing a software application using Google's Dataflow SDK, and you want to use conditionals, for loops, and other complex programming structures to create a branching pipeline. Which component will be used for the data processing operation?

  • A. PCollection
  • B. Transform
  • C. Pipeline
  • D. Sink API

Answer: B

Explanation:
In Google Cloud, the Dataflow SDK provides a transform component that is responsible for the data processing operation. You can use conditionals, for loops, and other complex programming structures to create a branching pipeline.
Reference: https://cloud.google.com/dataflow/model/programming-model
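
A minimal sketch of a branching pipeline built from transforms, written with the Apache Beam Python SDK (the successor to the original Dataflow SDK); the data and transform names are illustrative.

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    numbers = pipeline | "Create" >> beam.Create(range(10))

    # Two transforms branch from the same PCollection.
    evens = numbers | "KeepEvens" >> beam.Filter(lambda n: n % 2 == 0)
    odds = numbers | "KeepOdds" >> beam.Filter(lambda n: n % 2 == 1)

    # A transform can hold arbitrary Python logic (conditionals, loops, ...).
    evens | "SquareEvens" >> beam.Map(lambda n: n * n) | "PrintEvens" >> beam.Map(print)
    odds | "PrintOdds" >> beam.Map(print)
```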

NEW QUESTION 15

Which of the following is not possible using primitive roles?

  • A. Give a user viewer access to BigQuery and owner access to Google Compute Engine instances.
  • B. Give UserA owner access and UserB editor access for all datasets in a project.
  • C. Give a user access to view all datasets in a project, but not run queries on them.
  • D. Give GroupA owner access and GroupB editor access for all datasets in a project.

Answer: C

Explanation:
Primitive roles can be used to give owner, editor, or viewer access to a user or group, but they can't be used to separate data access permissions from job-running permissions.
Reference: https://cloud.google.com/bigquery/docs/access-control#primitive_iam_roles

NEW QUESTION 16

What are two of the benefits of using denormalized data structures in BigQuery?

  • A. Reduces the amount of data processed, reduces the amount of storage required
  • B. Increases query speed, makes queries simpler
  • C. Reduces the amount of storage required, increases query speed
  • D. Reduces the amount of data processed, increases query speed

Answer: B

Explanation:
Denormalization increases query speed for tables with billions of rows because BigQuery's performance degrades when doing JOINs on large tables, but with a denormalized data
structure, you don't have to use JOINs, since all of the data has been combined into one table. Denormalization also makes queries simpler because you do not have to use JOIN clauses.
Denormalization increases the amount of data processed and the amount of storage required because it creates redundant data.
Reference:
https://cloud.google.com/solutions/bigquery-data-warehouse#denormalizing_data
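
As a small illustration of the "simpler queries" point, compare a query over a normalized pair of tables with the equivalent query over a denormalized table; all table and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Normalized: two tables, so a JOIN is required.
normalized_sql = """
    SELECT o.order_id, c.country
    FROM `shop.orders` AS o
    JOIN `shop.customers` AS c ON o.customer_id = c.customer_id
"""

# Denormalized: customer attributes are repeated on each order row,
# so the same question becomes a single-table scan with no JOIN.
denormalized_sql = """
    SELECT order_id, customer_country
    FROM `shop.orders_denormalized`
"""

for sql in (normalized_sql, denormalized_sql):
    client.query(sql).result()
```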

NEW QUESTION 17

Which of these is not a supported method of putting data into a partitioned table?

  • A. If you have existing data in a separate file for each day, then create a partitioned table and upload each file into the appropriate partition.
  • B. Run a query to get the records for a specific day from an existing table and for the destination table, specify a partitioned table ending with the day in the format "$YYYYMMDD".
  • C. Create a partitioned table and stream new records to it every day.
  • D. Use ORDER BY to put a table's rows into chronological order and then change the table's type to "Partitioned".

Answer: D

Explanation:
You cannot change an existing table into a partitioned table. You must create a partitioned table from scratch. Then you can either stream data into it every day and the data will automatically be put in the right partition, or you can load data into a specific partition by using "$YYYYMMDD" at the end of the table name.
Reference: https://cloud.google.com/bigquery/docs/partitioned-tables
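
A minimal sketch of both supported approaches with the BigQuery Python client: creating a partitioned table from scratch, then loading one day's file into a specific partition with the "$YYYYMMDD" decorator. The project, dataset, bucket, and schema details are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Create a day-partitioned table from scratch (hypothetical schema).
table = bigquery.Table(
    "my-project.my_dataset.events",
    schema=[
        bigquery.SchemaField("event_time", "TIMESTAMP"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY
)
client.create_table(table)

# Load one day's file into a specific partition via the $YYYYMMDD decorator.
destination = bigquery.DatasetReference("my-project", "my_dataset").table(
    "events$20230115"
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/events-2023-01-15.csv",
    destination,
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV),
)
load_job.result()
```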

NEW QUESTION 18

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

  • A. Put the data into Google Cloud Storage.
  • B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
  • C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
  • D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

Answer: B

NEW QUESTION 19

You are planning to migrate your current on-premises Apache Hadoop deployment to the cloud. You need to ensure that the deployment is as fault-tolerant and cost-effective as possible for long-running batch jobs. You want to use a managed service. What should you do?

  • A. Deploy a Cloud Dataproc cluster. Use a standard persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://.
  • B. Deploy a Cloud Dataproc cluster. Use an SSD persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://.
  • C. Install Hadoop and Spark on a 10-node Compute Engine instance group with standard instances. Install the Cloud Storage connector, and store the data in Cloud Storage. Change references in scripts from hdfs:// to gs://.
  • D. Install Hadoop and Spark on a 10-node Compute Engine instance group with preemptible instances. Store data in HDFS. Change references in scripts from hdfs:// to gs://.

Answer: A

NEW QUESTION 20

When running a pipeline that has a BigQuery source on your local machine, you continue to get permission denied errors. What could be the reason for that?

  • A. Your gcloud does not have access to the BigQuery resources
  • B. BigQuery cannot be accessed from local machines
  • C. You are missing gcloud on your machine
  • D. Pipelines cannot be run locally

Answer: A

Explanation:
When reading from a Dataflow source or writing to a Dataflow sink using DirectPipelineRunner, the Cloud Platform account that you configured with the gcloud executable will need access to the corresponding source/sink.
Reference:
https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/runners/DirectPipelineRun
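
A minimal sketch of running such a pipeline locally with the Apache Beam Python SDK's DirectRunner: the credentials configured via gcloud (Application Default Credentials) must have access to the BigQuery dataset and the temp bucket. Project, dataset, and bucket names are hypothetical.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DirectRunner",          # run on the local machine
    project="my-project",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromBQ" >> beam.io.ReadFromBigQuery(
            query="SELECT user_id FROM `my-project.my_dataset.events`",
            use_standard_sql=True,
        )
        | "Print" >> beam.Map(print)
    )
```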

NEW QUESTION 21

Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?

  • A. Migrate the workload to Google Cloud Dataflow
  • B. Use pre-emptible virtual machines (VMs) for the cluster
  • C. Use a higher-memory node so that the job runs faster
  • D. Use SSDs on the worker nodes so that the job can run faster

Answer: A

NEW QUESTION 22

When you store data in Cloud Bigtable, what is the recommended minimum amount of stored data?

  • A. 500 TB
  • B. 1 GB
  • C. 1 TB
  • D. 500 GB

Answer: C

Explanation:
Cloud Bigtable is not a relational database. It does not support SQL queries, joins, or multi-row transactions. It is not a good solution for less than 1 TB of data.
Reference: https://cloud.google.com/bigtable/docs/overview#title_short_and_other_storage_options

NEW QUESTION 23

You have a streaming pipeline job that you want to stop, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

  • A. Cancel
  • B. Drain
  • C. Stop
  • D. Finish

Answer: B

Explanation:
Using the Drain option to stop your job tells the Dataflow service to finish your job in its current state. Your job will immediately stop ingesting new data from input sources, but the Dataflow
service will preserve any existing resources (such as worker instances) to finish processing and writing any buffered data in your pipeline.
Reference: https://cloud.google.com/dataflow/pipelines/stopping-a-pipeline

NEW QUESTION 24
......

Thanks for reading the newest Professional-Data-Engineer exam dumps! We recommend you to try the PREMIUM Certshared Professional-Data-Engineer dumps in VCE and PDF here: https://www.certshared.com/exam/Professional-Data-Engineer/ (239 Q&As Dumps)