
Professional-Data-Engineer: Google Professional Data Engineer Exam Questions and Answers

Questions 4

You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will be sent only once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?

Options:

A.

Include ORDER BY DESC on the timestamp column and LIMIT to 1.

B.

Use GROUP BY on the unique ID column and timestamp column and SUM on the values.

C.

Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.

D.

Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row equals 1.
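
For study purposes, a minimal sketch of the ROW_NUMBER technique named in option D, assuming a hypothetical table my_dataset.events with columns unique_id, event_timestamp, and a payload:

-- Keep only the most recently received row per unique_id, dropping streaming duplicates.
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_timestamp DESC) AS row_num
  FROM `my_dataset.events`
)
WHERE row_num = 1;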

Questions 5

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

Options:

A.

Update the current pipeline and use the drain flag.

B.

Update the current pipeline and provide the transform mapping JSON object.

C.

Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.

D.

Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

Questions 6

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?

Options:

A.

Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.

B.

Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate it with values converted from the column DT for each row. Reference the column TS instead of the column DT from now on.

C.

Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.

D.

Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.

E.

Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
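
For reference, a view of the kind option C describes could be sketched as follows, assuming DT holds epoch seconds stored as a STRING (the dataset name is hypothetical):

CREATE VIEW `my_dataset.CLICK_STREAM_V` AS
SELECT
  * EXCEPT(DT),
  -- Cast the STRING epoch value to INT64, then convert it to a TIMESTAMP.
  TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS DT
FROM `my_dataset.CLICK_STREAM`;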

Questions 7

Which of the following is NOT true about Dataflow pipelines?

Options:

A.

Dataflow pipelines are tied to Dataflow, and cannot be run on any other runner

B.

Dataflow pipelines can consume data from other Google Cloud services

C.

Dataflow pipelines can be programmed in Java

D.

Dataflow pipelines use a unified programming model, so can work both with streaming and batch data sources

Questions 8

When you store data in Cloud Bigtable, what is the recommended minimum amount of stored data?

Options:

A.

500 TB

B.

1 GB

C.

1 TB

D.

500 GB

Questions 9

Which action can a Cloud Dataproc Viewer perform?

Options:

A.

Submit a job.

B.

Create a cluster.

C.

Delete a cluster.

D.

List the jobs.

Questions 10

Google Cloud Bigtable indexes a single value in each row. This value is called the _______.

Options:

A.

primary key

B.

unique key

C.

row key

D.

master key

Questions 11

Which of these operations can you perform from the BigQuery Web UI?

Options:

A.

Upload a file in SQL format.

B.

Load data with nested and repeated fields.

C.

Upload a 20 MB file.

D.

Upload multiple files using a wildcard.

Questions 12

Which of the following statements about Legacy SQL and Standard SQL is not true?

Options:

A.

Standard SQL is the preferred query language for BigQuery.

B.

If you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.

C.

One difference between the two query languages is how you specify fully-qualified table names (i.e. table names that include their associated project name).

D.

You need to set a query language for each dataset and the default is Standard SQL.
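
For reference, the difference in fully-qualified table names mentioned in option C looks roughly like this (the project, dataset, and table names are placeholders):

-- Legacy SQL: square brackets, with a colon between project and dataset.
SELECT name FROM [my-project:my_dataset.my_table];

-- Standard SQL: backticks, with dots throughout.
SELECT name FROM `my-project.my_dataset.my_table`;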

Questions 13

Which of the following IAM roles does your Compute Engine account require to be able to run pipeline jobs?

Options:

A.

dataflow.worker

B.

dataflow.compute

C.

dataflow.developer

D.

dataflow.viewer

Questions 14

Cloud Bigtable is Google's ______ Big Data database service.

Options:

A.

Relational

B.

mySQL

C.

NoSQL

D.

SQL Server

Questions 15

Which of these statements about exporting data from BigQuery is false?

Options:

A.

To export more than 1 GB of data, you need to put a wildcard in the destination filename.

B.

The only supported export destination is Google Cloud Storage.

C.

Data can only be exported in JSON or Avro format.

D.

The only compression option available is GZIP.

Questions 16

Which Google Cloud Platform service is an alternative to Hadoop with Hive?

Options:

A.

Cloud Dataflow

B.

Cloud Bigtable

C.

BigQuery

D.

Cloud Datastore

Questions 17

Suppose you have a dataset of images that are each labeled as to whether or not they contain a human face. To create a neural network that recognizes human faces in images using this labeled dataset, what approach would likely be the most effective?

Options:

A.

Use K-means Clustering to detect faces in the pixels.

B.

Use feature engineering to add features for eyes, noses, and mouths to the input data.

C.

Use deep learning by creating a neural network with multiple hidden layers to automatically detect features of faces.

D.

Build a neural network with an input layer of pixels, a hidden layer, and an output layer with two categories.

Questions 18

If you want to create a machine learning model that predicts the price of a particular stock based on its recent price history, what type of estimator should you use?

Options:

A.

Unsupervised learning

B.

Regressor

C.

Classifier

D.

Clustering estimator

Questions 19

Which role must be assigned to a service account used by the virtual machines in a Dataproc cluster so they can execute jobs?

Options:

A.

Dataproc Worker

B.

Dataproc Viewer

C.

Dataproc Runner

D.

Dataproc Editor

Questions 20

Which of these sources can you not load data into BigQuery from?

Options:

A.

File upload

B.

Google Drive

C.

Google Cloud Storage

D.

Google Cloud SQL

Questions 21

When a Cloud Bigtable node fails, ____ is lost.

Options:

A.

all data

B.

no data

C.

the last transaction

D.

the time dimension

Questions 22

Which Java SDK class can you use to run your Dataflow programs locally?

Options:

A.

LocalRunner

B.

DirectPipelineRunner

C.

MachineRunner

D.

LocalPipelineRunner

Questions 23

How would you query specific partitions in a BigQuery table?

Options:

A.

Use the DAY column in the WHERE clause

B.

Use the EXTRACT(DAY) clause

C.

Use the _PARTITIONTIME pseudo-column in the WHERE clause

D.

Use DATE BETWEEN in the WHERE clause
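
For context, a query that limits the scan to specific partitions using the pseudo-column from option C could look like this sketch (the table name and date range are hypothetical):

SELECT field1, field2
FROM `my_dataset.my_partitioned_table`
-- Only the partitions for these two days are scanned and billed.
WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2016-06-01') AND TIMESTAMP('2016-06-02');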

Questions 24

Why do you need to split a machine learning dataset into training data and test data?

Options:

A.

So you can try two different sets of features

B.

To make sure your model is generalized for more than just the training data

C.

To allow you to create unit tests in your code

D.

So you can use one dataset for a wide model and one for a deep model

Questions 25

You work for a manufacturing company that sources up to 750 different components, each from a different supplier. You’ve collected a labeled dataset that has on average 1000 examples for each unique component. Your team wants to implement an app to help warehouse workers recognize incoming components based on a photo of the component. You want to implement the first working version of this app (as Proof-Of-Concept) within a few working days. What should you do?

Options:

A.

Use Cloud Vision AutoML with the existing dataset.

B.

Use Cloud Vision AutoML, but reduce your dataset twice.

C.

Use Cloud Vision API by providing custom labels as recognition hints.

D.

Train your own image recognition model leveraging transfer learning techniques.

Questions 26

You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

Options:

A.

Cancel

B.

Drain

C.

Stop

D.

Finish

Questions 27

You work for an airline, and you need to store weather data in a BigQuery table. Weather data will be used as input to a machine learning model. The model only uses the last 30 days of weather data. You want to avoid storing unnecessary data and minimize costs. What should you do?

Options:

A.

Create a BigQuery table where each record has an ingestion timestamp. Run a scheduled query to delete all the rows with an ingestion timestamp older than 30 days.

B.

Create a BigQuery table partitioned by ingestion time. Set the partition expiration to 30 days.

C.

Create a BigQuery table partitioned by the datetime value of the weather date. Set the partition expiration to 30 days.

D.

Create a BigQuery table with a datetime column for the day the weather data refers to. Run a scheduled query to delete rows with a datetime value older than 30 days.
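
As background on the partition-expiration mechanism these options rely on, a table with a 30-day partition expiration can be declared roughly as follows (the table and column names are placeholders):

CREATE TABLE `my_dataset.weather`
(
  weather_date DATE,
  temperature FLOAT64
)
PARTITION BY weather_date
OPTIONS (
  -- Partitions older than 30 days are deleted automatically.
  partition_expiration_days = 30
);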

Questions 28

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date_released’ does not. A typical query would ask for all movies with actor=<actor_name> ordered by date_released, or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Questions 29

A web server sends click events to a Pub/Sub topic as messages. The web server includes an eventTimestamp attribute in the messages, which is the time when the click occurred. You have a Dataflow streaming job that reads from this Pub/Sub topic through a subscription, applies some transformations, and writes the result to another Pub/Sub topic for use by the advertising department. The advertising department needs to receive each message within 30 seconds of the corresponding click occurrence, but they report receiving the messages late. Your Dataflow job's system lag is about 5 seconds, and the data freshness is about 40 seconds. Inspecting a few messages shows no more than 1 second of lag between their eventTimestamp and publishTime. What is the problem, and what should you do?

Options:

A.

The advertising department is causing delays when consuming the messages. Work with the advertising department to fix this.

B.

Messages in your Dataflow job are processed in less than 30 seconds, but your job cannot keep up with the backlog in the Pub/Sub subscription. Optimize your job or increase the number of workers to fix this.

C.

The web server is not pushing messages fast enough to Pub/Sub. Work with the web server team to fix this.

D.

Messages in your Dataflow job are taking more than 30 seconds to process. Optimize your job or increase the number of workers to fix this.

Questions 30

You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery.

How should you securely run this workload?

Options:

A.

Restrict the Google Cloud Storage bucket so only you can see the files

B.

Grant the Project Owner role to a service account, and run the job with it

C.

Use a service account with the ability to read the batch files and to write to BigQuery

D.

Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery

Questions 31

A data scientist has created a BigQuery ML model and asks you to create an ML pipeline to serve predictions. You have a REST API application with the requirement to serve predictions for an individual user ID with latency under 100 milliseconds. You use the following query to generate predictions: SELECT predicted_label, user_id FROM ML.PREDICT(MODEL `dataset.model`, TABLE user_features). How should you create the ML pipeline?

Options:

A.

Add a WHERE clause to the query, and grant the BigQuery Data Viewer role to the application service account.

B.

Create an Authorized View with the provided query. Share the dataset that contains the view with the application service account.

C.

Create a Cloud Dataflow pipeline using BigQueryIO to read results from the query. Grant the Dataflow Worker role to the application service account.

D.

Create a Cloud Dataflow pipeline using BigQueryIO to read predictions for all users from the query. Write the results to Cloud Bigtable using BigtableIO. Grant the Bigtable Reader role to the application service account so that the application can read predictions for individual users from Cloud Bigtable.

Questions 32

You are part of a healthcare organization where data is organized and managed by respective data owners in various storage services. As a result of this decentralized ecosystem, discovering and managing data has become difficult. You need to quickly identify and implement a cost-optimized solution to assist your organization with the following:

• Data management and discovery

• Data lineage tracking

• Data quality validation

How should you build the solution?

Options:

A.

Use BigLake to convert the current solution into a data lake architecture.

B.

Build a new data discovery tool on Google Kubernetes Engine that helps with new source onboarding and data lineage tracking.

C.

Use BigQuery to track data lineage, and use Dataprep to manage data and perform data quality validation.

D.

Use Dataplex to manage data, track data lineage, and perform data quality validation.

Questions 33

One of your encryption keys stored in Cloud Key Management Service (Cloud KMS) was exposed. You need to re-encrypt all of your CMEK-protected Cloud Storage data that used that key, and then delete the compromised key. You also want to reduce the risk of objects getting written without customer-managed encryption key (CMEK) protection in the future. What should you do?

Options:

A.

Rotate the Cloud KMS key version. Continue to use the same Cloud Storage bucket.

B.

Create a new Cloud KMS key. Set the default CMEK key on the existing Cloud Storage bucket to the new one.

C.

Create a new Cloud KMS key. Create a new Cloud Storage bucket. Copy all objects from the old bucket to the new bucket while specifying the new Cloud KMS key in the copy command.

D.

Create a new Cloud KMS key. Create a new Cloud Storage bucket configured to use the new key as the default CMEK key. Copy all objects from the old bucket to the new bucket without specifying a key.

Questions 34

You are implementing several batch jobs that must be executed on a schedule. These jobs have many interdependent steps that must be executed in a specific order. Portions of the jobs involve executing shell scripts, running Hadoop jobs, and running queries in BigQuery. The jobs are expected to run for many minutes up to several hours. If the steps fail, they must be retried a fixed number of times. Which service should you use to manage the execution of these jobs?

Options:

A.

Cloud Scheduler

B.

Cloud Dataflow

C.

Cloud Functions

D.

Cloud Composer

Questions 35

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

Options:

A.

Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.

B.

Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.

C.

Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the

default autoscaling setting for worker instances.

D.

Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.

Questions 36

You need to migrate a 2TB relational database to Google Cloud Platform. You do not have the resources to significantly refactor the application that uses this database and cost to operate is of primary concern.

Which service do you select for storing and serving your data?

Options:

A.

Cloud Spanner

B.

Cloud Bigtable

C.

Cloud Firestore

D.

Cloud SQL

Questions 37

You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time (PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time. Which two methods can accomplish this? Choose 2 answers.

Options:

A.

Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class.

B.

Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export.

C.

Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files.

D.

Write an application that uses Cloud Datastore client libraries to read all the entities. Treat each entity as a BigQuery table row via BigQuery streaming insert. Assign an export timestamp for each export, and attach it as an extra column for each row. Make sure that the BigQuery table is partitioned using the export timestamp column.

E.

Write an application that uses Cloud Datastore client libraries to read all the entities. Format the exported data into a JSON file. Apply compression before storing the data in Cloud Source Repositories.

Questions 38

You want to rebuild your batch pipeline for structured data on Google Cloud. You are using PySpark to conduct data transformations at scale, but your pipelines are taking over twelve hours to run. To expedite development and pipeline run time, you want to use a serverless tool and SQL syntax. You have already moved your raw data into Cloud Storage. How should you build the pipeline on Google Cloud while meeting speed and processing requirements?

Options:

A.

Convert your PySpark commands into SparkSQL queries to transform the data, and then run your pipeline on Dataproc to write the data into BigQuery.

B.

Ingest your data into Cloud SQL, convert your PySpark commands into SparkSQL queries to transform the data, and then use federated queries from BigQuery for machine learning.

C.

Ingest your data into BigQuery from Cloud Storage, convert your PySpark commands into BigQuery SQL queries to transform the data, and then write the transformations to a new table.

D.

Use the Apache Beam Python SDK to build the transformation pipelines, and write the data into BigQuery.

Questions 39

Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the last 14 days of data. The view is described in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.)

Options:

A.

Create a new view over events using standard SQL

B.

Create a new partitioned table using a standard SQL query

C.

Create a new view over events_partitioned using standard SQL

D.

Create a service account for the ODBC connection to use for authentication

E.

Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and shared “events”
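
For context, a standard SQL view over events_partitioned restricted to the last 14 days, as option C suggests, could be sketched like this (project and dataset names are placeholders, and the table is assumed to be ingestion-time partitioned):

CREATE VIEW `my-project.my_dataset.events` AS
SELECT *
FROM `my-project.my_dataset.events_partitioned`
-- Restrict the scan to the most recent 14 days of partitions.
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY);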

Questions 40

You created an analytics environment on Google Cloud so that your data scientist team can explore data without impacting the on-premises Apache Hadoop solution. The data in the on-premises Hadoop Distributed File System (HDFS) cluster is in Optimized Row Columnar (ORC) formatted files with multiple columns of Hive partitioning. The data scientist team needs to be able to explore the data in a similar way as they used the on-premises HDFS cluster with SQL on the Hive query engine. You need to choose the most cost-effective storage and processing solution. What should you do?

Options:

A.

Import the ORC files to Bigtable tables for the data scientist team.

B.

Import the ORC files to BigQuery tables for the data scientist team.

C.

Copy the ORC files to Cloud Storage, then deploy a Dataproc cluster for the data scientist team.

D.

Copy the ORC files to Cloud Storage, then create external BigQuery tables for the data scientist team.

Questions 41

You have a data processing application that runs on Google Kubernetes Engine (GKE). Containers need to be launched with their latest available configurations from a container registry. Your GKE nodes need to have GPUs, local SSDs, and 8 Gbps bandwidth. You want to efficiently provision the data processing infrastructure and manage the deployment process. What should you do?

Options:

A.

Use Compute Engine startup scripts to pull container images, and use gcloud commands to provision the infrastructure.

B.

Use GKE to autoscale containers, and use gcloud commands to provision the infrastructure.

C.

Use Cloud Build to schedule a job using Terraform build to provision the infrastructure and launch with the most current container images.

D.

Use Dataflow to provision the data pipeline, and use Cloud Scheduler to run the job.

Questions 42

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

Options:

A.

Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.

B.

Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.

C.

Use the NOW() function in BigQuery to record the event’s time.

D.

Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

Questions 43

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

Options:

A.

Export the data into a Google Sheet for visualization.

B.

Create an additional table with only the necessary columns.

C.

Create a view on the table to present to the visualization tool.

D.

Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

Questions 44

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

Options:

A.

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Questions 45

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

Options:

A.

Load the data every 30 minutes into a new partitioned table in BigQuery.

B.

Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery

C.

Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore

D.

Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Questions 46

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:

    The user profile: What the user likes and doesn’t like to eat

    The user account information: Name, address, preferred meal times

    The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Bigtable

D.

Cloud Datastore

Questions 47

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)

Options:

A.

Redis

B.

HBase

C.

MySQL

D.

MongoDB

E.

Cassandra

F.

HDFS with Hive

Questions 48

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem?

Options:

A.

The CSV data loaded in BigQuery is not flagged as CSV.

B.

The CSV data has invalid rows that were skipped on import.

C.

The CSV data loaded in BigQuery is not using BigQuery’s default encoding.

D.

The CSV data has not gone through an ETL phase before loading into BigQuery.

Questions 49

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

Options:

A.

Rewrite the job in Pig.

B.

Rewrite the job in Apache Spark.

C.

Increase the size of the Hadoop cluster.

D.

Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Questions 50

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?

Options:

A.

Change the processing job to use Google Cloud Dataproc instead.

B.

Manually start the Cloud Dataflow job each morning when you get into the office.

C.

Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.

D.

Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Questions 51

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

Options:

A.

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
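
For context, the view described in option A could be sketched as follows (the dataset name is a placeholder):

CREATE VIEW `my_dataset.UsersWithFullName` AS
SELECT
  FirstName,
  LastName,
  -- FullName is computed at query time, so no additional storage is consumed.
  CONCAT(FirstName, ' ', LastName) AS FullName
FROM `my_dataset.Users`;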

Questions 52

You are designing a system that requires an ACID-compliant database. You must ensure that the system requires minimal human intervention in case of a failure. What should you do?

Options:

A.

Configure a Cloud SQL for MySQL instance with point-in-time recovery enabled.

B.

Configure a Cloud SQL for PostgreSQL instance with high availability enabled.

C.

Configure a Bigtable instance with more than one cluster.

D.

Configure a BigQuery table with a multi-region configuration.

Questions 53

You have created an external table for Apache Hive partitioned data that resides in a Cloud Storage bucket, which contains a large number of files. You notice that queries against this table are slow. You want to improve the performance of these queries. What should you do?

Options:

A.

Migrate the Hive partitioned data objects to a multi-region Cloud Storage bucket.

B.

Create an individual external table for each Hive partition by using a common table name prefix. Use wildcard table queries to reference the partitioned data.

C.

Change the storage class of the Hive partitioned data objects from Coldline to Standard.

D.

Upgrade the external table to a BigLake table. Enable metadata caching for the table.

Questions 54

You have uploaded 5 years of log data to Cloud Storage. A user reported that some data points in the log data are outside of their expected ranges, which indicates errors. You need to address this issue and be able to run the process again in the future while keeping the original data for compliance reasons. What should you do?

Options:

A.

Import the data from Cloud Storage into BigQuery. Create a new BigQuery table, and skip the rows with errors.

B.

Create a Compute Engine instance and create a new copy of the data in Cloud Storage. Skip the rows with errors.

C.

Create a Cloud Dataflow workflow that reads the data from Cloud Storage, checks for values outside the expected range, sets the value to an appropriate default, and writes the updated records to a new dataset in Cloud Storage.

D.

Create a Cloud Dataflow workflow that reads the data from Cloud Storage, checks for values outside the expected range, sets the value to an appropriate default, and writes the updated records to the same dataset in Cloud Storage

Questions 55

You are designing the architecture to process your data from Cloud Storage to BigQuery by using Dataflow. The network team provided you with the Shared VPC network and subnetwork to be used by your pipelines. You need to enable the deployment of the pipeline on the Shared VPC network. What should you do?

Options:

A.

Assign the compute.networkUser role to the Dataflow service agent.

B.

Assign the compute.networkUser role to the service account that executes the Dataflow pipeline.

C.

Assign the dataflow.admin role to the Dataflow service agent.

D.

Assign the dataflow.admin role to the service account that executes the Dataflow pipeline.

Questions 56

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?

Options:

A.

The zone

B.

The number of workers

C.

The disk size per worker

D.

The maximum number of workers

Questions 57

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

Options:

A.

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.

B.

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.

C.

Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.

D.

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Questions 58

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

Options:

A.

Rowkey: date#device_id. Column data: data_point

B.

Rowkey: date. Column data: device_id, data_point

C.

Rowkey: device_id. Column data: date, data_point

D.

Rowkey: data_point. Column data: device_id, date

E.

Rowkey: date#data_point. Column data: device_id

Questions 59

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.

Create a table called tracking_table and include a DATE column.

B.

Create a partitioned table called tracking_table and include a TIMESTAMP column.

C.

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.

D.

Create a table called tracking_table with a TIMESTAMP column to represent the day.
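
As background on the partitioned-table approach in option B, a sketch of such a table and of a daily query that prunes to a single partition might look like this (the column names are hypothetical):

-- Partition by the day portion of the event timestamp so that a per-day query scans only one partition.
CREATE TABLE `my_dataset.tracking_table`
(
  event_ts TIMESTAMP,
  device_id STRING,
  data_point STRING
)
PARTITION BY DATE(event_ts);

-- Fine-grained analysis of a single day's events scans only that day's partition.
SELECT *
FROM `my_dataset.tracking_table`
WHERE DATE(event_ts) = DATE '2024-01-15';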

Questions 60

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

Options:

A.

Ensure all the tables are included in global dataset.

B.

Ensure each table is included in a dataset for a region.

C.

Adjust the settings for each table to allow a related region-based security group view access.

D.

Adjust the settings for each view to allow a related region-based security group view access.

E.

Adjust the settings for each dataset to allow a related region-based security group view access.

Questions 61

MJTelco is building a custom interface to share data. They have these requirements:

    They need to do aggregations over their petabyte-scale datasets.

    They need to scan specific time range rows with a very fast response time (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

Options:

A.

Cloud Datastore and Cloud Bigtable

B.

Cloud Bigtable and Cloud SQL

C.

BigQuery and Cloud Bigtable

D.

BigQuery and Cloud Storage

Questions 62

You need to compose visualization for operations teams with the following requirements:

    Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

    The report must not be more than 3 hours delayed from live data.

    The actionable report should only show suboptimal links.

    Most suboptimal links should be sorted to the top.

    Suboptimal links can be grouped and filtered by regional geography.

    User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?

Options:

A.

Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.

B.

Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.

C.

Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.

D.

Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

Questions 63

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

Options:

A.

Assign global unique identifiers (GUID) to each data entry.

B.

Compute the hash value of each data entry, and compare it with all historical data.

C.

Store each data entry as the primary key in a separate database and apply an index.

D.

Maintain a database table to store the hash value and other metadata for each data entry.

Questions 64

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

Options:

A.

Issue a command to restart the database servers.

B.

Retry the query with exponential backoff, up to a cap of 15 minutes.

C.

Retry the query every second until it comes back online to minimize staleness of data.

D.

Reduce the query frequency to once every hour until the database comes back online.

Questions 65

Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.)

Options:

A.

Supervised learning to determine which transactions are most likely to be fraudulent.

B.

Unsupervised learning to determine which transactions are most likely to be fraudulent.

C.

Clustering to divide the transactions into N categories based on feature similarity.

D.

Supervised learning to predict the location of a transaction.

E.

Reinforcement learning to predict the location of a transaction.

F.

Unsupervised learning to predict the location of a transaction.

Questions 66

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time. What should you do?

Options:

A.

Send the data to Google Cloud Datastore and then export to BigQuery.

B.

Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.

C.

Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.

D.

Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.

Questions 67

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

Options:

A.

Run a local version of Jupyter on the laptop.

B.

Grant the user access to Google Cloud Shell.

C.

Host a visualization tool on a VM on Google Compute Engine.

D.

Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.

Questions 68

Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?

Options:

A.

Check the dashboard application to see if it is not displaying correctly.

B.

Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.

C.

Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.

D.

Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.

Questions 69

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

Options:

A.

Use federated data sources, and check data in the SQL query.

B.

Enable BigQuery monitoring in Google Stackdriver and create an alert.

C.

Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.

D.

Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

Questions 70

Your company’s customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?

Options:

A.

Add a node to the MySQL cluster and build an OLAP cube there.

B.

Use an ETL tool to load the data from MySQL into Google BigQuery.

C.

Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.

D.

Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.

Questions 71

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:

    No interaction by the user on the site for 1 hour

    Has added more than $30 worth of products to the basket

    Has not completed a transaction

You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?

Options:

A.

Use a fixed-time window with a duration of 60 minutes.

B.

Use a sliding time window with a duration of 60 minutes.

C.

Use a session window with a gap time duration of 60 minutes.

D.

Use a global window with a time based trigger with a delay of 60 minutes.

Questions 72

Your company produces 20,000 files every hour. Each data file is formatted as a comma-separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)

Options:

A.

Introduce data compression for each file to increase the rate of file transfer.

B.

Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.

C.

Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.

D.

Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.

E.

Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Questions 73

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. You initially designed the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

Options:

A.

Re-write the application to load accumulated data every 2 minutes.

B.

Convert the streaming insert code to batch load for individual messages.

C.

Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.

D.

Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.

Questions 74

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.

The data scientists have written the following code to read the data for new key features in the logs.

BigQueryIO.Read
    .named("ReadLogData")
    .from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read. What should you do?

Options:

A.

Specify the TableReference object in the code.

B.

Use .fromQuery operation to read specific fields from the table.

C.

Use of both the Google BigQuery TableSchema and TableFieldSchema classes.

D.

Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.

Questions 75

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling.

Which Google database service should you use?

Options:

A.

Cloud SQL

B.

BigQuery

C.

Cloud Bigtable

D.

Cloud Datastore

Questions 76

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

Options:

A.

Put the data into Google Cloud Storage.

B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.

C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.

D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
