
DSA-C02 SnowPro Advanced: Data Scientist Certification Exam Questions and Answers

Question 4

Which of the following is an incorrect statement about Streams?

Options:

A.

Streams on views support both local views and views shared using Snowflake Secure Data Sharing, including secure views.

B.

Streams can track changes in materialized views.

C.

Streams itself does not contain any table data.

D.

Streams do not support repeatable read isolation.

Question 5

Which tool helps a data scientist manage the ML lifecycle and model versioning?

Options:

A.

MLFlow

B.

Pachyderm

C.

Albert

D.

CRUX

Question 6

Which of the following cross-validation methods may not be suitable for very large datasets with hundreds of thousands of samples?

Options:

A.

k-fold cross-validation

B.

Leave-one-out cross-validation

C.

Holdout method

D.

All of the above
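A short sketch (not from the exam; the function and numbers are illustrative) of why leave-one-out cross-validation (LOOCV) scales poorly: it fits one model per sample, so the number of training runs equals the dataset size, whereas k-fold fits only k models and the holdout method fits one.

```python
# Illustrative only: count model fits required by each cross-validation
# strategy. LOOCV cost grows linearly with dataset size.

def num_model_fits(n_samples: int, strategy: str, k: int = 5) -> int:
    """Number of model-training runs each strategy requires."""
    if strategy == "holdout":
        return 1              # single train/test split
    if strategy == "k-fold":
        return k              # one fit per fold
    if strategy == "loocv":
        return n_samples      # one fit per held-out sample
    raise ValueError(f"unknown strategy: {strategy}")

# For 100,000 samples, LOOCV needs 100,000 fits vs. 5 for 5-fold.
print(num_model_fits(100_000, "loocv"))   # 100000
print(num_model_fits(100_000, "k-fold"))  # 5
```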

Question 7

As a data scientist planning to use a reader account, which of the following are correct considerations about reader accounts for third-party access?

Options:

A.

Reader accounts (formerly known as “read-only accounts”) provide a quick, easy, and cost-effective way to share data without requiring the consumer to become a Snowflake customer.

B.

Each reader account belongs to the provider account that created it.

C.

Users in a reader account can query data that has been shared with the reader account, but cannot perform any of the DML tasks that are allowed in a full account, such as data loading, insert, update, and similar data manipulation operations.

D.

Data sharing is only possible between Snowflake accounts.

Question 8

Performance metrics are part of every machine learning pipeline. Which of the following is not a performance metric used in machine learning?

Options:

A.

R² (R-Squared)

B.

Root Mean Squared Error (RMSE)

C.

AU-ROC

D.

AUM

Question 9

Which command is used to install Jupyter Notebook?

Options:

A.

pip install jupyter

B.

pip install notebook

C.

pip install jupyter-notebook

D.

pip install nbconvert

Question 10

Which metric is not used for evaluating classification models?

Options:

A.

Recall

B.

Accuracy

C.

Mean absolute error

D.

Precision
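A minimal sketch (with made-up labels and predictions) of how the classification metrics among the options are computed from a confusion matrix. Mean absolute error, by contrast, averages the magnitude of numeric prediction errors, which makes it a regression metric.

```python
# Hypothetical binary labels and predictions for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)   # fraction of correct predictions
precision = tp / (tp + fp)           # of predicted positives, how many are real
recall = tp / (tp + fn)              # of real positives, how many were found
print(accuracy, precision, recall)   # 0.75 0.75 0.75
```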

Question 11

Which of the following functions supports windowing?

Options:

A.

HASH_AGG

B.

ENCRYPT

C.

EXTRACT

D.

LISTAGG

Question 12

Which Python (pandas) method can a data scientist use to remove duplicates?

Options:

A.

remove_duplicates()

B.

duplicates()

C.

drop_duplicates()

D.

clean_duplicates()
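A quick sketch of the pandas method in question (assuming pandas, since drop_duplicates() is a pandas DataFrame/Series method; the sample data is made up):

```python
import pandas as pd

# Small frame with two fully duplicated rows.
df = pd.DataFrame({"id": [1, 1, 2, 3, 3],
                   "val": ["a", "a", "b", "c", "c"]})

deduped = df.drop_duplicates()  # keeps the first occurrence of each row
print(len(deduped))             # 3
```

By default drop_duplicates() compares all columns; a subset can be passed via the `subset` parameter.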

Question 13

Which of the following methods in the Snowpark API can be used to return the contents of a DataFrame as a pandas DataFrame?

Options:

A.

REPLACE_TO_PANDAS

B.

SNOWPARK_TO_PANDAS

C.

CONVERT_TO_PANDAS

D.

TO_PANDAS

Question 14

In which of the following ways can a data scientist query, process, and transform data using Snowpark Python? [Select 2]

Options:

A.

Query and process data with a DataFrame object.

B.

Write a user-defined tabular function (UDTF) that processes data and returns data in a set of rows with one or more columns.

C.

SnowPark currently do not support writing UDTF.

D.

Transform data using the Dataiku tool with the Snowpark API.

Question 15

Which of the following are correct statements about using a data science model via external functions in Snowflake?

Options:

A.

External functions return a value. The returned value can be a compound value, such as a VARIANT that contains JSON.

B.

External functions can be overloaded.

C.

An external function can appear in any clause of a SQL statement in which other types of UDF can appear.

D.

External functions can accept Model parameters.

Question 16

Mark the correct steps for saving the contents of a DataFrame to a Snowflake table as part of moving data from Spark to Snowflake.

Options:

A.

Step 1. Use the PUT() method of the DataFrame to construct a DataFrameWriter.

Step 2. Specify SNOWFLAKE_SOURCE_NAME using the NAME() method.

Step 3. Use the dbtable option to specify the table to which data is written.

Step 4. Specify the connector options using either the option() or options() method.

Step 5. Use the save() method to specify the save mode for the content.

B.

Step 1. Use the PUT() method of the DataFrame to construct a DataFrameWriter.

Step 2. Specify SNOWFLAKE_SOURCE_NAME using the format() method.

Step 3. Specify the connector options using either the option() or options() method.

Step 4. Use the dbtable option to specify the table to which data is written.

Step 5. Use the save() method to specify the save mode for the content.

C.

Step 1. Use the write() method of the DataFrame to construct a DataFrameWriter.

Step 2. Specify SNOWFLAKE_SOURCE_NAME using the format() method.

Step 3. Specify the connector options using either the option() or options() method.

Step 4. Use the dbtable option to specify the table to which data is written.

Step 5. Use the mode() method to specify the save mode for the content.

(Correct)

D.

Step 1. Use the writer() method of the DataFrame to construct a DataFrameWriter.

Step 2. Specify SNOWFLAKE_SOURCE_NAME using the format() method.

Step 3. Use the dbtable option to specify the table to which data is written.

Step 4. Specify the connector options using either the option() or options() method.

Step 5. Use the save() method to specify the save mode for the content.

Question 17

Which type of Python UDFs let you define Python functions that receive batches of input rows as Pandas DataFrames and return batches of results as Pandas arrays or Series?

Options:

A.

MPP Python UDFs

B.

Scalar Python UDFs

C.

Vectorized Python UDFs

D.

Hybrid Python UDFs
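The batch-processing idea behind vectorized UDFs can be illustrated in plain pandas (this is only a sketch of the concept, not Snowflake code): the function receives a whole batch of rows as a pandas Series and returns a batch of results, instead of being invoked once per row as a scalar UDF would be.

```python
import pandas as pd

def add_one_batch(batch: pd.Series) -> pd.Series:
    """Receives an entire batch and returns a batch: one call, many rows.

    A scalar UDF would instead be called once per row. Vectorized
    execution amortizes call overhead and enables pandas/NumPy speed.
    """
    return batch + 1  # vectorized: applies to every row at once

result = add_one_batch(pd.Series([1, 2, 3]))
print(result.tolist())  # [2, 3, 4]
```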

Question 18

Which of the following is a common evaluation metric for binary classification?

Options:

A.

Accuracy

B.

F1 score

C.

Mean squared error (MSE)

D.

Area under the ROC curve (AUC)
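Of the options above, the F1 score is worth a worked example: it is the harmonic mean of precision and recall, a common single-number metric for binary classification. The confusion-matrix counts below are hypothetical.

```python
# Hypothetical confusion-matrix counts for illustration.
tp, fp, fn = 8, 2, 2

precision = tp / (tp + fp)                          # 8/10 = 0.8
recall = tp / (tp + fn)                             # 8/10 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.8
print(f1)
```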

Question 19

Consider a DataFrame df with 10 rows and index ['r1', 'r2', 'r3', 'row4', 'row5', 'row6', 'r7', 'r8', 'r9', 'row10']. What does the aggregate method in the code below do?

g = df.groupby(df.index.str.len())

g.aggregate({'A':len, 'B':np.sum})

Options:

A.

Computes Sum of column A values

B.

Computes length of column A

C.

Computes length of column A and Sum of Column B values of each group

D.

Computes length of column A and Sum of Column B values
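A runnable version of the question's snippet, with assumed column values (the question does not specify A or B). Grouping by df.index.str.len() buckets the rows by index-string length (2, 4, and 5 here); the aggregate then applies len to column A and np.sum to column B within each group.

```python
import numpy as np
import pandas as pd

idx = ['r1', 'r2', 'r3', 'row4', 'row5', 'row6', 'r7', 'r8', 'r9', 'row10']
# Assumed values: A = 0..9, B = 1..10, purely for illustration.
df = pd.DataFrame({'A': range(10), 'B': range(1, 11)}, index=idx)

g = df.groupby(df.index.str.len())        # group keys: 2, 4, 5
out = g.aggregate({'A': len, 'B': np.sum})

# The length-2 group (r1, r2, r3, r7, r8, r9) has 6 rows,
# so out.loc[2, 'A'] == 6 and out.loc[2, 'B'] sums those rows' B values.
print(out)
```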

Exam Code: DSA-C02
Exam Name: SnowPro Advanced: Data Scientist Certification Exam
Last Update: Dec 27, 2024
Questions: 65