How often are encryption keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days
Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of Snowflake’s comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.
References:
Understanding Encryption Key Management in Snowflake
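For illustration, the 30-day rotation itself requires no SQL at all; the only related control is the optional annual rekeying available in Enterprise Edition. A minimal sketch, assuming the ACCOUNTADMIN role:
-- Automatic 30-day key rotation is always on and cannot be configured.
-- Optional yearly rekeying of data protected by keys older than one year
-- (Enterprise Edition and above):
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE;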
Query compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Compute layer
Storage layer
Cloud infrastructure layer
Cloud services layer
Query compilation in Snowflake occurs in the Cloud Services layer. This layer is responsible for coordinating and managing all aspects of the Snowflake service, including authentication, infrastructure management, metadata management, query parsing and optimization, and security. By handling these tasks, the Cloud Services layer enables the Compute layer to focus on executing queries, while the Storage layer is dedicated to persistently storing data.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowflake Architecture
Which of the following objects can be shared through secure data sharing?
Masking policy
Stored procedure
Task
External table
Secure data sharing in Snowflake allows users to share various objects between Snowflake accounts without physically copying the data, thus not consuming additional storage. Among the options provided, external tables can be shared through secure data sharing. External tables are used to query data directly from files in a stage without loading the data into Snowflake tables, making them suitable for sharing across different Snowflake accounts.
References:
Snowflake Documentation on Secure Data Sharing
SnowPro™ Core Certification Companion: Hands-on Preparation and Practice
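As a sketch of how such a share might be created, a provider account grants privileges on the external table and its parent objects to a share (all names hypothetical; the exact grant syntax for external tables is an assumption here and should be checked against the current documentation):
-- Create a share and expose one external table through it
CREATE SHARE my_share;
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.my_schema TO SHARE my_share;
GRANT SELECT ON EXTERNAL TABLE my_db.my_schema.my_ext_table TO SHARE my_share;
-- Make the share visible to a consumer account (account identifier hypothetical)
ALTER SHARE my_share ADD ACCOUNTS = my_org.consumer_account;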
Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?
An FTP server with TLS encryption
An HTTPS server with WebDAV
A Google Cloud storage bucket
A Windows server file share on Azure
An external stage in Snowflake can reference a storage location in Amazon S3, Google Cloud Storage, or Microsoft Azure, regardless of the cloud platform that hosts the Snowflake account. Therefore, even for an account located on Microsoft Azure, a Google Cloud storage bucket is a valid source for an external stage. FTP servers, WebDAV servers, and Windows server file shares are not supported as external stage locations.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
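For illustration, a minimal sketch of creating external stages over supported locations (URLs, SAS token, and integration name are hypothetical):
-- Azure Blob storage container as an external stage
CREATE STAGE my_azure_stage
  URL = 'azure://myaccount.blob.core.windows.net/mycontainer/path/'
  CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=...');

-- A Google Cloud Storage bucket is equally valid, even on an Azure-hosted account;
-- GCS stages authenticate through a storage integration
CREATE STAGE my_gcs_stage
  URL = 'gcs://mybucket/path/'
  STORAGE_INTEGRATION = my_gcs_int;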
A user has unloaded data from Snowflake to a stage.
Which SQL command should be used to validate which data was loaded into the stage?
list @file_stage
show @file_stage
view @file_stage
verify @file_stage
The LIST command in Snowflake displays the files in a specified stage. After data has been unloaded to a stage, running list @file_stage shows all of the files that were written to that stage, allowing the user to validate the data that was unloaded.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
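For example, assuming a stage named file_stage, the command and a common narrowing pattern look like this:
-- List all files currently in the stage (stage name hypothetical)
LIST @file_stage;
-- An optional path and regular-expression pattern can narrow the listing
LIST @file_stage/unload/ PATTERN = '.*[.]csv[.]gz';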
What are common issues found by using the Query Profile? (Choose two.)
Identifying queries that will likely run very slowly before executing them
Locating queries that consume a high amount of credits
Identifying logical issues with the queries
Identifying inefficient micro-partition pruning
Data spilling to a local or remote disk
The Query Profile in Snowflake is used to diagnose performance issues with queries that have already executed. Common issues it surfaces include inefficient micro-partition pruning (D) and data spilling to a local or remote disk (E). Pruning efficiency determines how many micro-partitions must be scanned, and spilling occurs when a query's working set exceeds available memory, forcing intermediate data to be written to disk and slowing execution.
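Spilling can also be checked outside the Query Profile; a sketch querying the documented ACCOUNT_USAGE.QUERY_HISTORY view:
-- Find recent queries that spilled to local or remote storage
SELECT query_id,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_local_storage > 0
   OR bytes_spilled_to_remote_storage > 0
ORDER BY start_time DESC;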
What affects whether the query results cache can be used?
If the query contains a deterministic function
If the virtual warehouse has been suspended
If the referenced data in the table has changed
If multiple users are using the same virtual warehouse
The query results cache can be used as long as the referenced data in the table has not changed since the query was last run. If the underlying data has changed, Snowflake will not use the cached results and will re-execute the query.
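Result reuse can be controlled per session with the documented USE_CACHED_RESULT parameter; a minimal sketch (table name hypothetical):
-- Result reuse is enabled by default; disabling it forces full re-execution
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
SELECT COUNT(*) FROM my_table;   -- always recomputed in this session
ALTER SESSION SET USE_CACHED_RESULT = TRUE;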
A marketing co-worker has requested the ability to change a warehouse size on their medium virtual warehouse called mktg_WH.
Which of the following statements will accommodate this request?
ALLOW RESIZE ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT OPERATE ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
The statement that accommodates this request grants the MODIFY privilege on the warehouse mktg_WH to the ROLE MARKETING. The MODIFY privilege allows the role to change warehouse properties, including its size; the OPERATE privilege only allows the warehouse to be started, stopped, suspended, or resumed.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Access Control Privileges
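For example, granting the privilege and then resizing (warehouse and role names from the question):
GRANT MODIFY ON WAREHOUSE mktg_WH TO ROLE MARKETING;
-- A user with the MARKETING role can then change the warehouse size
ALTER WAREHOUSE mktg_WH SET WAREHOUSE_SIZE = 'LARGE';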
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
Informatica
Power BI
Adobe
DataRobot
DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Machine Learning & Data Science Partners
https://docs.snowflake.com/en/user-guide/ecosystem-analytics.html
Which semi-structured file formats are supported when unloading data from a table? (Select TWO).
ORC
XML
Avro
Parquet
JSON
Snowflake supports unloading table data in two semi-structured file formats: JSON and Parquet. ORC, XML, and Avro are supported for loading data into Snowflake, but not for unloading.
https://docs.snowflake.com/en/user-guide/data-unload-prepare.html
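A sketch of unloading to each supported semi-structured format (stage and table names hypothetical):
-- Unload query results as JSON; JSON unloads require a single VARIANT column,
-- which OBJECT_CONSTRUCT(*) produces from the table's columns
COPY INTO @my_stage/json_out/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table)
FILE_FORMAT = (TYPE = 'JSON');

-- Unload the same table as Parquet
COPY INTO @my_stage/parquet_out/
FROM my_table
FILE_FORMAT = (TYPE = 'PARQUET');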
What data is stored in the Snowflake storage layer? (Select TWO).
Snowflake parameters
Micro-partitions
Query history
Persisted query results
Standard and secure view results
The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Key Concepts & Architecture | Snowflake Documentation
Which Snowflake object enables loading data from files as soon as they are available in a cloud storage location?
Pipe
External stage
Task
Stream
In Snowflake, a Pipe is the object designed to enable the continuous, near-real-time loading of data from files as soon as they are available in a cloud storage location. Pipes use Snowflake’s COPY command to load data and can be associated with a Stage object to monitor for new files. When new data files appear in the stage, the pipe automatically loads the data into the target table.
References:
Snowflake Documentation on Pipes
SnowPro® Core Certification Study Guide
https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html
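A minimal Snowpipe sketch with auto-ingest (pipe, table, and stage names hypothetical; AUTO_INGEST additionally requires cloud event notifications to be configured on the stage location):
CREATE PIPE my_pipe
  AUTO_INGEST = TRUE   -- load as soon as event notifications announce new files
AS
  COPY INTO my_table
  FROM @my_ext_stage
  FILE_FORMAT = (TYPE = 'CSV');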
What is a key feature of Snowflake architecture?
Zero-copy cloning creates a mirror copy of a database that updates with the original
Software updates are automatically applied on a quarterly basis
Snowflake eliminates resource contention with its virtual warehouse implementation
Multi-cluster warehouses allow users to run a query that spans across multiple clusters
Snowflake automatically sorts DATE columns during ingest for fast retrieval by date
One of the key features of Snowflake’s architecture is its unique approach to eliminating resource contention through the use of virtual warehouses. This is achieved by separating storage and compute resources, allowing multiple virtual warehouses to operate independently on the same data without affecting each other. This means that different workloads, such as loading data, running queries, or performing complex analytics, can be processed simultaneously without any performance degradation due to resource contention.
References:
Snowflake Documentation on Virtual Warehouses
SnowPro® Core Certification Study Guide
Which data type can be used to store geospatial data in Snowflake?
Variant
Object
Geometry
Geography
Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type models the Earth as a perfect sphere following the WGS 84 standard and stores points, lines, and polygons on the Earth's surface, which makes it suitable for global geospatial data. The GEOMETRY data type represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used with local spatial reference systems. Since the question asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
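For illustration, a GEOGRAPHY column and a distance calculation (table name and coordinates arbitrary):
CREATE TABLE locations (name STRING, geo GEOGRAPHY);
INSERT INTO locations
  SELECT 'office', TO_GEOGRAPHY('POINT(-122.35 37.55)');
-- ST_DISTANCE on GEOGRAPHY values returns the distance in meters on the sphere
SELECT ST_DISTANCE(geo, TO_GEOGRAPHY('POINT(-122.41 37.77)')) FROM locations;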
What happens when an external or an internal stage is dropped? (Select TWO).
When dropping an external stage, the files are not removed and only the stage is dropped
When dropping an external stage, both the stage and the files within the stage are removed
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Stages
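For example (stage names hypothetical):
DROP STAGE my_external_stage;   -- the files remain in S3/GCS/Azure storage
DROP STAGE my_internal_stage;   -- staged files are permanently deleted with the stage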
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Customer-managed encryption keys through Tri-Secret Secure
Automatic encryption of all data
Up to 90 days of data recovery through Time Travel
Object-level access control
Column-level security to apply data masking policies to tables and views
In all Snowflake editions, two key capabilities are universally available:
B. Automatic encryption of all data: Snowflake automatically encrypts all data stored in its platform, ensuring security and compliance with various regulations. This encryption is transparent to users and does not require any configuration or management.
D. Object-level access control: Snowflake provides granular access control mechanisms that allow administrators to define permissions at the object level, including databases, schemas, tables, and views. This ensures that only authorized users can access specific data objects.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
References:
Snowflake Documentation on Security Features
SnowPro® Core Certification Exam Study Guide
Which statement about billing applies to Snowflake credits?
Credits are billed per-minute with a 60-minute minimum
Credits are used to pay for cloud data storage usage
Credits are consumed based on the number of credits billed for each hour that a warehouse runs
Credits are consumed based on the warehouse size and the time the warehouse is running
Snowflake credits are the unit of measure for the compute resources used in Snowflake. The number of credits consumed depends on the size of the virtual warehouse and the time it is running. Larger warehouses consume more credits per hour than smaller ones, and credits are billed for the time the warehouse is active, regardless of the actual usage within that time.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
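As a worked example using the published per-hour rates: a Medium warehouse is billed at 4 credits per hour, so running it for 30 minutes consumes about 2 credits, while an X-Small at 1 credit per hour consumes 0.5 credits over the same interval. Billing is per second, with a 60-second minimum each time the warehouse is resumed.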
Which of the following Snowflake objects can be shared using a secure share? (Select TWO).
Materialized views
Sequences
Procedures
Tables
Secure User Defined Functions (UDFs)
Secure sharing in Snowflake allows users to share specific objects with other Snowflake accounts without physically copying the data, thus not consuming additional storage. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared using this feature. Materialized views, sequences, and procedures are not shareable objects in Snowflake.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Secure Data Sharing
Which cache type is used to cache data output from SQL queries?
Metadata cache
Result cache
Remote cache
Local file cache
The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the result, which saves time and computational resources.
References:
Snowflake Documentation on Query Results Cache
SnowPro® Core Certification Study Guide
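Cached results can also be queried directly with the RESULT_SCAN table function; a short sketch (table name hypothetical):
SELECT * FROM my_table;   -- query whose result set is cached
-- Post-process the previous statement's cached result set
SELECT COUNT(*) FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));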
What transformations are supported in a CREATE PIPE ... AS COPY ... FROM (....) statement? (Select TWO.)
Data can be filtered by an optional where clause
Incoming data can be joined with other tables
Columns can be reordered
Columns can be omitted
Row level access can be defined
In a CREATE PIPE ... AS COPY ... FROM (...) statement, the COPY command supports only simple transformations expressed through a SELECT statement: columns can be reordered, columns can be omitted, and values can be cast to other data types. Filtering rows with a WHERE clause, joining the incoming data with other tables, and defining row-level access are not supported during a load.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Simple Transformations During a Load
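A sketch of the supported transformations inside a pipe definition (pipe, table, stage, and column names hypothetical):
CREATE PIPE my_pipe AS
  COPY INTO my_table (city, zip)   -- target columns listed in the desired order
  FROM (
    SELECT t.$2, t.$1              -- file columns reordered; remaining columns omitted
    FROM @my_stage t
  )
  FILE_FORMAT = (TYPE = 'CSV');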
A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.
What does the STRIP_OUTER_ARRAY file format do?
It removes the last element of the outer array.
It removes the outer array structure and loads the records into separate table rows.
It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns
It removes the NULL elements from the JSON object eliminating invalid data and enables the ability to load the records
The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table. This is particularly useful for efficiently loading JSON data that is structured as an array of records.
References:
Snowflake Documentation on JSON File Format
[COF-C02] SnowPro Core Certification Exam Study Guide
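For example (stage, file, and table names hypothetical):
COPY INTO my_json_table
FROM @my_stage/data.json
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
-- An input of [{"id":1},{"id":2}] loads as two rows instead of one array value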
When is the result set cache no longer available? (Select TWO)
When another warehouse is used to execute the query
When another user executes the query
When the underlying data has changed
When the warehouse used to execute the query is suspended
When it has been 24 hours since the last query
The result set cache in Snowflake is invalidated and no longer available when the underlying data of the query results has changed, ensuring that queries return the most current data. Additionally, a cached result is purged if it is not reused within 24 hours; each reuse extends the retention period, up to a maximum of 31 days after the query was first executed.
True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.
True
False
Provisioning time can vary with warehouse size. A 4X-Large warehouse acquires far more compute resources than an X-Small warehouse, so it may at times take longer to provision.
Which command can be used to load data into an internal stage?
LOAD
COPY
GET
PUT
The PUT command is used to load data into an internal stage in Snowflake. This command uploads data files from a local file system to a named internal stage, making the data available for subsequent loading into a Snowflake table using the COPY INTO command.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Loading
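For example, uploading a local file from a client such as SnowSQL (paths and names hypothetical):
-- PUT runs from a client connection, not from the Snowflake web worksheet
PUT file:///tmp/data.csv @my_int_stage AUTO_COMPRESS = TRUE;
-- The staged file can then be loaded into a table
COPY INTO my_table FROM @my_int_stage FILE_FORMAT = (TYPE = 'CSV');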
Which COPY INTO command option outputs the data into one file?
SINGLE=TRUE
MAX_FILE_NUMBER=1
FILE_NUMBER=1
MULTIPLE=FALSE
The COPY INTO <location> command writes the unloaded data to a single file when the copy option SINGLE=TRUE is specified. By default (SINGLE=FALSE), the output is split into multiple files. MAX_FILE_NUMBER, FILE_NUMBER, and MULTIPLE are not valid copy options.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Unloading
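A sketch (stage and table names hypothetical); MAX_FILE_SIZE may also need raising, since the default file size limit is 16 MB and the maximum varies by cloud platform:
COPY INTO @my_stage/out/result.csv.gz
FROM my_table
FILE_FORMAT = (TYPE = 'CSV')
SINGLE = TRUE
MAX_FILE_SIZE = 4900000000;   -- in bytes; default is 16 MB per file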
What happens when a virtual warehouse is resized?
When increasing the size of an active warehouse the compute resources for all running and queued queries on the warehouse are affected
When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
Users who are trying to use the warehouse will receive an error message until the resizing is complete
When a virtual warehouse in Snowflake is resized while running, queries that are already executing are not affected; only queued and new queries make use of the changed resources. When the size of a warehouse is reduced, the compute resources are removed only when they are no longer being used to execute any current statements. Resizing does not suspend the warehouse, and users can continue to use it without receiving errors.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Virtual Warehouses
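For example (warehouse name hypothetical):
-- Running queries are unaffected; queued and new queries use the new size
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'XLARGE';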
Which of the following are benefits of micro-partitioning? (Select TWO)
Micro-partitions cannot overlap in their range of values
Micro-partitions are immutable objects that support the use of Time Travel.
Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
Rows are automatically stored in sorted order within micro-partitions
Micro-partitions can be defined on a schema-by-schema basis
Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage to virtual warehouses. This is because Snowflake’s query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the amount of data that needs to be scanned and transferred.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html
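Pruning efficiency can be inspected per table with a documented system function; a sketch (table and column names hypothetical):
-- Reports clustering depth and micro-partition overlap statistics
-- for the table, evaluated against the given column(s)
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(order_date)');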
Which Snowflake objects track DML changes made to tables, like inserts, updates, and deletes?
Pipes
Streams
Tasks
Procedures
In Snowflake, Streams are the objects that track Data Manipulation Language (DML) changes made to tables, such as inserts, updates, and deletes. Streams record these changes along with metadata about each change, enabling actions to be taken using the changed data. This process is known as change data capture (CDC).
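A minimal stream sketch (stream and table names hypothetical):
CREATE STREAM my_stream ON TABLE my_table;
-- After DML runs against my_table, the stream exposes the changed rows,
-- including the METADATA$ACTION and METADATA$ISUPDATE columns
SELECT * FROM my_stream;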
What objects in Snowflake are supported by Dynamic Data Masking? (Select TWO).
Views
Materialized views
Tables
External tables
Future grants
Dynamic Data Masking in Snowflake supports tables and views. These objects can have masking policies applied to their columns to dynamically mask data at query time.
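A sketch of a masking policy applied to a table column (policy, table, column, and role names hypothetical):
-- Unmask the value only for a privileged role
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('ANALYST') THEN val
    ELSE '*** MASKED ***'
  END;

-- Attach the policy to a column; the same works for view columns
ALTER TABLE my_table MODIFY COLUMN email
  SET MASKING POLICY email_mask;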
What step can reduce data spilling in Snowflake?
Using a larger virtual warehouse
Increasing the virtual warehouse maximum timeout limit
Increasing the amount of remote storage for the virtual warehouse
Using a common table expression (CTE) instead of a temporary table
To reduce data spilling in Snowflake, using a larger virtual warehouse is effective because it provides more memory and more local disk space, which can accommodate larger intermediate results and minimize the need to spill data to local or remote storage.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
How can a Snowflake administrator determine which user has accessed a database object that contains sensitive information?
Review the granted privileges to the database object.
Review the row access policy for the database object.
Query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema.
Query the REPLICATION_USAGE_HISTORY view in the ORGANIZATION_USAGE schema.
To determine which user has accessed a database object containing sensitive information, a Snowflake administrator can query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema, which records the objects read and written by each query.
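A sketch of such a query (object name hypothetical; ACCESS_HISTORY requires Enterprise Edition or higher):
-- Who touched the sensitive table, and when
SELECT user_name, query_id, query_start_time
FROM snowflake.account_usage.access_history,
     LATERAL FLATTEN(base_objects_accessed) f
WHERE f.value:"objectName"::STRING = 'MY_DB.MY_SCHEMA.SENSITIVE_TABLE'
ORDER BY query_start_time DESC;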
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
CREATE STAGE
COPY INTO