Commonly asked Snowflake interview questions

1. What are the different types of Stages?

Stages are the storage locations used to hold files before loading (or after unloading). In Snowflake, there are two types of stages:
1. Internal stage - resides in Snowflake-managed storage.
2. External stage - resides in cloud object storage (an AWS S3 bucket, Azure Blob Storage, or a GCP bucket).
Data is loaded from a stage into a table, or unloaded from a table to a stage, using the COPY INTO command.
For bulk loading you use COPY INTO; for continuous data loading you use Snowpipe, an autonomous service provided by Snowflake.
To upload files from the local file system into a Snowflake internal stage, you use the PUT command.
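As a sketch of a typical bulk-load flow (the stage name, file path, and table are illustrative; PUT runs from SnowSQL, not the web UI, and gzip-compresses files by default):

```sql
-- Create a named internal stage
CREATE STAGE my_stage;

-- Upload a local CSV file to the stage (PUT auto-compresses it to .gz)
PUT file:///tmp/orders.csv @my_stage;

-- Copy the staged file into a table
COPY INTO orders
  FROM @my_stage/orders.csv.gz
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```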
2. What is unique about the Snowflake Cloud Data Warehouse?
Snowflake has introduced many unique features that are not found in the other data warehouses currently on the market.
1. Totally cloud-agnostic (SaaS) -
Snowflake runs on any of three cloud service providers (AWS, Azure, GCP) for its underlying infrastructure. It provides true SaaS functionality: users do not need to download or install any software to use Snowflake, or worry about any kind of hardware.
2. Decoupled storage and compute -
Storage and compute scale independently and work together through the services Snowflake runs on the cloud provider's infrastructure. This decreases usage costs, because the user pays only for what they use.
3. Zero-copy cloning -
This feature takes a snapshot of a table at the current instant, for example to take a backup. The snapshot consumes no additional physical storage, because the clone references the same micro-partitions as the source table. Once changes are made to the cloned object, only the changed data is written to new micro-partitions.
4. Secure data sharing -
This feature provides secure sharing of data with other Snowflake accounts, or with users outside Snowflake. By secure, it means only the consumers you authorize can access the specific objects you share, keeping the data protected from all other users. Shared objects are always read-only. You can create a Reader account to share data with a consumer who does not use Snowflake.
5. Supports semi-structured data -
Snowflake supports file formats such as JSON, Avro, ORC, Parquet, and XML. The VARIANT data type is used to load semi-structured data into Snowflake; once loaded, it can be split into multiple columns of a table.
A VARIANT value has a size limit of 16 MB. The FLATTEN table function is used to explode nested attributes (arrays and objects) into separate rows.
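A minimal sketch of working with VARIANT data (table, column, and path names are illustrative):

```sql
-- Land JSON in a single VARIANT column
CREATE TABLE raw_events (v VARIANT);

-- Pull individual attributes out with path notation and a cast
SELECT v:user.name::STRING   AS user_name,
       v:event_type::STRING  AS event_type
FROM raw_events;

-- Explode a nested array into one row per element
SELECT e.value:sku::STRING AS sku
FROM raw_events,
     LATERAL FLATTEN(input => v:items) e;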
6. Scalability -
Because Snowflake is built on cloud infrastructure, it uses cloud services for storage and compute. A warehouse is a cluster of compute resources used to execute queries. This gives users the ability to scale resources up when they need large amounts of data loaded faster, and scale back down when the process is finished, without any interruption to service.
7. Time Travel and Fail-safe -
Time Travel lets you read or retrieve Snowflake data that has been updated or dropped, within a permissible retention period. Together with Fail-safe, it forms Snowflake's Continuous Data Protection (CDP) lifecycle.
Using Time Travel, you can perform the following actions within a defined period of time:
1. Query data in the past that has since been updated or deleted.
2. Create clones of entire tables, schemas, and databases at or before specific points in the past.
3. Restore tables, schemas, and databases that have been dropped.
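A sketch of Time Travel in SQL (the table name, timestamp, and query ID are placeholders):

```sql
-- Query the table as it was five minutes ago
SELECT * FROM orders AT(OFFSET => -60 * 5);

-- Query the table as it was at a specific timestamp
SELECT * FROM orders AT(TIMESTAMP => '2024-01-15 08:00:00'::TIMESTAMP_LTZ);

-- Clone the table as it existed before a given statement ran
CREATE TABLE orders_restored CLONE orders
  BEFORE(STATEMENT => '<query_id>');
```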
3. What are the different ways to access the Snowflake Cloud Data Warehouse?
Snowflake provides a web UI, as well as the SnowSQL command-line client to execute SQL queries and perform all DDL and DML operations, including data loading and unloading. It also provides native connectors and drivers for Python, Spark, Go, Node.js, JDBC, and ODBC.
4. What are the data security features in Snowflake?
Snowflake provides below security features:
1. Data encryption
2. Object-level access
3. RBAC
4. Secure data sharing
5. Masking policies for sensitive data
5. What are the benefits of Snowflake Compression?
Snowflake compresses all stored data by default (gzip, among other schemes), which reduces the storage space occupied and also improves data loading and unloading performance. When loading, it also detects compressed input formats such as gzip, bzip2, deflate, and raw_deflate.
6. What is Snowflake Caching? What are the different types of caching in Snowflake?
Snowflake uses three types of caching:
1. Result cache - holds the results of every query executed in the past 24 hours, shared across warehouses.
2. Local disk cache - caches data used by SQL queries at the warehouse level. Whenever data is needed for a given query, it is retrieved from remote disk storage and cached on the warehouse's SSD and in memory.
3. Remote disk - the long-term storage layer. This level is responsible for data resilience, which in the case of Amazon Web Services means 99.999999999% durability, even in the event of an entire data center failure.
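One way to observe the result cache (USE_CACHED_RESULT is a real session parameter; the table name is illustrative):

```sql
-- Run the same query twice: the second run is served from the result cache
SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM orders;

-- Disable result-cache reuse for the current session
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```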

7. Is there a cost associated with Time Travel in Snowflake?
Yes. Time Travel is the feature provided by Snowflake to retrieve data that has been removed from Snowflake databases, and Snowflake bills storage for the historical data it retains during the retention period.
Using Time Travel you can:
1. Query data in the past that has since been updated or deleted.
2. Create clones of entire tables, schemas, and databases at or before specific points in the past.
3. Restore tables, schemas, and databases that have been dropped.
Once the Time Travel retention period is over, the data moves to the Fail-safe zone.
For the Snowflake Standard edition, the default and maximum Time Travel retention is 1 day.
For the Snowflake Enterprise edition and above:
for transient and temporary databases, schemas, and tables, the retention is at most 1 day;
for permanent databases, schemas, and tables, the retention can range from 0 to 90 days (the default is 1 day).
8. What is Fail-safe in Snowflake?
When the Time Travel retention period elapses, removed data from permanent tables moves into a Fail-safe zone for 7 days (transient and temporary tables have no Fail-safe). Once data is in Fail-safe, you must contact Snowflake Support to restore it, which may take from 24 hours to several days. Storage charges continue to accrue for data held in Fail-safe, calculated on a 24-hour basis.
9. What is the difference between Time Travel and Fail-safe in Snowflake?
Time Travel has a retention period ranging from 0 to 90 days for permanent databases, schemas, and tables, whereas the Fail-safe period is always 7 days.
Once a table or schema is dropped from the account, it remains recoverable through Time Travel for that object's retention period (0-90 days). Once Time Travel elapses, the object moves into the Fail-safe zone, where only Snowflake Support can recover it.
Snowflake provides three ways to specify a point in time for Time Travel:
a. Using a timestamp - travel to the state of the data at or immediately before a specified timestamp.
b. Using an offset - travel to a point a given number of seconds in the past.
c. Using a query ID - travel to the state of the data immediately before or after a specified statement ran.
For example, you can drop a table, observe that querying it fails, and then recover it with UNDROP and read the data again.
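A minimal sketch of that drop/undrop sequence (the table name is illustrative):

```sql
DROP TABLE orders;

SELECT * FROM orders;   -- fails: the table no longer exists

UNDROP TABLE orders;    -- restores the most recent version from Time Travel

SELECT * FROM orders;   -- works again
```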
10. How does zero-copy cloning work, and what are its advantages?
Zero-copy cloning creates a clone of a Snowflake object. You can clone objects such as databases, schemas, tables, streams, stages, file formats, sequences, and tasks.
When you create a clone, Snowflake points the clone's metadata at the source object's existing micro-partitions; new micro-partitions are written only when you change the clone.
1. The main advantage is that it creates a copy of the object almost instantly.
2. It consumes no extra space as long as no updates happen on the cloned object.
3. It is a fast way to take a backup of any object.
Syntax:
create table orders_clone clone orders;
11. What are Data Shares in Snowflake?
Data sharing is the feature Snowflake provides to share data across Snowflake accounts and with consumers outside Snowflake. You share exactly the datasets you choose to include in the share. For consumers outside Snowflake, you create a Reader account with access to read the data only.
Below are the objects that can be shared:
Tables
External tables
Secure views
Secure materialized views
Secure UDFs
There are two types of users:
1. Data provider: The provider creates a share of a database in their account and
grants access to specific objects in the database. The provider can also share
data from multiple databases, as long as these databases belong to the same
account.
2. Data consumer: On the consumer side, a read-only database is created
from the share. Access to this database is configurable using the same, standard
role-based access control that Snowflake provides for all objects in the system.
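A hedged sketch of the provider side (the share, database, schema, table, and account names are placeholders):

```sql
-- Create a share and grant it access to specific objects
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Make the share visible to a consumer account
ALTER SHARE sales_share ADD ACCOUNTS = consumer_org.consumer_account;
```

The consumer then creates a read-only database from the share with CREATE DATABASE ... FROM SHARE.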
12. What is horizontal scaling vs. vertical scaling in Snowflake?
Snowflake Enterprise edition and above support multi-cluster warehouses, which handle concurrency by adding clusters. A warehouse can be scaled horizontally or vertically.
A multi-cluster warehouse can be configured in two ways:
1. Maximized mode: the minimum and maximum cluster counts are set to the same value greater than 1 (up to 10), so all clusters always run.
2. Auto-scale mode: the minimum and maximum cluster counts differ (e.g. min = 2, max = 10), and Snowflake starts and stops clusters as the query load changes.
You can also change your warehouse manually according to your query volume and complexity. Below are the scaling methods available in Snowflake.
Vertical scaling:
scaling up - increasing the size of the warehouse (e.g. Small to Medium);
scaling down - decreasing the size of the warehouse (e.g. Medium to Small).
Horizontal scaling:
scaling in - removing clusters from the warehouse (e.g. 4 → 2);
scaling out - adding clusters to the warehouse (e.g. 2 → 4).
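These operations can be sketched in SQL (the warehouse name and settings are illustrative):

```sql
-- Vertical scaling: resize the warehouse
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'MEDIUM';

-- Horizontal scaling: configure a multi-cluster warehouse in auto-scale mode
ALTER WAREHOUSE my_wh SET
  MIN_CLUSTER_COUNT = 2
  MAX_CLUSTER_COUNT = 10
  SCALING_POLICY = 'STANDARD';
```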
12. Where is metadata stored in Snowflake?
Once a table is created in Snowflake, Snowflake generates metadata about it, including the row count, the creation timestamp, and per-micro-partition statistics such as the minimum and maximum values of the columns.
Metadata is maintained by the cloud services layer in Snowflake-managed storage, which is why metadata-only queries (for example, COUNT(*) or MIN/MAX on a column) can be answered without running a warehouse.
13. Briefly explain the different data security features that are
available in Snowflake
Multiple data security options are available in Snowflake, such as:
1. Secure view
2. Reader account
3. Shared data
4. RBAC
14. What are the responsibilities of a storage layer in Snowflake?
The storage layer is nothing but the cloud storage service where data resides.
It has responsibilities such as:
1. Data protection
2. Data durability
3. Data Encryption
4. Archival of Data
15. Is Snowflake an MPP database?
Yes. MPP stands for massively parallel processing. Snowflake is built on the cloud, so it inherits cloud characteristics such as scalability, and it handles queries in parallel by adding the necessary compute resources. Snowflake's architecture is a hybrid: storage is shared across the platform (shared-disk), while each virtual warehouse executes queries as a shared-nothing MPP compute cluster. When the query load increases on a multi-cluster warehouse, Snowflake automatically starts additional clusters capable of handling the query logic and execution.
16. Explain the different table types available in Snowflake.
It supports three types of tables:
1. Permanent:
Permanent tables are the default table type created in Snowflake. They occupy cloud storage, and the data is partitioned into micro-partitions for better retrieval. This table type has the strongest data-protection features: Time Travel (up to 90 days on Enterprise edition and above; the default retention is 1 day) and a 7-day Fail-safe period.
2. Temporary: Temporary tables exist only for the session that created them and are dropped automatically when the session ends. They support at most 1 day of Time Travel and have no Fail-safe.
3. Transient: Transient tables are similar to temporary tables with respect to the Time Travel period (at most 1 day) and the absence of Fail-safe, but they persist across sessions and will not disappear until explicitly dropped.
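The three types differ only by a keyword at creation time:

```sql
CREATE TABLE t_perm (id INT);             -- permanent (default)
CREATE TEMPORARY TABLE t_temp (id INT);   -- dropped when the session ends
CREATE TRANSIENT TABLE t_trans (id INT);  -- persists, but no Fail-safe
```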
17. Explain the differences and similarities between transient and temporary tables.
Similarities: both support at most 1 day of Time Travel and neither has a Fail-safe period.
Differences: a temporary table is visible only to the session that created it and is dropped automatically when that session ends, while a transient table is visible across sessions like a normal table and persists until explicitly dropped.
18. Which Snowflake edition should you use if you want to enable Time Travel for up to 90 days?
All Snowflake editions support 1 day of Time Travel by default. The Standard edition supports a maximum of 1 day, so for Time Travel of up to 90 days on permanent tables you need an edition higher than Standard (Enterprise or above).
19. What are micro-partitions?
Snowflake has its own way of storing data in cloud storage. Snowflake is a columnar data warehouse: instead of storing data row-wise, it splits a table into columnar chunks called micro-partitions. They are "micro" because each partition is limited to 50-500 MB of uncompressed data.
Snowflake does not support indexes; instead it manages metadata about each micro-partition to retrieve data faster. A traditional relational database traverses rows, via indexes or full scans, to find the requested data, and the overhead of reading unused data makes retrieval time-consuming and compute-heavy. Snowflake, by contrast, uses the micro-partition metadata (such as the value ranges and row counts of each partition) to determine which micro-partitions can contain the data the user requested, and reads only those. Using this metadata, Snowflake manages all micro-partitions for data storage and retrieval.
See the Snowflake documentation on micro-partitions for details.
20. By default, clustering keys are created for every table; how can you disable this option?
In fact, Snowflake does not define clustering keys by default. As new data continuously arrives and is loaded into micro-partitions, some columns (for example, event_date) have constant values within each partition (naturally clustered), while other columns (for example, city) may have the same values scattered across all partitions.
Snowflake allows you to define clustering keys: one or more columns used to co-locate the data of the table in the same micro-partitions. Once a clustering key is defined, the Automatic Clustering service reclusters the table in the background.
To suspend Automatic Clustering for a table, use the ALTER TABLE command with a SUSPEND RECLUSTER clause. For example:
alter table t1 suspend recluster;
To resume Automatic Clustering for a clustered table, use the ALTER TABLE command with a RESUME RECLUSTER clause. For example:
alter table t1 resume recluster;
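Defining or removing the clustering key itself is also done with ALTER TABLE (the column choice is illustrative):

```sql
ALTER TABLE t1 CLUSTER BY (event_date);   -- define a clustering key
ALTER TABLE t1 DROP CLUSTERING KEY;       -- remove it entirely
```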

21. What is the default type of table created in Snowflake?
Permanent tables are the default table type when creating tables. In addition, Snowflake supports defining tables as either temporary or transient; these types are especially useful for storing data that does not need to be maintained for extended periods of time (i.e. transitory data).
22. How many servers are present in an X-Large warehouse?
Each warehouse size doubles the compute of the previous one (X-Small = 1 server per cluster, Small = 2, Medium = 4, Large = 8), so an X-Large warehouse has 16 servers per cluster.
23. As Snowflake runs on one of the cloud providers (like AWS or Azure) as part of its architecture, why can't the AWS database Amazon Redshift be used instead of the Snowflake warehouse?
24. What view types can be created in Snowflake but not in traditional databases?
Like tables, Snowflake has several types of views: normal views, secure views, and materialized views.
Normal views are similar to the views found in an RDBMS: the output depends on the query, which runs over one or more tables and is re-executed each time the view is read.
Secure views prevent users from possibly being exposed to data from rows of tables that are filtered by the view. With secure views, the view definition and details are visible only to authorized users (i.e. users who are granted the role that owns the view).
A materialized view is a pre-computed dataset derived from a query specification (the SELECT query in its definition) whose output is stored for later use.
Since the underlying data of the given query is pre-computed, querying a materialized view is faster than executing the original query. This performance difference can be significant when a query runs frequently or is complex.
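Creating each view type differs only by a keyword (names and predicates are illustrative):

```sql
CREATE VIEW v_orders AS
  SELECT * FROM orders WHERE status = 'OPEN';

CREATE SECURE VIEW sv_orders AS
  SELECT * FROM orders WHERE region = 'EU';

CREATE MATERIALIZED VIEW mv_daily AS
  SELECT order_date, SUM(amount) AS total
  FROM orders GROUP BY order_date;
```

Note that materialized views require the Enterprise edition or higher.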
25. Is Snowflake a data lake?
A data lake is normally used for landing all kinds of data from various sources, including text, chat logs, files, images, or videos. That data is typically unfiltered and unorganized, and it is difficult to analyze or extract information from until it is processed.
Snowflake supports structured and semi-structured data on scalable cloud storage, providing data-lake-style storage along with analytical use of the data. By choosing Snowflake you get much of the best of both a data lake and a data warehouse.
26. What are the key benefits you have noticed after migrating to Snowflake from a traditional on-premise database?
1. Cloud-agnostic.
2. Decoupled storage and compute.
3. Highly scalable.
4. Query performance.
5. Supports structured and semi-structured data.
6. Native connectors and drivers (Python, Scala, R, JDBC/ODBC).
7. Secure data sharing.
8. Materialized views.
27. When you execute a query, how does Snowflake retrieve the data as compared to traditional databases?
1. When an end user executes a query, it first goes to the cloud services layer, where it is optimized and restructured for better performance; the query compiler in the same layer produces the execution plan.
2. After compilation, Snowflake checks the result cache and metadata cache to see whether the query can be answered from cached results or metadata alone, without a warehouse.
3. Otherwise, a virtual warehouse executes the plan, using micro-partition metadata to prune and scan only the relevant partitions from the underlying storage.
28. Explain the difference between external stages and internal named stages.
A stage denotes where you hold (stage) data for loading into Snowflake.
There are two types of stages in Snowflake:
1. Internal stage:
Snowflake provides a place to hold the data within Snowflake itself; data never leaves Snowflake's environment in this kind of stage.
Internal stages are divided into sub-categories:
1. User - each user automatically gets an allocated stage for data loading.
2. Table - each table automatically gets an allocated stage for data loading.
3. Named - named stages are created manually for data loading.
2. External stage:
In contrast to internal stages, external stages point to locations outside Snowflake, i.e. cloud storage buckets (S3, GCS, Azure Blob).
You must specify an internal stage in the PUT command when uploading files to Snowflake, and the same stage in the COPY INTO <table> command when loading data into a table from the staged files.
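An external stage is created by pointing Snowflake at a bucket (the stage name, URL, and credentials below are placeholders):

```sql
CREATE STAGE my_s3_stage
  URL = 's3://my-bucket/data/'
  CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>');

COPY INTO orders FROM @my_s3_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

In practice, a storage integration is the recommended alternative to embedding credentials.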
29. Explain the difference between User and Table Stages.
User stages:
Each user has a Snowflake stage allocated to them by default for storing files.
This stage is a convenient option if your files will only be accessed by a single
user, but need to be copied into multiple tables.
User stages have the following characteristics and limitations:
- User stages are referenced using @~; e.g. use LIST @~ to list the files in a user stage.
- Unlike named stages, user stages cannot be altered or dropped.
- User stages do not support setting file format options. Instead, you must specify file format and copy options as part of the COPY INTO <table> command.
This option is not appropriate if:
- Multiple users require access to the files.
- The current user does not have INSERT privileges on the tables the data will be loaded into.
Table stage:
Each table has a Snowflake stage allocated to it by default for storing files. This
stage is a convenient option if your files need to be accessible to multiple users
and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
- Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable.
- Unlike named stages, table stages cannot be altered or dropped.
- Table stages do not support transforming data while loading it (i.e. using a query as the source for the COPY command).
Note that a table stage is not a separate database object; rather, it is an implicit
stage tied to the table itself. A table stage has no grantable privileges of its own.
To stage files to a table stage, list the files, query them on the stage, or drop
them, you must be the table owner (have the role with the OWNERSHIP
privilege on the table).
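Referencing the implicit stages looks like this (mytable and the file path are illustrative):

```sql
LIST @~;                             -- files in the current user's stage
PUT file:///tmp/data.csv @~;         -- upload to the user stage

LIST @%mytable;                      -- files in mytable's table stage
PUT file:///tmp/data.csv @%mytable;
COPY INTO mytable FROM @%mytable
  FILE_FORMAT = (TYPE = CSV);
```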
30. What are the constraints which are enforced in Snowflake?
Snowflake defines but does not enforce constraints, with the exception of NOT NULL constraints, which are always enforced.
In traditional databases, many constraints are used to validate data or reject incorrect data, such as primary key, not null, and unique.
Snowflake provides the following constraint functionality:
- Unique, primary, and foreign keys, and NOT NULL columns.
- Named constraints.
- Single-column and multi-column constraints.
- Creation of constraints inline and out-of-line.
- Support for creation, modification, and deletion of constraints.
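Constraints can be declared inline or out-of-line (only NOT NULL is enforced; the table is illustrative):

```sql
CREATE TABLE customers (
  id    INT NOT NULL,                        -- enforced
  email STRING UNIQUE,                       -- declared, not enforced
  CONSTRAINT pk_customers PRIMARY KEY (id)   -- declared, not enforced
);
```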
31. What is unique about Snowflake Vs Other Warehouses.
Please refer to question Q2.
32. How does Snowflake charge the customer?
Snowflake charges on a pay-as-you-use model: compute is billed in credits per second while a warehouse is running (with a 60-second minimum each time it resumes), and storage is billed per terabyte per month based on the compressed size.
34. Do DDL commands cost you?
36. Difference between Snowflake and other databases?
37. How will you calculate the expense of a query running in Snowflake?
38. How to load files in Snowflake?
39. How to share a table in snowflake other than the data marketplace?
40. How does Snowflake store data?
41. If I faced an error while loading data what will happen?
42. What is Snowpipe?
43. What is a materialized view, and what are its drawbacks?
44. How can you implement CDC in Snowflake?
45. What if one of the source tables adds a few more columns? How will you handle it at the Snowflake end?
46. How to load data from JSON to Snowflake?
47. What are secure views and why they are used? How is data privacy done
here?
48. What is materialized view?
49. What are streams?
50. How can you fetch specific data from variant columns?
51. How do you load semi-structured data in Snowflake?
52. How to create a stage in Snowflake?
53. What is clustering?
54. What is automatic clustering?
55. If I want to fetch data on basis of timestamp value is it feasible to cluster the
data on timestamp?
56. How will you read hierarchical JSON data? If it contains an array, how would you read that data?
57. How do you disable Fail-safe?
58. What is the best approach to recover, as quickly as possible, historical data that was accidentally deleted?
59. You have created a warehouse using the command create or replace
warehouse OriginalWH initially_suspended=true; What will be the size of the
warehouse?
Scenario-Based Questions:
1. You have observed that a stored procedure executed daily at 7 AM as part of your batch process is consuming resources, the CPU I/O is showing 90%, and the other jobs being executed are impacted by it. How can you quickly resolve the issue with the stored procedure?
2. Some queries are executing on a warehouse and you have executed an ALTER WAREHOUSE statement to resize it. How will this affect the queries that are already in the execution state?
3. A new business analyst has joined your project. As part of the onboarding process you sent him some queries to generate reports; the query took around 5 minutes to execute, but the same query, when executed by other business analysts, returned the results immediately. What could be the issue?

Interview tips:
1. Make short notes and review them 30 minutes before the interview.
2. "Tell me about yourself": take at least 5 minutes.
3. Set modest expectations about your skill set; don't oversell.
4. Your resume should be clear; don't add skills that you don't know.
5. Prepare well on your roles and responsibilities.
6. Maintain frequently asked questions and queries in a notepad for quick reference.
7. Don't panic; if you don't know the answer, say politely that you never worked on it.
8. Be confident and maintain eye contact.
9. Be ready to explain why you are looking for a job change.
10. Write down the questions after the interview; don't repeat the mistakes.
1. What is Snowflake?
Snowflake is a cloud-based data warehousing service employing a unique architecture
that enhances data handling and analysis. It stands out from traditional data warehouse
solutions with features like simplicity, superior performance, high concurrency, and cost-
effectiveness. Snowflake's architecture, separating computation and storage,
revolutionizes data warehousing by consolidating all data into a single system,
facilitating storage and analysis with cloud tools.
2. What is special about snowflake architecture?
Snowflake's architecture is a hybrid of shared-nothing and shared-disk designs, combining a
central data repository for consistent data storage accessible from all computing nodes and
massively parallel processing for executing queries. Its three-layer structure includes Query
Processing, Data Storage, and Cloud Services, each playing a crucial role in managing and
optimizing data handling, from query execution to metadata management.
3. How to access Snowflake's data warehouse?
Snowflake's data warehouse can be accessed through various methods, including
ODBC and JDBC drivers, Python libraries, a web user interface, and the SnowSQL
command-line client, catering to diverse user preferences and technical requirements.
4. What benefits does a Snowflake database offer?
Snowflake's cloud-native design addresses challenges unresolved by traditional
warehouses. Key benefits are as follows:
- Robust security.
- Consistent high availability.
- Seamless data sharing.
- Impressive speed and performance.
- The capability to handle both structured and unstructured data efficiently.

5. How is the data securely stored in Snowflake?


Snowflake prioritizes data security, adhering to the highest industry encryption and data
protection standards. Key security measures include managed data encryption, TLS for
secure communications, and geographic-specific data storage options, ensuring
comprehensive data safety.
6. How does Snowflake Data Compression work?
Snowflake automatically compresses all stored data using advanced algorithms. The unique
aspect is that customer charges are based on the compressed data size, leading to lower
storage costs and negligible storage overhead for data cloning or sharing.
7. What is Snowflake Caching?
Snowflake enhances SQL query performance through data and result caching on SSDs.
It captures and reuses the outcomes of previous queries, thereby speeding up query
responses and optimizing resource use.
8. What is Time Travel?
Snowflake's Time Travel feature allows users to access and interact with historical data,
enabling data restoration, historical analysis, and backup of critical data points,
enhancing data management and recovery capabilities.
9. What is Fail-safe?
Fail-safe in Snowflake is an advanced feature that adds an extra layer of data protection,
providing an additional seven-day storage period post the time travel period, ensuring
enhanced data security.
10. Explain Snowflake features.
Snowflake's features encompass a range of functionalities, including Cloud Services, a
Compute Layer, Database Storage, easy data sharing, robust security, high-speed
performance, and support for both structured and unstructured data, demonstrating its
versatility and efficiency.
Intermediate level Snowflake interview questions:
11. What are the different editions of Snowflake?
Snowflake offers editions tailored to varying client needs, including the Standard Edition
for beginners, the Enterprise Edition for large-scale operations, the Business-Critical
Edition for sensitive data, and the Virtual Private Snowflake for high-security demands.
12. What is the virtual warehouse?
Snowflake's virtual warehouse is a cluster of compute resources that performs data operations such as loading and queries, providing resources like CPU, memory, and temporary storage for efficient data processing.
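A warehouse is created and sized in SQL (the name and settings are illustrative):

```sql
CREATE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60            -- suspend after 60 s idle to save credits
  AUTO_RESUME = TRUE
  INITIALLY_SUSPENDED = TRUE;
```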

13. Why do we use Fail-safe?
Fail-safe in Snowflake addresses the drawbacks of traditional data backup methods,
offering a more efficient, cost-effective, and time-saving solution for data recovery and
management, minimizing company downtime.
14. What are the data shares in Snowflake?
Snowflake enables safe data sharing across accounts, allowing users to share readable
but non-modifiable database objects. This feature facilitates data collaboration and
access across different management and geographical units.
15. What is Zero-copy cloning?
Zero-copy cloning in Snowflake allows the creation of data object copies without actual
data duplication, enabling real-time data access and manipulation with minimal resource
usage.
16. What is Snowpipe?
Snowpipe is an efficient, user-friendly service for continuous data loading into
Snowflake, optimizing the data loading process through micro-batching and immediate
data availability for analysis.
17. What is Horizontal scaling and Vertical Scaling?
Horizontal scaling in Snowflake enhances concurrency through additional virtual
warehouses, while vertical scaling optimizes workload processing by choosing larger
warehouse sizes, demonstrating its scalability and performance adaptability.
18. Explain about the database storage layer.
Snowflake's database storage layer organizes data in an optimized, columnar format,
ensuring efficient storage and management of data with advanced organization,
compression, and access methodologies.
19. What is the Query processing layer?
Snowflake's query processing layer uses virtual warehouses, each a separate cluster,
ensuring isolated and efficient query execution enhancing performance and reliability.
20. What is the Cloud Service layer?
The Cloud Services layer in Snowflake manages and coordinates various tasks,
including user requests, authentication, and infrastructure management, demonstrating a
seamless integration of services for optimal performance.

Snowflake interview questions for Experienced:


21. What is Materialized view?
Materialized views in Snowflake are pre-computed datasets that enhance query
performance, especially for repetitive and complex queries on large datasets,
contributing to faster and more efficient data analysis.
22. What is Schema?
Schemas in Snowflake organize stored data, grouping database objects like tables and
views, facilitating structured data management and efficient use of disk space.
23. What ETL tools are you using in Snowflake?
Snowflake integrates with various ETL tools, including Etleap, Blendo, Matillion, Hevo
Data, StreamSets, and Apache Airflow, catering to diverse data integration and
transformation needs.
24. What programming languages are supported in Snowflake?
Snowflake supports various programming languages, including Go, C, .NET, Java,
Python, Node.js, etc., offering flexibility and compatibility for different development
environments.
25. What is a clustering key in Snowflake?
A clustering key in Snowflake is a subset of table columns that optimize data co-location
within the table, especially beneficial for large tables with non-sequential DML
operations.
26. What is a Stage?
A Stage in Snowflake is a central area for file uploads, enabling Snowpipe to
systematically load files into the database, with support for various stage types, including
User, Table, and Internal Named Stages.
27. Is Snowflake OLAP or OLTP?
Primarily designed as an OLAP (Online Analytical Processing) system, Snowflake can
also adapt to OLTP scenarios, showcasing its versatility.
28. How can we execute Snowflake Procedure?
A Snowflake procedure involves executing SQL statements, retrieving query results, and
accessing result set metadata, encapsulating complex business logic within a modular
code structure.
29. Does Snowflake maintain stored procedures?
Yes. Stored procedures are created via the CREATE PROCEDURE command and executed with CALL; the procedure body (for example, JavaScript using the Snowflake API) can run database operations such as UPDATE, SELECT, and CREATE.
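A minimal JavaScript procedure as a sketch (the procedure name, table, and statement are illustrative):

```sql
CREATE OR REPLACE PROCEDURE purge_old_rows()
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
  // Delete rows older than 90 days; the table name is illustrative
  var stmt = snowflake.createStatement({
    sqlText: "DELETE FROM events WHERE event_date < DATEADD(day, -90, CURRENT_DATE())"
  });
  stmt.execute();
  return "done";
$$;

CALL purge_old_rows();
```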
30. What is a Columnar database?

A columnar database, like Snowflake, stores data in columns rather than rows,
enhancing analytical query processing and overall database performance, marking a
significant advancement in BI analysis.
