
Python Pandas to_parquet() Method



Parquet is a binary, columnar serialization format for DataFrames. It is designed for efficient reading and writing of DataFrames and makes it easy to share data across data-analysis languages. It supports a range of compression methods that reduce file size while preserving fast read performance.

The Pandas DataFrame.to_parquet() method allows you to save a DataFrame in the Parquet file format, enabling easy data sharing and storage. The format preserves Pandas data types, including specialized types such as datetime with timezone information.

Before using the DataFrame.to_parquet() method, you need to install either the 'pyarrow' or 'fastparquet' library, depending on the selected engine. These libraries are optional dependencies of Pandas and can be installed with either of the following commands −

pip install pyarrow
# or
pip install fastparquet

Syntax

Following is the syntax of the Python Pandas to_parquet() method −

DataFrame.to_parquet(path=None, *, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)

Parameters

The Python Pandas DataFrame.to_parquet() method accepts the following parameters −

  • path: This parameter accepts a string, path object, or file-like object, representing the file path where the Parquet file will be saved. If None, the result is returned as bytes.

  • engine: Specifies the Parquet library to use. Available options are 'auto', 'pyarrow', and 'fastparquet'. If 'auto' is set (the default), the io.parquet.engine option is used, which by default tries 'pyarrow' first and falls back to 'fastparquet'.

  • compression: Specifies the name of the compression codec to use, such as 'snappy' (the default), 'gzip', or 'brotli'. If set to None, no compression will be applied.

  • index: Whether to include the DataFrame's index in the output file. If None (the default), the behavior depends on the engine; setting it to False omits the index entirely.

  • partition_cols: Takes the column names by which to partition the dataset.

  • storage_options: Additional options for connecting to certain storage back-ends (e.g., AWS S3, Google Cloud Storage).

  • **kwargs: Additional keyword arguments passed to the Parquet library.
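The partition_cols parameter deserves a closer look: instead of one file, it writes a directory tree with one subdirectory per unique value of each partition column. The following sketch illustrates this; the directory name "sales_partitioned" and the column values are illustrative choices, not part of the API.

import pandas as pd

# Sample data with a low-cardinality column to partition on
df = pd.DataFrame({
   "Region": ["East", "West", "East", "West"],
   "Sales": [100, 200, 150, 250],
})

# partition_cols writes a directory (not a single file) with
# subdirectories such as Region=East/ and Region=West/
df.to_parquet("sales_partitioned", partition_cols=["Region"])

# read_parquet on the directory reassembles the full dataset
output = pd.read_parquet("sales_partitioned")
print(output)

Note that the partition column typically comes back as a categorical dtype, and row order is not guaranteed to match the original DataFrame.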

Return Value

The Pandas DataFrame.to_parquet() method returns bytes if the path parameter is None. Otherwise, it returns None and saves the DataFrame as a Parquet file at the specified path.
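A short sketch of the bytes-returning behavior, which is handy when you want the serialized data without touching the filesystem − the bytes can be round-tripped through an io.BytesIO buffer:

import io

import pandas as pd

df = pd.DataFrame({"Col_1": [1, 2], "Col_2": [3, 4]})

# With path=None, to_parquet() returns the serialized bytes
data = df.to_parquet(path=None)
print(type(data))   # <class 'bytes'>

# The bytes can be read back through an in-memory buffer
restored = pd.read_parquet(io.BytesIO(data))
print(restored.equals(df))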

Example: Saving a DataFrame to a Parquet File

Here is a basic example demonstrating how to save a Pandas DataFrame to a Parquet file using the DataFrame.to_parquet() method.

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame as a parquet file
df.to_parquet("df_parquet_file.parquet")

print("\nDataFrame is successfully saved as a parquet file.")

When we run the above program, it produces the following result −

Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9
DataFrame is successfully saved as a parquet file.

If you visit the folder where the Parquet file was saved, you can observe the generated file.
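To confirm the file round-trips correctly, you can read it back with pd.read_parquet() − a quick verification sketch using the same data as above:

import pandas as pd

# Recreate the DataFrame and write it as before
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
df.to_parquet("df_parquet_file.parquet")

# Read the file back and compare with the original
restored = pd.read_parquet("df_parquet_file.parquet")
print(restored.equals(df))  # True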

Example: Saving Parquet file with Compression

The following example shows how to use the to_parquet() method to save a Pandas DataFrame as a compressed Parquet file.

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame to a parquet file with compression
df.to_parquet('compressed_data.parquet.gzip', compression='gzip')
print("\nDataFrame is saved as parquet format with compression..")

Following is an output of the above code −

Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9
DataFrame is saved in Parquet format with compression.
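To see what each codec buys you, a small sketch can serialize the same DataFrame in memory with several compression settings and compare the resulting sizes. The exact byte counts vary by engine version and data, so no specific numbers are claimed here:

import pandas as pd

df = pd.DataFrame({"Col_1": range(1000), "Col_2": range(1000, 2000)})

# Serialize in memory with different codecs and compare sizes
sizes = {}
for codec in ["snappy", "gzip", None]:
   data = df.to_parquet(path=None, compression=codec)
   sizes[codec] = len(data)
   print(codec, len(data), "bytes")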

Example: Writing Parquet file Without Index

When writing a Pandas DataFrame to a Parquet file, you can omit the index by setting the index parameter of the DataFrame.to_parquet() method to False.

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": [1, 2, 3, 4, 5],
"Col_2": ["a", "b", "c", "d", "e"]}, index=['r1', 'r2', 'r3', 'r4', 'r5'])

print("Original DataFrame:")
print(df)

# Save without DataFrame index
df.to_parquet('df_parquet_no_index.parquet', index=False)

# Read the 'df_parquet_no_index' file
output = pd.read_parquet('df_parquet_no_index.parquet')
print('Saved DataFrame without index:')
print(output)

Following is an output of the above code −

Original DataFrame:
    Col_1 Col_2
r1      1     a
r2      2     b
r3      3     c
r4      4     d
r5      5     e
Saved DataFrame without index:
   Col_1 Col_2
0      1     a
1      2     b
2      3     c
3      4     d
4      5     e

Example: Save Pandas DataFrame to In-Memory Parquet

This example saves a Pandas DataFrame object into an in-memory Parquet file using the DataFrame.to_parquet() method with an io.BytesIO buffer.

import pandas as pd
import io

# Create a Pandas DataFrame 
df = pd.DataFrame(data={'Col_1': [1, 2], 'Col_2': [3.0, 4.0]})

# Display the Input DataFrame
print("Original DataFrame:")
print(df)

# Save the DataFrame as In-Memory parquet
buf = io.BytesIO()
df.to_parquet(buf)

output = pd.read_parquet(buf)
print('Saved DataFrame as an in-memory Parquet file:')
print(output)

On executing the above code, we get the following output −

Original DataFrame:
   Col_1  Col_2
0      1    3.0
1      2    4.0
Saved DataFrame as an in-memory Parquet file:
   Col_1  Col_2
0      1    3.0
1      2    4.0