
Python Pandas to_parquet() Method
Parquet is a partitioned, binary, columnar serialization format for DataFrames. It is designed for efficient reading and writing, supports several compression methods to reduce file size, and makes it easy to share data across data-analysis languages.
The Pandas DataFrame.to_parquet() method saves a DataFrame in the Parquet file format, enabling easy data sharing and storage. The format supports all Pandas data types, including specialized types such as timezone-aware datetimes.
Before using the DataFrame.to_parquet() method, you need to install either the 'pyarrow' or 'fastparquet' library, depending on the engine you select. These libraries are optional Python dependencies and can be installed using the following commands −
pip install pyarrow # or pip install fastparquet
Syntax
Following is the syntax of the Python Pandas to_parquet() method −
DataFrame.to_parquet(path=None, *, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)
Parameters
The Python Pandas DataFrame.to_parquet() method accepts the following parameters −
- path: A string, path object, or file-like object giving the location where the Parquet file will be saved. If None, the result is returned as bytes.
- engine: The Parquet library to use. Available options are 'auto', 'pyarrow', and 'fastparquet'. With 'auto', the io.parquet.engine option is used.
- compression: The name of the compression to use. If set to None, no compression is applied.
- index: Whether to include the DataFrame's index in the output file. If False, the index is not written.
- partition_cols: Column names by which to partition the dataset; the columns are partitioned in the order they are given.
- storage_options: Additional options for connecting to certain storage back-ends (e.g., AWS S3, Google Cloud Storage).
- **kwargs: Additional keyword arguments passed to the Parquet library.
Return Value
The Pandas DataFrame.to_parquet() method returns bytes if the path parameter is None. Otherwise, it returns None and saves the DataFrame as a Parquet file at the specified path.
Example: Saving a DataFrame to a Parquet File
Here is a basic example demonstrating how to save a Pandas DataFrame to a Parquet file using the DataFrame.to_parquet() method.
import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame as a parquet file
df.to_parquet("df_parquet_file.parquet")
print("\nDataFrame is successfully saved as a parquet file.")
When we run the above program, it produces the following result −
Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9
If you visit the folder where the Parquet file was saved, you can observe the generated file.
Example: Saving a Parquet File with Compression
The following example shows how to use the to_parquet() method to save a Pandas DataFrame as a Parquet file with compression.
import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame to a parquet file with gzip compression
df.to_parquet('compressed_data.parquet.gzip', compression='gzip')
print("\nDataFrame is saved in parquet format with compression.")
Following is the output of the above code −
Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9
Example: Writing a Parquet File Without the Index
When writing a Pandas DataFrame to the Parquet file format, you can omit the index by setting the index parameter of the DataFrame.to_parquet() method to False.
import pandas as pd

# Create a DataFrame with a custom index
df = pd.DataFrame({"Col_1": [1, 2, 3, 4, 5],
                   "Col_2": ["a", "b", "c", "d", "e"]},
                  index=['r1', 'r2', 'r3', 'r4', 'r5'])
print("Original DataFrame:")
print(df)

# Save without the DataFrame index
df.to_parquet('df_parquet_no_index.parquet', index=False)

# Read the 'df_parquet_no_index' file back
output = pd.read_parquet('df_parquet_no_index.parquet')
print('Saved DataFrame without index:')
print(output)
Following is the output of the above code −
Original DataFrame:
    Col_1 Col_2
r1      1     a
r2      2     b
r3      3     c
r4      4     d
r5      5     e
Saved DataFrame without index:
   Col_1 Col_2
0      1     a
1      2     b
2      3     c
3      4     d
4      5     e
Example: Saving a Pandas DataFrame to an In-Memory Parquet File
This example saves a Pandas DataFrame into an in-memory Parquet buffer using the DataFrame.to_parquet() method.
import pandas as pd
import io

# Create a Pandas DataFrame
df = pd.DataFrame(data={'Col_1': [1, 2], 'Col_2': [3.0, 4.0]})

# Display the input DataFrame
print("Original DataFrame:")
print(df)

# Save the DataFrame to an in-memory parquet buffer
buf = io.BytesIO()
df.to_parquet(buf)

# Read it back from the buffer
output = pd.read_parquet(buf)
print('Saved DataFrame as an in-memory Parquet file:')
print(output)
While executing the above code, we get the following output −
Original DataFrame:
   Col_1  Col_2
0      1    3.0
1      2    4.0
Saved DataFrame as an in-memory Parquet file:
   Col_1  Col_2
0      1    3.0
1      2    4.0