pandas: powerful Python data analysis toolkit
Release 0.20.1
1 What's New 3
1.1 v0.20.1 (May 5, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.1.1 agg API for DataFrame/Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.1.2 dtype keyword for data IO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.1.3 .to_datetime() has gained an origin parameter . . . . . . . . . . . . . . . 7
1.1.1.4 Groupby Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.1.5 Better support for compressed URLs in read_csv . . . . . . . . . . . . . . . . . 9
1.1.1.6 Pickle file I/O now supports compression . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.1.7 UInt64 Support Improved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.1.8 GroupBy on Categoricals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.1.9 Table Schema Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.1.1.10 SciPy sparse matrix from/to SparseDataFrame . . . . . . . . . . . . . . . . . . . . 12
1.1.1.11 Excel output for styled DataFrames . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.1.1.12 IntervalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.1.13 Other Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.1.2 Backwards incompatible API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.1.2.1 Possible incompatibility for HDF5 formats created with pandas < 0.13.0 . . . . . . 18
1.1.2.2 Map on Index types now return other Index types . . . . . . . . . . . . . . . . . . 19
1.1.2.3 Accessing datetime fields of Index now return Index . . . . . . . . . . . . . . . . . 20
1.1.2.4 pd.unique will now be consistent with extension types . . . . . . . . . . . . . . . . 20
1.1.2.5 S3 File Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.1.2.6 Partial String Indexing Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.1.2.7 Concat of different float dtypes will not automatically upcast . . . . . . . . . . . . 23
1.1.2.8 Pandas Google BigQuery support has moved . . . . . . . . . . . . . . . . . . . . . 23
1.1.2.9 Memory Usage for Index is more Accurate . . . . . . . . . . . . . . . . . . . . . . 23
1.1.2.10 DataFrame.sort_index changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.1.2.11 Groupby Describe Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.1.2.12 Window Binary Corr/Cov operations return a MultiIndex DataFrame . . . . . . . . 27
1.1.2.13 HDFStore where string comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.1.2.14 Index.intersection and inner join now preserve the order of the left Index . . . . . . 28
1.1.2.15 Pivot Table always returns a DataFrame . . . . . . . . . . . . . . . . . . . . . . . 29
1.1.2.16 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.1.3 Reorganization of the library: Privacy Changes . . . . . . . . . . . . . . . . . . . . . . . . 31
1.1.3.1 Modules Privacy Has Changed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.1.3.2 pandas.errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.1.3.3 pandas.testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.1.3.4 pandas.plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.1.3.5 Other Development Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.1.4 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.1.4.1 Deprecate .ix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.1.4.2 Deprecate Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.1.4.3 Deprecate groupby.agg() with a dictionary when renaming . . . . . . . . . . . . . 35
1.1.4.4 Deprecate .plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.1.4.5 Other Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.1.5 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . . . . . 38
1.1.6 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.1.7 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.1.7.1 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.1.7.2 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.1.7.3 I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.1.7.4 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.1.7.5 Groupby/Resample/Rolling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.1.7.6 Sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.1.7.7 Reshaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.1.7.8 Numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.1.7.9 Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.2 v0.19.2 (December 24, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.2.1 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.2.2 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1.2.3 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1.3 v0.19.1 (November 3, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.3.1 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.3.2 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.4 v0.19.0 (October 2, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.4.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.4.1.1 merge_asof for asof-style time-series joining . . . . . . . . . . . . . . . . . . . 49
1.4.1.2 .rolling() is now time-series aware . . . . . . . . . . . . . . . . . . . . . . . 52
1.4.1.3 read_csv has improved support for duplicate column names . . . . . . . . . . . 54
1.4.1.4 read_csv supports parsing Categorical directly . . . . . . . . . . . . . . . . 54
1.4.1.5 Categorical Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.4.1.6 Semi-Month Offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.4.1.7 New Index methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
1.4.1.8 Google BigQuery Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.4.1.9 Fine-grained numpy errstate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.4.1.10 get_dummies now returns integer dtypes . . . . . . . . . . . . . . . . . . . . . 58
1.4.1.11 Downcast values to smallest possible dtype in to_numeric . . . . . . . . . . . . 59
1.4.1.12 pandas development API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.4.1.13 Other enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.4.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.4.2.1 Series.tolist() will now return Python types . . . . . . . . . . . . . . . . . 63
1.4.2.2 Series operators for different indexes . . . . . . . . . . . . . . . . . . . . . . . 63
1.4.2.3 Series type promotion on assignment . . . . . . . . . . . . . . . . . . . . . . . 66
1.4.2.4 .to_datetime() changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
1.4.2.5 Merging changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
1.4.2.6 .describe() changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.4.2.7 Period changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.4.2.8 Index + / - no longer used for set operations . . . . . . . . . . . . . . . . . . . . . 72
1.4.2.9 Index.difference and .symmetric_difference changes . . . . . . . . 72
1.4.2.10 Index.unique consistently returns Index . . . . . . . . . . . . . . . . . . . . 73
1.4.2.11 MultiIndex constructors, groupby and set_index preserve categorical dtypes 73
1.4.2.12 read_csv will progressively enumerate chunks . . . . . . . . . . . . . . . . . . 75
1.4.2.13 Sparse Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
1.4.2.14 Indexer dtype changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
1.4.2.15 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1.4.3 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
1.4.4 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . . . . . 81
1.4.5 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
1.4.6 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
1.5 v0.18.1 (May 3, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
1.5.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
1.5.1.1 Custom Business Hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
1.5.1.2 .groupby(..) syntax with window and resample operations . . . . . . . . . . . 89
1.5.1.3 Method chaining improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
1.5.1.4 Partial string indexing on DatetimeIndex when part of a MultiIndex . . . . 93
1.5.1.5 Assembling Datetimes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
1.5.1.6 Other Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
1.5.2 Sparse changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
1.5.3 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
1.5.3.1 .groupby(..).nth() changes . . . . . . . . . . . . . . . . . . . . . . . . . . 97
1.5.3.2 numpy function compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
1.5.3.3 Using .apply on groupby resampling . . . . . . . . . . . . . . . . . . . . . . . . 99
1.5.3.4 Changes in read_csv exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . 100
1.5.3.5 to_datetime error changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
1.5.3.6 Other API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
1.5.3.7 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
1.5.4 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
1.5.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
1.6 v0.18.0 (March 13, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
1.6.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
1.6.1.1 Window functions are now methods . . . . . . . . . . . . . . . . . . . . . . . . . 107
1.6.1.2 Changes to rename . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
1.6.1.3 Range Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
1.6.1.4 Changes to str.extract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
1.6.1.5 Addition of str.extractall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1.6.1.6 Changes to str.cat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
1.6.1.7 Datetimelike rounding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
1.6.1.8 Formatting of Integers in FloatIndex . . . . . . . . . . . . . . . . . . . . . . . . . 114
1.6.1.9 Changes to dtype assignment behaviors . . . . . . . . . . . . . . . . . . . . . . . . 114
1.6.1.10 to_xarray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
1.6.1.11 Latex Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.6.1.12 pd.read_sas() changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.6.1.13 Other enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.6.2 Backwards incompatible API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
1.6.2.1 NaT and Timedelta operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
1.6.2.2 Changes to msgpack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
1.6.2.3 Signature change for .rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
1.6.2.4 Bug in QuarterBegin with n=0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
1.6.2.5 Resample API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.6.2.6 Changes to eval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
1.6.2.7 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
1.6.2.8 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
1.6.2.9 Removal of deprecated float indexers . . . . . . . . . . . . . . . . . . . . . . . . . 129
1.6.2.10 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . 131
1.6.3 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
1.6.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
1.7 v0.17.1 (November 21, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
1.7.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
1.7.1.1 Conditional HTML Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
1.7.2 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
1.7.3 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
1.7.3.1 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
1.7.4 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
1.7.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
1.8 v0.17.0 (October 9, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
1.8.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
1.8.1.1 Datetime with TZ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
1.8.1.2 Releasing the GIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
1.8.1.3 Plot submethods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
1.8.1.4 Additional methods for dt accessor . . . . . . . . . . . . . . . . . . . . . . . . . 144
1.8.1.5 Period Frequency Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
1.8.1.6 Support for SAS XPORT files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
1.8.1.7 Support for Math Functions in .eval() . . . . . . . . . . . . . . . . . . . . . . . . . 147
1.8.1.8 Changes to Excel with MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . 147
1.8.1.9 Google BigQuery Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
1.8.1.10 Display Alignment with Unicode East Asian Width . . . . . . . . . . . . . . . . . 149
1.8.1.11 Other enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
1.8.2 Backwards incompatible API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
1.8.2.1 Changes to sorting API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
1.8.2.2 Changes to to_datetime and to_timedelta . . . . . . . . . . . . . . . . . . . . . . . 154
1.8.2.3 Changes to Index Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
1.8.2.4 Changes to Boolean Comparisons vs. None . . . . . . . . . . . . . . . . . . . . . 156
1.8.2.5 HDFStore dropna behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
1.8.2.6 Changes to display.precision option . . . . . . . . . . . . . . . . . . . . . 158
1.8.2.7 Changes to Categorical.unique . . . . . . . . . . . . . . . . . . . . . . . . 158
1.8.2.8 Changes to bool passed as header in Parsers . . . . . . . . . . . . . . . . . . . 159
1.8.2.9 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
1.8.2.10 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
1.8.2.11 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . 161
1.8.3 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
1.8.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
1.9 v0.16.2 (June 12, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
1.9.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
1.9.1.1 Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
1.9.1.2 Other Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
1.9.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
1.9.3 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
1.9.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
1.10 v0.16.1 (May 11, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
1.10.1 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
1.10.1.1 CategoricalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
1.10.1.2 Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
1.10.1.3 String Methods Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
1.10.1.4 Other Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
1.10.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
1.10.2.1 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
1.10.3 Index Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
1.10.4 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
1.10.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
1.11 v0.16.0 (March 22, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
1.11.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
1.11.1.1 DataFrame Assign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
1.11.1.2 Interaction with scipy.sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
1.11.1.3 String Methods Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
1.11.1.4 Other enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
1.11.2 Backwards incompatible API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
1.11.2.1 Changes in Timedelta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
1.11.2.2 Indexing Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
1.11.2.3 Categorical Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
1.11.2.4 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
1.11.2.5 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
1.11.2.6 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . 193
1.11.3 Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
1.11.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
1.12 v0.15.2 (December 12, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
1.12.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
1.12.2 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
1.12.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
1.12.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
1.13 v0.15.1 (November 9, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
1.13.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
1.13.2 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
1.13.3 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
1.14 v0.15.0 (October 18, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
1.14.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
1.14.1.1 Categoricals in Series/DataFrame . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
1.14.1.2 TimedeltaIndex/Scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
1.14.1.3 Memory Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
1.14.1.4 .dt accessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
1.14.1.5 Timezone handling improvements . . . . . . . . . . . . . . . . . . . . . . . . . . 217
1.14.1.6 Rolling/Expanding Moments improvements . . . . . . . . . . . . . . . . . . . . . 218
1.14.1.7 Improvements in the sql io module . . . . . . . . . . . . . . . . . . . . . . . . . . 222
1.14.2 Backwards incompatible API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
1.14.2.1 Breaking changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
1.14.2.2 Internal Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
1.14.2.3 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
1.14.2.4 Removal of prior version deprecations/changes . . . . . . . . . . . . . . . . . . . . 229
1.14.3 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
1.14.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
1.14.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
1.15 v0.14.1 (July 11, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
1.15.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
1.15.2 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
1.15.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
1.15.4 Experimental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
1.15.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
1.16 v0.14.0 (May 31, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
1.16.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
1.16.2 Display Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
1.16.3 Text Parsing API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
1.16.4 Groupby API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
1.16.5 SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
1.16.6 MultiIndexing Using Slicers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
1.16.7 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
1.16.8 Prior Version Deprecations/Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
1.16.9 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
1.16.10 Known Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
1.16.11 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
1.16.12 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
1.16.13 Experimental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
1.16.14 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
1.17 v0.13.1 (February 3, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
1.17.1 Output Formatting Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
1.17.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
1.17.3 Prior Version Deprecations/Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
1.17.4 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
1.17.5 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
1.17.6 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
1.17.7 Experimental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
1.17.8 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
1.18 v0.13.0 (January 3, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
1.18.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
1.18.2 Prior Version Deprecations/Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
1.18.3 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
1.18.4 Indexing API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
1.18.5 Float64Index API Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
1.18.6 HDFStore API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
1.18.7 DataFrame repr Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
1.18.8 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
1.18.9 Experimental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
1.18.10 Internal Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
1.18.11 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
1.19 v0.12.0 (July 24, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
1.19.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
1.19.2 I/O Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
1.19.3 Other Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
1.19.4 Experimental Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
1.19.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
1.20 v0.11.0 (April 22, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
1.20.1 Selection Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
1.20.2 Selection Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
1.20.3 Dtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
1.20.4 Dtype Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
1.20.5 Dtype Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
1.20.6 Datetimes Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
1.20.7 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
1.20.8 Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
1.21 v0.10.1 (January 22, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
1.21.1 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
1.21.2 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
1.21.3 HDFStore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
1.22 v0.10.0 (December 17, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
1.22.1 File parsing new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
1.22.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
1.22.3 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
1.22.4 Wide DataFrame Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
1.22.5 Updated PyTables Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
1.22.6 N Dimensional Panels (Experimental) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
1.23 v0.9.1 (November 14, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
1.23.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
1.23.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
1.24 v0.9.0 (October 7, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
1.24.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
1.24.2 API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
1.25 v0.8.1 (July 22, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
1.25.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
1.25.2 Performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
1.26 v0.8.0 (June 29, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
1.26.1 Support for non-unique indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
1.26.2 NumPy datetime64 dtype and 1.6 dependency . . . . . . . . . . . . . . . . . . . . . . . . . 346
1.26.3 Time series changes and improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
1.26.4 Other new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
1.26.5 New plotting methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
1.26.6 Other API changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
1.26.7 Potential porting issues for pandas <= 0.7.3 users . . . . . . . . . . . . . . . . . . . . . . . 349
1.27 v0.7.3 (April 12, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
1.27.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
1.27.2 NA Boolean Comparison API Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
1.27.3 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
1.28 v0.7.2 (March 16, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
1.28.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
1.28.2 Performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
1.29 v0.7.1 (February 29, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
1.29.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
1.29.2 Performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
1.30 v0.7.0 (February 9, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
1.30.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
1.30.2 API Changes to integer indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
1.30.3 API tweaks regarding label-based slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
1.30.4 Changes to Series [] operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
1.30.5 Other API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
1.30.6 Performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
1.31 v0.6.1 (December 13, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
1.31.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
1.31.2 Performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
1.32 v0.6.0 (November 25, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
1.32.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
1.32.2 Performance Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
1.33 v0.5.0 (October 24, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
1.33.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
1.33.2 Performance Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
1.34 v0.4.3 through v0.4.1 (September 25 - October 9, 2011) . . . . . . . . . . . . . . . . . . . . 364
1.34.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
1.34.2 Performance Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
2 Installation 367
2.1 Python version support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
2.2 Installing pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
2.2.1 Installing pandas with Anaconda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
2.2.2 Installing pandas with Miniconda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
2.2.3 Installing from PyPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
2.2.4 Installing using your Linux distribution's package manager . . . . . . . . . . . . . . . . 368
2.2.5 Installing from source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
2.2.6 Running the test suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
2.3 Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
2.3.1 Recommended Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
2.3.2 Optional Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
5.3 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
5.3.1 Getting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
5.3.2 Selection by Label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
5.3.3 Selection by Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
5.3.4 Boolean Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
5.3.5 Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
5.4 Missing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
5.5 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
5.5.1 Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
5.5.2 Apply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
5.5.3 Histogramming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
5.5.4 String Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
5.6 Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
5.6.1 Concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
5.6.2 Join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
5.6.3 Append . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
5.7 Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
5.8 Reshaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
5.8.1 Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
5.8.2 Pivot Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
5.9 Time Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
5.10 Categoricals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
5.11 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
5.12 Getting Data In/Out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
5.12.1 CSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
5.12.2 HDF5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
5.12.3 Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
5.13 Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
6 Tutorials 419
6.1 Internal Guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
6.2 pandas Cookbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
6.3 Lessons for New pandas Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
6.4 Practical data analysis with Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
6.5 Exercises for New Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
6.6 Modern Pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
6.7 Excel charts with pandas, vincent and xlsxwriter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
6.8 Various Tutorials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
7 Cookbook 423
7.1 Idioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
7.1.1 if-then... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
7.1.2 Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
7.1.3 Building Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
7.2 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
7.2.1 DataFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
7.2.2 Panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
7.2.3 New Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
7.3 MultiIndexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
7.3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
7.3.2 Slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
7.3.3 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
7.3.4 Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
7.3.5 panelnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
7.4 Missing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
7.4.1 Replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
7.5 Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
7.5.1 Expanding Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
7.5.2 Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
7.5.3 Pivot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
7.5.4 Apply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
7.6 Timeseries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7.6.1 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7.7 Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7.8 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
7.9 Data In/Out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
7.9.1 CSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
7.9.1.1 Reading multiple files to create a single DataFrame . . . . . . . . . . . . . . . . . 449
7.9.1.2 Parsing date components in multi-columns . . . . . . . . . . . . . . . . . . . . . . 449
7.9.1.3 Skip row between header and data . . . . . . . . . . . . . . . . . . . . . . . . . . 450
7.9.2 SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.9.3 Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.9.4 HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.9.5 HDFStore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.9.6 Binary Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
7.10 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
7.11 Timedeltas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
7.12 Aliasing Axis Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.13 Creating Example Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
8.3.6 Indexing / Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.3.7 Squeezing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.3.8 Conversion to DataFrame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.4 Deprecate Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.5 Panel4D and PanelND (Deprecated) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.11.4 smallest / largest values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
9.11.5 Sorting by a multi-index column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
9.12 Copying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
9.13 dtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
9.13.1 defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
9.13.2 upcasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
9.13.3 astype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
9.13.4 object conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
9.13.5 gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
9.14 Selecting columns based on dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
12.16 Duplicate Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
12.17 Dictionary-like get() method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
12.18 The select() Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
12.19 The lookup() Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
12.20 Index objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
12.20.1 Setting metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
12.20.2 Set operations on Index objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
12.20.3 Missing values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
12.21 Set / Reset Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
12.21.1 Set an index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
12.21.2 Reset the index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
12.21.3 Adding an ad hoc index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
12.22 Returning a view versus a copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
12.22.1 Why does assignment fail when using chained indexing? . . . . . . . . . . . . . . . . . . . 629
12.22.2 Evaluation order matters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
14.2.6 Centering Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
14.2.7 Binary Window Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
14.2.8 Computing rolling pairwise covariances and correlations . . . . . . . . . . . . . . . . . . . 679
14.3 Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
14.3.1 Applying multiple functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
14.3.2 Applying different functions to DataFrame columns . . . . . . . . . . . . . . . . . . . . . . 683
14.4 Expanding Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
14.4.1 Method Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
14.5 Exponentially Weighted Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
16.9.7 Enumerate group items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
16.9.8 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
16.10 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
16.10.1 Regrouping by factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
16.10.2 Groupby by Indexer to resample data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
16.10.3 Returning a Series to propagate names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
19.3.4 Using the Origin Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
19.4 Generating Ranges of Timestamps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
19.5 Timestamp limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
19.6 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
19.6.1 Partial String Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
19.6.2 Slice vs. exact match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
19.6.3 Exact Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
19.6.4 Truncating & Fancy Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
19.7 Time/Date Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
19.8 DateOffset objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
19.8.1 Parametric offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
19.8.2 Using offsets with Series / DatetimeIndex . . . . . . . . . . . . . . . . . . . . . . . 823
19.8.3 Custom Business Days . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
19.8.4 Business Hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
19.8.5 Custom Business Hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
19.8.6 Offset Aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
19.8.7 Combining Aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
19.8.8 Anchored Offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
19.8.9 Anchored Offset Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
19.8.10 Holidays / Holiday Calendars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
19.9 Time series-related instance methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
19.9.1 Shifting / lagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
19.9.2 Frequency conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
19.9.3 Filling forward / backward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
19.9.4 Converting to Python datetimes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
19.10 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
19.10.1 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
19.10.2 Up Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
19.10.3 Sparse Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
19.10.4 Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
19.11 Time Span Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
19.11.1 Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
19.11.2 PeriodIndex and period_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
19.11.3 Period Dtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
19.11.4 PeriodIndex Partial String Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
19.11.5 Frequency Conversion and Resampling with PeriodIndex . . . . . . . . . . . . . . . . . . . 848
19.12 Converting between Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
19.13 Representing out-of-bounds spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
19.14 Time Zone Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
19.14.1 Working with Time Zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
19.14.2 Ambiguous Times when Localizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
19.14.3 TZ aware Dtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
20.6.3 Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
20.7 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
22 Visualization 905
22.1 Basic Plotting: plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
22.2 Other Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
22.2.1 Bar plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
22.2.2 Histograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
22.2.3 Box Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
22.2.4 Area Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
22.2.5 Scatter Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
22.2.6 Hexagonal Bin Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
22.2.7 Pie plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 934
22.3 Plotting with Missing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
22.4 Plotting Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
22.4.1 Scatter Matrix Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
22.4.2 Density Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
22.4.3 Andrews Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
22.4.4 Parallel Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
22.4.5 Lag Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
22.4.6 Autocorrelation Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
22.4.7 Bootstrap Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
22.4.8 RadViz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
22.5 Plot Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
22.5.1 Controlling the Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
22.5.2 Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
22.5.3 Plotting on a Secondary Y-axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
22.5.4 Suppressing Tick Resolution Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
22.5.5 Automatic Date Tick Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
22.5.6 Subplots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
22.5.7 Using Layout and Targeting Multiple Axes . . . . . . . . . . . . . . . . . . . . . . . . . . 957
22.5.8 Plotting With Error Bars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
22.5.9 Plotting Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
22.5.10 Colormaps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
22.6 Plotting directly with matplotlib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
22.7 Trellis plotting interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
23 Styling 973
23.1 Building Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
23.1.1 Building Styles Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
23.2 Finer Control: Slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
23.3 Finer Control: Display Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
23.4 Builtin Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
23.4.1 Bar charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
23.5 Sharing Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
23.6 Other Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
23.6.1 Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
23.6.2 Captions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
23.6.3 Table Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
23.6.4 CSS Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
23.6.5 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
23.6.6 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
23.7 Fun stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
23.8 Export to Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
23.9 Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
23.9.1 Subclassing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
24.1.6.2 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
24.1.7 Dealing with Unicode Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
24.1.8 Index columns and trailing delimiters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
24.1.9 Date Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
24.1.9.1 Specifying Date Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
24.1.9.2 Date Parsing Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
24.1.9.3 Inferring Datetime Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
24.1.9.4 International Date Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
24.1.10 Specifying method for floating-point conversion . . . . . . . . . . . . . . . . . . . . . . . . 1005
24.1.11 Thousand Separators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
24.1.12 NA Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
24.1.13 Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
24.1.14 Returning Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
24.1.15 Boolean values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
24.1.16 Handling bad lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
24.1.17 Dialect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
24.1.18 Quoting and Escape Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
24.1.19 Files with Fixed Width Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
24.1.20 Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
24.1.20.1 Files with an implicit index column . . . . . . . . . . . . . . . . . . . . . . . . . 1011
24.1.20.2 Reading an index with a MultiIndex . . . . . . . . . . . . . . . . . . . . . . . 1012
24.1.20.3 Reading columns with a MultiIndex . . . . . . . . . . . . . . . . . . . . . . . 1013
24.1.21 Automatically sniffing the delimiter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
24.1.22 Reading multiple files to create a single DataFrame . . . . . . . . . . . . . . . . . . . . . . 1015
24.1.23 Iterating through files chunk by chunk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
24.1.24 Specifying the parser engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
24.1.25 Reading remote files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
24.1.26 Writing out Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
24.1.26.1 Writing to CSV format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
24.1.26.2 Writing a formatted string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
24.2 JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
24.2.1 Writing JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
24.2.1.1 Orient Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
24.2.1.2 Date Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
24.2.1.3 Fallback Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
24.2.2 Reading JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
24.2.2.1 Data Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
24.2.2.2 The Numpy Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
24.2.3 Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
24.2.4 Line delimited json . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
24.2.5 Table Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
24.3 HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
24.3.1 Reading HTML Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
24.3.2 Writing to HTML files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
24.3.3 HTML Table Parsing Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
24.4 Excel files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
24.4.1 Reading Excel Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
24.4.1.1 ExcelFile class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
24.4.1.2 Specifying Sheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
24.4.1.3 Reading a MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
24.4.1.4 Parsing Specific Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
24.4.1.5 Parsing Dates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
24.4.1.6 Cell Converters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
24.4.1.7 dtype Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
24.4.2 Writing Excel Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
24.4.2.1 Writing Excel Files to Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
24.4.2.2 Writing Excel Files to Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
24.4.3 Excel writer engines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
24.4.4 Style and Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
24.5 Clipboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
24.6 Pickling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
24.6.1 Compressed pickle files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
24.7 msgpack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
24.7.1 Read/Write API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
24.8 HDF5 (PyTables) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
24.8.1 Read/Write API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
24.8.2 Fixed Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
24.8.3 Table Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
24.8.4 Hierarchical Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
24.8.5 Storing Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
24.8.5.1 Storing Mixed Types in a Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
24.8.5.2 Storing Multi-Index DataFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
24.8.6 Querying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
24.8.6.1 Querying a Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
24.8.6.2 Using timedelta64[ns] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
24.8.6.3 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
24.8.6.4 Query via Data Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
24.8.6.5 Iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1068
24.8.6.6 Advanced Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
24.8.6.7 Multiple Table Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
24.8.7 Delete from a Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
24.8.8 Notes & Caveats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
24.8.8.1 Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
24.8.8.2 ptrepack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
24.8.8.3 Caveats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
24.8.9 DataTypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
24.8.9.1 Categorical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
24.8.9.2 String Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
24.8.10 External Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
24.8.11 Backwards Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
24.8.12 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
24.8.13 Experimental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
24.9 Feather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
24.10 SQL Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
24.10.1 pandas.read_sql_table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
24.10.2 pandas.read_sql_query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
24.10.3 pandas.read_sql . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
24.10.4 pandas.DataFrame.to_sql . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
24.10.5 Writing DataFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
24.10.5.1 SQL data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
24.10.6 Reading Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
24.10.7 Schema support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090
24.10.8 Querying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090
24.10.9 Engine connection examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
24.10.10 Advanced SQLAlchemy queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
24.10.11 Sqlite fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
24.11 Google BigQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
24.12 Stata Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
24.12.1 Writing to Stata format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
24.12.2 Reading from Stata format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
24.12.2.1 Categorical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
24.13 SAS Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
24.14 Other file formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
24.14.1 netCDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
24.15 Performance Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
29 rpy2 / R interface 1135
29.1 Transferring R data sets into Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
29.2 Converting DataFrames into R objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
31.4.5 factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
34.1.3.1 pandas.read_clipboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
34.1.4 Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
34.1.4.1 pandas.read_excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
34.1.4.2 pandas.ExcelFile.parse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
34.1.5 JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
34.1.5.1 pandas.read_json . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
34.1.5.2 pandas.io.json.json_normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
34.1.5.3 pandas.io.json.build_table_schema . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
34.1.6 HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
34.1.6.1 pandas.read_html . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
34.1.7 HDFStore: PyTables (HDF5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
34.1.7.1 pandas.read_hdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
34.1.7.2 pandas.HDFStore.put . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
34.1.7.3 pandas.HDFStore.append . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
34.1.7.4 pandas.HDFStore.get . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
34.1.7.5 pandas.HDFStore.select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
34.1.8 Feather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
34.1.8.1 pandas.read_feather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
34.1.9 SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
34.1.9.1 pandas.read_sas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
34.1.10 SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
34.1.11 Google BigQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
34.1.11.1 pandas.read_gbq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
34.1.12 STATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
34.1.12.1 pandas.read_stata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
34.1.12.2 pandas.io.stata.StataReader.data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
34.1.12.3 pandas.io.stata.StataReader.data_label . . . . . . . . . . . . . . . . . . . . . . . . 1215
34.1.12.4 pandas.io.stata.StataReader.value_labels . . . . . . . . . . . . . . . . . . . . . . . 1215
34.1.12.5 pandas.io.stata.StataReader.variable_labels . . . . . . . . . . . . . . . . . . . . . . 1215
34.1.12.6 pandas.io.stata.StataWriter.write_file . . . . . . . . . . . . . . . . . . . . . . . . . 1215
34.2 General functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
34.2.1 Data manipulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
34.2.1.1 pandas.melt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
34.2.1.2 pandas.pivot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
34.2.1.3 pandas.pivot_table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
34.2.1.4 pandas.crosstab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
34.2.1.5 pandas.cut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
34.2.1.6 pandas.qcut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
34.2.1.7 pandas.merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
34.2.1.8 pandas.merge_ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
34.2.1.9 pandas.merge_asof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
34.2.1.10 pandas.concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
34.2.1.11 pandas.get_dummies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
34.2.1.12 pandas.factorize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
34.2.1.13 pandas.unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
34.2.1.14 pandas.wide_to_long . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
34.2.2 Top-level missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
34.2.2.1 pandas.isnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
34.2.2.2 pandas.notnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
34.2.3 Top-level conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
34.2.3.1 pandas.to_numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
34.2.4 Top-level dealing with datetimelike . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
34.2.4.1 pandas.to_datetime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
34.2.4.2 pandas.to_timedelta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
34.2.4.3 pandas.date_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
34.2.4.4 pandas.bdate_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
34.2.4.5 pandas.period_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
34.2.4.6 pandas.timedelta_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
34.2.4.7 pandas.infer_freq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
34.2.5 Top-level evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
34.2.5.1 pandas.eval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
34.2.6 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
34.2.6.1 pandas.test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
34.3 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
34.3.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
34.3.1.1 pandas.Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250
34.3.2 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
34.3.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
34.3.4 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
34.3.4.1 pandas.Series.__iter__ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
34.3.5 Binary operator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
34.3.6 Function application, GroupBy & Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
34.3.7 Computations / Descriptive Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
34.3.8 Reindexing / Selection / Label manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
34.3.9 Missing data handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
34.3.10 Reshaping, sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
34.3.11 Combining / joining / merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
34.3.12 Time series-related . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
34.3.13 Datetimelike Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
34.3.13.1 pandas.Series.dt.date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
34.3.13.2 pandas.Series.dt.time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.3 pandas.Series.dt.year . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.4 pandas.Series.dt.month . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.5 pandas.Series.dt.day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.6 pandas.Series.dt.hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.7 pandas.Series.dt.minute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.8 pandas.Series.dt.second . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.9 pandas.Series.dt.microsecond . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.10 pandas.Series.dt.nanosecond . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
34.3.13.11 pandas.Series.dt.week . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.12 pandas.Series.dt.weekofyear . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.13 pandas.Series.dt.dayofweek . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.14 pandas.Series.dt.weekday . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.15 pandas.Series.dt.weekday_name . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.16 pandas.Series.dt.dayofyear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.17 pandas.Series.dt.quarter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.18 pandas.Series.dt.is_month_start . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.19 pandas.Series.dt.is_month_end . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
34.3.13.20 pandas.Series.dt.is_quarter_start . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.21 pandas.Series.dt.is_quarter_end . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.22 pandas.Series.dt.is_year_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.23 pandas.Series.dt.is_year_end . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.24 pandas.Series.dt.is_leap_year . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.25 pandas.Series.dt.daysinmonth . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.26 pandas.Series.dt.days_in_month . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.27 pandas.Series.dt.tz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.28 pandas.Series.dt.freq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
34.3.13.29 pandas.Series.dt.to_period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
34.3.13.30 pandas.Series.dt.to_pydatetime . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
34.3.13.31 pandas.Series.dt.tz_localize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
34.3.13.32 pandas.Series.dt.tz_convert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
34.3.13.33 pandas.Series.dt.normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
34.3.13.34 pandas.Series.dt.strftime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
34.3.13.35 pandas.Series.dt.round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
34.3.13.36 pandas.Series.dt.floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.37 pandas.Series.dt.ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.38 pandas.Series.dt.days . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.39 pandas.Series.dt.seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.40 pandas.Series.dt.microseconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.41 pandas.Series.dt.nanoseconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
34.3.13.42 pandas.Series.dt.components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
34.3.13.43 pandas.Series.dt.to_pytimedelta . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
34.3.13.44 pandas.Series.dt.total_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
34.3.14 String handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
34.3.14.1 pandas.Series.str.capitalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
34.3.14.2 pandas.Series.str.cat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
34.3.14.3 pandas.Series.str.center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
34.3.14.4 pandas.Series.str.contains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
34.3.14.5 pandas.Series.str.count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
34.3.14.6 pandas.Series.str.decode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
34.3.14.7 pandas.Series.str.encode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
34.3.14.8 pandas.Series.str.endswith . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
34.3.14.9 pandas.Series.str.extract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
34.3.14.10 pandas.Series.str.extractall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
34.3.14.11 pandas.Series.str.find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
34.3.14.12 pandas.Series.str.findall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
34.3.14.13 pandas.Series.str.get . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
34.3.14.14 pandas.Series.str.index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
34.3.14.15 pandas.Series.str.join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.16 pandas.Series.str.len . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.17 pandas.Series.str.ljust . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.18 pandas.Series.str.lower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.19 pandas.Series.str.lstrip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.20 pandas.Series.str.match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
34.3.14.21 pandas.Series.str.normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
34.3.14.22 pandas.Series.str.pad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
34.3.14.23 pandas.Series.str.partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
34.3.14.24 pandas.Series.str.repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
34.3.14.25 pandas.Series.str.replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
34.3.14.26 pandas.Series.str.rfind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
34.3.14.27 pandas.Series.str.rindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
34.3.14.28 pandas.Series.str.rjust . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
34.3.14.29 pandas.Series.str.rpartition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
34.3.14.30 pandas.Series.str.rstrip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
34.3.14.31 pandas.Series.str.slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
34.3.14.32 pandas.Series.str.slice_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
34.3.14.33 pandas.Series.str.split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
34.3.14.34 pandas.Series.str.rsplit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
34.3.14.35 pandas.Series.str.startswith . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
34.3.14.36 pandas.Series.str.strip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
34.3.14.37 pandas.Series.str.swapcase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
34.3.14.38 pandas.Series.str.title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
34.3.14.39 pandas.Series.str.translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
34.3.14.40 pandas.Series.str.upper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
34.3.14.41 pandas.Series.str.wrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
34.3.14.42 pandas.Series.str.zfill . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
34.3.14.43 pandas.Series.str.isalnum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.44 pandas.Series.str.isalpha . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.45 pandas.Series.str.isdigit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.46 pandas.Series.str.isspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.47 pandas.Series.str.islower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.48 pandas.Series.str.isupper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.49 pandas.Series.str.istitle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
34.3.14.50 pandas.Series.str.isnumeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
34.3.14.51 pandas.Series.str.isdecimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
34.3.14.52 pandas.Series.str.get_dummies . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
34.3.15 Categorical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
34.3.15.1 pandas.Series.cat.categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
34.3.15.2 pandas.Series.cat.ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
34.3.15.3 pandas.Series.cat.codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
34.3.15.4 pandas.Series.cat.rename_categories . . . . . . . . . . . . . . . . . . . . . . . . . 1396
34.3.15.5 pandas.Series.cat.reorder_categories . . . . . . . . . . . . . . . . . . . . . . . . . 1397
34.3.15.6 pandas.Series.cat.add_categories . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
34.3.15.7 pandas.Series.cat.remove_categories . . . . . . . . . . . . . . . . . . . . . . . . . 1398
34.3.15.8 pandas.Series.cat.remove_unused_categories . . . . . . . . . . . . . . . . . . . . . 1398
34.3.15.9 pandas.Series.cat.set_categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
34.3.15.10 pandas.Series.cat.as_ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1399
34.3.15.11 pandas.Series.cat.as_unordered . . . . . . . . . . . . . . . . . . . . . . . . . . . 1399
34.3.15.12 pandas.Categorical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1400
34.3.15.13 pandas.Categorical.from_codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
34.3.15.14 pandas.Categorical.__array__ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
34.3.16 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
34.3.16.1 pandas.Series.plot.area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
34.3.16.2 pandas.Series.plot.bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
34.3.16.3 pandas.Series.plot.barh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
34.3.16.4 pandas.Series.plot.box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
34.3.16.5 pandas.Series.plot.density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
34.3.16.6 pandas.Series.plot.hist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
34.3.16.7 pandas.Series.plot.kde . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
34.3.16.8 pandas.Series.plot.line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
34.3.16.9 pandas.Series.plot.pie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
34.3.17 Serialization / IO / Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
34.3.18 Sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
34.3.18.1 pandas.SparseSeries.to_coo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
34.3.18.2 pandas.SparseSeries.from_coo . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
34.4 DataFrame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
34.4.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
34.4.1.1 pandas.DataFrame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
34.4.2 Attributes and underlying data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
34.4.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
34.4.4 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
34.4.4.1 pandas.DataFrame.__iter__ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
34.4.5 Binary operator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
34.4.6 Function application, GroupBy & Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 1543
34.4.7 Computations / Descriptive Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1543
34.4.8 Reindexing / Selection / Label manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
34.4.9 Missing data handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1545
34.4.10 Reshaping, sorting, transposing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
34.4.11 Combining / joining / merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
34.4.12 Time series-related . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
34.4.13 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
34.4.13.1 pandas.DataFrame.plot.area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
34.4.13.2 pandas.DataFrame.plot.bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
34.4.13.3 pandas.DataFrame.plot.barh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
34.4.13.4 pandas.DataFrame.plot.box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
34.4.13.5 pandas.DataFrame.plot.density . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
34.4.13.6 pandas.DataFrame.plot.hexbin . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1549
34.4.13.7 pandas.DataFrame.plot.hist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1549
34.4.13.8 pandas.DataFrame.plot.kde . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1549
34.4.13.9 pandas.DataFrame.plot.line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
34.4.13.10 pandas.DataFrame.plot.pie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
34.4.13.11 pandas.DataFrame.plot.scatter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
34.4.14 Serialization / IO / Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
34.4.15 Sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
34.4.15.1 pandas.SparseDataFrame.to_coo . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
34.5 Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
34.5.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
34.5.1.1 pandas.Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
34.5.2 Attributes and underlying data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
34.5.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
34.5.4 Getting and setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
34.5.5 Indexing, iteration, slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
34.5.5.1 pandas.Panel.__iter__ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
34.5.6 Binary operator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
34.5.7 Function application, GroupBy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1627
34.5.8 Computations / Descriptive Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1627
34.5.9 Reindexing / Selection / Label manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 1627
34.5.10 Missing data handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
34.5.11 Reshaping, sorting, transposing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
34.5.12 Combining / joining / merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
34.5.13 Time series-related . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
34.5.14 Serialization / IO / Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629
34.6 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629
34.6.1 pandas.Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629
34.6.1.1 pandas.Index.T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1630
34.6.1.2 pandas.Index.asi8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1630
34.6.1.3 pandas.Index.base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.4 pandas.Index.data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.5 pandas.Index.dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.6 pandas.Index.dtype_str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.7 pandas.Index.empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.8 pandas.Index.flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.9 pandas.Index.has_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.10 pandas.Index.hasnans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.11 pandas.Index.inferred_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.12 pandas.Index.is_all_dates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.13 pandas.Index.is_monotonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
34.6.1.14 pandas.Index.is_monotonic_decreasing . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.15 pandas.Index.is_monotonic_increasing . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.16 pandas.Index.is_unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.17 pandas.Index.itemsize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.18 pandas.Index.name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.19 pandas.Index.names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.20 pandas.Index.nbytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.21 pandas.Index.ndim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.22 pandas.Index.nlevels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.23 pandas.Index.shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
34.6.1.24 pandas.Index.size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
34.6.1.25 pandas.Index.strides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
34.6.1.26 pandas.Index.values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
34.6.1.27 pandas.Index.all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
34.6.1.28 pandas.Index.any . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
34.6.1.29 pandas.Index.append . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
34.6.1.30 pandas.Index.argmax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.31 pandas.Index.argmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.32 pandas.Index.argsort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.33 pandas.Index.asof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.34 pandas.Index.asof_locs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.35 pandas.Index.astype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
34.6.1.36 pandas.Index.contains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
34.6.1.37 pandas.Index.copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
34.6.1.38 pandas.Index.delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
34.6.1.39 pandas.Index.difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
34.6.1.40 pandas.Index.drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
34.6.1.41 pandas.Index.drop_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
34.6.1.42 pandas.Index.dropna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
34.6.1.43 pandas.Index.duplicated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
34.6.1.44 pandas.Index.equals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.45 pandas.Index.factorize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.46 pandas.Index.fillna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.47 pandas.Index.format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.48 pandas.Index.get_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.49 pandas.Index.get_indexer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
34.6.1.50 pandas.Index.get_indexer_for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1640
34.6.1.51 pandas.Index.get_indexer_non_unique . . . . . . . . . . . . . . . . . . . . . . . . 1640
34.6.1.52 pandas.Index.get_level_values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
34.6.1.53 pandas.Index.get_loc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
34.6.1.54 pandas.Index.get_slice_bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
34.6.1.55 pandas.Index.get_value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
34.6.1.56 pandas.Index.get_values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
34.6.1.57 pandas.Index.groupby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
34.6.1.58 pandas.Index.holds_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
34.6.1.59 pandas.Index.identical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
34.6.1.60 pandas.Index.insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
34.6.1.61 pandas.Index.intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
34.6.1.62 pandas.Index.is_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.63 pandas.Index.is_boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.64 pandas.Index.is_categorical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.65 pandas.Index.is_floating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.66 pandas.Index.is_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.67 pandas.Index.is_interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.68 pandas.Index.is_lexsorted_for_tuple . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.69 pandas.Index.is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.70 pandas.Index.is_numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.71 pandas.Index.is_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
34.6.1.72 pandas.Index.is_type_compatible . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
34.6.1.73 pandas.Index.isin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
34.6.1.74 pandas.Index.isnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
34.6.1.75 pandas.Index.item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
34.6.1.76 pandas.Index.join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
34.6.1.77 pandas.Index.map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
34.6.1.78 pandas.Index.max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
34.6.1.79 pandas.Index.memory_usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
34.6.1.80 pandas.Index.min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.81 pandas.Index.notnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.82 pandas.Index.nunique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.83 pandas.Index.putmask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.84 pandas.Index.ravel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.85 pandas.Index.reindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
34.6.1.86 pandas.Index.rename . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
34.6.1.87 pandas.Index.repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
34.6.1.88 pandas.Index.reshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
34.6.1.89 pandas.Index.searchsorted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
34.6.1.90 pandas.Index.set_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
34.6.1.91 pandas.Index.set_value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
34.6.1.92 pandas.Index.shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
34.6.1.93 pandas.Index.slice_indexer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
34.6.1.94 pandas.Index.slice_locs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1650
34.6.1.95 pandas.Index.sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1650
34.6.1.96 pandas.Index.sort_values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1650
34.6.1.97 pandas.Index.sortlevel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1650
34.6.1.98 pandas.Index.str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
34.6.1.99 pandas.Index.summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
34.6.1.100 pandas.Index.sym_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
34.6.1.101 pandas.Index.symmetric_difference . . . . . . . . . . . . . . . . . . . . . . . . . 1651
34.6.1.102 pandas.Index.take . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
34.6.1.103 pandas.Index.to_datetime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
34.6.1.104 pandas.Index.to_native_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
34.6.1.105 pandas.Index.to_series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
34.6.1.106 pandas.Index.tolist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
34.6.1.107 pandas.Index.transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
34.6.1.108 pandas.Index.union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
34.6.1.109 pandas.Index.unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
34.6.1.110 pandas.Index.value_counts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
34.6.1.111 pandas.Index.view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1654
34.6.1.112 pandas.Index.where . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1654
34.6.2 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1654
34.6.3 Modifying and Computations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
34.6.4 Missing Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
34.6.5 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
34.6.6 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
34.6.7 Time-specific operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
34.6.8 Combining / joining / set operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
34.6.9 Selecting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
34.7 CategoricalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
34.7.1 pandas.CategoricalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
34.7.2 Categorical Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
34.7.2.1 pandas.CategoricalIndex.codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
34.7.2.2 pandas.CategoricalIndex.categories . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
34.7.2.3 pandas.CategoricalIndex.ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
34.7.2.4 pandas.CategoricalIndex.rename_categories . . . . . . . . . . . . . . . . . . . . . 1658
34.7.2.5 pandas.CategoricalIndex.reorder_categories . . . . . . . . . . . . . . . . . . . . . 1658
34.7.2.6 pandas.CategoricalIndex.add_categories . . . . . . . . . . . . . . . . . . . . . . . 1659
34.7.2.7 pandas.CategoricalIndex.remove_categories . . . . . . . . . . . . . . . . . . . . . 1659
34.7.2.8 pandas.CategoricalIndex.remove_unused_categories . . . . . . . . . . . . . . . . . 1660
34.7.2.9 pandas.CategoricalIndex.set_categories . . . . . . . . . . . . . . . . . . . . . . . . 1660
34.7.2.10 pandas.CategoricalIndex.as_ordered . . . . . . . . . . . . . . . . . . . . . . . . . 1661
34.7.2.11 pandas.CategoricalIndex.as_unordered . . . . . . . . . . . . . . . . . . . . . . . . 1661
34.8 IntervalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1661
34.8.1 pandas.IntervalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1661
34.8.2 IntervalIndex Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1661
34.8.2.1 pandas.IntervalIndex.from_arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . 1662
34.8.2.2 pandas.IntervalIndex.from_tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . 1662
34.8.2.3 pandas.IntervalIndex.from_breaks . . . . . . . . . . . . . . . . . . . . . . . . . . 1662
34.8.2.4 pandas.IntervalIndex.from_intervals . . . . . . . . . . . . . . . . . . . . . . . . . 1663
34.9 MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1663
34.9.1 pandas.MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1664
34.9.1.1 pandas.MultiIndex.T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.2 pandas.MultiIndex.asi8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.3 pandas.MultiIndex.base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.4 pandas.MultiIndex.data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.5 pandas.MultiIndex.dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.6 pandas.MultiIndex.dtype_str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.7 pandas.MultiIndex.empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
34.9.1.8 pandas.MultiIndex.flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.9 pandas.MultiIndex.has_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.10 pandas.MultiIndex.hasnans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.11 pandas.MultiIndex.inferred_type . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.12 pandas.MultiIndex.is_all_dates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.13 pandas.MultiIndex.is_monotonic . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.14 pandas.MultiIndex.is_monotonic_decreasing . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.15 pandas.MultiIndex.is_monotonic_increasing . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.16 pandas.MultiIndex.is_unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.17 pandas.MultiIndex.itemsize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.18 pandas.MultiIndex.labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
34.9.1.19 pandas.MultiIndex.levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.20 pandas.MultiIndex.levshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.21 pandas.MultiIndex.lexsort_depth . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.22 pandas.MultiIndex.name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.23 pandas.MultiIndex.names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.24 pandas.MultiIndex.nbytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.25 pandas.MultiIndex.ndim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.26 pandas.MultiIndex.nlevels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.27 pandas.MultiIndex.shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.28 pandas.MultiIndex.size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
34.9.1.29 pandas.MultiIndex.strides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
34.9.1.30 pandas.MultiIndex.values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
34.9.1.31 pandas.MultiIndex.all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1670
34.9.1.32 pandas.MultiIndex.any . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.33 pandas.MultiIndex.append . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.34 pandas.MultiIndex.argmax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.35 pandas.MultiIndex.argmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.36 pandas.MultiIndex.argsort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.37 pandas.MultiIndex.asof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.38 pandas.MultiIndex.asof_locs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
34.9.1.39 pandas.MultiIndex.astype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
34.9.1.40 pandas.MultiIndex.contains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
34.9.1.41 pandas.MultiIndex.copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
34.9.1.42 pandas.MultiIndex.delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
34.9.1.43 pandas.MultiIndex.difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
34.9.1.44 pandas.MultiIndex.drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
34.9.1.45 pandas.MultiIndex.drop_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
34.9.1.46 pandas.MultiIndex.droplevel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
34.9.1.47 pandas.MultiIndex.dropna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
34.9.1.48 pandas.MultiIndex.duplicated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
34.9.1.49 pandas.MultiIndex.equal_levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
34.9.1.50 pandas.MultiIndex.equals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
34.9.1.51 pandas.MultiIndex.factorize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
34.9.1.52 pandas.MultiIndex.fillna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
34.9.1.53 pandas.MultiIndex.format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
34.9.1.54 pandas.MultiIndex.from_arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
34.9.1.55 pandas.MultiIndex.from_product . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
34.9.1.56 pandas.MultiIndex.from_tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
34.9.1.57 pandas.MultiIndex.get_duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
34.9.1.58 pandas.MultiIndex.get_indexer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
34.9.1.59 pandas.MultiIndex.get_indexer_for . . . . . . . . . . . . . . . . . . . . . . . . . . 1677
34.9.1.60 pandas.MultiIndex.get_indexer_non_unique . . . . . . . . . . . . . . . . . . . . . 1677
34.9.1.61 pandas.MultiIndex.get_level_values . . . . . . . . . . . . . . . . . . . . . . . . . 1678
34.9.1.62 pandas.MultiIndex.get_loc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
34.9.1.63 pandas.MultiIndex.get_loc_level . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
34.9.1.64 pandas.MultiIndex.get_locs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
34.9.1.65 pandas.MultiIndex.get_major_bounds . . . . . . . . . . . . . . . . . . . . . . . . 1678
34.9.1.66 pandas.MultiIndex.get_slice_bound . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.67 pandas.MultiIndex.get_value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.68 pandas.MultiIndex.get_values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.69 pandas.MultiIndex.groupby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.70 pandas.MultiIndex.holds_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.71 pandas.MultiIndex.identical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
34.9.1.72 pandas.MultiIndex.insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.73 pandas.MultiIndex.intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.74 pandas.MultiIndex.is_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.75 pandas.MultiIndex.is_boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.76 pandas.MultiIndex.is_categorical . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.77 pandas.MultiIndex.is_floating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.78 pandas.MultiIndex.is_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.79 pandas.MultiIndex.is_interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
34.9.1.80 pandas.MultiIndex.is_lexsorted . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.81 pandas.MultiIndex.is_lexsorted_for_tuple . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.82 pandas.MultiIndex.is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.83 pandas.MultiIndex.is_numeric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.84 pandas.MultiIndex.is_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.85 pandas.MultiIndex.is_type_compatible . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.86 pandas.MultiIndex.isin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
34.9.1.87 pandas.MultiIndex.isnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1682
34.9.1.88 pandas.MultiIndex.item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1682
34.9.1.89 pandas.MultiIndex.join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1682
34.9.1.90 pandas.MultiIndex.map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1682
34.9.1.91 pandas.MultiIndex.max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
34.9.1.92 pandas.MultiIndex.memory_usage . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
34.9.1.93 pandas.MultiIndex.min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
34.9.1.94 pandas.MultiIndex.notnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
34.9.1.95 pandas.MultiIndex.nunique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
34.9.1.96 pandas.MultiIndex.putmask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
34.9.1.97 pandas.MultiIndex.ravel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
34.9.1.98 pandas.MultiIndex.reindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
34.9.1.99 pandas.MultiIndex.remove_unused_levels . . . . . . . . . . . . . . . . . . . . . . 1684
34.9.1.100 pandas.MultiIndex.rename . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
34.9.1.101 pandas.MultiIndex.reorder_levels . . . . . . . . . . . . . . . . . . . . . . . . . 1685
34.9.1.102 pandas.MultiIndex.repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
34.9.1.103 pandas.MultiIndex.reshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1686
34.9.1.104 pandas.MultiIndex.searchsorted . . . . . . . . . . . . . . . . . . . . . . . . . . 1686
34.9.1.105 pandas.MultiIndex.set_labels . . . . . . . . . . . . . . . . . . . . . . . . . . . 1687
34.9.1.106 pandas.MultiIndex.set_levels . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688
34.9.1.107 pandas.MultiIndex.set_names . . . . . . . . . . . . . . . . . . . . . . . . . . . 1689
34.9.1.108 pandas.MultiIndex.set_value . . . . . . . . . . . . . . . . . . . . . . . . . . . 1689
34.9.1.109 pandas.MultiIndex.shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1689
34.9.1.110 pandas.MultiIndex.slice_indexer . . . . . . . . . . . . . . . . . . . . . . . . . 1689
34.9.1.111 pandas.MultiIndex.slice_locs . . . . . . . . . . . . . . . . . . . . . . . . . . . 1690
34.9.1.112 pandas.MultiIndex.sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1690
34.9.1.113 pandas.MultiIndex.sort_values . . . . . . . . . . . . . . . . . . . . . . . . . . 1690
34.9.1.114 pandas.MultiIndex.sortlevel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
34.9.1.115 pandas.MultiIndex.str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
34.9.1.116 pandas.MultiIndex.summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
34.9.1.117 pandas.MultiIndex.swaplevel . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
34.9.1.118 pandas.MultiIndex.sym_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1692
34.9.1.119 pandas.MultiIndex.symmetric_difference . . . . . . . . . . . . . . . . . . . . . 1692
34.9.1.120 pandas.MultiIndex.take . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1692
34.9.1.121 pandas.MultiIndex.to_datetime . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
34.9.1.122 pandas.MultiIndex.to_frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
34.9.1.123 pandas.MultiIndex.to_hierarchical . . . . . . . . . . . . . . . . . . . . . . . . 1693
34.9.1.124 pandas.MultiIndex.to_native_types . . . . . . . . . . . . . . . . . . . . . . . . 1694
34.9.1.125 pandas.MultiIndex.to_series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1694
34.9.1.126 pandas.MultiIndex.tolist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1694
34.9.1.127 pandas.MultiIndex.transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . 1694
34.9.1.128 pandas.MultiIndex.truncate . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1694
34.9.1.129 pandas.MultiIndex.union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1695
34.9.1.130 pandas.MultiIndex.unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1695
34.9.1.131 pandas.MultiIndex.value_counts . . . . . . . . . . . . . . . . . . . . . . . . . 1695
34.9.1.132 pandas.MultiIndex.view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
34.9.1.133 pandas.MultiIndex.where . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
34.9.2 pandas.IndexSlice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
34.9.3 MultiIndex Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
34.10 DatetimeIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1697
34.10.1 pandas.DatetimeIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1697
34.10.1.1 pandas.DatetimeIndex.T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1699
34.10.1.2 pandas.DatetimeIndex.asi8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.3 pandas.DatetimeIndex.asobject . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.4 pandas.DatetimeIndex.base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.5 pandas.DatetimeIndex.data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.6 pandas.DatetimeIndex.date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.7 pandas.DatetimeIndex.day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.8 pandas.DatetimeIndex.dayofweek . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.9 pandas.DatetimeIndex.dayofyear . . . . . . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.10 pandas.DatetimeIndex.days_in_month . . . . . . . . . . . . . . . . . . . . . . 1700
34.10.1.11 pandas.DatetimeIndex.daysinmonth . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.12 pandas.DatetimeIndex.dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.13 pandas.DatetimeIndex.dtype_str . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.14 pandas.DatetimeIndex.empty . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.15 pandas.DatetimeIndex.flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.16 pandas.DatetimeIndex.freq . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.17 pandas.DatetimeIndex.freqstr . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.18 pandas.DatetimeIndex.has_duplicates . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.19 pandas.DatetimeIndex.hasnans . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.20 pandas.DatetimeIndex.hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
34.10.1.21 pandas.DatetimeIndex.inferred_freq . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.22 pandas.DatetimeIndex.inferred_type . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.23 pandas.DatetimeIndex.is_all_dates . . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.24 pandas.DatetimeIndex.is_leap_year . . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.25 pandas.DatetimeIndex.is_monotonic . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.26 pandas.DatetimeIndex.is_monotonic_decreasing . . . . . . . . . . . . . . . . . 1702
34.10.1.27 pandas.DatetimeIndex.is_monotonic_increasing . . . . . . . . . . . . . . . . . 1702
34.10.1.28 pandas.DatetimeIndex.is_month_end . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.29 pandas.DatetimeIndex.is_month_start . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.30 pandas.DatetimeIndex.is_normalized . . . . . . . . . . . . . . . . . . . . . . . 1702
34.10.1.31 pandas.DatetimeIndex.is_quarter_end . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.32 pandas.DatetimeIndex.is_quarter_start . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.33 pandas.DatetimeIndex.is_unique . . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.34 pandas.DatetimeIndex.is_year_end . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.35 pandas.DatetimeIndex.is_year_start . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.36 pandas.DatetimeIndex.itemsize . . . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.37 pandas.DatetimeIndex.microsecond . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.38 pandas.DatetimeIndex.minute . . . . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.39 pandas.DatetimeIndex.month . . . . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.40 pandas.DatetimeIndex.name . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1703
34.10.1.41 pandas.DatetimeIndex.names . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.42 pandas.DatetimeIndex.nanosecond . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.43 pandas.DatetimeIndex.nbytes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.44 pandas.DatetimeIndex.ndim . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.45 pandas.DatetimeIndex.nlevels . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.46 pandas.DatetimeIndex.offset . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.47 pandas.DatetimeIndex.quarter . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.48 pandas.DatetimeIndex.resolution . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.49 pandas.DatetimeIndex.second . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.50 pandas.DatetimeIndex.shape . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
34.10.1.51 pandas.DatetimeIndex.size . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.52 pandas.DatetimeIndex.strides . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.53 pandas.DatetimeIndex.time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.54 pandas.DatetimeIndex.tz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.55 pandas.DatetimeIndex.tzinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.56 pandas.DatetimeIndex.values . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.57 pandas.DatetimeIndex.week . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.58 pandas.DatetimeIndex.weekday . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.59 pandas.DatetimeIndex.weekday_name . . . . . . . . . . . . . . . . . . . . . . 1705
34.10.1.60 pandas.DatetimeIndex.weekofyear . . . . . . . . . . . . . . . . . . . . . . . . 1706
34.10.1.61 pandas.DatetimeIndex.year . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1706
34.10.1.62 pandas.DatetimeIndex.all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.63 pandas.DatetimeIndex.any . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.64 pandas.DatetimeIndex.append . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.65 pandas.DatetimeIndex.argmax . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.66 pandas.DatetimeIndex.argmin . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.67 pandas.DatetimeIndex.argsort . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.68 pandas.DatetimeIndex.asof . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
34.10.1.69 pandas.DatetimeIndex.asof_locs . . . . . . . . . . . . . . . . . . . . . . . . . 1710
34.10.1.70 pandas.DatetimeIndex.astype . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
34.10.1.71 pandas.DatetimeIndex.ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
34.10.1.72 pandas.DatetimeIndex.contains . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
34.10.1.73 pandas.DatetimeIndex.copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
34.10.1.74 pandas.DatetimeIndex.delete . . . . . . . . . . . . . . . . . . . . . . . . . . . 1711
34.10.1.75 pandas.DatetimeIndex.difference . . . . . . . . . . . . . . . . . . . . . . . . . 1711
34.10.1.76 pandas.DatetimeIndex.drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1711
34.10.1.77 pandas.DatetimeIndex.drop_duplicates . . . . . . . . . . . . . . . . . . . . . . 1711
34.10.1.78 pandas.DatetimeIndex.dropna . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
34.10.1.79 pandas.DatetimeIndex.duplicated . . . . . . . . . . . . . . . . . . . . . . . . . 1712
34.10.1.80 pandas.DatetimeIndex.equals . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
34.10.1.81 pandas.DatetimeIndex.factorize . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
34.10.1.82 pandas.DatetimeIndex.fillna . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1713
34.10.1.83 pandas.DatetimeIndex.floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1713
34.10.1.84 pandas.DatetimeIndex.format . . . . . . . . . . . . . . . . . . . . . . . . . . . 1713
34.10.1.85 pandas.DatetimeIndex.get_duplicates . . . . . . . . . . . . . . . . . . . . . . . 1713
34.10.1.86 pandas.DatetimeIndex.get_indexer . . . . . . . . . . . . . . . . . . . . . . . . 1713
34.10.1.87 pandas.DatetimeIndex.get_indexer_for . . . . . . . . . . . . . . . . . . . . . . 1714
34.10.1.88 pandas.DatetimeIndex.get_indexer_non_unique . . . . . . . . . . . . . . . . . 1714
34.10.1.89 pandas.DatetimeIndex.get_level_values . . . . . . . . . . . . . . . . . . . . . . 1714
34.10.1.90 pandas.DatetimeIndex.get_loc . . . . . . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.91 pandas.DatetimeIndex.get_slice_bound . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.92 pandas.DatetimeIndex.get_value . . . . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.93 pandas.DatetimeIndex.get_value_maybe_box . . . . . . . . . . . . . . . . . . . 1715
34.10.1.94 pandas.DatetimeIndex.get_values . . . . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.95 pandas.DatetimeIndex.groupby . . . . . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.96 pandas.DatetimeIndex.holds_integer . . . . . . . . . . . . . . . . . . . . . . . 1715
34.10.1.97 pandas.DatetimeIndex.identical . . . . . . . . . . . . . . . . . . . . . . . . . . 1716
34.10.1.98 pandas.DatetimeIndex.indexer_at_time . . . . . . . . . . . . . . . . . . . . . . 1716
34.10.1.99 pandas.DatetimeIndex.indexer_between_time . . . . . . . . . . . . . . . . . . 1716
34.10.1.100 pandas.DatetimeIndex.insert . . . . . . . . . . . . . . . . . . . . . . . . . . . 1716
34.10.1.101 pandas.DatetimeIndex.intersection . . . . . . . . . . . . . . . . . . . . . . . . 1716
34.10.1.102 pandas.DatetimeIndex.is_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.103 pandas.DatetimeIndex.is_boolean . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.104 pandas.DatetimeIndex.is_categorical . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.105 pandas.DatetimeIndex.is_floating . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.106 pandas.DatetimeIndex.is_integer . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.107 pandas.DatetimeIndex.is_interval . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.108 pandas.DatetimeIndex.is_lexsorted_for_tuple . . . . . . . . . . . . . . . . . . 1717
34.10.1.109 pandas.DatetimeIndex.is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.110 pandas.DatetimeIndex.is_numeric . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.111 pandas.DatetimeIndex.is_object . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
34.10.1.112 pandas.DatetimeIndex.is_type_compatible . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.113 pandas.DatetimeIndex.isin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.114 pandas.DatetimeIndex.isnull . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.115 pandas.DatetimeIndex.item . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.116 pandas.DatetimeIndex.join . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.117 pandas.DatetimeIndex.map . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.118 pandas.DatetimeIndex.max . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
34.10.1.119 pandas.DatetimeIndex.memory_usage . . . . . . . . . . . . . . . . . . . . . . 1719
34.10.1.120 pandas.DatetimeIndex.min . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1719
34.10.1.121 pandas.DatetimeIndex.normalize . . . . . . . . . . . . . . . . . . . . . . . . . 1719
34.10.1.122 pandas.DatetimeIndex.notnull . . . . . . . . . . . . . . . . . . . . . . . . . . . 1719
34.10.1.123 pandas.DatetimeIndex.nunique . . . . . . . . . . . . . . . . . . . . . . . . . . 1719
34.10.1.124 pandas.DatetimeIndex.putmask . . . . . . . . . . . . . . . . . . . . . . . . . . 1720
34.10.1.125 pandas.DatetimeIndex.ravel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1720
34.10.1.126 pandas.DatetimeIndex.reindex . . . . . . . . . . . . . . . . . . . . . . . . . . . 1720
34.10.1.127 pandas.DatetimeIndex.rename . . . . . . . . . . . . . . . . . . . . . . . . . . . 1720
34.10.1.128 pandas.DatetimeIndex.repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . 1720
34.10.1.129 pandas.DatetimeIndex.reshape . . . . . . . . . . . . . . . . . . . . . . . . . . . 1721
34.10.1.130 pandas.DatetimeIndex.round . . . . . . . . . . . . . . . . . . . . . . . . . . . 1721
34.10.1.131 pandas.DatetimeIndex.searchsorted . . . . . . . . . . . . . . . . . . . . . . . . 1721
34.10.1.132 pandas.DatetimeIndex.set_names . . . . . . . . . . . . . . . . . . . . . . . . . 1722
34.10.1.133 pandas.DatetimeIndex.set_value . . . . . . . . . . . . . . . . . . . . . . . . . . 1723
34.10.1.134 pandas.DatetimeIndex.shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1723
34.10.1.135 pandas.DatetimeIndex.slice_indexer . . . . . . . . . . . . . . . . . . . . . . . . 1723
34.10.1.136 pandas.DatetimeIndex.slice_locs . . . . . . . . . . . . . . . . . . . . . . . . . 1723
34.10.1.137 pandas.DatetimeIndex.snap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724
34.10.1.138 pandas.DatetimeIndex.sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724
34.10.1.139 pandas.DatetimeIndex.sort_values . . . . . . . . . . . . . . . . . . . . . . . . . 1724
34.10.1.140 pandas.DatetimeIndex.sortlevel . . . . . . . . . . . . . . . . . . . . . . . . . . 1724
34.10.1.141 pandas.DatetimeIndex.str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724
34.10.1.142 pandas.DatetimeIndex.strftime . . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
34.10.1.143 pandas.DatetimeIndex.summary . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
34.10.1.144 pandas.DatetimeIndex.sym_diff . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
34.10.1.145 pandas.DatetimeIndex.symmetric_difference . . . . . . . . . . . . . . . . . . . 1725
34.10.1.146 pandas.DatetimeIndex.take . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1726
34.10.1.147 pandas.DatetimeIndex.to_datetime . . . . . . . . . . . . . . . . . . . . . . . . 1726
34.10.1.148 pandas.DatetimeIndex.to_julian_date . . . . . . . . . . . . . . . . . . . . . . . 1726
34.10.1.149 pandas.DatetimeIndex.to_native_types . . . . . . . . . . . . . . . . . . . . . . 1726
34.10.1.150 pandas.DatetimeIndex.to_period . . . . . . . . . . . . . . . . . . . . . . . . . . 1726
34.10.1.151 pandas.DatetimeIndex.to_perioddelta . . . . . . . . . . . . . . . . . . . . . . . 1727
34.10.1.152 pandas.DatetimeIndex.to_pydatetime . . . . . . . . . . . . . . . . . . . . . . . 1727
34.10.1.153 pandas.DatetimeIndex.to_series . . . . . . . . . . . . . . . . . . . . . . . . . . 1727
34.10.1.154 pandas.DatetimeIndex.tolist . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1727
34.10.1.155 pandas.DatetimeIndex.transpose . . . . . . . . . . . . . . . . . . . . . . . . . . 1727
34.10.1.156 pandas.DatetimeIndex.tz_convert . . . . . . . . . . . . . . . . . . . . . . . . . 1728
34.10.1.157 pandas.DatetimeIndex.tz_localize . . . . . . . . . . . . . . . . . . . . . . . . . 1728
34.10.1.158 pandas.DatetimeIndex.union . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1728
34.10.1.159 pandas.DatetimeIndex.union_many . . . . . . . . . . . . . . . . . . . . . . . . 1729
34.10.1.160 pandas.DatetimeIndex.unique . . . . . . . . . . . . . . . . . . . . . . . . . . . 1729
34.10.1.161 pandas.DatetimeIndex.value_counts . . . . . . . . . . . . . . . . . . . . . . . . 1729
34.10.1.162 pandas.DatetimeIndex.view . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
34.10.1.163 pandas.DatetimeIndex.where . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
34.10.2 Time/Date Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
34.10.3 Selecting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
34.10.4 Time-specific operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
34.10.5 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
34.11 TimedeltaIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
34.11.1 pandas.TimedeltaIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
34.11.1.1 pandas.TimedeltaIndex.T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
34.11.1.2 pandas.TimedeltaIndex.asi8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
34.11.1.3 pandas.TimedeltaIndex.asobject . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
34.11.1.4 pandas.TimedeltaIndex.base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.5 pandas.TimedeltaIndex.components . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.6 pandas.TimedeltaIndex.data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.7 pandas.TimedeltaIndex.days . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.8 pandas.TimedeltaIndex.dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.9 pandas.TimedeltaIndex.dtype_str . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.10 pandas.TimedeltaIndex.empty . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.11 pandas.TimedeltaIndex.flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.12 pandas.TimedeltaIndex.freq . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.13 pandas.TimedeltaIndex.freqstr . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
34.11.1.14 pandas.TimedeltaIndex.has_duplicates . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.15 pandas.TimedeltaIndex.hasnans . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.16 pandas.TimedeltaIndex.inferred_freq . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.17 pandas.TimedeltaIndex.inferred_type . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.18 pandas.TimedeltaIndex.is_all_dates . . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.19 pandas.TimedeltaIndex.is_monotonic . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.20 pandas.TimedeltaIndex.is_monotonic_decreasing . . . . . . . . . . . . . . . . . 1735
34.11.1.21 pandas.TimedeltaIndex.is_monotonic_increasing . . . . . . . . . . . . . . . . . 1735
34.11.1.22 pandas.TimedeltaIndex.is_unique . . . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.23 pandas.TimedeltaIndex.itemsize . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
34.11.1.24 pandas.TimedeltaIndex.microseconds . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.25 pandas.TimedeltaIndex.name . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.26 pandas.TimedeltaIndex.names . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.27 pandas.TimedeltaIndex.nanoseconds . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.28 pandas.TimedeltaIndex.nbytes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.29 pandas.TimedeltaIndex.ndim . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.30 pandas.TimedeltaIndex.nlevels . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.31 pandas.TimedeltaIndex.resolution . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.32 pandas.TimedeltaIndex.seconds . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.33 pandas.TimedeltaIndex.shape . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
34.11.1.34 pandas.TimedeltaIndex.size . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
34.11.1.35 pandas.TimedeltaIndex.strides . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
34.11.1.36 pandas.TimedeltaIndex.values . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
34.11.1.37 pandas.TimedeltaIndex.all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1739
34.11.1.38 pandas.TimedeltaIndex.any . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1739
34.11.1.39 pandas.TimedeltaIndex.append . . . . . . . . . . . . . . . . . . . . . . . . . . 1739
34.11.1.40 pandas.TimedeltaIndex.argmax . . . . . . . . . . . . . . . . . . . . . . . . . . 1740
34.11.1.41 pandas.TimedeltaIndex.argmin . . . . . . . . . . . . . . . . . . . . . . . . . . 1740
34.11.1.42 pandas.TimedeltaIndex.argsort . . . . . . . . . . . . . . . . . . . . . . . . . . 1740
34.11.1.43 pandas.TimedeltaIndex.asof . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1740
34.11.1.44 pandas.TimedeltaIndex.asof_locs . . . . . . . . . . . . . . . . . . . . . . . . . 1740
34.11.1.45 pandas.TimedeltaIndex.astype . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
34.11.1.46 pandas.TimedeltaIndex.ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
34.11.1.47 pandas.TimedeltaIndex.contains . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
34.11.1.48 pandas.TimedeltaIndex.copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
34.11.1.49 pandas.TimedeltaIndex.delete . . . . . . . . . . . . . . . . . . . . . . . . . . . 1742
34.11.1.50 pandas.TimedeltaIndex.difference . . . . . . . . . . . . . . . . . . . . . . . . . 1742
34.11.1.51 pandas.TimedeltaIndex.drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1742
34.11.1.52 pandas.TimedeltaIndex.drop_duplicates . . . . . . . . . . . . . . . . . . . . . . 1742
34.11.1.53 pandas.TimedeltaIndex.dropna . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
34.11.1.54 pandas.TimedeltaIndex.duplicated . . . . . . . . . . . . . . . . . . . . . . . . 1743
34.11.1.55 pandas.TimedeltaIndex.equals . . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
34.11.1.56 pandas.TimedeltaIndex.factorize . . . . . . . . . . . . . . . . . . . . . . . . . 1743
34.11.1.57 pandas.TimedeltaIndex.fillna . . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
34.11.1.58 pandas.TimedeltaIndex.floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1744
34.11.1.59 pandas.TimedeltaIndex.format . . . . . . . . . . . . . . . . . . . . . . . . . . . 1744
34.11.1.60 pandas.TimedeltaIndex.get_duplicates . . . . . . . . . . . . . . . . . . . . . . 1744
34.11.1.61 pandas.TimedeltaIndex.get_indexer . . . . . . . . . . . . . . . . . . . . . . . . 1744
34.11.1.62 pandas.TimedeltaIndex.get_indexer_for . . . . . . . . . . . . . . . . . . . . . . 1745
34.11.1.63 pandas.TimedeltaIndex.get_indexer_non_unique . . . . . . . . . . . . . . . . . 1745
34.11.1.64 pandas.TimedeltaIndex.get_level_values . . . . . . . . . . . . . . . . . . . . . 1745
34.11.1.65 pandas.TimedeltaIndex.get_loc . . . . . . . . . . . . . . . . . . . . . . . . . . 1745
34.11.1.66 pandas.TimedeltaIndex.get_slice_bound . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.67 pandas.TimedeltaIndex.get_value . . . . . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.68 pandas.TimedeltaIndex.get_value_maybe_box . . . . . . . . . . . . . . . . . . 1746
34.11.1.69 pandas.TimedeltaIndex.get_values . . . . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.70 pandas.TimedeltaIndex.groupby . . . . . . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.71 pandas.TimedeltaIndex.holds_integer . . . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.72 pandas.TimedeltaIndex.identical . . . . . . . . . . . . . . . . . . . . . . . . . . 1746
34.11.1.73 pandas.TimedeltaIndex.insert . . . . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.74 pandas.TimedeltaIndex.intersection . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.75 pandas.TimedeltaIndex.is_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.76 pandas.TimedeltaIndex.is_boolean . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.77 pandas.TimedeltaIndex.is_categorical . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.78 pandas.TimedeltaIndex.is_floating . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.79 pandas.TimedeltaIndex.is_integer . . . . . . . . . . . . . . . . . . . . . . . . . 1747
34.11.1.80 pandas.TimedeltaIndex.is_interval . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.81 pandas.TimedeltaIndex.is_lexsorted_for_tuple . . . . . . . . . . . . . . . . . . 1748
34.11.1.82 pandas.TimedeltaIndex.is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.83 pandas.TimedeltaIndex.is_numeric . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.84 pandas.TimedeltaIndex.is_object . . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.85 pandas.TimedeltaIndex.is_type_compatible . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.86 pandas.TimedeltaIndex.isin . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.87 pandas.TimedeltaIndex.isnull . . . . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.88 pandas.TimedeltaIndex.item . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1748
34.11.1.89 pandas.TimedeltaIndex.join . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.90 pandas.TimedeltaIndex.map . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.91 pandas.TimedeltaIndex.max . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.92 pandas.TimedeltaIndex.memory_usage . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.93 pandas.TimedeltaIndex.min . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.94 pandas.TimedeltaIndex.notnull . . . . . . . . . . . . . . . . . . . . . . . . . . 1749
34.11.1.95 pandas.TimedeltaIndex.nunique . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
34.11.1.96 pandas.TimedeltaIndex.putmask . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
34.11.1.97 pandas.TimedeltaIndex.ravel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
34.11.1.98 pandas.TimedeltaIndex.reindex . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
34.11.1.99 pandas.TimedeltaIndex.rename . . . . . . . . . . . . . . . . . . . . . . . . . . 1751
34.11.1.100 pandas.TimedeltaIndex.repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . 1751
34.11.1.101 pandas.TimedeltaIndex.reshape . . . . . . . . . . . . . . . . . . . . . . . . . . 1751
34.11.1.102 pandas.TimedeltaIndex.round . . . . . . . . . . . . . . . . . . . . . . . . . . . 1751
34.11.1.103 pandas.TimedeltaIndex.searchsorted . . . . . . . . . . . . . . . . . . . . . . . 1751
34.11.1.104 pandas.TimedeltaIndex.set_names . . . . . . . . . . . . . . . . . . . . . . . . 1753
34.11.1.105 pandas.TimedeltaIndex.set_value . . . . . . . . . . . . . . . . . . . . . . . . . 1753
34.11.1.106 pandas.TimedeltaIndex.shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1753
34.11.1.107 pandas.TimedeltaIndex.slice_indexer . . . . . . . . . . . . . . . . . . . . . . . 1754
34.11.1.108 pandas.TimedeltaIndex.slice_locs . . . . . . . . . . . . . . . . . . . . . . . . . 1754
34.11.1.109 pandas.TimedeltaIndex.sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1754
34.11.1.110 pandas.TimedeltaIndex.sort_values . . . . . . . . . . . . . . . . . . . . . . . . 1754
34.11.1.111 pandas.TimedeltaIndex.sortlevel . . . . . . . . . . . . . . . . . . . . . . . . . 1755
34.11.1.112 pandas.TimedeltaIndex.str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1755
34.11.1.113 pandas.TimedeltaIndex.summary . . . . . . . . . . . . . . . . . . . . . . . . . 1755
34.11.1.114 pandas.TimedeltaIndex.sym_diff . . . . . . . . . . . . . . . . . . . . . . . . . 1755
34.11.1.115 pandas.TimedeltaIndex.symmetric_difference . . . . . . . . . . . . . . . . . . 1755
34.11.1.116 pandas.TimedeltaIndex.take . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1756
34.11.1.117 pandas.TimedeltaIndex.to_datetime . . . . . . . . . . . . . . . . . . . . . . . . 1756
34.11.1.118 pandas.TimedeltaIndex.to_native_types . . . . . . . . . . . . . . . . . . . . . . 1756
34.11.1.119 pandas.TimedeltaIndex.to_pytimedelta . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.120 pandas.TimedeltaIndex.to_series . . . . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.121 pandas.TimedeltaIndex.tolist . . . . . . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.122 pandas.TimedeltaIndex.total_seconds . . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.123 pandas.TimedeltaIndex.transpose . . . . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.124 pandas.TimedeltaIndex.union . . . . . . . . . . . . . . . . . . . . . . . . . . . 1757
34.11.1.125 pandas.TimedeltaIndex.unique . . . . . . . . . . . . . . . . . . . . . . . . . . 1758
34.11.1.126 pandas.TimedeltaIndex.value_counts . . . . . . . . . . . . . . . . . . . . . . . 1758
34.11.1.127 pandas.TimedeltaIndex.view . . . . . . . . . . . . . . . . . . . . . . . . . . . 1758
34.11.1.128 pandas.TimedeltaIndex.where . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
34.11.2 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
34.11.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
34.12 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
34.12.1 Standard moving window functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
34.12.1.1 pandas.core.window.Rolling.count . . . . . . . . . . . . . . . . . . . . . . . . . . 1760
34.12.1.2 pandas.core.window.Rolling.sum . . . . . . . . . . . . . . . . . . . . . . . . . . . 1760
34.12.1.3 pandas.core.window.Rolling.mean . . . . . . . . . . . . . . . . . . . . . . . . . . 1760
34.12.1.4 pandas.core.window.Rolling.median . . . . . . . . . . . . . . . . . . . . . . . . . 1761
34.12.1.5 pandas.core.window.Rolling.var . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1761
34.12.1.6 pandas.core.window.Rolling.std . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1761
34.12.1.7 pandas.core.window.Rolling.min . . . . . . . . . . . . . . . . . . . . . . . . . . . 1761
34.12.1.8 pandas.core.window.Rolling.max . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
34.12.1.9 pandas.core.window.Rolling.corr . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
34.12.1.10 pandas.core.window.Rolling.cov . . . . . . . . . . . . . . . . . . . . . . . . . 1762
34.12.1.11 pandas.core.window.Rolling.skew . . . . . . . . . . . . . . . . . . . . . . . . 1763
34.12.1.12 pandas.core.window.Rolling.kurt . . . . . . . . . . . . . . . . . . . . . . . . . 1763
34.12.1.13 pandas.core.window.Rolling.apply . . . . . . . . . . . . . . . . . . . . . . . . 1763
34.12.1.14 pandas.core.window.Rolling.quantile . . . . . . . . . . . . . . . . . . . . . . . 1763
34.12.1.15 pandas.core.window.Window.mean . . . . . . . . . . . . . . . . . . . . . . . . 1763
34.12.1.16 pandas.core.window.Window.sum . . . . . . . . . . . . . . . . . . . . . . . . . 1764
34.12.2 Standard expanding window functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1764
34.12.2.1 pandas.core.window.Expanding.count . . . . . . . . . . . . . . . . . . . . . . . . 1764
34.12.2.2 pandas.core.window.Expanding.sum . . . . . . . . . . . . . . . . . . . . . . . . . 1765
34.12.2.3 pandas.core.window.Expanding.mean . . . . . . . . . . . . . . . . . . . . . . . . . 1765
34.12.2.4 pandas.core.window.Expanding.median . . . . . . . . . . . . . . . . . . . . . . . 1765
34.12.2.5 pandas.core.window.Expanding.var . . . . . . . . . . . . . . . . . . . . . . . . . . 1765
34.12.2.6 pandas.core.window.Expanding.std . . . . . . . . . . . . . . . . . . . . . . . . . . 1766
34.12.2.7 pandas.core.window.Expanding.min . . . . . . . . . . . . . . . . . . . . . . . . . 1766
34.12.2.8 pandas.core.window.Expanding.max . . . . . . . . . . . . . . . . . . . . . . . . . 1766
34.12.2.9 pandas.core.window.Expanding.corr . . . . . . . . . . . . . . . . . . . . . . . . . 1766
34.12.2.10 pandas.core.window.Expanding.cov . . . . . . . . . . . . . . . . . . . . . . . . 1767
34.12.2.11 pandas.core.window.Expanding.skew . . . . . . . . . . . . . . . . . . . . . . . 1767
34.12.2.12 pandas.core.window.Expanding.kurt . . . . . . . . . . . . . . . . . . . . . . . 1767
34.12.2.13 pandas.core.window.Expanding.apply . . . . . . . . . . . . . . . . . . . . . . . 1768
34.12.2.14 pandas.core.window.Expanding.quantile . . . . . . . . . . . . . . . . . . . . . 1768
34.12.3 Exponentially-weighted moving window functions . . . . . . . . . . . . . . . . . . . . . . 1768
34.12.3.1 pandas.core.window.EWM.mean . . . . . . . . . . . . . . . . . . . . . . . . . . . 1768
34.12.3.2 pandas.core.window.EWM.std . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1768
34.12.3.3 pandas.core.window.EWM.var . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
34.12.3.4 pandas.core.window.EWM.corr . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
34.12.3.5 pandas.core.window.EWM.cov . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
34.13 GroupBy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1.1 pandas.core.groupby.GroupBy.__iter__ . . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1.2 pandas.core.groupby.GroupBy.groups . . . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1.3 pandas.core.groupby.GroupBy.indices . . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1.4 pandas.core.groupby.GroupBy.get_group . . . . . . . . . . . . . . . . . . . . . . . 1770
34.13.1.5 pandas.Grouper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1771
34.13.2 Function application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1772
34.13.2.1 pandas.core.groupby.GroupBy.apply . . . . . . . . . . . . . . . . . . . . . . . . . 1772
34.13.2.2 pandas.core.groupby.GroupBy.aggregate . . . . . . . . . . . . . . . . . . . . . . . 1773
34.13.2.3 pandas.core.groupby.GroupBy.transform . . . . . . . . . . . . . . . . . . . . . . . 1773
34.13.3 Computations / Descriptive Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1773
34.13.3.1 pandas.core.groupby.GroupBy.count . . . . . . . . . . . . . . . . . . . . . . . . . 1773
34.13.3.2 pandas.core.groupby.GroupBy.cumcount . . . . . . . . . . . . . . . . . . . . . . . 1773
34.13.3.3 pandas.core.groupby.GroupBy.first . . . . . . . . . . . . . . . . . . . . . . . . . . 1774
34.13.3.4 pandas.core.groupby.GroupBy.head . . . . . . . . . . . . . . . . . . . . . . . . . . 1774
34.13.3.5 pandas.core.groupby.GroupBy.last . . . . . . . . . . . . . . . . . . . . . . . . . . 1775
34.13.3.6 pandas.core.groupby.GroupBy.max . . . . . . . . . . . . . . . . . . . . . . . . . . 1775
34.13.3.7 pandas.core.groupby.GroupBy.mean . . . . . . . . . . . . . . . . . . . . . . . . . 1775
34.13.3.8 pandas.core.groupby.GroupBy.median . . . . . . . . . . . . . . . . . . . . . . . . 1775
34.13.3.9 pandas.core.groupby.GroupBy.min . . . . . . . . . . . . . . . . . . . . . . . . . . 1776
34.13.3.10 pandas.core.groupby.GroupBy.nth . . . . . . . . . . . . . . . . . . . . . . . . 1776
34.13.3.11 pandas.core.groupby.GroupBy.ohlc . . . . . . . . . . . . . . . . . . . . . . . . 1777
34.13.3.12 pandas.core.groupby.GroupBy.prod . . . . . . . . . . . . . . . . . . . . . . . . 1777
34.13.3.13 pandas.core.groupby.GroupBy.size . . . . . . . . . . . . . . . . . . . . . . . . 1777
34.13.3.14 pandas.core.groupby.GroupBy.sem . . . . . . . . . . . . . . . . . . . . . . . . 1777
34.13.3.15 pandas.core.groupby.GroupBy.std . . . . . . . . . . . . . . . . . . . . . . . . . 1778
34.13.3.16 pandas.core.groupby.GroupBy.sum . . . . . . . . . . . . . . . . . . . . . . . . 1778
34.13.3.17 pandas.core.groupby.GroupBy.var . . . . . . . . . . . . . . . . . . . . . . . . . 1778
34.13.3.18 pandas.core.groupby.GroupBy.tail . . . . . . . . . . . . . . . . . . . . . . . . . 1778
34.13.3.19 pandas.core.groupby.DataFrameGroupBy.agg . . . . . . . . . . . . . . . . . . 1780
34.13.3.20 pandas.core.groupby.DataFrameGroupBy.all . . . . . . . . . . . . . . . . . . . 1781
34.13.3.21 pandas.core.groupby.DataFrameGroupBy.any . . . . . . . . . . . . . . . . . . . 1782
34.13.3.22 pandas.core.groupby.DataFrameGroupBy.bfill . . . . . . . . . . . . . . . . . . 1782
34.13.3.23 pandas.core.groupby.DataFrameGroupBy.corr . . . . . . . . . . . . . . . . . . 1782
34.13.3.24 pandas.core.groupby.DataFrameGroupBy.count . . . . . . . . . . . . . . . . . . 1782
34.13.3.25 pandas.core.groupby.DataFrameGroupBy.cov . . . . . . . . . . . . . . . . . . . 1783
34.13.3.26 pandas.core.groupby.DataFrameGroupBy.cummax . . . . . . . . . . . . . . . . 1783
34.13.3.27 pandas.core.groupby.DataFrameGroupBy.cummin . . . . . . . . . . . . . . . . 1783
34.13.3.28 pandas.core.groupby.DataFrameGroupBy.cumprod . . . . . . . . . . . . . . . . 1783
34.13.3.29 pandas.core.groupby.DataFrameGroupBy.cumsum . . . . . . . . . . . . . . . . 1783
34.13.3.30 pandas.core.groupby.DataFrameGroupBy.describe . . . . . . . . . . . . . . . . 1784
34.13.3.31 pandas.core.groupby.DataFrameGroupBy.diff . . . . . . . . . . . . . . . . . . . 1787
34.13.3.32 pandas.core.groupby.DataFrameGroupBy.ffill . . . . . . . . . . . . . . . . . . . 1787
34.13.3.33 pandas.core.groupby.DataFrameGroupBy.fillna . . . . . . . . . . . . . . . . . . 1787
34.13.3.34 pandas.core.groupby.DataFrameGroupBy.hist . . . . . . . . . . . . . . . . . . . 1788
34.13.3.35 pandas.core.groupby.DataFrameGroupBy.idxmax . . . . . . . . . . . . . . . . 1789
34.13.3.36 pandas.core.groupby.DataFrameGroupBy.idxmin . . . . . . . . . . . . . . . . . 1789
34.13.3.37 pandas.core.groupby.DataFrameGroupBy.mad . . . . . . . . . . . . . . . . . . 1790
34.13.3.38 pandas.core.groupby.DataFrameGroupBy.pct_change . . . . . . . . . . . . . . 1790
34.13.3.39 pandas.core.groupby.DataFrameGroupBy.plot . . . . . . . . . . . . . . . . . . 1790
34.13.3.40 pandas.core.groupby.DataFrameGroupBy.quantile . . . . . . . . . . . . . . . . 1791
34.13.3.41 pandas.core.groupby.DataFrameGroupBy.rank . . . . . . . . . . . . . . . . . . 1791
34.13.3.42 pandas.core.groupby.DataFrameGroupBy.resample . . . . . . . . . . . . . . . . 1792
34.13.3.43 pandas.core.groupby.DataFrameGroupBy.shift . . . . . . . . . . . . . . . . . . 1792
34.13.3.44 pandas.core.groupby.DataFrameGroupBy.size . . . . . . . . . . . . . . . . . . 1793
34.13.3.45 pandas.core.groupby.DataFrameGroupBy.skew . . . . . . . . . . . . . . . . . . 1793
34.13.3.46 pandas.core.groupby.DataFrameGroupBy.take . . . . . . . . . . . . . . . . . . 1793
34.13.3.47 pandas.core.groupby.DataFrameGroupBy.tshift . . . . . . . . . . . . . . . . . . 1793
34.13.3.48 pandas.core.groupby.SeriesGroupBy.nlargest . . . . . . . . . . . . . . . . . . . 1794
34.13.3.49 pandas.core.groupby.SeriesGroupBy.nsmallest . . . . . . . . . . . . . . . . . . 1795
34.13.3.50 pandas.core.groupby.SeriesGroupBy.nunique . . . . . . . . . . . . . . . . . . . 1795
34.13.3.51 pandas.core.groupby.SeriesGroupBy.unique . . . . . . . . . . . . . . . . . . . 1795
34.13.3.52 pandas.core.groupby.SeriesGroupBy.value_counts . . . . . . . . . . . . . . . . 1796
34.13.3.53 pandas.core.groupby.DataFrameGroupBy.corrwith . . . . . . . . . . . . . . . . 1796
34.13.3.54 pandas.core.groupby.DataFrameGroupBy.boxplot . . . . . . . . . . . . . . . . 1796
34.14 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1797
34.14.1 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1797
34.14.1.1 pandas.core.resample.Resampler.__iter__ . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.1.2 pandas.core.resample.Resampler.groups . . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.1.3 pandas.core.resample.Resampler.indices . . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.1.4 pandas.core.resample.Resampler.get_group . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.2 Function application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.2.1 pandas.core.resample.Resampler.apply . . . . . . . . . . . . . . . . . . . . . . . . 1798
34.14.2.2 pandas.core.resample.Resampler.aggregate . . . . . . . . . . . . . . . . . . . . . . 1800
34.14.2.3 pandas.core.resample.Resampler.transform . . . . . . . . . . . . . . . . . . . . . . 1801
34.14.3 Upsampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801
34.14.3.1 pandas.core.resample.Resampler.ffill . . . . . . . . . . . . . . . . . . . . . . . . . 1801
34.14.3.2 pandas.core.resample.Resampler.backfill . . . . . . . . . . . . . . . . . . . . . . . 1802
34.14.3.3 pandas.core.resample.Resampler.bfill . . . . . . . . . . . . . . . . . . . . . . . . . 1802
34.14.3.4 pandas.core.resample.Resampler.pad . . . . . . . . . . . . . . . . . . . . . . . . . 1802
34.14.3.5 pandas.core.resample.Resampler.fillna . . . . . . . . . . . . . . . . . . . . . . . . 1802
34.14.3.6 pandas.core.resample.Resampler.asfreq . . . . . . . . . . . . . . . . . . . . . . . . 1803
34.14.3.7 pandas.core.resample.Resampler.interpolate . . . . . . . . . . . . . . . . . . . . . 1803
34.14.4 Computations / Descriptive Stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1804
34.14.4.1 pandas.core.resample.Resampler.count . . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.2 pandas.core.resample.Resampler.nunique . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.3 pandas.core.resample.Resampler.first . . . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.4 pandas.core.resample.Resampler.last . . . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.5 pandas.core.resample.Resampler.max . . . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.6 pandas.core.resample.Resampler.mean . . . . . . . . . . . . . . . . . . . . . . . . 1805
34.14.4.7 pandas.core.resample.Resampler.median . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.8 pandas.core.resample.Resampler.min . . . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.9 pandas.core.resample.Resampler.ohlc . . . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.10 pandas.core.resample.Resampler.prod . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.11 pandas.core.resample.Resampler.size . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.12 pandas.core.resample.Resampler.sem . . . . . . . . . . . . . . . . . . . . . . . 1806
34.14.4.13 pandas.core.resample.Resampler.std . . . . . . . . . . . . . . . . . . . . . . . . 1807
34.14.4.14 pandas.core.resample.Resampler.sum . . . . . . . . . . . . . . . . . . . . . . . 1807
34.14.4.15 pandas.core.resample.Resampler.var . . . . . . . . . . . . . . . . . . . . . . . . 1807
34.15 Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
34.15.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
34.15.1.1 pandas.io.formats.style.Styler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
34.15.2 Style Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1818
34.15.3 Builtin Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1818
34.15.4 Style Export and Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1818
34.16 General utility functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1819
34.16.1 Working with options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1819
34.16.1.1 pandas.describe_option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1819
34.16.1.2 pandas.reset_option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
34.16.1.3 pandas.get_option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
34.16.1.4 pandas.set_option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
34.16.1.5 pandas.option_context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
34.16.2 Testing functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
34.16.2.1 pandas.testing.assert_frame_equal . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
34.16.2.2 pandas.testing.assert_series_equal . . . . . . . . . . . . . . . . . . . . . . . . . . 1834
34.16.2.3 pandas.testing.assert_index_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
34.16.3 Exceptions and warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
34.16.3.1 pandas.errors.DtypeWarning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.2 pandas.errors.EmptyDataError . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.3 pandas.errors.OutOfBoundsDatetime . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.4 pandas.errors.ParserError . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.5 pandas.errors.ParserWarning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.6 pandas.errors.PerformanceWarning . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.7 pandas.errors.UnsortedIndexError . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
34.16.3.8 pandas.errors.UnsupportedFunctionCall . . . . . . . . . . . . . . . . . . . . . . . 1837
34.16.4 Data types related functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
34.16.4.1 pandas.api.types.union_categoricals . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
34.16.4.2 pandas.api.types.infer_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
34.16.4.3 pandas.api.types.pandas_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
34.16.4.4 pandas.api.types.is_bool_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1840
34.16.4.5 pandas.api.types.is_categorical_dtype . . . . . . . . . . . . . . . . . . . . . . . . . 1841
34.16.4.6 pandas.api.types.is_complex_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
34.16.4.7 pandas.api.types.is_datetime64_any_dtype . . . . . . . . . . . . . . . . . . . . . . 1842
34.16.4.8 pandas.api.types.is_datetime64_dtype . . . . . . . . . . . . . . . . . . . . . . . . 1842
34.16.4.9 pandas.api.types.is_datetime64_ns_dtype . . . . . . . . . . . . . . . . . . . . . . . 1843
34.16.4.10 pandas.api.types.is_datetime64tz_dtype . . . . . . . . . . . . . . . . . . . . . . 1843
34.16.4.11 pandas.api.types.is_extension_type . . . . . . . . . . . . . . . . . . . . . . . . 1844
34.16.4.12 pandas.api.types.is_float_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . 1845
34.16.4.13 pandas.api.types.is_int64_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . 1845
34.16.4.14 pandas.api.types.is_integer_dtype . . . . . . . . . . . . . . . . . . . . . . . . . 1846
34.16.4.15 pandas.api.types.is_interval_dtype . . . . . . . . . . . . . . . . . . . . . . . . . 1847
34.16.4.16 pandas.api.types.is_numeric_dtype . . . . . . . . . . . . . . . . . . . . . . . . 1847
34.16.4.17 pandas.api.types.is_object_dtype . . . . . . . . . . . . . . . . . . . . . . . . . 1848
34.16.4.18 pandas.api.types.is_period_dtype . . . . . . . . . . . . . . . . . . . . . . . . . 1848
34.16.4.19 pandas.api.types.is_signed_integer_dtype . . . . . . . . . . . . . . . . . . . . . 1849
34.16.4.20 pandas.api.types.is_string_dtype . . . . . . . . . . . . . . . . . . . . . . . . . . 1849
34.16.4.21 pandas.api.types.is_timedelta64_dtype . . . . . . . . . . . . . . . . . . . . . . . 1850
34.16.4.22 pandas.api.types.is_timedelta64_ns_dtype . . . . . . . . . . . . . . . . . . . . . 1850
34.16.4.23 pandas.api.types.is_unsigned_integer_dtype . . . . . . . . . . . . . . . . . . . . 1851
34.16.4.24 pandas.api.types.is_sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1851
34.16.4.25 pandas.api.types.is_dict_like . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1852
34.16.4.26 pandas.api.types.is_file_like . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1852
34.16.4.27 pandas.api.types.is_list_like . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1853
34.16.4.28 pandas.api.types.is_named_tuple . . . . . . . . . . . . . . . . . . . . . . . . . 1853
34.16.4.29 pandas.api.types.is_iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1854
34.16.4.30 pandas.api.types.is_bool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1855
34.16.4.31 pandas.api.types.is_categorical . . . . . . . . . . . . . . . . . . . . . . . . . . 1855
34.16.4.32 pandas.api.types.is_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1855
34.16.4.33 pandas.api.types.is_datetimetz . . . . . . . . . . . . . . . . . . . . . . . . . . . 1855
34.16.4.34 pandas.api.types.is_float . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
34.16.4.35 pandas.api.types.is_hashable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
34.16.4.36 pandas.api.types.is_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
34.16.4.37 pandas.api.types.is_interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
34.16.4.38 pandas.api.types.is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
34.16.4.39 pandas.api.types.is_period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1857
34.16.4.40 pandas.api.types.is_re . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1857
34.16.4.41 pandas.api.types.is_re_compilable . . . . . . . . . . . . . . . . . . . . . . . . . 1858
34.16.4.42 pandas.api.types.is_scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1858
35 Internals 1859
35.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
35.1.1 MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1860
35.2 Subclassing pandas Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1860
35.2.1 Override Constructor Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1861
35.2.2 Define Original Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1862
36.16 pandas 0.14.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1901
36.16.1 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1902
36.17 pandas 0.13.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1904
36.17.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1904
36.17.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1905
36.17.3 Experimental Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1905
36.17.4 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1905
36.17.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1906
36.18 pandas 0.13.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1907
36.18.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1908
36.18.2 Experimental Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1908
36.18.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1908
36.18.4 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1911
36.18.5 Internal Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1914
36.18.6 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1916
36.19 pandas 0.12.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1921
36.19.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1922
36.19.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1922
36.19.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1923
36.19.4 Experimental Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
36.19.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
36.20 pandas 0.11.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1928
36.20.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1928
36.20.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1929
36.20.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1931
36.20.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1932
36.21 pandas 0.10.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1934
36.21.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1934
36.21.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1934
36.21.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1935
36.21.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1935
36.22 pandas 0.10.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
36.22.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1937
36.22.2 Experimental Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1938
36.22.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1938
36.22.4 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1938
36.22.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1940
36.23 pandas 0.9.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
36.23.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
36.23.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
36.23.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
36.23.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
36.24 pandas 0.9.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1944
36.24.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1944
36.24.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1944
36.24.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1945
36.24.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1945
36.25 pandas 0.8.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1949
36.25.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1949
36.25.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1949
36.25.3 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1950
36.26 pandas 0.8.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1951
36.26.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1951
36.26.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1952
36.26.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1953
36.26.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1954
36.27 pandas 0.7.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1955
36.27.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1955
36.27.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1956
36.27.3 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1956
36.28 pandas 0.7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1957
36.28.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1957
36.28.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1957
36.28.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1957
36.28.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1957
36.29 pandas 0.7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1958
36.29.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1958
36.29.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1958
36.29.3 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1959
36.30 pandas 0.7.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1959
36.30.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1959
36.30.2 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1961
36.30.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1961
36.30.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1962
36.30.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1965
36.31 pandas 0.6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
36.31.1 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
36.31.2 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
36.31.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
36.31.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1967
36.31.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1968
36.32 pandas 0.6.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1968
36.32.1 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1968
36.32.2 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1968
36.32.3 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1969
36.32.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1970
36.32.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1971
36.33 pandas 0.5.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1972
36.33.1 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1972
36.33.2 Deprecations Removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1973
36.33.3 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1973
36.33.4 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1974
36.33.5 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1975
36.33.6 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34 pandas 0.4.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1976
36.34.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1977
36.35 pandas 0.4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1977
36.35.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1977
36.35.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1977
36.35.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1978
36.35.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1978
36.35.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1978
36.36 pandas 0.4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1978
36.36.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1978
36.36.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
36.36.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
36.36.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
36.36.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
36.37 pandas 0.4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1980
36.37.1 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1980
36.37.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1981
36.37.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1982
36.37.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1983
36.37.5 Thanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1984
36.38 pandas 0.3.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1984
36.38.1 New features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1985
36.38.2 Improvements to existing features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1985
36.38.3 API Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1985
36.38.4 Bug Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1985
pandas: powerful Python data analysis toolkit, Release 0.20.1
PDF Version | Zipped HTML
Date: May 05, 2017 | Version: 0.20.1
Binary Installers: http://pypi.python.org/pypi/pandas
Source Repository: http://github.com/pandas-dev/pandas
Issues & Ideas: https://github.com/pandas-dev/pandas/issues
Q&A Support: http://stackoverflow.com/questions/tagged/pandas
Developer Mailing List: http://groups.google.com/group/pydata
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
relational or labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful
and flexible open source data analysis / manipulation tool available in any language. It is already well on its way
toward this goal.
pandas is well suited for many different kinds of data:
Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed
into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the
vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users,
DataFrame provides everything that R's data.frame provides and much more. pandas is built on top of NumPy
and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can
simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into
DataFrame objects
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes (possible to have multiple labels per tick)
Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading
data from the ultrafast HDF5 format
Time series-specific functionality: date range generation and frequency conversion, moving window statistics,
moving window linear regressions, date shifting and lagging, etc.
Many of these principles are here to address the shortcomings frequently experienced using other languages / scientific
research environments. For data scientists, working with data is typically divided into multiple stages: munging and
cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for plotting or
tabular display. pandas is the ideal tool for all of these tasks.
Some other notes
pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However,
as with anything else, generalization usually sacrifices performance. So if you focus on one feature for your
application, you may be able to create a faster specialized tool.
pandas is a dependency of statsmodels, making it an important part of the statistical computing ecosystem in
Python.
pandas has been used extensively in production in financial applications.
Note: This documentation assumes general familiarity with NumPy. If you haven't used NumPy much or at all, do
invest some time in learning about NumPy first.
See the package overview for more detail about what's in the library.
CHAPTER ONE
WHAT'S NEW
This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
New .agg() API for Series/DataFrame similar to the groupby-rolling-resample APIs, see here
Integration with the feather-format, including a new top-level pd.read_feather() and
DataFrame.to_feather() method, see here.
The .ix indexer has been deprecated, see here
Panel has been deprecated, see here
Addition of an IntervalIndex and Interval scalar type, see here
Improved user API when grouping by index levels in .groupby(), see here
Improved support for UInt64 dtypes, see here
A new orient for JSON serialization, orient='table', that uses the Table Schema spec and that gives the
possibility for a more interactive repr in the Jupyter Notebook, see here
Experimental support for exporting styled DataFrames (DataFrame.style) to Excel, see here
Window binary corr/cov operations now return a MultiIndexed DataFrame rather than a Panel, as Panel is
now deprecated, see here
Support for S3 handling now uses s3fs, see here
Google BigQuery support now uses the pandas-gbq library, see here
Warning: Pandas has changed the internal structure and layout of the codebase. This can affect imports that are
not from the top-level pandas.* namespace, please see the changes here.
Note: This is a combined release for 0.20.0 and 0.20.1. Version 0.20.1 contains one additional change for
backwards-compatibility with downstream projects using pandas' utils routines. (GH16250)
New features
    agg API for DataFrame/Series
    dtype keyword for data IO
    .to_datetime() has gained an origin parameter
    Groupby Enhancements
    Better support for compressed URLs in read_csv
    Pickle file I/O now supports compression
    UInt64 Support Improved
    GroupBy on Categoricals
    Table Schema Output
    SciPy sparse matrix from/to SparseDataFrame
    Excel output for styled DataFrames
    IntervalIndex
    Other Enhancements
Backwards incompatible API changes
    Possible incompatibility for HDF5 formats created with pandas < 0.13.0
    Map on Index types now return other Index types
    Accessing datetime fields of Index now return Index
    pd.unique will now be consistent with extension types
    S3 File Handling
    Partial String Indexing Changes
    Concat of different float dtypes will not automatically upcast
    Pandas Google BigQuery support has moved
    Memory Usage for Index is more Accurate
    DataFrame.sort_index changes
    Groupby Describe Formatting
    Window Binary Corr/Cov operations return a MultiIndex DataFrame
    HDFStore where string comparison
    Index.intersection and inner join now preserve the order of the left Index
    Pivot Table always returns a DataFrame
    Other API Changes
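1.1.1.1 agg API for DataFrame/Series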
Series & DataFrame have been enhanced to support the aggregation API. This is a familiar API from groupby,
window operations, and resampling. This allows aggregation operations in a concise way by using agg() and
transform(). The full documentation is here (GH1623).
Here is a sample:
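The construction of df is omitted from this extract; a minimal sketch that produces a frame of this shape
(the random values will differ) is:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
                  index=pd.date_range('2000-01-01', periods=10))
df.iloc[3:7] = np.nan  # the middle rows appear as NaN below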
In [3]: df
Out[3]:
A B C
2000-01-01 1.474071 -0.064034 -1.282782
2000-01-02 0.781836 -1.071357 0.441153
2000-01-03 2.353925 0.583787 0.221471
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.901805 1.171216 0.520260
2000-01-09 -1.197071 -1.066969 -0.303421
2000-01-10 -0.858447 0.306996 -0.028665
One can operate using string function names, callables, lists, or dictionaries of these.
Using a single function is equivalent to .apply.
In [4]: df.agg('sum')
Out[4]:
A 3.456119
B -0.140361
C -0.431984
dtype: float64
Using a dict provides the ability to apply specific aggregations per column. You will get a matrix-like output of all of
the aggregators. The output is indexed by the unique functions applied. Entries for functions that were not requested
for a particular column will be NaN:
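A sketch of such a call:

df.agg({'A': ['sum', 'min'], 'B': ['min', 'max']})
# The result is indexed by the unique functions (max, min, sum); a cell is
# NaN when that function was not requested for that column.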
When presented with mixed dtypes that cannot be aggregated, .agg() will only take the valid aggregations. This is
similar to how groupby .agg() works. (GH15015)
In [9]: df.dtypes
Out[9]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
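A sketch of aggregating this mixed-dtype frame; invalid aggregations (for example, summing the datetime
column) are simply dropped rather than raising:

df.agg(['min', 'sum'])

1.1.1.2 dtype keyword for data IO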
The 'python' engine for read_csv(), as well as the read_fwf() function for parsing fixed-width text files
and read_excel() for parsing Excel files, now accept the dtype keyword argument for specifying the types of
specific columns (GH14295). See the io docs for more information.
In [12]: pd.read_fwf(StringIO(data)).dtypes
Out[12]:
a int64
b int64
dtype: object
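The data string used above is not defined in this extract; a sketch of overriding the inferred types, assuming
a small two-column fixed-width block:

from io import StringIO
data = "a  b\n1  2\n3  4"
pd.read_fwf(StringIO(data), dtype={'a': 'float64', 'b': 'object'}).dtypes
# a    float64
# b     object
# dtype: object

1.1.1.3 .to_datetime() has gained an origin parameter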
to_datetime() has gained a new parameter, origin, to define a reference date from where to compute the
resulting timestamps when parsing numerical values with a specific unit specified. (GH11276, GH11745)
For example, with 1960-01-01 as the starting date:
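The example itself is not shown here; a sketch:

pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01'))
# DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'],
#               dtype='datetime64[ns]', freq=None)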
The default is origin='unix', which maps to 1970-01-01 00:00:00, commonly called the Unix epoch or
POSIX time. This was the previous default, so this is a backwards-compatible change.
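1.1.1.4 Groupby Enhancements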
Strings passed to DataFrame.groupby() as the by parameter may now reference either column names or index
level names. Previously, only column names could be referenced. This makes it easy to group by a column and an
index level at the same time. (GH5677)
In [16]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
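The intermediate steps are omitted; a sketch that builds the frame shown next:

index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])
df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 3, 3],
                   'B': np.arange(8)}, index=index)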
In [19]: df
Out[19]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
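The call that produces the next table is omitted; a sketch consistent with it, grouping by the index level
'second' together with the column 'A' and summing:

df.groupby(['second', 'A']).sum()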
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
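1.1.1.5 Better support for compressed URLs in read_csv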
The compression code was refactored (GH12688). As a result, reading dataframes from URLs in read_csv() or
read_table() now supports additional compression methods: xz, bz2, and zip (GH14570). Previously, only
gzip compression was supported. By default, compression of URLs and paths is now inferred from their file
extensions. Additionally, support for bz2 compression in the Python 2 C engine has improved (GH14874).
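A sketch of reading a bz2-compressed file directly from a URL (the URL below is illustrative; the compression
is inferred from the '.bz2' extension):

url = 'https://example.com/data/salaries.csv.bz2'  # hypothetical location
df = pd.read_table(url, compression='infer')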
In [24]: df.head(2)
Out[24]:
S X E M
0 13876 1 1 1
1 11608 1 3 0
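1.1.1.6 Pickle file I/O now supports compression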
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can now read from and write to
compressed pickle files. Compression methods can be an explicit parameter or be inferred from the file extension. See
the docs here.
In [25]: df = pd.DataFrame({
....: 'A': np.random.randn(1000),
....: 'B': 'foo',
....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
....:
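The round trip with an explicit compression argument is omitted here; a sketch (the file name is illustrative):

df.to_pickle("data.pkl.compress", compression="gzip")   # explicit compression
rt = pd.read_pickle("data.pkl.compress", compression="gzip")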
In [28]: rt.head()
Out[28]:
A B C
0 0.384316 foo 2013-01-01 00:00:00
1 1.574159 foo 2013-01-01 00:00:01
2 1.588931 foo 2013-01-01 00:00:02
3 0.476720 foo 2013-01-01 00:00:03
4 0.473424 foo 2013-01-01 00:00:04
The default is to infer the compression type from the extension (compression='infer'):
In [29]: df.to_pickle("data.pkl.gz")
In [30]: rt = pd.read_pickle("data.pkl.gz")
In [31]: rt.head()
Out[31]:
A B C
0 0.384316 foo 2013-01-01 00:00:00
1 1.574159 foo 2013-01-01 00:00:01
2 1.588931 foo 2013-01-01 00:00:02
3 0.476720 foo 2013-01-01 00:00:03
4 0.473424 foo 2013-01-01 00:00:04
In [32]: df["A"].to_pickle("s1.pkl.bz2")
In [33]: rt = pd.read_pickle("s1.pkl.bz2")
In [34]: rt.head()
Out[34]:
0 0.384316
1 1.574159
2 1.588931
3 0.476720
4 0.473424
Name: A, dtype: float64
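1.1.1.7 UInt64 Support Improved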
Pandas has significantly improved support for operations involving unsigned, or purely non-negative, integers.
Previously, handling these integers would result in improper rounding or data-type casting, leading to incorrect
results. Notably, a new numerical index, UInt64Index, has been created (GH14937):
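The construction is omitted; a sketch that yields the index shown below:

idx = pd.UInt64Index([1, 2, 3])
df = pd.DataFrame({'A': ['a', 'b', 'c']}, index=idx)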
In [37]: df.index
Out[37]: UInt64Index([1, 2, 3], dtype='uint64')
Bug in converting object elements of array-like objects to unsigned 64-bit integers (GH4471, GH14982)
Bug in Series.unique() in which unsigned 64-bit integers were causing overflow (GH14721)
Bug in DataFrame construction in which unsigned 64-bit integer elements were being converted to objects
(GH14881)
Bug in pd.read_csv() in which unsigned 64-bit integer elements were being improperly converted to the
wrong data types (GH14983)
Bug in pd.unique() in which unsigned 64-bit integers were causing overflow (GH14915)
Bug in pd.value_counts() in which unsigned 64-bit integers were being erroneously truncated in the
output (GH14934)
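1.1.1.8 GroupBy on Categoricals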
In previous versions, .groupby(..., sort=False) would fail with a ValueError when grouping on a
categorical series with some categories not appearing in the data. (GH13179)
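The chromosomes array used below is not defined in this extract; a sketch consistent with the categories
shown:

chromosomes = np.r_[np.arange(1, 23).astype(str), ['X', 'Y']]  # '1'..'22', 'X', 'Y'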
In [39]: df = pd.DataFrame({
....: 'A': np.random.randint(100),
....: 'B': np.random.randint(100),
....: 'C': np.random.randint(100),
....: 'chromosomes': pd.Categorical(np.random.choice(chromosomes, 100),
....: categories=chromosomes,
....: ordered=True)})
....:
In [40]: df
Out[40]:
A B C chromosomes
0 21 62 10 17
1 21 62 10 Y
2 21 62 10 13
3 21 62 10 8
4 21 62 10 22
5 21 62 10 3
6 21 62 10 19
.. .. .. .. ...
93 21 62 10 17
94 21 62 10 Y
95 21 62 10 Y
96 21 62 10 22
97 21 62 10 5
98 21 62 10 20
99 21 62 10 X
Previous Behavior:
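The example is omitted here; a sketch of the call that used to fail, filtering out one category and grouping
with sort=False:

df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
# ValueError (the grouper's categories no longer matched the data)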
New Behavior:
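The same call now succeeds, returning a row per category (a sketch):

df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()

1.1.1.9 Table Schema Output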
The new orient 'table' for DataFrame.to_json() will generate a Table Schema compatible string
representation of the data.
In [42]: df = pd.DataFrame(
....: {'A': [1, 2, 3],
....: 'B': ['a', 'b', 'c'],
....: 'C': pd.date_range('2016-01-01', freq='d', periods=3),
....: }, index=pd.Index(range(3), name='idx'))
....:
In [43]: df
Out[43]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [44]: df.to_json(orient='table')
Out[44]: '{"schema": {"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},
{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],
"pandas_version":"0.20.0"}, "data": [{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000Z"},
{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},
{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'
Pandas now supports creating sparse dataframes directly from scipy.sparse.spmatrix instances. See the
documentation for more information. (GH4343)
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as
needed.
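The construction is omitted; a sketch that produces the matrix and frame shown below:

from scipy.sparse import csr_matrix
arr = np.random.random(size=(1000, 5))
arr[arr < .9] = 0                 # keep roughly 10% of the entries
sp_arr = csr_matrix(arr)
sdf = pd.SparseDataFrame(sp_arr)  # SparseDataFrame now accepts an spmatrix directly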
In [49]: sp_arr
Out[49]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 500 stored elements in Compressed Sparse Row format>
In [51]: sdf
Out[51]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN 0.997522
4 NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN 0.911034
6 NaN NaN NaN NaN NaN
.. ... .. .. .. ...
993 0.925879 NaN NaN NaN NaN
994 NaN NaN NaN NaN 0.955585
995 NaN NaN NaN NaN NaN
996 NaN NaN NaN NaN NaN
997 NaN NaN NaN NaN NaN
998 NaN NaN NaN NaN 0.904855
999 NaN NaN NaN NaN NaN
To convert a SparseDataFrame back to a sparse SciPy matrix in COO format, you can use:
In [52]: sdf.to_coo()
Out[52]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 500 stored elements in COOrdinate format>
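1.1.1.11 Excel output for styled DataFrames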
Experimental support has been added to export DataFrame.style formats to Excel using the openpyxl engine.
(GH15530)
For example, after running the following, styled.xlsx renders the frame with the styles applied:
In [53]: np.random.seed(24)
In [57]: df
Out[57]: (a 10x5 frame with columns A B C D E; the table body is truncated here)
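The styling and export steps are omitted here; a sketch, assuming the 10x5 numeric frame df above:

styled = (df.style
            .applymap(lambda val: 'color: red' if val < 0 else 'color: black')
            .highlight_max())
styled.to_excel('styled.xlsx', engine='openpyxl')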
1.1.1.12 IntervalIndex
pandas has gained an IntervalIndex with its own dtype, interval, as well as the Interval scalar type. These
allow first-class support for interval notation, specifically as a return type for the categories in cut() and qcut().
The IntervalIndex allows some unique indexing, see the docs. (GH7640, GH8625)
Warning: These indexing behaviors of the IntervalIndex are provisional and may change in a future version of
pandas. Feedback on usage is welcome.
Previous behavior:
The returned categories were strings, representing Intervals
In [2]: c
Out[2]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3], (1.5, 3]]
Categories (2, object): [(-0.003, 1.5] < (1.5, 3]]
In [3]: c.categories
Out[3]: Index(['(-0.003, 1.5]', '(1.5, 3]'], dtype='object')
New behavior:
In [60]: c = pd.cut(range(4), bins=2)
In [61]: c
Out[61]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]
In [62]: c.categories
Out[62]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], closed='right', dtype='interval[float64]')
Furthermore, this allows one to bin other data with these same bins, with NaN representing a missing value similar to
other dtypes.
In [63]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[63]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]
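The frame below is constructed along these lines (a sketch; the construction is omitted here):

df = pd.DataFrame({'A': range(4),
                   'B': pd.cut([0, 3, 1, 1], bins=c.categories)}).set_index('B')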
In [65]: df
Out[65]:
A
B
(-0.003, 1.5] 0
(1.5, 3.0] 1
(-0.003, 1.5] 2
(-0.003, 1.5] 3
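Indexing with a scalar selects every row whose interval contains that value (an exact Interval, e.g.
pd.Interval(1.5, 3.0), can also be used as the indexer):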
In [67]: df.loc[0]
Out[67]:
A
B
(-0.003, 1.5] 0
(-0.003, 1.5] 2
(-0.003, 1.5] 3
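1.1.2 Backwards incompatible API changes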
1.1.2.1 Possible incompatibility for HDF5 formats created with pandas < 0.13.0
pd.TimeSeries was deprecated officially in 0.17.0, though it had already been an alias of pd.Series since
0.13.0. It has been dropped in favor of pd.Series. (GH15098).
This may cause HDF5 files that were created in prior versions to become unreadable if pd.TimeSeries was used.
This is most likely to affect files from pandas < 0.13.0. If you find yourself in this situation, you can use a recent
prior version of pandas to read in your HDF5 files, then write them out again after applying the procedure below.
In [3]: s
Out[3]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [4]: type(s)
Out[4]: pandas.core.series.TimeSeries
In [5]: s = pd.Series(s)
In [6]: s
Out[6]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [7]: type(s)
Out[7]: pandas.core.series.Series
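1.1.2.2 Map on Index types now return other Index types

map on an Index now returns an Index rather than a numpy array. The objects used below are not constructed
in this extract; a sketch:

idx = pd.Index([1, 2])
mi = pd.MultiIndex.from_tuples([(1, 2), (2, 4)])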
In [69]: idx
Out[69]: Int64Index([1, 2], dtype='int64')
In [71]: mi
Out[71]:
MultiIndex(levels=[[1, 2], [2, 4]],
labels=[[0, 1], [0, 1]])
Previous Behavior:
In [5]: idx.map(lambda x: x * 2)
Out[5]: array([2, 4])
In [7]: mi.map(lambda x: x)
Out[7]: array([(1, 2), (2, 4)], dtype=object)
New Behavior:
In [72]: idx.map(lambda x: x * 2)
Out[72]: Int64Index([2, 4], dtype='int64')
In [74]: mi.map(lambda x: x)
Out[74]: MultiIndex(levels=[[1, 2], [2, 4]], labels=[[0, 1], [0, 1]])
map on a Series with datetime64 values may now return int64 dtypes rather than int32:
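The series construction is omitted; a sketch matching the output below:

s = pd.Series(pd.date_range('2011-01-02T00:00', '2011-01-02T02:00',
                            freq='H').tz_localize('Asia/Tokyo'))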
In [77]: s
Out[77]:
0 2011-01-02 00:00:00+09:00
1 2011-01-02 01:00:00+09:00
2 2011-01-02 02:00:00+09:00
dtype: datetime64[ns, Asia/Tokyo]
Previous Behavior:
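The example is omitted here; mapping to the hour previously came back as int32 (a sketch):

s.map(lambda x: x.hour)
# dtype: int32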
New Behavior:
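The same call now returns int64:

s.map(lambda x: x.hour)
# 0    0
# 1    1
# 2    2
# dtype: int64

1.1.2.3 Accessing datetime fields of Index now return Index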
The datetime-related attributes (see here for an overview) of DatetimeIndex, PeriodIndex and
TimedeltaIndex previously returned numpy arrays. They will now return a new Index object, except in the
case of a boolean field, where the result will still be a boolean ndarray. (GH15022)
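The index used below is not constructed in this extract; a sketch:

idx = pd.date_range('2015-01-01', periods=5, freq='10H')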
Previous behaviour:
In [2]: idx.hour
Out[2]: array([ 0, 10, 20, 6, 16], dtype=int32)
New Behavior:
In [80]: idx.hour
Out[80]: Int64Index([0, 10, 20, 6, 16], dtype='int64')
This has the advantage that specific Index methods are still available on the result. On the other hand, this might
have backward incompatibilities: e.g. compared to numpy arrays, Index objects are not mutable. To get the original
ndarray, you can always convert explicitly using np.asarray(idx.hour).
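1.1.2.4 pd.unique will now be consistent with extension types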
In prior versions, using Series.unique() and pandas.unique() on Categorical and tz-aware data-types
would yield different return types. These are now made consistent. (GH15903)
Datetime tz-aware
Previous behaviour:
# Series
In [5]: pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[5]: array([Timestamp('2016-01-01 00:00:00-0500', tz='US/Eastern')],
dtype=object)
# Index
In [7]: pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[7]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/
Eastern]', freq=None)
New Behavior:
# Series
In [82]: pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
                    pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[82]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/Eastern]',
freq=None)
# Index
In [83]: pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
                   pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[83]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/Eastern]',
freq=None)
Categoricals
Previous behaviour:
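The example is omitted; previously pd.unique() on a category dtype returned a plain object ndarray,
while Series.unique() returned a Categorical (a sketch):

pd.unique(pd.Series(list('baabc'), dtype='category'))
# array(['b', 'a', 'c'], dtype=object)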
New Behavior:
# returns a Categorical
In [85]: pd.Series(list('baabc'), dtype='category').unique()
Out[85]:
[b, a, c]
Categories (3, object): [b, a, c]
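1.1.2.5 S3 File Handling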
pandas now uses s3fs for handling S3 connections. This shouldn't break any code. However, since s3fs is not a
required dependency, you will need to install it separately, like boto in prior versions of pandas. (GH11915).
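1.1.2.6 Partial String Indexing Changes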
DatetimeIndex Partial String Indexing now works as an exact match, provided that string resolution coincides with
index resolution, including a case when both are seconds (GH14826). See Slice vs. Exact Match for details.
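The frame used in the examples is constructed along these lines (a sketch; omitted here):

df = pd.DataFrame({'a': [1, 2, 3]},
                  index=pd.DatetimeIndex(['2011-12-31 23:59:59',
                                          '2012-01-01 00:00:00',
                                          '2012-01-01 00:00:01']))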
Previous Behavior:
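A sketch of the omitted example; a full-resolution string used to act as a slice:

df['2011-12-31 23:59:59']
#                      a
# 2011-12-31 23:59:59  1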
New Behavior:
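The same string is now an exact match, so on a DataFrame it is treated as a column lookup and raises a
KeyError, while on a Series it returns the scalar:

df['a']['2011-12-31 23:59:59']
# 1

1.1.2.7 Concat of different float dtypes will not automatically upcast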
Previously, concat of multiple objects with different float dtypes would automatically upcast results to a dtype of
float64. Now the smallest acceptable dtype will be used (GH13247)
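The frames are constructed as float32 (a sketch; the construction is omitted here):

df1 = pd.DataFrame(np.array([1.0], dtype=np.float32, ndmin=2))
df2 = pd.DataFrame(np.array([np.nan], dtype=np.float32, ndmin=2))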
In [89]: df1.dtypes
Out[89]:
0 float32
dtype: object
In [91]: df2.dtypes
Out[91]:
0 float32
dtype: object
Previous Behavior:
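A sketch of the omitted example; the result used to be upcast:

pd.concat([df1, df2]).dtypes
# 0    float64
# dtype: object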
New Behavior:
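The smallest acceptable dtype is now kept:

pd.concat([df1, df2]).dtypes
# 0    float32
# dtype: object

1.1.2.8 Pandas Google BigQuery support has moved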
pandas has split off Google BigQuery support into a separate package pandas-gbq. You can conda install
pandas-gbq -c conda-forge or pip install pandas-gbq to get it. The functionality of read_gbq()
and DataFrame.to_gbq() remains the same with the currently released version of pandas-gbq=0.1.4.
Documentation is now hosted here (GH15347)
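1.1.2.9 Memory Usage for Index is more Accurate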
In previous versions, showing .memory_usage() on a pandas structure that has an index would only include
actual index values and not include structures that facilitate fast indexing. This will generally be different for Index
and MultiIndex and less so for other index types. (GH15237)
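The index used below is not constructed in this extract; a sketch:

index = pd.Index(['foo', 'bar', 'baz'])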
Previous Behavior:
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 180
New Behavior:
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 260
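1.1.2.10 DataFrame.sort_index changes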
In certain cases, calling .sort_index() on a MultiIndexed DataFrame would return the same DataFrame without
seeming to sort. This would happen with lexsorted, but non-monotonic, levels. (GH15622, GH15687, GH14015,
GH13431, GH15797)
This is unchanged from prior versions, but shown for illustration purposes:
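The frame construction is omitted; a sketch consistent with the output:

df = pd.DataFrame({'value': range(6)},
                  index=pd.MultiIndex.from_product([list('BA'), range(3)]))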
In [94]: df
Out[94]:
value
B 0 0
1 1
2 2
A 0 3
1 4
2 5
In [95]: df.index.is_lexsorted()
Out[95]: False
In [96]: df.index.is_monotonic
Out[96]: False
In [97]: df.sort_index()
Out[97]:
value
A 0 3
1 4
2 5
B 0 0
1 1
2 2
In [98]: df.sort_index().index.is_lexsorted()
Out[98]: True
In [99]: df.sort_index().index.is_monotonic
Out[99]: True
However, this example, which has a non-monotonic 2nd level, doesn't behave as desired.
In [100]: df = pd.DataFrame(
.....: {'value': [1, 2, 3, 4]},
.....: index=pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
.....: labels=[[0, 0, 1, 1], [0, 1, 0, 1]]))
.....:
In [101]: df
Out[101]:
value
a bb 1
aa 2
b bb 3
aa 4
Previous Behavior:
In [11]: df.sort_index()
Out[11]:
value
a bb 1
aa 2
b bb 3
aa 4
In [14]: df.sort_index().index.is_lexsorted()
Out[14]: True
In [15]: df.sort_index().index.is_monotonic
Out[15]: False
New Behavior:
In [102]: df.sort_index()
Out[102]:
value
a aa 2
bb 1
b aa 4
bb 3
In [103]: df.sort_index().index.is_lexsorted()
Out[103]: True
In [104]: df.sort_index().index.is_monotonic
Out[104]: True
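1.1.2.11 Groupby Describe Formatting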
The output formatting of groupby.describe() now labels the describe() metrics in the columns instead of
the index. This format is consistent with groupby.agg() when applying multiple functions at once. (GH4792)
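The frame used below is not shown; a sketch consistent with the output:

df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4]})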
Previous Behavior:
In [2]: df.groupby('A').describe()
Out[2]:
B
A
1 count 2.000000
mean 1.500000
std 0.707107
min 1.000000
25% 1.250000
50% 1.500000
75% 1.750000
max 2.000000
2 count 2.000000
mean 3.500000
std 0.707107
min 3.000000
25% 3.250000
50% 3.500000
75% 3.750000
max 4.000000
New Behavior:
In [106]: df.groupby('A').describe()
Out[106]:
B
count mean std min 25% 50% 75% max
A
1 2.0 1.5 0.707107 1.0 1.25 1.5 1.75 2.0
2 2.0 3.5 0.707107 3.0 3.25 3.5 3.75 4.0
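The call producing the next table is omitted; a sketch consistent with its column labels (the numpy reductions
np.min and np.max carry the names 'amin' and 'amax'):

df.groupby('A').agg([np.mean, np.std, np.min, np.max])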
B
mean std amin amax
A
1 1.5 0.707107 1 2
2 3.5 0.707107 3 4
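1.1.2.12 Window Binary Corr/Cov operations return a MultiIndex DataFrame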
A binary window operation, like .corr() or .cov(), when operating on a .rolling(..), .expanding(..),
or .ewm(..) object, will now return a 2-level MultiIndexed DataFrame rather than a Panel, as Panel
is now deprecated, see here. These are equivalent in function, but a MultiIndexed DataFrame enjoys more support
in pandas. See the section on Windowed Binary Operations for more information. (GH15677)
In [108]: np.random.seed(1234)
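The frame construction is omitted; a sketch matching the output below:

df = pd.DataFrame(np.random.rand(100, 2),
                  columns=pd.Index(['A', 'B'], name='bar'),
                  index=pd.date_range('2016-01-01', periods=100, freq='D'))
df.index.name = 'foo'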
In [110]: df.tail()
Out[110]:
bar A B
foo
2016-04-05 0.640880 0.126205
2016-04-06 0.171465 0.737086
2016-04-07 0.127029 0.369650
2016-04-08 0.604334 0.103104
2016-04-09 0.802374 0.945553
Previous Behavior:
In [2]: df.rolling(12).corr()
Out[2]:
<class 'pandas.core.panel.Panel'>
Dimensions: 100 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: 2016-01-01 00:00:00 to 2016-04-09 00:00:00
Major_axis axis: A to B
Minor_axis axis: A to B
New Behavior:
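res is not defined in this extract; a sketch:

res = df.rolling(12).corr()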
In [112]: res.tail()
Out[112]:
bar A B
foo bar
2016-04-07 B -0.132090 1.000000
2016-04-08 A 1.000000 -0.145775
B -0.145775 1.000000
2016-04-09 A 1.000000 0.119645
B 0.119645 1.000000
In [113]: df.rolling(12).corr().loc['2016-04-07']
Out[113]:
bar A B
foo bar
2016-04-07 A 1.00000 -0.13209
B -0.13209 1.00000
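1.1.2.13 HDFStore where string comparison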
In previous versions, most types could be compared to a string column in an HDFStore, usually resulting in an
invalid comparison that returned an empty result frame. These comparisons will now raise a TypeError (GH15492)
In [116]: df.dtypes
Out[116]:
unparsed_date object
dtype: object
Previous Behavior:
New Behavior:
In [18]: ts = pd.Timestamp('2014-01-01')
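The surrounding steps are omitted; a sketch of the sequence (the store name is illustrative):

df.to_hdf('store.h5', 'key', format='table', data_columns=True)
pd.read_hdf('store.h5', 'key', where='unparsed_date > ts')
# previously: silently returned an empty DataFrame
# now: raises TypeError, since a Timestamp cannot be compared to a string column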
1.1.2.14 Index.intersection and inner join now preserve the order of the left Index
Index.intersection() now preserves the order of the calling Index (left) instead of the other Index (right)
(GH15582). This affects inner joins, DataFrame.join() and merge(), and the .align method.
Index.intersection
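The indexes are constructed along these lines (a sketch; omitted here):

left = pd.Index([2, 1, 0])
right = pd.Index([1, 2, 3])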
In [118]: left
Out[118]: Int64Index([2, 1, 0], dtype='int64')
In [120]: right
Out[120]: Int64Index([1, 2, 3], dtype='int64')
Previous Behavior:
In [4]: left.intersection(right)
Out[4]: Int64Index([1, 2], dtype='int64')
New Behavior:
In [121]: left.intersection(right)
Out[121]: Int64Index([2, 1], dtype='int64')
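Inner joins via DataFrame.join behave the same way; the frames used next are constructed along these
lines (a sketch):

left = pd.DataFrame({'a': [20, 10, 0]}, index=[2, 1, 0])
right = pd.DataFrame({'b': [100, 200, 300]}, index=[1, 2, 3])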
In [123]: left
Out[123]:
a
2 20
1 10
0 0
In [125]: right
Out[125]:
b
1 100
2 200
3 300
Previous Behavior:
In [4]: left.join(right, how='inner')
Out[4]:
a b
1 10 100
2 20 200
New Behavior:
In [126]: left.join(right, how='inner')
Out[126]:
a b
2 20 200
1 10 100
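1.1.2.15 Pivot Table always returns a DataFrame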
The documentation for pivot_table() states that a DataFrame is always returned. Here a bug is fixed that
allowed this to return a Series under certain circumstances. (GH4386)
In [127]: df = pd.DataFrame({'col1': [3, 4, 5],
.....: 'col2': ['C', 'D', 'E'],
.....: 'col3': [1, 3, 9]})
.....:
In [128]: df
Out[128]:
col1 col2 col3
0 3 C 1
1 4 D 3
2 5 E 9
Previous Behavior:
In [2]: df.pivot_table('col1', index=['col3', 'col2'], aggfunc=np.sum)
Out[2]:
col3 col2
1 C 3
3 D 4
9 E 5
Name: col1, dtype: int64
New Behavior:
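The example is omitted here; the same call now returns a one-column DataFrame (a sketch):

df.pivot_table('col1', index=['col3', 'col2'], aggfunc=np.sum)
#            col1
# col3 col2
# 1    C       3
# 3    D       4
# 9    E       5

1.1.2.16 Other API Changes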
numexpr version is now required to be >= 2.4.6, and it will not be used at all if this requirement is not fulfilled
(GH15213).
CParserError has been renamed to ParserError in pd.read_csv() and will be removed in the future
(GH12665)
SparseArray.cumsum() and SparseSeries.cumsum() will now always return SparseArray and
SparseSeries respectively (GH12855)
DataFrame.applymap() with an empty DataFrame will return a copy of the empty DataFrame instead
of a Series (GH8222)
Series.map() now respects default values of dictionary subclasses with a __missing__ method, such as
collections.Counter (GH15999)
.loc has compat with .ix for accepting iterators, and NamedTuples (GH15120)
interpolate() and fillna() will raise a ValueError if the limit keyword argument is not greater
than 0. (GH9217)
pd.read_csv() will now issue a ParserWarning whenever there are conflicting values provided by the
dialect parameter and the user (GH14898)
pd.read_csv() will now raise a ValueError for the C engine if the quote character is larger than
one byte (GH11592)
inplace arguments now require a boolean value, else a ValueError is thrown (GH14189)
pandas.api.types.is_datetime64_ns_dtype will now report True on a tz-aware dtype, similar
to pandas.api.types.is_datetime64_any_dtype.
DataFrame.asof() will return a null-filled Series instead of the scalar NaN if a match is not found
(GH15118)
Specific support for copy.copy() and copy.deepcopy() functions on NDFrame objects (GH15444)
Series.sort_values() accepts a one element list of bool for consistency with the behavior of
DataFrame.sort_values() (GH15604)
.merge() and .join() on category dtype columns will now preserve the category dtype when possible
(GH10409)
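1.1.3 Reorganization of the library: Privacy Changes

1.1.3.1 Modules Privacy Has Changed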
Some formerly public python/c/c++/cython extension modules have been moved and/or renamed. These are all
removed from the public API. Furthermore, the pandas.core, pandas.compat, and pandas.util top-level
modules are now considered to be PRIVATE. If indicated, a deprecation warning will be issued if you reference
these modules. (GH12588)
1.1.3.2 pandas.errors
We are adding a standard public module for all pandas exceptions & warnings, pandas.errors (GH14800).
Previously these exceptions & warnings could be imported from pandas.core.common or pandas.io.common.
These exceptions and warnings will be removed from the *.common locations in a future release. (GH15541)
The following are now part of this API:
['DtypeWarning',
'EmptyDataError',
'OutOfBoundsDatetime',
'ParserError',
'ParserWarning',
'PerformanceWarning',
'UnsortedIndexError',
'UnsupportedFunctionCall']
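A sketch of the new import location; the ragged CSV below genuinely triggers a ParserError:

import pandas as pd
from io import StringIO
from pandas.errors import ParserError

try:
    pd.read_csv(StringIO('a,b\n1,2\n3,4,5,6\n'))  # row 3 has too many fields
except ParserError as err:
    print(err)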
1.1.3.3 pandas.testing
We are adding a standard module that exposes the public testing functions in pandas.testing (GH9895). Those
functions can be used when writing tests for functionality using pandas objects.
The following testing functions are now part of this API:
testing.assert_frame_equal()
testing.assert_series_equal()
testing.assert_index_equal()
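A sketch of usage in a test:

import pandas as pd
from pandas.testing import assert_frame_equal

left = pd.DataFrame({'a': [1, 2]})
right = pd.DataFrame({'a': [1, 2]})
assert_frame_equal(left, right)  # silent on success, raises AssertionError otherwise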
1.1.3.4 pandas.plotting
A new public pandas.plotting module has been added that holds plotting functionality that was previously in
either pandas.tools.plotting or in the top-level namespace. See the deprecations sections for more details.
Building pandas for development now requires cython >= 0.23 (GH14831)
At least version 0.23 of cython is required, to avoid problems with character encodings (GH14699)
Switched the test framework to use pytest (GH13097)
Reorganization of tests directory layout (GH14854, GH15707).
1.1.4 Deprecations
The .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers. .ix offers a lot of magic on the
inference of what the user wants to do. To wit, .ix can decide to index positionally OR via labels, depending on the
data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation
is here. (GH14218)
The recommended methods of indexing are:
.loc if you want to label index
.iloc if you want to positionally index.
Using .ix will now show a DeprecationWarning with a link to some examples of how to convert code here.
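The frame is constructed along these lines (a sketch; omitted here):

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=list('abc'))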
In [131]: df
Out[131]:
A B
a 1 4
b 2 5
c 3 6
Previous Behavior, where you wish to get the 0th and the 2nd elements from the index in the A column.
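A sketch of the deprecated call:

df.ix[[0, 2], 'A']  # positional rows, label column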
Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.
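A sketch:

df.loc[df.index[[0, 2]], 'A']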
Using .iloc. Here we will get the location of the A column, then use positional indexing to select things.
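A sketch:

df.iloc[[0, 2], df.columns.get_loc('A')]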
Panel is deprecated and will be removed in a future version. The recommended way to represent 3-D data is
with a MultiIndex on a DataFrame via the to_frame() method or with the xarray package. Pandas provides
a to_xarray() method to automate this conversion. For more details see Deprecate Panel documentation.
(GH13563).
In [134]: p = tm.makePanel()
In [135]: p
Out[135]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
In [136]: p.to_frame()
Out[136]:
ItemA ItemB ItemC
major minor
2000-01-03 A 0.628776 -1.409432 0.209395
B 0.988138 -1.347533 -0.896581
C -0.938153 1.272395 -0.161137
D -0.223019 -0.591863 -1.051539
2000-01-04 A 0.186494 1.422986 -0.592886
In [137]: p.to_xarray()
Out[137]:
<xarray.DataArray (items: 3, major_axis: 3, minor_axis: 4)>
array([[[ 0.628776, 0.988138, -0.938153, -0.223019],
[ 0.186494, -0.072608, -1.239072, 2.123692],
[ 0.952478, -0.550603, 0.139683, 0.122273]],
In [139]: df
Out[139]:
A B C
0 1 0 0
1 1 1 1
2 1 2 2
3 2 3 3
4 2 4 4
Here is a typical, useful syntax for computing different aggregations for different columns. We aggregate from the
dict-to-list by taking the specified columns and applying the list of functions. This returns a MultiIndex for the
columns (this is not deprecated).
In [140]: df.groupby('A').agg({'B': 'sum', 'C': 'min'})
Out[140]:
B C
A
1 3 0
2 7 3
Here's an example of the first deprecation: passing a dict to a grouped Series. This is a combination aggregation &
renaming:
In [6]: df.groupby('A').B.agg({'foo': 'count'})
FutureWarning: using a dict on a Series for aggregation
is deprecated and will be removed in a future version
Out[6]:
foo
A
1 3
2 2
Here's an example of the second deprecation: passing a dict-of-dicts to a grouped DataFrame, which combines
aggregation & renaming:
In [23]: df.groupby('A').agg({'B': {'foo': 'sum'}, 'C': {'bar': 'min'}})
FutureWarning: using a dict with renaming is deprecated and
will be removed in a future version
Out[23]:
B C
foo bar
A
1 3 0
2 7 3
You can accomplish nearly the same by renaming after the aggregation:
In [142]: df.groupby('A').agg({'B': 'sum', 'C': 'min'}).rename(
   .....:     columns={'B': 'foo', 'C': 'bar'})
   .....:
Out[142]:
foo bar
A
1 3 0
2 7 3
The pandas.tools.plotting module has been deprecated, in favor of the top level pandas.plotting mod-
ule. All the public plotting functions are now available from pandas.plotting (GH12548).
Furthermore, the top-level pandas.scatter_matrix and pandas.plot_params are deprecated. Users can
import these from pandas.plotting as well.
Previous script:
pd.tools.plotting.scatter_matrix(df)
pd.scatter_matrix(df)
Should be changed to:
pd.plotting.scatter_matrix(df)
SparseArray.to_dense() has deprecated the fill parameter, as that parameter was not being respected
(GH14647)
SparseSeries.to_dense() has deprecated the sparse_only parameter (GH14647)
Series.repeat() has deprecated the reps parameter in favor of repeats (GH12662)
The Series constructor and .astype method have deprecated accepting timestamp dtypes without a fre-
quency (e.g. np.datetime64) for the dtype parameter (GH15524)
Index.repeat() and MultiIndex.repeat() have deprecated the n parameter in favor of repeats
(GH12662)
Categorical.searchsorted() and Series.searchsorted() have deprecated the v parameter in
favor of value (GH12662)
TimedeltaIndex.searchsorted(), DatetimeIndex.searchsorted(), and PeriodIndex.
searchsorted() have deprecated the key parameter in favor of value (GH12662)
DataFrame.astype() has deprecated the raise_on_error parameter in favor of errors (GH14878)
Series.sortlevel and DataFrame.sortlevel have been deprecated in favor of Series.
sort_index and DataFrame.sort_index (GH15099)
importing concat from pandas.tools.merge has been deprecated in favor of imports from the pandas
namespace. This should only affect explicit imports (GH15358)
Series/DataFrame/Panel.consolidate() has been deprecated as a public method (GH15483)
The as_indexer keyword of Series.str.match() has been deprecated (ignored keyword) (GH15257).
The following top-level pandas functions have been deprecated and will be removed in a future version
(GH13790, GH15940)
1.1.5 Removal of prior version deprecations/changes
The pandas.rpy module is removed. Similar functionality can be accessed through the rpy2 project. See the
R interfacing docs for more details.
The pandas.io.ga module with a google-analytics interface is removed (GH11308). Similar func-
tionality can be found in the Google2Pandas package.
pd.to_datetime and pd.to_timedelta have dropped the coerce parameter in favor of errors
(GH13602)
pandas.stats.fama_macbeth, pandas.stats.ols, pandas.stats.plm and pandas.
stats.var, as well as the top-level pandas.fama_macbeth and pandas.ols routines are removed.
Similar functionality can be found in the statsmodels package. (GH11898)
The TimeSeries and SparseTimeSeries classes, aliases of Series and SparseSeries, are removed
(GH10890, GH15098).
Series.is_time_series is dropped in favor of Series.index.is_all_dates (GH15098)
The deprecated irow, icol, iget and iget_value methods are removed in favor of iloc and iat as
explained here (GH10711).
The deprecated DataFrame.iterkv() has been removed in favor of DataFrame.iteritems()
(GH10711)
The Categorical constructor has dropped the name parameter (GH10632)
Categorical has dropped support for NaN categories (GH10748)
The take_last parameter has been dropped from duplicated(), drop_duplicates(),
nlargest(), and nsmallest() methods (GH10236, GH10792, GH10920)
Series, Index, and DataFrame have dropped the sort and order methods (GH10726)
Where clauses in pytables are now accepted only as strings or expression types, not other data types
(GH12027)
DataFrame has dropped the combineAdd and combineMult methods in favor of add and mul respec-
tively (GH10735)
1.1.6 Performance Improvements
Improved performance of pd.factorize() by releasing the GIL with object dtype when inferred as
strings (GH14859, GH16057)
Improved performance of timeseries plotting with an irregular DatetimeIndex (or with compat_x=True)
(GH15073).
Improved performance of groupby().cummin() and groupby().cummax() (GH15048, GH15109,
GH15561, GH15635)
Improved performance and reduced memory when indexing with a MultiIndex (GH15245)
read_sas() now infers the format from the filepath string, rather than the buffer object, when reading a
buffer without a specified format. (GH14947)
Improved performance of .rank() for categorical data (GH15498)
Improved performance when using .unstack() (GH15503)
Improved performance of merge/join on category columns (GH10409)
Improved performance of drop_duplicates() on bool columns (GH12963)
Improved performance of pd.core.groupby.GroupBy.apply when the applied function used the .name
attribute of the group DataFrame (GH15062).
Improved performance of iloc indexing with a list or array (GH15504).
Improved performance of Series.sort_index() with a monotonic index (GH15694)
Improved performance in pd.read_csv() on some platforms with buffered reads (GH16039)
1.1.7 Bug Fixes
1.1.7.1 Conversion
Bug in Timestamp.replace now raises TypeError when incorrect argument names are given; previously
this raised ValueError (GH15240)
Bug in Timestamp.replace with compat for passing long integers (GH15030)
Bug in Timestamp returning UTC based time/date attributes when a timezone was provided (GH13303,
GH6538)
Bug in Timestamp incorrectly localizing timezones during construction (GH11481, GH15777)
Bug in TimedeltaIndex addition where overflow was being allowed without error (GH14816)
Bug in TimedeltaIndex raising a ValueError when boolean indexing with loc (GH14946)
Bug in catching an overflow in Timestamp + Timedelta/Offset operations (GH15126)
Bug in DatetimeIndex.round() and Timestamp.round() floating point accuracy when rounding by
milliseconds or less (GH14440, GH15578)
Bug in astype() where inf values were incorrectly converted to integers. astype() now raises an error
for Series and DataFrames (GH14265)
Bug in DataFrame(..).apply(to_numeric) when values are of type decimal.Decimal. (GH14827)
Bug in describe() when passing a numpy array which does not contain the median to the percentiles
keyword argument (GH14908)
Cleaned up PeriodIndex constructor, including raising on floats more consistently (GH13277)
Bug in using __deepcopy__ on empty NDFrame objects (GH15370)
1.1.7.2 Indexing
Bug in Categorical.searchsorted() where alphabetical instead of the provided categorical order was
used (GH14522)
Bug in Series.iloc where a Categorical object was returned for list-like indexes input, when a
Series was expected. (GH14580)
Bug in DataFrame.isin comparing datetimelike to empty frame (GH15473)
Bug in .reset_index() when an all NaN level of a MultiIndex would fail (GH6322)
Bug in .reset_index() when raising error for index name already present in MultiIndex columns
(GH16120)
Bug in creating a MultiIndex with tuples and not passing a list of names; this will now raise ValueError
(GH15110)
Bug in the HTML display with a MultiIndex and truncation (GH14882)
Bug in the display of .info() where a qualifier (+) would always be displayed with a MultiIndex that
contains only non-strings (GH15245)
Bug in pd.concat() where the names of MultiIndex of resulting DataFrame are not handled correctly
when None is presented in the names of MultiIndex of input DataFrame (GH15787)
Bug in DataFrame.sort_index() and Series.sort_index() where na_position doesn't work
with a MultiIndex (GH14784, GH16604)
Bug in pd.concat() when combining objects with a CategoricalIndex (GH16111)
Bug in indexing with a scalar and a CategoricalIndex (GH16123)
1.1.7.3 I/O
Bug in pd.to_numeric() in which float and unsigned integer elements were being improperly casted
(GH14941, GH15005)
Bug in pd.read_fwf() where the skiprows parameter was not being respected during column width infer-
ence (GH11256)
Bug in pd.read_csv() in which the dialect parameter was not being verified before processing
(GH14898)
Bug in pd.read_csv() in which missing data was being improperly handled with usecols (GH6710)
Bug in pd.read_csv() in which a file containing a row with many columns followed by rows with fewer
columns would cause a crash (GH14125)
Bug in pd.read_csv() for the C engine where usecols were being indexed incorrectly with
parse_dates (GH14792)
Bug in pd.read_csv() with parse_dates when multiline headers are specified (GH15376)
Bug in pd.read_csv() with float_precision='round_trip' which caused a segfault when a text
entry is parsed (GH15140)
Bug in pd.read_csv() when an index was specified and no values were specified as null values (GH15835)
Bug in pd.read_csv() in which certain invalid file objects caused the Python interpreter to crash (GH15337)
Bug in pd.read_csv() in which invalid values for nrows and chunksize were allowed (GH15767)
Bug in pd.read_csv() for the Python engine in which unhelpful error messages were being raised when
parsing errors occurred (GH15910)
Bug in pd.read_csv() in which the skipfooter parameter was not being properly validated (GH15925)
Bug in pd.to_csv() in which there was numeric overflow when a timestamp index was being written
(GH15982)
Bug in pd.util.hashing.hash_pandas_object() in which hashing of categoricals depended on the
ordering of categories, instead of just their values. (GH15143)
Bug in .to_json() where lines=True and contents (keys or values) contain escaped characters
(GH15096)
Bug in .to_json() causing single byte ascii characters to be expanded to four byte unicode (GH15344)
Bug in .to_json() for the C engine where rollover was not correctly handled for cases where frac is odd and
diff is exactly 0.5 (GH15716, GH15864)
Bug in pd.read_json() for Python 2 where lines=True and contents contain non-ascii unicode charac-
ters (GH15132)
Bug in pd.read_msgpack() in which Series categoricals were being improperly processed (GH14901)
Bug in pd.read_msgpack() which did not allow loading of a dataframe with an index of type
CategoricalIndex (GH15487)
Bug in pd.read_msgpack() when deserializing a CategoricalIndex (GH15487)
Bug in DataFrame.to_records() with converting a DatetimeIndex with a timezone (GH13937)
Bug in DataFrame.to_records() which failed with unicode characters in column names (GH11879)
Bug in .to_sql() when writing a DataFrame with numeric index names (GH15404).
Bug in DataFrame.to_html() with index=False and max_rows raising an IndexError
(GH14998)
Bug in pd.read_hdf() passing a Timestamp to the where parameter with a non date column (GH15492)
Bug in DataFrame.to_stata() and StataWriter which produced incorrectly formatted files for some
locales (GH13856)
Bug in StataReader and StataWriter which allows invalid encodings (GH15723)
Bug in the Series repr not showing the length when the output was truncated (GH15962).
1.1.7.4 Plotting
1.1.7.5 Groupby/Resample/Rolling
1.1.7.6 Sparse
1.1.7.7 Reshaping
Bug in pd.merge_asof() where left_index or right_index caused a failure when multiple by was
specified (GH15676)
Bug in pd.merge_asof() where left_index/right_index together caused a failure when
tolerance was specified (GH15135)
Bug in DataFrame.pivot_table() where dropna=True would not drop all-NaN columns when the
columns was a category dtype (GH15193)
Bug in pd.melt() where passing a tuple value for value_vars caused a TypeError (GH15348)
Bug in pd.pivot_table() where no error was raised when values argument was not in the columns
(GH14938)
Bug in pd.concat() in which concatenating with an empty dataframe with join='inner' was being
improperly handled (GH15328)
Bug with sort=True in DataFrame.join and pd.merge when joining on indexes (GH15582)
Bug in DataFrame.nsmallest and DataFrame.nlargest where identical values resulted in dupli-
cated rows (GH15297)
1.1.7.8 Numeric
1.1.7.9 Other
1.2 v0.19.2 (December 24, 2016)
This is a minor bug-fix release in the 0.19.x series and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
Highlights include:
Compatibility with Python 3.6
Added a Pandas Cheat Sheet. (GH13202).
Enhancements
Performance Improvements
Bug Fixes
1.2.1 Enhancements
Bug when writing to a HDFStore in table format with a min_itemsize value for the index and without
asking to append (GH10381)
Bug in Series.groupby.nunique() raising an IndexError for an empty Series (GH12553)
Bug in DataFrame.nlargest and DataFrame.nsmallest when the index had duplicate values
(GH13412)
Bug in clipboard functions on linux with python2 with unicode and separators (GH13747)
Bug in clipboard functions on Windows 10 and python 3 (GH14362, GH12807)
Bug in .to_clipboard() and Excel compat (GH12529)
Bug in DataFrame.combine_first() for integer columns (GH14687).
Bug in pd.read_csv() in which the dtype parameter was not being respected for empty data (GH14712)
Bug in pd.read_csv() in which the nrows parameter was not being respected for large input when using
the C engine for parsing (GH7626)
Bug in pd.merge_asof() could not handle timezone-aware DatetimeIndex when a tolerance was specified
(GH14844)
Explicit check in to_stata and StataWriter for out-of-range values when writing doubles (GH14618)
Bug in .plot(kind='kde') which did not drop missing values to generate the KDE Plot, instead generating
an empty plot. (GH14821)
Bug in unstack() where, if called with a list of column(s) as an argument, the columns were coerced to
object regardless of their dtypes (GH11847)
1.3 v0.19.1 (November 3, 2016)
This is a minor bug-fix release from 0.19.0 and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
Performance Improvements
Bug Fixes
Source installs from PyPI will now again work without cython installed, as in previous versions (GH14204)
Compat with Cython 0.25 for building (GH14496)
Fixed regression where user-provided file handles were closed in read_csv (c engine) (GH14418).
Fixed regression in DataFrame.quantile when missing values were present in some columns
(GH14357).
Fixed regression in Index.difference where the freq of a DatetimeIndex was incorrectly set
(GH14323)
Added back pandas.core.common.array_equivalent with a deprecation warning (GH14555).
Bug in pd.read_csv for the C engine in which quotation marks were improperly parsed in skipped rows
(GH14459)
Bug in pd.read_csv for Python 2.x in which Unicode quote characters were no longer being respected
(GH14477)
Fixed regression in Index.append when categorical indices were appended (GH14545).
Fixed regression in pd.DataFrame where constructor fails when given dict with None value (GH14381)
Fixed regression in DatetimeIndex._maybe_cast_slice_bound when index is empty (GH14354).
Bug in localizing an ambiguous timezone when a boolean is passed (GH14402)
Bug in TimedeltaIndex addition with a Datetime-like object where addition overflow in the negative direc-
tion was not being caught (GH14068, GH14453)
Bug in string indexing against data with object Index may raise AttributeError (GH14424)
Correctly raise ValueError on empty input to pd.eval() and df.query() (GH13139)
Bug in RangeIndex.intersection when result is an empty set (GH14364).
Bug in groupby-transform broadcasting that could cause incorrect dtype coercion (GH14457)
Bug in Series.__setitem__ which allowed mutating read-only arrays (GH14359).
Bug in DataFrame.insert where multiple calls with duplicate columns can fail (GH14291)
pd.merge() will now raise ValueError when non-boolean values are passed for boolean-type arguments
(GH14434)
Bug in Timestamp where dates very near the minimum (1677-09) could underflow on creation (GH14415)
Bug in pd.concat where names of the keys were not propagated to the resulting MultiIndex (GH14252)
Bug in pd.concat where axis cannot take string parameters 'rows' or 'columns' (GH14369)
Bug in pd.concat with dataframes heterogeneous in length and tuple keys (GH14438)
Bug in MultiIndex.set_levels where illegal level values were still set after raising an error (GH13754)
Bug in DataFrame.to_json where lines=True and a value contained a } character (GH14391)
Bug in df.groupby causing an AttributeError when grouping a single index frame by a column and
the index level (GH14327)
Bug in df.groupby where TypeError raised when pd.Grouper(key=...) is passed in a list
(GH14334)
Bug in pd.pivot_table may raise TypeError or ValueError when index or columns is not scalar
and values is not specified (GH14380)
1.4 v0.19.0 (October 2, 2016)
This is a major release from 0.18.1 and includes a number of API changes, several new features, enhancements, and
performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
merge_asof() for asof-style time-series joining, see here
.rolling() is now time-series aware, see here
read_csv() now supports parsing Categorical data, see here
A function union_categorical() has been added for combining categoricals, see here
PeriodIndex now has its own period dtype, and changed to be more consistent with other Index classes.
See here
Sparse data structures gained enhanced support of int and bool dtypes, see here
Comparison operations with Series no longer ignore the index, see here for an overview of the API changes.
Introduction of a pandas development API for utility functions, see here.
Deprecation of Panel4D and PanelND. We recommend representing these types of n-dimensional data with
the xarray package.
Removal of the previously deprecated modules pandas.io.data, pandas.io.wb, pandas.tools.
rplot.
Warning: pandas >= 0.19.0 will no longer silence numpy ufunc warnings upon import, see here.
New features
merge_asof for asof-style time-series joining
.rolling() is now time-series aware
read_csv has improved support for duplicate column names
read_csv supports parsing Categorical directly
Categorical Concatenation
Semi-Month Offsets
New Index methods
Google BigQuery Enhancements
Fine-grained numpy errstate
get_dummies now returns integer dtypes
Downcast values to smallest possible dtype in to_numeric
pandas development API
Other enhancements
API changes
Series.tolist() will now return Python types
Series operators for different indexes
* Arithmetic operators
* Comparison operators
* Logical operators
* Flexible comparison methods
Series type promotion on assignment
.to_datetime() changes
Merging changes
.describe() changes
Period changes
A long-time requested feature has been added through the merge_asof() function, to support asof style joining of
time-series (GH1870, GH13695, GH13709, GH13902). Full documentation is here.
merge_asof() performs an asof merge, which is similar to a left-join except that we match on the nearest key
rather than equal keys.
In [3]: left
Out[3]:
a left_val
0 1 a
1 5 b
2 10 c
In [4]: right
Out[4]:
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
We typically want to match exactly when possible, and use the most recent value otherwise.
We can also match rows ONLY with prior data, and not an exact match.
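For instance, a minimal sketch against the left and right frames above:

# nearest key less than or equal to the left key ('a' must be sorted)
pd.merge_asof(left, right, on='a')

# match only strictly prior data, excluding exact matches
pd.merge_asof(left, right, on='a', allow_exact_matches=False)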
In a typical time-series example, we have trades and quotes and we want to asof-join them. This also
illustrates using the by parameter to group data before merging.
In [9]: trades
Out[9]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [10]: quotes
An asof merge joins on the on key, typically a datetimelike field, which must be ordered, and in this case we are using
a grouper in the by field. This is like a left-outer join, except that forward filling happens automatically, taking the
most recent non-NaN value.
This returns a merged DataFrame with the entries in the same order as the original left passed DataFrame (trades
in this case), with the fields of the quotes merged.
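A sketch of that join (both frames must be sorted by time; the 2ms tolerance is an illustrative choice):

# for each trade, take the most recent quote for the same ticker,
# no older than 2ms before the trade time
pd.merge_asof(trades, quotes,
              on='time', by='ticker',
              tolerance=pd.Timedelta('2ms'))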
.rolling() objects are now time-series aware and can accept a time-series offset (or convertible) for the window
argument (GH13327, GH12995). See the full documentation here.
In [12]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
   ....:                     index=pd.date_range('20130101 09:00:00',
   ....:                                         periods=5, freq='s'))
   ....:
In [13]: dft
Out[13]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0
This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.
In [14]: dft.rolling(2).sum()
Out[14]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 NaN
In [15]: dft.rolling(2, min_periods=1).sum()
Out[15]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
In [16]: dft.rolling('2s').sum()
Out[16]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [17]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
....: index = pd.Index([pd.Timestamp('20130101 09:00:00'),
....: pd.Timestamp('20130101 09:00:02'),
....: pd.Timestamp('20130101 09:00:03'),
....: pd.Timestamp('20130101 09:00:05'),
....: pd.Timestamp('20130101 09:00:06')],
....: name='foo'))
....:
In [18]: dft
Out[18]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
In [19]: dft.rolling(2).sum()
Out[19]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN
Using the time-specification generates variable windows for this sparse data.
In [20]: dft.rolling('2s').sum()
Out[20]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Furthermore, we now allow an optional on parameter to specify a column (rather than the default of the index) in a
DataFrame.
In [21]: dft = dft.reset_index()
In [22]: dft
Out[22]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
In [23]: dft.rolling('2s', on='foo').sum()
Out[23]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
Duplicate column names are now supported in read_csv() whether they are in the file or passed in as the names
parameter (GH7160, GH9424)
In [24]: data = '0,1,2\n3,4,5'
In [25]: names = ['a', 'b', 'a']
Previous behavior:
In [2]: pd.read_csv(StringIO(data), names=names)
Out[2]:
a b a
0 2 1 2
1 5 4 5
The first a column contained the same data as the second a column, when it should have contained the values [0,
3].
New behavior:
In [26]: pd.read_csv(StringIO(data), names=names)
Out[26]:
a b a.1
0 0 1 2
1 3 4 5
The read_csv() function now supports parsing a Categorical column when specified as a dtype (GH10153).
Depending on the structure of the data, this can result in a faster parse time and lower memory usage compared to
converting to Categorical after parsing. See the io docs here.
In [27]: data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
In [28]: pd.read_csv(StringIO(data))
Out[28]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [29]: pd.read_csv(StringIO(data)).dtypes
Out[29]:
col1 object
col2 object
col3 int64
dtype: object
In [30]: pd.read_csv(StringIO(data), dtype='category').dtypes
Out[30]:
col1 category
col2 category
col3 category
dtype: object
Note: The resulting categories will always be parsed as strings (object dtype). If the categories are numeric they can
be converted using the to_numeric() function, or as appropriate, another converter such as to_datetime().
In [32]: df = pd.read_csv(StringIO(data), dtype='category')
In [33]: df.dtypes
Out[33]:
col1 category
col2 category
col3 category
dtype: object
In [34]: df['col3']
Out[34]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): [1, 2, 3]
In [35]: df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)
In [36]: df['col3']
Out[36]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
A function union_categoricals() has been added for combining categoricals, see Unioning Categoricals
(GH13361, GH13763, GH13846, GH14173)
concat and append can now concatenate category dtypes with different categories as object dtype
(GH13524); a sketch follows.
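A minimal sketch of both operations (categories chosen arbitrarily; expected results in comments):

import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Categorical(['a', 'b'])
b = pd.Categorical(['b', 'c'])

# combine while keeping category dtype; the categories are unioned
union_categoricals([a, b])
# ['a', 'b', 'b', 'c'] with categories ['a', 'b', 'c']

# concat of Series with different categories falls back to object dtype
pd.concat([pd.Series(a), pd.Series(b)]).dtype
# dtype('O')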
Pandas has gained new frequency offsets, SemiMonthEnd (SM) and SemiMonthBegin (SMS). These provide
date offsets anchored (by default) to the 15th and end of month, and 15th and 1st of month respectively. (GH1543)
Using the anchoring suffix, you can also specify the day of month to use instead of the 15th; a short sketch of
SemiMonthEnd, SemiMonthBegin, and an anchored offset follows.
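A minimal sketch of the three variants (start date chosen arbitrarily; expected output in comments):

import pandas as pd

# SemiMonthEnd: the 15th and the end of each month
pd.date_range('2015-01-01', freq='SM', periods=4)
# DatetimeIndex(['2015-01-15', '2015-01-31', '2015-02-15', '2015-02-28'],
#               dtype='datetime64[ns]', freq='SM-15')

# SemiMonthBegin: the 1st and the 15th of each month
pd.date_range('2015-01-01', freq='SMS', periods=4)
# DatetimeIndex(['2015-01-01', '2015-01-15', '2015-02-01', '2015-02-15'],
#               dtype='datetime64[ns]', freq='SMS-15')

# anchoring suffix: the 14th and month end instead of the 15th
pd.date_range('2015-01-01', freq='SM-14', periods=4)
# DatetimeIndex(['2015-01-14', '2015-01-31', '2015-02-14', '2015-02-28'],
#               dtype='datetime64[ns]', freq='SM-14')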
The following methods and options are added to Index, to be more consistent with the Series and DataFrame
API.
Index now supports the .where() function for same shape indexing (GH13170)
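For example (a small sketch; values chosen arbitrarily):

idx = pd.Index(['a', 'b', 'c', 'd'])

# keep values where the condition holds, NaN elsewhere
idx.where(idx != 'c')
# Index(['a', 'b', nan, 'd'], dtype='object')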
Index now supports .dropna() to exclude missing values (GH6194)
In [53]: idx = pd.Index([1., 2., np.nan, 4.])
In [54]: idx.dropna()
Out[54]: Float64Index([1.0, 2.0, 4.0], dtype='float64')
For MultiIndex, values are dropped if any level is missing by default. Specifying how='all' only drops values
where all levels are missing.
In [55]: midx = pd.MultiIndex.from_arrays([[1, 2, np.nan, 4],
   ....:                                    [1, 2, np.nan, np.nan]])
In [56]: midx
Out[56]:
MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1, -1, 2], [0, 1, -1, -1]])
In [57]: midx.dropna()
Out[57]: MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1], [0, 1]])
In [58]: midx.dropna(how='all')
Out[58]: MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1, 2], [0, 1, -1]])
Index now supports .str.extractall() which returns a DataFrame, see the docs here (GH10008, GH13156)
In [59]: idx = pd.Index(['a1a2', 'b1', 'c1'])
In [60]: idx.str.extractall("[ab](?P<digit>\d)")
Out[60]:
digit
match
0 0 1
1 2
1 0 1
Index.astype() now accepts an optional boolean argument copy, which allows optional copying if the require-
ments on dtype are satisfied (GH13209)
The read_gbq() method has gained the dialect argument to allow users to specify whether to use Big-
Query's legacy SQL or BigQuery's standard SQL. See the docs for more details (GH13615).
The to_gbq() method now allows the DataFrame column order to differ from the destination table schema
(GH11359).
Previous versions of pandas would permanently silence numpy's ufunc error handling when pandas was imported.
Pandas did this in order to silence the warnings that would arise from using numpy ufuncs on missing data, which
are usually represented as NaNs. Unfortunately, this silenced legitimate warnings arising in non-pandas code in the
application. Starting with 0.19.0, pandas will use the numpy.errstate context manager to silence these warnings in
a more fine-grained manner, only around where these operations are actually used in the pandas codebase. (GH13109,
GH13145)
After upgrading pandas, you may see new RuntimeWarnings being issued from your code. These are likely legiti-
mate, and the underlying cause likely existed in the code when using previous versions of pandas that simply silenced
the warning. Use numpy.errstate around the source of the RuntimeWarning to control how these conditions are
handled.
The pd.get_dummies function now returns dummy-encoded columns as small integers, rather than floats
(GH8725). This should provide an improved memory footprint.
Previous behavior:
Out[1]:
a float64
b float64
c float64
dtype: object
New behavior:
In [61]: pd.get_dummies(['a', 'b', 'a', 'c']).dtypes
Out[61]:
a uint8
b uint8
c uint8
dtype: object
pd.to_numeric() now accepts a downcast parameter, which will downcast the data, if possible, to the smallest
specified numerical dtype (GH13352)
In [62]: s = ['1', 2, 3]
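For example, applied to s above (expected results in comments):

pd.to_numeric(s, downcast='unsigned')
# array([1, 2, 3], dtype=uint8)

pd.to_numeric(s, downcast='integer')
# array([1, 2, 3], dtype=int8)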
As part of making pandas API more uniform and accessible in the future, we have created a standard sub-package of
pandas, pandas.api to hold public APIs. We are starting by exposing type introspection functions in pandas.
api.types. More sub-packages and officially sanctioned APIs will be published in future versions of pandas
(GH13147, GH13634)
The following are now part of this API:
In [65]: import pprint
In [66]: from pandas.api import types
In [67]: funcs = [f for f in dir(types) if not f.startswith('_')]
In [68]: pprint.pprint(funcs)
['CategoricalDtype',
'DatetimeTZDtype',
'IntervalDtype',
'PeriodDtype',
'infer_dtype',
'is_any_int_dtype',
'is_bool',
'is_bool_dtype',
'is_categorical',
'is_categorical_dtype',
'is_complex',
'is_complex_dtype',
'is_datetime64_any_dtype',
'is_datetime64_dtype',
'is_datetime64_ns_dtype',
'is_datetime64tz_dtype',
'is_datetimetz',
'is_dict_like',
'is_dtype_equal',
'is_extension_type',
'is_file_like',
'is_float',
'is_float_dtype',
'is_floating_dtype',
'is_hashable',
'is_int64_dtype',
'is_integer',
'is_integer_dtype',
'is_interval',
'is_interval_dtype',
'is_iterator',
'is_list_like',
'is_named_tuple',
'is_number',
'is_numeric_dtype',
'is_object_dtype',
'is_period',
'is_period_dtype',
'is_re',
'is_re_compilable',
'is_scalar',
'is_sequence',
'is_signed_integer_dtype',
'is_sparse',
'is_string_dtype',
'is_timedelta64_dtype',
'is_timedelta64_ns_dtype',
'is_unsigned_integer_dtype',
'pandas_dtype',
'union_categoricals']
Note: Calling these functions from the internal module pandas.core.common will now show a
DeprecationWarning (GH13990)
Timestamp can now accept positional and keyword parameters similar to datetime.datetime()
(GH10758, GH11630)
In [69]: pd.Timestamp(2012, 1, 1)
Out[69]: Timestamp('2012-01-01 00:00:00')
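Keyword arguments work as well; a small sketch:

pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)
# Timestamp('2012-01-01 08:30:00')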
The .resample() function now accepts a on= or level= parameter for resampling on a datetimelike col-
umn or MultiIndex level (GH13500)
In [71]: df = pd.DataFrame({'date': pd.date_range('2015-01-04', freq='W', periods=5),
   ....:                    'a': np.arange(5)},
   ....:                   index=pd.MultiIndex.from_arrays([[1, 2, 3, 4, 5],
   ....:                    pd.date_range('2015-01-04', freq='W', periods=5)],
   ....:                    names=['v', 'd']))
   ....:
In [72]: df
Out[72]:
a date
v d
1 2015-01-04 0 2015-01-04
2 2015-01-11 1 2015-01-11
3 2015-01-18 2 2015-01-18
4 2015-01-25 3 2015-01-25
5 2015-02-01 4 2015-02-01
In [73]: df.resample('M', on='date').sum()
Out[73]:
a
date
2015-01-31 6
2015-02-28 4
In [74]: df.resample('M', level='d').sum()
Out[74]:
a
d
2015-01-31 6
2015-02-28 4
The .get_credentials() method of GbqConnector can now first try to fetch the application default
credentials. See the docs for more details (GH13577).
The .tz_localize() method of DatetimeIndex and Timestamp has gained the errors keyword,
so you can potentially coerce nonexistent timestamps to NaT. The default behavior remains raising a
NonExistentTimeError (GH13057)
.to_hdf/read_hdf() now accept path objects (e.g. pathlib.Path, py.path.local) for the file
path (GH11773)
pd.read_csv() with engine='python' has gained support for the decimal (GH12933),
na_filter (GH13321) and memory_map (GH13381) options.
Consistent with the Python API, pd.read_csv() will now interpret +inf as positive infinity (GH13274)
pd.read_html() has gained support for the na_values, converters, and keep_default_na op-
tions (GH13461)
Categorical.astype() now accepts an optional boolean argument copy, effective when dtype is cate-
gorical (GH13209)
DataFrame has gained the .asof() method to return the last non-NaN values according to the selected
subset (GH13358)
The DataFrame constructor will now respect key ordering if a list of OrderedDict objects are passed in
(GH13304)
pd.read_html() has gained support for the decimal option (GH12907)
Series has gained the properties .is_monotonic, .is_monotonic_increasing, .
is_monotonic_decreasing, similar to Index (GH13336)
DataFrame.to_sql() now allows a single value as the SQL type for all columns (GH11886).
Series.append now supports the ignore_index option (GH13677)
.to_stata() and StataWriter can now write variable labels to Stata dta files using a dictionary to map
column names to labels (GH13535, GH13536)
.to_stata() and StataWriter will automatically convert datetime64[ns] columns to Stata format
%tc, rather than raising a ValueError (GH12259)
read_stata() and StataReader raise with a more explicit error message when reading Stata files with
repeated value labels when convert_categoricals=True (GH13923)
DataFrame.style will now render sparsified MultiIndexes (GH11655)
DataFrame.style will now show column level names (e.g. DataFrame.columns.names) (GH13775)
DataFrame has gained support to re-order the columns based on the values in a row using df.
sort_values(by='...', axis=1) (GH10806)
In [75]: df = pd.DataFrame({'A': [2, 7], 'B': [3, 5], 'C': [4, 8]},
....: index=['row1', 'row2'])
....:
In [76]: df
Out[76]:
A B C
row1 2 3 4
row2 7 5 8
In [77]: df.sort_values(by='row2', axis=1)
Out[77]:
B A C
row1 3 2 4
row2 5 7 8
Added documentation to I/O regarding the perils of reading in columns with mixed dtypes and how to handle it
(GH13746)
to_html() now has a border argument to control the value in the opening <table> tag. The default is the
value of the html.border option, which defaults to 1. This also affects the notebook HTML repr, but since
Jupyter's CSS includes a border-width attribute, the visual effect is the same. (GH11563).
Raise ImportError in the sql functions when sqlalchemy is not installed and a connection string is used
(GH11920).
Compatibility with matplotlib 2.0. Older versions of pandas should also work with matplotlib 2.0 (GH13333)
Timestamp, Period, DatetimeIndex, PeriodIndex and .dt accessor have gained a .
is_leap_year property to check whether the date belongs to a leap year. (GH13727)
astype() will now accept a dict of column name to data types mapping as the dtype argument. (GH12086)
pd.read_json and DataFrame.to_json have gained support for reading and writing JSON lines with the
lines option; see Line delimited json (GH9180)
read_excel() now supports the true_values and false_values keyword arguments (GH13347)
groupby() will now accept a scalar and a single-element list for specifying level on a non-MultiIndex
grouper. (GH13907)
Non-convertible dates in an excel date column will be returned without conversion and the column will be
object dtype, rather than raising an exception (GH10001).
pd.Timedelta(None) is now accepted and will return NaT, mirroring pd.Timestamp (GH13687)
pd.read_stata() can now handle some format 111 files, which are produced by SAS when generating
Stata dta files (GH11526)
Series and Index now support divmod which will return a tuple of series or indices. This behaves like a
standard binary operator with regards to broadcasting rules (GH14208).
Series.tolist() will now return Python types in the output, mimicking NumPy .tolist() behavior
(GH10904)
In [78]: s = pd.Series([1,2,3])
Previous behavior:
In [7]: type(s.tolist()[0])
Out[7]:
<class 'numpy.int64'>
New behavior:
In [79]: type(s.tolist()[0])
Out[79]: int
The following Series operators have been changed to make all operators consistent, including DataFrame (GH1134,
GH4581, GH13538)
Series comparison operators now raise ValueError when the indexes are different.
Series logical operators align the index of both the left and right hand side.
Warning: Until 0.18.1, comparing Series with the same length would succeed even if the .index are
different (the result ignored .index). As of 0.19.0, this raises a ValueError to be more strict. This section
also describes how to keep previous behavior or align different indexes, using the flexible comparison methods like
.eq.
Arithmetic operators
Arithmetic operators align both indexes (no changes).
In [80]: s1 = pd.Series([1, 2, 3], index=list('ABC'))
In [81]: s2 = pd.Series([2, 2, 2], index=list('ABD'))
In [82]: s1 + s2
Out[82]:
A 3.0
B 4.0
C NaN
D NaN
dtype: float64
Comparison operators
Note: To achieve the same result as previous versions (compare values based on locations ignoring .index),
compare both .values.
In [86]: s1.values == s2.values
Out[86]: array([False, True, False], dtype=bool)
If you want to compare Series aligning its .index, see flexible comparison methods section below:
In [87]: s1.eq(s2)
Out[87]:
A False
B True
C False
D False
dtype: bool
Logical operators
Logical operators align both .index of left and right hand side.
Previous behavior (Series): only the left hand side index was kept.
New behavior:
In [90]: s1 & s2
Out[90]:
A True
B False
C False
D False
dtype: bool
Note: To achieve the same result as previous versions (compare values based on only left hand side index), you can
use reindex_like:
In [94]: s1 & s2.reindex_like(s1)
Out[94]:
A True
B False
C False
dtype: bool
Series flexible comparison methods like eq, ne, le, lt, ge and gt now align both indexes. Use these methods
if you want to compare two Series which have different indexes.
In [95]: s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
In [96]: s2 = pd.Series([2, 2, 2], index=['b', 'c', 'd'])
In [97]: s1.eq(s2)
Out[97]:
a False
b True
c False
d False
dtype: bool
In [98]: s1.ge(s2)
Out[98]:
a False
b True
c True
d False
dtype: bool
A Series will now correctly promote its dtype for assignment with incompatible values to the current dtype (GH13234)
In [99]: s = pd.Series()
Previous behavior:
In [2]: s["a"] = pd.Timestamp("2016-01-01")
New behavior:
In [100]: s["a"] = pd.Timestamp("2016-01-01")
In [101]: s["b"] = 3.0
In [102]: s
Out[102]:
a 2016-01-01 00:00:00
b 3
dtype: object
In [103]: s.dtype
Out[103]: dtype('O')
Previously, if .to_datetime() encountered mixed integers/floats and strings, but no datetimes, with
errors='coerce' it would convert all to NaT.
Current behavior: integers/floats are now converted with the default unit of ns; see the sketch below.
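A minimal sketch of the current behavior (ns is the default unit; expected output in the comment):

pd.to_datetime([1, 'foo'], errors='coerce')
# DatetimeIndex(['1970-01-01 00:00:00.000000001', 'NaT'],
#               dtype='datetime64[ns]', freq=None)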
Merging will now preserve the dtype of the join keys (GH8596)
In [106]: df1
Out[106]:
key v1
0 1 10
In [108]: df2
Out[108]:
key v1
0 1 20
1 2 30
Previous behavior: the join key was upcast to float64.
New behavior:
In [109]: pd.merge(df1, df2, how='outer')
Out[109]:
key v1
0 1 10
1 1 20
2 2 30
In [110]: pd.merge(df1, df2, how='outer').dtypes
Out[110]:
key int64
v1 int64
dtype: object
We are able to preserve the join keys:
Of course, if you have missing values that are introduced, then the resulting dtype will be upcast, which is unchanged
from previous.
In [112]: pd.merge(df1, df2, how='outer', on='key').dtypes
Out[112]:
key int64
v1_x float64
v1_y int64
dtype: object
Percentile identifiers in the index of a .describe() output will now be rounded to the least precision that keeps
them distinct (GH13104)
In [113]: s = pd.Series([0, 1, 2, 3, 4])
In [114]: df = pd.DataFrame(s)
Previous behavior:
The percentiles were rounded to at most one decimal place, which could raise ValueError for a DataFrame if the
rounded percentiles were duplicated.
In [3]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[3]:
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.0% 0.000400
0.1% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
100.0% 3.998000
100.0% 3.999600
max 4.000000
dtype: float64
New behavior:
In [115]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[115]:
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.01% 0.000400
0.05% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
99.95% 3.998000
99.99% 3.999600
max 4.000000
dtype: float64
In [116]: df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[116]:
0
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.01% 0.000400
0.05% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
99.95% 3.998000
99.99% 3.999600
max 4.000000
Furthermore:
Passing duplicated percentiles will now raise a ValueError.
Bug in .describe() on a DataFrame with a mixed-dtype column index, which would previously raise a
TypeError (GH13288)
PeriodIndex now has its own period dtype. The period dtype is a pandas extension dtype like category or
the timezone aware dtype (datetime64[ns, tz]) (GH13941). As a consequence of this change, PeriodIndex
no longer has an integer dtype:
Previous behavior:
In [1]: pi = pd.PeriodIndex(['2016-08-01'], freq='D')
In [2]: pi
Out[2]: PeriodIndex(['2016-08-01'], dtype='int64', freq='D')
In [3]: pd.api.types.is_integer_dtype(pi)
Out[3]: True
In [4]: pi.dtype
Out[4]: dtype('int64')
New behavior:
In [117]: pi = pd.PeriodIndex(['2016-08-01'], freq='D')
In [118]: pi
Out[118]: PeriodIndex(['2016-08-01'], dtype='period[D]', freq='D')
In [119]: pd.api.types.is_integer_dtype(pi)
Out[119]: False
In [120]: pd.api.types.is_period_dtype(pi)
Out[120]: True
In [121]: pi.dtype
Out[121]: period[D]
In [122]: type(pi.dtype)
Out[122]: pandas.core.dtypes.dtypes.PeriodDtype
Previously, Period had its own Period('NaT') representation, different from pd.NaT. Now Period('NaT')
has been changed to return pd.NaT (GH12759, GH13582).
New behavior: these result in pd.NaT without providing the freq option.
In [123]: pd.Period('NaT')
Out[123]: NaT
In [124]: pd.Period(None)
Out[124]: NaT
To be compatible with Period addition and subtraction, pd.NaT now supports addition and subtraction with int.
Previously it raised ValueError.
Previous behavior:
In [5]: pd.NaT + 1
...
ValueError: Cannot add integral value to Timestamp without freq.
New behavior:
In [125]: pd.NaT + 1
Out[125]: NaT
In [126]: pd.NaT - 1
Out[126]: NaT
.values is changed to return an array of Period objects, rather than an array of integers (GH13988).
Previous behavior:
In [6]: pi.values
Out[6]: array([492, 493])
New behavior:
In [127]: pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')
In [128]: pi.values
Out[128]: array([Period('2011-01', 'M'), Period('2011-02', 'M')], dtype=object)
Addition and subtraction of the base Index type and of DatetimeIndex (not the numeric index types) previously per-
formed set operations (set union and difference). This behavior was already deprecated since 0.15.0 (in favor using
the specific .union() and .difference() methods), and is now disabled. When possible, + and - are now used
for element-wise operations, for example for concatenating strings or subtracting datetimes (GH8227, GH14127).
Previous behavior:
In [1]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .
union()
New behavior: the same operation will now perform element-wise addition:
In [129]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
Out[129]: Index(['aa', 'bc'], dtype='object')
Note that numeric Index objects already performed element-wise operations. For example, the behavior of adding two
integer Indexes is unchanged. The base Index is now made consistent with this behavior.
In [130]: pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])
Out[130]: Int64Index([3, 5, 7], dtype='int64')
Further, because of this change, it is now possible to subtract two DatetimeIndex objects resulting in a TimedeltaIndex:
Previous behavior:
In [1]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
New behavior:
In [131]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
Out[131]: TimedeltaIndex(['-1 days', '-1 days'], dtype='timedelta64[ns]', freq=None)
Index.difference and Index.symmetric_difference will now, more consistently, treat NaN values as
any other values. (GH13514)
In [132]: idx1 = pd.Index([1, 2, 3, np.nan])
In [133]: idx2 = pd.Index([0, 1, np.nan])
Previous behavior:
In [3]: idx1.difference(idx2)
Out[3]: Float64Index([nan, 2.0, 3.0], dtype='float64')
In [4]: idx1.symmetric_difference(idx2)
Out[4]: Float64Index([0.0, nan, 2.0, 3.0], dtype='float64')
New behavior:
In [134]: idx1.difference(idx2)
Out[134]: Float64Index([2.0, 3.0], dtype='float64')
In [135]: idx1.symmetric_difference(idx2)
Out[135]: Float64Index([0.0, 2.0, 3.0], dtype='float64')
Index.unique() now returns unique values as an Index of the appropriate dtype. (GH13395). Previously, most
Index classes returned np.ndarray, and DatetimeIndex, TimedeltaIndex and PeriodIndex returned
Index to keep metadata like timezone.
Previous behavior:
In [1]: pd.Index([1, 2, 3]).unique()
Out[1]: array([1, 2, 3])
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
New behavior:
In [136]: pd.Index([1, 2, 3]).unique()
Out[136]: Int64Index([1, 2, 3], dtype='int64')
Out[137]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
MultiIndex.from_arrays and MultiIndex.from_product will now preserve categorical dtype in
MultiIndex levels (GH13743, GH13854).
In [138]: cat = pd.Categorical(['a', 'b'], categories=list('bac'))
In [139]: lvl1 = ['foo', 'bar']
In [140]: midx = pd.MultiIndex.from_arrays([cat, lvl1])
In [141]: midx
Out[141]:
MultiIndex(levels=[['b', 'a', 'c'], ['bar', 'foo']],
labels=[[1, 0], [1, 0]])
Previous behavior:
In [4]: midx.levels[0]
Out[4]: Index(['b', 'a', 'c'], dtype='object')
In [5]: midx.get_level_values(0)
Out[5]: Index(['a', 'b'], dtype='object')
New behavior:
In [142]: midx.levels[0]
Out[142]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, dtype='category')
In [143]: midx.get_level_values(0)
Out[143]:
CategoricalIndex(['a', 'b'], categories=['b', 'a', 'c'], ordered=False, dtype=
'category')
groupby and set_index also preserve categorical dtypes in indexes:
In [144]: df = pd.DataFrame({'A': [0, 1], 'B': [10, 11], 'C': cat})
In [145]: df_grouped = df.groupby(by=['A', 'C']).first()
In [146]: df_set_idx = df.set_index(['A', 'C'])
Previous behavior:
In [11]: df_grouped.index.levels[1]
Out[11]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [12]: df_grouped.reset_index().dtypes
Out[12]:
A int64
C object
B float64
dtype: object
In [13]: df_set_idx.index.levels[1]
Out[13]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [14]: df_set_idx.reset_index().dtypes
Out[14]:
A int64
C object
B int64
dtype: object
New behavior:
In [147]: df_grouped.index.levels[1]
Out[147]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False,
name='C', dtype='category')
In [148]: df_grouped.reset_index().dtypes
Out[148]:
A int64
C category
B float64
dtype: object
In [149]: df_set_idx.index.levels[1]
Out[149]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, name='C', dtype='category')
In [150]: df_set_idx.reset_index().dtypes
Out[150]:
A int64
C category
B int64
dtype: object
When read_csv() is called with chunksize=n and without specifying an index, each chunk used to have an
independently generated index from 0 to n-1. They are now given instead a progressive index, starting from 0 for
the first chunk, from n for the second, and so on, so that, when concatenated, they are identical to the result of calling
read_csv() without the chunksize= argument (GH12185).
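A minimal sketch of the new behavior (data chosen arbitrarily; expected result in comments):

import io
import pandas as pd

data = 'A\n0\n1\n2\n3'

# chunks now carry a progressive index: [0, 1] then [2, 3],
# so the concatenated result matches a plain read_csv call
pd.concat(pd.read_csv(io.StringIO(data), chunksize=2)).index.tolist()
# [0, 1, 2, 3]  (previously [0, 1, 0, 1])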
These changes allow pandas to handle sparse data with more dtypes, and work towards a smoother experience with
data handling.
Sparse data structures now gained enhanced support of int64 and bool dtype (GH667, GH13849).
Previously, sparse data were float64 dtype by default, even if all inputs were of int or bool dtype. You had to
specify dtype explicitly to create sparse data with int64 dtype. Also, fill_value had to be specified explicitly
because the default was np.nan, which doesn't appear in int64 or bool data.
# specifying int64 dtype, but all values are stored in sp_values because
# fill_value default is np.nan
In [2]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
Out[2]:
[1, 2, 0, 0]
Fill: nan
IntIndex
Indices: array([0, 1, 2, 3], dtype=int32)
As of v0.19.0, sparse data keeps the input dtype, and uses more appropriate fill_value defaults (0 for int64
dtype, False for bool dtype).
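A minimal sketch of the new defaults (assuming import numpy as np; expected output in comments):

pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
# [1, 2, 0, 0]
# Fill: 0
# IntIndex
# Indices: array([0, 1], dtype=int32)

pd.SparseArray([True, False, False, False])
# [True, False, False, False]
# Fill: False
# IntIndex
# Indices: array([0], dtype=int32)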
Sparse data structure now can preserve dtype after arithmetic ops (GH13848)
In [155]: s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
In [156]: s.dtype
Out[156]: dtype('int64')
In [157]: s + 1
\\\\\\\\\\\\\\\\\\\\\\\\\Out[157]:
0 1
1 3
2 1
3 2
dtype: int64
BlockIndex
Block locations: array([1, 3], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
Sparse data structure now support astype to convert internal dtype (GH13900)
In [158]: s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)
In [159]: s
Out[159]:
0 1.0
1 0.0
2 2.0
3 0.0
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
In [160]: s.astype(np.int64)
Out[160]:
0 1
1 0
2 2
3 0
dtype: int64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
astype fails if the data contains values which cannot be converted to the specified dtype. Note that the limitation
also applies to fill_value, whose default is np.nan.
Out[7]:
ValueError: unable to coerce current fill_value nan to int64 dtype
Subclassed SparseDataFrame and SparseSeries now preserve class types when slicing or transposing.
(GH13787)
SparseArray with bool dtype now supports logical (bool) operators (GH14000)
Bug in SparseSeries with MultiIndex [] indexing may raise IndexError (GH13144)
Bug in SparseSeries with MultiIndex [] indexing result may have normal Index (GH13144)
Bug in SparseDataFrame in which axis=None did not default to axis=0 (GH13048)
Bug in SparseSeries and SparseDataFrame creation with object dtype may raise TypeError
(GH11633)
Bug in SparseDataFrame which doesn't respect the passed SparseArray or SparseSeries dtype and
fill_value (GH13866)
Bug in SparseArray and SparseSeries which don't apply ufunc to fill_value (GH13853)
Bug in SparseSeries.abs incorrectly keeps negative fill_value (GH13853)
Bug in single row slicing on multi-type SparseDataFrame s, types were previously forced to float
(GH13917)
Bug in SparseSeries slicing changes integer dtype to float (GH8292)
Bug in SparseDataFrame comparison ops may raise TypeError (GH13001)
Bug in SparseDataFrame.isnull raises ValueError (GH8276)
Bug in SparseSeries representation with bool dtype may raise IndexError (GH13110)
Bug in SparseSeries and SparseDataFrame of bool or int64 dtype may display its values like
float64 dtype (GH13110)
Bug in sparse indexing using SparseArray with bool dtype may return incorrect result (GH13985)
Bug in SparseArray created from SparseSeries may lose dtype (GH13999)
Bug in SparseSeries comparison with dense returns normal Series rather than SparseSeries
(GH13999)
Note: This change only affects 64 bit python running on Windows, and only affects relatively advanced indexing
operations
Methods such as Index.get_indexer that return an indexer array, coerce that array to a platform int, so that
it can be directly used in 3rd party library operations like numpy.take. Previously, a platform int was defined as
np.int_ which corresponds to a C integer, but the correct type, and what is being used now, is np.intp, which
corresponds to the C integer size that can hold a pointer (GH3033, GH13972).
These types are the same on many platforms, but for 64 bit python on Windows, np.int_ is 32 bits, and np.intp
is 64 bits. Changing this behavior improves performance for many operations on that platform.
Previous behavior:
New behavior:
Timestamp.to_pydatetime will issue a UserWarning when warn=True, and the instance has a non-
zero number of nanoseconds, previously this would print a message to stdout (GH14101).
Series.unique() with datetime and timezone now returns an array of Timestamp with timezone
(GH13565).
Panel.to_sparse() will raise a NotImplementedError exception when called (GH13778).
Index.reshape() will raise a NotImplementedError exception when called (GH12882).
.filter() enforces mutual exclusion of the keyword arguments (GH12399).
eval's upcasting rules for float32 types have been updated to be more consistent with NumPy's rules. New
behavior will not upcast to float64 if you multiply a pandas float32 object by a scalar float64 (GH12388).
An UnsupportedFunctionCall error is now raised if NumPy ufuncs like np.mean are called on groupby
or resample objects (GH12811).
__setitem__ will no longer apply a callable rhs as a function instead of storing it. Call where directly to
get the previous behavior (GH13299).
Calls to .sample() will respect the random seed set via numpy.random.seed(n) (GH13161)
Styler.apply is now more strict about the outputs your function must return. For axis=0 or axis=1, the
output shape must be identical. For axis=None, the output must be a DataFrame with identical columns and
index labels (GH13222).
Float64Index.astype(int) will now raise ValueError if Float64Index contains NaN values
(GH13149)
TimedeltaIndex.astype(int) and DatetimeIndex.astype(int) will now return
Int64Index instead of np.array (GH13209)
Passing Period with multiple frequencies to normal Index now returns Index with object dtype
(GH13664)
PeriodIndex.fillna with Period has different freq now coerces to object dtype (GH13664)
Faceted boxplots from DataFrame.boxplot(by=col) now return a Series when return_type is
not None. Previously these returned an OrderedDict. Note that when return_type=None, the default,
these still return a 2-D NumPy array (GH12216, GH7096).
pd.read_hdf will now raise a ValueError instead of KeyError, if a mode other than r, r+ and a is
supplied. (GH13623)
1.4.3 Deprecations
Series.reshape and Categorical.reshape have been deprecated and will be removed in a subse-
quent release (GH12882, GH12882)
PeriodIndex.to_datetime has been deprecated in favor of PeriodIndex.to_timestamp
(GH8254)
Timestamp.to_datetime has been deprecated in favor of Timestamp.to_pydatetime (GH8254)
Index.to_datetime and DatetimeIndex.to_datetime have been deprecated in favor of pd.
to_datetime (GH8254)
pandas.core.datetools module has been deprecated and will be removed in a subsequent release
(GH14094)
SparseList has been deprecated and will be removed in a future version (GH13784)
DataFrame.to_html() and DataFrame.to_latex() have dropped the colSpace parameter in fa-
vor of col_space (GH13857)
DataFrame.to_sql() has deprecated the flavor parameter, as it is superfluous when SQLAlchemy is
not installed (GH13611)
Deprecated read_csv keywords:
compact_ints and use_unsigned have been deprecated and will be removed in a future version
(GH13320)
buffer_lines has been deprecated and will be removed in a future version (GH13360)
as_recarray has been deprecated and will be removed in a future version (GH13373)
skip_footer has been deprecated in favor of skipfooter and will be removed in a future version
(GH13349)
top-level pd.ordered_merge() has been renamed to pd.merge_ordered() and the original name will
be removed in a future version (GH13358)
Timestamp.offset property (and named arg in the constructor), has been deprecated in favor of freq
(GH12160)
pd.tseries.util.pivot_annual is deprecated. Use pivot_table as an alternative; an example is here
(GH736)
pd.tseries.util.isleapyear has been deprecated and will be removed in a subsequent release.
Datetime-likes now have a .is_leap_year property (GH13727)
Panel4D and PanelND constructors are deprecated and will be removed in a future version. The recom-
mended way to represent these types of n-dimensional data are with the xarray package. Pandas provides a
to_xarray() method to automate this conversion (GH13564).
pandas.tseries.frequencies.get_standard_freq is deprecated. Use pandas.tseries.
frequencies.to_offset(freq).rule_code instead (GH13874)
pandas.tseries.frequencies.to_offset's freqstr keyword is deprecated in favor of freq
(GH13874)
Categorical.from_array has been deprecated and will be removed in a future version (GH13854)
Improved performance of sparse arithmetic with BlockIndex when the number of blocks is large, though it is
recommended to use IntIndex in such cases (GH13082)
Improved performance of DataFrame.quantile() as it now operates per-block (GH11623)
Improved performance of float64 hash table operations, fixing some very slow indexing and groupby operations
in python 3 (GH13166, GH13334)
Improved performance of DataFrameGroupBy.transform (GH12737)
Improved performance of Index and Series .duplicated (GH10235)
Improved performance of Index.difference (GH12044)
Improved performance of RangeIndex.is_monotonic_increasing and
is_monotonic_decreasing (GH13749)
Improved performance of datetime string parsing in DatetimeIndex (GH13692)
Improved performance of hashing Period (GH12817)
Improved performance of factorize of datetime with timezone (GH13750)
Improved performance by lazily creating indexing hashtables on larger Indexes (GH14266)
Improved performance of groupby.groups (GH14293)
Avoided unnecessary materializing of a MultiIndex when introspecting for memory usage (GH14308)
Bug in groupby().shift(), which could cause a segfault or corruption in rare circumstances when group-
ing by columns with missing values (GH13813)
Bug in groupby().cumsum() calculating cumprod when axis=1. (GH13994)
Bug in pd.to_timedelta() in which the errors parameter was not being respected (GH13613)
Bug in io.json.json_normalize(), where non-ascii keys raised an exception (GH13213)
Bug when passing a not-default-indexed Series as xerr or yerr in .plot() (GH11858)
Bug in area plot draws legend incorrectly if subplot is enabled or legend is moved after plot (matplotlib 1.5.0 is
required to draw area plot legend properly) (GH9161, GH13544)
Bug in DataFrame assignment with an object-dtyped Index where the resultant column is mutable to the
original object. (GH13522)
Bug in matplotlib AutoDataFormatter; this restores the second scaled formatting and re-adds micro-second
scaled formatting (GH13131)
Bug in selection from a HDFStore with a fixed format and start and/or stop specified will now return the
selected range (GH8287)
Bug in Categorical.from_codes() where an unhelpful error was raised when an invalid ordered
parameter was passed in (GH14058)
Bug in Series construction from a tuple of integers on windows not returning default dtype (int64) (GH13646)
Bug in TimedeltaIndex addition with a Datetime-like object where addition overflow was not being caught
(GH14068)
Bug in .groupby(..).resample(..) when the same object is called multiple times (GH13174)
Bug in .to_records() when index name is a unicode string (GH13172)
Bug in pd.concat and .append which may coerce datetime64 and timedelta to object dtype containing
python built-in datetime or timedelta rather than Timestamp or Timedelta (GH13626)
Bug in PeriodIndex.append which may raise AttributeError when the result is object dtype
(GH13221)
Bug in CategoricalIndex.append which may accept a normal list (GH13626)
Bug in pd.concat and .append where the same timezone was reset to UTC (GH7795)
Bug in Series and DataFrame .append raises AmbiguousTimeError if data contains datetime near
DST boundary (GH13626)
Bug in DataFrame.to_csv() in which float values were being quoted even though quotations were speci-
fied for non-numeric values only (GH12922, GH13259)
Bug in DataFrame.describe() raising ValueError with only boolean columns (GH13898)
Bug in MultiIndex slicing where extra elements were returned when level is non-unique (GH12896)
Bug in .str.replace not raising TypeError for an invalid replacement (GH13438)
Bug in MultiIndex.from_arrays which didn't check for input array lengths matching (GH13599)
Bug in cartesian_product and MultiIndex.from_product which may raise with empty input ar-
rays (GH12258)
Bug in pd.read_csv() which may cause a segfault or corruption when iterating in large chunks over a
stream/file under rare circumstances (GH13703)
Bug in pd.read_csv() which caused errors to be raised when a dictionary containing scalars is passed in
for na_values (GH12224)
Bug in pd.read_csv() which caused BOM files to be incorrectly parsed by not ignoring the BOM (GH4793)
Bug in pd.read_csv() with engine='python' which raised errors when a numpy array was passed in
for usecols (GH12546)
Bug in pd.read_csv() where the index columns were being incorrectly parsed when parsed as dates with a
thousands parameter (GH14066)
Bug in pd.read_csv() with engine='python' in which NaN values weren't being detected after data
was converted to numeric values (GH13314)
Bug in pd.read_csv() in which the nrows argument was not properly validated for both engines
(GH10476)
Bug in pd.read_csv() with engine='python' in which infinities of mixed-case forms were not being
interpreted properly (GH13274)
Bug in pd.read_csv() with engine='python' in which trailing NaN values were not being parsed
(GH13320)
Bug in pd.read_csv() with engine='python' when reading from a tempfile.TemporaryFile
on Windows with Python 3 (GH13398)
Bug in pd.read_csv() that prevents usecols kwarg from accepting single-byte unicode strings
(GH13219)
Bug in pd.read_csv() that prevents usecols from being an empty set (GH13402)
Bug in pd.read_csv() in the C engine where the NULL character was not being parsed as NULL
(GH14012)
Bug in pd.read_csv() with engine='c' in which NULL quotechar was not accepted even though
quoting was specified as None (GH13411)
Bug in pd.read_csv() with engine='c' in which fields were not properly cast to float when quoting was
specified as non-numeric (GH13411)
Bug in pd.read_csv() in Python 2.x with non-UTF8 encoded, multi-character separated data (GH3404)
Bug in pd.read_csv(), where aliases for utf-xx (e.g. UTF-xx, UTF_xx, utf_xx) raised UnicodeDecodeError
(GH13549)
Bug in pd.read_csv, pd.read_table, pd.read_fwf, pd.read_stata and pd.read_sas where
files were opened by parsers but not closed if both chunksize and iterator were None. (GH13940)
Bug in StataReader, StataWriter, XportReader and SAS7BDATReader where a file was not prop-
erly closed when an error was raised. (GH13940)
Bug in pd.pivot_table() where margins_name is ignored when aggfunc is a list (GH13354)
Bug in pd.Series.str.zfill, center, ljust, rjust, and pad not raising TypeError when passed
non-integers (GH13598)
Bug in checking for any null objects in a TimedeltaIndex, which always returned True (GH13603)
Bug in Series arithmetic raises TypeError if it contains datetime-like as object dtype (GH13043)
Bug in Series.isnull() and Series.notnull() ignoring Period('NaT') (GH13737)
Bug in Series.fillna() and Series.dropna() not affecting Period('NaT') (GH13737)
Bug in .fillna(value=np.nan) incorrectly raises KeyError on a category dtyped Series
(GH14021)
Bug in extension dtype creation where the created types were not is/identical (GH13285)
Bug in .resample(..) where incorrect warnings were triggered by IPython introspection (GH13618)
Bug in NaT - Period raises AttributeError (GH13071)
Bug in Series comparison may output incorrect result if rhs contains NaT (GH9005)
Bug in Series and Index comparison may output incorrect result if it contains NaT with object dtype
(GH13592)
Bug in Period addition raises TypeError if Period is on right hand side (GH13069)
Bug in Period and Series or Index comparison raising TypeError (GH13200)
Bug in pd.set_eng_float_format() that would prevent NaN and Inf from formatting (GH11981)
Bug in .unstack with Categorical dtype resets .ordered to True (GH13249)
Clean some compile time warnings in datetime parsing (GH13607)
Bug in factorize raises AmbiguousTimeError if data contains datetime near DST boundary (GH13750)
Bug in .set_index raises AmbiguousTimeError if new index contains DST boundary and multi levels
(GH12920)
Bug in .shift raises AmbiguousTimeError if data contains datetime near DST boundary (GH13926)
Bug in pd.read_hdf() returning an incorrect result for a DataFrame with a categorical column when the
query doesn't match any values (GH13792)
Bug in .iloc when indexing with a non lex-sorted MultiIndex (GH13797)
Bug in .loc when indexing with date strings in a reverse sorted DatetimeIndex (GH14316)
Bug in Series comparison operators when dealing with zero dim NumPy arrays (GH13006)
Bug in .combine_first may return incorrect dtype (GH7630, GH10567)
Bug in groupby where apply returns different result depending on whether first result is None or not
(GH12824)
Bug in groupby(..).nth() where the group key is included inconsistently if called after
.head()/.tail() (GH12839)
Bug in .to_html, .to_latex and .to_string silently ignore custom datetime formatter passed through
the formatters key word (GH10690)
Bug in DataFrame.iterrows(), not yielding a Series subclass if defined (GH13977)
Bug in pd.to_numeric when errors='coerce' and input contains non-hashable objects (GH13324)
Bug in invalid Timedelta arithmetic and comparison may raise ValueError rather than TypeError
(GH13624)
Bug in invalid datetime parsing in to_datetime and DatetimeIndex may raise TypeError rather than
ValueError (GH11169, GH11287)
Bug in Index created with tz-aware Timestamp and mismatched tz option incorrectly coerces timezone
(GH13692)
Bug in DatetimeIndex with nanosecond frequency does not include timestamp specified with end
(GH13672)
Bug in `Series` when setting a slice with a `np.timedelta64` (GH14155)
Bug in Index raises OutOfBoundsDatetime if datetime exceeds datetime64[ns] bounds, rather
than coercing to object dtype (GH13663)
Bug in Index may ignore specified datetime64 or timedelta64 passed as dtype (GH13981)
Bug in RangeIndex which could be created without arguments rather than raising TypeError (GH13793)
Bug in .value_counts() raises OutOfBoundsDatetime if data exceeds datetime64[ns] bounds
(GH13663)
Bug in DatetimeIndex may raise OutOfBoundsDatetime if input np.datetime64 has other unit
than ns (GH9114)
Bug in Series creation with np.datetime64 which has other unit than ns as object dtype results in
incorrect values (GH13876)
Bug in resample with timedelta data where data was casted to float (GH13119).
Bug in pd.isnull() pd.notnull() raise TypeError if input datetime-like has other unit than ns
(GH13389)
Bug in pd.merge() may raise TypeError if input datetime-like has other unit than ns (GH13389)
Bug in HDFStore/read_hdf() discarded DatetimeIndex.name if tz was set (GH13884)
Bug in Categorical.remove_unused_categories() changes .codes dtype to platform int
(GH13261)
Bug in groupby with as_index=False returns all NaNs when grouping on multiple columns including a
categorical one (GH13204)
Bug in df.groupby(...)[...] where getitem with Int64Index raised an error (GH13731)
Bug in the CSS classes assigned to DataFrame.style for index names. Previously they were assigned
"col_heading level<n> col<c>" where n was the number of levels + 1. Now they are assigned
"index_name level<n>", where n is the correct level for that MultiIndex.
Bug where pd.read_gbq() could throw ImportError: No module named discovery as a re-
sult of a naming conflict with another python package called apiclient (GH13454)
Bug in Index.union returns an incorrect result with a named empty index (GH13432)
Bugs in Index.difference and DataFrame.join raise in Python3 when using mixed-integer indexes
(GH13432, GH12814)
Bug in subtract tz-aware datetime.datetime from tz-aware datetime64 series (GH14088)
Bug in .to_excel() when DataFrame contains a MultiIndex which contains a label with a NaN value
(GH13511)
Bug in invalid frequency offset string like D1, -2-3H may not raise ValueError (GH13930)
Bug in concat and groupby for hierarchical frames with RangeIndex levels (GH13542).
Bug in Series.str.contains() for Series containing only NaN values of object dtype (GH14171)
Bug in agg() function on groupby dataframe changes dtype of datetime64[ns] column to float64
(GH12821)
Bug in using a NumPy ufunc with PeriodIndex to add or subtract an integer raising
IncompatibleFrequency. Note that using standard operators like + or - is recommended, because
they use a more efficient path (GH13980)
Bug in operations on NaT returning float instead of datetime64[ns] (GH12941)
Bug in Series flexible arithmetic methods (like .add()) raises ValueError when axis=None
(GH13894)
Bug in DataFrame.to_csv() with MultiIndex columns in which a stray empty line was added
(GH6618)
Bug in DatetimeIndex, TimedeltaIndex and PeriodIndex.equals() may return True when
input isn't an Index but contains the same values (GH13107)
Bug in assignment against datetime with timezone may not work if it contains datetime near DST boundary
(GH14146)
Bug in pd.eval() and HDFStore query truncating long float literals with python 2 (GH14241)
Bug in Index raising KeyError with an incorrect column displayed when the column is not in the df and
columns contains duplicate values (GH13822)
Bug in Period and PeriodIndex creating wrong dates when frequency has combined offset aliases
(GH13874)
Bug in .to_string() when called with an integer line_width and index=False raising an
UnboundLocalError exception because idx was referenced before assignment.
Bug in eval() where the resolvers argument would not accept a list (GH14095)
Bugs in stack, get_dummies, make_axis_dummies which don't preserve categorical dtypes in
(multi)indexes (GH13854)
PeriodIndex can now accept a list and array which contain pd.NaT (GH13430)
Bug in df.groupby where .median() returns arbitrary values if grouped dataframe contains empty bins
(GH13629)
Bug in Index.copy() where name parameter was ignored (GH14302)
1.5 v0.18.1 (May 3, 2016)
This is a minor bug-fix release from 0.18.0 and includes a large number of bug fixes along with several new features,
enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
.groupby(...) has been enhanced to provide convenient syntax when working with .rolling(..),
.expanding(..) and .resample(..) per group, see here
pd.to_datetime() has gained the ability to assemble dates from a DataFrame, see here
Method chaining improvements, see here.
Custom business hour offset, see here.
Many bug fixes in the handling of sparse, see here
Expanded the Tutorials section with a feature on modern pandas, courtesy of @TomAugspurger. (GH13045).
New features
Custom Business Hour
.groupby(..) syntax with window and resample operations
Method chaining improvements
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which allows you to
specify arbitrary holidays (GH11514). For example:
In [1]: from pandas.tseries.offsets import CustomBusinessHour
In [2]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [3]: bhour_us = CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [4]: dt = pd.Timestamp('2014-01-17 15:00')
In [5]: dt + bhour_us
Out[5]: Timestamp('2014-01-17 16:00:00')
.groupby(...) has been enhanced to provide convenient syntax when working with .rolling(..), .
expanding(..) and .resample(..) per group, see (GH12486, GH12738).
You can now use .rolling(..) and .expanding(..) as methods on groupbys. These return another deferred
object (similar to what .rolling() and .expanding() do on ungrouped pandas objects). You can then operate
on these RollingGroupby objects in a similar manner.
Previously you would have to do this to get a rolling window mean per-group:
In [7]: df = pd.DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
...: 'B': np.arange(40)})
...:
In [8]: df
Out[8]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 1 5
6 1 6
.. .. ..
33 3 33
34 3 34
35 3 35
36 3 36
37 3 37
38 3 38
39 3 39
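The rolling calls themselves, previously via an explicit apply and now directly on the groupby object, would look roughly like this (a sketch using the df above; selecting the B column on the deferred object works as described):
# previously: an explicit apply per group
In [9]: df.groupby('A').apply(lambda x: x.rolling(4).B.mean())
# new: .rolling as a method on the groupby
In [10]: df.groupby('A').rolling(4).B.mean()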
In [11]: df = pd.DataFrame({'date': pd.date_range(start='2016-01-01', periods=4, freq='W'),
   ....:                    'group': [1, 1, 2, 2],
   ....:                    'val': [5, 6, 7, 8]}).set_index('date')
In [12]: df
Out[12]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [14]: df.groupby('group').resample('1D').ffill()
Out[14]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
2016-01-08 1 5
2016-01-09 1 5
... ... ...
2 2016-01-18 2 7
2016-01-19 2 7
2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
The following methods / indexers now accept a callable. It is intended to make these more useful in method chains,
see the documentation. (GH11485, GH12533)
.where() and .mask(): these can accept a callable for the condition and other arguments.
.loc, .iloc and .ix: these can accept a callable, and a tuple of callables as a slicer. The callable can return a
valid boolean indexer or anything which is valid for these indexers' input.
[] indexing
Finally, you can use a callable in [] indexing of Series, DataFrame and Panel. The callable must return a valid input
for [] indexing depending on its class and index type.
Using these methods / indexers, you can chain data selection operations without using a temporary variable, as
sketched below.
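A short sketch of these callable forms (the frame here is hypothetical):
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [10, 20, 30, 40]})
# .loc with a callable returning a boolean indexer
df.loc[lambda d: d['A'] > 2]
# .where with a callable condition
df.where(lambda d: d % 2 == 0)
# [] indexing with a callable; chained selection without a temporary variable
(df.assign(C=lambda d: d['A'] * d['B'])
   [lambda d: d['C'] > 40])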
Partial string indexing now matches on DateTimeIndex when part of a MultiIndex (GH10331)
In [22]: dft2 = pd.DataFrame(np.random.randn(20, 1),
   ....:                     columns=['A'],
   ....:                     index=pd.MultiIndex.from_product([pd.date_range('20130101',
   ....:                                                                     periods=10,
   ....:                                                                     freq='12H'),
   ....:                                                       ['a', 'b']]))
In [23]: dft2
Out[23]:
A
2013-01-01 00:00:00 a 0.156998
b -0.571455
2013-01-01 12:00:00 a 1.057633
b -0.791489
2013-01-02 00:00:00 a -0.524627
b 0.071878
2013-01-02 12:00:00 a 1.910759
... ...
2013-01-04 00:00:00 b 1.015405
In [24]: dft2.loc['2013-01-05']
Out[24]:
A
2013-01-05 00:00:00 a 0.440266
b 0.688972
2013-01-05 12:00:00 a -0.276646
b 1.924533
On other levels
In [25]: idx = pd.IndexSlice
In [26]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [27]: dft2
Out[27]:
A
a 2013-01-01 00:00:00 0.156998
2013-01-01 12:00:00 1.057633
2013-01-02 00:00:00 -0.524627
2013-01-02 12:00:00 1.910759
2013-01-03 00:00:00 0.513082
2013-01-03 12:00:00 1.043945
2013-01-04 00:00:00 1.459927
... ...
b 2013-01-02 12:00:00 0.787965
2013-01-03 00:00:00 -0.546416
2013-01-03 12:00:00 2.107785
2013-01-04 00:00:00 1.015405
2013-01-04 12:00:00 -0.675521
2013-01-05 00:00:00 0.688972
2013-01-05 12:00:00 1.924533
In [28]: dft2.loc[idx[:, '2013-01-05'], :]
Out[28]:
A
a 2013-01-05 00:00:00 0.440266
2013-01-05 12:00:00 -0.276646
b 2013-01-05 00:00:00 0.688972
2013-01-05 12:00:00 1.924533
pd.to_datetime() has gained the ability to assemble datetimes from a passed in DataFrame or a dict.
(GH8158).
In [29]: df = pd.DataFrame({'year': [2015, 2016],
   ....:                    'month': [2, 3],
   ....:                    'day': [4, 5],
   ....:                    'hour': [2, 3]})
In [30]: df
Out[30]:
day hour month year
0 4 2 2 2015
1 5 3 3 2016
In [31]: pd.to_datetime(df)
Out[31]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
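For instance (a sketch using the frame above), passing just the columns that make up the date:
# the 'hour' column is simply left out of the assembly
pd.to_datetime(df[['year', 'month', 'day']])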
Index now supports .str.get_dummies() which returns MultiIndex, see Creating Indicator Vari-
ables (GH10008, GH10103)
In [36]: idx = pd.Index(['a|b', 'a|c', 'b|c'])
In [37]: idx.str.get_dummies('|')
Out[37]:
MultiIndex(levels=[[0, 1], [0, 1], [0, 1]],
labels=[[1, 1, 0], [1, 0, 1], [0, 1, 1]],
names=['a', 'b', 'c'])
pd.crosstab() has gained a normalize argument for normalizing frequency tables (GH12569). Exam-
ples in the updated docs here.
.resample(..).interpolate() is now supported (GH12925)
.isin() now accepts passed sets (GH12988)
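The set support in .isin(), as a one-line sketch (values here are hypothetical):
pd.Series([1, 2, 3]).isin({1, 3})   # -> [True, False, True]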
These changes conform sparse handling to return the correct types and work to make a smoother experience with
indexing.
SparseArray.take now returns a scalar for scalar input, SparseArray for others. Furthermore, it handles a
negative indexer with the same rule as Index (GH10560, GH12796)
In [39]: s.take(0)
Out[39]: nan
The index in .groupby(..).nth() output is now more consistent when the as_index argument is passed
(GH11039):
In [41]: df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': [1, 2, 3]})
In [42]: df
Out[42]:
A B
0 a 1
1 b 2
2 a 3
New Behavior:
In [43]: df.groupby('A', as_index=True)['B'].nth(0)
Out[43]:
A
a 1
b 2
Name: B, dtype: int64
Furthermore, previously, a .groupby would always sort, regardless if sort=False was passed with .nth().
In [45]: np.random.seed(1234)
In [46]: df = pd.DataFrame(np.random.randn(100, 2), columns=['a', 'b'])
In [47]: df['c'] = np.random.randint(0, 4, 100)
Previous Behavior:
In [4]: df.groupby('c', sort=True).nth(1)
Out[4]:
a b
c
0 -0.334077 0.002118
1 0.036142 -2.074978
2 -0.720589 0.887163
3 0.859588 -0.636524
New Behavior:
In [48]: df.groupby('c', sort=True).nth(1)
Out[48]:
a b
c
0 -0.334077 0.002118
1 0.036142 -2.074978
2 -0.720589 0.887163
3 0.859588 -0.636524
In [49]: df.groupby('c', sort=False).nth(1)
Out[49]:
a b
c
2 -0.720589 0.887163
3 0.859588 -0.636524
0 -0.334077 0.002118
1 0.036142 -2.074978
Compatibility between pandas array-like methods (e.g. sum and take) and their numpy counterparts has been greatly
increased by augmenting the signatures of the pandas methods so as to accept arguments that can be passed in from
numpy, even if they are not necessarily used in the pandas implementation (GH12644, GH12638, GH12687)
.searchsorted() for Index and TimedeltaIndex now accept a sorter argument to maintain com-
patibility with numpy's searchsorted function (GH12238)
Bug in numpy compatibility of np.round() on a Series (GH12600)
An example of this signature augmentation is illustrated below:
In [50]: sp = pd.SparseDataFrame([1, 2, 3])
In [51]: sp
Out[51]:
0
0 1
1 2
2 3
Previous behaviour:
In [2]: np.cumsum(sp, axis=0)
...
TypeError: cumsum() takes at most 2 arguments (4 given)
New behaviour:
In [52]: np.cumsum(sp, axis=0)
Out[52]:
0
0 1
1 3
2 6
Using apply on resampling groupby operations (using a pd.TimeGrouper) now has the same output types as
similar apply calls on other groupby operations. (GH11742).
In [53]: df = pd.DataFrame({'date': pd.to_datetime(['10/10/2000', '11/10/2000']),
....: 'value': [10, 13]})
....:
In [54]: df
Out[54]:
date value
0 2000-10-10 10
1 2000-11-10 13
Previous behavior:
In [1]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.sum())
Out[1]:
...
TypeError: cannot concatenate a non-NDFrame object
# Output is a Series
In [2]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value']].sum())
Out[2]:
date
2000-10-31 value 10
2000-11-30 value 13
dtype: int64
New Behavior:
# Output is a Series
In [55]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.sum())
Out[55]:
date
2000-10-31 10
2000-11-30 13
Freq: M, dtype: int64
# Output is a DataFrame
In [56]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value']].sum())
Out[56]:
value
date
2000-10-31 10
2000-11-30 13
In order to standardize the read_csv API for both the c and python engines, both will now raise an
EmptyDataError, a subclass of ValueError, in response to empty columns or header (GH12493, GH12506)
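A minimal sketch of the new behaviour (EmptyDataError subclasses ValueError, so existing except ValueError handlers keep working):
import pandas as pd
from io import StringIO
try:
    pd.read_csv(StringIO(''))
except ValueError as err:  # EmptyDataError is a ValueError subclass
    print(type(err).__name__)  # EmptyDataError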
In addition to this error change, several others have been made as well:
CParserError now sub-classes ValueError instead of just an Exception (GH12551)
A CParserError is now raised instead of a generic Exception in read_csv when the c engine cannot
parse a column (GH12506)
A ValueError is now raised instead of a generic Exception in read_csv when the c engine encounters
a NaN value in an integer column (GH12506)
A ValueError is now raised instead of a generic Exception in read_csv when true_values is
specified, and the c engine encounters an element in a column containing unencodable bytes (GH12506)
pandas.parser.OverflowError exception has been removed and has been replaced with Python's built-
in OverflowError exception (GH12506)
pd.read_csv() no longer allows a combination of strings and integers for the usecols parameter
(GH12678)
Bugs in pd.to_datetime() when passing a unit with convertible entries and errors='coerce', or non-
convertible entries with errors='ignore'. Furthermore, an OutOfBoundsDatetime exception will be
raised when an out-of-range value is encountered for that unit when errors='raise'. (GH11758, GH13052,
GH13059)
.swaplevel() for Series, DataFrame, Panel, and MultiIndex now features defaults for its first
two parameters i and j that swap the two innermost levels of the index (see the sketch after this list). (GH12934)
.searchsorted() for Index and TimedeltaIndex now accept a sorter argument to maintain com-
patibility with numpy's searchsorted function (GH12238)
Period and PeriodIndex now raise IncompatibleFrequency, which inherits from ValueError,
rather than a raw ValueError (GH12615)
Series.apply for category dtype now applies the passed function to each of the .categories (and not
the .codes), and returns a category dtype if possible (GH12473)
read_csv will now raise a TypeError if parse_dates is not a boolean, list, or dictionary (matching
the doc-string) (GH5636)
The default for .query()/.eval() is now engine=None, which will use numexpr if it's installed;
otherwise it will fall back to the python engine. This mimics the pre-0.18.1 behavior if numexpr is installed
(previously, if numexpr was not installed, .query()/.eval() would raise). (GH12749)
pd.show_versions() now includes pandas_datareader version (GH12740)
Provide a proper __name__ and __qualname__ attributes for generic functions (GH12021)
pd.concat(ignore_index=True) now uses RangeIndex as default (GH12695)
pd.merge() and DataFrame.join() will show a UserWarning when merging/joining a single- with
a multi-leveled dataframe (GH9455, GH12219)
Compat with scipy > 0.17 for deprecated piecewise_polynomial interpolation method; support for the
replacement from_derivatives method (GH12887)
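The swaplevel defaults mentioned above, in a short sketch (the two-level index here is hypothetical):
import pandas as pd
mi = pd.MultiIndex.from_product([['a', 'b'], [0, 1]])
s = pd.Series(range(4), index=mi)
# these are now equivalent: the two innermost levels are swapped
s.swaplevel()
s.swaplevel(-2, -1)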
usecols parameter in pd.read_csv is now respected even when the lines of a CSV file are not even
(GH12203)
Bug in Timedelta.min and Timedelta.max, the properties now report the true minimum/maximum
timedeltas as recognized by pandas. See the documentation. (GH12727)
Bug in .quantile() with interpolation may coerce to float unexpectedly (GH12772)
Bug in .quantile() with empty Series may return scalar rather than empty Series (GH12772)
Bug in .loc with out-of-bounds in a large indexer would raise IndexError rather than KeyError
(GH12527)
Bug in resampling when using a TimedeltaIndex and .asfreq(), would previously not include the final
fencepost (GH12926)
Bug in equality testing with a Categorical in a DataFrame (GH12564)
Bug in GroupBy.first(), .last() returns incorrect row when TimeGrouper is used (GH7453)
Bug in pd.read_csv() with the c engine when specifying skiprows with newlines in quoted items
(GH10911, GH12775)
Bug in DataFrame timezone lost when assigning tz-aware datetime Series with alignment (GH12981)
Bug in .value_counts() when normalize=True and dropna=True where nulls still contributed to
the normalized count (GH12558)
Bug in Series.value_counts() loses name if its dtype is category (GH12835)
Bug in Series.value_counts() loses timezone info (GH12835)
Bug in Series.value_counts(normalize=True) with Categorical raises
UnboundLocalError (GH12835)
Bug in Panel.fillna() ignoring inplace=True (GH12633)
Bug in pd.read_csv() when specifying names, usecols, and parse_dates simultaneously with the
c engine (GH9755)
Bug in pd.read_csv() when specifying delim_whitespace=True and lineterminator simulta-
neously with the c engine (GH12912)
Bug in Series.rename, DataFrame.rename and DataFrame.rename_axis not treating Series
as mappings to relabel (GH12623).
Cleanup in .rolling.min and .rolling.max to enhance dtype handling (GH12373)
Bug in groupby where complex types are coerced to float (GH12902)
Bug in Series.map raises TypeError if its dtype is category or tz-aware datetime (GH12473)
Bugs on 32bit platforms for some test comparisons (GH12972)
Bug in index coercion when falling back from RangeIndex construction (GH12893)
Better error message in window functions when an invalid argument (e.g. a float window) is passed (GH12669)
Bug in slicing subclassed DataFrame defined to return subclassed Series may return normal Series
(GH11559)
Bug in .str accessor methods may raise ValueError if input has name and the result is DataFrame or
MultiIndex (GH12617)
Bug in DataFrame.last_valid_index() and DataFrame.first_valid_index() on empty
frames (GH12800)
Bug in CategoricalIndex.get_loc returns different result from regular Index (GH12531)
Bug in PeriodIndex.resample where name not propagated (GH12769)
1.6 v0.18.0 (March 13, 2016)
This is a major release from 0.17.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.18.0 no longer supports compatibility with Python version 2.6 and 3.3 (GH7718,
GH11273)
Warning: numexpr version 2.4.4 will now show a warning and not be used as a computation back-end for
pandas because of some buggy behavior. This does not affect other versions (>= 2.1 and >= 2.4.6). (GH12489)
Highlights include:
Moving and expanding window functions are now methods on Series and DataFrame, similar to .groupby,
see here.
Adding support for a RangeIndex as a specialized form of the Int64Index for memory savings, see here.
API breaking change to the .resample method to make it more .groupby like, see here.
Removal of support for positional indexing with floats, which was deprecated since 0.14.0. This will now raise
a TypeError, see here.
The .to_xarray() function has been added for compatibility with the xarray package, see here.
The read_sas function has been enhanced to read sas7bdat files, see here.
Addition of the .str.extractall() method, and API changes to the .str.extract() method and .str.cat() method.
pd.test() top-level nose test runner is available (GH4327).
Check the API Changes and deprecations before updating.
New features
Window functions are now methods
Changes to rename
Range Index
Changes to str.extract
Addition of str.extractall
Changes to str.cat
Datetimelike rounding
Formatting of Integers in FloatIndex
Changes to dtype assignment behaviors
to_xarray
Latex Representation
pd.read_sas() changes
Other enhancements
Backwards incompatible API changes
NaT and Timedelta operations
Changes to msgpack
Signature change for .rank
Bug in QuarterBegin with n=0
Resample API
* Downsampling
* Upsampling
* Previous API will work but with deprecations
Changes to eval
Other API Changes
Deprecations
Removal of deprecated float indexers
Removal of prior version deprecations/changes
Performance Improvements
Bug Fixes
Window functions have been refactored to be methods on Series/DataFrame objects, rather than top-level func-
tions, which are now deprecated. This allows these window-type functions to have a similar API to that of .groupby.
See the full documentation here (GH11603, GH12373)
In [1]: np.random.seed(1234)
In [2]: df = pd.DataFrame({'A': range(10), 'B': np.random.randn(10)})
In [3]: df
Out[3]:
A B
0 0 0.471435
1 1 -1.190976
2 2 1.432707
3 3 -0.312652
4 4 -0.720589
5 5 0.887163
6 6 0.859588
7 7 -0.636524
8 8 0.015696
9 9 -2.242685
Previous Behavior:
In [8]: pd.rolling_mean(df,window=3)
FutureWarning: pd.rolling_mean is deprecated for DataFrame and will be
removed in a future version, replace with
DataFrame.rolling(window=3,center=False).mean()
Out[8]:
A B
0 NaN NaN
1 NaN NaN
2 1 0.237722
3 2 -0.023640
4 3 0.133155
5 4 -0.048693
6 5 0.342054
7 6 0.370076
8 7 0.079587
9 8 -0.954504
New Behavior:
In [4]: r = df.rolling(window=3)
In [5]: r
Out[5]: Rolling [window=3,center=False,axis=0]
In [9]: r.<TAB>
r.A           r.agg         r.apply       r.count       r.exclusions  r.max         r.median      r.name        r.skew        r.sum
r.B           r.aggregate   r.corr        r.cov         r.kurt        r.mean        r.min         r.quantile    r.std         r.var
In [6]: r.mean()
Out[6]:
A B
0 NaN NaN
1 NaN NaN
2 1.0 0.237722
3 2.0 -0.023640
4 3.0 0.133155
5 4.0 -0.048693
6 5.0 0.342054
7 6.0 0.370076
8 7.0 0.079587
9 8.0 -0.954504
In [7]: r['A'].mean()
Out[7]:
0 NaN
1 NaN
2 1.0
3 2.0
4 3.0
5 4.0
6 5.0
7 6.0
8 7.0
9 8.0
Name: A, dtype: float64
Series.rename and NDFrame.rename_axis can now take a scalar or list-like argument for altering the Series
or axis name, in addition to their old behaviors of altering labels. (GH9494, GH11965)
In [9]: s = pd.Series(np.random.randn(5))
In [10]: s.rename('newname')
Out[10]:
0 1.150036
1 0.991946
2 0.953324
3 -2.021255
4 -0.334077
Name: newname, dtype: float64
In [11]: df = pd.DataFrame(np.random.randn(5, 2))
In [12]: (df.rename_axis("indexname")
   ....:    .rename_axis("columns_name", axis="columns"))
   ....:
Out[12]:
columns_name 0 1
indexname
0 0.002118 0.405453
1 0.289092 1.321158
2 -1.546906 -0.202646
3 -0.655969 0.193421
4 0.553439 1.318152
The new functionality works well in method chains. Previously these methods only accepted functions or dicts map-
ping a label to a new label. This continues to work as before for function or dict-like values.
A RangeIndex has been added to the Int64Index sub-classes to support a memory saving alternative for common
use cases. This has a similar implementation to the python range object (xrange in python 2), in that it only
stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to
Int64Index if needed.
This will now be the default constructed index for NDFrame objects, rather than previously an Int64Index.
(GH939, GH12070, GH12071, GH12109, GH12888)
Previous Behavior:
In [3]: s = pd.Series(range(1000))
In [4]: s.index
Out[4]:
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999], dtype='int64',
length=1000)
In [6]: s.index.nbytes
Out[6]: 8000
New Behavior:
In [13]: s = pd.Series(range(1000))
In [14]: s.index
Out[14]: RangeIndex(start=0, stop=1000, step=1)
In [15]: s.index.nbytes
Out[15]: 80
The .str.extract method takes a regular expression with capture groups, finds the first match in each subject string, and
returns the contents of the capture groups (GH11386).
In v0.18.0, the expand argument was added to extract.
expand=False: it returns a Series, Index, or DataFrame, depending on the subject and regular expres-
sion pattern (same behavior as pre-0.18.0).
expand=True: it always returns a DataFrame, which is more consistent and less confusing from the per-
spective of a user.
Currently the default is expand=None which gives a FutureWarning and uses expand=False. To avoid this
warning, please explicitly specify expand.
In [16]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)', expand=False)
Out[16]:
0 1
1 2
2 NaN
dtype: object
Calling on an Index with a regex with exactly one capture group returns an Index if expand=False.
In [18]: s = pd.Series(['a1', 'b2', 'c3'], index=['A11', 'B22', 'C33'])
In [19]: s.index
Out[19]: Index(['A11', 'B22', 'C33'], dtype='object')
In [20]: s.index.str.extract('(?P<letter>[a-zA-Z])', expand=False)
Out[20]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group raises ValueError if expand=False.
In summary, extract(expand=True) always returns a DataFrame with a row for every subject string, and a
column for every capture group.
The .str.extractall method was added (GH11386). Unlike extract, which returns only the first match,
extractall returns all matches.
In [23]: s = pd.Series(['a1a2', 'b1', 'c1'], index=['A', 'B', 'C'])
In [24]: s
Out[24]:
A a1a2
B b1
C c1
dtype: object
In [26]: s.str.extractall("(?P<letter>[ab])(?P<digit>\d)")
Out[26]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
The method .str.cat() concatenates the members of a Series. Before, if NaN values were present in the Series,
calling .str.cat() on it would return NaN, unlike the rest of the Series.str.* API. This behavior has been
amended to ignore NaN values by default. (GH11435).
A new, friendlier ValueError is added to protect against the mistake of supplying the sep as an arg, rather than as
a kwarg. (GH11334).
In [27]: pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ')
Out[27]: 'a b c'
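Relatedly, missing values can be given an explicit representation via na_rep (a small sketch continuing the example above):
In [28]: pd.Series(['a', 'b', np.nan, 'c']).str.cat(sep=' ', na_rep='?')
Out[28]: 'a b ? c'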
DatetimeIndex, Timestamp, TimedeltaIndex, Timedelta have gained the .round(), .floor() and
.ceil() method for datetimelike rounding, flooring and ceiling. (GH4314, GH11963)
Naive datetimes
In [29]: dr = pd.date_range('20130101 09:12:56.1234', periods=3)
In [30]: dr
Out[30]:
DatetimeIndex(['2013-01-01 09:12:56.123400', '2013-01-02 09:12:56.123400',
'2013-01-03 09:12:56.123400'],
dtype='datetime64[ns]', freq='D')
In [31]: dr.round('s')
Out[31]:
DatetimeIndex(['2013-01-01 09:12:56', '2013-01-02 09:12:56',
'2013-01-03 09:12:56'],
dtype='datetime64[ns]', freq=None)
# Timestamp scalar
In [32]: dr[0]
Out[32]:
Timestamp('2013-01-01 09:12:56.123400', freq='D')
In [33]: dr[0].round('10s')
Out[33]:
Timestamp('2013-01-01 09:13:00')
Tz-aware datetimes are rounded, floored and ceiled in local times
In [34]: dr = dr.tz_localize('US/Eastern')
In [35]: dr
Out[35]:
DatetimeIndex(['2013-01-01 09:12:56.123400-05:00',
'2013-01-02 09:12:56.123400-05:00',
'2013-01-03 09:12:56.123400-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='D')
In [36]: dr.round('s')
Out[36]:
DatetimeIndex(['2013-01-01 09:12:56-05:00', '2013-01-02 09:12:56-05:00',
'2013-01-03 09:12:56-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Timedeltas
In [37]: t = pd.timedelta_range('1 days 2 hr 13 min 45 us', periods=3, freq='d')
In [38]: t
Out[38]:
TimedeltaIndex(['1 days 02:13:00.000045', '2 days 02:13:00.000045',
'3 days 02:13:00.000045'],
dtype='timedelta64[ns]', freq='D')
In [39]: t.round('10min')
Out[39]:
TimedeltaIndex(['1 days 02:10:00', '2 days 02:10:00', '3 days 02:10:00'], dtype=
'timedelta64[ns]', freq=None)
# Timedelta scalar
In [40]: t[0]
Out[40]:
Timedelta('1 days 02:13:00.000045')
In [41]: t[0].round('2h')
Out[41]:
Timedelta('1 days 02:00:00')
In addition, .round(), .floor() and .ceil() will be available through the .dt accessor of Series.
In [42]: s = pd.Series(dr)
In [43]: s
Out[43]:
0 2013-01-01 09:12:56.123400-05:00
1 2013-01-02 09:12:56.123400-05:00
2 2013-01-03 09:12:56.123400-05:00
dtype: datetime64[ns, US/Eastern]
In [44]: s.dt.round('D')
Out[44]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Integers in FloatIndex, e.g. 1., are now formatted with a decimal point and a 0 digit, e.g. 1.0 (GH11713) This
change not only affects the display to the console, but also the output of IO methods like .to_csv or .to_html.
Previous Behavior:
In [3]: s
Out[3]:
0 1
1 2
2 3
dtype: int64
In [4]: s.index
Out[4]: Float64Index([0.0, 1.0, 2.0], dtype='float64')
In [5]: print(s.to_csv(path=None))
0,1
1,2
2,3
New Behavior:
In [45]: s = pd.Series([1, 2, 3], index=np.arange(3.))
In [46]: s
Out[46]:
0.0 1
1.0 2
2.0 3
dtype: int64
In [47]: s.index
Out[47]: Float64Index([0.0, 1.0, 2.0], dtype='float64')
In [48]: print(s.to_csv(path=None))
Out[48]:
0.0,1
1.0,2
2.0,3
When a DataFrame's slice is updated with a new slice of the same dtype, the dtype of the DataFrame will now remain
the same. (GH10503)
Previous Behavior:
In [7]: df.dtypes
Out[7]:
a int64
b uint32
dtype: object
In [8]: ix = df['a'] == 1
In [9]: df.loc[ix, 'b'] = df.loc[ix, 'b']
In [11]: df.dtypes
Out[11]:
a int64
b int64
dtype: object
New Behavior:
In [50]: df.dtypes
Out[50]:
a int64
b uint32
dtype: object
In [51]: ix = df['a'] == 1
In [52]: df.loc[ix, 'b'] = df.loc[ix, 'b']
In [53]: df.dtypes
Out[53]:
a int64
b uint32
dtype: object
When a DataFrame's integer slice is partially updated with a new slice of floats that could potentially be downcast
to integer without losing precision, the dtype of the slice will be set to float instead of integer.
Previous Behavior:
In [4]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
columns=list('abc'),
index=[[4,4,8], [8,10,12]])
In [5]: df
Out[5]:
a b c
4 8 1 2 3
10 4 5 6
8 12 7 8 9
In [7]: df.loc[4, 'c'] = np.array([0., 1.])
In [8]: df
Out[8]:
a b c
4 8 1 2 0
10 4 5 1
8 12 7 8 9
New Behavior:
In [54]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
....: columns=list('abc'),
....: index=[[4,4,8], [8,10,12]])
....:
In [55]: df
Out[55]:
a b c
4 8 1 2 3
10 4 5 6
8 12 7 8 9
In [56]: df.loc[4, 'c'] = np.array([0., 1.])
In [57]: df
Out[57]:
a b c
4 8 1 2 0.0
10 4 5 1.0
8 12 7 8 9.0
1.6.1.10 to_xarray
In a future version of pandas, we will be deprecating Panel and other > 2 ndim objects. In order to provide for
continuity, all NDFrame objects have gained the .to_xarray() method in order to convert to xarray objects,
which has a pandas-like interface for > 2 ndim. (GH11972)
See the xarray full-documentation here.
In [1]: p = pd.Panel(np.arange(2*3*4).reshape(2,3,4))
In [2]: p.to_xarray()
Out[2]:
<xarray.DataArray (items: 2, major_axis: 3, minor_axis: 4)>
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
...
DataFrame has gained a ._repr_latex_() method in order to allow for conversion to latex in an IPython/Jupyter
notebook using nbconvert. (GH11778)
Note that this must be activated by setting the option pd.options.display.latex.repr=True (GH12182)
For example, if you have a jupyter notebook you plan to convert to latex using nbconvert, place the statement
pd.options.display.latex.repr=True in the first cell to have the contained DataFrame output also stored as latex.
The options display.latex.escape and display.latex.longtable have also been added to the config-
uration and are used automatically by the to_latex method. See the available options docs for more info.
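For example (a sketch; the option names are as described above):
import pandas as pd
pd.set_option('display.latex.repr', True)        # notebook output also stored as latex
pd.set_option('display.latex.longtable', False)  # plain tabular environment
print(pd.DataFrame({'a': [1, 2]}).to_latex(escape=True))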
read_sas has gained the ability to read SAS7BDAT files, including compressed files. The files can be read in
entirety, or incrementally. For full details see here. (GH4052)
the leading whitespace has been removed from the output of the .to_string(index=False) method
(GH11833)
the out parameter has been removed from the Series.round() method. (GH11763)
DataFrame.round() leaves non-numeric columns unchanged in its return, rather than raises. (GH11885)
DataFrame.head(0) and DataFrame.tail(0) return empty frames, rather than self. (GH11937)
Series.head(0) and Series.tail(0) return empty series, rather than self. (GH11937)
to_msgpack and read_msgpack encoding now defaults to 'utf-8'. (GH12170)
the order of keyword arguments to text file parsing functions (.read_csv(), .read_table(),
.read_fwf()) changed to group related arguments. (GH11555)
NaTType.isoformat now returns the string 'NaT' to allow the result to be passed to the constructor of
Timestamp. (GH12300)
NaT and Timedelta have expanded arithmetic operations, which are extended to Series arithmetic where applica-
ble. Operations defined for datetime64[ns] or timedelta64[ns] are now also defined for NaT (GH11564).
NaT now supports arithmetic operations with integers and floats.
In [58]: pd.NaT * 1
Out[58]: NaT
In [60]: pd.NaT / 2
\\\\\\\\\\\\\\\\\\\\\\\\\\Out[60]: NaT
NaT may represent either a datetime64[ns] null or a timedelta64[ns] null. Given the ambiguity, it is
treated as a timedelta64[ns], which allows more operations to succeed.
In [64]: pd.NaT + pd.NaT
Out[64]: NaT
# same as
In [65]: pd.Timedelta('1s') + pd.Timedelta('1s')
Out[65]: Timedelta('0 days 00:00:02')
as opposed to
In [3]: pd.Timestamp('19900315') + pd.Timestamp('19900315')
TypeError: unsupported operand type(s) for +: 'Timestamp' and 'Timestamp'
However, when wrapped in a Series whose dtype is datetime64[ns] or timedelta64[ns], the dtype
information is respected.
In [69]: ser
Out[69]:
0 1 days
1 2 days
2 3 days
dtype: timedelta64[ns]
NaT.isoformat() now returns 'NaT'. This change allows pd.Timestamp to rehydrate any timestamp-like
object from its isoformat (GH12300).
Forward incompatible changes in msgpack writing format were made over 0.17.0 and 0.18.0; older versions of
pandas cannot read files packed by newer versions (GH12129, GH10527)
Bugs in to_msgpack and read_msgpack introduced in 0.17.0 and fixed in 0.18.0 caused files packed in Python
2 to be unreadable by Python 3 (GH12142).
New signature
In [71]: pd.Series([0,1]).rank(axis=0, method='average', numeric_only=None,
....: na_option='keep', ascending=True, pct=False)
....:
Out[71]:
0 1.0
1 2.0
dtype: float64
In previous versions, the behavior of the QuarterBegin offset was inconsistent depending on the date when the n
parameter was 0. (GH11406)
The general semantics of anchored offsets for n=0 is to not move the date when it is an anchor point (e.g., a quarter
start date), and otherwise roll forward to the next anchor point.
In [73]: d = pd.Timestamp('2014-02-01')
In [74]: d
Out[74]: Timestamp('2014-02-01 00:00:00')
In [75]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[75]: Timestamp('2014-02-01 00:00:00')
In [76]: d + pd.offsets.QuarterBegin(n=0, startingMonth=1)
Out[76]: Timestamp('2014-04-01 00:00:00')
For the QuarterBegin offset in previous versions, the date would be rolled backwards if date was in the same
month as the quarter start date.
In [3]: d = pd.Timestamp('2014-02-15')
In [4]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[4]: Timestamp('2014-02-01 00:00:00')
This behavior has been corrected in version 0.18.0, which is consistent with other anchored offsets like MonthBegin
and YearBegin.
In [77]: d = pd.Timestamp('2014-02-15')
In [78]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[78]: Timestamp('2014-05-01 00:00:00')
Like the change in the window functions API above, .resample(...) is changing to have a more groupby-like
API. (GH11732, GH12702, GH12202, GH12332, GH12334, GH12348, GH12448).
In [79]: np.random.seed(1234)
In [80]: df = pd.DataFrame(np.random.rand(10,4),
....: columns=list('ABCD'),
....: index=pd.date_range('2010-01-01 09:00:00', periods=10,
freq='s'))
....:
In [81]: df
Out[81]:
A B C D
2010-01-01 09:00:00 0.191519 0.622109 0.437728 0.785359
2010-01-01 09:00:01 0.779976 0.272593 0.276464 0.801872
2010-01-01 09:00:02 0.958139 0.875933 0.357817 0.500995
2010-01-01 09:00:03 0.683463 0.712702 0.370251 0.561196
2010-01-01 09:00:04 0.503083 0.013768 0.772827 0.882641
2010-01-01 09:00:05 0.364886 0.615396 0.075381 0.368824
2010-01-01 09:00:06 0.933140 0.651378 0.397203 0.788730
2010-01-01 09:00:07 0.316836 0.568099 0.869127 0.436173
2010-01-01 09:00:08 0.802148 0.143767 0.704261 0.704581
2010-01-01 09:00:09 0.218792 0.924868 0.442141 0.909316
Previous API:
You would write a resampling operation that immediately evaluates. If a how parameter was not provided, it would
default to how='mean'.
In [6]: df.resample('2s')
Out[6]:
A B C D
2010-01-01 09:00:00 0.485748 0.447351 0.357096 0.793615
2010-01-01 09:00:02 0.820801 0.794317 0.364034 0.531096
2010-01-01 09:00:04 0.433985 0.314582 0.424104 0.625733
2010-01-01 09:00:06 0.624988 0.609738 0.633165 0.612452
2010-01-01 09:00:08 0.510470 0.534317 0.573201 0.806949
New API:
Now, you can write .resample(..) as a 2-stage operation like .groupby(...), which yields a Resampler.
In [82]: r = df.resample('2s')
In [83]: r
Out[83]: DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left, label=left,
convention=start, base=0]
Downsampling
You can then use this object to perform operations. These are downsampling operations (going from a higher frequency
to a lower one).
In [84]: r.mean()
Out[84]:
A B C D
2010-01-01 09:00:00 0.485748 0.447351 0.357096 0.793615
2010-01-01 09:00:02 0.820801 0.794317 0.364034 0.531096
2010-01-01 09:00:04 0.433985 0.314582 0.424104 0.625733
2010-01-01 09:00:06 0.624988 0.609738 0.633165 0.612452
2010-01-01 09:00:08 0.510470 0.534317 0.573201 0.806949
In [85]: r.sum()
Out[85]:
A B C D
2010-01-01 09:00:00 0.971495 0.894701 0.714192 1.587231
2010-01-01 09:00:02 1.641602 1.588635 0.728068 1.062191
2010-01-01 09:00:04 0.867969 0.629165 0.848208 1.251465
2010-01-01 09:00:06 1.249976 1.219477 1.266330 1.224904
2010-01-01 09:00:08 1.020940 1.068634 1.146402 1.613897
Furthermore, resample now supports getitem operations to perform the resample on specific columns.
In [86]: r[['A','C']].mean()
Out[86]:
A C
2010-01-01 09:00:00 0.485748 0.357096
2010-01-01 09:00:02 0.820801 0.364034
2010-01-01 09:00:04 0.433985 0.424104
2010-01-01 09:00:06 0.624988 0.633165
2010-01-01 09:00:08 0.510470 0.573201
Upsampling
Upsampling operations take you from a lower frequency to a higher frequency. These are now performed with the
Resampler objects with backfill(), ffill(), fillna() and asfreq() methods.
In [89]: s = pd.Series(np.arange(5, dtype='int64'),
   ....:               index=pd.date_range('2010-01-01', periods=5, freq='Q'))
....:
In [90]: s
Out[90]:
2010-03-31 0
2010-06-30 1
2010-09-30 2
2010-12-31 3
2011-03-31 4
Freq: Q-DEC, dtype: int64
Previously
In [6]: s.resample('M', fill_method='ffill')
Out[6]:
2010-03-31 0
2010-04-30 0
2010-05-31 0
2010-06-30 1
2010-07-31 1
2010-08-31 1
2010-09-30 2
2010-10-31 2
2010-11-30 2
2010-12-31 3
2011-01-31 3
2011-02-28 3
2011-03-31 4
Freq: M, dtype: int64
New API
In [91]: s.resample('M').ffill()
Out[91]:
2010-03-31 0
2010-04-30 0
2010-05-31 0
2010-06-30 1
2010-07-31 1
2010-08-31 1
2010-09-30 2
2010-10-31 2
2010-11-30 2
2010-12-31 3
2011-01-31 3
2011-02-28 3
2011-03-31 4
Freq: M, dtype: int64
Note: In the new API, you can either downsample OR upsample. The prior implementation would allow you to pass
an aggregator function (like mean) even though you were upsampling, providing a bit of confusion.
Warning: This new API for resample includes some internal changes for the prior-to-0.18.0 API, to work with a
deprecation warning in most cases, as the resample operation returns a deferred object. We can intercept operations
and just do what the (pre 0.18.0) API did (with a warning). Here is a typical use case:
In [4]: r = df.resample('2s')
In [6]: r*10
pandas/tseries/resample.py:80: FutureWarning: .resample() is now a deferred
operation
Out[6]:
A B C D
2010-01-01 09:00:00 4.857476 4.473507 3.570960 7.936154
2010-01-01 09:00:02 8.208011 7.943173 3.640340 5.310957
2010-01-01 09:00:04 4.339846 3.145823 4.241039 6.257326
2010-01-01 09:00:06 6.249881 6.097384 6.331650 6.124518
2010-01-01 09:00:08 5.104699 5.343172 5.732009 8.069486
However, getting and assignment operations directly on a Resampler will raise a ValueError:
In [7]: r.iloc[0] = 5
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
There is a situation where the new API cannot perform all the operations when using original code. This code
intended to resample every 2s, take the mean AND then take the min of those results.
In [4]: df.resample('2s').min()
Out[4]:
A 0.433985
B 0.314582
C 0.357096
D 0.531096
dtype: float64
The good news is the return dimensions will differ between the new API and the old API, so this should loudly
raise an exception.
To replicate the original operation
In [93]: df.resample('2s').mean().min()
Out[93]:
A 0.433985
B 0.314582
C 0.357096
D 0.531096
dtype: float64
In prior versions, new columns assignments in an eval expression resulted in an inplace change to the DataFrame.
(GH9297, GH8664, GH10486)
In [94]: df = pd.DataFrame({'a': np.linspace(0, 10, 5), 'b': range(5)})
In [95]: df
Out[95]:
a b
0 0.0 0
1 2.5 1
2 5.0 2
3 7.5 3
4 10.0 4
In [12]: df.eval('c = a + b')
FutureWarning: eval expressions containing an assignment currently default to
operating inplace. This will change in a future version of pandas, use
inplace=True to avoid this warning.
In [13]: df
Out[13]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
In version 0.18.0, a new inplace keyword was added to choose whether the assignment should be done inplace or
return a copy.
In [96]: df
Out[96]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
In [97]: df.eval('d = c - b', inplace=False)
Out[97]:
a b c d
0 0.0 0 0.0 0.0
1 2.5 1 3.5 2.5
2 5.0 2 7.0 5.0
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
In [98]: df
Out[98]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
In [99]: df.eval('d = c - b', inplace=True)
In [100]: df
Out[100]:
a b c d
0 0.0 0 0.0 0.0
1 2.5 1 3.5 2.5
2 5.0 2 7.0 5.0
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
Warning: For backwards compatibility, inplace defaults to True if not specified. This will change in a
future version of pandas. If your code depends on an inplace assignment you should update to explicitly set
inplace=True
The inplace keyword parameter was also added to the query method.
In [102]: df.query('a > 5', inplace=True)
In [103]: df
Out[103]:
a b c d
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
Warning: Note that the default value for inplace in a query is False, which is consistent with prior versions.
eval has also been updated to allow multi-line expressions for multiple assignments. These expressions will be
evaluated one at a time in order. Only assignments are valid for multi-line expressions.
In [104]: df
Out[104]:
a b c d
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
In [105]: df.eval("""
.....: e = d + a
.....: f = e - 22
.....: g = f / 2.0""", inplace=True)
.....:
In [106]: df
Out[106]:
a b c d e f g
3 7.5 3 10.5 7.5 15.0 -7.0 -3.5
4 10.0 4 14.0 10.0 20.0 -2.0 -1.0
DataFrame.between_time and Series.between_time now only parse a fixed set of time strings.
Parsing of date strings is no longer supported and raises a ValueError. (GH11818)
.memory_usage() now includes values in the index, as does memory_usage in .info() (GH11597)
DataFrame.to_latex() now supports non-ascii encodings (e.g. utf-8) in Python 2 with the parameter
encoding (GH7061)
pandas.merge() and DataFrame.merge() will show a specific error message when trying to merge
with an object that is not of type DataFrame or a subclass (GH12081)
DataFrame.unstack and Series.unstack now take fill_value keyword to allow direct replace-
ment of missing values when an unstack results in missing values in the resulting DataFrame. As an added
benefit, specifying fill_value will preserve the data type of the original stacked data. (GH9746)
As part of the new API for window functions and resampling, aggregation functions have been clarified, raising
more informative error messages on invalid aggregations. (GH9052). A full set of examples are presented in
groupby.
Statistical functions for NDFrame objects (like sum(), mean(), min()) will now raise if non-numpy-
compatible arguments are passed in for **kwargs (GH12301)
.to_latex and .to_html gain a decimal parameter like .to_csv; the default is '.' (GH12031)
More helpful error message when constructing a DataFrame with empty data but with indices (GH8020)
.describe() will now properly handle bool dtype as a categorical (GH6625)
More helpful error message with an invalid .transform with user defined input (GH10165)
Exponentially weighted functions now allow specifying alpha directly (GH10789) and raise ValueError if
parameters violate 0 < alpha <= 1 (GH12492)
1.6.2.8 Deprecations
The functions pd.rolling_*, pd.expanding_*, and pd.ewm* are deprecated and replaced by the cor-
responding method call. Note that the new suggested syntax includes all of the arguments (even if default)
(GH11603)
In [1]: s = pd.Series(range(3))
In [2]: pd.rolling_mean(s,window=2,min_periods=1)
FutureWarning: pd.rolling_mean is deprecated for Series and
will be removed in a future version, replace with
Series.rolling(min_periods=1,window=2,center=False).mean()
Out[2]:
0 0.0
1 0.5
2 1.5
dtype: float64
The freq and how arguments to the .rolling, .expanding, and .ewm (new) functions are deprecated,
and will be removed in a future version. You can simply resample the input prior to creating a window function.
(GH11603).
For example, instead of s.rolling(window=5,freq='D').max() to get the max value on a rolling
5 Day window, one could use s.resample('D').mean().rolling(window=5).max(), which first
resamples the data to daily data, then provides a rolling 5 day window.
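A runnable sketch of that replacement (the series here is hypothetical):
import pandas as pd
import numpy as np
s = pd.Series(np.arange(10.0),
              index=pd.date_range('2016-01-01', periods=10, freq='12H'))
# deprecated: s.rolling(window=5, freq='D').max()
s.resample('D').mean().rolling(window=5).max()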
pd.tseries.frequencies.get_offset_name function is deprecated. Use an offset's .freqstr prop-
erty as an alternative (GH11192)
pandas.stats.fama_macbeth routines are deprecated and will be removed in a future version (GH6077)
pandas.stats.ols, pandas.stats.plm and pandas.stats.var routines are deprecated and will
be removed in a future version (GH6077)
show a FutureWarning rather than a DeprecationWarning on using long-time deprecated syntax in
HDFStore.select, where the where clause is not a string-like (GH12027)
The pandas.options.display.mpl_style configuration has been deprecated and will be removed in
a future version of pandas. This functionality is better handled by matplotlib's style sheets (GH11783).
In GH4892 indexing with floating point numbers on a non-Float64Index was deprecated (in version 0.14.0). In
0.18.0, this deprecation warning is removed and these will now raise a TypeError. (GH12165, GH12333)
In [109]: s = pd.Series([1, 2, 3], index=[4, 5, 6])
In [110]: s
Out[110]:
4 1
5 2
6 3
dtype: int64
In [111]: s2 = pd.Series([1, 2, 3], index=list('abc'))
In [112]: s2
Out[112]:
a 1
b 2
c 3
dtype: int64
Previous Behavior:
In [2]: s[5.0]
Out[2]: 2
In [3]: s.loc[5.0]
Out[3]: 2
In [4]: s.ix[5.0]
Out[4]: 2
In [5]: s2.ix[1.0] = 10
In [6]: s2
Out[6]:
a 1
b 10
c 3
dtype: int64
New Behavior:
For iloc, getting & setting via a float scalar will always raise.
In [3]: s.iloc[2.0]
TypeError: cannot do label indexing on <class 'pandas.indexes.numeric.Int64Index'>
with these indexers [2.0] of <type 'float'>
Other indexers will coerce to a like integer for both getting and setting. The FutureWarning has been dropped for
.loc, .ix and [].
In [113]: s[5.0]
Out[113]: 2
In [114]: s.loc[5.0]
\\\\\\\\\\\\Out[114]: 2
and setting
In [115]: s_copy = s.copy()
In [116]: s_copy[5.0] = 10
In [117]: s_copy
Out[117]:
4 1
5 10
6 3
dtype: int64
In [119]: s_copy.loc[5.0] = 10
In [120]: s_copy
Out[120]:
4 1
5 10
6 3
dtype: int64
Positional setting with .ix and a float indexer will ADD this value to the index, rather than previously setting the
value by position.
In [3]: s2.ix[1.0] = 10
In [4]: s2
Out[4]:
a 1
b 2
c 3
1.0 10
dtype: int64
Slicing will also coerce integer-like floats to integers for a non-Float64Index.
In [121]: s.loc[5.0:6]
Out[121]:
5 2
6 3
dtype: int64
Note that for floats that are NOT coercible to ints, the label based bounds will be excluded
In [122]: s.loc[5.1:6]
Out[122]:
6 3
dtype: int64
Float indexing on a Float64Index is unchanged.
In [123]: s = pd.Series([1, 2, 3], index=np.arange(3.))
In [124]: s[1.0]
Out[124]: 2
In [125]: s[1.0:2.5]
Out[125]:
1.0 2
2.0 3
dtype: int64
Bug in .to_csv ignoring formatting parameters decimal, na_rep, float_format for float indexes
(GH11553)
Bug in Int64Index and Float64Index preventing the use of the modulo operator (GH9244)
Bug in MultiIndex.drop for not lexsorted multi-indexes (GH12078)
Bug in DataFrame when masking an empty DataFrame (GH11859)
Bug in .plot potentially modifying the colors input when the number of columns didn't match the number
of series provided (GH12039).
Bug in Series.plot failing when index has a CustomBusinessDay frequency (GH7222).
Bug in .to_sql for datetime.time values with sqlite fallback (GH8341)
Bug in read_excel failing to read data with one column when squeeze=True (GH12157)
Bug in read_excel failing to read one empty column (GH12292, GH9002)
Bug in .groupby where a KeyError was not raised for a wrong column if there was only one row in the
dataframe (GH11741)
Bug in .read_csv with dtype specified on empty data producing an error (GH12048)
Bug in .read_csv where strings like '2E' are treated as valid floats (GH12237)
Bug in building pandas with debugging symbols (GH12123)
Removed millisecond property of DatetimeIndex. This would always raise a ValueError
(GH12019).
Bug in Series constructor with read-only data (GH11502)
Removed pandas.util.testing.choice(). Use np.random.choice() instead.
(GH12386)
Bug in .loc setitem indexer preventing the use of a TZ-aware DatetimeIndex (GH12050)
Bug in .style indexes and multi-indexes not appearing (GH11655)
Bug in to_msgpack and from_msgpack which did not correctly serialize or deserialize NaT (GH12307).
Bug in .skew and .kurt due to roundoff error for highly similar values (GH11974)
Bug in Timestamp constructor where microsecond resolution was lost if HHMMSS were not separated with
: (GH10041)
Bug in buffer_rd_bytes src->buffer could be freed more than once if reading failed, causing a segfault
(GH12098)
Bug in crosstab where arguments with non-overlapping indexes would return a KeyError (GH10291)
Bug in DataFrame.apply in which reduction was not being prevented for cases in which dtype was not a
numpy dtype (GH12244)
Bug when initializing categorical series with a scalar value. (GH12336)
Bug when specifying a UTC DatetimeIndex by setting utc=True in .to_datetime (GH11934)
Bug when increasing the buffer size of CSV reader in read_csv (GH12494)
Bug when setting columns of a DataFrame with duplicate column names (GH12344)
1.7 v0.17.1 (November 21, 2015)
Note: We are proud to announce that pandas has become a sponsored project of the NumFOCUS organization. This
will help ensure the success of development of pandas as a world-class open-source project.
This is a minor bug-fix release from 0.17.0 and includes a large number of bug fixes along with several new features,
enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
Support for Conditional HTML Formatting, see here
Releasing the GIL on the csv reader & other ops, see here
Fixed regression in DataFrame.drop_duplicates from 0.16.2, causing incorrect results on integer values
(GH11376)
New features
Conditional HTML Formatting
Enhancements
API changes
Deprecations
Performance Improvements
Bug Fixes
Warning: This is a new feature and is under active development. We'll be adding features and possibly making
breaking changes in future releases. Feedback is welcome.
We've added experimental support for conditional HTML formatting: the visual styling of a DataFrame based on the
data. The styling is accomplished with HTML and CSS. Access the styler class with the pandas.DataFrame.
style attribute, an instance of Styler with your data attached.
Here's a quick example:
In [1]: np.random.seed(123)
Styler interacts nicely with the Jupyter Notebook. See the documentation for more.
1.7.2 Enhancements
Series of type category now make .str.<...> and .dt.<...> accessor methods / properties available,
if the categories are of that type. (GH10661)
In [9]: s = pd.Series(list('aabb')).astype('category')
In [10]: s
Out[10]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): [a, b]
In [11]: s.str.contains("a")
Out[11]:
0 True
1 True
2 False
3 False
dtype: bool
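(The input constructing date was lost in extraction; it was presumably along these lines:)

In [12]: date = pd.Series(pd.date_range('2015-01-01', periods=5)).astype('category')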
In [13]: date
Out[13]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04,
2015-01-05]
In [14]: date.dt.day
Out[14]:
0 1
1 2
2 3
3 4
4 5
dtype: int64
pivot_table now has a margins_name argument so you can use something other than the default of All
(GH3335)
Implement export of datetime64[ns, tz] dtypes with a fixed HDF5 store (GH11411)
Pretty printing sets (e.g. in DataFrame cells) now uses set literal syntax ({x, y}) instead of Legacy Python
syntax (set([x, y])) (GH11215)
Improve the error message in pandas.io.gbq.to_gbq() when a streaming insert fails (GH11285) and
when the DataFrame does not match the schema of the destination table (GH11359)
1.7.3.1 Deprecations
The pandas.io.ga module which implements google-analytics support is deprecated and will be
removed in a future version (GH11308)
Deprecate the engine keyword in .to_csv(), which will be removed in a future version (GH11274)
Prevent adding new attributes to the accessors .str, .dt and .cat. Retrieving such a value was not possible,
so error out on setting it. (GH10673)
Bug in tz-conversions with an ambiguous time and .dt accessors (GH11295)
Bug in output formatting when using an index of ambiguous times (GH11619)
Bug in comparisons of Series vs list-likes (GH11339)
Bug in DataFrame.replace with a datetime64[ns, tz] and a non-compat to_replace (GH11326,
GH11153)
Bug in isnull where numpy.datetime64('NaT') in a numpy.array was not determined to be
null (GH11206)
Bug in list-like indexing with a mixed-integer Index (GH11320)
Bug in pivot_table with margins=True when indexes are of Categorical dtype (GH10993)
Bug in DataFrame.plot cannot use hex strings colors (GH10299)
Regression in DataFrame.drop_duplicates from 0.16.2, causing incorrect results on integer values
(GH11376)
Bug in pd.eval where unary ops in a list raised an error (GH11235)
Bug in squeeze() with zero length arrays (GH11230, GH8999)
Bug in describe() dropping column names for hierarchical indexes (GH11517)
Bug in DataFrame.pct_change() not propagating axis keyword on .fillna method (GH11150)
Bug in .to_csv() when a mix of integer and string column names are passed as the columns parameter
(GH11637)
Bug in indexing with a range (GH11652)
Bug in inference of numpy scalars and preserving dtype when setting columns (GH11638)
Bug in to_sql using unicode column names giving UnicodeEncodeError (GH11431).
Fix regression in setting of xticks in plot (GH11529).
Bug in holiday.dates where observance rules could not be applied to a holiday, plus a documentation enhancement
(GH11477, GH11533)
Fix plotting issues when having plain Axes instances instead of SubplotAxes (GH11520, GH11556).
Bug in DataFrame.to_latex() produces an extra rule when header=False (GH7124)
Bug in df.groupby(...).apply(func) when a func returns a Series containing a new datetimelike
column (GH11324)
Bug in pandas.json when the file to load is big (GH11344)
Bugs in to_excel with duplicate columns (GH11007, GH10982, GH10970)
Fixed a bug that prevented the construction of an empty series of dtype datetime64[ns, tz] (GH11245).
Bug in read_excel with multi-index containing integers (GH11317)
Bug in to_excel with openpyxl 2.2+ and merging (GH11408)
Bug in DataFrame.to_dict() produces a np.datetime64 object instead of Timestamp when only
datetime is present in data (GH11327)
Bug in DataFrame.corr() raising an exception when computing the Kendall correlation for DataFrames with
boolean and non-boolean columns (GH11560)
Bug in the link-time error caused by C inline functions on FreeBSD 10+ (with clang) (GH10510)
Bug in DataFrame.to_csv in passing through arguments for formatting MultiIndexes, including
date_format (GH7791)
Bug in DataFrame.join() with how='right' producing a TypeError (GH11519)
Bug in Series.quantile with an empty list resulting in an Index with object dtype (GH11588)
Bug in pd.merge results in empty Int64Index rather than Index(dtype=object) when the merge
result is empty (GH11588)
Bug in Categorical.remove_unused_categories when having NaN values (GH11599)
Bug in DataFrame.to_sparse() loses column names for MultiIndexes (GH11600)
Bug in DataFrame.round() with non-unique column index producing a Fatal Python error (GH11611)
Bug in DataFrame.round() with decimals being a non-unique indexed Series producing extra columns
(GH11618)
This is a major release from 0.16.2 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.17.0 will no longer support compatibility with Python version 3.2 (GH9118)
Warning: The pandas.io.data package is deprecated and will be replaced by the pandas-datareader package.
This will allow the data modules to be updated independently of your pandas installation. The API for
pandas-datareader v0.1.1 is exactly the same as in pandas v0.17.0 (GH8961, GH10861).
After installing pandas-datareader, you can easily change your imports:
from pandas.io import data, wb
becomes
from pandas_datareader import data, wb
Highlights include:
Release the Global Interpreter Lock (GIL) on some cython operations, see here
Plotting methods are now available as attributes of the .plot accessor, see here
The sorting API has been revamped to remove some long-time inconsistencies, see here
Support for a datetime64[ns] with timezones as a first-class dtype, see here
The default for to_datetime will now be to raise when presented with unparseable formats, previously
this would return the original input. Also, date parse functions now return consistent results. See here
The default for dropna in HDFStore has changed to False, to store by default all rows even if they are all
NaN, see here
Datetime accessor (dt) now supports Series.dt.strftime to generate formatted strings for datetimelikes,
and Series.dt.total_seconds to generate the duration of each timedelta in seconds. See here
Period and PeriodIndex can handle multiplied freq like 3D, which corresponds to a span of 3 days. See here
Development installed versions of pandas will now have PEP440 compliant version strings (GH9518)
Development support for benchmarking with the Air Speed Velocity library (GH8361)
Support for reading SAS xport files, see here
Documentation comparing SAS to pandas, see here
Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see here
Display format with plain text can optionally align with Unicode East Asian Width, see here
Compatibility with Python 3.5 (GH11097)
Compatibility with matplotlib 1.5.0 (GH11111)
Check the API Changes and deprecations before updating.
New features
Datetime with TZ
Releasing the GIL
Plot submethods
Additional methods for dt accessor
* strftime
* total_seconds
Period Frequency Enhancement
Support for SAS XPORT files
Support for Math Functions in .eval()
Changes to Excel with MultiIndex
Google BigQuery Enhancements
Display Alignment with Unicode East Asian Width
Other enhancements
Backwards incompatible API changes
Changes to sorting API
Changes to to_datetime and to_timedelta
* Error handling
* Consistent Parsing
Changes to Index Comparisons
Changes to Boolean Comparisons vs. None
HDFStore dropna behavior
We are adding an implementation that natively supports datetime with timezones. A Series or a DataFrame
column previously could be assigned a datetime with timezones, and would work as an object dtype. This had
performance issues with a large number of rows. See the docs for more details. (GH8260, GH10763, GH11034).
The new implementation allows for a single timezone across all rows, with operations performed in a performant manner.
In [2]: df
Out[2]:
A B C
0 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00+01:00
1 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-02 00:00:00+01:00
2 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-03 00:00:00+01:00
In [3]: df.dtypes
Out[3]:
A datetime64[ns]
B datetime64[ns, US/Eastern]
C datetime64[ns, CET]
dtype: object
In [4]: df.B
Out[4]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
Name: B, dtype: datetime64[ns, US/Eastern]
In [5]: df.B.dt.tz_localize(None)
Out[5]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
Name: B, dtype: datetime64[ns]
This uses a new dtype representation as well, which is very similar in look-and-feel to its numpy cousin
datetime64[ns]
In [6]: df['B'].dtype
Out[6]: datetime64[ns, US/Eastern]
In [7]: type(df['B'].dtype)
Out[7]: pandas.core.dtypes.dtypes.DatetimeTZDtype
Note: There is a slightly different string repr for the underlying DatetimeIndex as a result of the dtype changes,
but functionally these are the same.
Previous Behavior:
In [1]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[1]: DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
'2013-01-03 00:00:00-05:00'],
dtype='datetime64[ns]', freq='D', tz='US/Eastern')
In [2]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[2]: dtype('<M8[ns]')
New Behavior:
In [8]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[8]:
DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
'2013-01-03 00:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='D')
In [9]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[9]: datetime64[ns, US/Eastern]
We are releasing the global-interpreter-lock (GIL) on some cython operations. This will allow other threads to run
simultaneously during computation, potentially allowing performance improvements from multi-threading. Notably
groupby, nsmallest, value_counts and some indexing operations benefit from this. (GH8882)
For example the groupby expression in the following code will have the GIL released during the factorization step,
e.g. df.groupby('key') as well as the .sum() operation.
import numpy as np
import pandas as pd

N = 1000000
ngroups = 10
df = pd.DataFrame({'key': np.random.randint(0, ngroups, size=N),
                   'data': np.random.randn(N)})
df.groupby('key')['data'].sum()
Releasing the GIL could benefit an application that uses threads for user interactions (e.g. Qt), or that performs
multi-threaded computations. A nice example of a library that can handle these types of computation-in-parallel is the
dask library.
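As a rough illustration (not part of the original text), the released GIL means several groupby aggregations can make
progress concurrently when driven from multiple threads:

import numpy as np
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

# four independent frames to aggregate in parallel
dfs = [pd.DataFrame({'key': np.random.randint(0, 10, size=1000000),
                     'data': np.random.randn(1000000)}) for _ in range(4)]

def agg(frame):
    # the GIL is released inside the cython groupby code,
    # so these calls can overlap in time
    return frame.groupby('key')['data'].sum()

with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(agg, dfs))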
The Series and DataFrame .plot() method allows for customizing plot types by supplying the kind keyword
argument. Unfortunately, many of these kinds of plots use different required and optional keyword arguments, which
makes it difficult to discover what any given plot kind uses out of the dozens of possible arguments.
To alleviate this issue, we have added a new, optional plotting interface, which exposes each kind of plot as a method of
the .plot attribute. Instead of writing series.plot(kind=<kind>, ...), you can now also use
series.plot.<kind>(...):
In [11]: df.plot.bar()
As a result of this change, these methods are now all discoverable via tab-completion:
In [12]: df.plot.<TAB>
df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line
df.plot.scatter
Each method signature only includes relevant arguments. Currently, these are limited to required arguments, but in the
future these will include optional arguments, as well. For an overview, see the new Plotting API documentation.
strftime
We are now supporting a Series.dt.strftime method for datetime-likes to generate a formatted string
(GH10110). Examples:
# DatetimeIndex
In [13]: s = pd.Series(pd.date_range('20130101', periods=4))
In [14]: s
Out[14]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [15]: s.dt.strftime('%Y/%m/%d')
Out[15]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# PeriodIndex
In [16]: s = pd.Series(pd.period_range('20130101', periods=4))
In [17]: s
Out[17]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [18]: s.dt.strftime('%Y/%m/%d')
Out[18]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
The string format follows the Python standard library; details can be found here
total_seconds
pd.Series of type timedelta64 has a new method .dt.total_seconds() returning the duration of the
timedelta in seconds (GH10817)
# TimedeltaIndex
In [19]: s = pd.Series(pd.timedelta_range('1 minutes', periods=4))
In [20]: s
Out[20]:
0 0 days 00:01:00
1 1 days 00:01:00
2 2 days 00:01:00
3 3 days 00:01:00
dtype: timedelta64[ns]
In [21]: s.dt.total_seconds()
Out[21]:
0 60.0
1 86460.0
2 172860.0
3 259260.0
dtype: float64
Period, PeriodIndex and period_range can now accept multiplied freq. Also, Period.freq and
PeriodIndex.freq are now stored as a DateOffset instance like DatetimeIndex, and not as str
(GH7811)
A multiplied freq represents a span of corresponding length. The example below creates a period of 3 days. Addition
and subtraction will shift the period by its span.
In [22]: p = pd.Period('2015-08-01', freq='3D')

In [23]: p
Out[23]: Period('2015-08-01', '3D')
In [24]: p + 1
Out[24]: Period('2015-08-04', '3D')
In [25]: p - 2
Out[25]: Period('2015-07-26', '3D')
In [26]: p.to_timestamp()
Out[26]: Timestamp('2015-08-01 00:00:00')
In [27]: p.to_timestamp(how='E')
Out[27]: Timestamp('2015-08-03 00:00:00')
In [28]: idx = pd.period_range('2015-08-01', periods=4, freq='2D')

In [29]: idx
Out[29]: PeriodIndex(['2015-08-01', '2015-08-03', '2015-08-05', '2015-08-07'],
                     dtype='period[2D]', freq='2D')
In [30]: idx + 1
Out[30]:
PeriodIndex(['2015-08-03', '2015-08-05', '2015-08-07', '2015-08-09'],
            dtype='period[2D]', freq='2D')
read_sas() provides support for reading SAS XPORT format files. (GH4052).
df = pd.read_sas('sas_xport.xpt')
The supported math functions are sin, cos, exp, log, expm1, log1p, sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh,
arcsinh, arctanh, abs and arctan2.
These functions map to the intrinsics for the NumExpr engine. For the Python engine, they are mapped to NumPy
calls.
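For instance (an illustrative sketch, not from the original text):

import numpy as np
import pandas as pd

a = np.random.randn(10)

# dispatched to numexpr intrinsics when available, otherwise to numpy calls
result = pd.eval('sin(a) + sqrt(abs(a))')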
In version 0.16.2 a DataFrame with MultiIndex columns could not be written to Excel via to_excel. That
functionality has been added (GH10564), along with updating read_excel so that the data can be read back with
no loss of information by specifying which columns/rows make up the MultiIndex in the header and index_col
parameters (GH4679)
See the documentation for more details.
In [31]: df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
   ....:                   columns=pd.MultiIndex.from_product(
   ....:                       [['foo', 'bar'], ['a', 'b']], names=['col1', 'col2']),
   ....:                   index=pd.MultiIndex.from_product(
   ....:                       [['j'], ['l', 'k']], names=['i1', 'i2']))
In [32]: df
Out[32]:
col1 foo bar
col2 a b a b
i1 i2
j l 1 2 3 4
k 5 6 7 8
In [33]: df.to_excel('test.xlsx')

In [34]: df = pd.read_excel('test.xlsx', header=[0, 1], index_col=[0, 1])

In [35]: df
Out[35]:
col1 foo bar
col2 a b a b
i1 i2
j l 1 2 3 4
k 5 6 7 8
Previously, it was necessary to specify the has_index_names argument in read_excel if the serialized data
had index names. For version 0.17.0 the output format of to_excel has been changed to make this keyword
unnecessary - the change is shown below.
Old
New
Warning: Excel files saved in version 0.16.2 or prior that had index names will still be able to be read in, but the
has_index_names argument must be specified as True.
Added ability to automatically create a table/dataset using the pandas.io.gbq.to_gbq() function if the
destination table/dataset does not exist. (GH8325, GH11121).
Added ability to replace an existing table and schema when calling the pandas.io.gbq.to_gbq() func-
tion via the if_exists argument. See the docs for more details (GH8325).
InvalidColumnOrder and InvalidPageToken in the gbq module will raise ValueError instead of
IOError.
The generate_bq_schema() function is now deprecated and will be removed in a future version
(GH11121)
The gbq module will now support Python 3 (GH11094).
Warning: Enabling this option will affect the performance of printing DataFrame and Series (about 2
times slower). Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame or
Series contains these characters, the default output cannot be aligned properly. The following options are added to
enable precise handling of these characters.
display.unicode.east_asian_width: Whether to use the Unicode East Asian Width to calculate the
display text width. (GH2612)
display.unicode.ambiguous_as_wide: Whether to handle Unicode characters belonging to Ambiguous
as Wide. (GH11102)
In [36]: df = pd.DataFrame({u'国籍': ['UK', u'日本'], u'名前': ['Alice', u'しのぶ']})

In [37]: df;

In [38]: pd.set_option('display.unicode.east_asian_width', True)

In [39]: df;
Support for openpyxl >= 2.2. The API for style support is now stable (GH10125)
merge now accepts the argument indicator which adds a Categorical-type column (by default called
_merge) to the output object that takes on the values shown below (GH8790):

Observation Origin                 _merge value
Merge key only in 'left' frame     left_only
Merge key only in 'right' frame    right_only
Merge key in both frames           both
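A minimal sketch (with hypothetical frames):

left = pd.DataFrame({'col1': [0, 1], 'col_left': ['a', 'b']})
right = pd.DataFrame({'col1': [1, 2, 2], 'col_right': [2, 2, 2]})

# the extra _merge column marks each row as left_only, right_only or both
pd.merge(left, right, on='col1', how='outer', indicator=True)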
Added a DataFrame.round method to round the values to a variable number of decimal places (GH10568).
In [49]: df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'],
   ....:                   index=['first', 'second', 'third'])
   ....:
In [50]: df
Out[50]:
A B C
first 0.342764 0.304121 0.417022
second 0.681301 0.875457 0.510422
third 0.669314 0.585937 0.624904
In [51]: df.round(2)
Out[51]:
A B C
first 0.34 0.30 0.42
second 0.68 0.88 0.51
third 0.67 0.59 0.62
In [52]: df.round({'A': 0, 'C': 2})
Out[52]:
          A         B     C
first   0.0  0.304121  0.42
second  1.0  0.875457  0.51
third   1.0  0.585937  0.62
drop_duplicates and duplicated now accept a keep keyword to target first, last, and all duplicates.
The take_last keyword is deprecated, see here (GH6511, GH8505)
In [53]: s = pd.Series(['A', 'B', 'C', 'A', 'B', 'D'])
In [54]: s.drop_duplicates()
Out[54]:
0 A
1 B
2 C
5 D
dtype: object
In [55]: s.drop_duplicates(keep='last')
Out[55]:
2 C
3 A
4 B
5 D
dtype: object
In [56]: s.drop_duplicates(keep=False)
Out[56]:
2 C
5 D
dtype: object
Reindex now has a tolerance argument that allows for finer control of limits on filling while reindexing
(GH10411):
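(The construction of df was lost in extraction; it was presumably something along these lines:)

In [58]: df = pd.DataFrame({'t': pd.date_range('2000-01-01', periods=5),
   ....:                    'x': range(5)})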
In [59]: df = df.set_index('t')
In [60]: df.reindex(pd.to_datetime(['1999-12-31']),
....: method='nearest',
....: tolerance='1 day')
....:
Out[60]:
x
1999-12-31 0
tolerance is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
Added functionality to use the base argument when resampling a TimeDeltaIndex (GH10530)
DatetimeIndex can be instantiated using strings contains NaT (GH7599)
to_datetime can now accept the yearfirst keyword (GH7599)
pandas.tseries.offsets larger than the Day offset can now be used with a Series for
addition/subtraction (GH10699). See the docs for more details.
pd.Timedelta.total_seconds() now returns Timedelta duration to ns precision (previously microsecond
precision) (GH10939)
PeriodIndex now supports arithmetic with np.ndarray (GH10638)
Support pickling of Period objects (GH10439)
.as_blocks will now take an optional copy argument to return a copy of the data; the default is to copy (no
change in behavior from prior versions), (GH9607)
regex argument to DataFrame.filter now handles numeric column names instead of raising
ValueError (GH10384).
Enable reading gzip compressed files via URL, either by explicitly setting the compression parameter or by
inferring from the presence of the HTTP Content-Encoding header in the response (GH8685)
Enable writing Excel files in memory using StringIO/BytesIO (GH7074)
Enable serialization of lists and dicts to strings in ExcelWriter (GH8188)
The sorting API has had some longtime inconsistencies. (GH9816, GH8239).
Here is a summary of the API PRIOR to 0.17.0:
Series.sort is INPLACE while DataFrame.sort returns a new object.
Series.order returns a new object
It was possible to use Series/DataFrame.sort_index to sort by values by passing the by keyword.
Series/DataFrame.sortlevel worked only on a MultiIndex for sorting by index.
To address these issues, we have revamped the API:
We have introduced a new method, DataFrame.sort_values(), which is the merger of
DataFrame.sort(), Series.sort(), and Series.order(), to handle sorting of values.
The existing methods Series.sort(), Series.order(), and DataFrame.sort() have been deprecated
and will be removed in a future version.
The by argument of DataFrame.sort_index() has been deprecated and will be removed in a future
version.
The existing method .sort_index() will gain the level keyword to enable level sorting.
We now have two distinct and non-overlapping methods of sorting. A * marks items that will show a
FutureWarning.
To sort by the values:
Previous                           Replacement
* Series.order()                   Series.sort_values()
* Series.sort()                    Series.sort_values(inplace=True)
* DataFrame.sort(columns=...)      DataFrame.sort_values(by=...)
To sort by the index:
Previous                           Replacement
Series.sort_index()                Series.sort_index()
Series.sortlevel(level=...)        Series.sort_index(level=...)
DataFrame.sort_index()             DataFrame.sort_index()
DataFrame.sortlevel(level=...)     DataFrame.sort_index(level=...)
* DataFrame.sort()                 DataFrame.sort_index()
We have also deprecated and changed similar methods in two Series-like classes, Index and Categorical.
Previous                           Replacement
* Index.order()                    Index.sort_values()
* Categorical.order()              Categorical.sort_values()
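For example (an illustrative sketch, not from the original text):

s = pd.Series([2, 1, 3])
s.sort_values()                   # previously s.order()
s.sort_values(inplace=True)       # previously s.sort()

df = pd.DataFrame({'A': [2, 1], 'B': ['y', 'x']})
df.sort_values(by='A')            # previously df.sort(columns='A')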
Error handling
The default for pd.to_datetime error handling has changed to errors='raise'. In prior versions it was
errors='ignore'. Furthermore, the coerce argument has been deprecated in favor of errors='coerce'.
This means that invalid parsing will raise rather than return the original input as in previous versions. (GH10636)
Previous Behavior:
In [2]: pd.to_datetime(['2009-07-31', 'asd'])
Out[2]: array(['2009-07-31', 'asd'], dtype=object)
New Behavior:
In [3]: pd.to_datetime(['2009-07-31', 'asd'])
ValueError: Unknown string format
Invalid entries can still be coerced to NaT with errors='coerce':
In [4]: pd.to_datetime(['2009-07-31', 'asd'], errors='coerce')
Out[4]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Consistent Parsing
The string parsing of to_datetime, Timestamp and DatetimeIndex has been made consistent. (GH7599)
Prior to v0.17.0, Timestamp and to_datetime could parse a year-only datetime string incorrectly using today's
date, whereas DatetimeIndex uses the beginning of the year. Timestamp and to_datetime could also raise
ValueError on some types of datetime strings which DatetimeIndex can parse, such as a quarterly string.
Previous Behavior:
In [1]: Timestamp('2012Q2')
Traceback
...
ValueError: Unable to parse 2012Q2
In [64]: Timestamp('2014')
Out[64]: Timestamp('2014-01-01 00:00:00')
Note: If you want to perform calculations based on today's date, use Timestamp.now() and
pandas.tseries.offsets.
In [66]: import pandas.tseries.offsets as offsets
In [67]: Timestamp.now()
Out[67]: Timestamp('2017-05-05 12:19:58.258672')
New Behavior:
In [68]: Timestamp('2012Q2')
Out[68]: Timestamp('2012-04-01 00:00:00')
In [69]: Timestamp('2014')
Out[69]: Timestamp('2014-01-01 00:00:00')
Boolean comparisons of a Series vs None will now be equivalent to comparing with np.nan, rather than raise
TypeError. (GH1079).
In [71]: s = Series(range(3))

In [72]: s.loc[1] = None

In [73]: s
Out[73]:
0 0.0
1 NaN
2 2.0
dtype: float64
Previous Behavior:
In [5]: s==None
TypeError: Could not compare <type 'NoneType'> type with Series
New Behavior:
In [74]: s==None
Out[74]:
0 False
1 False
2 False
dtype: bool
In [75]: s.isnull()
Out[75]:
0 False
1 True
2 False
dtype: bool
Warning: You generally will want to use isnull/notnull for these types of comparisons, as isnull/
notnull tells you which elements are null. One has to be mindful that nan's don't compare equal, but None's
do. Note that pandas/numpy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [76]: None == None
Out[76]: True
The default behavior for HDFStore write functions with format='table' is now to keep rows that are all missing.
Previously, the behavior was to drop rows that were all missing save the index. The previous behavior can be replicated
using the dropna=True option. (GH9382)
Previous Behavior:
In [79]: df_with_missing
Out[79]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [27]: df_with_missing.to_hdf('file.h5', 'df_with_missing',
   ....:                        format='table', mode='w')

In [28]: pd.read_hdf('file.h5', 'df_with_missing')
Out[28]:
   col1  col2
0   0.0   1.0
2   2.0   NaN
New Behavior:
In [80]: df_with_missing.to_hdf('file.h5',
....: 'df_with_missing',
....: format='table',
....: mode='w')
....:
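Reading the file back now returns all rows; the read-back that followed in the original was lost, but it was presumably:

In [81]: pd.read_hdf('file.h5', 'df_with_missing')
Out[81]:
   col1  col2
0   0.0   1.0
1   NaN   NaN
2   2.0   NaN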
The display.precision option has been clarified to refer to decimal places (GH10451).
Earlier versions of pandas would format floating point numbers to have one less decimal place than the value in
display.precision.
In [1]: pd.set_option('display.precision', 2)
If interpreting precision as significant figures, this did work for scientific notation, but the same interpretation did not
work for values with standard formatting. It was also out of step with how numpy handles formatting.
Going forward the value of display.precision will directly control the number of places after the decimal, for
regular formatting as well as scientific notation, similar to how numpy's precision print option works.
In [82]: pd.set_option('display.precision', 2)
To preserve output behavior with prior versions the default value of display.precision has been reduced to 6
from 7.
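As a quick illustration (not from the original text):

In [83]: pd.Series([3.14159, 2.71828])
Out[83]:
0    3.14
1    2.72
dtype: float64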
Categorical.unique now returns new Categoricals with categories and codes that are unique, rather
than returning np.array (GH10508)
unordered category: values and categories are sorted by appearance order.
ordered category: values are sorted by appearance order, categories keep existing order.
In [84]: cat = pd.Categorical(['C', 'A', 'B', 'C'],
   ....:                      categories=['A', 'B', 'C'], ordered=True)

In [85]: cat
Out[85]:
[C, A, B, C]
Categories (3, object): [A < B < C]
In [86]: cat.unique()
Out[86]:
[C, A, B]
Categories (3, object): [A < B < C]
In [87]: cat = pd.Categorical(['C', 'A', 'B', 'C'],
   ....:                      categories=['A', 'B', 'C'])

In [88]: cat
Out[88]:
[C, A, B, C]
Categories (3, object): [A, B, C]
In [89]: cat.unique()
Out[89]:
[C, A, B]
Categories (3, object): [C, A, B]
In earlier versions of pandas, if a bool was passed to the header argument of read_csv, read_excel, or
read_html it was implicitly converted to an integer, resulting in header=0 for False and header=1 for True
(GH6113)
A bool input to header will now raise a TypeError
Line and kde plot with subplots=True now uses default colors, not all black. Specify color='k' to draw
all lines in black (GH9894)
Calling the .value_counts() method on a Series with a categorical dtype now returns a Series with a
CategoricalIndex (GH10704)
The metadata properties of subclasses of pandas objects will now be serialized (GH10553).
groupby using Categorical follows the same rule as Categorical.unique described above
(GH10508)
When constructing a DataFrame with an array of complex64 dtype, previously the corresponding column
was automatically promoted to the complex128 dtype. Pandas will now preserve the itemsize of the
input for all float/complex dtypes.
1.8.2.10 Deprecations
Note: These indexing functions have been deprecated in the documentation since 0.11.0.
WidePanel deprecated in favor of Panel, LongPanel in favor of DataFrame (note these have been
aliases since < 0.11.0), (GH10892)
DataFrame.convert_objects has been deprecated in favor of the type-specific functions
pd.to_datetime, pd.to_timedelta and pd.to_numeric (new in 0.17.0) (GH11133).
In [90]: np.random.seed(1234)
In [91]: df = DataFrame(np.random.randn(5, 2), columns=list('AB'),
   ....:                index=date_range('20130101', periods=5))
In [92]: df
Out[92]:
A B
2013-01-01 0.471435 -1.190976
2013-01-02 1.432707 -0.312652
2013-01-03 -0.720589 0.887163
2013-01-04 0.859588 -0.636524
2013-01-05 0.015696 -2.242685
Previously
In [3]: df + df.A
FutureWarning: TimeSeries broadcasting along DataFrame index by default is
deprecated.
Out[3]:
A B
2013-01-01 0.942870 -0.719541
2013-01-02 2.865414 1.120055
2013-01-03 -1.441177 0.166574
2013-01-04 1.719177 0.223065
2013-01-05 0.031393 -2.226989
Current
In [93]: df.add(df.A,axis='index')
Out[93]:
A B
2013-01-01 0.942870 -0.719541
2013-01-02 2.865414 1.120055
2013-01-03 -1.441177 0.166574
2013-01-04 1.719177 0.223065
2013-01-05 0.031393 -2.226989
Development support for benchmarking with the Air Speed Velocity library (GH8361)
Added vbench benchmarks for alternative ExcelWriter engines and reading Excel files (GH7171)
Performance improvements in Categorical.value_counts (GH10804)
Performance improvements in SeriesGroupBy.nunique and SeriesGroupBy.value_counts and
SeriesGroupby.transform (GH10820, GH11077)
Performance improvements in DataFrame.drop_duplicates with integer dtypes (GH10917)
Performance improvements in DataFrame.duplicated with wide frames. (GH10161, GH11180)
4x improvement in timedelta string parsing (GH6755, GH10426)
8x improvement in timedelta64 and datetime64 ops (GH6755)
Significantly improved performance of indexing MultiIndex with slicers (GH10287)
8x improvement in iloc using list-like input (GH10791)
Improved performance of Series.isin for datetimelike/integer Series (GH10287)
20x improvement in concat of Categoricals when categories are identical (GH10587)
Improved performance of to_datetime when specified format string is ISO8601 (GH10178)
2x improvement of Series.value_counts for float dtype (GH10821)
Enable infer_datetime_format in to_datetime when date components do not have 0 padding
(GH11142)
Regression from 0.16.1 in constructing DataFrame from nested dictionary (GH11084)
Performance improvements in addition/subtraction operations for DateOffset with Series or
DatetimeIndex (GH10744, GH11205)
Reading famafrench data via DataReader results in an HTTP 404 error because the website url changed
(GH10591).
Bug in read_msgpack where DataFrame to decode has duplicate column names (GH9618)
Bug in io.common.get_filepath_or_buffer which caused reading of valid S3 files to fail if the
bucket also contained keys for which the user does not have read permission (GH10604)
Bug in vectorised setting of timestamp columns with python datetime.date and numpy datetime64
(GH10408, GH10412)
Bug in Index.take may add unnecessary freq attribute (GH10791)
Bug in merge with empty DataFrame may raise IndexError (GH10824)
Bug in to_latex where some documented arguments raised an unexpected keyword argument error (GH10888)
Bug in indexing of large DataFrame where IndexError is uncaught (GH10645 and GH10692)
Bug in read_csv when using the nrows or chunksize parameters if file contains only a header line
(GH9535)
Bug in serialization of category types in HDF5 in presence of alternate encodings. (GH10366)
Bug in pd.DataFrame when constructing an empty DataFrame with a string dtype (GH9428)
Bug in pd.DataFrame.diff when DataFrame is not consolidated (GH10907)
Bug in pd.unique for arrays with the datetime64 or timedelta64 dtype that meant an array with object
dtype was returned instead of the original dtype (GH9431)
Bug in Timedelta raising error when slicing from 0s (GH10583)
Bug in DatetimeIndex.take and TimedeltaIndex.take may not raise IndexError against invalid
index (GH10295)
Bug in Series([np.nan]).astype('M8[ms]'), which now returns Series([pd.NaT])
(GH10747)
Bug in PeriodIndex.order resetting freq (GH10295)
Bug in date_range when freq divides end as nanos (GH10885)
Bug in iloc allowing memory outside bounds of a Series to be accessed with negative integers (GH10779)
Bug in read_msgpack where encoding is not respected (GH10581)
Bug preventing access to the first index when using iloc with a list containing the appropriate negative integer
(GH10547, GH10779)
Bug in TimedeltaIndex formatter causing error while trying to save DataFrame with
TimedeltaIndex using to_csv (GH10833)
Bug in DataFrame.where when handling Series slicing (GH10218, GH9558)
Bug where pd.read_gbq throws ValueError when Bigquery returns zero rows (GH10273)
Bug in to_json which was causing segmentation fault when serializing 0-rank ndarray (GH9576)
Bug in plotting functions may raise IndexError when plotted on GridSpec (GH10819)
Bug in plot result may show unnecessary minor ticklabels (GH10657)
Bug in groupby causing incorrect computation for aggregation on a DataFrame with NaT (e.g. first, last, min).
(GH10590, GH11010)
Bug when constructing DataFrame where passing a dictionary with only scalar values and specifying columns
did not raise an error (GH10856)
Bug in .var() causing roundoff errors for highly similar values (GH10242)
Bug in DataFrame.plot(subplots=True) with duplicated columns outputs incorrect result (GH10962)
Bug in Index arithmetic may result in incorrect class (GH10638)
Bug in date_range resulting in an empty index if freq is a negative annual, quarterly, or monthly frequency (GH11018)
Bug in DatetimeIndex that cannot infer a negative freq (GH11018)
Remove use of some deprecated numpy comparison operations, mainly in tests. (GH10569)
Bug in Index where the dtype may not be applied properly (GH11017)
Bug in io.gbq when testing for minimum google api client version (GH10652)
Bug in DataFrame construction from nested dict with timedelta keys (GH11129)
Bug in .fillna that may raise TypeError when data contains datetime dtype (GH7095, GH11153)
Bug in .groupby when number of keys to group by is same as length of index (GH11185)
Bug in convert_objects where converted values might not be returned if all null and coerce (GH9589)
Bug in convert_objects where copy keyword was not respected (GH9589)
This is a minor bug-fix release from 0.16.1 and includes a large number of bug fixes along with some new features
(the pipe() method), enhancements, and performance improvements.
We recommend that all users upgrade to this version.
Highlights include:
A new pipe method, see here
Documentation on how to use numba with pandas, see here
New features
Pipe
Other Enhancements
API Changes
Performance Improvements
Bug Fixes
1.9.1.1 Pipe
We've introduced a new method DataFrame.pipe(). As suggested by the name, pipe should be used to pipe
data through a chain of function calls. The goal is to avoid confusing nested function calls like
# df is a DataFrame
# f, g, and h are functions that take and return DataFrames
f(g(h(df), arg1=1), arg2=2, arg3=3)
The logic flows from inside out, and function names are separated from their keyword arguments. This can be rewritten
as
(df.pipe(h)
.pipe(g, arg1=1)
.pipe(f, arg2=2, arg3=3)
)
Now both the code and the logic flow from top to bottom. Keyword arguments are next to their functions. Overall the
code is much more readable.
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. When
the function you wish to apply takes its data anywhere other than the first argument, pass a tuple of (function,
keyword) indicating where the DataFrame should flow. For example:
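A minimal sketch with a hypothetical function whose data argument is not first:

df = pd.DataFrame({'a': [1, 2, 3]})

def scale(factor, data):
    return data * factor

# equivalent to scale(10, data=df)
df.pipe((scale, 'data'), 10)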
The pipe method is inspired by unix pipes, which stream text through processes. More recently dplyr and magrittr
have introduced popular (%>%) pipe operators for R.
Holiday now raises NotImplementedError if both offset and observance are used in the constructor
instead of returning an incorrect result (GH10217).
Bug in Series.hist raises an error when a one row Series was given (GH10214)
Bug where HDFStore.select modifies the passed columns list (GH7212)
Bug in Categorical repr with display.width of None in Python 3 (GH10087)
Bug in to_json with certain orients and a CategoricalIndex would segfault (GH10317)
Bug where some of the nan funcs do not have consistent return dtypes (GH10251)
Bug in DataFrame.quantile on checking that a valid axis was passed (GH9543)
Bug in groupby.apply aggregation for Categorical not preserving categories (GH10138)
Bug in to_csv where date_format is ignored if the datetime is fractional (GH10209)
Bug in DataFrame.to_json with mixed data types (GH10289)
Bug in cache updating when consolidating (GH10264)
Bug in mean() where integer dtypes can overflow (GH10172)
Bug where Panel.from_dict does not set dtype when specified (GH10058)
Bug in Index.union raises AttributeError when passing array-likes. (GH10149)
Bug in Timestamp's microsecond, quarter, dayofyear, week and daysinmonth properties returning
np.int type, not built-in int. (GH10050)
This is a minor bug-fix release from 0.16.0 and includes a large number of bug fixes along with several new features,
enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
Support for a CategoricalIndex, a category based index, see here
New section on how-to-contribute to pandas, see here
Revised Merge, join, and concatenate documentation, including graphical examples to make it easier to
understand each operation, see here
New method sample for drawing random samples from Series, DataFrames and Panels. See here
The default Index printing has changed to a more uniform format, see here
BusinessHour datetime-offset is now supported, see here
Further enhancement to the .str accessor to make string operations easier, see here
Enhancements
CategoricalIndex
Sample
String Methods Enhancements
Other Enhancements
API changes
Deprecations
Index Representation
Performance Improvements
Bug Fixes
Warning: In pandas 0.17.0, the sub-package pandas.io.data will be removed in favor of a separately
installable package. See here for details (GH8961)
1.10.1 Enhancements
1.10.1.1 CategoricalIndex
We introduce a CategoricalIndex, a new type of index object that is useful for supporting indexing with
duplicates. This is a container around a Categorical (introduced in v0.15.0) and allows efficient indexing and storage
of an index with a large number of duplicated elements. Prior to 0.16.1, setting the index of a DataFrame/Series
with a category dtype would convert this to a regular object-based Index.
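(The construction of df was lost in extraction; with the 0.16.1 API it would have looked roughly like:)

In [1]: df = pd.DataFrame({'A': np.arange(6),
   ...:                    'B': pd.Series(list('aabbca'))
   ...:                            .astype('category', categories=list('cab'))})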
In [2]: df
Out[2]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [3]: df.dtypes
Out[3]:
A int64
B category
dtype: object
In [4]: df.B.cat.categories
Out[4]: Index(['c', 'a', 'b'], dtype='object')
In [5]: df2 = df.set_index('B')

In [6]: df2.index
Out[6]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'],
ordered=False, name='B', dtype='category')
Indexing with __getitem__/.iloc/.loc/.ix works similarly to an Index with duplicates. The indexers
MUST be in the category, or the operation will raise.
In [7]: df2.loc['a']
Out[7]:
A
B
a 0
a 1
a 5
In [8]: df2.loc[['a', 'b']]
Out[8]:
   A
B
a  0
a  1
a  5
b  2
b  3
groupby operations on the index will preserve the index nature as well
In [10]: df2.groupby(level=0).sum()
Out[10]:
A
B
c 4
a 6
b 5
In [11]: df2.groupby(level=0).sum().index
Out[11]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'],
                          ordered=False, name='B', dtype='category')
Reindexing operations will return a resulting index based on the type of the passed indexer, meaning that passing a
list will return a plain-old Index; indexing with a Categorical will return a CategoricalIndex, indexed
according to the categories of the PASSED Categorical dtype. This allows one to arbitrarily index these even with
values NOT in the categories, similarly to how you can reindex ANY pandas index.
In [12]: df2.reindex(['a','e'])
Out[12]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [13]: df2.reindex(['a','e']).index
Out[13]: Index(['a', 'a', 'a', 'e'], dtype='object', name='B')
In [14]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
Out[14]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [15]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
Out[15]:
CategoricalIndex(['a', 'a', 'a', 'e'], categories=['a', 'b', 'c', 'd', 'e'],
                 ordered=False, name='B', dtype='category')
1.10.1.2 Sample
Series, DataFrames, and Panels now have a new method: sample(). The method accepts a specific number of rows
or columns to return, or a fraction of the total number of rows or columns. It also has options for sampling with or
without replacement, for passing in a column for weights for non-uniform sampling, and for setting seed values to
facilitate replication. (GH2419)
When applied to a DataFrame, one may pass the name of a column to specify sampling weights when sampling from
rows.
In [16]: df = pd.DataFrame({'col1': [9, 8, 7, 6],
   ....:                    'weight_column': [0.5, 0.4, 0.1, 0]})

In [17]: df.sample(n=3, weights='weight_column')
Out[17]:
   col1  weight_column
0     9            0.5
1     8            0.4
2     7            0.1
Continuing from v0.16.0, the following enhancements make string operations easier and more consistent with standard
python string operations.
Added StringMethods (.str accessor) to Index (GH9068)
The .str accessor is now available for both Series and Index.
In [26]: idx = Index([' jack', 'jill ', ' jesse ', 'frank'])
In [27]: idx.str.strip()
Out[27]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
One special case for the .str accessor on Index is that if a string method returns bool, the .str accessor
will return a np.array instead of a boolean Index (GH8875). This enables the following expression to work
naturally:
In [28]: idx = Index(['a1', 'a2', 'b1', 'b2'])

In [29]: s = Series(range(4), index=idx)

In [30]: s
Out[30]:
a1 0
a2 1
b1 2
b2 3
dtype: int64
In [31]: idx.str.startswith('a')
Out[31]: array([ True,  True, False, False], dtype=bool)
In [32]: s[s.index.str.startswith('a')]
Out[32]:
a1 0
a2 1
dtype: int64
The following new methods are accessible via the .str accessor to apply the function to each value. (GH9766,
GH9773, GH10031, GH10045, GH10052)
Methods
capitalize() swapcase() normalize() partition() rpartition()
index() rindex() translate()
split now takes an expand keyword to specify whether to expand dimensionality. return_type is
deprecated. (GH9847)
In [33]: s = pd.Series(['a,b', 'a,c', 'b,c'])

# return Series
In [34]: s.str.split(',')
Out[34]:
0 [a, b]
1 [a, c]
2 [b, c]
dtype: object
# return DataFrame
In [35]: s.str.split(',', expand=True)
Out[35]:
0 1
0 a b
1 a c
2 b c
In [36]: idx = pd.Index(['a,b', 'a,c', 'b,c'])

# return Index
In [37]: idx.str.split(',')
Out[37]: Index([['a', 'b'], ['a', 'c'], ['b', 'c']], dtype='object')
# return MultiIndex
In [38]: idx.str.split(',', expand=True)
Out[38]:
MultiIndex(levels=[['a', 'b'], ['b', 'c']],
labels=[[0, 0, 1], [0, 1, 1]])
BusinessHour offset is now supported, which represents business hours starting from 09:00 - 17:00 on
BusinessDay by default. See Here for details. (GH7905)
DataFrame.diff now takes an axis parameter that determines the direction of differencing (GH9727)
Allow clip, clip_lower, and clip_upper to accept array-like arguments as thresholds (this is a
regression from 0.11.0). These methods now have an axis parameter which determines how the Series or DataFrame
will be aligned with the threshold(s). (GH6966)
DataFrame.mask() and Series.mask() now support same keywords as where (GH8801)
drop function can now accept an errors keyword to suppress the ValueError raised when any label does not
exist in the target data. (GH6736)
Add support for separating years and quarters using dashes, for example 2014-Q1. (GH9688)
Allow conversion of values with dtype datetime64 or timedelta64 to strings using astype(str)
(GH9757)
get_dummies function now accepts sparse keyword. If set to True, the return DataFrame is sparse, e.g.
SparseDataFrame. (GH8823)
Period now accepts datetime64 as value input. (GH9054)
Allow timedelta string conversion when leading zero is missing from time definition, i.e. 0:00:00 vs 00:00:00.
(GH9570)
Allow Panel.shift with axis='items' (GH9890)
Trying to write an excel file now raises NotImplementedError if the DataFrame has a MultiIndex
instead of writing a broken Excel file. (GH9794)
Allow Categorical.add_categories to accept Series or np.array. (GH9927)
Add/delete str/dt/cat accessors dynamically from __dir__. (GH9910)
Add normalize as a dt accessor method. (GH10047)
DataFrame and Series now have _constructor_expanddim property as overridable constructor for
one higher dimensionality data. This should be used only when it is really needed, see here
pd.lib.infer_dtype now returns 'bytes' in Python 3 where appropriate. (GH10032)
When passing in an ax to df.plot(..., ax=ax), the sharex kwarg will now default to False. The result
is that the visibility of xlabels and xticklabels will no longer be changed. You have to do that by yourself
for the right axes in your figure, or set sharex=True explicitly (but this changes the visibility for all axes in the
figure, not only the one which is passed in!). If pandas creates the subplots itself (e.g. no passed-in ax kwarg),
then the default is still sharex=True and the visibility changes are applied.
assign() now inserts new columns in alphabetical order. Previously the order was arbitrary. (GH9777)
By default, read_csv and read_table will now try to infer the compression type based on the file
extension. Set compression=None to restore the previous behavior (no decompression). (GH9770)
1.10.2.1 Deprecations
The string representation of Index and its sub-classes has now been unified. These will show a single-line display
if there are few values; a wrapped multi-line display for a lot of values (but less than display.max_seq_items);
if there are lots of items (> display.max_seq_items) it will show a truncated display (the head and tail of the data). The
formatting for MultiIndex is unchanged (a multi-line wrapped display). The display width responds to the option
display.max_seq_items, which defaults to 100. (GH6482)
Previous Behavior
In [2]: pd.Index(range(4),name='foo')
Out[2]: Int64Index([0, 1, 2, 3], dtype='int64')
In [3]: pd.Index(range(104),name='foo')
Out[3]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...], dtype=
'int64')
In [4]: pd.date_range('20130101',periods=4,name='foo',tz='US/Eastern')
Out[4]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-01-04 00:00:00-05:00]
Length: 4, Freq: D, Timezone: US/Eastern
In [5]: pd.date_range('20130101',periods=104,name='foo',tz='US/Eastern')
Out[5]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-04-14 00:00:00-04:00]
Length: 104, Freq: D, Timezone: US/Eastern
New Behavior
Improved csv write performance with mixed dtypes, including datetimes by up to 5x (GH9940)
Improved csv write performance generally by 2x (GH9940)
Bug where labels did not appear properly in the legend of DataFrame.plot(), passing label= arguments
works, and Series indices are no longer mutated. (GH9542)
Bug in json serialization causing a segfault when a frame had zero length. (GH9805)
Bug in read_csv where missing trailing delimiters would cause segfault. (GH5664)
Bug in retaining index name on appending (GH9862)
Bug in scatter_matrix draws unexpected axis ticklabels (GH5662)
Fixed bug in StataWriter resulting in changes to input DataFrame upon save (GH9795).
Bug in transform causing length mismatch when null entries were present and a fast aggregator was being
used (GH9697)
Bug in equals causing false negatives when block order differed (GH9330)
Bug in grouping with multiple pd.Grouper where one is non-time based (GH10063)
Bug in read_sql_table error when reading postgres table with timezone (GH7139)
Bug in DataFrame slicing may not retain metadata (GH9776)
Bug where TimedeltaIndex were not properly serialized in fixed HDFStore (GH9635)
Bug with TimedeltaIndex constructor ignoring name when given another TimedeltaIndex as data
(GH10025).
Bug in DataFrameFormatter._get_formatted_index with not applying max_colwidth to the
DataFrame index (GH7856)
Bug in .loc with a read-only ndarray data source (GH10043)
Bug in groupby.apply() that would raise if a passed user-defined function returned only None (for
all input). (GH9685)
Always use temporary files in pytables tests (GH9992)
Bug in plotting continuously using secondary_y may not show legend properly. (GH9610, GH9779)
Bug in DataFrame.plot(kind="hist") results in TypeError when DataFrame contains non-
numeric columns (GH9853)
Bug where repeated plotting of DataFrame with a DatetimeIndex may raise TypeError (GH9852)
Bug in setup.py that would allow an incompat cython version to build (GH9827)
Bug in plotting secondary_y incorrectly attaches right_ax property to secondary axes specifying itself
recursively. (GH9861)
Bug in Series.quantile on empty Series of type Datetime or Timedelta (GH9675)
Bug in where causing incorrect results when upcasting was required (GH9731)
Bug in FloatArrayFormatter where decision boundary for displaying small floats in decimal format is
off by one order of magnitude for a given display.precision (GH9764)
Fixed bug where DataFrame.plot() raised an error when both color and style keywords were passed
and there was no color symbol in the style strings (GH9671)
Not showing a DeprecationWarning on combining list-likes with an Index (GH10083)
Bug in read_csv and read_table when using skip_rows parameter if blank lines are present. (GH9832)
Bug in read_csv() interprets index_col=True as 1 (GH9798)
Bug in index equality comparisons using == failing on Index/MultiIndex type incompatibility (GH9785)
Bug in which SparseDataFrame could not take nan as a column name (GH8822)
Bug in to_msgpack and read_msgpack zlib and blosc compression support (GH9783)
Bug where GroupBy.size doesn't attach the index name properly if grouped by TimeGrouper (GH9925)
Bug causing an exception in slice assignments because length_of_indexer returns wrong results
(GH9995)
Bug in csv parser causing lines with initial whitespace plus one non-space character to be skipped. (GH9710)
Bug in C csv parser causing spurious NaNs when data started with newline followed by whitespace. (GH10022)
Bug causing elements with a null group to spill into the final group when grouping by a Categorical
(GH9603)
Bug where .iloc and .loc behavior is not consistent on empty dataframes (GH9964)
Bug in invalid attribute access on a TimedeltaIndex incorrectly raised ValueError instead of
AttributeError (GH9680)
Bug in unequal comparisons between categorical data and a scalar, which was not in the categories (e.g.
Series(Categorical(list("abc"), ordered=True)) > "d". This returned False for all el-
ements, but now raises a TypeError. Equality comparisons also now return False for == and True for !=.
(GH9848)
Bug in DataFrame __setitem__ when right hand side is a dictionary (GH9874)
Bug in where when dtype is datetime64/timedelta64, but dtype of other is not (GH9804)
Bug in MultiIndex.sortlevel() results in unicode level name breaks (GH9856)
Bug in which groupby.transform incorrectly enforced output dtypes to match input dtypes. (GH9807)
Bug in DataFrame constructor when columns parameter is set, and data is an empty list (GH9939)
Bug in bar plot with log=True raises TypeError if all values are less than 1 (GH9905)
Bug in horizontal bar plot ignores log=True (GH9905)
Bug in PyTables queries that did not return proper results using the index (GH8265, GH9676)
Bug where dividing a dataframe containing values of type Decimal by another Decimal would raise.
(GH9787)
Bug where using a DataFrame's asfreq would remove the name of the index. (GH9885)
Bug causing an extra index point when resampling with BM/BQ (GH9756)
Changed caching in AbstractHolidayCalendar to be at the instance level rather than at the class level as
the latter can result in unexpected behaviour. (GH9552)
Fixed latex output for multi-indexed dataframes (GH9778)
Bug causing an exception when setting an empty range using DataFrame.loc (GH9596)
Bug in hiding ticklabels with subplots and shared axes when adding a new plot to an existing grid of axes
(GH9158)
Bug in transform and filter when grouping on a categorical variable (GH9921)
Bug in transform when groups are equal in number and dtype to the input index (GH9700)
This is a major release from 0.15.2 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
DataFrame.assign method, see here
Series.to_coo/from_coo methods to interact with scipy.sparse, see here
Backwards incompatible change to Timedelta to conform the .seconds attribute with
datetime.timedelta, see here
Changes to the .loc slicing API to conform with the behavior of .ix, see here
Changes to the default for ordering in the Categorical constructor, see here
Enhancement to the .str accessor to make string operations easier, see here
The pandas.tools.rplot, pandas.sandbox.qtpandas and pandas.rpy modules are deprecated.
We refer users to external packages like seaborn, pandas-qt and rpy2 for similar or equivalent functionality, see
here
Check the API Changes and deprecations before updating.
New features
DataFrame Assign
Interaction with scipy.sparse
String Methods Enhancements
Other enhancements
Backwards incompatible API changes
Changes in Timedelta
Indexing Changes
Categorical Changes
Other API Changes
Deprecations
Removal of prior version deprecations/changes
Performance Improvements
Bug Fixes
Inspired by dplyr's mutate verb, DataFrame has a new assign() method. The function signature for assign is
simply **kwargs. The keys are the column names for the new fields, and the values are either a value to be inserted
(for example, a Series or NumPy array), or a function of one argument to be called on the DataFrame. The new
values are inserted, and the entire DataFrame (with all original and new columns) is returned.
In [1]: iris = read_csv('data/iris.data')
In [2]: iris.head()
Out[2]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
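(The precomputed-value example itself was lost in extraction; it was presumably along these lines:)

In [3]: iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength']).head()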
Above was an example of inserting a precomputed value. We can also pass in a function to be evaluated.
In [4]: iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] /
...: x['SepalLength'])).head()
...:
Out[4]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
The power of assign comes when used in chains of operations. For example, we can limit the DataFrame to just
those rows with a SepalLength greater than 5, calculate the ratios, and plot
In [5]: (iris.query('SepalLength > 5')
...: .assign(SepalRatio = lambda x: x.SepalWidth / x.SepalLength,
...: PetalRatio = lambda x: x.PetalWidth / x.PetalLength)
...: .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
...:
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x136b031d0>
In [9]: s
Out[9]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [10]: ss = s.to_sparse()
In [11]: ss
Out[11]:
A B C D
1 2 a 0 3.0
1 NaN
1 b
0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
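(The call that produced A, rows and columns was lost in extraction; given the outputs below it was presumably:)

In [12]: A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
   ....:                              column_levels=['C', 'D'],
   ....:                              sort_labels=False)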
In [13]: A
Out[13]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [14]: A.todense()
Out[14]:
matrix([[ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  3.],
        [ 0.,  0.,  0.,  0.]])
In [15]: rows
Out[15]:
[(1, 2), (1, 1), (2, 1)]
In [16]: columns
Out[16]:
[('a', 0), ('a', 1), ('b', 0), ('b', 1)]
The from_coo method is a convenience method for creating a SparseSeries from a
scipy.sparse.coo_matrix:
In [17]: from scipy import sparse
In [18]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
   ....:                       shape=(3, 4))

In [19]: A
Out[19]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [20]: A.todense()
Out[20]:
matrix([[ 0.,  0.,  1.,  2.],
        [ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]])
In [21]: ss = SparseSeries.from_coo(A)
In [22]: ss
Out[22]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
The following new methods are accessible via the .str accessor to apply the function to each value. This is intended
to make them more consistent with standard methods on strings. (GH9282, GH9352, GH9386, GH9387, GH9439)
Methods
isalnum() isalpha() isdigit() isspace() islower()
isupper() istitle() isnumeric() isdecimal() find()
rfind() ljust() rjust() zfill()
In [23]: s = Series(['abcd', '3456', 'EFGH'])
In [24]: s.str.isalpha()
Out[24]:
0 True
1 False
2 True
dtype: bool
In [25]: s.str.find('ab')
Out[25]:
0 0
1 -1
2 -1
dtype: int64
Series.str.pad() and Series.str.center() now accept a fillchar option to specify the filling character (GH9352)
Reindex now supports method='nearest' for frames or series with a monotonic increasing or decreasing
index (GH9258):
This method is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
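A minimal sketch of nearest-match reindexing (the values are illustrative, not from the original docs):

import pandas as pd

s = pd.Series([1, 2, 3], index=[0, 5, 10])
# each requested label is matched to the nearest existing label:
# 1 -> 0, 4 -> 5, 9 -> 10
s.reindex([1, 4, 9], method='nearest')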
The read_excel() function's sheetname argument now accepts a list and None, to get multiple or all sheets
respectively. If more than one sheet is specified, a dictionary is returned. (GH9450)
Allow Stata files to be read incrementally with an iterator; support for long strings in Stata files. See the docs
here (GH9493).
Paths beginning with ~ will now be expanded to begin with the user's home directory (GH9066)
Added time interval selection in get_data_yahoo (GH9071)
Added Timestamp.to_datetime64() to complement Timedelta.to_timedelta64() (GH9255)
tseries.frequencies.to_offset() now accepts Timedelta as input (GH9064)
A lag parameter was added to the autocorrelation method of Series; defaults to lag-1 autocorrelation (GH9192)
Timedelta will now accept nanoseconds keyword in constructor (GH9273)
SQL code now safely escapes table and column names (GH8986)
Added auto-complete for Series.str.<tab>, Series.dt.<tab> and Series.cat.<tab>
(GH9322)
Index.get_indexer now supports method='pad' and method='backfill' for any target array, not just
monotonic targets. These methods also work for monotonic decreasing as well as monotonic increasing indexes
(GH9258).
Index.asof now works on all index types (GH9258).
A verbose argument has been added to io.read_excel(), defaulting to False. Set to True to print
sheet names as they are parsed. (GH9450)
Added days_in_month (compatibility alias daysinmonth) property to Timestamp, DatetimeIndex,
Period, PeriodIndex, and Series.dt (GH9572)
Added decimal option in to_csv to provide formatting for non-'.' decimal separators (GH781)
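A minimal sketch of the decimal option (the frame is illustrative, not from the original docs); a non-'.' field separator is used so that ',' can act as the decimal mark:

import pandas as pd

df = pd.DataFrame({'a': [1.5, 2.25]})
# use ';' as the field separator so ',' can serve as the decimal mark
csv_text = df.to_csv(sep=';', decimal=',')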
Added normalize option for Timestamp to normalize to midnight (GH8794)
Added example for DataFrame import to R using HDF5 file and rhdf5 library. See the documentation for
more (GH9636).
In v0.15.0 a new scalar type Timedelta was introduced, that is a sub-class of datetime.timedelta. Mentioned
here was a notice of an API change w.r.t. the .seconds accessor. The intent was to provide a user-friendly set of
accessors that give the natural value for that unit, e.g. if you had a Timedelta('1 day, 10:11:12'), then
.seconds would return 12. However, this is at odds with the definition of datetime.timedelta, which defines
.seconds as 10 * 3600 + 11 * 60 + 12 == 36672.
So in v0.16.0, we are restoring the API to match that of datetime.timedelta. Further, the component values are
still available through the .components accessor. This affects the .seconds and .microseconds accessors,
and removes the .hours, .minutes, .milliseconds accessors. These changes affect TimedeltaIndex and
the Series .dt accessor as well. (GH9185, GH9139)
Previous Behavior
In [2]: t = pd.Timedelta('1 day, 10:11:12.100123')
In [3]: t.days
Out[3]: 1
In [4]: t.seconds
Out[4]: 12
In [5]: t.microseconds
Out[5]: 123
New Behavior
In [33]: t = pd.Timedelta('1 day, 10:11:12.100123')
In [34]: t.days
Out[34]: 1
In [35]: t.seconds
Out[35]: 36672
In [36]: t.microseconds
Out[36]: 100123
In [38]: t.components.seconds
Out[38]: 12
The behavior of a small sub-set of edge cases for using .loc have changed (GH8613). Furthermore we have improved
the content of the error messages that are raised:
Slicing with .loc where the start and/or stop bound is not found in the index is now allowed; this previously
would raise a KeyError. This makes the behavior the same as .ix in this case. This change is only for
slicing, not when indexing with a single label.
In [39]: df = DataFrame(np.random.randn(5,4),
....: columns=list('ABCD'),
....: index=date_range('20130101',periods=5))
....:
In [40]: df
Out[40]:
A B C D
2013-01-01 -0.322795 0.841675 2.390961 0.076200
2013-01-02 -0.566446 0.036142 -2.074978 0.247792
2013-01-03 -0.897157 -0.136795 0.018289 0.755414
2013-01-04 0.215269 0.841009 -1.445810 -1.401973
2013-01-05 -0.100918 -0.548242 -0.144620 0.354020
In [41]: s = Series(range(5),[-2,-1,1,2,3])
In [42]: s
Out[42]:
-2 0
-1 1
1 2
2 3
3 4
dtype: int64
Previous Behavior
In [4]: df.loc['2013-01-02':'2013-01-10']
KeyError: 'stop bound [2013-01-10] is not in the [index]'
In [6]: s.loc[-10:3]
KeyError: 'start bound [-10] is not the [index]'
New Behavior
In [43]: df.loc['2013-01-02':'2013-01-10']
Out[43]:
A B C D
2013-01-02 -0.566446 0.036142 -2.074978 0.247792
2013-01-03 -0.897157 -0.136795 0.018289 0.755414
2013-01-04 0.215269 0.841009 -1.445810 -1.401973
2013-01-05 -0.100918 -0.548242 -0.144620 0.354020
In [44]: s.loc[-10:3]
Out[44]:
-2 0
-1 1
1 2
2 3
3 4
dtype: int64
Allow slicing with float-like values on an integer index for .ix. Previously this was only enabled for .loc:
Previous Behavior
In [8]: s.ix[-1.0:2]
TypeError: the slice start value [-1.0] is not a proper indexer for this index
type (Int64Index)
New Behavior
In [2]: s.ix[-1.0:2]
Out[2]:
-1 1
1 2
2 3
dtype: int64
Provide a useful exception for indexing with an invalid type for that index when using .loc. For example
trying to use .loc on an index of type DatetimeIndex or PeriodIndex or TimedeltaIndex, with an
integer (or a float).
Previous Behavior
In [4]: df.loc[2:3]
KeyError: 'start bound [2] is not the [index]'
New Behavior
In [4]: df.loc[2:3]
TypeError: Cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex
'> with <type 'int'> keys
In prior versions, Categoricals that had an unspecified ordering (meaning no ordered keyword was passed)
were defaulted as ordered Categoricals. Going forward, the ordered keyword in the Categorical constructor
will default to False. Ordering must now be explicit.
Furthermore, previously you could change the ordered attribute of a Categorical by just setting the attribute,
e.g. cat.ordered=True; this is now deprecated and you should use cat.as_ordered() or
cat.as_unordered(). These will by default return a new object and not modify the existing object. (GH9347,
GH9190)
Previous Behavior
In [4]: s
Out[4]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [5]: s.cat.ordered
Out[5]: True
In [7]: s
Out[7]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0, 1, 2]
New Behavior
In [46]: s
Out[46]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0, 1, 2]
In [47]: s.cat.ordered
Out[47]: False
In [48]: s = s.cat.as_ordered()
In [49]: s
Out[49]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [50]: s.cat.ordered
Out[50]: True
In [52]: s
Out[52]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [53]: s.cat.ordered
Out[53]: True
For ease of creation of Series of categorical data, we have added the ability to pass keywords when calling
.astype(). These are passed directly to the constructor.
In [54]: s = Series(["a","b","c","a"]).astype('category',ordered=True)
In [55]: s
Out[55]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a < b < c]
In [56]: s = Series(["a","b","c","a"]).astype('category',categories=list('abcdef'),
ordered=False)
In [57]: s
Out[57]:
0 a
1 b
2 c
3 a
dtype: category
Categories (6, object): [a, b, c, d, e, f]
New Behavior. If the input dtypes are integral, the output dtype is also integral and the output values are the
result of the bitwise operation.
In [2]: pd.Series([0,1,2,3], list('abcd')) | pd.Series([4,4,4,4], list('abcd'))
Out[2]:
a 4
b 5
c 6
d 7
dtype: int64
During division involving a Series or DataFrame, 0/0 and 0//0 now give np.nan instead of np.inf.
(GH9144, GH8445)
Previous Behavior
In [2]: p = pd.Series([0, 1])
In [3]: p / 0
Out[3]:
0 inf
1 inf
dtype: float64
In [4]: p // 0
Out[4]:
0 inf
1 inf
dtype: float64
New Behavior
In [58]: p = pd.Series([0, 1])
In [59]: p / 0
Out[59]:
0 NaN
1 inf
dtype: float64
In [60]: p // 0
Out[60]:
0 NaN
1 inf
dtype: float64
Series.value_counts and Series.describe for categorical data will now put NaN entries at the
end. (GH9443)
Series.describe for categorical data will now give counts and frequencies of 0, not NaN, for unused
categories (GH9443)
Due to a bug fix, looking up a partial string label with DatetimeIndex.asof now includes values that
match the string, even if they are after the start of the partial string label (GH9258).
To reproduce the old behavior, simply add more precision to the label (e.g., use 2000-02-01 instead of
2000-02).
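A minimal sketch of the fix (the values are illustrative, not from the original docs):

import pandas as pd

idx = pd.to_datetime(['2000-01-31', '2000-02-28'])
# previously this returned Timestamp('2000-01-31'); it now returns the
# last value matching the partial string '2000-02'
idx.asof('2000-02')   # Timestamp('2000-02-28 00:00:00')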
1.11.2.5 Deprecations
The rplot trellis plotting interface is deprecated and will be removed in a future version. We refer to external
packages like seaborn for similar but more refined functionality (GH3445). The documentation includes some
examples of how to convert your existing code from rplot to seaborn: rplot docs.
The pandas.sandbox.qtpandas interface is deprecated and will be removed in a future version. We refer
users to the external package pandas-qt. (GH9615)
The pandas.rpy interface is deprecated and will be removed in a future version. Similar functionality can
be accessed through the rpy2 project (GH9602)
Adding DatetimeIndex/PeriodIndex to another DatetimeIndex/PeriodIndex is being depre-
cated as a set-operation. This will be changed to a TypeError in a future version. .union() should be used
for the union set operation. (GH9094)
Subtracting DatetimeIndex/PeriodIndex from another DatetimeIndex/PeriodIndex is be-
ing deprecated as a set-operation. This will be changed to an actual numeric subtraction yielding a
TimeDeltaIndex in a future version. .difference() should be used for the differencing set operation.
(GH9094)
DataFrame.pivot_table and crosstab's rows and cols keyword arguments were removed in favor
of index and columns (GH6581)
Fixed a performance regression for .loc indexing with an array or list-like (GH9126).
DataFrame.to_json 30x performance improvement for mixed dtype frames. (GH9037)
Performance improvements in MultiIndex.duplicated by working with labels instead of values
(GH9125)
Improved the speed of nunique by calling unique instead of value_counts (GH9129, GH7771)
Performance improvement of up to 10x in DataFrame.count and DataFrame.dropna by taking advan-
tage of homogeneous/heterogeneous dtypes appropriately (GH9136)
Performance improvement of up to 20x in DataFrame.count when using a MultiIndex and the level
keyword argument (GH9163)
Performance and memory usage improvements in merge when key space exceeds int64 bounds (GH9151)
Performance improvements in multi-key groupby (GH9429)
Performance improvements in MultiIndex.sortlevel (GH9445)
Performance and memory usage improvements in DataFrame.duplicated (GH9398)
Cythonized Period (GH9440)
Decreased memory usage on to_hdf (GH9648)
Fixed incorrect dtypes inferred on datetimelike-looking Series and on .xs slices (GH9477)
Items in Categorical.unique() (and s.unique() if s is of dtype category) now appear in the order
in which they are originally found, not in sorted order (GH9331). This is now consistent with the behavior for
other dtypes in pandas.
Fixed bug on big endian platforms which produced incorrect results in StataReader (GH8688).
Bug in MultiIndex.has_duplicates when having many levels causes an indexer overflow (GH9075,
GH5873)
Bug in pivot and unstack where nan values would break index alignment (GH4862, GH7401, GH7403,
GH7405, GH7466, GH9497)
Bug in left join on multi-index with sort=True or null values (GH9210).
Bug in MultiIndex where inserting new keys would fail (GH9250).
Bug in groupby when key space exceeds int64 bounds (GH9096).
Bug in unstack with TimedeltaIndex or DatetimeIndex and nulls (GH9491).
Bug in rank where comparing floats with tolerance will cause inconsistent behaviour (GH8365).
Fixed character encoding bug in read_stata and StataReader when loading data from a URL (GH9231).
Bug in adding offsets.Nano to other offsets raises TypeError (GH9284)
Bug in DatetimeIndex iteration, related to (GH8890), fixed in (GH9100)
Bugs in resample around DST transitions. This required fixing offset classes so they behave correctly on
DST transitions. (GH5172, GH8744, GH8653, GH9173, GH9468).
Bug in binary operator method (eg .mul()) alignment with integer levels (GH9463).
Bug in boxplot, scatter and hexbin plot may show an unnecessary warning (GH8877)
Bug in subplot with layout kw may show unnecessary warning (GH9464)
Bug in using grouper functions that need passed-through arguments (e.g. axis) when using a wrapped function
(e.g. fillna) (GH9221)
DataFrame now properly supports simultaneous copy and dtype arguments in constructor (GH9099)
Bug in read_csv when using skiprows on a file with CR line endings with the c engine. (GH9079)
isnull now detects NaT in PeriodIndex (GH9129)
Bug in groupby .nth() with a multiple column groupby (GH8979)
Bug in DataFrame.where and Series.where coerce numerics to string incorrectly (GH9280)
Bug in DataFrame.where and Series.where raise ValueError when string list-like is passed.
(GH9280)
Accessing Series.str methods with non-string values now raises TypeError instead of producing
incorrect results (GH9184)
Bug in DatetimeIndex.__contains__ when index has duplicates and is not monotonic increasing
(GH9512)
Fixed division by zero error for Series.kurt() when all values are equal (GH9197)
Fixed issue in the xlsxwriter engine where it added a default General format to cells if no other format
was applied. This prevented other row or column formatting being applied. (GH9167)
Fixed an issue with index_col=False when usecols is also specified in read_csv. (GH9082)
Bug where wide_to_long would modify the input stubnames list (GH9204)
Bug in to_sql not storing float64 values using double precision. (GH9009)
SparseSeries and SparsePanel now accept zero argument constructors (same as their non-sparse coun-
terparts) (GH9272).
Regression in merging Categorical and object dtypes (GH9426)
Bug in read_csv with buffer overflows with certain malformed input files (GH9205)
Bug in groupby MultiIndex with missing pair (GH9049, GH9344)
Fixed bug in Series.groupby where grouping on MultiIndex levels would ignore the sort argument
(GH9444)
Fixed bug in DataFrame.groupby where sort=False is ignored in the case of Categorical columns.
(GH8868)
Fixed bug with reading CSV files from Amazon S3 on python 3 raising a TypeError (GH9452)
Bug in the Google BigQuery reader where the jobComplete key may be present but False in the query results
(GH8728)
Bug in Series.value_counts excluding NaN for categorical-type Series with dropna=True
(GH9443)
Fixed missing numeric_only option for DataFrame.std/var/sem (GH9201)
Support constructing Panel or Panel4D with scalar data (GH8285)
Series text representation disconnected from max_rows/max_columns (GH7508).
Series number formatting inconsistent when truncated (GH8532).
Previous Behavior
In [2]: pd.options.display.max_rows = 10
In [3]: s = pd.Series([1,1,1,1,1,1,1,1,1,1,0.9999,1,1]*10)
In [4]: s
Out[4]:
0 1
1 1
2 1
...
127 0.9999
128 1.0000
129 1.0000
Length: 130, dtype: float64
New Behavior
0 1.0000
1 1.0000
2 1.0000
3 1.0000
4 1.0000
...
125 1.0000
126 1.0000
127 0.9999
128 1.0000
129 1.0000
dtype: float64
A spurious SettingWithCopy warning was generated when setting a new item in a frame in some cases
(GH8730)
The following would previously report a SettingWithCopy warning.
This is a minor release from 0.15.1 and includes a large number of bug fixes along with several new features, enhance-
ments, and performance improvements. A small number of API changes were necessary to fix existing bugs. We
recommend that all users upgrade to this version.
Enhancements
API Changes
Performance Improvements
Bug Fixes
Indexing in MultiIndex beyond lex-sort depth is now supported, though a lexically sorted index will have
better performance. (GH2646)
In [2]: df
Out[2]:
jolie
jim joe
0 x 0.123943
x 0.119381
1 z 0.738523
y 0.587304
In [3]: df.index.lexsort_depth
Out[3]: 1
In [4]: df.loc[(1,'z')]
Out[4]:
jolie
jim joe
1 z 0.738523
# lexically sorting
In [5]: df2 = df.sort_index()
In [6]: df2
Out[6]:
jolie
jim joe
0 x 0.123943
x 0.119381
1 y 0.587304
z 0.738523
In [7]: df2.index.lexsort_depth
Out[7]: 2
In [8]: df2.loc[(1,'z')]
Out[8]:
jolie
jim joe
1 z 0.738523
Bug in unique of Series with category dtype, which returned all categories regardless of whether they were
used or not (see GH8559 for the discussion). Previous behaviour was to return all categories:
In [4]: cat
Out[4]:
[a, b, a]
Categories (3, object): [a < b < c]
In [5]: cat.unique()
Out[5]: array(['a', 'b', 'c'], dtype=object)
Now, only the categories that actually occur in the array are returned:
In [10]: cat.unique()
Out[10]:
[a, b]
Categories (2, object): [a, b]
Series.all and Series.any now support the level and skipna parameters. Series.all,
Series.any, Index.all, and Index.any no longer support the out and keepdims parameters, which
existed for compatibility with ndarray. Various index types no longer support the all and any aggregation
functions and will now raise TypeError. (GH8302).
Allow equality comparisons of Series with a categorical dtype and object dtype; previously these would raise
TypeError (GH8938)
Bug in NDFrame: conflicting attribute/column names now behave consistently between getting and setting.
Previously, when both a column and an attribute named y existed, data.y would return the attribute, while
data['y'] would return the column:
In [12]: data.y = 2
In [14]: data
Out[14]:
x y
0 1 2
1 2 4
2 3 6
Old behavior:
In [6]: data.y
Out[6]: 2
In [7]: data['y'].values
Out[7]: array([5, 5, 5])
New behavior:
In [16]: data.y
Out[16]: 5
In [17]: data['y'].values
Out[17]: array([2, 4, 6])
Timestamp('now') is now equivalent to Timestamp.now() in that it returns the local time rather than
UTC. Also, Timestamp('today') is now equivalent to Timestamp.today() and both have tz as a
possible argument. (GH9000)
Fix negative step support for label-based slices (GH8753)
Old behavior:
In [2]: s.loc['c':'a':-1]
Out[2]:
c 2
dtype: int64
New behavior:
In [19]: s.loc['c':'a':-1]
Out[19]:
c 2
b 1
a 0
dtype: int64
1.12.2 Enhancements
Categorical enhancements:
Added ability to export Categorical data to Stata (GH8633). See here for limitations of categorical variables
exported to Stata data files.
Added flag order_categoricals to StataReader and read_stata to select whether to order im-
ported categorical data (GH8836). See here for more information on importing categorical variables from Stata
data files.
Added ability to export Categorical data to/from HDF5 (GH7621). Queries work the same as if it were an
object array. However, the category-dtyped data is stored in a more efficient manner. See here for an
example and caveats w.r.t. prior versions of pandas.
Added support for searchsorted() on Categorical class (GH8420).
Other enhancements:
Added the ability to specify the SQL type of columns when writing a DataFrame to a database (GH8778). For
example, specifying to use the sqlalchemy String type instead of the default Text type for string columns:
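A minimal sketch under these assumptions (the table name, column name, and in-memory engine are illustrative):

from sqlalchemy import create_engine
from sqlalchemy.types import String
import pandas as pd

engine = create_engine('sqlite:///:memory:')
df = pd.DataFrame({'Col_1': ['a', 'b']})
# store Col_1 using the sqlalchemy String type instead of the default Text
df.to_sql('data_dtype', engine, dtype={'Col_1': String})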
Series.all and Series.any now support the level and skipna parameters (GH8302):
In [21]: s.any(level=0)
Out[21]:
0 True
1 False
dtype: bool
Panel now supports the all and any aggregation functions. (GH8302):
In [23]: p.all()
Out[23]:
0 1
0 True True
1 True True
2 False False
3 True True
Timedelta arithmetic returns NotImplemented in unknown cases, allowing extensions by custom classes
(GH8813).
Timedelta now supports arithmetic with numpy.ndarray objects of the appropriate dtype (numpy 1.8 or
newer only) (GH8884).
Added Timedelta.to_timedelta64() method to the public API (GH8884).
Added gbq.generate_bq_schema() function to the gbq module (GH8325).
Series now works with map objects the same way as generators (GH8909).
Added context manager to HDFStore for automatic closing (GH8791).
to_datetime gains an exact keyword to allow a format to not require an exact match for a provided format
string (if it's False). exact defaults to True (meaning that exact matching is still the default) (GH8904)
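A minimal sketch of non-exact matching (the input string is illustrative):

import pandas as pd

# exact=True (the default) would raise because ' 09:00' is left over;
# exact=False lets the format match just the leading portion of the string
pd.to_datetime('2014-08-01 09:00', format='%Y-%m-%d', exact=False)
# Timestamp('2014-08-01 00:00:00')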
Added axvlines boolean option to the parallel_coordinates plot function; determines whether vertical lines will
be printed, default is True
Added ability to read table footers to read_html (GH8552)
to_sql now infers datatypes of non-NA values for columns that contain NA values and have dtype object
(GH8778).
1.12.3 Performance
Bug in concat of Series with category dtype which were coercing to object. (GH8641)
Bug in Timestamp-Timestamp not returning a Timedelta type and datelike-datelike ops with timezones
(GH8865)
Made timezone-mismatch exceptions consistent (either tz operated with None or an incompatible timezone);
these will now raise TypeError rather than ValueError (a couple of edge cases only) (GH8865)
Bug in using a pd.Grouper(key=...) with no level/axis or level only (GH8795, GH8866)
Report a TypeError when invalid/no parameters are passed in a groupby (GH8015)
Bug in packaging pandas with py2app/cx_Freeze (GH8602, GH8831)
Bug in groupby signatures that didn't include *args or **kwargs (GH8733).
io.data.Options now raises RemoteDataError when no expiry dates are available from Yahoo and
when it receives no data from Yahoo (GH8761), (GH8783).
Unclear error message in csv parsing when passing dtype and names and the parsed data is a different data type
(GH8833)
Bug in slicing a multi-index with an empty list and at least one boolean indexer (GH8781)
io.data.Options now raises RemoteDataError when no expiry dates are available from Yahoo
(GH8761).
Timedelta kwargs may now be numpy ints and floats (GH8757).
Fixed several outstanding bugs for Timedelta arithmetic and comparisons (GH8813, GH5963, GH5436).
sql_schema now generates dialect appropriate CREATE TABLE statements (GH8697)
slice string method now takes step into account (GH8754)
Bug in BlockManager where setting values with different type would break block integrity (GH8850)
Bug in DatetimeIndex when using time object as key (GH8667)
Bug in merge where how='left' and sort=False would not preserve left frame order (GH7331)
Bug in MultiIndex.reindex where reindexing at level would not reorder labels (GH4088)
Bug in certain operations with dateutil timezones, manifesting with dateutil 2.3 (GH8639)
Regression in DatetimeIndex iteration with a Fixed/Local offset timezone (GH8890)
Bug in to_datetime when parsing nanoseconds using the %f format (GH8989)
Fix: the font size was only set on the x axis if vertical, or the y axis if horizontal. (GH8765)
Fixed division by 0 when reading big csv files in python 3 (GH8621)
Bug in outputting a MultiIndex with to_html, index=False, which would add an extra column (GH8452)
Imported categorical variables from Stata files retain the ordinal information in the underlying data (GH8836).
Defined .size attribute across NDFrame objects to provide compat with numpy >= 1.9.1; buggy with np.
array_split (GH8846)
Skip testing of histogram plots for matplotlib <= 1.2 (GH8648).
Bug where get_data_google returned object dtypes (GH3995)
Bug in DataFrame.stack(..., dropna=False) when the DataFrames columns is a MultiIndex
whose labels do not reference all its levels. (GH8844)
Bug in the Option context applied on __enter__ (GH8514)
Bug in resample that causes a ValueError when resampling across multiple days and the last offset is not calcu-
lated from the start of the range (GH8683)
Bug where DataFrame.plot(kind='scatter') fails when checking if an np.array is in the DataFrame
(GH8852)
Bug in pd.infer_freq/DataFrame.inferred_freq that prevented proper sub-daily frequency infer-
ence when the index contained DST days (GH8772).
Bug where index name was still used when plotting a series with use_index=False (GH8558).
Bugs when trying to stack multiple columns, when some (or all) of the level names are numbers (GH8584).
Bug in MultiIndex where __contains__ returns wrong result if index is not lexically sorted or unique
(GH7724)
BUG CSV: fix problem with trailing whitespace in skipped rows, (GH8679), (GH8661), (GH8983)
Regression in Timestamp does not parse Z zone designator for UTC (GH8771)
Bug in StataWriter that produced writes of strings with 244 characters irrespective of actual size (GH8969)
Fixed ValueError raised by cummin/cummax when datetime64 Series contains NaT. (GH8965)
Bug in Datareader returns object dtype if there are missing values (GH8980)
Bug in plotting if sharex was enabled and index was a timeseries, would show labels on multiple axes (GH3964).
Bug where passing a unit to the TimedeltaIndex constructor applied the nanosecond conversion twice.
(GH9011).
Bug in plotting of a period-like array (GH9012)
This is a minor bug-fix release from 0.15.0 and includes a small number of API changes, several new features, en-
hancements, and performance improvements along with a large number of bug fixes. We recommend that all users
upgrade to this version.
Enhancements
API Changes
Bug Fixes
s.dt.hour and other .dt accessors will now return np.nan for missing values (rather than previously -1),
(GH8689)
In [1]: s = Series(date_range('20130101',periods=5,freq='D'))
In [3]: s
Out[3]:
0 2013-01-01
1 2013-01-02
2 NaT
3 2013-01-04
4 2013-01-05
dtype: datetime64[ns]
previous behavior:
In [6]: s.dt.hour
Out[6]:
0 0
1 0
2 -1
3 0
4 0
dtype: int64
current behavior:
In [4]: s.dt.hour
Out[4]:
0 0.0
1 0.0
2 NaN
3 0.0
4 0.0
dtype: float64
groupby with as_index=False will not add erroneous extra columns to result (GH8582):
In [5]: np.random.seed(2718281)
In [7]: df.head()
Out[7]:
jim joe
0 61 81
1 96 49
2 55 65
3 72 51
4 77 12
previous behavior:
current behavior:
groupby will not erroneously exclude columns if the column name conflicts with the grouper name (GH8112):
In [11]: df
Out[11]:
jim joe
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
In [4]: gr.apply(sum)
Out[4]:
joe
jim
False 24
True 11
current behavior:
In [13]: gr.apply(sum)
Out[13]:
jim joe
jim
False 9 24
True 1 11
Support for slicing with monotonic decreasing indexes, even if start or stop is not found in the index
(GH7860):
In [15]: s
Out[15]:
4 a
3 b
2 c
1 d
dtype: object
previous behavior:
In [8]: s.loc[3.5:1.5]
KeyError: 3.5
current behavior:
In [16]: s.loc[3.5:1.5]
Out[16]:
3 b
2 c
dtype: object
io.data.Options has been fixed for a change in the format of the Yahoo Options page (GH8612),
(GH8741)
Note: As a result of a change in Yahoo's option page layout, when an expiry date is given, Options methods
now return data for a single expiry date. Previously, methods returned all data for the selected month.
The month and year parameters have been undeprecated and can be used to get all options data for a given
month.
If an expiry date that is not valid is given, data for the next expiry after the given date is returned.
Option data frames are now saved on the instance as callsYYMMDD or putsYYMMDD. Previously they were
saved as callsMMYY and putsMMYY. The next expiry is saved as calls and puts.
New features:
The expiry parameter can now be a single date or a list-like object containing dates.
A new property expiry_dates was added, which returns all available expiry dates.
Current behavior:
In [19]: aapl.get_call_data().iloc[0:5,0:1]
Out[19]:
Last
Strike Expiry Type Symbol
80 2014-11-14 call AAPL141114C00080000 29.05
84 2014-11-14 call AAPL141114C00084000 24.80
85 2014-11-14 call AAPL141114C00085000 24.05
86 2014-11-14 call AAPL141114C00086000 22.76
87 2014-11-14 call AAPL141114C00087000 21.74
In [20]: aapl.expiry_dates
Out[20]:
[datetime.date(2014, 11, 14),
datetime.date(2014, 11, 22),
datetime.date(2014, 11, 28),
datetime.date(2014, 12, 5),
datetime.date(2014, 12, 12),
datetime.date(2014, 12, 20),
datetime.date(2015, 1, 17),
datetime.date(2015, 2, 20),
datetime.date(2015, 4, 17),
datetime.date(2015, 7, 17),
datetime.date(2016, 1, 15),
datetime.date(2017, 1, 20)]
In [21]: aapl.get_near_stock_price(expiry=aapl.expiry_dates[0:3]).iloc[0:5,0:1]
Out[21]:
Last
Strike Expiry Type Symbol
109 2014-11-22 call AAPL141122C00109000 1.48
2014-11-28 call AAPL141128C00109000 1.79
110 2014-11-14 call AAPL141114C00110000 0.55
2014-11-22 call AAPL141122C00110000 1.02
2014-11-28 call AAPL141128C00110000 1.32
pandas now also registers the datetime64 dtype in matplotlib's units registry to plot such values as datetimes.
This is activated once pandas is imported. In previous versions, plotting an array of datetime64 values would
have resulted in plotted integer values. To keep the previous behaviour, you can do
del matplotlib.units.registry[np.datetime64] (GH8614).
1.13.2 Enhancements
concat permits a wider variety of iterables of pandas objects to be passed as the first parameter (GH8645):
previous behavior:
current behavior:
Represent MultiIndex labels with a dtype that utilizes memory based on the level size. In prior versions,
the memory usage was a constant 8 bytes per element in each level. In addition, in prior versions, the reported
memory usage was incorrect as it didn't show the usage for the memory occupied by the underlying data array.
(GH8456)
previous behavior:
current behavior:
In [22]: dfi.memory_usage(index=True)
Out[22]:
Index 11040
A 8000
dtype: int64
that many good countries were cropped in the hard-coded approach. All countries will work now, but some bad
countries will raise exceptions because some edge cases break the entire response. (GH8482)
Added option to Series.str.split() to return a DataFrame rather than a Series (GH8428)
Added option to df.info(null_counts=None|True|False) to override the default display options
and force showing of the null-counts (GH8701)
Bug where DataReaders would fail if one of the symbols passed was invalid. Now returns data for valid
symbols and np.nan for invalid (GH8494)
Bug in get_quote_yahoo that wouldn't allow non-float return values (GH5229).
This is a major release from 0.14.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.15.0 will no longer support compatibility with NumPy versions < 1.7.0. If you want to
use the latest versions of pandas, please upgrade to NumPy >= 1.7.0 (GH7711)
Highlights include:
The Categorical type was integrated as a first-class pandas type, see here
New scalar type Timedelta, and a new index type TimedeltaIndex, see here
New datetimelike properties accessor .dt for Series, see Datetimelike Properties
New DataFrame default display for df.info() to include memory usage, see Memory Usage
read_csv will now by default ignore blank lines when parsing, see here
API change in using Indexes in set operations, see here
Enhancements in the handling of timezones, see here
A lot of improvements to the rolling and expanding moment functions, see here
Internal refactoring of the Index class to no longer sub-class ndarray, see Internal Refactoring
dropping support for PyTables less than version 3.0.0, and numexpr less than version 2.1 (GH7990)
Split indexing documentation into Indexing and Selecting Data and MultiIndex / Advanced Indexing
Split out string methods documentation into Working with Text Data
Check the API Changes and deprecations before updating
Other Enhancements
Performance Improvements
Bug Fixes
Warning: In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This change allows very easy sub-classing and
creation of new index types. This should be a transparent change with only very limited API implications (See the
Internal Refactoring)
Warning: The refactorings in Categorical changed the two argument constructor from codes/labels and
levels to values and levels (now called categories). This can lead to subtle bugs. If you use Categorical
directly, please audit your code before updating to this pandas version and change it to use the from_codes()
constructor. See more on Categorical here
Categorical can now be included in Series and DataFrames and gained new methods to manipulate. Thanks to Jan
Schulz for much of this API/implementation. (GH3943, GH5313, GH5314, GH7444, GH7839, GH7848, GH7864,
GH7914, GH7768, GH8006, GH3678, GH8075, GH8076, GH8143, GH8453, GH8518).
For full docs, see the categorical introduction and the API documentation.
In [3]: df["grade"]
Out[3]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
In [6]: df["grade"]
Out[6]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [7]: df.sort_values("grade")
Out[7]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
In [8]: df.groupby("grade").size()
Out[8]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
1.14.1.2 TimedeltaIndex/Scalar
We introduce a new scalar type Timedelta, which is a subclass of datetime.timedelta, and behaves in a
similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representation,
parsing, and attributes. This type is very similar to how Timestamp works for datetimes. It is a nice-API box for
the type. See the docs. (GH3009, GH4533, GH8209, GH8187, GH8190, GH7869, GH7661, GH8345, GH8471)
Warning: Timedelta scalars (and TimedeltaIndex) component fields are not the same as the component
fields on a datetime.timedelta object. For example, .seconds on a datetime.timedelta object
returns the total number of seconds combined between hours, minutes and seconds. In contrast, the pandas
Timedelta breaks out hours, minutes, microseconds and nanoseconds separately.
# Timedelta accessor
In [9]: tds = Timedelta('31 days 5 min 3 sec')
In [10]: tds.minutes
Out[10]: 5L
In [11]: tds.seconds
Out[11]: 3L
# datetime.timedelta accessor
# this is 5 minutes * 60 + 3 seconds
In [12]: tds.to_pytimedelta().seconds
Out[12]: 303
Note: this is no longer true starting from v0.16.0, where full compatibility with datetime.timedelta is
introduced. See the 0.16.0 whatsnew entry
Warning: Prior to 0.15.0 pd.to_timedelta would return a Series for list-like/Series input, and a np.
timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series for Series
input, and Timedelta for scalar input.
Construct a scalar
In [9]: Timedelta('1 days 06:05:01.00003')
Out[9]: Timedelta('1 days 06:05:01.000030')
In [10]: Timedelta('15.5us')
Out[10]: Timedelta('0 days 00:00:00.000015500')
# a NaT
In [13]: Timedelta('nan')
Out[13]: NaT
In [15]: td.seconds
Out[15]: 3780
In [16]: td.microseconds
Out[16]: 15
In [17]: td.nanoseconds
Out[17]: 500
Construct a TimedeltaIndex
In [18]: TimedeltaIndex(['1 days','1 days, 00:00:05',
....: np.timedelta64(2,'D'),timedelta(days=2,seconds=2)])
....:
Out[18]:
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
'2 days 00:00:02'],
dtype='timedelta64[ns]', freq=None)
In [22]: s
Out[22]:
1 days 00:00:00 0
1 days 00:00:01 1
1 days 00:00:02 2
1 days 00:00:03 3
1 days 00:00:04 4
Freq: S, dtype: int64
Finally, the combination of TimedeltaIndex with DatetimeIndex allow certain combination operations that
are NaT preserving:
In [25]: tdi = TimedeltaIndex(['1 days',pd.NaT,'2 days'])
In [26]: tdi.tolist()
Out[26]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]
In [28]: dti.tolist()
Out[28]:
[Timestamp('2013-01-01 00:00:00', freq='D'),
Timestamp('2013-01-02 00:00:00', freq='D'),
Timestamp('2013-01-03 00:00:00', freq='D')]
Implemented methods to find memory usage of a DataFrame. See the FAQ for more. (GH6852).
A new display option display.memory_usage (see Options and Settings) sets the default behavior of the
memory_usage argument in the df.info() method. By default display.memory_usage is True.
In [32]: n = 5000
In [34]: df = DataFrame(data)
In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool 5000 non-null bool
complex128 5000 non-null complex128
datetime64[ns] 5000 non-null datetime64[ns]
float64 5000 non-null float64
int64 5000 non-null int64
object 5000 non-null object
timedelta64[ns] 5000 non-null timedelta64[ns]
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1),
object(1), timedelta64[ns](1)
Additionally, memory_usage() is available as a method on a DataFrame object, which returns the memory
usage of each column.
In [37]: df.memory_usage(index=True)
Out[37]:
Index 80
bool 5000
complex128 80000
datetime64[ns] 40000
float64 40000
int64 40000
object 40000
timedelta64[ns] 40000
categorical 10920
dtype: int64
Series has gained an accessor to succinctly return datetime-like properties for the values of the Series, if it's a
datetime/period-like Series. (GH7207) This will return a Series, indexed like the existing Series. See the docs
# datetime
In [38]: s = Series(date_range('20130101 09:10:12',periods=4))
In [39]: s
Out[39]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [40]: s.dt.hour
Out[40]:
0 9
1 9
2 9
3 9
dtype: int64
In [41]: s.dt.second
Out[41]:
0 12
1 12
2 12
3 12
dtype: int64
In [42]: s.dt.day
Out[42]:
0 1
1 2
2 3
3 4
dtype: int64
In [43]: s.dt.freq
Out[43]:
<Day>
In [46]: stz
Out[46]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]
In [47]: stz.dt.tz
Out[47]:
<DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
In [50]: s
Out[50]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [51]: s.dt.year
Out[51]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [52]: s.dt.day
Out[52]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [53]: s = Series(timedelta_range('1 day 00:00:05',periods=4,freq='s'))
In [54]: s
Out[54]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
dtype: timedelta64[ns]
In [55]: s.dt.days
Out[55]:
0 1
1 1
2 1
3 1
dtype: int64
In [56]: s.dt.seconds
Out[56]:
0 5
1 6
2 7
3 8
dtype: int64
In [57]: s.dt.components
tz_localize(None) for tz-aware Timestamp and DatetimeIndex now removes the timezone, holding
local time; previously this resulted in Exception or TypeError (GH7812)
In [59]: ts
In [60]: ts.tz_localize(None)
Out[60]: Timestamp('2014-08-01 09:00:00')
In [62]: didx
Out[62]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [63]: didx.tz_localize(None)
tz_localize now accepts the ambiguous keyword, which allows passing an array of bools indicating
whether the date belongs in DST or not, 'NaT' for setting transition times to NaT, 'infer' for inferring
DST/non-DST, and 'raise' (default) for an AmbiguousTimeError to be raised. See the docs for more details (GH7943)
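A minimal sketch of the ambiguous keyword (the timestamps are illustrative; they use the 2014 US/Eastern fall-back, when clocks repeat the 01:00 hour):

import numpy as np
import pandas as pd

# 01:00 occurs twice on 2014-11-02 as US/Eastern clocks fall back
rng = pd.DatetimeIndex(['2014-11-02 00:00', '2014-11-02 01:00',
                        '2014-11-02 01:00', '2014-11-02 02:00'])
rng.tz_localize('US/Eastern', ambiguous='infer')
# or mark each entry explicitly: True = DST, False = non-DST
# (flags for unambiguous times are ignored)
rng.tz_localize('US/Eastern', ambiguous=np.array([True, True, False, False]))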
DataFrame.tz_localize and DataFrame.tz_convert now accepts an optional level argument
for localizing a specific level of a MultiIndex (GH7846)
Timestamp.tz_localize and Timestamp.tz_convert now raise TypeError in error cases, rather
than Exception (GH8025)
a timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
(rather than being a naive datetime64[ns]) as object dtype (GH8411)
Timestamp.__repr__ displays dateutil.tz.tzoffset info (GH7907)
New behavior
rolling_window() now normalizes the weights properly in rolling mean mode (mean=True) so that the
calculated weighted means (e.g. triang, gaussian) are distributed about the same means as those calculated
without weighting (i.e. boxcar). See the note on normalization for further details. (GH7618)
New behavior
Removed center argument from all expanding_ functions (see list), as the results produced when
center=True did not make much sense. (GH7925)
Added optional ddof argument to expanding_cov() and rolling_cov(). The default value of 1 is
backwards-compatible. (GH8279)
Documented the ddof argument to expanding_var(), expanding_std(), rolling_var(), and
rolling_std(). These functions' support of a ddof argument (with a default value of 1) was previously
undocumented. (GH8064)
ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now interpret min_periods
in the same manner that the rolling_*() and expanding_*() functions do: a given result entry will be
NaN if the (expanding, in this case) window does not contain at least min_periods values. The previous
behavior was to set to NaN the min_periods entries starting with the first non- NaN value. (GH7977)
Prior behavior (note values start at index 2, which is min_periods after index 0 (the index of the first
non-empty value)):
New behavior (note values start at index 4, the location of the 2nd (since min_periods=2) non-empty value):
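A hedged sketch of the new min_periods interpretation, using the era's top-level pd.ewma moments API (values are illustrative):

import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan, 1.0, 2.0, 3.0])
# results stay NaN until the (expanding) window has seen min_periods
# non-NaN values, so the first result here appears at index 3,
# the location of the 2nd non-empty value
pd.ewma(s, com=2.0, min_periods=2)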
ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional adjust argu-
ment, just like ewma() does, affecting how the weights are calculated. The default value of adjust is True,
which is backwards-compatible. See Exponentially weighted moment functions for details. (GH7911)
ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional
ignore_na argument. When ignore_na=False (the default), missing values are taken into account in
the weights calculation. When ignore_na=True (which reproduces the pre-0.15.0 behavior), missing values
are ignored in the weights calculation. (GH7543)
Out[8]:
0 1.0
1 1.0
2 5.2
dtype: float64
Warning: By default (ignore_na=False) the ewm*() functions weights calculation in the presence
of missing values is different than in pre-0.15.0 versions. To reproduce the pre-0.15.0 calculation of weights
in the presence of missing values one must specify explicitly ignore_na=True.
Note that entry 0 is approximately 0, and the debiasing factors are a constant 1.25. By comparison, the following
0.15.0 results have a NaN for entry 0, and the debiasing factors are decreasing (towards 1.25):
Added support for a chunksize parameter to to_sql function. This allows DataFrame to be written in
chunks and avoid packet-size overflow errors (GH8062).
Added support for a chunksize parameter to read_sql function. Specifying this argument will return an
iterator through chunks of the query result (GH2908).
Added support for writing datetime.date and datetime.time object columns with to_sql (GH6932).
Added support for specifying a schema to read from/write to with read_sql_table and to_sql
(GH7441, GH7952). For example:
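A minimal sketch under these assumptions (the connection string, table name, and 'other_schema' schema are illustrative; the schema must already exist in the database):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:pass@localhost/db')
df = pd.DataFrame({'a': [1, 2]})
# write to, then read from, a non-default schema
df.to_sql('test_table', engine, schema='other_schema')
pd.read_sql_table('test_table', engine, schema='other_schema')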
API changes related to the introduction of the Timedelta scalar (see above for more details):
Prior to 0.15.0 to_timedelta() would return a Series for list-like/Series input, and a np.
timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series for
Series input, and Timedelta for scalar input.
For API changes related to the rolling and expanding functions, see detailed overview above.
Other notable API changes:
Consistency when indexing with .loc and a list-like indexer when no values are found.
In [68]: df = DataFrame([['a'],['b']],index=[1,2])
In [69]: df
Out[69]:
0
1 a
2 b
In [70]: df.loc[[1,3]]
Out[70]:
0
1 a
3 NaN
In [71]: df.loc[[1,3],:]
Out[71]:
0
1 a
3 NaN
In [72]: p = Panel(np.arange(2*3*4).reshape(2,3,4),
....: items=['ItemA','ItemB'],
....: major_axis=[1,2,3],
....: minor_axis=['A','B','C','D'])
....:
In [73]: p
Out[73]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemB
Major_axis axis: 1 to 3
Minor_axis axis: A to D
In [74]: p.loc[['ItemA','ItemD'],:,'D']
Out[74]:
ItemA ItemD
1 3 NaN
2 7 NaN
3 11 NaN
Furthermore, .loc will raise if no values are found in a multi-index with a list-like indexer:
In [75]: s = Series(np.arange(3,dtype='int64'),
....: index=MultiIndex.from_product([['A'],['foo','bar','baz']],
....: names=['one','two'])
....: ).sort_index()
....:
In [76]: s
Out[76]:
one two
A bar 1
baz 2
foo 0
dtype: int64
In [77]: try:
....: s.loc[['D']]
....: except KeyError as e:
....: print("KeyError: " + str(e))
....:
KeyError: "['D'] not in index"
Assigning values to None now considers the dtype when choosing an empty value (GH7941).
Previously, assigning to None in numeric containers changed the dtype to object (or errored, depending on the
call). It now uses NaN:
In [80]: s
Out[80]:
0 NaN
1 2.0
2 3.0
dtype: float64
In [83]: s
Out[83]:
0 None
1 b
2 c
dtype: object
To insert a NaN, you must explicitly use np.nan. See the docs.
In prior versions, updating a pandas object inplace would not reflect in other python references to this object.
(GH8511, GH5104)
In [85]: s2 = s
In [86]: s += 1.5
Made both the C-based and Python engines for read_csv and read_table ignore empty lines in input as well as
whitespace-filled lines, as long as sep is not whitespace. This is an API change that can be controlled by the
keyword parameter skip_blank_lines. See the docs (GH4466)
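A minimal sketch of the keyword (the data is illustrative):

from io import StringIO
import pandas as pd

data = 'a,b\n1,2\n\n3,4\n'
pd.read_csv(StringIO(data))                          # blank line skipped: 2 rows
pd.read_csv(StringIO(data), skip_blank_lines=False)  # blank line kept as an all-NaN row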
A timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
and inserted as object dtype rather than being converted to a naive datetime64[ns] (GH8411).
Bug in passing a DatetimeIndex with a timezone that was not being retained in DataFrame construction
from a dict (GH7822)
In prior versions this would drop the timezone, now it retains the timezone, but gives a column of object
dtype:
In [90]: i
Out[90]:
DatetimeIndex(['2011-01-01 00:00:00-05:00', '2011-01-01 00:00:10-05:00',
'2011-01-01 00:00:20-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='10S')
In [92]: df
Out[92]:
a
0 2011-01-01 00:00:00-05:00
1 2011-01-01 00:00:10-05:00
2 2011-01-01 00:00:20-05:00
In [93]: df.dtypes
Out[93]:
a datetime64[ns, US/Eastern]
dtype: object
Previously this would have yielded a column of datetime64 dtype, but without timezone info.
The behaviour of assigning a column to an existing dataframe as df['a'] = i remains unchanged (this already
returned an object column with a timezone).
When passing multiple levels to stack(), it will now raise a ValueError when the levels aren't all level
names or all level numbers (GH7660). See Reshaping by stacking and unstacking.
Raise a ValueError in df.to_hdf with fixed format, if df has non-unique columns as the resulting file
will be broken (GH7761)
SettingWithCopy raise/warnings (according to the option mode.chained_assignment) will now be
issued when setting a value on a sliced mixed-dtype DataFrame using chained-assignment. (GH7845, GH7950)
merge, DataFrame.merge, and ordered_merge now return the same type as the left argument
(GH7737).
Previously an enlargement with a mixed-dtype frame would act unlike .append which will preserve dtypes
(related GH2578, GH8176):
In [95]: df
Out[95]:
female fitness
0 True 1
1 False 2
In [96]: df.dtypes
Out[96]:
female bool
fitness int64
dtype: object
In [98]: df
Out[98]:
female fitness
0 True 1
1 False 2
2 False 2
In [99]: df.dtypes
Out[99]:
female bool
fitness int64
dtype: object
Series.to_csv() now returns a string when path=None, matching the behaviour of
DataFrame.to_csv() (GH8215).
read_hdf now raises IOError when a file that doesn't exist is passed in. Previously, a new, empty file was
created, and a KeyError raised (GH7715).
DataFrame.info() now ends its output with a newline character (GH8114)
Concatenating no objects will now raise a ValueError rather than a bare Exception.
Merge errors will now be sub-classes of ValueError rather than raw Exception (GH8501)
DataFrame.plot and Series.plot keywords now have consistent orders (GH8037)
In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This change allows very easy sub-classing and cre-
ation of new index types. This should be a transparent change with only very limited API implications (GH5080,
GH7439, GH7796, GH8024, GH8367, GH7997, GH8522):
you may need to unpickle pandas version < 0.15.0 pickles using pd.read_pickle rather than
pickle.load. See pickle docs
when plotting with a PeriodIndex, the matplotlib internal axes will now be arrays of Period rather than a
PeriodIndex (this is similar to how a DatetimeIndex passes arrays of datetimes now)
MultiIndexes will now raise similarly to other pandas objects w.r.t. truth testing, see here (GH7897).
When plotting a DatetimeIndex directly with matplotlibs plot function, the axis labels will no longer be format-
ted as dates but as integers (the internal representation of a datetime64). UPDATE This is fixed in 0.15.1,
see here.
1.14.2.3 Deprecations
The Categorical labels and levels attributes are deprecated and renamed to codes and
categories.
The outtype argument to pd.DataFrame.to_dict has been deprecated in favor of orient. (GH7840)
The convert_dummies method has been deprecated in favor of get_dummies (GH8140)
The infer_dst argument in tz_localize will be deprecated in favor of ambiguous to allow for more
flexibility in dealing with DST transitions. Replace infer_dst=True with ambiguous='infer' for the
same behavior (GH7943). See the docs for more details.
The top-level pd.value_range has been deprecated and can be replaced by .describe() (GH8481)
The Index set operations + and - were deprecated in order to provide these for numeric type operations on
certain index types. + can be replaced by .union() or |, and - by .difference(). Further the method
name Index.diff() is deprecated and can be replaced by Index.difference() (GH8226)
# +
Index(['a','b','c']) + Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).union(Index(['b','c','d']))
# -
Index(['a','b','c']) - Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).difference(Index(['b','c','d']))
The infer_types argument to read_html() now has no effect and is deprecated (GH7762, GH7032).
1.14.3 Enhancements
describe() on mixed-type DataFrames is more flexible: type-based column filtering is now possible via the
include and exclude arguments. For example:
In [101]: df.describe(include=["object"])
Out[101]:
catA catB
count 24 24
unique 2 4
top foo b
freq 16 6
Without those arguments, describe will behave as before, including only numerical columns or, if none are,
only categorical columns. See also the docs
Added split as an option to the orient argument in pd.DataFrame.to_dict. (GH7840)
The get_dummies method can now be used on DataFrames. By default only categorical columns are encoded
as 0's and 1's, while other columns are left untouched.
In [104]: df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],
.....: 'C': [1, 2, 3]})
.....:
In [105]: pd.get_dummies(df)
Out[105]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
pandas.tseries.holiday has added support for additional holidays and ways to observe holidays
(GH7070)
pandas.tseries.holiday.Holiday now supports a list of offsets in Python3 (GH7070)
pandas.tseries.holiday.Holiday now supports a days_of_week parameter (GH7070)
GroupBy.nth() now supports selecting multiple nth values (GH7910)
# get the first, 4th, and last date index for each month
In [108]: df.groupby((df.index.year, df.index.month)).nth([0, 3, -1])
Out[108]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
In [110]: idx
Out[110]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]', freq='H')
In [114]: idx
Out[114]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'],
dtype='period[M]', freq='M')
Added experimental compatibility with openpyxl for versions >= 2.0. The DataFrame.to_excel method
engine keyword now recognizes openpyxl1 and openpyxl2, which will explicitly require openpyxl v1
and v2 respectively, failing if the requested version is not available. The openpyxl engine is now a
meta-engine that automatically uses whichever version of openpyxl is installed. (GH7177)
DataFrame.fillna can now accept a DataFrame as a fill value (GH8377)
Passing multiple levels to stack() will now work when multiple level numbers are passed (GH7660). See
Reshaping by stacking and unstacking.
set_names(), set_labels(), and set_levels() methods now take an optional level keyword argument
to allow modification of specific level(s) of a MultiIndex. Additionally set_names() now accepts a
scalar string value when operating on an Index or on a specific level of a MultiIndex (GH7792)
Index.isin now supports a level argument to specify which index level to use for membership tests
(GH7892, GH7890)
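A minimal sketch of level-aware membership tests (the index is illustrative):

import pandas as pd

midx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']],
                                  names=['num', 'let'])
# membership is tested against the 'let' level only
midx.isin(['a'], level='let')   # array([ True, False,  True, False])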
In [2]: idx.values
Out[2]: array([(0, 'a'), (0, 'b'), (0, 'c'), (1, 'a'), (1, 'b'), (1, 'c')],
dtype=object)
In [122]: idx
Out[122]: Int64Index([1, 2, 3, 4, 1, 2], dtype='int64')
In [123]: idx.duplicated()
Out[123]: array([False, False, False, False, True, True], dtype=bool)
In [124]: idx.drop_duplicates()
Out[124]:
Int64Index([1, 2, 3, 4], dtype='int64')
add copy=True argument to pd.concat to enable pass-through of complete blocks (GH8252)
Added support for numpy 1.8+ data types (bool_, int_, float_, string_) for conversion to R dataframe
(GH8400)
1.14.4 Performance
Bug in DataFrame.groupby where Grouper does not recognize level when frequency is specified
(GH7885)
Bug in MultiIndex dtypes getting mixed up when a DataFrame is saved to a SQL table (GH8021)
Bug in Series 0-division with a float and integer operand dtypes (GH7785)
Bug in Series.astype("unicode") not calling unicode on the values correctly (GH7758)
Bug in DataFrame.as_matrix() with mixed datetime64[ns] and timedelta64[ns] dtypes
(GH7778)
Bug in HDFStore.select_column() not preserving UTC timezone info when selecting a
DatetimeIndex (GH7777)
Bug in to_datetime when format='%Y%m%d' and coerce=True are specified, where previously an
object array was returned rather than a coerced time-series with NaT (GH7930)
Bug in DatetimeIndex and PeriodIndex in-place addition and subtraction causing a different result from
the normal one (GH6527)
Bug in adding and subtracting PeriodIndex with PeriodIndex raising a TypeError (GH7741)
Bug in combine_first with PeriodIndex data raising a TypeError (GH3367)
Bug in multi-index slicing with missing indexers (GH7866)
Bug in multi-index slicing with various edge cases (GH8132)
Regression in multi-index indexing with a non-scalar type object (GH7914)
Bug in Timestamp comparisons with == and int64 dtype (GH8058)
Bug in pickles containing DateOffset which could raise an AttributeError when the normalize attribute is referred to
internally (GH7748)
Bug in Panel when using major_xs and copy=False is passed (deprecation warning fails because of
missing warnings) (GH8152).
Bug in pickle deserialization that failed for pre-0.14.1 containers with duplicate items when trying to avoid ambiguity in
matching block and manager items; when there's only one block there's no ambiguity (GH7794)
Bug in putting a PeriodIndex into a Series would convert to int64 dtype, rather than object of
Periods (GH7932)
Bug in HDFStore iteration when passing a where (GH8014)
Bug in DataFrameGroupby.transform when transforming with a passed non-sorted key (GH8046,
GH8430)
Bug in repeated timeseries line and area plot may result in ValueError or incorrect kind (GH7733)
Bug in inference in a MultiIndex with datetime.date inputs (GH7888)
Bug in get where an IndexError would not cause the default value to be returned (GH7725)
Bug in offsets.apply, rollforward and rollback may reset nanosecond (GH7697)
Bug in offsets.apply, rollforward and rollback may raise AttributeError if Timestamp
has dateutil tzinfo (GH7697)
Bug in sorting a multi-index frame with a Float64Index (GH8017)
Bug in inconsistent panel setitem with a rhs of a DataFrame for alignment (GH7763)
Bug in is_superperiod and is_subperiod cannot handle higher frequencies than S (GH7760, GH7772,
GH7803)
Bug in DataFrame.shift where empty columns would throw ZeroDivisionError on numpy 1.7
(GH8019)
Bug in installation where html_encoding/*.html wasn't installed and therefore some tests were not running
correctly (GH7927).
Bug in read_html where bytes objects were not tested for in _read (GH7927).
Bug in DataFrame.stack() when one of the column levels was a datelike (GH8039)
Bug in broadcasting numpy scalars with DataFrame (GH8116)
Bug in pivot_table performed with nameless index and columns raises KeyError (GH8103)
Bug in DataFrame.plot(kind='scatter') draws points and errorbars with different colors when the
color is specified by c keyword (GH8081)
Bug in Float64Index where iat and at were not testing and were failing (GH8092).
Bug in DataFrame.boxplot() where y-limits were not set correctly when producing multiple axes
(GH7528, GH5517).
Bug in read_csv where line comments were not handled correctly given a custom line terminator or
delim_whitespace=True (GH8122).
Bug in read_html where empty tables caused a StopIteration (GH7575)
Bug in casting when setting a column in a same-dtype block (GH7704)
Bug in accessing groups from a GroupBy when the original grouper was a tuple (GH8121).
Bug in .at that would accept integer indexers on a non-integer index and do fallback (GH7814)
Bug with kde plot and NaNs (GH8182)
Bug in GroupBy.count with float32 data type where nan values were not excluded (GH8169).
Bug with stacked barplots and NaNs (GH8175).
Bug in resample with non evenly divisible offsets (e.g. 7s) (GH8371)
Bug in interpolation methods with the limit keyword when no values needed interpolating (GH7173).
Bug where col_space was ignored in DataFrame.to_string() when header=False (GH8230).
Bug with DatetimeIndex.asof incorrectly matching partial strings and returning the wrong date
(GH8245).
Bug in plotting methods modifying the global matplotlib rcParams (GH8242).
Bug in DataFrame.__setitem__ that caused errors when setting a dataframe column to a sparse array
(GH8131)
Bug where DataFrame.boxplot() failed when an entire column was empty (GH8181).
Bug with messed variables in radviz visualization (GH8199).
Bug in to_clipboard that would clip long column data (GH8305)
Bug in DataFrame terminal display: Setting max_column/max_rows to zero did not trigger auto-resizing of
dfs to fit terminal width/height (GH7180).
Bug in OLS where running with cluster and nw_lags parameters did not work correctly, but also did not
throw an error (GH5884).
Bug in DataFrame.dropna that interpreted non-existent columns in the subset argument as the last column
(GH8303)
Bug in Index.intersection on non-monotonic non-unique indexes (GH8362).
Bug in masked series assignment where mismatching types would break alignment (GH8387)
Bug in NDFrame.equals gives false negatives with dtype=object (GH8437)
Bug in assignment with indexer where type diversity would break alignment (GH8258)
Bug in NDFrame.loc indexing when row/column names were lost when target was a list/ndarray (GH6552)
Regression in NDFrame.loc indexing when rows/columns were converted to Float64Index if target was an
empty list/ndarray (GH7774)
Bug in Series that allows it to be indexed by a DataFrame which has unexpected results. Such indexing is
no longer permitted (GH8444)
Bug in item assignment of a DataFrame with multi-index columns where right-hand-side columns were not
aligned (GH7655)
Suppress FutureWarning generated by NumPy when comparing object arrays containing NaN for equality
(GH7065)
Bug in DataFrame.eval() where the dtype of the not operator (~) was not correctly inferred as bool.
This is a minor release from 0.14.0 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
New methods select_dtypes() to select columns based on the dtype and sem() to calculate the
standard error of the mean.
Support for dateutil timezones (see docs).
Support for ignoring full line comments in the read_csv() text parser.
New documentation section on Options and Settings.
Lots of bug fixes.
Enhancements
API Changes
Performance Improvements
Experimental Changes
Bug Fixes
Openpyxl now raises a ValueError on construction of the openpyxl writer instead of warning on pandas import
(GH7284).
For StringMethods.extract, when no match is found, the result - only containing NaN values - now also
has dtype=object instead of float (GH7242)
Period objects no longer raise a TypeError when compared using == with another object that isn't a
Period. Instead, when comparing a Period with another object using ==, False is returned if the other
object isn't a Period. (GH7376)
Previously, the behaviour of resetting the time or not in offsets.apply, rollforward and rollback
operations differed between offsets. With the support of the normalize keyword for all offsets (see below)
with a default value of False (preserve time), the behaviour changed for certain offsets (BusinessMonthBegin,
MonthEnd, BusinessMonthEnd, CustomBusinessMonthEnd, BusinessYearBegin, LastWeekOfMonth,
FY5253Quarter, Easter):
Starting from 0.14.1 all offsets preserve time by default. The old behaviour can be obtained with
normalize=True
# new behaviour
In [1]: d + offsets.MonthEnd()
Out[1]: Timestamp('2014-01-31 09:00:00')
In [2]: d + offsets.MonthEnd(normalize=True)
Out[2]: Timestamp('2014-01-31 00:00:00')
Note that for the other offsets the default behaviour did not change.
Add back #N/A N/A as a default NA value in text parsing (a regression from 0.12) (GH5521)
Raise a TypeError on inplace-setting with a .where and a non np.nan value as this is inconsistent with a
set-item expression like df[mask] = None (GH7656)
1.15.2 Enhancements
In [9]: rng.tz
Out[9]: tzfile('/usr/share/zoneinfo/Europe/London')
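The transcript above shows the tz attribute of an index built with a dateutil timezone. A minimal sketch of requesting one, assuming the 'dateutil/' prefix convention:
import pandas as pd

# prefixing the zone name with 'dateutil/' selects dateutil rather than pytz
rng = pd.date_range('2014-08-01 09:00', periods=3,
                    tz='dateutil/Europe/London')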
1.15.3 Performance
Improvements in dtype inference for numeric operations, yielding performance gains for dtypes:
int64, timedelta64, datetime64 (GH7223)
Improvements in Series.transform for significant performance gains (GH6496)
Improvements in DataFrame.transform with ufuncs and built-in grouper functions for significant performance
gains (GH7383)
Regression in groupby aggregation of datetime64 dtypes (GH7555)
Improvements in MultiIndex.from_product for large iterables (GH7627)
1.15.4 Experimental
pandas.io.data.Options has a new method, get_all_data, and now consistently returns a
multi-indexed DataFrame (GH5602)
io.gbq.read_gbq and io.gbq.to_gbq were refactored to remove the dependency on the Google
bq.py command line client. This submodule now uses httplib2 and the Google apiclient and
oauth2client API client libraries which should be more stable and, therefore, reliable than bq.py. See the
docs. (GH6937).
Bug in DataFrame.where with a symmetric shaped frame and a passed other of a DataFrame (GH7506)
Bug in Panel indexing with a multi-index axis (GH7516)
Regression in datetimelike slice indexing with a duplicated index and non-exact end-points (GH7523)
Bug in setitem with list-of-lists and single vs mixed types (GH7551)
Bug in timeops with non-aligned Series (GH7500)
Bug in timedelta inference when assigning an incomplete Series (GH7592)
Bug in groupby .nth with a Series and integer-like column name (GH7559)
Bug in Series.get with a boolean accessor (GH7407)
Bug in value_counts where NaT did not qualify as missing (NaN) (GH7423)
Bug in to_timedelta that accepted invalid units and misinterpreted m/h (GH7611, GH6423)
Bug in line plot doesn't set correct xlim if secondary_y=True (GH7459)
Bug in grouped hist and scatter plots using the old figsize default (GH7394)
Bug in plotting subplots with DataFrame.plot, hist clears passed ax even if the number of subplots is
one (GH7391).
Bug in plotting subplots with DataFrame.boxplot with by kw raises ValueError if the number of
subplots exceeds 1 (GH7391).
Bug in subplots displaying ticklabels and labels according to different rules (GH5897)
Bug in Panel.apply with a multi-index as an axis (GH7469)
Bug in DatetimeIndex.insert doesn't preserve name and tz (GH7299)
Bug in DatetimeIndex.asobject doesn't preserve name (GH7299)
Bug in multi-index slicing with datetimelike ranges (strings and Timestamps) (GH7429)
Bug in Index.min and max doesn't handle nan and NaT properly (GH7261)
Bug in PeriodIndex.min/max results in int (GH7609)
Bug in resample where fill_method was ignored if you passed how (GH2073)
Bug in TimeGrouper doesn't exclude column specified by key (GH7227)
Bug in DataFrame and Series bar and barh plot raises TypeError when bottom and left keyword is
specified (GH7226)
Bug in DataFrame.hist raises TypeError when it contains non numeric column (GH7277)
Bug in Index.delete does not preserve name and freq attributes (GH7302)
Bug in DataFrame.query()/eval where local string variables with the @ sign were being treated as
temporaries attempting to be deleted (GH7300).
Bug in Float64Index which didn't allow duplicates (GH7149).
Bug in DataFrame.replace() where truthy values were being replaced (GH7140).
Bug in StringMethods.extract() where a single match group Series would use the matcher's name
instead of the group name (GH7313).
Bug in isnull() when mode.use_inf_as_null == True where isnull wouldn't test True when it
encountered an inf/-inf (GH7315).
Bug in inferred_freq results in None for eastern hemisphere timezones (GH7310)
Bug in Easter returns incorrect date when offset is negative (GH7195)
Bug in broadcasting with .div, integer dtypes and divide-by-zero (GH7325)
Bug in CustomBusinessDay.apply raising a NameError when an np.datetime64 object is passed
(GH7196)
Bug in MultiIndex.append, concat and pivot_table don't preserve timezone (GH6606)
Bug in .loc with a list of indexers on a single-multi index level (that is not nested) (GH7349)
Bug in Series.map when mapping a dict with tuple keys of different lengths (GH7333)
Bug where StringMethods failed on empty Series; all StringMethods now work on empty Series (GH7242)
Fix delegation of read_sql to read_sql_query when query does not contain select (GH7324).
Bug where a string column name assignment to a DataFrame with a Float64Index raised a TypeError
during a call to np.isnan (GH7366).
Bug where NDFrame.replace() didn't correctly replace objects with Period values (GH7379).
Bug in .ix getitem should always return a Series (GH7150)
Bug in multi-index slicing with incomplete indexers (GH7399)
Bug in multi-index slicing with a step in a sliced level (GH7400)
Bug where negative indexers in DatetimeIndex were not correctly sliced (GH7408)
Bug where NaT wasn't repr'd correctly in a MultiIndex (GH7406, GH7409).
Bug where bool objects were converted to nan in convert_objects (GH7416).
Bug in quantile ignoring the axis keyword argument (GH7306)
Bug where nanops._maybe_null_out doesn't work with complex numbers (GH7353)
Bug in several nanops functions when axis==0 for 1-dimensional nan arrays (GH7354)
Bug where nanops.nanmedian doesn't work when axis==None (GH7352)
Bug where nanops._has_infs doesn't work with many dtypes (GH7357)
Bug in StataReader.data where reading a 0-observation dta failed (GH7369)
Bug in StataReader when reading Stata 13 (117) files containing fixed width strings (GH7360)
Bug in StataWriter where encoding was ignored (GH7286)
Bug in DatetimeIndex comparison doesn't handle NaT properly (GH7529)
Bug in passing input with tzinfo to some offsets' apply, rollforward or rollback resets tzinfo or
raises ValueError (GH7465)
This is a major release from 0.13.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
Officially support Python 3.4
SQL interfaces updated to use sqlalchemy, See Here.
Warning: In 0.14.0 all NDFrame based containers have undergone significant internal refactoring. Before that
each block of homogeneous data had its own labels and extra care was necessary to keep those in sync with the
parent container's labels. This should not have any visible user/API behavior changes (GH6745)
In [2]: dfl
Out[2]:
A B
0 1.583584 -0.438313
1 -0.402537 -0.780572
2 -0.141685 0.542241
3 0.370966 -0.251642
4 0.787484 1.666563
In [3]: dfl.iloc[:,2:3]
Out[3]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [4]: dfl.iloc[:,1:3]
Out[4]:
B
0 -0.438313
1 -0.780572
2 0.542241
3 -0.251642
4 1.666563
In [5]: dfl.iloc[4:6]
Out[5]:
A B
4 0.787484 1.666563
These are out-of-bounds selections:
dfl.iloc[[4,5,6]]
IndexError: positional indexers are out-of-bounds
dfl.iloc[:,4]
IndexError: single positional indexer is out-of-bounds
Slicing with negative start, stop & step values handles corner cases better (GH6531):
df.iloc[:-len(df)] is now empty
df.iloc[len(df)::-1] now enumerates all elements in reverse
The DataFrame.interpolate() keyword downcast default has been changed from infer to None.
This is to preserve the original dtype unless explicitly requested otherwise (GH6290).
When converting a dataframe to HTML it used to return Empty DataFrame. This special case has been removed;
instead a header with the column names is returned (GH6062).
Series and Index now internally share more common operations; e.g. factorize(), nunique(),
value_counts() are now supported on Index types as well. The Series.weekday property
is removed from Series for API consistency. Using a DatetimeIndex/PeriodIndex method on a Series
will now raise a TypeError. (GH4551, GH4056, GH5519, GH6380, GH7206).
Add is_month_start, is_month_end, is_quarter_start, is_quarter_end,
is_year_start, is_year_end accessors for DatetimeIndex / Timestamp which return a
boolean array of whether the timestamp(s) are at the start/end of the month/quarter/year defined by the
frequency of the DatetimeIndex / Timestamp (GH4565, GH6998)
Local variable usage has changed in pandas.eval()/DataFrame.eval()/DataFrame.query()
(GH5987). For the DataFrame methods, two things have changed
Column names are now given precedence over locals
Local variables must be referred to explicitly. This means that even if you have a local variable that is not
a column you must still refer to it with the '@' prefix.
You can have an expression like df.query('@a < a') with no complaints from pandas about ambiguity
of the name a.
The top-level pandas.eval() function does not allow you to use the '@' prefix and provides you with
an error message telling you so.
NameResolutionError was removed because it isn't necessary anymore.
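A minimal sketch of the precedence rules described above; the frame and variable names are illustrative:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
a = 2  # local variable shadowing the column name
# the bare name resolves to the column; '@a' resolves to the local variable
result = df.query('a > @a')  # keeps only the row where column a == 3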
Define and document the order of column vs index names in query/eval (GH6676)
concat will now concatenate mixed Series and DataFrames using the Series name or numbering columns as
needed (GH2385). See the docs
Slicing and advanced/boolean indexing operations on Index classes as well as Index.delete() and
Index.drop() methods will no longer change the type of the resulting index (GH6440, GH7040)
In [7]: i[[0,1,2]]
Out[7]: Index([1, 2, 3], dtype='object')
Previously, the above operation would return Int64Index. If you'd like to do this manually, use
Index.astype()
In [9]: i[[0,1,2]].astype(np.int_)
Out[9]: Int64Index([1, 2, 3], dtype='int64')
set_index no longer converts MultiIndexes to an Index of tuples. For example, the old behavior returned an
Index in this case (GH6459):
In [11]: df_multi.set_index(tuple_ind)
Out[11]:
0 1
(a, c) 0.471435 -1.190976
(a, d) 1.432707 -0.312652
(b, c) -0.720589 0.887163
(b, d) 0.859588 -0.636524
# New behavior
In [12]: mi
Out[12]:
MultiIndex(levels=[['a', 'b'], ['c', 'd']],
           labels=[[0, 0, 1, 1], [0, 1, 0, 1]])
In [13]: df_multi.set_index(mi)
Out[13]:
0 1
a c 0.471435 -1.190976
d 1.432707 -0.312652
b c -0.720589 0.887163
d 0.859588 -0.636524
pairwise keyword was added to the statistical moment functions rolling_cov, rolling_corr,
ewmcov, ewmcorr, expanding_cov, expanding_corr to allow the calculation of moving window
covariance and correlation matrices (GH4950). See Computing rolling pairwise covariances and correlations
in the docs.
In [1]: df = DataFrame(np.random.randn(10,4),columns=list('ABCD'))
In [5]: covs[df.index[-1]]
Out[5]:
B C D
A 0.035310 0.326593 -0.505430
B 0.137748 -0.006888 -0.005383
C -0.006888 0.861040 0.020762
Series.iteritems() is now lazy (returns an iterator rather than a list). This was the documented behavior
prior to 0.14. (GH6760)
Added nunique and value_counts functions to Index for counting unique elements. (GH6734)
stack and unstack now raise a ValueError when the level keyword refers to a non-unique item in the
Index (previously raised a KeyError). (GH6738)
drop unused order argument from Series.sort; args now are in the same order as Series.order; add
na_position arg to conform to Series.order (GH6847)
default sorting algorithm for Series.order is now quicksort, to conform with Series.sort (and
numpy defaults)
add inplace keyword to Series.order/sort to make them inverses (GH6859)
DataFrame.sort now places NaNs at the beginning or end of the sort according to the na_position
parameter. (GH3917)
accept TextFileReader in concat, which was affecting a common user idiom (GH6583), this was a
regression from 0.13.1
Added factorize functions to Index and Series to get indexer and unique values (GH7090)
describe on a DataFrame with a mix of Timestamp and string like objects returns a different Index (GH7088).
Previously the index was unintentionally sorted.
Arithmetic operations with only bool dtypes now give a warning indicating that they are evaluated in Python
space for +, -, and * operations and raise for all others (GH7011, GH6762, GH7015, GH7210)
In HDFStore, select_as_multiple will always raise a KeyError, when a key or the selector is not
found (GH6177)
df['col'] = value and df.loc[:,'col'] = value are now completely equivalent; previously the
.loc would not necessarily coerce the dtype of the resultant series (GH6149)
dtypes and ftypes now return a series with dtype=object on empty containers (GH5740)
df.to_csv will now return a string of the CSV data if neither a target path nor a buffer is provided (GH6061)
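A minimal sketch of the new to_csv return value; the frame is illustrative:
import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
csv_text = df.to_csv()  # no path or buffer given: the CSV is returned as a string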
pd.infer_freq() will now raise a TypeError if given an invalid Series/Index type (GH6407,
GH6463)
A tuple passed to DataFrame.sort_index will be interpreted as the levels of the index, rather than requiring
a list of tuples (GH4370)
all offset operations now return Timestamp types (rather than datetime), Business/Week frequencies were
incorrect (GH4069)
to_excel now converts np.inf into a string representation, customizable by the inf_rep keyword argu-
ment (Excel has no native inf representation) (GH6782)
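A minimal sketch of the inf_rep keyword; the file name and frame are illustrative, and an Excel writer engine must be installed:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, np.inf]})
df.to_excel('out.xlsx', inf_rep='INF')  # np.inf is written as the string 'INF'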
Replace pandas.compat.scipy.scoreatpercentile with numpy.percentile (GH6810)
.quantile on a datetime[ns] series now returns Timestamp instead of np.datetime64 objects
(GH6810)
change AssertionError to TypeError for invalid types passed to concat (GH6583)
Raise a TypeError when DataFrame is passed an iterator as the data argument (GH5357)
The default way of printing large DataFrames has changed. DataFrames exceeding max_rows and/or
max_columns are now displayed in a centrally truncated view, consistent with the printing of a
pandas.Series (GH5603).
In previous versions, a DataFrame was truncated once the dimension constraints were reached and an ellipsis
(...) signaled that part of the data was cut off.
In the current version, large DataFrames are centrally truncated, showing a preview of head and tail in both
dimensions.
allow option 'truncate' for display.show_dimensions to only show the dimensions if the frame is
truncated (GH6547).
The default for display.show_dimensions will now be truncate. This is consistent with how Series
display their length.
In [16]: dfd = pd.DataFrame(np.arange(25).reshape(-1,5), index=[0,1,2,3,4],
columns=[0,1,2,3,4])
In [17]: dfd
Out[17]:
0 0 ... 4
.. .. ... ..
4 20 ... 24
[5 rows x 5 columns]
Regression in the display of a MultiIndexed Series when display.max_rows is less than the length of the
series (GH7101)
Fixed a bug in the HTML repr of a truncated Series or DataFrame not showing the class name with the large_repr
set to info (GH7105)
The verbose keyword in DataFrame.info(), which controls whether to shorten the info representation,
is now None by default. This will follow the global setting in display.max_info_columns. The global
setting can be overridden with verbose=True or verbose=False.
Fixed a bug with the info repr not honoring the display.max_info_columns setting (GH6939)
Offset/freq info now in Timestamp __repr__ (GH4553)
read_csv()/read_table() will now be noisier w.r.t. invalid options rather than falling back to the
PythonParser.
Raise ValueError when sep specified with delim_whitespace=True in
read_csv()/read_table() (GH6607)
Raise ValueError when engine='c' specified with unsupported options in
read_csv()/read_table() (GH6607)
Raise ValueError when fallback to python parser causes options to be ignored (GH6607)
Produce ParserWarning on fallback to python parser when no options are ignored (GH6607)
Translate sep='\s+' to delim_whitespace=True in read_csv()/read_table() if no other C-
unsupported options specified (GH6607)
In [20]: g = df.groupby('A')
In [23]: g[['B']].head(1)
Out[23]:
B
0 2
2 6
groupby nth now reduces by default; filtering can be achieved by passing as_index=False. With an
optional dropna argument to ignore NaN. See the docs.
Reducing
In [25]: g = df.groupby('A')
In [26]: g.nth(0)
Out[26]:
B
A
1 NaN
5 6.0
In [27]: g.nth(0, dropna='any')  # with dropna, nth(0) is equivalent to g.first()
Out[27]:
B
A
1 4.0
5 6.0
Filtering
In [29]: gf = df.groupby('A',as_index=False)
In [30]: gf.nth(0)
Out[30]:
A B
0 1 NaN
2 5 6.0
groupby will now not return the grouped column for non-cython functions (GH5610, GH5614, GH6732), as it's
already the index
In [32]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [33]: g = df.groupby('A')
In [34]: g.count()
Out[34]:
B
A
1 1
5 2
In [35]: g.describe()
Out[35]:
B
count mean std min 25% 50% 75% max
A
1 1.0 4.0 NaN 4.0 4.0 4.0 4.0 4.0
5 2.0 7.0 1.414214 6.0 6.5 7.0 7.5 8.0
passing as_index will leave the grouped column in-place (this is not a change in 0.14.0)
In [36]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [37]: g = df.groupby('A',as_index=False)
In [38]: g.count()
Out[38]:
A B
0 1 1
1 5 2
In [39]: g.describe()
Out[39]:
A B \
count mean std min 25% 50% 75% max count mean std min 25%
0 2.0 1.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0 4.0 NaN 4.0 4.0
1 2.0 5.0 0.0 5.0 5.0 5.0 5.0 5.0 2.0 7.0 1.414214 6.0 6.5
Allow specification of a more complex groupby via pd.Grouper, such as grouping by a Time and a string
field simultaneously. See the docs. (GH3794)
Better propagation/preservation of Series names when performing groupby operations:
SeriesGroupBy.agg will ensure that the name attribute of the original series is propagated to the
result (GH6265).
If the function provided to GroupBy.apply returns a named series, the name of the series will be kept as
the name of the column index of the DataFrame returned by GroupBy.apply (GH6124). This facilitates
DataFrame.stack operations where the name of the column index is used as the name of the inserted
column containing the pivoted data.
1.16.5 SQL
The SQL reading and writing functions now support more database flavors through SQLAlchemy (GH2717, GH4163,
GH5950, GH6292). All databases supported by SQLAlchemy can be used, such as PostgreSQL, MySQL, Oracle,
Microsoft SQL server (see documentation of SQLAlchemy on included dialects).
The functionality of providing DBAPI connection objects will only be supported for sqlite3 in the future. The
'mysql' flavor is deprecated.
The new functions read_sql_query() and read_sql_table() are introduced. The function read_sql()
is kept as a convenience wrapper around the other two and will delegate to the appropriate function depending on the
provided input (database table name or SQL query).
In practice, you have to provide a SQLAlchemy engine to the sql functions. To connect with SQLAlchemy you use
the create_engine() function to create an engine object from a database URI. You only need to create the engine
once per database you are connecting to. For an in-memory sqlite database:
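A minimal sketch using the standard SQLAlchemy API; the table name and frame are illustrative:
from sqlalchemy import create_engine
import pandas as pd

engine = create_engine('sqlite://')   # 'sqlite://' is an in-memory SQLite database

df = pd.DataFrame({'a': [1, 2]})
df.to_sql('data', engine)             # write a frame to the database
pd.read_sql_table('data', engine)     # read it back by table name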
This engine can then be used to write or read data to/from this database:
You can read data from a database by specifying the table name:
Warning: Some of the existing functions or function aliases have been deprecated and will be removed in future
versions. This includes: tquery, uquery, read_frame, frame_query, write_frame.
Warning: The support for the mysql flavor when using DBAPI connection objects has been deprecated. MySQL
will be further supported with SQLAlchemy engines (GH6900).
In 0.14.0 we added a new way to slice multi-indexed objects. You can slice a multi-index by providing multiple
indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label, including slices, lists of
labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the deeper levels,
they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
See the docs. See also issues (GH6134, GH4036, GH3057, GH2598, GH5641, GH7106)
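The transcripts below build slicers with an object named idx; a minimal sketch of the convention they assume, using pd.IndexSlice:
import pandas as pd

idx = pd.IndexSlice
# e.g. rows whose third level is 'C1' or 'C3', and columns whose second
# level is 'foo', are selected with:
#   df.loc[idx[:, :, ['C1', 'C3']], idx[:, 'foo']]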
Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into, say, the MultiIndex for the rows.
You should do this:
df.loc[(slice('A1','A3'),.....),:]
Warning: You will need to make sure that the selection axes are fully lexsorted!
In [49]: df = DataFrame(np.arange(len(index)*len(columns)).reshape((len(index),
len(columns))),
....: index=index,
....: columns=columns).sort_index().sort_index(axis=1)
....:
In [50]: df
Out[50]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25 24 27 26
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233 232 235 234
D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
In [53]: df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
Out[53]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
It is possible to perform quite complicated selections using this method on multiple axes at the same time.
In [54]: df.loc['A1',(slice(None),'foo')]
Out[54]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
D1 84 86
C3 D0 88 90
... ... ...
B1 C0 D1 100 102
C1 D0 104 106
D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
In [55]: df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
Out[55]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
Using a boolean indexer you can provide selection related to the values.
In [57]: df.loc[idx[mask,:,['C1','C3']],idx[:,'foo']]
Out[57]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed slicers on a single axis.
In [58]: df.loc(axis=0)[:,:,['C1','C3']]
Out[58]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
D1 45 44 47 46
C3 D0 57 56 59 58
... ... ... ... ...
A3 B0 C1 D1 205 204 207 206
C3 D0 217 216 219 218
D1 221 220 223 222
B1 C1 D0 233 232 235 234
In [61]: df2
Out[61]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 -10 -10 -10 -10
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
In [64]: df2
Out[64]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25000 24000 27000 26000
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233000 232000 235000 234000
D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
1.16.7 Plotting
Hexagonal bin plots from DataFrame.plot with kind='hexbin' (GH5478), See the docs.
DataFrame.plot and Series.plot now supports area plot with specifying kind='area' (GH6656),
See the docs
Pie plots from Series.plot and DataFrame.plot with kind='pie' (GH6976), See the docs.
Plotting with Error Bars is now supported in the .plot method of DataFrame and Series objects (GH3796,
GH6834), See the docs.
DataFrame.plot and Series.plot now support a table keyword for plotting matplotlib.Table,
See the docs. The table keyword can receive the following values.
False: Do nothing (default).
True: Draw a table using the DataFrame or Series on which the plot method was called. Data will be
transposed to meet matplotlib's default layout.
DataFrame or Series: Draw a matplotlib.table using the passed data. The data will be drawn as
displayed in the print method (not transposed automatically). Also, a helper function
pandas.tools.plotting.table is added to create a table from a DataFrame or Series, and add it to a
matplotlib.Axes.
plot(legend='reverse') will now reverse the order of legend labels for most plot kinds. (GH6014)
Line plot and area plot can be stacked by stacked=True (GH6656)
Following keywords are now acceptable for DataFrame.plot() with kind='bar' and kind='barh':
width: Specify the bar width. In previous versions, the static value 0.5 was passed to matplotlib and could not
be overwritten. (GH6604)
align: Specify the bar alignment. Default is center (different from matplotlib). In previous versions,
pandas passed align='edge' to matplotlib and adjusted the location to center by itself, with the result that
the align keyword was not applied as expected. (GH4525)
position: Specify relative alignments for bar plot layout, from 0 (left/bottom-end) to 1 (right/top-end).
Default is 0.5 (center). (GH6604)
Because of the change to the default align value, coordinates of bar plots are now located on integer values (0.0, 1.0,
2.0 ...). This is intended to make bar plots located on the same coordinates as line plots. However, a bar plot
may differ unexpectedly when you manually adjust the bar location or drawing area, such as using set_xlim,
set_ylim, etc. In such cases, please modify your script to meet the new coordinates.
The parallel_coordinates() function now takes argument color instead of colors. A
FutureWarning is raised to alert that the old colors argument will not be supported in a future release.
(GH6956)
The parallel_coordinates() and andrews_curves() functions now take positional argument
frame instead of data. A FutureWarning is raised if the old data argument is used by name. (GH6956)
DataFrame.boxplot() now supports layout keyword (GH6769)
DataFrame.boxplot() has a new keyword argument, return_type. It accepts 'dict', 'axes', or
'both', in which case a namedtuple with the matplotlib axes and a dict of matplotlib Lines is returned.
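A hedged sketch of return_type='both'; the frame contents are illustrative:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 2), columns=['a', 'b'])
result = df.boxplot(return_type='both')
# result.ax is the matplotlib Axes; result.lines is a dict of matplotlib Lines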
There are prior version deprecations that are taking effect as of 0.14.0.
Remove DateRange in favor of DatetimeIndex (GH6816)
Remove column keyword from DataFrame.sort (GH4370)
Remove precision keyword from set_eng_float_format() (GH395)
Remove force_unicode keyword from DataFrame.to_string(), DataFrame.to_latex(), and
DataFrame.to_html(); these functions encode in unicode by default (GH2224, GH2225)
Remove nanRep keyword from DataFrame.to_csv() and DataFrame.to_string() (GH275)
Remove unique keyword from HDFStore.select_column() (GH3256)
Remove inferTimeRule keyword from Timestamp.offset() (GH391)
Remove name keyword from get_data_yahoo() and get_data_google() ( commit b921d1a )
Remove offset keyword from DatetimeIndex constructor ( commit 3136390 )
Remove time_rule from several rolling-moment statistical functions, such as rolling_sum() (GH1042)
Removed neg (-) boolean operations on numpy arrays in favor of inv (~), as this is going to be deprecated in numpy
1.9 (GH6960)
1.16.9 Deprecations
Out[1]: 1
In [2]: Series(1,np.arange(5)).iloc[3.0]
pandas/core/index.py:469: FutureWarning: scalar indexers for index type
Int64Index should be integers and not floating point
Out[2]: 1
In [3]: Series(1,np.arange(5)).iloc[3.0:4]
pandas/core/index.py:527: FutureWarning: slice indexers when using iloc
should be integers and not floating point
Out[3]:
3 1
dtype: int64
In [5]: Series(1,np.arange(5.))[3.0]
Out[5]: 1
1.16.11 Enhancements
DataFrame and Series will create a MultiIndex object if passed a dict with tuples as keys, See the docs (GH3323)
In [65]: Series({('a', 'b'): 1, ('a', 'a'): 0,
....: ('a', 'c'): 2, ('b', 'a'): 3, ('b', 'b'): 4})
....:
Out[65]:
a a 0
b 1
c 2
b a 3
b 4
dtype: int64
In [68]: household
Out[68]:
male wealth
household_id
1 0 196087.3
2 1 316478.7
3 0 294750.0
....: "gb00b03mlx29","lu0197800237",
"nl0000289965",np.nan],
....: ).set_index(['household_id','asset_id'])
....:
In [70]: portfolio
Out[70]:
name share
household_id asset_id
1 nl0000301109 ABN Amro 1.00
2 nl0000289783 Robeco 0.40
gb00b03mlx29 Royal Dutch Shell 0.60
3 gb00b03mlx29 Royal Dutch Shell 0.15
lu0197800237 AAB Eastern Europe Equity Fund 0.60
nl0000289965 Postbank BioTech Fonds 0.25
4 NaN NaN 1.00
share
household_id asset_id
1 nl0000301109 1.00
2 nl0000289783 0.40
gb00b03mlx29 0.60
3 gb00b03mlx29 0.15
lu0197800237 0.60
nl0000289965 0.25
quotechar, doublequote, and escapechar can now be specified when using DataFrame.to_csv
(GH5414, GH4528)
Partially sort by only the specified levels of a MultiIndex with the sort_remaining boolean kwarg.
(GH3984)
Added to_julian_date to Timestamp and DatetimeIndex. The Julian Date is used primarily in
astronomy and represents the number of days from noon, January 1, 4713 BC. Because nanoseconds are used
to define the time in pandas the actual range of dates that you can use is 1678 AD to 2262 AD. (GH4041)
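A minimal sketch of to_julian_date; the timestamp is illustrative:
import pandas as pd

ts = pd.Timestamp('2014-07-01 12:00')
jd = ts.to_julian_date()  # fractional days since noon on January 1, 4713 BC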
DataFrame.to_stata will now check data for compatibility with Stata data types and will upcast when
needed. When it is not possible to losslessly upcast, a warning is issued (GH6327)
DataFrame.to_stata and StataWriter will accept keyword arguments time_stamp and data_label
which allow the time stamp and dataset label to be set when creating a file. (GH6545)
pandas.io.gbq now handles reading unicode strings properly. (GH5940)
Holidays Calendars are now available and can be used with the CustomBusinessDay offset (GH6719)
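A minimal sketch combining a holiday calendar with CustomBusinessDay; USFederalHolidayCalendar is one of the calendars shipped in pandas.tseries.holiday:
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

# business days that additionally skip US federal holidays
bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())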
Float64Index is now backed by a float64 dtype ndarray instead of an object dtype array (GH6471).
Implemented Panel.pct_change (GH6904)
Added how option to rolling-moment functions to dictate how to handle resampling; rolling_max() de-
faults to max, rolling_min() defaults to min, and all others default to mean (GH6297)
CustomBusinessMonthBegin and CustomBusinessMonthEnd are now available (GH6866)
Series.quantile() and DataFrame.quantile() now accept an array of quantiles.
describe() now accepts an array of percentiles to include in the summary statistics (GH4196)
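A minimal sketch of requesting specific percentiles; the data are illustrative:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': np.arange(100.0)})
df.describe(percentiles=[0.05, 0.25, 0.75, 0.95])  # extra rows in the summary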
pivot_table can now accept Grouper by index and columns keywords (GH6913)
In [72]: import datetime
In [73]: df = DataFrame({
....: 'Branch' : 'A A A A A B'.split(),
....: 'Buyer': 'Carl Mark Carl Carl Joe Joe'.split(),
....: 'Quantity': [1, 3, 5, 1, 8, 1],
....: 'Date' : [datetime.datetime(2013,11,1,13,0), datetime.datetime(2013,9,
1,13,5),
....:
In [74]: df
Out[74]:
Branch Buyer Date PayDay Quantity
0 A Carl 2013-11-01 13:00:00 2013-10-04 00:00:00 1
1 A Mark 2013-09-01 13:05:00 2013-10-15 13:05:00 3
2 A Carl 2013-10-01 20:00:00 2013-09-05 20:00:00 5
3 A Carl 2013-10-02 10:00:00 2013-11-02 10:00:00 1
4 A Joe 2013-11-01 20:00:00 2013-10-07 20:00:00 8
5 B Joe 2013-10-02 10:00:00 2013-09-05 10:00:00 1
In [78]: ps
Out[78]:
2013-01-01 09:00 0.015696
2013-01-01 10:00 -2.242685
In [79]: ps['2013-01-02']
read_excel can now read milliseconds in Excel dates and times with xlrd >= 0.9.3. (GH5945)
pd.stats.moments.rolling_var now uses Welford's method for increased numerical stability
(GH6817)
pd.expanding_apply and pd.rolling_apply now take args and kwargs that are passed on to the func (GH6289)
DataFrame.rank() now has a percentage rank option (GH5971)
Series.rank() now has a percentage rank option (GH5971)
Series.rank() and DataFrame.rank() now accept method='dense' for ranks without gaps
(GH6514)
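A minimal sketch of the new rank options; the series is illustrative:
import pandas as pd

s = pd.Series([1, 2, 2, 3])
s.rank(method='dense')  # 1.0, 2.0, 2.0, 3.0 -- no gaps after ties
s.rank(pct=True)        # ranks rescaled into (0, 1]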
Support passing encoding with xlwt (GH3710)
Refactor Block classes removing Block.items attributes to avoid duplication in item handling (GH6745,
GH6988).
Testing statements updated to use specialized asserts (GH6175)
1.16.12 Performance
1.16.13 Experimental
Bug in DataFrame.replace() where regex metacharacters were being treated as regexes even when
regex=False (GH6777).
Bug in timedelta ops on 32-bit platforms (GH6808)
Bug in setting a tz-aware index directly via .index (GH6785)
Bug in expressions.py where numexpr would try to evaluate arithmetic ops (GH6762).
Bug in Makefile where it didn't remove Cython generated C files with make clean (GH6768)
Bug with numpy < 1.7.2 when reading long strings from HDFStore (GH6166)
Bug in DataFrame._reduce where non bool-like (0/1) integers were being converted into bools (GH6806)
Regression from 0.13 with fillna and a Series on datetime-like (GH6344)
Bug in adding np.timedelta64 to DatetimeIndex with timezone outputs incorrect results (GH6818)
Bug in DataFrame.replace() where changing a dtype through replacement would only replace the first
occurrence of a value (GH6689)
Better error message when passing a frequency of MS in Period construction (GH5332)
Bug in Series.__unicode__ when max_rows=None and the Series has more than 1000 rows. (GH6863)
Bug in groupby.get_group where a datelike wasn't always accepted (GH5267)
Bug in GroupBy.get_group created by TimeGrouper raises AttributeError (GH6914)
Bug in DatetimeIndex.tz_localize and DatetimeIndex.tz_convert converting NaT incor-
rectly (GH5546)
Bug in arithmetic operations affecting NaT (GH6873)
Bug in Series.str.extract where the resulting Series from a single group match wasn't renamed to
the group name
Bug in DataFrame.to_csv where setting index=False ignored the header kwarg (GH6186)
Bug in DataFrame.plot and Series.plot, where the legend behaved inconsistently when plotting to the
same axes repeatedly (GH6678)
Internal tests for patching __finalize__ / bug in merge not finalizing (GH6923, GH6927)
accept TextFileReader in concat, which was affecting a common user idiom (GH6583)
Bug in C parser with leading whitespace (GH3374)
Bug in C parser with delim_whitespace=True and \r-delimited lines
Bug in python parser with explicit multi-index in row following column header (GH6893)
Bug in Series.rank and DataFrame.rank that caused small floats (<1e-13) to all receive the same rank
(GH6886)
Bug in DataFrame.apply with functions that used *args or **kwargs and returned an empty result
(GH6952)
Bug in sum/mean on 32-bit platforms on overflows (GH6915)
Moved Panel.shift to NDFrame.slice_shift and fixed to respect multiple dtypes. (GH6959)
Bug where enabling subplots=True in DataFrame.plot with only a single column raised a TypeError, and
Series.plot raised an AttributeError (GH6951)
Bug in DataFrame.plot draws unnecessary axes when enabling subplots and kind=scatter
(GH6951)
Bug in query/eval where global constants were not looked up correctly (GH7178)
Bug in recognizing out-of-bounds positional list indexers with iloc and a multi-axis tuple indexer (GH7189)
Bug in setitem with a single value, multi-index and integer indices (GH7190, GH7218)
Bug in expressions evaluation with reversed ops, showing in series-dataframe ops (GH7198, GH7192)
Bug in multi-axis indexing with > 2 ndim and a multi-index (GH7199)
Fix a bug where invalid eval/query operations would blow the stack (GH5198)
This is a minor release from 0.13.0 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
Added infer_datetime_format keyword to read_csv/to_datetime to allow speedups for homo-
geneously formatted datetimes.
Will intelligently limit display precision for datetime/timedelta formats.
Enhanced Panel apply() method.
Suggested tutorials in new Tutorials section.
Our pandas ecosystem is growing. We now feature related projects in a new Pandas Ecosystem section.
Much work has been taking place on improving the docs, and a new Contributing section has been added.
Even though it may only be of interest to devs, we <3 our new CI status page: ScatterCI.
Warning: 0.13.1 fixes a bug that was caused by a combination of having numpy < 1.8, and doing chained
assignment on a string-like array. Please review the docs, chained indexing can have unexpected results and should
generally be avoided.
This would previously segfault:
In [1]: df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar'])))
In [2]: df['A'].iloc[0] = np.nan
In [3]: df
Out[3]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
The recommended way to do this type of assignment is:
In [4]: df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar'])))
In [5]: df.loc[0,'A'] = np.nan
In [6]: df
Out[6]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
In [11]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A float64
B float64
C datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes
In [13]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A 7 non-null float64
B 10 non-null float64
C 7 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes
Add show_dimensions display option for the new DataFrame repr to control whether the dimensions print.
In [16]: df
Out[16]:
0 1
0 1 2
1 3 4
In [18]: df
Out[18]:
0 1
0 1 2
1 3 4
[2 rows x 2 columns]
The ArrayFormatter for datetime and timedelta64 now intelligently limits precision based on the
values in the array (GH3401)
Previously output might look like:
age today diff
0 2001-01-01 00:00:00 2013-04-19 00:00:00 4491 days, 00:00:00
1 2004-06-01 00:00:00 2013-04-19 00:00:00 3244 days, 00:00:00
In [22]: df
Out[22]:
age today diff
0 2001-01-01 2013-04-19 4491 days
1 2004-06-01 2013-04-19 3244 days
[2 rows x 3 columns]
Add -NaN and -nan to the default set of NA values (GH5952). See NA Values.
Added Series.str.get_dummies vectorized string method (GH6021), to extract dummy/indicator vari-
ables for separated string columns:
In [23]: s = Series(['a', 'a|b', np.nan, 'a|c'])
In [24]: s.str.get_dummies(sep='|')
Out[24]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
[4 rows x 3 columns]
Added the NDFrame.equals() method to compare whether two NDFrames are equal: have equal axes, dtypes, and
values. Added the array_equivalent function to compare whether two ndarrays are equal. NaNs in identical
locations are treated as equal. (GH5283) See also the docs for a motivating example.
In [25]: df = DataFrame({'col':['foo', 0, np.nan]})
In [26]: df2 = DataFrame({'col':[np.nan, 0, 'foo']}, index=[2,1,0])
In [27]: df.equals(df2)
Out[27]: False
In [28]: df.equals(df2.sort_index())
Out[28]: True
DataFrame.apply will use the reduce argument to determine whether a Series or a DataFrame
should be returned when the DataFrame is empty (GH6007).
Previously, calling DataFrame.apply on an empty DataFrame would return either a DataFrame if there
were no columns, or the function being applied would be called with an empty Series to guess whether a
Series or DataFrame should be returned:
In [32]: def applied_func(col):
....: print("Apply function being called with: ", col)
....: return col.sum()
....:
In [33]: empty = DataFrame(columns=['a', 'b'])
In [34]: empty.apply(applied_func)
Apply function being called with: Series([], Length: 0, dtype: float64)
Out[34]:
a NaN
b NaN
Length: 2, dtype: float64
Now, when apply is called on an empty DataFrame: if the reduce argument is True a Series will be
returned, if it is False a DataFrame will be returned, and if it is None (the default) the function being
applied will be called with an empty series to try and guess the return type.
In [35]: empty.apply(applied_func, reduce=True)
Out[35]:
a NaN
b NaN
Length: 2, dtype: float64
In [36]: empty.apply(applied_func, reduce=False)
Out[36]:
Empty DataFrame
Columns: [a, b]
Index: []
[0 rows x 2 columns]
There are no announced changes in 0.13 or prior that are taking effect as of 0.13.1
1.17.4 Deprecations
1.17.5 Enhancements
date_format and datetime_format keywords can now be specified when writing to excel files
(GH4133)
MultiIndex.from_product convenience function for creating a MultiIndex from the cartesian product of
a set of iterables (GH6055):
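A minimal sketch of MultiIndex.from_product; the iterables are illustrative:
import pandas as pd

mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2]],
                                names=['letter', 'number'])
# yields the cartesian product: ('a', 1), ('a', 2), ('b', 1), ('b', 2)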
In [42]: panel
Out[42]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [43]: panel['ItemA']
Out[43]:
A B C D
2000-01-03 0.694103 1.893534 -1.735349 -0.850346
2000-01-04 0.678630 0.639633 1.210384 1.176812
2000-01-05 0.239556 -0.962029 0.797435 -0.524336
2000-01-06 0.151227 -2.085266 -0.379811 0.700908
2000-01-07 0.816127 1.930247 0.702562 0.984188
[5 rows x 4 columns]
This is equivalent to
In [46]: panel.sum('major_axis')
Out[46]:
ItemA ItemB ItemC
A 2.579643 3.062757 0.379252
B 1.416120 -1.960855 0.923558
C 0.595222 -1.079772 -3.118269
D 1.487226 -0.734611 -1.979310
[4 rows x 3 columns]
A transformation operation that returns a Panel, but is computing the z-score across the major_axis
In [48]: result
Out[48]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [49]: result['ItemA']
Out[49]:
A B C D
2000-01-03 0.595800 0.907552 -1.556260 -1.244875
2000-01-04 0.544058 0.200868 0.915883 0.953747
2000-01-05 -0.924165 -0.701810 0.569325 -0.891290
2000-01-06 -1.219530 -1.334852 -0.418654 0.437589
2000-01-07 1.003837 0.928242 0.489705 0.744830
[5 rows x 4 columns]
In [52]: result
Out[52]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [53]: result.loc[:,:,'ItemA']
Out[53]:
A B C D
2000-01-03 0.331409 1.071034 -0.914540 -0.510587
2000-01-04 -0.741017 -0.118794 0.383277 0.537212
2000-01-05 0.065042 -0.767353 0.655436 0.069467
2000-01-06 0.027932 -0.569477 0.908202 0.610585
2000-01-07 1.116434 1.133591 0.871287 1.004064
[5 rows x 4 columns]
In [55]: result
Out[55]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [56]: result.loc[:,:,'ItemA']
Out[56]:
A B C D
2000-01-03 0.331409 1.071034 -0.914540 -0.510587
2000-01-04 -0.741017 -0.118794 0.383277 0.537212
2000-01-05 0.065042 -0.767353 0.655436 0.069467
2000-01-06 0.027932 -0.569477 0.908202 0.610585
2000-01-07 1.116434 1.133591 0.871287 1.004064
[5 rows x 4 columns]
1.17.6 Performance
1.17.7 Experimental
See V0.13.1 Bug Fixes for an extensive list of bugs that have been fixed in 0.13.1.
See the full release notes or issue tracker on GitHub for a complete list of all API changes, Enhancements and Bug
Fixes.
This is a major release from 0.12.0 and includes a number of API changes, several new features and enhancements
along with a large number of bug fixes.
Highlights include:
support for a new index type Float64Index, and other Indexing enhancements
HDFStore has a new string based syntax for query specification
support for new methods of interpolation
updated timedelta operations
a new string manipulation method extract
Nanosecond support for Offsets
isin for DataFrames
Several experimental features are added, including:
new eval/query methods for expression evaluation
support for msgpack serialization
an i/o interface to Googles BigQuery
There are several new or updated docs sections including:
Comparison with SQL, which should be useful for those familiar with SQL but still learning pandas.
Comparison with R, idiom translations from R to pandas.
Enhancing Performance, ways to enhance pandas performance with eval/query.
Warning: In 0.13.0 Series has internally been refactored to no longer sub-class ndarray but instead subclass
NDFrame, similar to the rest of the pandas containers. This should be a transparent change with only very limited
API implications. See Internal Refactoring
read_excel now supports an integer in its sheetname argument giving the index of the sheet to read in
(GH4301).
Text parser now treats anything that reads like inf (inf, Inf, -Inf, iNf, etc.) as infinity. (GH4220,
GH4219), affecting read_table, read_csv, etc.
pandas now is Python 2/3 compatible without the need for 2to3 thanks to @jtratner. As a result, pandas now
uses iterators more extensively. This also led to the introduction of substantive parts of Benjamin Peterson's
six library into compat. (GH4384, GH4375, GH4372)
pandas.util.compat and pandas.util.py3compat have been merged into pandas.compat.
pandas.compat now includes many functions allowing 2/3 compatibility. It contains both list and itera-
tor versions of range, filter, map and zip, plus other necessary elements for Python 3 compatibility. lmap,
lzip, lrange and lfilter all produce lists instead of iterators, for compatibility with numpy, subscripting
and pandas constructors.(GH4384, GH4375, GH4372)
Series.get with negative indexers now returns the same as [] (GH4390)
Changes to how Index and MultiIndex handle metadata (levels, labels, and names) (GH4039):
All division with NDFrame objects is now true division, regardless of the future import. This means that operat-
ing on pandas objects will by default use floating point division, and return a floating point dtype. You can use
// and floordiv to do integer division.
Integer division
True Division
Using these in a boolean context will now raise a ValueError:
if df:
....
df1 and df2
s1 and s2
Added the .bool() method to NDFrame objects to facilitate evaluation of single-element boolean Series:
In [1]: Series([True]).bool()
Out[1]: True
In [2]: Series([False]).bool()
Out[2]: False
In [3]: DataFrame([[True]]).bool()
Out[3]: True
In [4]: DataFrame([[False]]).bool()
Out[4]: False
All non-Index NDFrames (Series, DataFrame, Panel, Panel4D, SparsePanel, etc.), now support the
entire set of arithmetic operators and arithmetic flex methods (add, sub, mul, etc.). SparsePanel does not
support pow or mod with non-scalars. (GH3765)
Series and DataFrame now have a mode() method to calculate the statistical mode(s) by axis/Series.
(GH5367)
Chained assignment will now by default warn if the user is assigning to a copy. This can be changed with the
option mode.chained_assignment, allowed options are raise/warn/None. See the docs.
In [6]: pd.set_option('chained_assignment','warn')
In [8]: dfc.loc[0,'A'] = 11
In [9]: dfc
Out[9]:
A B
0 11 1
1 bbb 2
2 ccc 3
[3 rows x 2 columns]
These were announced changes in 0.12 or prior that are taking effect as of 0.13.0
Remove deprecated Factor (GH3650)
Remove deprecated set_printoptions/reset_printoptions (GH3046)
Remove deprecated _verbose_info (GH3215)
Remove deprecated read_clipboard/to_clipboard/ExcelFile/ExcelWriter from
pandas.io.parsers (GH3717). These are available as functions in the main pandas namespace
(e.g. pd.read_clipboard)
default for tupleize_cols is now False for both to_csv and read_csv. Fair warning in 0.12
(GH3604)
default for display.max_seq_len is now 100 rather than None. This activates truncated display (...) of long
sequences in various places. (GH3391)
1.18.3 Deprecations
Deprecated in 0.13.0
deprecated iterkv, which will be removed in a future release (this was an alias of iteritems used to bypass
2to3's changes). (GH4384, GH4375, GH4372)
deprecated the string method match, whose role is now performed more idiomatically by extract. In a
future release, the default behavior of match will change to become analogous to contains, which returns
a boolean indexer. (Their distinction is strictness: match relies on re.match while contains relies on
re.search.) In this release, the deprecated behavior is the default, but the new behavior is available through
the keyword argument as_indexer=True.
Prior to 0.13, it was impossible to use a label indexer (.loc/.ix) to set a value that was not contained in the index
of a particular axis. (GH2578). See the docs
In the Series case this is effectively an appending operation
In [10]: s = Series([1,2,3])
In [11]: s
Out[11]:
0 1
1 2
2 3
Length: 3, dtype: int64
In [12]: s[5] = 5.
In [13]: s
Out[13]:
0 1.0
1 2.0
2 3.0
5 5.0
Length: 4, dtype: float64
In [15]: dfi
Out[15]:
A B
0 0 1
1 2 3
2 4 5
[3 rows x 2 columns]
In [17]: dfi
Out[17]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
[3 rows x 3 columns]
In [19]: dfi
Out[19]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
[4 rows x 3 columns]
A Panel setting operation on an arbitrary axis aligns the input to the Panel
In [20]: p = pd.Panel(np.arange(16).reshape(2,4,2),
....: items=['Item1','Item2'],
....: major_axis=pd.date_range('2001/1/12',periods=4),
....: minor_axis=['A','B'],dtype='float64')
....:
In [21]: p
Out[21]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to B
In [22]: p.loc[:,:,'C'] = Series([30, 32], index=p.items)   # assignment inferred from Out[24] below
In [23]: p
Out[23]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to C
In [24]: p.loc[:,:,'C']
Out[24]:
Item1 Item2
2001-01-12 30.0 32.0
2001-01-13 30.0 32.0
2001-01-14 30.0 32.0
2001-01-15 30.0 32.0
[4 rows x 2 columns]
Added a new index type, Float64Index. This will be automatically created when passing floating values in
index creation. This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing
and slicing work exactly the same. See the docs, (GH263)
Construction is by default for floating type values.
In [25]: index = Index([1.5, 2, 3, 4.5, 5])   # construction inferred from Out[26]
In [26]: index
Out[26]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [27]: s = Series(range(5),index=index)
In [28]: s
Out[28]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
Length: 5, dtype: int64
Scalar selection for [],.ix,.loc will always be label based. An integer will match an equal float index (e.g.
3 is equivalent to 3.0)
In [29]: s[3]
Out[29]: 2
In [30]: s.loc[3]
Out[30]: 2
In [31]: s.iloc[3]
Out[31]: 3
In [32]: s[2:4]
Out[32]:
2.0 1
3.0 2
Length: 2, dtype: int64
In [33]: s.loc[2:4]
Out[33]:
2.0 1
3.0 2
Length: 2, dtype: int64
In [34]: s.iloc[2:4]
Out[34]:
3.0 2
4.5 3
Length: 2, dtype: int64
In [35]: s[2.1:4.6]
Out[35]:
3.0 2
4.5 3
Length: 2, dtype: int64
In [36]: s.loc[2.1:4.6]
Out[36]:
3.0 2
4.5 3
Length: 2, dtype: int64
Indexing on other index types is preserved (including positional fallback for [] and ix), with the exception that
floating-point slicing on non-Float64Index indexes will now raise a TypeError.
In [1]: Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type
(Int64Index)
In [1]: Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type
(Int64Index)
Using a scalar float indexer will be deprecated in a future version, but is allowed for now.
In [3]: Series(range(5))[3.0]
Out[3]: 3
Query Format Changes. A much more string-like query format is now supported. See the docs.
In [38]: dfq = DataFrame(randn(10, 4), columns=list('ABCD'),
   ....:                 index=date_range('20130101', periods=10))   # construction inferred
In [39]: dfq.to_hdf(path,'dfq',format='table',data_columns=True)
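The where clause can now be written as a plain string; a sketch of such a query against the dfq table, per the docs of this era (it selects the last six days of the A and B columns, a 6x2 result):
In [40]: pd.read_hdf(path, 'dfq',
   ....:             where="index>Timestamp('20130104') & columns=['A', 'B']")
[6 rows x 2 columns]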
the format keyword now replaces the table keyword; allowed values are fixed(f) or table(t). The
same defaults as prior to 0.13.0 remain, e.g. put implies fixed format and append implies table format.
This default format can be set as an option by setting io.hdf.default_format.
In [42]: path = 'test.h5'
In [43]: df = pd.DataFrame(np.random.randn(10,2))
In [44]: df.to_hdf(path,'df_table',format='table')
In [45]: df.to_hdf(path,'df_table2',append=True)
In [46]: df.to_hdf(path,'df_fixed')
In [49]: df = DataFrame(randn(10,2))
# two handles opened on the same file (construction inferred; the original lines were elided)
In [50]: store1 = pd.HDFStore(path)
In [51]: store2 = pd.HDFStore(path)
In [52]: store1.append('df',df)
In [53]: store2.append('df2',df)
In [54]: store1
Out[54]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df frame_table (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [55]: store2
Out[55]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df frame_table (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [56]: store1.close()
In [57]: store2
Out[57]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df frame_table (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [58]: store2.close()
In [59]: store2
Out[59]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
File is CLOSED
removed the _quiet attribute, replaced by a DuplicateWarning if retrieving duplicate rows from a table
(GH4367)
removed the warn argument from open. Instead a PossibleDataLossError exception will be raised if
you try to use mode='w' with an OPEN file handle (GH4367)
allow a passed locations array or mask as a where condition (GH4467). See the docs for an example.
added the keyword dropna=True to append to control whether ALL-nan rows are written to the store
(default is True: ALL-nan rows are NOT written), also settable via the option io.hdf.dropna_table
(GH4625)
pass thru store creation arguments; can be used to support in-memory stores
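For example, PyTables file-driver arguments can be passed straight through; a sketch of an in-memory store (the driver keywords are PyTables options, not pandas-specific ones):
In [41]: store = pd.HDFStore('memory.h5', 'w', driver='H5FD_CORE',
   ....:                     driver_core_backing_store=0)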
The HTML and plain text representations of DataFrame now show a truncated view of the table once it exceeds
a certain size, rather than switching to the short info view (GH4886, GH5550). This makes the representation more
consistent as small DataFrames get larger.
To get the info view, call DataFrame.info(). If you prefer the info view as the repr for large DataFrames, you
can set this by running set_option('display.large_repr', 'info').
1.18.8 Enhancements
df.to_clipboard() learned a new excel keyword that lets you paste df data directly into excel (enabled
by default). (GH5070).
read_html now raises a URLError instead of catching and raising a ValueError (GH4303, GH4305)
NaN values are ignored by default in get_dummies,
# unless requested
In [61]: get_dummies([1, 2, np.nan], dummy_na=True)
Out[61]:
   1.0  2.0  NaN
0    1    0    0
1    0    1    0
2    0    0    1
[3 rows x 3 columns]
Using the new top-level to_timedelta, you can convert a scalar or array from the standard timedelta format
(produced by to_csv) into a timedelta type (np.timedelta64 in nanoseconds).
In [63]: to_timedelta('15.5us')
Out[63]: Timedelta('0 days 00:00:00.000015')
In [64]: to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])   # list input; call inferred from the remnant output
Out[64]:
TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], dtype=
'timedelta64[ns]', freq=None)
In [65]: to_timedelta(np.arange(5),unit='s')
Out[65]:
TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'],
dtype='timedelta64[ns]', freq=None)
In [66]: to_timedelta(np.arange(5),unit='d')
Out[66]:
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype=
'timedelta64[ns]', freq=None)
In [68]: td = Series(date_range('20130101',periods=4)) - Series(date_range('20121201',periods=4))
In [71]: td
Out[71]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 NaT
Length: 4, dtype: timedelta64[ns]
# to days
In [72]: td / np.timedelta64(1,'D')
Out[72]:
0 31.000000
1 31.000000
2 31.003507
3 NaN
Length: 4, dtype: float64
In [73]: td.astype('timedelta64[D]')
Out[73]:
0 31.0
1 31.0
2 31.0
3 NaN
Length: 4, dtype: float64
# to seconds
In [74]: td / np.timedelta64(1,'s')
Out[74]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
Length: 4, dtype: float64
In [75]: td.astype('timedelta64[s]')
Out[75]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
Length: 4, dtype: float64
In [76]: td * -1
Out[76]:
0 -31 days +00:00:00
1 -31 days +00:00:00
2 -32 days +23:54:57
3 NaT
Length: 4, dtype: timedelta64[ns]
In [77]: td * Series([1,2,3,4])
Out[77]:
0 31 days 00:00:00
1 62 days 00:00:00
2 93 days 00:15:09
3 NaT
Length: 4, dtype: timedelta64[ns]
In [80]: td.fillna(0)
Out[80]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 0 days 00:00:00
Length: 4, dtype: timedelta64[ns]
In [81]: td.fillna(timedelta(days=1,seconds=5))
Out[81]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 1 days 00:00:05
Length: 4, dtype: timedelta64[ns]
In [82]: td.mean()
Out[82]: Timedelta('31 days 00:01:41')
In [83]: td.quantile(.1)
Out[83]: Timedelta('31 days 00:00:00')
plot(kind='kde') now accepts the optional parameters bw_method and ind, passed to
scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set the bandwidth, and to gkde.evaluate() to specify the
indices at which it is evaluated, respectively. See scipy docs. (GH4298)
DataFrame constructor now accepts a numpy masked record array (GH3478)
The new vectorized string method extract returns regular expression matches more conveniently.
Elements that do not match return NaN. Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [85]: Series(['a1', 'b2', 'c3']).str.extract('([ab])(\d)')   # example inferred per the docs of this era
Out[85]:
     0    1
0    a    1
1    b    2
2  NaN  NaN
[3 rows x 2 columns]
Elements that do not match return a row of NaN. Thus, a Series of messy strings can be converted into a like-
indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples
or re.match objects.
Named groups like
In [86]: Series(['a1', 'b2', 'c3']).str.extract('(?P<letter>[ab])(?P<digit>\d)')
and optional groups can also be used; an element matching only part of the pattern fills the unmatched group with NaN:
In [87]: Series(['a1', 'b2', '3']).str.extract('(?P<letter>[ab])?(?P<digit>\d)')
Out[87]:
  letter digit
0      a     1
1      b     2
2    NaN     3
[3 rows x 2 columns]
Period conversions in the range of seconds and below were reworked and extended up to nanoseconds. Periods
in the nanosecond range are now available.
In [88]: date_range('2013-01-01', periods=5, freq='5N')
Out[88]:
DatetimeIndex(['2013-01-01', '2013-01-01', '2013-01-01', '2013-01-01',
'2013-01-01'],
dtype='datetime64[ns]', freq='5N')
In [90]: t = Timestamp('20130101 09:01:02')   # construction inferred from Out[91]
In [91]: t + pd.tseries.offsets.Nano(123)
Out[91]: Timestamp('2013-01-01 09:01:02.000000123')
A new method, isin for DataFrames, which plays nicely with boolean indexing. The argument to isin, what
we're comparing the DataFrame to, can be a DataFrame, Series, dict, or array of values. See the docs for more.
To get the rows where any of the conditions are met:
In [92]: dfi = DataFrame({'A': [1, 2, 3, 4], 'B': ['a', 'b', 'f', 'n']})
In [93]: dfi
Out[93]:
A B
0 1 a
1 2 b
2 3 f
3 4 n
[4 rows x 2 columns]
In [94]: other = DataFrame({'A': [1, 3, 3, 7], 'B': ['e', 'f', 'f', 'e']})
In [95]: mask = dfi.isin(other)   # inferred from Out[96]
In [96]: mask
Out[96]:
A B
0 True False
1 False False
2 True True
3 False False
[4 rows x 2 columns]
In [97]: dfi[mask.any(1)]
Out[97]:
A B
0 1 a
2 3 f
[2 rows x 2 columns]
tz_localize can infer a fall daylight savings transition based on the structure of the unlocalized data
(GH4230), see the docs
DatetimeIndex is now in the API documentation, see the docs
json_normalize() is a new method to allow you to create a flat table from semi-structured JSON data. See
the docs (GH1067)
Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
Python csv parser now supports usecols (GH4335)
Frequencies gained several new offsets:
LastWeekOfMonth (GH4637)
FY5253, and FY5253Quarter (GH4511)
DataFrame has a new interpolate method, similar to Series (GH4434, GH1892)
In [98]: df = DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
   ....:                 'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})   # construction inferred from Out[99]
In [99]: df.interpolate()
Out[99]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
[6 rows x 2 columns]
Additionally, the method argument to interpolate has been expanded to include 'nearest',
'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'krogh',
'piecewise_polynomial', 'pchip', 'polynomial', and 'spline'. The new methods require
scipy. Consult the SciPy reference guide and documentation for more information about when the various
methods are appropriate. See the docs.
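A brief sketch of one of the scipy-backed methods (assumes scipy is installed; the values are illustrative):
ser = Series([0, 2, np.nan, 8])
ser.interpolate(method='polynomial', order=2)
# -> 0.0, 2.0, 4.666667, 8.0: the quadratic through the three known points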
Interpolate now also accepts a limit keyword argument. This works similarly to fillna's limit:
In [100]: ser = Series([1, 3, np.nan, np.nan, np.nan, 11])   # construction inferred from Out[101]
In [101]: ser.interpolate(limit=2)
Out[101]:
0 1.0
1 3.0
2 5.0
3 7.0
4 NaN
5 11.0
Length: 6, dtype: float64
A new convenience function, wide_to_long, reshapes wide-format columns such as A1970/A1980 into a long format, as the example below shows.
In [102]: np.random.seed(123)
In [103]: df = DataFrame({'A1970': {0: 'a', 1: 'b', 2: 'c'},
   .....:                 'A1980': {0: 'd', 1: 'e', 2: 'f'},
   .....:                 'B1970': {0: 2.5, 1: 1.2, 2: .7},
   .....:                 'B1980': {0: 3.2, 1: 1.3, 2: .1},
   .....:                 'X': dict(zip(range(3), np.random.randn(3)))})   # construction inferred
In [104]: df['id'] = df.index
In [105]: df
Out[105]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -1.085631 0
1 b e 1.2 1.3 0.997345 1
2 c f 0.7 0.1 0.282978 2
[3 rows x 6 columns]
In [106]: pd.wide_to_long(df, ['A', 'B'], i='id', j='year')   # call inferred from the output
Out[106]:
X A B
id year
0 1970 -1.085631 a 2.5
1 1970 0.997345 b 1.2
2 1970 0.282978 c 0.7
0 1980 -1.085631 d 3.2
1 1980 0.997345 e 1.3
[6 rows x 3 columns]
to_csv now takes a date_format keyword argument that specifies how output datetime objects should
be formatted. Datetimes encountered in the index, columns, and values will all have this formatting applied.
(GH4313)
DataFrame.plot will scatter plot x versus y by passing kind='scatter' (GH2215)
Added support for Google Analytics v3 API segment IDs that also supports v2 IDs. (GH5271)
1.18.9 Experimental
The new eval() function implements expression evaluation using numexpr behind the scenes. This results
in large speedups for complicated expressions involving large DataFrames/Series. For example,
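an expression of this shape benefits (a sketch, with illustrative sizes; numexpr evaluates it in one pass instead of building intermediate objects):
nrows, ncols = 20000, 100
df1, df2, df3, df4 = [DataFrame(np.random.randn(nrows, ncols)) for _ in range(4)]
pd.eval('df1 + df2 + df3 + df4')   # same result as df1 + df2 + df3 + df4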
query() method has been added that allows you to select elements of a DataFrame using a natural query
syntax nearly identical to Python syntax. For example,
In [113]: n = 20
In [114]: df = DataFrame(np.random.randint(n, size=(n, 3)), columns=['a', 'b', 'c'])   # inferred
In [115]: df.query('a < b < c')
[2 rows x 3 columns]
selects all the rows of df where a < b < c evaluates to True. For more details see the docs.
pd.read_msgpack() and pd.to_msgpack() are now a supported method of serialization of arbitrary
pandas (and Python) objects in a lightweight portable binary format. See the docs
Warning: Since this is an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future
release.
In [116]: df = DataFrame(np.random.rand(5,2),columns=list('AB'))
In [117]: df.to_msgpack('foo.msg')
In [118]: pd.read_msgpack('foo.msg')
Out[118]:
A B
0 0.251082 0.017357
1 0.347915 0.929879
2 0.546233 0.203368
3 0.064942 0.031722
4 0.355309 0.524575
[5 rows x 2 columns]
In [119]: s = Series(np.random.rand(5),index=date_range('20130101',periods=5))
In [120]: pd.to_msgpack('foo.msg', df, s)   # packing multiple objects; call inferred
In [121]: pd.read_msgpack('foo.msg')
Out[121]:
[ A B
0 0.251082 0.017357
1 0.347915 0.929879
2 0.546233 0.203368
3 0.064942 0.031722
4 0.355309 0.524575
pandas.io.gbq provides a simple way to extract from, and load data into, Google's BigQuery Data Sets by
way of pandas DataFrames. BigQuery is a high performance SQL-like database service, useful for performing
ad-hoc queries against extremely large datasets. See the docs
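A sketch of the intended usage (the query text and dataset name are illustrative assumptions; df3 below is a monthly temperature summary derived from such a result):
from pandas.io import gbq
query = "SELECT station_number, month, mean_temp FROM [publicdata:samples.gsod]"
df = gbq.read_gbq(query)
# df3 is then a min/mean/max pivot of the returned temperatures, by MONTH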
> df3
Min Temp  Mean Temp   Max Temp
MONTH
1 -53.336667 39.827892 89.770968
2 -49.837500 43.685219 93.437932
3 -77.926087 48.708355 96.099998
4 -82.892858 55.070087 97.317240
5 -92.378261 61.428117 102.042856
6 -77.703334 65.858888 102.900000
7 -87.821428 68.169663 106.510714
8 -89.431999 68.614215 105.500000
9 -86.611112 63.436935 107.142856
10 -78.209677 56.880838 92.103333
Warning: To use this module, you will need a BigQuery account. See <https://cloud.google.com/products/big-query> for details.
As of 10/10/13, there is a bug in Google's API preventing result sets from being larger than 100,000 rows.
A patch is scheduled for the week of 10/14/13.
In 0.13.0 there is a major refactor primarily to subclass Series from NDFrame, which is the base class currently
for DataFrame and Panel, to unify methods and behaviors. Series formerly subclassed directly from ndarray.
(GH4080, GH3862, GH816)
Numpy Usage
In [123]: s = Series([1, 2, 3, 4])   # construction inferred from the outputs
In [124]: np.ones_like(s)
Out[124]: array([1, 1, 1, 1])
In [125]: np.diff(s)
Out[125]: array([1, 1, 1])
In [126]: np.where(s>1,s,np.nan)
Out[126]: array([ nan,  2.,  3.,  4.])
Pandonic Usage
In [127]: Series(1,index=s.index)
Out[127]:
0 1
1 1
2 1
3 1
Length: 4, dtype: int64
In [128]: s.diff()
Out[128]:
0 NaN
1 1.0
2 1.0
3 1.0
Length: 4, dtype: float64
In [129]: s.where(s>1)
Out[129]:
0 NaN
1 2.0
2 3.0
3 4.0
Length: 4, dtype: float64
Passing a Series directly to a cython function expecting an ndarray type will no longer work directly; you
must pass Series.values. See Enhancing Performance
Series(0.5) would previously return the scalar 0.5, instead this will return a 1-element Series
This change breaks rpy2<=2.3.8. An issue has been opened against rpy2 and a workaround is detailed in
GH5698. Thanks @JanSchulz.
Pickle compatibility is preserved for pickles created prior to 0.13. These must be unpickled with pd.read_pickle; see Pickling.
Refactor of series.py/frame.py/panel.py to move common code to generic.py
added _setup_axes to create generic NDFrame structures
moved methods
* from_axes, _wrap_array, axes, ix, loc, iloc, shape, empty, swapaxes, transpose, pop
* __iter__,keys,__contains__,__len__,__neg__,__invert__
* convert_objects,as_blocks,as_matrix,values
* __getstate__,__setstate__ (compat remains in frame/panel)
* __getattr__,__setattr__
* _indexed_same,reindex_like,align,where,mask
* fillna,replace (Series replace is now consistent with DataFrame)
* filter (also added axis argument to selectively filter on a different axis)
* reindex,reindex_axis,take
* truncate (moved to become part of NDFrame)
These are API changes which make Panel more consistent with DataFrame
swapaxes on a Panel with the same axes specified now return a copy
support attribute access for setting
filter supports the same API as the original DataFrame filter
Reindex called with no arguments will now return a copy of the input object
TimeSeries is now an alias for Series. The property is_time_series can be used to distinguish (if
desired)
Refactor of Sparse objects to use BlockManager
Created a new block type in internals, SparseBlock, which can hold multi-dtypes and is non-consolidatable.
SparseSeries and SparseDataFrame now inherit more methods from their hierarchy
(Series/DataFrame), and no longer inherit from SparseArray (which instead is the object of
the SparseBlock)
Sparse suite now supports integration with non-sparse data. Non-float sparse data is supportable (partially
implemented)
Operations on sparse structures within DataFrames should preserve sparseness, merging type operations
will convert to dense (and back to sparse), so might be somewhat inefficient
enable setitem on SparseSeries for boolean/integer/slices
SparsePanel's implementation is unchanged (e.g. not using BlockManager, needs work)
added ftypes method to Series/DataFrame, similar to dtypes, but indicates if the underlying is sparse/dense
(as well as the dtype)
All NDFrame objects can now use __finalize__() to specify various values to propagate to new objects
from an existing one (e.g. name in Series will follow more automatically now)
Internal type checking is now done via a suite of generated classes, allowing isinstance(value, klass)
without having to directly import the klass, courtesy of @jtratner
Bug in Series update where the parent frame is not updating its cache based on changes (GH4080) or types
(GH3217), fillna (GH3386)
Indexing with dtype conversions fixed (GH4463, GH4204)
Refactor Series.reindex to core/generic.py (GH4604, GH4618), allow method= in reindexing on a Series to work
Series.copy no longer accepts the order parameter and is now consistent with NDFrame copy
Refactor rename methods to core/generic.py; fixes Series.rename for (GH4605), and adds rename with
the same signature for Panel
Refactor clip methods to core/generic.py (GH4798)
Refactor of _get_numeric_data/_get_bool_data to core/generic.py, allowing Series/Panel functionality
Series (for index) / Panel (for items) now allow attribute access to its elements (GH1903)
In [130]: s = Series([1,2,3],index=list('abc'))
In [131]: s.b
Out[131]: 2
In [132]: s.a = 5
In [133]: s
Out[133]:
a 5
b 2
c 3
Length: 3, dtype: int64
See V0.13.0 Bug Fixes for an extensive list of bugs that have been fixed in 0.13.0.
See the full release notes or issue tracker on GitHub for a complete list of all API changes, Enhancements and Bug
Fixes.
This is a major release from 0.11.0 and includes several new features and enhancements along with a large number of
bug fixes.
Highlights include a consistent I/O API naming scheme, routines to read html, write multi-indexes to csv files, read
& write STATA data files, read & write JSON format files, Python 3 support for HDFStore, filtering of groupby
expressions via filter, and a revamped replace routine that accepts regular expressions.
The I/O API is now much more consistent with a set of top level reader functions accessed like pd.
read_csv() that generally return a pandas object.
read_csv
read_excel
read_hdf
read_sql
read_json
read_html
read_stata
read_clipboard
The corresponding writer functions are object methods that are accessed like df.to_csv()
to_csv
to_excel
to_hdf
to_sql
to_json
to_html
to_stata
to_clipboard
Fix modulo and integer division on Series/DataFrames to act similarly to float dtypes and return np.nan
or np.inf as appropriate (GH3590). This corrects a numpy bug that treats integer and float dtypes
differently.
In [1]: p = DataFrame({ 'first' : [4,5,8], 'second' : [0,0,3] })
In [2]: p % 0
Out[2]:
first second
0 NaN NaN
1 NaN NaN
2 NaN NaN
[3 rows x 2 columns]
In [3]: p % p
Out[3]:
first second
0 0.0 NaN
1 0.0 NaN
2 0.0 0.0
[3 rows x 2 columns]
In [4]: p / p
Out[4]:
first second
0 1.0 NaN
1 1.0 NaN
2 1.0 1.0
[3 rows x 2 columns]
In [5]: p / 0
Out[5]:
first second
0 inf NaN
1 inf NaN
2 inf inf
[3 rows x 2 columns]
Add squeeze keyword to groupby to allow reduction from DataFrame -> Series if groups are unique. This
is a regression from 0.10.1; we are reverting to the prior behavior. This means groupby will return the
same shaped objects whether the groups are unique or not. Reverted this issue (GH2893) with (GH3596).
val2 0 1 2 3
val1
1 0.5 -0.5 7.5 -7.5
[1 rows x 4 columns]
Raise on iloc when boolean indexing with a label-based indexer mask: e.g. a boolean Series, even with integer
labels, will raise. Since iloc is purely position based, the labels on the Series are not alignable (GH3631).
This case is rarely used, and there are plenty of alternatives. This preserves the iloc API as purely position
based.
In [12]: mask
Out[12]:
A True
B False
C True
D False
E True
Name: a, Length: 5, dtype: bool
a
A 0
C 2
E 4
[3 rows x 1 columns]
a
A 0
C 2
E 4
[3 rows x 1 columns]
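The two identical frames above are the results of the valid label-based and positional alternatives; a sketch reconstructing them (df is assumed to be a one-column frame labeled A through E):
In [10]: df = DataFrame(range(5), index=list('ABCDE'), columns=['a'])
In [11]: mask = df.a % 2 == 0   # the boolean Series shown above
# the valid alternatives:
In [13]: df.loc[mask]           # .loc aligns on the mask's labels
In [14]: df.iloc[mask.values]   # .iloc accepts the raw boolean ndarray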
added the top-level function read_excel to replace the older ExcelFile/parse idiom:
import pandas as pd
pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
DataFrame.to_html and DataFrame.to_latex now accept a path for their first argument (GH3702)
Do not allow astypes on datetime64[ns] except to object, and timedelta64[ns] to object/int
(GH3425)
The behavior of datetime64 dtypes has changed with respect to certain so-called reduction operations
(GH3726). The following operations now raise a TypeError when performed on a Series and return an
empty Series when performed on a DataFrame, similar to performing these operations on, for example, a
DataFrame of slice objects:
sum, prod, mean, std, var, skew, kurt, corr, and cov
read_html now defaults to None when reading, and falls back on bs4 + html5lib when lxml fails to
parse. A list of parsers to try until success is also valid.
The internal pandas class hierarchy has changed (slightly). The previous PandasObject now is called
PandasContainer and a new PandasObject has become the baseclass for PandasContainer as well
as Index, Categorical, GroupBy, SparseList, and SparseArray (+ their base classes). Currently,
PandasObject provides string methods (from StringMixin). (GH4090, GH4092)
New StringMixin that, given a __unicode__ method, gets python 2 and python 3 compatible string
methods (__str__, __bytes__, and __repr__). Plus string safety throughout. Now employed in many
places throughout the pandas library. (GH4090, GH4092)
pd.read_html() can now parse HTML strings, files or urls and return DataFrames, courtesy of @cpcloud.
(GH3477, GH3605, GH3606, GH3616). It works with a single parser backend: BeautifulSoup4 + html5lib See
the docs
You can use pd.read_html() to read the output from DataFrame.to_html() like so
In [16]: print(df)
a b
0 0 a
1 1 b
2 2 c
[3 rows x 2 columns]
In [17]: alist = pd.read_html(df.to_html(), index_col=0)   # round-trip call inferred
In [18]: print(df == alist[0])
      a     b
0  True  True
1  True  True
2  True  True
[3 rows x 2 columns]
Note that alist here is a Python list so pd.read_html() and DataFrame.to_html() are not inverses.
pd.read_html() no longer performs hard conversion of date strings (GH3656).
Warning: You may have to install an older version of BeautifulSoup4; see the installation docs
Added module for reading and writing Stata files: pandas.io.stata (GH1512), accessible via the
read_stata top-level function for reading, and the to_stata DataFrame method for writing. See the docs
Added module for reading and writing json format files: pandas.io.json, accessible via the read_json top-level
function for reading, and the to_json DataFrame method for writing. See the docs. Various issues (GH1226,
GH3804, GH3876, GH3867, GH1305)
MultiIndex column support for reading and writing csv format files
The header option in read_csv now accepts a list of the rows from which to read the index.
The option tupleize_cols can now be specified in both to_csv and read_csv, to provide
compatibility for the pre-0.12 behavior of writing and reading MultiIndex columns via a list of tuples. The
default in 0.12 is to write lists of tuples and not interpret lists of tuples as a MultiIndex column.
Note: The default behavior in 0.12 remains unchanged from prior versions, but starting with 0.13, the
default to write and read MultiIndex columns will be in the new format. (GH3571, GH1651, GH3141)
If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(...,
index=False)), then any names on the columns index will be lost.
In [20]: from pandas.util.testing import makeCustomDataframe as mkdf
In [21]: df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
In [22]: df.to_csv('mi.csv',tupleize_cols=False)
In [23]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [24]: pd.read_csv('mi.csv', header=[0,1,2,3], index_col=[0,1], tupleize_cols=False)
Out[24]:
[5 rows x 3 columns]
In [26]: DataFrame(randn(10,2)).to_hdf(path,'df',table=True)
read_csv will now throw a more informative error message when a file contains no columns, e.g., all newline
characters
DataFrame.replace() now allows regular expressions on contained Series with object dtype. See the
examples section in the regular docs Replacing via String Expression
For example you can do
In [25]: df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})   # construction inferred from the output
In [26]: df.replace(regex=r'\s*\.\s*', value=np.nan)
Out[26]:
     a  b
0    a  1
1    b  2
2  NaN  3
3  NaN  4
[4 rows x 2 columns]
to replace all occurrences of the string '.' with zero or more instances of surrounding whitespace with NaN.
Regular string replacement still works as expected. For example, you can do
In [27]: df.replace('.', np.nan)
Out[27]:
a b
0 a 1
1 b 2
2 NaN 3
3 NaN 4
[4 rows x 2 columns]
In [29]: pd.get_option('b.c')
Out[29]: 3
In [30]: pd.set_option('b.c', 4)   # call inferred from the changed value below
In [31]: pd.get_option('a.b')
Out[31]: 1
In [32]: pd.get_option('b.c')
Out[32]: 4
The filter method for group objects returns a subset of the original object. Suppose we want to take only
elements that belong to groups with a group sum greater than 2.
In [33]: sf = Series([1, 1, 2, 3, 3, 3])
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple members.
[4 rows x 2 columns]
Alternatively, instead of dropping the offending groups, we can return a like-indexed objects where the groups
that do not pass the filter are filled with NaNs.
[8 rows x 2 columns]
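A sketch reconstructing the three elided calls (sf as defined above; dff is an assumed two-column frame whose B groups have sizes 2, 4 and 2):
In [34]: sf.groupby(sf).filter(lambda x: x.sum() > 2)   # keeps only the three 3s
In [35]: dff = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})
In [36]: dff.groupby('B').filter(lambda x: len(x) > 2)                # the 4-row 'b' group
In [37]: dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)  # all 8 rows, non-'b' groups as NaN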
Series and DataFrame hist methods now take a figsize argument (GH3834)
DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877)
Timestamp.min and Timestamp.max now represent valid Timestamp instances instead of the default
datetime.min and datetime.max (respectively), thanks @SleepingPills
read_html now raises when no tables are found and BeautifulSoup==4.2.0 is detected (GH4214)
Added experimental CustomBusinessDay class to support DateOffsets with custom holiday calendars
and custom weekmasks. (GH2301)
Note: This uses the numpy.busdaycalendar API introduced in Numpy 1.7 and therefore requires Numpy
1.7.0 or newer.
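A sketch of the kind of example that produces the dates below (the Egyptian weekend/holiday values are assumptions in the spirit of the docs of this era):
from datetime import datetime
from pandas.tseries.offsets import CustomBusinessDay
weekmask_egypt = 'Sun Mon Tue Wed Thu'   # Friday/Saturday weekend
holidays = ['2013-05-01']                # May 1st holiday
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
dts = date_range(datetime(2013, 4, 30), periods=5, freq=bday_egypt)
Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split()))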
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, Length: 5, dtype: object
Plotting functions now raise a TypeError before trying to plot anything if the associated objects have a
dtype of object (GH1818, GH3572, GH3911, GH3912), but they will try to convert object arrays to numeric
arrays if possible so that you can still plot, for example, an object array with floats. This happens before any
drawing takes place, which eliminates any spurious plots from showing up.
fillna methods now raise a TypeError if the value parameter is a list or tuple.
Series.str now supports iteration (GH3638). You can iterate over the individual elements of each string in
the Series. Each iteration yields a Series with either a single character at each index of the original
Series or NaN. For example,
In [47]: strs = 'go', 'bow', 'joe', 'slow'
In [48]: ds = Series(strs)
In [49]: for s in ds.str:   # loop inferred; s is left holding the last yielded Series
   ....:     pass
In [50]: s
Out[50]:
0 NaN
1 NaN
2 NaN
3 w
Length: 4, dtype: object
The last element yielded by the iterator will be a Series containing the last element of the longest string in
the Series with all other elements being NaN. Here since 'slow' is the longest string and there are no other
strings with the same length 'w' is the only non-null string in the yielded Series.
HDFStore
will retain index attributes (freq,tz,name) on recreation (GH3499)
will warn with an AttributeConflictWarning if you are attempting to append an index with a
different frequency than the existing one, or attempting to append an index with a different name than the
existing one
support datelike columns with a timezone as data_columns (GH2852)
Non-unique index support clarified (GH3468).
Fixed a bug where assigning a new index to a DataFrame with a duplicate index would fail (GH3468)
Fix construction of a DataFrame with a duplicate index
ref_locs support to allow duplicative indices across dtypes, allows iget support to always find the index
(even across dtypes) (GH2194)
applymap on a DataFrame with a non-unique index now works (removed warning) (GH2786), and fix
(GH3230)
Fix to_csv to handle non-unique columns (GH3495)
Duplicate indexes with getitem will return items in the correct order (GH3455, GH3457) and handle missing elements like unique indices (GH3561)
Duplicate indexes with an empty DataFrame.from_records will return a correct frame (GH3562)
Concat to produce a non-unique columns when duplicates are across dtypes is fixed (GH3602)
Allow insert/delete to non-unique columns (GH3679)
Non-unique indexing with a slice via loc and friends fixed (GH3659)
Extend reindex to correctly deal with non-unique indices (GH3679)
DataFrame.itertuples() now works with frames with duplicate column names (GH3873)
Bug in non-unique indexing via iloc (GH4017); added takeable argument to reindex for location-based taking
Allow non-unique indexing in series via .ix/.loc and __getitem__ (GH4246)
Fixed non-unique indexing memory allocation issue with .ix/.loc (GH4280)
DataFrame.from_records did not accept empty recarrays (GH3682)
read_html now correctly skips tests (GH3741)
Fixed a bug where DataFrame.replace with a compiled regular expression in the to_replace argument
wasn't working (GH3907)
Improved network test decorator to catch IOError (and therefore URLError as well). Added
with_connectivity_check decorator to allow explicitly checking a website as a proxy for seeing if there
is network connectivity. Plus, new optional_args decorator factory for decorators. (GH3910, GH3914)
Fixed testing issue where too many sockets were open, leading to a connection reset issue (GH3982,
GH3985, GH4028, GH4054)
Fixed failing tests in test_yahoo, test_google where symbols were not retrieved but were being accessed
(GH3982, GH3985, GH4028, GH4054)
Series.hist will now take the figure from the current environment if one is not passed
Fixed bug where a 1xN DataFrame would barf on a 1xN mask (GH4071)
Fixed running of tox under python3 where the pickle import was getting rewritten in an incompatible way
(GH4062, GH4063)
Fixed bug where sharex and sharey were not being passed to grouped_hist (GH4089)
Fixed bug in DataFrame.replace where a nested dict wasn't being iterated over when regex=False
(GH4115)
Fixed bug in the parsing of microseconds when using the format argument in to_datetime (GH4152)
Fixed bug in PandasAutoDateLocator where invert_xaxis incorrectly triggered
MilliSecondLocator (GH3990)
Fixed bug in plotting that wasn't raising on invalid colormap for matplotlib 1.1.1 (GH4215)
Fixed the legend displaying in DataFrame.plot(kind='kde') (GH4216)
Fixed bug where Index slices weren't carrying the name attribute (GH4226)
Fixed bug in initializing DatetimeIndex with an array of strings in a certain time zone (GH4229)
Fixed bug where html5lib wasn't being properly skipped (GH4265)
Fixed bug where get_data_famafrench wasn't using the correct file edges (GH4281)
See the full release notes or issue tracker on GitHub for a complete list.
This is a major release from 0.10.1 and includes many new features and enhancements along with a large number of
bug fixes. The methods of Selecting Data have had quite a number of additions, and Dtype support is now full-fledged.
There are also a number of important API changes that long-time pandas users should pay close attention to.
There is a new section in the documentation, 10 Minutes to Pandas, primarily geared to new users.
There is a new section in the documentation, Cookbook, a collection of useful recipes in pandas (and that we want
contributions!).
There are several libraries that are now Recommended Dependencies
Starting in 0.11.0, object selection has had a number of user-requested additions in order to support more explicit
location based indexing. Pandas now supports three types of multi-axis indexing.
.loc is strictly label based, will raise KeyError when the items are not found, allowed inputs are:
A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index. This use is not an integer
position along the index)
A list or array of labels ['a', 'b', 'c']
A slice object with labels 'a':'f', (note that contrary to usual python slices, both the start and the stop
are included!)
A boolean array
See more at Selection by Label
.iloc is strictly integer position based (from 0 to length-1 of the axis), will raise IndexError when the
requested indices are out of bounds. Allowed inputs are:
An integer e.g. 5
A list or array of integers [4, 3, 0]
A slice object with ints 1:7
A boolean array
See more at Selection by Position
.ix supports mixed integer and label based access. It is primarily label based, but will fall back to integer
positional access. .ix is the most general and will support any of the inputs to .loc and .iloc, as well as
support for floating point label schemes. .ix is especially useful when dealing with mixed positional and label
based hierarchical indexes.
As using integer slices with .ix has different behavior depending on whether the slice is interpreted as position
based or label based, it's usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing and Advanced Hierarchical.
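A minimal sketch contrasting the three on a labeled Series (values illustrative):
s = Series(np.random.randn(5), index=list('abcde'))
s.loc['b']     # by label; s.loc[1] would raise KeyError
s.iloc[1]      # by position; s.iloc['b'] would raise
s.ix['b']      # primarily label-based...
s.ix[1]        # ...with positional fallback on a non-integer index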
1.20.3 Dtypes
Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype
keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will NOT be combined. The following example will give you a taste.
In [1]: df1 = DataFrame(randn(8, 1), columns=['A'], dtype='float32')   # construction inferred from the dtypes below
In [2]: df1
Out[2]:
A
0 1.392665
1 -0.123497
2 -0.402761
3 -0.246604
4 -0.288433
5 -0.763434
6 2.069526
7 -1.203569
[8 rows x 1 columns]
In [3]: df1.dtypes
Out[3]:
A float32
Length: 1, dtype: object
In [4]: df2 = DataFrame(dict(A=Series(randn(8), dtype='float16'),
   ...:                      B=Series(randn(8)),
   ...:                      C=Series(np.array(np.random.randn(8), dtype='uint8'))))   # inferred
In [5]: df2
Out[5]:
A B C
0 0.591797 -0.038605 0
1 0.841309 -0.460478 1
2 -0.500977 -0.310458 0
3 -0.816406 0.866493 254
4 -0.207031 0.245972 0
5 -0.664062 0.319442 1
6 0.580566 1.378512 1
7 -0.965820 0.292502 255
[8 rows x 3 columns]
In [6]: df2.dtypes
Out[6]:
A float16
B float64
C uint8
Length: 3, dtype: object
In [7]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2   # operation inferred: df1 + df2, aligned
In [8]: df3
Out[8]:
A B C
0 1.984462 -0.038605 0.0
1 0.717812 -0.460478 1.0
2 -0.903737 -0.310458 0.0
3 -1.063011 0.866493 254.0
4 -0.495465 0.245972 0.0
5 -1.427497 0.319442 1.0
6 2.650092 1.378512 1.0
7 -2.169390 0.292502 255.0
[8 rows x 3 columns]
In [9]: df3.dtypes
Out[9]:
A float32
B float64
C float64
Length: 3, dtype: object
This is lowest-common-denominator upcasting, meaning you get the dtype which can accommodate all of the types
In [10]: df3.values.dtype
Out[10]: dtype('float64')
Conversion
In [11]: df3.astype('float32').dtypes
Out[11]:
A float32
B float32
C float32
Length: 3, dtype: object
Mixed Conversion
In [12]: df3['D'] = '1.'   # string columns; assignments inferred from the result
In [13]: df3['E'] = '1'
In [14]: df3.convert_objects(convert_numeric=True).dtypes
Out[14]:
A float32
B float64
C float64
D float64
E int64
Length: 5, dtype: object
In [15]: df3['D'] = df3['D'].astype('float16')   # forced casts inferred from the dtypes below
In [16]: df3['E'] = df3['E'].astype('int32')
In [17]: df3.dtypes
Out[17]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
In [18]: s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1,
   ....:             Timestamp('20010104'), '20010105'], dtype='O')   # construction inferred
# force a date coercion
In [20]: s.convert_objects(convert_dates='coerce')
Out[20]:
0 2001-01-01
1 NaT
2 NaT
3 NaT
4 2001-01-04
5 2001-01-05
Length: 6, dtype: datetime64[ns]
Platform Gotchas
Starting in 0.11.0, construction of DataFrame/Series will use default dtypes of int64 and float64, regardless
of platform. This is not an apparent change from earlier versions of pandas. If you specify dtypes, they WILL be
respected, however (GH2837)
The following will all result in int64 dtypes
In [21]: DataFrame([1,2],columns=['a']).dtypes
Out[21]:
a int64
Length: 1, dtype: object
In [22]: DataFrame({'a': [1, 2]}).dtypes   # call inferred; the dict form gives the same result
Out[22]:
a int64
Length: 1, dtype: object
Upcasting Gotchas
Performing indexing operations on integer type data can easily upcast the data. The dtype of the input data will be
preserved in cases where nans are not introduced.
In [26]: dfi
Out[26]:
A B C D E
0 1 0 0 1 1
1 0 0 1 1 1
2 0 0 0 1 1
3 -1 0 254 1 1
4 0 0 0 1 1
5 -1 0 1 1 1
6 2 1 1 1 1
7 -2 0 255 1 1
[8 rows x 5 columns]
In [27]: dfi.dtypes
Out[27]:
A int32
B int32
C int32
D int64
E int32
Length: 5, dtype: object
In [28]: casted = dfi[dfi > 0]   # masking introduces NaN and forces upcasting; call inferred
In [29]: casted
Out[29]:
A B C D E
0 1.0 NaN NaN 1 1
1 NaN NaN 1.0 1 1
2 NaN NaN NaN 1 1
3 NaN NaN 254.0 1 1
4 NaN NaN NaN 1 1
5 NaN NaN 1.0 1 1
6 2.0 1.0 1.0 1 1
7 NaN NaN 255.0 1 1
[8 rows x 5 columns]
In [30]: casted.dtypes
Out[30]:
A float64
B float64
C float64
D int64
E int32
Length: 5, dtype: object
In [33]: df4.dtypes
Out[33]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
In [34]: casted = df4[df4 > 0]   # call inferred
In [35]: casted
Out[35]:
A B C D E
0 1.984462 NaN NaN 1.0 1
1 0.717812 NaN 1.0 1.0 1
2 NaN NaN NaN 1.0 1
3 NaN 0.866493 254.0 1.0 1
4 NaN 0.245972 NaN 1.0 1
5 NaN 0.319442 1.0 1.0 1
6 2.650092 1.378512 1.0 1.0 1
7 NaN 0.292502 255.0 1.0 1
[8 rows x 5 columns]
In [36]: casted.dtypes
Out[36]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
Datetime64[ns] columns in a DataFrame (or a Series) allow the use of np.nan to indicate a nan value, in
addition to the traditional NaT, or not-a-time. This allows convenient nan setting in a generic way. Furthermore
datetime64[ns] columns are created by default, when passed datetimelike objects (this change was introduced in
0.10.1) (GH2809, GH2810)
In [37]: df = DataFrame(randn(6,2),date_range('20010102',periods=6),columns=['A','B'])
In [38]: df['timestamp'] = Timestamp('20010103')   # assignment inferred from the timestamp column
In [39]: df
Out[39]:
A B timestamp
2001-01-02 1.023958 0.660103 2001-01-03
2001-01-03 1.236475 -2.170629 2001-01-03
[6 rows x 3 columns]
In [40]: df.get_dtype_counts()   # call inferred
Out[40]:
datetime64[ns] 1
float64 2
Length: 2, dtype: int64
In [41]: df.loc['2001-01-04':'2001-01-05', ['A', 'timestamp']] = np.nan   # np.nan sets NaN/NaT; assignment inferred
In [42]: df
Out[42]:
A B timestamp
2001-01-02 1.023958 0.660103 2001-01-03
2001-01-03 1.236475 -2.170629 2001-01-03
2001-01-04 NaN -1.685677 NaT
2001-01-05 NaN -0.115070 NaT
2001-01-06 -0.632102 -0.585977 2001-01-03
2001-01-07 -1.444787 -0.201135 2001-01-03
[6 rows x 3 columns]
In [44]: s = Series([datetime(2001, 1, 2, 0, 0) for i in range(3)])   # construction inferred
In [45]: s.dtype
Out[45]: dtype('<M8[ns]')
In [46]: s[1] = np.nan   # assignment inferred from the NaT at position 1
In [47]: s
Out[47]:
0 2001-01-02
1 NaT
2 2001-01-02
Length: 3, dtype: datetime64[ns]
In [48]: s.dtype
Out[48]: dtype('<M8[ns]')
In [49]: s = s.astype('O')
In [50]: s
Out[50]:
0 2001-01-02 00:00:00
1 NaT
2 2001-01-02 00:00:00
Length: 3, dtype: object
In [51]: s.dtype
Out[51]: dtype('O')
1.20.8 Enhancements
In [52]: df = DataFrame(dict(A=list(range(5)), B=list(range(5))))   # construction inferred
In [53]: df.to_hdf('store.h5','table',append=True)
In [54]: pd.read_hdf('store.h5', 'table', where=['index>2'])   # read-back inferred; selects the last two rows
[2 rows x 2 columns]
provide dotted attribute access to get from stores, e.g. store.df == store['df']
new keywords iterator=boolean, and chunksize=number_in_a_chunk are provided to support iteration on select and select_as_multiple (GH3076)
You can now select timestamps from an unordered timeseries similarly to an ordered timeseries (GH2437)
You can now select with a string from a DataFrame with a datelike index, in a similar way to a Series (GH3070)
In [55]: idx = date_range("2001-10-1", periods=5, freq='M')
In [56]: ts = Series(np.random.rand(len(idx)),index=idx)
In [57]: ts['2001']
Out[57]:
2001-10-31 0.663256
2001-11-30 0.079126
2001-12-31 0.587699
Freq: M, Length: 3, dtype: float64
In [59]: df['2001']
Out[59]:
A
2001-10-31 0.663256
2001-11-30 0.079126
2001-12-31 0.587699
[3 rows x 1 columns]
In [60]: p = Panel(randn(3,4,4),items=['ItemA','ItemB','ItemC'],
....: major_axis=date_range('20010102',periods=4),
....: minor_axis=['A','B','C','D'])
....:
In [61]: p
Out[61]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 4 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2001-01-02 00:00:00 to 2001-01-05 00:00:00
Minor_axis axis: A to D
In [62]: p.reindex(items=['ItemA']).squeeze()
Out[62]:
A B C D
2001-01-02 -1.203403 0.425882 -0.436045 -0.982462
2001-01-03 0.348090 -0.969649 0.121731 0.202798
2001-01-04 1.215695 -0.218549 -0.631381 -0.337116
2001-01-05 0.404238 0.907213 -0.865657 0.483186
[4 rows x 4 columns]
In [63]: p.reindex(items=['ItemA'],minor=['B']).squeeze()
Out[63]:
2001-01-02 0.425882
2001-01-03 -0.969649
2001-01-04 -0.218549
2001-01-05 0.907213
Freq: D, Name: B, Length: 4, dtype: float64
In pd.io.data.Options,
Fix bug when trying to fetch data for the current month when already past expiry.
Now using lxml to scrape html instead of BeautifulSoup (lxml was faster).
New instance variables for calls and puts are automatically created when a method that creates them is
called. This works for the current month, where the instance variables are simply calls and puts. It also
works for future expiry months, saving the instance variable as callsMMYY or putsMMYY, where
MMYY are, respectively, the month and year of the option's expiry.
Options.get_near_stock_price now allows the user to specify the month for which to get relevant options data.
Options.get_forward_data now has optional kwargs near and above_below. This allows the
user to specify if they would like to only return forward looking data for options near the current stock
price. This just obtains the data from Options.get_near_stock_price instead of Options.get_xxx_data()
(GH2758).
Cursor coordinate information is now displayed in time-series plots.
added option display.max_seq_items to control the number of elements printed per sequence when pretty-printing it.
(GH2979)
added option display.chop_threshold to control display of small numerical values. (GH2739)
added option display.max_info_rows to prevent verbose_info from being calculated for frames above 1M rows
(configurable). (GH2807, GH2918)
value_counts() now accepts a normalize argument, for normalized histograms. (GH2710).
DataFrame.from_records now accepts not only dicts but any instance of the collections.Mapping ABC.
added option display.mpl_style providing a sleeker visual style for plots. Based on https://gist.github.com/huyng/816622 (GH3075).
Treat boolean values as integers (values 1 and 0) for numeric operations. (GH2641)
to_html() now accepts an optional escape argument to control reserved HTML character escaping (enabled
by default) and escapes &, in addition to < and >. (GH2919)
See the full release notes or issue tracker on GitHub for a complete list.
This is a minor release from 0.10.0 and includes new features, enhancements, and bug fixes. In particular, there is
substantial new HDFStore functionality contributed by Jeff Reback.
An undesired API breakage with functions taking the inplace option has been reverted and deprecation warnings
added.
Functions taking an inplace option return the calling object as before. A deprecation message has been added
Groupby aggregations Max/Min no longer exclude non-numeric data (GH2700)
Resampling an empty DataFrame now returns an empty DataFrame instead of raising an exception (GH2640)
The file reader will now raise an exception when NA values are found in an explicitly specified integer column
instead of converting the column to float (GH2631)
DatetimeIndex.unique now returns a DatetimeIndex with the same name and timezone instead of an array (GH2563)
1.21.3 HDFStore
You may need to upgrade your existing data files. Please visit the compatibility section in the main docs.
You can designate (and index) certain columns that you want to be able to perform queries on a table, by passing a list
to data_columns
In [7]: df
Out[7]:
A B C string string2
2000-01-01 1.885136 -0.183873 2.550850 foo cool
2000-01-02 0.180759 -1.117089 0.061462 foo cool
2000-01-03 -0.294467 -0.591411 -0.876691 foo cool
2000-01-04 3.127110 1.451130 0.045152 foo cool
2000-01-05 -0.242846 1.195819 1.533294 NaN cool
2000-01-06 0.820521 -0.281201 1.651561 NaN cool
2000-01-07 -0.034086 0.252394 -0.498772 foo cool
2000-01-08 -2.290958 -1.601262 -0.256718 bar cool
[8 rows x 5 columns]
# on-disk operations
In [8]: store.append('df', df, data_columns = ['B','C','string','string2'])
[2 rows x 5 columns]
A B C string string2
2000-01-04 3.127110 1.451130 0.045152 foo cool
2000-01-07 -0.034086 0.252394 -0.498772 foo cool
[2 rows x 5 columns]
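The two-row selection above comes from querying the indexed data columns; a sketch of the query, in the Term syntax of this era (conditions inferred from the rows returned):
In [9]: store.select('df', [Term('B>0'), Term('string=foo')])
# returns the two 'foo' rows whose B values are positive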
In [16]: df_mixed1
Out[16]:
A B C string string2 datetime64
2000-01-01 1.885136 -0.183873 2.550850 foo cool 2001-01-02
2000-01-02 0.180759 -1.117089 0.061462 foo cool 2001-01-02
2000-01-03 -0.294467 -0.591411 -0.876691 foo cool 2001-01-02
2000-01-04 NaN NaN 0.045152 foo cool 2001-01-02
2000-01-05 -0.242846 1.195819 1.533294 NaN cool 2001-01-02
2000-01-06 0.820521 -0.281201 1.651561 NaN cool 2001-01-02
2000-01-07 -0.034086 0.252394 -0.498772 foo cool 2001-01-02
2000-01-08 -2.290958 -1.601262 -0.256718 bar cool 2001-01-02
[8 rows x 6 columns]
In [17]: df_mixed1.get_dtype_counts()
Out[17]:
datetime64[ns] 1
float64 3
object 2
Length: 3, dtype: int64
You can pass the columns keyword to select to filter a list of the return columns; this is equivalent to passing a
Term('columns', list_of_columns_to_filter)
In [18]: store.select('df',columns = ['A','B'])
Out[18]:
A B
2000-01-01 1.885136 -0.183873
2000-01-02 0.180759 -1.117089
2000-01-03 -0.294467 -0.591411
2000-01-04 3.127110 1.451130
2000-01-05 -0.242846 1.195819
2000-01-06 0.820521 -0.281201
2000-01-07 -0.034086 0.252394
2000-01-08 -2.290958 -1.601262
[8 rows x 2 columns]
In [21]: df
Out[21]:
A B C
foo bar
foo one 0.239369 0.174122 -1.131794
two -1.948006 0.980347 -0.674429
three -0.361633 -0.761218 1.768215
bar one 0.152288 -0.862613 -0.210968
two -0.859278 1.498195 0.462413
baz two -0.647604 1.511487 -0.727189
three -0.342928 -0.007364 1.427674
qux one 0.104020 2.052171 -1.230963
two -0.019240 -1.713238 0.838912
three -0.637855 0.215109 -1.515362
In [22]: store.append('mi',df)
In [23]: store.select('mi')
Out[23]:
A B C
foo bar
foo one 0.239369 0.174122 -1.131794
two -1.948006 0.980347 -0.674429
three -0.361633 -0.761218 1.768215
bar one 0.152288 -0.862613 -0.210968
two -0.859278 1.498195 0.462413
baz two -0.647604 1.511487 -0.727189
three -0.342928 -0.007364 1.427674
qux one 0.104020 2.052171 -1.230963
two -0.019240 -1.713238 0.838912
three -0.637855 0.215109 -1.515362
In [24]: store.select('mi', Term('foo=bar'))   # query inferred from the two-row result
Out[24]:
                A         B         C
foo bar
bar one  0.152288 -0.862613 -0.210968
    two -0.859278  1.498195  0.462413
[2 rows x 3 columns]
Multi-table creation via append_to_multiple and selection via select_as_multiple can create/select from
multiple tables and return a combined result, by using where on a selector table.
In [25]: df_mt = DataFrame(randn(8, 6), index=date_range('1/1/2000', periods=8),
   ....:                   columns=['A', 'B', 'C', 'D', 'E', 'F'])
   ....:
In [26]: df_mt['foo'] = 'bar'   # calls inferred from the tables shown below
In [27]: store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
   ....:                          df_mt, selector='df1_mt')
In [28]: store
Out[28]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->5,indexers->[index],dc->[B,C,string,string2])
# the individual tables were created
In [29]: store.select('df1_mt')   # call inferred
Out[29]:
A B
2000-01-01 1.586924 -0.447974
2000-01-02 -0.102206 0.870302
2000-01-03 1.249874 1.458210
2000-01-04 -0.616293 0.150468
2000-01-05 -0.431163 0.016640
2000-01-06 0.800353 -0.451572
2000-01-07 1.239198 0.185437
2000-01-08 -0.040863 0.290110
[8 rows x 2 columns]
In [30]: store.select('df2_mt')
Out[30]:
C D E F foo
2000-01-01 -1.573998 0.630925 -0.071659 -1.277640 bar
2000-01-02 1.275280 -1.199212 1.060780 1.673018 bar
2000-01-03 -0.710542 0.825392 1.557329 1.993441 bar
2000-01-04 0.132104 0.580923 -0.128750 1.445964 bar
2000-01-05 0.904578 -1.645852 -0.688741 0.228006 bar
[8 rows x 5 columns]
# as a multiple
In [31]: store.select_as_multiple(['df1_mt','df2_mt'], where=['A>0','B>0'],
   ....:                          selector='df1_mt')
Out[31]:
A B C D E F foo
2000-01-03 1.249874 1.458210 -0.710542 0.825392 1.557329 1.993441 bar
2000-01-07 1.239198 0.185437 -0.540770 -0.370038 1.298390 1.662964 bar
[2 rows x 7 columns]
Enhancements
HDFStore now can read native PyTables table format tables
You can pass nan_rep = 'my_nan_rep' to append, to change the default nan representation on disk
(which converts to/from np.nan), this defaults to nan.
You can pass index to append. This defaults to True. This will automagically create indices on the
indexables and data columns of the table
You can pass chunksize=an integer to append, to change the writing chunksize (default is 50000).
This will significantly lower your memory usage on writing.
You can pass expectedrows=an integer to the first append, to set the TOTAL number of expected rows
that PyTables will expect. This will optimize read/write performance.
Select now supports passing start and stop to provide selection space limiting in selection.
Greatly improved ISO8601 (e.g., yyyy-mm-dd) date parsing for file parsers (GH2698)
Allow DataFrame.merge to handle combinatorial sizes too large for 64-bit integer (GH2690)
Series now has unary negation (-series) and inversion (~series) operators (GH2686)
DataFrame.plot now includes a logx parameter to change the x-axis to log scale (GH2327)
Series arithmetic operators can now handle constant and ndarray input (GH2574)
ExcelFile now takes a kind argument to specify the file type (GH2613)
A faster implementation for Series.str methods (GH2602)
Bug Fixes
HDFStore tables can now store float32 types correctly (cannot be mixed with float64 however)
Fixed Google Analytics prefix when specifying request segment (GH2713).
Function to reset Google Analytics token store so users can recover from improperly setup client secrets
(GH2687).
Fixed groupby bug resulting in segfault when passing in MultiIndex (GH2706)
Fixed bug where passing a Series with datetime64 values into to_datetime results in bogus output values
(GH2699)
Fixed bug in pattern in HDFStore expressions when pattern is not a valid regex (GH2694)
This is a major release from 0.9.1 and includes many new features and enhancements along with a large number of
bug fixes. There are also a number of important API changes that long-time pandas users should pay close attention
to.
The delimited file parsing engine (the guts of read_csv and read_table) has been rewritten from the ground up
and now uses a fraction of the amount of memory while parsing, while being 40% or more faster in most use cases (in
some cases much faster).
There are also many new features:
Much-improved Unicode handling via the encoding option.
Column filtering (usecols)
Dtype specification (dtype argument)
Ability to specify strings to be recognized as True/False
Ability to yield NumPy record arrays (as_recarray)
High performance delim_whitespace option
Decimal format (e.g. European format) specification
Easier CSV dialect options: escapechar, lineterminator, quotechar, etc.
More robust handling of many exceptional kinds of files observed in the wild
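A sketch exercising several of the new options at once (file contents and choices are illustrative assumptions):
data = 'a;b;c\n1;2,5;yes\n3;4,5;no'
pd.read_csv(StringIO(data), sep=';',
            decimal=',',                # European decimal format
            usecols=['a', 'b'],         # column filtering
            dtype={'a': np.int64})      # explicit dtype specification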
In [3]: df
Out[3]:
0 1 2 3
2000-01-01 -0.134024 -0.205969 1.348944 -1.198246
2000-01-02 -1.626124 0.982041 0.059493 -0.460111
2000-01-03 -1.565401 -0.025706 0.942864 2.502156
2000-01-04 -0.302741 0.261551 -0.066342 0.897097
2000-01-05 0.268766 -1.225092 0.582752 -1.490764
2000-01-06 -0.639757 -0.952750 -0.892402 0.505987
[6 rows x 4 columns]
# deprecated now
In [4]: df - df[0]
Out[4]:
1 2 3
2000-01-01 NaN NaN NaN
2000-01-02 NaN NaN NaN
2000-01-03 NaN NaN NaN
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
[6 rows x 10 columns]
In [5]: df.sub(df[0], axis=0)   # the explicit replacement; call inferred
Out[5]:
0 1 2 3
2000-01-01 0.0 -0.071946 1.482967 -1.064223
2000-01-02 0.0 2.608165 1.685618 1.166013
2000-01-03 0.0 1.539695 2.508265 4.067556
2000-01-04 0.0 0.564293 0.236399 1.199839
2000-01-05 0.0 -1.493857 0.313986 -1.759530
2000-01-06 0.0 -0.312993 -0.252645 1.145744
[6 rows x 4 columns]
You will get a deprecation warning in the 0.10.x series, and the deprecated functionality will be removed in 0.11 or
later.
Altered resample default behavior
The default time series resample binning behavior of daily D and higher frequencies has been changed to
closed='left', label='left'. Lower frequencies are unaffected. The prior defaults were causing a great
deal of confusion for users, especially resampling data to daily frequency (which labeled the aggregated group with
the end of the interval: the next day).
In [3]: series
Out[3]:
2000-01-01 00:00:00 0
2000-01-01 04:00:00 1
2000-01-01 08:00:00 2
2000-01-01 12:00:00 3
2000-01-01 16:00:00 4
2000-01-01 20:00:00 5
2000-01-02 00:00:00 6
2000-01-02 04:00:00 7
2000-01-02 08:00:00 8
2000-01-02 12:00:00 9
2000-01-02 16:00:00 10
2000-01-02 20:00:00 11
2000-01-03 00:00:00 12
2000-01-03 04:00:00 13
2000-01-03 08:00:00 14
2000-01-03 12:00:00 15
2000-01-03 16:00:00 16
2000-01-03 20:00:00 17
2000-01-04 00:00:00 18
2000-01-04 04:00:00 19
2000-01-04 08:00:00 20
2000-01-04 12:00:00 21
2000-01-04 16:00:00 22
2000-01-04 20:00:00 23
2000-01-05 00:00:00 24
Freq: 4H, dtype: int64
In [4]: series.resample('D', how='sum')   # call inferred from Out[4]
Out[4]:
2000-01-01 15
2000-01-02 51
2000-01-03 87
2000-01-04 123
2000-01-05 24
Freq: D, dtype: int64
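The prior behavior labeled each bin with the right edge of the interval; it can still be requested explicitly (a sketch, using the how= resample keyword of this era):
series.resample('D', how='sum', closed='right', label='right')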
Infinity and negative infinity are no longer treated as NA by isnull and notnull. That they ever were was
a relic of early pandas. This behavior can be re-enabled globally by the mode.use_inf_as_null option:
In [6]: s = pd.Series([1.5, np.inf, 3.4, -np.inf])
In [7]: pd.isnull(s)
Out[7]:
0 False
1 False
2 False
3 False
Length: 4, dtype: bool
In [8]: s.fillna(0)
Out[8]:
0 1.500000
1 inf
2 3.400000
3 -inf
Length: 4, dtype: float64
In [9]: pd.set_option('use_inf_as_null', True)   # call inferred; reset below at In [12]
In [10]: pd.isnull(s)
Out[10]:
0 False
1 True
2 False
3 True
Length: 4, dtype: bool
In [11]: s.fillna(0)
Out[11]:
0 1.5
1 0.0
2 3.4
3 0.0
Length: 4, dtype: float64
In [12]: pd.reset_option('use_inf_as_null')
Methods with the inplace option now all return None instead of the calling object. E.g. code written like
df = df.fillna(0, inplace=True) may stop working. To fix, simply delete the unnecessary variable
assignment.
pandas.merge no longer sorts the group keys (sort=False) by default. This was done for performance
reasons: the group-key sorting is often one of the more expensive parts of the computation and is often unnecessary.
The default column names for a file with no header have been changed to the integers 0 through N - 1. This
is to create consistency with the DataFrame constructor with no columns specified. The v0.9.0 behavior (names
X0, X1, ...) can be reproduced by specifying prefix='X':
In [13]: data= 'a,b,c\n1,Yes,2\n3,No,4'
In [14]: print(data)
a,b,c
1,Yes,2
3,No,4
In [15]: pd.read_csv(StringIO(data), header=None)
Out[15]:
0 1 2
0 a b c
1 1 Yes 2
2 3 No 4
[3 rows x 3 columns]
In [16]: pd.read_csv(StringIO(data), header=None, prefix='X')
Out[16]:
X0 X1 X2
0 a b c
1 1 Yes 2
2 3 No 4
[3 rows x 3 columns]
Values like 'Yes' and 'No' are not interpreted as boolean by default, though this can be controlled by new
true_values and false_values arguments:
In [17]: print(data)
a,b,c
1,Yes,2
3,No,4
In [18]: pd.read_csv(StringIO(data))
Out[18]:
a b c
0 1 Yes 2
1 3 No 4
[2 rows x 3 columns]
In [19]: pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
Out[19]:
a b c
0 1 True 2
1 3 False 4
[2 rows x 3 columns]
The file parsers will not recognize non-string values arising from a converter function as NA if passed in the
na_values argument. It's better to do post-processing using the replace function instead.
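For instance (a hypothetical converter on column 'b' of the data above, returning an integer sentinel that na_values would not catch):
df = pd.read_csv(StringIO(data), converters={'b': lambda x: {'Yes': 1, 'No': 0}.get(x, -1)})
df['b'] = df['b'].replace(-1, np.nan)   # map the sentinel to NA explicitly afterwards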
Calling fillna on Series or DataFrame with no arguments is no longer valid code. You must either specify a
fill value or an interpolation method:
In [21]: s
Out[21]:
0 NaN
1 1.0
2 2.0
3 NaN
4 4.0
Length: 5, dtype: float64
In [22]: s.fillna(0)
Out[22]:
0 0.0
1 1.0
2 2.0
3 0.0
4 4.0
Length: 5, dtype: float64
In [23]: s.fillna(method='pad')
Out[23]:
0 NaN
1 1.0
2 2.0
3 2.0
4 4.0
Length: 5, dtype: float64
In [24]: s.ffill()
Out[24]:
0 NaN
1 1.0
2 2.0
3 2.0
4 4.0
Length: 5, dtype: float64
Series.apply will now operate on a returned value from the applied function that is itself a Series, and
possibly upcast the result to a DataFrame
In [26]: s = Series(np.random.rand(5))
In [27]: s
Out[27]:
0 0.717478
1 0.815199
2 0.452478
3 0.848385
4 0.235477
Length: 5, dtype: float64
# here f returns a Series, e.g. f = lambda x: Series([x, x ** 2], index=['x', 'x^2'])
In [28]: s.apply(f)
Out[28]:
x x^2
0 0.717478 0.514775
1 0.815199 0.664550
2 0.452478 0.204737
3 0.848385 0.719757
4 0.235477 0.055449
[5 rows x 2 columns]
In [29]: get_option("display.max_rows")
Out[29]: 15
Instead of printing the summary information, pandas now splits the string representation across multiple rows by
default:
In [31]: wide_frame
Out[31]:
0 1 2 3 4 5 6 \
0 -0.681624 0.191356 1.180274 -0.834179 0.703043 0.166568 -0.583599
1 0.441522 -0.316864 -0.017062 1.570114 -0.360875 -0.880096 0.235532
2 -0.412451 -0.462580 0.422194 0.288403 -0.487393 -0.777639 0.055865
3 -0.277255 1.331263 0.585174 -0.568825 -0.719412 1.191340 -0.456362
4 -1.642511 0.432560 1.218080 -0.564705 -0.581790 0.286071 0.048725
7 8 9 10 11 12 13 \
0 -1.201796 -1.422811 -0.882554 1.209871 -0.941235 0.863067 -0.336232
1 0.207232 -1.983857 -1.702547 -1.621234 -0.906840 1.014601 -0.475108
2 1.383381 0.085638 0.246392 0.965887 0.246354 -0.727728 -0.094414
3 0.089931 0.776079 0.752889 -1.195795 -1.425911 -0.548829 0.774225
4 1.002440 1.276582 0.054399 0.241963 -0.471786 0.314510 -0.059986
14 15
0 -0.976847 0.033862
1 -0.358944 1.262942
2 -0.276854 0.158399
3 0.740501 1.510263
4 -2.069319 -1.115104
[5 rows x 16 columns]
The old behavior of printing out summary information can be achieved via the expand_frame_repr print option:
In [32]: pd.set_option('expand_frame_repr', False)
In [33]: wide_frame
Out[33]:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 -0.681624 0.191356 1.180274 -0.834179 0.703043 0.166568 -0.583599 -1.201796 -1.422811 -0.882554 1.209871 -0.941235 0.863067 -0.336232 -0.976847 0.033862
1 0.441522 -0.316864 -0.017062 1.570114 -0.360875 -0.880096 0.235532 0.207232 -1.983857 -1.702547 -1.621234 -0.906840 1.014601 -0.475108 -0.358944 1.262942
2 -0.412451 -0.462580 0.422194 0.288403 -0.487393 -0.777639 0.055865 1.383381 0.085638 0.246392 0.965887 0.246354 -0.727728 -0.094414 -0.276854 0.158399
3 -0.277255 1.331263 0.585174 -0.568825 -0.719412 1.191340 -0.456362 0.089931 0.776079 0.752889 -1.195795 -1.425911 -0.548829 0.774225 0.740501 1.510263
4 -1.642511 0.432560 1.218080 -0.564705 -0.581790 0.286071 0.048725 1.002440 1.276582 0.054399 0.241963 -0.471786 0.314510 -0.059986 -2.069319 -1.115104
[5 rows x 16 columns]
The width of each line can be changed via line_width (80 by default):
In [34]: pd.set_option('line_width', 40)
In [35]: wide_frame
Out[35]:
0 1 2 \
0 -0.681624 0.191356 1.180274
1 0.441522 -0.316864 -0.017062
2 -0.412451 -0.462580 0.422194
3 -0.277255 1.331263 0.585174
4 -1.642511 0.432560 1.218080
3 4 5 \
0 -0.834179 0.703043 0.166568
1 1.570114 -0.360875 -0.880096
2 0.288403 -0.487393 -0.777639
3 -0.568825 -0.719412 1.191340
4 -0.564705 -0.581790 0.286071
6 7 8 \
0 -0.583599 -1.201796 -1.422811
1 0.235532 0.207232 -1.983857
2 0.055865 1.383381 0.085638
3 -0.456362 0.089931 0.776079
4 0.048725 1.002440 1.276582
9 10 11 \
0 -0.882554 1.209871 -0.941235
1 -1.702547 -1.621234 -0.906840
2 0.246392 0.965887 0.246354
3 0.752889 -1.195795 -1.425911
4 0.054399 0.241963 -0.471786
12 13 14 \
0 0.863067 -0.336232 -0.976847
1 1.014601 -0.475108 -0.358944
2 -0.727728 -0.094414 -0.276854
3 -0.548829 0.774225 0.740501
4 0.314510 -0.059986 -2.069319
15
0 0.033862
1 1.262942
2 0.158399
3 1.510263
4 -1.115104
[5 rows x 16 columns]
Docs for the PyTables Table format & several enhancements to the API. Here is a taste of what to expect.
In [38]: df
Out[38]:
A B C
2000-01-01 -0.369325 -1.502617 -0.376280
2000-01-02 0.511936 -0.116412 -0.625256
2000-01-03 -0.550627 1.261433 -0.552429
2000-01-04 1.695803 -1.025917 -0.910942
2000-01-05 0.426805 -0.131749 0.432600
2000-01-06 0.044671 -0.341265 1.844536
2000-01-07 -2.036047 0.000830 -0.955697
2000-01-08 -0.898872 -0.725411 0.059904
[8 rows x 3 columns]
In [43]: store
Out[43]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->[index])
# retrieve a stored frame
In [44]: store.select('df')
Out[44]:
A B C
2000-01-01 -0.369325 -1.502617 -0.376280
2000-01-02 0.511936 -0.116412 -0.625256
2000-01-03 -0.550627 1.261433 -0.552429
2000-01-04 1.695803 -1.025917 -0.910942
2000-01-05 0.426805 -0.131749 0.432600
2000-01-06 0.044671 -0.341265 1.844536
2000-01-07 -2.036047 0.000830 -0.955697
2000-01-08 -0.898872 -0.725411 0.059904
[8 rows x 3 columns]
In [46]: wp
Out[46]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
# storing a panel
In [47]: store.append('wp',wp)
In [50]: store.select('wp')
Out[50]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-03 00:00:00
Minor_axis axis: A to D
# deleting an object from the store
In [51]: del store['df']
In [52]: store
Out[52]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/wp wide_table (typ->appendable,nrows->12,ncols->2,indexers->[major_axis,
minor_axis])
Enhancements
added the ability to use hierarchical keys
# e.g. after store.put('foo/bar/bah', df)
In [56]: store
Out[56]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/foo/bar/bah frame (shape->[8,3])
In [58]: store
Out[58]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/foo/bar/bah frame (shape->[8,3])
In [59]: df['string'] = 'string'
In [60]: df['int'] = 1
In [61]: store.append('df', df)
In [62]: df1 = store.select('df')
In [63]: df1
Out[63]:
A B C string int
2000-01-01 -0.369325 -1.502617 -0.376280 string 1
2000-01-02 0.511936 -0.116412 -0.625256 string 1
2000-01-03 -0.550627 1.261433 -0.552429 string 1
2000-01-04 1.695803 -1.025917 -0.910942 string 1
2000-01-05 0.426805 -0.131749 0.432600 string 1
2000-01-06 0.044671 -0.341265 1.844536 string 1
2000-01-07 -2.036047 0.000830 -0.955697 string 1
2000-01-08 -0.898872 -0.725411 0.059904 string 1
[8 rows x 5 columns]
In [64]: df1.get_dtype_counts()
Out[64]:
float64 3
int64 1
object 1
Length: 3, dtype: int64
Compatibility
HDFStore in 0.10 is backwards compatible for reading tables created in a prior version of pandas; however, query
terms using the prior (undocumented) methodology are unsupported. You must read in the entire file and write it out
using the new format to take advantage of the updates.
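A hypothetical sketch of such a migration (the file and key names here are made up):
import pandas as pd
legacy = pd.HDFStore('legacy.h5')
df = legacy['df']            # read the entire object with the 0.10 reader
legacy.close()
fresh = pd.HDFStore('migrated.h5')
fresh.append('df', df)       # write it back out using the new table format
fresh.close()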
Adding experimental support for Panel4D and factory functions to create n-dimensional named panels. See the docs
for NDim. Here is a taste of what to expect.
In [66]: p4d
Out[66]:
<class 'pandas.core.panelnd.Panel4D'>
Dimensions: 2 (labels) x 2 (items) x 5 (major_axis) x 4 (minor_axis)
Labels axis: Label1 to Label2
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
See the full release notes or issue tracker on GitHub for a complete list.
This is a bugfix release from 0.9.0 and includes several new features and enhancements along with a large number of
bug fixes. The new features include by-column sort order for DataFrame and Series, improved NA handling for the
rank method, masking functions for DataFrame, and intraday time-series filtering for DataFrame.
Series.sort, DataFrame.sort, and DataFrame.sort_index can now be specified in a per-column manner to support
multiple sort orders (GH928)
In [2]: df = DataFrame(np.random.randint(0, 2, (6, 3)), columns=['A', 'B', 'C'])
In [3]: df.sort(['A', 'B'], ascending=[1, 0])
Out[3]:
A B C
3 0 1 1
4 0 1 1
2 0 0 1
0 1 0 0
1 1 0 0
5 1 0 0
DataFrame.rank now supports additional argument values for the na_option parameter so missing values can
be assigned either the largest or the smallest rank (GH1508, GH2159)
In [1]: df = DataFrame(np.random.randn(6, 3), columns=['A', 'B', 'C'])
In [2]: df.ix[2:4] = np.nan
In [3]: df.rank()
Out[3]:
A B C
0 3.0 1.0 3.0
1 2.0 2.0 1.0
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 1.0 3.0 2.0
[6 rows x 3 columns]
In [4]: df.rank(na_option='top')
Out[4]:
A B C
0 6.0 4.0 6.0
1 5.0 5.0 4.0
2 2.0 2.0 2.0
3 2.0 2.0 2.0
4 2.0 2.0 2.0
5 4.0 6.0 5.0
[6 rows x 3 columns]
In [5]: df.rank(na_option='bottom')
Out[5]:
A B C
0 3.0 1.0 3.0
1 2.0 2.0 1.0
2 5.0 5.0 5.0
3 5.0 5.0 5.0
4 5.0 5.0 5.0
5 1.0 3.0 2.0
[6 rows x 3 columns]
DataFrame has new where and mask methods to select values according to a given boolean mask (GH2109,
GH2151)
DataFrame currently supports slicing via a boolean vector the same length as the DataFrame (inside
the []). The returned DataFrame has the same number of columns as the original, but is sliced on its
index.
In [6]: df = DataFrame(np.random.randn(5, 3), columns = ['A','B','C'])
In [7]: df
Out[7]:
A B C
0 1.744738 -0.356939 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 -0.404023 -1.115882
3 2.093925 0.010808 -1.775758
4 1.303175 0.025683 -1.795489
[5 rows x 3 columns]
In [8]: df[df['A'] > 0]
Out[8]:
A B C
0 1.744738 -0.356939 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 -0.404023 -1.115882
3 2.093925 0.010808 -1.775758
4 1.303175 0.025683 -1.795489
[5 rows x 3 columns]
If a DataFrame is sliced with a DataFrame based boolean condition (with the same size as the original
DataFrame), then a DataFrame the same size (index and columns) as the original is returned, with
elements that do not meet the boolean condition as NaN. This is accomplished via the new method
DataFrame.where. In addition, where takes an optional other argument for replacement.
In [9]: df[df>0]
Out[9]:
A B C
0 1.744738 NaN 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 NaN NaN
3 2.093925 0.010808 NaN
4 1.303175 0.025683 NaN
[5 rows x 3 columns]
In [10]: df.where(df>0)
Out[10]:
A B C
0 1.744738 NaN 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 NaN NaN
3 2.093925 0.010808 NaN
4 1.303175 0.025683 NaN
[5 rows x 3 columns]
In [11]: df.where(df>0,-df)
Out[11]:
A B C
0 1.744738 0.356939 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 0.404023 1.115882
3 2.093925 0.010808 1.775758
4 1.303175 0.025683 1.795489
[5 rows x 3 columns]
Furthermore, where now aligns the input boolean condition (ndarray or DataFrame), such that partial
selection with setting is possible. This is analogous to partial setting via .ix (but on the contents rather
than the axis labels)
In [12]: df2 = df.copy()
In [13]: df2[df2[1:4] > 0] = 3
In [14]: df2
Out[14]:
A B C
0 1.744738 -0.356939 0.092791
1 3.000000 3.000000 3.000000
2 3.000000 -0.404023 -1.115882
3 3.000000 3.000000 -1.775758
4 1.303175 0.025683 -1.795489
[5 rows x 3 columns]
In [15]: df.mask(df<=0)
Out[15]:
A B C
0 1.744738 NaN 0.092791
1 1.222637 1.909179 0.195946
2 0.481559 NaN NaN
3 2.093925 0.010808 NaN
4 1.303175 0.025683 NaN
[5 rows x 3 columns]
In [16]: xl = ExcelFile('data/test.xls')
Added option to disable pandas-style tick locators and formatters using series.plot(x_compat=True) or
pandas.plot_params['x_compat'] = True (GH2205)
Existing TimeSeries methods at_time and between_time were added to DataFrame (GH2149)
DataFrame.dot can now accept ndarrays (GH2042)
DataFrame.drop now supports non-unique indexes (GH2101)
Upsampling data with a PeriodIndex will result in a higher frequency TimeSeries that spans the original time
window
# e.g. s = Series(np.random.randn(2), index=period_range('2012Q1', periods=2, freq='Q'))
In [4]: s.resample('M')
Out[4]:
2012-01 -1.471992
2012-02 NaN
2012-03 NaN
2012-04 -0.493593
2012-05 NaN
2012-06 NaN
Freq: M, dtype: float64
Period.end_time now returns the last nanosecond in the time interval (GH2124, GH2125, GH1764)
In [18]: p = Period('2012')
In [19]: p.end_time
Out[19]: Timestamp('2012-12-31 23:59:59.999999999')
File parsers no longer coerce to float or bool for columns that have custom converters specified (GH2184)
See the full release notes or issue tracker on GitHub for a complete list.
This is a major release from 0.8.1 and includes several new features and enhancements along with a large number of
bug fixes. New features include vectorized unicode encoding/decoding for Series.str, to_latex method to DataFrame,
more flexible parsing of boolean values, and enabling the download of options data from Yahoo! Finance.
Add encode and decode for unicode handling to vectorized string processing methods in Series.str (GH1706)
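A minimal sketch of the round trip (the strings here are made up):
s = Series([u'M\xfcnchen', u'Z\xfcrich'])
encoded = s.str.encode('utf-8')        # Series of encoded bytes
decoded = encoded.str.decode('utf-8')  # round-trips back to unicode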
The default column names when header=None and no column names are passed to functions like read_csv
have changed to be more Pythonic and amenable to attribute access:
In [3]: df
Out[3]:
0 1 2
0 0 0 1
1 1 1 0
2 0 1 0
[3 rows x 3 columns]
Creating a Series from another Series, passing an index, will cause reindexing to happen inside rather than treating
the Series like an ndarray. Technically improper usages like Series(df[col1], index=df[col2])
that worked before by accident (this was never intended) will lead to all-NA Series in some cases. To be perfectly
clear:
In [4]: s1 = Series([1, 2, 3])
In [5]: s1
Out[5]:
0 1
1 2
2 3
Length: 3, dtype: int64
In [6]: s2 = Series(s1, index=['foo', 'bar', 'baz'])
In [7]: s2
Out[7]:
foo NaN
bar NaN
baz NaN
Length: 3, dtype: float64
This release includes a few new features, performance enhancements, and over 30 bug fixes from 0.8.0. New features
include notably NA friendly string processing functionality and a series of new plot types and options.
This is a major release from 0.7.3 and includes extensive work on the time series handling and processing infrastructure
as well as a great deal of new functionality throughout the library. It includes over 700 commits from more than 20
distinct authors. Most pandas 0.7.3 and earlier users should not experience any issues upgrading, but due to the
migration to the NumPy datetime64 dtype, there may be a number of bugs and incompatibilities lurking. Lingering
incompatibilities will be fixed ASAP in a 0.8.1 release if necessary. See the full release notes or issue tracker on
GitHub for a complete list.
All objects can now work with non-unique indexes. Data alignment / join operations work according to SQL join
semantics (including, if applicable, index duplication in many-to-many joins)
Time series data are now represented using NumPy's datetime64 dtype; thus, pandas 0.8.0 now requires at least NumPy
1.6. It has been tested and verified to work with the development version (1.7+) of NumPy as well, which includes some
significant user-facing API changes. NumPy 1.6 also has a number of bugs having to do with nanosecond resolution
data, so I recommend that you steer clear of NumPy 1.6's datetime64 API functions (though limited as they are) and
only interact with this data using the interface that pandas provides.
See the end of the 0.8.0 section for a porting guide listing potential issues for users migrating legacy codebases from
pandas 0.7 or earlier to 0.8.0.
Bug fixes to the 0.7.x series for legacy NumPy < 1.6 users will be provided as they arise. There will be no further
development in 0.7.x beyond bug fixes.
Note: With this release, legacy scikits.timeseries users should be able to port their code to use pandas.
New datetime64 representation speeds up join operations and data alignment, reduces memory usage, and
improves serialization / deserialization performance significantly over datetime.datetime
High performance and flexible resample method for converting from high-to-low and low-to-high frequency.
Supports interpolation, user-defined aggregation functions, and control over how the intervals and result labeling
are defined. A suite of high performance Cython/C-based resampling functions (including Open-High-Low-
Close) have also been implemented.
Revamp of frequency aliases and support for frequency shortcuts like 15min, or 1h30min
New DatetimeIndex class supports both fixed frequency and irregular time series. Replaces now deprecated
DateRange class
New PeriodIndex and Period classes for representing time spans and performing calendar logic, including
the 12 fiscal quarterly frequencies. This is a partial port of, and a substantial enhancement to, elements of the
scikits.timeseries codebase. Support for conversion between PeriodIndex and DatetimeIndex
New Timestamp data type subclasses datetime.datetime, providing the same interface while enabling working
with nanosecond-resolution data. Also provides easy time zone conversions.
Enhanced support for time zones. Add tz_convert and tz_localize methods to TimeSeries and DataFrame.
All timestamps are stored as UTC; Timestamps from DatetimeIndex objects with a time zone set will be localized
to local time. Time zone conversions are therefore essentially free. The user needs to know very little about the pytz
library now; only time zone names as strings are required. Time zone-aware timestamps are equal if and only
if their UTC timestamps match. Operations between time zone-aware time series with different time zones will
result in a UTC-indexed time series.
Time series string indexing conveniences / shortcuts: slice years, year and month, and index values with strings
Enhanced time series plotting; adaptation of scikits.timeseries matplotlib-based plotting code
New date_range, bdate_range, and period_range factory functions
Robust frequency inference function infer_freq and inferred_freq property of DatetimeIndex, with option
to infer frequency on construction of DatetimeIndex
to_datetime function efficiently parses array of strings to DatetimeIndex. DatetimeIndex will parse array or
list of strings to datetime64
Optimized support for datetime64-dtype data in Series and DataFrame columns
New NaT (Not-a-Time) type to represent NA in timestamp arrays
Optimize Series.asof for looking up as of values for arrays of timestamps
Milli, Micro, Nano date offset objects
Can index time series with datetime.time objects to select all data at a particular time of day (TimeSeries.at_time)
or between two times (TimeSeries.between_time)
Add tshift method for leading/lagging using the frequency (if any) of the index, as opposed to a naive lead/lag
using shift
New cut and qcut functions (like R's cut function) for computing a categorical variable from a continuous
variable by binning values either into value-based (cut) or quantile-based (qcut) bins; see the sketch after this list
Rename Factor to Categorical and add a number of usability features
Add limit argument to fillna/reindex
More flexible multiple function application in GroupBy, and can pass list (name, function) tuples to get result in
particular order with given names
Add flexible replace method for efficiently substituting values
Enhanced read_csv/read_table for reading time series data and converting multiple columns to dates
Add comments option to parser functions: read_csv, etc.
Add dayfirst option to parser functions for parsing international DD/MM/YYYY dates
Allow the user to specify the CSV reader dialect to control quoting etc.
Handling thousands separators in read_csv to improve integer parsing.
Enable unstacking of multiple levels in one shot. Alleviate pivot_table bugs (empty columns being intro-
duced)
Move to klib-based hash tables for indexing; better performance and less memory usage than Python's dict
Add first, last, min, max, and prod optimized GroupBy functions
New ordered_merge function
Add flexible comparison instance methods eq, ne, lt, gt, etc. to DataFrame, Series
Improve scatter_matrix plotting function and add histogram or kernel density estimates to diagonal
Add kde plot option for density plots
Support for converting DataFrame to R data.frame through rpy2
Improved support for complex numbers in Series and DataFrame
Add pct_change method to all data structures
Add max_colwidth configuration option for DataFrame console output
Interpolate Series values using index values
Can select multiple columns from GroupBy
Add update methods to Series/DataFrame for updating values in place
Add any and all method to DataFrame
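A minimal sketch touching a few of the items above (the data here is made up):
import numpy as np
from pandas import Series, date_range, cut, qcut
rng = date_range('1/1/2012', periods=100, freq='D')
ts = Series(np.random.randn(100), index=rng)
ts_est = ts.tz_localize('UTC').tz_convert('US/Eastern')  # stored as UTC, so conversion is cheap
value_bins = cut(ts, 4)    # four equal-width, value-based bins
quartiles = qcut(ts, 4)    # four quantile-based bins
pct = ts.pct_change()      # new pct_change method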
In [2]: fx['FR'].plot(style='g')
Out[2]: <matplotlib.axes._subplots.AxesSubplot at 0x1267198d0>
Vytautas Jancauskas, the 2012 GSOC participant, has added many new plot types. For example, 'kde' is a new
option:
In [4]: s = Series(np.concatenate((np.random.randn(1000),
...: np.random.randn(1000) * 0.5 + 3)))
...:
In [5]: plt.figure()
Out[5]: <matplotlib.figure.Figure at 0x127083cc0>
In [7]: s.plot(kind='kde')
Out[7]: <matplotlib.axes._subplots.AxesSubplot at 0x12e872eb8>
Deprecation of offset, time_rule, and timeRule arguments names in time series functions. Warnings
will be printed until pandas 0.9 or 1.0.
The major change that may affect you in pandas 0.8.0 is that time series indexes use NumPy's datetime64 data
type instead of dtype=object arrays of Python's built-in datetime.datetime objects. DateRange has been
replaced by DatetimeIndex but otherwise behaves identically. But, if you have code that converts DateRange
or Index objects that used to contain datetime.datetime values to plain NumPy arrays, you may have bugs
lurking with code using scalar values because you are handing control over to NumPy:
In [10]: rng[5]
Out[10]: Timestamp('2000-01-06 00:00:00', freq='D')
# e.g. scalar_val = np.asarray(rng)[5]
In [14]: type(scalar_val)
Out[14]: numpy.datetime64
pandas's Timestamp object is a subclass of datetime.datetime that has nanosecond support (the
nanosecond field stores the nanosecond value between 0 and 999). It should substitute directly into any code that
used datetime.datetime values before. Thus, I recommend not casting DatetimeIndex to regular NumPy
arrays.
If you have code that requires an array of datetime.datetime objects, you have a couple of options. First, the
asobject property of DatetimeIndex produces an array of Timestamp objects:
In [15]: stamp_array = rng.asobject
In [16]: stamp_array
Out[16]:
Index([2000-01-01 00:00:00, 2000-01-02 00:00:00, 2000-01-03 00:00:00,
2000-01-04 00:00:00, 2000-01-05 00:00:00, 2000-01-06 00:00:00,
2000-01-07 00:00:00, 2000-01-08 00:00:00, 2000-01-09 00:00:00,
2000-01-10 00:00:00],
dtype='object')
In [17]: stamp_array[5]
Out[17]: Timestamp('2000-01-06 00:00:00', freq='D')
In [18]: dt_array = rng.to_pydatetime()
In [19]: dt_array
Out[19]:
array([datetime.datetime(2000, 1, 1, 0, 0),
datetime.datetime(2000, 1, 2, 0, 0),
datetime.datetime(2000, 1, 3, 0, 0),
datetime.datetime(2000, 1, 4, 0, 0),
datetime.datetime(2000, 1, 5, 0, 0),
datetime.datetime(2000, 1, 6, 0, 0),
datetime.datetime(2000, 1, 7, 0, 0),
datetime.datetime(2000, 1, 8, 0, 0),
datetime.datetime(2000, 1, 9, 0, 0),
datetime.datetime(2000, 1, 10, 0, 0)], dtype=object)
In [20]: dt_array[5]
Out[20]: datetime.datetime(2000, 1, 6, 0, 0)
matplotlib knows how to handle datetime.datetime but not Timestamp objects. While I recommend that you
plot time series using TimeSeries.plot, you can either use to_pydatetime or register a converter for the
Timestamp type. See the matplotlib documentation for more on this.
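A hedged sketch of the to_pydatetime route (assuming a DatetimeIndex rng like the one created below):
import matplotlib.pyplot as plt
py_dates = rng.to_pydatetime()            # plain datetime.datetime values
plt.plot(py_dates, range(len(py_dates)))  # matplotlib handles these natively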
Warning: There are bugs in the user-facing API with the nanosecond datetime64 unit in NumPy 1.6. In particular,
the string version of the array shows garbage values, and conversion to dtype=object is similarly broken.
In [21]: rng = date_range('1/1/2000', periods=10)
In [22]: rng
Out[22]:
DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04',
'2000-01-05', '2000-01-06', '2000-01-07', '2000-01-08',
'2000-01-09', '2000-01-10'],
dtype='datetime64[ns]', freq='D')
In [23]: np.asarray(rng)
Out[23]:
array(['2000-01-01T00:00:00.000000000', '2000-01-02T00:00:00.000000000',
'2000-01-03T00:00:00.000000000', '2000-01-04T00:00:00.000000000',
'2000-01-05T00:00:00.000000000', '2000-01-06T00:00:00.000000000',
'2000-01-07T00:00:00.000000000', '2000-01-08T00:00:00.000000000',
'2000-01-09T00:00:00.000000000', '2000-01-10T00:00:00.000000000'],
dtype='datetime64[ns]')
# e.g. converted = np.asarray(rng, dtype=object) under NumPy 1.6
In [25]: converted[5]
Out[25]: 947116800000000000
Trust me: don't panic. If you are using NumPy 1.6 and restrict your interaction with datetime64 values to
pandas's API you will be just fine. There is nothing wrong with the data-type (a 64-bit integer internally); all of the
important data processing happens in pandas and is heavily tested. I strongly recommend that you do not work
directly with datetime64 arrays in NumPy 1.6 and only use the pandas API.
Support for non-unique indexes: you may have code inside a try/except block that failed because the index was
not being unique. In many cases it will no longer fail (some methods like append still check for
uniqueness unless disabled). However, all is not lost: you can inspect index.is_unique and raise an exception
explicitly if it is False, or go to a different code branch.
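A minimal sketch of that explicit check (df here is any hypothetical frame):
if not df.index.is_unique:
    raise ValueError('duplicate index values: %s' % df.index.get_duplicates())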
This is a minor release from 0.7.2 and fixes many minor bugs and adds a number of nice new features. There are
also a couple of API changes to note; these should not affect very many users, and we are inclined to call them bug
fixes even though they do constitute a change in behavior. See the full release notes or issue tracker on GitHub for a
complete list.
Add stacked argument to Series and DataFrame's plot method for stacked bar plots.
df.plot(kind='bar', stacked=True)
df.plot(kind='barh', stacked=True)
Reverted some changes to how NA values (represented typically as NaN or None) are handled in non-numeric Series:
In comparisons, NA / NaN will always come through as False, except with != which is True. Be very careful with
boolean arithmetic, especially negation, in the presence of NA data. You may wish to add an explicit NA filter into
boolean array operations if you are worried about this:
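# a minimal sketch, assuming a non-numeric Series s that contains NA values
mask = s == 'foo'
s[mask & s.notnull()]   # filter NA explicitly rather than relying on the comparison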
While propagating NA in comparisons may seem like the right behavior to some users (and you could argue on purely
technical grounds that this is the right thing to do), the evaluation was made that propagating NA everywhere, including
in numerical arrays, would cause a large amount of problems for users. Thus, a practicality beats purity approach
was taken. This issue may be revisited at some point in the future.
When calling apply on a grouped Series, the return value will also be a Series, to be more consistent with the
groupby behavior with DataFrame:
In [7]: df
Out[7]:
A B C D
0 foo one 1.075059 -0.449141
1 bar one 0.785676 1.443014
2 foo two 0.958157 0.612324
3 bar three 1.477773 -0.178818
4 foo two -1.006023 0.133072
5 bar two -1.506997 -0.550981
6 foo one 1.218042 -2.043335
7 foo three -0.565878 0.753539
[8 rows x 4 columns]
In [8]: grouped = df.groupby('A')['C']
In [9]: grouped.describe()
Out[9]:
count mean std min 25% 50% 75% \
A
bar 3.0 0.252151 1.562274 -1.506997 -0.360661 0.785676 1.131724
foo 5.0 0.335871 1.039915 -1.006023 -0.565878 0.958157 1.075059
max
A
bar 1.477773
foo 1.218042
[2 rows x 8 columns]
In [10]: grouped.apply(lambda x: x.order()[-2:])   # top two values per group
Out[10]:
A
bar 1 0.785676
3 1.477773
foo 0 1.075059
6 1.218042
Name: C, Length: 4, dtype: float64
This release targets bugs in 0.7.1, and adds a few minor features.
This release includes a few new features and addresses over a dozen bugs in 0.7.0.
Add to_clipboard function to pandas namespace for writing objects to the system clipboard (GH774)
Add itertuples method to DataFrame for iterating through the rows of a dataframe as tuples (GH818)
Add ability to pass fill_value and method to DataFrame and Series align method (GH806, GH807)
Add fill_value option to reindex, align methods (GH784)
Enable concat to produce DataFrame from Series (GH787)
Add between method to Series (GH802); see the sketch after this list
Add HTML representation hook to DataFrame for the IPython HTML notebook (GH773)
Support for reading Excel 2007 XML documents using openpyxl
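A small sketch of a couple of the additions above (the frame here is made up):
from pandas import DataFrame
df = DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, 6.0]})
for row in df.itertuples():   # iterate over rows as tuples, index first
    print(row)
df['a'].between(1, 2)         # elementwise 1 <= a <= 2
df.to_clipboard()             # copy the frame to the system clipboard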
New unified merge function for efficiently performing full gamut of database / relational-algebra operations.
Refactored existing join methods to use the new infrastructure, resulting in substantial performance gains
(GH220, GH249, GH267)
New unified concatenation function for concatenating Series, DataFrame or Panel objects along an axis. Can
form union or intersection of the other axes. Improves performance of Series.append and DataFrame.
append (GH468, GH479, GH273)
Can pass multiple DataFrames to DataFrame.append to concatenate (stack) and multiple Series to Series.
append too
Can pass list of dicts (e.g., a list of JSON objects) to DataFrame constructor (GH526)
You can now set multiple columns in a DataFrame via __getitem__, useful for transformation (GH342)
Handle differently-indexed output values in DataFrame.apply (GH498)
In [1]: df = DataFrame(randn(10, 4))
One of the potentially riskiest API changes in 0.7.0, but also one of the most important, was a complete review of how
integer indexes are handled with regard to label-based indexing. Here is an example:
In [4]: s
Out[4]:
0 -0.543429
2 1.425447
4 -0.408795
6 -1.489348
8 -1.166408
10 -0.481205
12 -0.810355
14 -0.985491
16 -0.336246
18 -0.629058
Length: 10, dtype: float64
In [5]: s[0]
Out[5]: -0.54342898765020686
In [6]: s[2]
Out[6]: 1.4254474252163707
In [7]: s[4]
Out[7]: -0.40879476802408349
This is all exactly identical to the behavior before. However, if you ask for a key not contained in the Series, in
versions 0.6.1 and prior, Series would fall back on a location-based lookup. This now raises a KeyError:
In [2]: s[1]
KeyError: 1
In [4]: df
Out[4]:
0 1 2 3
0 0.88427 0.3363 -0.1787 0.03162
2 0.14451 -0.1415 0.2504 0.58374
4 -1.44779 -0.9186 -1.4996 0.27163
6 -0.26598 -2.4184 -0.2658 0.11503
8 -0.58776 0.3144 -0.8566 0.61941
10 0.10940 -0.7175 -1.0108 0.47990
12 -1.16919 -0.3087 -0.6049 -0.43544
14 -0.07337 0.3410 0.0424 -0.16037
In [5]: df.ix[3]
KeyError: 3
In order to support purely integer-based indexing, the following methods have been added:
Method Description
Series.iget_value(i) Retrieve value stored at location i
Series.iget(i) Alias for iget_value
DataFrame.irow(i) Retrieve the i-th row
DataFrame.icol(j) Retrieve the j-th column
DataFrame.iget_value(i, j) Retrieve the value at row i and column j
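A minimal sketch of the position-based accessors above (s and df here are any hypothetical objects with an integer index):
s.iget(2)             # value at position 2, ignoring index labels
df.irow(0)            # first row as a Series
df.icol(1)            # second column as a Series
df.iget_value(0, 1)   # scalar at row 0, column 1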
Label-based slicing using ix now requires that the index be sorted (monotonic) unless both the start and endpoint are
contained in the index:
In [2]: s
Out[2]:
g -1.182230
m -0.276183
k -0.243550
a 1.628992
e 0.073308
c -0.539890
dtype: float64
In [3]: s.ix['k':'e']
Out[3]:
k -0.243550
a 1.628992
e 0.073308
dtype: float64
In [12]: s.ix['b':'h']
KeyError 'b'
If the index had been sorted, the range selection would have been possible:
In [4]: s2 = s.sort_index()
In [5]: s2
Out[5]:
a 1.628992
c -0.539890
e 0.073308
g -1.182230
k -0.243550
m -0.276183
dtype: float64
In [6]: s2.ix['b':'h']
Out[6]:
c -0.539890
e 0.073308
g -1.182230
dtype: float64
As a notational convenience, you can pass a sequence of labels or a label slice to a Series when getting and setting
values via [] (i.e. the __getitem__ and __setitem__ methods). The behavior will be the same as passing
similar input to ix, except in the case of integer indexing:
In [9]: s
Out[9]:
a -0.297788
c 0.499769
e 0.810531
g 0.414649
k -1.551478
m 1.012459
Length: 6, dtype: float64
In [10]: s[['m', 'a', 'c', 'e']]
Out[10]:
m 1.012459
a -0.297788
c 0.499769
e 0.810531
Length: 4, dtype: float64
In [11]: s['b':'l']
Out[11]:
c 0.499769
e 0.810531
g 0.414649
k -1.551478
Length: 4, dtype: float64
In [12]: s['c':'k']
Out[12]:
c 0.499769
e 0.810531
g 0.414649
k -1.551478
Length: 4, dtype: float64
In the case of integer indexes, the behavior will be exactly as before (shadowing ndarray):
2 0.026488
Length: 3, dtype: float64
In [15]: s[1:5]
Out[15]:
2 0.026488
4 0.928877
6 -1.264991
8 0.419449
Length: 4, dtype: float64
If you wish to do indexing with sequences and slicing on an integer index with label semantics, use ix.
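For example (a hedged sketch, reusing the even-integer-indexed s from above):
s.ix[[4, 0, 2]]   # selection by label, not by position
s.ix[2:6]         # label-based slice; both endpoints are included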
Cythonized GroupBy aggregations no longer presort the data, thus achieving a significant speedup (GH93).
GroupBy aggregations with Python functions significantly sped up by clever manipulation of the ndarray data
type in Cython (GH496).
Better error message in DataFrame constructor when passed column labels don't match data (GH497)
Substantially improve performance of multi-GroupBy aggregation when a Python function is passed, reuse
ndarray object in Cython (GH496)
Can store objects indexed by tuples and floats in HDFStore (GH492)
Don't print length by default in Series.to_string, add length option (GH489)
Improve Cython code for multi-groupby to aggregate without having to sort the data (GH93)
Improve MultiIndex reindexing speed by storing tuples in the MultiIndex, test for backwards unpickling com-
patibility
Improve column reindexing performance by using specialized Cython take function
Further performance tweaking of Series.__getitem__ for standard use cases
Avoid Index dict creation in some cases (i.e. when getting slices, etc.), regression from prior versions
Friendlier error message in setup.py if NumPy not installed
Use common set of NA-handling operations (sum, mean, etc.) in Panel class also (GH536)
Default name assignment when calling reset_index on DataFrame with a regular (non-hierarchical) index
(GH476)
Use Cythonized groupers when possible in Series/DataFrame stat ops with level parameter passed (GH545)
Ported skiplist data structure to C to speed up rolling_median by about 5-10x in most typical use cases
(GH374)
Improve memory usage of DataFrame.describe (do not copy data unnecessarily) (PR #425)
Optimize scalar value lookups in the general case by 25% or more in Series and DataFrame
Fix performance regression in cross-sectional count in DataFrame, affecting DataFrame.dropna speed
Column deletion in DataFrame copies no data (computes views on blocks) (GH #158)
Added Series.isin function which checks if each value is contained in a passed sequence (GH289)
Added float_format option to Series.to_string
Added skip_footer (GH291) and converters (GH343) options to read_csv and read_table
Added drop_duplicates and duplicated functions for removing duplicate DataFrame rows and check-
ing for duplicate rows, respectively (GH319)
Implemented operators &, |, ^, - on DataFrame (GH347)
Added Series.mad, mean absolute deviation
Added QuarterEnd DateOffset (GH321)
Added dot to DataFrame (GH65)
Added orient option to Panel.from_dict (GH359, GH301)
Added orient option to DataFrame.from_dict
Added passing list of tuples or list of lists to DataFrame.from_records (GH357)
Added multiple levels to groupby (GH103)
Allow multiple columns in by argument of DataFrame.sort_index (GH92, GH362)
Added fast get_value and put_value methods to DataFrame (GH360)
Added cov instance methods to Series and DataFrame (GH194, GH362)
Added kind='bar' option to DataFrame.plot (GH348)
Added idxmin and idxmax to Series and DataFrame (GH286)
Added read_clipboard function to parse DataFrame from clipboard (GH300)
Added nunique function to Series for counting unique elements (GH297)
Made DataFrame constructor use Series name if no columns passed (GH373)
Support regular expressions in read_table/read_csv (GH364)
Added DataFrame.to_html for writing DataFrame to HTML (GH387)
Added support for MaskedArray data in DataFrame, masked values converted to NaN (GH396)
Added DataFrame.boxplot function (GH368)
Can pass extra args, kwds to DataFrame.apply (GH376)
Implement DataFrame.join with vector on argument (GH312)
Added legend boolean flag to DataFrame.plot (GH324)
Can pass multiple levels to stack and unstack (GH370)
Can pass multiple values columns to pivot_table (GH381)
Use Series name in GroupBy for result index (GH363)
Added raw option to DataFrame.apply for performance if only need ndarray (GH309)
Added proper, tested weighted least squares to standard and panel OLS (GH303)
Added convenience set_index function for creating a DataFrame index from its existing columns
Implemented groupby hierarchical index level name (GH223)
Added support for different delimiters in DataFrame.to_csv (GH244)
TODO: DOCS ABOUT TAKE METHODS
VBENCH Major performance improvements in file parsing functions read_csv and read_table
VBENCH Added Cython function for converting tuples to ndarray very fast. Speeds up many MultiIndex-related
operations
VBENCH Refactored merging / joining code into a tidy class and disabled unnecessary computations in the
float/object case, thus getting about 10% better performance (GH211)
VBENCH Improved speed of DataFrame.xs on mixed-type DataFrame objects by about 5x, regression from
0.3.0 (GH215)
VBENCH With new DataFrame.align method, speeding up binary operations between differently-indexed
DataFrame objects by 10-25%.
VBENCH Significantly sped up conversion of nested dict into DataFrame (GH212)
VBENCH Significantly speed up DataFrame __repr__ and count on large mixed-type DataFrame objects
Altered binary operations on differently-indexed SparseSeries objects to use the integer-based (dense) alignment
logic which is faster with a larger number of blocks (GH205)
Wrote faster Cython data alignment / merging routines resulting in substantial speed increases
Improved performance of isnull and notnull, a regression from v0.3.0 (GH187)
Refactored code related to DataFrame.join so that intermediate aligned copies of the data in each
DataFrame argument do not need to be created. Substantial performance increases result (GH176)
Substantially improved performance of generic Index.intersection and Index.union
Implemented BlockManager.take resulting in significantly faster take performance on mixed-type
DataFrame objects (GH104)
Improved performance of Series.sort_index
Significant groupby performance enhancement: removed unnecessary integrity checks in DataFrame internals
that were slowing down slicing operations to retrieve groups
Optimized _ensure_index function resulting in performance savings in type-checking Index objects
Wrote fast time series merging / joining methods in Cython. Will be integrated later into DataFrame.join and
related functions
TWO
INSTALLATION
The easiest way for the majority of users to install pandas is to install it as part of the Anaconda distribution, a cross
platform distribution for data analysis and scientific computing. This is the recommended installation method for most
users.
Instructions for installing from source, PyPI, various Linux distributions, or a development version are also provided.
Installing pandas and the rest of the NumPy and SciPy stack can be a little difficult for inexperienced users.
The simplest way to install not only pandas, but Python and the most popular packages that make up the SciPy
stack (IPython, NumPy, Matplotlib, ...) is with Anaconda, a cross-platform (Linux, Mac OS X, Windows) Python
distribution for data analytics and scientific computing.
After running a simple installer, the user will have access to pandas and the rest of the SciPy stack without needing to
install anything else, and without needing to wait for any software to be compiled.
Installation instructions for Anaconda can be found here.
A full list of the packages available as part of the Anaconda distribution can be found here.
An additional advantage of installing with Anaconda is that you don't require admin rights to install it, it will install
in the user's home directory, and this also makes it trivial to delete Anaconda at a later date (just delete that folder).
The previous section outlined how to get pandas installed as part of the Anaconda distribution. However this approach
means you will install well over one hundred packages and involves downloading the installer which is a few hundred
megabytes in size.
If you want more control over which packages are installed, or have limited internet bandwidth, then installing pandas
with Miniconda may be a better solution.
Conda is the package manager that the Anaconda distribution is built upon. It is a package manager that is both
cross-platform and language agnostic (it can play a similar role to a pip and virtualenv combination).
Miniconda allows you to create a minimal, self-contained Python installation, and then use the Conda command to
install additional packages.
First you will need Conda to be installed; downloading and running the Miniconda installer will do this for you. The
installer can be found here.
The next step is to create a new conda environment (these are analogous to a virtualenv, but they also allow you to
specify precisely which Python version to install). Run the following commands from a terminal window:
conda create -n name_of_my_env python
This will create a minimal environment with only Python installed in it. To put yourself inside this environment run:
activate name_of_my_env
The final step required is to install pandas. This can be done with the following command:
conda install pandas
If you require any packages that are available to pip but not conda, simply install pip, and use pip to install those
packages:
conda install pip
pip install some_package   # hypothetical package name
Installing pandas from PyPI (e.g. with pip install pandas) will likely require the installation of a number of
dependencies, including NumPy, will require a compiler to compile required bits of code, and can take a few minutes
to complete.
The commands in this table will install pandas for Python 2 from your distribution. To install pandas for Python 3 you
may need to use the package python3-pandas.
See the contributing documentation for complete instructions on building from the git source tree. Further, see creating
a development environment if you wish to create a pandas development environment.
pandas is equipped with an exhaustive set of unit tests covering about 97% of the codebase as of this writing. To run it
on your machine to verify that everything is working (and you have all of the dependencies, soft and hard, installed),
make sure you have pytest and run:
>>> import pandas as pd
>>> pd.test()
----------------------------------------------------------------------
Ran 9252 tests in 368.339s
OK (SKIP=117)
2.3 Dependencies
setuptools
NumPy: 1.7.1 or higher
python-dateutil: 1.5 or higher
pytz: Needed for time zone support
numexpr: for accelerating certain numerical operations. numexpr uses multiple cores as well as smart chunking
and caching to achieve large speedups. If installed, must be Version 2.4.6 or higher.
bottleneck: for accelerating certain types of nan evaluations. bottleneck uses specialized cython routines
to achieve large speedups.
Note: You are highly encouraged to install these libraries, as they provide large speedups, especially if working with
large data sets.
Warning:
if you install BeautifulSoup4 you must install either lxml or html5lib or both. read_html() will
not work with only BeautifulSoup4 installed.
You are highly encouraged to read HTML Table Parsing gotchas. It explains issues surrounding the
installation and usage of the above three libraries.
You may need to install an older version of BeautifulSoup4: Versions 4.2.1, 4.1.3 and 4.0.2 have been
confirmed for 64 and 32-bit Ubuntu/Debian
Note:
if you're on a system with apt-get you can do
sudo apt-get build-dep python-lxml
to get the necessary dependencies for installation of lxml. This will prevent further headaches down the
line.
Note: Without the optional dependencies, many useful features will not work. Hence, it is highly recommended that
you install these. A packaged distribution like Anaconda, or Enthought Canopy may be worth considering.
THREE
CONTRIBUTING TO PANDAS
Table of contents:
Where to start?
Bug reports and enhancement requests
Working with the code
Version control, Git, and GitHub
Getting started with Git
Forking
Creating a branch
Creating a development environment
Creating a Windows development environment
Making changes
Contributing to the documentation
About the pandas documentation
How to build the pandas documentation
* Requirements
* Building the documentation
* Building master branch documentation
Contributing to the code base
Code standards
* C (cpplint)
* Python (PEP8)
* Backwards Compatibility
Testing With Continuous Integration
Test-driven development/code writing
* Writing tests
* Transitioning to pytest
* Using pytest
Running the test suite
Running the performance test suite
Documenting your code
Contributing your changes to pandas
Committing your code
Combining commits
Pushing your changes
Review your code
Finally, make the pull request
Delete your merged branch (optional)
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
If you are simply looking to start working with the pandas codebase, navigate to the GitHub issues tab and start
looking through interesting issues. There are a number of issues listed under Docs and Difficulty Novice where you
could start out.
Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and
thinking this can be improved...you can do something about it!
Feel free to ask questions on the mailing list or on Gitter.
Bug reports are an important part of making pandas more stable. Having a complete bug report will allow others to
reproduce the bug and provide insight into fixing. Because many versions of pandas are supported, knowing version
information will also identify improvements made since previous versions. Trying the bug-producing code out on the
master branch is often a worthwhile exercise to confirm the bug still exists. It is also worth searching existing bug
reports and pull requests to see if the issue has already been reported and/or fixed.
Bug reports must:
1. Include a short, self-contained Python snippet reproducing the problem. You can format the code nicely by
using GitHub Flavored Markdown:
```python
>>> from pandas import DataFrame
>>> df = DataFrame(...)
...
```
2. Include the full version string of pandas and its dependencies. You can use the built-in function:
>>> import pandas as pd
>>> pd.show_versions()
3. Explain why the current behavior is wrong/not desired and what you expect instead.
The issue will then show up to the pandas community and be open to comments/ideas from others.
Now that you have an issue you want to fix, enhancement to add, or documentation to improve, you need to learn how
to work with GitHub and the pandas code base.
To the new user, working with Git is one of the more daunting aspects of contributing to pandas. It can very quickly
become overwhelming, but sticking to the guidelines below will help keep the process straightforward and mostly
trouble free. As always, if you are having difficulties please feel free to ask for help.
The code is hosted on GitHub. To contribute you will need to sign up for a free GitHub account. We use Git for
version control to allow many people to work together on the project.
Some great resources for learning Git:
the GitHub help pages.
the NumPy documentation.
Matthew Brett's Pydagogue.
GitHub has instructions for installing git, setting up your SSH key, and configuring git. All these steps need to be
completed before you can work seamlessly between your local repository and GitHub.
3.3.3 Forking
You will need your own fork to work on the code. Go to the pandas project page and hit the Fork button. You will
want to clone your fork to your machine:
git clone https://github.com/your-user-name/pandas.git pandas-yourname
cd pandas-yourname
git remote add upstream git://github.com/pandas-dev/pandas.git
This creates the directory pandas-yourname and connects your repository to the upstream (main project) pandas
repository.
You want your master branch to reflect only production-ready code, so create a feature branch for making your changes.
For example:
git branch shiny-new-feature
git checkout shiny-new-feature
This changes your working directory to the shiny-new-feature branch. Keep any changes in this branch specific to one
bug or feature so it is clear what the branch brings to pandas. You can have many shiny-new-features and switch in
between them using the git checkout command.
To update this branch, you need to retrieve the changes from the master branch:
git fetch upstream
git rebase upstream/master
This will replay your commits on top of the latest pandas git master. If this leads to merge conflicts, you must resolve
these before submitting your pull request. If you have uncommitted changes, you will need to stash them prior to
updating. This will effectively store your changes and they can be reapplied after updating.
Warning: If you are on Windows, see here for a fully compliant Windows environment.
This will create the new environment, and not touch any of your existing environments, nor any existing python
installation. It will install all of the basic dependencies of pandas, as well as the development and testing tools. If you
would like to install other dependencies, you can install them as follows:
activate pandas_dev
You will then see a confirmation message to indicate you are in the new development environment.
To view your environments:
conda info -e
To return to your root environment, on Windows:
deactivate
or on OSX / Linux:
source deactivate
To build on Windows, you need to have compilers installed to build the extensions. You will need to install the
appropriate Visual Studio compilers: VS 2008 for Python 2.7, VS 2010 for Python 3.4, and VS 2015 for Python 3.5 and 3.6.
For Python 2.7, you can install the mingw compiler which will work equivalently to VS 2008:
or use the Microsoft Visual Studio VC++ compiler for Python. Note that you have to check the x64 box to install the
x64 extension building capability as this is not installed by default.
For Python 3.4, you can download and install the Windows 7.1 SDK. Read the references below as there may be
various gotchas during the installation.
For Python 3.5 and 3.6, you can download and install the Visual Studio 2015 Community Edition.
Here are some references and blogs:
https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/
https://github.com/conda/conda-recipes/wiki/Building-from-Source-on-Windows-32-bit-and-64-bit
https://cowboyprogrammer.org/building-python-wheels-for-windows/
https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/
https://support.enthought.com/hc/en-us/articles/204469260-Building-Python-extensions-with-Canopy
Before making your code changes, it is often necessary to build the code that was just checked out. There are two
primary methods of doing this.
1. The best way to develop pandas is to build the C extensions in-place by running:
python setup.py build_ext --inplace
If you start up the Python interpreter in the pandas source directory you will call the built C extensions
2. Another very common option is to do a develop install of pandas:
python setup.py develop
This makes a symbolic link that tells the Python interpreter to import pandas from your development directory.
Thus, you can always be using the development version on your system without being inside the clone directory.
If you're not the developer type, contributing to the documentation is still of huge value. You don't even have to be an
expert on pandas to do so! Something as simple as rewriting small passages for clarity as you reference the docs is a
simple but effective way to contribute. The next person to read that passage will be in your debt!
In fact, there are sections of the docs that are worse off after being written by experts. If something in the docs doesn't
make sense to you, updating the relevant section after you figure it out is a simple way to ensure it will help the next
person.
Documentation:
The documentation is written in reStructuredText, which is almost like writing in plain English, and built using
Sphinx. The Sphinx Documentation has an excellent introduction to reST. Review the Sphinx docs to perform more
complex changes to the documentation as well.
Some other important things to know about the docs:
The pandas documentation consists of two parts: the docstrings in the code itself and the docs in this folder
pandas/doc/.
The docstrings provide a clear explanation of the usage of the individual functions, while the documentation
in this folder consists of tutorial-like overviews per topic together with some other information (whats new,
installation, etc).
The docstrings follow the Numpy Docstring Standard, which is used widely in the Scientific Python com-
munity. This standard specifies the format of the different sections of the docstring. See this document for a
detailed explanation, or look at some of the existing functions to extend it in a similar manner.
The tutorials make heavy use of the ipython directive sphinx extension. This directive lets you put code in the
documentation which will be run during the doc build. For example:
.. ipython:: python
x = 2
x**3
will be rendered as:
In [1]: x = 2
In [2]: x**3
Out[2]: 8
Almost all code examples in the docs are run (and the output saved) during the doc build. This approach means
that code examples will always be up to date, but it does make the doc building a bit more complex.
Note: The .rst files are used to automatically generate Markdown and HTML versions of the docs. For this reason,
please do not edit CONTRIBUTING.md directly, but instead make any changes to doc/source/contributing.rst.
Then, to generate CONTRIBUTING.md, use pandoc with the following command:
pandoc doc/source/contributing.rst -t markdown_github > CONTRIBUTING.md
The utility script scripts/api_rst_coverage.py can be used to compare the list of methods documented in
doc/source/api.rst (which is used to generate the API Reference page) and the actual public methods. This
will identify methods documented in doc/source/api.rst that are not actually class methods, and existing
methods that are not documented in doc/source/api.rst.
3.4.2.1 Requirements
First, you need to have a development environment to be able to build pandas (see the docs on creating a development
environment above). Further, to build the docs, there are some extra requirements: you will need to have sphinx and
ipython installed. numpydoc is used to parse the docstrings that follow the Numpy Docstring Standard (see above),
but you don't need to install this because a local copy of numpydoc is included in the pandas source code. nbsphinx
is required to build the Jupyter notebooks included in the documentation.
If you have a conda environment named pandas_dev, you can install the extra requirements with:
Furthermore, it is recommended to have all optional dependencies installed. This is not strictly necessary, but be
aware that you will see some error messages when building the docs. This happens because all the code in the
documentation is executed during the doc build, and so code examples using optional dependencies will generate
errors. Run pd.show_versions() to get an overview of the installed version of all dependencies.
So how do you build the docs? Navigate to your local pandas/doc/ directory in the console and run:
python make.py html
Then you can find the HTML output in the folder pandas/doc/build/html/.
The first time you build the docs, it will take quite a while because it has to run all the code examples and build all the
generated docstring pages. In subsequent invocations, sphinx will try to only build the pages that have been modified.
Starting with pandas 0.13.1 you can tell make.py to compile only a single section of the docs, greatly reducing the
turn-around time for checking your changes. You will be prompted to delete .rst files that aren't required. This
is okay because the prior versions of these files can be checked out from git. However, you must make sure not to
commit the file deletions to your Git repository!
For comparison, a full documentation build may take 10 minutes, a --no-api build may take 3 minutes, and a single
section may take 15 seconds. Subsequent builds, which only process portions you have changed, will be faster. Open
the following file in a web browser to see the full documentation you just built:
pandas/docs/build/html/index.html
And you'll have the satisfaction of seeing your new and improved documentation!
When pull requests are merged into the pandas master branch, the main parts of the documentation are also built by
Travis-CI. These docs are then hosted here, see also the Continuous Integration section.
Code Base:
Code standards
C (cpplint)
Python (PEP8)
Backwards Compatibility
Testing With Continuous Integration
Test-driven development/code writing
Writing tests
Transitioning to pytest
Using pytest
Running the test suite
Writing good code is not just about what you write. It is also about how you write it. During Continuous Integration
testing, several tools will be run to check your code for stylistic errors. Generating any warnings will cause the test to
fail. Thus, good style is a requirement for submitting code to pandas.
In addition, because a lot of people use our library, it is important that we do not make sudden changes to the code that could break a lot of user code; that is, we need it to be as backwards compatible as possible to avoid mass breakages.
Additional standards are outlined on the code style wiki page.
3.5.1.1 C (cpplint)
pandas uses the Google standard. Google provides an open source style checker called cpplint, but we use a fork
of it that can be found here. Here are some of the more common cpplint issues:
we restrict line-length to 80 characters to promote readability
every header file must include a header guard to avoid name collisions if re-included
Continuous Integration will run the cpplint tool and report any stylistic errors in your code. Therefore, it is helpful
before submitting code to run the check yourself:
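The check itself is missing from this copy; a sketch of the kind of invocation used (the filter list is abbreviated here, so adjust it to match the fork's documentation):

cpplint --extensions=c,h --headers=h modified-c-file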
To make your commits compliant with this standard, you can install the ClangFormat tool, which can be downloaded
here. To configure, in your home directory, run the following command:
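clang-format can emit a starting configuration file; presumably the dropped step was:

clang-format -style=google -dump-config > .clang-format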
Then modify the file to ensure that any indentation width parameters are at least four. Once configured, you can run
the tool as follows:
clang-format modified-c-file
This will output what your file will look like if the changes are made, and to apply them, just run the following
command:
clang-format -i modified-c-file
To run the tool on an entire directory, you can run the following analogous commands:
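By analogy with the single-file commands above, something like:

clang-format modified-c-directory/*.c modified-c-directory/*.h
clang-format -i modified-c-directory/*.c modified-c-directory/*.h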
Do note that this tool is best-effort, meaning that it will try to correct as many errors as possible, but it may not correct
all of them. Thus, it is recommended that you run cpplint to double check and make any other style fixes manually.
3.5.1.2 Python (PEP8)
pandas uses the PEP8 standard. There are several tools to ensure you abide by this standard. Here are some of the
more common PEP8 issues:
we restrict line-length to 79 characters to promote readability
passing arguments should have spaces after commas, e.g. foo(arg1, arg2, kw1='bar')
Continuous Integration will run the flake8 tool and report any stylistic errors in your code. Therefore, it is helpful
before submitting code to run the check yourself on the diff:
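The diff check lost here is likely of this shape, using flake8's --diff mode:

git diff master | flake8 --diff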
This command will catch any stylistic errors in your changes specifically, but beware it may not catch all of them.
For example, if you delete the only usage of an imported function, it is stylistically incorrect to import an unused
function. However, style-checking the diff will not catch this because the actual import is not part of the diff. Thus,
for completeness, you should run this command, though it will take longer:
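A reconstruction of that fuller check — pipe all changed .py files through flake8:

git diff master --name-only -- '*.py' | grep 'pandas/' | xargs -r flake8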
Note that on OSX, the -r flag is not available, so you have to omit it and run this slightly modified command:
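That is, presumably (the -r flag belongs to GNU xargs and is absent on BSD xargs):

git diff master --name-only -- '*.py' | grep 'pandas/' | xargs flake8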
Please try to maintain backward compatibility. pandas has lots of users with lots of existing code, so don't break it if
at all possible. If you think breakage is required, clearly state why as part of the pull request. Also, be careful when
changing method signatures and add deprecation warnings where needed.
The pandas test suite will run automatically on Travis-CI, Appveyor, and Circle CI continuous integration services,
once your pull request is submitted. However, if you wish to run the test suite on a branch prior to submitting the pull
request, then the continuous integration services need to be hooked to your GitHub repository. Instructions are here
for Travis-CI, Appveyor , and CircleCI.
A pull-request will be considered for merging when you have an all green build. If any tests are failing, then you
will get a red X, where you can click through to see the individual failed tests. This is an example of a green build.
Note: Each time you push to your fork, a new run of the tests will be triggered on the CI. Appveyor will auto-cancel
any non-currently-running tests for that same pull-request. You can enable the auto-cancel feature for Travis-CI here
and for CircleCI here.
pandas is serious about testing and strongly encourages contributors to embrace test-driven development (TDD). This
development process relies on the repetition of a very short development cycle: first the developer writes an (initially
failing) automated test case that defines a desired improvement or new function, then produces the minimum amount
of code to pass that test. So, before actually writing any code, you should write your tests. Often the test can
be taken from the original GitHub issue. However, it is always worth considering additional use cases and writing
corresponding tests.
Adding tests is one of the most common requests after code is pushed to pandas. Therefore, it is worth getting in the
habit of writing tests ahead of time so this is never an issue.
Like many packages, pandas uses pytest and the convenient extensions in numpy.testing.
All tests should go into the tests subdirectory of the specific package. This folder contains many current examples of
tests, and we suggest looking to these for inspiration. If your test requires working with files or network connectivity,
there is more information on the testing page of the wiki.
The pandas.util.testing module has many special assert functions that make it easier to make statements
about whether Series or DataFrame objects are equivalent. The easiest way to verify that your code is correct is to
explicitly construct the result you expect, then compare the actual result to the expected correct result:
def test_pivot(self):
    # input data reconstructed so that its pivot yields the expected frame below
    data = {
        'index' : ['A', 'B', 'C', 'C', 'B', 'A'],
        'columns' : ['One', 'One', 'One', 'Two', 'Two', 'Two'],
        'values' : [1., 2., 3., 3., 2., 1.]
    }

    frame = DataFrame(data)
    pivoted = frame.pivot(index='index', columns='columns', values='values')

    expected = DataFrame({
        'One' : {'A' : 1., 'B' : 2., 'C' : 3.},
        'Two' : {'A' : 1., 'B' : 2., 'C' : 3.}
    })

    assert_frame_equal(pivoted, expected)
pandas' existing test structure is mostly class-based, meaning that you will typically find tests wrapped in a class.
class TestReallyCoolFeature(object):
....
Going forward, we are moving to a more functional style using the pytest framework, which offers a richer testing
framework that will facilitate testing and developing. Thus, instead of writing test classes, we will write test functions
like this:
def test_really_cool_feature():
....
Here is an example of a self-contained set of tests that illustrate multiple features that we like to use.
functional style: tests are like test_* and only take arguments that are either fixtures or parameters
using parametrize: allow testing of multiple cases
fixture, code for object construction, on a per-test basis
using bare assert for scalars and truth-testing
tm.assert_series_equal (and its counterpart tm.assert_frame_equal), for pandas object comparisons.
the typical pattern of constructing an expected and comparing versus the result
We would name this file test_cool_feature.py and put in an appropriate place in the pandas/tests/
structure.
import pytest
import numpy as np
import pandas as pd
from pandas.util import testing as tm

@pytest.fixture
def series():
    return pd.Series([1, 2, 3])
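The rest of the example did not survive extraction; a sketch that reproduces the parametrized test names shown in the output below (the int8..int64 parameters and the test_dtypes/test_series names are taken from that output):

@pytest.fixture(params=['int8', 'int16', 'int32', 'int64'])
def dtype(request):
    # each test using this fixture runs once per listed dtype
    return request.param

def test_dtypes(dtype):
    assert str(np.dtype(dtype)) == dtype

def test_series(series, dtype):
    result = series.astype(dtype)
    assert result.dtype == dtype

    expected = pd.Series([1, 2, 3], dtype=dtype)
    tm.assert_series_equal(result, expected)

Running these with pytest -v then reports each parametrized case: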
tester.py::test_dtypes[int8] PASSED
tester.py::test_dtypes[int16] PASSED
tester.py::test_dtypes[int32] PASSED
tester.py::test_dtypes[int64] PASSED
tester.py::test_series[int8] PASSED
tester.py::test_series[int16] PASSED
tester.py::test_series[int32] PASSED
tester.py::test_series[int64] PASSED
Tests that we have parametrized are now accessible via the test name; for example, we could run these with -k int8 to sub-select only those tests which match int8:
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED
The tests can then be run directly inside your Git clone (without having to install pandas) by typing:
pytest pandas
The test suite is exhaustive and takes around 20 minutes to run. Often it is worth running only a subset of tests around your changes first, before running the entire suite.
The easiest way to do this is with:
pytest pandas/tests/[test-module].py
pytest pandas/tests/[test-module].py::[TestClass]
pytest pandas/tests/[test-module].py::[TestClass]::[test_method]
Using pytest-xdist, one can speed up local testing on multicore machines. To use this feature, you will need to install
pytest-xdist via:
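Presumably via pip:

pip install pytest-xdist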
Two scripts are provided to assist with this. These scripts distribute testing across 4 threads.
On Unix variants, one can type:
test_fast.sh
On Windows, one can type:
test_fast.bat
This can significantly reduce the time it takes to locally run tests before submitting a pull request.
For more, see the pytest documentation.
New in version 0.20.0.
Furthermore, one can run
pd.test()
with an imported pandas to run tests similarly.
Performance matters and it is worth considering whether your code has introduced performance regressions. pandas
is in the process of migrating to asv benchmarks to enable easy monitoring of the performance of critical pandas
operations. These benchmarks are all found in the pandas/asv_bench directory. asv supports both python2 and
python3.
To use all features of asv, you will need either conda or virtualenv. For more details please check the asv
installation webpage.
To install asv:
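asv is pip-installable; one route (the original docs may have pinned a specific source):

pip install asv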
If you need to run a benchmark, change your directory to asv_bench/ and run:
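The command was lost here. asv's continuous mode compares two revisions and matches the description that follows — the -f 1.1 factor corresponds to the 10% threshold mentioned below, and upstream/master assumes the remote naming used earlier in this chapter:

asv continuous -f 1.1 upstream/master HEAD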
You can replace HEAD with the name of the branch you are working on, and report benchmarks that changed by
more than 10%. The command uses conda by default for creating the benchmark environments. If you want to use
virtualenv instead, write:
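asv continuous -f 1.1 -E virtualenv upstream/master HEAD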
The -E virtualenv option should be added to all asv commands that run benchmarks. The default value is
defined in asv.conf.json.
Running the full benchmark suite can take up to one hour and use up to 3GB of RAM. Usually it is sufficient to paste only
a subset of the results into the pull request to show that the committed changes do not cause unexpected performance
regressions. You can run specific benchmarks using the -b flag, which takes a regular expression. For example, this
will only run tests from a pandas/asv_bench/benchmarks/groupby.py file:
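Presumably a regex anchored on the module name:

asv continuous -f 1.1 upstream/master HEAD -b ^groupby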
If you want to only run a specific group of tests from a file, you can do it using . as a separator. For example:
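With a hypothetical benchmark group name standing in for whatever group you care about:

asv continuous -f 1.1 upstream/master HEAD -b groupby.groupby_agg_builtins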
You can also run specific benchmarks using the asv dev command; this will display stderr from the benchmarks, and use your local python that comes from your $PATH.
Information on how to write a benchmark and how to use asv can be found in the asv documentation.
Changes should be reflected in the release notes located in doc/source/whatsnew/vx.y.z.txt. This file
contains an ongoing change log for each release. Add an entry to this file to document your fix, enhancement or
(unavoidable) breaking change. Make sure to include the GitHub issue number when adding your entry (using
GH1234 where 1234 is the issue/pull request number).
If your code is an enhancement, it is most likely necessary to add usage examples to the existing documentation. This
can be done following the section regarding documentation above. Further, to let users know when this feature was
added, the versionadded directive is used. The sphinx syntax for that is:
.. versionadded:: 0.17.0
This will put the text New in version 0.17.0 wherever you put the sphinx directive. This should also be put in the
docstring when adding a new function or method (example) or a new keyword argument (example).
Keep style fixes to a separate commit to make your pull request more readable.
Once you've made changes, you can see them by typing:
git status
If you have created a new file, it is not being tracked by git. Add it by typing:
git add path/to/file-you-added.py
Doing git status again should give something like:
# On branch shiny-new-feature
#
# modified: /relative/path/to/file-you-added.py
#
Finally, commit your changes to your local repository with an explanatory message. Pandas uses a convention for
commit message prefixes and layout. Here are some common prefixes along with general guidelines for when to use
them:
ENH: Enhancement, new functionality
BUG: Bug fix
DOC: Additions/updates to documentation
TST: Additions/updates to tests
BLD: Updates to the build process/scripts
PERF: Performance improvement
CLN: Code cleanup
The following defines how a commit message should be structured. Please reference the relevant GitHub issues in
your commit message using GH1234 or #1234. Either style is fine, but the former is generally preferred:
a subject line with < 80 chars.
One blank line.
Optionally, a commit message body.
Now you can commit your changes in your local repository:
git commit -m "your commit message"
If you have multiple commits, you may want to combine them into one commit, often referred to as squashing or
rebasing. This is a common request by package maintainers when submitting a pull request as it maintains a more
compact commit history. To rebase your commits:
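That is, an interactive rebase over the last # commits:

git rebase -i HEAD~#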
Where # is the number of commits you want to combine. Then you can pick the relevant commit message and discard
others.
To squash to the master branch do:
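git rebase -i master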
Use the s option on a commit to squash, meaning to keep that commit's message for editing, or f to fixup, meaning to fold the commit into the previous one and discard its message.
Then you will need to push the branch (see below) forcefully to replace the current commits with the new ones:
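A force-push of the feature branch, e.g. (branch name as used throughout this chapter):

git push origin shiny-new-feature -f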
When you want your changes to appear publicly on your GitHub page, push your forked feature branch's commits:
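git push origin shiny-new-feature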
Here origin is the default name given to your remote repository on GitHub. You can see the remote repositories:
git remote -v
If you added the upstream repository as described above you will see something like:
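Something like the following, with illustrative URLs:

origin  git@github.com:yourname/pandas.git (fetch)
origin  git@github.com:yourname/pandas.git (push)
upstream        git://github.com/pandas-dev/pandas.git (fetch)
upstream        git://github.com/pandas-dev/pandas.git (push)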
Now your code is on GitHub, but it is not yet a part of the pandas project. For that to happen, a pull request needs to
be submitted on GitHub.
When you're ready to ask for a code review, file a pull request. Before you do, once again make sure that you have
followed all the guidelines outlined in this document regarding code style, tests, performance tests, and documentation.
You should also double check your branch changes against the branch it was based on:
1. Navigate to your repository on GitHub https://github.com/your-user-name/pandas
2. Click on Branches
3. Click on the Compare button for your feature branch
4. Select the base and compare branches, if necessary. This will be master and shiny-new-feature,
respectively.
If everything looks good, you are ready to make a pull request. A pull request is how code from a local repository
becomes available to the GitHub community and can be looked at and eventually merged into the master version. This
pull request and its associated changes will eventually be committed to the master branch and available in the next
release. To submit a pull request:
1. Navigate to your repository on GitHub
2. Click on the Pull Request button
3. You can then click on Commits and Files Changed to make sure everything looks okay one last time
4. Write a description of your changes in the Preview Discussion tab
If you need to make more changes, push additional commits to the same branch; this will automatically update your pull request with the latest code and restart the Continuous Integration tests.
Once your feature branch is accepted into upstream, you'll probably want to get rid of the branch. First, merge
upstream master into your branch so git knows it is safe to delete your branch:
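The commands were dropped here; a reconstruction, with remote names assumed to match the setup above:

git fetch upstream
git checkout master
git merge upstream/master

Then you can delete your feature branch:

git branch -d shiny-new-feature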
Make sure you use a lower-case -d, or else git won't warn you if your feature branch has not actually been merged.
The branch will still exist on GitHub, so to delete it there do:
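git push origin --delete shiny-new-feature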
FOUR
PACKAGE OVERVIEW
The best way to think about the pandas data structures is as flexible containers for lower dimensional data. For
example, DataFrame is a container for Series, and Panel is a container for DataFrame objects. We would like to be
able to insert and remove objects from these containers in a dictionary-like fashion.
Also, we would like sensible default behaviors for the common API functions which take into account the typical
orientation of time series and cross-sectional data sets. When using ndarrays to store 2- and 3-dimensional data, a
burden is placed on the user to consider the orientation of the data set when writing functions; axes are considered
more or less equivalent (except when C- or Fortran-contiguousness matters for performance). In pandas, the axes are
intended to lend more semantic meaning to the data; i.e., for a particular data set there is likely to be a right way to
orient the data. The goal, then, is to reduce the amount of mental effort required to code up data transformations in
downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to think of the index (the rows) and the
columns rather than axis 0 and axis 1. And iterating through the columns of the DataFrame thus results in more
readable code:
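The loop itself was lost in extraction; the idiom being described is simply:

for col in df.columns:
    series = df[col]
    # do something with series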
All pandas data structures are value-mutable (the values they contain can be altered) but not always size-mutable. The
length of a Series cannot be changed, but, for example, columns can be inserted into a DataFrame. However, the vast
majority of methods produce new objects and leave the input data untouched. In general, though, we like to favor
immutability where sensible.
The first stop for pandas issues and ideas is the Github Issue Tracker. If you have a general question, pandas community
experts can answer through Stack Overflow.
Longer discussions occur on the developer mailing list, and commercial support inquiries for Lambda Foundry should
be sent to: support@lambdafoundry.com
4.4 Credits
pandas development began at AQR Capital Management in April 2008. It was open-sourced at the end of 2009. AQR
continued to provide resources for development through the end of 2011, and continues to contribute bug reports today.
Since January 2012, Lambda Foundry has been providing development resources, as well as commercial support,
training, and consulting for pandas.
pandas is only made possible by a group of people around the world like you who have contributed new code, bug
reports, fixes, comments and ideas. A complete list can be found on Github.
4.5 Development Team
pandas is a part of the PyData project. The PyData Development Team is a collection of developers focused on the improvement of Python's data libraries. The core team that coordinates development can be found on Github. If you're interested in contributing, please visit the project website.
4.6 License
=======
License
=======
pandas license
==============
Copyright (c) 2011-2012, Lambda Foundry, Inc. and PyData Development Team
All rights reserved.
* Neither the name of the copyright holder nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
With this in mind, the following banner should be used in any source code
file to indicate the copyright and license terms:
#-----------------------------------------------------------------------------
# Copyright (c) 2012, PyData Development Team
# All rights reserved.
#
# Distributed under the terms of the BSD Simplified License.
#
# The full license is in the LICENSE file, distributed with this software.
#-----------------------------------------------------------------------------
FIVE
10 MINUTES TO PANDAS
This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.
Customarily, we import as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: import matplotlib.pyplot as plt
5.1 Object Creation
Creating a Series by passing a list of values, letting pandas create a default integer index:
In [4]: s = pd.Series([1,3,5,np.nan,6,8])
In [5]: s
Out[5]:
0 1.0
1 3.0
2 5.0
3 NaN
4 6.0
5 8.0
dtype: float64
Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns:
In [6]: dates = pd.date_range('20130101', periods=6)
In [7]: dates
Out[7]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [8]: df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
In [9]: df
Out[9]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
Creating a DataFrame by passing a dict of objects that can be converted to series-like:
In [10]: df2 = pd.DataFrame({ 'A' : 1.,
   ....:                      'B' : pd.Timestamp('20130102'),
   ....:                      'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
   ....:                      'D' : np.array([3] * 4,dtype='int32'),
   ....:                      'E' : pd.Categorical(["test","train","test","train"]),
   ....:                      'F' : 'foo' })
In [11]: df2
Out[11]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
If you're using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here's a subset of the attributes that will be completed:
In [13]: df2.<TAB>
df2.A df2.bool
df2.abs df2.boxplot
df2.add df2.C
df2.add_prefix df2.clip
df2.add_suffix df2.clip_lower
df2.align df2.clip_upper
df2.all df2.columns
df2.any df2.combine
df2.append df2.combine_first
df2.apply df2.compound
df2.applymap df2.consolidate
df2.as_blocks df2.convert_objects
df2.asfreq df2.copy
df2.as_matrix df2.corr
df2.astype df2.corrwith
df2.at df2.count
df2.at_time df2.cov
df2.axes df2.cummax
df2.B df2.cummin
df2.between_time df2.cumprod
df2.bfill df2.cumsum
df2.blocks df2.D
As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes
have been truncated for brevity.
5.2 Viewing Data
Here is how to view the top and bottom rows of the frame:
In [14]: df.head()
Out[14]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
In [15]: df.tail(3)
Out[15]:
A B C D
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
In [16]: df.index
Out[16]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [17]: df.columns
Out[17]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [18]: df.values
In [19]: df.describe()
Out[19]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean 0.073711 -0.431125 -0.687758 -0.233103
std 0.843157 0.922818 0.779887 0.973118
min -0.861849 -2.104569 -1.509059 -1.135632
25% -0.611510 -0.600794 -1.368714 -1.076610
50% 0.022070 -0.228039 -0.767252 -0.386188
75% 0.658444 0.041933 -0.034326 0.461706
max 1.212112 0.567020 0.276232 1.071804
In [20]: df.T
Out[20]:
2013-01-01 2013-01-02 2013-01-03 2013-01-04 2013-01-05 2013-01-06
A 0.469112 1.212112 -0.861849 0.721555 -0.424972 -0.673690
B -0.282863 -0.173215 -2.104569 -0.706771 0.567020 0.113648
C -1.509059 0.119209 -0.494929 -1.039575 0.276232 -1.478427
D -1.135632 -1.044236 1.071804 0.271860 -1.087401 0.524988
Sorting by an axis:
In [21]: df.sort_index(axis=1, ascending=False)
Sorting by values:
In [22]: df.sort_values(by='B')
Out[22]:
A B C D
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
5.3 Selection
Note: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for
interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc,
.iloc and .ix.
See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing
5.3.1 Getting
In [23]: df['A']
Out[23]:
2013-01-01 0.469112
2013-01-02 1.212112
2013-01-03 -0.861849
2013-01-04 0.721555
2013-01-05 -0.424972
2013-01-06 -0.673690
Freq: D, Name: A, dtype: float64
In [24]: df[0:3]
Out[24]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [25]: df['20130102':'20130104']
Out[25]:
A B C D
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
In [26]: df.loc[dates[0]]
Out[26]:
A 0.469112
B -0.282863
C -1.509059
D -1.135632
Name: 2013-01-01 00:00:00, dtype: float64
In [27]: df.loc[:,['A','B']]
Out[27]:
A B
2013-01-01 0.469112 -0.282863
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
2013-01-06 -0.673690 0.113648
In [28]: df.loc['20130102':'20130104',['A','B']]
Out[28]:
A B
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
In [29]: df.loc['20130102',['A','B']]
Out[29]:
A 1.212112
B -0.173215
Name: 2013-01-02 00:00:00, dtype: float64
In [30]: df.loc[dates[0],'A']
Out[30]: 0.46911229990718628
In [31]: df.at[dates[0],'A']
Out[31]: 0.46911229990718628
In [32]: df.iloc[3]
Out[32]:
A 0.721555
B -0.706771
C -1.039575
D 0.271860
Name: 2013-01-04 00:00:00, dtype: float64
In [33]: df.iloc[3:5,0:2]
Out[33]:
A B
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
In [34]: df.iloc[[1,2,4],[0,2]]
Out[34]:
A C
2013-01-02 1.212112 0.119209
2013-01-03 -0.861849 -0.494929
2013-01-05 -0.424972 0.276232
Using the isin() method for filtering:
In [41]: df2 = df.copy()
In [42]: df2['E'] = ['one', 'one','two','three','four','three']
In [43]: df2
Out[43]:
A B C D E
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632 one
2013-01-02 1.212112 -0.173215 0.119209 -1.044236 one
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804 two
2013-01-04 0.721555 -0.706771 -1.039575 0.271860 three
2013-01-05 -0.424972 0.567020 0.276232 -1.087401 four
2013-01-06 -0.673690 0.113648 -1.478427 0.524988 three
In [44]: df2[df2['E'].isin(['two','four'])]
Out[44]:
A B C D E
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804 two
2013-01-05 -0.424972 0.567020 0.276232 -1.087401 four
5.3.5 Setting
Setting a new column automatically aligns the data by the indexes.
In [45]: s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
In [46]: s1
Out[46]:
2013-01-02 1
2013-01-03 2
2013-01-04 3
2013-01-05 4
2013-01-06 5
2013-01-07 6
Freq: D, dtype: int64
In [47]: df['F'] = s1
In [48]: df.at[dates[0],'A'] = 0
In [49]: df.iat[0,1] = 0
Setting by assigning with a numpy array:
In [50]: df.loc[:,'D'] = np.array([5] * len(df))
The result of the prior setting operations:
In [51]: df
Out[51]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 5 NaN
2013-01-02 1.212112 -0.173215 0.119209 5 1.0
A where operation with setting:
In [52]: df2 = df.copy()
In [53]: df2[df2 > 0] = -df2
In [54]: df2
Out[54]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 -5 NaN
2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0
5.4 Missing Data
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
In [56]: df1.loc[dates[0]:dates[1],'E'] = 1
In [57]: df1
Out[57]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.509059 5 NaN 1.0
2013-01-02 1.212112 -0.173215 0.119209 5 1.0 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0 NaN
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0 NaN
In [60]: pd.isnull(df1)
Out[60]:
A B C D F E
2013-01-01 False False False False True False
2013-01-02 False False False False False False
2013-01-03 False False False False False True
2013-01-04 False False False False False True
5.5 Operations
5.5.1 Stats
In [61]: df.mean()
Out[61]:
A -0.004474
B -0.383981
C -0.687758
D 5.000000
F 3.000000
dtype: float64
In [62]: df.mean(1)
Out[62]:
2013-01-01 0.872735
2013-01-02 1.431621
2013-01-03 0.707731
2013-01-04 1.395042
2013-01-05 1.883656
2013-01-06 1.592306
Freq: D, dtype: float64
Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically
broadcasts along the specified dimension.
In [63]: s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
In [64]: s
Out[64]:
2013-01-01 NaN
2013-01-02 NaN
2013-01-03 1.0
2013-01-04 3.0
2013-01-05 5.0
2013-01-06 NaN
Freq: D, dtype: float64
In [65]: df.sub(s, axis='index')
Out[65]:
A B C D F
2013-01-01 NaN NaN NaN NaN NaN
2013-01-02 NaN NaN NaN NaN NaN
2013-01-03 -1.861849 -3.104569 -1.494929 4.0 1.0
2013-01-04 -2.278445 -3.706771 -4.039575 2.0 0.0
2013-01-05 -5.424972 -4.432980 -4.723768 0.0 -1.0
2013-01-06 NaN NaN NaN NaN NaN
5.5.2 Apply
Applying functions to the data:
In [67]: df.apply(lambda x: x.max() - x.min())
Out[67]:
A 2.073961
B 2.671590
C 1.785291
D 0.000000
F 4.000000
dtype: float64
5.5.3 Histogramming
In [68]: s = pd.Series(np.random.randint(0, 7, size=10))
In [69]: s
Out[69]:
0 4
1 2
2 1
3 2
4 6
5 4
6 4
7 6
8 4
9 4
dtype: int64
In [70]: s.value_counts()
Out[70]:
4 5
6 2
2 2
1 1
dtype: int64
5.5.4 String Methods
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each
element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions
by default (and in some cases always uses them). See more at Vectorized String Methods.
In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [72]: s.str.lower()
Out[72]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
5.6 Merge
5.6.1 Concat
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various
kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section
Concatenating pandas objects together with concat():
In [73]: df = pd.DataFrame(np.random.randn(10, 4))
In [74]: df
Out[74]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
# break it into pieces
In [75]: pieces = [df[:3], df[3:7], df[7:]]
In [76]: pd.concat(pieces)
Out[76]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
5 0.290213 0.495767 0.362949 1.548106
6 -1.131345 -0.089329 0.337863 -0.945867
7 -0.932132 1.956030 0.017587 -0.016692
8 -0.575247 0.254161 -1.143704 0.215897
9 1.193555 -0.077118 -0.408530 -0.862495
5.6.2 Join
SQL style merges. See the Database style joining section.
In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
In [79]: left
Out[79]:
key lval
0 foo 1
1 foo 2
In [80]: right
Out[80]:
key rval
0 foo 4
1 foo 5
In [81]: pd.merge(left, right, on='key')
Out[81]:
key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5
Another example that can be given is:
In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
In [83]: right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
In [84]: left
Out[84]:
key lval
0 foo 1
1 bar 2
In [85]: right
Out[85]:
key rval
0 foo 4
1 bar 5
In [86]: pd.merge(left, right, on='key')
Out[86]:
key lval rval
0 foo 1 4
1 bar 2 5
5.6.3 Append
Append rows to a dataframe. See the Appending section.
In [87]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
In [88]: df
Out[88]:
A B C D
0 1.346061 1.511763 1.627081 -0.990582
1 -0.441652 1.211526 0.268520 0.024580
2 -1.577585 0.396823 -0.105381 -0.532532
3 1.453749 1.208843 -0.080952 -0.264610
4 -0.727965 -0.589346 0.339969 -0.693205
5 -0.339355 0.593616 0.884345 1.591431
6 0.141809 0.220390 0.435589 0.192451
7 -0.096701 0.803351 1.715071 -0.708758
In [89]: s = df.iloc[3]
In [90]: df.append(s, ignore_index=True)
5.7 Grouping
By "group by" we are referring to a process involving one or more of the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
See the Grouping section
In [91]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
   ....:                    'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
   ....:                    'C' : np.random.randn(8),
   ....:                    'D' : np.random.randn(8)})
In [92]: df
Out[92]:
A B C D
0 foo one -1.202872 -0.055224
1 bar one -1.814470 2.395985
2 foo two 1.018601 1.552825
3 bar three -0.595447 0.166599
4 foo two 1.395433 0.047609
5 bar two -0.392670 -0.136473
6 foo one 0.007207 -0.561757
7 foo three 1.928123 -1.623033
In [93]: df.groupby('A').sum()
Out[93]:
C D
A
bar -2.802588 2.42611
foo 3.146492 -0.63958
Grouping by multiple columns forms a hierarchical index, to which we can then apply the function:
In [94]: df.groupby(['A','B']).sum()
Out[94]:
C D
A B
bar one -1.814470 2.395985
three -0.595447 0.166599
two -0.392670 -0.136473
foo one -1.195665 -0.616981
three 1.928123 -1.623033
two 2.414034 1.600434
5.8 Reshaping
5.8.1 Stack
In [95]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
   ....:                     ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))
In [96]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [97]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
In [98]: df2 = df[:4]
In [99]: df2
Out[99]:
A B
first second
bar one 0.029399 -0.542108
two 0.282696 -0.087302
baz one -1.575170 1.771208
two 0.816482 1.100230
The stack() method "compresses" a level in the DataFrame's columns:
In [100]: stacked = df2.stack()
In [101]: stacked
Out[101]:
first second
bar one A 0.029399
B -0.542108
two A 0.282696
B -0.087302
baz one A -1.575170
B 1.771208
two A 0.816482
B 1.100230
dtype: float64
With a stacked DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is
unstack(), which by default unstacks the last level:
In [102]: stacked.unstack()
Out[102]:
A B
first second
bar one 0.029399 -0.542108
two 0.282696 -0.087302
baz one -1.575170 1.771208
two 0.816482 1.100230
In [103]: stacked.unstack(1)
Out[103]:
second one two
first
bar A 0.029399 0.282696
B -0.542108 -0.087302
baz A -1.575170 0.816482
B 1.771208 1.100230
In [104]: stacked.unstack(0)
Out[104]:
first bar baz
second
one A 0.029399 -1.575170
B -0.542108 1.771208
two A 0.282696 0.816482
B -0.087302 1.100230
5.8.2 Pivot Tables
In [105]: df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
   .....:                    'B' : ['A', 'B', 'C'] * 4,
   .....:                    'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
   .....:                    'D' : np.random.randn(12),
   .....:                    'E' : np.random.randn(12)})
In [106]: df
Out[106]:
A B C D E
0 one A foo 1.418757 -0.179666
1 one B foo -1.879024 1.291836
2 two C foo 0.536826 -0.009614
3 three A bar 1.006160 0.392149
4 one B bar -0.029716 0.264599
5 one C bar -1.146178 -0.057409
6 two A foo 0.100900 -1.425638
7 three B foo -1.035018 1.024098
8 one C foo 0.314665 -0.106062
9 one A bar -0.773723 1.824375
10 two B bar -1.170653 0.595974
11 three C bar 0.648740 1.167115
We can produce pivot tables from this data very easily:
In [107]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
5.9 Time Series
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section.
In [108]: rng = pd.date_range('1/1/2012', periods=100, freq='S')
In [109]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [110]: ts.resample('5Min').sum()
Out[110]:
2012-01-01 25083
Freq: 5T, dtype: int64
Time zone representation:
In [111]: rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
In [112]: ts = pd.Series(np.random.randn(len(rng)), rng)
In [113]: ts
Out[113]:
2012-03-06 0.464000
2012-03-07 0.227371
2012-03-08 -0.496922
2012-03-09 0.306389
2012-03-10 -2.290613
Freq: D, dtype: float64
In [114]: ts_utc = ts.tz_localize('UTC')
In [115]: ts_utc
Out[115]:
2012-03-06 00:00:00+00:00 0.464000
2012-03-07 00:00:00+00:00 0.227371
2012-03-08 00:00:00+00:00 -0.496922
2012-03-09 00:00:00+00:00 0.306389
2012-03-10 00:00:00+00:00 -2.290613
Freq: D, dtype: float64
Convert to another time zone:
In [116]: ts_utc.tz_convert('US/Eastern')
Converting between time span representations:
In [117]: rng = pd.date_range('1/1/2012', periods=5, freq='M')
In [118]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [119]: ts
Out[119]:
2012-01-31 -1.134623
2012-02-29 -1.561819
2012-03-31 -0.260838
2012-04-30 0.281957
2012-05-31 1.523962
Freq: M, dtype: float64
In [120]: ps = ts.to_period()
In [121]: ps
Out[121]:
2012-01 -1.134623
2012-02 -1.561819
2012-03 -0.260838
2012-04 0.281957
2012-05 1.523962
Freq: M, dtype: float64
In [122]: ps.to_timestamp()
Out[122]:
2012-01-01 -1.134623
2012-02-01 -1.561819
2012-03-01 -0.260838
2012-04-01 0.281957
2012-05-01 1.523962
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [123]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
In [124]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [125]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
In [126]: ts.head()
Out[126]:
1990-03-01 09:00 -0.902937
1990-06-01 09:00 0.068159
1990-09-01 09:00 -0.057873
1990-12-01 09:00 -0.368204
1991-03-01 09:00 -1.144073
Freq: H, dtype: float64
5.10 Categoricals
Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.
In [127]: df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
Convert the raw grades to a categorical data type:
In [128]: df["grade"] = df["raw_grade"].astype("category")
In [129]: df["grade"]
Out[129]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
Rename the categories to more meaningful names (assigning to Series.cat.categories is in place!):
In [130]: df["grade"].cat.categories = ["very good", "good", "very bad"]
Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default):
In [131]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",
"good", "very good"])
In [132]: df["grade"]
Out[132]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
Grouping by a categorical column shows also empty categories:
In [134]: df.groupby("grade").size()
Out[134]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
5.11 Plotting
Plotting docs.
In [135]: ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
In [136]: ts = ts.cumsum()
In [137]: ts.plot()
Out[137]: <matplotlib.axes._subplots.AxesSubplot at 0x11b807588>
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
In [138]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['A', 'B', 'C', 'D'])
In [139]: df = df.cumsum()
In [140]: plt.figure(); df.plot(); plt.legend(loc='best')
5.12 Getting Data In/Out
5.12.1 CSV
Writing to a csv file:
In [141]: df.to_csv('foo.csv')
Reading from a csv file:
In [142]: pd.read_csv('foo.csv')
5.12.2 HDF5
In [143]: df.to_hdf('foo.h5','df')
In [144]: pd.read_hdf('foo.h5','df')
Out[144]:
A B C D
2000-01-01 0.266457 -0.399641 -0.219582 1.186860
2000-01-02 -1.170732 -0.345873 1.653061 -0.282953
2000-01-03 -1.734933 0.530468 2.060811 -0.515536
2000-01-04 -1.555121 1.452620 0.239859 -1.156896
2000-01-05 0.578117 0.511371 0.103552 -2.428202
2000-01-06 0.478344 0.449933 -0.741620 -1.962409
2000-01-07 1.235339 -0.091757 -1.543861 -1.084753
... ... ... ... ...
2002-09-20 -10.628548 -9.153563 -7.883146 28.313940
2002-09-21 -10.390377 -8.727491 -6.399645 30.914107
2002-09-22 -8.985362 -8.485624 -4.669462 31.367740
2002-09-23 -9.558560 -8.781216 -4.499815 30.518439
2002-09-24 -9.902058 -9.340490 -4.386639 30.105593
2002-09-25 -10.216020 -9.480682 -3.933802 29.758560
2002-09-26 -11.856774 -10.671012 -3.216025 29.369368
5.12.3 Excel
Writing to an excel file:
In [145]: df.to_excel('foo.xlsx', sheet_name='Sheet1')
Reading from an excel file:
In [146]: pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
5.13 Gotchas
SIX
TUTORIALS
This is a guide to many pandas tutorials, geared mainly for new users.
The goal of this cookbook (by Julia Evans) is to give you some concrete examples for getting started with pandas.
These are examples with real-world data, and all the bugs and weirdness that that entails.
Here are links to the v0.1 release. For an up-to-date table of contents, see the pandas-cookbook GitHub repository. To
run the examples in this tutorial, you'll need to clone the GitHub repository and get IPython Notebook running. See
How to use this cookbook.
A quick tour of the IPython Notebook: Shows off IPythons awesome tab completion and magic functions.
Chapter 1: Reading your data into pandas is pretty much the easiest thing. Even when the encoding is wrong!
Chapter 2: It's not totally obvious how to select data from a pandas dataframe. Here we explain the basics (how
to take slices and get columns)
Chapter 3: Here we get into serious slicing and dicing and learn how to filter dataframes in complicated ways,
really fast.
Chapter 4: Groupby/aggregate is seriously my favorite thing about pandas and I use it all the time. You should
probably read this.
Chapter 5: Here you get to find out if it's cold in Montreal in the winter (spoiler: yes). Web scraping with pandas
is fun! Here we combine dataframes.
Chapter 6: Strings with pandas are great. It has all these vectorized string operations and they're the best. We will turn a bunch of strings containing "Snow" into vectors of numbers in a trice.
Chapter 7: Cleaning up messy data is never a joy, but with pandas it's easier.
Chapter 8: Parsing Unix timestamps is confusing at first but it turns out to be really easy.
This guide is a comprehensive introduction to the data analysis process using the Python data ecosystem and an
interesting open dataset. There are four sections covering selected topics as follows:
Munging Data
Aggregating Data
Visualizing Data
Time Series
Practice your skills with real data sets and exercises. For more resources, please visit the main repository.
01 - Getting & Knowing Your Data
02 - Filtering & Sorting
03 - Grouping
04 - Apply
05 - Merge
06 - Stats
07 - Visualization
08 - Creating Series and DataFrames
09 - Time Series
10 - Deleting
Modern Pandas
Method Chaining
Indexes
Performance
Tidy Data
Visualization
SEVEN
COOKBOOK
This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to
this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the Stack-
Overflow and GitHub links. Many of the links contain expanded information, above what the in-line examples offer.
Pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for
newer users.
These examples are written for python 3.4. Minor tweaks might be necessary for earlier python versions.
7.1 Idioms
In [1]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[1]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
7.1.1 if-then...
Or use pandas where after you've set up a mask:
In [5]: df_mask = pd.DataFrame({'AAA' : [True] * 4, 'BBB' : [False] * 4, 'CCC' : [True,False] * 2})
In [6]: df.where(df_mask,-1000)
Out[6]:
AAA BBB CCC
0 4 -1000 2000
1 5 -1000 -1000
2 6 -1000 555
3 7 -1000 -1000
An if-then-else using numpy's where():
In [7]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[7]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [8]: df['logic'] = np.where(df['AAA'] > 5, 'high', 'low'); df
7.1.2 Splitting
Split a frame with a boolean criterion:
In [9]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[9]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [10]: dflow = df[df.AAA <= 5]; dflow
In [11]: dfhigh = df[df.AAA > 5]; dfhigh
7.1.3 Building Criteria
Select with multi-column criteria:
In [12]: df = pd.DataFrame(
....: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
....:
Out[12]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [13]: newseries = df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']; newseries
Out[13]:
0 4
1 5
Name: AAA, dtype: int64
In [14]: newseries = df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']; newseries;
In [15]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1; df
Out[15]:
AAA BBB CCC
0 0.1 10 100
1 5.0 20 50
2 0.1 30 -30
3 0.1 40 -50
Select rows with data closest to certain value using argsort:
In [16]: df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]})
In [17]: aValue = 43.0
In [18]: df.loc[(df.CCC-aValue).abs().argsort()]
Out[18]:
AAA BBB CCC
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
Dynamically reduce a list of criteria using a binary operator:
In [19]: df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]})
In [20]: Crit1 = df.AAA <= 5.5
In [21]: Crit2 = df.BBB == 10.0
In [22]: Crit3 = df.CCC > -40.0
One could hard code the conjunction:
In [23]: AllCrit = Crit1 & Crit2 & Crit3
Or build it from a list of dynamically assembled criteria:
In [24]: import functools
In [25]: AllCrit = functools.reduce(lambda x, y: x & y, [Crit1, Crit2, Crit3])
In [26]: df[AllCrit]
Out[26]:
AAA BBB CCC
0 4 10 100
7.2 Selection
7.2.1 DataFrames
In [29]: data = {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}
In [30]: df = pd.DataFrame(data=data, index=['foo','bar','boo','kar']); df
Out[30]:
AAA BBB CCC
foo 4 10 100
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
# Label
In [31]: df.loc['bar':'kar']
# Generic
In [32]: df.iloc[0:3]
Out[32]:
AAA BBB CCC
foo 4 10 100
bar 5 20 50
boo 6 30 -30
In [33]: df.loc['bar':'kar']
Out[33]:
AAA BBB CCC
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
In [37]: df = pd.DataFrame(
   ....:     {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
   ....:
Out[37]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
7.2.2 Panels
Extend a panel frame by transposing, adding a new dimension, and transposing back to the original dimensions
In [40]: rng = pd.date_range('1/1/2013', periods=100, freq='D')
In [41]: data = np.random.randn(100, 4); cols = ['A','B','C','D']
In [42]: df1, df2, df3 = pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols)
In [43]: pf = pd.Panel({'df1':df1,'df2':df2,'df3':df3});pf
Out[43]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 4 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to D
In [46]: pf = pf.transpose(1,2,0);pf
Out[46]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 5 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to E
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 6 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to F
Mask a panel by using np.where and then reconstructing the panel with the new masked values
In [53]: df = pd.DataFrame(
....: {'AAA' : [1,1,1,2,2,2,3,3], 'BBB' : [2,1,3,4,5,1,2,3]}); df
....:
Out[53]:
AAA BBB
0 1 2
1 1 1
2 1 3
3 2 4
4 2 5
5 2 1
6 3 2
7 3 3
In [54]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[54]:
AAA BBB
1 1 1
5 2 1
6 3 2
7.3 MultiIndexing
Creating a multi-index from a labeled frame:
In [55]: df = pd.DataFrame({'row' : [0,1,2],
   ....:                    'One_X' : [1.1,1.1,1.1],
   ....:                    'One_Y' : [1.2,1.2,1.2],
   ....:                    'Two_X' : [1.11,1.11,1.11],
   ....:                    'Two_Y' : [1.22,1.22,1.22]})
In [56]: df
Out[56]:
One_X One_Y Two_X Two_Y row
0 1.1 1.2 1.11 1.22 0
1 1.1 1.2 1.11 1.22 1
2 1.1 1.2 1.11 1.22 2
# As Labelled Index
In [57]: df = df.set_index('row');df
Out[57]:
One_X One_Y Two_X Two_Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# With Hierarchical Columns
In [58]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_')) for c in df.columns]); df
Out[58]:
One Two
X Y X Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# Now stack & Reset
In [59]: df = df.stack(0).reset_index(1); df
Out[59]:
level_1 X Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
# And fix the labels (Notice the label 'level_1' got added automatically)
In [60]: df.columns = ['Sample','All_X','All_Y'];df
Out[60]:
7.3.1 Arithmetic
In [61]: cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A','B','C'] for y in ['O','I']])
In [62]: df = pd.DataFrame(np.random.randn(2,6), index=['n','m'], columns=cols); df
Out[62]:
A B C
O I O I O I
n 1.920906 -0.388231 -2.314394 0.665508 0.402562 0.399555
m -1.765956 0.850423 0.388054 0.992312 0.744086 -0.739776
In [63]: df = df.div(df['C'],level=1); df
Out[63]:
A B C
O I O I O I
n 4.771702 -0.971660 -5.749162 1.665625 1.0 1.0
m -2.373321 -1.149568 0.521518 -1.341367 1.0 1.0
7.3.2 Slicing
In [64]: coords = [('AA','one'), ('AA','six'), ('BB','one'), ('BB','two'), ('BB','six')]
In [65]: index = pd.MultiIndex.from_tuples(coords)
In [66]: df = pd.DataFrame([11,22,33,44,55], index, ['MyData']); df
Out[66]:
MyData
AA one 11
six 22
BB one 33
two 44
six 55
To take the cross section of the 1st level and 1st axis (the index):
In [67]: df.xs('BB', level=0, axis=0)  # Note: level and axis are optional, and default to zero
Out[67]:
MyData
one 33
two 44
six 55
In [68]: df.xs('six',level=1,axis=0)
Out[68]:
MyData
AA 22
BB 55
In [68]: import itertools
In [69]: index = list(itertools.product(['Ada','Quinn','Violet'], ['Comp','Math','Sci']))
In [70]: headr = list(itertools.product(['Exams','Labs'], ['I','II']))
In [71]: indx = pd.MultiIndex.from_tuples(index, names=['Student','Course'])
In [72]: cols = pd.MultiIndex.from_tuples(headr)  # Notice these are un-named
In [73]: data = [[70+x+y+(x*y)%3 for x in range(4)] for y in range(9)]
In [74]: df = pd.DataFrame(data,indx,cols); df
Out[74]:
Exams Labs
I II I II
Student Course
Ada Comp 70 71 72 73
Math 71 73 75 74
Sci 72 75 75 75
Quinn Comp 73 74 75 76
Math 74 76 78 77
Sci 75 78 78 78
Violet Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [75]: All = slice(None)
In [76]: df.loc['Violet']
Out[76]:
Exams Labs
I II I II
Course
Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [77]: df.loc[(All,'Math'),All]
Out[77]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
Violet Math 77 79 81 80
In [78]: df.loc[(slice('Ada','Quinn'),'Math'),All]
Out[78]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
In [79]: df.loc[(All,'Math'),('Exams')]
Out[79]:
I II
Student Course
Ada Math 71 73
Quinn Math 74 76
Violet Math 77 79
In [80]: df.loc[(All,'Math'),(All,'II')]
Out[80]:
Exams Labs
II II
Student Course
Ada Math 73 74
Quinn Math 76 77
Violet Math 79 80
7.3.3 Sorting
7.3.4 Levels
7.3.5 panelnd
7.4 Missing Data
Fill forward a reversed timeseries:
In [82]: df = pd.DataFrame(np.random.randn(6,1), index=pd.date_range('2013-08-01', periods=6, freq='B'), columns=list('A'))
In [83]: df.loc[df.index[3], 'A'] = np.nan
In [84]: df
Out[84]:
A
2013-08-01 -1.054874
2013-08-02 -0.179642
2013-08-05 0.639589
2013-08-06 NaN
2013-08-07 1.906684
2013-08-08 0.104050
In [85]: df.reindex(df.index[::-1]).ffill()
Out[85]:
A
2013-08-08 0.104050
2013-08-07 1.906684
2013-08-06 1.906684
2013-08-05 0.639589
2013-08-02 -0.179642
2013-08-01 -1.054874
7.4.1 Replace
7.5 Grouping
Basic grouping with apply. Unlike agg, apply's callable is passed a sub-DataFrame, which gives you access to all the columns:
In [86]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
   ....:                    'size': list('SSMMMLL'),
   ....:                    'weight': [8, 10, 11, 1, 20, 12, 12],
   ....:                    'adult' : [False] * 5 + [True] * 2}); df
Out[86]:
adult animal size weight
0 False cat S 8
1 False dog S 10
2 False cat M 11
3 False fish M 1
4 False dog M 20
5 True cat L 12
6 True cat L 12
# List the size of the animals with the highest weight.
In [87]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Out[87]:
animal
cat L
dog M
fish M
dtype: object
Using get_group
In [88]: gb = df.groupby(['animal'])
In [89]: gb.get_group('cat')
Out[89]:
adult animal size weight
0 False cat S 8
2 False cat M 11
5 True cat L 12
6 True cat L 12
Apply to different items in a group:
In [90]: def GrowUp(x):
   ....:     avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
   ....:     avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
   ....:     avg_weight += sum(x[x['size'] == 'L'].weight)
   ....:     avg_weight /= len(x)
   ....:     return pd.Series(['L', avg_weight, True], index=['size', 'weight', 'adult'])
In [91]: expected_df = gb.apply(GrowUp)
In [92]: expected_df
Out[92]:
size weight adult
animal
cat L 12.4375 True
dog L 20.0000 True
fish L 1.2500 True
Expanding Apply
In [93]: S = pd.Series([i / 100.0 for i in range(1,11)])
In [94]: def CumRet(x, y):
   ....:     return x * (1 + y)
In [95]: import functools
   ....: def Red(x):
   ....:     return functools.reduce(CumRet, x, 1.0)
In [96]: S.expanding().apply(Red)
Out[96]:
0 1.010000
1 1.030200
2 1.061106
3 1.103550
4 1.158728
5 1.228251
6 1.314229
7 1.419367
8 1.547110
9 1.701821
dtype: float64
Replacing some values with the mean of the rest of a group:
In [97]: df = pd.DataFrame({'A' : [1, 1, 2, 2], 'B' : [1, -1, 1, 2]})
In [98]: gb = df.groupby('A')
In [99]: def replace(g):
   ....:     mask = g < 0
   ....:     g.loc[mask] = g[~mask].mean()
   ....:     return g
In [100]: gb.transform(replace)
Out[100]:
B
0 1.0
1 1.0
2 1.0
3 2.0
Sort groups by aggregated data:
In [101]: df = pd.DataFrame({'code' : ['foo', 'bar', 'baz'] * 2,
   .....:                    'data' : [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
   .....:                    'flag' : [False, True] * 3})
In [102]: code_groups = df.groupby('code')
In [103]: agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')
In [104]: sorted_df = df.loc[agg_n_sort_order.index]
In [105]: sorted_df
Out[105]:
code data flag
1 bar -0.21 True
4 bar -0.59 False
0 foo 0.16 False
3 foo 0.45 True
Create multiple aggregated columns when resampling a time series:
In [106]: rng = pd.date_range(start="2014-10-07", periods=10, freq='2min')
In [107]: ts = pd.Series(data=list(range(10)), index=rng)
In [108]: def MyCust(x):
   .....:     if len(x) > 2:
   .....:         return x[1] * 1.234
   .....:     return pd.NaT
In [109]: mhc = {'Mean' : np.mean, 'Max' : np.max, 'Custom' : MyCust}
In [110]: ts.resample("5min").apply(mhc)
Out[110]:
Custom 2014-10-07 00:00:00 1.234
2014-10-07 00:05:00 NaT
2014-10-07 00:10:00 7.404
2014-10-07 00:15:00 NaT
Max 2014-10-07 00:00:00 2
2014-10-07 00:05:00 4
2014-10-07 00:10:00 7
2014-10-07 00:15:00 9
Mean 2014-10-07 00:00:00 1
2014-10-07 00:05:00 3.5
2014-10-07 00:10:00 6
2014-10-07 00:15:00 8.5
dtype: object
In [111]: ts
Out[111]:
2014-10-07 00:00:00 0
2014-10-07 00:02:00 1
2014-10-07 00:04:00 2
2014-10-07 00:06:00 3
2014-10-07 00:08:00 4
2014-10-07 00:10:00 5
2014-10-07 00:12:00 6
2014-10-07 00:14:00 7
2014-10-07 00:16:00 8
2014-10-07 00:18:00 9
Freq: 2T, dtype: int64
Create a value counts column and reassign back to the DataFrame:
In [112]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(), 'Value': [100, 150, 50, 50]}); df
Out[112]:
Color Value
0 Red 100
1 Red 150
2 Red 50
3 Blue 50
In [113]: df['Counts'] = df.groupby(['Color']).transform(len)
In [114]: df
Out[114]:
Color Value Counts
0 Red 100 3
1 Red 150 3
2 Red 50 3
3 Blue 50 1
Shift groups of the values in a column based on the index:
In [115]: df = pd.DataFrame(
.....: {u'line_race': [10, 10, 8, 10, 10, 8],
.....: u'beyer': [99, 102, 103, 103, 88, 100]},
.....: index=[u'Last Gunfighter', u'Last Gunfighter', u'Last Gunfighter',
.....: u'Paynter', u'Paynter', u'Paynter']); df
.....:
Out[115]:
beyer line_race
Last Gunfighter 99 10
Last Gunfighter 102 10
Last Gunfighter 103 8
Paynter 103 10
Paynter 88 10
Paynter 100 8
In [116]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)
In [117]: df
Out[117]:
beyer line_race beyer_shifted
Last Gunfighter 99 10 NaN
Last Gunfighter 102 10 99.0
Last Gunfighter 103 8 102.0
Paynter 103 10 NaN
Paynter 88 10 103.0
Paynter 100 8 88.0
In [118]: df = pd.DataFrame({'host':['other','other','that','this','this'],
.....: 'service':['mail','web','mail','mail','web'],
.....: 'no':[1, 2, 1, 2, 1]}).set_index(['host', 'service'])
.....:
Select the row with the maximum value from each group:
In [119]: mask = df.groupby(level=0).agg('idxmax')
In [120]: df_count = df.loc[mask['no']].reset_index()
In [121]: df_count
Out[121]:
host service no
0 other web 2
1 that mail 1
2 this mail 2
0 0
1 1
2 0
3 1
4 2
5 3
6 0
7 1
8 2
Name: A, dtype: int64
7.5.2 Splitting
Splitting a frame
Create a list of dataframes, split using a delineation based on logic included in rows.
In [125]: df = pd.DataFrame(data={'Case' : ['A','A','A','B','A','A','B','A','A'],
.....: 'Data' : np.random.randn(9)})
.....:
In [126]: dfs = list(zip(*df.groupby((1*(df['Case']=='B')).cumsum()
   .....:                             .rolling(window=3, min_periods=1).median())))[-1]
In [127]: dfs[0]
Out[127]:
Case Data
0 A 0.174068
1 A -0.439461
2 A -0.741343
3 B -0.079673
In [128]: dfs[1]
Out[128]:
Case Data
4 A -0.922875
5 A 0.303638
6 B -0.917368
In [129]: dfs[2]
Out[129]:
Case Data
7 A -1.624062
8 A -0.758514
7.5.3 Pivot
Partial sums and subtotals:
In [130]: df = pd.DataFrame(data={'Province' : ['ON','QC','BC','AL','AL','MN','ON'],
   .....:                         'City' : ['Toronto','Montreal','Vancouver','Calgary','Edmonton','Winnipeg','Windsor'],
   .....:                         'Sales' : [13,6,16,8,4,3,1]})
In [131]: table = pd.pivot_table(df, values=['Sales'], index=['Province'], columns=['City'],
   .....:                        aggfunc=np.sum, margins=True)
In [132]: table.stack('City')
Out[132]:
Sales
Province City
AL All 12.0
Calgary 8.0
Edmonton 4.0
BC All 16.0
Vancouver 16.0
MN All 3.0
Winnipeg 3.0
... ...
All Calgary 8.0
Edmonton 4.0
Montreal 6.0
Toronto 13.0
Vancouver 16.0
Windsor 1.0
Winnipeg 3.0
7.5.4 Apply
In [141]: df = pd.DataFrame(data=np.random.randn(2000,2)/10000,
.....: index=pd.date_range('2001-01-01',periods=2000),
.....: columns=['A','B']); df
.....:
Out[141]:
A B
2001-01-01 0.000032 -0.000004
2001-01-02 -0.000001 0.000207
2001-01-03 0.000120 -0.000220
2001-01-04 -0.000083 -0.000165
2001-01-05 -0.000047 0.000156
2001-01-06 0.000027 0.000104
2001-01-07 0.000041 -0.000101
... ... ...
2006-06-17 -0.000034 0.000034
2006-06-18 0.000002 0.000166
2006-06-19 0.000023 -0.000081
2006-06-20 -0.000061 0.000012
2006-06-21 -0.000111 0.000027
2006-06-22 -0.000061 -0.000009
2006-06-23 0.000074 -0.000138
Out[143]:
2001-01-01 -0.001373
2001-01-02 -0.001705
2001-01-03 -0.002885
2001-01-04 -0.002987
2001-01-05 -0.002384
2001-01-06 -0.004700
2001-01-07 -0.005500
...
2006-04-28 -0.002682
2006-04-29 -0.002436
2006-04-30 -0.002602
2006-05-01 -0.001785
2006-05-02 -0.001799
2006-05-03 -0.000605
2006-05-04 -0.000541
Length: 1950, dtype: float64
Rolling apply with a DataFrame returning a scalar (Volume Weighted Average Price):
In [144]: rng = pd.date_range(start='2014-01-01', periods=100)
In [145]: df = pd.DataFrame({'Open' : np.random.randn(len(rng)),
   .....:                    'Close' : np.random.randn(len(rng)),
   .....:                    'Volume' : np.random.randint(100, 2000, len(rng))}, index=rng); df
Out[145]:
Close Open Volume
2014-01-01 -0.653039 0.011174 1581
2014-01-02 1.314205 0.214258 1707
2014-01-03 -0.341915 -1.046922 1768
2014-01-04 -1.303586 -0.752902 836
2014-01-05 0.396288 -0.410793 694
2014-01-06 -0.548006 0.648401 796
2014-01-07 0.481380 0.737320 265
... ... ... ...
2014-04-04 -2.548128 0.120378 564
2014-04-05 0.223346 0.231661 1908
2014-04-06 1.228841 0.952664 1090
2014-04-07 0.552784 -0.176090 1813
2014-04-08 -0.795389 1.781318 1103
2014-04-09 -0.018815 -0.753493 1456
2014-04-10 1.138197 -1.047997 1193
In [146]: def vwap(bars):
   .....:     return ((bars.Close * bars.Volume).sum() / bars.Volume.sum())
In [147]: window = 5
In [148]: s = pd.concat([(pd.Series(vwap(df.iloc[i:i+window]),
   .....:                           index=[df.index[i+window]])) for i in range(len(df)-window)])
In [149]: s.round(2)
Out[149]:
2014-01-06 -0.03
2014-01-07 0.07
2014-01-08 -0.40
2014-01-09 -0.81
2014-01-10 -0.63
2014-01-11 -0.86
2014-01-12 -0.36
...
2014-04-04 -1.27
2014-04-05 -1.36
2014-04-06 -0.73
2014-04-07 0.04
2014-04-08 0.21
2014-04-09 0.07
2014-04-10 0.25
Length: 95, dtype: float64
7.6 Timeseries
Between times
Using indexer between time
Constructing a datetime range that excludes weekends and includes only certain times
Vectorized Lookup
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
How to rearrange a python pandas DataFrame?
Dealing with duplicates when reindexing a timeseries to a specified frequency
Calculate the first day of the month for each entry in a DatetimeIndex
In [150]: dates = pd.date_range('2000-01-01', periods=5)
In [151]: dates.to_period(freq='M').to_timestamp()
Out[151]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
'2000-01-01'],
dtype='datetime64[ns]', freq=None)
7.6.1 Resampling
7.7 Merge
Append two dataframes with overlapping index (emulate R rbind):
In [152]: rng = pd.date_range('2000-01-01', periods=6)
In [153]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
In [154]: df2 = df1.copy()
In [155]: df = df1.append(df2, ignore_index=True); df
Out[155]:
A B C
0 -0.480676 -1.305282 -0.212846
1 1.979901 0.363112 -0.275732
2 -1.433852 0.580237 -0.013672
3 1.776623 -0.803467 0.521517
4 -0.302508 -0.442948 -0.395768
5 -0.249024 -0.031510 2.413751
6 -0.480676 -1.305282 -0.212846
7 1.979901 0.363112 -0.275732
8 -1.433852 0.580237 -0.013672
9 1.776623 -0.803467 0.521517
10 -0.302508 -0.442948 -0.395768
11 -0.249024 -0.031510 2.413751
Self join of a DataFrame:
In [156]: df = pd.DataFrame(data={'Area' : ['A'] * 5 + ['C'] * 2,
   .....:                         'Bins' : [110] * 2 + [160] * 3 + [40] * 2,
   .....:                         'Test_0' : [0, 1, 0, 1, 2, 0, 1],
   .....:                         'Data' : np.random.randn(7)})
In [157]: df['Test_1'] = df['Test_0'] - 1
In [158]: pd.merge(df, df, left_on=['Bins', 'Area', 'Test_0'],
   .....:          right_on=['Bins', 'Area', 'Test_1'], suffixes=('_L','_R'))
Out[158]:
Area Bins Data_L Test_0_L Test_1_L Data_R Test_0_R Test_1_R
0 A 110 -0.378914 0 -1 -1.032527 1 0
1 A 160 -1.402816 0 -1 0.715333 1 0
2 A 160 0.715333 1 0 -0.091438 2 1
3 C 40 1.608418 0 -1 0.753207 1 0
7.8 Plotting
In [159]: df = pd.DataFrame(
.....: {u'stratifying_var': np.random.uniform(0, 100, 20),
.....: u'price': np.random.normal(100, 5, 20)})
.....:
7.9 Data In/Out
7.9.1 CSV
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of
the individual frames into a list, and then combine the frames in the list using pd.concat():
In [162]: for i in range(3):
.....: data = pd.DataFrame(np.random.randn(10, 4))
.....: data.to_csv('file_{}.csv'.format(i))
.....:
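The concat step itself is missing here; a sketch consistent with the files written above and the surrounding In[] numbering:

In [163]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']
In [164]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)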
You can use the same approach to read all files matching a pattern. Here is an example using glob:
In [165]: import glob
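The glob-based variant, reconstructed along the same lines:

In [166]: files = glob.glob('file_*.csv')
In [167]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)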
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
Parsing date components in multi-columns is faster with a format:
In [30]: rng = pd.date_range('2000-01-01', periods=10)
In [31]: df = pd.DataFrame({'year': rng.year, 'month': rng.month, 'day': rng.day})
In [32]: df.head()
Out[32]:
day month year
0 1 1 2000
1 2 1 2000
2 3 1 2000
3 4 1 2000
4 5 1 2000
In [33]: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'], x['month'], x['day']), axis=1)
In [35]: ds.head()
Out[35]:
0 20000101
1 20000102
2 20000103
3 20000104
4 20000105
dtype: object
7.9.2 SQL
7.9.3 Excel
7.9.4 HTML
Reading HTML tables from a server that cannot handle the default request header
7.9.5 HDFStore
Storing attributes to a group node:
In [173]: df = pd.DataFrame(np.random.randn(8,3))
In [174]: store = pd.HDFStore('test.h5')
In [175]: store.put('df', df)
# you can store an arbitrary Python object via pickle
In [176]: store.get_storer('df').attrs.my_attribute = dict(A = 10)
In [177]: store.get_storer('df').attrs.my_attribute
Out[177]: {'A': 10}
pandas readily accepts numpy record arrays, if you need to read in a binary file consisting of an array of C structs.
For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit
machine,
#include <stdio.h>
#include <stdint.h>
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element
of the struct corresponds to a column in the frame:
# note that the offsets are larger than the size of the type because of
# struct padding
names = 'count', 'avg', 'scale'
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
              align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
Note: The offsets of the structure elements may be different depending on the architecture of the machine on which
the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not
cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas IO facilities.
7.10 Computation
7.11 Timedeltas
In [177]: import datetime
In [178]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
In [179]: s - s.max()
Out[179]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [180]: s.max() - s
Out[180]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [181]: s - datetime.datetime(2011,1,1,3,5)
Out[181]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]
In [182]: s + datetime.timedelta(minutes=5)
Out[182]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [183]: datetime.datetime(2011,1,1,3,5) - s
Out[183]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [184]: datetime.timedelta(minutes=5) + s
Out[184]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [189]: df.dtypes
Out[189]:
A datetime64[ns]
B timedelta64[ns]
New Dates datetime64[ns]
Delta timedelta64[ns]
dtype: object
Another example
In [190]: y = s - s.shift(); y
Out[190]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
To globally provide aliases for axis names, one can define these two functions:
In [196]: df2.sum(axis='myaxis2')
Out[196]:
i1 0.745167
i2 -0.176251
i3 0.014354
dtype: float64
To create a DataFrame from every combination of some given values, like R's expand.grid() function, we can
create a dict where the keys are column names and the values are lists of the data values:
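The expand_grid helper used below can be written as a short function that takes the cross product of the values (a sketch):

import itertools

def expand_grid(data_dict):
    rows = itertools.product(*data_dict.values())
    return pd.DataFrame.from_records(rows, columns=list(data_dict.keys()))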
In [199]: df = expand_grid(
.....: {'height': [60, 70],
.....: 'weight': [100, 140, 180],
.....: 'sex': ['Male', 'Female']})
.....:
In [200]: df
Out[200]:
height weight sex
0 60 100 Male
1 60 100 Female
2 60 140 Male
3 60 140 Female
4 60 180 Male
5 60 180 Female
6 70 100 Male
7 70 100 Female
8 70 140 Male
9 70 140 Female
10 70 180 Male
11 70 180 Female
EIGHT
INTRO TO DATA STRUCTURES
We'll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started.
The fundamental behavior about data types, indexing, and axis labeling / alignment apply across all of the objects. To
get started, import numpy and load pandas into your namespace:
Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken
unless done so explicitly by you.
We'll give a brief intro to the data structures, then consider all of the broad categories of functionality and methods in
separate sections.
8.1 Series
Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers,
Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is
to call:
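s = pd.Series(data, index=index)
Here data can be many different things (an ndarray, a dict, or a scalar value). For example, from an ndarray:
In [3]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])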
In [4]: s
Out[4]:
a 0.2941
b 0.2869
c 1.7098
d -0.2126
e 0.2696
dtype: float64
In [5]: s.index
Out[5]:
Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.4531
1 -1.8215
2 -0.1263
3 -0.1533
4 0.4055
dtype: float64
Note: Starting in v0.8.0, pandas supports non-unique index values. If an operation that does not support duplicate
index values is attempted, an exception will be raised at that time. The reason for being lazy is nearly all performance-
based (there are many instances in computations, like parts of GroupBy, where the index is not used).
From dict
If data is a dict: if an index is passed, the values in data corresponding to the labels in the index will be pulled out;
otherwise, an index will be constructed from the sorted keys of the dict, if possible.
In [8]: pd.Series(d)
Out[8]:
a 0.0
b 1.0
c 2.0
dtype: float64
Note: NaN (not a number) is the standard missing data marker used in pandas.
From scalar value If data is a scalar value, an index must be provided. The value will be repeated to match the
length of index:
In [9]: pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
Out[9]:
a 5.0
b 5.0
c 5.0
d 5.0
e 5.0
dtype: float64
Series acts very similarly to an ndarray, and is a valid argument to most NumPy functions. However, things like
slicing also slice the index.
In [11]: s[0]
Out[11]: 0.29413876297575337
In [12]: s[:3]
Out[12]:
a 0.2941
b 0.2869
c 1.7098
dtype: float64
In [13]: s[s > s.median()]
Out[13]:
a 0.2941
c 1.7098
dtype: float64
In [14]: s[[4, 3, 1]]
Out[14]:
e 0.2696
d -0.2126
b 0.2869
dtype: float64
In [15]: np.exp(s)
Out[15]:
a 1.3420
b 1.3323
c 5.5276
d 0.8085
e 1.3094
dtype: float64
A Series is like a fixed-size dict in that you can get and set values by index label:
In [16]: s['a']
Out[16]: 0.29413876297575337
In [18]: s
Out[18]:
a 0.2941
b 0.2869
c 1.7098
d -0.2126
e 12.0000
dtype: float64
In [19]: 'e' in s
Out[19]:
True
In [20]: 'f' in s
Out[20]:
False
>>> s['f']
KeyError: 'f'
Using the get method, a missing label will return None or a specified default:
In [21]: s.get('f')
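Passing a default returns that value instead of None:

In [22]: s.get('f', np.nan)
Out[22]: nan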
When doing data analysis, as with raw NumPy arrays, looping through a Series value-by-value is usually not necessary.
A Series can also be passed into most NumPy methods expecting an ndarray.
In [23]: s + s
Out[23]:
a 0.5883
b 0.5739
c 3.4195
d -0.4252
e 24.0000
dtype: float64
In [24]: s * 2
Out[24]:
a 0.5883
b 0.5739
c 3.4195
d -0.4252
e 24.0000
dtype: float64
In [25]: np.exp(s)
Out[25]:
a 1.3420
b 1.3323
c 5.5276
d 0.8085
e 162754.7914
dtype: float64
A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same
labels.
The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found
in one Series or the other, the result will be marked as missing (NaN). Being able to write code without doing any explicit
data alignment grants immense freedom and flexibility in interactive data analysis and research. The integrated data
alignment features of the pandas data structures set pandas apart from the majority of related tools for working with
labeled data.
Note: In general, we chose to make the default result of operations between differently indexed objects yield the
union of the indexes in order to avoid loss of information. Having an index label, though the data is missing, is
typically important information as part of a computation. You of course have the option of dropping labels with
missing data via the dropna function.
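For example, adding two slices of the s shown above that only partially overlap aligns on the union of the labels, with NaN where a label is missing from one side:

In [26]: s[1:] + s[:-1]
Out[26]:
a NaN
b 0.5739
c 3.4195
d -0.4252
e NaN
dtype: float64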
In [28]: s
Out[28]:
0 -0.5046
1 1.4051
2 0.7781
3 -0.7990
4 -0.6707
Name: something, dtype: float64
In [29]: s.name
Out[29]:
'something'
The Series name will be assigned automatically in many cases, in particular when taking 1D slices of DataFrame as
you will see below.
In [31]: s2.name
Out[31]: 'different'
8.2 DataFrame
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it
like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input:
Dict of 1D ndarrays, lists, dicts, or Series
2-D numpy.ndarray
Structured or record ndarray
A Series
Another DataFrame
Along with the data, you can optionally pass index (row labels) and columns (column labels) arguments. If you pass
an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict
of Series plus a specific index will discard all data not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data based on common sense rules.
The result index will be the union of the indexes of the various Series. If there are any nested dicts, these will be first
converted to Series. If no columns are passed, the columns will be the sorted list of dict keys.
In [32]: d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
....: 'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
....:
In [33]: df = pd.DataFrame(d)
In [34]: df
Out[34]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN
The row and column labels can be accessed respectively by accessing the index and columns attributes:
Note: When a particular set of columns is passed along with a dict of data, the passed columns override the keys in
the dict.
In [37]: df.index
Out[37]: Index(['a', 'b', 'c', 'd'], dtype='object')
In [38]: df.columns
Out[38]: Index(['one', 'two'], dtype='object')
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
In [39]: d = {'one' : [1., 2., 3., 4.],
....: 'two' : [4., 3., 2., 1.]}
....:
In [40]: pd.DataFrame(d)
Out[40]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [44]: pd.DataFrame(data)
Out[44]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
C A B
0 b'Hello' 1 2.0
1 b'World' 2 3.0
Note: DataFrame is not intended to work exactly like a 2-dimensional NumPy ndarray.
In [47]: data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
In [48]: pd.DataFrame(data2)
Out[48]:
a b c
0 1 2 NaN
1 5 10 20.0
a b
0 1 2
1 5 10
The result will be a DataFrame with the same index as the input Series, and with one column whose name is the
original name of the Series (only if no other column name is provided).
Missing Data
Much more will be said on this topic in the Missing data section. To construct a DataFrame with missing data, use
np.nan for those values which are missing. Alternatively, you may pass a numpy.MaskedArray as the data
argument to the DataFrame constructor, and its masked entries will be considered missing.
DataFrame.from_dict
DataFrame.from_dict takes a dict of dicts or a dict of array-like sequences and returns a DataFrame. It operates
like the DataFrame constructor except for the orient parameter which is 'columns' by default, but which can
be set to 'index' in order to use the dict keys as row labels.
DataFrame.from_records
DataFrame.from_records takes a list of tuples or an ndarray with structured dtype. It works analogously to the
normal DataFrame constructor, except that the index may be a specific field of the structured dtype to use as the
index. For example:
In [52]: data
Out[52]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
In [53]: pd.DataFrame.from_records(data, index='C')
Out[53]:
A B
C
b'Hello' 1 2.0
b'World' 2 3.0
DataFrame.from_items
DataFrame.from_items works analogously to the form of the dict constructor that takes a sequence of (key,
value) pairs, where the keys are column (or row, in the case of orient='index') names, and the value are the
column values (or row values). This can be useful for constructing a DataFrame with the columns in a particular order
without having to pass an explicit list of columns:
In [54]: pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])])
Out[54]:
A B
0 1 4
1 2 5
2 3 6
If you pass orient='index', the keys will be the row labels. But in this case you must also pass the desired
column names:
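For example (a sketch of the orient='index' form):

pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
                        orient='index', columns=['one', 'two', 'three'])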
You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting
columns works with the same syntax as the analogous dict operations:
In [56]: df['one']
Out[56]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
In [59]: df
Out[59]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
In [62]: df
Out[62]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the column:
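For example:

In [63]: df['foo'] = 'bar'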
In [64]: df
Out[64]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame's
index:
In [65]: df['one_trunc'] = df['one'][:2]
In [66]: df
Out[66]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays, but their length must match the length of the DataFrame's index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the
columns:
In [67]: df.insert(1, 'bar', df['one'])
In [68]: df
Out[68]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN
In [70]: iris.head()
Out[70]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In [71]: (iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength'])
....: .head())
....:
Above was an example of inserting a precomputed value. We can also pass in a function of one argument to be
evaluated on the DataFrame being assigned to.
assign always returns a copy of the data, leaving the original DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is useful when you don't have a reference to the
DataFrame at hand. This is common when using assign in chains of operations. For example, we can limit the
DataFrame to just those observations with a Sepal Length greater than 5, calculate the ratio, and plot:
Since a function is passed in, the function is computed on the DataFrame being assigned to. Importantly, this is the
DataFrame that's been filtered to those rows with sepal length greater than 5. The filtering happens first, and then the
ratio calculations. This is an example where we didn't have a reference to the filtered DataFrame available.
The function signature for assign is simply **kwargs. The keys are the column names for the new fields, and the
values are either a value to be inserted (for example, a Series or NumPy array), or a function of one argument to be
called on the DataFrame. A copy of the original DataFrame is returned, with the new values inserted.
Warning: Since the function signature of assign is **kwargs, a dictionary, the order of the new columns in
the resulting DataFrame cannot be guaranteed to match the order you pass in. To make things predictable, items
are inserted alphabetically (by key) at the end of the DataFrame.
All expressions are computed first, and then assigned. So you can't refer to another column being assigned in the
same call to assign. For example:
In [74]: # Don't do this, bad reference to `C`
df.assign(C = lambda x: x['A'] + x['B'],
D = lambda x: x['A'] + x['C'])
In [2]: # Instead, break it into two assigns
(df.assign(C = lambda x: x['A'] + x['B'])
.assign(D = lambda x: x['A'] + x['C']))
In [75]: df.loc['b']
Out[75]:
one 2
bar 2
flag False
foo bar
one_trunc 2
Name: b, dtype: object
In [76]: df.iloc[2]
Out[76]:
one 3
bar 3
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of more sophisticated label-based indexing and slicing, see the section on indexing.
We will address the fundamentals of reindexing / conforming to new sets of labels in the section on reindexing.
Data alignment between DataFrame objects automatically aligns on both the columns and the index (row labels).
Again, the resulting object will have the union of the column and row labels.
In [79]: df + df2
Out[79]:
A B C D
0 1.3073 2.4946 0.9907 NaN
1 2.5226 -0.0380 -0.6179 NaN
2 -0.1333 -1.4784 -0.5667 NaN
3 -0.4633 -0.6815 1.5152 NaN
4 0.0622 -1.1679 0.5534 NaN
5 3.1876 -0.0249 0.6607 NaN
6 -0.8777 -0.0846 -2.7677 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is to align the Series index on the
DataFrame columns, thus broadcasting row-wise. For example:
In [80]: df - df.iloc[0]
Out[80]:
A B C D
0 0.0000 0.0000 0.0000 0.0000
1 0.6956 -0.9760 -0.5268 -0.4261
2 -0.6277 -1.9284 -1.7718 3.4021
3 -0.4289 -1.1245 -0.0013 2.2955
4 0.6241 -1.9643 -0.6090 2.0827
5 0.7796 0.0866 -0.2222 2.6553
6 0.1325 -1.4229 -2.2840 -0.0538
7 -0.3135 -1.9574 -0.5461 3.3179
8 0.6366 -1.2767 -0.4022 1.6091
9 -2.4500 -1.5917 -1.0151 3.1963
In the special case of working with time series data, and the DataFrame index also contains dates, the broadcasting
will be column-wise:
In [81]: index = pd.date_range('1/1/2000', periods=8)
In [83]: df
Out[83]:
A B C
2000-01-01 -0.0817 1.3905 -1.9620
2000-01-02 -0.5056 0.0213 -0.3171
2000-01-03 -0.0259 0.8407 1.4135
2000-01-04 0.0492 0.4879 0.4263
2000-01-05 1.2432 -0.6222 -0.5386
2000-01-06 0.7915 -0.0203 0.1844
2000-01-07 -0.1616 0.6414 -1.8116
2000-01-08 -0.1140 -0.8574 0.1719
In [84]: type(df['A'])
Out[84]:
pandas.core.series.Series
In [85]: df - df['A']
Out[85]:
[8 rows x 11 columns]
Warning:
df - df['A']
is now deprecated and will be removed in a future release. The preferred way to replicate this behavior is
df.sub(df['A'], axis=0)
For explicit control over the matching and broadcasting behavior, see the section on flexible binary operations.
Operations with scalars are just as you would expect:
In [86]: df * 5 + 2
Out[86]:
A B C
2000-01-01 1.5914 8.9525 -7.8102
2000-01-02 -0.5279 2.1063 0.4146
2000-01-03 1.8705 6.2037 9.0676
2000-01-04 2.2461 4.4393 4.1314
2000-01-05 8.2159 -1.1111 -0.6930
2000-01-06 5.9576 1.8985 2.9221
2000-01-07 1.1918 5.2071 -7.0578
2000-01-08 1.4298 -2.2869 2.8595
In [87]: 1 / df
Out[87]:
A B C
2000-01-01 -12.2384 0.7192 -0.5097
2000-01-02 -1.9779 47.0519 -3.1539
2000-01-03 -38.6178 1.1894 0.7075
2000-01-04 20.3130 2.0498 2.3458
2000-01-05 0.8044 -1.6072 -1.8566
2000-01-06 1.2634 -49.2551 5.4221
2000-01-07 -6.1864 1.5590 -0.5520
2000-01-08 -8.7695 -1.1663 5.8170
In [88]: df ** 4
Out[88]:
A B C
2000-01-01 4.4576e-05 3.7384e+00 14.8192
2000-01-02 6.5337e-02 2.0403e-07 0.0101
2000-01-03 4.4962e-07 4.9964e-01 3.9922
2000-01-04 5.8735e-06 5.6645e-02 0.0330
2000-01-05 2.3885e+00 1.4989e-01 0.0842
2000-01-06 3.9249e-01 1.6990e-07 0.0012
2000-01-07 6.8273e-04 1.6926e-01 10.7700
2000-01-08 1.6908e-04 5.4038e-01 0.0009
a b
0 True True
1 True False
2 False True
In [94]: -df1
Out[94]:
a b
0 False True
1 True False
2 False False
8.2.12 Transposing
To transpose, access the T attribute (also the transpose function), similar to an ndarray:
Elementwise NumPy ufuncs (log, exp, sqrt, ...) and various other NumPy functions can be used with no issues on
DataFrame, assuming the data within are numeric:
In [96]: np.exp(df)
Out[96]:
A B C
In [97]: np.asarray(df)
In [98]: df.T.dot(df)
Out[98]:
A B C
A 2.4765 -0.9176 0.0546
B -0.9176 4.4129 -2.3166
C 0.0546 -2.3166 9.7653
In [99]: s1 = pd.Series(np.arange(5,10))
In [100]: s1.dot(s1)
Out[100]: 255
DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics are quite different in
places from a matrix.
Very large DataFrames will be truncated to display them in the console. You can also get a summary using info().
(Here I am reading a CSV version of the baseball dataset from the plyr R package):
In [102]: print(baseball)
id player year stint ... hbp sh sf gidp
0 88641 womacto01 2006 2 ... 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 ... 0.0 0.0 0.0 0.0
.. ... ... ... ... ... ... ... ... ...
98 89533 aloumo01 2007 1 ... 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 ... 0.0 0.0 0.0 0.0
In [103]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
However, using to_string will return a string representation of the DataFrame in tabular form, though it won't
always fit the console width:
Since 0.10.0, wide DataFrames are printed across multiple rows by default:
In [105]: pd.DataFrame(np.random.randn(3, 12))
Out[105]:
0 1 2 3 4 5 6 \
0 -1.040542 -1.126415 0.549956 1.323044 -0.219197 0.581467 -0.519407
1 -2.603736 0.532069 0.327184 -1.251625 1.481966 -0.642683 1.248002
2 0.683625 -1.876826 -1.873827 -0.251457 0.027599 1.235291 0.850574
7 8 9 10 11
0 -0.271582 0.344684 -0.643988 -0.378918 -0.924127
1 1.954333 -0.475215 -1.258974 -1.142863 -1.015321
2 -1.140302 2.149143 0.504452 0.678026 -0.628443
You can change how much to print on a single row by setting the display.width option:
In [106]: pd.set_option('display.width', 40) # default is 80
3 4 5 \
0 -1.299878 -0.110240 -0.333712
1 0.577103 -0.076021 0.720235
2 -0.528311 -0.660014 -0.117339
6 7 8 \
0 0.416876 -0.436400 0.999768
1 0.202660 -0.314950 -0.410852
2 0.780048 2.162047 0.874233
9 10 11
0 -0.383171 -0.172217 -1.674685
1 0.542758 1.955407 -0.940645
2 -0.764147 -0.484495 0.298570
You can adjust the max width of the individual columns by setting display.max_colwidth:
In [108]: datafile={'filename': ['filename_01','filename_02'],
.....: 'path': ["media/user_name/storage/folder_01/filename_01",
.....: "media/user_name/storage/folder_02/filename_02"]}
.....:
In [109]: pd.set_option('display.max_colwidth',30)
In [110]: pd.DataFrame(datafile)
Out[110]:
filename \
0 filename_01
1 filename_02
path
0 media/user_name/storage/fo...
1 media/user_name/storage/fo...
In [111]: pd.set_option('display.max_colwidth',100)
In [112]: pd.DataFrame(datafile)
Out[112]:
filename \
0 filename_01
1 filename_02
path
0 media/user_name/storage/folder_01/filename_01
1 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option. This will print the table in one block.
If a DataFrame column label is a valid Python variable name, the column can be accessed like attributes:
In [114]: df
Out[114]:
foo1 foo2
0 0.825136 -1.749969
1 -0.388020 -1.402941
2 -0.339279 0.623222
3 0.141164 0.020129
4 0.565930 -2.858463
In [115]: df.foo1
Out[115]:
0 0.825136
1 -0.388020
2 -0.339279
3 0.141164
4 0.565930
Name: foo1, dtype: float64
The columns are also connected to the IPython completion mechanism so they can be tab-completed:
In [5]: df.fo<TAB>
df.foo1 df.foo2
8.3 Panel
Warning: In 0.20.0, Panel is deprecated and will be removed in a future version. See the section Deprecate
Panel.
Panel is a somewhat less-used, but still important container for 3-dimensional data. The term panel data is derived
from econometrics and is partially responsible for the name pandas: pan(el)-da(ta)-s. The names for the 3 axes are
intended to give some semantic meaning to describing operations involving panel data and, in particular, econometric
analysis of panel data. However, for the strict purposes of slicing and dicing a collection of DataFrame objects, you
may find the axis names slightly arbitrary:
items: axis 0, each item corresponds to a DataFrame contained inside
major_axis: axis 1, it is the index (rows) of each of the DataFrames
minor_axis: axis 2, it is the columns of each of the DataFrames
Construction of Panels works about like you would expect:
In [117]: wp
Out[117]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
In [119]: pd.Panel(data)
Out[119]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 2
Note that the values in the dict need only be convertible to DataFrame. Thus, they can be any of the other valid
inputs to DataFrame as per above.
One helpful factory method is Panel.from_dict, which takes a dictionary of DataFrames as above, and the
following named parameters:
Parameter Default Description
intersect False drops elements whose indices do not align
orient items use minor to use DataFrame's columns as panel items
For example, compare to the construction above:
Orient is especially useful for mixed-type DataFrames. If you pass a dict of DataFrame objects with mixed-type
columns, all of the data will get upcasted to dtype=object unless you pass orient='minor':
In [122]: df
Out[122]:
a b
0 foo 1.047583
1 bar 0.507575
2 baz 1.172740
In [125]: panel['a']
Out[125]:
item1 item2
0 foo foo
1 bar bar
2 baz baz
In [126]: panel['b']
Out[126]:
item1 item2
0 1.047583 1.047583
1 0.507575 0.507575
2 1.172740 1.172740
In [127]: panel['b'].dtypes
Out[127]:
item1 float64
item2 float64
dtype: object
Note: Unfortunately Panel, being less commonly used than Series and DataFrame, has been slightly neglected feature-
wise. A number of methods and options available in DataFrame are not available in Panel. This will get worked on,
of course, in future releases. And faster if you join me in working on the codebase.
This method was introduced in v0.7 to replace LongPanel.to_long, and converts a DataFrame with a two-level
index to a Panel.
In [130]: df.to_panel()
Out[130]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: A to B
Major_axis axis: one to two
Minor_axis axis: x to y
In [131]: wp['Item1']
Out[131]:
A B C D
2000-01-01 0.885765 0.158014 -1.981797 1.769622
2000-01-02 0.093792 -1.269228 1.290159 0.509707
2000-01-03 -0.251960 -1.127396 -0.430936 -1.243710
2000-01-04 -0.854956 -0.327742 0.210942 0.152473
2000-01-05 -0.061545 2.845263 -0.507224 1.772662
The API for insertion and deletion is the same as for DataFrame. And as with DataFrame, if the item is a valid Python
identifier, you can access it as an attribute and tab-complete it in IPython.
8.3.5 Transposing
A Panel can be rearranged using its transpose method (which does not make a copy by default unless the data are
heterogeneous):
In [133]: wp.transpose(2, 0, 1)
Out[133]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 5 (minor_axis)
Items axis: A to D
Major_axis axis: Item1 to Item3
Minor_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
In [135]: wp.major_xs(wp.major_axis[2])
In [136]: wp.minor_axis
Out[136]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [137]: wp.minor_xs('C')
8.3.7 Squeezing
Another way to change the dimensionality of an object is to squeeze a 1-len object, similar to wp['Item1']:
In [138]: wp.reindex(items=['Item1']).squeeze()
Out[138]:
A B C D
2000-01-01 0.885765 0.158014 -1.981797 1.769622
2000-01-02 0.093792 -1.269228 1.290159 0.509707
2000-01-03 -0.251960 -1.127396 -0.430936 -1.243710
2000-01-04 -0.854956 -0.327742 0.210942 0.152473
2000-01-05 -0.061545 2.845263 -0.507224 1.772662
In [139]: wp.reindex(items=['Item1'], minor=['B']).squeeze()
Out[139]:
2000-01-01 0.158014
2000-01-02 -1.269228
2000-01-03 -1.127396
2000-01-04 -0.327742
2000-01-05 2.845263
Freq: D, Name: B, dtype: float64
A Panel can be represented in 2D form as a hierarchically indexed DataFrame. See the section hierarchical indexing
for more on this. To convert a Panel to a DataFrame, use the to_frame method:
In [140]: panel = pd.Panel(np.random.randn(3, 5, 4), items=['one', 'two', 'three'],
.....: major_axis=pd.date_range('1/1/2000', periods=5),
.....: minor_axis=['a', 'b', 'c', 'd'])
.....:
In [141]: panel.to_frame()
Out[141]:
one two three
major minor
2000-01-01 a 0.368964 -2.033050 0.525741
b -1.596338 -0.271503 1.311232
c 0.294397 0.000658 0.535689
d 1.633316 0.301351 0.350587
2000-01-02 a 0.613334 -0.977983 -0.691015
b -0.561237 -0.310997 0.893930
c -1.316660 0.608487 2.064058
d 1.038137 1.791018 0.548489
2000-01-03 a -1.367749 -0.724384 -1.298233
b 0.010581 0.327463 -0.286955
c 0.882541 -1.046022 -0.193618
d 0.177449 -1.424694 1.122169
2000-01-04 a 0.291669 1.845002 1.289298
b 2.177649 0.099995 -0.811164
c 0.741563 0.368960 -0.902172
d 0.524001 -0.025353 -0.093062
2000-01-05 a -1.154972 0.635333 0.687572
b -2.075966 -1.484139 -0.653155
c -0.858758 0.259096 -1.321267
d -0.868204 0.817009 -0.593775
Over the last few years, pandas has increased in both breadth and depth, with new features, datatype support, and
manipulation routines. As a result, supporting efficient indexing and functional routines for Series, DataFrame
and Panel has contributed to an increasingly fragmented and difficult-to-understand codebase.
The 3-D structure of a Panel is much less common for many types of data analysis than the 1-D of the Series or
the 2-D of the DataFrame. Going forward it makes sense for pandas to focus on these areas exclusively.
Oftentimes, one can simply use a MultiIndex DataFrame for easily working with higher dimensional data.
In addition, the xarray package was built from the ground up, specifically in order to support the multi-dimensional
analysis that is one of Panel's main use cases. Here is a link to the xarray panel-transition documentation.
In [142]: p = tm.makePanel()
In [143]: p
Out[143]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 30 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-02-11 00:00:00
Minor_axis axis: A to D
In [144]: p.to_frame()
Out[144]:
ItemA ItemB ItemC
major minor
2000-01-03 A -0.562101 0.596722 -0.006076
B -1.188433 0.623781 0.414700
C -1.122897 1.570412 -1.121722
D 1.068153 0.637637 -0.332359
2000-01-04 A 0.348637 -1.196606 0.584980
B -0.364369 0.044965 -0.104393
C 1.255063 -1.555786 1.864044
D 0.645839 -1.004495 0.211849
2000-01-05 A -3.136335 0.684902 0.764032
B -0.522007 -0.700244 0.618741
C 1.019730 1.515842 -1.117555
D 1.966118 1.146482 1.156103
2000-01-06 A 0.950982 -2.420257 -0.334286
B 0.379510 -0.800428 1.477061
C -0.185257 1.535935 -0.459102
D 1.929061 0.955239 -0.167683
2000-01-07 A -0.817696 -0.497864 0.723964
B 0.219003 -0.262461 -0.479880
C 0.980392 0.440980 0.254221
D 0.515374 0.393402 1.725259
2000-01-10 A -1.522532 -1.155281 -1.294066
B -1.434896 0.294109 -0.338110
C 0.363619 0.923475 -0.180491
D -0.007877 0.886686 -0.482607
2000-01-11 A 0.877248 -0.182729 1.511304
B 1.137177 0.455629 0.169672
C 1.044038 -1.046968 1.634983
D 0.566788 -0.961336 -0.008121
2000-01-12 A -0.596688 1.440756 0.917094
B -0.004067 0.610660 0.187756
... ... ... ...
2000-02-02 C -1.426564 -0.315895 -0.729149
D -1.951812 0.298852 -1.409432
2000-02-03 A 0.876211 1.780657 1.232949
B 0.753136 0.626754 0.480243
C 0.307062 -0.513063 -1.543837
D -0.304052 0.626159 -0.433954
2000-02-04 A -1.510807 -0.508626 1.396962
B -0.453719 0.243984 0.188892
C 0.846308 -0.000835 0.058163
D -0.378778 0.651006 -0.382207
2000-02-07 A 1.178281 -0.319874 0.081011
In [145]: p.to_xarray()
Out[145]:
<xarray.DataArray (items: 3, major_axis: 30, minor_axis: 4)>
array([[[-0.562101, -1.188433, -1.122897, 1.068153],
[ 0.348637, -0.364369, 1.255063, 0.645839],
...,
[ 3.082589, 0.431345, -0.108185, 0.928076],
[ 0.453499, -0.279384, -0.555896, 0.771169]],
Warning: In 0.19.0 Panel4D and PanelND are deprecated and will be removed in a future version. The
recommended way to represent these types of n-dimensional data are with the xarray package. Pandas provides a
to_xarray() method to automate this conversion.
NINE
ESSENTIAL BASIC FUNCTIONALITY
Here we discuss a lot of the essential functionality common to the pandas data structures. Here's how to create some
of the objects used in the examples from the previous section:
To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.
In [6]: long_series.head()
Out[6]:
0 0.229453
1 0.304418
2 0.736135
3 -0.859631
4 -0.424100
dtype: float64
In [7]: long_series.tail(3)
Out[7]:
997 -0.351587
998 1.136249
999 -0.448789
dtype: float64
pandas objects have a number of attributes enabling you to access the metadata:
shape: gives the axis dimensions of the object, consistent with ndarray
Axis labels
Series: index (only axis)
DataFrame: index (rows) and columns
Panel: items, major_axis, and minor_axis
Note, these attributes can be safely assigned to!
In [8]: df[:2]
Out[8]:
A B C
2000-01-01 0.048869 -1.360687 -0.47901
2000-01-02 -0.859661 -0.231595 -0.52775
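The lowercase column labels shown next come from assigning directly to the columns attribute:

In [9]: df.columns = [x.lower() for x in df.columns]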
In [10]: df
Out[10]:
a b c
2000-01-01 0.048869 -1.360687 -0.479010
2000-01-02 -0.859661 -0.231595 -0.527750
2000-01-03 -1.296337 0.150680 0.123836
2000-01-04 0.571764 1.555563 -0.823761
2000-01-05 0.535420 -1.032853 1.469725
2000-01-06 1.304124 1.449735 0.203109
2000-01-07 -1.032011 0.969818 -0.962723
2000-01-08 1.382083 -0.938794 0.669142
To get the actual data inside a data structure, one need only access the values property:
In [11]: s.values
Out[11]: array([-1.9339, 0.3773, 0.7341, 2.1416, -0.0112])
In [12]: df.values
Out[12]:
array([[ 0.0489, -1.3607, -0.479 ],
[-0.8597, -0.2316, -0.5278],
[-1.2963, 0.1507, 0.1238],
[ 0.5718, 1.5556, -0.8238],
[ 0.5354, -1.0329, 1.4697],
[ 1.3041, 1.4497, 0.2031],
[-1.032 , 0.9698, -0.9627],
[ 1.3821, -0.9388, 0.6691]])
In [13]: wp.values
If a DataFrame or Panel contains homogeneously-typed data, the ndarray can actually be modified in-place, and the
changes will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame's columns are not
all the same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
Note: When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all
of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and
integers, the resulting array will be of float dtype.
pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr
and bottleneck libraries.
These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses
smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially
fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
Operation 0.11.0 (ms) Prior Version (ms) Ratio to Prior
df1 > df2 13.32 125.35 0.1063
df1 * df2 21.71 36.63 0.5928
df1 + df2 22.04 36.50 0.6039
You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation
info.
These are both enabled to be used by default; you can control this by setting the options:
New in version 0.20.0.
pd.set_option('compute.use_bottleneck', False)
pd.set_option('compute.use_numexpr', False)
With binary operations between pandas data structures, there are two key points of interest:
Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
Missing data in computations
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.
DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), ... for carry-
ing out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions, you
can choose to either match on the index or columns via the axis keyword:
In [15]: df
Out[15]:
one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288
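For example (a sketch using the df just shown):

row = df.iloc[1]
df.sub(row, axis='columns')        # match the Series index on the columns (the default for Series)
df.sub(df['two'], axis='index')    # match the Series index on the row index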
With Panel, describing the matching behavior is a bit more difficult, so the arithmetic methods instead (and perhaps
confusingly?) give you the option to specify the broadcast axis. For example, suppose we wished to demean the data
over a particular axis. This can be accomplished by taking the mean over an axis and broadcasting over the same axis:
In [26]: major_mean
Out[26]:
Item1 Item2
A -0.878036 -0.092218
B -0.060128 0.529811
C 0.099453 -0.715139
D 0.248599 -0.186535
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
Note: I could be convinced to make the axis argument in the DataFrame methods match the broadcasting behavior of
Panel. Though it would require a transition period so users can change their code...
Series and Index also support the divmod() builtin. This function takes the floor division and modulo operation at
the same time, returning a two-tuple of the same type as the left hand side. For example:
In [28]: s = pd.Series(np.arange(10))
In [29]: s
Out[29]:
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64
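The div and rem that follow come from dividing s by 3:

In [30]: div, rem = divmod(s, 3)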
In [31]: div
Out[31]:
0 0
1 0
2 0
3 1
4 1
5 1
6 2
7 2
8 2
9 3
dtype: int64
In [32]: rem
Out[32]:
0 0
1 1
2 2
3 0
4 1
5 2
6 0
7 1
8 2
9 0
dtype: int64
In [34]: idx
Out[34]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
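Dividing the Index by 3 in the same way:

In [35]: div, rem = divmod(idx, 3)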
In [36]: div
Out[36]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')
In [37]: rem
Out[37]:
Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
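We can also do elementwise divmod(), with a different divisor per element:

In [38]: div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])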
In [39]: div
Out[39]:
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 1
8 1
9 1
dtype: int64
In [40]: rem
Out[40]:
0 0
1 1
2 2
3 0
4 0
5 1
6 1
7 2
8 2
9 3
dtype: int64
In Series and DataFrame (though not yet in Panel), the arithmetic functions have the option of inputting a fill_value,
namely a value to substitute when at most one of the values at a location is missing. For example, when adding two
DataFrame objects, you may wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the
result will be NaN (you can later replace NaN with some other value using fillna if you wish).
In [41]: df
Out[41]:
one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288
In [42]: df2
In [43]: df + df2
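For example (a sketch using the df and df2 above):

df.add(df2, fill_value=0)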
Starting in v0.8, pandas introduced binary comparison methods eq, ne, lt, gt, le, and ge to Series and DataFrame whose
behavior is analogous to the binary arithmetic operations described above:
In [45]: df.gt(df2)
Out[45]:
one three two
a False False False
b False False False
c False False False
d False False False
In [46]: df2.ne(df)
These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These
boolean objects can be used in indexing operations; see here.
You can apply the reductions: empty, any(), all(), and bool() to provide a way to summarize a boolean result.
In [47]: (df > 0).all()
Out[47]:
one False
three False
two False
dtype: bool
In [48]: (df > 0).any()
Out[48]:
one True
three True
two True
dtype: bool
You can test if a pandas object is empty, via the empty property.
In [50]: df.empty
Out[50]: False
In [51]: pd.DataFrame(columns=list('ABC')).empty
Out[51]: True
To evaluate single-element pandas objects in a boolean context, use the method bool():
In [52]: pd.Series([True]).bool()
Out[52]: True
In [53]: pd.Series([False]).bool()
Out[53]: False
In [54]: pd.DataFrame([[True]]).bool()
Out[54]: True
In [55]: pd.DataFrame([[False]]).bool()
Out[55]: False
Or
>>> df and df2
These will both raise, as you are trying to compare multiple values:
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
Often you may find there is more than one way to compute the same result. As a simple example, consider df+df and
df*2. To test that these two computations produce the same result, given the tools shown above, you might imagine
using (df+df == df*2).all(). But in fact, this expression is False:
In [56]: (df+df == df*2).all()
Out[56]:
one False
three False
two True
dtype: bool
Notice that the boolean DataFrame df+df == df*2 contains some False values! That is because NaNs do not
compare as equals:
In [58]: np.nan == np.nan
Out[58]: False
So, as of v0.13.1, NDFrames (such as Series, DataFrames, and Panels) have an equals() method for testing equality,
with NaNs in corresponding locations treated as equal.
In [59]: (df+df).equals(df*2)
Out[59]: True
Note that the Series or DataFrame index needs to be in the same order for equality to be True:
In [60]: df1 = pd.DataFrame({'col':['foo', 0, np.nan]})
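where df2 holds the same values with the index in reverse order:

In [61]: df2 = pd.DataFrame({'col': [np.nan, 0, 'foo']}, index=[2, 1, 0])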
In [62]: df1.equals(df2)
Out[62]: False
In [63]: df1.equals(df2.sort_index())
Out[63]: True
You can conveniently do element-wise comparisons when comparing a pandas data structure with a scalar value:
In [64]: pd.Series(['foo', 'bar', 'baz']) == 'foo'
Out[64]:
0 True
1 False
2 False
dtype: bool
Pandas also handles element-wise comparisons between different array-like objects of the same length:
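For example, comparing a Series to an Index of the same length:

In [66]: pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux'])
Out[66]:
0 True
1 True
2 False
dtype: bool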
Trying to compare Index or Series objects of different lengths will raise a ValueError:
In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
ValueError: Series lengths must match to compare
Note that this is different from the numpy behavior where a comparison can be broadcast:
In [68]: np.array([1, 2, 3]) == np.array([2])
Out[68]: array([False, True, False], dtype=bool)
A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the
other. An example would be two data series representing a particular economic indicator where one is considered to
be of higher quality. However, the lower quality series might extend further back in history or have more complete
data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame
are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation
is combine_first(), which we illustrate:
In [70]: df1 = pd.DataFrame({'A' : [1., np.nan, 3., 5., np.nan],
....: 'B' : [np.nan, 2., 3., np.nan, 6.]})
....:
In [72]: df1
Out[72]:
A B
0 1.0 NaN
1 NaN 2.0
2 3.0 3.0
3 5.0 NaN
4 NaN 6.0
In [73]: df2
Out[73]:
A B
0 5.0 NaN
1 2.0 NaN
2 4.0 3.0
3 NaN 4.0
4 3.0 6.0
5 7.0 8.0
In [74]: df1.combine_first(df2)
Out[74]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0
The combine_first() method above calls the more general DataFrame method combine(). This method takes
another DataFrame and a combiner function, aligns the input DataFrame and then passes the combiner function pairs
of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
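One way (a sketch): use a combiner that takes values from y wherever x is null:

combiner = lambda x, y: np.where(pd.isnull(x), y, x)
df1.combine(df2, combiner)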
There exists a large number of methods for computing descriptive statistics and other related operations on Series,
DataFrame, and Panel. Most of these are aggregations (hence producing a lower-dimensional result) like sum(), mean(), and
quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same size. Generally
speaking, these methods take an axis argument, just like ndarray.{sum, std, ...}, but the axis can be specified by name
or integer:
Series: no axis argument needed
In [77]: df
Out[77]:
one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288
In [78]: df.mean(0)
Out[78]:
one -0.272211
three 0.024661
two 0.667306
dtype: float64
In [79]: df.mean(1)
Out[79]:
a 0.011457
b 0.558507
c 0.635781
d -0.839603
dtype: float64
All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [81]: df.sum(axis=1, skipna=True)
Out[81]:
a 0.022914
b 1.675522
c 1.907343
d -1.679206
dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standard-
ization (rendering data zero mean and standard deviation 1), very concisely:
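For instance, the standardized frames referenced below can be built as (a sketch):

ts_stand = (df - df.mean()) / df.std()
xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)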
In [83]: ts_stand.std()
Out[83]:
one 1.0
three 1.0
two 1.0
dtype: float64
In [85]: xs_stand.std(1)
Out[85]:
a 1.0
b 1.0
c 1.0
d 1.0
dtype: float64
Note that methods like cumsum() and cumprod() preserve the location of NaN values. This is somewhat different
from expanding() and rolling(). For more details please see this note.
In [86]: df.cumsum()
Out[86]:
one three two
a -1.101558 NaN 1.124472
b -1.278848 -0.634293 3.611576
c -0.816633 1.296901 3.125511
d NaN 0.073983 2.669223
Here is a quick reference summary table of common functions. Each also takes an optional level parameter which
applies only if the object has a hierarchical index.
Function Description
count Number of non-null observations
sum Sum of values
mean Mean of values
mad Mean absolute deviation
median Arithmetic median of values
min Minimum
max Maximum
mode Mode
abs Absolute Value
prod Product of values
std Bessel-corrected sample standard deviation
var Unbiased variance
sem Standard error of the mean
skew Sample skewness (3rd moment)
kurt Sample kurtosis (4th moment)
quantile Sample quantile (value at %)
cumsum Cumulative sum
cumprod Cumulative product
cummax Cumulative maximum
cummin Cumulative minimum
Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:
In [87]: np.mean(df['one'])
Out[87]: -0.27221094480450114
In [88]: np.mean(df['one'].values)
Out[88]: nan
Series also has a method nunique() which will return the number of unique non-null values:
In [89]: series = pd.Series(np.random.randn(500))
In [91]: series[10:20] = 5
In [92]: series.nunique()
Out[92]: 11
There is a convenient describe() function which computes a variety of summary statistics about a Series or the
columns of a DataFrame (excluding NAs of course):
In [93]: series = pd.Series(np.random.randn(1000))
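Half of the values are then set to missing, which explains the count of 500 below:

In [94]: series[::2] = np.nan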
In [95]: series.describe()
Out[95]:
count 500.000000
mean -0.032127
std 1.067484
min -3.463789
25% -0.725523
50% -0.053230
75% 0.679790
max 3.120271
dtype: float64
In [98]: frame.describe()
Out[98]:
a b c d e
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean -0.045109 -0.052045 0.024520 0.006117 0.001141
std 1.029268 1.002320 1.042793 1.040134 1.005207
min -2.915767 -3.294023 -3.610499 -2.907036 -3.010899
25% -0.763783 -0.720389 -0.609600 -0.665896 -0.682900
50% -0.086033 -0.048843 0.006093 0.043191 -0.001651
75% 0.663399 0.620980 0.728382 0.735973 0.656439
max 3.400646 2.925597 3.416896 3.331522 3.007143
You can select specific percentiles to include in the output:
In [96]: series.describe(percentiles=[.05, .25, .75, .95])
Out[96]:
count 500.000000
mean -0.032127
std 1.067484
min -3.463789
5% -1.733545
25% -0.725523
50% -0.053230
75% 0.679790
95% 1.854383
max 3.120271
dtype: float64
In [100]: s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a'])
In [101]: s.describe()
Out[101]:
count 9
unique 4
top a
freq 5
dtype: object
Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical
columns or, if none are, only categorical columns:
In [103]: frame.describe()
Out[103]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
This behaviour can be controlled by providing a list of types as include/exclude arguments. The special value
all can also be used:
In [104]: frame.describe(include=['object'])
Out[104]:
a
count 4
unique 2
top Yes
freq 2
In [105]: frame.describe(include=['number'])
Out[105]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
In [106]: frame.describe(include='all')
Out[106]:
a b
count 4 4.000000
unique 2 NaN
top Yes NaN
freq 2 NaN
mean NaN 1.500000
std NaN 1.290994
min NaN 0.000000
25% NaN 0.750000
50% NaN 1.500000
75% NaN 2.250000
max NaN 3.000000
That feature relies on select_dtypes. Refer there for details about accepted inputs.
The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and
maximum corresponding values:
In [107]: s1 = pd.Series(np.random.randn(5))
In [108]: s1
Out[108]:
0 -1.649461
1 0.169660
2 1.246181
3 0.131682
4 -2.001988
dtype: float64
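For the Series above:

In [109]: s1.idxmin(), s1.idxmax()
Out[109]: (4, 2)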
In [111]: df1
Out[111]:
A B C
0 -1.273023 0.870502 0.214583
1 0.088452 -0.173364 1.207466
2 0.546121 0.409515 -0.310515
3 0.585014 -0.490528 -0.054639
4 -0.239226 0.701089 0.228656
In [112]: df1.idxmin(axis=0)
Out[112]:
A 0
B 3
C 2
dtype: int64
In [113]: df1.idxmax(axis=1)
Out[113]:
0 B
1 C
2 A
3 A
4 B
dtype: object
When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax()
return the first matching index:
In [115]: df3
Out[115]:
A
e 2.0
d 1.0
c 1.0
b 3.0
a NaN
In [116]: df3['A'].idxmin()
Out[116]: 'd'
Note: idxmin and idxmax are called argmin and argmax in NumPy.
The value_counts() Series method and top-level function compute a histogram of a 1D array of values. They can
also be used as a function on regular arrays:
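For example, with a sample of 50 small random integers:

In [117]: data = np.random.randint(0, 7, size=50)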
In [118]: data
Out[118]:
array([3, 3, 0, 2, 1, 0, 5, 5, 3, 6, 1, 5, 6, 2, 0, 0, 6, 3, 3, 5, 0, 4, 3,
3, 3, 0, 6, 1, 3, 5, 5, 0, 4, 0, 6, 3, 6, 5, 4, 3, 2, 1, 5, 0, 1, 1,
6, 4, 1, 4])
In [119]: s = pd.Series(data)
In [120]: s.value_counts()
Out[120]:
3 11
0 9
5 8
6 7
1 7
4 5
2 3
dtype: int64
In [121]: pd.value_counts(data)
Out[121]:
3 11
0 9
5 8
6 7
1 7
4 5
2 3
dtype: int64
Similarly, you can get the most frequently occurring value(s) (the mode) of the values in a Series or DataFrame:
In [123]: s5.mode()
Out[123]:
0 3
1 7
dtype: int64
In [125]: df5.mode()
Out[125]:
A B
0 2 -5
Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample
quantiles) functions:
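For example, cutting 20 standard normal values into four equal-width bins:

In [126]: arr = np.random.randn(20)
In [127]: factor = pd.cut(arr, 4)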
In [128]: factor
Out[128]:
[(-2.611, -1.58], (0.473, 1.499], (-2.611, -1.58], (-1.58, -0.554], (-0.554, 0.473], ..., (0.473, 1.499], (0.473, 1.499], (-0.554, 0.473], (-0.554, 0.473], (-0.554, 0.473]]
Length: 20
Categories (4, interval[float64]): [(-2.611, -1.58] < (-1.58, -0.554] < (-0.554, 0.473] < (0.473, 1.499]]
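Bin edges can also be given explicitly:

In [129]: factor = pd.cut(arr, [-5, -1, 0, 1, 5])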
In [130]: factor
Out[130]:
[(-5, -1], (0, 1], (-5, -1], (-1, 0], (-1, 0], ..., (1, 5], (1, 5], (-1, 0], (-1, 0],
(-1, 0]]
Length: 20
Categories (4, interval[int64]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size
quartiles like so:
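In [131]: arr = np.random.randn(30)
In [132]: factor = pd.qcut(arr, [0, .25, .5, .75, 1])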
In [133]: factor
Out[133]:
[(0.544, 1.976], (0.544, 1.976], (-1.255, -0.375], (0.544, 1.976], (-0.103, 0.544], ..., (-0.103, 0.544], (0.544, 1.976], (-0.103, 0.544], (-1.255, -0.375], (-0.375, -0.103]]
Length: 30
Categories (4, interval[float64]): [(-1.255, -0.375] < (-0.375, -0.103] < (-0.103, 0.544] < (0.544, 1.976]]
In [134]: pd.value_counts(factor)
Out[134]:
(0.544, 1.976] 8
(-1.255, -0.375] 8
(-0.103, 0.544] 7
(-0.375, -0.103] 7
dtype: int64
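We can also pass infinite values to define the bins:

In [135]: arr = np.random.randn(20)
In [136]: factor = pd.cut(arr, [-np.inf, 0, np.inf])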
In [137]: factor
Out[137]:
[(0.0, inf], (0.0, inf], (0.0, inf], (0.0, inf], (-inf, 0.0], ..., (-inf, 0.0], (-inf, 0.0], (0.0, inf], (-inf, 0.0], (0.0, inf]]
Length: 20
Categories (2, interval[float64]): [(-inf, 0.0] < (0.0, inf]]
To apply your own or another library's functions to pandas objects, you should be aware of the three methods below.
The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or
Series, row- or column-wise, or elementwise.
1. Tablewise Function Application: pipe()
Pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or
another library's functions in method chains, alongside pandas methods.
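For example, compare nested function calls with a pipe chain (a sketch with functions f, g, and h that each take and return a DataFrame):

# instead of f(g(h(df), arg1=1), arg2=2)
(df.pipe(h)
 .pipe(g, arg1=1)
 .pipe(f, arg2=2))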
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. What
if the function you wish to apply takes its data as, say, the second argument? In this case, provide pipe with a tuple
of (callable, data_keyword). .pipe will route the DataFrame to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the
second argument, data. We pass in the function, keyword pair (sm.poisson, 'data') to pipe:
In [138]: import statsmodels.formula.api as sm
===============================================================================
coef std err z P>|z| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept -1267.3636 457.867 -2.768 0.006 -2164.767 -369.960
C(lg)[T.NL] -0.2057 0.101 -2.044 0.041 -0.403 -0.008
ln_h 0.9280 0.191 4.866 0.000 0.554 1.302
year 0.6301 0.228 2.762 0.006 0.183 1.077
g 0.0099 0.004 2.754 0.006 0.003 0.017
===============================================================================
"""
The pipe method is inspired by unix pipes and more recently dplyr and magrittr, which have introduced the popular
(%>%) (read pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in Python.
We encourage you to view the source code (pd.DataFrame.pipe?? in IPython).
Arbitrary functions can be applied along the axes of a DataFrame or Panel using the apply() method, which, like
the descriptive statistics methods, takes an optional axis argument:
In [141]: df.apply(np.mean)
Out[141]:
one -0.272211
three 0.024661
two 0.667306
dtype: float64
In [142]: df.apply(np.mean, axis=1)
Out[142]:
a 0.011457
b 0.558507
c 0.635781
d -0.839603
dtype: float64
In [143]: df.apply(lambda x: x.max() - x.min())
Out[143]:
one 1.563773
three 3.154112
two 2.973170
dtype: float64
In [144]: df.apply(np.cumsum)
In [145]: df.apply(np.exp)
Depending on the return type of the function passed to apply(), the result will either be of lower dimension or the
same dimension.
apply() combined with some cleverness can be used to answer many questions about a data set. For example,
suppose we wanted to extract the date where the maximum value for each column occurred:
In [148]: tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
.....: index=pd.date_range('1/1/2000', periods=1000))
.....:
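For example:

In [149]: tsdf.apply(lambda x: x.idxmax())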
You may also pass additional arguments and keyword arguments to the apply() method. For instance, consider the
following function you would like to apply:
def subtract_and_divide(x, sub, divide=1):
return (x - sub) / divide
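You may then apply this function as follows (a sketch):

df.apply(subtract_and_divide, args=(5,), divide=3)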
Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:
In [150]: tsdf
Out[150]:
A B C
2000-01-01 -0.720299 0.546303 -0.082042
2000-01-02 0.200295 -0.577554 -0.908402
In [151]: tsdf.apply(pd.Series.interpolate)
Out[151]:
A B C
2000-01-01 -0.720299 0.546303 -0.082042
2000-01-02 0.200295 -0.577554 -0.908402
2000-01-03 0.102533 1.653614 0.303319
2000-01-04 0.188539 1.391201 0.272754
2000-01-05 0.274546 1.128788 0.242189
2000-01-06 0.360553 0.866374 0.211624
2000-01-07 0.446559 0.603961 0.181059
2000-01-08 0.532566 0.341548 0.150493
2000-01-09 0.330418 1.761200 0.567133
2000-01-10 -0.251020 1.020099 1.893177
Finally, apply() takes an argument raw which is False by default, which converts each row or column into a Series
before applying the function. When set to True, the passed function will instead receive an ndarray object, which has
positive performance implications if you do not need the indexing functionality.
In [154]: tsdf
Out[154]:
A B C
2000-01-01 0.170247 -0.916844 0.835024
2000-01-02 1.259919 0.801111 0.445614
2000-01-03 1.453046 2.430373 0.653093
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -1.874526 0.569822 -0.609644
2000-01-09 0.812462 0.565894 -1.461363
2000-01-10 -0.985475 1.388154 -0.078747
Using a single function is equivalent to apply(); you can also pass named methods as strings. These will return a
Series of the aggregated output:
In [155]: tsdf.agg(np.sum)
Out[155]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [156]: tsdf.agg('sum')
Out[156]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [157]: tsdf.sum()
Out[157]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [158]: tsdf.A.agg('sum')
Out[158]: 0.835672979158205
You can pass multiple aggregation arguments as a list. The results of each of the passed functions will be a row in the
resultant DataFrame. These are naturally named from the aggregation function.
In [159]: tsdf.agg(['sum'])
Out[159]:
A B C
sum 0.835673 4.83851 -0.216025
Passing a named function will yield that name for the row:
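For instance (a sketch):
def mymean(x):
    return x.mean()
tsdf.A.agg(['sum', mymean])   # the second row is labeled 'mymean'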
Passing a dictionary of column names to a scalar or a list of scalars to DataFrame.agg allows you to customize
which functions are applied to which columns. Note that the results are not in any particular order; you can use an
OrderedDict instead to guarantee ordering.
Passing a list-like will generate a DataFrame output. You will get a matrix-like output of all of the aggregators. The
output will consist of all unique functions. Those that are not noted for a particular column will be NaN:
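For example (a sketch of both forms):
tsdf.agg(['sum', 'mean'])                      # list-like: one row per aggregator
tsdf.agg({'A': 'mean', 'B': ['min', 'max']})   # dict: per-column functions; unused cells are NaN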
When presented with mixed dtypes that cannot aggregate, .agg will only take the valid aggregations. This is similar
to how groupby .agg works.
In [166]: mdf = pd.DataFrame({'A': [1, 2, 3],
   .....:                     'B': [1., 2., 3.],
   .....:                     'C': ['foo', 'bar', 'baz'],
   .....:                     'D': pd.date_range('20130101', periods=3)})
   .....:
In [168]: mdf.dtypes
Out[168]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
With .agg() it is possible to easily create a custom describe function, similar to the built-in describe function.
In [178]: tsdf
Out[178]:
A B C
2000-01-01 -0.578465 -0.503335 -0.987140
2000-01-02 -0.767147 -0.266046 1.083797
2000-01-03 0.195348 0.722247 -0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -0.556397 0.542165 -0.308675
2000-01-09 -1.010924 -0.672504 -1.139222
2000-01-10 0.354653 0.563622 -0.365106
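A sketch of such a custom describe, using functools.partial to build a named quantile aggregator:
from functools import partial
q_25 = partial(pd.Series.quantile, q=0.25)
q_25.__name__ = '25%'
tsdf.agg(['count', 'mean', 'std', 'min', q_25, 'median', 'max'])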
Transform the entire frame. .transform() allows input functions as: a NumPy function, a string function name, or
a user-defined function.
In [179]: tsdf.transform(np.abs)
Out[179]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
In [180]: tsdf.transform('abs')
Out[180]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
In [181]: tsdf.transform(lambda x: x.abs())
Out[181]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
Passing a single function to .transform() with a Series will yield a single Series in return.
In [183]: tsdf.A.transform(np.abs)
Out[183]:
2000-01-01 0.578465
2000-01-02 0.767147
2000-01-03 0.195348
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
2000-01-08 0.556397
2000-01-09 1.010924
2000-01-10 0.354653
Freq: D, Name: A, dtype: float64
Passing multiple functions will yield a column multi-indexed DataFrame. The first level will be the original frame
column names; the second level will be the names of the transforming functions.
In [184]: tsdf.transform([np.abs, lambda x: x+1])
Out[184]:
A B C
absolute <lambda> absolute <lambda> absolute <lambda>
2000-01-01 0.578465 0.421535 0.503335 0.496665 0.987140 0.012860
2000-01-02 0.767147 0.232853 0.266046 0.733954 1.083797 2.083797
2000-01-03 0.195348 1.195348 0.722247 1.722247 0.894537 0.105463
2000-01-04 NaN NaN NaN NaN NaN NaN
2000-01-05 NaN NaN NaN NaN NaN NaN
2000-01-06 NaN NaN NaN NaN NaN NaN
2000-01-07 NaN NaN NaN NaN NaN NaN
2000-01-08 0.556397 0.443603 0.542165 1.542165 0.308675 0.691325
2000-01-09 1.010924 -0.010924 0.672504 0.327496 1.139222 -0.139222
2000-01-10 0.354653 1.354653 0.563622 1.563622 0.365106 0.634894
Passing multiple functions to a Series will yield a DataFrame. The resulting column names will be the transforming
functions.
In [185]: tsdf.A.transform([np.abs, lambda x: x+1])
Out[185]:
absolute <lambda>
2000-01-01 0.578465 0.421535
2000-01-02 0.767147 0.232853
2000-01-03 0.195348 1.195348
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.556397 0.443603
2000-01-09 1.010924 -0.010924
2000-01-10 0.354653 1.354653
Passing a dict of functions will allow selective transforming per column.
In [186]: tsdf.transform({'A': np.abs, 'B': lambda x: x+1})
Out[186]:
A B
2000-01-01 0.578465 0.496665
2000-01-02 0.767147 0.733954
2000-01-03 0.195348 1.722247
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.556397 1.542165
2000-01-09 1.010924 0.327496
2000-01-10 0.354653 1.563622
Passing a dict of lists will generate a multi-indexed DataFrame with these selective transforms.
In [187]: tsdf.transform({'A': np.abs, 'B': [lambda x: x+1, 'sqrt']})
Out[187]:
A B
absolute <lambda> sqrt
2000-01-01 0.578465 0.496665 NaN
2000-01-02 0.767147 0.733954 NaN
2000-01-03 0.195348 1.722247 0.849851
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 1.542165 0.736318
2000-01-09 1.010924 0.327496 NaN
2000-01-10 0.354653 1.563622 0.750748
Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the methods
applymap() on DataFrame and analogously map() on Series accept any Python function taking a single value and
returning a single value. For example:
In [188]: df4
In [189]: f = lambda x: len(str(x))
In [190]: df4['one'].map(f)
Out[190]:
a 19
b 20
c 18
d 3
Name: one, dtype: int64
In [191]: df4.applymap(f)
Out[191]:
one three two
a 19 3 18
b 20 19 18
c 18 18 20
d 3 19 19
Series.map() has an additional feature: it can be used to easily link or map values defined by a
secondary series. This is closely related to merging/joining functionality:
In [192]: s = pd.Series(['six', 'seven', 'six', 'seven', 'six'],
   .....:               index=['a', 'b', 'c', 'd', 'e'])
   .....:
In [193]: t = pd.Series({'six': 6., 'seven': 7.})
In [194]: s
Out[194]:
a six
b seven
c six
d seven
e six
dtype: object
In [195]: s.map(t)
Out[195]:
a 6.0
b 7.0
c 6.0
d 7.0
e 6.0
dtype: float64
Applying with a Panel will pass a Series to the applied function. If the applied function returns a Series, the
result of the application will be a Panel. If the applied function reduces to a scalar, the result of the application will
be a DataFrame.
Note: Prior to 0.13.1 apply on a Panel would only work on ufuncs (e.g. np.sum/np.max).
In [198]: panel
Out[198]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [199]: panel['ItemA']
Out[199]:
A B C D
2000-01-03 1.092702 0.604244 -2.927808 0.339642
2000-01-04 -1.481449 -0.487265 0.082065 1.499953
2000-01-05 1.781190 1.990533 0.456554 -0.317818
2000-01-06 -0.031543 0.327007 -1.757911 0.447371
2000-01-07 0.480993 1.053639 0.982407 -1.315799
A transformational apply.
In [200]: result = panel.apply(lambda x: x*2, axis='items')
In [201]: result
Out[201]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [202]: result['ItemA']
Out[202]:
A B C D
2000-01-03 2.185405 1.208489 -5.855616 0.679285
2000-01-04 -2.962899 -0.974530 0.164130 2.999905
2000-01-05 3.562379 3.981066 0.913107 -0.635635
2000-01-06 -0.063086 0.654013 -3.515821 0.894742
2000-01-07 0.961986 2.107278 1.964815 -2.631598
A reduction operation.
In [203]: panel.apply(lambda x: x.dtype, axis='items')
Out[203]:
A B C D
2000-01-03 float64 float64 float64 float64
2000-01-04 float64 float64 float64 float64
2000-01-05 float64 float64 float64 float64
2000-01-06 float64 float64 float64 float64
2000-01-07 float64 float64 float64 float64
A transformation operation that returns a Panel, computing the z-score across the major_axis:
In [206]: result = panel.apply(
.....: lambda x: (x-x.mean())/x.std(),
.....: axis='major_axis')
.....:
In [207]: result
Out[207]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [208]: result['ItemA']
Out[208]:
A B C D
2000-01-03 0.585813 -0.102070 -1.394063 0.201263
2000-01-04 -1.496089 -1.295066 0.434343 1.318766
2000-01-05 1.142642 1.413112 0.661833 -0.431942
2000-01-06 -0.323445 -0.405085 -0.683386 0.305017
2000-01-07 0.091079 0.389108 0.981273 -1.393105
Apply can also accept multiple axes in the axis argument. This will pass a DataFrame of the cross-section to the
applied function.
In [209]: f = lambda x: ((x.T - x.mean(1)) / x.std(1)).T
In [210]: result = panel.apply(f, axis=['items', 'major_axis'])
In [211]: result
Out[211]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [212]: result.loc[:,:,'ItemA']
Out[212]:
A B C D
2000-01-03 0.859304 0.448509 -1.109374 0.397237
2000-01-04 -1.053319 -1.063370 0.986639 1.152266
2000-01-05 1.106511 1.143185 -0.093917 -0.583083
2000-01-06 0.561619 -0.835608 -1.075936 0.194525
2000-01-07 -0.339514 1.097901 0.747522 -1.147605
This is equivalent to the following:
In [213]: result = pd.Panel(dict([(ax, f(panel.loc[:, :, ax]))
   .....:                         for ax in panel.minor_axis]))
   .....:
In [214]: result
Out[214]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [215]: result.loc[:,:,'ItemA']
Out[215]:
A B C D
2000-01-03 0.859304 0.448509 -1.109374 0.397237
2000-01-04 -1.053319 -1.063370 0.986639 1.152266
2000-01-05 1.106511 1.143185 -0.093917 -0.583083
2000-01-06 0.561619 -0.835608 -1.075936 0.194525
2000-01-07 -0.339514 1.097901 0.747522 -1.147605
9.7 Reindexing and altering labels
reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features
relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a
particular axis. This accomplishes several things:
Reorders the existing data to match a new set of labels
Inserts missing value (NA) markers in label locations where no data for that label existed
If specified, fill data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:
In [216]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
In [217]: s
Out[217]:
a -0.454087
b -0.360309
c -0.951631
d -0.535459
e 0.835231
dtype: float64
e 0.835231
b -0.360309
f NaN
d -0.535459
dtype: float64
Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [219]: df
Out[219]:
one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288
For convenience, you may utilize the reindex_axis() method, which takes the labels and a keyword axis
parameter.
Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series
and a DataFrame, the following can be done:
In [221]: rs = s.reindex(df.index)
In [222]: rs
Out[222]:
a -0.454087
b -0.360309
c -0.951631
d -0.535459
dtype: float64
This means that the reindexed Series's index is the same Python object as the DataFrame's index.
See also: MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.
Note: When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing
ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a
reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily
optimized), but when CPU cycles matter sprinkling a few explicit reindex calls here and there can have an impact.
You may wish to take an object and reindex its axes to be labeled the same as another object. While the syntax for this
is straightforward albeit verbose, it is a common enough operation that the reindex_like() method is available
to make this simpler:
In [224]: df2
Out[224]:
one two
a -1.101558 1.124472
b -0.177289 2.487104
c 0.462215 -0.486066
In [225]: df3
Out[225]:
one two
a -0.829347 0.082635
b 0.094922 1.445267
c 0.734426 -1.527903
In [226]: df.reindex_like(df2)
Out[226]:
one two
a -1.101558 1.124472
b -0.177289 2.487104
c 0.462215 -0.486066
The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to
joining and merging):
join='outer': take the union of the indexes (default)
join='left': use the calling objects index
join='right': use the passed objects index
join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [228]: s1 = s[:4]
In [229]: s2 = s[1:]
In [230]: s1.align(s2)
Out[230]:
(a 0.505453
b 1.788110
c -0.405908
d -0.801912
e NaN
dtype: float64, a NaN
b 1.788110
c -0.405908
d -0.801912
e 0.768460
dtype: float64)
In [231]: s1.align(s2, join='inner')
Out[231]:
(b 1.788110
c -0.405908
d -0.801912
dtype: float64, b 1.788110
c -0.405908
d -0.801912
dtype: float64)
In [232]: s1.align(s2, join='left')
Out[232]:
(a 0.505453
b 1.788110
c -0.405908
d -0.801912
dtype: float64, a NaN
b 1.788110
c -0.405908
d -0.801912
dtype: float64)
For DataFrames, the join method will be applied to both the index and the columns by default:
You can also pass an axis option to only align on the specified axis:
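For example (a sketch):
df.align(df2, join='inner', axis=0)   # align only the index, keep all columns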
If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrames index
or columns using the axis argument:
In [235]: df.align(df2.iloc[0], axis=1)
Out[235]:
( one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288, one -1.101558
three NaN
two 1.124472
Name: a, dtype: float64)
reindex() takes an optional parameter method which is a filling method chosen from the following table:
Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward
nearest Fill from the nearest index value
We illustrate these fill methods on a simple Series:
In [236]: rng = pd.date_range('1/3/2000', periods=8)
In [237]: ts = pd.Series(np.random.randn(8), index=rng)
In [238]: ts2 = ts[[0, 3, 6]]
In [239]: ts
Out[239]:
2000-01-03 0.466284
2000-01-04 -0.457411
2000-01-05 -0.364060
2000-01-06 0.785367
2000-01-07 -1.463093
2000-01-08 1.187315
2000-01-09 -0.493153
2000-01-10 -1.323445
Freq: D, dtype: float64
In [240]: ts2
Out[240]:
2000-01-03 0.466284
2000-01-06 0.785367
2000-01-09 -0.493153
dtype: float64
In [241]: ts2.reindex(ts.index)
Out[241]:
2000-01-03 0.466284
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 0.785367
2000-01-07 NaN
2000-01-08 NaN
2000-01-09 -0.493153
2000-01-10 NaN
Freq: D, dtype: float64
In [242]: ts2.reindex(ts.index, method='ffill')
Out[242]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.466284
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 0.785367
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
In [243]: ts2.reindex(ts.index, method='bfill')
Out[243]:
2000-01-03 0.466284
2000-01-04 0.785367
2000-01-05 0.785367
2000-01-06 0.785367
2000-01-07 -0.493153
2000-01-08 -0.493153
2000-01-09 -0.493153
2000-01-10 NaN
Freq: D, dtype: float64
In [244]: ts2.reindex(ts.index, method='nearest')
Out[244]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.785367
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 -0.493153
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using fillna (except for method='nearest') or interpolate:
In [245]: ts2.reindex(ts.index).fillna(method='ffill')
Out[245]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.466284
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 0.785367
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonic increasing or decreasing. fillna() and
interpolate() will not make any checks on the order of the index.
The limit and tolerance arguments provide additional control over filling while reindexing. Limit specifies the
maximum count of consecutive matches:
In contrast, tolerance specifies the maximum distance between the index and indexer values:
Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will be coerced
into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.
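For example (a sketch):
ts2.reindex(ts.index, method='ffill', limit=1)            # fill at most 1 consecutive missing label
ts2.reindex(ts.index, method='ffill', tolerance='1 day')  # only fill within 1 day of a valid value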
A method closely related to reindex is the drop() function. It removes a set of labels from an axis:
In [248]: df
Out[248]:
one three two
a -1.101558 NaN 1.124472
b -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
d NaN -1.222918 -0.456288
In [250]: df.drop(['one'], axis=1)
Out[250]:
three two
a NaN 1.124472
b -0.634293 2.487104
c 1.931194 -0.486066
d -1.222918 -0.456288
Note that the following also works, but is a bit less obvious / clean:
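A sketch of that alternative, using index set operations instead of drop():
df.reindex(df.index.difference(['a']))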
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
In [252]: s
Out[252]:
a 0.505453
b 1.788110
c -0.405908
d -0.801912
e 0.768460
dtype: float64
In [253]: s.rename(str.upper)
Out[253]:
A 0.505453
B 1.788110
C -0.405908
D -0.801912
E 0.768460
dtype: float64
If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique
values). A dict or Series can also be used:
In [254]: df.rename(columns={'one': 'foo', 'two': 'bar'},
   .....:           index={'a': 'apple', 'b': 'banana', 'd': 'durian'})
   .....:
Out[254]:
foo three bar
apple -1.101558 NaN 1.124472
banana -0.177289 -0.634293 2.487104
c 0.462215 1.931194 -0.486066
durian NaN -1.222918 -0.456288
If the mapping doesn't include a column/index label, it isn't renamed. Also, extra labels in the mapping don't throw an
error.
The rename() method also provides an inplace named parameter that is by default False and copies the underlying
data. Pass inplace=True to rename the data in place.
New in version 0.18.0.
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.
In [255]: s.rename("scalar-name")
Out[255]:
a 0.505453
b 1.788110
c -0.405908
d -0.801912
e 0.768460
Name: scalar-name, dtype: float64
The Panel class has a related rename_axis() method which can rename any of its three axes.
9.8 Iteration
The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded
as array-like, and basic iteration produces the values. Other data structures, like DataFrame and Panel, follow the
dict-like convention of iterating over the keys of the objects.
In short, basic iteration (for i in object) produces:
Series: values
DataFrame: column labels
Panel: item labels
Thus, for example, iterating over a DataFrame gives you the column names:
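For example (a sketch):
for col in df:
    print(col)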
Pandas objects also have the dict-like iteritems() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series
objects, which can change the dtypes and has some performance implications.
itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than
iterrows(), and is in most cases preferable for iterating over the values of a DataFrame.
Warning: Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is
not needed and can be avoided with one of the following approaches:
Look for a vectorized solution: many operations can be performed using built-in methods or numpy func-
tions, (boolean) indexing, ...
When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply()
instead of iterating over the values. See the docs on function application.
If you need to do iterative manipulations on the values but performance is important, consider writing the
inner loop using e.g. cython or numba. See the enhancing performance section for some examples of this
approach.
Warning: You should never modify something you are iterating over. This is not guaranteed to work in all cases.
Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [258]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
In [259]: for index, row in df.iterrows():
   .....:     row['a'] = 10
   .....:
In [260]: df
Out[260]:
a b
0 1 a
1 2 b
2 3 c
9.8.1 iteritems
Consistent with the dict-like interface, iteritems() iterates through key-value pairs:
Series: (index, scalar value) pairs
DataFrame: (column, Series) pairs
Panel: (item, DataFrame) pairs
For example:
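A sketch:
for label, content in df.iteritems():
    print(label)   # each column label; content is the column as a Series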
9.8.2 iterrows
iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding
each index value along with a Series containing the data in each row:
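A sketch:
for row_index, row in df.iterrows():
    print(row_index, row.values)   # row is a Series holding that row's values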
Note: Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
In [263]: df_orig = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
In [264]: df_orig.dtypes
Out[264]:
int int64
float float64
dtype: object
In [265]: row = next(df_orig.iterrows())[1]
In [266]: row
Out[266]:
int 1.0
float 1.5
Name: 0, dtype: float64
All values in row, returned as a Series, are now upcasted to floats, including the original integer value in column int:
In [267]: row['int'].dtype
Out[267]: dtype('float64')
In [268]: df_orig['int'].dtype
Out[268]: dtype('int64')
To preserve dtypes while iterating over the rows, it is better to use itertuples(), which returns namedtuples of the
values and which is generally much faster than iterrows(). For instance, a contrived way to transpose the DataFrame
would be:
In [269]: df2 = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
In [270]: print(df2)
x y
0 1 4
1 2 5
2 3 6
In [271]: print(df2.T)
0 1 2
x 1 2 3
y 4 5 6
In [272]: df2_t = pd.DataFrame(dict((idx, values) for idx, values in df2.iterrows()))
In [273]: print(df2_t)
0 1 2
x 1 2 3
y 4 5 6
9.8.3 itertuples
The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first
element of the tuple will be the row's corresponding index value, while the remaining values are the row values.
For instance,
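A sketch:
for row in df2.itertuples():
    print(row)   # e.g. Pandas(Index=0, x=1, y=4)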
This method does not convert the row to a Series object but just returns the values inside a namedtuple. Therefore,
itertuples() preserves the data type of the values and is generally faster than iterrows().
Note: The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start
with an underscore. With a large number of columns (>255), regular tuples are returned.
9.9 .dt accessor
Series has an accessor to succinctly return datetime-like properties for the values of the Series, if it is a
datetime/period-like Series. This will return a Series, indexed like the existing Series.
# datetime
In [275]: s = pd.Series(pd.date_range('20130101 09:10:12', periods=4))
In [276]: s
Out[276]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [277]: s.dt.hour
Out[277]:
0 9
1 9
2 9
3 9
dtype: int64
In [278]: s.dt.second
Out[278]:
0 12
1 12
2 12
3 12
dtype: int64
In [279]: s.dt.day
Out[279]:
0 1
1 2
2 3
3 4
dtype: int64
In [281]: stz = s.dt.tz_localize('US/Eastern')
In [282]: stz
Out[282]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]
In [283]: stz.dt.tz
Out[283]:
<DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
In [284]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[284]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which supports the same format as
the standard strftime().
# DatetimeIndex
In [285]: s = pd.Series(pd.date_range('20130101', periods=4))
In [286]: s
Out[286]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [287]: s.dt.strftime('%Y/%m/%d')
Out[287]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# PeriodIndex
In [288]: s = pd.Series(pd.period_range('20130101', periods=4))
In [289]: s
Out[289]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [290]: s.dt.strftime('%Y/%m/%d')
Out[290]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# period
In [291]: s = pd.Series(pd.period_range('20130101', periods=4, freq='D'))
In [292]: s
Out[292]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [293]: s.dt.year
Out[293]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [294]: s.dt.day
Out[294]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [295]: s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s'))
In [296]: s
Out[296]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
dtype: timedelta64[ns]
In [297]: s.dt.days
Out[297]:
0 1
1 1
2 1
3 1
dtype: int64
In [298]: s.dt.seconds
Out[298]:
0 5
1 6
2 7
3 8
dtype: int64
In [299]: s.dt.components
Out[299]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 1 0 0 5 0 0 0
1 1 0 0 6 0 0 0
2 1 0 0 7 0 0 0
3 1 0 0 8 0 0 0
Note: Series.dt will raise a TypeError if you access it with non-datetime-like values.
9.10 Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array.
Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series's
str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:
In [300]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [301]: s.str.lower()
Out[301]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular
expressions by default (and in some cases always uses them).
Please see Vectorized String Methods for a complete description.
9.11 Sorting
Warning: The sorting API is substantially changed in 0.17.0, see here for these changes. In particular, all sorting
methods now return a new object by default, and DO NOT operate in-place (except by passing inplace=True).
There are two obvious kinds of sorting that you may be interested in: sorting by label and sorting by actual values.
9.11.1 By Index
The primary method for sorting axis labels (indexes) are the Series.sort_index() and the DataFrame.
sort_index() methods.
# DataFrame
In [303]: unsorted_df.sort_index()
Out[303]:
three two one
a NaN NaN NaN
b NaN NaN NaN
c NaN NaN NaN
d NaN NaN NaN
In [304]: unsorted_df.sort_index(ascending=False)
In [305]: unsorted_df.sort_index(axis=1)
# Series
In [306]: unsorted_df['three'].sort_index()
Out[306]:
a NaN
b NaN
c NaN
d NaN
Name: three, dtype: float64
9.11.2 By Values
The Series.sort_values() and DataFrame.sort_values() methods are the entry points for value sorting (that
is, sorting by the values in a column or row). DataFrame.sort_values() can accept an optional by argument for axis=0
which will use an arbitrary vector or a column name of the DataFrame to determine the sort order:
In [307]: df1 = pd.DataFrame({'one': [2, 1, 1, 1],
   .....:                     'two': [1, 3, 2, 4],
   .....:                     'three': [5, 4, 3, 2]})
   .....:
In [308]: df1.sort_values(by='two')
Out[308]:
one three two
0 2 5 1
2 1 3 2
1 1 4 3
3 1 2 4
These methods have special treatment of NA values via the na_position argument:
In [310]: s[2] = np.nan
In [311]: s.sort_values()
Out[311]:
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
2 NaN
5 NaN
dtype: object
In [312]: s.sort_values(na_position='first')
Out[312]:
2 NaN
5 NaN
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
dtype: object
9.11.3 searchsorted
Series has the searchsorted() method, which works similarly to numpy.ndarray.searchsorted().
9.11.4 smallest / largest values
Series has the nsmallest() and nlargest() methods which return the smallest or largest n values. For a large
Series this can be much faster than sorting the entire Series and calling head(n) on the result.
In [320]: s = pd.Series(np.random.permutation(10))
In [321]: s
Out[321]:
0 3
1 1
2 9
3 6
4 0
5 8
6 5
7 2
8 7
9 4
dtype: int64
In [322]: s.sort_values()
Out[322]:
4 0
1 1
7 2
0 3
9 4
6 5
3 6
8 7
5 8
2 9
dtype: int64
In [323]: s.nsmallest(3)
Out[323]:
4 0
1 1
7 2
dtype: int64
In [324]: s.nlargest(3)
Out[324]:
2 9
5 8
8 7
dtype: int64
DataFrame also has the nlargest() and nsmallest() methods (the calls that produced the two frames below
were lost in extraction).
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0
a b c
0 -2 a 1.0
2 1 d 4.0
4 8 e NaN
1 -1 b 2.0
6 -1 f 4.0
You must be explicit about sorting when the column is a MultiIndex, and fully specify all levels to by.
In [331]: df1.sort_values(by=('a','two'))
Out[331]:
a b
one two three
3 1 2 4
2 1 3 2
1 1 4 3
0 2 5 1
9.12 Copying
The copy() method on pandas objects copies the underlying data (though not the axis indexes, since they are im-
mutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a
handful of ways to alter a DataFrame in-place:
Inserting, deleting, or modifying a column
Assigning to the index or columns attributes
For homogeneous data, directly modifying the values via the values attribute or advanced indexing
To be clear, no pandas methods have the side effect of modifying your data; almost all methods return new objects,
leaving the original object untouched. If data is modified, it is because you did so explicitly.
9.13 dtypes
The main types stored in pandas objects are float, int, bool, datetime64[ns] and datetime64[ns, tz]
(in >= 0.17.0), timedelta64[ns], category (in >= 0.15.0), and object. In addition these dtypes have item sizes,
e.g. int64 and int32. See Series with TZ for more detail on datetime64[ns, tz] dtypes.
A convenient dtypes attribute for DataFrames returns a Series with the data type of each column.
In [333]: dft
Out[333]:
A B C D E F G
0 0.534749 1 foo 2001-01-02 1.0 False 1
1 0.688452 1 foo 2001-01-02 1.0 False 1
2 0.777842 1 foo 2001-01-02 1.0 False 1
In [334]: dft.dtypes
Out[334]:
A float64
B int64
C object
D datetime64[ns]
E float32
F bool
G int8
dtype: object
In [335]: dft['A'].dtype
Out[335]: dtype('float64')
If a pandas object contains data of multiple dtypes IN A SINGLE COLUMN, the dtype of the column will be chosen to
accommodate all of the data types (object is the most general).
# these ints are coerced to object dtype by the string
In [337]: pd.Series([1, 2, 3, 6., 'foo'])
Out[337]:
0 1
1 2
2 3
3 6
4 foo
dtype: object
The method get_dtype_counts() will return the number of columns of each type in a DataFrame:
In [338]: dft.get_dtype_counts()
Out[338]:
bool 1
datetime64[ns] 1
float32 1
float64 1
int64 1
int8 1
object 1
dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0). If a dtype is passed (either directly
via the dtype keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations.
Furthermore, different numeric dtypes will NOT be combined. The following example will give you a taste.
In [339]: df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32')
In [340]: df1
Out[340]:
A
0 -2.038777
1 1.121731
2 0.586626
3 -0.282532
4 0.410238
5 -0.540166
6 1.400679
7 -0.255975
In [341]: df1.dtypes
Out[341]:
A float32
dtype: object
In [342]: df2 = pd.DataFrame(dict(A=pd.Series(np.random.randn(8), dtype='float16'),
   .....:                         B=pd.Series(np.random.randn(8)),
   .....:                         C=pd.Series(np.array(np.random.randn(8), dtype='uint8'))))
   .....:
In [343]: df2
Out[343]:
A B C
0 -0.624512 -1.397492 0
1 0.022354 1.338115 0
2 -0.433594 0.781169 255
3 -0.405762 -0.791687 0
4 -0.149658 -0.764810 255
5 0.644531 -2.000933 0
6 -1.260742 -0.345662 0
7 0.365967 0.393915 0
In [344]: df2.dtypes
Out[344]:
A float16
B float64
C uint8
dtype: object
9.13.1 defaults
By default integer types are int64 and float types are float64, REGARDLESS of platform (32-bit or 64-bit). The
following will all result in int64 dtypes.
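A reconstruction of the elided examples, consistent with the rule just stated:
In [345]: pd.DataFrame([1, 2], columns=['a']).dtypes
Out[345]:
a int64
dtype: object
In [346]: pd.DataFrame({'a': [1, 2]}).dtypes
Out[346]:
a int64
dtype: object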
Numpy, however will choose platform-dependent types when creating arrays. The following WILL result in int32
on 32-bit platform.
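A reconstruction of the elided example:
In [348]: frame = pd.DataFrame(np.array([1, 2]))
# frame.dtypes is int32 on a 32-bit platform, int64 on a 64-bit one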
9.13.2 upcasting
Types can potentially be upcasted when combined with other types, meaning they are promoted from the current type
(e.g. int to float).
In [349]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
In [350]: df3
Out[350]:
A B C
0 -2.663288 -1.397492 0.0
1 1.144085 1.338115 0.0
2 0.153032 0.781169 255.0
3 -0.688294 -0.791687 0.0
4 0.260580 -0.764810 255.0
5 0.104365 -2.000933 0.0
6 0.139937 -0.345662 0.0
7 0.109992 0.393915 0.0
In [351]: df3.dtypes
Out[351]:
A float32
B float64
C float64
dtype: object
The values attribute on a DataFrame return the lower-common-denominator of the dtypes, meaning the dtype that
can accommodate ALL of the types in the resulting homogeneous dtyped numpy array. This can force some upcasting.
In [352]: df3.values.dtype
Out[352]: dtype('float64')
9.13.3 astype
You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a
copy, even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.
Upcasting is always according to the numpy rules. If two different dtypes are involved in an operation, then the more
general one will be used as the result of the operation.
In [353]: df3
Out[353]:
A B C
0 -2.663288 -1.397492 0.0
1 1.144085 1.338115 0.0
2 0.153032 0.781169 255.0
3 -0.688294 -0.791687 0.0
4 0.260580 -0.764810 255.0
5 0.104365 -2.000933 0.0
6 0.139937 -0.345662 0.0
7 0.109992 0.393915 0.0
In [354]: df3.dtypes
Out[354]:
A float32
B float64
C float64
dtype: object
# conversion of dtypes
In [355]: df3.astype('float32').dtypes
Out[355]:
A float32
B float32
C float32
dtype: object
In [356]: dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
In [357]: dft[['a', 'b']] = dft[['a', 'b']].astype(np.uint8)
In [358]: dft
Out[358]:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
In [359]: dft.dtypes
Out[359]:
a uint8
b uint8
c int64
dtype: object
New in version 0.19.0: astype() can also take a dict of column names to dtypes.
In [360]: dft1 = pd.DataFrame({'a': [1, 0, 1], 'b': [4, 5, 6], 'c': [7, 8, 9]})
In [361]: dft1 = dft1.astype({'a': np.bool_, 'c': np.float64})
In [362]: dft1
Out[362]:
a b c
0 True 4 7.0
1 False 5 8.0
2 True 6 9.0
In [363]: dft1.dtypes
Out[363]:
a bool
b int64
c float64
dtype: object
Note: When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit what we are assigning into the current dtypes, while [] will overwrite them, taking the dtype from
the right-hand side. Therefore the following piece of code produces the unintended result.
In [364]: dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]})
In [365]: dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes
Out[365]:
a uint8
b uint8
dtype: object
In [366]: dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8)
In [367]: dft.dtypes
Out[367]:
a int64
b int64
c int64
dtype: object
9.13.4 object conversion
pandas offers various functions to try to force conversion of types from the object dtype to other types. The
following functions are available for one-dimensional object arrays or scalars:
to_numeric() (conversion to numeric dtypes)
to_datetime() (conversion to datetime objects)
to_timedelta() (conversion to timedelta objects)
In [368]: m = ['1.1', 2, 3]
In [369]: pd.to_numeric(m)
Out[369]: array([ 1.1, 2. , 3. ])
In [370]: import datetime
In [371]: m = ['2016-07-09', datetime.datetime(2016, 3, 2)]
In [372]: pd.to_datetime(m)
Out[372]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]', freq=None)
In [373]: m = ['5us', pd.Timedelta('1day')]
In [374]: pd.to_timedelta(m)
Out[374]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements
that cannot be converted to the desired dtype or object. By default, errors='raise', meaning that any errors
encountered will be raised during the conversion process. However, if errors='coerce', these errors will be ignored
and pandas will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric).
This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but
occasionally has non-conforming elements intermixed that you want to represent as missing:
In [375]: import datetime
In [376]: m = ['apple', datetime.datetime(2016, 3, 2)]
In [377]: pd.to_datetime(m, errors='coerce')
Out[377]: DatetimeIndex(['NaT', '2016-03-02'], dtype='datetime64[ns]', freq=None)
In [378]: m = ['apple', 2, 3]
In [379]: pd.to_numeric(m, errors='coerce')
Out[379]: array([ nan, 2., 3.])
The errors parameter has a third option of errors='ignore', which will simply return the passed in data if it
encounters any errors with the conversion to a desired data type:
In [382]: import datetime
In [383]: m = ['apple', datetime.datetime(2016, 3, 2)]
In [384]: pd.to_datetime(m, errors='ignore')
Out[384]: array(['apple', datetime.datetime(2016, 3, 2, 0, 0)], dtype=object)
In [385]: m = ['apple', 2, 3]
In [386]: pd.to_numeric(m, errors='ignore')
Out[386]: array(['apple', 2, 3], dtype=object)
In addition to object conversion, to_numeric() provides another argument downcast, which gives the option of
downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [389]: m = ['1', 2, 3]
In [390]: pd.to_numeric(m, downcast='integer')   # smallest signed int dtype
Out[390]: array([1, 2, 3], dtype=int8)
In [391]: pd.to_numeric(m, downcast='float')     # smallest float dtype
Out[391]: array([ 1., 2., 3.], dtype=float32)
As these methods apply only to one-dimensional arrays, lists, or scalars, they cannot be used directly on multi-
dimensional objects such as DataFrames. However, with apply(), we can apply the function over each column
efficiently:
In [396]: df
Out[396]:
0 1
0 2016-07-09 2016-03-02 00:00:00
1 2016-07-09 2016-03-02 00:00:00
In [397]: df.apply(pd.to_datetime)
Out[397]:
0 1
0 2016-07-09 2016-03-02
1 2016-07-09 2016-03-02
In [399]: df
Out[399]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [400]: df.apply(pd.to_numeric)
Out[400]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [402]: df
Out[402]:
0 1
0 5us 1 days 00:00:00
1 5us 1 days 00:00:00
In [403]: df.apply(pd.to_timedelta)
Out[403]:
0 1
0 00:00:00.000005 1 days
1 00:00:00.000005 1 days
9.13.5 gotchas
Performing selection operations on integer-type data can easily upcast the data to floating. The dtype of the
input data will be preserved in cases where nans are not introduced (starting in 0.11.0). See also Support for integer
NA.
In [404]: dfi = df3.astype('int32')
In [405]: dfi['E'] = 1
In [406]: dfi
Out[406]:
A B C E
0 -2 -1 0 1
1 1 1 0 1
2 0 0 255 1
3 0 0 0 1
4 0 0 255 1
5 0 -2 0 1
6 0 0 0 1
7 0 0 0 1
In [407]: dfi.dtypes
Out[407]:
A int32
B int32
C int32
E int64
dtype: object
In [408]: casted = dfi[dfi > 0]
In [409]: casted
Out[409]:
A B C E
0 NaN NaN NaN 1
1 1.0 1.0 NaN 1
2 NaN NaN 255.0 1
3 NaN NaN NaN 1
4 NaN NaN 255.0 1
5 NaN NaN NaN 1
6 NaN NaN NaN 1
7 NaN NaN NaN 1
In [410]: casted.dtypes
Out[410]:
A float64
B float64
C float64
E int64
dtype: object
In [411]: dfa = df3.copy()
In [412]: dfa['A'] = dfa['A'].astype('float32')
In [413]: dfa.dtypes
Out[413]:
A float32
B float64
C float64
dtype: object
In [414]: casted = dfa[df2 > 0]
In [415]: casted
Out[415]:
A B C
0 NaN NaN NaN
1 1.144085 1.338115 NaN
2 NaN 0.781169 255.0
3 NaN NaN NaN
4 NaN NaN 255.0
5 0.104365 NaN NaN
6 NaN NaN NaN
7 0.109992 0.393915 NaN
In [416]: casted.dtypes
Out[416]:
A float32
B float64
C float64
dtype: object
9.14 Selecting columns based on dtype
The select_dtypes() method implements subsetting of columns based on their dtype.
In [422]: df
Out[422]:
bool1 bool2 category dates float64 int64 string \
0 True False A 2017-05-05 12:15:02.873800 4.0 1 a
1 False True B 2017-05-06 12:15:02.873800 5.0 2 b
2 True False C 2017-05-07 12:15:02.873800 6.0 3 c
In [423]: df.dtypes
Out[423]:
bool1 bool
bool2 bool
category category
dates datetime64[ns]
float64 float64
int64 int64
string object
uint8 uint8
tdeltas timedelta64[ns]
uint64 uint64
other_dates datetime64[ns]
tz_aware_dates datetime64[ns, US/Eastern]
dtype: object
select_dtypes() has two parameters, include and exclude, that allow you to say give me the columns
WITH these dtypes (include) and/or give me the columns WITHOUT these dtypes (exclude).
For example, to select bool columns
In [424]: df.select_dtypes(include=[bool])
Out[424]:
bool1 bool2
0 True False
1 False True
2 True False
You can also pass the name of a dtype in the numpy dtype hierarchy:
In [425]: df.select_dtypes(include=['bool'])
Out[425]:
bool1 bool2
0 True False
1 False True
2 True False
In [427]: df.select_dtypes(include=['object'])
Out[427]:
string
0 a
1 b
2 c
To see all the child dtypes of a generic dtype like numpy.number you can define a function that returns a tree of
child dtypes:
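The function definition itself was lost in this extraction; a minimal reconstruction that recursively walks
__subclasses__():
def subdtypes(dtype):
    # leaf dtypes have no subclasses; return them directly
    subs = dtype.__subclasses__()
    if not subs:
        return dtype
    return [dtype, [subdtypes(dt) for dt in subs]]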
In [429]: subdtypes(np.generic)
Out[429]:
[numpy.generic,
[[numpy.number,
[[numpy.integer,
[[numpy.signedinteger,
[numpy.int8,
numpy.int16,
numpy.int32,
numpy.int64,
numpy.int64,
numpy.timedelta64]],
[numpy.unsignedinteger,
[numpy.uint8,
numpy.uint16,
numpy.uint32,
numpy.uint64,
numpy.uint64]]]],
[numpy.inexact,
[[numpy.floating,
[numpy.float16, numpy.float32, numpy.float64, numpy.float128]],
[numpy.complexfloating,
[numpy.complex64, numpy.complex128, numpy.complex256]]]]]],
[numpy.flexible,
[[numpy.character, [numpy.bytes_, numpy.str_]],
[numpy.void, [numpy.record]]]],
numpy.bool_,
numpy.datetime64,
numpy.object_]]
Note: Pandas also defines the types category and datetime64[ns, tz], which are not integrated into the
normal numpy hierarchy and won't show up with the above function.
TEN
Working with Text Data
Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of
the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via
the str attribute and generally have names matching the equivalent (scalar) built-in string methods:
In [1]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [2]: s.str.lower()
Out[2]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
In [3]: s.str.upper()
Out[3]:
0 A
1 B
2 C
3 AABA
4 BACA
5 NaN
6 CABA
7 DOG
8 CAT
dtype: object
In [4]: s.str.len()
Out[4]:
0 1.0
1 1.0
2 1.0
3 4.0
4 4.0
5 NaN
6 4.0
7 3.0
8 3.0
dtype: float64
In [5]: idx = pd.Index([' jack', 'jill ', ' jesse ', 'frank'])
In [6]: idx.str.strip()
Out[6]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [7]: idx.str.lstrip()
Out[7]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [8]: idx.str.rstrip()
Out[8]:
Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance,
you may have columns with leading or trailing whitespace:
In [10]: df
Out[10]:
Column A Column B
0 -1.425575 -1.336299
1 0.740933 1.032121
2 -1.585660 0.913812
In [11]: df.columns.str.strip()
Out[11]: Index(['Column A', 'Column B'], dtype='object')
In [12]: df.columns.str.lower()
Out[12]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing
whitespace, lowercasing all names, and replacing any remaining whitespace with underscores:
In [13]: df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')
In [14]: df
Out[14]:
column_a column_b
0 -1.425575 -1.336299
1 0.740933 1.032121
2 -1.585660 0.913812
Note: If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the
Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of
type category and then use .str.<method> or .dt.<property> on that. The performance difference comes
from the fact that, for Series of type category, the string operations are done on the .categories and not on
each element of the Series.
Please note that a Series of type category with string .categories has some limitations in comparison to a
Series of type string (e.g. you can't add strings to each other: s + " " + s won't work if s is a Series of type
category). Also, .str methods which operate on elements of type list are not available on such a Series.
In [15]: s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])
In [16]: s2.str.split('_')
Out[16]:
0 [a, b, c]
1 [c, d, e]
2 NaN
3 [f, g, h]
dtype: object
In [17]: s2.str.split('_').str.get(1)
Out[17]:
0 b
1 d
2 NaN
3 g
dtype: object
In [18]: s2.str.split('_').str[1]
Out[18]:
0 b
1 d
2 NaN
3 g
dtype: object
rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning
of the string:
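For example (a sketch):
s2.str.rsplit('_', n=1)   # split only once, starting from the right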
In [22]: s3 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', '', np.nan, 'CABA', 'dog', 'cat'])
In [23]: s3
Out[23]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 NaN
7 CABA
8 dog
9 cat
dtype: object
In [24]: s3.str.replace('^.a|dog', 'XX-XX ', case=False)
Out[24]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 NaN
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: object
Some caution must be taken to keep regular expressions in mind! For example, the following code will cause trouble
because of the regular expression meaning of $:
In [31]: dollars = pd.Series(['12', '-$10', '$10,000'])
In [32]: dollars.str.replace('$', '')
Out[32]:
0 12
1 -$10
2 $10,000
dtype: object
# To get the literal replacement, escape the special character:
In [33]: dollars.str.replace(r'-\$', '-')
Out[33]:
0 12
1 -10
2 $10,000
dtype: object
The replace method can also take a callable as replacement. It is called on every pat using re.sub(). The
callable should expect one positional argument (a regex object) and return a string.
New in version 0.20.0.
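A sketch:
repl = lambda m: m.group(0)[::-1]   # reverse each match
pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(r'[a-z]+', repl)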
The replace method also accepts a compiled regular expression object from re.compile() as a pattern. All
flags should be included in the compiled regular expression object.
New in version 0.20.0.
In [35]: import re
In [36]: regex_pat = re.compile(r'^.a|dog', flags=re.IGNORECASE)
In [37]: s3.str.replace(regex_pat, 'XX-XX ')
Out[37]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 NaN
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: object
Including a flags argument when calling replace with a compiled regular expression object will raise a
ValueError.
You can use [] notation to directly index by position locations. If you index past the end of the string, the result will
be a NaN.
In [40]: s.str[0]
Out[40]:
0 A
1 B
2 C
3 A
4 B
5 NaN
6 C
7 d
8 c
dtype: object
In [41]: s.str[1]
Out[41]:
0 NaN
1 NaN
2 NaN
3 a
4 a
5 NaN
6 A
7 o
8 a
dtype: object
Warning: In version 0.18.0, extract gained the expand argument. When expand=False it returns a
Series, Index, or DataFrame, depending on the subject and regular expression pattern (same behavior as
pre-0.18.0). When expand=True it always returns a DataFrame, which is more consistent and less confusing
from the perspective of a user.
The extract method accepts a regular expression with at least one capture group.
Extracting a regular expression with more than one group returns a DataFrame with one column per group.
Elements that do not match return a row filled with NaN. Thus, a Series of messy strings can be converted into a
like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples
or re.match objects. The dtype of the result is always object, even if no match is found and the result only contains
NaN.
Named groups like:
In [43]: pd.Series(['a1', 'b2', 'c3']).str.extract('(?P<letter>[ab])(?P<digit>\d)', expand=False)
Out[43]:
letter digit
0 a 1
1 b 2
2 NaN NaN
can also be used. Note that any capture group names in the regular expression will be used for column names;
otherwise capture group numbers will be used.
Extracting a regular expression with one group returns a DataFrame with one column if expand=True.
In [45]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)', expand=True)
Out[45]:
0
0 1
1 2
2 NaN
Calling on an Index with a regex with exactly one capture group returns a DataFrame with one column if
expand=True,
In [46]: s = pd.Series(['a1', 'b2', 'c3'], index=['A11', 'B22', 'C33'])
In [48]: s
Out[48]:
A11 a1
B22 b2
C33 c3
dtype: object
Calling on an Index with a regex with more than one capture group returns a DataFrame if expand=True.
The table below summarizes the behavior of extract(expand=False) (input subject in first column, number of
groups in regex in first row)
1 group >1 group
Index Index ValueError
Series Series DataFrame
In [52]: s = pd.Series(['a1a2', 'b1', 'c1'], index=['A', 'B', 'C'])
In [53]: s
Out[53]:
A a1a2
B b1
C c1
dtype: object
Unlike extract (which returns only the first match), the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its rows. The last level of the MultiIndex is named
match and indicates the order in the subject.
In [54]: two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])'
In [56]: s.str.extractall(two_groups)
Out[56]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [57]: s = pd.Series(['a3', 'b3', 'c2'])
In [58]: s
Out[58]:
0 a3
1 b3
2 c2
dtype: object
In [59]: extract_result = s.str.extract(two_groups, expand=True)
In [60]: extract_result
Out[60]:
letter digit
0 a 3
1 b 3
2 c 2
In [61]: extractall_result = s.str.extractall(two_groups)
In [62]: extractall_result
Out[62]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [63]: extractall_result.xs(0, level='match')
Out[63]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the same result as
Series.str.extractall with a default index (starting from 0).
New in version 0.19.0.
In [64]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[64]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [65]: pd.Series(['a1a2', 'b1', 'c1']).str.extractall(two_groups)
Out[65]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
You can check whether elements contain a pattern, or match a pattern:
In [67]: pattern = r'[0-9][a-z]'
In [68]: pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern)
Out[68]:
0 False
1 False
2 True
3 True
4 False
dtype: bool
The distinction between match and contains is strictness: match relies on strict re.match, while contains
relies on re.search.
Methods like match, contains, startswith, and endswith take an extra na argument so missing values can
be considered True or False:
In [69]: s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
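For example (a sketch):
s4.str.contains('A', na=False)   # missing values are reported as False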
You can extract dummy variables from string columns. For example if they are separated by a '|':
In [71]: s = pd.Series(['a', 'a|b', np.nan, 'a|c'])
In [72]: s.str.get_dummies(sep='|')
Out[72]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
In [73]: idx = pd.Index(['a', 'a|b', np.nan, 'a|c'])
In [74]: idx.str.get_dummies(sep='|')
Out[74]:
MultiIndex(levels=[[0, 1], [0, 1], [0, 1]],
           labels=[[1, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 1]],
           names=['a', 'b', 'c'])
Method Description
cat() Concatenate strings
split() Split strings on delimiter
rsplit() Split strings on delimiter working from the end of the string
get() Index into each element (retrieve i-th element)
join() Join strings in each element of the Series with passed separator
get_dummies() Split strings on the delimiter returning DataFrame of dummy variables
contains() Return boolean array if each string contains pattern/regex
replace() Replace occurrences of pattern/regex with some other string or the return value of a callable given the occurrence
repeat() Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() Add whitespace to left, right, or both sides of strings
center() Equivalent to str.center
ljust() Equivalent to str.ljust
rjust() Equivalent to str.rjust
zfill() Equivalent to str.zfill
wrap() Split long strings into lines with length less than a given width
slice() Slice each string in the Series
slice_replace() Replace slice in each string with passed value
count() Count occurrences of pattern
startswith() Equivalent to str.startswith(pat) for each element
endswith() Equivalent to str.endswith(pat) for each element
findall() Compute list of all occurrences of pattern/regex for each string
match() Call re.match on each element, returning matched groups as list
extract() Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall() Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len() Compute string lengths
strip() Equivalent to str.strip
rstrip() Equivalent to str.rstrip
lstrip() Equivalent to str.lstrip
partition() Equivalent to str.partition
rpartition() Equivalent to str.rpartition
lower() Equivalent to str.lower
upper() Equivalent to str.upper
find() Equivalent to str.find
rfind() Equivalent to str.rfind
index() Equivalent to str.index
rindex() Equivalent to str.rindex
capitalize() Equivalent to str.capitalize
swapcase() Equivalent to str.swapcase
normalize() Return Unicode normal form. Equivalent to unicodedata.normalize
translate() Equivalent to str.translate
isalnum() Equivalent to str.isalnum
ELEVEN
Options and Settings
11.1 Overview
pandas has an options system that lets you customize some aspects of its behaviour, display-related options being those
the user is most likely to adjust.
Options have a full dotted-style, case-insensitive name (e.g. display.max_rows). You can get/set options
directly as attributes of the top-level options attribute:
In [2]: pd.options.display.max_rows
Out[2]: 15
In [3]: pd.options.display.max_rows = 999
In [4]: pd.options.display.max_rows
Out[4]: 999
There is also an API composed of 5 relevant functions, available directly from the pandas namespace:
get_option() / set_option() - get/set the value of a single option.
reset_option() - reset one or more options to their default value.
describe_option() - print the descriptions of one or more options.
option_context() - execute a codeblock with a set of options that revert to prior settings after execution.
Note: developers can check out pandas/core/config.py for more info.
All of the functions above accept a regexp pattern (re.search style) as an argument, so passing in a substring
will work, as long as it is unambiguous:
In [5]: pd.get_option("display.max_rows")
Out[5]: 999
In [6]: pd.set_option("display.max_rows",101)
In [7]: pd.get_option("display.max_rows")
Out[7]: 101
In [8]: pd.set_option("max_r",102)
In [9]: pd.get_option("display.max_rows")
Out[9]: 102
The following will not work because it matches multiple option names, e.g. display.max_colwidth,
display.max_rows, display.max_columns:
In [10]: try:
....: pd.get_option("column")
....: except KeyError as e:
....: print(e)
....:
'Pattern matched multiple keys'
Note: Using this form of shorthand may cause your code to break if new options with similar names are added in
future versions.
You can get a list of available options and their descriptions with describe_option. When called with no
argument describe_option will print out the descriptions for all available options.
As described above, get_option() and set_option() are available from the pandas namespace. To change an
option, call set_option('option regex', new_value)
In [11]: pd.get_option('mode.sim_interactive')
Out[11]: False
In [12]: pd.set_option('mode.sim_interactive', True)
In [13]: pd.get_option('mode.sim_interactive')
Out[13]: True
Note: the option mode.sim_interactive is mostly used for debugging purposes.
All options also have a default value, and you can use reset_option to revert to it:
In [14]: pd.get_option("display.max_rows")
Out[14]: 60
In [15]: pd.set_option("display.max_rows",999)
In [16]: pd.get_option("display.max_rows")
Out[16]: 999
In [17]: pd.reset_option("display.max_rows")
In [18]: pd.get_option("display.max_rows")
Out[18]: 60
In [19]: pd.reset_option("^display")
height has been deprecated.
line_width has been deprecated, use display.width instead (currently both are
identical)
option_context context manager has been exposed through the top-level API, allowing you to execute code with
given option values. Option values are restored automatically when you exit the with block:
In [20]: with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
   ....:     print(pd.get_option('display.max_rows'))
   ....:     print(pd.get_option('display.max_columns'))
   ....:
10
5
In [21]: print(pd.get_option('display.max_rows'))
60
In [22]: print(pd.get_option('display.max_columns'))
20
Using startup scripts for the python/ipython environment to import pandas and set options makes working with pandas
more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where
the startup folder is in a default ipython profile can be found at:
$IPYTHONDIR/profile_default/startup
More information can be found in the ipython documentation. An example startup script for pandas is displayed
below:
import pandas as pd
pd.set_option('display.max_rows', 999)
pd.set_option('precision', 5)
In [23]: df = pd.DataFrame(np.random.randn(7, 2))
In [24]: pd.set_option('max_rows', 7)
In [25]: df
Out[25]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
3 0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929 1.071804
6 0.721555 -0.706771
In [26]: pd.set_option('max_rows', 5)
In [27]: df
Out[27]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
.. ... ...
5 -0.494929 1.071804
6 0.721555 -0.706771
[7 rows x 2 columns]
In [28]: pd.reset_option('max_rows')
display.expand_frame_repr allows for the representation of DataFrames to stretch across pages, wrapped
over the full column vs row-wise.
In [29]: df = pd.DataFrame(np.random.randn(5, 10))
In [30]: pd.set_option('expand_frame_repr', True)
In [31]: df
Out[31]:
0 1 2 3 4 5 6 \
0 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690
1 0.404705 0.577046 -1.715002 -1.039268 -0.370647 -1.157892 -1.344312
2 1.643563 -1.469388 0.357021 -0.674600 -1.776904 -0.968914 -1.294524
3 -0.013960 -0.362543 -0.006154 -0.923061 0.895717 0.805244 -1.206412
4 -1.170299 -0.226169 0.410835 0.813850 0.132003 -0.827317 -0.076467
7 8 9
0 0.113648 -1.478427 0.524988
1 0.844885 1.075770 -0.109050
2 0.413738 0.276662 -0.472035
3 2.565646 1.431256 1.340309
4 -1.187678 1.130127 -1.436737
In [32]: pd.set_option('expand_frame_repr', False)
In [33]: df
Out[33]:
0 1 2 3 4 5 6 7 8 9
0 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690 0.113648 -1.478427 0.524988
1 0.404705 0.577046 -1.715002 -1.039268 -0.370647 -1.157892 -1.344312 0.844885 1.075770 -0.109050
...
In [34]: pd.reset_option('expand_frame_repr')
display.large_repr lets you select whether to display dataframes that exceed max_columns or max_rows
as a truncated frame, or as a summary.
In [35]: df = pd.DataFrame(np.random.randn(10,10))
In [36]: pd.set_option('max_rows', 5)
In [37]: pd.set_option('large_repr', 'truncate')
In [38]: df
Out[38]:
0 1 2 3 4 5 6 \
0 -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466
1 0.545952 -1.219217 -1.226825 0.769804 -1.281247 -0.727707 -0.121306
.. ... ... ... ... ... ... ...
8 -2.484478 -0.281461 0.030711 0.109121 1.126203 -0.977349 1.474071
9 -1.071357 0.441153 2.353925 0.583787 0.221471 -0.744471 0.758527
7 8 9
0 -2.006747 -0.410001 -0.078638
1 -0.097883 0.695775 0.341734
.. ... ... ...
8 -0.064034 -1.282782 0.781836
9 1.729689 -0.964980 -0.845696
In [39]: pd.set_option('large_repr', 'info')
In [40]: df
Out[40]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 10 non-null float64
1 10 non-null float64
2 10 non-null float64
3 10 non-null float64
4 10 non-null float64
5 10 non-null float64
6 10 non-null float64
7 10 non-null float64
8 10 non-null float64
9 10 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [41]: pd.reset_option('large_repr')
In [42]: pd.reset_option('max_rows')
display.max_colwidth sets the maximum width of columns. Cells of this length or longer will be truncated
with an ellipsis.
In [43]: df = pd.DataFrame(np.array([['foo', 'bar', 'bim', 'uncomfortably long string'],
   ....:                             ['horse', 'cow', 'banana', 'apple']]))
In [44]: pd.set_option('max_colwidth',40)
In [45]: df
Out[45]:
0 1 2 3
0 foo bar bim uncomfortably long string
1 horse cow banana apple
In [46]: pd.set_option('max_colwidth', 6)
In [47]: df
Out[47]:
0 1 2 3
0 foo bar bim un...
1 horse cow ba... apple
In [48]: pd.reset_option('max_colwidth')
display.max_info_columns sets a threshold for when by-column info will be given.
In [49]: df = pd.DataFrame(np.random.randn(10, 10))
In [50]: pd.set_option('max_info_columns', 11)
In [51]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 10 non-null float64
1 10 non-null float64
2 10 non-null float64
3 10 non-null float64
4 10 non-null float64
5 10 non-null float64
6 10 non-null float64
7 10 non-null float64
8 10 non-null float64
9 10 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [52]: pd.set_option('max_info_columns', 5)
In [53]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 880.0 bytes
In [54]: pd.reset_option('max_info_columns')
display.max_info_rows: df.info() will usually show null-counts for each column. For large frames this
can be quite slow. max_info_rows and max_info_columns limit this null check to frames with smaller
dimensions than specified. Note that you can pass df.info(null_counts=True) to override this on a
particular frame.
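A minimal sketch of the per-call override (df here is any DataFrame):
df.info(null_counts=True)  # force null counts even when the frame exceeds max_info_rows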
In [55]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))
In [56]: df
Out[56]:
0 1 2 3 4 5 6 7 8 9
0 0.0 1.0 1.0 0.0 1.0 1.0 0.0 NaN 1.0 NaN
1 1.0 NaN 0.0 0.0 1.0 1.0 NaN 1.0 0.0 1.0
2 NaN NaN NaN 1.0 1.0 0.0 NaN 0.0 1.0 NaN
3 0.0 1.0 1.0 NaN 0.0 NaN 1.0 NaN NaN 0.0
4 0.0 1.0 0.0 0.0 1.0 0.0 0.0 NaN 0.0 0.0
5 0.0 NaN 1.0 NaN NaN NaN NaN 0.0 1.0 NaN
6 0.0 1.0 0.0 0.0 NaN 1.0 NaN NaN 0.0 NaN
7 0.0 NaN 1.0 1.0 NaN 1.0 1.0 1.0 1.0 NaN
8 0.0 0.0 NaN 0.0 NaN 1.0 0.0 0.0 NaN NaN
9 NaN NaN 0.0 NaN NaN NaN 0.0 1.0 1.0 NaN
In [58]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 8 non-null float64
1 5 non-null float64
2 8 non-null float64
3 7 non-null float64
4 5 non-null float64
5 7 non-null float64
6 6 non-null float64
7 6 non-null float64
8 8 non-null float64
9 3 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [59]: pd.set_option('max_info_rows', 5)
In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 float64
1 float64
2 float64
3 float64
4 float64
5 float64
6 float64
7 float64
8 float64
9 float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [61]: pd.reset_option('max_info_rows')
display.precision sets the output display precision in terms of decimal places. This is only a suggestion.
In [62]: df = pd.DataFrame(np.random.randn(5,5))
In [63]: pd.set_option('precision',7)
In [64]: df
Out[64]:
0 1 2 3 4
0 -2.0490276 2.8466122 -1.2080493 -0.4503923 2.4239054
1 0.1211080 0.2669165 0.8438259 -0.2225400 2.0219807
2 -0.7167894 -2.2244851 -1.0611370 -0.2328247 0.4307933
3 -0.6654779 1.8298075 -1.4065093 1.0782481 0.3227741
4 0.2003243 0.8900241 0.1948132 0.3516326 0.4488815
In [65]: pd.set_option('precision',4)
In [66]: df
Out[66]:
0 1 2 3 4
0 -2.0490 2.8466 -1.2080 -0.4504 2.4239
1 0.1211 0.2669 0.8438 -0.2225 2.0220
2 -0.7168 -2.2245 -1.0611 -0.2328 0.4308
3 -0.6655 1.8298 -1.4065 1.0782 0.3228
4 0.2003 0.8900 0.1948 0.3516 0.4489
display.chop_threshold sets the level at which pandas rounds to zero when it displays a Series or DataFrame.
Note that this does not affect the precision at which the number is stored.
In [67]: df = pd.DataFrame(np.random.randn(6,6))
In [68]: pd.set_option('chop_threshold', 0)
In [69]: df
Out[69]:
0 1 2 3 4 5
0 -0.1979 0.9657 -1.5229 -0.1166 0.2956 -1.0477
1 1.6406 1.9058 2.7721 0.0888 -1.1442 -0.6334
2 0.9254 -0.0064 -0.8204 -0.6009 -1.0393 0.8248
3 -0.8241 -0.3377 -0.9278 -0.8401 0.2485 -0.1093
4 0.4320 -0.4607 0.3365 -3.2076 -1.5359 0.4098
5 -0.6731 -0.7411 -0.1109 -2.6729 0.8645 0.0609
In [70]: pd.set_option('chop_threshold', .5)
In [71]: df
Out[71]:
0 1 2 3 4 5
0 0.0000 0.9657 -1.5229 0.0000 0.0000 -1.0477
1 1.6406 1.9058 2.7721 0.0000 -1.1442 -0.6334
2 0.9254 0.0000 -0.8204 -0.6009 -1.0393 0.8248
3 -0.8241 0.0000 -0.9278 -0.8401 0.0000 0.0000
4 0.0000 0.0000 0.0000 -3.2076 -1.5359 0.0000
5 -0.6731 -0.7411 0.0000 -2.6729 0.8645 0.0000
In [72]: pd.reset_option('chop_threshold')
display.colheader_justify controls the justification of the headers. Options are 'right' and 'left'.
In [73]: df = pd.DataFrame(np.array([np.random.randn(6), np.random.randint(1, 9, 6) * .1,
   ....:                             np.zeros(6)]).T,
   ....:                   columns=['A', 'B', 'C'], dtype='float')
In [74]: pd.set_option('colheader_justify', 'right')
In [75]: df
Out[75]:
A B C
0 0.9331 0.3 0.0
1 0.2888 0.2 0.0
2 1.3250 0.2 0.0
3 0.5892 0.7 0.0
4 0.5314 0.1 0.0
5 -1.1987 0.7 0.0
In [76]: pd.set_option('colheader_justify', 'left')
In [77]: df
Out[77]:
A B C
0 0.9331 0.3 0.0
1 0.2888 0.2 0.0
2 1.3250 0.2 0.0
3 0.5892 0.7 0.0
4 0.5314 0.1 0.0
5 -1.1987 0.7 0.0
In [78]: pd.reset_option('colheader_justify')
pandas also allows you to set how numbers are displayed in the console. This option is not set through the
set_options API.
Use the set_eng_float_format function to alter the floating-point formatting of pandas objects to produce a
particular format.
For instance:
In [80]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)
In [81]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
In [82]: s/1.e3
Out[82]:
a -236.866u
b 846.974u
c -685.597u
d 609.099u
e -303.961u
dtype: float64
In [83]: s/1.e6
Out[83]:
a -236.866n
b 846.974n
c -685.597n
d 609.099n
e -303.961n
dtype: float64
To round floats on a case-by-case basis, you can also use Series.round() and DataFrame.round().
Warning: Enabling this option will affect the performance for printing of DataFrame and Series (about 2 times
slower). Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame
or Series contains these characters, the default output cannot be aligned properly.
Note: Screen captures are attached for each output to show the actual results.
In [85]: df;
Enabling display.unicode.east_asian_width allows pandas to check each character's East Asian Width
property. These characters can then be aligned properly, but it takes longer than the standard len
function.
In [87]: df;
In addition, Unicode contains characters whose width is 'Ambiguous'. Their width can be either 1 or 2 depending
on the terminal setting or encoding. Because this cannot be determined from within Python, the
display.unicode.ambiguous_as_wide option was added to handle it.
By default, an 'Ambiguous' character's width, such as the inverted exclamation mark in the example below, is
regarded as 1.
In [89]: df;
Enabling display.unicode.ambiguous_as_wide makes pandas treat these characters' width as 2. Note that
this option is effective only when display.unicode.east_asian_width is enabled. The starting position
changes, but the output is not aligned properly when the setting is mismatched with the environment.
In [91]: df;
CHAPTER TWELVE
Indexing and Selecting Data
Note: The Python and NumPy indexing operators [] and attribute operator . provide quick and easy access to pandas
data structures across a wide range of use cases. This makes interactive work intuitive, as there's little new to learn if
you already know how to deal with Python dictionaries and NumPy arrays. However, since the type of the data to be
accessed isn't known in advance, directly using standard operators has some optimization limits. For production code,
we recommend that you take advantage of the optimized pandas data access methods exposed in this chapter.
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy
Warning: In 0.15.0 Index has internally been refactored to no longer subclass ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This should be a transparent change with only very
limited API implications (See the Internal Refactoring)
Warning: Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the
changes, see here.
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies
12.2 Basics
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a.
__getitem__ for those familiar with implementing class behavior in Python) is selecting out lower-dimensional
slices. Thus,
Object Type Selection Return Value Type
Series series[label] scalar value
DataFrame frame[colname] Series corresponding to colname
Panel panel[itemname] DataFrame corresponding to the itemname
Here we construct a simple time series data set to use for illustrating the indexing functionality:
In [1]: dates = pd.date_range('1/1/2000', periods=8)
In [2]: df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [4]: panel = pd.Panel({'one': df, 'two': df - df.mean()})
In [5]: panel
Out[5]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 8 (major_axis) x 4 (minor_axis)
Items axis: one to two
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-08 00:00:00
Minor_axis axis: A to D
Note: None of the indexing functionality is time series specific unless specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [6]: s = df['A']
In [7]: s[dates[5]]
Out[7]: -0.67368970808837059
In [8]: panel['two']
Out[8]:
A B C D
2000-01-01 0.409571 0.113086 -0.610826 -0.936507
2000-01-02 1.152571 0.222735 1.017442 -0.845111
2000-01-03 -0.921390 -1.708620 0.403304 1.270929
2000-01-04 0.662014 -0.310822 -0.141342 0.470985
You can pass a list of columns to [] to select columns in that order. If a column is not contained in the DataFrame, an
exception will be raised. Multiple columns can also be set in this manner:
In [9]: df
Out[9]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [10]: df[['B', 'A']] = df[['A', 'B']]
In [11]: df
Out[11]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the columns.
Warning: pandas aligns all AXES when setting Series and DataFrame from .loc, and .iloc.
This will not modify df because the column alignment is before value assignment.
In [12]: df[['A', 'B']]
Out[12]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
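The following minimal sketch illustrates the pitfall and the usual workaround; the data is illustrative:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(3, 2), columns=['A', 'B'])

# pandas aligns on the column labels before assigning, so each column is
# written back to the column with the same name -- df is unchanged.
df.loc[:, ['B', 'A']] = df[['A', 'B']]

# Assigning the raw values bypasses alignment and actually swaps the columns.
df.loc[:, ['B', 'A']] = df[['A', 'B']].values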
You may access an index on a Series, column on a DataFrame, and an item on a Panel directly as an attribute:
In [17]: sa = pd.Series([1, 2, 3], index=list('abc'))
In [18]: dfa = df.copy()
In [19]: sa.b
Out[19]: 2
In [20]: dfa.A
Out[20]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [21]: panel.one
Out[21]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you
try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
In [22]: sa.a = 5
In [23]: sa
Out[23]:
a 5
b 2
c 3
dtype: int64
In [24]: dfa.A = list(range(len(dfa.index)))  # ok if A already exists
In [25]: dfa
Out[25]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
In [26]: dfa['A'] = list(range(len(dfa.index)))  # use this form to create a new column
In [27]: dfa
Out[27]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
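A minimal sketch of the silent failure described above (the column name 'two' is illustrative):
import pandas as pd

df = pd.DataFrame({'one': [1., 2., 3.]})
df.two = [4, 5, 6]            # silently creates an attribute, not a column
print('two' in df.columns)    # False
df['two'] = [4, 5, 6]         # use [] assignment to create a new column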
Warning:
You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed. See
here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed.
Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items, labels.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
The Series/Panel accesses are available starting in 0.13.0.
If you are using the IPython environment, you may also use tab-completion to see these accessible attributes.
You can also assign a dict to a row of a DataFrame:
In [28]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
In [29]: x.iloc[1] = dict(x=9, y=99)
In [30]: x
Out[30]:
x y
0 1 3
1 9 99
2 3 5
The most robust and consistent way of slicing ranges along arbitrary axes is described in the Selection by Position
section detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of the values and the corresponding labels:
In [31]: s[:5]
Out[31]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [32]: s[::2]
Out[32]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [33]: s[::-1]
Out[33]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
In [34]: s2 = s.copy()
In [35]: s2[:5] = 0
In [36]: s2
Out[36]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided largely as a convenience since it is such a
common operation.
In [37]: df[:3]
Out[37]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [38]: df[::-1]
Out[38]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy
Warning:
.loc is strict when you present slicers that are not compatible (or convertible) with the index type.
For example using integers in a DatetimeIndex. These will raise a TypeError.
In [39]: dfl = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'),
   ....:                    index=pd.date_range('20130101', periods=5))
In [40]: dfl
Out[40]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'>
with these indexers [2] of <type 'int'>
String-likes in slicing can be converted to the type of the index, leading to natural slicing.
In [41]: dfl.loc['20130102':'20130104']
Out[41]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based
protocol. At least one of the labels for which you ask must be in the index, or a KeyError will be raised! When
slicing, the start bound is included AND the stop bound is included. Integers are valid labels, but they refer to the
label and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index. This use is not an integer
position along the index)
A list or array of labels ['a', 'b', 'c']
A slice object with labels 'a':'f' (note that contrary to usual python slices, both the start and the stop are
included!)
A boolean array
A callable, see Selection By Callable
In [42]: s1 = pd.Series(np.random.randn(6),index=list('abcdef'))
In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [44]: s1.loc['c':]
Out[44]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [45]: s1.loc['b']
Out[45]:
1.3403088497993827
In [46]: s1.loc['c':] = 0
In [47]: s1
Out[47]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame
In [48]: df1 = pd.DataFrame(np.random.randn(6, 4),
   ....:                    index=list('abcdef'),
   ....:                    columns=list('ABCD'))
In [49]: df1
Out[49]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
In [50]: df1.loc[['a', 'b', 'd'], :]
Out[50]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
In [52]: df1.loc['a']
Out[52]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy
Pandas provides a suite of methods in order to get purely integer based indexing. The semantics closely follow
Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper
bound is excluded. Trying to use a non-integer, even a valid label, will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
An integer e.g. 5
A list or array of integers [4, 3, 0]
A slice object with ints 1:7
A boolean array
A callable, see Selection By Callable
In [56]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
In [57]: s1
Out[57]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64
In [58]: s1.iloc[:3]
Out[58]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [59]: s1.iloc[3]
Out[59]:
-1.1103361028911669
In [60]: s1.iloc[:3] = 0
In [61]: s1
Out[61]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame
In [62]: df1 = pd.DataFrame(np.random.randn(6,4),
....: index=list(range(0,12,2)),
....: columns=list(range(0,8,2)))
....:
In [63]: df1
Out[63]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
In [65]: df1.iloc[1:5, 2:4]
Out[65]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
In [67]: df1.iloc[1:3, :]
Out[67]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [71]: x = list('abcdef')
In [72]: x
Out[72]: ['a', 'b', 'c', 'd', 'e', 'f']
In [73]: x[4:10]
Out[73]: ['e', 'f']
In [74]: x[8:10]
Out[74]: []
In [75]: s = pd.Series(x)
In [76]: s
Out[76]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [77]: s.iloc[4:10]
Out[77]:
4 e
5 f
dtype: object
In [78]: s.iloc[8:10]
Out[78]:
Series([], dtype: object)
Note: Prior to v0.14.0, iloc would not accept out of bounds indexers for slices, e.g. a value that exceeds the length
of the object being indexed.
Note that this could result in an empty axis (e.g. an empty DataFrame being returned)
In [79]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [80]: dfl
Out[80]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
In [81]: dfl.iloc[:, 2:3]
Out[81]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [82]: dfl.iloc[:, 1:3]
Out[82]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885
In [83]: dfl.iloc[4:6]
Out[83]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError. A list of indexers where any element is out of
bounds will raise an IndexError
>>> dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
.loc, .iloc, and also [] indexing can accept a callable as indexer. The callable must be a function with one
argument (the calling Series, DataFrame or Panel) that returns valid output for indexing.
In [84]: df1 = pd.DataFrame(np.random.randn(6, 4),
   ....:                    index=list('abcdef'),
   ....:                    columns=list('ABCD'))
In [85]: df1
Out[85]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
In [86]: df1.loc[lambda df: df.A > 0, :]
Out[86]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
In [87]: df1.loc[:, lambda df: ['A', 'B']]
Out[87]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [88]: df1.iloc[:, lambda df: [0, 1]]
Out[88]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [89]: df1[lambda df: df.columns[0]]
Out[89]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations without using a temporary variable.
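For instance, a small sketch of such a chain (the frame and conditions are illustrative):
import pandas as pd

bb = pd.DataFrame({'A': [1, -2, 3], 'B': [4, 5, -6]})
result = (bb.loc[lambda df: df.A > 0]     # keep rows where A is positive
            .loc[:, lambda df: ['B']])    # then select column B from that result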
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
.ix offers a lot of magic on the inference of what the user wants to do. To wit, .ix can decide to index positionally
OR via labels depending on the data type of the index. This has caused quite a bit of user confusion over the years.
The recommended methods of indexing are:
.loc if you want to label index
.iloc if you want to positionally index.
In [93]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [94]: dfd
Out[94]:
A B
a 1 4
b 2 5
c 3 6
Previous Behavior, where you wish to get the 0th and the 2nd elements from the index in the A column.
In [3]: dfd.ix[[0, 2], 'A']
Out[3]:
a 1
c 3
Name: A, dtype: int64
Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.
In [95]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[95]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using positional indexing
to select things.
In [96]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[96]:
a 1
c 3
Name: A, dtype: int64
You can draw a random selection of rows or columns from a Series, DataFrame, or Panel with the sample() method.
The method samples rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [98]: s = pd.Series([0,1,2,3,4,5])
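A minimal sketch of the basic call forms (the rows drawn vary from run to run):
s.sample()           # one row by default
s.sample(n=3)        # a specific number of rows
s.sample(frac=0.5)   # a fraction of the rows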
By default, sample will return each row at most once, but one can also sample with replacement using the replace
option:
In [102]: s = pd.Series([0,1,2,3,4,5])
# With replacement:
In [104]: s.sample(n=6, replace=True)
Out[104]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64
By default, each row has an equal probability of being selected, but if you want rows to have different probabilities,
you can pass the sample function sampling weights as weights. These weights can be a list, a numpy array, or a
Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight
of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights
by the sum of the weights. For example:
In [105]: s = pd.Series([0,1,2,3,4,5])
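A sketch of weighted sampling (the weights are illustrative):
example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
s.sample(n=3, weights=example_weights)   # rows 0 and 1 can never be drawn

# Weights that do not sum to 1 are renormalized automatically.
example_weights2 = [0.5, 0, 0, 0, 0, 0]
s.sample(n=1, weights=example_weights2)  # always draws row 0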
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights (provided you are sampling
rows and not columns) by simply passing the name of the column as a string.
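For instance, a sketch using a hypothetical weight_column:
df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
                    'weight_column': [0.5, 0.4, 0.1, 0]})
df2.sample(n=3, weights='weight_column')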
sample also allows users to sample columns instead of rows using the axis argument.
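A sketch of column sampling:
df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
df3.sample(n=1, axis=1)   # draws one column instead of one row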
Finally, one can also set a seed for sample's random number generator using the random_state argument, which
will accept either an integer (as a seed) or a numpy RandomState object.
# With a given seed, the sample will always draw the same rows.
In [115]: df4.sample(n=2, random_state=2)
Out[115]:
col1 col2
2 3 4
1 2 3
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis. In the Series
case this is effectively an appending operation:
In [117]: se = pd.Series([1, 2, 3])
In [118]: se
Out[118]:
0 1
1 2
2 3
dtype: int64
In [119]: se[5] = 5.
In [120]: se
Out[120]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
A DataFrame can be enlarged on either axis via .loc:
In [121]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['A', 'B'])
In [122]: dfi
Out[122]:
A B
0 0 1
1 2 3
2 4 5
In [123]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']
In [124]: dfi
Out[124]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
This is like an append operation on the DataFrame:
In [125]: dfi.loc[3] = 5
In [126]: dfi
Out[126]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of
overhead in order to figure out what you're asking for. If you only want to access a scalar value, the fastest way is to
use the at and iat methods, which are implemented on all of the data structures.
Similarly to loc, at provides label based scalar lookups, while iat provides integer based lookups analogously to
iloc:
In [127]: s.iat[5]
Out[127]: 5
In [129]: df.iat[3, 0]
Out[129]: 0.72155516224436689
In [130]: df.at[dates[5], 'E'] = 7
In [131]: df.iat[3, 0] = 7
at may enlarge the object in-place if the indexer is missing:
In [132]: df.at[dates[-1] + 1, 0] = 7
In [133]: df
Out[133]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0
Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and
~ for not. These must be grouped by using parentheses.
Using a boolean vector to index a Series works exactly as in a numpy ndarray:
In [134]: s = pd.Series(range(-3, 4))
In [135]: s
Out[135]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
In [137]: s[(s < -1) | (s > 0.5)]
Out[137]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
In [138]: s[~(s < 0)]
Out[138]:
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as the DataFrame's index (for example,
something derived from one of the columns of the DataFrame):
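For example, a minimal sketch:
df[df['A'] > 0]   # keep the rows where the value in column A is positive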
List comprehensions and map method of Series can also be used to produce more complex criteria:
In [140]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   .....:                     'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
In [141]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [142]: df2[criterion]
Out[142]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# equivalent but slower
In [143]: df2[[x.startswith('t') for x in df2['a']]]
Out[143]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [144]: df2[criterion & (df2['b'] == 'x')]
Out[144]:
a b c
3 three x 0.361719
Note, with the choice methods Selection by Label, Selection by Position, and Advanced Indexing you may select along
more than one axis using boolean vectors combined with other indexing expressions.
Consider the isin method of Series, which returns a boolean vector that is true wherever the Series elements exist in
the passed list. This allows you to select rows where one or more columns have values you want:
In [146]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')
In [147]: s
Out[147]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [148]: s[s.isin([2, 4, 6])]
Out[148]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for cases when you don't know which of the sought
labels are in fact present:
In [150]: s[s.index.isin([2, 4, 6])]
Out[150]:
4 0
2 2
dtype: int64
In addition to that, MultiIndex allows selecting a separate level to use in the membership check:
In [152]: s_mi = pd.Series(np.arange(6),
   .....:                  index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
In [153]: s_mi
Out[153]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64
In [154]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]
Out[154]:
0 c 2
1 a 3
dtype: int64
In [155]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]
Out[155]:
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin method. When calling isin, pass a set of values as either an array or dict. If values is
an array, isin returns a DataFrame of booleans that is the same shape as the original DataFrame, with True wherever
the element is in the sequence of values.
In [156]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
   .....:                    'ids2': ['a', 'n', 'c', 'n']})
In [157]: values = ['a', 'b', 1, 3]
In [158]: df.isin(values)
Out[158]:
ids ids2 vals
0 True True True
1 True False False
2 False False True
3 False False False
Oftentimes you'll want to match certain values with certain columns. Just make values a dict where the key is the
column, and the value is a list of items you want to check for.
In [159]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [160]: df.isin(values)
Out[160]:
ids ids2 vals
0 True False True
1 True False False
2 False False True
3 False False False
Combine DataFrames isin with the any() and all() methods to quickly select subsets of your data that meet a
given criteria. To select a row where each column meets its own criterion:
In [161]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [162]: row_mask = df.isin(values).all(1)
In [163]: df[row_mask]
Out[163]:
ids ids2 vals
0 a a 1
Selecting values from a Series with a boolean vector generally returns a subset of the data. To guarantee that selection
output has the same shape as the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [164]: s[s > 0]
To return a Series of the same shape as the original:
In [165]: s.where(s > 0)
Selecting values from a DataFrame with a boolean criterion now also preserves input data shape. where is used under
the hood as the implementation; the code below is equivalent to df.where(df < 0):
In [166]: df[df < 0]
In addition, where takes an optional other argument for replacement of values where the condition is False, in the
returned copy:
In [167]: df.where(df < 0, -df)
You may wish to set values based on some boolean criteria. This can be done intuitively like so:
In [168]: s2 = s.copy()
In [169]: s2[s2 < 0] = 0
In [170]: s2
Out[170]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [171]: df2 = df.copy()
In [172]: df2[df2 < 0] = 0
In [173]: df2
Out[173]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an optional parameter inplace so that the original
data can be modified without creating a copy:
In [174]: df_orig = df.copy()
In [175]: df_orig.where(df > 0, -df, inplace=True)
In [176]: df_orig
Out[176]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838
Note: The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
In [177]: df.where(df < 0, -df) == np.where(df < 0, df, -df)
Out[177]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame), such that partial selection with setting
is possible. This is analogous to partial setting via .loc (but on the contents rather than the axis labels):
In [178]: df2 = df.copy()
In [179]: df2[df2[1:4] > 0] = 3
In [180]: df2
Out[180]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
where can also accept axis and level parameters to align the input when performing the where:
In [181]: df2 = df.copy()
In [182]: df2.where(df2 > 0, df2['A'], axis='index')
Out[182]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
mask
mask is the inverse boolean operation of where.
In [187]: s.mask(s >= 0)
Out[187]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64
DataFrame objects have a query() method that allows selection using an expression. For example, you can get the
value of the frame where column b has values between the values of columns a and c:
In [189]: n = 10
In [190]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [191]: df
Out[191]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [192]: df[(df.a < df.b) & (df.b < df.c)]
Out[192]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [193]: df.query('(a < b) & (b < c)')
Out[193]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column with the name a:
In [194]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))
In [195]: df.index.name = 'a'
In [196]: df
Out[196]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1
In [197]: df.query('a < b and b < c')
Out[197]:
b c
a
2 3 4
If instead you don't want to or cannot name your index, you can use the name index in your query expression:
In [198]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
In [199]: df
Out[199]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [200]: df.query('index < b < c')
Out[200]:
b c
2 5 6
Note: If the name of your index overlaps with a column name, the column name is given precedence. For example,
In [201]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})
In [202]: df.index.name = 'a'
In [203]: df.query('a > 2') # uses the column 'a', not the index
Out[203]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special identifier index:
In [204]: df.query('index > 2')
If for some reason you have a column named index, then you can refer to the index as ilevel_0 as well, but at
this point you should consider renaming your columns to something less ambiguous.
You can also use the levels of a DataFrame with a MultiIndex as if they were columns in the frame:
In [205]: n = 10
In [206]: colors = np.random.choice(['red', 'green'], size=n)
In [207]: foods = np.random.choice(['eggs', 'ham'], size=n)
In [208]: colors
Out[208]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'],
dtype='<U5')
In [209]: foods
Out[209]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'],
dtype='<U4')
In [210]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
In [211]: df = pd.DataFrame(np.random.randn(n, 2), index=index)
In [212]: df
Out[212]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [213]: df.query('color == "red"')
Out[213]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
If the levels of the MultiIndex are unnamed, you can refer to them using special names:
In [214]: df.index.names = [None, None]
In [215]: df
Out[215]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means "index level 0", for the 0th level of the index:
In [216]: df.query('ilevel_0 == "red"')
A use case for query() is when you have a collection of DataFrame objects that have a subset of column names
(or index levels/names) in common. You can pass the same query to both frames without having to specify which
frame you're interested in querying:
In [217]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [218]: df
Out[218]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [219]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)
In [220]: df2
Out[220]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
query() supports the full numpy-like syntax:
In [223]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
In [224]: df
Out[224]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
In [225]: df.query('(a < b) & (b < c)')
Out[225]:
a b c
0 7 8 9
In [226]: df[(df.a < df.b) & (df.b < df.c)]
Out[226]:
a b c
0 7 8 9
Slightly nicer by removing the parentheses (query() makes comparison operators bind tighter than & and |):
In [227]: df.query('a < b & b < c')
query() also supports special use of Pythons in and not in comparison operators, providing a succinct syntax
for calling the isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [230]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [231]: df
Out[231]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [232]: df.query('a in b')
Out[232]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
# How you'd do it in pure Python
In [233]: df[df.a.isin(df.b)]
Out[233]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
In [234]: df.query('a not in b')
Out[234]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [235]: df[~df.a.isin(df.b)]
Out[235]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values and col c's values are less than col d's
In [236]: df.query('a in b and c < d')
# pure Python
In [237]: df[df.b.isin(df.a) & (df.c < df.d)]
Out[237]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
10 f c 0 6
11 f c 1 2
Note: Note that in and not in are evaluated in Python, since numexpr has no equivalent of this operation.
However, only the in/not in expression itself is evaluated in vanilla Python. For example, in the expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in operation is evaluated in plain Python. In general, any
operations that can be evaluated using numexpr will be.
You can also use == / != to compare a column to a list of values; this works similarly to in / not in:
In [238]: df.query('b == ["a", "b", "c"]')
# pure Python
In [239]: df[df.b.isin(["a", "b", "c"])]
Out[239]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [240]: df.query('c == [1, 2]')
Out[240]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [241]: df.query('c != [1, 2]')
Out[241]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [242]: df.query('[1, 2] in c')
Out[242]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [243]: df.query('[1, 2] not in c')
Out[243]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [244]: df[df.c.isin([1, 2])]
Out[244]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
You can negate boolean expressions with the word not or the ~ operator.
In [245]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [246]: df['bools'] = np.random.rand(len(df)) > 0.5
In [247]: df.query('~bools')
Out[247]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [249]: df.query('not bools') == df[~df.bools]
Out[249]:
a b c bools
2 True True True True
7 True True True True
8 True True True True
Of course, expressions can be arbitrarily complex too:
# short query syntax
In [250]: shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
In [251]: longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
In [252]: shorter
Out[252]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [253]: longer
Out[253]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [254]: shorter == longer
Out[254]:
a b c bools
7 True True True True
DataFrame.query() using numexpr is slightly faster than Python for large frames.
Note: You will only see the performance benefits of using the numexpr engine with DataFrame.query() if
your frame has more than approximately 200,000 rows
This plot was created using a DataFrame with 3 columns each containing floating point values generated using
numpy.random.randn().
If you want to identify and remove duplicate rows in a DataFrame, there are two methods that will help: duplicated
and drop_duplicates. Each takes as an argument the columns to use to identify duplicated rows.
duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row
is duplicated. drop_duplicates removes duplicate rows. By default, the first observed row of a duplicate set
is considered unique, but each method has a keep parameter to specify targets to be kept:
keep='first' (default): mark / drop duplicates except for the first occurrence.
keep='last': mark / drop duplicates except for the last occurrence.
keep=False: mark / drop all duplicates.
In [255]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
   .....:                     'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
In [256]: df2
Out[256]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [257]: df2.duplicated('a')
Out[257]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
In [258]: df2.duplicated('a', keep='last')
Out[258]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [259]: df2.duplicated('a', keep=False)
Out[259]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool
In [260]: df2.drop_duplicates('a')
Out[260]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
In [261]: df2.drop_duplicates('a', keep='last')
Out[261]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [262]: df2.drop_duplicates('a', keep=False)
Out[262]:
a b c
5 three x -1.964475
6 four x 1.298329
Also, you can pass a list of columns to identify duplications:
In [264]: df2.drop_duplicates(['a', 'b'])
Out[264]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated and then perform slicing. The same options are
available in the keep parameter.
In [265]: df3 = pd.DataFrame({'a': np.arange(6),
   .....:                     'b': np.random.randn(6)},
   .....:                    index=['a', 'a', 'b', 'c', 'b', 'a'])
In [266]: df3
Out[266]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [267]: df3.index.duplicated()
Out[267]:
array([False, True, False, False, True, True], dtype=bool)
In [268]: df3[~df3.index.duplicated()]
Out[268]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [269]: df3[~df3.index.duplicated(keep='last')]
Out[269]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [270]: df3[~df3.index.duplicated(keep=False)]
Out[270]:
a b
c 3 -0.894409
Each of Series, DataFrame, and Panel has a get method that can return a default value. For instance:
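A minimal sketch:
s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s.get('a')               # equivalent to s['a']
s.get('x', default=-1)   # returns -1 because 'x' is not in the index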
Another way to extract slices from an object is with the select method of Series, DataFrame, and Panel. This
method should be used only when there is no more direct way. select takes a function which operates on labels
along an axis and returns a boolean. For instance:
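A minimal sketch, assuming df has a column labeled 'A':
df.select(lambda x: x == 'A', axis=1)   # keep only the columns whose label equals 'A'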
Sometimes you want to extract a set of values given a sequence of row labels and column labels; the lookup
method allows this and returns a NumPy array. For instance:
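A sketch, assuming numpy is imported as np:
dflookup = pd.DataFrame(np.random.rand(20, 4), columns=['A', 'B', 'C', 'D'])
dflookup.lookup(list(range(0, 10, 2)), ['B', 'C', 'A', 'B', 'D'])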
The pandas Index class and its subclasses can be viewed as implementing an ordered multiset. Duplicates are
allowed. However, if you try to convert an Index object with duplicate entries into a set, an exception will be
raised.
Index also provides the infrastructure necessary for lookups, data alignment, and reindexing. The easiest way to
create an Index directly is to pass a list or other sequence to Index:
In [277]: index = pd.Index(['e', 'd', 'a', 'b'])
In [278]: index
Out[278]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [279]: 'd' in index
Out[279]: True
You can also pass a name to be stored in the index:
In [280]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')
In [281]: index.name
Out[281]: 'something'
The name, if set, will be shown in the console display:
In [282]: index = pd.Index(list(range(5)), name='rows')
In [283]: columns = pd.Index(['A', 'B', 'C'], name='cols')
In [284]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
In [285]: df
Out[285]:
cols A B C
rows
0 1.295989 0.185778 0.436259
1 0.678101 0.311369 -0.528378
2 -0.674808 -1.103529 -0.656157
3 1.889957 2.076651 -1.102192
4 -1.211795 -0.791746 0.634724
In [286]: df['A']
Out[286]:
rows
0 1.295989
1 0.678101
2 -0.674808
3 1.889957
4 -1.211795
Name: A, dtype: float64
Indexes are "mostly immutable", but it is possible to set and change their metadata, like the index name (or, for
MultiIndex, levels and labels). You can use rename, set_names, set_levels, and set_labels to set these
attributes directly. They default to returning a copy:
In [287]: ind = pd.Index([1, 2, 3])
In [288]: ind.rename("apple")
Out[288]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [289]: ind
Out[289]: Int64Index([1, 2, 3], dtype='int64')
In [290]: ind.set_names(["apple"], inplace=True)
In [291]: ind.name = "bob"
In [292]: ind
Out[292]: Int64Index([1, 2, 3], dtype='int64', name='bob')
set_names, set_levels, and set_labels also take an optional level argument:
In [293]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])
In [294]: index
Out[294]:
MultiIndex(levels=[[0, 1, 2], ['one', 'two']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
names=['first', 'second'])
In [295]: index.levels[1]
Out[295]:
Index(['one', 'two'], dtype='object', name='second')
Warning: In 0.15.0, the set operations + and - were deprecated in order to provide these for numeric type
operations on certain index types. + can be replaced by .union() or |, and - by .difference().
The two main operations are union (|) and intersection (&). These can be directly called as instance methods
or used via overloaded operators. Difference is provided via the .difference() method.
In [297]: a = pd.Index(['c', 'b', 'a'])
In [298]: b = pd.Index(['c', 'e', 'd'])
In [299]: a | b
Out[299]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [300]: a & b
Out[300]: Index(['c'], dtype='object')
In [301]: a.difference(b)
Out[301]:
Index(['a', 'b'], dtype='object')
Also available is the symmetric_difference (^) operation, which returns elements that appear in either idx1
or idx2 but not both. This is equivalent to the Index created by idx1.difference(idx2).union(idx2.
difference(idx1)), with duplicates dropped.
In [302]: idx1 = pd.Index([1, 2, 3, 4])
In [303]: idx2 = pd.Index([2, 3, 4, 5])
In [304]: idx1.symmetric_difference(idx2)
Out[304]: Int64Index([1, 5], dtype='int64')
Note: The resulting index from a set operation will be sorted in ascending order.
Important: Even though Index can hold missing values (NaN), this should be avoided if you do not want any
unexpected results. For example, some operations exclude missing values implicitly.
In [306]: idx1 = pd.Index([1, np.nan, 3, 4])
In [307]: idx1
Out[307]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [308]: idx1.fillna(2)
Out[308]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [309]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'), pd.NaT, pd.Timestamp('2011-01-03')])
In [310]: idx2
Out[310]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]',
freq=None)
In [311]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[311]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]',
freq=None)
Occasionally you will load or create a data set into a DataFrame and want to add an index after you've already done
so. There are a couple of different ways.
DataFrame has a set_index method which takes a column name (for a regular Index) or a list of column names
(for a MultiIndex), to create a new, indexed DataFrame:
In [312]: data
Out[312]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [313]: indexed1 = data.set_index('c')
In [314]: indexed1
Out[314]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [315]: indexed2 = data.set_index(['a', 'b'])
In [316]: indexed2
Out[316]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append the given columns to a MultiIndex:
In [317]: frame = data.set_index('c', drop=False)
In [318]: frame = frame.set_index(['a', 'b'], append=True)
In [319]: frame
Out[319]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add the index in-place (without creating
a new object):
In [320]: data.set_index('c', drop=False)
Out[320]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [321]: data.set_index(['a', 'b'], inplace=True)
In [322]: data
Out[322]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
As a convenience, there is a new function on DataFrame called reset_index which transfers the index values into
the DataFrame's columns and sets a simple integer index. This is the inverse operation of set_index.
In [323]: data
Out[323]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [324]: data.reset_index()
Out[324]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
The output is more similar to a SQL table or a record array. The names for the columns derived from the index are the
ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [325]: frame
Out[325]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [326]: frame.reset_index(level=1)
Out[326]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which, if true, simply discards the index instead of putting the
index values in the DataFrame's columns.
Note: The reset_index method used to be called delevel which is now deprecated.
If you create an index yourself, you can just assign it to the index field:
data.index = index
When setting values in a pandas object, care must be taken to avoid what is called chained indexing. Here is an
example.
In [327]: dfmi = pd.DataFrame([list('abcd'), list('efgh'), list('ijkl'), list('mnop')],
   .....:                     columns=pd.MultiIndex.from_product([['one', 'two'],
   .....:                                                         ['first', 'second']]))
In [328]: dfmi
Out[328]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p
In [329]: dfmi['one']['second']
Out[329]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [330]: dfmi.loc[:,('one','second')]
Out[330]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order of operations on
these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed. Then another
Python operation, dfmi_with_one['second'], selects the series indexed by 'second'. This is indicated by
the variable dfmi_with_one because pandas sees these operations as separate events, e.g. separate calls to
__getitem__, so it has to treat them as linear operations that happen one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one',
'second')) to a single call to __getitem__. This allows pandas to deal with this as a single entity. Furthermore
this order of operations can be significantly faster, and allows one to index both axes if so desired.
The problem in the previous section is just a performance issue. What's up with the SettingWithCopy warning?
We don't usually throw warnings around when you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has inherently unpredictable results. To see this,
think about how the Python interpreter executes this code:
dfmi.loc[:,('one','second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it's very hard to predict whether it will return a view or a
copy (it depends on the memory layout of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown out immediately afterward. That's what
SettingWithCopy is warning you about!
Note: You may be wondering whether we should be concerned about the loc property in the first example. But
dfmi.loc is guaranteed to be dfmi itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course, dfmi.loc.__getitem__(idx) may be
a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise at times when there's no obvious chained indexing going on.
These are the bugs that SettingWithCopy is designed to catch! Pandas is probably trying to warn you that you've
done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
Yikes!
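A minimal sketch of a safer rewrite (the names df, value, bar, baz and quux are carried over from the snippet above;
the key change is the explicit .copy()):

import pandas as pd

def do_something(df, value):
    foo = df[['bar', 'baz']].copy()  # explicitly a copy, now independent of df
    # ... many lines here ...
    foo['quux'] = value              # unambiguously modifies foo only, no warning
    return foo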
Furthermore, in chained expressions, the order may determine whether a copy is returned or not. If an expression will
set values on a copy of a slice, then a SettingWithCopy exception will be raised (this raise/warn behavior is new
starting in 0.13.0).
You can control the action of a chained assignment via the option mode.chained_assignment, which can take
the values ['raise','warn',None], where showing a warning is the default.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb.a.str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
This is the correct access method:
In [333]: dfc = pd.DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]})
In [334]: dfc.loc[0,'A'] = 11
In [335]: dfc
Out[335]:
A B
0 11 1
1 bbb 2
2 ccc 3
This can work at times, but is not guaranteed, and so should be avoided:
In [336]: dfc = dfc.copy()
In [337]: dfc['A'][0] = 111
In [338]: dfc
Out[338]:
A B
0 111 1
1 bbb 2
2 ccc 3
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfc.loc[0]['A'] = 1111
Traceback (most recent call last)
...
SettingWithCopyException:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning: The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid
assignment. There may be false positives; situations where a chained assignment is inadvertently reported.
MULTIINDEX / ADVANCED INDEXING
This section covers indexing with a MultiIndex and more advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy
Warning: In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This should be a transparent change with only very
limited API implications (See the Internal Refactoring)
Hierarchical / Multi-level indexing is very exciting as it opens the door to some quite sophisticated data analysis and
manipulation, especially for working with higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by hierarchical indexing and how it integrates with all of the
pandas indexing functionality described above and in prior sections. Later, when discussing group by and pivoting and
reshaping data, we'll show non-trivial applications to illustrate how it aids in structuring data for analysis.
See the cookbook for some advanced strategies.
The MultiIndex object is the hierarchical analogue of the standard Index object which typically stores the axis
labels in pandas objects. You can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using MultiIndex.from_arrays), an array of tuples (using
MultiIndex.from_tuples), or a crossed set of iterables (using MultiIndex.from_product). The Index
constructor will attempt to return a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
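As a self-contained sketch (using the same labels as the console examples below), all three constructors produce the
same index:

import pandas as pd

arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
          ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]

# from a list of arrays
mi1 = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])

# from a list of tuples
mi2 = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=['first', 'second'])

# from the cartesian product of two iterables
mi3 = pd.MultiIndex.from_product([['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
                                 names=['first', 'second'])

assert mi1.equals(mi2) and mi2.equals(mi3)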
In [1]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
...:
In [2]: tuples = list(zip(*arrays))
In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [5]: index
Out[5]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)
In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier to use the
MultiIndex.from_product function:
In [8]: iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
In [9]: pd.MultiIndex.from_product(iterables, names=['first', 'second'])
Out[9]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
           labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
As a convenience, you can pass a list of arrays directly into Series or DataFrame to construct a MultiIndex automati-
cally:
In [10]: arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
....: np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
....:
In [11]: s = pd.Series(np.random.randn(8), index=arrays)
In [12]: s
Out[12]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64
In [13]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
In [14]: df
Out[14]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores string names for the levels themselves.
If no names are provided, None will be assigned:
In [15]: df.index.names
Out[15]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels of the index is up to you:
In [16]: df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'],
columns=index)
In [17]: df
Out[17]:
first bar baz foo qux \
second one two one two one two one
A 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299
B 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127
C -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466
first
second two
A -0.226169
B -1.436737
C -2.006747
We've sparsified the higher levels of the indexes to make the console output a bit easier on the eyes.
It's worth keeping in mind that there's nothing preventing you from using tuples as atomic labels on an axis:
In [19]: pd.Series(np.random.randn(8), index=tuples)
Out[19]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do grouping, selection, and reshaping operations
as we will describe below and in subsequent areas of the documentation. As you will see in later sections, you can find
yourself working with hierarchically-indexed data without creating a MultiIndex explicitly yourself. However,
when loading data from a file, you may wish to generate your own MultiIndex when preparing the data set.
Note that how the index is displayed can be controlled using the multi_sparse option in
pandas.set_options():
In [20]: pd.set_option('display.multi_sparse', False)
In [21]: df
Out[21]:
first bar bar baz baz foo foo qux \
second one two one two one two one
A 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299
B 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127
C -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466
first qux
second two
A -0.226169
B -1.436737
C -2.006747
The method get_level_values will return a vector of the labels for each location at a particular level:
In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
In [24]: index.get_level_values('second')
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
One of the important features of hierarchical indexing is that you can select data by a partial label identifying a
subgroup in the data. Partial selection drops levels of the hierarchical index in the result in a completely analogous
way to selecting a column in a regular DataFrame:
In [25]: df['bar']
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
In [26]: df['bar', 'one']
Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
In [27]: df['bar']['one']
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [28]: s['qux']
Out[28]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select on a deeper level.
The repr of a MultiIndex shows ALL the defined levels of an index, even if they are not actually used. When
slicing an index, you may notice this. For example:
# original multi-index
In [29]: df.columns
Out[29]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
names=['first', 'second'])
# sliced
In [30]: df[['foo','qux']].columns
Out[30]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
           labels=[[2, 2, 3, 3], [0, 1, 0, 1]],
           names=['first', 'second'])
This is done to avoid a recomputation of the levels in order to make slicing highly performant. If you want to see
only the used levels:
In [31]: df[['foo','qux']].columns.values
Out[31]: array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)
In [33]: df[['foo','qux']].columns.remove_unused_levels()
Out[33]:
MultiIndex(levels=[['foo', 'qux'], ['one', 'two']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=['first', 'second'])
Operations between differently-indexed objects having MultiIndex on the axes will work as you expect; data
alignment will work the same as an Index of tuples:
In [34]: s + s[:-2]
Out[34]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [35]: s + s[::2]
Out[35]:
bar  one   -1.723698
     two         NaN
baz  one   -0.989859
     two         NaN
foo  one    1.443110
     two         NaN
qux  one   -2.079150
     two         NaN
dtype: float64
reindex can be called with another MultiIndex or even a list or array of tuples:
In [36]: s.reindex(index[:3])
Out[36]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
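A short, self-contained sketch of reindexing with a plain list of tuples (the labels are the same ones used above):

import numpy as np
import pandas as pd

index = pd.MultiIndex.from_product([['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
                                   names=['first', 'second'])
s = pd.Series(np.random.randn(8), index=index)

# any iterable of tuples works as a reindexing target
s.reindex([('foo', 'two'), ('bar', 'one'), ('qux', 'one'), ('baz', 'one')])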
Syntactically integrating MultiIndex in advanced indexing with .loc is a bit challenging, but we've made every
effort to do so. For example, the following works as you would expect:
In [38]: df = df.T
In [39]: df
Out[39]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [40]: df.loc['bar']
Out[40]:
A B C
second
one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
In [41]: df.loc['bar', 'two']
Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
In [42]: df.loc['baz':'foo']
Out[42]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
You can also slice with a 'range' of values, by providing a slice of tuples:
In [43]: df.loc[('baz', 'two'):'foo']
Out[43]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into, say, the MultiIndex for the rows.
In [46]: def mklbl(prefix, n):
   ....:     return ["%s%s" % (prefix, i) for i in range(n)]
   ....:
In [47]: miindex = pd.MultiIndex.from_product([mklbl('A', 4), mklbl('B', 2),
   ....:                                       mklbl('C', 4), mklbl('D', 2)])
In [48]: micolumns = pd.MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'),
   ....:                                        ('b', 'foo'), ('b', 'bah')],
   ....:                                       names=['lvl0', 'lvl1'])
In [49]: dfmi = pd.DataFrame(np.arange(len(miindex) * len(micolumns))
   ....:                       .reshape((len(miindex), len(micolumns))),
   ....:                     index=miindex,
   ....:                     columns=micolumns).sort_index().sort_index(axis=1)
   ....:
In [50]: dfmi
Out[50]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25 24 27 26
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233 232 235 234
D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
[64 rows x 4 columns]
Basic multi-index slicing using slices, lists, and labels:
In [51]: dfmi.loc[(slice('A1', 'A3'), slice(None), ['C1', 'C3']), :]
Out[51]:
lvl0           a         b
lvl1         bar  foo  bah  foo
A1 B0 C1 D0   73   72   75   74
         D1   77   76   79   78
      C3 D0   89   88   91   90
         D1   93   92   95   94
   B1 C1 D0  105  104  107  106
         D1  109  108  111  110
      C3 D0  121  120  123  122
...          ...  ...  ...  ...
A3 B0 C1 D1  205  204  207  206
      C3 D0  217  216  219  218
         D1  221  220  223  222
   B1 C1 D0  233  232  235  234
         D1  237  236  239  238
      C3 D0  249  248  251  250
         D1  253  252  255  254
[24 rows x 4 columns]
You can use a pd.IndexSlice to have a more natural syntax using : rather than using slice(None). It is
possible to perform quite complicated selections using this method on multiple axes at the same time:
In [52]: idx = pd.IndexSlice
In [53]: dfmi.loc['A1', (slice(None), 'foo')]
Out[53]:
lvl0        a    b
lvl1      foo  foo
B0 C0 D0   64   66
      D1   68   70
   C1 D0   72   74
      D1   76   78
   C2 D0   80   82
      D1   84   86
   C3 D0   88   90
      D1   92   94
B1 C0 D0   96   98
      D1  100  102
   C1 D0  104  106
      D1  108  110
   C2 D0  112  114
      D1  116  118
   C3 D0  120  122
      D1  124  126
In [54]: dfmi.loc[idx[:, :, ['C1', 'C3']], idx[:, 'foo']]
Out[54]:
lvl0           a    b
lvl1         foo  foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
Using a boolean indexer you can provide selection related to the values:
In [55]: mask = dfmi[('a', 'foo')] > 200
In [56]: dfmi.loc[idx[mask, :, ['C1', 'C3']], idx[:, 'foo']]
Out[56]:
lvl0           a    b
lvl1         foo  foo
A3 B0 C1 D1  204  206
      C3 D0  216  218
         D1  220  222
   B1 C1 D0  232  234
         D1  236  238
      C3 D0  248  250
         D1  252  254
You can also specify the axis argument to .loc to interpret the passed slicers on a single axis:
In [57]: dfmi.loc(axis=0)[idx[:, :, ['C1', 'C3']]]
Out[57]:
lvl0           a         b
lvl1         bar  foo  bah  foo
A0 B0 C1 D0    9    8   11   10
         D1   13   12   15   14
      C3 D0   25   24   27   26
         D1   29   28   31   30
B1 C1 D0 41 40 43 42
D1 45 44 47 46
C3 D0 57 56 59 58
... ... ... ... ...
A3 B0 C1 D1 205 204 207 206
C3 D0 217 216 219 218
D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
You can use these slicers to set values as well:
In [59]: df2 = dfmi.copy()
In [60]: df2.loc(axis=0)[idx[:, :, ['C1', 'C3']]] = -10
In [61]: df2
Out[61]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 -10 -10 -10 -10
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
You can use a right-hand-side of an alignable object as well:
In [62]: df2 = dfmi.copy()
In [63]: df2.loc[idx[:, :, ['C1', 'C3']], :] = df2 * 1000
In [64]: df2
Out[64]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
D1 21 20 23 22
13.2.2 Cross-section
The xs method of DataFrame additionally takes a level argument to make selecting data at a particular level of a
MultiIndex easier.
In [65]: df
Out[65]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [66]: df.xs('one', level='second')
Out[66]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466
You can also select on the columns with xs(), by providing the axis argument:
In [67]: df = df.T
In [68]: df.xs('one', level='second', axis=1)
Out[68]:
first        bar       baz       foo       qux
A       0.895717 -1.206412  1.431256 -1.170299
B       0.410835  0.132003 -0.076467  1.130127
C      -1.413681  1.024180  0.875906  0.974466
The parameter level has been added to the reindex and align methods of pandas objects. This is useful to
broadcast values across a level. For instance:
In [75]: midx = pd.MultiIndex(levels=[['zero', 'one'], ['x', 'y']],
   ....:                      labels=[[1, 1, 0, 0], [1, 0, 1, 0]])
In [76]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)
In [77]: df
Out[77]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [78]: df2 = df.mean(level=0)
In [79]: df2
Out[79]:
0 1
zero 1.271532 0.713416
one 1.060074 -0.109716
# reindex
In [80]: df2.reindex(df.index, level=0)
Out[80]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [81]: df_aligned, df2_aligned = df.align(df2, level=0)
In [82]: df_aligned
Out[82]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [83]: df2_aligned
Out[83]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
In [84]: df[:5]
Out[84]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
The swaplevel function can switch the order of two levels:
In [85]: df[:5].swaplevel(0, 1, axis=0)
Out[85]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
The reorder_levels function generalizes the swaplevel function, allowing you to permute the hierarchical
index levels in one step:
In [86]: df[:5].reorder_levels([1, 0], axis=0)
Out[86]:
               0         1
y one   1.519970 -0.493662
x one   0.600178  0.274230
y zero  0.132885 -0.023688
x zero  2.410179  1.450520
For MultiIndex-ed objects to be indexed & sliced effectively, they need to be sorted. As with any index, you can use
sort_index.
In [87]: import random; random.shuffle(tuples)
In [88]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
In [89]: s
Out[89]:
bar one 0.206053
foo two -0.251905
one -2.213588
bar two 1.063327
qux two 1.266143
baz two 0.299368
qux one -0.863838
baz one 0.408204
dtype: float64
In [90]: s.sort_index()
Out[90]:
bar  one    0.206053
     two    1.063327
baz  one    0.408204
     two    0.299368
foo  one   -2.213588
     two   -0.251905
qux  one   -0.863838
     two    1.266143
dtype: float64
In [91]: s.sort_index(level=0)
Out[91]:
bar  one    0.206053
     two    1.063327
baz  one    0.408204
     two    0.299368
foo  one   -2.213588
     two   -0.251905
qux  one   -0.863838
     two    1.266143
dtype: float64
In [92]: s.sort_index(level=1)
Out[92]:
bar  one    0.206053
baz  one    0.408204
foo  one   -2.213588
qux  one   -0.863838
bar  two    1.063327
baz  two    0.299368
foo  two   -0.251905
qux  two    1.266143
dtype: float64
You may also pass a level name to sort_index if the MultiIndex levels are named:
In [93]: s.index.set_names(['L1', 'L2'], inplace=True)
In [94]: s.sort_index(level='L1')
Out[94]:
L1 L2
bar one 0.206053
two 1.063327
baz one 0.408204
two 0.299368
foo one -2.213588
two -0.251905
qux one -0.863838
two 1.266143
dtype: float64
In [95]: s.sort_index(level='L2')
Out[95]:
L1 L2
bar one 0.206053
baz one 0.408204
foo one -2.213588
qux one -0.863838
bar two 1.063327
baz two 0.299368
foo two -0.251905
qux two 1.266143
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if they have a MultiIndex:
Indexing will work even if the data are not sorted, but will be rather inefficient (and show a PerformanceWarning).
It will also return a copy of the data rather than a view:
In [97]: dfm = pd.DataFrame({'jim': [0, 0, 1, 1],
   ....:                     'joe': ['x', 'x', 'z', 'y'],
   ....:                     'jolie': np.random.rand(4)})
In [98]: dfm = dfm.set_index(['jim', 'joe'])
In [99]: dfm
Out[99]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968
In [4]: dfm.loc[(1, 'z')]
PerformanceWarning: indexing past lexsort depth may impact performance.
Out[4]:
jolie
jim joe
1 z 0.64094
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
In [5]: dfm.loc[(0, 'y'):(1, 'z')]
UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The is_lexsorted() method on an Index shows if the index is sorted, and the lexsort_depth property
returns the sort depth:
In [100]: dfm.index.is_lexsorted()
Out[100]: False
In [101]: dfm.index.lexsort_depth
Out[101]: 1
In [102]: dfm = dfm.sort_index()
In [103]: dfm
Out[103]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [104]: dfm.index.is_lexsorted()
Out[104]: True
In [105]: dfm.index.lexsort_depth
Out[105]: 2
Similar to numpy ndarrays, pandas Index, Series, and DataFrame also provide the take method that retrieves
elements along a given axis at the given indices. The given indices must be either a list or an ndarray of integer index
positions. take will also accept negative integers as relative positions to the end of the object.
In [107]: index = pd.Index(np.random.randint(0, 1000, 10))
In [108]: index
Out[108]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [109]: positions = [0, 9, 3]
In [110]: index[positions]
Out[110]: Int64Index([214, 329, 567], dtype='int64')
In [111]: index.take(positions)
Out[111]: Int64Index([214, 329, 567], dtype='int64')
In [112]: ser = pd.Series(np.random.randn(10))
In [113]: ser.iloc[positions]
Out[113]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [114]: ser.take(positions)
Out[114]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
For DataFrames, the given indices should be a 1d list or ndarray that specifies row or column positions.
In [115]: frm = pd.DataFrame(np.random.randn(5, 3))
In [116]: frm.take([0, 2], axis=1)
Out[116]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not intended to work on boolean indices and may
return unexpected results.
In [118]: ser = pd.Series(np.random.randn(10))
# the booleans are interpreted as the integer positions 0 and 1, NOT as a mask
In [119]: ser.take([False, True])
Out[119]:
0    0.233141
1   -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles a narrower range of inputs, it can offer
performance that is a good deal faster than fancy indexing.
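A rough way to observe this yourself (a sketch only; absolute timings are machine-dependent):

import timeit
import numpy as np
import pandas as pd

ser = pd.Series(np.random.randn(100000))
positions = np.random.randint(0, len(ser), size=1000)

t_take = timeit.timeit(lambda: ser.take(positions), number=1000)
t_fancy = timeit.timeit(lambda: ser.iloc[positions], number=1000)
print(t_take, t_fancy)  # take is typically the faster of the two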
We have discussed MultiIndex in the previous sections pretty extensively. DatetimeIndex and PeriodIndex
are shown here, and TimedeltaIndex is covered here.
In the following sub-sections we will highlight some other index types.
13.5.1 CategoricalIndex
CategoricalIndex is an index that is useful for supporting indexing with duplicates. This is a container around a
Categorical and allows efficient indexing and storage of an index with a large number of duplicated elements.
In [124]: df = pd.DataFrame({'A': np.arange(6),
   .....:                    'B': list('aabbca')})
In [125]: df['B'] = df['B'].astype('category', categories=list('cab'))
In [126]: df
Out[126]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [127]: df.dtypes
Out[127]:
A int64
B category
dtype: object
In [128]: df.B.cat.categories
Out[128]:
Index(['c', 'a', 'b'], dtype='object')
Setting the index will create a CategoricalIndex:
In [129]: df2 = df.set_index('B')
In [130]: df2.index
Out[130]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'],
ordered=False, name='B', dtype='category')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates. The indexers MUST be
in the category or the operation will raise.
In [131]: df2.loc['a']
Out[131]:
A
B
a 0
a 1
a 5
In [132]: df2.loc['a'].index
Out[132]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False,
name='B', dtype='category')
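For example, a label outside the categories raises a KeyError (a sketch; df2 is rebuilt here so the snippet stands
alone):

import pandas as pd

df2 = pd.DataFrame({'A': range(6)},
                   index=pd.CategoricalIndex(list('aabbca'),
                                             categories=list('cab'), name='B'))
try:
    df2.loc['e']               # 'e' is not among the categories ['c', 'a', 'b']
except KeyError as exc:
    print('raised KeyError:', exc)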
In [133]: df2.sort_index()
Out[133]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well:
In [134]: df2.groupby(level=0).sum()
Out[134]:
A
B
c 4
a 6
b 5
In [135]: df2.groupby(level=0).sum().index
Out[135]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, name='B', dtype='category')
Reindexing operations will return a resulting index based on the type of the passed indexer. Passing a list will
return a plain-old Index; indexing with a Categorical will return a CategoricalIndex, indexed according
to the categories of the PASSED Categorical dtype. This allows one to arbitrarily index these even with values
NOT in the categories, similarly to how you can reindex ANY pandas index.
In [136]: df2.reindex(['a','e'])
Out[136]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [137]: df2.reindex(['a','e']).index
Out[137]: Index(['a', 'a', 'a', 'e'], dtype='object', name='B')
In [138]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
Out[138]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [139]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
Out[139]:
CategoricalIndex(['a', 'a', 'a', 'e'], categories=['a', 'b', 'c', 'd', 'e'],
                 ordered=False, name='B', dtype='category')
Warning: Reshaping and Comparison operations on a CategoricalIndex must have the same categories or
a TypeError will be raised.
In [9]: df3 = pd.DataFrame({'A': np.arange(6),
   ...:                     'B': pd.Series(list('aabbca')).astype('category')})
In [10]: df3 = df3.set_index('B')
In [11]: df3.index
Out[11]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['a', 'b', 'c'], ordered=False, name='B', dtype='category')
In [12]: pd.concat([df2, df3])
TypeError: categories must match existing categories when appending
Warning: Indexing on an integer-based Index with floats has been clarified in 0.18.0; for a summary of the
changes, see here.
13.5.2 Int64Index and RangeIndex
Int64Index is a fundamental basic index in pandas. It is an immutable array implementing an ordered, sliceable
set. Prior to 0.18.0, Int64Index provided the default index for all NDFrame objects.
RangeIndex is a sub-class of Int64Index added in version 0.18.0, now providing the default index for all
NDFrame objects. RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered
set. These are analogous to Python range types.
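A small sketch of the default RangeIndex (the memory comparison illustrates the range-like representation, not a
guarantee):

import pandas as pd

s = pd.Series([10, 20, 30])
print(s.index)  # RangeIndex(start=0, stop=3, step=1)

# a RangeIndex stores only start/stop/step, like a Python range,
# so it is typically smaller than a materialized Int64Index
print(s.index.memory_usage(), pd.Index([0, 1, 2]).memory_usage())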
13.5.3 Float64Index
Note: As of 0.14.0, Float64Index is backed by a native float64 dtype array. Prior to 0.14.0, Float64Index
was backed by an object dtype array. Using a float64 dtype in the backend speeds up arithmetic operations by
about 30x and boolean indexing operations on the Float64Index itself are about 2x as fast.
In [140]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])
In [141]: indexf
Out[141]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [142]: sf = pd.Series(range(5), index=indexf)
In [143]: sf
Out[143]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [] and .loc will always be label-based. An integer will match an equal float index (e.g. 3 is
equivalent to 3.0):
In [144]: sf[3]
Out[144]: 2
In [145]: sf[3.0]
Out[145]: 2
In [146]: sf.loc[3]
Out[146]: 2
In [147]: sf.loc[3.0]
Out[147]: 2
Slicing is ALWAYS on the values of the index for [] and .loc, and ALWAYS positional with .iloc:
In [150]: sf.loc[2:4]
Out[150]:
2.0 1
3.0 2
dtype: int64
In [151]: sf.iloc[2:4]
Out[151]:
3.0 2
4.5 3
dtype: int64
In float indexes, slicing using floats is allowed:
In [153]: sf.loc[2.1:4.6]
Out[153]:
3.0 2
4.5 3
dtype: int64
In non-float indexes, slicing using floats will raise a TypeError:
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type
(Int64Index)
Warning: Using a scalar float indexer for .iloc has been removed in 0.18.0, so the following will raise a
TypeError
In [3]: pd.Series(range(5)).iloc[3.0]
TypeError: cannot do positional indexing on <class 'pandas.indexes.range.RangeIndex
'> with these indexers [3.0] of <type 'float'>
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat irregular timedelta-like
indexing scheme, but the data is recorded as floats. This could for example be millisecond offsets.
In [154]: dfir = pd.concat([pd.DataFrame(np.random.randn(5,2),
.....: index=np.arange(5) * 250.0,
.....: columns=list('AB')),
.....: pd.DataFrame(np.random.randn(6,2),
.....: index=np.arange(4,10) * 250.1,
.....: columns=list('AB'))])
.....:
In [155]: dfir
Out[155]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
1000.4 0.310610 -0.108002
1250.5 -0.974226 -1.147708
1500.6 -2.281374 0.760010
1750.7 -0.742532 1.533318
2000.8 2.495362 -0.432771
2250.9 -0.068954 0.043520
Selection operations then will always work on a value basis, for all selection operators.
In [156]: dfir[0:1000.4]
Out[156]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
1000.4 0.310610 -0.108002
In [157]: dfir.loc[0:1001,'A']
Out[157]:
0.0 0.997289
250.0 -0.179129
500.0 0.936914
750.0 -1.003401
1000.0 -0.724626
1000.4 0.310610
Name: A, dtype: float64
In [158]: dfir.loc[1000.4]
Out[158]:
A 0.310610
B -0.108002
Name: 1000.4, dtype: float64
You could then easily pick out the first 1 second (1000 ms) of data:
In [159]: dfir[0:1000]
Out[159]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
In [160]: dfir.iloc[0:5]
Out[160]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
13.5.4 IntervalIndex
Warning: These indexing behaviors are provisional and may change in a future version of pandas.
In [161]: df = pd.DataFrame({'A': [1, 2, 3, 4]},
   .....:                   index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]))
In [162]: df
Out[162]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect, selecting that particular
interval.
In [163]: df.loc[2]
Out[163]:
A 2
Name: (1, 2], dtype: int64
If you select a label contained within an interval, this will also select the interval.
In [165]: df.loc[2.5]
Out[165]:
A 3
Name: (2, 3], dtype: int64
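Interval-based indexes also arise naturally from pd.cut; a small sketch (the bin edges here are arbitrary):

import pandas as pd

ages = pd.Series([6, 23, 48, 71])
bins = pd.cut(ages, bins=[0, 18, 65, 100])  # Categorical of Interval values
counts = ages.groupby(bins).size()          # the result is indexed by the intervals
print(counts)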
Label-based indexing with integer axis labels is a thorny topic. It has been discussed heavily on mailing lists and
among various members of the scientific Python community. In pandas, our general viewpoint is that labels matter
more than integer locations. Therefore, with an integer axis index only label-based indexing is possible with the
standard tools like .loc. The following code will generate exceptions:
s = pd.Series(range(5))
s[-1]
df = pd.DataFrame(np.random.randn(5, 4))
df
df.loc[-2:]
This deliberate decision was made to prevent ambiguities and subtle bugs (many users reported finding bugs when the
API change was made to stop falling back on position-based indexing).
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds of a label-based
slice can be outside the range of the index, much like slice indexing a normal Python list. Monotonicity of an index
can be tested with the is_monotonic_increasing and is_monotonic_decreasing attributes.
In [167]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=['data'], data=list(range(5)))
In [168]: df.index.is_monotonic_increasing
Out[168]: True
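A sketch of the slack this allows (rebuilding the same df so the snippet stands alone):

import pandas as pd

df = pd.DataFrame({'data': range(5)}, index=[2, 3, 3, 4, 5])
df.loc[0:4]    # no label 0, but rows 2, 3, 3 and 4 are returned
df.loc[13:15]  # bounds entirely outside the index: an empty DataFrame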
On the other hand, if the index is not monotonic, then both slice bounds must be unique members of the index:
In [171]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5], columns=['data'], data=list(range(6)))
In [172]: df.index.is_monotonic_increasing
Out[172]: False
# OK because 2 and 4 are in the index
In [173]: df.loc[2:4, :]
Out[173]:
data
2 0
3 1
1 2
4 3
Compared with standard Python sequence slicing in which the slice endpoint is not inclusive, label-based slicing in
pandas is inclusive. The primary reason for this is that it is often not possible to easily determine the successor or
next element after a particular label in an index. For example, consider the following Series:
In [174]: s = pd.Series(np.random.randn(6), index=list('abcdef'))
In [175]: s
Out[175]:
a 0.112246
b 0.871721
c -0.816064
d -0.784880
e 1.030659
f 0.187483
dtype: float64
However, if you only had c and e, determining the next element in the index can be somewhat complicated. For
example, the following does not work:
s.loc['c':'e'+1]
A very common use case is to limit a time series to start and end at two specific dates. To enable this, we made the
design decision to make label-based slicing include both endpoints:
In [177]: s.loc['c':'e']
Out[177]:
c -0.816064
d -0.784880
e 1.030659
dtype: float64
This is most definitely a "practicality beats purity" sort of thing, but it is something to watch out for if you expect
label-based slicing to behave exactly in the way that standard Python integer slicing works.
The different indexing operations can potentially change the dtype of a Series:
In [178]: series1 = pd.Series([1, 2, 3])
In [179]: series1.dtype
Out[179]: dtype('int64')
In [180]: res = series1.reindex([0, 4])
In [181]: res.dtype
Out[181]: dtype('float64')
In [182]: res
Out[182]:
0 1.0
4 NaN
dtype: float64
In [183]: series2 = pd.Series([True])
In [184]: series2.dtype
Out[184]: dtype('bool')
In [185]: res = series2.reindex_like(series1)
In [186]: res.dtype
Out[186]: dtype('O')
In [187]: res
Out[187]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently insert NaNs and the dtype changes accordingly. This can
cause some issues when using numpy ufuncs such as numpy.logical_and.
See this old issue for a more detailed discussion.
COMPUTATIONAL TOOLS
14.1 Statistical Functions
14.1.1 Percent Change
Series, DataFrame, and Panel all have a method pct_change to compute the percent change over a given
number of periods (using fill_method to fill NA/null values before computing the percent change).
In [1]: ser = pd.Series(np.random.randn(8))
In [2]: ser.pct_change()
Out[2]:
0 NaN
1 -1.602976
2 4.334938
3 -0.247456
4 -2.067345
5 -1.142903
6 -1.688214
7 -9.759729
dtype: float64
In [3]: df = pd.DataFrame(np.random.randn(10, 4))
In [4]: df.pct_change(periods=3)
Out[4]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 -0.218320 -1.054001 1.987147 -0.510183
4 -0.439121 -1.816454 0.649715 -4.822809
5 -0.127833 -3.042065 -5.866604 -1.776977
6 -2.596833 -1.959538 -2.111697 -3.798900
7 -0.117826 -2.169058 0.036094 -0.067696
8 2.492606 -1.357320 -1.205802 -1.558697
9 -1.012977 2.324558 -1.003744 -0.371806
14.1.2 Covariance
The Series object has a method cov to compute covariance between series (excluding NA/null values).
In [5]: s1 = pd.Series(np.random.randn(1000))
In [6]: s2 = pd.Series(np.random.randn(1000))
In [7]: s1.cov(s2)
Out[7]: 0.00068010881743108746
Analogously, DataFrame has a method cov to compute pairwise covariances among the series in the DataFrame,
also excluding NA/null values.
Note: Assuming the missing data are missing at random this results in an estimate for the covariance matrix which
is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance
matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values
which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more
details.
In [8]: frame = pd.DataFrame(np.random.randn(1000, 5), columns=['a', 'b', 'c', 'd', 'e'])
In [9]: frame.cov()
Out[9]:
a b c d e
a 1.000882 -0.003177 -0.002698 -0.006889 0.031912
b -0.003177 1.024721 0.000191 0.009212 0.000857
c -0.002698 0.000191 0.950735 -0.031743 -0.005087
d -0.006889 0.009212 -0.031743 1.002983 -0.047952
e 0.031912 0.000857 -0.005087 -0.047952 1.042487
DataFrame.cov also supports an optional min_periods keyword that specifies the required minimum number
of observations for each column pair in order to have a valid result.
In [10]: frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
In [11]: frame.loc[frame.index[:5], 'a'] = np.nan
In [12]: frame.loc[frame.index[5:10], 'b'] = np.nan
In [13]: frame.cov()
Out[13]:
a b c
a 1.123670 -0.412851 0.018169
b -0.412851 1.154141 0.305260
c 0.018169 0.305260 1.301149
In [14]: frame.cov(min_periods=12)
Out[14]:
a b c
a 1.123670 NaN 0.018169
b NaN 1.154141 0.305260
c 0.018169 0.305260 1.301149
14.1.3 Correlation
Several methods for computing correlations are provided, all currently computed using pairwise complete
observations: pearson (standard correlation coefficient), kendall (Kendall Tau correlation coefficient) and
spearman (Spearman rank correlation coefficient).
Note: Please see the caveats associated with this method of calculating correlation matrices in the covariance section.
Note that non-numeric columns will be automatically excluded from the correlation calculation.
Like cov, corr also supports the optional min_periods keyword:
In [23]: frame.corr()
Out[23]:
a b c
a 1.000000 -0.121111 0.069544
b -0.121111 1.000000 0.051742
c 0.069544 0.051742 1.000000
In [24]: frame.corr(min_periods=12)
Out[24]:
          a         b         c
a  1.000000       NaN  0.069544
b       NaN  1.000000  0.051742
c  0.069544  0.051742  1.000000
A related method corrwith is implemented on DataFrame to compute the correlation between like-labeled Series
contained in different DataFrame objects.
In [25]: index = ['a', 'b', 'c', 'd', 'e']
In [26]: columns = ['one', 'two', 'three', 'four']
In [27]: df1 = pd.DataFrame(np.random.randn(5, 4), index=index, columns=columns)
In [28]: df2 = pd.DataFrame(np.random.randn(4, 4), index=index[:4], columns=columns)
In [29]: df1.corrwith(df2)
Out[29]:
one -0.125501
two -0.493244
three 0.344056
four 0.004183
dtype: float64
In [30]: df2.corrwith(df1, axis=1)
Out[30]:
a -0.675817
b 0.458296
c 0.190809
d -0.186275
e NaN
dtype: float64
14.1.4 Data ranking
The rank method produces a data ranking, with ties being assigned the mean of the ranks (by default) for the group:
In [31]: s = pd.Series(np.random.randn(5), index=list('abcde'))
In [32]: s['d'] = s['b']   # so there's a tie
In [33]: s.rank()
Out[33]:
a 5.0
b 2.5
c 1.0
d 2.5
e 4.0
dtype: float64
rank is also a DataFrame method and can rank either the rows (axis=0) or the columns (axis=1). NaN values are
excluded from the ranking.
In [34]: df = pd.DataFrame(np.random.randn(10, 6))
In [35]: df[4] = df[2][:5]   # some ties
In [36]: df
Out[36]:
0 1 2 3 4 5
0 -0.904948 -1.163537 -1.457187 0.135463 -1.457187 0.294650
1 -0.976288 -0.244652 -0.748406 -0.999601 -0.748406 -0.800809
2 0.401965 1.460840 1.256057 1.308127 1.256057 0.876004
3 0.205954 0.369552 -0.669304 0.038378 -0.669304 1.140296
4 -0.477586 -0.730705 -1.129149 -0.601463 -1.129149 -0.211196
5 -1.092970 -0.689246 0.908114 0.204848 NaN 0.463347
6 0.376892 0.959292 0.095572 -0.593740 NaN -0.069180
7 -1.002601 1.957794 -0.120708 0.094214 NaN -1.467422
8 -0.547231 0.664402 -0.519424 -0.073254 NaN -1.263544
9 -0.250277 -0.237428 -1.056443 0.419477 NaN 1.375064
In [37]: df.rank(1)
Out[37]:
0 1 2 3 4 5
0 4.0 3.0 1.5 5.0 1.5 6.0
1 2.0 6.0 4.5 1.0 4.5 3.0
2 1.0 6.0 3.5 5.0 3.5 2.0
3 4.0 5.0 1.5 3.0 1.5 6.0
4 5.0 3.0 1.5 4.0 1.5 6.0
5 1.0 2.0 5.0 3.0 NaN 4.0
6 4.0 5.0 3.0 1.0 NaN 2.0
7 2.0 5.0 3.0 4.0 NaN 1.0
8 2.0 5.0 3.0 4.0 NaN 1.0
9 2.0 3.0 1.0 4.0 NaN 5.0
rank optionally takes a parameter ascending which by default is true; when false, data is reverse-ranked, with
larger values assigned a smaller rank.
rank supports different tie-breaking methods, specified with the method parameter:
average : average rank of tied group
min : lowest rank in the group
max : highest rank in the group
first : ranks assigned in the order they appear in the array
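A sketch contrasting the four methods on a small Series with one tie:

import pandas as pd

s = pd.Series([3, 1, 1, 2])
s.rank()                 # average (default): [4.0, 1.5, 1.5, 3.0]
s.rank(method='min')     # ties take the lowest rank:  [4.0, 1.0, 1.0, 3.0]
s.rank(method='max')     # ties take the highest rank: [4.0, 2.0, 2.0, 3.0]
s.rank(method='first')   # ties broken by appearance:  [4.0, 1.0, 2.0, 3.0]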
14.2 Window Functions
Warning: Prior to version 0.18.0, pd.rolling_*, pd.expanding_*, and pd.ewm* were module-level
functions and are now deprecated. They are replaced by the Rolling, Expanding and EWM objects and a
corresponding method call.
The deprecation warning will show the new syntax; see an example here. You can view the previous documentation
here.
For working with data, a number of window functions are provided for computing common window or rolling statis-
tics. Among these are count, sum, mean, median, correlation, variance, covariance, standard deviation, skewness, and
kurtosis.
Starting in version 0.18.1, the rolling() and expanding() functions can be used directly from
DataFrameGroupBy objects, see the groupby docs.
Note: The API for window statistics is quite similar to the way one works with GroupBy objects, see the documen-
tation here
We work with rolling, expanding and exponentially weighted data through the corresponding objects,
Rolling, Expanding and EWM.
In [38]: s = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
In [39]: s = s.cumsum()
In [40]: s
Out[40]:
2000-01-01 -0.268824
2000-01-02 -1.771855
2000-01-03 -0.818003
2000-01-04 -0.659244
2000-01-05 -1.942133
2000-01-06 -1.869391
2000-01-07 0.563674
...
2002-09-20 -68.233054
2002-09-21 -66.765687
2002-09-22 -67.457323
2002-09-23 -69.253182
2002-09-24 -70.296818
2002-09-25 -70.844674
2002-09-26 -72.475016
Freq: D, Length: 1000, dtype: float64
In [41]: r = s.rolling(window=60)
In [42]: r
Out[42]: Rolling [window=60,center=False,axis=0]
In [14]: r.<TAB>
r.agg          r.apply        r.count        r.exclusions   r.max          r.median
r.name         r.skew         r.sum          r.aggregate    r.corr         r.cov
r.kurt         r.mean         r.min          r.quantile     r.std          r.var
Generally these methods all have the same interface. They all accept the following arguments:
window: size of moving window
min_periods: threshold of non-null data points to require (otherwise result is NA)
center: boolean, whether to set the labels at the center (default is False)
Warning: The freq and how arguments were in the API prior to 0.18.0; they are deprecated in the new API.
You can simply resample the input prior to creating a window function.
For example, instead of s.rolling(window=5,freq='D').max() to get the max value on a rolling 5 Day
window, one could use s.resample('D').max().rolling(window=5).max(), which first resamples
the data to daily data, then provides a rolling 5 day window.
We can then call methods on these rolling objects. These return like-indexed objects:
In [43]: r.mean()
Out[43]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
...
2002-09-20 -62.694135
2002-09-21 -62.812190
2002-09-22 -62.914971
2002-09-23 -63.061867
2002-09-24 -63.213876
2002-09-25 -63.375074
2002-09-26 -63.539734
Freq: D, Length: 1000, dtype: float64
In [44]: s.plot(style='k--')
Out[44]: <matplotlib.axes._subplots.AxesSubplot at 0x115b03438>
In [45]: r.mean().plot(style='k')
Out[45]: <matplotlib.axes._subplots.AxesSubplot at 0x115b03438>
They can also be applied to DataFrame objects. This is really just syntactic sugar for applying the moving window
operator to all of the DataFrame's columns:
In [47]: df = df.cumsum()
In [48]: df.rolling(window=60).sum().plot(subplots=True)
Out[48]:
array([<matplotlib.axes._subplots.AxesSubplot object at 0x11fc7db38>,
<matplotlib.axes._subplots.AxesSubplot object at 0x120008b00>,
<matplotlib.axes._subplots.AxesSubplot object at 0x1200694a8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x1200d3ac8>], dtype=object)
The apply function takes an extra func argument and performs generic rolling computations. For example, a
rolling mean absolute deviation:
In [49]: mad = lambda x: np.fabs(x - x.mean()).mean()
In [50]: s.rolling(window=60).apply(mad).plot(style='k')
Out[50]: <matplotlib.axes._subplots.AxesSubplot at 0x1203785f8>
Passing win_type to .rolling generates a generic rolling window computation that is weighted according to
the win_type. The following methods are available:
Method Description
sum() Sum of values
mean() Mean of values
The weights used in the window are specified by the win_type keyword. The list of recognized types are:
boxcar
triang
blackman
hamming
bartlett
parzen
bohman
blackmanharris
nuttall
barthann
kaiser (needs beta)
gaussian (needs std)
general_gaussian (needs power, width)
In [51]: ser = pd.Series(np.random.randn(10), index=pd.date_range('1/1/2000', periods=10))
In [52]: ser.rolling(window=5, win_type='triang').mean()
Note that the boxcar window is equivalent to mean():
In [53]: ser.rolling(window=5, win_type='boxcar').mean()
In [54]: ser.rolling(window=5).mean()
Out[54]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64
For some windowing functions, additional parameters must be specified:
In [55]: ser.rolling(window=5, win_type='gaussian').mean(std=0.1)
Out[55]:
2000-01-01         NaN
...
2000-01-06   -1.153000
2000-01-07 0.606382
2000-01-08 -0.681101
2000-01-09 -0.289724
2000-01-10 -0.996632
Freq: D, dtype: float64
Note: For .sum() with a win_type, there is no normalization done to the weights for the window. Passing custom
weights of [1, 1, 1] will yield a different result than passing weights of [2, 2, 2], for example. When passing
a win_type instead of explicitly specifying the weights, the weights are already normalized so that the largest weight
is 1.
In contrast, the nature of the .mean() calculation is such that the weights are normalized with respect to each other.
Weights of [1, 1, 1] and [2, 2, 2] yield the same result.
14.2.1 Time-aware Rolling
New in version 0.19.0 is the ability to pass an offset (or convertible) to a .rolling() method and have it produce
variable sized windows based on the passed time window. For each time point, this includes all preceding values
occurring within the indicated time delta. This can be particularly useful for a non-regular time frequency index.
In [56]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
   ....:                    index=pd.date_range('20130101 09:00:00', periods=5, freq='s'))
In [57]: dft
Out[57]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0
This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.
In [58]: dft.rolling(2).sum()
Out[58]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 NaN
Specifying an offset allows a more intuitive specification of the rolling frequency:
In [59]: dft.rolling('2s').sum()
Out[59]:
                       B
2013-01-01 09:00:00  0.0
2013-01-01 09:00:01  1.0
2013-01-01 09:00:02  3.0
2013-01-01 09:00:03  2.0
2013-01-01 09:00:04  4.0
Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [61]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
....: index = pd.Index([pd.Timestamp('20130101 09:00:00'),
....: pd.Timestamp('20130101 09:00:02'),
....: pd.Timestamp('20130101 09:00:03'),
....: pd.Timestamp('20130101 09:00:05'),
....: pd.Timestamp('20130101 09:00:06')],
....: name='foo'))
....:
In [62]: dft
Out[62]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
In [63]: dft.rolling(2).sum()
Out[63]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN
Using the time-specification generates variable windows for this sparse data.
In [64]: dft.rolling('2s').sum()
Out[64]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05  NaN
2013-01-01 09:00:06  4.0
Furthermore, we now allow an optional on parameter to specify a column (rather than the default of the index) in a
DataFrame.
In [65]: dft = dft.reset_index()
In [66]: dft
Out[66]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
In [67]: dft.rolling('2s', on='foo').sum()
Out[67]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
The closed parameter can be set to 'right' (the default), 'left', 'both' or 'neither' to specify which
endpoints of a time-based window are included:
In [68]: df = pd.DataFrame({'x': 1},
   ....:                   index=[pd.Timestamp('20130101 09:00:01'),
   ....:                          pd.Timestamp('20130101 09:00:02'),
   ....:                          pd.Timestamp('20130101 09:00:03'),
   ....:                          pd.Timestamp('20130101 09:00:04'),
   ....:                          pd.Timestamp('20130101 09:00:06')])
In [69]: df['right'] = df.rolling('2s', closed='right').x.sum()    # default
In [70]: df['both'] = df.rolling('2s', closed='both').x.sum()
In [71]: df['left'] = df.rolling('2s', closed='left').x.sum()
In [72]: df['neither'] = df.rolling('2s', closed='neither').x.sum()
In [73]: df
Out[73]:
x right both left neither
2013-01-01 09:00:01 1 1.0 1.0 NaN NaN
2013-01-01 09:00:02 1 2.0 2.0 1.0 1.0
2013-01-01 09:00:03 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:04 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:06 1 1.0 2.0 1.0 NaN
Currently, this feature is only implemented for time-based windows. For fixed windows, the closed parameter cannot
be set and the rolling window will always have both endpoints closed.
14.2.2 Time-aware Rolling vs. Resampling
Using .rolling() with a time-based index is quite similar to resampling. Both operate and perform reductive
operations on time-indexed pandas objects.
When using .rolling() with an offset, the offset is a time delta. Take a backwards-in-time looking window and
aggregate all of the values in that window (including the end point, but not the start point). This is the new value at
that point in the result. These are variable sized windows in time-space for each point of the input. You will get a
same sized result as the input.
When using .resample() with an offset, construct a new index that is the frequency of the offset. For each
frequency bin, aggregate points from the input within a backwards-in-time looking window that fall in that bin. The
result of this aggregation is the output for that frequency point. The windows are fixed size in the frequency space.
Your result will have the shape of a regular frequency between the min and the max of the original input object.
To summarize, .rolling() is a time-based window operation, while .resample() is a frequency-based window
operation.
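A sketch of the shape difference on an irregular time series (the values are arbitrary):

import pandas as pd

idx = pd.to_datetime(['2013-01-01 09:00:00', '2013-01-01 09:00:02',
                      '2013-01-01 09:00:03', '2013-01-01 09:00:07'])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)

print(len(s.rolling('3s').sum()))   # 4: one output per input point
print(len(s.resample('3s').sum()))  # 3: one output per 3-second bin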
14.2.3 Centering Windows
By default the labels are set to the right edge of the window, but a center keyword is available so the labels can be
set at the center:
In [74]: ser.rolling(window=5).mean()
Out[74]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64
In [75]: ser.rolling(window=5, center=True).mean()
Out[75]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 -0.841164
2000-01-04 -0.779948
2000-01-05 -0.565487
2000-01-06 -0.502815
2000-01-07 -0.553755
2000-01-08 -0.472211
2000-01-09 NaN
2000-01-10 NaN
Freq: D, dtype: float64
14.2.4 Binary Window Functions
cov() and corr() can compute moving window statistics about two Series or any combination of DataFrame/
Series or DataFrame/DataFrame. Here is the behavior in each case:
two Series: compute the statistic for the pairing.
DataFrame/Series: compute the statistics for each column of the DataFrame with the passed Series, thus
returning a DataFrame.
DataFrame/DataFrame: by default compute the statistic for matching column names, returning a
DataFrame. If the keyword argument pairwise=True is passed then computes the statistic for each pair
of columns, returning a MultiIndexed DataFrame whose index are the dates in question (see the next
section).
For example:
In [76]: df = pd.DataFrame(np.random.randn(1000, 4),
   ....:                   index=pd.date_range('1/1/2000', periods=1000),
   ....:                   columns=['A', 'B', 'C', 'D'])
In [77]: df = df.cumsum()
In [78]: df2 = df[:20]
In [79]: df2.rolling(window=5).corr(df2['B'])
Out[79]:
A B C D
2000-01-01 NaN NaN NaN NaN
2000-01-02 NaN NaN NaN NaN
2000-01-03 NaN NaN NaN NaN
2000-01-04 NaN NaN NaN NaN
2000-01-05 0.768775 1.0 -0.977990 0.800252
2000-01-06 0.744106 1.0 -0.967912 0.830021
2000-01-07 0.683257 1.0 -0.928969 0.384916
... ... ... ... ...
2000-01-14 -0.392318 1.0 0.570240 -0.591056
2000-01-15 0.017217 1.0 0.649900 -0.896258
2000-01-16 0.691078 1.0 0.807450 -0.939302
2000-01-17 0.274506 1.0 0.582601 -0.902954
2000-01-18 0.330459 1.0 0.515707 -0.545268
2000-01-19 0.046756 1.0 -0.104334 -0.419799
2000-01-20 -0.328241 1.0 -0.650974 -0.777777
Warning: Prior to version 0.20.0 if pairwise=True was passed, a Panel would be returned. This will now
return a 2-level MultiIndexed DataFrame, see the whatsnew here
In financial data analysis and other fields it's common to compute covariance and correlation matrices for a collection
of time series. Often one is also interested in moving-window covariance and correlation matrices. This can be done
by passing the pairwise keyword argument, which in the case of DataFrame inputs will yield a MultiIndexed
DataFrame whose index are the dates in question. In the case of a single DataFrame argument the pairwise
argument can even be omitted:
Note: Missing values are ignored and each entry is computed using the pairwise complete observations. Please see
the covariance section for caveats associated with this method of calculating covariance and correlation matrices.
In [80]: covs = df[['B', 'C', 'D']].rolling(window=50).cov(df[['A', 'B', 'C']], pairwise=True)
In [81]: covs.loc['2002-09-22':]
Out[81]:
B C D
2002-09-22 A 1.367467 8.676734 -8.047366
B 3.067315 0.865946 -1.052533
C 0.865946 7.739761 -4.943924
2002-09-23 A 0.910343 8.669065 -8.443062
B 2.625456 0.565152 -0.907654
C 0.565152 7.825521 -5.367526
2002-09-24 A 0.463332 8.514509 -8.776514
B 2.306695 0.267746 -0.732186
C 0.267746 7.771425 -5.696962
2002-09-25 A 0.467976 8.198236 -9.162599
B 2.307129 0.267287 -0.754080
C 0.267287 7.466559 -5.822650
2002-09-26 A 0.545781 7.899084 -9.326238
B 2.311058 0.322295 -0.844451
C 0.322295 7.038237 -5.684445
In [82]: correls = df.rolling(window=50).corr()
In [83]: correls.loc['2002-09-22':]
Out[83]:
A B C D
2002-09-22 A 1.000000 0.186397 0.744551 -0.769767
B 0.186397 1.000000 0.177725 -0.240802
C 0.744551 0.177725 1.000000 -0.712051
D -0.769767 -0.240802 -0.712051 1.000000
2002-09-23 A 1.000000 0.134723 0.743113 -0.758758
B 0.134723 1.000000 0.124683 -0.209934
C 0.743113 0.124683 1.000000 -0.719088
... ... ... ... ...
2002-09-25 B 0.075157 1.000000 0.064399 -0.164179
C 0.731888 0.064399 1.000000 -0.704686
D -0.739160 -0.164179 -0.704686 1.000000
2002-09-26 A 1.000000 0.087756 0.727792 -0.736562
B 0.087756 1.000000 0.079913 -0.179477
You can efficiently retrieve the time series of correlations between two columns by reshaping and indexing:
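For example (a self-contained sketch; unstack moves the second index level into the columns, after which a single
pair is an ordinary time series):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(200, 4),
                  index=pd.date_range('2000-01-01', periods=200),
                  columns=list('ABCD'))
correls = df.rolling(window=50).corr()

# one column per pair of original columns; pick out A vs C
ac = correls.unstack(1)[('A', 'C')]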
14.3 Aggregation
Once the Rolling, Expanding or EWM objects have been created, several methods are available to perform multiple
computations on the data. These operations are similar to the aggregating API, groupby API, and resample API.
In [85]: dfa = pd.DataFrame(np.random.randn(1000, 3),
   ....:                    index=pd.date_range('1/1/2000', periods=1000),
   ....:                    columns=['A', 'B', 'C'])
In [86]: r = dfa.rolling(window=60, min_periods=1)
In [87]: r
Out[87]: Rolling [window=60,min_periods=1,center=False,axis=0]
We can aggregate by passing a function to the entire DataFrame, or select a Series (or multiple Series) via standard
getitem.
In [88]: r.aggregate(np.sum)
Out[88]:
A B C
2000-01-01 -0.289838 -0.370545 -1.284206
2000-01-02 -0.216612 -1.675528 -1.169415
2000-01-03 1.154661 -1.634017 -1.566620
2000-01-04 2.969393 -4.003274 -1.816179
2000-01-05 4.690630 -4.682017 -2.717209
2000-01-06 3.880630 -4.447700 -1.078947
2000-01-07 4.001957 -2.884072 -3.116903
... ... ... ...
2002-09-20 2.652493 -10.528875 9.867805
2002-09-21 0.844497 -9.280944 9.522649
2002-09-22 2.860036 -9.270337 6.415245
2002-09-23 3.510163 -8.151439 5.177219
2002-09-24 6.524983 -10.168078 5.792639
2002-09-25 6.409626 -9.956226 5.704050
2002-09-26 5.093787 -7.074515 6.905823
In [89]: r['A'].aggregate(np.sum)
Out[89]:
2000-01-01 -0.289838
2000-01-02 -0.216612
2000-01-03 1.154661
2000-01-04 2.969393
2000-01-05 4.690630
2000-01-06 3.880630
2000-01-07 4.001957
...
2002-09-20 2.652493
2002-09-21 0.844497
2002-09-22 2.860036
2002-09-23 3.510163
2002-09-24 6.524983
2002-09-25 6.409626
2002-09-26 5.093787
Freq: D, Name: A, Length: 1000, dtype: float64
In [90]: r[['A','B']].aggregate(np.sum)
Out[90]:
A B
2000-01-01 -0.289838 -0.370545
2000-01-02 -0.216612 -1.675528
2000-01-03 1.154661 -1.634017
2000-01-04 2.969393 -4.003274
2000-01-05 4.690630 -4.682017
2000-01-06 3.880630 -4.447700
2000-01-07 4.001957 -2.884072
... ... ...
2002-09-20 2.652493 -10.528875
2002-09-21 0.844497 -9.280944
2002-09-22 2.860036 -9.270337
2002-09-23 3.510163 -8.151439
2002-09-24 6.524983 -10.168078
2002-09-25 6.409626 -9.956226
2002-09-26 5.093787 -7.074515
As you can see, the result of the aggregation will have the selected columns, or all columns if none are selected.
With windowed Series you can also pass a list of functions to do aggregation with, outputting a DataFrame:
In [91]: r['A'].agg([np.sum, np.mean, np.std])
Out[91]:
sum mean std
2000-01-01 -0.289838 -0.289838 NaN
2000-01-02 -0.216612 -0.108306 0.256725
2000-01-03 1.154661 0.384887 0.873311
2000-01-04 2.969393 0.742348 1.009734
2000-01-05 4.690630 0.938126 0.977914
2000-01-06 3.880630 0.646772 1.128883
2000-01-07 4.001957 0.571708 1.049487
... ... ... ...
2002-09-20 2.652493 0.044208 1.164919
2002-09-21 0.844497 0.014075 1.148231
2002-09-22 2.860036 0.047667 1.132051
2002-09-23 3.510163 0.058503 1.134296
2002-09-24 6.524983 0.108750 1.144204
2002-09-25 6.409626 0.106827 1.142913
2002-09-26 5.093787 0.084896 1.151416
On a windowed DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
In [92]: r.agg([np.sum, np.mean])
Out[92]:
A B C
sum mean sum mean sum mean
2000-01-01 -0.289838 -0.289838 -0.370545 -0.370545 -1.284206 -1.284206
2000-01-02 -0.216612 -0.108306 -1.675528 -0.837764 -1.169415 -0.584708
2000-01-03 1.154661 0.384887 -1.634017 -0.544672 -1.566620 -0.522207
2000-01-04 2.969393 0.742348 -4.003274 -1.000819 -1.816179 -0.454045
2000-01-05 4.690630 0.938126 -4.682017 -0.936403 -2.717209 -0.543442
2000-01-06 3.880630 0.646772 -4.447700 -0.741283 -1.078947 -0.179825
2000-01-07 4.001957 0.571708 -2.884072 -0.412010 -3.116903 -0.445272
... ... ... ... ... ... ...
2002-09-20 2.652493 0.044208 -10.528875 -0.175481 9.867805 0.164463
2002-09-21 0.844497 0.014075 -9.280944 -0.154682 9.522649 0.158711
2002-09-22 2.860036 0.047667 -9.270337 -0.154506 6.415245 0.106921
2002-09-23 3.510163 0.058503 -8.151439 -0.135857 5.177219 0.086287
2002-09-24 6.524983 0.108750 -10.168078 -0.169468 5.792639 0.096544
2002-09-25 6.409626 0.106827 -9.956226 -0.165937 5.704050 0.095068
2002-09-26 5.093787 0.084896 -7.074515 -0.117909 6.905823 0.115097
Passing a dict of functions has different behavior by default; see the next section.
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
In [93]: r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)})
The function names can also be strings. In order for a string to be valid it must be implemented on the windowed
object:
In [94]: r.agg({'A': 'sum', 'B': 'std'})
Furthermore you can pass a nested dict to indicate different aggregations on different columns.
14.4 Expanding Windows
A common alternative to rolling statistics is to use an expanding window, which yields the value of the statistic with
all the data available up to that point in time.
These follow a similar interface to .rolling, with the .expanding method returning an Expanding object.
As these calculations are a special case of rolling statistics, they are implemented in pandas such that the following
two calls are equivalent:
In [96]: df.rolling(window=len(df), min_periods=1).mean()[:5]
In [97]: df.expanding(min_periods=1).mean()[:5]
Out[97]:
A B C D
2000-01-01 0.314226 -0.001675 0.071823 0.892566
2000-01-02 0.654522 -0.171495 0.179278 0.853361
2000-01-03 0.708733 -0.064489 -0.238271 1.371111
2000-01-04 0.987613 0.163472 -0.919693 1.566485
2000-01-05 1.426971 0.288267 -1.358877 1.808650
Function Description
count() Number of non-null observations
sum() Sum of values
mean() Mean of values
median() Arithmetic median of values
min() Minimum
max() Maximum
std() Unbiased standard deviation
var() Unbiased variance
skew() Unbiased skewness (3rd moment)
kurt() Unbiased kurtosis (4th moment)
quantile() Sample quantile (value at %)
apply() Generic apply
cov() Unbiased covariance (binary)
corr() Correlation (binary)
Aside from not having a window parameter, these functions have the same interfaces as their .rolling counter-
parts. Like above, the parameters they all accept are:
min_periods: threshold of non-null data points to require. Defaults to minimum needed to compute statistic.
No NaNs will be output once min_periods non-null data points have been seen.
center: boolean, whether to set the labels at the center (default is False)
Note: The output of the .rolling and .expanding methods does not contain NaN if there are at least
min_periods non-null values in the current window. For example:
In [98]: sn = pd.Series([1, 2, np.nan, 3, np.nan, 4])
In [99]: sn
Out[99]:
0 1.0
1 2.0
2 NaN
3 3.0
4 NaN
5 4.0
dtype: float64
In [100]: sn.rolling(2).max()
Out[100]:
0 NaN
1 2.0
2 NaN
3 NaN
4 NaN
5 NaN
dtype: float64
In [101]: sn.expanding().max()
Out[101]:
0 1.0
1 2.0
2 2.0
3 3.0
4 3.0
5 4.0
dtype: float64
In case of expanding functions, this differs from cumsum(), cumprod(), cummax(), and cummin(), which
return NaN in the output wherever a NaN is encountered in the input. In order to match the output of cumsum with
expanding, use fillna():
In [102]: sn.expanding().sum()
Out[102]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64
In [103]: sn.cumsum()
Out[103]:
0 1.0
1 3.0
2 NaN
3 6.0
4 NaN
5 10.0
dtype: float64
In [104]: sn.cumsum().fillna(method='ffill')
Out[104]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64
An expanding window statistic will be more stable (and less responsive) than its rolling window counterpart as the
increasing window size decreases the relative impact of an individual data point. As an example, here is the mean()
output for the previous time series dataset:
In [105]: s.plot(style='k--')
Out[105]: <matplotlib.axes._subplots.AxesSubplot at 0x11fcd7668>
In [106]: s.expanding().mean().plot(style='k')
Out[106]: <matplotlib.axes._subplots.AxesSubplot at 0x11fcd7668>
14.5 Exponentially Weighted Windows
A related set of functions are exponentially weighted versions of several of the above statistics. A similar interface
to .rolling and .expanding is accessed through the .ewm method to receive an EWM object. A number of
expanding EW (exponentially weighted) methods are provided:
Function Description
mean() EW moving average
var() EW moving variance
std() EW moving standard deviation
corr() EW moving correlation
cov() EW moving covariance
In general, a weighted moving average is calculated as
=0
= ,
=0
+ (1 )1 + (1 )2 2 + ... + (1 ) 0
=
1 + (1 ) + (1 )2 + ... + (1 )
The difference between the above two variants arises because we are dealing with series which have finite history.
Consider a series of infinite history:
y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots}{1 + (1 - \alpha) + (1 - \alpha)^2 + \cdots}

Noting that the denominator is a geometric series with initial term equal to 1 and a ratio of 1 - \alpha, we have

y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots}{\frac{1}{1 - (1 - \alpha)}}
    = [x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots] \alpha
    = \alpha x_t + [(1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots] \alpha
    = \alpha x_t + (1 - \alpha) [x_{t-1} + (1 - \alpha) x_{t-2} + \cdots] \alpha
    = \alpha x_t + (1 - \alpha) y_{t-1}
which shows the equivalence of the above two variants for infinite series. When adjust=True we have y_0 = x_0,
and from the last representation above we have y_t = \alpha x_t + (1 - \alpha) y_{t-1}; therefore there is an assumption that x_0 is
not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point.
One must have 0 < \alpha \le 1, and while since version 0.18.0 it has been possible to pass \alpha directly, it's often easier to
think about either the span, center of mass (com) or half-life of an EW moment:

\alpha = \begin{cases}
    \frac{2}{s + 1},                          & \text{for span } s \ge 1 \\
    \frac{1}{1 + c},                          & \text{for center of mass } c \ge 0 \\
    1 - \exp\left(\frac{\log 0.5}{h}\right),  & \text{for half-life } h > 0
\end{cases}
One must specify precisely one of span, center of mass, half-life and alpha to the EW functions:
Span corresponds to what is commonly called an N-day EW moving average.
Center of mass has a more physical interpretation and can be thought of in terms of span: c = (s - 1) / 2.
Half-life is the period of time for the exponential weight to reduce to one half.
Alpha specifies the smoothing factor directly.
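As a quick cross-check of these relationships (a minimal sketch; the numbers are illustrative): a span of s = 19 corresponds to a center of mass c = (19 - 1)/2 = 9 and a smoothing factor \alpha = 2/(19 + 1) = 0.1, so the following three calls are equivalent:

>>> s.ewm(span=19).mean()
>>> s.ewm(com=9).mean()
>>> s.ewm(alpha=0.1).mean()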
Here is an example for a univariate time series:
In [107]: s.plot(style='k--')
Out[107]: <matplotlib.axes._subplots.AxesSubplot at 0x111396940>
In [108]: s.ewm(span=20).mean().plot(style='k')
Out[108]:
<matplotlib.axes._subplots.AxesSubplot at 0x111396940>
EWM has a min_periods argument, which has the same meaning it does for all the .expanding and .rolling
methods: no output values will be set until at least min_periods non-null values are encountered in the (expanding)
window. (This is a change from versions prior to 0.15.0, in which the min_periods argument affected only the
min_periods consecutive entries starting at the first non-null value.)
EWM also has an ignore_na argument, which determines how intermediate null values affect the calculation of
the weights. When ignore_na=False (the default), weights are calculated based on absolute positions, so that
intermediate null values affect the result. When ignore_na=True (which reproduces the behavior in versions prior
to 0.15.0), weights are calculated by ignoring intermediate null values. For example, assuming adjust=True, if
ignore_na=False, the weighted average of 3, NaN, 5 would be calculated as
\frac{(1 - \alpha)^2 \cdot 3 + 1 \cdot 5}{(1 - \alpha)^2 + 1}
Whereas if ignore_na=True, the weighted average would be calculated as
\frac{(1 - \alpha) \cdot 3 + 1 \cdot 5}{(1 - \alpha) + 1}.
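To make this concrete (a sketch; \alpha = 0.5 is an arbitrary choice), the last values below work out to (0.25 * 3 + 1 * 5) / 1.25 = 4.6 and (0.5 * 3 + 1 * 5) / 1.5 = 4.333... respectively:

>>> sn = pd.Series([3.0, np.nan, 5.0])
>>> sn.ewm(alpha=0.5, ignore_na=False).mean()  # last value 4.6
>>> sn.ewm(alpha=0.5, ignore_na=True).mean()   # last value ~4.333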
The var(), std(), and cov() functions have a bias argument, specifying whether the result should contain
biased or unbiased statistics. For example, if bias=True, ewmvar(x) is calculated as ewmvar(x) =
ewma(x**2) - ewma(x)**2; whereas if bias=False (the default), the biased variance statistics are scaled
by debiasing factors

\frac{\left(\sum_{i=0}^{t} w_i\right)^2}{\left(\sum_{i=0}^{t} w_i\right)^2 - \sum_{i=0}^{t} w_i^2}.

(For w_i = 1, this reduces to the usual N / (N - 1) factor, with N = t + 1.) See Weighted Sample Variance for further
details.
FIFTEEN
WORKING WITH MISSING DATA
In this section, we will discuss missing (also referred to as NA) values in pandas.
Note: The choice of using NaN internally to denote missing data was largely for simplicity and performance reasons.
It differs from the MaskedArray approach of, for example, scikits.timeseries. We are hopeful that NumPy
will soon be able to provide a native NA type solution (similar to R) performant enough to be used in pandas.
Some might quibble over our usage of "missing". By "missing" we simply mean null or "not present for whatever
reason". Many data sets simply arrive with missing data, either because it exists and was not collected or it never
existed. For example, in a collection of financial time series, some of the time series might start on different dates.
Thus, values prior to the start date would generally be marked as missing.
In pandas, one of the most common ways that missing data is introduced into a data set is by reindexing. For example
In [4]: df
Out[4]:
one two three four five
a -0.166778 0.501113 -0.355322 bar False
c -0.337890 0.580967 0.983801 bar False
e 0.057802 0.761948 -0.712964 bar True
f -0.443160 -0.974602 1.047704 bar False
h -0.717852 -1.053898 -0.019369 bar False
In [5]: df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
In [6]: df2
Out[6]:
As data comes in many shapes and forms, pandas aims to be flexible with regard to handling missing data. While
NaN is the default missing value marker for reasons of computational speed and convenience, we need to be able to
easily detect this value with data of different types: floating point, integer, boolean, and general object. In many cases,
however, the Python None will arise and we wish to also consider that missing or null.
Note: Prior to version v0.10.0 inf and -inf were also considered to be null in computations. This is no longer
the case by default; use the mode.use_inf_as_null option to recover it.
To make detecting missing values easier (and across different array dtypes), pandas provides the isnull() and
notnull() functions, which are also methods on Series and DataFrame objects:
In [7]: df2['one']
Out[7]:
a -0.166778
b NaN
c -0.337890
d NaN
e 0.057802
f -0.443160
g NaN
h -0.717852
Name: one, dtype: float64
In [8]: pd.isnull(df2['one'])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2['four'].notnull()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool
In [10]: df2.isnull()
Out[10]:
Warning: One has to be mindful that in Python (and NumPy), nans do not compare equal, but Nones do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None
Out[11]: True
In [12]: np.nan == np.nan
Out[12]: False
So as compared to above, a scalar equality comparison versus a None/np.nan doesn't provide useful information.
In [13]: df2['one'] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
15.2 Datetimes
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native sentinel value that can be represented
by numpy in a singular dtype (datetime64[ns]). pandas objects provide intercompatibility between NaT and NaN.
In [16]: df2
Out[16]:
one two three four five timestamp
a -0.166778 0.501113 -0.355322 bar False 2012-01-01
c -0.337890 0.580967 0.983801 bar False 2012-01-01
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h -0.717852 -1.053898 -0.019369 bar False 2012-01-01
In [17]: df2.loc[['a', 'c', 'h'], ['one', 'timestamp']] = np.nan
In [18]: df2
Out[18]:
one two three four five timestamp
a NaN 0.501113 -0.355322 bar False NaT
c NaN 0.580967 0.983801 bar False NaT
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h NaN -1.053898 -0.019369 bar False NaT
In [19]: df2.get_dtype_counts()
Out[19]:
bool 1
datetime64[ns] 1
float64 3
object 1
dtype: int64
You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based
on the dtype.
For example, numeric containers will always use NaN regardless of the missing value type chosen:
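For instance (a minimal sketch), assigning None into a float Series stores NaN, while an object Series keeps None as-is:

>>> s = pd.Series([1., 2., 3.])
>>> s.loc[0] = None   # stored as NaN in the float64 Series
>>> s = pd.Series(['a', 'b', 'c'])
>>> s.loc[0] = None   # kept as None in the object Series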
In [22]: s
Out[22]:
0 NaN
1 2.0
2 3.0
dtype: float64
In [26]: s
Out[26]:
0 None
1 NaN
2 c
dtype: object
Missing values propagate naturally through arithmetic operations between pandas objects.
In [27]: a
Out[27]:
one two
a NaN 0.501113
c NaN 0.580967
e 0.057802 0.761948
f -0.443160 -0.974602
h -0.443160 -1.053898
In [28]: b
Out[28]:
In [29]: a + b
Out[29]:
The descriptive statistics and computational methods discussed in the data structure overview (and listed here and
here) are all written to account for missing data. For example:
When summing data, NA (missing) values will be treated as zero
If the data are all NA, the result will be NA
Methods like cumsum and cumprod ignore NA values, but preserve them in the resulting arrays
In [30]: df
Out[30]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e 0.057802 0.761948 -0.712964
f -0.443160 -0.974602 1.047704
In [31]: df['one'].sum()
Out[31]:
-0.3853582652846141
In [32]: df.mean(1)
Out[32]:
a 0.072895
c 0.782384
e 0.035595
f -0.123353
h -0.536633
dtype: float64
In [33]: df.cumsum()
Out[33]:
NA groups in GroupBy are automatically excluded. This behavior is consistent with R, for example:
In [34]: df
Out[34]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e 0.057802 0.761948 -0.712964
f -0.443160 -0.974602 1.047704
h NaN -1.053898 -0.019369
In [35]: df.groupby('one').mean()
Out[35]:
two three
one
-0.443160 -0.974602 1.047704
0.057802 0.761948 -0.712964
pandas objects are equipped with various data manipulation methods for dealing with missing data.
The fillna function can fill in NA values with non-null data in a couple of ways, which we illustrate:
Replace NA with a scalar value
In [36]: df2
Out[36]:
one two three four five timestamp
a NaN 0.501113 -0.355322 bar False NaT
c NaN 0.580967 0.983801 bar False NaT
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h NaN -1.053898 -0.019369 bar False NaT
In [37]: df2.fillna(0)
Out[37]:
In [38]: df2['four'].fillna('missing')
Out[38]:
a bar
c bar
e bar
f bar
h bar
Name: four, dtype: object
In [39]: df
Out[39]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e 0.057802 0.761948 -0.712964
f -0.443160 -0.974602 1.047704
h NaN -1.053898 -0.019369
In [40]: df.fillna(method='pad')
Out[40]:
If we only want consecutive gaps filled up to a certain number of data points, we can use the limit keyword:
In [41]: df
Out[41]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e NaN NaN NaN
f NaN NaN NaN
h NaN -1.053898 -0.019369
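A forward fill limited to one consecutive NaN per gap, for example, is a one-liner (a sketch using the frame above):

>>> df.fillna(method='pad', limit=1)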
In [47]: dff
Out[47]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 NaN -1.002278 0.654052
4 NaN NaN 1.053999
5 0.651981 NaN NaN
6 0.109001 -0.533294 NaN
In [48]: dff.fillna(dff.mean())
Out[48]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 -0.407125 -1.002278 0.654052
4 -0.407125 0.033067 1.053999
5 0.651981 0.033067 0.238800
6 0.109001 -0.533294 0.238800
7 -1.037831 -1.150016 0.238800
8 -0.687693 1.921056 -0.121113
9 -0.258742 -0.706329 0.402547
In [49]: dff.fillna(dff.mean()['B':'C'])
Out[49]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 NaN -1.002278 0.654052
4 NaN 0.033067 1.053999
5 0.651981 0.033067 0.238800
6 0.109001 -0.533294 0.238800
7 -1.037831 -1.150016 0.238800
8 -0.687693 1.921056 -0.121113
9 -0.258742 -0.706329 0.402547
You may wish to simply exclude labels from a data set which refer to missing data. To do this, use the dropna method:
In [51]: df
Out[51]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -1.053898 -0.019369
In [52]: df.dropna(axis=0)
Out[52]:
Empty DataFrame
Columns: [one, two, three]
Index: []
In [53]: df.dropna(axis=1)
Out[53]:
two three
a 0.501113 -0.355322
c 0.580967 0.983801
e 0.000000 0.000000
f 0.000000 0.000000
h -1.053898 -0.019369
In [54]: df['one'].dropna()
Out[54]:
Series([], Name: one, dtype: float64)
Series.dropna is a simpler method as it only has one axis to consider. DataFrame.dropna has considerably more options
than Series.dropna, which can be examined in the API.
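A few of those extra options, as a sketch (parameter values are illustrative):

>>> df.dropna(how='all')        # drop rows in which every value is NA
>>> df.dropna(thresh=2)         # keep rows with at least 2 non-NA values
>>> df.dropna(subset=['one'])   # only consider column 'one' when deciding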
15.5.4 Interpolation
New in version 0.13.0: Series.interpolate() and DataFrame.interpolate() have revamped interpolation methods and functionality.
New in version 0.17.0: The limit_direction keyword argument was added.
Both Series and DataFrame objects have an interpolate method that, by default, performs linear interpolation at
missing datapoints.
In [55]: ts
Out[55]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
2000-06-30 NaN
2000-07-31 NaN
...
2007-10-31 -3.305259
2007-11-30 -5.485119
2007-12-31 -6.854968
2008-01-31 -7.809176
2008-02-29 -6.346480
2008-03-31 -8.089641
2008-04-30 -8.916232
Freq: BM, Length: 100, dtype: float64
In [56]: ts.count()
Out[56]:
61
In [57]: ts.interpolate().count()
Out[57]:
100
In [58]: ts.interpolate().plot()
Out[58]:
<matplotlib.axes._subplots.AxesSubplot at 0x12fa5e4a8>
In [60]: ts2.interpolate()
Out[60]:
2000-01-31 0.469112
2000-02-29 -2.610313
2002-07-31 -5.689738
2005-01-31 -7.302985
2008-04-30 -8.916232
dtype: float64
In [61]: ts2.interpolate(method='time')
Out[61]:
2000-01-31 0.469112
2000-02-29 0.273272
2002-07-31 -5.689738
2005-01-31 -7.095568
2008-04-30 -8.916232
dtype: float64
In [62]: ser
Out[62]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [63]: ser.interpolate()
Out[63]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [64]: ser.interpolate(method='values')
Out[64]:
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
In [66]: df
Out[66]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [67]: df.interpolate()
Out[67]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods. If you have scipy installed, you can pass
the name of a 1-d interpolation routine to method. You'll want to consult the full scipy interpolation documentation
and reference guide for details. The appropriate interpolation method will depend on the type of data you are working
with.
If you are dealing with a time series that is growing at an increasing rate, method='quadratic' may be
appropriate.
If you have values approximating a cumulative distribution function, then method='pchip' should work
well.
To fill missing values with the goal of smooth plotting, use method='akima'.
In [68]: df.interpolate(method='barycentric')
Out[68]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [69]: df.interpolate(method='pchip')
Out[69]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [70]: df.interpolate(method='akima')
Out[70]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify the degree or order of the approximation:
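For example (order=2 here is illustrative; one of these calls produced the table that follows):

>>> df.interpolate(method='spline', order=2)
>>> df.interpolate(method='polynomial', order=2)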
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [73]: np.random.seed(2)
In [75]: bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [79]: df.plot()
Out[79]: <matplotlib.axes._subplots.AxesSubplot at 0x12fa6e0f0>
Another use case is interpolation at new values. Suppose you have 100 observations from some distribution, and let's
suppose that you're particularly interested in what's happening around the middle. You can mix pandas' reindex
and interpolate methods to interpolate at the new values.
In [80]: ser = pd.Series(np.sort(np.random.uniform(size=100)))
# interpolate at new_index
In [81]: new_index = ser.index | pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75])
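The interpolation step itself (elided above) might look like the following sketch; method='values' is one reasonable choice here, since it interpolates according to the numeric index:

>>> interp_s = ser.reindex(new_index).interpolate(method='values')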
In [83]: interp_s[49:51]
Out[83]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Like other pandas fill methods, interpolate accepts a limit keyword argument. Use this argument to limit
the number of consecutive interpolations, keeping NaN values for interpolations that are too far from the last valid
observation:
In [84]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13])
In [85]: ser.interpolate(limit=2)
Out[85]:
0 NaN
1 NaN
2 5.0
3 7.0
4 9.0
5 NaN
6 13.0
dtype: float64
By default, limit applies in a forward direction, so that only NaN values after a non-NaN value can be filled. If you
provide 'backward' or 'both' for the limit_direction keyword argument, you can fill NaN values before
non-NaN values, or both before and after non-NaN values, respectively:
In [86]: ser.interpolate(limit=1, limit_direction='backward')
Out[86]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
dtype: float64
In [87]: ser.interpolate(limit=1, limit_direction='both')
Out[87]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
dtype: float64
Oftentimes we want to replace arbitrary values with other values. New in v0.8 is the replace method in
Series/DataFrame, which provides an efficient yet flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another value:
In [90]: ser.replace(0, 5)
Out[90]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
Instead of replacing with specified values, you can treat all given values as missing and interpolate over them:
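A sketch of that idea, treating the listed values as missing and forward-filling over them (using the ser from above):

>>> ser.replace([1, 2, 3], method='pad')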
Note: Python strings prefixed with the r character such as r'hello world' are so-called raw strings. They
have different semantics regarding backslashes than strings without this prefix. Backslashes in raw strings will be
interpreted as an escaped backslash, e.g., r'\' == '\\'. You should read about them if this is unclear.
In [96]: d = {'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}
In [97]: df = pd.DataFrame(d)
Now do it with a regular expression that removes surrounding whitespace (regex -> regex)
In [99]: df.replace(r'\s*\.\s*', np.nan, regex=True)
Out[99]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Same as the previous example, but use a regular expression for searching instead (dict of regex -> dict)
In [103]: df.replace({'b': r'\s*\.\s*'}, {'b': np.nan}, regex=True)
Out[103]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can pass nested dictionaries of regular expressions that use regex=True
In [104]: df.replace({'b': {'b': r''}}, regex=True)
Out[104]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d
You can also use the group of a regular expression match when replacing (dict of regex -> dict of regex); this works
for lists as well.
In [106]: df.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True)
Out[106]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match will be replaced with a scalar (list of regex ->
regex)
In [107]: df.replace([r'\s*\.\s*', r'a|b'], np.nan, regex=True)
Out[107]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed with the to_replace argument as the regex argument.
In this case the value argument must be passed explicitly by name, or regex must be a nested dictionary.
This can be convenient if you do not want to pass regex=True every time you want to use a regular expression.
Note: Anywhere in the above replace examples where you see a regular expression, a compiled regular expression is
valid as well.
Numeric replacement with replace behaves similarly to DataFrame.fillna:
In [114]: df[1].dtype
Out[114]:
dtype('float64')
Warning: When replacing multiple bool or datetime64 objects, the first argument to replace
(to_replace) must match the type of the value being replaced. For example,
s = pd.Series([True, False, True])
s.replace({'a string': 'new value', True: False}) # raises
will raise a TypeError because one of the dict keys is not of the correct type for replacement.
However, when replacing a single object such as,
In [116]: s = pd.Series([True, False, True])
In [117]: s.replace('a string', 'another string')
the original NDFrame object will be returned untouched. We're working on unifying this API, but for backwards
compatibility reasons we cannot break the latter behavior. See GH6354 for more details.
While pandas supports storing arrays of integer and boolean type, these types are not capable of storing missing data.
Until we can switch to using a native NA type in NumPy, we've established some casting rules for when reindexing will
cause missing data to be introduced into, say, a Series or DataFrame. Here they are:
data type Cast to
integer float
boolean object
float no cast
object no cast
For example:
In [119]: s > 0
Out[119]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [122]: crit
Out[122]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object
In [123]: crit.dtype
Out[123]:
dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it contains boolean values) instead of a
boolean array to get or set values from an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [124]: reindexed = s.reindex(list(range(8))).fillna(0)
In [125]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-125-2da204ed1ac7> in <module>()
----> 1 reindexed[crit]
/Users/taugspurger/sandbox/pandas/pandas/core/common.py in is_bool_indexer(key)
187 if not lib.is_bool_array(key):
188 if isnull(key).any():
--> 189 raise ValueError('cannot index with vector containing '
190 'NA / NaN values')
191 return False
However, these can be filled in using fillna and it will work fine:
In [126]: reindexed[crit.fillna(False)]
Out[126]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [127]: reindexed[crit.fillna(True)]
Out[127]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64
SIXTEEN
GROUP BY: SPLIT-APPLY-COMBINE
By "group by" we are referring to a process involving one or more of the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
Of these, the split step is the most straightforward. In fact, in many situations you may wish to split the data set into
groups and do something with those groups yourself. In the apply step, we might wish to do one of the following:
Aggregation: computing a summary statistic (or statistics) about each group. Some examples:
Compute group sums or means
Compute group sizes / counts
Transformation: perform some group-specific computations and return a like-indexed object. Some examples:
Standardizing data (zscore) within group
Filling NAs within groups with a value derived from each group
Filtration: discard some groups, according to a group-wise computation that evaluates True or False. Some
examples:
Discarding data that belongs to groups with only a few members
Filtering out data based on the group sum or mean
Some combination of the above: GroupBy will examine the results of the apply step and try to return a sensibly
combined result if it doesn't fit into either of the above two categories
Since the set of object instance methods on pandas data structures are generally rich and expressive, we often simply
want to invoke, say, a DataFrame function on each group. The name GroupBy should be quite familiar to those who
have used a SQL-based tool (or itertools), in which you can write code like:

SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using pandas. We'll address each area of GroupBy
functionality, then provide some non-trivial examples / use cases.
See the cookbook for some advanced strategies.
pandas objects can be split on any of their axes. The abstract definition of grouping is to provide a mapping of labels
to group names. To create a GroupBy object (more on what the GroupBy object is later), you do the following:
# default is axis=0
>>> grouped = obj.groupby(key)
>>> grouped = obj.groupby(key, axis=1)
>>> grouped = obj.groupby([key1, key2])
In [2]: df
Out[2]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
These will split the DataFrame on its index (rows). We could also split by the columns:
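One way to split by the columns is to map each column name to a group label with a function (a sketch; the helper is hypothetical):

>>> def get_letter_type(letter):
...     return 'vowel' if letter.lower() in 'aeiou' else 'consonant'
>>> grouped = df.groupby(get_letter_type, axis=1)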
Starting with 0.8, pandas Index objects now support duplicate values. If a non-unique index is used as the group key
in a groupby operation, all values for the same index value will be considered to be in one group and thus the output
of aggregation functions will only contain unique index values:
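The outputs below correspond to a Series along these lines (reconstructed from the results shown; a sketch):

>>> lst = [1, 2, 3, 1, 2, 3]
>>> s = pd.Series([1, 2, 3, 10, 20, 30], lst)
>>> grouped = s.groupby(level=0)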
In [10]: grouped.first()
Out[10]:
1 1
2 2
3 3
dtype: int64
In [11]: grouped.last()
Out[11]:
1 10
2 20
3 30
dtype: int64
In [12]: grouped.sum()
Out[12]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it's needed. Creating the GroupBy object only verifies that you've passed a valid
mapping.
Note: Many kinds of complicated data manipulations can be expressed in terms of GroupBy operations (though they
can't be guaranteed to be the most efficient). You can get quite creative with the label mapping functions.
By default the group keys are sorted during the groupby operation. You may however pass sort=False for
potential speedups:
In [13]: df2 = pd.DataFrame({'X' : ['B', 'B', 'A', 'A'], 'Y' : [1, 2, 3, 4]})
In [14]: df2.groupby(['X']).sum()
Out[14]:
Y
X
A 7
B 3
In [15]: df2.groupby(['X'], sort=False).sum()
Out[15]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group. For example, the
groups created by groupby() below are in the order they appeared in the original DataFrame:
In [16]: df3 = pd.DataFrame({'X' : ['A', 'B', 'A', 'B'], 'Y' : [1, 4, 3, 2]})
In [17]: df3.groupby(['X']).get_group('A')
Out[17]:
X Y
0 A 1
2 A 3
In [18]: df3.groupby(['X']).get_group('B')
Out[18]:
X Y
1 B 4
3 B 2
The groups attribute is a dict whose keys are the computed unique groups and whose values are the axis
labels belonging to each group. In the above example we have:
In [19]: df.groupby('A').groups
Out[19]:
{'bar': Int64Index([1, 3, 5], dtype='int64'),
'foo': Int64Index([0, 2, 4, 6, 7], dtype='int64')}
Calling the standard Python len function on the GroupBy object just returns the length of the groups dict, so it is
largely just a convenience:
In [22]: grouped.groups
Out[22]:
In [23]: len(grouped)
Out[23]:
6
In [24]: df
Out[24]:
gender height weight
2000-01-01 male 42.849980 157.500553
2000-01-02 male 49.607315 177.340407
2000-01-03 male 56.293531 171.524640
2000-01-04 female 48.421077 144.251986
2000-01-05 male 46.556882 152.526206
2000-01-06 female 68.448851 168.272968
2000-01-07 male 70.757698 136.431469
2000-01-08 female 58.909500 176.499753
2000-01-09 female 76.435631 174.094104
2000-01-10 male 45.306120 177.540920
In [25]: gb = df.groupby('gender')
In [26]: gb.<TAB>
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group
gb.height gb.last gb.median gb.ngroups gb.plot gb.rank
gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups
gb.hist gb.max gb.min gb.nth gb.prod gb.resample
gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head
gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size
gb.tail gb.weight
With hierarchically-indexed data, it's quite natural to group by one of the levels of the hierarchy.
Let's create a Series with a two-level MultiIndex.
In [27]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [28]: index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])
In [29]: s = pd.Series(np.random.randn(8), index=index)
In [30]: s
Out[30]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
In [31]: grouped = s.groupby(level=0)
In [32]: grouped.sum()
Out[32]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level number:
In [33]: s.groupby(level='second').sum()
Out[33]:
second
one 0.980950
two 1.991575
dtype: float64
The aggregation functions such as sum will take the level parameter directly. Additionally, the resulting index will be
named according to the chosen level:
In [34]: s.sum(level='second')
Out[34]:
second
one 0.980950
two 1.991575
dtype: float64
In [35]: s
Out[35]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [36]: s.groupby(level=['first', 'second']).sum()
Out[36]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
A DataFrame may be grouped by a combination of columns and index levels by specifying the column names as
strings and the index levels as pd.Grouper objects.
In [38]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [41]: df
Out[41]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and the A column.
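A sketch of such a call:

>>> df.groupby([pd.Grouper(level=1), 'A']).sum()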
Once you have created the GroupBy object from a DataFrame, for example, you might want to do something different
for each of the columns. Thus, using [] similar to getting a column from a DataFrame, you can do:
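A sketch of the shorthand:

>>> grouped = df.groupby(['A'])
>>> grouped_C = grouped['C']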
This is mainly syntactic sugar for the alternative and much more verbose:
In [48]: df['C'].groupby(df['A'])
Out[48]: <pandas.core.groupby.SeriesGroupBy object at 0x1201161d0>
Additionally this method avoids recomputing the internal grouping information derived from the passed key.
With the GroupBy object in hand, iterating through the grouped data is very natural and functions similarly to
itertools.groupby:
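A minimal sketch of the iteration:

>>> for name, group in df.groupby('A'):
...     print(name)
...     print(group)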
In the case of grouping by multiple keys, the group name will be a tuple:
It's standard Python-fu, but remember that you can unpack the tuple in the for loop statement if you wish: for (k1,
k2), group in grouped:.
In [52]: grouped.get_group('bar')
Out[52]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
16.4 Aggregation
Once the GroupBy object has been created, several methods are available to perform a computation on the grouped
data. These operations are similar to the aggregating API, window functions API, and resample API.
An obvious one is aggregation via the aggregate or equivalently agg method:
In [55]: grouped.aggregate(np.sum)
Out[55]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [57]: grouped.aggregate(np.sum)
Out[57]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the new index along the grouped axis. In
the case of multiple keys, the result is a MultiIndex by default, though this can be changed by using the as_index
option:
In [59]: grouped.aggregate(np.sum)
Out[59]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the same result as the column names are
stored in the resulting MultiIndex:
Another simple aggregation example is to compute the size of each group. This is included in GroupBy as the size
method. It returns a Series whose index are the group names and whose values are the sizes of each group.
In [62]: grouped.size()
Out[62]:
A B
bar one 1
three 1
two 1
foo one 2
three 1
two 2
dtype: int64
In [63]: grouped.describe()
Out[63]:
C \
count mean std min 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 0.254161 0.254161 0.254161 0.254161
1 1.0 0.215897 NaN 0.215897 0.215897 0.215897 0.215897 0.215897
2 1.0 -0.077118 NaN -0.077118 -0.077118 -0.077118 -0.077118 -0.077118
3 2.0 -0.491888 0.117887 -0.575247 -0.533567 -0.491888 -0.450209 -0.408530
4 1.0 -0.862495 NaN -0.862495 -0.862495 -0.862495 -0.862495 -0.862495
5 2.0 0.024925 1.652692 -1.143704 -0.559389 0.024925 0.609240 1.193555
D
count mean std min 25% 50% 75% max
0 1.0 1.511763 NaN 1.511763 1.511763 1.511763 1.511763 1.511763
1 1.0 -0.990582 NaN -0.990582 -0.990582 -0.990582 -0.990582 -0.990582
Note: Aggregation functions will not return the groups that you are aggregating over if they are named columns,
when as_index=True, the default. The grouped columns will be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are named columns.
Aggregating functions are ones that reduce the dimension of the returned objects, for example: mean, sum, size,
count, std, var, sem, describe, first, last, nth, min, max. This is what happens when
you do for example DataFrame.sum() and get back a Series.
nth can act as a reducer or a filter, see here
With grouped Series you can also pass a list or dict of functions to do aggregation with, outputting a DataFrame.
On a grouped DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index. The resulting aggregations are named for the functions themselves; if you need to
rename, you can add a chained rename operation. By passing a dict to aggregate you can apply a different
aggregation to the columns of a DataFrame. The function names can also be strings; in order for a string to be valid
it must be either implemented on GroupBy or available via dispatching. Sketches of each of these forms follow.
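Minimal sketches of these aggregation forms (using the df from this section; the rename mapping is illustrative):

>>> grouped = df.groupby('A')
>>> grouped['C'].agg([np.sum, np.mean, np.std])    # list of functions -> DataFrame
>>> grouped['C'].agg([np.sum, np.mean]).rename(
...     columns={'sum': 'total', 'mean': 'average'})
>>> grouped.agg({'C': np.sum, 'D': lambda x: np.std(x, ddof=1)})  # dict: column -> function
>>> grouped.agg({'C': 'sum', 'D': 'std'})          # string names, via dispatch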
Note: If you pass a dict to aggregate, the ordering of the output columns is non-deterministic. If you want to
be sure the output columns will be in a specific order, you can use an OrderedDict; with an OrderedDict keyed
D then C, for instance, the result columns come out as D, C rather than in an arbitrary order.
Some common aggregations, currently only sum, mean, std, and sem, have optimized Cython implementations:
In [73]: df.groupby('A').sum()
Out[73]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [74]: df.groupby(['A', 'B']).mean()
Out[74]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above code would work even without the special
versions via dispatching (see below).
16.5 Transformation
The transform method returns an object that is indexed the same (same size) as the one being grouped. The
transform function must:
Return a result that is either the same size as the group chunk or broadcastable to the size of the group chunk
(e.g., a scalar, grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to the first group chunk using
chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should be treated as immutable, and changes
to a group chunk may produce unexpected results. For example, when using fillna, inplace must be
False (grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a fast path is used starting from the second
chunk.
For example, suppose we wished to standardize the data within each group:
In [75]: index = pd.date_range('10/1/1999', periods=1100)
In [76]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [77]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [78]: ts.head()
Out[78]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [79]: ts.tail()
Out[79]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
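The grouping key and the standardization call (elided above) would look something like this sketch:

>>> key = lambda x: x.year
>>> zscore = lambda x: (x - x.mean()) / x.std()
>>> transformed = ts.groupby(key).transform(zscore)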
We would expect the result to now have mean 0 and standard deviation 1 within each group, which we can easily
check:
# Original Data
In [83]: grouped = ts.groupby(key)
In [84]: grouped.mean()
Out[84]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [85]: grouped.std()
Out[85]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [86]: grouped_trans = transformed.groupby(key)
In [87]: grouped_trans.mean()
Out[87]:
2000 1.168208e-15
2001 1.454544e-15
2002 1.726657e-15
dtype: float64
In [88]: grouped_trans.std()
Out[88]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [89]: compare = pd.DataFrame({'Original': ts, 'Transformed': transformed})
In [90]: compare.plot()
Out[90]: <matplotlib.axes._subplots.AxesSubplot at 0x129175208>
Transformation functions that have lower dimension outputs are broadcast to match the shape of the input array.
In [91]: data_range = lambda x: x.max() - x.min()
In [92]: ts.groupby(key).transform(data_range)
Out[92]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
2000-01-13 0.623893
2000-01-14 0.623893
...
2002-09-28 0.558275
2002-09-29 0.558275
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
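A sketch using the string-dispatched built-ins:

>>> max_ts = ts.groupby(key).transform('max')
>>> min_ts = ts.groupby(key).transform('min')
>>> max_ts - min_ts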
Another common data transform is to replace missing data with the group mean.
In [94]: data_df
Out[94]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
5 0.815643 0.367816 -0.469478
6 -0.030651 1.376106 -0.645129
.. ... ... ...
993 0.012359 0.554602 -1.976159
994 0.042312 -1.628835 1.013822
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
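The grouping key behind the next outputs (elided above) might be set up along these lines (a sketch); the non-NA counts per group are shown next:

>>> countries = np.array(['US', 'UK', 'GR', 'JP'])
>>> key = countries[np.random.randint(0, 4, 1000)]
>>> grouped = data_df.groupby(key)
>>> grouped.count()   # non-NA count in each group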
Out[98]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
We can verify that the group means have not changed in the transformed data and that the transformed data contains
no NAs.
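The fill itself (elided above) is a one-line transform; a sketch:

>>> transformed = grouped.transform(lambda x: x.fillna(x.mean()))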
In [101]: grouped_trans = transformed.groupby(key)
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
GR 228
JP 267
UK 247
US 258
dtype: int64
Note: Some functions when applied to a groupby object will automatically transform the input, returning an object
of the same shape as the original. Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [107]: grouped.ffill()
Out[107]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
5 0.815643 0.367816 -0.469478
6 -0.030651 1.376106 -0.645129
.. ... ... ...
993 0.012359 0.554602 -1.976159
994 0.042312 -1.628835 1.013822
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
In [109]: df_re
Out[109]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 1 5
6 1 6
.. .. ..
13 5 13
14 5 14
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
In [110]: df_re.groupby('A').rolling(4).B.mean()
Out[110]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
5 3.5
6 4.5
...
5 13 11.5
14 12.5
15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation (sum() in the example) for all the members of each
particular group.
In [111]: df_re.groupby('A').expanding().sum()
Out[111]:
A B
A
1 0 1.0 0.0
1 2.0 1.0
2 3.0 3.0
3 4.0 6.0
4 5.0 10.0
5 6.0 15.0
6 7.0 21.0
... ... ...
5 13 20.0 46.0
14 25.0 60.0
15 30.0 75.0
16 35.0 91.0
17 40.0 108.0
18 45.0 126.0
19 50.0 145.0
Suppose you want to use the resample() method to get a daily frequency in each group of your DataFrame, and wish
to complete the missing values with the ffill() method.
In [113]: df_re
Out[113]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [114]: df_re.groupby('group').resample('1D').ffill()
Out[114]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
2016-01-08 1 5
2016-01-09 1 5
... ... ...
2 2016-01-18 2 7
2016-01-19 2 7
2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
16.6 Filtration
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple of members.
Alternatively, instead of dropping the offending groups, we can return a like-indexed object where the groups that do
not pass the filter are filled with NaNs.
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion. Sketches of
each of these follow.
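Minimal sketches of these filtration forms (the dff frame is reconstructed to match the head/tail example further below):

>>> sf = pd.Series([1, 1, 2, 3, 3, 3])
>>> sf.groupby(sf).filter(lambda x: x.sum() > 2)          # drop groups failing the test
>>> dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
>>> dff.groupby('B').filter(lambda x: len(x) > 2)         # keep groups with > 2 members
>>> dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)  # keep shape, NaN-fill
>>> dff.groupby('B').filter(lambda x: x['C'].mean() > 2)  # criterion on a specific column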
Note: Some functions when applied to a groupby object will act as a filter on the input, returning a reduced shape of
the original (and potentially eliminating groups), but with the index unchanged. Passing as_index=False will not
affect these transformation methods.
For example: head, tail.
In [122]: dff.groupby('B').head(2)
Out[122]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
When doing an aggregation or transformation, you might just want to call an instance method on each data group.
This is pretty easy to do by passing lambda functions:
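For instance (a sketch):

>>> grouped = df.groupby('A')
>>> grouped.agg(lambda x: x.std())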
But it's rather verbose and can be untidy if you need to pass additional arguments. Using a bit of metaprogramming
cleverness, GroupBy now has the ability to dispatch method calls to the groups:
In [125]: grouped.std()
Out[125]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being generated. When invoked, it takes any passed
arguments and invokes the function with any arguments on each group (in the above example, the std function). The
results are then combined together much in the style of agg and transform (it actually uses apply to infer the
gluing, documented next). This enables some operations to be carried out rather succinctly:
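The setup behind the next example (elided above) would look something like this sketch:

>>> tsdf = pd.DataFrame(np.random.randn(1000, 3),
...                     index=pd.date_range('1/1/2000', periods=1000),
...                     columns=['A', 'B', 'C'])
>>> tsdf.iloc[::2] = np.nan   # knock out every other row
>>> grouped = tsdf.groupby(lambda x: x.year)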
In [129]: grouped.fillna(method='pad')
Out[129]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
2000-01-06 0.030091 0.186460 -0.680149
2000-01-07 0.030091 0.186460 -0.680149
... ... ... ...
2002-09-20 2.310215 0.157482 -0.064476
2002-09-21 2.310215 0.157482 -0.064476
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
In this example, we chopped the collection of time series into yearly chunks then independently called fillna on the
groups.
New in version 0.14.1.
The nlargest and nsmallest methods work on Series style groupbys:
In [130]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [131]: g = pd.Series(list('abababab'))
In [132]: gb = s.groupby(g)
In [133]: gb.nlargest(3)
Out[133]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [134]: gb.nsmallest(3)
Out[134]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Some operations on the grouped data might not fit into either the aggregate or transform categories. Or, you may simply
want GroupBy to infer how to combine the results. For these, use the apply function, which can be substituted for
both aggregate and transform in many standard use cases. However, apply can handle some exceptional use
cases, for example:
In [135]: df
Out[135]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
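The applied function (elided above) might be something like this sketch, which returns both the original and the group-demeaned values:

>>> grouped = df.groupby('A')['C']
>>> def f(group):
...     return pd.DataFrame({'original': group,
...                          'demeaned': group - group.mean()})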
In [140]: grouped.apply(f)
Out[140]:
demeaned original
0 -0.215962 -0.575247
1 0.123181 0.254161
2 -0.784420 -1.143704
3 0.084917 0.215897
4 1.552839 1.193555
5 -0.208098 -0.077118
6 -0.049245 -0.408530
7 -0.503211 -0.862495
apply on a Series can operate on a returned value from the applied function that is itself a Series, and possibly upcast
the result to a DataFrame:
In [141]: def f(x):
.....: return pd.Series([ x, x**2 ], index = ['x', 'x^2'])
.....:
In [142]: s
Out[142]:
0 9.0
1 8.0
2 7.0
3 5.0
4 19.0
5 1.0
6 4.2
7 3.3
dtype: float64
In [143]: s.apply(f)
Out[143]:
x x^2
0 9.0 81.00
1 8.0 64.00
2 7.0 49.00
3 5.0 25.00
4 19.0 361.00
5 1.0 1.00
6 4.2 17.64
7 3.3 10.89
Note: apply can act as a reducer, transformer, or filter function, depending on exactly what is passed to it. Thus,
depending on the path taken and exactly what you are grouping, the grouped column(s) may be included in the
output as well as set the indices.
Warning: In the current implementation apply calls func twice on the first group to decide whether it can take a
fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice
for the first group.
In [144]: d = pd.DataFrame({"a":["x", "y"], "b":[1,2]})
In [146]: df
Out[146]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
Suppose we wish to compute the standard deviation grouped by the A column. There is a slight problem, namely that
we don't care about the data in column B. We refer to this as a nuisance column. If the passed aggregation function
can't be applied to some columns, the troublesome columns will be (silently) dropped. Thus, this does not pose any
problems:
In [147]: df.groupby('A').std()
Out[147]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
If there are any NaN or NaT values in the grouping key, these will be automatically excluded. So there will never be
an NA group or NaT group. This was not the case in older versions of pandas, but users were generally discarding
the NA group anyway (and supporting it was an implementation headache).
Categorical variables represented as instances of pandas' Categorical class can be used as group keys. If so, the
order of the levels will be preserved:
In [148]: data = pd.Series(np.random.randn(100))
In [149]: factor = pd.qcut(data, [0, .25, .5, .75, 1.])
In [150]: data.groupby(factor).mean()
Out[150]:
(-2.618, -0.684] -1.331461
(-0.684, -0.0232] -0.272816
(-0.0232, 0.541] 0.263607
(0.541, 2.369] 1.166038
dtype: float64
You may need to specify a bit more data to properly group. You can use the pd.Grouper to provide this local
control.
In [151]: import datetime
In [152]: df = pd.DataFrame({
.....: 'Branch' : 'A A A A A A A B'.split(),
.....: 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(),
.....: 'Quantity': [1,3,5,1,8,1,9,3],
.....: 'Date' : [
.....: datetime.datetime(2013,1,1,13,0),
.....: datetime.datetime(2013,1,1,13,5),
.....: datetime.datetime(2013,10,1,20,0),
.....: datetime.datetime(2013,10,2,10,0),
.....: datetime.datetime(2013,10,1,20,0),
.....: datetime.datetime(2013,10,2,10,0),
.....: datetime.datetime(2013,12,2,12,0),
.....: datetime.datetime(2013,12,2,14,0),
.....: ]
.....: })
.....:
In [153]: df
Out[153]:
Branch Buyer Date Quantity
0 A Carl 2013-01-01 13:00:00 1
1 A Mark 2013-01-01 13:05:00 3
2 A Carl 2013-10-01 20:00:00 5
3 A Carl 2013-10-02 10:00:00 1
4 A Joe 2013-10-01 20:00:00 8
5 A Joe 2013-10-02 10:00:00 1
6 A Joe 2013-12-02 12:00:00 9
7 B Carl 2013-12-02 14:00:00 3
Groupby a specific column with the desired frequency. This is like resampling.
In [154]: df.groupby([pd.Grouper(freq='1M',key='Date'),'Buyer']).sum()
Out[154]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column that could be potential groupers.
In [155]: df = df.set_index('Date')
In [156]: df['Date'] = df.index + pd.offsets.MonthEnd(2)
In [157]: df.groupby([pd.Grouper(freq='6M', key='Date'), 'Buyer']).sum()
Out[157]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [158]: df.groupby([pd.Grouper(freq='6M',level='Date'),'Buyer']).sum()
Out[158]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [160]: df
Out[160]:
A B
0 1 2
1 1 4
2 5 6
In [161]: g = df.groupby('A')
In [162]: g.head(1)
Out[162]:
A B
0 1 2
2 5 6
In [163]: g.tail(1)
Out[163]:
A B
1 1 4
2 5 6
Warning: Before 0.14.0 this was implemented with a fall-through apply, so the result would incorrectly respect
the as_index flag:
>>> g.head(1)  # was equivalent to g.apply(lambda x: x.head(1))
A B
A
1 0 1 2
5 2 5 6
To select from a DataFrame or Series the nth item, use the nth method. This is a reduction method, and will return a
single row (or no row) per group if you pass an int for n:
In [164]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
In [165]: g = df.groupby('A')
In [166]: g.nth(0)
Out[166]:
B
A
1 NaN
5 6.0
In [167]: g.nth(-1)
Out[167]:
B
A
1 4.0
5 6.0
In [168]: g.nth(1)
Out[168]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or
'all', just like you would pass to dropna; for a Series this just needs to be truthy.
In [169]: g.nth(0, dropna='any')  # equivalent to g.first()
Out[169]:
B
A
1 4.0
5 6.0
In [170]: g.first()
Out[170]:
B
A
1 4.0
5 6.0
In [171]: g.nth(-1, dropna='any')  # equivalent to g.last()
Out[171]:
B
A
1 4.0
5 6.0
In [172]: g.last()
Out[172]:
B
A
1 4.0
5 6.0
In [173]: g.B.nth(0, dropna=True)
Out[173]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [175]: g = df.groupby('A',as_index=False)
In [176]: g.nth(0)
Out[176]:
A B
0 1 NaN
2 5 6.0
In [177]: g.nth(-1)
Out[177]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [178]: business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', freq='B')
In [179]: df = pd.DataFrame(1, index=business_dates, columns=['a', 'b'])
# get the first, 4th, and last date index for each month
In [180]: df.groupby((df.index.year, df.index.month)).nth([0, 3, -1])
Out[180]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
In [181]: df = pd.DataFrame(list('aaabba'), columns=['A'])
In [182]: df
Out[182]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [183]: df.groupby('A').cumcount()
Out[183]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [184]: df.groupby('A').cumcount(ascending=False)
Out[184]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
16.9.8 Plotting
Groupby also works with some plotting methods. For example, suppose we suspect that some features in a DataFrame
may differ by group; in this case, the values in column 1 where the group is "B" are 3 higher on average.
In [185]: np.random.seed(1234)
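The construction (elided above) might look like this sketch: two columns of noise, a random group label, and a +3 shift for group "B" in column 1:

>>> df = pd.DataFrame(np.random.randn(50, 2))
>>> df['g'] = np.random.choice(['A', 'B'], size=50)
>>> df.loc[df['g'] == 'B', 1] += 3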
In [189]: df.groupby('g').boxplot()
Out[189]:
A Axes(0.1,0.15;0.363636x0.75)
B Axes(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values of our grouping column g (A and B).
The values of the resulting dictionary can be controlled by the return_type keyword of boxplot. See the
visualization documentation for more.
16.10 Examples
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [191]: df
Out[191]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [192]: df.groupby(df.sum(), axis=1).sum()
Out[192]:
1 9
0 2 2
1 1 3
2 0 4
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that
generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetime-like, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the
groupby operation.
Note: The below example shows how we can downsample by consolidating samples into fewer ones. Here, by
using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we aggregate the information
contained in many samples into a small subset of values (their standard deviation), thereby reducing the number
of samples.
In [193]: df = pd.DataFrame(np.random.randn(10,2))
In [194]: df
Out[194]:
0 1
0 -0.832423 0.114059
1 1.218203 -0.890593
2 0.165445 -1.127470
3 -1.192185 0.818644
4 0.237185 -0.336384
5 0.694727 0.750161
6 0.247055 0.645433
7 -1.366120 0.313160
8 0.205207 0.089987
9 0.186062 1.314182
In [195]: df.index // 5
Out[195]:
Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [196]: df.groupby(df.index // 5).std()
Out[196]:
0 1
0 0.955154 0.783648
1 0.788428 0.467576
Group DataFrame columns, compute a set of metrics and return a named Series. The Series name is used as the name
for the column index. This is especially useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [197]: df = pd.DataFrame({
.....: 'a': [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: 'b': [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: 'c': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: 'd': [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: })
.....:
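The grouped computation that produces result below (elided above) might look like this sketch:

>>> def compute_metrics(x):
...     result = {'b_sum': x['b'].sum(), 'c_mean': x['c'].mean()}
...     return pd.Series(result, name='metrics')
>>> result = df.groupby('a').apply(compute_metrics)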
In [200]: result
Out[200]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [201]: result.stack()
Out[201]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
SEVENTEEN
MERGE, JOIN, AND CONCATENATE
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various
kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
The concat function (in the main pandas namespace) does all of the heavy lifting of performing concatenation
operations along an axis while performing optional set logic (union or intersection) of the indexes (if any) on the other
axes. Note that I say "if any" because there is only a single possible axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is a simple example:
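A minimal sketch (frame contents illustrative):

>>> df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', 'B1']})
>>> df2 = pd.DataFrame({'A': ['A2', 'A3'], 'B': ['B2', 'B3']})
>>> result = pd.concat([df1, df2])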
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat takes a list or dict of
homogeneously-typed objects and concatenates them with some configurable handling of what to do with the other
axes:
objs : a sequence or mapping of Series, DataFrame, or Panel objects. If a dict is passed, the sorted keys will be used as the keys argument, unless keys is passed explicitly, in which case the values will be selected (see below). Any None objects will be dropped silently unless they are all None, in which case a ValueError will be raised.
axis : {0, 1, ...}, default 0. The axis to concatenate along.
join : {inner, outer}, default outer. How to handle indexes on other axis(es). Outer for union and inner
for intersection.
ignore_index : boolean, default False. If True, do not use the index values on the concatenation axis. The
resulting axis will be labeled 0, ..., n - 1. This is useful if you are concatenating objects where the concatenation
axis does not have meaningful indexing information. Note the index values on the other axes are still respected
in the join.
join_axes : list of Index objects. Specific indexes to use for the other n - 1 axes instead of performing
inner/outer set logic.
keys : sequence, default None. Construct hierarchical index using the passed keys as the outermost level. If
multiple levels passed, should contain tuples.
levels : list of sequences, default None. Specific levels (unique values) to use for constructing a MultiIndex.
Otherwise they will be inferred from the keys.
names : list, default None. Names for the levels in the resulting hierarchical index.
verify_integrity : boolean, default False. Check whether the new concatenated axis contains duplicates.
This can be very expensive relative to the actual data concatenation.
As you can see (if you've read the rest of the documentation), the resulting object's index has a hierarchical index.
This means that we can now do stuff like select out each chunk by key:
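The concatenation that built result is not shown above; it presumably passed keys, along the lines of:

result = pd.concat([df1, df2, df3], keys=['x', 'y', 'z'])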
In [7]: result.loc['y']
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It's not a stretch to see how this can be very useful. More detail on this functionality below.
Note: It is worth noting, however, that concat (and therefore append) makes a full copy of the data, and that constantly reusing this function can create a significant performance hit. If you need to use the operation over several datasets, use a list comprehension, as sketched below.
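For example, rather than appending inside a loop, build the list first and concatenate once (process_your_file and files are illustrative names):

frames = [process_your_file(f) for f in files]
result = pd.concat(frames)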
When gluing together multiple DataFrames (or Panels or...), for example, you have a choice of how to handle the other
axes (other than the one being concatenated). This can be done in three ways:
Take the (sorted) union of them all, join='outer'. This is the default option as it results in zero information
loss.
Take the intersection, join='inner'.
Use a specific index (in the case of DataFrame) or indexes (in the case of Panel or future higher dimensional
objects), i.e. the join_axes argument
Here is an example of each of these methods, sketched together below: the default join='outer' (the row indexes are unioned and sorted), join='inner' (only the shared row labels survive), and join_axes (reuse the exact index from one of the original DataFrames).
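A minimal sketch of the three variants, with two frames that share only the row label 1:

df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', 'B1']}, index=[0, 1])
df4 = pd.DataFrame({'C': ['C1', 'C2'], 'D': ['D1', 'D2']}, index=[1, 2])
pd.concat([df1, df4], axis=1, join='outer')           # row labels 0, 1, 2
pd.concat([df1, df4], axis=1, join='inner')           # row label 1 only
pd.concat([df1, df4], axis=1, join_axes=[df1.index])  # reuse df1's index: 0, 1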
A useful shortcut to concat is the append instance method on Series and DataFrame. These methods actually predated concat. They concatenate along axis=0, namely the index:
In the case of DataFrame, the indexes must be disjoint but the columns do not need to be:
Note: Unlike the list.append method, which appends to the original list and returns nothing, append here does not modify df1 and returns its copy with df2 appended.
For DataFrames which don't have a meaningful index, you may wish to append them and ignore the fact that they may have overlapping indexes. To do this, use the ignore_index argument, as sketched below.
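A minimal sketch, reusing df1 and df2 from the concatenation example above:

df1.append(df2)                     # stacks rows, keeping the original index labels
df1.append(df2, ignore_index=True)  # discards both indexes and relabels 0, ..., n - 1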
You can concatenate a mix of Series and DataFrames. The Series will be transformed to DataFrames with the column
name as the name of the Series.
In [17]: s1 = pd.Series(['X0', 'X1', 'X2', 'X3'], name='X')
A fairly common use of the keys argument is to override the column names when creating a new DataFrame based on existing Series. Notice how the default behaviour lets the resulting DataFrame inherit the parent Series' name, when one exists.
Through the keys argument we can override the existing column names.
You can also pass a dict to concat in which case the dict keys will be used for the keys argument (unless other keys
are specified):
The MultiIndex created has levels that are constructed from the passed keys and the index of the DataFrame pieces:
In [31]: result.index.levels
Out[31]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can do so using the levels argument:
In [33]: result.index.levels
Out[33]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
Yes, this is fairly esoteric, but is actually necessary for implementing things like GroupBy where the order of a
categorical variable is meaningful.
While not especially efficient (since a new object must be created), you can append a single row to a DataFrame by
passing a Series or dict to append, which returns a new DataFrame as above.
You should use ignore_index with this method to instruct DataFrame to discard its index. If you wish to preserve
the index, you should construct an appropriately-indexed DataFrame and append or concatenate those objects.
You can also pass a list of dicts or Series:
pandas has full-featured, high performance in-memory join operations idiomatically very similar to relational
databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better)
than other open source implementations (like base::merge.data.frame in R). The reason for this is careful
algorithmic design and internal layout of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL.
pandas provides a single function, merge, as the entry point for all standard database join operations between
DataFrame objects:
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False)
Experienced users of relational databases like SQL will be familiar with the terminology used to describe join operations between two SQL-table like structures (DataFrame objects). There are several cases to consider which are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on their indexes (which must contain unique
values)
many-to-one joins: for example when joining an index (unique) to one or more columns in a DataFrame
many-to-many joins: joining columns on columns.
Note: When joining columns on columns (potentially a many-to-many join), any indexes on the passed DataFrame
objects will be discarded.
It is worth spending some time understanding the result of the many-to-many join case. In SQL / standard relational
algebra, if a key combination appears more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique key combination:
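A sketch of such a merge (the frame contents are illustrative):

left = pd.DataFrame({'key': ['K0', 'K0'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['K0', 'K0'], 'rval': [3, 4]})
pd.merge(left, right, on='key')  # 2 x 2 = 4 rows: the Cartesian product per key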
The how argument to merge specifies how to determine which keys are to be included in the resulting table. If a
key combination does not appear in either the left or right tables, the values in the joined table will be NA. Here is a
summary of the how options and their SQL equivalent names:
Merge method SQL Join Name Description
left LEFT OUTER JOIN Use keys from left frame only
right RIGHT OUTER JOIN Use keys from right frame only
outer FULL OUTER JOIN Use union of keys from both frames
inner INNER JOIN Use intersection of keys from both frames
In [44]: result = pd.merge(left, right, how='left', on=['key1', 'key2'])
Warning: Joining / merging on duplicate keys can return a frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
The indicator argument will also accept string arguments, in which case the indicator function will use the value
of the passed string as the name for the indicator column.
In [56]: left
Out[56]:
key v1
0 1 10
In [58]: right
Out[58]:
key v1
0 1 20
1 2 30
Of course if you have missing values that are introduced, then the resulting dtype will be upcast.
In [61]: pd.merge(left, right, how='outer', on='key')
Out[61]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: left
Out[66]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [67]: left.dtypes
Out[67]:
X category
Y object
dtype: object
In [69]: right
Out[69]:
X Z
0 foo 1
1 bar 2
In [70]: right.dtypes
Out[70]:
X category
Z int64
dtype: object
In [72]: result
Out[72]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [73]: result.dtypes
Out[73]:
X category
Y object
Z int64
dtype: object
Note: The category dtypes must be exactly the same, meaning the same categories and the ordered attribute. Otherwise the result will coerce to object dtype.
Note: Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
DataFrame.join is a convenient method for combining the columns of two potentially differently-indexed
DataFrames into a single result DataFrame. Here is a very basic example:
The data alignment here is on the indexes (row labels). This same behavior can be achieved using merge plus
additional arguments instructing it to use the indexes:
In [79]: result = pd.merge(left, right, left_index=True, right_index=True, how='outer')
join takes an optional on argument which may be a column or multiple column names, which specifies that the passed DataFrame is to be aligned on that column in the DataFrame. These two function calls are completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True,
how='left', sort=False)
Obviously you can choose whichever form you find more convenient. For many-to-one joins (where one of the
DataFrames is already indexed by the join key), using join may be more convenient. Here is a simple example:
In [81]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
....: 'B': ['B0', 'B1', 'B2', 'B3'],
....: 'key': ['K0', 'K1', 'K0', 'K1']})
....:
Now this can be joined by passing the two key column names:
The default for DataFrame.join is to perform a left join (essentially a VLOOKUP operation, for Excel users), which uses only the keys found in the calling DataFrame. Other join types, for example inner join, can be just as easily performed:
As you can see, this drops any rows where there was no match.
This is equivalent but less verbose and more memory efficient / faster than this.
This is not implemented via join at the moment; however, it can be done using the following.
The merge suffixes argument takes a tuple or list of strings to append to overlapping column names in the input DataFrames to disambiguate the result columns:
A list or tuple of DataFrames can also be passed to DataFrame.join to join them together on their indexes. The
same is true for Panel.join.
Another fairly common situation is to have two like-indexed (or similarly indexed) Series or DataFrame objects and
wanting to patch values in one object from values for matching indices in the other. Here is an example:
Note that this method only takes values from the right DataFrame if they are missing in the left DataFrame. A related
method, update, alters non-NA values inplace:
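The patching itself is typically done with combine_first; a minimal sketch:

result = df1.combine_first(df2)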
In [110]: df1.update(df2)
A merge_ordered() function allows combining time series and other ordered data. In particular it has an optional
fill_method keyword to fill/interpolate missing data:
In [116]: trades
Out[116]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [117]: quotes
We only asof within 2ms between the quote time and the trade time.
We only asof within 10ms between the quote time and the trade time, and we exclude exact matches on time. Note that though we exclude the exact matches (of the quotes), prior quotes DO propagate to that point in time.
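The merge_asof calls these sentences describe presumably resembled:

pd.merge_asof(trades, quotes, on='time', by='ticker',
              tolerance=pd.Timedelta('2ms'))
pd.merge_asof(trades, quotes, on='time', by='ticker',
              tolerance=pd.Timedelta('10ms'), allow_exact_matches=False)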
18 Reshaping and Pivot Tables
Data is often stored in CSV files or databases in so-called stacked or record format:
In [1]: df
Out[1]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
For the curious here is how the above DataFrame was created:
But suppose we wish to do time series operations with the variables. A better representation would be where the
columns are the unique variables and an index of dates identifies individual observations. To reshape the data into
this form, use the pivot function:
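A minimal sketch:

df.pivot(index='date', columns='variable', values='value')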
If the values argument is omitted, and the input DataFrame has more than one column of values which are not used
as column or index inputs to pivot, then the resulting pivoted DataFrame will have hierarchical columns whose
topmost level indicates the respective value column:
In [6]: pivoted
Out[6]:
value value2 \
variable A B C D A B
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569 0.938225 -2.271265
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929 -0.565727 2.424224
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804 -3.018117 -0.346429
variable C D
date
2000-01-03 0.238417 -4.209138
2000-01-04 -2.088472 -0.989859
2000-01-05 -1.723698 2.143608
You of course can then select subsets from the pivoted DataFrame:
In [7]: pivoted['value2']
Out[7]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data are homogeneously-typed.
Closely related to the pivot function are the related stack and unstack functions currently available on Series and
DataFrame. These functions are designed to work together with MultiIndex objects (see the section on hierarchical
indexing). Here are essentially what these functions do:
stack: pivot a level of the (possibly hierarchical) column labels, returning a DataFrame with an index with
a new inner-most level of row labels.
unstack: inverse operation from stack: pivot a level of the (possibly hierarchical) row index to the column
axis, producing a reshaped DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let's take a prior example data set from the hierarchical indexing section:
In [8]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
...: 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two',
...: 'one', 'two', 'one', 'two']]))
...:
In [12]: df2
Out[12]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack function compresses a level in the DataFrame's columns to produce either:
A Series, in the case of a simple column Index
A DataFrame, in the case of a MultiIndex in the columns
If the columns have a MultiIndex, you can choose which level to stack. The stacked level becomes the new lowest
level in a MultiIndex on the columns:
In [13]: stacked = df2.stack()
In [14]: stacked
Out[14]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a stacked DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack is
unstack, which by default unstacks the last level:
In [15]: stacked.unstack()
Out[15]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [16]: stacked.unstack(1)
In [17]: stacked.unstack(0)
If the indexes have names, you can use the level names instead of specifying the level numbers:
In [18]: stacked.unstack('second')
Out[18]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack and unstack methods implicitly sort the index levels involved. Hence a call to stack and then unstack, or vice versa, will result in a sorted copy of the original DataFrame or Series:
In [21]: df
Out[21]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
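The omitted comparison, showing that a stack/unstack round trip equals the sorted original, presumably resembled:

all(df.unstack().stack() == df.sort_index())  # True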
while the above code will raise a TypeError if the call to sort_index is removed.
You may also stack or unstack more than one level at a time by passing a list of levels, in which case the end result is
as if each level in the list were processed individually.
In [25]: df
Out[25]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [27]: df.stack(level=[1, 2])
Out[27]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
These functions are intelligent about handling missing data and do not expect each subgroup within the hierarchical
index to have the same set of labels. They also can handle the index being unsorted (but you can make it sorted by
calling sort_index, of course). Here is a more complex example:
In [32]: df2
Out[32]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack can be called with a level argument to select which level in the columns to stack:
In [33]: df2.stack('exp')
Out[33]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [34]: df2.stack('animal')
Out[34]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
Unstacking can result in missing values if subgroups do not have the same set of labels. By default, missing values will be replaced with the default fill value for that data type: NaN for float, NaT for datetimelike, etc. For integer types, by default data will be converted to float and missing values will be set to NaN.
In [36]: df3
Out[36]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [37]: df3.unstack()
Out[37]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
Alternatively, unstack takes an optional fill_value argument, for specifying the value of missing data.
In [38]: df3.unstack(fill_value=-1e9)
Out[38]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
Unstacking when the columns are a MultiIndex is also careful about doing the right thing:
In [39]: df[:3].unstack(0)
Out[39]:
exp A B A \
animal cat dog cat dog
first bar baz bar baz bar baz bar
second
one 0.895717 0.410835 0.805244 0.81385 -1.206412 0.132003 2.565646
two 1.431256 NaN 1.340309 NaN -1.170299 NaN -0.226169
exp
animal
first baz
second
one -0.827317
two NaN
In [40]: df2.unstack(1)
Out[40]:
exp A B A \
animal cat dog cat dog
second one two one two one two one
first
bar 0.895717 1.431256 0.805244 1.340309 -1.206412 -1.170299 2.565646
baz 0.410835 NaN 0.813850 NaN 0.132003 NaN -0.827317
foo -1.413681 0.875906 1.607920 -2.211372 1.024180 0.974466 0.569605
qux NaN -1.226825 NaN 0.769804 NaN -1.281247 NaN
exp
animal
second two
first
bar -0.226169
baz NaN
foo -2.006747
qux -0.727707
In [42]: cheese
Out[42]:
first height last weight
0 John 5.5 Doe 130
1 Mary 6.0 Bo 150
Another way to transform is to use the wide_to_long panel data convenience function.
In [47]: dft
Out[47]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
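The reshaped output below presumably came from a call like:

pd.wide_to_long(dft, ['A', 'B'], i='id', j='year')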
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
It should be no shock that combining pivot / stack / unstack with GroupBy and the basic Series and DataFrame
statistical functions can produce some very expressive and fast data manipulations.
In [49]: df
Out[49]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [50]: df.stack().mean(1).unstack()
In [52]: df.stack().groupby(level=1).mean()
Out[52]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [53]: df.mean().unstack(0)
Out[53]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
While pivot provides general purpose pivoting of DataFrames with various data types (strings, numerics, etc.),
Pandas also provides the pivot_table function for pivoting with aggregation of numeric data.
The function pandas.pivot_table can be used to create spreadsheet-style pivot tables. See the cookbook for
some advanced strategies.
It takes a number of arguments:
data: A DataFrame object
values: a column or a list of columns to aggregate
index: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.
columns: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.
aggfunc: function to use for aggregation, defaulting to numpy.mean
Consider a data set like this:
In [56]: df
Out[56]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
5 one C bar -0.732339 -2.182937 2013-06-01
6 two A foo 0.687738 0.380396 2013-07-01
.. ... .. ... ... ... ...
17 one C bar -0.345352 0.206053 2013-06-15
18 two A foo 1.314232 -0.251905 2013-07-15
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15
In [59]: pd.pivot_table(df, values=['D', 'E'], index=['B'], columns=['A', 'C'], aggfunc=np.sum)
Out[59]:
D E \
A one three two one
C bar foo bar foo bar foo bar
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971 2.786113
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN 1.368280
C -1.077692 1.399070 1.177566 NaN NaN 0.352360 -1.976883
A three two
C foo bar foo bar foo
B
A -0.043211 1.922577 NaN NaN 0.128491
B -1.103384 NaN -2.128743 -0.194294 NaN
C 1.495717 -0.263660 NaN NaN 0.872482
The result object is a DataFrame having potentially hierarchical indexes on the rows and columns. If the values
column name is not given, the pivot table will include all of the data that can be aggregated in an additional level of
hierarchy in the columns:
In [60]: pd.pivot_table(df, index=['A', 'B'], columns=['C'])
Out[60]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
Also, you can use Grouper for the index and columns keywords. For details on Grouper, see Grouping with a Grouper specification.
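The call whose output follows presumably grouped the datetime column F by month, along the lines of:

pd.pivot_table(df, values='D', index=pd.Grouper(freq='M', key='F'), columns='C')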
Out[61]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by calling to_string if you wish:
In [63]: print(table.to_string(na_rep=''))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
If you pass margins=True to pivot_table, special All columns and rows will be added with partial group
aggregates across the categories on the rows and columns:
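The table below presumably came from a call like:

df.pivot_table(index=['A', 'B'], columns='C', margins=True, aggfunc=np.std)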
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Use the crosstab function to compute a cross-tabulation of two (or more) factors. By default crosstab computes
a frequency table of the factors unless an array of values and an aggregation function are passed.
It takes a number of arguments:
index: array-like, values to group by in the rows
columns: array-like, values to group by in the columns
values: array-like, optional, array of values to aggregate according to the factors
aggfunc: function, optional. If no values array is passed, computes a frequency table.
rownames: sequence, default None, must match number of row arrays passed
colnames: sequence, default None, if passed, must match number of column arrays passed
margins: boolean, default False, Add row/column margins (subtotals)
normalize: boolean, {all, index, columns}, or {0,1}, default False. Normalize by dividing all values
by the sum of values.
Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are specified.
For example:
In [65]: foo, bar, dull, shiny, one, two = 'foo', 'bar', 'dull', 'shiny', 'one', 'two'
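The cross-tabulation itself is omitted above; a sketch consistent with the names just defined:

a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)
b = np.array([one, one, two, one, two, one], dtype=object)
c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)
pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])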
In [71]: df
Out[71]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
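The frequency table below presumably came from:

pd.crosstab(df.A, df.B)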
B 3 4
A
1 1 0
2 1 3
Any input passed containing Categorical data will have all of its categories included in the cross-tabulation, even
if the actual data does not contain any instances of a particular category.
In [73]: foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
18.6.1 Normalization
normalize can also normalize values within each row or within each column:
In [77]: pd.crosstab(df.A, df.B, normalize='columns')
Out[77]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
crosstab can also be passed a third Series and an aggregation function (aggfunc) that will be applied to the values
of the third Series within each group defined by the first two Series:
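A minimal sketch, reusing the df defined above:

pd.crosstab(df.A, df.B, values=df.C, aggfunc=np.sum)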
18.7 Tiling
The cut function computes groupings for the values of the input array and is often used to transform continuous
variables to discrete or categorical variables:
In [80]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
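The categories shown below presumably came from cutting into three equal-width bins:

pd.cut(ages, bins=3)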
Categories (3, interval[float64]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed. Alternatively we can specify custom bin-edges:
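The binning that produced the output below presumably resembled:

c = pd.cut(ages, bins=[0, 18, 35, 70])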
In [83]: c
Out[83]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64]): [(0, 18] < (18, 35] < (35, 70]]
To convert a categorical variable into a dummy or indicator DataFrame: from a column in a DataFrame (a Series) which has k distinct values, you can derive a DataFrame containing k columns of 1s and 0s:
In [84]: df = pd.DataFrame({'key': list('bbacab'), 'data1': range(6)})
In [85]: pd.get_dummies(df['key'])
Out[85]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it's useful to prefix the column names, for example when merging the result with the original DataFrame:
In [86]: dummies = pd.get_dummies(df['key'], prefix='key')
In [87]: dummies
Out[87]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [88]: df[['data1']].join(dummies)
Out[88]:
   data1  key_a  key_b  key_c
0      0      0      1      0
1      1      0      1      0
2      2      1      0      0
3      3      0      0      1
4      4      1      0      0
5      5      0      1      0
This function is often used along with discretization functions like cut:
In [89]: values = np.random.randn(10)
In [90]: values
Out[90]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
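The combination shown below presumably binned these values and dummy-encoded the bins; the bin edges here are an assumption consistent with the output:

bins = [0, 0.2, 0.4, 0.6, 0.8, 1]  # assumed edges; values outside them encode as all zeros
pd.get_dummies(pd.cut(values, bins))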
   (0.0, 0.2]  (0.2, 0.4]  (0.4, 0.6]  (0.6, 0.8]  (0.8, 1.0]
0           0           0           1           0           0
1           0           0           0           0           0
2           0           0           0           0           0
3           0           0           0           0           0
4           1           0           0           0           0
5           0           0           0           0           0
6           0           0           0           0           0
7           1           0           0           0           0
8           0           0           0           0           0
9           0           0           1           0           0
In [94]: pd.get_dummies(df)
Out[94]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Notice that the B column is still included in the output, it just hasn't been encoded. You can drop B before calling get_dummies if you don't want to include it in the output.
As with the Series version, you can pass values for the prefix and prefix_sep. By default the column name is used as the prefix, and _ as the prefix separator. You can specify prefix and prefix_sep in 3 ways:
string: Use the same value for prefix or prefix_sep for each column to be encoded
list: Must be the same length as the number of columns being encoded.
dict: Mapping column name to prefix
In [97]: simple
Out[97]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [99]: from_list
Out[99]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [101]: from_dict
Out[101]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [103]: pd.get_dummies(s)
Out[103]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
Passing drop_first=True drops the first level of each encoded column, keeping k - 1 dummies out of k categorical levels:
   b  c
0  0  0
1  1  0
2  0  1
3  0  0
4  0  0
When a column contains only one level, it will be omitted in the result.
In [105]: df = pd.DataFrame({'A':list('aaaaa'),'B':list('ababc')})
In [106]: pd.get_dummies(df)
Out[106]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
With drop_first=True, the single-level column A is dropped entirely:
   B_b  B_c
0    0    0
1    1    0
2    0    0
3    1    0
4    0    1
In [109]: x
Out[109]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
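labels and uniques presumably came from a factorize call:

labels, uniques = pd.factorize(x)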
In [111]: labels
Out[111]: array([ 0, 0, -1, 1, 2, 3])
In [112]: uniques
Out[112]: Index(['A', 'B', 3.14, inf], dtype='object')
Note that factorize is similar to numpy.unique, but differs in its handling of NaN:
Note: The following numpy.unique will fail under Python 3 with a TypeError because of an ordering bug. See also here.
Note: If you just want to handle one column as a categorical variable (like R's factor), you can use df["cat_col"] = pd.Categorical(df["col"]) or df["cat_col"] = df["col"].astype("category"). For full docs on Categorical, see the Categorical introduction and the API documentation. This feature was introduced in version 0.15.
19 Time Series / Date functionality
pandas has proven very successful as a tool for working with time series data, especially in the financial data analysis space. Using the NumPy datetime64 and timedelta64 dtypes, we have consolidated a large number of features from other Python libraries like scikits.timeseries as well as created a tremendous amount of new functionality for manipulating time series data.
In working with time series data, we will frequently seek to:
generate sequences of fixed-frequency dates and time spans
conform or convert time series to a particular frequency
compute relative dates based on various non-standard time increments (e.g. 5 business days before the last
business day of the year), or roll dates forward or backward
pandas provides a relatively compact and self-contained set of tools for performing the above tasks.
Create a range of dates:
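A minimal sketch of the hourly range used here:

rng = pd.date_range('1/1/2011', periods=72, freq='H')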
In [2]: rng[:5]
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 01:00:00',
'2011-01-01 02:00:00', '2011-01-01 03:00:00',
'2011-01-01 04:00:00'],
dtype='datetime64[ns]', freq='H')
In [4]: ts.head()
Out[4]:
2011-01-01 00:00:00 0.469112
2011-01-01 01:00:00 -0.282863
2011-01-01 02:00:00 -1.509059
2011-01-01 03:00:00 -1.135632
2011-01-01 04:00:00 1.212112
Freq: H, dtype: float64
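Change frequency and fill gaps; the conversion shown below presumably used asfreq:

converted = ts.asfreq('45Min', method='pad')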
In [6]: converted.head()
Out[6]:
2011-01-01 00:00:00 0.469112
2011-01-01 00:45:00 0.469112
2011-01-01 01:30:00 -0.282863
2011-01-01 02:15:00 -1.509059
2011-01-01 03:00:00 -1.135632
Freq: 45T, dtype: float64
Resample:
# Daily means
In [7]: ts.resample('D').mean()
Out[7]:
2011-01-01 -0.319569
2011-01-02 -0.337703
2011-01-03 0.117258
Freq: D, dtype: float64
19.1 Overview
The following table shows the types of time-related classes pandas can handle and how to create them.
Class Remarks How to create
Timestamp Represents a single time stamp to_datetime, Timestamp
DatetimeIndex Index of Timestamp to_datetime, date_range, DatetimeIndex
Period Represents a single time span Period
PeriodIndex Index of Period period_range, PeriodIndex
Time-stamped data is the most basic type of time series data that associates values with points in time. For pandas objects it means using the points in time.
In [9]: pd.Timestamp('2012-05-01')
Out[9]: Timestamp('2012-05-01 00:00:00')
In [10]: pd.Timestamp(2012, 5, 1)
Out[10]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change variables with a time span instead. The span represented by Period can be specified explicitly, or inferred from the datetime string format.
For example:
In [11]: pd.Period('2011-01')
Out[11]: Period('2011-01', 'M')
Timestamp and Period can be the index. Lists of Timestamp and Period are automatically coerced to
DatetimeIndex and PeriodIndex respectively.
In [15]: type(ts.index)
Out[15]: pandas.core.indexes.datetimes.DatetimeIndex
In [16]: ts.index
Out[16]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [17]: ts
Out[17]:
2012-05-01 -0.410001
2012-05-02 -0.078638
2012-05-03 0.545952
dtype: float64
In [20]: type(ts.index)
Out[20]: pandas.core.indexes.period.PeriodIndex
In [21]: ts.index
Out[21]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]', freq='M')
In [22]: ts
Out[22]:
2012-01 -1.219217
2012-02 -1.226825
2012-03 0.769804
Freq: M, dtype: float64
pandas allows you to capture both representations and convert between them. Under the hood, pandas represents timestamps using instances of Timestamp and sequences of timestamps using instances of DatetimeIndex. For regular time spans, pandas uses Period objects for scalar values and PeriodIndex for sequences of spans. Better support for irregular intervals with arbitrary start and end points is forthcoming in future releases.
To convert a Series or list-like object of date-like objects e.g. strings, epochs, or a mixture, you can use the
to_datetime function. When passed a Series, this returns a Series (with the same index), while a list-like is
converted to a DatetimeIndex:
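For example (the input values are illustrative):

pd.to_datetime(pd.Series(['Jul 31, 2009', '2010-01-10', None]))
pd.to_datetime(['2005/11/23', '2010.12.31'])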
If you use dates which start with the day first (i.e. European style), you can pass the dayfirst flag:
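A minimal sketch:

pd.to_datetime(['14-01-2012', '01-14-2012'], dayfirst=True)
# the second value cannot be parsed day-first, so it falls back to month-first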
Warning: You see in the above example that dayfirst isn't strict, so if a date can't be parsed with the day being first it will be parsed as if dayfirst were False.
Note: Specifying a format argument will potentially speed up the conversion considerably, and on versions later than 0.13.0 explicitly specifying a format string of %Y%m%d takes a faster path still.
If you pass a single string to to_datetime, it returns a single Timestamp. Also, Timestamp can accept string input. Note that Timestamp doesn't accept string parsing options like dayfirst or format; use to_datetime if these are required.
In [27]: pd.to_datetime('2010/11/12')
Out[27]: Timestamp('2010-11-12 00:00:00')
In [28]: pd.Timestamp('2010/11/12')
Out[28]: Timestamp('2010-11-12 00:00:00')
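The frame of datetime components being assembled below, inferred from the output, presumably looked like:

df = pd.DataFrame({'year': [2015, 2016],
                   'month': [2, 3],
                   'day': [4, 5],
                   'hour': [2, 3]})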
In [30]: pd.to_datetime(df)
Out[30]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
Note: In version 0.17.0, the default for to_datetime is now errors='raise', rather than errors='ignore'. This means that invalid parsing will raise rather than return the original input as in previous versions.
It's also possible to convert integer or float epoch times. The default unit for these is nanoseconds (since this is how Timestamps are stored). However, epochs are often stored in another unit, which can be specified. These are computed from the starting point specified by the origin parameter.
Typical epoch stored units are seconds, milliseconds, microseconds, and nanoseconds.
Warning: Conversion of float epoch times can lead to inaccurate and unexpected results. Python floats have about 15 digits of precision in decimal. Rounding during conversion from float to a high precision Timestamp is unavoidable. The only way to achieve exact precision is to use a fixed-width type (e.g. an int64).
In [34]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit='s')
Out[34]: DatetimeIndex(['2017-03-22 15:16:45.433000', '2017-03-22 15:16:45.433503'],
dtype='datetime64[ns]', freq=None)
To invert the operation from above, namely, to convert from a Timestamp to a unix epoch:
In [37]: stamps
Out[37]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We convert the DatetimeIndex to an int64 array, then divide by the conversion unit.
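A minimal sketch:

stamps.view('int64') // pd.Timedelta(1, unit='s').value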
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00. Commonly called unix
epoch or POSIX time.
To generate an index with time stamps, you can use either the DatetimeIndex or Index constructor and pass in a list of
datetime objects:
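For example:

from datetime import datetime
dates = [datetime(2012, 5, 1), datetime(2012, 5, 2), datetime(2012, 5, 3)]
index = pd.DatetimeIndex(dates)  # pd.Index(dates) infers the same DatetimeIndex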
In [43]: index
Out[43]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype=
'datetime64[ns]', freq=None)
In [45]: index
Out[45]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype=
'datetime64[ns]', freq=None)
Practically, this becomes very cumbersome because we often need a very long index with a large number of
timestamps. If we need timestamps on a regular frequency, we can use the pandas functions date_range and
bdate_range to create timestamp indexes.
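The monthly index shown below presumably came from:

index = pd.date_range('2000-1-1', periods=1000, freq='M')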
In [47]: index
Out[47]:
DatetimeIndex(['2000-01-31', '2000-02-29', '2000-03-31', '2000-04-30',
'2000-05-31', '2000-06-30', '2000-07-31', '2000-08-31',
'2000-09-30', '2000-10-31',
...
'2082-07-31', '2082-08-31', '2082-09-30', '2082-10-31',
'2082-11-30', '2082-12-31', '2083-01-31', '2083-02-28',
'2083-03-31', '2083-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [49]: index
Out[49]:
DatetimeIndex(['2012-01-02', '2012-01-03', '2012-01-04', '2012-01-05',
'2012-01-06', '2012-01-09', '2012-01-10', '2012-01-11',
'2012-01-12', '2012-01-13',
...
'2012-12-03', '2012-12-04', '2012-12-05', '2012-12-06',
'2012-12-07', '2012-12-10', '2012-12-11', '2012-12-12',
'2012-12-13', '2012-12-14'],
dtype='datetime64[ns]', length=250, freq='B')
Convenience functions like date_range and bdate_range utilize a variety of frequency aliases. The default frequency for date_range is a calendar day, while the default for bdate_range is a business day.
In [50]: start = datetime(2011, 1, 1)
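The matching end point and range (omitted above) presumably were:

end = datetime(2012, 1, 1)
rng = pd.date_range(start, end)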
In [53]: rng
Out[53]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [55]: rng
Out[55]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
date_range and bdate_range make it easy to generate a range of dates using various combinations of parame-
ters like start, end, periods, and freq:
In [56]: pd.date_range(start, end, freq='BM')
Out[56]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
The start and end dates are strictly inclusive, so no dates outside of those specified will be generated.
Since pandas represents timestamps in nanosecond resolution, the timespan that can be represented using a 64-bit
integer is limited to approximately 584 years:
In [60]: pd.Timestamp.min
Out[60]: Timestamp('1677-09-21 00:12:43.145225')
In [61]: pd.Timestamp.max
Out[61]: Timestamp('2262-04-11 23:47:16.854775807')
19.6 Indexing
One of the main uses for DatetimeIndex is as an index for pandas objects. The DatetimeIndex class contains
many timeseries related optimizations:
A large range of dates for various offsets are pre-computed and cached under the hood in order to make gener-
ating subsequent date ranges very fast (just have to grab a slice)
Fast shifting using the shift and tshift method on pandas objects
Unioning of overlapping DatetimeIndex objects with the same frequency is very fast (important for fast data
alignment)
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic
DatetimeIndex objects have all the basic functionality of regular Index objects, and a smorgasbord of advanced time series specific methods for easy frequency processing.
See also:
Reindexing methods
Note: While pandas does not force you to have a sorted date index, some of these methods may have unexpected or
incorrect behavior if the dates are unsorted. So please be careful.
DatetimeIndex can be used like a regular index and offers all of its intelligent functionality like selection, slicing,
etc.
In [62]: rng = pd.date_range(start, end, freq='BM')
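The series indexed by these dates (its creation was omitted) presumably looked like:

ts = pd.Series(np.random.randn(len(rng)), index=rng)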
In [64]: ts.index
Out[64]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [65]: ts[:5].index
In [66]: ts[::2].index
You can pass in dates and strings that parse to dates as indexing parameters:
In [67]: ts['1/31/2011']
Out[67]: -1.2812473076599531
In [69]: ts['10/31/2011':'12/31/2011']
Out[69]:
2011-10-31 0.149748
2011-11-30 -0.732339
2011-12-30 0.687738
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in the year or year and month as strings:
In [70]: ts['2011']
Out[70]:
2011-01-31 -1.281247
2011-02-28 -0.727707
2011-03-31 -0.121306
2011-04-29 -0.097883
2011-05-31 0.695775
2011-06-30 0.341734
2011-07-29 0.959726
2011-08-31 -1.110336
2011-09-30 -0.619976
2011-10-31 0.149748
2011-11-30 -0.732339
2011-12-30 0.687738
Freq: BM, dtype: float64
In [71]: ts['2011-6']
Out[71]:
2011-06-30 0.341734
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the partial string selection is a form of label slicing, the endpoints will be included. This would include matching times on an included date. Here's an example:
In [72]: dft = pd.DataFrame(randn(100000,1),
....: columns=['A'],
....: index=pd.date_range('20130101',periods=100000,freq='T'))
....:
In [73]: dft
Out[73]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-03-11 10:33:00 -0.293083
2013-03-11 10:34:00 -0.059881
2013-03-11 10:35:00 1.252450
2013-03-11 10:36:00 0.046611
2013-03-11 10:37:00 0.059478
2013-03-11 10:38:00 -0.286539
2013-03-11 10:39:00 0.841669
In [74]: dft['2013']
Out[74]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-03-11 10:33:00 -0.293083
2013-03-11 10:34:00 -0.059881
2013-03-11 10:35:00 1.252450
2013-03-11 10:36:00 0.046611
2013-03-11 10:37:00 0.059478
2013-03-11 10:38:00 -0.286539
2013-03-11 10:39:00 0.841669
This starts on the very first time in the month, and includes the last date and time for the month:
In [75]: dft['2013-1':'2013-2']
Out[75]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-02-28 23:53:00 0.103114
2013-02-28 23:54:00 -1.303422
2013-02-28 23:55:00 0.451943
2013-02-28 23:56:00 0.220534
2013-02-28 23:57:00 -1.624220
2013-02-28 23:58:00 0.093915
2013-02-28 23:59:00 -1.087454
This specifies a stop time that includes all of the times on the last day:
In [76]: dft['2013-1':'2013-2-28']
Out[76]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-02-28 23:53:00 0.103114
2013-02-28 23:54:00 -1.303422
2013-02-28 23:55:00 0.451943
2013-02-28 23:56:00 0.220534
2013-02-28 23:57:00 -1.624220
2013-02-28 23:58:00 0.093915
2013-02-28 23:59:00 -1.087454
This specifies an exact stop time (and is not the same as the above):
DatetimeIndex Partial String Indexing also works on DataFrames with a MultiIndex. For example:
In [79]: dft2 = pd.DataFrame(np.random.randn(20, 1),
   ....:                     columns=['A'],
   ....:                     index=pd.MultiIndex.from_product([pd.date_range('20130101',
   ....:                                                       periods=10, freq='12H'),
   ....:                                                       ['a', 'b']]))
In [80]: dft2
Out[80]:
A
2013-01-01 00:00:00 a -0.659574
b 1.494522
2013-01-01 12:00:00 a -0.778425
b -0.253355
2013-01-02 00:00:00 a -2.816159
b -1.210929
2013-01-02 12:00:00 a 0.144669
... ...
2013-01-04 00:00:00 b -1.624463
2013-01-04 12:00:00 a 0.056912
b 0.149867
2013-01-05 00:00:00 a -1.256173
b 2.324544
2013-01-05 12:00:00 a -1.067396
b -0.660996
In [81]: dft2.loc['2013-01-05']
Out[81]:
A
2013-01-05 00:00:00 a -1.256173
b 2.324544
2013-01-05 12:00:00 a -1.067396
b -0.660996
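series_minute, used below, is presumably a Series indexed at minute resolution, e.g.:

series_minute = pd.Series([1, 2, 3],
                          pd.DatetimeIndex(['2011-12-31 23:59:00',
                                            '2012-01-01 00:00:00',
                                            '2012-01-01 00:02:00']))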
In [86]: series_minute.index.resolution
Out[86]: 'minute'
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [91]: series_second.index.resolution
Out[91]: 'second'
If the timestamp string is treated as a slice, it can be used to index DataFrame with [] as well.
a b
2011-12-31 23:59:00 1 4
Warning: However, if the string is treated as an exact match, the selection in a DataFrame's [] will be column-wise and not row-wise; see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such a name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [95]: dft_minute.loc['2011-12-31 23:59']
Out[95]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [97]: series_monthly.index.resolution
Out[97]: 'day'
As discussed in the previous section, indexing a DatetimeIndex with a partial string depends on the accuracy of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were
not explicitly specified (they are 0).
With no defaults.
Even complicated fancy indexing that breaks the DatetimeIndex's frequency regularity will result in a DatetimeIndex (but frequency is lost):
There are several time/date properties that one can access from Timestamp or from a collection of timestamps like a DatetimeIndex.
Property Description
year The year of the datetime
month The month of the datetime
day The days of the datetime
hour The hour of the datetime
minute The minutes of the datetime
second The seconds of the datetime
microsecond The microseconds of the datetime
nanosecond The nanoseconds of the datetime
date Returns datetime.date (does not contain timezone information)
time Returns datetime.time (does not contain timezone information)
dayofyear The ordinal day of year
weekofyear The week ordinal of the year
week The week ordinal of the year
dayofweek The number of the day of the week with Monday=0, Sunday=6
weekday The number of the day of the week with Monday=0, Sunday=6
weekday_name The name of the day in a week (ex: Friday)
quarter Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month The number of days in the month of the datetime
is_month_start Logical indicating if first day of month (defined by frequency)
is_month_end Logical indicating if last day of month (defined by frequency)
is_quarter_start Logical indicating if first day of quarter (defined by frequency)
is_quarter_end Logical indicating if last day of quarter (defined by frequency)
is_year_start Logical indicating if first day of year (defined by frequency)
is_year_end Logical indicating if last day of year (defined by frequency)
is_leap_year Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can access these properties via the .dt accessor; see the docs.
In the preceding examples, we created DatetimeIndex objects at various frequencies by passing in frequency strings
like M, W, and BM to the freq keyword. Under the hood, these frequency strings are being translated into an
instance of pandas DateOffset, which represents a regular frequency increment. Specific offset logic like month,
business day, or one hour is represented in its various subclasses.
The basic DateOffset takes the same arguments as dateutil.relativedelta, which works like:
In [103]: d = datetime(2008, 8, 18, 9, 0)
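A sketch of the relativedelta-style usage (the import line reflects one common way to bring the offsets in):

from pandas.tseries.offsets import BDay, BMonthEnd, DateOffset, Week, YearEnd
d + DateOffset(months=4, days=5)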
In [107]: d - 5 * BDay()
Out[107]: Timestamp('2008-08-11 09:00:00')
In [108]: d + BMonthEnd()
Out[108]: Timestamp('2008-08-29 09:00:00')
The rollforward and rollback methods do exactly what you would expect:
In [109]: d
Out[109]: datetime.datetime(2008, 8, 18, 9, 0)
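The offset used below was presumably:

offset = BMonthEnd()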
In [111]: offset.rollforward(d)
Out[111]: Timestamp('2008-08-29 09:00:00')
In [112]: offset.rollback(d)
Out[112]: Timestamp('2008-07-31 09:00:00')
It's definitely worth exploring the pandas.tseries.offsets module and the various docstrings for the classes.
These operations (apply, rollforward and rollback) preserve time (hour, minute, etc.) information by default. To reset time, use the normalize=True keyword when creating the offset instance. If normalize=True, the result is normalized after the function is applied.
Some of the offsets can be parameterized when created to result in different behaviors. For example, the Week
offset for generating weekly data accepts a weekday parameter which results in the generated dates always lying on
a particular day of the week:
In [122]: d
Out[122]: datetime.datetime(2008, 8, 18, 9, 0)
In [123]: d + Week()
Out[123]: Timestamp('2008-08-25 09:00:00')
In [124]: d + Week(weekday=4)
Out[124]: Timestamp('2008-08-22 09:00:00')
In [125]: (d + Week(weekday=4)).weekday()
Out[125]: 4
In [126]: d - Week()
Out[126]: Timestamp('2008-08-11 09:00:00')
In [127]: d + Week(normalize=True)
Out[127]: Timestamp('2008-08-25 00:00:00')
In [128]: d - Week(normalize=True)
Out[128]: Timestamp('2008-08-11 00:00:00')
In [129]: d + YearEnd()
Out[129]: Timestamp('2008-12-31 09:00:00')
In [130]: d + YearEnd(month=6)
Out[130]: Timestamp('2009-06-30 09:00:00')
Offsets can be used with either a Series or DatetimeIndex to apply the offset to each element.
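The range used below (its creation was omitted) presumably was:

rng = pd.date_range('2012-01-01', periods=3, freq='D')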
In [132]: s = pd.Series(rng)
In [133]: rng
Out[133]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype=
'datetime64[ns]', freq='D')
In [135]: s + DateOffset(months=2)
Out[135]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [136]: s - DateOffset(months=2)
Out[136]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour, Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the Timedelta section for more examples.
In [137]: s - Day(2)
Out[137]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [139]: td
Out[139]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [140]: td + Minute(15)
Out[140]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a vectorized implementation. They can still be used, but may be significantly slower and will show a PerformanceWarning.
The CDay or CustomBusinessDay class provides a parametric BusinessDay class which can be used to create
customized business day calendars which account for local holidays and local weekend conventions.
As an interesting example, let's look at Egypt, where a Friday-Saturday weekend is observed.
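The bday_egypt offset used below is not shown in this excerpt; a minimal sketch of how it could be constructed (the exact holiday list is an assumption):

from pandas.tseries.offsets import CustomBusinessDay
# only Sunday through Thursday are business days
weekmask_egypt = 'Sun Mon Tue Wed Thu'
# locally observed holidays can be listed explicitly
holidays = ['2012-05-01', datetime(2013, 5, 1)]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
dt = datetime(2013, 4, 30)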
In [147]: dt + 2 * bday_egypt
Out[147]: Timestamp('2013-05-05 00:00:00')
Out[149]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the holiday calendar section for more information.
Monthly offsets that respect a certain holiday calendar can be defined in the usual way.
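A sketch of how such an offset might be built, using pandas' built-in US federal holiday calendar (the dt value is inferred from the output below):

from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessMonthBegin
bmth_us = CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Jan 1st 2014 is a holiday, so the month's first business day is Jan 2nd
dt = datetime(2013, 12, 17)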
In [157]: dt + bmth_us
Out[157]: Timestamp('2014-01-02 00:00:00')
Note: The frequency string C is used to indicate that a CustomBusinessDay DateOffset is used. It is important to note that since CustomBusinessDay is a parameterised type, instances of CustomBusinessDay may differ, and this is not detectable from the C frequency string. The user therefore needs to ensure that the C frequency string is used consistently within the user's application.
The BusinessHour class provides a business hour representation on BusinessDay, allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours. Adding BusinessHour increments a Timestamp hour by hour. If the target Timestamp is outside business hours, it is first moved to the next business hour and then incremented. If the result exceeds the business hour end, the remaining time is added to the next business day.
In [159]: bh = BusinessHour()
In [160]: bh
Out[160]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [161]: pd.Timestamp('2014-08-01 10:00').weekday()
Out[161]: 4
# If the result is on the end time, move to the next business day
In [164]: pd.Timestamp('2014-08-01 16:00') + bh
Out[164]: Timestamp('2014-08-04 09:00:00')
You can also specify start and end times by keyword. The argument must be a str with an hour:minute representation or a datetime.time instance. Specifying seconds, microseconds or nanoseconds as part of the business hours results in a ValueError.
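A sketch of such a parameterization (the values are taken from the repr below):

import datetime
# start/end may be 'hour:minute' strings or datetime.time instances
bh = BusinessHour(start='11:00', end=datetime.time(20, 0))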
In [169]: bh
Out[169]: <BusinessHour: BH=11:00-20:00>
Passing a start time later than the end time represents a midnight business hour. In this case, the business hours exceed midnight and overlap into the next day. Valid business hours are distinguished by whether they started from a valid BusinessDay.
In [173]: bh = BusinessHour(start='17:00', end='09:00')
In [174]: bh
Out[174]: <BusinessHour: BH=17:00-09:00>
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in the next business hour start or the previous day's end, respectively. Unlike other offsets, BusinessHour.rollforward may, by definition, output a result different from apply.
This is because one day's business hour end is equal to the next day's business hour start. For example, under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and 2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [179]: BusinessHour().rollback(pd.Timestamp('2014-08-02 15:00'))
Out[179]: Timestamp('2014-08-01 17:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary holidays, you can use
CustomBusinessHour offset, see Custom Business Hour:
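A sketch of the bhour_us offset used below (assuming the US federal holiday calendar):

from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessHour
bhour_us = CustomBusinessHour(calendar=USFederalHolidayCalendar())
# a Friday afternoon before MLK Day, a US federal holiday
dt = datetime(2014, 1, 17, 15)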
In [187]: dt + bhour_us
Out[187]: Timestamp('2014-01-17 16:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [190]: dt + bhour_mon * 2
Out[190]: Timestamp('2014-01-21 10:00:00')
A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset
aliases (referred to as time rules prior to v0.8.0).
Alias Description
B business day frequency
C custom business day frequency (experimental)
D calendar day frequency
W weekly frequency
M month end frequency
SM semi-month end frequency (15th and end of month)
BM business month end frequency
CBM custom business month end frequency
MS month start frequency
SMS semi-month start frequency (1st and 15th)
BMS business month start frequency
CBMS custom business month start frequency
Q quarter end frequency
BQ business quarter end frequency
QS quarter start frequency
BQS business quarter start frequency
A year end frequency
BA business year end frequency
AS year start frequency
BAS business year start frequency
BH business hour frequency
H hourly frequency
T, min minutely frequency
S secondly frequency
L, ms milliseconds
U, us microseconds
N nanoseconds
As we have seen previously, the alias and the offset instance are fungible in most functions:
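For example (a minimal sketch):

from pandas.tseries.offsets import MonthEnd
pd.date_range('2012-01-01', periods=3, freq='M')         # using the alias
pd.date_range('2012-01-01', periods=3, freq=MonthEnd())  # using the offset instance
# both yield DatetimeIndex(['2012-01-31', '2012-02-29', '2012-03-31'], ...)

Certain frequencies also come in anchored variants: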
Alias Description
W-SUN weekly frequency (sundays). Same as W
W-MON weekly frequency (mondays)
W-TUE weekly frequency (tuesdays)
W-WED weekly frequency (wednesdays)
W-THU weekly frequency (thursdays)
W-FRI weekly frequency (fridays)
W-SAT weekly frequency (saturdays)
(B)Q(S)-DEC quarterly frequency, year ends in December. Same as Q
(B)Q(S)-JAN quarterly frequency, year ends in January
(B)Q(S)-FEB quarterly frequency, year ends in February
(B)Q(S)-MAR quarterly frequency, year ends in March
(B)Q(S)-APR quarterly frequency, year ends in April
(B)Q(S)-MAY quarterly frequency, year ends in May
(B)Q(S)-JUN quarterly frequency, year ends in June
(B)Q(S)-JUL quarterly frequency, year ends in July
(B)Q(S)-AUG quarterly frequency, year ends in August
(B)Q(S)-SEP quarterly frequency, year ends in September
(B)Q(S)-OCT quarterly frequency, year ends in October
(B)Q(S)-NOV quarterly frequency, year ends in November
(B)A(S)-DEC annual frequency, anchored end of December. Same as A
(B)A(S)-JAN annual frequency, anchored end of January
(B)A(S)-FEB annual frequency, anchored end of February
(B)A(S)-MAR annual frequency, anchored end of March
(B)A(S)-APR annual frequency, anchored end of April
(B)A(S)-MAY annual frequency, anchored end of May
(B)A(S)-JUN annual frequency, anchored end of June
(B)A(S)-JUL annual frequency, anchored end of July
(B)A(S)-AUG annual frequency, anchored end of August
(B)A(S)-SEP annual frequency, anchored end of September
(B)A(S)-OCT annual frequency, anchored end of October
(B)A(S)-NOV annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors for DatetimeIndex, as well as various other timeseries-related functions in pandas.
For those offsets that are anchored to the start or end of a specific frequency (MonthEnd, MonthBegin, WeekEnd, etc.), the following rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous) anchor point, and moved |n| - 1 additional steps forwards or backwards.
In [195]: pd.Timestamp('2014-01-02') + MonthBegin(n=1)
Out[195]: Timestamp('2014-02-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards or backwards.
In [201]: pd.Timestamp('2014-01-01') + MonthBegin(n=1)
Out[201]: Timestamp('2014-02-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise it is rolled forward to the next anchor
point.
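A sketch of the n=0 behaviour:

from pandas.tseries.offsets import MonthBegin
pd.Timestamp('2014-01-02') + MonthBegin(n=0)  # rolled forward to Timestamp('2014-02-01 00:00:00')
pd.Timestamp('2014-01-01') + MonthBegin(n=0)  # already on an anchor point, unchanged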
Holidays and calendars provide a simple way to define holiday rules to be used with CustomBusinessDay or
in other analysis that requires a predefined set of holidays. The AbstractHolidayCalendar class provides all
the necessary methods to return a list of holidays and only rules need to be defined in a specific holiday calendar
class. Further, start_date and end_date class attributes determine over what date range holidays are generated.
These should be overwritten on the AbstractHolidayCalendar class to have the range apply to all calendar
subclasses. USFederalHolidayCalendar is the only calendar that exists and primarily serves as an example for
developing other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an observance rule determines when that
holiday is observed if it falls on a weekend or some other non-observed day. Defined observance rules are:
Rule Description
nearest_workday move Saturday to Friday and Sunday to Monday
sunday_to_monday move Sunday to following Monday
next_monday_or_tuesday move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday move Saturday and Sunday to previous Friday
next_monday move Saturday and Sunday to following Monday
An example of how holidays and holiday calendars are defined:
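A sketch of such a definition, consistent with the holidays that appear in the output further below:

from pandas.tseries.holiday import (Holiday, USMemorialDay,
                                    AbstractHolidayCalendar,
                                    nearest_workday, MO)
from pandas import DateOffset

class ExampleCalendar(AbstractHolidayCalendar):
    rules = [
        USMemorialDay,
        Holiday('July 4th', month=7, day=4, observance=nearest_workday),
        # second Monday of October
        Holiday('Columbus Day', month=10, day=1,
                offset=DateOffset(weekday=MO(2)))]

cal = ExampleCalendar()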
Using this calendar, creating an index or doing offset arithmetic skips weekends and holidays (i.e., Memorial Day/July
4th). For example, the below defines a custom business day offset using the ExampleCalendar. Like any other
offset, it can be used to create a DatetimeIndex or added to datetime or Timestamp objects.
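A sketch:

from pandas.tseries.offsets import CustomBusinessDay
offset = CustomBusinessDay(calendar=cal)
# skips the weekend and Memorial Day (2012-05-28)
datetime(2012, 5, 25) + offset   # Timestamp('2012-05-29 00:00:00')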
Ranges are defined by the start_date and end_date class attributes of AbstractHolidayCalendar. The
defaults are below.
In [222]: AbstractHolidayCalendar.start_date
Out[222]: Timestamp('1970-01-01 00:00:00')
In [223]: AbstractHolidayCalendar.end_date
Out[223]: Timestamp('2030-12-31 00:00:00')
In [226]: cal.holidays(datetime(2012, 1, 1), datetime(2012, 12, 31))
Out[226]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function which returns a holiday class instance.
Any imported calendar class will automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars or calendars with additional rules.
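A sketch of both (USLaborDay is the pandas-provided Labor Day rule):

from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
cal = get_calendar('ExampleCalendar')  # an instance of the calendar defined above
new_cal = HolidayCalendarFactory('NewExampleCalendar', cal, USLaborDay)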
In [229]: cal.rules
Out[229]:
[Holiday: MemorialDay (month=5, day=31, offset=<DateOffset: kwds={'weekday': MO(-1)}>),
 ...]
In [231]: new_cal.rules
Out[231]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: kwds={'weekday': MO(+1)}>),
 Holiday: MemorialDay (month=5, day=31, offset=<DateOffset: kwds={'weekday': MO(-1)}>),
 ...]
One may want to shift or lag the values in a time series back and forward in time. The method for this is shift,
which is available on all of the pandas objects.
In [232]: ts = ts[:5]
In [233]: ts.shift(1)
Out[233]:
2011-01-31 NaN
2011-02-28 -1.281247
2011-03-31 -0.727707
2011-04-29 -0.121306
2011-05-31 -0.097883
Freq: BM, dtype: float64
The shift method accepts a freq argument, which can be a DateOffset class or other timedelta-like object, or an offset alias:
In [234]: ts.shift(5, freq=offsets.BDay())
Out[234]:
2011-02-07 -1.281247
2011-03-07 -0.727707
2011-04-07 -0.121306
2011-05-06 -0.097883
2011-06-07 0.695775
dtype: float64
In [235]: ts.shift(5, freq='BM')
Out[235]:
2011-06-30 -1.281247
2011-07-29 -0.727707
2011-08-31 -0.121306
2011-09-30 -0.097883
2011-10-31 0.695775
Freq: BM, dtype: float64
Rather than changing the alignment of the data and the index, DataFrame and Series objects also have a tshift
convenience method that changes all the dates in the index by a specified number of offsets:
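A minimal sketch:

# move the index labels themselves forward 5 days; the data is not realigned
ts.tshift(5, freq='D')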
Note that with tshift, the leading entry is no longer NaN because the data is not being realigned.
The primary function for changing frequencies is the asfreq function. For a DatetimeIndex, this is basically
just a thin, but convenient wrapper around reindex which generates a date_range and calls reindex.
In [239]: ts
Out[239]:
2010-01-01 0.532005
2010-01-06 0.544874
2010-01-11 -1.001788
Freq: 3B, dtype: float64
In [240]: ts.asfreq(BDay())
Out[240]:
2010-01-01 0.532005
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 0.544874
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -1.001788
Freq: B, dtype: float64
asfreq provides a further convenience, so you can specify an interpolation method for any gaps that may appear after the frequency conversion:
In [241]: ts.asfreq(BDay(), method='pad')
Out[241]:
2010-01-01 0.532005
2010-01-04 0.532005
2010-01-05 0.532005
2010-01-06 0.544874
2010-01-07 0.544874
2010-01-08 0.544874
2010-01-11 -1.001788
Freq: B, dtype: float64
Related to asfreq and reindex is the fillna function documented in the missing data section.
DatetimeIndex can be converted to an array of Python native datetime.datetime objects using the
to_pydatetime method.
19.10 Resampling
Warning: The interface to .resample has changed in 0.18.0 to be more groupby-like and hence more flexible.
See the whatsnew docs for a comparison with prior versions.
Pandas has a simple, powerful, and efficient functionality for performing resampling operations during frequency
conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
.resample() is a time-based groupby, followed by a reduction method on each of its groups. See the cookbook examples for some advanced strategies.
Starting in version 0.18.1, the resample() function can be used directly from DataFrameGroupBy objects, see
the groupby docs.
Note: .resample() is similar to using a .rolling() operation with a time-based offset, see a discussion here
19.10.1 Basics
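The ts used below is not shown in this excerpt; it is presumably built with one random value per second, along these lines (a sketch):

import numpy as np
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)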
In [244]: ts.resample('5Min').sum()
Out[244]:
2012-01-01 24390
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many different parameters to control the frequency
conversion and resampling operation.
The how parameter can be a function name or numpy array function that takes an array and produces aggregated
values:
In [245]: ts.resample('5Min').mean()
Out[245]:
2012-01-01 243.9
Freq: 5T, dtype: float64
In [246]: ts.resample('5Min').ohlc()
Out[246]:
open high low close
2012-01-01 161 495 1 245
In [247]: ts.resample('5Min').max()
Out[247]:
2012-01-01 495
Freq: 5T, dtype: int64
Any function available via dispatching can be given to the how parameter by name, including sum, mean, std, sem,
max, min, median, first, last, ohlc.
For downsampling, closed can be set to 'left' or 'right' to specify which end of the interval is closed:
In [248]: ts.resample('5Min', closed='left').mean()
Out[248]:
2012-01-01 243.9
Freq: 5T, dtype: float64
Parameters like label and loffset are used to manipulate the resulting labels. label specifies whether the result
is labeled with the beginning or the end of the interval. loffset performs a time adjustment on the output labels.
The axis parameter can be set to 0 or 1 and allows you to resample the specified axis for a DataFrame.
kind can be set to timestamp or period to convert the resulting index to/from time-stamp and time-span represen-
tations. By default resample retains the input representation.
convention can be set to start or end when resampling period data (detail below). It specifies how low frequency
periods are converted to higher frequency periods.
19.10.2 Up Sampling
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are
created:
In [254]: ts[:2].resample('250L').ffill()
In [255]: ts[:2].resample('250L').ffill(limit=2)
19.10.3 Sparse Resampling
Sparse timeseries are ones where you have a lot fewer points relative to the amount of time you are looking to resample. Naively upsampling a sparse series can potentially generate lots of intermediate values. When you don't want to use a method to fill these values, e.g. fill_method is None, then intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently resample only the groups that are not all NaN.
In [258]: ts.resample('3T').sum()
Out[258]:
2014-01-01 00:00:00 0.0
2014-01-01 00:03:00 NaN
2014-01-01 00:06:00 NaN
2014-01-01 00:09:00 NaN
2014-01-01 00:12:00 NaN
2014-01-01 00:15:00 NaN
2014-01-01 00:18:00 NaN
...
2014-04-09 23:42:00 NaN
2014-04-09 23:45:00 NaN
2014-04-09 23:48:00 NaN
2014-04-09 23:51:00 NaN
2014-04-09 23:54:00 NaN
2014-04-09 23:57:00 NaN
2014-04-10 00:00:00 99.0
Freq: 3T, Length: 47521, dtype: float64
We can instead only resample those groups where we have points as follows:
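One way is to group on rounded timestamps instead of calling resample; a sketch (the round helper truncates each timestamp down to a multiple of the frequency):

from functools import partial
from pandas.tseries.frequencies import to_offset

def round(t, freq):
    # truncate the timestamp t to a multiple of freq
    freq = to_offset(freq)
    return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)

ts.groupby(partial(round, freq='3T')).sum()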
19.10.4 Aggregation
Similar to the aggregating API, groupby API, and the window functions API, a Resampler can be selectively resampled. When resampling a DataFrame, the default is to act on all columns with the same function.
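The df used below is presumably built along these lines (a sketch):

import numpy as np
df = pd.DataFrame(np.random.randn(1000, 3),
                  index=pd.date_range('1/1/2012', freq='S', periods=1000),
                  columns=['A', 'B', 'C'])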
In [264]: r = df.resample('3T')
In [265]: r.mean()
Out[265]:
A B C
2012-01-01 00:00:00 -0.220339 0.034854 -0.073757
2012-01-01 00:03:00 0.037070 0.040013 0.053754
2012-01-01 00:06:00 -0.041597 -0.144562 -0.007614
2012-01-01 00:09:00 0.043127 -0.076432 -0.032570
2012-01-01 00:12:00 -0.027609 0.054618 0.056878
2012-01-01 00:15:00 -0.014181 0.043958 0.077734
In [267]: r[['A','B']].mean()
Out[267]:
A B
2012-01-01 00:00:00 -0.220339 0.034854
2012-01-01 00:03:00 0.037070 0.040013
2012-01-01 00:06:00 -0.041597 -0.144562
2012-01-01 00:09:00 0.043127 -0.076432
2012-01-01 00:12:00 -0.027609 0.054618
2012-01-01 00:15:00 -0.014181 0.043958
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [268]: r['A'].agg([np.sum, np.mean, np.std])
Out[268]:
sum mean std
2012-01-01 00:00:00 -39.660974 -0.220339 1.033912
2012-01-01 00:03:00 6.672559 0.037070 0.971503
2012-01-01 00:06:00 -7.487453 -0.041597 1.018418
2012-01-01 00:09:00 7.762901 0.043127 1.025842
2012-01-01 00:12:00 -4.969624 -0.027609 0.961649
2012-01-01 00:15:00 -1.418119 -0.014181 0.978847
On a resampled DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
In [269]: r.agg([np.sum, np.mean])
Out[269]:
[wide output omitted: a DataFrame with one row per 3-minute bin and a hierarchical column index (A, sum), (A, mean), (B, sum), (B, mean), (C, sum), (C, mean)]
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
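A sketch:

r.agg({'A': np.sum,
       'B': lambda x: np.std(x, ddof=1)})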
The function names can also be strings. In order for a string to be valid, it must be implemented on the resampled object.
Furthermore, you can also specify multiple aggregation functions for each column separately.
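Sketches of both:

# function names as strings
r.agg({'A': 'sum', 'B': 'std'})
# several functions per column
r.agg({'A': ['sum', 'std'], 'B': ['mean', 'std']})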
If a DataFrame does not have a datetimelike index, but instead you want to resample based on a datetimelike column in the frame, it can be passed to the on keyword:
In [273]: df = pd.DataFrame({'date': pd.date_range('2015-01-04', freq='W', periods=5),
   .....:                    'a': np.arange(5)},
   .....:                   index=pd.MultiIndex.from_arrays(
   .....:                       [[1, 2, 3, 4, 5],
   .....:                        pd.date_range('2015-01-04', freq='W', periods=5)],
   .....:                       names=['v', 'd']))
   .....:
In [274]: df
Out[274]:
a date
v d
1 2015-01-04 0 2015-01-04
2 2015-01-11 1 2015-01-11
3 2015-01-18 2 2015-01-18
4 2015-01-25 3 2015-01-25
5 2015-02-01 4 2015-02-01
In [275]: df.resample('M', on='date').sum()
Out[275]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike level of a MultiIndex, its name or location can be passed to the level keyword.
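A sketch, reusing the frame above:

# 'd' is the datetimelike level of the MultiIndex
df.resample('M', level='d').sum()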
19.11 Time Span Representation
Regular intervals of time are represented by Period objects in pandas, while sequences of Period objects are collected in a PeriodIndex, which can be created with the convenience function period_range.
19.11.1 Period
A Period represents a span of time (e.g., a day, a month, a quarter, etc.). You can specify the span via the freq keyword using a frequency alias, as below. Because freq represents a span of Period, it cannot be negative, like -3D.
Adding and subtracting integers from periods shifts the period by its own frequency. Arithmetic is not allowed between Period objects with different freq (span).
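The p used below is presumably bound along these lines (inferred from the outputs; a sketch):

p = pd.Period('2012', freq='A-DEC')   # for the annual examples
# ... later rebound to a 2-month span for the 2M examples:
p = pd.Period('2012-01', freq='2M')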
In [282]: p + 1
Out[282]: Period('2013', 'A-DEC')
In [283]: p - 3
Out[283]: Period('2009', 'A-DEC')
In [285]: p + 2
Out[285]: Period('2012-05', '2M')
In [286]: p - 1
Out[286]: Period('2011-11', '2M')
Comparing Period instances with different freq likewise raises an error.
If Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like values can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [288]: p = pd.Period('2014-07-01 09:00', freq='H')
In [289]: p + Hour(2)
Out[289]: Period('2014-07-01 11:00', 'H')
In [290]: p + timedelta(minutes=120)
Out[290]: Period('2014-07-01 11:00', 'H')
In [1]: p + Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other freqs, only the same offsets can be added. Otherwise, ValueError will be raised.
In [292]: p = pd.Period('2014-07', freq='M')
In [293]: p + MonthEnd(3)
Out[293]: Period('2014-10', 'M')
In [1]: p + MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will return the number of frequency units between
them:
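A sketch:

pd.Period('2012', freq='A-DEC') - pd.Period('2002', freq='A-DEC')   # 10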
Regular sequences of Period objects can be collected in a PeriodIndex, which can be constructed using the
period_range convenience function:
In [295]: prng = pd.period_range('1/1/2011', '1/1/2012', freq='M')
In [296]: prng
Out[296]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]', freq='M')
Passing multiplied frequency outputs a sequence of Period which has multiplied span.
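A sketch:

pd.period_range(start='2014-01', freq='3M', periods=4)
# PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]', freq='3M')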
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas objects:
In [299]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [300]: ps
Out[300]:
2011-01 -1.022670
2011-02 1.371155
2011-03 1.035277
2011-04 1.694400
2011-05 -1.659733
2011-06 0.511432
2011-07 0.433176
2011-08 -0.317955
2011-09 -0.517114
2011-10 -0.310466
2011-11 0.543957
2011-12 0.492003
2012-01 0.193420
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [301]: idx = pd.period_range('2014-07-01 09:00', periods=5, freq='H')
In [302]: idx
Out[302]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]', freq='H')
In [304]: idx = pd.period_range('2014-07', periods=5, freq='M')
In [305]: idx
Out[305]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]', freq='M')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
In [307]: pi = pd.period_range('2016-01-01', periods=3, freq='M')
In [308]: pi
Out[308]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]', freq='M')
In [309]: pi.dtype
Out[309]: period[M]
The period dtype can be used in .astype(...). It allows one to change the freq of a PeriodIndex like
.asfreq() and convert a DatetimeIndex to PeriodIndex like to_period():
# convert to DatetimeIndex
In [311]: pi.astype('datetime64[ns]')
Out[311]:
DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]',
freq='MS')
# convert to PeriodIndex
In [312]: dti = pd.date_range('2011-01-01', freq='M', periods=3)
In [313]: dti
Out[313]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype=
'datetime64[ns]', freq='M')
In [314]: dti.astype('period[M]')
Out[314]:
PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]', freq='M')
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as
DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [315]: ps['2011-01']
Out[315]: -1.022669594890105
In [317]: ps['10/31/2011':'12/31/2011']
Out[317]:
2011-10 -0.310466
2011-11 0.543957
2011-12 0.492003
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [318]: ps['2011']
Out[318]:
2011-01 -1.022670
2011-02 1.371155
2011-03 1.035277
2011-04 1.694400
2011-05 -1.659733
2011-06 0.511432
2011-07 0.433176
2011-08 -0.317955
2011-09 -0.517114
2011-10 -0.310466
2011-11 0.543957
2011-12 0.492003
Freq: M, dtype: float64
In [319]: dfp = pd.DataFrame(np.random.randn(600, 1),
   .....:                     columns=['A'],
   .....:                     index=pd.period_range('2013-01-01 9:00', periods=600, freq='T'))
   .....:
In [320]: dfp
Out[320]:
A
2013-01-01 09:00 0.197720
2013-01-01 09:01 -0.284769
2013-01-01 09:02 0.061491
2013-01-01 09:03 1.630257
2013-01-01 09:04 2.042442
2013-01-01 09:05 -0.804392
2013-01-01 09:06 0.212760
... ...
2013-01-01 18:53 0.150586
2013-01-01 18:54 -0.679569
2013-01-01 18:55 -0.910216
2013-01-01 18:56 -0.413168
2013-01-01 18:57 -0.247752
2013-01-01 18:58 1.590875
2013-01-01 18:59 -2.005294
In [321]: dfp['2013-01-01 10H']
Out[321]:
A
2013-01-01 10:00 -0.569936
2013-01-01 10:01 -1.179183
2013-01-01 10:02 -0.838602
2013-01-01 10:03 -1.727539
2013-01-01 10:04 1.334027
2013-01-01 10:05 0.417423
2013-01-01 10:06 -0.221189
... ...
2013-01-01 10:53 -0.375925
2013-01-01 10:54 0.212750
2013-01-01 10:55 -0.592417
2013-01-01 10:56 -0.466064
2013-01-01 10:57 -1.715347
2013-01-01 10:58 -0.634913
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from
10:00 to 11:59.
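A sketch:

dfp['2013-01-01 10H':'2013-01-01 11H']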
The frequency of Period and PeriodIndex can be converted via the asfreq method. Let's start with the fiscal year 2011, ending in December:
In [323]: p = pd.Period('2011', freq='A-DEC')
In [324]: p
Out[324]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can specify whether to return the starting or
ending month:
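A sketch:

p.asfreq('M', how='start')   # Period('2011-01', 'M')
p.asfreq('M', how='end')     # Period('2011-12', 'M')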
Converting to a super-period (e.g., annual frequency is a super-period of quarterly frequency) automatically returns
the super-period that includes the input period:
In [329]: p = pd.Period('2011-12', freq='M')
In [330]: p.asfreq('A-NOV')
Out[330]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in November, the monthly period of December 2011 is actually in the 2012 A-NOV period. Period conversions with anchored frequencies are particularly useful for working with various quarterly data common to economics, business, and other fields. Many organizations define quarters relative to the month in which their fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or a few months into 2011. Via anchored frequencies, pandas works for all quarterly frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [331]: p = pd.Period('2012Q1', freq='Q-DEC')
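The start and end of the quarter can then be obtained, e.g. (a sketch):

p.asfreq('D', 's')   # Period('2012-01-01', 'D')
p.asfreq('D', 'e')   # Period('2012-03-31', 'D')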
Timestamped data can be converted to PeriodIndex-ed data using to_period and vice-versa using
to_timestamp:
In [337]: rng = pd.date_range('1/1/2012', periods=5, freq='M')
In [338]: ts = pd.Series(np.random.randn(len(rng)), rng)
In [339]: ts
Out[339]:
2012-01-31 2.167674
2012-02-29 -1.505130
2012-03-31 1.005802
2012-04-30 0.481525
2012-05-31 -0.352151
Freq: M, dtype: float64
In [340]: ps = ts.to_period()
In [341]: ps
Out[341]:
2012-01 2.167674
2012-02 -1.505130
2012-03 1.005802
2012-04 0.481525
2012-05 -0.352151
Freq: M, dtype: float64
In [342]: ps.to_timestamp()
Out[342]:
2012-01-01 2.167674
2012-02-01 -1.505130
2012-03-01 1.005802
2012-04-01 0.481525
2012-05-01 -0.352151
Freq: MS, dtype: float64
Remember that 's' and 'e' can be used to return the timestamps at the start or end of the period:
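A sketch:

ps.to_timestamp('D', how='s')   # label each period by its start day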
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:
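The construction is presumably along these lines (a sketch):

prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
# 9am on the first day of the month following each quarter end
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9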
In [347]: ts.head()
Out[347]:
1990-03-01 09:00 -0.608988
1990-06-01 09:00 0.412294
1990-09-01 09:00 -0.715938
1990-12-01 09:00 1.297773
1991-03-01 09:00 -2.260765
Freq: H, dtype: float64
If you have data that is outside of the Timestamp bounds (see Timestamp limitations), then you can use a PeriodIndex and/or a Series of Periods to do computations.
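The span used below is presumably (a sketch):

span = pd.period_range('1215-01-01', '1381-01-01', freq='D')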
In [349]: span
Out[349]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632, freq='D')
To convert from an int64-based YYYYMMDD representation:
In [350]: s = pd.Series([20121231, 20141130, 99991231])
In [351]: s
Out[351]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [352]: def conv(x):
   .....:     return pd.Period(year=x // 10000, month=x // 100 % 100,
   .....:                      day=x % 100, freq='D')
   .....:
In [353]: s.apply(conv)
Out[353]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: object
In [354]: s.apply(conv)[2]
Out[354]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [355]: span = pd.PeriodIndex(s.apply(conv))
In [356]: span
Out[356]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]',
freq='D')
19.12 Time Zone Handling
Pandas provides rich support for working with timestamps in different time zones using the pytz and dateutil libraries. dateutil support is new in 0.14.1 and is currently only supported for fixed offset and tzfile zones. The default library is pytz. Support for dateutil is provided for compatibility with other applications, e.g. if you use dateutil in other python packages.
To supply the time zone, you can use the tz keyword to date_range and other functions. Dateutil time zone strings
are distinguished from pytz time zones by starting with dateutil/.
In pytz you can find a list of common (and less common) time zones using from pytz import
common_timezones, all_timezones.
dateutil uses the OS time zones, so there isn't a fixed list available. For common zones, the names are the same as pytz.
# pytz
In [359]: rng_pytz = pd.date_range('3/6/2012 00:00', periods=10, freq='D',
.....: tz='Europe/London')
.....:
In [360]: rng_pytz.tz
Out[360]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [361]: rng_dateutil = pd.date_range('3/6/2012 00:00', periods=10, freq='D',
.....: tz='dateutil/Europe/London')
.....:
In [362]: rng_dateutil.tz
Out[362]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [363]: rng_utc = pd.date_range('3/6/2012 00:00', periods=10, freq='D',
   .....:                         tz=dateutil.tz.tzutc())
   .....:
In [364]: rng_utc.tz
Out[364]: tzutc()
Note that the UTC timezone is a special case in dateutil and should be constructed explicitly as an instance of
dateutil.tz.tzutc. You can also construct other timezones explicitly first, which gives you more control over
which time zone is used:
# pytz
In [365]: tz_pytz = pytz.timezone('Europe/London')
# dateutil
In [368]: tz_dateutil = dateutil.tz.gettz('Europe/London')
Timestamps, like Python's datetime.datetime object, can be either time zone naive or time zone aware. Naive time series and DatetimeIndex objects can be localized using tz_localize:
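The ts_utc shown below is presumably built like this (a sketch):

import numpy as np
rng = pd.date_range('3/6/2012 00:00', periods=15, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts_utc = ts.tz_localize('UTC')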
In [373]: ts_utc
Out[373]:
2012-03-06 00:00:00+00:00 0.679135
2012-03-07 00:00:00+00:00 0.345668
2012-03-08 00:00:00+00:00 -1.143903
2012-03-09 00:00:00+00:00 0.487087
2012-03-10 00:00:00+00:00 -1.421073
2012-03-11 00:00:00+00:00 -0.327463
2012-03-12 00:00:00+00:00 0.169899
2012-03-13 00:00:00+00:00 0.867568
2012-03-14 00:00:00+00:00 -0.834122
2012-03-15 00:00:00+00:00 -1.698494
2012-03-16 00:00:00+00:00 0.974717
2012-03-17 00:00:00+00:00 0.966771
2012-03-18 00:00:00+00:00 -0.754168
2012-03-19 00:00:00+00:00 -1.434246
2012-03-20 00:00:00+00:00 0.848935
Freq: D, dtype: float64
Again, you can explicitly construct the timezone object first. You can use the tz_convert method to convert pandas objects from one time zone to another:
In [374]: ts_utc.tz_convert('US/Eastern')
Out[374]:
2012-03-05 19:00:00-05:00 0.679135
2012-03-06 19:00:00-05:00 0.345668
2012-03-07 19:00:00-05:00 -1.143903
2012-03-08 19:00:00-05:00 0.487087
2012-03-09 19:00:00-05:00 -1.421073
2012-03-10 19:00:00-05:00 -0.327463
2012-03-11 20:00:00-04:00 0.169899
2012-03-12 20:00:00-04:00 0.867568
2012-03-13 20:00:00-04:00 -0.834122
2012-03-14 20:00:00-04:00 -1.698494
2012-03-15 20:00:00-04:00 0.974717
2012-03-16 20:00:00-04:00 0.966771
2012-03-17 20:00:00-04:00 -0.754168
2012-03-18 20:00:00-04:00 -1.434246
2012-03-19 20:00:00-04:00 0.848935
Freq: D, dtype: float64
Warning: Be wary of conversions between libraries. For some zones pytz and dateutil have different definitions of the zone. This is more of a problem for unusual timezones than for standard zones like US/Eastern.
Warning: Be aware that a timezone definition across versions of timezone libraries may not be considered equal.
This may cause problems when working with stored data that is localized using one version and operated on with
a different version. See here for how to handle such a situation.
Warning: It is incorrect to pass a timezone directly into the datetime.datetime constructor (e.g., datetime.datetime(2011, 1, 1, tz=timezone('US/Eastern'))). Instead, the datetime needs to be localized using the localize method on the timezone.
Under the hood, all timestamps are stored in UTC. Scalar values from a DatetimeIndex with a time zone will have
their fields (day, hour, minute) localized to the time zone. However, timestamps with the same UTC value are still
considered to be equal even if they are in different time zones:
In [375]: rng_eastern = rng_utc.tz_convert('US/Eastern')
In [377]: rng_eastern[5]
Out[377]: Timestamp('2012-03-10 19:00:00-0500', tz='US/Eastern', freq='D')
In [376]: rng_berlin = rng_utc.tz_convert('Europe/Berlin')
In [378]: rng_berlin[5]
Out[378]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [381]: rng_berlin[5]
Out[381]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [382]: rng_eastern[5].tz_convert('Europe/Berlin')
Out[382]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin')
Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin')
In [384]: rng[5].tz_localize('Asia/Shanghai')
Out[384]: Timestamp('2012-03-11 00:00:00+0800', tz='Asia/Shanghai')
Operations between Series in different time zones will yield UTC Series, aligning the data on the UTC timestamps:
In [385]: eastern = ts_utc.tz_convert('US/Eastern')
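The rest of that session is presumably (a sketch; note the result values below are exactly twice the ts_utc values):

berlin = ts_utc.tz_convert('Europe/Berlin')
result = eastern + berlin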
In [388]: result
Out[388]:
2012-03-06 00:00:00+00:00 1.358269
2012-03-07 00:00:00+00:00 0.691336
2012-03-08 00:00:00+00:00 -2.287805
2012-03-09 00:00:00+00:00 0.974174
2012-03-10 00:00:00+00:00 -2.842146
2012-03-11 00:00:00+00:00 -0.654926
2012-03-12 00:00:00+00:00 0.339798
2012-03-13 00:00:00+00:00 1.735136
2012-03-14 00:00:00+00:00 -1.668245
2012-03-15 00:00:00+00:00 -3.396988
2012-03-16 00:00:00+00:00 1.949435
2012-03-17 00:00:00+00:00 1.933541
2012-03-18 00:00:00+00:00 -1.508335
2012-03-19 00:00:00+00:00 -2.868493
2012-03-20 00:00:00+00:00 1.697870
Freq: D, dtype: float64
In [389]: result.index
Out[389]: DatetimeIndex([...], dtype='datetime64[ns, UTC]', freq='D')
In [390]: didx = pd.date_range(start='2014-08-01 09:00', freq='H',
   .....:                      periods=10, tz='US/Eastern')
   .....:
In [391]: didx
Out[391]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
# tz_localize(None) removes the timezone, yielding the local (wall) time
In [392]: didx.tz_localize(None)
# tz_convert(None) removes the timezone after converting to UTC
In [393]: didx.tz_convert(None)
In some cases, localize cannot determine the DST and non-DST hours when there are duplicates. This often happens when reading files or database records that simply duplicate the hours. Passing ambiguous='infer' (the infer_dst argument in prior releases) into tz_localize will attempt to determine the right offset. Below, the top example will fail as it contains ambiguous times, while the bottom will infer the right offset.
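The rng_hourly used here is presumably a DatetimeIndex with a duplicated 01:00 hour around the November DST transition (a sketch):

rng_hourly = pd.DatetimeIndex(['11/06/2011 00:00', '11/06/2011 01:00',
                               '11/06/2011 01:00', '11/06/2011 02:00',
                               '11/06/2011 03:00'])
# this succeeds where the plain tz_localize above fails
rng_hourly_eastern = rng_hourly.tz_localize('US/Eastern', ambiguous='infer')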
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try
using the 'ambiguous' argument
In [397]: rng_hourly_eastern.tolist()
Out[397]:
[Timestamp('2011-11-06 00:00:00-0400', tz='US/Eastern'),
Timestamp('2011-11-06 01:00:00-0400', tz='US/Eastern'),
Timestamp('2011-11-06 01:00:00-0500', tz='US/Eastern'),
Timestamp('2011-11-06 02:00:00-0500', tz='US/Eastern'),
Timestamp('2011-11-06 03:00:00-0500', tz='US/Eastern')]
In addition to 'infer', there are several other arguments supported. Passing an array-like of bools or 0s/1s, where True represents a DST hour and False a non-DST hour, allows for distinguishing more than one DST transition (e.g., if you have multiple records in a database, each with their own DST transition). Or, passing 'NaT' will fill in transition times with not-a-time values. These methods are available in the DatetimeIndex constructor as well as tz_localize.
In [402]: didx
Out[402]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
# tz_localize(None) removes the timezone, yielding the local (wall) time
In [403]: didx.tz_localize(None)
# tz_convert(None) removes the timezone after converting to UTC
In [404]: didx.tz_convert(None)
Series/DatetimeIndex with a timezone naive value are represented with a dtype of datetime64[ns].
In [405]: s_naive = pd.Series(pd.date_range('20130101', periods=3))
In [407]: s_naive
Out[407]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
Series/DatetimeIndex with a timezone aware value are represented with a dtype of datetime64[ns, tz].
In [408]: s_aware = pd.Series(pd.date_range('20130101',periods=3,tz='US/Eastern'))
In [409]: s_aware
Out[409]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Both of these Series can be manipulated via the .dt accessor; see here. For example, to localize and convert a naive stamp to timezone aware:
In [410]: s_naive.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[410]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Furthermore, you can .astype(...) timezone aware (and naive). This operation is effectively a localize AND convert on a naive stamp, and a convert on an aware stamp.
# localize and convert a naive timezone
In [411]: s_naive.astype('datetime64[ns, US/Eastern]')
Out[411]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
# remove the timezone (resulting in naive UTC times)
In [412]: s_aware.astype('datetime64[ns]')
Out[412]:
0 2013-01-01 05:00:00
1 2013-01-02 05:00:00
2 2013-01-03 05:00:00
dtype: datetime64[ns]
# convert to a new timezone
In [413]: s_aware.astype('datetime64[ns, CET]')
Out[413]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note: Using the .values accessor on a Series returns a numpy array of the data. These values are converted to UTC, as numpy does not currently support timezones (even though it is printing in the local timezone!).
In [414]: s_naive.values
Out[414]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [415]: s_aware.values
Out[415]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
Further note that once converted to a numpy array these would lose the timezone information.
In [416]: pd.Series(s_aware.values)
Out[416]:
0 2013-01-01 05:00:00
1 2013-01-02 05:00:00
2 2013-01-03 05:00:00
dtype: datetime64[ns]
In [417]: pd.Series(s_aware.values).dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[417]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
TWENTY
TIME DELTAS
Note: Starting in v0.15.0, we introduce a new scalar type Timedelta, which is a subclass of datetime.timedelta, and behaves in a similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representation, parsing, and attributes.
Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds. They can be both positive and negative.
20.1 Parsing
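You can construct a Timedelta scalar through various arguments; the string forms presumably opened the session (a sketch):

pd.Timedelta('1 days')             # Timedelta('1 days 00:00:00')
pd.Timedelta('1 days 2 hours')     # Timedelta('1 days 02:00:00')
pd.Timedelta('-1 days 2 min 3us')  # Timedelta('-2 days +23:57:59.999997')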
# like datetime.timedelta
# note: these MUST be specified as keyword arguments
In [5]: pd.Timedelta(days=1, seconds=1)
Out[5]: Timedelta('1 days 00:00:01')
# from a datetime.timedelta/np.timedelta64
In [7]: pd.Timedelta(datetime.timedelta(days=1, seconds=1))
Out[7]: Timedelta('1 days 00:00:01')
# a NaT
In [10]: pd.Timedelta('nan')
Out[10]: NaT
In [11]: pd.Timedelta('nat')
Out[11]: NaT
DateOffsets (Day, Hour, Minute, Second, Milli, Micro, Nano) can also be used in construction.
In [12]: pd.Timedelta(Second(2))
Out[12]: Timedelta('0 days 00:00:02')
20.1.1 to_timedelta
Warning: Prior to 0.15.0, pd.to_timedelta would return a Series for list-like/Series input, and a np.timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, a Series for Series input, and a Timedelta for scalar input.
The arguments to pd.to_timedelta are now (arg, unit='ns', box=True); previously they were (arg, box=True, unit='ns'), as these are more logical.
Using the top-level pd.to_timedelta, you can convert a scalar, array, list, or Series from a recognized timedelta
format / value into a Timedelta type. It will construct Series if the input is a Series, a scalar if the input is scalar-like,
otherwise will output a TimedeltaIndex.
You can parse a single string to a Timedelta:
In [15]: pd.to_timedelta('15.5us')
Out[15]: Timedelta('0 days 00:00:00.000015')
or a list/array of strings:
In [16]: pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])
Out[16]: TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], dtype='timedelta64[ns]', freq=None)
Pandas represents Timedeltas in nanosecond resolution using 64 bit integers. As such, the 64 bit integer limits
determine the Timedelta limits.
In [19]: pd.Timedelta.min
Out[19]: Timedelta('-106752 days +00:12:43.145224')
In [20]: pd.Timedelta.max
Out[20]: Timedelta('106751 days 23:47:16.854775')
20.2 Operations
You can operate on Series/DataFrames and construct timedelta64[ns] Series through subtraction operations on
datetime64[ns] Series, or Timestamps.
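The s, td and df used below are presumably built like this (a sketch):

s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([pd.Timedelta(days=i) for i in range(3)])
df = pd.DataFrame(dict(A=s, B=td))
# the C column shown further below is then presumably
# df['C'] = df['A'] + df['B']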
In [24]: df
Out[24]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
In [26]: df
Out[26]:
A B C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05
In [27]: df.dtypes
Out[27]:
A datetime64[ns]
B timedelta64[ns]
C datetime64[ns]
dtype: object
In [28]: s - s.max()
Out[28]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [29]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[29]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]
In [30]: s + datetime.timedelta(minutes=5)
Out[30]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [31]: s + Minute(5)
Out[31]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [32]: s + Minute(5) + Milli(5)
Out[32]:
0 2012-01-01 00:05:00.005
1 2012-01-02 00:05:00.005
2 2012-01-03 00:05:00.005
dtype: datetime64[ns]
In [33]: y = s - s[0]
In [34]: y
Out[34]:
0 0 days
1 1 days
2 2 days
dtype: timedelta64[ns]
Series of timedeltas with NaT values are supported:
In [35]: y = s - s.shift()
In [36]: y
Out[36]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
Elements can be set to NaT using np.nan analogously to datetimes:
In [37]: y[1] = np.nan
In [38]: y
Out[38]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]
Operands can also appear in a reversed order (a singular object operated with a Series):
In [39]: s.max() - s
Out[39]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [40]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[40]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [41]: datetime.timedelta(minutes=5) + s
Out[41]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
min, max and the corresponding idxmin, idxmax operations are supported on frames:
In [42]: A = s - pd.Timestamp('20120101') - pd.Timedelta('00:05:05')
In [43]: B = s - pd.Series(pd.date_range('2012-1-2', periods=3, freq='D'))
In [44]: df = pd.DataFrame(dict(A=A, B=B))
In [45]: df
Out[45]:
A B
0 -1 days +23:54:55 -1 days
1 0 days 23:54:55 -1 days
2 1 days 23:54:55 -1 days
In [46]: df.min()
Out[46]:
A -1 days +23:54:55
B -1 days +00:00:00
dtype: timedelta64[ns]
In [47]: df.min(axis=1)
Out[47]:
0 -1 days
1 -1 days
2 -1 days
dtype: timedelta64[ns]
In [48]: df.idxmin()
Out[48]:
A 0
B 0
dtype: int64
In [49]: df.idxmax()
Out[49]:
A 2
B 0
dtype: int64
min, max, idxmin, idxmax operations are supported on Series as well. A scalar result will be a Timedelta.
In [50]: df.min().max()
Out[50]: Timedelta('-1 days +23:54:55')
In [51]: df.min(axis=1).min()
Out[51]: Timedelta('-1 days +00:00:00')
In [52]: df.min().idxmax()
Out[52]: 'A'
In [53]: df.min(axis=1).idxmin()
Out[53]: 0
You can fillna on timedeltas. Integers will be interpreted as seconds. You can pass a timedelta to get a particular value.
In [54]: y.fillna(0)
Out[54]:
0 0 days
1 0 days
2 1 days
dtype: timedelta64[ns]
In [55]: y.fillna(10)
Out[55]:
0 0 days 00:00:10
1 0 days 00:00:10
2 1 days 00:00:00
dtype: timedelta64[ns]
In [56]: y.fillna(pd.Timedelta(days=-1, seconds=5))
Out[56]:
0 -1 days +00:00:05
1 -1 days +00:00:05
2 1 days 00:00:00
dtype: timedelta64[ns]
Negation, multiplication by a scalar, and abs work as expected; the td1 value is inferred from the outputs:
In [57]: td1 = pd.Timedelta('-1 days 2 hours 3 seconds')
In [58]: td1
Out[58]: Timedelta('-2 days +21:59:57')
In [59]: -1 * td1
Out[59]: Timedelta('1 days 02:00:03')
In [60]: - td1
Out[60]: Timedelta('1 days 02:00:03')
In [61]: abs(td1)
Out[61]: Timedelta('1 days 02:00:03')
20.3 Reductions
Numeric reduction operation for timedelta64[ns] will return Timedelta objects. As usual NaT are skipped
during evaluation.
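The y2 shown below is presumably (a sketch):

y2 = pd.Series(pd.to_timedelta(['-1 days +00:00:05', 'nat',
                                '-1 days +00:00:05', '1 days']))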
In [63]: y2
Out[63]:
0 -1 days +00:00:05
1 NaT
2 -1 days +00:00:05
3 1 days 00:00:00
dtype: timedelta64[ns]
In [64]: y2.mean()
Out[64]: Timedelta('-1 days +16:00:03.333333')
In [65]: y2.median()
Out[65]: Timedelta('-1 days +00:00:05')
In [66]: y2.quantile(.1)
Out[66]: Timedelta('-1 days +00:00:05')
In [67]: y2.sum()
Out[67]: Timedelta('-1 days +00:00:10')
20.4 Frequency Conversion
Timedelta Series, TimedeltaIndex, and Timedelta scalars can be converted to other 'frequencies' by astyping to a specific timedelta type, or by dividing by another timedelta:
In [71]: td
Out[71]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 NaT
dtype: timedelta64[ns]
# to days
In [72]: td / np.timedelta64(1, 'D')
Out[72]:
0 31.000000
1 31.000000
2 31.003507
3 NaN
dtype: float64
In [73]: td.astype('timedelta64[D]')
Out[73]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64
# to seconds
In [74]: td / np.timedelta64(1, 's')
Out[74]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64
In [75]: td.astype('timedelta64[s]')
Out[75]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64
# to months (these are constant months)
In [76]: td / np.timedelta64(1, 'M')
Out[76]:
0 1.018501
1 1.018501
2 1.018617
3 NaN
dtype: float64
In [77]: td * -1
Out[77]:
0 -31 days +00:00:00
1 -31 days +00:00:00
2 -32 days +23:54:57
3 NaT
dtype: timedelta64[ns]
In [78]: td * pd.Series([1, 2, 3, 4])
Out[78]:
0 31 days 00:00:00
1 62 days 00:00:00
2 93 days 00:15:09
3 NaT
dtype: timedelta64[ns]
20.5 Attributes
You can access various components of a Timedelta or TimedeltaIndex directly using the attributes days, seconds, microseconds, nanoseconds. These are identical to the values returned by datetime.timedelta, in that, for example, the .seconds attribute represents the number of seconds >= 0 and < 1 day. These are signed according to whether the Timedelta is signed.
These operations can also be directly accessed via the .dt property of a Series.
Note: Note that the attributes are NOT the displayed values of the Timedelta. Use .components to retrieve the displayed values.
For a Series:
In [79]: td.dt.days
Out[79]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64
In [80]: td.dt.seconds
Out[80]:
0 0.0
1 0.0
2 303.0
3 NaN
dtype: float64
You can access the value of the fields for a scalar Timedelta directly.
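The tds value is inferred from the outputs below (a sketch):

tds = pd.Timedelta('31 days 5 min 3 sec')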
In [82]: tds.days
Out[82]: 31
In [83]: tds.seconds
Out[83]: 303
In [84]: (-tds).seconds
Out[84]: 86097
You can use the .components property to access a reduced form of the timedelta. This returns a DataFrame
indexed similarly to the Series. These are the displayed values of the Timedelta.
In [85]: td.dt.components
Out[85]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 31.0 0.0 0.0 0.0 0.0 0.0 0.0
1 31.0 0.0 0.0 0.0 0.0 0.0 0.0
2 31.0 0.0 5.0 3.0 0.0 0.0 0.0
3 NaN NaN NaN NaN NaN NaN NaN
In [86]: td.dt.components.seconds
Out[86]:
0 0.0
1 0.0
2 3.0
3 NaN
Name: seconds, dtype: float64
You can convert a Timedelta to an ISO 8601 Duration string with the .isoformat method.
New in version 0.20.0.
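A sketch:

pd.Timedelta(days=6, minutes=50, seconds=3,
             milliseconds=10, microseconds=10,
             nanoseconds=12).isoformat()
# 'P6DT0H50M3.010010012S'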
20.6 TimedeltaIndex
To generate an index with time delta, you can use either the TimedeltaIndex or the timedelta_range constructor. Using TimedeltaIndex you can pass string-like, Timedelta, timedelta, or np.timedelta64 objects:
In [88]: pd.TimedeltaIndex(['1 days', '1 days, 00:00:05',
   ....:                    np.timedelta64(2, 'D'),
   ....:                    datetime.timedelta(days=2, seconds=2)])
   ....:
Out[88]:
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
'2 days 00:00:02'],
dtype='timedelta64[ns]', freq=None)
Similarly to the other datetime-like indices, DatetimeIndex and PeriodIndex, you can use TimedeltaIndex as the index of pandas objects.
In [91]: s = pd.Series(np.arange(100),
....: index=pd.timedelta_range('1 days', periods=100, freq='h'))
....:
In [92]: s
Out[92]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
1 days 05:00:00 5
1 days 06:00:00 6
..
4 days 21:00:00 93
4 days 22:00:00 94
4 days 23:00:00 95
5 days 00:00:00 96
5 days 01:00:00 97
5 days 02:00:00 98
5 days 03:00:00 99
Freq: H, Length: 100, dtype: int64
Furthermore you can use partial string selection and the range will be inferred:
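A sketch:

s['1 day':'2 day']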
20.6.2 Operations
Finally, the combination of TimedeltaIndex with DatetimeIndex allows certain combination operations that are NaT preserving:
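The tdi and dti below are presumably (a sketch):

tdi = pd.TimedeltaIndex(['1 days', pd.NaT, '2 days'])
dti = pd.date_range('20130101', periods=3)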
In [98]: tdi.tolist()
Out[98]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]
In [100]: dti.tolist()
Out[100]:
[Timestamp('2013-01-01 00:00:00', freq='D'),
Timestamp('2013-01-02 00:00:00', freq='D'),
Timestamp('2013-01-03 00:00:00', freq='D')]
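The NaT-preserving combinations then look like (a sketch):

(dti + tdi).tolist()   # [Timestamp('2013-01-02'), NaT, Timestamp('2013-01-05')]
(dti - tdi).tolist()   # [Timestamp('2012-12-31'), NaT, Timestamp('2013-01-01')]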
20.6.3 Conversions
Similarly to frequency conversion on a Series above, you can convert these indices to yield another Index.
In [104]: tdi.astype('timedelta64[s]')
Out[104]: Float64Index([86400.0, nan, 172800.0], dtype='float64')
Scalar type ops work as well; these can potentially return a different type of index:
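Sketches of both cases:

tdi + pd.Timestamp('20130101')   # yields a DatetimeIndex
pd.Timestamp('20130101') - tdi   # also a DatetimeIndex
tdi / np.timedelta64(1, 'D')     # yields a Float64Index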
20.7 Resampling
Similar to timeseries resampling, we can resample with a TimedeltaIndex:
In [110]: s.resample('D').mean()
Out[110]:
1 days 11.5
2 days 35.5
3 days 59.5
4 days 83.5
5 days 97.5
Freq: D, dtype: float64
TWENTYONE
CATEGORICAL DATA
Note: While there was pandas.Categorical in earlier versions, the ability to use categorical data in Series and
DataFrame is new.
This is an introduction to the pandas categorical data type, including a short comparison with R's factor.
Categoricals are a pandas data type, which correspond to categorical variables in statistics: a variable, which can take
on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social
class, blood types, country affiliations, observation time or ratings via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. strongly agree vs agree or
first observation vs. second observation), but numerical operations (additions, divisions, ...) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical
order of the values. Internally, the data structure consists of a categories array and an integer array of codes which
point to the real value in the categories array.
The categorical data type is useful in the following cases:
A string variable consisting of only a few different values. Converting such a string variable to a categorical
variable will save some memory, see here.
The lexical order of a variable is not the same as the logical order (one, two, three). By converting to a
categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of
the lexical order, see here.
As a signal to other python libraries that this column should be treated as a categorical variable (e.g. to use
suitable statistical methods or plot types).
See also the API docs on categoricals.
21.1 Object Creation
Categorical Series or columns in a DataFrame can be created in several ways, e.g. by specifying dtype="category" when constructing a Series:
In [1]: s = pd.Series(["a","b","c","a"], dtype="category")
In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
By converting an existing Series or column to a category dtype:
In [3]: df = pd.DataFrame({"A":["a","b","c","a"]})
In [4]: df["B"] = df["A"].astype('category')
In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a
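By using special functions such as cut(), which groups data into discrete bins; the frame shown below is presumably built along these lines (a sketch):

import numpy as np
df = pd.DataFrame({'value': np.random.randint(0, 100, 20)})
labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)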
In [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9
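By passing a pandas.Categorical object to a Series; the raw_cat below is presumably (a sketch; note 'a' is not among the categories, so it becomes NaN):

raw_cat = pd.Categorical(["a","b","c","a"], categories=["b","c","d"], ordered=False)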
In [11]: s = pd.Series(raw_cat)
In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]
In [13]: df = pd.DataFrame({"A":["a","b","c","a"]})
In [14]: df["B"] = raw_cat
In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN
You can also specify differently ordered categories, or make the resulting data ordered, by passing these arguments to astype():
In [16]: s = pd.Series(["a","b","c","a"])
In [17]: s_cat = s.astype("category", categories=["b","c","d"], ordered=False)
In [18]: s_cat
Out[18]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]
Note: In contrast to R's factor function, categorical data does not convert input values to strings; categories will end up the same data type as the original values.
Note: In contrast to R's factor function, there is currently no way to assign/change labels at creation time. Use categories to change the categories after creation time.
To get back to the original Series or numpy array, use Series.astype(original_dtype) or np.asarray(categorical):
In [20]: s = pd.Series(["a","b","c","a"])
In [21]: s
Out[21]:
0 a
1 b
2 c
3 a
dtype: object
In [22]: s2 = s.astype('category')
In [23]: s2
Out[23]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [24]: s2.astype(str)
Out[24]:
0 a
1 b
2 c
3 a
dtype: object
In [25]: np.asarray(s2)
Out[25]:
array(['a', 'b', 'c', 'a'], dtype=object)
If you already have codes and categories, you can use the from_codes() constructor to save the factorize step during normal constructor mode:
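A sketch:

import numpy as np
splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])
s = pd.Series(pd.Categorical.from_codes(splitter, categories=["train", "test"]))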
21.2 Description
Using .describe() on categorical data will produce similar output to a Series or DataFrame of type string.
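The frame used below is presumably (a sketch; the np.nan entries are excluded from the counts):

import numpy as np
cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])
df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})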
In [30]: df.describe()
Out[30]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2
In [31]: df["cat"].describe()
Out[31]:
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
21.3 Working with categories
Categorical data has a categories and an ordered property, which list their possible values and whether the ordering matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don't manually specify categories and ordering, they are inferred from the passed-in values:
In [32]: s = pd.Series(["a","b","c","a"], dtype="category")
In [33]: s.cat.categories
Out[33]: Index(['a', 'b', 'c'], dtype='object')
In [34]: s.cat.ordered
Out[34]: False
It's also possible to pass in the categories in a specific order:
In [35]: s = pd.Series(pd.Categorical(["a","b","c","a"], categories=["c","b","a"]))

In [36]: s.cat.categories
Out[36]: Index(['c', 'b', 'a'], dtype='object')
In [37]: s.cat.ordered
Out[37]: False
Note: New categorical data are NOT automatically ordered. You must explicitly pass ordered=True to indicate
an ordered Categorical.
Note: The result of Series.unique() is not always the same as Series.cat.categories, because
Series.unique() has a couple of guarantees, namely that it returns categories in the order of appearance, and it
only includes values that are actually present.
In [38]: s = pd.Series(list('babc')).astype('category', categories=list('abcd'))

In [39]: s
Out[39]:
0 b
1 a
2 b
3 c
dtype: category
Categories (4, object): [a, b, c, d]
# categories
In [40]: s.cat.categories
Out[40]:
Index(['a', 'b', 'c', 'd'], dtype='object')
# uniques
In [41]: s.unique()
Out[41]:
[b, a, c]
Categories (3, object): [b, a, c]
Renaming categories is done by assigning new values to the Series.cat.categories property or by using the Categorical.rename_categories() method:
In [42]: s = pd.Series(["a","b","c","a"], dtype="category")

In [43]: s
Out[43]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [44]: s.cat.categories = ["Group %s" % g for g in s.cat.categories]

In [45]: s
Out[45]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
In [46]: s.cat.rename_categories([1,2,3])
Out[46]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [1, 2, 3]
Note: In contrast to R's factor, categorical data can have categories of types other than string.
Note: Be aware that assigning new categories is an in-place operation, while most other operations under Series.cat return a new Series of dtype category by default.
Categories must be unique, or a ValueError is raised:
In [47]: try:
....: s.cat.categories = [1,1,1]
....: except ValueError as e:
....: print("ValueError: " + str(e))
....:
ValueError: Categorical categories must be unique
Categories can be appended with the Categorical.add_categories() method:
In [49]: s = s.cat.add_categories([4])

In [50]: s.cat.categories
Out[50]: Index(['Group a', 'Group b', 'Group c', 4], dtype='object')
In [51]: s
Out[51]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (4, object): [Group a, Group b, Group c, 4]
Removing categories can be done by using the Categorical.remove_categories() method. Values which are removed are replaced by np.nan:
In [52]: s = s.cat.remove_categories([4])
In [53]: s
Out[53]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
Removing unused categories can also be done:
In [54]: s = pd.Series(pd.Categorical(["a","b","a"], categories=["a","b","c","d"]))

In [55]: s
Out[55]:
0 a
1 b
2 a
dtype: category
Categories (4, object): [a, b, c, d]
In [56]: s.cat.remove_unused_categories()
Out[56]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]
If you want to remove and add new categories in one step (which has some speed advantage), or simply set the categories to a predefined scale, use Categorical.set_categories().
In [57]: s = pd.Series(["one","two","four","-"], dtype="category")

In [58]: s
Out[58]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): [-, four, one, two]
In [59]: s = s.cat.set_categories(["one","two","three","four"])
In [60]: s
Out[60]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): [one, two, three, four]
Note: Be aware that Categorical.set_categories() cannot know whether some category is omitted intentionally or because it is misspelled or (under Python 3) due to a type difference (e.g., numpy's S1 dtype and Python strings). This can result in surprising behaviour!
21.4 Sorting and Order
Warning: The default for construction has changed in v0.16.0 to ordered=False, from the prior implicit ordered=True.
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.
In [61]: s = pd.Series(["a","b","c","a"]).astype('category', ordered=False)

In [62]: s.sort_values(inplace=True)

In [63]: s = pd.Series(["a","b","c","a"]).astype('category', ordered=True)

In [64]: s.sort_values(inplace=True)
In [65]: s
Out[65]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered().
These will by default return a new object.
In [67]: s.cat.as_ordered()
Out[67]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
In [68]: s.cat.as_unordered()
Out[68]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a, b, c]
Sorting will use the order defined by categories, not any lexical order present on the data type. This is even true for strings and numeric data:
In [69]: s = pd.Series([1,2,3,1], dtype="category")

In [70]: s = s.cat.set_categories([2,3,1], ordered=True)

In [71]: s
Out[71]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [72]: s.sort_values(inplace=True)
In [73]: s
Out[73]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
21.4.1 Reordering
Reordering the categories is possible via the Categorical.reorder_categories() and the Categorical.set_categories() methods. For Categorical.reorder_categories(), all old categories must be included in the new categories and no new categories are allowed. This will necessarily make the sort order the same as the categories order.
In [75]: s = pd.Series([1,2,3,1], dtype="category")

In [76]: s = s.cat.reorder_categories([2,3,1], ordered=True)

In [77]: s
Out[77]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [78]: s.sort_values(inplace=True)
In [79]: s
Out[79]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
Note: Note the difference between assigning new categories and reordering the categories: the first renames categories
and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still be
sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values in
the Series are changed.
Note: If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Numeric
operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to compute
the mean between two values if the length of an array is even) do not work and raise a TypeError.
A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering
of the categorical is determined by the categories of that column.
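The dfs frame used below could have been built like this (a sketch consistent with the output shown; note the ordered categories ['a', 'b', 'e']):

dfs = pd.DataFrame({'A': pd.Categorical(list('bbeebbaa'),
                                        categories=['a', 'b', 'e'],
                                        ordered=True),
                    'B': [1, 2, 1, 2, 2, 1, 2, 1]})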
In [84]: dfs.sort_values(by=['A','B'])
Out[84]:
A B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2
21.5 Comparisons
Note: Any non-equality comparisons of categorical data with a Series, np.array, list or categorical data with different categories or ordering will raise a TypeError, because custom category ordering could be interpreted in two ways: one taking the ordering into account and one without.
In [85]: cat = pd.Series([1,2,3]).astype("category", categories=[3,2,1], ordered=True)

In [86]: cat_base = pd.Series([2,2,2]).astype("category", categories=[3,2,1], ordered=True)

In [87]: cat_base2 = pd.Series([2,2,2]).astype("category", ordered=True)

In [88]: cat
Out[88]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [89]: cat_base
Out[89]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [90]: cat_base2
Out[90]:
0 2
1 2
2 2
dtype: category
Categories (1, int64): [2]
Comparing to a categorical with the same categories and ordering or to a scalar works:
In [91]: cat > cat_base
Out[91]:
0 True
1 False
2 False
dtype: bool
Equality comparisons work with any list-like object of same length and scalars:
In [93]: cat == cat_base
Out[93]:
0 False
1 True
2 False
dtype: bool
In [95]: cat == 2
Out[95]:
0 False
1 True
2 False
dtype: bool
This doesn't work because the categories are not the same:
In [96]: try:
....: cat > cat_base2
....: except TypeError as e:
....: print("TypeError: " + str(e))
....:
TypeError: Categoricals can only be compared if 'categories' are the same
If you want to do a non-equality comparison of a categorical series with a list-like object which is not categorical
data, you need to be explicit and convert the categorical data back to the original values:
In [97]: base = np.array([1,2,3])
In [98]: try:
....: cat > base
....: except TypeError as e:
....: print("TypeError: " + str(e))
....:
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
21.6 Operations
Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with
categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data:
In [100]: s = pd.Series(pd.Categorical(["a","b","c","c"], categories=["c","a","b","d"]))
In [101]: s.value_counts()
Out[101]:
c 2
b 1
a 1
d 0
dtype: int64
Groupby will also show "unused" categories:
In [102]: cats = pd.Categorical(["a","b","b","b","c","c","c"], categories=["a","b","c","d"])

In [103]: df = pd.DataFrame({"cats":cats,"values":[1,2,2,2,3,4,5]})
In [104]: df.groupby("cats").mean()
Out[104]:
values
cats
a 1.0
b 2.0
c 4.0
d NaN
In [105]: cats2 = pd.Categorical(["a","a","b","b"], categories=["a","b","c"])

In [106]: df2 = pd.DataFrame({"cats":cats2, "B":["c","d","c","d"], "values":[1,2,3,4]})

In [107]: df2.groupby(["cats","B"]).mean()
Out[107]:
values
cats B
a c 1.0
d 2.0
b c 3.0
d 4.0
c c NaN
d NaN
Pivot tables:
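A sketch along the same lines (the frame here mirrors the groupby example above):

raw_cat = pd.Categorical(["a","a","b","b"], categories=["a","b","c"])
df = pd.DataFrame({"A": raw_cat, "B": ["c","d","c","d"], "values": [1,2,3,4]})
pd.pivot_table(df, values='values', index=['A', 'B'])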
21.7 Data munging
The optimized pandas data access methods .loc, .iloc, .at, and .iat work as normal. The only difference is the return type (for getting) and that only values already in categories can be assigned.
21.7.1 Getting
If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.
In [111]: idx = pd.Index(["h","i","j","k","l","m","n"])

In [112]: cats = pd.Series(["a","b","b","b","c","c","c"], dtype="category", index=idx)

In [113]: values = [1,2,2,2,3,4,5]

In [114]: df = pd.DataFrame({"cats":cats,"values":values}, index=idx)

In [115]: df.iloc[2:4,:]
Out[115]:
cats values
j b 2
k b 2
In [116]: df.iloc[2:4,:].dtypes
Out[116]:
cats category
values int64
dtype: object
In [117]: df.loc["h":"j","cats"]
Out[117]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): [a, b, c]

In [118]: df[df["cats"] == "b"]
Out[118]:
  cats  values
i    b       2
j    b       2
k    b       2
An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype
object:
# get the complete "h" row as a Series
In [119]: df.loc["h", :]
Out[119]:
cats a
values 1
Name: h, dtype: object
Returning a single item from categorical data will also return the value, not a categorical of length 1.
In [120]: df.iat[0,0]
Out[120]: 'a'
Note: This is a difference from R's factor function, where factor(c(1,2,3))[1] returns a single value factor.
To get a single value Series of type category pass in a list with a single value:
In [123]: df.loc[["h"],"cats"]
Out[123]:
h x
Name: cats, dtype: category
Categories (3, object): [x, y, z]
21.7.2 String and datetime accessors
The accessors .dt and .str will work if the s.cat.categories are of an appropriate type:
In [124]: str_s = pd.Series(list('aabb'))

In [125]: str_cat = str_s.astype('category')

In [126]: str_cat
Out[126]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): [a, b]
In [127]: str_cat.str.contains("a")
Out[127]:
0 True
1 True
2 False
3 False
dtype: bool
In [128]: date_s = pd.Series(pd.date_range('1/1/2015', periods=5))

In [129]: date_cat = date_s.astype('category')

In [130]: date_cat
Out[130]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]
In [131]: date_cat.dt.day
Out[131]:
0 1
1 2
2 3
3 4
4 5
dtype: int64
Note: The returned Series (or DataFrame) is of the same type as if you used the .str.<method> / .dt.<method> on a Series of that type (and not of type category!).
That means that the returned values from methods and properties on the accessors of a Series and the returned values from methods and properties on the accessors of this Series transformed to one of type category will be equal:
In [132]: ret_s = str_s.str.contains("a")

In [133]: ret_cat = str_cat.str.contains("a")

In [134]: ret_s.dtype == ret_cat.dtype
Out[134]: True

In [135]: ret_s == ret_cat
Out[135]:
0    True
1    True
2    True
3    True
dtype: bool
Note: The work is done on the categories and then a new Series is constructed. This has some performance implications if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the original Series to one of type category and use .str.<method> or .dt.<property> on that.
21.7.3 Setting
Setting values in a categorical column (or Series) works as long as the value is included in the categories:
In [136]: idx = pd.Index(["h","i","j","k","l","m","n"])

In [137]: cats = pd.Categorical(["a","a","a","a","a","a","a"], categories=["a","b"])

In [138]: values = [1,1,1,1,1,1,1]

In [139]: df = pd.DataFrame({"cats":cats,"values":values}, index=idx)

In [140]: df.iloc[2:4,:] = [["b",2],["b",2]]

In [141]: df
Out[141]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1
In [142]: try:
.....: df.iloc[2:4,:] = [["c",3],["c",3]]
.....: except ValueError as e:
.....: print("ValueError: " + str(e))
.....:
ValueError: Cannot setitem on a Categorical with a new category, set the categories first
Setting values by assigning categorical data will also check that the categories match:
In [143]: df.loc["j":"k","cats"] = pd.Categorical(["a","a"], categories=["a","b"])

In [144]: df
Out[144]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1
In [145]: try:
   .....:     df.loc["j":"k","cats"] = pd.Categorical(["b","b"], categories=["a","b","c"])
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....:
ValueError: Cannot set a Categorical with another, without identical categories
Assigning a Categorical to parts of a column of other types will use the values:
In [146]: df = pd.DataFrame({"a":[1,1,1,1,1], "b":["a","a","a","a","a"]})

In [147]: df.loc[1:2,"a"] = pd.Categorical(["b","b"], categories=["a","b"])

In [148]: df.loc[2:3,"b"] = pd.Categorical(["b","b"], categories=["a","b"])

In [149]: df
Out[149]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a
In [150]: df.dtypes
Out[150]:
a object
b object
dtype: object
21.7.4 Merging
You can concat two DataFrames containing categorical data together, but the categories of these categoricals need to
be the same:
In [151]: cat = pd.Series(["a","b"], dtype="category")

In [152]: vals = [1,2]

In [153]: df = pd.DataFrame({"cats":cat, "vals":vals})

In [154]: res = pd.concat([df,df])

In [155]: res
Out[155]:
cats vals
0 a 1
1 b 2
0 a 1
1 b 2
In [156]: res.dtypes
Out[156]:
cats category
vals int64
dtype: object
In this case the categories are not the same and so an error is raised:
In [157]: df_different = df.copy()

In [158]: df_different["cats"].cat.categories = ["c","d"]

In [159]: try:
.....: pd.concat([df,df_different])
.....: except ValueError as e:
.....: print("ValueError: " + str(e))
.....:
ValueError: incompatible categories in categorical concat
21.7.5 Unioning
If you want to combine categoricals that do not necessarily have the same categories, the union_categoricals() function will combine a list-like of categoricals. The new categories will be the union of the categories being combined. By default, the resulting categories will be ordered as they appear in the data. If you want the categories to be lexsorted, use the sort_categories=True argument.
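A short sketch of both behaviours (union_categoricals lives in pandas.api.types):

from pandas.api.types import union_categoricals

a = pd.Categorical(["b", "c"])
b = pd.Categorical(["a", "b"])

union_categoricals([a, b])                         # Categories: [b, c, a] (order of appearance)
union_categoricals([a, b], sort_categories=True)   # Categories: [a, b, c]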
union_categoricals also works with the "easy" case of combining two categoricals of the same categories and order information (e.g. what you could also use append for):
In [165]: a = pd.Categorical(["a", "b"], ordered=True)

In [166]: b = pd.Categorical(["a", "b", "a"], ordered=True)

In [167]: union_categoricals([a, b])
Out[167]:
[a, b, a, b, a]
Categories (2, object): [a < b]
The below raises TypeError because the categories are ordered and not identical.
In [1]: a = pd.Categorical(["a", "b"], ordered=True)
In [2]: b = pd.Categorical(["a", "b", "c"], ordered=True)
In [3]: union_categoricals([a, b])
Out[3]:
TypeError: to union ordered Categoricals, all categories must be the same
union_categoricals also works with a CategoricalIndex, or Series containing categorical data, but note that the resulting array will always be a plain Categorical:
In [171]: a = pd.Series(["b", "c"], dtype='category')

In [172]: b = pd.Series(["a", "b"], dtype='category')

In [173]: union_categoricals([a, b])
Out[173]:
[b, c, a, b]
Categories (3, object): [b, c, a]
Note: union_categoricals may recode the integer codes for categories when combining categoricals. This is
likely what you want, but if you are relying on the exact numbering of the categories, be aware.
In [174]: c1 = pd.Categorical(["b", "c"])

In [175]: c2 = pd.Categorical(["a", "b"])

In [176]: c1
Out[176]:
[b, c]
Categories (2, object): [b, c]
# "b" is coded to 0
In [177]: c1.codes
Out[177]: array([0, 1], dtype=int8)
In [178]: c2
Out[178]:
[a, b]
# "b" is coded to 1
In [179]: c2.codes
Out[179]: array([0, 1], dtype=int8)
In [180]: c = union_categoricals([c1, c2])

# "b" is now coded to 0 throughout, as in c1 (and unlike c2)
In [181]: c
Out[181]:
[b, c, a, b]
Categories (3, object): [b, c, a]
21.7.6 Concatenation
This section describes concatenations specific to category dtype. See Concatenating objects for general description.
By default, Series or DataFrame concatenation that contains the same categories results in category dtype, otherwise it results in object dtype. Use .astype or union_categoricals to get a category result.
# same categories
In [183]: s1 = pd.Series(['a', 'b'], dtype='category')

In [184]: s2 = pd.Series(['a', 'b'], dtype='category')

In [185]: pd.concat([s1, s2])
Out[185]:
0    a
1    b
0    a
1    b
dtype: category
Categories (2, object): [a, b]

# different categories
In [186]: s3 = pd.Series(['b', 'c'], dtype='category')
In [187]: pd.concat([s1, s3])
Out[187]:
0    a
1    b
0    b
1    c
dtype: object

In [188]: pd.concat([s1, s3]).astype('category')
Out[188]:
0    a
1    b
0    b
1    c
dtype: category
Categories (3, object): [a, b, c]

In [189]: union_categoricals([s1.values, s3.values])
Out[189]:
[a, b, b, c]
Categories (3, object): [a, b, c]
21.8 Getting Data In/Out
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the right categories and category ordering.
In [190]: s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))

# rename the categories
In [191]: s.cat.categories = ["very good", "good", "bad"]

# reorder the categories and add missing categories
In [192]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])

In [193]: df = pd.DataFrame({"cats": s, "vals": [1,2,3,4,5,6]})

In [194]: csv = StringIO()

In [195]: df.to_csv(csv)

In [196]: df2 = pd.read_csv(StringIO(csv.getvalue()))
In [197]: df2.dtypes
Out[197]:
Unnamed: 0 int64
cats object
vals int64
dtype: object
In [198]: df2["cats"]
Out[198]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object
In [199]: df2["cats"] = df2["cats"].astype("category")

In [200]: df2["cats"].cat.set_categories(["very bad", "bad", "medium",
   .....:                                 "good", "very good"],
   .....:                                inplace=True)
   .....:
In [201]: df2.dtypes
Out[201]:
Unnamed: 0 int64
cats category
vals int64
dtype: object
In [202]: df2["cats"]
Out[202]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
21.9 Missing Data
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
Missing values should not be included in the Categorical's categories, only in the values. Instead, it is understood that NaN is different, and is always a possibility. When working with the Categorical's codes, missing values will always have a code of -1.
In [203]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")

# only two categories
In [204]: s
Out[204]:
0      a
1      b
2 NaN
3 a
dtype: category
Categories (2, object): [a, b]
In [205]: s.cat.codes
Out[205]:
0 0
1 1
2 -1
3 0
dtype: int8
Methods for working with missing data, e.g. isnull(), fillna(), dropna(), all work normally:
In [206]: s = pd.Series(["a", "b", np.nan], dtype="category")

In [207]: s
Out[207]:
0 a
1 b
2 NaN
dtype: category
Categories (2, object): [a, b]
In [208]: pd.isnull(s)
Out[208]:
0 False
1 False
2 True
dtype: bool
In [209]: s.fillna("a")
Out[209]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]
21.10 Differences to R's factor
R allows for missing values to be included in its levels (pandas' categories). pandas does not allow NaN categories, but missing values can still be in the values.
21.11 Gotchas
The memory usage of a Categorical is proportional to the number of categories times the length of the data. In
contrast, an object dtype is a constant times the length of the data.
In [210]: s = pd.Series(['foo','bar']*1000)
# object dtype
In [211]: s.nbytes
Out[211]: 16000
# category dtype
In [212]: s.astype('category').nbytes
Out[212]: 2016
Note: If the number of categories approaches the length of the data, the Categorical will use nearly the same or
more memory than an equivalent object dtype representation.
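For example, a sketch of a Series where nearly every value is its own category (2000 codes plus 2000 categories end up costing more than 2000 object pointers):

s = pd.Series(['foo%04d' % i for i in range(2000)])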
# object dtype
In [214]: s.nbytes
Out[214]: 16000
# category dtype
In [215]: s.astype('category').nbytes
Out[215]: 20000
In versions of pandas earlier than 0.15, a Categorical could be constructed by passing in precomputed codes (then called labels) instead of values with categories. The codes were interpreted as pointers to the real values in the categories array, with -1 as NaN. This type of constructor usage is replaced by the special constructor Categorical.from_codes().
Unfortunately, in some special cases, using code which assumes the old style constructor usage will work with the
current pandas version, resulting in subtle bugs:
Warning: If you used Categoricals with older versions of pandas, please audit your code before upgrading and
change your code to use the from_codes() constructor.
Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-level numpy array dtype. This leads to some problems.
numpy itself doesn't know about the new dtype:
In [216]: try:
.....: np.dtype("category")
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: data type "category" not understood
In [217]: dtype = pd.Categorical(["a"]).dtype

In [218]: try:
.....: np.dtype(dtype)
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: data type not understood
To check if a Series contains Categorical data, with pandas 0.16 or later, use hasattr(s, 'cat'):
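A quick sketch:

hasattr(pd.Series(['a'], dtype='category'), 'cat')   # True
hasattr(pd.Series(['a']), 'cat')                     # False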
Using numpy functions on a Series of type category does not work, as Categoricals are not numeric data (even when .categories is numeric).
In [223]: s = pd.Series(pd.Categorical([1,2,3,4]))
In [224]: try:
.....: np.sum(s)
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: Categorical cannot perform the operation sum
pandas currently does not preserve the dtype in apply functions: if you apply along rows you get a Series of object dtype (same as getting a row: getting one element will return a basic type), and applying along columns will also convert to object.
In [225]: df = pd.DataFrame({"a":[1,2,3,4],
.....: "b":["a","b","c","d"],
.....: "cats":pd.Categorical([1,2,3,2])})
.....:
In [226]: df.apply(lambda col: col.dtype, axis=0)
Out[226]:
a       object
b       object
cats    object
dtype: object
Setting the index will create a CategoricalIndex:
In [228]: cats = pd.Categorical([1, 2, 3, 4], categories=[4, 2, 3, 1])

In [229]: strings = ["a", "b", "c", "d"]

In [230]: values = [4, 2, 3, 1]

In [231]: df = pd.DataFrame({"strings": strings, "values": values}, index=cats)

In [232]: df.index
Out[232]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False,
dtype='category')
# This now sorts by the order of the categories
In [233]: df.sort_index()
Out[233]:
  strings  values
4       d       1
2       b       2
3       c       3
1       a       4
In previous versions (<0.16.1) there is no index of type category, so setting the index to categorical column will
convert the categorical data to a normal dtype first and therefore remove any custom ordering of the categories.
Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to the Series will in most cases change the original Categorical:
In [234]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])

In [235]: s = pd.Series(cat, name="cat")

In [236]: cat
Out[236]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [237]: s.iloc[0:2] = 10
In [238]: cat
Out[238]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [239]: df = pd.DataFrame(s)

In [240]: df["cat"].cat.categories = [1, 2, 3, 4, 5]

In [241]: cat
Out[241]:
[5, 5, 3, 5]
Categories (5, int64): [1, 2, 3, 4, 5]
Use copy=True to prevent such behaviour, or simply do not reuse Categoricals:
In [242]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])

In [243]: s = pd.Series(cat, name="cat", copy=True)

In [244]: cat
Out[244]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [245]: s.iloc[0:2] = 10
In [246]: cat
Out[246]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
Note: This also happens in some cases when you supply a numpy array instead of a Categorical: using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behaviour, while using a string array (e.g. np.array(["a","b","c","a"])) will not.
TWENTYTWO
VISUALIZATION
The plots in this document are made using matplotlib's ggplot style (new in version 1.4):
import matplotlib
matplotlib.style.use('ggplot')
We provide the basics in pandas to easily create decent looking plots. See the ecosystem section for visualization
libraries that go beyond the basics documented here.
The plot method on Series and DataFrame is just a simple wrapper around plt.plot():
In [2]: ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))

In [3]: ts = ts.cumsum()
In [4]: ts.plot()
Out[4]: <matplotlib.axes._subplots.AxesSubplot at 0x127e39b70>
If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [5]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))

In [6]: df = df.cumsum()
You can plot one column versus another using the x and y keywords in plot():
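For instance (a sketch with a made-up frame df3):

df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))
df3.plot(x='A', y='B')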
Plotting methods allow for a handful of plot styles other than the default Line plot. These methods can be provided as
the kind keyword argument to plot(). These include:
bar or barh for bar plots
hist for histogram
box for boxplot
kde or 'density' for density plots
area for area plots
scatter for scatter plots
hexbin for hexagonal bin plots
pie for pie plots
For example, a bar plot can be created the following way:
In [11]: plt.figure();
In [13]: df = pd.DataFrame()
In [14]: df.plot.<TAB>
df.plot.area     df.plot.barh     df.plot.density  df.plot.hist     df.plot.line
df.plot.bar      df.plot.box      df.plot.hexbin   df.plot.kde      df.plot.pie
df.plot.scatter
In addition to these kinds, there are the DataFrame.hist() and DataFrame.boxplot() methods, which use a separate interface.
Finally, there are several plotting functions in pandas.plotting that take a Series or DataFrame as an argu-
ment. These include
Scatter Matrix
Andrews Curves
Parallel Coordinates
Lag Plot
Autocorrelation Plot
Bootstrap Plot
RadViz
Plots may also be adorned with errorbars or tables.
For labeled, non-time series data, you may wish to produce a bar plot:
In [15]: plt.figure();
A DataFrame's plot.bar() method produces a multiple bar plot:
In [17]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])

In [18]: df2.plot.bar();

To produce a stacked bar plot, pass stacked=True:
In [19]: df2.plot.bar(stacked=True);

To get horizontal bar plots, use the barh method:
In [20]: df2.plot.barh(stacked=True);
22.2.2 Histograms
Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods:
In [21]: df4 = pd.DataFrame({'a': np.random.randn(1000) + 1, 'b': np.random.randn(1000),
   ....:                     'c': np.random.randn(1000) - 1}, columns=['a', 'b', 'c'])
   ....:

In [22]: plt.figure();

In [23]: df4.plot.hist(alpha=0.5)
Out[23]: <matplotlib.axes._subplots.AxesSubplot at 0x1321820b8>
A histogram can be stacked using stacked=True. The bin size can be changed with the bins keyword.
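For instance:

df4.plot.hist(stacked=True, bins=20)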
In [24]: plt.figure();
You can pass other keywords supported by matplotlib hist. For example, horizontal and cumulative histograms can be drawn by orientation='horizontal' and cumulative=True.
In [26]: plt.figure();

In [27]: df4['a'].plot.hist(orientation='horizontal', cumulative=True)
See the hist method and the matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histograms can still be used.
In [28]: plt.figure();
In [29]: df['A'].diff().hist()
Out[29]: <matplotlib.axes._subplots.AxesSubplot at 0x1285dceb8>
In [30]: plt.figure()
Out[30]: <matplotlib.figure.Figure at 0x12f415f28>

In [31]: df.diff().hist(color='k', alpha=0.5, bins=50)

22.2.3 Box Plots
Boxplots can be drawn calling Series.plot.box() and DataFrame.plot.box(), or DataFrame.boxplot(), to visualize the distribution of values within each column. For instance, here is a boxplot representing five trials of 10 observations of a uniform random variable on [0,1):
In [34]: df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])

In [35]: df.plot.box()
Out[35]: <matplotlib.axes._subplots.AxesSubplot at 0x12dd1bcf8>
Boxplots can be colorized by passing a color keyword. You can pass a dict whose keys are boxes, whiskers, medians and caps. If some keys are missing in the dict, default colors are used for the corresponding artists. Also, boxplot has a sym keyword to specify the style of fliers.
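For example:

color = dict(boxes='DarkGreen', whiskers='DarkOrange',
             medians='DarkBlue', caps='Gray')
df.plot.box(color=color, sym='r+')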
When you pass other types of arguments via the color keyword, they will be passed directly to matplotlib for all the boxes, whiskers, medians and caps colorization. The colors are applied to every box to be drawn. If you want more complicated colorization, you can get each drawn artist by passing return_type.
Also, you can pass other keywords supported by matplotlib boxplot. For example, horizontal and custom-positioned boxplots can be drawn by the vert=False and positions keywords.
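For instance:

df.plot.box(vert=False, positions=[1, 4, 5, 6, 8])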
See the boxplot method and the matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplots can still be used.
In [39]: df = pd.DataFrame(np.random.rand(10,5))
In [40]: plt.figure();
In [41]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create groupings. For instance:
In [42]: df = pd.DataFrame(np.random.rand(10, 2), columns=['Col1', 'Col2'])

In [43]: df['X'] = pd.Series(['A','A','A','A','A','B','B','B','B','B'])

In [44]: plt.figure();

In [45]: bp = df.boxplot(by='X')
You can also pass a subset of columns to plot, as well as group by multiple columns:
In [46]: df = pd.DataFrame(np.random.rand(10, 3), columns=['Col1', 'Col2', 'Col3'])

In [47]: df['X'] = pd.Series(['A','A','A','A','A','B','B','B','B','B'])

In [48]: df['Y'] = pd.Series(['A','B','A','B','A','B','A','B','A','B'])

In [49]: plt.figure();

In [50]: bp = df.boxplot(column=['Col1','Col2'], by=['X','Y'])
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}. Faceting, created by DataFrame.boxplot with the by keyword, will affect the output type as well:
return_type    Faceted    Output type
None           No         axes
None           Yes        2-D ndarray of axes
'axes'         No         axes
'axes'         Yes        Series of axes
'dict'         No         dict of artists
'dict'         Yes        Series of dicts of artists
'both'         No         namedtuple
'both'         Yes        Series of namedtuples
Groupby.boxplot always returns a Series of return_type.
In [51]: np.random.seed(1234)

In [52]: df_box = pd.DataFrame(np.random.randn(50, 2))

In [53]: df_box['g'] = np.random.choice(['A', 'B'], size=50)

In [54]: df_box.loc[df_box['g'] == 'B', 1] += 3

In [55]: bp = df_box.boxplot(by='g')
Compare to:
In [56]: bp = df_box.groupby('g').boxplot()

22.2.4 Area Plot
You can create area plots with Series.plot.area() and DataFrame.plot.area(). Area plots are stacked by default. To produce a stacked area plot, each column must be either all positive or all negative values.
In [57]: df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])

In [58]: df.plot.area();
To produce an unstacked plot, pass stacked=False. The alpha value is set to 0.5 unless otherwise specified:
In [59]: df.plot.area(stacked=False);
22.2.5 Scatter Plot
Scatter plots can be drawn using the DataFrame.plot.scatter() method; they require numeric columns for the x and y axes. To plot multiple column groups in a single axes, repeat the plot method specifying the target ax. It is recommended to specify the color and label keywords to distinguish each group.
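A sketch, with a made-up frame df:

df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])
ax = df.plot.scatter(x='a', y='b', color='DarkBlue', label='Group 1')
df.plot.scatter(x='c', y='d', color='DarkGreen', label='Group 2', ax=ax)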
The keyword c may be given as the name of a column to provide colors for each point:
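For example:

df.plot.scatter(x='a', y='b', c='c', s=50)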
You can pass other keywords supported by matplotlib scatter. The example below shows a bubble chart using a column of the DataFrame as the bubble size.
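For instance:

df.plot.scatter(x='a', y='b', s=df['c'] * 200)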
See the scatter method and the matplotlib scatter documentation for more.
22.2.6 Hexagonal Bin Plot
You can create hexagonal bin plots with DataFrame.plot.hexbin(). Hexbin plots can be a useful alternative to scatter plots if your data are too dense to plot each point individually.
A useful keyword argument is gridsize; it controls the number of hexagons in the x-direction, and defaults to 100. A larger gridsize means more, smaller bins.
By default, a histogram of the counts around each (x, y) point is computed. You can specify alternative aggregations by passing values to the C and reduce_C_function arguments. C specifies the value at each (x, y) point and reduce_C_function is a function of one argument that reduces all the values in a bin to a single number (e.g. mean, max, sum, std). In this example the positions are given by columns a and b, while the value is given by column z. The bins are aggregated with numpy's max function.
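A sketch of that example:

df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df['b'] = df['b'] + np.arange(1000)
df['z'] = np.random.uniform(0, 3, 1000)
df.plot.hexbin(x='a', y='b', C='z', reduce_C_function=np.max, gridsize=25)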
See the hexbin method and the matplotlib hexbin documentation for more.
22.2.7 Pie Plot
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie(). For pie plots it's best to use square figures, i.e. a figure with an equal aspect ratio. You can create the figure with equal width and height, or force the aspect ratio to be equal after plotting by calling ax.set_aspect('equal') on the returned axes object.
Note that a pie plot with a DataFrame requires that you either specify a target column with the y argument or subplots=True. When y is specified, a pie plot of the selected column will be drawn. If subplots=True is specified, pie plots for each column are drawn as subplots. A legend will be drawn in each pie plot by default; specify legend=False to hide it.
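For instance:

df = pd.DataFrame(3 * np.random.rand(4, 2), index=['a', 'b', 'c', 'd'], columns=['x', 'y'])
df.plot.pie(subplots=True, figsize=(8, 4))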
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning: Most pandas plots use the label and color arguments (note the lack of "s" on those). To be consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None. If fontsize is specified, the value will be applied to
wedge labels. Also, other keywords supported by matplotlib.pyplot.pie() can be used.
If you pass values whose sum total is less than 1.0, matplotlib draws a semicircle.
22.3 Plotting with Missing Data
pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type.
Plot Type NaN Handling
Line Leave gaps at NaNs
Line (stacked) Fill 0s
Bar Fill 0s
Scatter Drop NaNs
Histogram Drop NaNs (column-wise)
Box Drop NaNs (column-wise)
Area Fill 0s
KDE Drop NaNs (column-wise)
Hexbin Drop NaNs
Pie Fill 0s
If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled,
consider using fillna() or dropna() before plotting.
22.4 Plotting Tools
These functions can be imported from pandas.plotting and take a Series or DataFrame as an argument.
22.4.2 Density Plot
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods:
In [83]: ser = pd.Series(np.random.randn(1000))

In [84]: ser.plot.kde()
Out[84]: <matplotlib.axes._subplots.AxesSubplot at 0x12e445be0>
22.4.3 Andrews Curves
Andrews curves allow one to plot multivariate data as a large number of curves that are created using the attributes of samples as coefficients for Fourier series. By coloring these curves differently for each class it is possible to visualize data clustering. Curves belonging to samples of the same class will usually be closer together and form larger structures.
Note: The Iris dataset is available here.
In [85]: from pandas.plotting import andrews_curves

In [86]: data = pd.read_csv('data/iris.data')

In [87]: plt.figure()
Out[87]: <matplotlib.figure.Figure at 0x12f72ad30>

In [88]: andrews_curves(data, 'Name')
22.4.4 Parallel Coordinates
Parallel coordinates is a plotting technique for plotting multivariate data. It allows one to see clusters in data and to estimate other statistics visually. Using parallel coordinates, points are represented as connected line segments. Each vertical line represents one attribute. One set of connected line segments represents one data point. Points that tend to cluster will appear closer together.
In [89]: from pandas.plotting import parallel_coordinates

In [90]: data = pd.read_csv('data/iris.data')

In [91]: plt.figure()
Out[91]: <matplotlib.figure.Figure at 0x127f5e550>

In [92]: parallel_coordinates(data, 'Name')
22.4.5 Lag Plot
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. Non-random structure implies that the underlying data are not random.
In [93]: from pandas.plotting import lag_plot

In [94]: plt.figure()
Out[94]: <matplotlib.figure.Figure at 0x12e4757f0>

In [95]: data = pd.Series(0.1 * np.random.rand(1000) +
   ....:                  0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000)))
   ....:
In [96]: lag_plot(data)
Out[96]: <matplotlib.axes._subplots.AxesSubplot at 0x12d764dd8>
22.4.6 Autocorrelation Plot
Autocorrelation plots are often used for checking randomness in time series. This is done by computing autocorrelations for data values at varying time lags. If the time series is random, such autocorrelations should be near zero for any and all time-lag separations. If the time series is non-random then one or more of the autocorrelations will be significantly non-zero. The horizontal lines displayed in the plot correspond to the 95% and 99% confidence bands. The dashed line is the 99% confidence band.
In [97]: from pandas.plotting import autocorrelation_plot

In [98]: plt.figure()
Out[98]: <matplotlib.figure.Figure at 0x12d7a6470>

In [99]: data = pd.Series(0.7 * np.random.rand(1000) +
   ....:                  0.3 * np.sin(np.linspace(-9 * np.pi, 9 * np.pi, num=1000)))
   ....:
In [100]: autocorrelation_plot(data)
Out[100]: <matplotlib.axes._subplots.AxesSubplot at 0x129fe8fd0>
22.4.7 Bootstrap Plot
Bootstrap plots are used to visually assess the uncertainty of a statistic, such as mean, median, midrange, etc. A random subset of a specified size is selected from a data set, the statistic in question is computed for this subset, and the process is repeated a specified number of times. The resulting plots and histograms are what constitutes the bootstrap plot.
22.4.8 RadViz
RadViz is a way of visualizing multi-variate data. It is based on a simple spring tension minimization algorithm. Basically you set up a bunch of points in a plane. In our case they are equally spaced on a unit circle. Each point represents a single attribute. You then pretend that each sample in the data set is attached to each of these points by a spring, the stiffness of which is proportional to the numerical value of that attribute (they are normalized to the unit interval). The point in the plane where our sample settles (where the forces acting on our sample are at an equilibrium) is where a dot representing our sample will be drawn. Depending on which class the sample belongs to, it will be colored differently.
Note: The Iris dataset is available here.
In [104]: from pandas.plotting import radviz

In [105]: data = pd.read_csv('data/iris.data')

In [106]: plt.figure()
Out[106]: <matplotlib.figure.Figure at 0x127137c18>

In [107]: radviz(data, 'Name')
22.5 Plot Formatting
Most plotting methods have a set of keyword arguments that control the layout and formatting of the returned plot. For each kind of plot (e.g. line, bar, scatter) any additional keyword arguments are passed along to the corresponding matplotlib function (ax.plot(), ax.bar(), ax.scatter()). These can be used to control additional styling, beyond what pandas provides.
You may set the legend argument to False to hide the legend, which is shown by default.
In [110]: df = df.cumsum()
In [111]: df.plot(legend=False)
Out[111]: <matplotlib.axes._subplots.AxesSubplot at 0x12d7a6a20>
22.5.2 Scales
In [113]: ts = np.exp(ts.cumsum())
In [114]: ts.plot(logy=True)
Out[114]: <matplotlib.axes._subplots.AxesSubplot at 0x128798cf8>
22.5.3 Plotting on a Secondary Y-axis
To plot data on a secondary y-axis, use the secondary_y keyword:
In [115]: df.A.plot()
Out[115]: <matplotlib.axes._subplots.AxesSubplot at 0x123ef6e10>

In [116]: df.B.plot(secondary_y=True, style='g')

To plot some columns on a secondary y-axis, give the column names to the secondary_y keyword:
In [117]: plt.figure()
Out[117]: <matplotlib.figure.Figure at 0x132eb6ba8>
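A sketch:

ax = df.plot(secondary_y=['A', 'B'])
ax.set_ylabel('CD scale')
ax.right_ax.set_ylabel('AB scale')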
Note that the columns plotted on the secondary y-axis are automatically marked with "(right)" in the legend. To turn off the automatic marking, use the mark_right=False keyword:
In [121]: plt.figure()
Out[121]: <matplotlib.figure.Figure at 0x12742a908>

In [122]: df.plot(secondary_y=['A', 'B'], mark_right=False)
22.5.4 Suppressing Tick Resolution Adjustment
pandas includes automatic tick resolution adjustment for regular frequency time-series data. For limited cases where pandas cannot infer the frequency information (e.g., in an externally created twinx), you can choose to suppress this behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labelling is performed:
In [123]: plt.figure()
Out[123]: <matplotlib.figure.Figure at 0x12741a3c8>
In [124]: df.A.plot()
Out[124]: <matplotlib.axes._subplots.AxesSubplot at 0x12f347940>
Using the x_compat parameter, you can suppress this behavior:
In [125]: plt.figure()
Out[125]: <matplotlib.figure.Figure at 0x127ee3a20>

In [126]: df.A.plot(x_compat=True)
Out[126]: <matplotlib.axes._subplots.AxesSubplot at 0x12f346e48>
If you have more than one plot that needs to be suppressed, the use method in pandas.plotting.plot_params can be used in a with statement:
In [127]: plt.figure()
Out[127]: <matplotlib.figure.Figure at 0x127eb9780>
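A sketch:

with pd.plotting.plot_params.use('x_compat', True):
    df.A.plot(color='r')
    df.B.plot(color='g')
    df.C.plot(color='b')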
22.5.6 Subplots
Each Series in a DataFrame can be plotted on a different axis with the subplots keyword:
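For example:

df.plot(subplots=True, figsize=(6, 6));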
22.5.7 Using Layout and Targeting Multiple Axes
The layout of subplots can be specified by the layout keyword. It can accept (rows, columns). The layout keyword can also be used in hist and boxplot. If the input is invalid, a ValueError will be raised. The number of axes which can be contained by the rows x columns specified by layout must be larger than the number of required subplots. If layout can contain more axes than required, blank axes are not drawn. Similar to a numpy array's reshape method, you can use -1 for one dimension to automatically calculate the number of rows or columns needed, given the other.
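For instance:

df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);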
The required number of columns (3) is inferred from the number of series to plot and the given number of rows (2).
Also, you can pass multiple axes created beforehand as a list-like via the ax keyword. This allows for more complicated layouts. The passed axes must be the same number as the subplots being drawn. When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords don't affect the output. You should explicitly pass sharex=False and sharey=False, otherwise you will see a warning.
In [132]: fig, axes = plt.subplots(4, 4, figsize=(6, 6));
22.5.8 Plotting With Error Bars
Plotting with error bars is supported in DataFrame.plot() and Series.plot(); horizontal and vertical error bars can be supplied via the xerr and yerr keyword arguments.
In [143]: ix3 = pd.MultiIndex.from_arrays([
   .....:     ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
   .....:     ['foo', 'foo', 'bar', 'bar', 'foo', 'foo', 'bar', 'bar']],
   .....:     names=['letter', 'word'])
   .....:

In [144]: df3 = pd.DataFrame({'data1': [3, 2, 4, 3, 2, 4, 3, 2],
   .....:                     'data2': [6, 5, 7, 5, 4, 5, 6, 5]}, index=ix3)
   .....:

# Group by index labels and take the means and standard deviations for each group
In [145]: gp3 = df3.groupby(level=('letter', 'word'))

In [146]: means = gp3.mean()

In [147]: errors = gp3.std()

In [148]: means
Out[148]:
data1 data2
letter word
a bar 3.5 6.0
foo 2.5 5.5
b bar 2.5 5.5
foo 3.0 4.5
In [149]: errors
Out[149]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 0.707107 0.707107
b bar 0.707107 0.707107
foo 1.414214 0.707107
# Plot
In [150]: fig, ax = plt.subplots()
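A sketch of the plotting call:

means.plot.bar(yerr=errors, ax=ax)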
22.5.9 Plotting Tables
Plotting with a matplotlib table is supported in DataFrame.plot() and Series.plot() with a table keyword; passing table=True draws the frame's own data. Also, you can pass a different DataFrame or Series to the table keyword. The data will be drawn as displayed in the print method (not transposed automatically). If required, it should be transposed manually, as in the example below.
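A sketch:

fig, ax = plt.subplots(1, 1)
df = pd.DataFrame(np.random.rand(5, 3), columns=['a', 'b', 'c'])
ax.get_xaxis().set_visible(False)        # hide ticks
df.plot(table=np.round(df.T, 2), ax=ax)  # transpose manually before drawing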
Finally, there is a helper function pandas.plotting.table, which creates a table from a DataFrame or Series and adds it to a matplotlib.Axes instance. This function can accept keywords which the matplotlib table has.
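For instance:

from pandas.plotting import table
fig, ax = plt.subplots(1, 1)
table(ax, np.round(df.describe(), 2), loc='upper right', colWidths=[0.2, 0.2, 0.2])
df.plot(ax=ax, ylim=(0, 2), legend=None)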
Note: You can get table instances on the axes using the axes.tables property for further decorations. See the matplotlib table documentation for more.
22.5.10 Colormaps
A potential issue when plotting a large number of columns is that it can be difficult to distinguish some series due to
repetition in the default colors. To remedy this, DataFrame plotting supports the use of the colormap= argument,
which accepts either a Matplotlib colormap or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the colors are selected based on an even spacing
determined by the number of columns in the DataFrame. There is no consideration made for background color, so
some colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can simply pass colormap='cubehelix':
In [164]: df = df.cumsum()
In [165]: plt.figure()
Out[165]: <matplotlib.figure.Figure at 0x12e6caa90>
In [166]: df.plot(colormap='cubehelix')
Out[166]: <matplotlib.axes._subplots.AxesSubplot at 0x12377be10>
Alternatively, we can pass the colormap itself:
In [167]: from matplotlib import cm

In [168]: plt.figure()
Out[168]: <matplotlib.figure.Figure at 0x12377b828>

In [169]: df.plot(colormap=cm.cubehelix)
Out[169]: <matplotlib.axes._subplots.AxesSubplot at 0x10b493b70>
Colormaps can also be used in other plot types, like bar charts:
In [170]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)

In [171]: dd = dd.cumsum()
In [172]: plt.figure()
Out[172]: <matplotlib.figure.Figure at 0x1238fca90>
In [173]: dd.plot.bar(colormap='Greens')
Out[173]: <matplotlib.axes._subplots.AxesSubplot at 0x122a30d30>
Parallel coordinates charts:
In [174]: plt.figure()
Out[174]: <matplotlib.figure.Figure at 0x12e3247b8>

In [175]: parallel_coordinates(data, 'Name', colormap='gist_rainbow')

Andrews curves charts:
In [176]: plt.figure()
Out[176]: <matplotlib.figure.Figure at 0x122a301d0>

In [177]: andrews_curves(data, 'Name', colormap='winter')
22.6 Plotting directly with matplotlib
In some situations it may still be preferable or necessary to prepare plots directly with matplotlib, for instance when a certain type of plot or customization is not (yet) supported by pandas. Series and DataFrame objects behave like arrays and can therefore be passed directly to matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date indices, thereby extending date and
time support to practically all plot types available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster when plotting a large number of points.
Note: The speed up for large data sets only applies to pandas 0.14.0 and later.
In [178]: price = pd.Series(np.random.randn(150).cumsum(),
   .....:                    index=pd.date_range('2000-1-1', periods=150, freq='B'))
   .....:

In [179]: ma = price.rolling(20).mean()

In [180]: mstd = price.rolling(20).std()

In [181]: plt.figure()
Out[181]: <matplotlib.figure.Figure at 0x123ccbdd8>

In [182]: plt.plot(price.index, price, 'k')

In [183]: plt.plot(ma.index, ma, 'b')

In [184]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd, color='b', alpha=0.2)
Warning: The rplot trellis plotting interface has been removed. Please use external packages like seaborn for
similar but more refined functionality and refer to our 0.18.1 documentation here for how to convert to using it.
TWENTYTHREE
STYLING
You can apply conditional formatting, the visual styling of a DataFrame depending on the data within, by using the DataFrame.style property. This is a property that returns a pandas.io.formats.style.Styler object, which has useful methods for formatting and displaying DataFrames.
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[0, 2] = np.nan
In [3]: df.style
Out[3]: <pandas.io.formats.style.Styler at 0x1187d6b38>
Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it, so it is rendered automatically. If you want the actual HTML back for further processing, or for writing to a file, call the .render() method, which returns a string.
The above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.
In [4]: df.style.highlight_null().render().split('\n')[:10]
Out[4]: ['<style type="text/css" >',
' #T_ad6bb706_31b6_11e7_9571_186590cd1c87row0_col2 {',
' background-color: red;',
' }</style> ',
'<table id="T_ad6bb706_31b6_11e7_9571_186590cd1c87" > ',
'<thead> <tr> ',
' <th class="blank level0" ></th> ',
' <th class="col_heading level0 col0" >A</th> ',
' <th class="col_heading level0 col1" >B</th> ',
' <th class="col_heading level0 col2" >C</th> ']
The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame, so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches
those up with the CSS classes that identify each cell.
Let's write a simple style function that will color negative numbers red and positive numbers black.
In [5]: def color_negative_red(val):
   ...:     """
   ...:     Takes a scalar and returns a string with
   ...:     the css property `'color: red'` for negative
   ...:     values, black otherwise.
   ...:     """
   ...:     color = 'red' if val < 0 else 'black'
   ...:     return 'color: %s' % color
In this case, the cell's style depends only on its own value. That means we should use the Styler.applymap method, which works elementwise.
In [6]: s = df.style.applymap(color_negative_red)
s
Out[6]: <pandas.io.formats.style.Styler at 0x1189dda20>
Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon, just like in a <style> tag. This will be a common theme.
Finally, the input shapes matched: Styler.applymap calls the function on each scalar input, and the function returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column. We can't use .applymap anymore, since that operated elementwise. Instead, we'll turn to .apply, which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler, so you wouldn't need to write this yourself.
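A sketch of such a function (the per-column version; a generalized version appears below):

def highlight_max(s):
    '''
    highlight the maximum in a Series yellow.
    '''
    is_max = s == s.max()
    return ['background-color: yellow' if v else '' for v in is_max]

df.style.apply(highlight_max)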
In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.
In [9]: df.style.\
applymap(color_negative_red).\
apply(highlight_max)
Out[9]: <pandas.io.formats.style.Styler at 0x118a07438>
Above we used Styler.apply to pass in each column one at a time.
Debugging Tip: If you're having trouble writing your style function, try just passing it into DataFrame.apply. Internally, Styler.apply uses DataFrame.apply, so the result should be the same.
What if you wanted to highlight just the maximum value in the entire table? Use .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next.
We'll rewrite our highlight_max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply and .applymap pass along keyword arguments.
In [10]: def highlight_max(data, color='yellow'):
'''
highlight the maximum in a Series or DataFrame
'''
attr = 'background-color: {}'.format(color)
if data.ndim == 1: # Series from .apply(axis=0) or axis=1
is_max = data == data.max()
return [attr if v else '' for v in is_max]
else: # from .apply(axis=None)
is_max = data == data.max().max()
return pd.DataFrame(np.where(is_max, attr, ''),
index=data.index, columns=data.columns)
When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index
and column labels.
In [11]: df.style.apply(highlight_max, color='darkorange', axis=None)
Out[11]: <pandas.io.formats.style.Styler at 0x118a076a0>
Style functions should return strings with one or more CSS attribute: value pairs, delimited by semicolons. Use:
Styler.applymap(func) for elementwise styles
Styler.apply(func, axis=0) for columnwise styles
Styler.apply(func, axis=1) for rowwise styles
Styler.apply(func, axis=None) for tablewise styles
Both Styler.apply and Styler.applymap accept a subset keyword. This allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similar to slicing a DataFrame:
A scalar is treated as a column label
A list (or Series or numpy array) is treated as multiple column labels
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one.
In [12]: df.style.apply(highlight_max, subset=['B', 'C', 'D'])
Out[12]: <pandas.io.formats.style.Styler at 0x118a075c0>
For row and column slicing, any valid indexer to .loc will work.
In [13]: df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
Out[13]: <pandas.io.formats.style.Styler at 0x118a07b70>
Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a
functools.partial, partialing out that keyword.
We distinguish the display value from the actual value in Styler. To control the display value, the text that is printed in each cell, use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string.
In [14]: df.style.format("{:.2%}")
Out[14]: <pandas.io.formats.style.Styler at 0x118a07470>
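You can also format individual columns with a dict of format strings, or pass a callable (a sketch):

df.style.format({'B': "{:.2f}", 'D': '{:+.2f}'})
df.style.format({'B': lambda x: "±{:.2f}".format(abs(x))})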
Finally, we expect certain styling functions to be common enough that we've included a few built in to Styler, so you don't have to write them yourself.
In [17]: df.style.highlight_null(null_color='red')
Out[17]: <pandas.io.formats.style.Styler at 0x118a071d0>
You can create heatmaps with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
In [18]: import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
Styler.background_gradient takes the keyword arguments low and high. Roughly speaking, these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can still actually read the text.
In [19]: # Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
Out[19]: <pandas.io.formats.style.Styler at 0x118a10a58>
In [20]: # Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
Out[20]: <pandas.io.formats.style.Styler at 0x118a10f60>
There's also .highlight_min and .highlight_max.
In [21]: df.style.highlight_max(axis=0)
Out[21]: <pandas.io.formats.style.Styler at 0x118a106d8>
Use Styler.set_properties when the style doesn't actually depend on the values.
In [22]: df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
You can include bar charts in your DataFrame with Styler.bar; the align keyword ('left', 'zero', or 'mid') controls how the bars are anchored. The harness below compares the three alignments on all-negative, all-positive, and mixed series:
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Neg and Pos')
head = """
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>All Positive</th>
<th>Both Neg and Pos</th>
</thead>
<tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for serie in [test1,test2,test3]:
s = serie.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['widt
row += '</tr>'
head += row
head+= """
</tbody>
</table>"""
HTML(head)
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df2.style.use:
In [26]: df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
Out[26]: <pandas.io.formats.style.Styler at 0x118b22c50>
In [27]: style2 = df2.style
style2.use(style1.export())
style2
Out[27]: <pandas.io.formats.style.Styler at 0x118b87940>
Notice that you're able to share the styles even though they're data-aware. The styles are re-evaluated on the new DataFrame they've been used upon.
You've seen a few methods for data-driven styling. Styler also provides a few other options for styles that don't depend on the data:
precision
captions
table-wide styles
Each of these can be specified in two ways:
A keyword argument to Styler.__init__
A call to one of the .set_ methods, e.g. .set_caption
The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. For interactive use, the .set_ methods are more convenient.
23.6.1 Precision
You can control the precision of floats using pandas' regular display.precision option.
In [28]: with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
Out[28]: <pandas.io.formats.style.Styler at 0x118b22f98>
Or through a set_precision method.
In [29]: df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
Out[29]: <pandas.io.formats.style.Styler at 0x118b87208>
Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
23.6.2 Captions
Regular table captions can be added with the Styler.set_caption method.
23.6.3 Table Styles
The next option you have are table styles. These are styles that apply to the table as a whole, but don't look at the data. Certain stylings, including pseudo-selectors like :hover, can only be used this way.
In [31]: from IPython.display import HTML
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
styles = [
hover(),
dict(selector="th", props=[("font-size", "150%"),
("text-align", "center")]),
dict(selector="caption", props=[("caption-side", "bottom")])
]
html = (df.style.set_table_styles(styles)
.set_caption("Hover to highlight."))
html
Out[31]: <pandas.io.formats.style.Styler at 0x118b87cf8>
table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys. The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id, unique to each Styler. This selector is in addition to that id. The value for props should be a list of tuples of ('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand. We hope to collect some useful ones either in pandas, or preferably in a new package that builds on top of the tools here.
23.6.4 CSS Classes
Certain CSS classes are attached to cells:
Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
row<n> where n is the numeric position of the row
level<k> where k is the level in a MultiIndex
Column label cells include
col_heading
col<n> where n is the numeric position of the column
level<k> where k is the level in a MultiIndex
Blank cells include blank
Data cells include data
23.6.5 Limitations
23.6.6 Terms
Style function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value'
Builtin style functions: style functions that are methods on Styler
table style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. props is a list of (attribute, value) tuples. A list of table styles is passed into Styler.set_table_styles.
Several of these pieces can be combined. Below, a large DataFrame is styled with a row-wise gradient, shrunk with CSS, captioned, and given zoom-on-hover table styles (magnify() is a helper defined in the original notebook that returns a list of table styles):
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()

bigdf.style.background_gradient(cmap, axis=1)\
    .set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
    .set_caption("Hover to magnify")\
    .set_precision(2)\
    .set_table_styles(magnify())
23.8 Export to Excel
New in version 0.20.0 (experimental): you can write styled DataFrames to Excel worksheets using the openpyxl engine. The CSS properties handled include:
background-color
border-style, border-width, border-color and their {top, right, bottom, left} variants
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
In [35]: df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
23.9 Extensibility
The core of pandas is, and will remain, its high-performance, easy-to-use data structures. With that in mind, we hope that DataFrame.style accomplishes two goals:
Provide an API that is pleasing to use interactively and is good enough for many tasks
Provide the foundations for dedicated libraries to build on
23.9.1 Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template. We'll show an example of extending the default template to insert a custom header before each table.
In [36]: from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
In [37]: %mkdir templates
mkdir: templates: File exists
This next cell writes the custom template. We extend the template html.tpl, which comes with pandas.
In [38]: %%file templates/myhtml.tpl
{% extends "html.tpl" %}
{% block table %}
<h1>{{ table_title|default("My Table") }}</h1>
{{ super() }}
{% endblock table %}
Overwriting templates/myhtml.tpl
Now that we've created a template, we need to set up a subclass of Styler that knows about it.
In [39]: class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template = env.get_template("myhtml.tpl")
Notice that we include the original loader in our environment's loader. That's because we extend the original template, so the Jinja environment needs to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
In [40]: MyStyler(df)
Out[40]: <__main__.MyStyler at 0x11b5634a8>
Our custom template accepts a table_title keyword. We can provide the value in the .render method.
In [41]: HTML(MyStyler(df).render(table_title="Extending Example"))
Out[41]: <IPython.core.display.HTML object>
For convenience, we provide the Styler.from_custom_template method that does the same as the custom
subclass.
In [42]: EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
   ....: EasyStyler(df)
Out[42]: <pandas.io.formats.style.Styler.from_custom_template.<locals>.MyStyler at 0x11b563240>
In [43]: HTML(structure)
Out[43]: <IPython.core.display.HTML object>
24 IO Tools (Text, CSV, HDF5, ...)
The pandas I/O API is a set of top-level reader functions accessed like pd.read_csv() that generally return a
pandas object. The corresponding writer functions are object methods that are accessed like df.to_csv().
Format Type  Data Description      Reader          Writer
text         CSV                   read_csv        to_csv
text         JSON                  read_json       to_json
text         HTML                  read_html       to_html
text         Local clipboard       read_clipboard  to_clipboard
binary       MS Excel              read_excel      to_excel
binary       HDF5 Format           read_hdf        to_hdf
binary       Feather Format        read_feather    to_feather
binary       Msgpack               read_msgpack    to_msgpack
binary       Stata                 read_stata      to_stata
binary       SAS                   read_sas
binary       Python Pickle Format  read_pickle     to_pickle
SQL          SQL                   read_sql        to_sql
SQL          Google Big Query      read_gbq        to_gbq
Here is an informal performance comparison for some of these IO methods.
Note: For examples that use the StringIO class, make sure you import it according to your Python version, i.e.
from StringIO import StringIO for Python 2 and from io import StringIO for Python 3.
The two workhorse functions for reading text files (a.k.a. flat files) are read_csv() and read_table(). They
both use the same parsing code to intelligently convert tabular data into a DataFrame object. See the cookbook for
some advanced strategies.
24.1.1.1 Basic
header [int or list of ints, default 'infer'] Row number(s) to use as the column names, and the start of the data.
Default behavior is as if header=0 if no names passed, otherwise as if header=None. Explicitly pass
header=0 to be able to replace existing names. The header can be a list of ints that specify row locations
for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped
(e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
names [array-like, default None] List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed unless mangle_dupe_cols=True,
which is the default.
index_col [int or sequence or False, default None] Column to use as the row labels of the DataFrame. If a sequence
is given, a MultiIndex is used. If you have a malformed file with delimiters at the end of each line, you might
consider index_col=False to force pandas to not use the first column as the index (row names).
usecols [array-like or callable, default None] Return a subset of the columns. If array-like, all elements must either be
positional (i.e. integer indices into the document columns) or strings that correspond to column names provided
either by the user in names or inferred from the document header row(s). For example, a valid array-like usecols
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
If callable, the callable function will be evaluated against the column names, returning names where the callable
function evaluates to True:
In [2]: pd.read_csv(StringIO(data))
Out[2]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
Out[3]:
col1 col3
0 a 1
1 a 2
2 c 3
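The inputs that produced these two results are not shown above; a minimal sketch that reproduces them (the data string is illustrative):

from io import StringIO
import pandas as pd

data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'

pd.read_csv(StringIO(data))  # all three columns
# Keep only the columns whose upper-cased name is COL1 or COL3
pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['COL1', 'COL3'])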
Using this parameter results in much faster parsing time and lower memory usage.
as_recarray [boolean, default False] DEPRECATED: this argument will be removed in a future version. Please
call pd.read_csv(...).to_records() instead.
Return a NumPy recarray instead of a DataFrame after parsing the data. If set to True, this option takes
precedence over the squeeze parameter. In addition, as row indices are not available in such a format, the
index_col parameter will be ignored.
squeeze [boolean, default False] If the parsed data only contains one column then return a Series.
prefix [str, default None] Prefix to add to column numbers when no header, e.g. X for X0, X1, ...
mangle_dupe_cols [boolean, default True] Duplicate columns will be specified as X.0...X.N, rather than
X...X. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype [Type name or dict of column -> type, default None] Data type for data or columns. E.g. {'a': np.
float64, 'b': np.int32} (unsupported with engine='python'). Use str or object to preserve
and not interpret dtype.
New in version 0.20.0: support for the Python parser.
engine [{'c', 'python'}] Parser engine to use. The C engine is faster while the python engine is currently more
feature-complete.
converters [dict, default None] Dict of functions for converting values in certain columns. Keys can either be integers
or column labels.
true_values [list, default None] Values to consider as True.
false_values [list, default None] Values to consider as False.
skipinitialspace [boolean, default False] Skip spaces after delimiter.
skiprows [list-like or integer, default None] Line numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file.
If callable, the callable function will be evaluated against the row indices, returning True if the row should be
skipped and False otherwise:
In [5]: pd.read_csv(StringIO(data))
Out[5]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
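As an illustration (a minimal sketch with a made-up data string), a callable skiprows that drops every odd-numbered line might look like:

from io import StringIO
import pandas as pd

data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'

# Skip lines whose 0-indexed number is odd; line 0 stays and becomes the header
pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)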
skipfooter [int, default 0] Number of lines at bottom of file to skip (unsupported with engine='c').
skip_footer [int, default 0] DEPRECATED: use the skipfooter parameter instead, as they are identical.
nrows [int, default None] Number of rows of file to read. Useful for reading pieces of large files.
low_memory [boolean, default True] Internally process the file in chunks, resulting in lower memory use while
parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with
the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize
or iterator parameter to return the data in chunks. (Only valid with C parser)
buffer_lines [int, default None] DEPRECATED: this argument will be removed in a future version because its value
is not respected by the parser.
compact_ints [boolean, default False] DEPRECATED: this argument will be removed in a future version
If compact_ints is True, then for any column that is of integer dtype, the parser will attempt to cast it
as the smallest integer dtype possible, either signed or unsigned depending on the specification from the
use_unsigned parameter.
use_unsigned [boolean, default False] DEPRECATED: this argument will be removed in a future version
If integer columns are being compacted (i.e. compact_ints=True), specify whether the column should be
compacted to the smallest signed or unsigned integer dtype.
memory_map [boolean, default False] If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
na_values [scalar, str, list-like, or dict, default None] Additional strings to recognize as NA/NaN. If
dict passed, specific per-column NA values. By default the following values are interpreted as
NaN: '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', ''.
keep_default_na [boolean, default True] If na_values are specified and keep_default_na is False the default NaN
values are overridden, otherwise they're appended to.
na_filter [boolean, default True] Detect missing value markers (empty strings and the value of na_values). In data
without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose [boolean, default False] Indicate number of NA values placed in non-numeric columns.
skip_blank_lines [boolean, default True] If True, skip over blank lines rather than interpreting as NaN values.
parse_dates [boolean or list of ints or names or list of lists or dict, default False.]
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
If {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result foo. A fast-path exists for
iso8601-formatted dates.
infer_datetime_format [boolean, default False] If True and parse_dates is enabled for a column, attempt to infer
the datetime format to speed up the processing.
keep_date_col [boolean, default False] If True and parse_dates specifies combining multiple columns then keep
the original columns.
date_parser [function, default None] Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to
call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined
by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst [boolean, default False] DD/MM format dates, international and European format.
24.1.1.6 Iteration
iterator [boolean, default False] Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize [int, default None] Return TextFileReader object for iteration. See iterating and chunking below.
compression [{'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'] For on-the-fly decompres-
sion of on-disk data. If infer, then use gzip, bz2, zip, or xz if filepath_or_buffer is a string ending in .gz,
.bz2, .zip, or .xz, respectively, and no decompression otherwise. If using zip, the ZIP file must contain
only one data file to be read in. Set to None for no decompression.
New in version 0.18.1: support for zip and xz compression.
thousands [str, default None] Thousands separator.
decimal [str, default '.'] Character to recognize as decimal point. E.g. use ',' for European data.
float_precision [string, default None] Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the high-precision converter, and round_trip for
the round-trip converter.
lineterminator [str (length 1), default None] Character to break file into lines. Only valid with C parser.
quotechar [str (length 1)] The character used to denote the start and end of a quoted item. Quoted items can include
the delimiter and it will be ignored.
quoting [int or csv.QUOTE_* instance, default 0] Control field quoting behavior per csv.QUOTE_* constants.
Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote [boolean, default True] When quotechar is specified and quoting is not QUOTE_NONE, indi-
cate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar
element.
escapechar [str (length 1), default None] One-character string used to escape delimiter when quoting is
QUOTE_NONE.
comment [str, default None] Indicates remainder of line should not be parsed. If found at the beginning of a line,
the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long
as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing #empty\na,b,c\n1,2,3 with header=0 will result in a,b,c
being treated as the header.
encoding [str, default None] Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of Python standard
encodings.
dialect [str or csv.Dialect instance, default None] If provided, this parameter will override values (default or
not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for
more details.
tupleize_cols [boolean, default False] Leave a list of tuples on columns as is (default is to convert to a MultiIndex
on the columns).
error_bad_lines [boolean, default True] Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned. If False, then these bad lines
will dropped from the DataFrame that is returned. See bad lines below.
warn_bad_lines [boolean, default True] If error_bad_lines is False, and warn_bad_lines is True, a warning for
each bad line will be output.
Starting with v0.10, you can indicate the data type for the whole DataFrame or individual columns:
In [8]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [10]: df
Out[10]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
In [11]: df['a'][0]
Out[11]: '1'
In [13]: df.dtypes
Out[13]:
a int64
b object
c float64
dtype: object
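The inputs that produced the two results above are not shown; a sketch of both forms (whole-frame and per-column dtype specification), on the data printed earlier:

from io import StringIO
import numpy as np
import pandas as pd

data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'

# Read everything as strings: df['a'][0] is then '1', not 1
df = pd.read_csv(StringIO(data), dtype=object)

# Or give individual columns their own dtypes
df = pd.read_csv(StringIO(data), dtype={'b': object, 'c': np.float64})
df.dtypes  # a: int64, b: object, c: float64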
Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you're
unfamiliar with these concepts, see the section on dtypes and the docs on object conversion in pandas to learn more.
For instance, you can use the converters argument of read_csv():
In [16]: df
Out[16]:
col_1
0 1
1 2
2 'A'
3 4.22
In [17]: df['col_1'].apply(type).value_counts()
Out[17]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the dtypes after reading in the data,
In [18]: df2 = pd.read_csv(StringIO(data))
In [20]: df2
Out[20]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [21]: df2['col_1'].apply(type).value_counts()
Out[21]:
<class 'float'> 4
Name: col_1, dtype: int64
which would convert all valid parsing to floats, leaving the invalid parsing as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case
above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if
you wanted for all the data to be coerced, no matter the type, then using the converters argument of read_csv()
would certainly be worth trying.
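A minimal sketch of both approaches, assuming the small mixed-dtype data string shown in the outputs above:

from io import StringIO
import pandas as pd

data = "col_1\n1\n2\n'A'\n4.22"

# converters: force everything in col_1 to str while parsing
df = pd.read_csv(StringIO(data), converters={'col_1': str})

# to_numeric: parse as-is, then coerce; unparseable entries become NaN
df2 = pd.read_csv(StringIO(data))
df2['col_1'] = pd.to_numeric(df2['col_1'], errors='coerce')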
New in version 0.20.0: the dtype option is now supported by the Python parser as well.
Note: In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent
dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with
mixed dtypes. For example,
In [22]: df = pd.DataFrame({'col_1': list(range(500000)) + ['a', 'b'] +
list(range(500000))})
In [23]: df.to_csv('foo.csv')
In [24]: mixed_df = pd.read_csv('foo.csv')
In [25]: mixed_df['col_1'].apply(type).value_counts()
Out[25]:
<class 'int'> 737858
<class 'str'> 262144
In [26]: mixed_df['col_1'].dtype
Out[26]: dtype('O')
will result in mixed_df containing an int dtype for certain chunks of the column, and str for others, due to the
mixed dtypes in the data that was read in. It is important to note that the overall column will be marked with a
dtype of object, which is used for columns with mixed dtypes.
Categorical columns can be parsed directly by specifying dtype='category':
In [27]: data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
In [28]: pd.read_csv(StringIO(data))
Out[28]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [29]: pd.read_csv(StringIO(data)).dtypes
Out[29]:
col1 object
col2 object
col3 int64
dtype: object
In [30]: pd.read_csv(StringIO(data), dtype='category').dtypes
Out[30]:
col1 category
col2 category
col3 category
dtype: object
Note: The resulting categories will always be parsed as strings (object dtype). If the categories are numeric they can
be converted using the to_numeric() function, or as appropriate, another converter such as to_datetime().
In [32]: df = pd.read_csv(StringIO(data), dtype='category')
In [33]: df.dtypes
Out[33]:
col1 category
col2 category
col3 category
dtype: object
In [34]: df['col3']
Out[34]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): [1, 2, 3]
Converting the categories, e.g. with df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories), yields numeric categories:
In [36]: df['col3']
Out[36]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
A file may or may not have a header row. pandas assumes the first row should be used as the column names:
In [37]: data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
In [38]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [39]: pd.read_csv(StringIO(data))
Out[39]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can indicate other names to use and whether or
not to throw away the header row (if any):
In [40]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
If the header is in a row other than the first, pass the row number to header; the preceding rows will be skipped. Both uses are sketched below.
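A minimal sketch (the data strings are illustrative):

from io import StringIO
import pandas as pd

data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'

# Replace the existing header row with new names
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=0)

# Declare there is no header, so the first row stays as data
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=None)

# Header on the second line; the line before it is skipped
data2 = 'ignored,line,here\na,b,c\n1,2,3\n4,5,6'
pd.read_csv(StringIO(data2), header=1)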
If the file or header contains duplicate names, pandas by default will deduplicate these names so as to prevent data
overwrite:
In [45]: data = 'a,b,a\n0,1,2\n3,4,5'
In [46]: pd.read_csv(StringIO(data))
Out[46]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of dupli-
cate columns X...X to become X.0...X.N. If mangle_dupe_cols=False, duplicate data could overwrite each
other; to prevent users from encountering this problem, a ValueError exception is raised if
mangle_dupe_cols is set to anything other than True (a sketch follows):
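A sketch of what currently happens if you try to disable the mangling:

from io import StringIO
import pandas as pd

data = 'a,b,a\n0,1,2\n3,4,5'

# Raises ValueError: disabling the mangling is not yet supported
pd.read_csv(StringIO(data), mangle_dupe_cols=False)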
The usecols argument allows you to select any subset of the columns in a file, either using the column names,
position numbers or a callable:
New in version 0.20.0: support for callable usecols arguments
In [47]: data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
In [48]: pd.read_csv(StringIO(data))
Out[48]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [50]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[50]:
   a  c    d
0  1  3  foo
1  4  6  bar
2  7  9  baz
In [51]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['A', 'C'])
Out[51]:
   a  c
0  1  3
1  4  6
2  7  9
The usecols argument can also be used to specify which columns not to use in the final result; in the sketch
below, the callable excludes the a and c columns from the output.
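A minimal sketch of the exclusion form:

from io import StringIO
import pandas as pd

data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'

# Keep every column except 'a' and 'c'
pd.read_csv(StringIO(data), usecols=lambda x: x not in ['a', 'c'])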
If the comment parameter is specified, then completely commented lines will be ignored. By default, completely
blank lines will be ignored as well. Both of these are API changes introduced in version 0.15.
In [54]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [55]: pd.read_csv(StringIO(data), comment='#')
Out[55]:
   a  b  c
0  1  2  3
1  4  5  6
Warning: The presence of ignored lines might create ambiguities involving line numbers; the parameter header
uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including com-
mented/empty lines):
In [58]: data = '#comment\na,b,c\nA,B,C\n1,2,3'
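A sketch of the difference, using the data string above:

from io import StringIO
import pandas as pd

data = '#comment\na,b,c\nA,B,C\n1,2,3'

# header counts rows after comments are stripped: row 1 is 'A,B,C'
pd.read_csv(StringIO(data), comment='#', header=1)

# skiprows counts raw lines, including the comment line
pd.read_csv(StringIO(data), skiprows=2)

# Both yield a frame with columns A, B, C and one data row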
If both header and skiprows are specified, header will be relative to the end of skiprows. For example:
In [62]: data = '# empty\n# second empty line\n# third emptyline\nX,Y,Z\n1,2,3\nA,B,C\n1,2.,4.\n5.,NaN,10.0'
In [63]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [64]: pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
Out[64]:
     A    B     C
0  1.0  2.0   4.0
1  5.0  NaN  10.0
24.1.6.2 Comments
In [65]: print(open('tmp.csv').read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
In [66]: df = pd.read_csv('tmp.csv')
In [67]: df
Out[67]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
In [68]: df = pd.read_csv('tmp.csv', comment='#')
In [69]: df
Out[69]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded
to unicode in the result:
In [70]: data = (b'word,length\nTr\xc3\xa4umen,7\nGr\xc3\xbc\xc3\x9fe,5'
   ....:         .decode('utf8').encode('latin-1'))
In [71]: df = pd.read_csv(BytesIO(data), encoding='latin-1')
In [72]: df
Out[72]:
      word  length
0  Träumen       7
1    Grüße       5
In [73]: df['word'][1]
Out[73]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won't parse correctly at all without
specifying the encoding. See the full list of Python standard encodings.
If a file has one more column of data than the number of column names, the first column will be used as the
DataFrames row names:
In [74]: data = 'a,b,c\n4,apple,bat,5.7\n8,orange,cow,10'
In [75]: pd.read_csv(StringIO(data))
Out[75]:
a b c
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing
the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False:
In [78]: data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
In [79]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [80]: pd.read_csv(StringIO(data))
Out[80]:
a b c
4 apple bat NaN
8 orange cow NaN
In [81]: pd.read_csv(StringIO(data), index_col=False)
Out[81]:
   a       b    c
0  4   apple  bat
1  8  orange  cow
To better facilitate working with datetime data, read_csv() and read_table() use the keyword arguments
parse_dates and date_parser to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [82]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)
In [83]: df
Out[83]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
In [84]: df.index
Out[84]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]',
name='date', freq=None)
It is often the case that we may want to store date and time data separately, or store various date fields separately. The
parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date columns will be prepended to the output
(so as to not affect the existing column order) and the new column names will be the concatenation of the component
column names:
In [85]: print(open('tmp.csv').read())
KORD,19990127, 19:00:00, 18:56:00, 0.8100
KORD,19990127, 20:00:00, 19:56:00, 0.0100
KORD,19990127, 21:00:00, 20:56:00, -0.5900
KORD,19990127, 21:00:00, 21:18:00, -0.9900
KORD,19990127, 22:00:00, 21:56:00, -0.5900
KORD,19990127, 23:00:00, 22:56:00, -0.5900
In [86]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]])
In [87]: df
Out[87]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose to retain them via the
keep_date_col keyword:
In [88]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]],
   ....:                  keep_date_col=True)
In [89]: df
Out[89]:
1_2 1_3 0 1 2 \
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 19990127 19:00:00
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 19990127 20:00:00
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD 19990127 21:00:00
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD 19990127 21:00:00
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD 19990127 22:00:00
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD 19990127 23:00:00
3 4
0 18:56:00 0.81
1 19:56:00 0.01
2 20:56:00 -0.59
3 21:18:00 -0.99
4 21:56:00 -0.59
5 22:56:00 -0.59
Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other
words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date
columns while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.
You can also use a dict to specify custom names for the resulting date columns:
In [90]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
In [91]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec)
In [92]: df
Out[92]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column
is prepended to the data. The index_col specification is based off of this new set of columns rather than the original
data columns:
In [93]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
In [94]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
   ....:                  index_col=0)  # index is the nominal column
In [95]: df
Out[95]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note: If a column or index contains an unparseable date, the entire column or index will be returned unaltered as an
object data type. For non-standard datetime parsing, use to_datetime() after pd.read_csv.
Note: read_csv has a fast-path for parsing datetime strings in iso8601 format, e.g. 2000-01-01T00:01:02+00:00 and
similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly
faster; speedups of ~20x have been observed.
Note: When passing a dict as the parse_dates argument, the order of the columns prepended is not guaranteed,
because dict objects do not impose an ordering on their keys. On Python 2.7+ you may use collections.OrderedDict
instead of a regular dict if this matters to you. Because of this, when using a dict for parse_dates in conjunction with
the index_col argument, it's best to specify index_col as a column label rather than as an index on the resulting frame.
Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of
the date parsing API:
In [96]: import pandas.io.date_converters as conv
In [97]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
   ....:                  date_parser=conv.parse_date_time)
In [98]: df
Out[98]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is
tried:
1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g.,
date_parser(['2013', '2013'], ['1', '2']))
2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g.,
date_parser(['2013 1', '2013 2']))
3. If #2 fails, date_parser is called once for every row with one or more string arguments from
the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row,
date_parser('2013', '2') for the second, etc.)
Note that performance-wise, you should try these methods of parsing dates in order:
1. Try to infer the format using infer_datetime_format=True (see section below)
2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.
to_datetime(x, format=...)
3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this
should be vectorized, i.e., it should accept arrays as arguments.
You can explore the date parsing functionality in date_converters.py and add your own. We would love to turn
this module into a community supported set of date/time parsers. To get you started, date_converters.py con-
tains functions to parse dual date and time columns, year/month/day columns, and year/month/day/hour/minute/second
columns. It also contains a generic_parser function so you can curry it with a function that deals with a single
date rather than the entire array.
If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the
same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds
have been observed. pandas will fall back to the usual parsing if either the format cannot be guessed or the format that
was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should
not have any negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (all representing December 30th, 2011 at 00:00:00):
20111230
2011/12/30
20111230 00:00:00
12/30/2011 00:00:00
30/Dec/2011 00:00:00
30/December/2011 00:00:00
infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess 01/12/2011 to be
December 1st. With dayfirst=False (default) it will guess 01/12/2011 to be January 12th.
# Try to infer the format for the index column
In [99]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
....: infer_datetime_format=True)
....:
In [100]: df
Out[100]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For
convenience, a dayfirst keyword is provided:
In [101]: print(open('tmp.csv').read())
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
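The reads demonstrating the two interpretations are sketched below, against the file printed above:

import pandas as pd

# Default: month first, so 1/6/2000 is January 6th
pd.read_csv('tmp.csv', parse_dates=[0])

# Day first: 1/6/2000 is June 1st
pd.read_csv('tmp.csv', dayfirst=True, parse_dates=[0])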
The parameter float_precision can be specified in order to use a specific floating-point converter during parsing
with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter
(which is guaranteed to round-trip values after writing to a file). For example:
On a high-precision decimal value, the ordinary converter leaves an absolute error of 1.1102230246251565e-16,
the high-precision converter 5.5511151231257827e-17, and the round-trip converter 0.0.
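A sketch of how such a comparison can be made; the decimal string val here is illustrative:

from io import StringIO
import pandas as pd

val = '0.3066101993807095471566981359501369297504425048828125'
data = 'a,b,c\n1,2,{0}'.format(val)

# Ordinary converter
abs(pd.read_csv(StringIO(data), engine='c',
                float_precision=None)['c'][0] - float(val))

# High-precision converter
abs(pd.read_csv(StringIO(data), engine='c',
                float_precision='high')['c'][0] - float(val))

# Round-trip converter: reproduces the value exactly
abs(pd.read_csv(StringIO(data), engine='c',
                float_precision='round_trip')['c'][0] - float(val))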
For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string
of length 1 so that integers will be parsed correctly. By default, numbers with a thousands separator will be parsed
as strings:
In [109]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [110]: df = pd.read_csv('tmp.csv', sep='|')
In [111]: df
Out[111]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [112]: df.level.dtype
Out[112]: dtype('O')
In [113]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [114]: df = pd.read_csv('tmp.csv', sep='|', thousands=',')
In [115]: df
Out[115]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [116]: df.level.dtype
Out[116]: dtype('int64')
24.1.12 NA Values
To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values.
If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a
float, like 5.0, or an integer like 5), the corresponding equivalent values will also imply a missing value (in this
case effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False. The
default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A',
'#N/A', 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan']. Although a 0-length string '' is
not included in the default NaN values list, it is still treated as a missing value.
read_csv(path, na_values=[5])
In addition to the defaults, 5 and 5.0 (when interpreted as numbers) are recognized as NaN.
read_csv(path, na_values=["Nope"])
In addition to the defaults, the string "Nope" is recognized as NaN.
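A runnable sketch of the first case (the data string is illustrative):

from io import StringIO
import pandas as pd

data = 'a,b\n5,x\n6,NA\n7,5.0'

# 5 and 5.0 join the default NA markers
pd.read_csv(StringIO(data), na_values=[5])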
24.1.13 Infinity
inf-like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). The parser
ignores the case of the value, meaning Inf will also be parsed as np.inf.
Using the squeeze keyword, the parser will return output with a single column as a Series:
In [117]: print(open('tmp.csv').read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [118]: output = pd.read_csv('tmp.csv', squeeze=True)
In [119]: output
Out[119]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [120]: type(output)
Out[120]: pandas.core.series.Series
The common values True, False, TRUE, and FALSE are all recognized as boolean. Sometimes you may want to
recognize other values as being boolean. To do this, use the true_values and false_values options:
In [121]: data = 'a,b,c\n1,Yes,2\n3,No,4'
In [122]: print(data)
a,b,c
1,Yes,2
3,No,4
In [123]: pd.read_csv(StringIO(data))
Out[123]:
   a    b  c
0  1  Yes  2
1  3   No  4
In [124]: pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
Out[124]:
   a      b  c
0  1   True  2
1  3  False  4
Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values
filled in the trailing fields. Lines with too many will cause an error by default:
In [27]: data = 'a,b,c\n1,2,3\n4,5,6,7\n8,9,10'
In [28]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)
Out[29]:
a b c
0 1 2 3
1 8 9 10
You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:
In [30]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[30]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
24.1.17 Dialect
The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but
you can specify either the dialect name or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [125]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to
fail when it finds a newline before it finds the closing double quote.
We can get around this using dialect:
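A sketch using the csv module's excel dialect with quoting disabled:

import csv
from io import StringIO
import pandas as pd

data = 'label1,label2,label3\nindex1,"a,c,e\nindex2,b,d,f'

dia = csv.excel()
dia.quoting = csv.QUOTE_NONE  # ignore quote characters entirely

pd.read_csv(StringIO(data), dialect=dia)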
Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:
In [132]: print(data)
a, b, c
1, 2, 3
4, 5, 6
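A sketch of the skipinitialspace option on that data:

from io import StringIO
import pandas as pd

data = 'a, b, c\n1, 2, 3\n4, 5, 6'

# Strip the space that follows each delimiter
pd.read_csv(StringIO(data), skipinitialspace=True)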
The parsers make every attempt to "do the right thing" and not be fragile. Type inference is a pretty big deal, so
if a column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype, as with the rest of pandas objects.
Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use
backslashes; to properly parse this data, you should pass the escapechar option:
In [134]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [135]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [136]: pd.read_csv(StringIO(data), escapechar='\\')
Out[136]:
                               a  b
0  hello, "Bob", nice to see you  5
While read_csv reads delimited data, the read_fwf() function works with data files that have known and fixed
column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters:
colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals
(i.e., [from, to)). The string value 'infer' can be used to instruct the parser to try detecting the column specifications
from the first 100 rows of the data. The default behaviour, if not specified, is to infer.
widths: A list of field widths which can be used instead of colspecs if the intervals are contiguous.
Consider a typical fixed-width data file:
In [137]: print(open('bar.csv').read())
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf
function along with the file name:
#Column specifications are a list of half-intervals
In [138]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [139]: df = pd.read_fwf('bar.csv', colspecs=colspecs, header=None, index_col=0)
In [140]: df
Out[140]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks column names when the header=None argument is specified.
Alternatively, you can supply just the column widths for contiguous columns:
#Widths are a list of integers
In [141]: widths = [6, 14, 13, 10]
In [142]: df = pd.read_fwf('bar.csv', widths=widths, header=None)
In [143]: df
Out[143]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white space around the columns, so it's OK to have extra separation between the
columns in the file.
New in version 0.13.0.
By default, read_fwf will try to infer the file's colspecs by using the first 100 rows of the file. It can do so
only in cases when the columns are aligned and correctly separated by the provided delimiter (the default delimiter is
whitespace).
In [144]: df = pd.read_fwf('bar.csv', header=None, index_col=0)
In [145]: df
Out[145]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
24.1.20 Indexes
Consider a file with one fewer entry in the header than the number of data columns:
In [148]: print(open('foo.csv').read())
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:
In [149]: pd.read_csv('foo.csv')
Out[149]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren't automatically parsed. In that case you would need to do as before:
In [150]: df = pd.read_csv('foo.csv', parse_dates=True)
In [151]: df.index
Out[151]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype=
'datetime64[ns]', freq=None)
In [152]: print(open('data/mindex_ex.csv').read())
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
1977,"C",1.7,.8
1978,"A",.2,.06
1978,"B",.7,.2
1978,"C",.8,.3
1978,"D",.9,.5
1978,"E",1.4,.9
1979,"C",.2,.15
1979,"D",.14,.05
1979,"E",.5,.15
1979,"F",1.2,.5
1979,"G",3.4,1.9
1979,"H",5.4,2.7
1979,"I",6.4,1.2
The index_col argument to read_csv and read_table can take a list of column numbers to turn multiple
columns into a MultiIndex for the index of the returned object:
In [153]: df = pd.read_csv("data/mindex_ex.csv", index_col=[0, 1])
In [154]: df
Out[154]:
zit xit
year indiv
1977 A 1.20 0.60
B 1.50 0.50
C 1.70 0.80
1978 A 0.20 0.06
B 0.70 0.20
C 0.80 0.30
D 0.90 0.50
E 1.40 0.90
1979 C 0.20 0.15
D 0.14 0.05
E 0.50 0.15
F 1.20 0.50
G 3.40 1.90
H 5.40 2.70
I 6.40 1.20
In [155]: df.loc[1978]
Out[155]:
zit xit
indiv
A 0.2 0.06
B 0.7 0.20
C 0.8 0.30
D 0.9 0.50
E 1.4 0.90
By specifying a list of row locations for the header argument, you can read in a MultiIndex for the columns.
Specifying non-consecutive rows will skip the intervening rows. In order to have the pre-0.13 behavior of tupleizing
columns, specify tupleize_cols=True.
In [156]: from pandas.util.testing import makeCustomDataframe as mkdf
In [157]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [158]: df.to_csv('mi.csv')
In [159]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [160]: pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1])
Out[160]:
C0              C_l0_g0 C_l0_g1 C_l0_g2
C1              C_l1_g0 C_l1_g1 C_l1_g2
C2              C_l2_g0 C_l2_g1 C_l2_g2
C3              C_l3_g0 C_l3_g1 C_l3_g2
R0      R1
R_l0_g0 R_l1_g0    R0C0    R0C1    R0C2
R_l0_g1 R_l1_g1    R1C0    R1C1    R1C2
R_l0_g2 R_l1_g2    R2C0    R2C1    R2C2
R_l0_g3 R_l1_g3    R3C0    R3C1    R3C2
R_l0_g4 R_l1_g4    R4C0    R4C1    R4C2
Starting in 0.13.0, read_csv is able to interpret a more common format of multi-column indices.
In [161]: print(open('mi2.csv').read())
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [162]: pd.read_csv('mi2.csv',header=[0,1],index_col=0)
Out[162]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note: If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(...,
index=False)), then any names on the columns index will be lost.
read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the
csv.Sniffer class of the csv module. For this, you have to specify sep=None.
In [163]: print(open('tmp2.sv').read())
:0:1:2:3
0:0.4691122999071863:-0.2828633443286633:-1.5090585031735124:-1.1356323710171934
1:1.2121120250208506:-0.17321464905330858:0.11920871129693428:-1.0442359662799567
2:-0.8618489633477999:-2.1045692188948086:-0.4949292740687813:1.071803807037338
3:0.7215551622443669:-0.7067711336300845:-1.0395749851146963:0.27185988554282986
4:-0.42497232978883753:0.567020349793672:0.27623201927771873:-1.0874006912859915
5:-0.6736897080883706:0.1136484096888855:-1.4784265524372235:0.5249876671147047
6:0.4047052186802365:0.5770459859204836:-1.7150020161146375:-1.0392684835147725
7:-0.3706468582364464:-1.1578922506419993:-1.344311812731667:0.8448851414248841
8:1.0757697837155533:-0.10904997528022223:1.6435630703622064:-1.4693879595399115
9:0.35702056413309086:-0.6746001037299882:-1.776903716971867:-0.9689138124473498
In [164]: pd.read_csv('tmp2.sv', sep=None, engine='python')
Out[164]:
   Unnamed: 0         0         1         2         3
0           0  0.469112 -0.282863 -1.509059 -1.135632
1           1  1.212112 -0.173215  0.119209 -1.044236
2           2 -0.861849 -2.104569 -0.494929  1.071804
3           3  0.721555 -0.706771 -1.039575  0.271860
4           4 -0.424972  0.567020  0.276232 -1.087401
5           5 -0.673690  0.113648 -1.478427  0.524988
6           6  0.404705  0.577046 -1.715002 -1.039268
7           7 -0.370647 -1.157892 -1.344312  0.844885
8           8  1.075770 -0.109050  1.643563 -1.469388
9           9  0.357021 -0.674600 -1.776904 -0.968914
It's best to use concat() to combine multiple files. See the cookbook for an example.
Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory,
such as the following:
In [165]: print(open('tmp.sv').read())
|0|1|2|3
0|0.4691122999071863|-0.2828633443286633|-1.5090585031735124|-1.1356323710171934
1|1.2121120250208506|-0.17321464905330858|0.11920871129693428|-1.0442359662799567
2|-0.8618489633477999|-2.1045692188948086|-0.4949292740687813|1.071803807037338
3|0.7215551622443669|-0.7067711336300845|-1.0395749851146963|0.27185988554282986
4|-0.42497232978883753|0.567020349793672|0.27623201927771873|-1.0874006912859915
5|-0.6736897080883706|0.1136484096888855|-1.4784265524372235|0.5249876671147047
6|0.4047052186802365|0.5770459859204836|-1.7150020161146375|-1.0392684835147725
7|-0.3706468582364464|-1.1578922506419993|-1.344311812731667|0.8448851414248841
8|1.0757697837155533|-0.10904997528022223|1.6435630703622064|-1.4693879595399115
9|0.35702056413309086|-0.6746001037299882|-1.776903716971867|-0.9689138124473498
In [166]: table = pd.read_table('tmp.sv', sep='|')
In [167]: table
Out[167]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
By specifying a chunksize to read_csv or read_table, the return value will be an iterable object of type
TextFileReader:
In [168]: reader = pd.read_table('tmp.sv', sep='|', chunksize=4)
In [169]: reader
Out[169]: <pandas.io.parsers.TextFileReader at 0x12992b940>
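Iterating through the reader consumes it chunk by chunk; a minimal sketch, using the reader created above:

# Each iteration yields a DataFrame of at most `chunksize` rows
for chunk in reader:
    print(chunk)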
Specifying iterator=True will also return the TextFileReader object:
In [171]: reader = pd.read_table('tmp.sv', sep='|', iterator=True)
In [172]: reader.get_chunk(5)
Out[172]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
Under the hood, pandas uses a fast and efficient parser implemented in C, as well as a Python implementation which is
currently more feature-complete. Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if options unsupported by the C engine are specified. Currently, options unsupported by the C engine include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly
using engine='python'.
You can pass in a URL to a CSV file:
df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
                 sep='\t')
S3 URLs are handled as well:
df = pd.read_csv('s3://pandas-test/tips.csv')
The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the object
as a comma-separated-values file. The function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a StringIO
sep : Field delimiter for the output file (default ,)
na_rep: A string representation of a missing value (default '')
float_format: Format string for floating point numbers
cols: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are
True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default 'w'
encoding: a string representing the encoding to use if the contents are non-ASCII, for python versions prior
to 3
line_terminator: Character sequence denoting line end (default \n)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set
a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-
numeric
quotechar: Character used to quote fields (default '"')
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when appropriate (default None)
chunksize: Number of rows to write at a time
tupleize_cols: If False (default), write as a list of tuples, otherwise write in an expanded line format
suitable for read_csv
date_format: Format string for datetime objects
The DataFrame object has an instance method to_string which allows control over the string representation of the
object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of which takes a single argument and
returns a formatted string
float_format default None, a function which takes a single (float) argument and returns a formatted string;
to be applied to floats in the DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical index to print every multiindex key at
each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or right-justified
The Series object also has a to_string method, but with only the buf, na_rep, float_format arguments.
There is also a length argument which, if set to True, will additionally output the length of the Series.
24.2 JSON
A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:
path_or_buf : the pathname or buffer to write the output This can be None in which case a JSON string is
returned
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values}
The format of the JSON string
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, ... , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
date_format : string, type of date conversion, epoch for timestamp, iso for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of s, ms, us or
ns for seconds, milliseconds, microseconds and nanoseconds respectively. Default ms.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for
JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note: NaN's, NaT's and None will be converted to null, and datetime objects will be converted based on the
date_format and date_unit parameters.
In [173]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [174]: json = dfj.to_json()
In [175]: json
Out[175]: '{"A":{"0":-1.2945235903,"1":0.2766617129,"2":-0.0139597524,"3":-0.
0061535699,"4":0.8957173022},"B":{"0":0.4137381054,"1":-0.472034511,"2":-0.
3625429925,"3":-0.923060654,"4":0.8052440254}}'
There are a number of different options for the format of the resulting JSON file / string. Consider the following
DataFrame and Series:
In [176]: dfjo = pd.DataFrame(dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
   .....:                     columns=list('ABC'), index=list('xyz'))
In [177]: dfjo
Out[177]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [178]: sjo = pd.Series(dict(x=15, y=16, z=17), name='D')
In [179]: sjo
Out[179]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting
as the primary index:
In [180]: dfjo.to_json(orient="columns")
Out[180]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
Index oriented (the default for Series) similar to column oriented but the index labels are now primary:
In [181]: dfjo.to_json(orient="index")
Out[181]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [182]: sjo.to_json(orient="index")
Out[182]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records, index labels are not included. This is
useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js:
In [183]: dfjo.to_json(orient="records")
Out[183]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [184]: sjo.to_json(orient="records")
Out[184]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of values only, column and index labels
are not included:
In [185]: dfjo.to_json(orient="values")
Out[185]: '[[1,4,7],[2,5,8],[3,6,9]]'
Split oriented serializes to a JSON object containing separate entries for values, index and columns. Name is also
included for Series:
In [186]: dfjo.to_json(orient="split")
Out[186]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,
6,9]]}'
In [187]: sjo.to_json(orient="split")
Out[187]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Note: Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels
during round-trip serialization. If you wish to preserve label ordering use the split option as it uses ordered containers.
Writing in ISO date format:
In [188]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [189]: dfd['date'] = pd.Timestamp('20130101')
In [190]: dfd = dfd.sort_index(1, ascending=False)
In [191]: json = dfd.to_json(date_format='iso')
In [192]: json
Out[192]: '{"date":{"0":"2013-01-01T00:00:00.000Z","1":"2013-01-01T00:00:00.000Z","2":
"2013-01-01T00:00:00.000Z","3":"2013-01-01T00:00:00.000Z","4":"2013-01-01T00:00:00.
000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.8138502857,"4
":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.1702987971,"3":0.
4108345112,"4":0.1320031703}}'
Writing in ISO date format, with microseconds:
In [193]: json = dfd.to_json(date_format='iso', date_unit='us')
In [194]: json
Out[194]: '{"date":{"0":"2013-01-01T00:00:00.000000Z","1":"2013-01-01T00:00:00.000000Z
","2":"2013-01-01T00:00:00.000000Z","3":"2013-01-01T00:00:00.000000Z","4":"2013-01-
01T00:00:00.000000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3
":0.8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
1702987971,"3":0.4108345112,"4":0.1320031703}}'
Epoch timestamps, in seconds:
In [195]: json = dfd.to_json(date_format='epoch', date_unit='s')
In [196]: json
Out[196]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4
":1356998400},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.
8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
1702987971,"3":0.4108345112,"4":0.1320031703}}'
Writing to a file, with a date index and a date column:
In [197]: dfj2 = dfj.copy()
In [198]: dfj2['date'] = pd.Timestamp('20130101')
In [199]: dfj2['ints'] = list(range(5))
In [200]: dfj2['bools'] = True
In [201]: dfj2.index = pd.date_range('20130101', periods=5)
In [202]: dfj2.to_json('test.json')
In [203]: open('test.json').read()
Out[203]: '{"A":{"1356998400000":-1.2945235903,"1357084800000":0.2766617129,
"1357171200000":-0.0139597524,"1357257600000":-0.0061535699,"1357344000000":0.
8957173022},"B":{"1356998400000":0.4137381054,"1357084800000":-0.472034511,
"1357171200000":-0.3625429925,"1357257600000":-0.923060654,"1357344000000":0.
8052440254},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,
"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000
":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,
"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000
":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}'
If the JSON serializer cannot handle the container contents directly, it will fall back in the following manner:
if the dtype is unsupported (e.g. np.complex) then the default_handler, if provided, will be called for
each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it. A toDict method should return a dict
which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail with an
OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler. For example:
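A sketch, using str as the fallback handler for a complex value:

import pandas as pd

# complex is not JSON-serializable; fall back to str for such values,
# yielding roughly '{"0":{"0":1.0,"1":2.0,"2":"(1+2j)"}}'
pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)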
Reading a JSON string to a pandas object can take a number of parameters. The parser will try to parse a DataFrame
if typ is not supplied or is None. To explicitly force Series parsing, pass typ=series.
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid
URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default frame
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values}
The format of the JSON string
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, ... , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don't infer dtypes at all;
default is True, applying only to the data
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is
True
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns
numpy : direct decoding to numpy arrays. default is False; Supports numeric data only, although labels may be
non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but less precise builtin functionality
date_unit : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp
precision will be detected, if this is not desired then pass one of s, ms, us or ns to force timestamp
precision to seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same option here so that decoding
produces sensible results, see Orient Options for an overview.
The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and
all of the data into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype.
convert_axes should only be set to False if you need to preserve string-like numbers (e.g. '1', '2') in the axes.
Note: Large integer values may be converted to dates if convert_dates=True and the data and / or column labels
appear date-like. The exact threshold depends on the date_unit specified. date-like means that the column label
meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
Warning: When reading JSON data, automatic coercing into dtypes has some quirks:
- an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
- a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
- bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
In [205]: pd.read_json(json)
Out[205]:
A B date
0 -1.206412 2.565646 2013-01-01
1 1.431256 1.340309 2013-01-01
2 -1.170299 -0.226169 2013-01-01
3 0.410835 0.813850 2013-01-01
4 0.132003 -0.827317 2013-01-01
In [206]: pd.read_json('test.json')
Out[206]:
A B bools date ints
2013-01-01 -1.294524 0.413738 True 2013-01-01 0
2013-01-02 0.276662 -0.472035 True 2013-01-01 1
2013-01-03 -0.013960 -0.362543 True 2013-01-01 2
2013-01-04 -0.006154 -0.923061 True 2013-01-01 3
2013-01-05 0.895717 0.805244 True 2013-01-01 4
Don't convert any data (but still convert axes and dates):
In [207]: pd.read_json('test.json', dtype=object).dtypes
Out[207]:
A object
B object
bools object
date object
ints object
dtype: object
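(si below is assumed, per the surrounding example, to be a 4x4 frame of zeros with string row labels and integer column labels; sij is the result of reading si's JSON back with convert_axes=False, which preserves the string-like axes.)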
In [210]: si
Out[210]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [211]: si.index
Out[211]:
Index(['0', '1', '2', '3'], dtype='object')
In [212]: si.columns
Out[212]:
Int64Index([0, 1, 2, 3], dtype='int64')
In [215]: sij
Out[215]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [216]: sij.index
Out[216]:
Index(['0', '1', '2', '3'], dtype='object')
In [217]: sij.columns
Out[217]:
Index(['0', '1', '2', '3'], dtype='object')
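The three dfju outputs below show the effect of the date_unit keyword when reading the same nanosecond-precision JSON back: with a wrong unit the timestamps cannot be converted and remain raw integers, while letting pandas detect the precision, or passing the correct unit, recovers the dates.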
In [220]: dfju
Out[220]:
A B bools date ints
1356998400000000000 -1.294524 0.413738 True 1356998400000000000 0
1357084800000000000 0.276662 -0.472035 True 1356998400000000000 1
1357171200000000000 -0.013960 -0.362543 True 1356998400000000000 2
1357257600000000000 -0.006154 -0.923061 True 1356998400000000000 3
1357344000000000000 0.895717 0.805244 True 1356998400000000000 4
In [222]: dfju
Out[222]:
A B bools date ints
2013-01-01 -1.294524 0.413738 True 2013-01-01 0
2013-01-02 0.276662 -0.472035 True 2013-01-01 1
2013-01-03 -0.013960 -0.362543 True 2013-01-01 2
2013-01-04 -0.006154 -0.923061 True 2013-01-01 3
2013-01-05 0.895717 0.805244 True 2013-01-01 4
In [224]: dfju
Out[224]:
A B bools date ints
2013-01-01 -1.294524 0.413738 True 2013-01-01 0
2013-01-02 0.276662 -0.472035 True 2013-01-01 1
2013-01-03 -0.013960 -0.362543 True 2013-01-01 2
2013-01-04 -0.006154 -0.923061 True 2013-01-01 3
2013-01-05 0.895717 0.805244 True 2013-01-01 4
Note: This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization
and to subsequently decode directly to numpy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric data:
In [225]: randfloats = np.random.uniform(-100, 1000, 10000)
Warning: Direct numpy decoding makes a number of assumptions and may fail or produce unexpected output if
these assumptions are not satisfied:
- data is numeric.
- data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or incorrect output may be produced if this condition is not satisfied.
- labels are ordered. Labels are only read from the first container; it is assumed that each subsequent row / column has been encoded in the same order. This should be satisfied if the data was encoded using to_json but may not be the case if the JSON is from another source.
24.2.3 Normalization
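pandas provides json_normalize() for flattening semi-structured JSON into a flat table, as in the output shown below. A minimal sketch, with the input data reconstructed to match that output:
from pandas.io.json import json_normalize

data = [{'state': 'Florida',
         'shortname': 'FL',
         'info': {'governor': 'Rick Scott'},
         'counties': [{'name': 'Dade', 'population': 12345},
                      {'name': 'Broward', 'population': 40000},
                      {'name': 'Palm Beach', 'population': 60000}]},
        {'state': 'Ohio',
         'shortname': 'OH',
         'info': {'governor': 'John Kasich'},
         'counties': [{'name': 'Summit', 'population': 1234},
                      {'name': 'Cuyahoga', 'population': 1337}]}]

# flatten the 'counties' records, carrying the state-level metadata along
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])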
Out[236]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
24.2.4 Line delimited json
pandas can read and write line-delimited JSON files, as are common in data processing pipelines using Hadoop or Spark (new in 0.19.0). The frame below is assumed to have been read with something like pd.read_json(jsonl, lines=True):
In [239]: df
Out[239]:
a b
0 1 2
1 3 4
24.2.5 Table Schema
Table Schema is a spec for describing tabular datasets as a JSON object. The JSON includes information on the field
names, types, and other attributes. You can use orient='table' to build a JSON string with two fields, schema and
data.
In [241]: df = pd.DataFrame(
.....: {'A': [1, 2, 3],
.....: 'B': ['a', 'b', 'c'],
.....: 'C': pd.date_range('2016-01-01', freq='d', periods=3),
.....: }, index=pd.Index(range(3), name='idx'))
.....:
In [242]: df
Out[242]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [243]: df.to_json(orient='table', date_format='iso')
Out[243]: '{"schema": {"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"0.20.0"}, "data": [{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000Z"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'
The schema field contains the fields key, which itself contains a list of column name to type pairs, including the
Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the
(Multi)index is unique.
The second field, data, contains the serialized data with the records orient. The index is included, and any
datetimes are ISO 8601 formatted, as required by the Table Schema spec.
The full list of types supported are described in the Table Schema spec. This table shows the mapping from pandas
types:
Pandas type Table Schema type
int64 integer
float64 number
bool boolean
datetime64[ns] datetime
timedelta64[ns] duration
categorical any
object str
A few notes on the generated table schema:
- The schema object contains a pandas_version field. This contains the version of the pandas dialect of the schema, and will be incremented with each revision.
- All dates are converted to UTC when serializing, even timezone naïve values, which are treated as UTC with an offset of 0.
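(build_table_schema lives in pandas.io.json; the Series s in the example below is assumed to be a datetime Series:)
from pandas.io.json import build_table_schema
s = pd.Series(pd.date_range('2016', periods=4))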
In [246]: build_table_schema(s)
Out[246]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'pandas_version': '0.20.0',
'primaryKey': ['index']}
- datetimes with a timezone (before serializing) include an additional field tz with the time zone name (e.g. 'US/Central').
In [247]: s_tz = pd.Series(pd.date_range('2016', periods=12,
.....: tz='US/Central'))
.....:
In [248]: build_table_schema(s_tz)
Out[248]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'pandas_version': '0.20.0',
'primaryKey': ['index']}
- Periods are converted to timestamps before serialization, and so have the same behavior of being converted to UTC. In addition, periods will contain an additional field freq with the period's frequency, e.g. 'A-DEC'.
In [249]: s_per = pd.Series(1, index=pd.period_range('2016', freq='A-DEC',
.....: periods=4))
.....:
In [250]: build_table_schema(s_per)
Out[250]:
{'fields': [{'freq': 'A-DEC', 'name': 'index', 'type': 'datetime'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0',
'primaryKey': ['index']}
- Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an ordered field is included:
In [251]: s_cat = pd.Series(pd.Categorical(['a', 'b', 'a']))
In [252]: build_table_schema(s_cat)
Out[252]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'constraints': {'enum': ['a', 'b']},
'name': 'values',
'ordered': False,
'type': 'any'}],
'pandas_version': '0.20.0',
'primaryKey': ['index']}
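A primaryKey field is omitted if the index is not unique (s_dupe is assumed to be a Series with a duplicated index, e.g. pd.Series([1, 2], index=[1, 1])):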
In [254]: build_table_schema(s_dupe)
Out[254]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0'}
The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array:
In [256]: build_table_schema(s_multi)
Out[256]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0',
'primaryKey': FrozenList(['level_0', 'level_1'])}
24.3 HTML
Warning: We highly encourage you to read the HTML Table Parsing gotchas below regarding the issues sur-
rounding the BeautifulSoup4/html5lib/lxml parsers.
Note: read_html returns a list of DataFrame objects, even if there is only a single table contained in the
HTML content
In [259]: dfs
Out[259]:
[ Bank Name City ST CERT \
0 First NBC Bank New Orleans LA 58302
1 Proficio Bank Cottonwood Heights UT 35495
Updated Date
0 May 3, 2017
1 April 13, 2017
2 April 21, 2017
3 April 13, 2017
4 November 17, 2016
5 April 27, 2017
6 September 6, 2016
.. ...
544 September 21, 2015
545 February 10, 2004
546 August 19, 2014
547 November 18, 2002
548 February 18, 2003
549 March 17, 2005
550 March 17, 2005
Note: The data from the above URL changes every Monday so the resulting data above and the data below may be
slightly different.
Read in the content of the file from the above URL and pass it to read_html as a string
In [261]: dfs
Out[261]:
[ Bank Name City ST CERT \
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
1 Central Arizona Bank Scottsdale AZ 34527
2 Sunrise Bank Valdosta GA 58185
3 Pisgah Community Bank Asheville NC 58701
4 Douglas County Bank Douglasville GA 21649
5 Parkway Bank Lenoir NC 57158
6 Chipola Community Bank Marianna FL 58034
.. ... ... .. ...
499 Hamilton Bank, NA En Espanol Miami FL 24382
500 Sinclair National Bank Gravette AR 34248
501 Superior Bank, FSB Hinsdale IL 32646
502 Malta National Bank Malta OH 6629
503 First Alliance Bank & Trust Co. Manchester NH 34264
504 National State Bank of Metropolis Metropolis IL 3815
505 Bank of Honolulu Honolulu HI 21029
In [264]: dfs
Out[264]:
[ Bank Name City ST CERT \
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
1 Central Arizona Bank Scottsdale AZ 34527
2 Sunrise Bank Valdosta GA 58185
3 Pisgah Community Bank Asheville NC 58701
4 Douglas County Bank Douglasville GA 21649
Note: The following examples are not run by the IPython evaluator due to the fact that having so many
network-accessing functions slows down the documentation build. If you spot an error or an example that doesn't run,
please do not hesitate to report it on the pandas GitHub issues page.
Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column
index; if multiple rows are contained within <thead> then a MultiIndex is created). If specified, the header row is
taken from the data minus the parsed header elements (<th> elements).
Specify a number of rows to skip using a list (xrange (Python 2 only) works as well); a sketch of both options follows.
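A minimal sketch of both keywords (url here is a placeholder):
dfs = pd.read_html(url, header=0)
dfs = pd.read_html(url, skiprows=0)
dfs = pd.read_html(url, skiprows=range(2))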
url_mcc = 'https://en.wikipedia.org/wiki/Mobile_country_code'
dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0, converters={'MNC': str})
Read in pandas to_html output (with some loss of floating point precision)
df = pd.DataFrame(randn(2, 2))
s = df.to_html(float_format='{0:.40g}'.format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only parser you provide (if you only have a single
parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the
function expects a sequence of strings)
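For example (a sketch; url and the match string are placeholders):
dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml'])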
or
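dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml')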
However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'] then the parse will most
likely succeed. Note that as soon as a parse succeeds, the function will return.
DataFrame objects have an instance method to_html which renders the contents of the DataFrame as an HTML
table. The function arguments are as in the method to_string described above.
Note: Not all of the possible options for DataFrame.to_html are shown here for brevity's sake. See
to_html() for the full set of options.
In [266]: df
Out[266]:
0 1
0 -0.184744 0.496971
1 -0.856240 1.857977
In [267]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>
The columns argument will limit the columns shown:
In [268]: print(df.to_html(columns=[0]))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
</tr>
</tbody>
</table>
float_format takes a Python callable to control the precision of floating point values:
In [269]: print(df.to_html(float_format='{0:.10f}'.format))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.1847438576</td>
<td>0.4969711327</td>
</tr>
<tr>
<th>1</th>
<td>-0.8562396763</td>
<td>1.8579766508</td>
</tr>
</tbody>
</table>
bold_rows will make the row labels bold by default, but you can turn that off:
In [270]: print(df.to_html(bold_rows=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<td>1</td>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>
The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes
are appended to the existing 'dataframe' class.
In [271]: print(df.to_html(classes=['awesome_table_class', 'even_more_awesome_class']))
Finally, the escape argument allows you to control whether the <, > and & characters are escaped in the resulting
HTML (by default it is True). So to get the HTML without escaped characters pass escape=False:
In [272]: df = pd.DataFrame({'a': list('&<>'), 'b': randn(3)})
Escaped:
In [273]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&amp;</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td>&lt;</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>&gt;</td>
<td>-0.400654</td>
</tr>
</tbody>
</table>
Not escaped:
In [274]: print(df.to_html(escape=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-0.400654</td>
</tr>
</tbody>
</table>
Note: Some browsers may not show a difference in the rendering of the previous two HTML tables.
There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas
io function read_html.
Issues with lxml
Benefits
- lxml is very fast
- lxml requires Cython to install correctly.
Drawbacks
- lxml does not make any guarantees about the results of its parse unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the lxml backend, but this backend
will use html5lib if lxml fails to parse.
It is therefore highly recommended that you install both BeautifulSoup4 and html5lib, so that you will
still get a valid result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
- html5lib is far more lenient than lxml and consequently deals with real-life markup in a much saner way rather than just, e.g., dropping an element without notifying you.
- html5lib generates valid HTML5 markup from invalid markup automatically. This is extremely important for parsing HTML tables, since it guarantees a valid document. However, that does NOT mean that it is correct, since the process of fixing markup does not have a single definition.
- html5lib is pure Python and requires no additional build steps beyond its own installation.
Drawbacks
- The biggest drawback to using html5lib is that it is slow as molasses. However consider the fact that many tables on the web are not big enough for the parsing algorithm runtime to matter. It is more likely that the bottleneck will be in the process of reading the raw text from the URL over the web, i.e., IO (input-output). For very large tables, this might not be true.
24.4 Excel files
The read_excel() method can read Excel 2003 (.xls) and Excel 2007+ (.xlsx) files using the xlrd Python
module. The to_excel() instance method is used for saving a DataFrame to Excel. Generally the semantics are
similar to working with CSV data. See the cookbook for some advanced strategies.
In the most basic use-case, read_excel takes a path to an Excel file, and the sheetname indicating which sheet
to parse.
# Returns a DataFrame
read_excel('path_to_file.xls', sheetname='Sheet1')
To facilitate working with multiple sheets from the same file, the ExcelFile class can be used to wrap the file and
can be passed into read_excel. There will be a performance benefit when reading multiple sheets, as the file is read
into memory only once.
xlsx = pd.ExcelFile('path_to_file.xls')
df = pd.read_excel(xlsx, 'Sheet1')
The sheet_names property will generate a list of the sheet names in the file.
The primary use-case for an ExcelFile is parsing multiple sheets with different parameters
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile('path_to_file.xls') as xls:
data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None, na_values=['NA'])
data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=1)
Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to
read_excel with no loss in performance.
# Returns a DataFrame
read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
# Returns a DataFrame
read_excel('path_to_file.xls', 0, index_col=None, na_values=['NA'])
# Returns a DataFrame
read_excel('path_to_file.xls')
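read_excel can also read a MultiIndex index, by passing a list of columns to index_col, and a MultiIndex column by passing a list of rows to header. The frames below are written out and read back this way.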
In [276]: df.to_excel('path_to_file.xlsx')
In [278]: df
Out[278]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same parameters.
In [280]: df.to_excel('path_to_file.xlsx')
In [282]: df
Out[282]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each should be passed to index_col
and header
In [284]: df.to_excel('path_to_file.xlsx')
In [285]: df = pd.read_excel('path_to_file.xlsx',
.....: index_col=[0,1], header=[0,1])
.....:
In [286]: df
Out[286]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning: Excel files saved in version 0.16.2 or prior that had index names can still be read in, but the
has_index_names argument must be specified as True.
It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read
in those columns. read_excel takes a parse_cols keyword to allow you to specify a subset of columns to parse.
If parse_cols is an integer, then it is assumed to indicate the last column to be parsed.
If parse_cols is a list of integers, then it is assumed to be the file column indices to be parsed.
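For example (a sketch):
# Parse up to and including the third column
read_excel('path_to_file.xls', 'Sheet1', parse_cols=2)
# Parse only the listed column indices
read_excel('path_to_file.xls', 'Sheet1', parse_cols=[0, 2, 3])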
Datetime-like values are normally automatically converted to the appropriate dtype when reading the Excel file. But
if you have a column of strings that look like dates (but are not actually formatted as dates in Excel), you can use the
parse_dates keyword to parse those strings to datetimes:
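A sketch (the column name 'date_strings' is a placeholder):
read_excel('path_to_file.xls', 'Sheet1', parse_dates=['date_strings'])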
It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column to
boolean:
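A sketch (the column name 'MyBools' is a placeholder):
read_excel('path_to_file.xls', 'Sheet1', converters={'MyBools': bool})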
This option handles missing values and treats exceptions in the converters as missing data. Transformations are
applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column
of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float.
You can manually mask missing data to recover integer dtype:
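A sketch of such masking (the column name 'MyInts' and the sentinel -1 are placeholders):
cfun = lambda x: int(x) if x else -1
read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})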
To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments
are largely the same as to_csv described above, the first argument being the name of the Excel file, and the optional
second argument the name of the sheet to which the DataFrame should be written. For example:
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using
xlsxwriter (if available) or openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output. One difference from 0.12.0 is that the
index_label will be placed in the second row instead of the first. You can get the previous behaviour by setting
the merge_cells option in to_excel() to False:
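For example:
df.to_excel('path_to_file.xlsx', index_label='label', merge_cells=False)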
The Panel class also has a to_excel instance method, which writes each DataFrame in the Panel to a separate sheet.
In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter.
with pd.ExcelWriter('path_to_file.xlsx') as writer:
df1.to_excel(writer, sheet_name='Sheet1')
df2.to_excel(writer, sheet_name='Sheet2')
Note: Wringing a little more performance out of read_excel: internally, Excel stores all numeric data as floats.
Because this can produce unexpected behavior when reading in data, pandas defaults to trying to convert integers to
floats if it doesn't lose information (1.0 --> 1). You can pass convert_float=False to disable this behavior,
which may give a slight performance improvement.
Pandas also supports writing Excel files to buffer-like objects such as BytesIO, using ExcelWriter.
from io import BytesIO

bio = BytesIO()

# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine='openpyxl')
df.to_excel(writer)

# Save the workbook
writer.save()

# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note: engine is optional but recommended. Setting the engine determines the version of workbook produced.
Setting engine='xlwt' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook
is produced.
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the
DataFrame's to_excel method.
- float_format : Format string for floating point numbers (default None)
- freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None)
24.5 Clipboard
A handy way to grab data is to use the read_clipboard method, which takes the contents of the clipboard buffer
and passes them to the read_table method. For instance, you can copy the following text to the clipboard (CTRL-C
on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
clipdf = pd.read_clipboard()
In [287]: clipdf
Out[287]:
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to the clipboard, after which
you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate
writing a DataFrame to the clipboard and reading it back.
In [288]: df = pd.DataFrame(randn(5,3))
In [289]: df
Out[289]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
In [290]: df.to_clipboard()
In [291]: pd.read_clipboard()
Out[291]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
We can see that we got the same content back, which we had earlier written to the clipboard.
Note: You may need to install xclip or xsel (with gtk or PyQt4 modules) on Linux to use these methods.
24.6 Pickling
All pandas objects are equipped with to_pickle methods which use Python's cPickle module to save data
structures to disk using the pickle format.
In [292]: df
Out[292]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
In [293]: df.to_pickle('foo.pkl')
The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other
pickled object) from file:
In [294]: pd.read_pickle('foo.pkl')
Out[294]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
Warning: Loading pickled data received from untrusted sources can be unsafe.
See: http://docs.python.org/2.7/library/pickle.html
Warning: Several internal refactorings, 0.13 (Series Refactoring), and 0.15 (Index Refactoring), preserve
compatibility with pickles created prior to these versions. However, these must be read with pd.read_pickle,
rather than the default python pickle.load. See this question for a detailed explanation.
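read_pickle and to_pickle can also read and write compressed pickle files (new in 0.20.0); as the examples below with .gz and .bz2 suffixes show, the compression type is inferred from the file extension.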
In [296]: df
Out[296]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
In [299]: rt
Out[299]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
999 0.632369 foo 2013-01-01 00:16:39
In [302]: rt
Out[302]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
999 0.632369 foo 2013-01-01 00:16:39
In [303]: df.to_pickle("data.pkl.gz")
In [304]: rt = pd.read_pickle("data.pkl.gz")
In [305]: rt
Out[305]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
999 0.632369 foo 2013-01-01 00:16:39
In [306]: df["A"].to_pickle("s1.pkl.bz2")
In [307]: rt = pd.read_pickle("s1.pkl.bz2")
In [308]: rt
Out[308]:
0 0.478412
1 -0.783748
2 1.403558
3 -0.539282
4 -1.651012
5 0.692072
6 1.022171
...
993 -1.613932
994 1.088104
995 -0.632963
996 -0.585314
997 -0.275038
998 -0.937512
999 0.632369
Name: A, Length: 1000, dtype: float64
24.7 msgpack
Warning: This is a very new feature of pandas. We intend to provide certain optimizations in the io of the
msgpack data. Since this is marked as an EXPERIMENTAL LIBRARY, the storage format may not be stable
until a future release.
As a result of writing format changes and other issues:
Packed with            Can be unpacked with
pre-0.17 / Python 2    any
pre-0.17 / Python 3    any
0.17 / Python 2        ==0.17 / Python 2, or >=0.18 / any Python
In [309]: df = pd.DataFrame(np.random.rand(5,2),columns=list('AB'))
In [310]: df.to_msgpack('foo.msg')
In [311]: pd.read_msgpack('foo.msg')
Out[311]:
A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359
In [312]: s = pd.Series(np.random.rand(5),index=pd.date_range('20130101',periods=5))
You can pass a list of objects and you will receive them back on deserialization.
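(The objects in the list below are assumed to have been packed with something like pd.to_msgpack('foo.msg', df, 'foo', np.array([1, 2, 3]), s).)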
In [314]: pd.read_msgpack('foo.msg')
Out[314]:
[ A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359, 'foo', array([1, 2, 3]), 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64]
In [316]: df.to_msgpack('foo.msg',append=True)
In [317]: pd.read_msgpack('foo.msg')
Out[317]:
[ A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359, 'foo', array([1, 2, 3]), 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64, A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359]
Unlike other IO methods, to_msgpack is available both on a per-object basis, df.to_msgpack(), and as the
top-level pd.to_msgpack(...), which can pack arbitrary collections of Python lists, dicts, and scalars,
intermixed with pandas objects.
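(foo2.msg below is assumed to have been written with something like pd.to_msgpack('foo2.msg', {'dict': [{'df': df}, {'string': 'foo'}, {'scalar': 1.0}, {'s': s}]}).)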
In [319]: pd.read_msgpack('foo2.msg')
Out[319]:
{'dict': ({'df': A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359},
{'string': 'foo'},
{'scalar': 1.0},
{'s': 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64})}
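msgpacks can also be read from / written to strings: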
In [320]: df.to_msgpack()
Out[320]: b'\x84\xa3typ\xadblock_manager\xa5klass\xa9DataFrame\xa4axes\x92\x86\xa3typ\xa5index\xa5klass\xa5Index\xa4name\xc0 ... \xa5shape\x92\x02\x05\xa5dtype\xa7float64\xa5klass\xaaFloatBlock\xa8compress\xc0'
(a raw msgpack bytes literal, abbreviated)
Furthermore you can concatenate the strings to produce a list of the original objects.
24.8 HDF5 (PyTables)
HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format via the
excellent PyTables library. See the cookbook for some advanced strategies.
Warning: As of version 0.15.0, pandas requires PyTables >= 3.0.0. Stores written with prior versions of
pandas / PyTables >= 2.3 are fully compatible (this was the previous minimum PyTables required version).
Warning: There is a PyTables indexing bug which may appear when querying stores using an index. If you
see a subset of results being returned, upgrade to PyTables >= 3.2. Stores created previously will need to be
rewritten using the updated version.
Warning: As of version 0.17.0, HDFStore will not drop rows that have all missing values by default. Previously,
if all values (except the index) were missing, HDFStore would not write those rows to disk.
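A store is opened or created by passing a file path to the HDFStore constructor; the store printed below was presumably created with store = pd.HDFStore('store.h5').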
In [323]: print(store)
<class 'pandas.io.pytables.HDFStore'>
Objects can be written to the file just like adding key-value pairs to a dict:
In [324]: np.random.seed(1234)
In [330]: store['df'] = df
In [331]: store['wp'] = wp
In [333]: store
Out[333]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame (shape->[8,3])
/s series (shape->[5])
/wp wide (shape->[2,5,4])
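In a current or later Python session, stored objects can be retrieved like dict access; the frame below is the result of store['df']: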
A B C
2000-01-01 0.887163 0.859588 -0.636524
2000-01-02 0.015696 -2.242685 1.150036
2000-01-03 0.991946 0.953324 -2.021255
2000-01-04 -0.334077 0.002118 0.405453
2000-01-05 0.289092 1.321158 -1.546906
2000-01-06 -0.202646 -0.655969 0.193421
2000-01-07 0.553439 1.318152 -0.469305
2000-01-08 0.675554 -1.817027 -0.183109
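Deletion of the object specified by the key also works like a dict; in the store display below, /wp has been removed, presumably with del store['wp']: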
In [337]: store
Out[337]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame (shape->[8,3])
/s series (shape->[5])
In [338]: store.close()
In [339]: store
Out[339]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
File is CLOSED
In [340]: store.is_open
Out[340]:
False
# Working with, and automatically closing the store with the context
# manager
In [341]: with pd.HDFStore('store.h5') as store:
.....: store.keys()
.....:
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how
read_csv and to_csv work (new in 0.11.0).
In [343]: df_tl.to_hdf('store_tl.h5','table',append=True)
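The table can then be read back, optionally with a query, e.g.:
pd.read_hdf('store_tl.h5', 'table', where=['index > 2'])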
As of version 0.17.0, HDFStore will no longer drop rows that are all missing by default. This behavior can be enabled
by setting dropna=True.
In [345]: df_with_missing = pd.DataFrame({'col1':[0, np.nan, 2],
.....: 'col2':[1, np.nan, np.nan]})
.....:
In [346]: df_with_missing
Out[346]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [353]: panel_with_major_axis_all_missing
Out[353]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 2 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item3
Major_axis axis: 1 to 2
Minor_axis axis: A to C
In [354]: panel_with_major_axis_all_missing.to_hdf('file.h5', 'panel',
.....: dropna=True,
.....: mode='w')
.....:
In [355]: reloaded = pd.read_hdf('file.h5', 'panel')
In [356]: reloaded
Out[356]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 1 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item3
Major_axis axis: 2 to 2
Minor_axis axis: A to C
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called the
fixed format. These types of stores are not appendable once written (though you can simply remove them and
rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support dataframes with
non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table
stores. This format is specified by default when using put or to_hdf, or by format='fixed' or format='f'.
Warning: A fixed format will raise a TypeError if you try to retrieve using a where.
pd.DataFrame(randn(10,2)).to_hdf('test_fixed.h5','df')
pd.read_hdf('test_fixed.h5','df',where='index>5')
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very
much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions. In
addition, delete and query type operations are supported. This format is specified by format='table' or
format='t' to append or put or to_hdf.
New in version 0.13.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.
In [362]: store
Out[362]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->[index])
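The stored table can be queried back; the frame below is presumably the result of selecting the stored table, e.g. store.select('df'):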
A B C
2000-01-01 0.887163 0.859588 -0.636524
2000-01-02 0.015696 -2.242685 1.150036
2000-01-03 0.991946 0.953324 -2.021255
2000-01-04 -0.334077 0.002118 0.405453
2000-01-05 0.289092 1.321158 -1.546906
2000-01-06 -0.202646 -0.655969 0.193421
2000-01-07 0.553439 1.318152 -0.469305
2000-01-08 0.675554 -1.817027 -0.183109
Note: You can also create a table by passing format='table' or format='t' to a put operation.
Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/
bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified
without the leading / and are ALWAYS absolute (e.g. foo refers to /foo). Removal operations can remove everything
in the sub-store and BELOW, so be careful.
In [365]: store.put('foo/bar/bah', df)
In [368]: store
Out[368]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->
[index])
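The entire sub-store and everything below it can then be removed in one shot, presumably with:
store.remove('foo')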
In [371]: store
Out[371]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->
[index])
Warning: Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored
under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node by using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array),
'axis1' (Array)]
Storing mixed-dtype data is supported. Strings are stored as a fixed-width using the maximum size of the appended
column. Subsequent attempts at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string
columns. Storing floats, strings, ints, bools, datetime64 are currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default nan representation on disk (which
converts to/from np.nan); this defaults to nan.
In [373]: df_mixed = pd.DataFrame({ 'A' : randn(8),
.....: 'B' : randn(8),
.....: 'C' : np.array(randn(8),dtype='float32'),
.....: 'string' :'string',
.....: 'int' : 1,
.....: 'bool' : True,
.....: 'datetime64' : pd.Timestamp('20010102')},
.....: index=list(range(8)))
.....:
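(df_mixed1 below is the frame read back from the store; it is assumed to have been appended with a larger minimum string size, e.g. store.append('df_mixed', df_mixed, min_itemsize={'values': 50}), after setting some rows to missing.)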
In [377]: df_mixed1
Out[377]:
A B C bool datetime64 int string
0 0.704721 -1.152659 -0.430096 True 2001-01-02 1 string
1 -0.785435 0.631979 0.767369 True 2001-01-02 1 string
2 0.462060 0.039513 0.984920 True 2001-01-02 1 string
3 NaN NaN 0.270836 True NaT 1 NaN
4 NaN NaN 1.391986 True NaT 1 NaN
5 -0.926254 1.321106 0.079842 True 2001-01-02 1 string
6 2.007843 0.152631 -0.399965 True 2001-01-02 1 string
7 0.226963 0.164530 -1.027851 True 2001-01-02 1 string
In [378]: df_mixed1.get_dtype_counts()
Out[378]:
bool 1
datetime64[ns] 1
float32 1
float64 2
int64 1
object 1
dtype: int64
Storing multi-index dataframes as tables is very similar to storing/selecting from homogeneous index DataFrames.
In [382]: df_mi
Out[382]:
A B C
foo bar
foo one -0.584718 0.816594 -0.081947
two -0.344766 0.528288 -1.068989
three -0.511881 0.291205 0.566534
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
baz two 1.224574 -1.281108 0.875476
three -1.710715 -0.450765 0.749164
qux one -0.203933 -0.182175 0.680656
two -1.818499 0.047072 0.394844
three -0.248432 -0.617707 -0.682884
In [383]: store.append('df_mi',df_mi)
In [384]: store.select('df_mi')
Out[384]:
A B C
foo bar
foo one -0.584718 0.816594 -0.081947
two -0.344766 0.528288 -1.068989
three -0.511881 0.291205 0.566534
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
baz two 1.224574 -1.281108 0.875476
three -1.710715 -0.450765 0.749164
qux one -0.203933 -0.182175 0.680656
two -1.818499 0.047072 0.394844
three -0.248432 -0.617707 -0.682884
In [385]: store.select('df_mi', 'foo=bar')
Out[385]:
A B C
foo bar
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
24.8.6 Querying
Warning: The query capabilities have changed substantially starting in 0.13.0. Queries from prior versions are
accepted (with a DeprecationWarning printed) if they are not string-like.
select and delete operations have an optional criterion that can be specified to select/delete only a subset of the
data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
A query is specified using the Term class under the hood, as a boolean expression.
- index and columns are supported indexers of a DataFrame
- major_axis, minor_axis, and items are supported indexers of the Panel
- if data_columns are specified, these can be used as additional indexers
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
- | : or
- & : and
- ( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note:
- = will be automatically expanded to the comparison operator ==
- ~ is the not operator, but can only be used in very limited circumstances
- If a list/tuple of expressions is passed, they will be combined via &
Examples of valid expressions:
"ts>=Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
- functions that will be evaluated, e.g. Timestamp('2012-02-01')
- strings, e.g. "bar"
- date-like, e.g. 20130101, or "20130101"
- lists, e.g. "['A','B']"
- variables that are defined in the local namespace, e.g. date
Note: Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the
string of interest to a variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select('df', 'index == string')
instead of this
string = "HolyMoly'"
store.select('df', 'index == %s' % string)
The latter will not work and will raise a SyntaxError. Note that there's a single quote followed by a double quote
in the string variable.
If you must interpolate, use the '%r' format specifier
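e.g.:
store.select('df', 'index == %r' % string)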
In [387]: store.append('dfq',dfq,format='table',data_columns=True)
In [390]: store.append('wp',wp)
In [391]: store
Out[391]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->
[index])
In [392]: store.select('wp', "major_axis>pd.Timestamp('20000102') & minor_axis=['A', 'B']")
Out[392]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to B
The columns keyword can be supplied to select a list of columns to be returned; this is equivalent to passing
'columns=list_of_columns_to_filter', for example:
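store.select('df', "columns=['A', 'B']")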
start and stop parameters can be specified to limit the total search space. These are in terms of the total number
of rows in a table.
# this is effectively what the storage of a Panel looks like
In [394]: wp.to_frame()
Out[394]:
Item1 Item2
major minor
2000-01-01 A 1.058969 0.215269
B -0.397840 0.841009
C 0.337438 -1.445810
D 1.047579 -1.401973
2000-01-02 A 1.045938 -0.100918
B 0.863717 -0.548242
C -0.122092 -0.144620
... ... ...
2000-01-04 B 0.036142 0.307969
C -2.074978 -0.208499
D 0.247792 1.033801
2000-01-05 A -0.897157 -2.400454
B -0.136795 2.030604
C 0.018289 -1.142631
D 0.755414 0.211883
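(The Panel below is the result of such a limited select, presumably along the lines of store.select('wp', "major_axis>20000102 & minor_axis=['A', 'B']", start=0, stop=10).)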
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 1 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-03 00:00:00
Minor_axis axis: A to B
Note: select will raise a ValueError if the query expression has an unknown variable reference. Usually this
means that you are trying to select on a column that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
In [399]: dftd
Out[399]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [400]: store.append('dftd',dftd,data_columns=True)
In [401]: store.select('dftd',"C<'-3.5D'")
Out[401]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
24.8.6.3 Indexing
You can create/modify an index for a table with create_table_index after data is already in the table (after an
append/put operation). Creating a table index is highly encouraged. This will speed your queries a great deal
when you use a select with the indexed dimension as the where.
Note: Indexes are automagically created (starting 0.10.1) on the indexables and any data columns you specify.
This behavior can be turned off by passing index=False to append.
In [405]: i = store.root.df.table.cols.index.index
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append,
then recreate at the end.
In [409]: st = pd.HDFStore('appends.h5',mode='w')
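(Several frames are then assumed to have been appended with index creation turned off, e.g. st.append('df', df_1, data_columns=['B'], index=False).)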
In [412]: st.get_storer('df').table
Out[412]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
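The index is then created in one shot at the end; the full, optlevel-9 index on B visible below was presumably created with st.create_table_index('df', columns=['B'], optlevel=9, kind='full').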
In [414]: st.get_storer('df').table
Out[414]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, full, shuffle, zlib(1)).is_csi=True}
In [415]: st.close()
24.8.6.4 Query via Data Columns
You can designate (and index) certain columns that you want to be able to perform queries on (other than the
indexable columns, which you can always query). For instance, say you want to perform this common operation
on-disk, and return just the frame that matches this query. You can specify data_columns=True to force all
columns to be data_columns.
In [416]: df_dc = df.copy()
In [422]: df_dc
Out[422]:
A B C string string2
2000-01-01 0.887163 0.859588 -0.636524 foo cool
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
2000-01-05 0.289092 1.321158 -1.546906 NaN cool
2000-01-06 -0.202646 -0.655969 0.193421 NaN cool
2000-01-07 0.553439 1.318152 -0.469305 foo cool
2000-01-08 0.675554 -1.817027 -0.183109 bar cool
# on-disk operations
In [423]: store.append('df_dc', df_dc, data_columns = ['B', 'C', 'string', 'string2'])
# getting creative
In [425]: store.select('df_dc', 'B > 0 & C > 0 & string == foo')
Out[425]:
A B C string string2
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
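The in-memory equivalent of this selection is the boolean-indexing expression df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')], whose result is repeated below: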
A B C string string2
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate
these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (Of course
you can simply read in the data and create a new table!)
24.8.6.5 Iterator
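You can pass iterator=True or chunksize=number_in_a_chunk to select and select_as_multiple to return an iterator on the results.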
Note that the chunksize keyword applies to the source rows. So if you are doing a query, the chunksize will
subdivide the total rows in the table with the query applied, returning an iterator of potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return chunks (see the sketch after the
frame below).
In [429]: dfeq = pd.DataFrame({'number': np.arange(1,11)})
In [430]: dfeq
Out[430]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
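A sketch of the recipe (reconstructed along the lines of the original example; the chunks helper and the evens list are illustrative):

store.append('dfeq', dfeq, data_columns=['number'])

def chunks(l, n):
    # split a list of coordinates into equal sized pieces
    return [l[i:i + n] for i in range(0, len(l), n)]

evens = [2, 4, 6, 8, 10]
coordinates = store.select_as_coordinates('dfeq', 'number=evens')
for c in chunks(coordinates, 2):
    print(store.select('dfeq', where=c))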
To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you
to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently
accept the where selector.
In [436]: store.select_column('df_dc', 'index')
Out[436]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
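Selecting a data column works the same way; the output below is presumably store.select_column('df_dc', 'string'):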
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates
Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an Int64Index
of the resulting locations. These coordinates can also be passed to subsequent where operations.
In [439]: store.append('df_coord',df_coord)
In [440]: c = store.select_as_coordinates('df_coord','index>20020101')
In [441]: c.summary()
Out[441]: 'Int64Index: 268 entries, 732 to 999'
In [442]: store.select('df_coord',where=c)
Out[442]:
0 1
2002-01-02 -0.178266 -0.064638
2002-01-03 -1.204956 -3.880898
2002-01-04 0.974470 0.415160
2002-01-05 1.751967 0.485011
2002-01-06 -0.170894 0.748870
2002-01-07 0.629793 0.811053
2002-01-08 2.133776 0.238459
... ... ...
2002-09-20 -0.181434 0.612399
2002-09-21 -0.763324 -0.354962
2002-09-22 -0.261776 0.812126
2002-09-23 0.482615 -0.886512
2002-09-24 -0.037757 -0.562953
2002-09-25 0.897706 0.383232
2002-09-26 -1.324806 1.139269
Selecting using a where mask
Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index
from an indexing operation. This example selects the months of a DatetimeIndex which are 5 (May).
In [444]: store.append('df_mask',df_mask)
In [445]: c = store.select_column('df_mask','index')
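The mask is then built from the index column, presumably as:
where = c[pd.DatetimeIndex(c).month == 5].index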
In [447]: store.select('df_mask',where=where)
Out[447]:
0 1
2000-05-01 -1.006245 -0.616759
2000-05-02 0.218940 0.717838
2000-05-03 0.013333 1.348060
2000-05-04 0.662176 -1.050645
2000-05-05 -1.034870 -0.243242
2000-05-06 -0.753366 -1.454329
2000-05-07 -1.022920 -0.476989
... ... ...
2002-05-25 -0.509090 -0.389376
2002-05-26 0.150674 1.164337
2002-05-27 -0.332944 0.115181
2002-05-28 -1.048127 -0.605733
2002-05-29 1.418754 -0.442835
2002-05-30 -0.433200 0.835001
2002-05-31 -1.041278 1.401811
Storer Object
If you want to inspect the stored object, retrieve via get_storer. You could use this programmatically to say get
the number of rows in an object.
In [448]: store.get_storer('df_dc').nrows
Out[448]: 8
New in 0.10.1 are the methods append_to_multiple and select_as_multiple, which can perform
appending/selecting from multiple tables at once. The idea is to have one table (call it the selector table) that you
index most/all of the columns on, and perform your queries. The other table(s) are data tables with an index matching
the selector table's index. You can then perform a very fast query on the selector table, yet get lots of data back. This
method is similar to having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a
dictionary that maps the table names to a list of columns you want in that table. If None is used in place of a list, that
table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which
table is the selector table (which you can make queries from). The argument dropna will drop rows from the input
DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is entirely
np.nan, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that
entirely np.nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may
have more rows than others, and therefore select_as_multiple may not work or it may return unexpected
results.
In [453]: store
Out[453]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->
[index])
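Selecting one of the individual tables shows only its own columns; the frame below is presumably the result of store.select('df1_mt'):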
A B
2000-01-01 0.714697 0.318215
2000-01-02 NaN NaN
2000-01-03 -0.086919 0.416905
2000-01-04 0.489131 -0.253340
2000-01-05 -0.382952 -0.397373
2000-01-06 0.538116 0.226388
2000-01-07 -2.073479 -0.115926
2000-01-08 -0.695400 0.402493
In [455]: store.select('df2_mt')
Out[455]:
C D E F foo
# as a multiple
In [456]: store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
.....: selector = 'df1_mt')
.....:
Out[456]:
A B C D E F foo
2000-01-01 0.714697 0.318215 0.607460 0.790907 0.852225 0.096696 bar
2000-01-06 0.538116 0.226388 1.541729 0.205256 1.998065 0.953591 bar
You can delete from a table selectively by specifying a where. In deleting rows, it is important to understand that
PyTables deletes rows by erasing the rows, then moving the following data. Thus deleting can potentially be a very
expensive operation depending on the orientation of your data. This is especially true in higher dimensional objects
(Panel and Panel4D). To get optimal performance, it's worthwhile to have the dimension you are deleting be the
first of the indexables.
Data is ordered (on the disk) in terms of the indexables. Here's a simple use case. You store panel-type data, with
dates in the major_axis and ids in the minor_axis. The data is then interleaved like this:
date_1 - id_1 - id_2 - ... - id_n
date_2 - id_1 - ... - id_n
It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the
following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case
it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
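The Panel below is what remains after such a query-based delete, presumably store.remove('wp', 'major_axis>20000102'):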
In [458]: store.select('wp')
Out[458]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-02 00:00:00
Minor_axis axis: A to D
Warning: Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly
deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack
24.8.8.1 Compression
PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables.
Pass complevel=int for a compression level (1-9, with 0 being no compression, and the default)
Pass complib=lib where lib is any of zlib, bzip2, lzo, blosc for whichever compression library
you prefer.
HDFStore will use the file based compression scheme if no overriding complib or complevel options are pro-
vided. blosc offers very fast compression, and is my most used. Note that lzo and bzip2 may not be installed (by
Python) by default.
Compression can be enabled for all objects within the file, or on-the-fly for a single table (on-the-fly compression
only applies to tables). You can turn off file compression for a specific table by passing complevel=0. A sketch
of both follows.
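# compression for all objects within the file
store_compressed = pd.HDFStore('store_compressed.h5', complevel=9, complib='blosc')

# or on-the-fly compression for a single table
store.append('df', df, complib='zlib', complevel=5)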
24.8.8.2 ptrepack
PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on
compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack
can change compression levels after the fact.
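For example:
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5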
Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alter-
natively, one can simply remove the file and write again, or use the copy method.
24.8.8.3 Caveats
Warning: HDFStore is not-threadsafe for writing. The underlying PyTables only supports concurrent
reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these
operations in a single thread in a single process. You will corrupt your data otherwise. See (GH2397) for more
information.
If you use locks to manage write access between multiple processes, you may want to use fsync() before
releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can
be appended
Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across time-
zone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone
library and that data is updated with another version, the data will be converted to UTC since these timezones
are not considered equal. Either use the same version of timezone library or use tz_convert with the updated
timezone definition.
Warning: PyTables will show a NaturalNameWarning if a column name cannot be used as an attribute
selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number.
Other identifiers cannot be used in a where clause and are generally a bad idea.
24.8.9 DataTypes
HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known
to work:
Type                                                   Represents missing values
floating : float64, float32, float16                   np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                         NaT
timedelta64[ns]                                        NaT
categorical : see the section below
object : strings                                       np.nan
unicode columns are not supported, and WILL FAIL.
In [460]: dfcat
Out[460]:
A B
0 a 0.603273
1 a 0.262554
2 b -0.979586
3 b 2.132387
4 c 0.892485
5 d 1.996474
6 b 0.231425
7 a 0.980070
In [461]: dfcat.dtypes
Out[461]:
A category
B float64
dtype: object
In [465]: result
Out[465]:
A B
2 b -0.979586
3 b 2.132387
4 c 0.892485
6 b 0.231425
In [466]: result.dtypes
Out[466]:
A category
B float64
dtype: object
Warning: The format of the Categorical is readable by prior versions of pandas (< 0.15.2), but will retrieve the data as an integer-based column (e.g. the codes). However, the categories can be retrieved, but this requires the user to select them manually using the explicit meta path.
The data is stored like so:
In [467]: cstore
Out[467]:
<class 'pandas.io.pytables.HDFStore'>
File path: cats.h5
/dfcat frame_table (typ->appendable,nrows->8,ncols->2,
indexers->[index],dc->[A])
In [468]: cstore.select('dfcat/meta/A/meta')
Out[468]:
0 a
1 b
2 c
3 d
dtype: object
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string column's itemsize is calculated as the maximum length of the data (for that column) passed to the HDFStore in the first append. If a subsequent append introduces a string for a column larger than the column can hold, an Exception will be raised (otherwise you could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a priori specify the minimum length of a particular string column. min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to allow all indexables or data_columns to have this min_itemsize.
Starting in 0.11.0, passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note: If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of
any string passed
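The session below was produced by appends along these lines (a sketch; the store and frame names follow the surrounding session):

dfs = pd.DataFrame({'A': 'foo', 'B': 'bar'}, index=list(range(5)))

# A and B both get an itemsize of 30
store.append('dfs', dfs, min_itemsize=30)

# A is created as a data_column with an itemsize of 30,
# B's itemsize is calculated from the data passed
store.append('dfs2', dfs, min_itemsize={'A': 30})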
In [470]: dfs
Out[470]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
In [472]: store.get_storer('dfs').table
Out[472]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False}
In [474]: store.get_storer('dfs2').table
Out[474]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"A": Index(6, medium, shuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation, which defaults to the string value nan. This means you could inadvertently turn an actual nan string value into a missing value.
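The selections below were produced by appends along these lines (a sketch; the custom nan_rep value is an assumption):

dfss = pd.DataFrame({'A': ['foo', 'bar', 'nan']})

# with the default nan_rep, the string 'nan' round-trips as a missing value
store.append('dfss', dfss)

# a custom nan_rep keeps the literal string 'nan' intact
store.append('dfss2', dfss, nan_rep='_nan_')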
In [476]: dfss
Out[476]:
A
0 foo
1 bar
2 nan
In [478]: store.select('dfss')
Out[478]:
A
0 foo
1 bar
2 NaN
In [480]: store.select('dfss2')
Out[480]:
A
0 foo
1 bar
2 nan
HDFStore writes table format objects in specific formats suitable for producing lossless round trips to pandas objects. For external compatibility, HDFStore can read native PyTables format tables.
It is possible to write an HDFStore object that can easily be imported into R using the rhdf5 library (Package
website). Create a table format store like this:
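A sketch of such a store, consistent with the session below (the exact frame construction is an assumption):

np.random.seed(1)
df_for_r = pd.DataFrame({"first": np.random.rand(100),
                         "second": np.random.rand(100),
                         "class": np.random.randint(0, 2, (100, ))},
                        index=range(100))

store_export = pd.HDFStore('export.h5')
store_export.append('df_for_r', df_for_r)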
In [481]: np.random.seed(1)
In [483]: df_for_r.head()
Out[483]:
class first second
0 0 0.417022 0.326645
1 0 0.720324 0.527058
2 1 0.000114 0.885942
3 1 0.302333 0.357270
4 1 0.146756 0.908535
In [486]: store_export
Out[486]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
/df_for_r frame_table (typ->appendable,nrows->100,ncols->3,indexers->
[index])
In R this file can be read into a data.frame object using the rhdf5 library. The following example function reads
the corresponding column names and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)

loadhdf5data <- function(h5File) {
  listing <- h5ls(h5File)
  # values are stored in "*_values" nodes, column names in "*_items" nodes
  data_nodes <- grep("_values", listing$name)
  name_nodes <- grep("_items", listing$name)
  data_paths <- paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
  name_paths <- paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
  columns <- list()
  for (idx in seq(data_paths)) {
    # matrices returned by h5read have to be transposed into Fortran order
    entry <- data.frame(t(h5read(h5File, data_paths[idx])))
    colnames(entry) <- t(h5read(h5File, name_paths[idx]))
    columns <- append(columns, entry)
  }
  data <- data.frame(columns)
  return(data)
}
Note: The R function lists the entire HDF5 files contents and assembles the data.frame object from all matching
nodes, so use this only as a starting point if you have stored multiple DataFrame objects to a single HDF5 file.
Version 0.10.1 of HDFStore can read tables created in a prior version of pandas; however, query terms using the prior (undocumented) methodology are unsupported. HDFStore will issue a warning if you try to use a legacy-format file.
You must read in the entire file and write it out using the new format, using the method copy to take advantage of the
updates. The group attribute pandas_version contains the version information. copy takes a number of options,
please see the docstring.
# a legacy store
In [487]: legacy_store = pd.HDFStore(legacy_file_path,'r')
In [488]: legacy_store
Out[488]:
<class 'pandas.io.pytables.HDFStore'>
File path: /Users/taugspurger/sandbox/pandas/doc/source/_static/legacy_0.10.h5
/a series (shape->[30])
/b frame (shape->[30,4])
In [489]: new_store = legacy_store.copy('store_new.h5')
In [490]: new_store
Out[490]:
<class 'pandas.io.pytables.HDFStore'>
File path: store_new.h5
/a series (shape->[30])
/b frame (shape->[30,4])
In [491]: new_store.close()
24.8.12 Performance
The tables format comes with a writing performance penalty as compared to fixed stores. The benefit is the ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the write chunksize (default is 50000). This will
significantly lower your memory usage on writing.
You can pass expectedrows=<int> to the first append, to set the TOTAL number of rows that PyTables will expect. This will optimize read/write performance (see the sketch after this list).
Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus
a table is unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to store types that will be pickled by PyTables
(rather than stored as endemic types). See Here for more information and some solutions.
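A sketch of the chunksize and expectedrows options together (the store and frame names are illustrative):

store = pd.HDFStore('perf.h5')
# write in 100,000-row chunks, pre-sizing the table for one million rows
store.append('df', df, chunksize=100000, expectedrows=1000000)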
24.8.13 Experimental
In [494]: p4d
Out[494]:
<class 'pandas.core.panelnd.Panel4D'>
Dimensions: 1 (labels) x 2 (items) x 5 (major_axis) x 4 (minor_axis)
Labels axis: l1 to l1
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
In [496]: store
Out[496]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
/df frame_table (typ->appendable,nrows->8,ncols->3,indexers->
[index])
These, by default, index the three axes items, major_axis, minor_axis. On an AppendableTable it is possible to setup with the first append a different indexing scheme, depending on how you want to store your data. Pass the axes keyword with a list of dimensions (currently must be exactly 1 less than the total dimensions of the object). This cannot be changed after table creation.
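For example, to index a Panel4D on all axes except items (a sketch, with p4d as above):

# index on the labels, major_axis and minor_axis dimensions
store.append('p4d2', p4d, axes=['labels', 'major_axis', 'minor_axis'])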
24.9 Feather
In [500]: df
Out[500]:
a b c d e f g h i
0 a 1 3 4.0 True a 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01
1 b 2 4 5.0 False b 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01
2 c 3 5 6.0 True c 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01
In [501]: df.dtypes
Out[501]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
In [502]: df.to_feather('example.feather')
In [503]: result = pd.read_feather('example.feather')
In [504]: result
Out[504]:
a b c d e f g h i
0 a 1 3 4.0 True a 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01
1 b 2 4 5.0 False b 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01
2 c 3 5 6.0 True c 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01
# we preserve dtypes
In [505]: result.dtypes
Out[505]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce
dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will
need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for
MySQL. For SQLite this is included in Python's standard library by default. You can find an overview of supported
drivers for each SQL dialect in the SQLAlchemy docs.
New in version 0.14.0.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility, but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which respects the Python DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...]) Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...]) Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...]) Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, flavor, ...]) Write records stored in a DataFrame to a SQL database.
24.10.1 pandas.read_sql_table
See also: read_sql
Notes
Any datetime values with time zone information will be converted to UTC.
24.10.2 pandas.read_sql_query
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. for psycopg2, its paramstyle uses %(name)s, so use params={'name': 'value'}.
parse_dates : list or dict, default: None
List of column names to parse as dates.
Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
chunksize : int, default None
If specified, return an iterator where chunksize is the number of rows to include in each
chunk.
Returns DataFrame
See also:
read_sql
Notes
Any datetime values with time zone information parsed via the parse_dates parameter will be converted to UTC
24.10.3 pandas.read_sql
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. for psycopg2, its paramstyle uses %(name)s, so use params={'name': 'value'}.
parse_dates : list or dict, default: None
List of column names to parse as dates.
Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
columns : list, default: None
List of column names to select from sql table (only used when reading a table).
chunksize : int, default None
If specified, return an iterator where chunksize is the number of rows to include in each
chunk.
Returns DataFrame
See also: read_sql_table, read_sql_query
Notes
This function is a convenience wrapper around read_sql_table and read_sql_query (and for backward compatibility) and will delegate to the specific function depending on the provided input (a database table name or a SQL query). The delegated function might have more specific notes about its functionality not listed here.
24.10.4 pandas.DataFrame.to_sql
In the following example, we use the SQLite SQL database engine. You can use a temporary SQLite database where data are stored in memory.
To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For more information on create_engine() and the URI formatting, see the examples below and the SQLAlchemy documentation.
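For the in-memory SQLite case this is simply:

from sqlalchemy import create_engine

# an in-memory SQLite database
engine = create_engine('sqlite://')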
If you want to manage your own connections you can pass one of those instead:
Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().
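A sketch of such a frame, consistent with the query results shown later in this section:

data = pd.DataFrame({'id': [26, 42, 63],
                     'Date': pd.to_datetime(['2010-10-18', '2010-10-19', '2010-10-20']),
                     'Col_1': ['X', 'Y', 'Z'],
                     'Col_2': [27.5, -12.5, 5.73],
                     'Col_3': [True, False, True]})

data.to_sql('data', engine)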
With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This
can be avoided by setting the chunksize parameter when calling to_sql. For example, the following writes data
to the database in batches of 1000 rows at a time:
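For example (the table name is illustrative):

data.to_sql('data_chunked', engine, chunksize=1000)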
to_sql() will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have
columns of dtype object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of any of the columns by using the
dtype argument. This argument needs a dictionary mapping column names to SQLAlchemy types (or strings for the
sqlite3 fallback mode). For example, specifying to use the sqlalchemy String type instead of the default Text type
for string columns:
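A sketch:

from sqlalchemy.types import String

# store Col_1 as VARCHAR instead of the default TEXT
data.to_sql('data_dtype', engine, dtype={'Col_1': String})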
Note: Due to the limited support for timedeltas in the different database flavors, columns with type timedelta64
will be written as integer values as nanoseconds to the database and a warning will be raised.
Note: Columns of category dtype will be converted to the dense representation as you would get with np.
asarray(categorical) (e.g. for string categories this gives an array of strings). Because of this, reading the
database table back in does not generate a categorical.
read_sql_table() will read a database table given the table name and optionally a subset of columns to read.
Note: In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.
You can also specify the name of the column as the DataFrame index, and specify a subset of columns to be read.
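For example (a sketch using the data table written above):

pd.read_sql_table('data', engine)

# use a column as the index, and read only a subset of columns
pd.read_sql_table('data', engine, index_col='id')
pd.read_sql_table('data', engine, columns=['Col_1', 'Col_2'])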
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
If needed you can explicitly specify a format string, or a dict of arguments to pass to pandas.to_datetime():
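A sketch:

pd.read_sql_table('data', engine, parse_dates={'Date': '%Y-%m-%d'})

# or pass keyword arguments through to pandas.to_datetime()
pd.read_sql_table('data', engine,
                  parse_dates={'Date': {'format': '%Y-%m-%d %H:%M:%S'}})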
24.10.8 Querying
You can query using raw SQL in the read_sql_query() function. In this case you must use the SQL variant
appropriate for your database. When using SQLAlchemy, you can also pass SQLAlchemy Expression language
constructs, which are database-agnostic.
In [517]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[517]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument. Specifying this will return an iterator through
chunks of the query result:
In [522]: df = pd.DataFrame(np.random.randn(20, 3), columns=list('abc'))
In [523]: df.to_sql('data_chunks', engine, index=False)
In [524]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....:     print(chunk)
.....:
a b c
0 -0.700399 -0.203394 0.242669
1 0.201830 0.661020 1.792158
2 -0.120465 -1.233121 -1.182318
3 -0.665755 -1.674196 0.825030
4 -0.498214 -0.310985 -0.001891
a b c
0 -1.396620 -0.861316 0.674712
1 0.618539 -0.443172 1.810535
2 -1.305727 -0.344987 -0.230840
3 -2.793085 1.937529 0.366332
4 -1.044589 2.051173 0.585662
a b c
0 0.429526 -0.606998 0.106223
1 -1.525680 0.795026 -0.374438
2 0.134048 1.202055 0.284748
3 0.262467 0.276499 -0.733272
4 0.836005 1.543359 0.758806
a b c
0 0.884909 -0.877282 -0.867787
1 -1.440876 1.232253 -0.254180
2 1.399844 -0.781912 -0.437509
3 0.095425 0.921450 0.060750
4 0.211125 0.016528 0.177188
You can also run a plain query without creating a dataframe with execute(). This is useful for queries that don't return values, such as INSERT. This is functionally equivalent to calling execute on the SQLAlchemy engine or db connection object. Again, you must use the SQL syntax variant appropriate for your database.
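A sketch (the table name is illustrative):

from pandas.io import sql

sql.execute('SELECT * FROM table_name', engine)
sql.execute('INSERT INTO table_name VALUES(?, ?, ?)', engine, params=[(1, 12.2, True)])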
To connect with SQLAlchemy you use the create_engine() function to create an engine object from database
URI. You only need to create the engine once per database you are connecting to.
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
engine = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')
engine = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
engine = create_engine('mssql+pyodbc://mydsn')
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine('sqlite:///foo.db')
Out[522]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy
expressions
Out[525]:
index Date Col_1 Col_2 Col_3
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.
bindparam()
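A sketch (assuming the data table from above):

import sqlalchemy as sa

pd.read_sql(sa.text('SELECT * FROM data WHERE Col_1=:col1'), engine, params={'col1': 'X'})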
The use of sqlite is supported without using SQLAlchemy. This mode requires a Python database adapter which respects the Python DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(':memory:')

data.to_sql('data', con)
pd.read_sql_query("SELECT * FROM data", con)
Warning: Starting in 0.20.0, pandas has split off Google BigQuery support into the separate package
pandas-gbq. You can pip install pandas-gbq to get it.
The method to_stata() will write a DataFrame into a .dta file. The format version of this file is always 115 (Stata
12).
In [529]: df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
In [530]: df.to_stata('stata.dta')
Stata data files have limited data type support; only strings with 244 or fewer characters, int8, int16, int32,
float32 and float64 can be stored in .dta files. Additionally, Stata reserves certain values to represent missing
data. Exporting a non-missing value that is outside of the permitted range in Stata for a particular data type will retype
the variable to the next larger size. For example, int8 values are restricted to lie between -127 and 100 in Stata, and
so variables with values above 100 will trigger a conversion to int16. nan values in floating point data types are stored as the basic missing data type (. in Stata).
Note: It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64, bool, uint8, uint16, uint32 by casting
to the smallest supported type that can represent the data. For example, data with a type of uint8 will be cast to
int8 if all values are less than 100 (the upper bound for non-missing int8 data in Stata), or, if values are outside of
this range, the variable is cast to int16.
Warning: Conversion from int64 to float64 may result in a loss of precision if int64 values are larger than
2**53.
Warning: StataWriter and to_stata() only support fixed width strings containing up to 244 characters,
a limitation imposed by the version 115 dta file format. Attempting to write Stata dta files with strings longer than
244 characters raises a ValueError.
The top-level function read_stata will read a dta file and return either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [531]: pd.read_stata('stata.dta')
Out[531]:
index A B
0 0 -1.116470 0.080927
1 1 -0.186579 -0.056824
2 2 0.492337 -0.680678
3 3 -0.084508 -0.297362
4 4 0.417302 0.784771
5 5 -0.955425 0.585910
6 6 2.065783 -1.471157
7 7 -0.830172 -0.880578
8 8 -0.279098 1.622849
9 9 0.013353 -0.694694
Specifying a chunksize yields a StataReader instance that can be used to read chunksize lines from the file
at a time. The StataReader object can be used as an iterator.
For more fine-grained control, use iterator=True and specify chunksize with each call to read().
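For example (a sketch using the file written above):

# read the file in 3-row chunks
reader = pd.read_stata('stata.dta', chunksize=3)
for chunk in reader:
    print(chunk.shape)

# or pull chunks on demand
reader = pd.read_stata('stata.dta', iterator=True)
chunk1 = reader.read(5)
chunk2 = reader.read(5)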
Note: read_stata() and StataReader support .dta formats 113-115 (Stata 10-12), 117 (Stata 13), and 118
(Stata 14).
Note: Setting preserve_dtypes=False will upcast to the standard pandas data types: int64 for all integer
types and float64 for floating point data. By default, the Stata data types are preserved when importing.
Warning: Stata only supports string value labels, and so str is called on the categories when exporting data. Exporting Categorical variables with non-string categories produces a warning, and can result in a loss of information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical variables using the keyword argu-
ment convert_categoricals (True by default). The keyword argument order_categoricals (True by
default) determines whether imported Categorical variables are ordered.
Note: When importing categorical data, the values of the variables in the Stata data file are not preserved
since Categorical variables always use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required, these can be imported by setting
convert_categoricals=False, which will import original data (but not the variable labels). The original
values can be matched to the imported categorical data since there is a simple mapping between the original Stata
data values and the category codes of imported Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned 1 and so on until the largest original value is
assigned the code n-1.
Note: Stata supports partially labeled series. These series have value labels for some but not all data values. Importing
a partially labeled series will produce a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
The top-level function read_sas() can read (but not write) SAS xport (.XPT) and SAS7BDAT (.sas7bdat) format files. For example:
df = pd.read_sas('sas_data.sas7bdat')
The specification for the xport file format is available from the SAS web site.
No official documentation is available for the SAS7BDAT format.
pandas itself only supports IO with a limited set of file formats that map cleanly to its tabular data model. For reading
and writing other file formats into and from pandas, we recommend these packages from the broader community.
24.14.1 netCDF
xarray provides data structures inspired by the pandas DataFrame for working with multi-dimensional datasets, with a
focus on the netCDF file format and easy conversion to and from pandas.
24.15 Performance Considerations
This is an informal comparison of various IO methods.
In [1]: df = pd.DataFrame(randn(1000000, 2), columns=list('AB'))
In [2]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null float64
dtypes: float64(2)
memory usage: 22.9 MB
Writing and reading times were compared using the following test functions:

import sqlite3
import os
from numpy.random import randn
from pandas.io import sql

df = pd.DataFrame(randn(1000000, 2), columns=list('AB'))
def test_sql_write(df):
    if os.path.exists('test.sql'):
        os.remove('test.sql')
    sql_db = sqlite3.connect('test.sql')
    df.to_sql(name='test_table', con=sql_db)
    sql_db.close()

def test_sql_read():
    sql_db = sqlite3.connect('test.sql')
    pd.read_sql_query("select * from test_table", sql_db)
    sql_db.close()

def test_hdf_fixed_write(df):
    df.to_hdf('test_fixed.hdf', 'test', mode='w')

def test_hdf_fixed_read():
    pd.read_hdf('test_fixed.hdf', 'test')

def test_hdf_fixed_write_compress(df):
    df.to_hdf('test_fixed_compress.hdf', 'test', mode='w', complib='blosc')

def test_hdf_fixed_read_compress():
    pd.read_hdf('test_fixed_compress.hdf', 'test')

def test_hdf_table_write(df):
    df.to_hdf('test_table.hdf', 'test', mode='w', format='table')

def test_hdf_table_read():
    pd.read_hdf('test_table.hdf', 'test')

def test_hdf_table_write_compress(df):
    df.to_hdf('test_table_compress.hdf', 'test', mode='w', complib='blosc', format='table')

def test_hdf_table_read_compress():
    pd.read_hdf('test_table_compress.hdf', 'test')

def test_csv_write(df):
    df.to_csv('test.csv', mode='w')

def test_csv_read():
    pd.read_csv('test.csv', index_col=0)
REMOTE DATA ACCESS

25.1 DataReader

The sub-package pandas.io.data was removed in favor of the separately installable pandas-datareader package. Replace imports of the form:

from pandas.io import data, wb

With:

from pandas_datareader import data, wb
ENHANCING PERFORMANCE
For many use cases writing pandas in pure python and numpy is sufficient. In some computationally heavy applications, however, it can be possible to achieve sizeable speed-ups by offloading work to cython.
This tutorial assumes you have refactored as much as possible in python, for example by trying to remove for loops and making use of numpy vectorization; it's always worth optimising in python first.
This tutorial walks through a typical process of cythonizing a slow computation. We use an example from the cython
documentation but in the context of pandas. Our final cythonized solution is around 100 times faster than the pure
python.
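The frame shown below can be built along these lines (a sketch consistent with the display):

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randn(1000),
                   'b': np.random.randn(1000),
                   'N': np.random.randint(100, 1000, (1000)),
                   'x': 'x'})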
In [2]: df
Out[2]:
N a b x
0 585 0.469112 -0.218470 x
1 841 -0.282863 -0.061645 x
2 251 -1.509059 -0.723780 x
3 972 -1.135632 0.551225 x
4 181 1.212112 -0.497767 x
5 458 -0.173215 0.837519 x
6 159 0.119209 1.103245 x
.. ... ... ... ..
993 190 0.131892 0.290162 x
994 931 0.342097 0.215341 x
995 374 -1.512743 0.874737 x
996 246 0.933753 1.120790 x
997 157 -0.308013 0.198768 x
998 977 -0.079915 1.757555 x
999 770 -1.010589 -1.115680 x
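The pure-python baseline being profiled is a row-wise apply of an integration loop, along these lines:

def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

# applied row-wise
df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)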
But clearly this isn't fast enough for us. Let's take a look and see where the time is spent during this operation (limited to the four most time-consuming calls) using the prun ipython magic function:
By far the majority of time is spent inside either integrate_f or f, hence we'll concentrate our efforts cythonizing these two functions.
Note: In python 2 replacing the range with its generator counterpart (xrange) would mean the range line would
vanish. In python 3 range is already a generator.
First we're going to need to import the cython magic function to ipython (for cython versions < 0.21 you can use %load_ext cythonmagic):
Now, let's simply copy our functions over to cython as is (the suffix is here to distinguish between function versions):
In [7]: %%cython
...: def f_plain(x):
...:     return x * (x - 1)
...: def integrate_f_plain(a, b, N):
...:     s = 0
...:     dx = (b - a) / N
...:     for i in range(N):
...:         s += f_plain(a + i * dx)
...:     return s * dx
...:
Note: If you're having trouble pasting the above into your ipython, you may need to be using a bleeding-edge ipython for paste to play well with cell magics.
Already this has shaved a third off, not too bad for a simple copy and paste.
In [8]: %%cython
...: cdef double f_typed(double x) except? -2:
...:     return x * (x - 1)
...: cpdef double integrate_f_typed(double a, double b, int N):
...:     cdef int i
...:     cdef double s, dx
...:     s = 0
...:     dx = (b - a) / N
...:     for i in range(N):
...:         s += f_typed(a + i * dx)
...:     return s * dx
...:
Now, we're talking! It's now over ten times faster than the original python implementation, and we haven't really modified the code. Let's have another look at what's eating up time:
It's calling series... a lot! It's creating a Series from each row, and get-ting from both the index and the series (three times for each row). Function calls are expensive in python, so maybe we could minimise these by cythonizing the apply part.
Note: We are now passing ndarrays into the cython function, fortunately cython plays very nicely with numpy.
In [10]: %%cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....:     return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....:     cdef int i
....:     cdef double s, dx
....:     s = 0
....:     dx = (b - a) / N
....:     for i in range(N):
....:         s += f_typed(a + i * dx)
....:     return s * dx
....: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b, np.ndarray col_N):
....:     cdef Py_ssize_t i, n = len(col_N)
....:     assert len(col_a) == len(col_b) == n
....:     cdef np.ndarray[double] res = np.empty(n)
....:     for i in range(n):
....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....:     return res
....:
The implementation is simple, it creates an array of zeros and loops over the rows, applying our
integrate_f_typed, and putting this in the zeros array.
Warning: In 0.13.0, since Series has internally been refactored to no longer subclass ndarray but instead subclass NDFrame, you cannot pass a Series directly as a ndarray typed parameter to a cython function. Instead pass the actual ndarray using the .values attribute of the Series.
Prior to 0.13.0:
apply_integrate_f(df['a'], df['b'], df['N'])
Use .values to get the underlying ndarray:
apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
Note: Loops like this would be extremely slow in python, but in Cython looping over numpy arrays is fast.
We've gotten another big improvement. Let's check again where the time is spent:
As one might expect, the majority of the time is now spent in apply_integrate_f, so if we wanted to make any more efficiencies we must continue to concentrate our efforts here.
There is still hope for improvement. Here's an example of using some more advanced cython techniques:
In [12]: %%cython
....: cimport cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....:     return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....:     cdef int i
....:     cdef double s, dx
....:     s = 0
....:     dx = (b - a) / N
....:     for i in range(N):
....:         s += f_typed(a + i * dx)
....:     return s * dx
....: @cython.boundscheck(False)
....: @cython.wraparound(False)
....: cpdef np.ndarray[double] apply_integrate_f_wrap(np.ndarray[double] col_a, np.ndarray[double] col_b, np.ndarray[int] col_N):
....:     cdef Py_ssize_t i, n = len(col_N)
....:     assert len(col_a) == len(col_b) == n
....:     cdef np.ndarray[double] res = np.empty(n)
....:     for i in range(n):
....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....:     return res
....:
Even faster, with the caveat that a bug in our cython code (an off-by-one error, for example) might cause a segfault because memory access isn't checked.
A recent alternative to statically compiling cython code is to use a dynamic jit-compiler, numba.
Numba gives you the power to speed up your applications with high performance functions written directly in Python.
With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine
instructions, similar in performance to C, C++ and Fortran, without having to switch languages or Python interpreters.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime,
or statically (using the included pycc tool). Numba supports compilation of Python to run on either CPU or GPU
hardware, and is designed to integrate with the Python scientific software stack.
Note: You will need to install numba. This is easy with conda, by using: conda install numba, see installing
using miniconda.
Note: As of numba version 0.20, pandas objects cannot be passed directly to numba-compiled functions. Instead, one
must pass the numpy array underlying the pandas object to the numba-compiled function as demonstrated below.
26.2.1 Jit
Using numba to just-in-time compile your code. We simply take the plain python code from above and annotate with
the @jit decorator.
import numba
import numpy as np
import pandas as pd

@numba.jit
def f_plain(x):
    return x * (x - 1)

@numba.jit
def integrate_f_numba(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f_plain(a + i * dx)
    return s * dx

@numba.jit
def apply_integrate_f_numba(col_a, col_b, col_N):
    n = len(col_N)
    result = np.empty(n, dtype='float64')
    assert len(col_a) == len(col_b) == n
    for i in range(n):
        result[i] = integrate_f_numba(col_a[i], col_b[i], col_N[i])
    return result

def compute_numba(df):
    result = apply_integrate_f_numba(df['a'].values, df['b'].values, df['N'].values)
    return pd.Series(result, index=df.index, name='result')
Note that we directly pass numpy arrays to the numba function. compute_numba is just a wrapper that provides a
nicer interface by passing/returning pandas objects.
26.2.2 Vectorize
numba can also be used to write vectorized functions that do not require the user to explicitly loop over the observa-
tions of a vector; a vectorized function will be applied to each row automatically. Consider the following toy example
of doubling each observation:
import numba

def double_every_value_nonumba(x):
    return x * 2

@numba.vectorize
def double_every_value_withnumba(x):
    return x * 2
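A usage sketch (the column name is illustrative); note that the numba-compiled ufunc is passed the underlying ndarray:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randn(10000)})
df['a_doubled'] = double_every_value_withnumba(df.a.values)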
26.2.3 Caveats
Note: numba will execute on any function, but can only accelerate certain classes of functions.
numba is best at accelerating functions that apply numerical functions to numpy arrays. When passed a function that
only uses operations it knows how to accelerate, it will execute in nopython mode.
If numba is passed a function that includes something it doesn't know how to work with (a category that currently includes sets, lists, dictionaries, or string functions) it will revert to object mode. In object mode, numba will execute but your code will not speed up significantly. If you would prefer that numba throw an error if it cannot compile a function in a way that speeds up your code, pass numba the argument nopython=True (e.g. @numba.jit(nopython=True)). For more on troubleshooting numba modes, see the numba troubleshooting page.
Read more in the numba docs.
Note: To benefit from using eval() you need to install numexpr. See the recommended dependencies section for
more details.
The point of using eval() for expression evaluation rather than plain Python is two-fold: 1) large DataFrame
objects are evaluated more efficiently and 2) large arithmetic and boolean expressions are evaluated all at once by the
underlying engine (by default numexpr is used for evaluation).
Note: You should not use eval() for simple expressions or for expressions involving small DataFrames. In fact, eval() is many orders of magnitude slower for smaller expressions/objects than plain ol' Python. A good rule of thumb is to only use eval() when you have a DataFrame with more than 10,000 rows.
eval() supports all arithmetic expressions supported by the engine in addition to some extensions available only in
pandas.
Note: The larger the frame and the larger the expression the more speedup you will see from using eval().
Statements
Neither simple nor compound statements are allowed. This includes things like for, while, and if.
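The frames being timed below can be created like this (a sketch; the sizes are an assumption):

import numpy as np
import pandas as pd

nrows, ncols = 20000, 100
df1, df2, df3, df4 = [pd.DataFrame(np.random.randn(nrows, ncols)) for _ in range(4)]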
Now let's compare adding them together using plain ol' Python versus eval():
In [15]: %timeit df1 + df2 + df3 + df4
100 loops, best of 3: 10.1 ms per loop
In [18]: %timeit pd.eval('(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)')
100 loops, best of 3: 7.89 ms per loop
Boolean/bitwise operations with scalar operands should be performed in Python; an exception will be raised if you try to perform any such operations in eval() with scalar operands that are not of type bool or np.bool_. Again, you should perform these kinds of operations in plain Python.
In addition to the top level pandas.eval() function you can also evaluate an expression in the context of a
DataFrame.
In [22]: df = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
Any expression that is a valid pandas.eval() expression is also a valid DataFrame.eval() expression, with
the added benefit that you dont have to prefix the name of the DataFrame to the column(s) youre interested in
evaluating.
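For example (continuing with the df just defined):

df.eval('a + b')          # bare column names resolve to df's columns
pd.eval('df.a + df.b')    # the equivalent top-level call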
In addition, you can perform assignment of columns within an expression. This allows for formulaic evaluation. The
assignment target can be a new column name or an existing column name, and it must be a valid Python identifier.
New in version 0.18.0.
The inplace keyword determines whether this assignment will be performed on the original DataFrame or return a copy with the new column.
Warning: For backwards compatibility, inplace defaults to True if not specified. This will change in a future version of pandas; if your code depends on an inplace assignment you should update to explicitly set inplace=True.
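The frame shown below was built up along these lines (a sketch consistent with the output):

df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
df.eval('c = a + b', inplace=True)
df.eval('d = a + b + c', inplace=True)
df.eval('a = 1', inplace=True)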
In [28]: df
Out[28]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
When inplace is set to False, a copy of the DataFrame with the new or modified columns is returned and the
original frame is unchanged.
In [29]: df
Out[29]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [30]: df.eval('e = a - c', inplace=False)
Out[30]:
a b c d e
0 1 5 5 10 -4
1 1 6 7 14 -6
2 1 7 9 18 -8
3 1 8 11 22 -10
4 1 9 13 26 -12
In [31]: df
Out[31]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [32]: df.eval("""
....: c = a + b
....: d = a + b + c
....: a = 1""", inplace=False)
....:
Out[32]:
a b c d
0 1 5 6 12
1 1 6 7 14
2 1 7 8 16
3 1 8 9 18
4 1 9 10 20
In [36]: df['a'] = 1
In [37]: df
Out[37]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [39]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [40]: df.query('a > 2', inplace=True)
In [41]: df
Out[41]:
a b
3 3 8
4 4 9
Warning: Unlike with eval, the default value for inplace for query is False. This is consistent with prior
versions of pandas.
In pandas version 0.14 the local variable API has changed. In pandas 0.13.x, you could refer to local variables the
same way you would in standard Python. For example,
As you can see from the exception generated, this syntax is no longer allowed. You must explicitly reference any local
variable that you want to use in an expression by placing the @ character in front of the name. For example,
a b
0 0.863987 -0.115998
2 -2.621419 -1.297879
If you don't prefix the local variable with @, pandas will raise an exception telling you the variable is undefined. When using DataFrame.eval() and DataFrame.query(), this allows you to have a local variable and a DataFrame column with the same name in an expression.
In [46]: a = np.random.randn()
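For example (a sketch using the local variable a just defined):

df.query('@a < a')    # @a is the local variable; the bare a is the column
df.loc[a < df.a]      # the same selection without query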
With pandas.eval() you cannot use the @ prefix at all, because it isn't defined in that context. pandas will let you know this if you try to use @ in a top-level call to pandas.eval(). For example,
In [49]: a, b = 1, 2
In this case, you should simply refer to the variables like you would in standard Python.
There are two different parsers and two different engines you can use as the backend.
The default 'pandas' parser allows a more intuitive syntax for expressing query-like operations (comparisons,
conjunctions and disjunctions). In particular, the precedence of the & and | operators is made equal to the precedence
of the corresponding boolean operations and and or.
For example, the above conjunction can be written without parentheses. Alternatively, you can use the 'python'
parser to enforce strict Python semantics.
In [52]: expr = '(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)'
In [53]: x = pd.eval(expr, parser='python')
In [54]: expr_no_parens = 'df1 > 0 & df2 > 0 & df3 > 0 & df4 > 0'
In [55]: y = pd.eval(expr_no_parens, parser='pandas')
In [56]: np.all(x == y)
Out[56]: True
The same expression can be anded together with the word and as well:
In [57]: expr = '(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)'
In [58]: x = pd.eval(expr, parser='python')
In [59]: expr_with_ands = 'df1 > 0 and df2 > 0 and df3 > 0 and df4 > 0'
In [60]: y = pd.eval(expr_with_ands, parser='pandas')
In [61]: np.all(x == y)
Out[61]: True
The and and or operators here have the same precedence that they would in vanilla Python.
There's also the option to make eval() operate identically to plain ol' Python.
Note: Using the 'python' engine is generally not useful, except for testing other evaluation engines against it. You
will achieve no performance benefits using eval() with engine='python' and in fact may incur a performance
hit.
You can see this by using pandas.eval() with the 'python' engine. It is a bit slower (not by much) than evaluating the same expression in Python.
eval() is intended to speed up certain kinds of operations. In particular, those operations involving complex expres-
sions with large DataFrame/Series objects should see a significant performance benefit. Here is a plot showing
the running time of pandas.eval() as function of the size of the frame involved in the computation. The two lines
are two different engines.
Note: Operations with smallish objects (around 15k-20k rows) are faster using plain Python:
This plot was created using a DataFrame with 3 columns each containing floating point values generated using
numpy.random.randn().
Expressions that would result in an object dtype or involve datetime operations (because of NaT) must be evaluated in Python space. The main reason for this behavior is to maintain backwards compatibility with versions of numpy < 1.7. In those versions of numpy a call to ndarray.astype(str) will truncate any strings that are more than 60 characters in length. Second, we can't pass object arrays to numexpr, thus string comparisons must be evaluated in Python space.
The upshot is that this only applies to object-dtyped expressions. So, if you have an expression, for example:
In [65]: df
Out[65]:
nums strings
0 0 c
1 0 c
2 0 c
3 1 b
4 1 b
5 1 b
6 2 a
7 2 a
8 2 a
In [66]: df.query('strings == "a" and nums == 1')
Out[66]:
Empty DataFrame
Columns: [nums, strings]
Index: []
SPARSE DATA STRUCTURES
We have implemented sparse versions of Series and DataFrame. These are not sparse in the typical "mostly 0" sense. Rather, you can view these objects as being compressed, where any data matching a specific value (NaN / missing value, though any value can be chosen) is omitted. A special SparseIndex object tracks where data has been sparsified. This will make much more sense with an example. All of the standard pandas data structures have a to_sparse method:
In [1]: ts = pd.Series(randn(10))
In [2]: ts[2:-2] = np.nan
In [3]: sts = ts.to_sparse()
In [4]: sts
Out[4]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The to_sparse method takes a kind argument (for the sparse index, see below) and a fill_value. So if we
had a mostly zero Series, we could convert it to sparse with fill_value=0:
In [5]: ts.fillna(0).to_sparse(fill_value=0)
Out[5]:
0 0.469112
1 -0.282863
2 0.000000
3 0.000000
4 0.000000
5 0.000000
6 0.000000
7 0.000000
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:
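Such a frame can be created along these lines (a sketch consistent with the display below):

df = pd.DataFrame(np.random.randn(10000, 4))
df.iloc[:9998] = np.nan
sdf = df.to_sparse()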
In [9]: sdf
Out[9]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 NaN NaN NaN NaN
... ... ... ... ...
9993 NaN NaN NaN NaN
9994 NaN NaN NaN NaN
9995 NaN NaN NaN NaN
9996 NaN NaN NaN NaN
9997 NaN NaN NaN NaN
9998 0.509184 -0.774928 -1.369894 -0.382141
9999 0.280249 -1.648493 1.490865 -0.890819
In [10]: sdf.density
Out[10]:
0.0002
As you can see, the density (% of values that have not been compressed) is extremely low. This sparse object takes
up much less memory on disk (pickled) and in the Python interpreter. Functionally, their behavior should be nearly
identical to their dense counterparts.
Any sparse object can be converted back to the standard dense form by calling to_dense:
In [11]: sts.to_dense()
Out[11]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
27.1 SparseArray
SparseArray is the base layer for all of the sparse indexed data structures. It is a 1-dimensional ndarray-like object
storing only values distinct from the fill_value:
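The sparr shown below can be built along these lines (a sketch consistent with the displayed indices):

arr = np.random.randn(10)
arr[2:5] = np.nan
arr[7:8] = np.nan
sparr = pd.SparseArray(arr)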
In [15]: sparr
Out[15]:
[-1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.
606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
Like the indexed objects (SparseSeries, SparseDataFrame), a SparseArray can be converted back to a regular
ndarray by calling to_dense:
In [16]: sparr.to_dense()
Out[16]:
array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453,
nan, 0.606 , 1.3342])
27.2 SparseList
The SparseList class has been deprecated and will be removed in a future version. See the docs of a previous
version for documentation on SparseList.
Two kinds of SparseIndex are implemented, block and integer. We recommend using block as it's more memory efficient. The integer format keeps an array of all of the locations where the data are not equal to the fill value. The block format tracks only the locations and sizes of blocks of data.
Sparse data should have the same dtype as its dense representation. Currently, float64, int64 and bool dtypes are supported. Depending on the original dtype, the fill_value default changes:
float64: np.nan
int64: 0
bool: False
In [17]: s = pd.Series([1.0, np.nan, np.nan])
In [18]: s
Out[18]:
0 1.0
1 NaN
2 NaN
dtype: float64
In [19]: s.to_sparse()
Out[19]:
0 1.0
1 NaN
2 NaN
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [20]: s = pd.Series([1, 0, 0])
In [21]: s
Out[21]:
0 1
1 0
2 0
dtype: int64
In [22]: s.to_sparse()
Out[22]:
0 1
1 0
2 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [23]: s = pd.Series([True, False, True])
In [24]: s
Out[24]:
0 True
1 False
2 True
dtype: bool
In [25]: s.to_sparse()
Out[25]:
0 True
1 False
2 True
dtype: bool
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
You can change the dtype using .astype(); the result is also sparse. Note that .astype() also affects the fill_value, to keep its dense representation consistent.
In [26]: s = pd.Series([1, 0, 0, 0, 0])
In [27]: s
Out[27]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
In [28]: ss = s.to_sparse()
In [29]: ss
Out[29]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [30]: ss.astype(np.float64)
Out[30]:
0 1.0
1 0.0
2 0.0
3 0.0
4 0.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [2]: ss.astype(np.int64)
ValueError: unable to coerce current fill_value nan to int64 dtype
You can apply NumPy ufuncs to SparseArray and get a SparseArray as a result.
In [31]: arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])
In [32]: np.abs(arr)
Out[32]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
The ufunc is also applied to fill_value. This is needed to get the correct dense result.
In [33]: arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
In [34]: np.abs(arr)
Out[34]:
[1.0, 1, 1, 2.0, 1]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)
In [35]: np.abs(arr).to_dense()
Out[35]:
array([ 1., 1., 1., 2., 1.])
27.6.1 SparseDataFrame
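New in 0.20.0: SparseDataFrame can be constructed directly from a scipy.sparse matrix. A sketch of the setup assumed by the session below:

from scipy.sparse import csr_matrix

arr = np.random.random(size=(1000, 5))
arr[arr < .9] = 0

sp_arr = csr_matrix(arr)
sdf = pd.SparseDataFrame(sp_arr)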
In [40]: sp_arr
Out[40]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in Compressed Sparse Row format>
In [42]: sdf
Out[42]:
0 1 2 3 4
0 0.956380 NaN NaN NaN NaN
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying
data as needed. To convert a SparseDataFrame back to sparse SciPy matrix in COO format, you can use the
SparseDataFrame.to_coo() method:
In [43]: sdf.to_coo()
Out[43]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in COOrdinate format>
27.6.2 SparseSeries
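SparseSeries.to_coo() transforms a SparseSeries indexed by a MultiIndex into a scipy.sparse.coo_matrix. The series used below can be built along these lines (a sketch consistent with the display):

s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
                                     (1, 2, 'a', 1),
                                     (1, 1, 'b', 0),
                                     (1, 1, 'b', 1),
                                     (2, 1, 'b', 0),
                                     (2, 1, 'b', 1)],
                                    names=['A', 'B', 'C', 'D'])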
In [46]: s
Out[46]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [47]: ss = s.to_sparse()
In [48]: ss
Out[48]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
In the example below, we transform the SparseSeries to a sparse representation of a 2-d array by specifying that
the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the
columns. We also specify that the column and row labels should be sorted in the final sparse representation.
In [49]: A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
....: column_levels=['C', 'D'],
....: sort_labels=True)
....:
In [50]: A
Out[50]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [51]: A.todense()
Out[51]:
matrix([[ 0., 0., 1., 3.],
        [ 3., 0., 0., 0.],
        [ 0., 0., 0., 0.]])
In [52]: rows
Out[52]:
[(1, 1), (1, 2), (2, 1)]
In [53]: columns
Out[53]:
[('a', 0), ('a', 1), ('b', 0), ('b', 1)]
Specifying different row and column labels (and not sorting them) yields a different sparse matrix:
In [54]: A, rows, columns = ss.to_coo(row_levels=['A', 'B', 'C'],
....: column_levels=['D'],
....: sort_labels=False)
....:
In [55]: A
Out[55]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [56]: A.todense()
Out[56]:
matrix([[ 3., 0.],
        [ 1., 3.],
        [ 0., 0.]])
In [57]: rows
Out[57]:
[(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]
In [58]: columns
Out[58]:
[0, 1]
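Going the other way, SparseSeries.from_coo() builds a SparseSeries from a scipy.sparse.coo_matrix. The matrix used below can be created like this (a sketch consistent with the outputs):

from scipy import sparse

A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])), shape=(3, 4))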
In [61]: A
Out[61]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [62]: A.todense()
Out[62]:
matrix([[ 0., 0., 1., 2.],
        [ 3., 0., 0., 0.],
        [ 0., 0., 0., 0.]])
The default behaviour (with dense_index=False) simply returns a SparseSeries containing only the non-
null entries.
In [63]: ss = pd.SparseSeries.from_coo(A)
In [64]: ss
Out[64]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
Specifying dense_index=True will result in an index that is the Cartesian product of the row and columns coordi-
nates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False)
if the sparse matrix is large (and sparse) enough.
In [65]: ss_dense = pd.SparseSeries.from_coo(A, dense_index=True)
In [66]: ss_dense
Out[66]:
0 0 NaN
1 NaN
2 1.0
3 2.0
1 0 3.0
1 NaN
2 NaN
3 NaN
2 0 NaN
1 NaN
2 NaN
3 NaN
dtype: float64
BlockIndex
Block locations: array([2], dtype=int32)
Block lengths: array([3], dtype=int32)
FREQUENTLY ASKED QUESTIONS (FAQ)
As of pandas version 0.15.0, the memory usage of a dataframe (including the index) is shown when accessing the info method of a dataframe. A configuration option, display.memory_usage (see Options and Settings), specifies if the dataframe's memory usage will be displayed when invoking the df.info() method.
For example, the memory usage of the dataframe below is shown when calling df.info():
In [1]: dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
...: 'complex128', 'object', 'bool']
In [2]: n = 5000
In [3]: data = dict([(t, np.random.randint(100, size=n).astype(t)) for t in dtypes])
In [4]: df = pd.DataFrame(data)
In [5]: df['categorical'] = df['object'].astype('category')
In [6]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool 5000 non-null bool
complex128 5000 non-null complex128
datetime64[ns] 5000 non-null datetime64[ns]
float64 5000 non-null float64
int64 5000 non-null int64
object 5000 non-null object
timedelta64[ns] 5000 non-null timedelta64[ns]
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1),
object(1), timedelta64[ns](1)
The + symbol indicates that the true memory usage could be higher, because pandas does not count the memory used
by values in columns with dtype=object.
New in version 0.17.1.
Passing memory_usage='deep' will enable a more accurate memory usage report that accounts for the full usage of the contained objects. This is optional, as it can be expensive to do this deeper introspection.
In [7]: df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool 5000 non-null bool
complex128 5000 non-null complex128
datetime64[ns] 5000 non-null datetime64[ns]
float64 5000 non-null float64
int64 5000 non-null int64
object 5000 non-null object
timedelta64[ns] 5000 non-null timedelta64[ns]
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1),
object(1), timedelta64[ns](1)
By default the display option is set to True but can be explicitly overridden by passing the memory_usage argument
when invoking df.info().
The memory usage of each column can be found by calling the memory_usage method. This returns a Series with
an index represented by column names and memory usage of each column shown in bytes. For the dataframe above,
the memory usage of each column and the total memory usage of the dataframe can be found with the memory_usage
method:
In [8]: df.memory_usage()
Out[8]:
Index 80
bool 5000
complex128 80000
datetime64[ns] 40000
float64 40000
int64 40000
object 40000
timedelta64[ns] 40000
categorical 10920
dtype: int64
By default the memory usage of the dataframe's index is shown in the returned Series; the memory usage of the index can be suppressed by passing the index=False argument:
In [10]: df.memory_usage(index=False)
Out[10]:
bool 5000
complex128 80000
datetime64[ns] 40000
float64 40000
int64 40000
object 40000
timedelta64[ns] 40000
categorical 10920
dtype: int64
The memory usage displayed by the info method utilizes the memory_usage method to determine the memory
usage of a dataframe while also formatting the output in human-readable units (base-2 representation; i.e., 1KB = 1024
bytes).
See also Categorical Memory Usage.
pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens in an if statement or when using the boolean operations and, or, or not. It is not clear what the result of

if pd.Series([False, True, False]):
    ...

should be. Should it be True because it's not zero-length? False because there are False values? It is unclear, so instead, pandas raises a ValueError:

ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

If you see that, you need to explicitly choose what you want to do with it (e.g., use any(), all() or empty). Or, you might want to compare if the pandas object is None.
To evaluate single-element pandas objects in a boolean context, use the method .bool():
In [11]: pd.Series([True]).bool()
Out[11]: True
In [12]: pd.Series([False]).bool()
Out[12]: False
In [13]: pd.DataFrame([[True]]).bool()
Out[13]: True
In [14]: pd.DataFrame([[False]]).bool()
Out[14]: False
Bitwise boolean operators like == and != will return a boolean Series, which is almost always what you want anyway.
>>> s = pd.Series(range(5))
>>> s == 4
0 False
1 False
2 False
3 False
4 True
dtype: bool
Using the Python in operator on a Series tests for membership in the index, not membership among the values.
If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series are
dict-like. To test for membership in the values, use the method isin():
For DataFrames, likewise, in applies to the column axis, testing for membership in the list of column names.
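A short illustration:

s = pd.Series(range(5), index=list('abcde'))

'b' in s            # True: membership is tested against the index
2 in s              # False: 2 is a value, not an index label
s.isin([2]).any()   # True: membership among the values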
For lack of NA (missing) support from the ground up in NumPy and Python in general, we were given the difficult choice between either:
A masked array solution: an array of data and an array of boolean values indicating whether a value is there or is missing.
Using a special sentinel value, bit pattern, or set of sentinel values to denote NA across the dtypes.
For many reasons we chose the latter. After years of production use it has proven, at least in my opinion, to be the best
decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used
everywhere as the NA value, and there are API functions isnull and notnull which can be used across the dtypes
to detect NA values.
However, it comes with a couple of trade-offs which I most certainly have not ignored.
In the absence of high performance NA support being built into NumPy from the ground up, the primary casualty is
the ability to represent NAs in integer arrays. For example:
In [15]: s = pd.Series([1, 2, 3, 4, 5], index=list('abcde'))
In [16]: s
Out[16]:
a 1
b 2
c 3
d 4
e 5
dtype: int64
In [17]: s.dtype
Out[17]: dtype('int64')
In [18]: s2 = s.reindex(['a', 'b', 'c', 'f', 'u'])
In [19]: s2
Out[19]:
a 1.0
b 2.0
c 3.0
f NaN
u NaN
dtype: float64
In [20]: s2.dtype
Out[20]: dtype('float64')
This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues to
be numeric. One possibility is to use dtype=object arrays instead.
When introducing NAs into an existing Series or DataFrame via reindex or some other means, boolean and integer
types will be promoted to a different dtype in order to store the NAs. These are summarized by this table:
Typeclass    Promotion dtype for storing NAs
floating     no change
object       no change
integer      cast to float64
boolean      cast to object
While this may seem like a heavy trade-off, I have found very few cases where this is an issue in practice. Some explanation for the motivation is in the next section.
Many people have suggested that NumPy should simply emulate the NA support present in the more domain-specific
statistical programming language R. Part of the reason is the NumPy type hierarchy:
Typeclass               Dtypes
numpy.floating          float16, float32, float64, float128
numpy.integer           int8, int16, int32, int64
numpy.unsignedinteger   uint8, uint16, uint32, uint64
numpy.object_           object_
numpy.bool_             bool_
numpy.character         string_, unicode_
The R language, by contrast, only has a handful of built-in data types: integer, numeric (floating-point),
character, and boolean. NA types are implemented by reserving special bit patterns for each type to be used
as the missing value. While doing this with the full NumPy type hierarchy would be possible, it would be a more
substantial trade-off (especially for the 8- and 16-bit data types) and implementation undertaking.
An alternate approach is that of using masked arrays. A masked array is an array of data with an associated boolean
mask denoting whether each value should be considered NA or not. I am personally not in love with this approach as I
feel that overall it places a fairly heavy burden on the user and the library implementer. Additionally, it exacts a fairly
high performance cost when working with numerical data compared with the simple approach of using NaN. Thus,
I have chosen the Pythonic "practicality beats purity" approach and traded integer NA capability for a much simpler
approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when
NAs must be introduced.
28.4 Differences with NumPy
For Series and DataFrame objects, var normalizes by N-1 to produce unbiased estimates of the sample variance,
while NumPy's var normalizes by N, which measures the variance of the sample. Note that cov normalizes by N-1
in both pandas and NumPy.
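A minimal sketch of the difference:
data = np.array([1.0, 2.0, 3.0, 4.0])
np.var(data)           # 1.25      -- divides by N
pd.Series(data).var()  # 1.666...  -- divides by N - 1 (pass ddof=0 to match NumPy)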
28.5 Thread-safety
As of pandas 0.11, pandas is not 100% thread safe. The known issues relate to the DataFrame.copy method. If
you are doing a lot of copying of DataFrame objects shared among threads, we recommend holding locks inside the
threads where the data copying occurs.
See this link for more information.
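A minimal sketch of the recommended locking pattern (names here are illustrative):
import threading

copy_lock = threading.Lock()

def worker(df):
    # DataFrame.copy is the known thread-unsafe spot; serialize it
    with copy_lock:
        local = df.copy()
    # ... work on the private copy without holding the lock ...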
28.6 Byte-Ordering Issues
Occasionally you may have to deal with data that were created on a machine with a different byte order than the one
on which you are running Python. A common symptom of this issue is an error like
Traceback
...
ValueError: Big-endian buffer not supported on little-endian compiler
To deal with this issue you should convert the underlying NumPy array to the native system byte order before passing
it to Series/DataFrame/Panel constructors using something similar to the following:
In [21]: x = np.array(list(range(10)), '>i4') # big endian
In [22]: newx = x.byteswap().newbyteorder() # force native byteorder
In [23]: s = pd.Series(newx)
TWENTYNINE
RPY2 / R INTERFACE
Warning: Up to pandas 0.19, a pandas.rpy module existed with functionality to convert between pandas and
rpy2 objects. This functionality now lives in the rpy2 project itself. See the updating section of the previous
documentation for a guide to port your code from the removed pandas.rpy to rpy2 functions.
rpy2 is an interface to R running embedded in a Python process, and also includes functionality to deal with pandas
DataFrames. Converting data frames back and forth between rpy2 and pandas should be largely automated (no
need to convert explicitly, it will be done on the fly in most rpy2 functions). To convert explicitly, the functions are
pandas2ri.py2ri() and pandas2ri.ri2py().
See also the documentation of the rpy2 project: https://rpy2.readthedocs.io.
In the remainder of this page, a few examples of explicit conversion are given. The pandas conversion of rpy2 needs
first to be activated:
In [1]: from rpy2.robjects import r, pandas2ri
In [2]: pandas2ri.activate()
Once the pandas conversion is activated (pandas2ri.activate()), many conversions of R to pandas objects will
be done automatically. For example, to obtain the iris dataset as a pandas DataFrame:
In [3]: r.data('iris')
Out[3]:
R object with classes: ('character',) mapped to:
<StrVector - Python:0x130229a88 / R:0x7fc1035943d8>
['iris']
In [4]: r['iris'].head()
If the pandas conversion was not activated, the above could also be accomplished by explicitly converting it with the
pandas2ri.ri2py function (pandas2ri.ri2py(r['iris'])).
The pandas2ri.py2ri function supports the reverse operation to convert DataFrames into the equivalent R object
(that is, data.frame):
In [5]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}, index=['one', 'two', 'three'])
In [6]: r_dataframe = pandas2ri.py2ri(df)
In [7]: print(type(r_dataframe))
<class 'rpy2.robjects.vectors.DataFrame'>
In [8]: print(r_dataframe)
A B C
one 1 4 7
two 2 5 8
three 3 6 9
The DataFrame's index is stored as the rownames attribute of the data.frame instance.
THIRTY
PANDAS ECOSYSTEM
Increasingly, packages are being built on top of pandas to address specific needs in data preparation, analysis and
visualization. This is encouraging because it means pandas is not only helping users to handle their data tasks but also
that it provides a better starting point for developers to build powerful and more focused data tools. The creation of
libraries that complement pandas functionality also allows pandas development to remain focused around its original
requirements.
This is an inexhaustive list of projects that build on pandas in order to provide tools in the PyData space.
We'd like to make it easier for users to find these projects; if you know of other substantial projects that you feel should
be on this list, please let us know.
30.1 Statistics and Machine Learning
30.1.1 Statsmodels
Statsmodels is the prominent Python statistics and econometrics library and it has a long-standing special relationship
with pandas. Statsmodels provides powerful statistics, econometrics, analysis and modeling functionality that is
out of pandas' scope. Statsmodels leverages pandas objects as the underlying data container for computation.
30.1.2 sklearn-pandas
Use pandas DataFrames in your scikit-learn ML pipeline.
30.2 Visualization
30.2.1 Bokeh
Bokeh is a Python interactive visualization library for large datasets that natively uses the latest web technologies.
Its goal is to provide elegant, concise construction of novel graphics in the style of Protovis/D3, while delivering
high-performance interactivity over large data to thin clients.
30.2.2 yhat/ggplot
Hadley Wickham's ggplot2 is a foundational exploratory visualization package for the R language. Based on "The
Grammar of Graphics" it provides a powerful, declarative and extremely general way to generate bespoke plots of
any kind of data. It's really quite incredible. Various implementations to other languages are available, but a faithful
implementation for Python users has long been missing. Although still young (as of Jan-2014), the yhat/ggplot project
has been progressing quickly in that direction.
30.2.3 Seaborn
Although pandas has quite a bit of "just plot it" functionality built-in, visualization and in particular statistical graphics
is a vast field with a long tradition and lots of ground to cover. The Seaborn project builds on top of pandas and
matplotlib to provide easy plotting of data which extends to more advanced types of plots than those offered by
pandas.
30.2.4 Vincent
The Vincent project leverages Vega (that in turn, leverages d3) to create plots. Although functional, as of Summer
2016 the Vincent project has not been updated in over two years and is unlikely to receive further updates.
30.2.5 IPython Vega
Like Vincent, the IPython Vega project leverages Vega to create plots, but primarily targets the IPython Notebook
environment.
30.2.6 Plotly
Plotly's Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are
rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based
collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based
plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps
and dashboards. Plotly is free for unlimited sharing, and has cloud, offline, or on-premise accounts for private use.
30.2.7 QtPandas
Spun off from the main pandas library, the qtpandas library enables DataFrame visualization and manipulation in
PyQt4 and PySide applications.
30.3 IDE
30.3.1 IPython
IPython is an interactive command shell and distributed computing environment. IPython Notebook is a web
application for creating IPython notebooks. An IPython notebook is a JSON document containing an ordered list of
input/output cells which can contain code, text, mathematics, plots and rich media. IPython notebooks can be converted
to a number of open standard output formats (HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText,
Markdown, Python) through 'Download As' in the web interface and ipython nbconvert in a shell.
Pandas DataFrames implement _repr_html_ methods which are utilized by IPython Notebook for displaying
(abbreviated) HTML tables. (Note: HTML tables may or may not be compatible with non-HTML IPython output
formats.)
30.3.2 quantopian/qgrid
qgrid is an interactive grid for sorting and filtering DataFrames in IPython Notebook built with SlickGrid.
30.3.3 Spyder
Spyder is a cross-platform Qt-based open-source Python IDE with editing, testing, debugging, and introspection
features. Spyder can now introspect and display Pandas DataFrames and show both column-wise min/max and global
min/max coloring.
30.4 API
30.4.1 pandas-datareader
pandas-datareader is a remote data access library for pandas. pandas.io from pandas < 0.17.0 is now
refactored/split-off to and importable from pandas_datareader (PyPI:pandas-datareader). Many/most of
the supported APIs have at least a documentation paragraph in the pandas-datareader docs:
The following data feeds are available:
Yahoo! Finance
Google Finance
FRED
Fama/French
World Bank
OECD
Eurostat
EDGAR Index
30.4.2 quandl/Python
Quandl API for Python wraps the Quandl REST API to return Pandas DataFrames with timeseries indexes.
30.4.3 pydatastream
PyDatastream is a Python interface to the Thomson Dataworks Enterprise (DWE/Datastream) SOAP API to return
indexed Pandas DataFrames or Panels with financial data. This package requires valid credentials for this API
(non-free).
30.4.4 pandaSDMX
pandaSDMX is a library to retrieve and acquire statistical data and metadata disseminated in SDMX 2.1, an ISO
standard widely used by institutions such as statistics offices, central banks, and international organisations.
pandaSDMX can expose datasets and related structural metadata including dataflows, code-lists, and datastructure
definitions as pandas Series or multi-indexed DataFrames.
30.4.5 fredapi
fredapi is a Python interface to the Federal Reserve Economic Data (FRED) provided by the Federal Reserve Bank of
St. Louis. It works with both the FRED database and ALFRED database that contains point-in-time data (i.e. historic
data revisions). fredapi provides a wrapper in Python to the FRED HTTP API, and also provides several convenient
methods for parsing and analyzing point-in-time data from ALFRED. fredapi makes use of pandas and returns data in
a Series or DataFrame. This module requires a FRED API key that you can obtain for free on the FRED website.
30.5 Domain Specific
30.5.1 Geopandas
Geopandas extends pandas data objects to include geographic information which supports geometric operations. If your
work entails maps and geographical coordinates, and you love pandas, you should take a close look at Geopandas.
30.5.2 xarray
xarray brings the labeled data power of pandas to the physical sciences by providing N-dimensional variants of the
core pandas data structures. It aims to provide a pandas-like and pandas-compatible toolkit for analytics on
multi-dimensional arrays, rather than the tabular data for which pandas excels.
30.6 Out-of-core
30.6.1 Dask
Dask is a flexible parallel computing library for analytics. Dask provides a familiar DataFrame interface for out-of-core,
parallel and distributed computing.
30.6.2 Blaze
Blaze provides a standard API for doing computations with various in-memory and on-disk backends: NumPy, Pandas,
SQLAlchemy, MongoDB, PyTables, PySpark.
30.6.3 Odo
Odo provides a uniform API for moving data between different formats. It uses pandas' own read_csv for CSV
IO and leverages many existing packages such as PyTables, h5py, and pymongo to move data between non-pandas
formats. Its graph-based approach is also extensible by end users for custom formats that may be too specific for the
core of odo.
THIRTYONE
COMPARISON WITH R / R LIBRARIES
Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this
page was started to provide a more detailed look at the R language and its many third party libraries as they relate to
pandas. In comparisons with R and CRAN libraries, we care about the following things:
Functionality / flexibility: what can/cannot be done with each tool
Performance: how fast are operations. Hard numbers/benchmarks are preferable
Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code
comparisons)
This page is also here to offer a bit of a translation guide for users of these R packages.
For transfer of DataFrame objects from pandas to R, one option is to use HDF5 files, see External Compatibility
for an example.
We'll start off with a quick reference guide pairing some common R operations using dplyr with pandas equivalents.
31.1 Quick Reference
31.1.1 Querying, Filtering, Sampling
R pandas
dim(df) df.shape
head(df) df.head()
slice(df, 1:10) df.iloc[:9]
filter(df, col1 == 1, col2 == 1) df.query('col1 == 1 & col2 == 1')
df[df$col1 == 1 & df$col2 == 1,] df[(df.col1 == 1) & (df.col2 == 1)]
select(df, col1, col2) df[['col1', 'col2']]
select(df, col1:col3) df.loc[:, 'col1':'col3']
select(df, -(col1:col3)) df.drop(cols_to_drop, axis=1) but see footnote 1
distinct(select(df, col1)) df[['col1']].drop_duplicates()
distinct(select(df, col1, col2)) df[['col1', 'col2']].drop_duplicates()
sample_n(df, 10) df.sample(n=10)
sample_frac(df, 0.01) df.sample(frac=0.01)
1. R's shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns,
for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy.
31.1.2 Sorting
R pandas
arrange(df, col1, col2) df.sort_values(['col1', 'col2'])
arrange(df, desc(col1)) df.sort_values('col1', ascending=False)
31.1.3 Transforming
R pandas
select(df, col_one = col1)    df.rename(columns={'col1': 'col_one'})['col_one']
rename(df, col_one = col1)    df.rename(columns={'col1': 'col_one'})
mutate(df, c=a-b) df.assign(c=df.a-df.b)
31.1.4 Grouping and Summarizing
R pandas
summary(df) df.describe()
gdf <- group_by(df, col1) gdf = df.groupby('col1')
summarise(gdf, avg=mean(col1, na.rm=TRUE))    df.groupby('col1').agg({'col1': 'mean'})
summarise(gdf, total=sum(col1)) df.groupby('col1').sum()
31.2 Base R
31.2.1 Slicing with R's c
Selecting multiple columns by name in pandas is straightforward:
In [1]: df = pd.DataFrame(np.random.randn(10, 3), columns=list('abc'))
In [2]: df[['a', 'c']]
Out[2]:
a c
0 -1.039575 -0.424972
1 0.567020 -1.087401
2 -0.673690 -1.478427
3 0.524988 0.577046
4 -1.715002 -0.370647
5 -1.157892 0.844885
6 1.075770 1.643563
7 -1.469388 -0.674600
8 -1.776904 -1.294524
9 0.413738 -0.472035
Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer
attribute and numpy.r_.
In [4]: named = list('abcdefg')
In [5]: n = 30
(the example's wide output is omitted here)
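A minimal runnable sketch of the technique (the frame here is illustrative, not the original example):
df = pd.DataFrame(np.random.randn(30, 30))
# np.r_ concatenates slice ranges into one positional indexer:
# columns 0-9 and 24-29
df.iloc[:, np.r_[:10, 24:30]]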
31.2.2 aggregate
In R you may want to split data into subsets and compute the mean for each. Using a data.frame called df and splitting
it into groups by1 and by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)
In [9]: df = pd.DataFrame({
...: 'v1': [1,3,5,7,8,3,5,np.nan,4,5,7,9],
...: 'v2': [11,33,55,77,88,33,55,np.nan,44,55,77,99],
...: 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12],
...: 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 95, 99, "red", 99, np.nan,
...: np.nan]
...: })
...:
In [10]: g = df.groupby(['by1','by2'])
In [11]: g[['v1','v2']].mean()
Out[11]:
v1 v2
by1 by2
1 95 5.0 55.0
99 5.0 55.0
2 95 7.0 77.0
99 NaN NaN
big damp 3.0 33.0
blue dry 3.0 33.0
red red 4.0 44.0
wet 1.0 11.0
31.2.3 match / %in%
A common way to select data in R is using %in% which is defined using the function match. The operator %in% is
used to return a logical vector indicating if there is a match or not:
s <- 0:4
s %in% c(2,4)
In [12]: s = pd.Series(np.arange(5), dtype=np.float32)
In [13]: s.isin([2, 4])
The match function returns a vector of the positions of matches of its first argument in its second:
s <- 0:4
match(s, c(2,4))
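pandas has no direct Series-level match, but Index.get_indexer expresses the same idea (0-based positions, with -1
where R would return NA); a minimal sketch:
s = pd.Series(np.arange(5), dtype=np.float32)
pd.Index([2, 4]).get_indexer(s)  # array([-1, -1, 0, -1, 1])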
31.2.4 tapply
tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular.
Using a data.frame called baseball, and retrieving information based on the array team:
baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))
tapply(baseball$batting.average, baseball$team,
max)
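A sketch of the pandas translation (column names mirror the R example; the data here is randomly generated):
baseball = pd.DataFrame({
    'team': np.repeat(['Team A', 'Team B', 'Team C', 'Team D', 'Team E'], 5),
    'batting_avg': np.random.uniform(.200, .400, 25),
})
baseball.groupby('team')['batting_avg'].max()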
31.2.5 subset
In pandas, there are a few ways to perform subsetting. You can use query() or pass an expression as if it were an
index/slice as well as standard boolean indexing:
In [18]: df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
In [19]: df.query('a <= b')
Out[19]:
a b
0 -1.003455 -0.990738
1 0.083515 0.548796
3 -0.524392 0.904400
4 -0.837804 0.746374
8 -0.507219 0.245479
In [20]: df[df.a <= df.b]
Out[20]:
a b
0 -1.003455 -0.990738
1 0.083515 0.548796
3 -0.524392 0.904400
4 -0.837804 0.746374
8 -0.507219 0.245479
31.2.6 with
In pandas the equivalent expression, using the eval() method, would be:
In [22]: df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
In [23]: df.eval('a + b')
Out[23]:
0 -0.920205
1 -0.860236
2 1.154370
3 0.188140
4 -1.163718
5 0.001397
6 -0.825694
7 -1.138198
8 -1.708034
9 1.148616
dtype: float64
In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the
eval documentation.
31.3 plyr
plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data
structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data
structures could be mapped in Python.
R Python
array list
lists dictionary or list of objects
data.frame dataframe
31.3.1 ddply
require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)
ddply(df, .(month, week), summarize,
mean = round(mean(x), 2),
sd = round(sd(x), 2))
In pandas the equivalent expression, using the groupby() method, would be:
In [25]: df = pd.DataFrame({
....: 'x': np.random.uniform(1., 168., 120),
....: 'y': np.random.uniform(7., 334., 120),
....: 'z': np.random.uniform(1.7, 20.7, 120),
....: 'month': [5,6,7,8]*30,
....: 'week': np.random.randint(1,4, 120)
....: })
....:
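A minimal sketch of the groupby translation, assuming we want the per-(month, week) mean and standard deviation
of x:
df.groupby(['month', 'week'])['x'].agg([np.mean, np.std])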
31.4 reshape / reshape2
31.4.1 melt.array
An expression using a 3-dimensional array called a in R where you want to melt it into a data.frame:
In [28]: a = np.array(list(range(1,24))+[np.NAN]).reshape(2,3,4)
In [29]: pd.DataFrame([tuple(list(x)+[val]) for x, val in np.ndenumerate(a)])
31.4.2 melt.list
An expression using a list called a in R where you want to melt it into a data.frame:
In Python, this list would be a list of tuples, so the DataFrame() constructor would convert it to a DataFrame as required.
In [30]: a = list(enumerate(list(range(1,5))+[np.NAN]))
In [31]: pd.DataFrame(a)
Out[31]:
0 1
0 0 1.0
1 1 2.0
2 2 3.0
3 3 4.0
4 4 NaN
For more details and examples see the Intro to Data Structures documentation.
31.4.3 melt.data.frame
An expression using a data.frame called cheese in R where you want to reshape the data.frame:
first last
John Doe height 5.5
weight 130.0
Mary Bo height 6.0
weight 150.0
dtype: float64
31.4.4 cast
In R, acast is an expression using a data.frame called df to cast into a higher-dimensional array:
df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
Similarly for dcast which uses a data.frame called df in R to aggregate information based on Animal and
FeedType:
df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)
Python can approach this in two different ways. Firstly, similar to above using pivot_table():
In [38]: df = pd.DataFrame({
....: 'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
....: 'Animal2', 'Animal3'],
....: 'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
....: 'Amount': [10, 7, 4, 2, 5, 6, 2],
....: })
....:
In [39]: df.pivot_table(values='Amount', index='Animal', columns='FeedType', aggfunc='sum')
Out[39]:
FeedType A B
Animal
Animal1 10.0 5.0
Animal2 2.0 13.0
Animal3 6.0 NaN
In [40]: df.groupby(['Animal','FeedType'])['Amount'].sum()
Out[40]:
Animal FeedType
Animal1 A 10
B 5
Animal2 A 2
B 13
Animal3 A 6
Name: Amount, dtype: int64
For more details and examples see the reshaping documentation or the groupby documentation.
31.4.5 factor
cut(c(1,2,3,4,5,6), 3)
factor(c(1,2,3,2,2,3))
In [41]: pd.cut(pd.Series([1,2,3,4,5,6]), 3)
Out[41]:
0 (0.995, 2.667]
1 (0.995, 2.667]
2 (2.667, 4.333]
3 (2.667, 4.333]
4 (4.333, 6.0]
5 (4.333, 6.0]
dtype: category
Categories (3, interval[float64]): [(0.995, 2.667] < (2.667, 4.333] < (4.333, 6.0]]
In [42]: pd.Series([1,2,3,2,2,3]).astype("category")
Out[42]:
0 1
1 2
2 3
3 2
4 2
5 3
dtype: category
Categories (3, int64): [1, 2, 3]
For more details and examples see the categorical introduction and the API documentation. There is also
documentation on the differences relative to R's factor.
THIRTYTWO
COMPARISON WITH SQL
Since many potential pandas users have some familiarity with SQL, this page is meant to provide some examples of
how various SQL operations would be performed using pandas.
If you're new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and numpy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Most of the examples will utilize the tips dataset found within the pandas tests. We'll read the data into a DataFrame
called tips and assume we have a database table of the same name and structure.
In [3]: url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
In [4]: tips = pd.read_csv(url)
In [5]: tips.head()
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
32.1 SELECT
In SQL, selection is done using a comma-separated list of columns you'd like to select (or a * to select all columns):
SELECT total_bill, tip, smoker, time
FROM tips
LIMIT 5;
With pandas, column selection is done by passing a list of column names to your DataFrame:
In [6]: tips[['total_bill', 'tip', 'smoker', 'time']].head(5)
Calling the DataFrame without the list of column names would display all columns (akin to SQL's *).
32.2 WHERE
SELECT *
FROM tips
WHERE time = 'Dinner'
LIMIT 5;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
The above statement is simply passing a Series of True/False objects to the DataFrame, returning all rows with
True.
In [8]: is_dinner = tips['time'] == 'Dinner'
In [9]: is_dinner.value_counts()
Out[9]:
True 176
False 68
Name: time, dtype: int64
In [10]: tips[is_dinner].head(5)
Out[10]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Just like SQLs OR and AND, multiple conditions can be passed to a DataFrame using | (OR) and & (AND).
-- tips by parties of at least 5 diners OR bill total was more than $45
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
# tips by parties of at least 5 diners OR bill total was more than $45
In [12]: tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
Out[12]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5
NULL checking is done using the notnull() and isnull() methods.
In [13]: frame = pd.DataFrame({'col1': ['A', 'B', np.NaN, 'C', 'D'],
....: 'col2': ['F', np.NaN, 'G', 'H', 'I']})
....:
In [14]: frame
Out[14]:
col1 col2
0 A F
1 B NaN
2 NaN G
3 C H
4 D I
Assume we have a table of the same structure as our DataFrame above. We can see only the records where col2 IS
NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;
In [15]: frame[frame['col2'].isnull()]
Out[15]:
col1 col2
1 B NaN
Getting items where col1 IS NOT NULL can be done with notnull().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;
In [16]: frame[frame['col1'].notnull()]
Out[16]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I
32.3 GROUP BY
In pandas, SQL's GROUP BY operations are performed using the similarly named groupby() method.
groupby() typically refers to a process where we'd like to split a dataset into groups, apply some function (typically
aggregation), and then combine the groups together.
A common SQL operation would be getting the count of records in each group throughout a dataset. For instance, a
query getting us the number of tips left by sex:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
In [17]: tips.groupby('sex').size()
Out[17]:
sex
Female 87
Male 157
dtype: int64
Notice that in the pandas code we used size() and not count(). This is because count() applies the function
to each column, returning the number of not null records within each.
In [18]: tips.groupby('sex').count()
Out[18]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157
Alternatively, we could have applied the count() method to an individual column:
In [19]: tips.groupby('sex')['total_bill'].count()
Out[19]:
sex
Female 87
Male 157
Name: total_bill, dtype: int64
Multiple functions can also be applied at once. For instance, say we'd like to see how tip amount differs by day of
the week - agg() allows you to pass a dictionary to your grouped DataFrame, indicating which functions to apply to
specific columns.
Grouping by more than one column is done by passing a list of columns to the groupby() method; a sketch of both
follows.
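A minimal sketch of both ideas (the original snippets are abbreviated here):
# tip statistics per day of the week: map column -> aggregation function(s)
tips.groupby('day').agg({'tip': np.mean, 'day': np.size})
# grouping on two keys, then aggregating
tips.groupby(['smoker', 'day']).agg({'tip': [np.size, np.mean]})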
32.4 JOIN
JOINs can be performed with join() or merge(). By default, join() will join the DataFrames on their indices.
Each method has parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER, FULL) or
the columns to join on (column names or indices).
Assume we have two database tables of the same name and structure as our DataFrames.
In [21]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
In [22]: df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'], 'value': np.random.randn(4)})
Now let's go over the various types of JOINs.
SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;
# merge performs an INNER JOIN by default
In [23]: pd.merge(df1, df2, on='key')
merge() also offers parameters for cases when you'd like to join one DataFrame's column with another DataFrame's
index.
In [25]: indexed_df2 = df2.set_index('key')
In [26]: pd.merge(df1, indexed_df2, left_on='key', right_index=True)
Out[26]:
key value_x value_y
1 B -0.318214 0.543581
3 D 2.169960 -0.426067
3 D 2.169960 1.138079
pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the joined columns find a
match. As of writing, FULL JOINs are not supported in all RDBMS (MySQL).
32.5 UNION
SQL's UNION is similar to UNION ALL, however UNION will remove duplicate rows.
SELECT city, rank
FROM df1
UNION
SELECT city, rank
FROM df2;
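The pandas equivalents, as a sketch (df1 and df2 here stand in for the city/rank frames of the original example):
pd.concat([df1, df2])                    # UNION ALL
pd.concat([df1, df2]).drop_duplicates()  # UNION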
32.6 Pandas equivalents for some SQL analytic and aggregate functions
32.6.1 Top N rows with offset
-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
In [35]: tips.nlargest(10 + 5, columns='tip').tail(10)
32.6.2 Top N rows per group
In [36]: (tips.assign(rnk=tips.groupby(['day'])['total_bill']
....: .rank(method='first', ascending=False))
....: .query('rnk < 3')
....: .sort_values(['day','rnk'])
....: )
....:
Out[36]:
total_bill tip sex smoker day time size rnk
95 40.17 4.73 Male Yes Fri Dinner 4 1.0
90 28.97 3.00 Male Yes Fri Dinner 2 2.0
170 50.81 10.00 Male Yes Sat Dinner 3 1.0
212 48.33 9.00 Male No Sat Dinner 4 2.0
156 48.17 5.00 Male No Sun Dinner 6 1.0
182 45.35 3.50 Male Yes Sun Dinner 3 2.0
197 43.11 5.00 Female Yes Thur Lunch 4 1.0
142 41.19 5.00 Male No Thur Lunch 5 2.0
Let's find tips with (rank < 3) per gender group for (tips < 2). Notice that when using rank(method='min'),
rnk_min remains the same for the same tip (as Oracle's RANK() function); a sketch follows.
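A minimal sketch of that query (the method chain mirrors the In [36] example above):
(tips[tips['tip'] < 2]
 .assign(rnk_min=lambda d: d.groupby('sex')['tip'].rank(method='min'))
 .query('rnk_min < 3')
 .sort_values(['sex', 'rnk_min']))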
32.7 UPDATE
UPDATE tips
SET tip = tip*2
WHERE tip < 2;
In pandas:
In [38]: tips.loc[tips['tip'] < 2, 'tip'] *= 2
32.8 DELETE
DELETE FROM tips
WHERE tip > 9;
In pandas we select the rows that should remain, instead of deleting them:
In [39]: tips = tips.loc[tips['tip'] <= 9]
THIRTYTHREE
COMPARISON WITH SAS
For potential users coming from SAS this page is meant to demonstrate how different SAS operations would be
performed in pandas.
If you're new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and numpy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) - the equivalent in SAS would be:
proc print data=df(obs=5);
run;
33.1 Data Structures
33.1.1 General Terminology Translation
pandas SAS
DataFrame data set
column variable
row observation
groupby BY-group
NaN .
33.1.2 DataFrame / Series
A DataFrame in pandas is analogous to a SAS data set - a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set
using SAS's DATA step, can also be accomplished in pandas.
A Series is the data structure that represents one column of a DataFrame. SAS doesn't have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column in the
DATA step.
33.1.3 Index
Every DataFrame and Series has an Index - which are labels on the rows of the data. SAS does not have an
exactly analogous concept. A data set's rows are essentially unlabeled, other than an implicit integer index that can be
accessed during the DATA step (_N_).
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.
33.2 Data Input/Output
33.2.1 Constructing a DataFrame from Values
A SAS data set can be built from specified values by placing the data after a datalines statement and specifying
the column names.
data df;
input x y;
datalines;
1 2
3 4
5 6
;
run;
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
In [3]: df = pd.DataFrame({
...: 'x': [1, 3, 5],
...: 'y': [2, 4, 6]})
...:
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
33.2.2 Reading External Data
Like SAS, pandas provides utilities for reading in data from many formats. The tips dataset, found within the pandas
tests (csv), will be used in many of the following examples.
SAS provides PROC IMPORT to read csv data into a data set.
proc import datafile='tips.csv' dbms=csv out=tips replace;
getnames=yes;
run;
The pandas method is read_csv(), which works similarly.
In [5]: url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
In [6]: tips = pd.read_csv(url)
In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Like PROC IMPORT, read_csv can take a number of parameters to specify how the data should be parsed. For
example, if the data was instead tab delimited, and did not have column names, the pandas command would be:
tips = pd.read_csv('tips.csv', sep='\t', header=None)
# alternatively, read_table is an alias to read_csv with tab delimiter
tips = pd.read_table('tips.csv', header=None)
In addition to text/csv, pandas supports a variety of other data formats such as Excel, HDF5, and SQL databases. These
are all read via a pd.read_* function. See the IO documentation for more details.
33.2.3 Exporting Data
Similarly in pandas, the opposite of read_csv is to_csv(), and other data formats follow a similar API.
tips.to_csv('tips2.csv')
33.3 Data Operations
33.3.1 Operations on Columns
In the DATA step, arbitrary math expressions can be used on new or existing columns.
data tips;
set tips;
total_bill = total_bill - 2;
new_bill = total_bill / 2;
run;
pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way.
In [8]: tips['total_bill'] = tips['total_bill'] - 2
In [9]: tips['new_bill'] = tips['total_bill'] / 2.0
In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
33.3.2 Filtering
data tips;
set tips;
if total_bill > 10;
run;
data tips;
set tips;
where total_bill > 10;
/* equivalent in this case - where happens before the
DATA step begins and can also be used in PROC statements */
run;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
In [11]: tips[tips['total_bill'] > 10].head()
33.3.3 If/Then Logic
In SAS, if/then logic can be used to create new columns.
data tips;
set tips;
format bucket $4.;
if total_bill < 10 then bucket = 'low';
else bucket = 'high';
run;
The same operation in pandas can be accomplished using the where method from numpy.
In [12]: tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')
In [13]: tips.head()
Out[13]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
33.3.4 Date Functionality
SAS provides a variety of functions to do operations on date/datetime columns.
data tips;
set tips;
format date1 date2 date1_plusmonth mmddyy10.;
date1 = mdy(1, 15, 2013);
date2 = mdy(2, 15, 2015);
date1_year = year(date1);
date2_month = month(date2);
* shift date to beginning of next interval;
date1_next = intnx('MONTH', date1, 1);
* count intervals between dates;
months_between = intck('MONTH', date1, date2);
run;
The equivalent pandas operations are shown below. In addition to these functions pandas supports other Time Series
features not available in Base SAS (such as resampling and custom offsets) - see the timeseries documentation for
more details.
In [14]: tips['date1'] = pd.Timestamp('2013-01-15')
In [15]: tips['date2'] = pd.Timestamp('2015-02-15')
In [16]: tips['date1_year'] = tips['date1'].dt.year
In [17]: tips['date2_month'] = tips['date2'].dt.month
In [18]: tips['date1_next'] = tips['date1'] + pd.offsets.MonthBegin()
In [19]: tips['months_between'] = (tips['date2'].dt.to_period('M') - tips['date1'].dt.to_period('M'))
In [20]: tips[['date1','date2','date1_year','date2_month',
....: 'date1_next','months_between']].head()
....:
Out[20]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 25
1 2013-01-15 2015-02-15 2013 2 2013-02-01 25
2 2013-01-15 2015-02-15 2013 2 2013-02-01 25
33.3.5 Selection of Columns
SAS provides keywords in the DATA step to select, drop, and rename columns.
data tips;
set tips;
keep sex total_bill tip;
run;
data tips;
set tips;
drop sex;
run;
data tips;
set tips;
rename total_bill=total_bill_2;
run;
# keep
In [21]: tips[['sex', 'total_bill', 'tip']].head()
Out[21]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
# drop
In [22]: tips.drop('sex', axis=1).head()
# rename
In [23]: tips.rename(columns={'total_bill':'total_bill_2'}).head()
33.3.6 Sorting by Values
pandas objects have a sort_values() method, which takes a list of columns to sort by.
In [24]: tips = tips.sort_values(['sex', 'total_bill'])
In [25]: tips.head()
Out[25]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
33.4 Merging
In [26]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
In [27]: df1
Out[27]:
key value
0 A -0.857326
1 B 1.075416
2 C 0.371727
3 D 1.065735
In [28]: df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'], 'value': np.random.randn(4)})
In [29]: df2
Out[29]:
key value
0 B -0.227314
1 D 2.102726
2 D -0.092796
3 E 0.094694
In SAS, data must be explicitly sorted before merging. Different types of joins are accomplished using the in= dummy
variables to track whether a match was found in one or both input frames.
proc sort data=df1;
by key;
run;
pandas DataFrames have a merge() method, which provides similar functionality. Note that the data does not have
to be sorted ahead of time, and different join types are accomplished via the how keyword.
In [30]: inner_join = df1.merge(df2, on=['key'], how='inner')
In [31]: inner_join
Out[31]:
key value_x value_y
0 B 1.075416 -0.227314
1 D 1.065735 2.102726
2 D 1.065735 -0.092796
In [32]: left_join = df1.merge(df2, on=['key'], how='left')
In [33]: left_join
Out[33]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
In [34]: right_join = df1.merge(df2, on=['key'], how='right')
In [35]: right_join
Out[35]:
key value_x value_y
0 B 1.075416 -0.227314
1 D 1.065735 2.102726
2 D 1.065735 -0.092796
3 E NaN 0.094694
In [36]: outer_join = df1.merge(df2, on=['key'], how='outer')
In [37]: outer_join
Out[37]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
5 E NaN 0.094694
33.5 Missing Data
Like SAS, pandas has a representation for missing data - which is the special float value NaN (not a number). Many
of the semantics are the same; for example missing data propagates through numeric operations, and is ignored by
default for aggregations.
In [38]: outer_join
Out[38]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
5 E NaN 0.094694
In [39]: outer_join['value_x'] + outer_join['value_y']
Out[39]:
0 NaN
1 0.848102
2 NaN
3 3.168461
4 0.972939
5 NaN
dtype: float64
In [40]: outer_join['value_x'].sum()
Out[40]: 2.72128653544262
One difference is that missing data cannot be compared to its sentinel value. For example, in SAS you could do this
to filter missing values.
data outer_join_nulls;
set outer_join;
if value_x = .;
run;
data outer_join_no_nulls;
set outer_join;
if value_x ^= .;
run;
Which doesn't work in pandas. Instead, the pd.isnull or pd.notnull functions should be used for
comparisons.
In [41]: outer_join[pd.isnull(outer_join['value_x'])]
Out[41]:
key value_x value_y
5 E NaN 0.094694
In [42]: outer_join[pd.notnull(outer_join['value_x'])]
Out[42]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
pandas also provides a variety of methods to work with missing data - some of which would be challenging to express
in SAS. For example, there are methods to drop all rows with any missing values, replace missing values with a
specified value (like the mean), or forward fill from previous rows. See the missing data documentation for more.
In [43]: outer_join.dropna()
Out[43]:
key value_x value_y
1 B 1.075416 -0.227314
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
In [44]: outer_join.fillna(method='ffill')
In [45]: outer_join['value_x'].fillna(outer_join['value_x'].mean())
Out[45]:
0 -0.857326
1 1.075416
2 0.371727
3 1.065735
4 1.065735
5 0.544257
Name: value_x, dtype: float64
33.6 GroupBy
33.6.1 Aggregation
SAS's PROC SUMMARY can be used to group by one or more key variables and compute aggregations on numeric
columns.
pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.
In [46]: tips_summed = tips.groupby(['sex', 'smoker'])['total_bill', 'tip'].sum()
In [47]: tips_summed.head()
Out[47]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
33.6.2 Transformation
In SAS, if the group aggregations need to be used with the original frame, it must be merged back together. For
example, to subtract the mean for each observation by smoker group.
data tips;
merge tips(in=a) smoker_means(in=b);
by smoker;
adj_total_bill = total_bill - group_bill;
if a and b;
run;
pandas groupby provides a transform mechanism that allows these types of operations to be succinctly expressed
in one operation.
In [48]: gb = tips.groupby('smoker')['total_bill']
In [49]: tips['adj_total_bill'] = tips['total_bill'] - gb.transform('mean')
In [50]: tips.head()
Out[50]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
33.6.3 By Group Processing
In addition to aggregation, pandas groupby can be used to replicate most other by group processing from SAS. For
example, this DATA step reads the data by sex/smoker group and filters to the first entry for each.
data tips_first;
set tips;
by sex smoker;
if FIRST.sex or FIRST.smoker then output;
run;
In [51]: tips.groupby(['sex','smoker']).first()
Out[51]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
Yes 1.07 1.00 Sat Dinner 1 -17.686344
Male No 5.51 2.00 Thur Lunch 2 -11.678278
Yes 5.25 5.15 Sun Dinner 2 -13.506344
33.7 Other Considerations
33.7.1 Disk vs Memory
pandas operates exclusively in memory, whereas a SAS data set exists on disk. This means that the size of data able to
be loaded in pandas is limited by your machine's memory, but also that the operations on that data may be faster.
If out of core processing is needed, one possibility is the dask.dataframe library (currently in development) which
provides a subset of pandas functionality for an on-disk DataFrame.
33.7.2 Data Interop
pandas provides a read_sas() method that can read SAS data saved in the XPORT format. The ability to read
SAS's binary format is planned for a future release.
df = pd.read_sas('transport-file.xpt')
XPORT is a relatively limited format and the parsing of it is not as optimized as some of the other pandas readers. An
alternative way to interop data between SAS and pandas is to serialize to csv.
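A sketch of the csv round-trip (the file name here is illustrative):
# SAS side: proc export data=tips outfile='tips2.csv' dbms=csv; run;
df = pd.read_csv('tips2.csv')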
THIRTYFOUR
API REFERENCE
This page gives an overview of all public pandas objects, functions and methods. In general, all classes and functions
exposed in the top-level pandas.* namespace are regarded as public.
Further, some of the subpackages are public, including pandas.errors, pandas.plotting, and pandas.
testing. Certain functions in the pandas.io and pandas.tseries submodules are public as well (those
mentioned in the documentation). Further, the pandas.api.types subpackage holds some public functions
related to data types in pandas.
Warning: The pandas.core, pandas.compat, and pandas.util top-level modules are considered to
be PRIVATE. Stability of functionality in those modules is not guaranteed.
34.1 Input/Output
34.1.1 Pickling
read_pickle(path[, compression]) Load pickled pandas object (or any other pickled object)
from the specified file path
34.1.1.1 pandas.read_pickle
pandas.read_pickle(path, compression='infer')
Load pickled pandas object (or any other pickled object) from the specified file path
Warning: Loading pickled data received from untrusted sources can be unsafe. See: http://docs.python.org/2.7/
library/pickle.html
Parameters path : string
File path
compression : {'infer', 'gzip', 'bz2', 'xz', 'zip', None}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then use gzip, bz2, xz or zip if
path is a string ending in '.gz', '.bz2', '.xz', or '.zip' respectively, and no decompression
otherwise. Set to None for no decompression.
New in version 0.20.0.
Returns unpickled : type of object stored in file
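A short usage sketch (the file name is illustrative):
df = pd.DataFrame({'a': [1, 2, 3]})
df.to_pickle('df.pkl.gz', compression='gzip')
pd.read_pickle('df.pkl.gz')  # compression inferred from the .gz extension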
34.1.2.1 pandas.read_table
header : int or list of ints, default 'infer'
Row number(s) to use as the column names, and the start of the data. Default behavior
is as if set to 0 if no names passed, otherwise None. Explicitly pass header=0 to
be able to replace existing names. The header can be a list of integers that specify
row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that
are not specified will be skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if skip_blank_lines=True,
so header=0 denotes the first line of data rather than the first line of the file.
names : array-like, default None
List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed unless
mangle_dupe_cols=True, which is the default.
index_col : int or sequence or False, default None
Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex
is used. If you have a malformed file with delimiters at the end of each line, you might
consider index_col=False to force pandas to _not_ use the first column as the index (row
names)
usecols : array-like or callable, default None
Return a subset of the columns. If array-like, all elements must either be positional (i.e.
integer indices into the document columns) or strings that correspond to column names
provided either by the user in names or inferred from the document header row(s). For
example, a valid array-like usecols parameter would be [0, 1, 2] or [foo, bar, baz].
If callable, the callable function will be evaluated against the column names, returning
names where the callable function evaluates to True. An example of a valid callable
argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD'].
Using this parameter results in much faster parsing time and lower memory usage.
as_recarray : boolean, default False
DEPRECATED: this argument will be removed in a future version. Please call
pd.read_csv(...).to_records() instead.
Return a NumPy recarray instead of a DataFrame after parsing the data. If set to True,
this option takes precedence over the squeeze parameter. In addition, as row indices are
not available in such a format, the index_col parameter will be ignored.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
prefix : str, default None
Prefix to add to column numbers when no header, e.g. X for X0, X1, ...
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as X.0...X.N, rather than X...X. Passing in
False will cause data to be overwritten if there are duplicate names in the columns.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {a: np.float64, b: np.int32} Use str or object
to preserve and not interpret dtype. If converters are specified, they will be applied
INSTEAD of dtype conversion.
engine : {'c', 'python'}, optional
Parser engine to use. The C engine is faster while the python engine is currently more
feature-complete.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers
or column labels
true_values : list, default None
Values to consider as True
false_values : list, default None
Values to consider as False
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer or callable, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
If callable, the callable function will be evaluated against the row indices, returning
True if the row should be skipped and False otherwise. An example of a valid callable
argument would be lambda x: x in [0, 2].
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine='c')
skip_footer : int, default 0
DEPRECATED: use the skipfooter parameter instead, as they are identical
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA
values. By default the following values are interpreted as NaN: '', '#N/A', '#N/A N/A',
'#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', 'N/A', 'NA',
'NULL', 'NaN', 'nan'.
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN values are
overridden, otherwise they're appended to.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In data with-
out any NAs, passing na_filter=False can improve the performance of reading a large
file
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values
parse_dates : boolean or list of ints or names or list of lists or dict, default False
boolean. If True -> try parsing the index.
list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate
date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call the result 'foo'.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but
possibly mixed type inference. To ensure no mixed types either set False, or specify the
type with the dtype parameter. Note that the entire file is read into a single DataFrame
regardless, use the chunksize or iterator parameter to return the data in chunks. (Only
valid with C parser)
buffer_lines : int, default None
DEPRECATED: this argument will be removed in a future version because its value is
not respected by the parser
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
If compact_ints is True, then for any column that is of integer dtype, the parser will
attempt to cast it as the smallest integer dtype possible, either signed or unsigned
depending on the specification from the use_unsigned parameter.
use_unsigned : boolean, default False
DEPRECATED: this argument will be removed in a future version
If integer columns are being compacted (i.e. compact_ints=True), specify whether the
column should be compacted to the smallest signed or unsigned integer dtype.
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory
and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
Returns result : DataFrame or TextParser
34.1.2.2 pandas.read_csv
by the parameter header but not by skiprows. For example, if comment='#', parsing
'#empty\na,b,c\n1,2,3' with header=0 will result in 'a,b,c' being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (ex. utf-8). List of Python standard
encodings
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the following
parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
tupleize_cols : boolean, default False
Leave a list of tuples on columns as is (default is to convert to a MultiIndex on the
columns)
error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause
an exception to be raised, and no DataFrame will be returned. If False, then these bad
lines will be dropped from the DataFrame that is returned.
warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each bad line
will be output.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but
possibly mixed type inference. To ensure no mixed types either set False, or specify the
type with the dtype parameter. Note that the entire file is read into a single DataFrame
regardless, use the chunksize or iterator parameter to return the data in chunks. (Only
valid with C parser)
buffer_lines : int, default None
DEPRECATED: this argument will be removed in a future version because its value is
not respected by the parser
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
If compact_ints is True, then for any column that is of integer dtype, the parser will
attempt to cast it as the smallest integer dtype possible, either signed or unsigned
depending on the specification from the use_unsigned parameter.
use_unsigned : boolean, default False
DEPRECATED: this argument will be removed in a future version
If integer columns are being compacted (i.e. compact_ints=True), specify whether the
column should be compacted to the smallest signed or unsigned integer dtype.
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory
and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
34.1.2.3 pandas.read_fwf
consider index_col=False to force pandas to _not_ use the first column as the index (row
names)
usecols : array-like or callable, default None
Return a subset of the columns. If array-like, all elements must either be positional (i.e.
integer indices into the document columns) or strings that correspond to column names
provided either by the user in names or inferred from the document header row(s). For
example, a valid array-like usecols parameter would be [0, 1, 2] or [foo, bar, baz].
If callable, the callable function will be evaluated against the column names, returning
names where the callable function evaluates to True. An example of a valid callable
argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD'].
Using this parameter results in much faster parsing time and lower memory usage.
as_recarray : boolean, default False
DEPRECATED: this argument will be removed in a future version. Please call
pd.read_csv(...).to_records() instead.
Return a NumPy recarray instead of a DataFrame after parsing the data. If set to True,
this option takes precedence over the squeeze parameter. In addition, as row indices are
not available in such a format, the index_col parameter will be ignored.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
prefix : str, default None
Prefix to add to column numbers when no header, e.g. X for X0, X1, ...
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as X.0...X.N, rather than X...X. Passing in
False will cause data to be overwritten if there are duplicate names in the columns.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {a: np.float64, b: np.int32} Use str or object
to preserve and not interpret dtype. If converters are specified, they will be applied
INSTEAD of dtype conversion.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers
or column labels
true_values : list, default None
Values to consider as True
false_values : list, default None
Values to consider as False
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer or callable, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
If callable, the callable function will be evaluated against the row indices, returning
True if the row should be skipped and False otherwise. An example of a valid callable
argument would be lambda x: x in [0, 2].
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine='c')
skip_footer : int, default 0
DEPRECATED: use the skipfooter parameter instead, as they are identical
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA
values. By default the following values are interpreted as NaN: '', '#N/A', '#N/A N/A',
'#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', 'N/A', 'NA',
'NULL', 'NaN', 'nan'.
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN values are
overridden, otherwise they're appended to.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In data with-
out any NAs, passing na_filter=False can improve the performance of reading a large
file
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values
parse_dates : boolean or list of ints or names or list of lists or dict, default False
boolean. If True -> try parsing the index.
list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate
date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call the result 'foo'
If a column or index contains an unparseable date, the entire column or index will be
returned unaltered as an object data type. For non-standard datetime parsing, use
pd.to_datetime after pd.read_csv.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled, pandas will attempt to infer the format of the
datetime strings in the columns, and if it can be inferred, switch to a faster method of
parsing them. In some cases this can increase the parsing speed by 5-10x.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the original
columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of datetime in-
stances. The default uses dateutil.parser.parser to do the conversion. Pandas
will try to call date_parser in three different ways, advancing to the next if an exception
occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) con-
catenate (row-wise) the string values from the columns defined by parse_dates into a
single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See the IO Tools docs for more information
on iterator and chunksize.
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then use gzip, bz2, zip or xz if
filepath_or_buffer is a string ending in '.gz', '.bz2', '.zip', or '.xz', respectively, and no
decompression otherwise. If using 'zip', the ZIP file must contain only one data file to
be read in. Set to None for no decompression.
New in version 0.18.1: support for 'zip' and 'xz' compression.
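A minimal sketch of the inference behavior (assuming an existing DataFrame df; the file name is hypothetical):
>>> df.to_csv('data.csv.gz', compression='gzip')
>>> pd.read_csv('data.csv.gz', index_col=0)  # 'gzip' inferred from the '.gz' suffix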
thousands : str, default None
Thousands separator
decimal : str, default '.'
Character to recognize as decimal point (e.g. use , for European data).
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values. The
options are None for the ordinary converter, 'high' for the high-precision converter, and
'round_trip' for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted items can
include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not
to interpret two consecutive quotechar elements INSIDE a field as a single quotechar
element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of a line, the
line will be ignored altogether. This parameter must be a single character. Like empty
lines (as long as skip_blank_lines=True), fully commented lines are ignored
by the parameter header but not by skiprows. For example, if comment='#', parsing
'#empty\na,b,c\n1,2,3' with header=0 will result in 'a,b,c' being treated as the header.
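That example as a runnable sketch (the inline data mirrors the description above):
>>> from io import StringIO
>>> pd.read_csv(StringIO('#empty\na,b,c\n1,2,3'), comment='#', header=0)
   a  b  c
0  1  2  3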
encoding : str, default None
Encoding to use for UTF when reading/writing (ex. utf-8). List of Python standard
encodings
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the following pa-
rameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
tupleize_cols : boolean, default False
Leave a list of tuples on columns as is (default is to convert to a MultiIndex on the
columns)
error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause
an exception to be raised, and no DataFrame will be returned. If False, then these bad
lines will be dropped from the DataFrame that is returned.
warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each bad line
will be output.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but
possibly mixed type inference. To ensure no mixed types either set False, or specify the
type with the dtype parameter. Note that the entire file is read into a single DataFrame
regardless, use the chunksize or iterator parameter to return the data in chunks. (Only
valid with C parser)
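A sketch of chunked reading (the file name and the per-chunk handler are hypothetical placeholders):
>>> for chunk in pd.read_csv('large.csv', chunksize=100000):
...     process(chunk)  # process is a placeholder for user code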
buffer_lines : int, default None
DEPRECATED: this argument will be removed in a future version because its value is
not respected by the parser
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
If compact_ints is True, then for any column that is of integer dtype, the parser will
attempt to cast it as the smallest integer dtype possible, either signed or unsigned de-
pending on the specification from the use_unsigned parameter.
34.1.2.4 pandas.read_msgpack
34.1.3 Clipboard
34.1.3.1 pandas.read_clipboard
pandas.read_clipboard(sep='\\s+', **kwargs)
Read text from clipboard and pass to read_table. See read_table for the full argument list
Parameters sep : str, default '\\s+'
A string or regex delimiter. The default of '\\s+' denotes one or more whitespace
characters.
Returns parsed : DataFrame
34.1.4 Excel
read_excel(io[, sheetname, header, ...]) Read an Excel table into a pandas DataFrame
ExcelFile.parse([sheetname, header, ...]) Parse specified sheet(s) into a DataFrame
34.1.4.1 pandas.read_excel
34.1.4.2 pandas.ExcelFile.parse
34.1.5 JSON
read_json([path_or_buf, orient, typ, dtype, ...]) Convert a JSON string to pandas object
34.1.5.1 pandas.read_json
Examples
>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],
"index":["row 1","row 2"],
"data":[["a","b"],["c","d"]]}'
>>> pd.read_json(_, orient='split')
col 1 col 2
row 1 a b
row 2 c d
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(_, orient='index')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved
with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(_, orient='records')
col 1 col 2
0 a b
1 c d
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
"primaryKey": "index",
"pandas_version": "0.20.0"},
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
json_normalize(data[, record_path, meta, ...]) Normalize semi-structured JSON data into a flat table
build_table_schema(data[, index, ...]) Create a Table schema from data.
34.1.5.2 pandas.io.json.json_normalize
Examples
34.1.5.3 pandas.io.json.build_table_schema
Notes
See _as_json_table_type for conversion types. Timedeltas are converted to ISO8601 duration format with 9
decimal places after the seconds field for nanosecond precision.
Categoricals are converted to the 'any' dtype, and use the 'enum' field constraint to list the allowed values. The
'ordered' attribute is included in an 'ordered' field.
Examples
>>> df = pd.DataFrame(
... {'A': [1, 2, 3],
... 'B': ['a', 'b', 'c'],
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
{'fields': [{'name': 'idx', 'type': 'integer'},
{'name': 'A', 'type': 'integer'},
{'name': 'B', 'type': 'string'},
{'name': 'C', 'type': 'datetime'}],
'pandas_version': '0.20.0',
'primaryKey': ['idx']}
34.1.6 HTML
read_html(io[, match, flavor, header, ...]) Read HTML tables into a list of DataFrame objects.
34.1.6.1 pandas.read_html
Soup. However, these attributes must be valid HTML table attributes to work correctly.
For example,
attrs = {'id': 'table'}
is a valid attribute dictionary because the 'id' HTML tag attribute is a valid HTML
attribute for any HTML tag as per this document.
attrs = {'asdf': 'table'}
is not a valid attribute dictionary because 'asdf' is not a valid HTML attribute even if
it is a valid XML attribute. Valid HTML 4.01 table attributes can be found here. A
working draft of the HTML 5 spec can be found here. It contains the latest information
on table attributes for the modern web.
parse_dates : bool, optional
See read_csv() for more details.
tupleize_cols : bool, optional
If False try to parse multiple header rows into a MultiIndex, otherwise return raw
tuples. Defaults to False.
thousands : str, optional
Separator to use to parse thousands. Defaults to ','.
encoding : str or None, optional
The encoding used to decode the web page. Defaults to None. None preserves the
previous encoding behavior, which depends on the underlying parser library (e.g., the
parser library will try to use the encoding provided by the document).
decimal : str, default '.'
Character to recognize as decimal point (e.g. use , for European data).
New in version 0.19.0.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or
column labels, values are functions that take one input argument, the cell (not column)
content, and return the transformed content.
New in version 0.19.0.
na_values : iterable, default None
Custom NA values
New in version 0.19.0.
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN values are
overridden, otherwise they're appended to
New in version 0.19.0.
Returns dfs : list of DataFrames
See also:
pandas.read_csv
Notes
Before using this function you should read the gotchas about the HTML parsing libraries.
Expect to do some cleanup after you call this function. For example, you might need to manually assign column
names if the column names are converted to NaN when you pass the header=0 argument. We try to assume as
little as possible about the structure of the table and push the idiosyncrasies of the HTML contained in the table
to the user.
This function searches for <table> elements and only for <tr> and <th> rows and <td> elements within
each <tr> or <th> element in the table. <td> stands for table data.
Similar to read_csv() the header argument is applied after skiprows is applied.
This function will always return a list of DataFrame or it will fail, e.g., it will not return an empty list.
Examples
See the read_html documentation in the IO section of the docs for some examples of reading in HTML tables.
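A minimal sketch (assuming a parser library such as lxml or BeautifulSoup is installed; the inline HTML is hypothetical):
>>> html = '<table id="t"><tr><th>a</th></tr><tr><td>1</td></tr></table>'
>>> pd.read_html(html, header=0, attrs={'id': 't'})[0]
   a
0  1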
34.1.7.1 pandas.read_hdf
34.1.7.2 pandas.HDFStore.put
34.1.7.3 pandas.HDFStore.append
Notes
Does not check if data being appended overlaps with existing data in the table, so be careful
34.1.7.4 pandas.HDFStore.get
HDFStore.get(key)
Retrieve pandas object stored in file
Parameters key : object
Returns obj : type of object stored in file
34.1.7.5 pandas.HDFStore.select
34.1.8 Feather
34.1.8.1 pandas.read_feather
pandas.read_feather(path)
Load a feather-format object from the file path
Parameters path : string
File path
Returns type of object stored in file
34.1.9 SAS
read_sas(filepath_or_buffer[, format, ...]) Read SAS files stored as either XPORT or SAS7BDAT for-
mat files.
34.1.9.1 pandas.read_sas
34.1.10 SQL
read_sql_table(table_name, con[, schema, ...]) Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...]) Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...]) Read SQL query or database table into a DataFrame.
34.1.11.1 pandas.read_gbq
34.1.12 STATA
34.1.12.1 pandas.read_stata
Examples
>>> df = pandas.read_stata('filename.dta')
34.1.12.2 pandas.io.stata.StataReader.data
StataReader.data(**kwargs)
DEPRECATED: Reads observations from Stata file, converting them into a dataframe
This is a legacy method. Use read in new code.
Parameters convert_dates : boolean, defaults to True
Convert date variables to DataFrame time values
convert_categoricals : boolean, defaults to True
Read value labels and convert columns to Categorical/Factor variables
index : identifier of index column
identifier of column that should be used as index of the DataFrame
convert_missing : boolean, defaults to False
Flag indicating whether to convert missing values to their Stata representations. If False,
missing values are replaced with nans. If True, columns containing missing values are
returned with object data types and missing values are represented by StataMissing-
Value objects.
preserve_dtypes : boolean, defaults to True
Preserve Stata datatypes. If False, numeric data are upcast to pandas default types for
foreign data (float64 or int64)
columns : list or None
Columns to retain. Columns will be returned in the given order. None returns all
columns
order_categoricals : boolean, defaults to True
Flag indicating whether converted categorical data are ordered.
Returns DataFrame
34.1.12.3 pandas.io.stata.StataReader.data_label
StataReader.data_label()
Returns data label of Stata file
34.1.12.4 pandas.io.stata.StataReader.value_labels
StataReader.value_labels()
Returns a dict, associating each variable name with a dict that associates each value with its corresponding label
34.1.12.5 pandas.io.stata.StataReader.variable_labels
StataReader.variable_labels()
Returns variable labels as a dict, associating each variable name with its corresponding label
34.1.12.6 pandas.io.stata.StataWriter.write_file
StataWriter.write_file()
melt(frame[, id_vars, value_vars, var_name, ...]) Unpivots a DataFrame from wide format to long format,
optionally leaving identifier variables set.
pivot(index, columns, values) Produce pivot table based on 3 columns of this
DataFrame.
pivot_table(data[, values, index, columns, ...]) Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...]) Compute a simple cross-tabulation of two (or more) fac-
tors.
cut(x, bins[, right, labels, retbins, ...]) Return indices of half-open bins to which each value of x
belongs.
qcut(x, q[, labels, retbins, precision, ...]) Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...]) Merge DataFrame objects by performing a database-style
join operation by columns or indexes.
merge_ordered(left, right[, on, left_on, ...]) Perform merge with optional filling/interpolation designed
for ordered data like time series data.
merge_asof(left, right[, on, left_on, ...]) Perform an asof merge.
concat(objs[, axis, join, join_axes, ...]) Concatenate pandas objects along a particular axis with op-
tional set logic along the other axes.
get_dummies(data[, prefix, prefix_sep, ...]) Convert categorical variable into dummy/indicator vari-
ables
factorize(values[, sort, order, ...]) Encode input values as an enumerated type or categorical
variable
unique(values) Hash table-based unique.
34.2.1.1 pandas.melt
Examples
34.2.1.2 pandas.pivot
Returns DataFrame
See also:
DataFrame.pivot_table generalization of pivot that can handle duplicate values for one index/column
pair
Notes
Obviously, all 3 of the input arguments must have the same length
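A minimal sketch of the three-array form (the data is hypothetical):
>>> df = pd.DataFrame({'foo': ['one', 'one', 'two', 'two'],
...                    'bar': ['A', 'B', 'A', 'B'],
...                    'baz': [1, 2, 3, 4]})
>>> pd.pivot(df['foo'], df['bar'], df['baz'])
bar  A  B
foo
one  1  2
two  3  4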
34.2.1.3 pandas.pivot_table
Examples
>>> df
A B C D
0 foo one small 1
1 foo one large 2
2 foo one large 2
3 foo two small 3
4 foo two small 3
5 bar one large 4
6 bar one small 5
7 bar two small 6
8 bar two large 7
34.2.1.4 pandas.crosstab
Notes
Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are
specified.
Any input passed containing Categorical data will have all of its categories included in the cross-tabulation,
even if the actual data does not contain any instances of a particular category.
In the event that there aren't overlapping indexes an empty DataFrame will be returned.
Examples
>>> a
array(['foo', 'foo', 'foo', 'foo', 'bar', 'bar',
'bar', 'bar', 'foo', 'foo', 'foo'], dtype=object)
>>> b
array(['one', 'one', 'one', 'two', 'one', 'one',
'one', 'two', 'two', 'two', 'one'], dtype=object)
>>> c
array(['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny',
'shiny', 'dull', 'shiny', 'shiny', 'shiny'], dtype=object)
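The cross-tabulation call these arrays lead up to (reconstructed here as an illustration; the counts follow from the arrays above):
>>> pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])
b    one        two
c   dull shiny dull shiny
a
bar    1     2    1     0
foo    2     2    1     2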
34.2.1.5 pandas.cut
Notes
The cut function can be useful for going from a continuous variable to a categorical variable. For example, cut
could convert ages to groups of age ranges.
Any NA values will be NA in the result. Out of bounds values will be NA in the resulting Categorical object
Examples
>>> pd.cut(np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1]), 3, retbins=True)
([(0.191, 3.367], (0.191, 3.367], (0.191, 3.367], (3.367, 6.533], (6.533, 9.7], (0.191, 3.367]]
Categories (3, object): [(0.191, 3.367] < (3.367, 6.533] < (6.533, 9.7]],
array([ 0.1905, 3.36666667, 6.53333333, 9.7 ]))
34.2.1.6 pandas.qcut
Notes
Examples
>>> pd.qcut(range(5), 4)
[[0, 1], [0, 1], (1, 2], (2, 3], (3, 4]]
Categories (4, object): [[0, 1] < (1, 2] < (2, 3] < (3, 4]]
34.2.1.7 pandas.merge
Examples
>>> A >>> B
lkey value rkey value
0 foo 1 0 foo 5
1 bar 2 1 bar 6
2 baz 3 2 qux 7
3 foo 4 3 bar 8
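The outer join these frames illustrate (a reconstructed sketch; the _x/_y suffixes are applied to the overlapping value column, and the value columns upcast to float where NaN appears):
>>> A.merge(B, left_on='lkey', right_on='rkey', how='outer')
  lkey  value_x rkey  value_y
0  foo      1.0  foo      5.0
1  foo      4.0  foo      5.0
2  bar      2.0  bar      6.0
3  bar      2.0  bar      8.0
4  baz      3.0  NaN      NaN
5  NaN      NaN  qux      7.0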
34.2.1.8 pandas.merge_ordered
Examples
>>> A >>> B
key lvalue group key rvalue
0 a 1 a 0 b 1
1 c 2 a 1 c 2
2 e 3 a 2 d 3
3 a 1 b
4 c 2 b
5 e 3 b
34.2.1.9 pandas.merge_asof
Examples
>>> left
a left_val
0 1 a
1 5 b
2 10 c
>>> right
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
>>> right
right_val
1 1
2 2
3 3
6 6
7 7
>>> trades
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
We only asof within 2ms between the quote time and the trade time.
We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time.
However, prior data will propagate forward.
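As a reconstructed sketch (quotes is the companion quotes frame from the full docstring, not shown in this excerpt), the 2ms version would be:
>>> pd.merge_asof(trades, quotes,
...               on='time',
...               by='ticker',
...               tolerance=pd.Timedelta('2ms'))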
34.2.1.10 pandas.concat
Notes
Examples
Clear the existing index and reset it in the result by setting the ignore_index option to True.
>>> pd.concat([s1, s2], ignore_index=True)
0 a
1 b
2 c
3 d
dtype: object
Add a hierarchical index at the outermost level of the data with the keys option.
>>> pd.concat([s1, s2], keys=['s1', 's2',])
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Label the index keys you create with the names option.
>>> pd.concat([s1, s2], keys=['s1', 's2'],
... names=['Series name', 'Row ID'])
Series name Row ID
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Combine DataFrame objects with overlapping columns and return everything. Columns outside the intersec-
tion will be filled with NaN values.
Combine DataFrame objects with overlapping columns and return only those that are shared by passing
inner to the join keyword argument.
Prevent the result from including duplicate index values with the verify_integrity option.
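A minimal sketch of the join and verify_integrity options (the frames are hypothetical):
>>> df1 = pd.DataFrame([['a', 1]], columns=['letter', 'number'])
>>> df3 = pd.DataFrame([['c', 3, 'cat']], columns=['letter', 'number', 'animal'])
>>> pd.concat([df1, df3], join='inner')
  letter  number
0      a       1
0      c       3
>>> pd.concat([df1, df1], verify_integrity=True)
Traceback (most recent call last):
    ...
ValueError: Indexes have overlapping values: [0]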
34.2.1.11 pandas.get_dummies
Examples
>>> pd.get_dummies(s)
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
>>> pd.get_dummies(s1)
a b
0 1 0
1 0 1
2 0 0
>>> pd.get_dummies(pd.Series(list('abcaa')))
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
34.2.1.12 pandas.factorize
34.2.1.13 pandas.unique
pandas.unique(values)
Hash table-based unique. Uniques are returned in order of appearance. This does NOT sort.
Significantly faster than numpy.unique. Includes NA values.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
pandas.Index.unique, pandas.Series.unique
Examples
>>> pd.unique(Series([pd.Timestamp('20160101'),
... pd.Timestamp('20160101')]))
array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.unique(list('baabc'))
array(['b', 'a', 'c'], dtype=object)
>>> pd.unique(Series(pd.Categorical(list('baabc'))))
[b, a, c]
Categories (3, object): [b, a, c]
>>> pd.unique(Series(pd.Categorical(list('baabc'),
... categories=list('abc'))))
[b, a, c]
Categories (3, object): [b, a, c]
>>> pd.unique(Series(pd.Categorical(list('baabc'),
... categories=list('abc'),
... ordered=True)))
[b, a, c]
Categories (3, object): [a < b < c]
34.2.1.14 pandas.wide_to_long
A character indicating the separation of the variable names in the wide format, to be
stripped from the names in the long format. For example, if your column names are
A-suffix1, A-suffix2, you can strip the hyphen by specifying sep='-'
New in version 0.20.0.
suffix : str, default '\d+'
A regular expression capturing the wanted suffixes. '\d+' captures numeric suffixes.
Suffixes with no numbers could be specified with the negated character class '\D+'.
You can also further disambiguate suffixes, for example, if your wide variables are of
the form Aone, Btwo,.., and you have an unrelated column Arating, you can ignore the
last one by specifying suffix='(!?one|two)'
New in version 0.20.0.
Returns DataFrame
A DataFrame that contains each stub name as a variable, with new index (i, j)
Notes
All extra variables are left untouched. This simply uses pandas.melt under the hood, but is hard-coded to do
the right thing in a typical case.
Examples
>>> df = pd.DataFrame({
... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
... 'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
... 'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
... })
>>> df
birth famid ht1 ht2
0 1 1 2.8 3.4
1 2 1 2.9 3.8
2 3 1 2.2 2.9
3 1 2 2.0 3.2
4 2 2 1.8 2.8
5 3 2 1.9 2.4
6 1 3 2.2 3.3
7 2 3 2.3 3.4
8 3 3 2.1 2.9
>>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age')
>>> l
ht
famid birth age
1 1 1 2.8
2 3.4
2 1 2.9
2 3.8
3 1 2.2
2 2.9
2 1 1 2.0
2 3.2
2 1 1.8
2 2.8
3 1 1.9
2 2.4
3 1 1 2.2
2 3.3
2 1 2.3
2 3.4
3 1 2.1
2 2.9
Going from long back to wide just takes some creative use of unstack
If we have many columns, we could also use a regex to find our stubnames and pass that list on to wide_to_long
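A sketch of that long-to-wide round trip, using the long frame l from above:
>>> w = l.unstack()
>>> w.columns = w.columns.map('{0[0]}{0[1]}'.format)
>>> w.reset_index()
   famid  birth  ht1  ht2
0      1      1  2.8  3.4
1      1      2  2.9  3.8
2      1      3  2.2  2.9
3      2      1  2.0  3.2
4      2      2  1.8  2.8
5      2      3  1.9  2.4
6      3      1  2.2  3.3
7      3      2  2.3  3.4
8      3      3  2.1  2.9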
34.2.2.1 pandas.isnull
pandas.isnull(obj)
Detect missing values (NaN in numeric arrays, None/NaN in object arrays)
Parameters arr : ndarray or object value
Object to check for null-ness
Returns isnulled : array-like of bool or bool
Array or bool indicating whether an object is null or if an array is given which of the
element is null.
See also:
34.2.2.2 pandas.notnull
pandas.notnull(obj)
Replacement for numpy.isfinite / -numpy.isnan which is suitable for use on object arrays.
Parameters arr : ndarray or object value
Object to check for not-null-ness
Returns isnulled : array-like of bool or bool
Array or bool indicating whether an object is not null or if an array is given which of
the element is not null.
See also:
34.2.3.1 pandas.to_numeric
Examples
34.2.4.1 pandas.to_datetime
Parameters arg : integer, float, string, datetime, list, tuple, 1-d array, Series
Examples
Assembling a datetime from multiple columns of a DataFrame. The keys can be common abbreviations like
['year', 'month', 'day', 'minute', 'second', 'ms', 'us', 'ns'] or plurals of the same
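For example (a minimal sketch):
>>> df = pd.DataFrame({'year': [2015, 2016],
...                    'month': [2, 3],
...                    'day': [4, 5]})
>>> pd.to_datetime(df)
0   2015-02-04
1   2016-03-05
dtype: datetime64[ns]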
If a date does not meet the timestamp limitations, passing errors='ignore' will return the original input instead
of raising any exception.
Passing errors='coerce' will force an out-of-bounds date to NaT, in addition to forcing non-dates (or non-
parseable dates) to NaT.
Passing infer_datetime_format=True can often speed up parsing if it's not an ISO8601 format exactly,
but in a regular format.
>>> s.head()
0 3/11/2000
1 3/12/2000
2 3/13/2000
3 3/11/2000
4 3/12/2000
dtype: object
Warning: For float arg, precision rounding might happen. To prevent unexpected behavior use a fixed-
width exact type.
34.2.4.2 pandas.to_timedelta
Examples
34.2.4.3 pandas.date_range
Notes
34.2.4.4 pandas.bdate_range
Notes
34.2.4.5 pandas.period_range
34.2.4.6 pandas.timedelta_range
Notes
34.2.4.7 pandas.infer_freq
pandas.infer_freq(index, warn=True)
Infer the most likely frequency given the input index. If the frequency is uncertain, a warning will be printed.
Parameters index : DatetimeIndex or TimedeltaIndex
if passed a Series will use the values of the series (NOT THE INDEX)
warn : boolean, default True
Returns freq : string or None
None if no discernible frequency. Raises TypeError if the index is not datetime-like,
and ValueError if there are fewer than three values.
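A minimal sketch (the index is hypothetical):
>>> idx = pd.date_range('2017-01-01', periods=5, freq='D')
>>> pd.infer_freq(idx)
'D'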
eval(expr[, parser, engine, truediv, ...]) Evaluate a Python expression as a string using various
backends.
34.2.5.1 pandas.eval
Notes
The dtype of any objects involved in an arithmetic % operation are recursively cast to float64.
See the enhancing performance documentation for more details.
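A minimal sketch (the frame is hypothetical):
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> pd.eval('df.a + df.b')
0    4
1    6
dtype: int64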
34.2.6 Testing
test([extra_args])
34.2.6.1 pandas.test
pandas.test(extra_args=None)
34.3 Series
34.3.1 Constructor
Series([data, index, dtype, name, copy, ...]) One-dimensional ndarray with axis labels (including time
series).
34.3.1.1 pandas.Series
Attributes
pandas.Series.T
Series.T
return the transpose, which is by definition self
pandas.Series.asobject
Series.asobject
return object Series which contains boxed values
this is an internal non-public method
pandas.Series.at
Series.at
Fast label-based scalar accessor
Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
pandas.Series.axes
Series.axes
Return a list of the row axis labels
pandas.Series.base
Series.base
return the base object if the memory of the underlying data is shared
pandas.Series.blocks
Series.blocks
Internal property, property synonym for as_blocks()
pandas.Series.data
Series.data
return the data pointer of the underlying data
pandas.Series.dtype
Series.dtype
return the dtype object of the underlying data
pandas.Series.dtypes
Series.dtypes
return the dtype object of the underlying data
pandas.Series.empty
Series.empty
pandas.Series.flags
Series.flags
pandas.Series.ftype
Series.ftype
return if the data is sparse|dense
pandas.Series.ftypes
Series.ftypes
return if the data is sparse|dense
pandas.Series.hasnans
Series.hasnans = None
pandas.Series.iat
Series.iat
Fast integer location scalar accessor.
Similarly to iloc, iat provides integer based lookups. You can also set using these indexers.
pandas.Series.iloc
Series.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position
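A minimal sketch (the data is hypothetical):
>>> s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
>>> s.iloc[0]
10
>>> s.iloc[[2, 0]]
c    30
a    10
dtype: int64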
pandas.Series.imag
Series.imag
pandas.Series.is_copy
Series.is_copy = None
pandas.Series.is_monotonic
Series.is_monotonic
Return boolean if values in the object are monotonic_increasing
New in version 0.19.0.
Returns is_monotonic : boolean
pandas.Series.is_monotonic_decreasing
Series.is_monotonic_decreasing
Return boolean if values in the object are monotonic_decreasing
New in version 0.19.0.
Returns is_monotonic_decreasing : boolean
pandas.Series.is_monotonic_increasing
Series.is_monotonic_increasing
Return boolean if values in the object are monotonic_increasing
New in version 0.19.0.
Returns is_monotonic : boolean
pandas.Series.is_unique
Series.is_unique
Return boolean if values in the object are unique
Returns is_unique : boolean
pandas.Series.itemsize
Series.itemsize
return the size of the dtype of the item of the underlying data
pandas.Series.ix
Series.ix
A primarily label-location based indexer, with integer position fallback.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also supports
floating point label schemes. .ix is exceptionally useful when dealing with mixed positional and label
based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported.
Thus, in such cases, it's usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.Series.loc
Series.loc
Purely label-location based indexer for selection by label.
.loc[] is primarily label based, but may also be used with a boolean array.
pandas.Series.name
Series.name
pandas.Series.nbytes
Series.nbytes
return the number of bytes in the underlying data
pandas.Series.ndim
Series.ndim
return the number of dimensions of the underlying data, by definition 1
pandas.Series.real
Series.real
pandas.Series.shape
Series.shape
return a tuple of the shape of the underlying data
pandas.Series.size
Series.size
return the number of elements in the underlying data
pandas.Series.strides
Series.strides
return the strides of the underlying data
pandas.Series.values
Series.values
Return Series as ndarray or ndarray-like depending on the dtype
Returns arr : numpy.ndarray or ndarray-like
Examples
>>> pd.Series(list('aabc')).values
array(['a', 'a', 'b', 'c'], dtype=object)
>>> pd.Series(list('aabc')).astype('category').values
[a, a, b, c]
Categories (3, object): [a, b, c]
Methods
pandas.Series.abs
Series.abs()
Return an object with absolute value taken; only applicable to objects that are all numeric.
Returns abs: type of caller
pandas.Series.add
See also:
Series.radd
pandas.Series.add_prefix
Series.add_prefix(prefix)
Concatenate prefix string with panel item names.
Parameters prefix : string
Returns with_prefix : type of caller
pandas.Series.add_suffix
Series.add_suffix(suffix)
Concatenate suffix string with panel item names.
Parameters suffix : string
Returns with_suffix : type of caller
pandas.Series.agg
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying
the function along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy
behavior (e.g., np.mean(arr_2d)).
agg is an alias for aggregate. Use it.
Examples
>>> s = Series(np.random.randn(10))
>>> s.agg('min')
-1.3018049988556679
pandas.Series.aggregate
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying
the function along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy
behavior (e.g., np.mean(arr_2d)).
agg is an alias for aggregate. Use it.
Examples
>>> s = Series(np.random.randn(10))
>>> s.agg('min')
-1.3018049988556679
pandas.Series.align
pandas.Series.all
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar
bool_only : boolean, default None
Include only boolean columns. If None, will attempt to use everything, then use only
boolean data. Not implemented for Series.
Returns all : scalar or Series (if level specified)
pandas.Series.any
pandas.Series.append
Examples
1 5
2 6
dtype: int64
>>> s1.append(s3)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
pandas.Series.apply
Examples
Define a custom function that needs additional positional arguments and pass these additional arguments
using the args keyword.
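For instance (a sketch reusing the series from the example below):
>>> def subtract_custom_value(x, custom_value):
...     return x - custom_value
>>> series.apply(subtract_custom_value, args=(5,))
London      15
New York    16
Helsinki     7
dtype: int64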
Define a custom function that takes keyword arguments and pass these arguments to apply.
>>> series.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
dtype: float64
pandas.Series.argmax
Notes
pandas.Series.argmin
Notes
pandas.Series.argsort
Choice of sorting algorithm. See np.sort for more information. 'mergesort' is the only
stable algorithm
order : ignored
Returns argsorted : Series, with -1 indicated where nan values are present
See also:
numpy.ndarray.argsort
pandas.Series.as_blocks
Series.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
pandas.Series.as_matrix
Series.as_matrix(columns=None)
Convert the frame to its Numpy-array representation.
Parameters columns: list, optional, default:None
If None, return all columns, otherwise, returns specified columns.
Returns values : ndarray
If the caller is heterogeneous and contains booleans or objects, the result will be of
dtype=object. See Notes.
See also:
pandas.DataFrame.values
Notes
pandas.Series.asfreq
Notes
To learn more about the frequency strings, please see this link.
Examples
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
pandas.Series.asof
Series.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of columns
in the case of a DataFrame)
New in version 0.19.0: For DataFrame
If there is no good value, NaN is returned for a Series, or a Series of NaN values for a DataFrame
Parameters where : date or array of dates
subset : string or list of strings, default None
if not None use these columns for NaN propagation
Returns where is scalar
value or NaN if input is Series
Series if input is DataFrame
where is Index: same shape object as input
See also:
merge_asof
Notes
pandas.Series.astype
pandas.Series.at_time
Series.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters time : datetime.time or string
Returns values_at_time : type of caller
pandas.Series.autocorr
Series.autocorr(lag=1)
Lag-N autocorrelation
Parameters lag : int, default 1
Number of lags to apply before performing autocorrelation.
Returns autocorr : float
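A minimal sketch (a perfectly linear series has lag-1 autocorrelation 1.0):
>>> s = pd.Series([1.0, 2.0, 3.0, 4.0])
>>> s.autocorr()
1.0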
pandas.Series.between
pandas.Series.between_time
pandas.Series.bfill
pandas.Series.bool
Series.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a ValueError if the PandasObject does not
have exactly 1 element, or that element is not boolean
pandas.Series.cat
Series.cat()
Accessor object for categorical properties of the Series values.
Be aware that assigning to categories is an inplace operation, while all methods return new categorical data
per default (but can be called with inplace=True).
Examples
>>> s.cat.categories
>>> s.cat.categories = list('abc')
>>> s.cat.rename_categories(list('cab'))
>>> s.cat.reorder_categories(list('cab'))
>>> s.cat.add_categories(['d','e'])
>>> s.cat.remove_categories(['d'])
>>> s.cat.remove_unused_categories()
>>> s.cat.set_categories(list('abcde'))
>>> s.cat.as_ordered()
>>> s.cat.as_unordered()
pandas.Series.clip
Examples
>>> df
0 1
0 0.335232 -1.256177
1 -1.367855 0.746646
2 0.027753 -1.176076
3 0.230930 -0.679613
4 1.261967 0.570967
>>> df.clip(-1.0, 0.5)
0 1
0 0.335232 -1.000000
1 -1.000000 0.500000
2 0.027753 -1.000000
3 0.230930 -0.679613
4 0.500000 0.500000
>>> t
0 -0.3
1 -0.2
2 -0.1
3 0.0
4 0.1
dtype: float64
>>> df.clip(t, t + 1, axis=0)
0 1
0 0.335232 -0.300000
1 -0.200000 0.746646
2 0.027753 -0.100000
3 0.230930 0.000000
4 1.100000 0.570967
pandas.Series.clip_lower
Series.clip_lower(threshold, axis=None)
Return copy of the input with values below given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.Series.clip_upper
Series.clip_upper(threshold, axis=None)
Return copy of input with values above given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.Series.combine
pandas.Series.combine_first
Series.combine_first(other)
Combine Series values, choosing the calling Series's values first. Result index will be the union of the two
indexes
Parameters other : Series
Returns y : Series
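A minimal sketch (the data is hypothetical):
>>> s1 = pd.Series([1.0, np.nan, 3.0])
>>> s2 = pd.Series([10.0, 20.0, 30.0, 40.0])
>>> s1.combine_first(s2)
0     1.0
1    20.0
2     3.0
3    40.0
dtype: float64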
pandas.Series.compound
pandas.Series.compress
pandas.Series.consolidate
Series.consolidate(inplace=False)
DEPRECATED: consolidate will be an internal implementation only.
pandas.Series.convert_objects
If True, convert to timedelta where possible. If coerce, force conversion, with uncon-
vertible values becoming NaT.
copy : boolean, default True
If True, return a copy even if no copy is necessary (e.g. no conversion was done). Note:
This is meant for internal use, and should not be confused with inplace.
Returns converted : same as input object
See also:
pandas.Series.copy
Series.copy(deep=True)
Make a copy of this object's data.
Parameters deep : boolean or string, default True
Make a deep copy, including a copy of the data and the indices. With deep=False
neither the indices nor the data are copied.
Note that when deep=True data is copied, actual python objects will not be copied
recursively, only the reference to the object. This is in contrast to copy.deepcopy in
the Standard Library, which recursively copies object data.
Returns copy : type of caller
pandas.Series.corr
pandas.Series.count
Series.count(level=None)
Return number of non-NA/null observations in the Series
pandas.Series.cov
Series.cov(other, min_periods=None)
Compute covariance with Series, excluding missing values
Parameters other : Series
min_periods : int, optional
Minimum number of observations needed to have a valid result
Returns covariance : float
Normalized by N-1 (unbiased estimator).
pandas.Series.cummax
pandas.Series.cummin
pandas.Series.cumprod
pandas.Series.cumsum
pandas.Series.describe
Notes
For numeric data, the result's index will include count, mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile
is the same as the median.
For object data (e.g. strings or timestamps), the result's index will include count, unique, top, and
freq. The top is the most common value. The freq is the most common value's frequency. Timestamps
also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen
from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If include='all' is provided as an option, the result will include a union of attributes of
each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s.describe()
count 3
unique 2
top 2010-01-01 00:00:00
freq 2
first 2000-01-01 00:00:00
last 2010-01-01 00:00:00
dtype: object
>>> df.describe(include='all')
numeric object
count 3.0 3
unique NaN 3
top NaN b
freq NaN 1
mean 2.0 NaN
std 1.0 NaN
min 1.0 NaN
25% 1.5 NaN
50% 2.0 NaN
75% 2.5 NaN
max 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.number])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.object])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
pandas.Series.diff
Series.diff(periods=1)
1st discrete difference of object
Parameters periods : int, default 1
Periods to shift for forming difference
Returns diffed : Series
pandas.Series.div
pandas.Series.divide
pandas.Series.dot
Series.dot(other)
Matrix multiplication with DataFrame or inner-product with Series objects
Parameters other : Series or DataFrame
Returns dot_product : scalar or Series
pandas.Series.drop
pandas.Series.drop_duplicates
Series.drop_duplicates(keep='first', inplace=False)
Return Series with duplicate values removed
Parameters keep : {'first', 'last', False}, default 'first'
first : Drop duplicates except for the first occurrence.
last : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
inplace : boolean, default False
If True, performs operation inplace and returns None.
Returns deduplicated : Series
pandas.Series.dropna
Do operation in place.
pandas.Series.dt
Series.dt()
Accessor object for datetimelike properties of the Series values.
Examples
>>> s.dt.hour
>>> s.dt.second
>>> s.dt.quarter
Returns a Series indexed like the original Series. Raises TypeError if the Series does not contain datetime-
like values.
pandas.Series.duplicated
Series.duplicated(keep='first')
Return boolean Series denoting duplicate values
Parameters keep : {'first', 'last', False}, default 'first'
first : Mark duplicates as True except for the first occurrence.
last : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
Returns duplicated : Series
pandas.Series.eq
pandas.Series.equals
Series.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
pandas.Series.ewm
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relationship
between the parameters are specified in the parameter descriptions above; see the link at the end of this
section for a detailed explanation.
The freq keyword is used to conform time series data to a specified frequency by resampling the data. This
is done with the default parameters of resample() (i.e. using the mean).
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-
alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0];
weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of
x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is
True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For
example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha
and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#
exponentially-weighted-windows
Examples
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
pandas.Series.expanding
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
The freq keyword is used to conform time series data to a specified frequency by resampling the data. This
is done with the default parameters of resample() (i.e. using the mean).
Examples
>>> df.expanding(2).sum()
B
0 NaN
1 1.0
2 3.0
3 3.0
4 7.0
pandas.Series.factorize
Series.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable
Parameters sort : boolean, default False
Sort by values
na_sentinel: int, default -1
Value to mark not found
Returns labels : the indexer to the original array
uniques : the unique Index
pandas.Series.ffill
pandas.Series.fillna
pandas.Series.filter
See also:
pandas.DataFrame.select
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
pandas.Series.first
Series.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Returns subset : type of caller
Examples
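A minimal sketch (the data is hypothetical):
>>> ts = pd.Series(range(4), index=pd.date_range('2017-01-01', periods=4, freq='D'))
>>> ts.first('2D')
2017-01-01    0
2017-01-02    1
Freq: D, dtype: int64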
pandas.Series.first_valid_index
Series.first_valid_index()
Return label for first non-NA/null value
pandas.Series.floordiv
pandas.Series.from_array
pandas.Series.from_csv
pandas.Series.ge
pandas.Series.get
Series.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters key : object
Returns value : type of items contained in object
pandas.Series.get_dtype_counts
Series.get_dtype_counts()
Return the counts of dtypes in this object.
pandas.Series.get_ftype_counts
Series.get_ftype_counts()
Return the counts of ftypes in this object.
pandas.Series.get_value
Series.get_value(label, takeable=False)
Quickly retrieve single value at passed index label
Parameters index : label
takeable : interpret the index as indexers, default False
Returns value : scalar value
pandas.Series.get_values
Series.get_values()
same as values (but handles sparseness conversions); is a view
pandas.Series.groupby
Sort group keys. Get better performance by turning this off. Note this does not influence
the order of observations within each group. groupby preserves the order of rows within
each group.
group_keys : boolean, default True
When calling apply, add group keys to index to identify pieces
squeeze : boolean, default False
reduce the dimensionality of the return type if possible, otherwise return a consistent
type
Returns GroupBy object
Examples
DataFrame results
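A minimal sketch of a DataFrame result (the data is hypothetical):
>>> df = pd.DataFrame({'A': ['x', 'x', 'y'], 'B': [1, 2, 3]})
>>> df.groupby('A').sum()
   B
A
x  3
y  3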
pandas.Series.gt
pandas.Series.head
Series.head(n=5)
Returns first n rows
pandas.Series.hist
Notes
pandas.Series.idxmax
Notes
pandas.Series.idxmin
Notes
pandas.Series.interpolate
'linear': ignore the index and treat the values as equally spaced. This is the only
method supported on MultiIndexes. Default.
'time': interpolation works on daily and higher resolution data to interpolate given
length of interval
'index', 'values': use the actual numerical values of the index
'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'polynomial' is passed
to scipy.interpolate.interp1d. Both 'polynomial' and 'spline' require
that you also specify an order (int), e.g. df.interpolate(method='polynomial',
order=4). These use the actual numerical values of the index.
'krogh', 'piecewise_polynomial', 'spline', 'pchip' and 'akima' are all wrappers
around the scipy interpolation methods of similar names. These use the actual
numerical values of the index. For more information on their behavior, see the scipy
documentation and tutorial documentation
'from_derivatives' refers to BPoly.from_derivatives which replaces
'piecewise_polynomial' interpolation method in scipy 0.18
New in version 0.18.1: Added support for the 'akima' method. Added interpolate method
'from_derivatives' which replaces 'piecewise_polynomial' in scipy 0.18; backwards-
compatible with scipy < 0.18
axis : {0, 1}, default 0
0: fill column-by-column
1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
limit_direction : {'forward', 'backward', 'both'}, default 'forward'
If limit is specified, consecutive NaNs will be filled in this direction.
New in version 0.17.0.
inplace : bool, default False
Update the NDFrame in place if possible.
downcast : optional, 'infer' or None, defaults to None
Downcast dtypes if possible.
kwargs : keyword arguments to pass on to the interpolating function.
Returns Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
pandas.Series.isin
Series.isin(values)
Return a boolean Series showing whether each element in the Series is exactly contained in the
passed sequence of values.
Parameters values : set or list-like
The sequence of values to test. Passing in a single string will raise a TypeError.
Instead, turn a single string into a list of one element.
New in version 0.18.1.
Support for values as a set
Examples
>>> s = pd.Series(list('abc'))
>>> s.isin(['a', 'c', 'e'])
0 True
1 False
2 True
dtype: bool
Passing a single string as s.isin('a') will raise an error. Use a list of one element instead:
>>> s.isin(['a'])
0 True
1 False
2 False
dtype: bool
pandas.Series.isnull
Series.isnull()
Return a boolean same-sized object indicating if the values are null.
See also:
pandas.Series.item
Series.item()
return the first element of the underlying data as a python scalar
pandas.Series.items
Series.items()
Lazily iterate over (index, value) tuples
pandas.Series.iteritems
Series.iteritems()
Lazily iterate over (index, value) tuples
pandas.Series.keys
Series.keys()
Alias for index
pandas.Series.kurt
pandas.Series.kurtosis
pandas.Series.last
Series.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Examples
pandas.Series.last_valid_index
Series.last_valid_index()
Return label for last non-NA/null value
pandas.Series.le
pandas.Series.lt
See also:
Series.None
pandas.Series.mad
pandas.Series.map
Series.map(arg, na_action=None)
Map values of Series using input correspondence (which can be a dict, Series, or function)
Parameters arg : function, dict, or Series
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the mapping function
Returns y : Series
same index as caller
See also:
Notes
When arg is a dictionary, values in Series that are not in the dictionary (as keys) are converted to NaN.
However, if the dictionary is a dict subclass that defines __missing__ (i.e. provides a method for
default values), then this default is used rather than NaN:
>>> from collections import Counter
>>> counter = Counter()
>>> counter['bar'] += 1
>>> y.map(counter)
1 0
2 1
3 0
dtype: int64
Examples
>>> x.map(y)
one foo
two bar
three baz
If arg is a dictionary, return a new Series with values converted according to the dictionary's mapping:
>>> x.map(z)
one A
two B
three C
Use na_action to control whether NA values are affected by the mapping function.
pandas.Series.mask
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
pandas.Series.max
pandas.Series.mean
pandas.Series.median
pandas.Series.memory_usage
Series.memory_usage(index=True, deep=False)
Memory usage of the Series
Parameters index : bool
Specifies whether to include memory usage of Series index
deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory consump-
tion
Returns scalar bytes of memory consumed
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False
pandas.Series.min
pandas.Series.mod
pandas.Series.mode
Series.mode()
Return the mode(s) of the dataset.
Always returns Series even if only one value is returned.
Returns modes : Series (sorted)
pandas.Series.mul
pandas.Series.multiply
pandas.Series.ne
pandas.Series.nlargest
Series.nlargest(n=5, keep='first')
Return the largest n elements.
Parameters n : int
Return this many descending sorted values
keep : {'first', 'last', False}, default 'first'
Where there are duplicate values:
first : take the first occurrence.
last : take the last occurrence.
Returns top_n : Series
The n largest values in the Series, in sorted order
See also:
Series.nsmallest
Notes
Examples
82124 4.608745
421689 4.564644
425277 4.447014
718691 4.414137
43154 4.403520
283187 4.313922
595519 4.273635
503969 4.250236
121637 4.240952
dtype: float64
pandas.Series.nonzero
Series.nonzero()
Return the indices of the elements that are non-zero
This method is equivalent to calling numpy.nonzero on the series data. For compatibility with NumPy,
the return value is the same (a tuple with an array of indices for each dimension), but it will always be a
one-item tuple because series only have one dimension.
See also:
numpy.nonzero
Examples
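A minimal sketch (the data is hypothetical):
>>> s = pd.Series([0, 3, 0, 4])
>>> s.nonzero()
(array([1, 3]),)
>>> s.iloc[s.nonzero()[0]]
1    3
3    4
dtype: int64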
pandas.Series.notnull
Series.notnull()
Return a boolean same-sized object indicating if the values are not null.
See also:
pandas.Series.nsmallest
Series.nsmallest(n=5, keep='first')
Return the smallest n elements.
Parameters n : int
Return this many ascending sorted values
keep : {'first', 'last', False}, default 'first'
Where there are duplicate values:
first : take the first occurrence.
last : take the last occurrence.
Returns bottom_n : Series
The n smallest values in the Series, in sorted order
See also:
Series.nlargest
Notes
Faster than .sort_values().head(n) for small n relative to the size of the Series object.
Examples
pandas.Series.nunique
Series.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don't include NaN in the count.
Returns nunique : int
pandas.Series.pct_change
Notes
By default, the percentage change is calculated along the stat axis: 0, or 'index', for DataFrame and 1,
or 'minor' for Panel. You can change this with the axis keyword argument.
pandas.Series.pipe
Notes
Use .pipe when chaining together functions that expect Series or DataFrames. Instead of writing
>>> f(g(h(df), arg1=a), arg2=b, arg3=c)
you can write
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.Series.plot
When using a secondary_y axis, automatically mark the column labels with (right) in
the legend
kwds : keywords
Options to pass to matplotlib plotting method
Returns axes : matplotlib.AxesSubplot or np.array of them
Notes
pandas.Series.pop
Series.pop(item)
Return item and drop from frame. Raise KeyError if not found.
pandas.Series.pow
pandas.Series.prod
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then
use only numeric data. Not implemented for Series.
Returns prod : scalar or Series (if level specified)
pandas.Series.product
pandas.Series.ptp
pandas.Series.put
Series.put(*args, **kwargs)
Applies the put method to its values attribute if it has one.
See also:
numpy.ndarray.put
pandas.Series.quantile
Series.quantile(q=0.5, interpolation='linear')
Return value at the given quantile, a la numpy.percentile.
Parameters q : float or array-like, default 0.5 (50% quantile)
0 <= q <= 1, the quantile(s) to compute
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
New in version 0.18.0.
This optional parameter specifies the interpolation method to use, when the desired
quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the fractional part of the index
surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
Returns quantile : float or Series
if q is an array, a Series will be returned where the index is q and the values are the
quantiles.
Examples
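A minimal sketch (the data is hypothetical):
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25    1.75
0.50    2.50
0.75    3.25
dtype: float64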
pandas.Series.radd
Equivalent to other + series, but with support to substitute a fill_value for missing data in one of
the inputs.
Parameters other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill missing (NaN) values with this value. If both Series are missing, the result will be
missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns result : Series
See also:
Series.add
pandas.Series.rank
pandas.Series.ravel
Series.ravel(order='C')
Return the flattened underlying data as an ndarray
See also:
numpy.ndarray.ravel
pandas.Series.rdiv
pandas.Series.reindex
Series.reindex(index=None, **kwargs)
Conform Series to new index with optional filling logic, placing NA/NaN in locations having no value in
the previous index. A new object is produced unless the new index is equivalent to the current one and
copy=False
Parameters index : array-like, optional (can be specified in order, or as keywords)
New labels / index to conform to. Preferably an Index object to avoid duplicating data
method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional
method to use for filling holes in reindexed DataFrame. Please note: this is only
applicable to DataFrames/Series with a monotonically increasing/decreasing index.
default: don't fill gaps
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap
nearest: use nearest valid observations to fill gap
copy : boolean, default True
Return a new object, even if the passed indexes are the same
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
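The example frame itself was lost in extraction; a sketch in the spirit of the original (the index labels here are illustrative):
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301]},
...                   index=['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror'])
>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10', 'Chrome']
>>> df.reindex(new_index, fill_value=0)
               http_status
Safari                 404
Iceweasel                0
Comodo Dragon            0
IE10                   404
Chrome                 200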
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100
2010-01-02 101
2010-01-03 NaN
2010-01-04 100
2010-01-05 89
2010-01-06 88
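Suppose we decide to expand the dataframe to cover a wider date range. (This step was lost in extraction; the code and output below are reconstructed from the bfill example that follows.)
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2)
prices
2009-12-29 NaN
2009-12-30 NaN
2009-12-31 NaN
2010-01-01 100
2010-01-02 101
2010-01-03 NaN
2010-01-04 100
2010-01-05 89
2010-01-06 88
2010-01-07 NaN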
The index entries that did not have a value in the original data frame (for example, 2009-12-29) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
>>> df2.reindex(date_index2, method='bfill')
prices
2009-12-29 100
2009-12-30 100
2009-12-31 100
2010-01-01 100
2010-01-02 101
2010-01-03 NaN
2010-01-04 100
2010-01-05 89
2010-01-06 88
2010-01-07 NaN
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
pandas.Series.reindex_axis
pandas.Series.reindex_like
Notes
pandas.Series.rename
Series.rename(index=None, **kwargs)
Alter axes input function or functions. Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don't throw an error. Alternatively, change Series.name with a scalar value (Series only).
Parameters index : scalar, list-like, dict-like or function, optional
Scalar or list-like will alter the Series.name attribute, and raise on DataFrame or Panel. dict-like or functions are transformations to apply to that axis' values
copy : boolean, default True
Examples
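The docstring's example block did not survive extraction; a brief reconstruction:
>>> s = pd.Series([1, 2, 3])
>>> s.rename("my_name")  # scalar, changes Series.name
0    1
1    2
2    3
Name: my_name, dtype: int64
>>> s.rename(lambda x: x ** 2)  # function, changes labels
0    1
1    2
4    3
dtype: int64
>>> s.rename({1: 3, 2: 5})  # mapping, changes labels
0    1
3    2
5    3
dtype: int64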
pandas.Series.rename_axis
Examples
pandas.Series.reorder_levels
Series.reorder_levels(order)
Rearrange index levels using input order. May not drop or duplicate levels
Parameters order : list of int representing new level order.
(reference level by number or key)
axis : where to reorder levels
Returns type of caller (new object)
pandas.Series.repeat
pandas.Series.replace
Notes
Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub
are the same.
Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
pandas.Series.resample
Notes
To learn more about the offset strings, please see this link.
Examples
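Start by creating a series with 9 one-minute timestamps (this setup was lost in extraction; reconstructed from the outputs below):
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)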
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one.
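The code for this example was lost in extraction; a reconstruction:
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00     3
2000-01-01 00:06:00    12
2000-01-01 00:09:00    21
Freq: 3T, dtype: int64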
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
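Again the code was lost in extraction; reconstructed:
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00     0
2000-01-01 00:03:00     6
2000-01-01 00:06:00    15
2000-01-01 00:09:00    15
Freq: 3T, dtype: int64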
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
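Pass a custom function via apply. (The function definition was lost in extraction; reconstructed from the output below.)
>>> def custom_resampler(array_like):
...     return np.sum(array_like) + 5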
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resam-
pling.
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
pandas.Series.reset_index
pandas.Series.reshape
Series.reshape(*args, **kwargs)
DEPRECATED: calling this method will raise an error in a future release. Please call .values.reshape(...) instead.
Return an ndarray with the values' shape; if the specified shape matches exactly the current shape, then return self (for compat).
See also:
numpy.ndarray.reshape
pandas.Series.rfloordiv
pandas.Series.rmod
pandas.Series.rmul
pandas.Series.rolling
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
The freq keyword is used to conform time series data to a specified frequency by resampling the data.
This is done with the default parameters of resample() (i.e. using the mean).
To learn more about the offsets & frequency strings, please see this link.
The recognized win_types are:
boxcar
triang
blackman
hamming
bartlett
parzen
bohman
blackmanharris
nuttall
barthann
kaiser (needs beta)
gaussian (needs std)
general_gaussian (needs power, width)
slepian (needs width).
Examples
Rolling sum with a window length of 2, using the triang window type.
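The frame construction and this example's code were lost in extraction; a reconstruction consistent with the outputs that follow:
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df.rolling(2, win_type='triang').sum()
     B
0  NaN
1  0.5
2  1.5
3  NaN
4  NaN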
Rolling sum with a window length of 2, min_periods defaults to the window length.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
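The next examples use a timestamp-indexed frame whose construction was also lost in extraction; a reconstruction:
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
...                   index=[pd.Timestamp('20130101 09:00:00'),
...                          pd.Timestamp('20130101 09:00:02'),
...                          pd.Timestamp('20130101 09:00:03'),
...                          pd.Timestamp('20130101 09:00:05'),
...                          pd.Timestamp('20130101 09:00:06')])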
>>> df
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Contrasting to an integer rolling window, this will roll a variable length window corresponding to the time
period. The default for min_periods is 1.
>>> df.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
pandas.Series.round
pandas.Series.rpow
pandas.Series.rsub
See also:
Series.sub
pandas.Series.rtruediv
pandas.Series.sample
random_state : int or numpy RandomState object, optional
Seed for the random number generator (if int), or numpy RandomState object.
axis : int or string, optional
Axis to sample. Accepts axis number or name. Default is stat axis for given data
type (0 for Series and DataFrames, 1 for Panels).
Returns A new object of same type as caller.
Examples
>>> s = pd.Series(np.random.randn(50))
>>> s.head()
0 -0.038497
1 1.820773
2 -0.972766
3 -1.598270
4 -1.095526
dtype: float64
>>> df = pd.DataFrame(np.random.randn(50, 4), columns=list('ABCD'))
>>> df.head()
A B C D
0 0.016443 -2.318952 -0.566372 -1.028078
1 -1.051921 0.438836 0.658280 -0.175797
2 -1.243569 -0.364626 -0.215065 0.057736
3 1.768216 0.404512 -0.385604 -1.457834
4 1.072446 -1.137172 0.314194 -0.046661
>>> s.sample(n=3)
27 -0.994689
55 -1.049016
67 -0.224565
dtype: float64
pandas.Series.searchsorted
Notes
Examples
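The setup for x was lost in extraction; reconstructed to be consistent with the first output:
>>> x = pd.Series([1, 2, 3])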
>>> x.searchsorted(4)
array([3])
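For the string lookups below, x is a sorted Series of strings (again reconstructed from the outputs):
>>> x = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk'])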
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
>>> x.searchsorted(['bread'])
array([1])
pandas.Series.select
Series.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Parameters crit : function
To be called on each index (label). Should return True or False
axis : int
Returns selection : type of caller
pandas.Series.sem
pandas.Series.set_axis
Series.set_axis(axis, labels)
Public version of axis assignment.
pandas.Series.set_value
pandas.Series.shift
Notes
If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you
would like to extend the index when shifting and preserve the original data.
pandas.Series.skew
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns skew : scalar or Series (if level specified)
pandas.Series.slice_shift
Series.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Number of periods to move, can be positive or negative
Returns shifted : same type as caller
Notes
While slice_shift is faster than shift, you may pay for it later during alignment.
pandas.Series.sort_index
pandas.Series.sort_values
pandas.Series.sortlevel
pandas.Series.squeeze
Series.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns scalar if 1-sized, else original object
pandas.Series.std
pandas.Series.str
Series.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular method. Patterned after Python's string methods, with some inspiration from R's stringr package.
Examples
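The docstring shows these calls without any setup; a hypothetical input for illustration:
>>> s = pd.Series(['a_b_c', 'c_d_e'])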
>>> s.str.split('_')
>>> s.str.replace('_', '')
pandas.Series.sub
pandas.Series.subtract
pandas.Series.sum
pandas.Series.swapaxes
pandas.Series.swaplevel
pandas.Series.tail
Series.tail(n=5)
Returns last n rows
pandas.Series.take
pandas.Series.to_clipboard
Notes
pandas.Series.to_csv
pandas.Series.to_dense
Series.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.Series.to_dict
Series.to_dict()
Convert Series to {label -> value} dict
Returns value_dict : dict
pandas.Series.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
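The workbook snippet was lost in extraction; the standard pattern, with hypothetical frames df1 and df2:
>>> writer = pd.ExcelWriter('output.xlsx')
>>> df1.to_excel(writer, 'Sheet1')
>>> df2.to_excel(writer, 'Sheet2')
>>> writer.save()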
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.Series.to_frame
Series.to_frame(name=None)
Convert Series to DataFrame
Parameters name : object, default None
The passed name should substitute for the series name (if it has one).
Returns data_frame : DataFrame
pandas.Series.to_hdf
pandas.Series.to_json
table : dict like {'schema': {schema}, 'data': {data}} describing the data, and the data component is like orient='records'.
Changed in version 0.20.0.
date_format : {None, 'epoch', 'iso'}
Type of date conversion. 'epoch' = epoch milliseconds, 'iso' = ISO8601. The default depends on the orient. For orient='table', the default is 'iso'. For all other orients, the default is 'epoch'.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : string, default 'ms' (milliseconds)
The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us', 'ns' for second, millisecond, microsecond, and nanosecond respectively.
default_handler : callable, default None
Handler to call if object cannot otherwise be converted to a suitable format for JSON. Should receive a single argument which is the object to convert and return a serialisable object.
lines : boolean, default False
If 'orient' is 'records' write out line delimited json format. Will throw ValueError if incorrect 'orient' since others are not list like.
New in version 0.19.0.
Returns : JSON string if path_or_buf is None, otherwise None (the JSON is written to the given path)
See also:
pd.read_json
Examples
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not pre-
served with this encoding.
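The frame construction was lost in extraction; reconstructed from the outputs below:
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
...                   index=['row 1', 'row 2'],
...                   columns=['col 1', 'col 2'])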
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
"primaryKey": "index",
"pandas_version": "0.20.0"},
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
pandas.Series.to_msgpack
pandas.Series.to_period
Series.to_period(freq=None, copy=True)
Convert Series from DatetimeIndex to PeriodIndex with desired frequency (inferred from index if not
passed)
Parameters freq : string, default None
Returns ts : Series with PeriodIndex
pandas.Series.to_pickle
Series.to_pickle(path, compression='infer')
Pickle (serialize) object to input file path.
Parameters path : string
File path
compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
a string representing the compression to use in the output file
New in version 0.20.0.
pandas.Series.to_sparse
Series.to_sparse(kind='block', fill_value=None)
Convert Series to SparseSeries
Parameters kind : {'block', 'integer'}
fill_value : float, defaults to NaN (missing)
Returns sp : SparseSeries
pandas.Series.to_sql
pandas.Series.to_string
pandas.Series.to_timestamp
pandas.Series.to_xarray
Series.to_xarray()
Return an xarray object from the pandas object.
Notes
Examples
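The frame construction did not survive extraction; a reconstruction consistent with the output below:
>>> df = pd.DataFrame({'A': [1, 1, 2],
...                    'B': ['foo', 'bar', 'foo'],
...                    'C': [4.0, 5.0, 6.0]})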
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
>>> df.set_index(['B', 'A']).to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4, 3, 2),
...              items=list('ABCD'),
...              major_axis=pd.date_range('20130101', periods=3),
...              minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
Coordinates:
* items (items) object 'A' 'B' 'C' 'D'
* major_axis (major_axis) datetime64[ns] 2013-01-01 2013-01-02 2013-01-03
* minor_axis (minor_axis) object 'first' 'second'
pandas.Series.tolist
Series.tolist()
Convert Series to a nested list
pandas.Series.transform
See also:
pandas.NDFrame.aggregate, pandas.NDFrame.apply
Examples
pandas.Series.transpose
Series.transpose(*args, **kwargs)
return the transpose, which is by definition self
pandas.Series.truediv
pandas.Series.truncate
pandas.Series.tshift
Notes
If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exist, a ValueError is thrown
pandas.Series.tz_convert
pandas.Series.tz_localize
pandas.Series.unique
Series.unique()
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash table-based unique.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
unique, Index.unique, Series.unique
pandas.Series.unstack
Series.unstack(level=-1, fill_value=None)
Unstack, a.k.a. pivot, Series with MultiIndex to produce DataFrame. The level involved will automatically
get sorted.
Parameters level : int, string, or list of these, default last level
Examples
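The series construction was lost in extraction; a reconstruction consistent with the outputs below:
>>> s = pd.Series([1, 2, 3, 4],
...               index=pd.MultiIndex.from_product([['one', 'two'],
...                                                 ['a', 'b']]))
>>> s
one  a    1
     b    2
two  a    3
     b    4
dtype: int64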
>>> s.unstack(level=-1)
a b
one 1 2
two 3 4
>>> s.unstack(level=0)
one two
a 1 3
b 2 4
pandas.Series.update
Series.update(other)
Modify Series in place using non-NA values from passed Series. Aligns on index
Parameters other : Series
pandas.Series.valid
Series.valid(inplace=False, **kwargs)
Return Series without null values.
pandas.Series.value_counts
pandas.Series.var
pandas.Series.view
Series.view(dtype=None)
Return a new view of the Series, reinterpreting the underlying values with the given dtype (analogous to numpy.ndarray.view).
pandas.Series.where
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
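For the DataFrame case the construction was lost in extraction; the frame and mask below are reconstructed so that the outputs that follow hold:
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> m = df % 3 == 0
>>> df.where(m, -df)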
A B
0 0 -1
1 -2 3
2 -4 -5
3 6 -7
4 -8 9
>>> df.where(m, -df) == np.where(m, df, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
>>> df.where(m, -df) == df.mask(~m, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
pandas.Series.xs
Notes
Examples
>>> df
A B C
a 4 5 2
b 4 0 9
c 9 7 3
>>> df.xs('a')
A 4
B 5
C 2
Name: a
>>> df.xs('C', axis=1)
a 2
b 9
c 3
Name: C
>>> df
A B C D
first second third
bar one 1 4 1 8 9
two 1 7 5 5 0
baz one 1 6 6 8 0
three 2 5 3 5 3
>>> df.xs(('baz', 'three'))
A B C D
third
2 5 3 5 3
>>> df.xs('one', level=1)
A B C D
first third
bar 1 4 1 8 9
baz 1 6 6 8 0
>>> df.xs(('baz', 2), level=[0, 'third'])
A B C D
second
three 5 3 5 3
34.3.2 Attributes
Axes
index: axis labels
34.3.3 Conversion
34.3.4 Indexing, iteration
Series.get(key[, default]) Get item from object for given key (DataFrame column,
Panel slice, etc.).
Series.at Fast label-based scalar accessor
Series.iat Fast integer location scalar accessor.
Series.loc Purely label-location based indexer for selection by label.
Series.iloc Purely integer-location based indexing for selection by po-
sition.
Series.__iter__() provide iteration over the values of the Series
Series.iteritems() Lazily iterate over (index, value) tuples
34.3.4.1 pandas.Series.__iter__
Series.__iter__()
Provide iteration over the values of the Series, boxing values if necessary.
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
Series.add(other[, level, fill_value, axis]) Addition of series and other, element-wise (binary operator
add).
Series.sub(other[, level, fill_value, axis]) Subtraction of series and other, element-wise (binary oper-
ator sub).
Series.mul(other[, level, fill_value, axis]) Multiplication of series and other, element-wise (binary op-
erator mul).
Series.div(other[, level, fill_value, axis]) Floating division of series and other, element-wise (binary
operator truediv).
Series.align(other[, join, axis, level, ...]) Align two objects on their axes with the specified join method for each axis Index.
Series.drop(labels[, axis, level, inplace, ...]) Return new object with labels in requested axis removed.
Series.drop_duplicates([keep, inplace]) Return Series with duplicate values removed
Series.duplicated([keep]) Return boolean Series denoting duplicate values
Series.equals(other) Determines if two NDFrame objects contain the same ele-
ments.
Series.first(offset) Convenience method for subsetting initial periods of time
series data based on a date offset.
Series.head([n]) Returns first n rows
Series.idxmax([axis, skipna]) Index of first occurrence of maximum of values.
Series.idxmin([axis, skipna]) Index of first occurrence of minimum of values.
Series.isin(values) Return a boolean Series showing whether each element
in the Series is exactly contained in the passed sequence
of values.
Series.last(offset) Convenience method for subsetting final periods of time
series data based on a date offset.
Series.reindex([index]) Conform Series to new index with optional filling logic,
placing NA/NaN in locations having no value in the previ-
ous index.
Series.reindex_like(other[, method, copy, ...]) Return an object with matching indices to myself.
Series.rename([index]) Alter axes input function or functions.
Series.rename_axis(mapper[, axis, copy, inplace]) Alter index and / or columns using input function or func-
tions.
Series.reset_index([level, drop, name, inplace]) Analogous to the pandas.DataFrame.
reset_index() function, see docstring there.
Series.sample([n, frac, replace, weights, ...]) Returns a random sample of items from an axis of object.
Series.select(crit[, axis]) Return data corresponding to axis labels matching criteria
Series.take(indices[, axis, convert, is_copy]) return Series corresponding to requested indices
Series.tail([n]) Returns last n rows
34.3.13 Datetimelike Properties
Series.dt can be used to access the values of the series as datetimelike and return several properties. These can be
accessed like Series.dt.<property>.
Datetime Properties
34.3.13.1 pandas.Series.dt.date
Series.dt.date
Returns numpy array of python datetime.date objects (namely, the date part of Timestamps without timezone
information).
34.3.13.2 pandas.Series.dt.time
Series.dt.time
Returns numpy array of datetime.time. The time part of the Timestamps.
34.3.13.3 pandas.Series.dt.year
Series.dt.year
The year of the datetime
34.3.13.4 pandas.Series.dt.month
Series.dt.month
The month as January=1, December=12
34.3.13.5 pandas.Series.dt.day
Series.dt.day
The days of the datetime
34.3.13.6 pandas.Series.dt.hour
Series.dt.hour
The hours of the datetime
34.3.13.7 pandas.Series.dt.minute
Series.dt.minute
The minutes of the datetime
34.3.13.8 pandas.Series.dt.second
Series.dt.second
The seconds of the datetime
34.3.13.9 pandas.Series.dt.microsecond
Series.dt.microsecond
The microseconds of the datetime
34.3.13.10 pandas.Series.dt.nanosecond
Series.dt.nanosecond
The nanoseconds of the datetime
34.3.13.11 pandas.Series.dt.week
Series.dt.week
The week ordinal of the year
34.3.13.12 pandas.Series.dt.weekofyear
Series.dt.weekofyear
The week ordinal of the year
34.3.13.13 pandas.Series.dt.dayofweek
Series.dt.dayofweek
The day of the week with Monday=0, Sunday=6
34.3.13.14 pandas.Series.dt.weekday
Series.dt.weekday
The day of the week with Monday=0, Sunday=6
34.3.13.15 pandas.Series.dt.weekday_name
Series.dt.weekday_name
The name of day in a week (ex: Friday)
New in version 0.18.1.
34.3.13.16 pandas.Series.dt.dayofyear
Series.dt.dayofyear
The ordinal day of the year
34.3.13.17 pandas.Series.dt.quarter
Series.dt.quarter
The quarter of the date
34.3.13.18 pandas.Series.dt.is_month_start
Series.dt.is_month_start
Logical indicating if first day of month (defined by frequency)
34.3.13.19 pandas.Series.dt.is_month_end
Series.dt.is_month_end
Logical indicating if last day of month (defined by frequency)
34.3.13.20 pandas.Series.dt.is_quarter_start
Series.dt.is_quarter_start
Logical indicating if first day of quarter (defined by frequency)
34.3.13.21 pandas.Series.dt.is_quarter_end
Series.dt.is_quarter_end
Logical indicating if last day of quarter (defined by frequency)
34.3.13.22 pandas.Series.dt.is_year_start
Series.dt.is_year_start
Logical indicating if first day of year (defined by frequency)
34.3.13.23 pandas.Series.dt.is_year_end
Series.dt.is_year_end
Logical indicating if last day of year (defined by frequency)
34.3.13.24 pandas.Series.dt.is_leap_year
Series.dt.is_leap_year
Logical indicating if the date belongs to a leap year
34.3.13.25 pandas.Series.dt.daysinmonth
Series.dt.daysinmonth
The number of days in the month
New in version 0.16.0.
34.3.13.26 pandas.Series.dt.days_in_month
Series.dt.days_in_month
The number of days in the month
New in version 0.16.0.
34.3.13.27 pandas.Series.dt.tz
Series.dt.tz
34.3.13.28 pandas.Series.dt.freq
Series.dt.freq
get/set the frequency of the Index
Datetime Methods
34.3.13.29 pandas.Series.dt.to_period
Series.dt.to_period(*args, **kwargs)
Cast to PeriodIndex at a particular frequency
34.3.13.30 pandas.Series.dt.to_pydatetime
Series.dt.to_pydatetime()
34.3.13.31 pandas.Series.dt.tz_localize
Series.dt.tz_localize(*args, **kwargs)
Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil), or remove timezone from tz-aware
DatetimeIndex
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to time zone of
the TimeSeries. None will remove timezone holding local time.
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
'infer' will attempt to infer fall dst-transition hours based on order
bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that this flag is only applicable for ambiguous times)
'NaT' will return NaT where there are ambiguous times
'raise' will raise an AmbiguousTimeError if there are ambiguous times
errors : 'raise', 'coerce', default 'raise'
'raise' will raise a NonExistentTimeError if a timestamp is not valid in the specified timezone (e.g. due to a transition from or to DST time)
'coerce' will return NaT if the timestamp can not be converted into the specified timezone
New in version 0.19.0.
infer_dst : boolean, default False (DEPRECATED)
Attempt to infer fall dst-transition hours based on order
34.3.13.32 pandas.Series.dt.tz_convert
Series.dt.tz_convert(*args, **kwargs)
Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to time zone of
the TimeSeries. None will remove timezone holding UTC time.
Returns normalized : DatetimeIndex
Raises TypeError
If DatetimeIndex is tz-naive.
34.3.13.33 pandas.Series.dt.normalize
Series.dt.normalize(*args, **kwargs)
Return DatetimeIndex with times to midnight. Length is unaltered
Returns normalized : DatetimeIndex
34.3.13.34 pandas.Series.dt.strftime
Series.dt.strftime(*args, **kwargs)
Return an array of formatted strings specified by date_format, which supports the same string format as the
python standard library. Details of the string format can be found in python string format doc
New in version 0.17.0.
Parameters date_format : str
date format string (e.g. %Y-%m-%d)
Returns ndarray of formatted strings
34.3.13.35 pandas.Series.dt.round
Series.dt.round(*args, **kwargs)
round the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
34.3.13.36 pandas.Series.dt.floor
Series.dt.floor(*args, **kwargs)
floor the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
34.3.13.37 pandas.Series.dt.ceil
Series.dt.ceil(*args, **kwargs)
ceil the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
Timedelta Properties
34.3.13.38 pandas.Series.dt.days
Series.dt.days
Number of days for each element.
34.3.13.39 pandas.Series.dt.seconds
Series.dt.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
34.3.13.40 pandas.Series.dt.microseconds
Series.dt.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
34.3.13.41 pandas.Series.dt.nanoseconds
Series.dt.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
34.3.13.42 pandas.Series.dt.components
Series.dt.components
Return a dataframe of the components (days, hours, minutes, seconds, milliseconds, microseconds, nanosec-
onds) of the Timedeltas.
Returns a DataFrame
Timedelta Methods
Series.dt.to_pytimedelta()
Series.dt.total_seconds(*args, **kwargs) Total duration of each element expressed in seconds.
34.3.13.43 pandas.Series.dt.to_pytimedelta
Series.dt.to_pytimedelta()
34.3.13.44 pandas.Series.dt.total_seconds
Series.dt.total_seconds(*args, **kwargs)
Total duration of each element expressed in seconds.
New in version 0.17.0.
34.3.14 String handling
Series.str can be used to access the values of the series as strings and apply several methods to it. These can be
accessed like Series.str.<function/property>.
34.3.14.1 pandas.Series.str.capitalize
Series.str.capitalize()
Convert strings in the Series/Index to be capitalized. Equivalent to str.capitalize().
Returns converted : Series/Index of objects
34.3.14.2 pandas.Series.str.cat
Examples
When na_rep is None (default behavior), NaN value(s) in the Series are ignored.
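A reconstruction of the lost snippet:
>>> pd.Series(['a', 'b', np.nan, 'c']).str.cat(sep=' ')
'a b c'
>>> pd.Series(['a', 'b', np.nan, 'c']).str.cat(sep=' ', na_rep='?')
'a b ? c'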
If others is specified, corresponding values are concatenated with the separator. Result will be a Series of
strings.
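Likewise reconstructed:
>>> pd.Series(['a', 'b', 'c']).str.cat(['A', 'B', 'C'], sep=',')
0    a,A
1    b,B
2    c,C
dtype: object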
34.3.14.3 pandas.Series.str.center
Series.str.center(width, fillchar=' ')
Filling left and right side of strings in the Series/Index with an additional character. Equivalent to str.
center().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns filled : Series/Index of objects
34.3.14.4 pandas.Series.str.contains
34.3.14.5 pandas.Series.str.count
34.3.14.6 pandas.Series.str.decode
Series.str.decode(encoding, errors='strict')
Decode character string in the Series/Index using indicated encoding. Equivalent to str.decode() in
python2 and bytes.decode() in python3.
Parameters encoding : str
errors : str, optional
Returns decoded : Series/Index of objects
34.3.14.7 pandas.Series.str.encode
Series.str.encode(encoding, errors='strict')
Encode character string in the Series/Index using indicated encoding. Equivalent to str.encode().
Parameters encoding : str
errors : str, optional
Returns encoded : Series/Index of objects
34.3.14.8 pandas.Series.str.endswith
Series.str.endswith(pat, na=nan)
Return boolean Series indicating whether each string in the Series/Index ends with passed pattern. Equivalent
to str.endswith().
Parameters pat : string
Character sequence
na : bool, default NaN
Returns endswith : Series/array of boolean values
34.3.14.9 pandas.Series.str.extract
Examples
A pattern with two groups will return a DataFrame with two columns. Non-matches will be NaN.
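The input series was lost in extraction; reconstructed from the outputs below:
>>> s = pd.Series(['a1', 'b2', 'c3'])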
>>> s.str.extract('([ab])?(\d)')
0 1
0 a 1
1 b 2
2 NaN 3
>>> s.str.extract('(?P<letter>[ab])(?P<digit>\d)')
letter digit
0 a 1
1 b 2
2 NaN NaN
A pattern with one group will return a DataFrame with one column if expand=True.
34.3.14.10 pandas.Series.str.extractall
Series.str.extractall(pat, flags=0)
For each subject string in the Series, extract groups from all matches of regular expression pat. When each subject string in the Series has exactly one match, extractall(pat).xs(0, level='match') is the same as extract(pat).
New in version 0.18.0.
Parameters pat : string
Regular expression pattern with capturing groups
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Returns A DataFrame with one row for each match, and one column for each
group. Its rows have a MultiIndex with first levels that come from
the subject Series. The last level is named match and indicates
the order in the subject. Any capture group names in regular
expression pat will be used for column names; otherwise capture
group numbers will be used.
See also:
pandas.Series.str.extract
Examples
A pattern with one group will return a DataFrame with one column. Indices with no matches will not appear in
the result.
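The series construction and the first (unnamed-group) example were lost in extraction; a reconstruction:
>>> s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])
>>> s.str.extractall("[ab](\d)")
         0
  match
A 0      1
  1      2
B 0      1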
Capture group names are used for column names of the result.
>>> s.str.extractall("[ab](?P<digit>\d)")
digit
match
A 0 1
1 2
B 0 1
A pattern with two groups will return a DataFrame with two columns.
>>> s.str.extractall("(?P<letter>[ab])(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
>>> s.str.extractall("(?P<letter>[ab])?(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 NaN 1
34.3.14.11 pandas.Series.str.find
34.3.14.12 pandas.Series.str.findall
34.3.14.13 pandas.Series.str.get
Series.str.get(i)
Extract element from lists, tuples, or strings in each element in the Series/Index.
Parameters i : int
Integer index (location)
Returns items : Series/Index of objects
34.3.14.14 pandas.Series.str.index
34.3.14.15 pandas.Series.str.join
Series.str.join(sep)
Join lists contained as elements in the Series/Index with passed delimiter. Equivalent to str.join().
Parameters sep : string
Delimiter
Returns joined : Series/Index of objects
34.3.14.16 pandas.Series.str.len
Series.str.len()
Compute length of each string in the Series/Index.
Returns lengths : Series/Index of integer values
34.3.14.17 pandas.Series.str.ljust
Series.str.ljust(width, fillchar=' ')
Filling right side of strings in the Series/Index with an additional character. Equivalent to str.ljust().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns filled : Series/Index of objects
34.3.14.18 pandas.Series.str.lower
Series.str.lower()
Convert strings in the Series/Index to lowercase. Equivalent to str.lower().
Returns converted : Series/Index of objects
34.3.14.19 pandas.Series.str.lstrip
Series.str.lstrip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from left side. Equivalent to str.
lstrip().
Returns stripped : Series/Index of objects
34.3.14.20 pandas.Series.str.match
34.3.14.21 pandas.Series.str.normalize
Series.str.normalize(form)
Return the Unicode normal form for the strings in the Series/Index. For more information on the forms, see unicodedata.normalize().
Parameters form : {'NFC', 'NFKC', 'NFD', 'NFKD'}
Unicode form
Returns normalized : Series/Index of objects
34.3.14.22 pandas.Series.str.pad
34.3.14.23 pandas.Series.str.partition
Series.str.partition(pat=' ', expand=True)
Split the string at the first occurrence of sep, and return 3 elements containing the part before the separator, the
separator itself, and the part after the separator. If the separator is not found, return 3 elements containing the
string itself, followed by two empty strings.
Parameters pat : string, default whitespace
String to split on.
expand : bool, default True
If True, return DataFrame/MultiIndex expanding dimensionality.
If False, return Series/Index.
Examples
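The input series was lost in extraction; reconstructed from the outputs below:
>>> s = pd.Series(['A_B_C', 'D_E_F', 'X'])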
>>> s.str.partition('_')
0 1 2
0 A _ B_C
1 D _ E_F
2 X
>>> s.str.rpartition('_')
0 1 2
0 A_B _ C
1 D_E _ F
2 X
34.3.14.24 pandas.Series.str.repeat
Series.str.repeat(repeats)
Duplicate each string in the Series/Index by indicated number of times.
Parameters repeats : int or array
Same value for all (int) or different value per (array)
Returns repeated : Series/Index of objects
34.3.14.25 pandas.Series.str.replace
Parameters pat : string
Character sequence or regular expression
repl : string or callable
Replacement string or a callable. The callable is passed the regex match object and must return a replacement string to be used. See re.sub().
New in version 0.20.0: repl also accepts a callable.
n : int, default -1 (all)
Number of replacements to make from start
case : boolean, default None
If True, case sensitive (the default if pat is a string)
Set to False for case insensitive
Cannot be set if pat is a compiled regex
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Cannot be set if pat is a compiled regex
Returns replaced : Series/Index of objects
Notes
When pat is a compiled regex, all flags should be included in the compiled regex. Use of case or flags with a
compiled regex will raise an error.
Examples
When repl is a string, every pat is replaced as with str.replace(). NaN value(s) in the Series are left as is.
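A reconstruction of the lost snippet:
>>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', 'b')
0    boo
1    buz
2    NaN
dtype: object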
When repl is a callable, it is called on every pat using re.sub(). The callable should expect one positional
argument (a regex object) and return a string.
To get the idea:
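The original snippet was lost in extraction; a sketch using a simple upper-casing callable:
>>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', lambda m: m.group(0).upper())
0    Foo
1    Fuz
2    NaN
dtype: object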
34.3.14.26 pandas.Series.str.rfind
34.3.14.27 pandas.Series.str.rindex
34.3.14.28 pandas.Series.str.rjust
Series.str.rjust(width, fillchar=' ')
Filling left side of strings in the Series/Index with an additional character. Equivalent to str.rjust().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns filled : Series/Index of objects
34.3.14.29 pandas.Series.str.rpartition
Series.str.rpartition(pat=' ', expand=True)
Split the string at the last occurrence of sep, and return 3 elements containing the part before the separator, the
separator itself, and the part after the separator. If the separator is not found, return 3 elements containing two
empty strings, followed by the string itself.
Parameters pat : string, default whitespace
String to split on.
expand : bool, default True
If True, return DataFrame/MultiIndex expanding dimensionality.
If False, return Series/Index.
Returns split : DataFrame/MultiIndex or Series/Index of objects
See also:
Examples
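As with partition, the input series was lost in extraction; reconstructed from the outputs below:
>>> s = pd.Series(['A_B_C', 'D_E_F', 'X'])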
>>> s.str.partition('_')
0 1 2
0 A _ B_C
1 D _ E_F
2 X
>>> s.str.rpartition('_')
0 1 2
0 A_B _ C
1 D_E _ F
2 X
34.3.14.30 pandas.Series.str.rstrip
Series.str.rstrip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from right side. Equivalent to str.
rstrip().
Returns stripped : Series/Index of objects
34.3.14.31 pandas.Series.str.slice
34.3.14.32 pandas.Series.str.slice_replace
34.3.14.33 pandas.Series.str.split
34.3.14.34 pandas.Series.str.rsplit
34.3.14.35 pandas.Series.str.startswith
Series.str.startswith(pat, na=nan)
Return boolean Series/array indicating whether each string in the Series/Index starts with passed pattern.
Equivalent to str.startswith().
Parameters pat : string
Character sequence
na : bool, default NaN
Returns startswith : Series/array of boolean values
34.3.14.36 pandas.Series.str.strip
Series.str.strip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from left and right sides. Equivalent
to str.strip().
Returns stripped : Series/Index of objects
34.3.14.37 pandas.Series.str.swapcase
Series.str.swapcase()
Convert strings in the Series/Index to be swapcased. Equivalent to str.swapcase().
Returns converted : Series/Index of objects
34.3.14.38 pandas.Series.str.title
Series.str.title()
Convert strings in the Series/Index to titlecase. Equivalent to str.title().
Returns converted : Series/Index of objects
34.3.14.39 pandas.Series.str.translate
Series.str.translate(table, deletechars=None)
Map all characters in the string through the given mapping table. Equivalent to standard str.translate().
Note that the optional argument deletechars is only valid if you are using python 2. For python 3, character
deletion should be specified via the table argument.
Parameters table : dict (python 3), str or None (python 2)
In python 3, table is a mapping of Unicode ordinals to Unicode ordinals, strings,
or None. Unmapped characters are left untouched. Characters mapped to None are
deleted. str.maketrans() is a helper function for making translation tables.
In python 2, table is either a string of length 256 or None. If the table argument is
None, no translation is applied and the operation simply removes the characters in
deletechars. string.maketrans() is a helper function for making translation
tables.
deletechars : str, optional (python 2)
A string of characters to delete. This argument is only valid in python 2.
Returns translated : Series/Index of objects
34.3.14.40 pandas.Series.str.upper
Series.str.upper()
Convert strings in the Series/Index to uppercase. Equivalent to str.upper().
Returns converted : Series/Index of objects
34.3.14.41 pandas.Series.str.wrap
Series.str.wrap(width, **kwargs)
Wrap long strings in the Series/Index to be formatted in paragraphs with length less than a given width.
This method has the same keyword parameters and defaults as textwrap.TextWrapper.
Parameters width : int
Maximum line-width
expand_tabs : bool, optional
If true, tab characters will be expanded to spaces (default: True)
Notes
Internally, this method uses a textwrap.TextWrapper instance with default settings. To achieve behavior matching R's stringr library str_wrap function, use the arguments:
expand_tabs = False
replace_whitespace = True
drop_whitespace = True
break_long_words = False
break_on_hyphens = False
Examples
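A reconstruction of the lost example:
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0             line to be\nwrapped
1    another line\nto be wrapped
dtype: object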
34.3.14.42 pandas.Series.str.zfill
Series.str.zfill(width)
Filling left side of strings in the Series/Index with 0. Equivalent to str.zfill().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with 0
Returns filled : Series/Index of objects
34.3.14.43 pandas.Series.str.isalnum
Series.str.isalnum()
Check whether all characters in each string in the Series/Index are alphanumeric. Equivalent to str.
isalnum().
Returns is : Series/array of boolean values
34.3.14.44 pandas.Series.str.isalpha
Series.str.isalpha()
Check whether all characters in each string in the Series/Index are alphabetic. Equivalent to str.isalpha().
Returns is : Series/array of boolean values
34.3.14.45 pandas.Series.str.isdigit
Series.str.isdigit()
Check whether all characters in each string in the Series/Index are digits. Equivalent to str.isdigit().
Returns is : Series/array of boolean values
34.3.14.46 pandas.Series.str.isspace
Series.str.isspace()
Check whether all characters in each string in the Series/Index are whitespace. Equivalent to str.
isspace().
Returns is : Series/array of boolean values
34.3.14.47 pandas.Series.str.islower
Series.str.islower()
Check whether all characters in each string in the Series/Index are lowercase. Equivalent to str.islower().
Returns is : Series/array of boolean values
34.3.14.48 pandas.Series.str.isupper
Series.str.isupper()
Check whether all characters in each string in the Series/Index are uppercase. Equivalent to str.isupper().
Returns is : Series/array of boolean values
34.3.14.49 pandas.Series.str.istitle
Series.str.istitle()
Check whether all characters in each string in the Series/Index are titlecase. Equivalent to str.istitle().
Returns is : Series/array of boolean values
34.3.14.50 pandas.Series.str.isnumeric
Series.str.isnumeric()
Check whether all characters in each string in the Series/Index are numeric. Equivalent to str.
isnumeric().
Returns is : Series/array of boolean values
34.3.14.51 pandas.Series.str.isdecimal
Series.str.isdecimal()
Check whether all characters in each string in the Series/Index are decimal. Equivalent to str.isdecimal().
Returns is : Series/array of boolean values
34.3.14.52 pandas.Series.str.get_dummies
Series.str.get_dummies(sep='|')
Split each string in the Series by sep and return a frame of dummy/indicator variables.
Parameters sep : string, default '|'
String to split on.
Returns dummies : DataFrame
See also:
pandas.get_dummies
Examples
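A reconstruction of the lost example:
>>> pd.Series(['a|b', 'a', 'a|c']).str.get_dummies()
   a  b  c
0  1  1  0
1  1  0  0
2  1  0  1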
34.3.15 Categorical
If the Series is of dtype category, Series.cat can be used to change the categorical data. This accessor is similar to the Series.dt or Series.str and has the following usable methods and properties:
34.3.15.1 pandas.Series.cat.categories
Series.cat.categories
The categories of this categorical.
Setting assigns new values to each category (effectively a rename of each individual category).
The assigned value has to be a list-like object. All items must be unique and the number of items in the new
categories must be the same as the number of items in the old categories.
Assigning to categories is an inplace operation!
Raises ValueError
If the new categories do not validate as categories or if the number of new categories does not equal the number of old categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.3.15.2 pandas.Series.cat.ordered
Series.cat.ordered
Gets the ordered attribute
34.3.15.3 pandas.Series.cat.codes
Series.cat.codes
34.3.15.4 pandas.Series.cat.rename_categories
Series.cat.rename_categories(*args, **kwargs)
Renames categories.
The new categories have to be a list-like object. All items must be unique and the number of items in the new categories must be the same as the number of items in the old categories.
Parameters new_categories : Index-like
The renamed categories.
inplace : boolean (default: False)
Whether or not to rename the categories inplace or return a copy of this categorical
with renamed categories.
Returns cat : Categorical with renamed categories added or None if inplace.
Raises ValueError
If the new categories do not have the same number of items as the current categories or do not validate as categories
See also:
reorder_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.3.15.5 pandas.Series.cat.reorder_categories
Series.cat.reorder_categories(*args, **kwargs)
Reorders categories as specified in new_categories.
new_categories need to include all old categories and no new category items.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, optional
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns cat : Categorical with reordered categories or None if inplace.
Raises ValueError
If the new categories do not contain all old category items or any new ones
See also:
rename_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.3.15.6 pandas.Series.cat.add_categories
Series.cat.add_categories(*args, **kwargs)
Add new categories.
new_categories will be included at the last/highest place in the categories and will be unused directly after this
call.
Parameters new_categories : category or list-like of category
The new categories to be included.
inplace : boolean (default: False)
Whether or not to add the categories inplace or return a copy of this categorical with
added categories.
34.3.15.7 pandas.Series.cat.remove_categories
Series.cat.remove_categories(*args, **kwargs)
Removes the specified categories.
removals must be included in the old categories. Values which were in the removed categories will be set to
NaN
Parameters removals : category or list of categories
The categories which should be removed.
inplace : boolean (default: False)
Whether or not to remove the categories inplace or return a copy of this categorical
with removed categories.
Returns cat : Categorical with removed categories or None if inplace.
Raises ValueError
If the removals are not contained in the categories
See also:
rename_categories, reorder_categories, add_categories,
remove_unused_categories, set_categories
34.3.15.8 pandas.Series.cat.remove_unused_categories
Series.cat.remove_unused_categories(*args, **kwargs)
Removes categories which are not used.
Parameters inplace : boolean (default: False)
Whether or not to drop unused categories inplace or return a copy of this categorical
with unused categories dropped.
Returns cat : Categorical with unused categories dropped or None if inplace.
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
set_categories
34.3.15.9 pandas.Series.cat.set_categories
Series.cat.set_categories(*args, **kwargs)
Sets the categories to the specified new_categories.
new_categories can include new categories (which will result in unused categories) or remove old categories
(which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more items than in old categories will result in values set to NaN or in unused categories respectively).
This method can be used to perform more than one action of adding, removing, and reordering simultaneously
and is therefore faster than performing the individual steps via the more specialised methods.
On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which do not consider a S1 string equal to a single char python string.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, (default: False)
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
rename : boolean (default: False)
Whether or not the new_categories should be considered as a rename of the old
categories or as reordered categories.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns cat : Categorical with reordered categories or None if inplace.
Raises ValueError
If new_categories does not validate as categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories
34.3.15.10 pandas.Series.cat.as_ordered
Series.cat.as_ordered(*args, **kwargs)
Sets the Categorical to be ordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to True
34.3.15.11 pandas.Series.cat.as_unordered
Series.cat.as_unordered(*args, **kwargs)
Sets the Categorical to be unordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to False
Categorical(values[, categories, ordered, ...]) Represents a categorical variable in classic R / S-plus fash-
ion
34.3.15.12 pandas.Categorical
Examples
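The construction of a was lost in extraction; a reconstruction consistent with the result below:
>>> a = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'],
...                    categories=['c', 'b', 'a'],
...                    ordered=True)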
>>> a.min()
'c'
Categorical.from_codes(codes, categories[, ...]) Make a Categorical type from codes and categories arrays.
34.3.15.13 pandas.Categorical.from_codes
34.3.15.14 pandas.Categorical.__array__
Categorical.__array__(dtype=None)
The numpy array interface.
Returns values : numpy array
A numpy array of either the specified dtype or, if dtype==None (default), the same
dtype as categorical.categories.dtype
34.3.16 Plotting
Series.plot is both a callable method and a namespace attribute for specific plotting methods of the form
Series.plot.<kind>.
34.3.16.1 pandas.Series.plot.area
Series.plot.area(**kwds)
Area plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.2 pandas.Series.plot.bar
Series.plot.bar(**kwds)
Vertical bar plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.3 pandas.Series.plot.barh
Series.plot.barh(**kwds)
Horizontal bar plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.4 pandas.Series.plot.box
Series.plot.box(**kwds)
Boxplot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.5 pandas.Series.plot.density
Series.plot.density(**kwds)
Kernel Density Estimate plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.6 pandas.Series.plot.hist
Series.plot.hist(bins=10, **kwds)
Histogram
New in version 0.17.0.
Parameters bins: integer, default 10
Number of histogram bins to be used
**kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.7 pandas.Series.plot.kde
Series.plot.kde(**kwds)
Kernel Density Estimate plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.8 pandas.Series.plot.line
Series.plot.line(**kwds)
Line plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.3.16.9 pandas.Series.plot.pie
Series.plot.pie(**kwds)
Pie chart
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.Series.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
Series.hist([by, ax, grid, xlabelsize, ...]) Draw histogram of the input series using matplotlib
34.3.17 Serialization / IO / Conversion
Series.from_csv(path[, sep, parse_dates, ...]) Read CSV file (DISCOURAGED, please use pandas.
read_csv() instead).
Series.to_pickle(path[, compression]) Pickle (serialize) object to input file path.
Series.to_csv([path, index, sep, na_rep, ...]) Write Series to a comma-separated values (csv) file
Series.to_dict() Convert Series to {label -> value} dict
Series.to_excel(excel_writer[, sheet_name, ...]) Write Series to an excel sheet
Series.to_frame([name]) Convert Series to DataFrame
Series.to_xarray() Return an xarray object from the pandas object.
Series.to_hdf(path_or_buf, key, **kwargs) Write the contained data to an HDF5 file using HDFStore.
Series.to_sql(name, con[, flavor, schema, ...]) Write records stored in a DataFrame to a SQL database.
Series.to_msgpack([path_or_buf, encoding]) msgpack (serialize) object to input file path
Series.to_json([path_or_buf, orient, ...]) Convert the object to a JSON string.
Series.to_sparse([kind, fill_value]) Convert Series to SparseSeries
Series.to_dense() Return dense representation of NDFrame (as opposed to
sparse)
Series.to_string([buf, na_rep, ...]) Render a string representation of the Series
Series.to_clipboard([excel, sep]) Attempt to write text representation of object to the system
clipboard This can be pasted into Excel, for example.
34.3.18 Sparse
34.3.18.1 pandas.SparseSeries.to_coo
Examples
34.3.18.2 pandas.SparseSeries.from_coo
Examples
34.4 DataFrame
34.4.1 Constructor
34.4.1.1 pandas.DataFrame
Examples
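The docstring's example block did not survive extraction; a minimal sketch:
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> pd.DataFrame(data=d)
   col1  col2
0     1     3
1     2     4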
Attributes
pandas.DataFrame.T
DataFrame.T
Transpose index and columns
pandas.DataFrame.at
DataFrame.at
Fast label-based scalar accessor
Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
pandas.DataFrame.axes
DataFrame.axes
Return a list with the row axis labels and column axis labels as the only members. They are returned in
that order.
pandas.DataFrame.blocks
DataFrame.blocks
Internal property, property synonym for as_blocks()
pandas.DataFrame.dtypes
DataFrame.dtypes
Return the dtypes in this object.
pandas.DataFrame.empty
DataFrame.empty
True if NDFrame is entirely empty [no items], meaning any of the axes are of length 0.
See also:
pandas.Series.dropna, pandas.DataFrame.dropna
Notes
If NDFrame contains only NaNs, it is still not considered empty. See the example below.
Examples
If we only have NaNs in our DataFrame, it is not considered empty! We will need to drop the NaNs to
make the DataFrame empty:
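A reconstruction of the lost example:
>>> df = pd.DataFrame({'A': [np.nan]})
>>> df.empty
False
>>> df.dropna().empty
True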
pandas.DataFrame.ftypes
DataFrame.ftypes
Return the ftypes (indication of sparse/dense and dtype) in this object.
pandas.DataFrame.iat
DataFrame.iat
Fast integer location scalar accessor.
Similarly to iloc, iat provides integer based lookups. You can also set using these indexers.
pandas.DataFrame.iloc
DataFrame.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which
allow out-of-bounds indexing (this conforms with python/numpy slice semantics).
pandas.DataFrame.is_copy
DataFrame.is_copy = None
pandas.DataFrame.ix
DataFrame.ix
A primarily label-location based indexer, with integer position fallback.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also
supports floating point label schemes. .ix is exceptionally useful when dealing with mixed positional and label based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported. Thus, in such cases, it's usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.DataFrame.loc
DataFrame.loc
Purely label-location based indexer for selection by label.
.loc[] is primarily label based, but may also be used with a boolean array.
Allowed inputs are:
A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an
integer position along the index).
A list or array of labels, e.g. ['a', 'b', 'c'].
A slice object with labels, e.g. 'a':'f' (note that contrary to usual python slices, both the start
and the stop are included!).
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.loc will raise a KeyError when the items are not found.
See more at Selection by Label
pandas.DataFrame.ndim
DataFrame.ndim
Number of axes / array dimensions
pandas.DataFrame.shape
DataFrame.shape
Return a tuple representing the dimensionality of the DataFrame.
pandas.DataFrame.size
DataFrame.size
number of elements in the NDFrame
pandas.DataFrame.style
DataFrame.style
Property returning a Styler object containing methods for building a styled HTML representation of the DataFrame.
See also:
pandas.io.formats.style.Styler
pandas.DataFrame.values
DataFrame.values
Numpy representation of NDFrame
Notes
The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes
(even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if
you are not dealing with the blocks.
e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8, dtype will be upcast to int32. By numpy.find_common_type convention, mixing int64 and uint64 will result in a float64 dtype.
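A concrete sketch of the upcasting (dtypes chosen for illustration):
>>> df = pd.DataFrame({'a': np.array([1, 2], dtype='int32'),
...                    'b': np.array([3, 4], dtype='uint8')})   # illustrative data
>>> df.values.dtype   # int32 accommodates both int32 and uint8
dtype('int32')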
Methods
pandas.DataFrame.abs
DataFrame.abs()
Return an object with absolute value taken; only applicable to objects that are all numeric.
Returns abs: type of caller
pandas.DataFrame.add
Notes
pandas.DataFrame.add_prefix
DataFrame.add_prefix(prefix)
Concatenate prefix string with panel item names.
Parameters prefix : string
Returns with_prefix : type of caller
pandas.DataFrame.add_suffix
DataFrame.add_suffix(suffix)
Concatenate suffix string with panel item names.
Parameters suffix : string
Returns with_suffix : type of caller
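A small sketch for both methods (frame contents assumed):
>>> df = pd.DataFrame({'A': [1, 2]})   # illustrative data
>>> df.add_prefix('col_')
   col_A
0      1
1      2
>>> df.add_suffix('_right')
   A_right
0        1
1        2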
pandas.DataFrame.agg
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying
the function along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy
behavior (e.g., np.mean(arr_2d)).
agg is an alias for aggregate. Use the alias.
Examples
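For instance, aggregating with a list of function names (frame contents assumed):
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})   # illustrative data
>>> df.agg(['sum', 'min'])
     A   B
sum  6  15
min  1   4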
pandas.DataFrame.aggregate
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying
the function along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy
behavior (e.g., np.mean(arr_2d)).
Examples
pandas.DataFrame.align
Broadcast values along this axis, if aligning two objects of different dimensions
New in version 0.17.0.
Returns (left, right) : (DataFrame, type of other)
Aligned objects
pandas.DataFrame.all
pandas.DataFrame.any
pandas.DataFrame.append
Notes
If a list of dict/series is passed and the keys are all contained in the DataFrames index, the order of the
columns in the resulting DataFrame will be unchanged.
Examples
pandas.DataFrame.apply
Notes
In the current implementation apply calls func twice on the first column/row to decide whether it can take
a fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take
effect twice for the first column/row.
Examples
pandas.DataFrame.applymap
DataFrame.applymap(func)
Apply a function to a DataFrame that is intended to operate elementwise, i.e. like doing map(func, series)
for each series in the DataFrame
Parameters func : function
Python function, returns a single value from a single value
Returns applied : DataFrame
See also:
Examples
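For instance, formatting every element as a string (frame contents assumed):
>>> df = pd.DataFrame([[1.23, 4.56], [7.89, 0.12]])   # illustrative data
>>> df.applymap(lambda x: '%.1f' % x)
     0    1
0  1.2  4.6
1  7.9  0.1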
pandas.DataFrame.as_blocks
DataFrame.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
pandas.DataFrame.as_matrix
DataFrame.as_matrix(columns=None)
Convert the frame to its Numpy-array representation.
Parameters columns : list, optional, default: None
If None, return all columns, otherwise, returns specified columns.
Returns values : ndarray
If the caller is heterogeneous and contains booleans or objects, the result will be
of dtype=object. See Notes.
See also:
pandas.DataFrame.values
Notes
pandas.DataFrame.asfreq
Notes
To learn more about the frequency strings, please see this link.
Examples
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
pandas.DataFrame.asof
DataFrame.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of
columns in the case of a DataFrame)
New in version 0.19.0: For DataFrame
If there is no good value, NaN is returned for a Series, or a Series of NaN values for a DataFrame
Parameters where : date or array of dates
subset : string or list of strings, default None
If not None, use these columns for NaN propagation
Returns where is scalar:
value or NaN if input is Series
Series if input is DataFrame
where is Index: same shape object as input
See also:
merge_asof
Notes
pandas.DataFrame.assign
DataFrame.assign(**kwargs)
Assign new columns to a DataFrame, returning a new object (a copy) with all the original columns in
addition to the new ones.
New in version 0.16.0.
Parameters kwargs : keyword, value pairs
keywords are the column names. If the values are callable, they are computed on
the DataFrame and assigned to the new columns. The callable must not change
input DataFrame (though pandas doesn't check it). If the values are not callable,
(e.g. a Series, scalar, or array), they are simply assigned.
Returns df : DataFrame
A new DataFrame with the new columns in addition to all the existing columns.
Notes
Since kwargs is a dictionary, the order of your arguments may not be preserved. To make things predictable, the columns are inserted in alphabetical order, at the end of your DataFrame. Assigning multiple columns within the same assign is possible, but you cannot reference other columns created within the same assign call.
Examples
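For instance, deriving a column with a callable (frame contents assumed):
>>> df = pd.DataFrame({'temp_c': [17.0, 25.0]})   # illustrative data
>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
   temp_c  temp_f
0    17.0    62.6
1    25.0    77.0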
pandas.DataFrame.astype
pandas.DataFrame.at_time
DataFrame.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters time : datetime.time or string
Returns values_at_time : type of caller
pandas.DataFrame.between_time
pandas.DataFrame.bfill
pandas.DataFrame.bool
DataFrame.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a ValueError if the PandasObject does not
have exactly 1 element, or that element is not boolean
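A minimal sketch:
>>> pd.DataFrame({'a': [True]}).bool()   # illustrative single-element frame
True
>>> pd.Series([False]).bool()
False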
pandas.DataFrame.boxplot
Notes
Use return_type='dict' when you want to tweak the appearance of the lines after plotting. In this
case a dict containing the Lines making up the boxes, caps, fliers, medians, and whiskers is returned.
pandas.DataFrame.clip
Examples
>>> df
0 1
0 0.335232 -1.256177
1 -1.367855 0.746646
2 0.027753 -1.176076
3 0.230930 -0.679613
4 1.261967 0.570967
>>> df.clip(-1.0, 0.5)
0 1
0 0.335232 -1.000000
1 -1.000000 0.500000
2 0.027753 -1.000000
3 0.230930 -0.679613
4 0.500000 0.500000
>>> t
0 -0.3
1 -0.2
2 -0.1
3 0.0
4 0.1
dtype: float64
>>> df.clip(t, t + 1, axis=0)
0 1
0 0.335232 -0.300000
1 -0.200000 0.746646
2 0.027753 -0.100000
3 0.230930 0.000000
4 1.100000 0.570967
pandas.DataFrame.clip_lower
DataFrame.clip_lower(threshold, axis=None)
Return copy of the input with values below given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.DataFrame.clip_upper
DataFrame.clip_upper(threshold, axis=None)
Return copy of input with values above given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.DataFrame.combine
pandas.DataFrame.combine_first
DataFrame.combine_first(other)
Combine two DataFrame objects and default to non-null values in the frame calling the method. The resulting index and columns will be the union of the respective indexes and columns
Parameters other : DataFrame
Returns combined : DataFrame
Examples
pandas.DataFrame.compound
pandas.DataFrame.consolidate
DataFrame.consolidate(inplace=False)
DEPRECATED: consolidate will be an internal implementation only.
pandas.DataFrame.convert_objects
pandas.DataFrame.copy
DataFrame.copy(deep=True)
Make a copy of this object's data.
Parameters deep : boolean or string, default True
Make a deep copy, including a copy of the data and the indices. With deep=False neither the indices nor the data are copied.
Note that when deep=True data is copied, actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively copies object data.
Returns copy : type of caller
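A sketch showing that a deep copy does not share data (frame contents assumed):
>>> df = pd.DataFrame({'a': [1, 2]})   # illustrative data
>>> deep = df.copy()       # deep=True: the data is copied
>>> deep.iloc[0, 0] = 99
>>> df.iloc[0, 0]          # the original is unchanged
1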
pandas.DataFrame.corr
DataFrame.corr(method='pearson', min_periods=1)
Compute pairwise correlation of columns, excluding NA/null values
Parameters method : {'pearson', 'kendall', 'spearman'}
pearson : standard correlation coefficient
kendall : Kendall Tau correlation coefficient
spearman : Spearman rank correlation
min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result. Currently only available for pearson and spearman correlation
Returns y : DataFrame
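For instance, two perfectly correlated columns (contents assumed):
>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 4, 6]})   # illustrative data: b = 2*a
>>> df.corr()
     a    b
a  1.0  1.0
b  1.0  1.0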
pandas.DataFrame.corrwith
pandas.DataFrame.count
pandas.DataFrame.cov
DataFrame.cov(min_periods=None)
Compute pairwise covariance of columns, excluding NA/null values
Parameters min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid
result.
Returns y : DataFrame
Notes
y contains the covariance matrix of the DataFrame's time series. The covariance is normalized by N-1
(unbiased estimator).
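A small sketch (contents assumed):
>>> df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [1.0, 2.0, 4.0]})   # illustrative data
>>> df.cov()
     a         b
a  1.0  1.500000
b  1.5  2.333333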
pandas.DataFrame.cummax
pandas.DataFrame.cummin
pandas.DataFrame.cumprod
pandas.DataFrame.cumsum
pandas.DataFrame.describe
A list-like of dtypes : Excludes the provided data types from the result. To
select numeric types submit numpy.number. To select categorical objects
submit the data type numpy.object. Strings can also be used in the style of
select_dtypes (e.g. df.describe(include=['O']))
None (default) : The result will exclude nothing.
Returns summary: Series/DataFrame of summary statistics
See also:
DataFrame.count, DataFrame.max, DataFrame.min, DataFrame.mean, DataFrame.
std, DataFrame.select_dtypes
Notes
For numeric data, the result's index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median.
For object data (e.g. strings or timestamps), the result's index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value's frequency. Timestamps also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If include='all' is provided as an option, the result will include a union of attributes of
each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.number])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.object])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
pandas.DataFrame.diff
DataFrame.diff(periods=1, axis=0)
1st discrete difference of object
Parameters periods : int, default 1
Periods to shift for forming difference
axis : {0 or 'index', 1 or 'columns'}, default 0
pandas.DataFrame.div
Notes
pandas.DataFrame.divide
See also:
DataFrame.rtruediv
Notes
pandas.DataFrame.dot
DataFrame.dot(other)
Matrix multiplication with DataFrame or Series objects
Parameters other : DataFrame or Series
Returns dot_product : DataFrame or Series
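For instance, a matrix-vector product (values assumed):
>>> df = pd.DataFrame([[1, 2], [3, 4]])   # illustrative data
>>> s = pd.Series([1, 1])
>>> df.dot(s)
0    3
1    7
dtype: int64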
pandas.DataFrame.drop
pandas.DataFrame.drop_duplicates
pandas.DataFrame.dropna
Examples
1 1
2 5
Drop the rows where all of the elements are nan (there is no row to drop, so df stays the same):
>>> df.dropna(thresh=2)
A B C D
0 NaN 2.0 NaN 0
1 3.0 4.0 NaN 1
pandas.DataFrame.duplicated
DataFrame.duplicated(subset=None, keep='first')
Return boolean Series denoting duplicate rows, optionally only considering certain columns
Parameters subset : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by default use all of the columns
keep : {'first', 'last', False}, default 'first'
first : Mark duplicates as True except for the first occurrence.
last : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
Returns duplicated : Series
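A minimal sketch (values assumed):
>>> df = pd.DataFrame({'a': [1, 1, 2]})   # illustrative data
>>> df.duplicated()
0    False
1     True
2    False
dtype: bool
>>> df.duplicated(keep='last')
0     True
1    False
2    False
dtype: bool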
pandas.DataFrame.eq
pandas.DataFrame.equals
DataFrame.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
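For instance (values assumed; note the matching NaNs):
>>> df1 = pd.DataFrame({'a': [1.0, np.nan]})   # illustrative data
>>> df2 = pd.DataFrame({'a': [1.0, np.nan]})
>>> df1.equals(df2)   # NaNs in the same location compare equal
True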
pandas.DataFrame.eval
Notes
For more details see the API documentation for eval(). For detailed examples see enhancing perfor-
mance with eval.
Examples
pandas.DataFrame.ewm
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relation-
ship between the parameters are specified in the parameter descriptions above; see the link at the end of
this section for a detailed explanation.
The freq keyword is used to conform time series data to a specified frequency by resampling the data.
This is done with the default parameters of resample() (i.e. using the mean).
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0]; weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of
x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is
True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For
example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha
and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#
exponentially-weighted-windows
Examples
3 NaN
4 4.0
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
pandas.DataFrame.expanding
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
The freq keyword is used to conform time series data to a specified frequency by resampling the data.
This is done with the default parameters of resample() (i.e. using the mean).
Examples
>>> df.expanding(2).sum()
B
0 NaN
1 1.0
2 3.0
3 3.0
4 7.0
pandas.DataFrame.ffill
pandas.DataFrame.fillna
pandas.DataFrame.filter
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
pandas.DataFrame.first
DataFrame.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Returns subset : type of caller
Examples
pandas.DataFrame.first_valid_index
DataFrame.first_valid_index()
Return label for first non-NA/null value
pandas.DataFrame.floordiv
Notes
pandas.DataFrame.from_csv
pandas.DataFrame.from_dict
pandas.DataFrame.from_items
pandas.DataFrame.from_records
pandas.DataFrame.ge
pandas.DataFrame.get
DataFrame.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters key : object
Returns value : type of items contained in object
pandas.DataFrame.get_dtype_counts
DataFrame.get_dtype_counts()
Return the counts of dtypes in this object.
pandas.DataFrame.get_ftype_counts
DataFrame.get_ftype_counts()
Return the counts of ftypes in this object.
pandas.DataFrame.get_value
pandas.DataFrame.get_values
DataFrame.get_values()
Same as values (but handles sparseness conversions).
pandas.DataFrame.groupby
Used to determine the groups for the groupby. If by is a function, it's called on each value of the object's index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series values are first aligned; see .align() method). If an ndarray is passed, the values are used as-is to determine the groups. A str or list of strs may be passed to group by the columns in self
axis : int, default 0
level : int, level name, or sequence of such, default None
If the axis is a MultiIndex (hierarchical), group by a particular level or levels
as_index : boolean, default True
For aggregated output, return object with group labels as the index. Only relevant
for DataFrame input. as_index=False is effectively SQL-style grouped output
sort : boolean, default True
Sort group keys. Get better performance by turning this off. Note this does not
influence the order of observations within each group. groupby preserves the
order of rows within each group.
group_keys : boolean, default True
When calling apply, add group keys to index to identify pieces
squeeze : boolean, default False
reduce the dimensionality of the return type if possible, otherwise return a con-
sistent type
Returns GroupBy object
Examples
DataFrame results
pandas.DataFrame.gt
pandas.DataFrame.head
DataFrame.head(n=5)
Returns first n rows
pandas.DataFrame.hist
pandas.DataFrame.idxmax
DataFrame.idxmax(axis=0, skipna=True)
Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
Parameters axis : {0 or 'index', 1 or 'columns'}, default 0
0 or 'index' for row-wise, 1 or 'columns' for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns idxmax : Series
See also:
Series.idxmax
Notes
pandas.DataFrame.idxmin
DataFrame.idxmin(axis=0, skipna=True)
Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
Parameters axis : {0 or 'index', 1 or 'columns'}, default 0
0 or 'index' for row-wise, 1 or 'columns' for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
Returns idxmin : Series
See also:
Series.idxmin
Notes
pandas.DataFrame.info
pandas.DataFrame.insert
pandas.DataFrame.interpolate
'linear': ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. Default.
'time': interpolation works on daily and higher resolution data to interpolate given length of interval
'index', 'values': use the actual numerical values of the index
New in version 0.18.1: Added support for the 'akima' method; added interpolate method 'from_derivatives' which replaces 'piecewise_polynomial' in scipy 0.18; backwards-compatible with scipy < 0.18
axis : {0, 1}, default 0
0: fill column-by-column
1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
limit_direction : {'forward', 'backward', 'both'}, default 'forward'
If limit is specified, consecutive NaNs will be filled in this direction.
New in version 0.17.0.
inplace : bool, default False
Update the NDFrame in place if possible.
downcast : optional, infer or None, defaults to None
Downcast dtypes if possible.
kwargs : keyword arguments to pass on to the interpolating function.
Returns Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
pandas.DataFrame.isin
DataFrame.isin(values)
Return boolean DataFrame showing whether each element in the DataFrame is contained in values.
Parameters values : iterable, Series, DataFrame or dictionary
The result will only be true at a location if all the labels match. If values is a Series, that's the index. If values is a dictionary, the keys must be the column names, which must match. If values is a DataFrame, then both the index and column labels must match.
Returns DataFrame of booleans
Examples
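For instance, with an iterable of values (frame contents assumed):
>>> df = pd.DataFrame({'A': [1, 2], 'B': ['a', 'b']})   # illustrative data
>>> df.isin([1, 'a'])
       A      B
0   True   True
1  False  False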
pandas.DataFrame.isnull
DataFrame.isnull()
Return a boolean same-sized object indicating if the values are null.
See also:
pandas.DataFrame.items
DataFrame.items()
Iterator over (column name, Series) pairs.
See also:
pandas.DataFrame.iteritems
DataFrame.iteritems()
Iterator over (column name, Series) pairs.
See also:
pandas.DataFrame.iterrows
DataFrame.iterrows()
Iterate over DataFrame rows as (index, Series) pairs.
Returns it : generator
A generator that iterates over the rows of the frame.
See also:
Notes
1. Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames); for an example, see the sketch after these notes.
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally faster than iterrows.
2. You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.
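A sketch of the dtype point in note 1 (frame contents assumed):
>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])   # illustrative data
>>> row = next(df.iterrows())[1]
>>> row['int'].dtype    # the int value was upcast within the row
dtype('float64')
>>> df['int'].dtype     # the column itself keeps its dtype
dtype('int64')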
pandas.DataFrame.itertuples
DataFrame.itertuples(index=True, name='Pandas')
Iterate over DataFrame rows as namedtuples, with index value as first element of the tuple.
Parameters index : boolean, default True
If True, return the index as the first element of the tuple.
name : string, default 'Pandas'
The name of the returned namedtuples or None to return regular tuples.
See also:
Notes
The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or
start with an underscore. With a large number of columns (>255), regular tuples are returned.
Examples
pandas.DataFrame.join
Column(s) in the caller to join on the index in other, otherwise joins index-on-index. If multiple columns are given, the passed DataFrame must have a MultiIndex. Can pass an array as the join key if it is not already contained in the calling DataFrame. Like an Excel VLOOKUP operation
how : {'left', 'right', 'outer', 'inner'}, default: 'left'
How to handle the operation of the two objects.
left: use calling frame's index (or column if on is specified)
right: use other frame's index
outer: form union of calling frame's index (or column if on is specified) with other frame's index, and sort it lexicographically
inner: form intersection of calling frame's index (or column if on is specified) with other frame's index, preserving the order of the calling frame's
lsuffix : string
Suffix to use from left frame's overlapping columns
rsuffix : string
Suffix to use from right frame's overlapping columns
sort : boolean, default False
Order result DataFrame lexicographically by the join key. If False, the order of the join key depends on the join type (how keyword)
Returns joined : DataFrame
See also:
Notes
on, lsuffix, and rsuffix options are not supported when passing a list of DataFrame objects
Examples
>>> caller
A key
0 A0 K0
1 A1 K1
2 A2 K2
3 A3 K3
4 A4 K4
5 A5 K5
>>> other
B key
0 B0 K0
1 B1 K1
2 B2 K2
If we want to join using the key columns, we need to set key to be the index in both caller and other. The
joined DataFrame will have key as its index.
>>> caller.set_index('key').join(other.set_index('key'))
A B
key
K0 A0 B0
K1 A1 B1
K2 A2 B2
K3 A3 NaN
K4 A4 NaN
K5 A5 NaN
Another option to join using the key columns is to use the on parameter. DataFrame.join always uses
others index but we can use any column in the caller. This method preserves the original callers index in
the result.
>>> caller.join(other.set_index('key'), on='key')
A key B
0 A0 K0 B0
1 A1 K1 B1
2 A2 K2 B2
3 A3 K3 NaN
4 A4 K4 NaN
5 A5 K5 NaN
pandas.DataFrame.keys
DataFrame.keys()
Get the info axis (see Indexing for more)
This is index for Series, columns for DataFrame and major_axis for Panel.
pandas.DataFrame.kurt
pandas.DataFrame.kurtosis
pandas.DataFrame.last
DataFrame.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Returns subset : type of caller
Examples
pandas.DataFrame.last_valid_index
DataFrame.last_valid_index()
Return label for last non-NA/null value
pandas.DataFrame.le
pandas.DataFrame.lookup
DataFrame.lookup(row_labels, col_labels)
Label-based fancy indexing function for DataFrame. Given equal-length arrays of row and column
labels, return an array of the values corresponding to each (row, col) pair.
Parameters row_labels : sequence
The row labels to use for lookup
col_labels : sequence
The column labels to use for lookup
Notes
Akin to:
result = []
for row, col in zip(row_labels, col_labels):
    result.append(df.get_value(row, col))
Examples
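For instance (labels and values assumed):
>>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['x', 'y'])   # illustrative data
>>> df.lookup(['x', 'y'], ['B', 'A'])
array([3, 2])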
pandas.DataFrame.lt
pandas.DataFrame.mad
pandas.DataFrame.mask
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other
is used.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
pandas.DataFrame.max
pandas.DataFrame.mean
pandas.DataFrame.median
pandas.DataFrame.melt
This function is useful to massage a DataFrame into a format where one or more columns are identifier
variables (id_vars), while all other columns, considered measured variables (value_vars), are unpivoted
to the row axis, leaving just two non-identifier columns, variable and value.
New in version 0.20.0.
Parameters frame : DataFrame
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the 'variable' column. If None it uses frame.columns.name or 'variable'.
value_name : scalar, default value
Name to use for the value column.
col_level : int or string, optional
If columns are a MultiIndex then use this level to melt.
See also:
melt, pivot_table, DataFrame.pivot
Examples
pandas.DataFrame.memory_usage
DataFrame.memory_usage(index=True, deep=False)
Memory usage of DataFrame columns.
Parameters index : bool
Specifies whether to include memory usage of the DataFrame's index in the returned Series. If index=True (the default), the first index of the Series is 'Index'.
deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns sizes : Series
A series with column names as index and memory usage of columns with units
of bytes.
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if deep=False.
pandas.DataFrame.merge
Examples
>>> A >>> B
lkey value rkey value
0 foo 1 0 foo 5
1 bar 2 1 bar 6
2 baz 3 2 qux 7
3 foo 4 3 bar 8
pandas.DataFrame.min
pandas.DataFrame.mod
Notes
pandas.DataFrame.mode
DataFrame.mode(axis=0, numeric_only=False)
Gets the mode(s) of each element along the axis selected. Adds a row for each mode per label, fills in gaps with NaN.
Note that there could be multiple values returned for the selected axis (when more than one item share
the maximum frequency), which is the reason why a dataframe is returned. If you want to impute missing
values with the mode in a dataframe df, you can just do this: df.fillna(df.mode().iloc[0])
Parameters axis : {0 or 'index', 1 or 'columns'}, default 0
0 or 'index' : get mode of each column
1 or 'columns' : get mode of each row
numeric_only : boolean, default False
if True, only apply to numeric columns
Returns modes : DataFrame (sorted)
Examples
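For instance, including the fillna idiom from the note above (values assumed):
>>> df = pd.DataFrame({'a': [1.0, 2.0, 2.0, np.nan]})   # illustrative data
>>> df.mode()
     a
0  2.0
>>> df.fillna(df.mode().iloc[0])
     a
0  1.0
1  2.0
2  2.0
3  2.0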
pandas.DataFrame.mul
Notes
pandas.DataFrame.multiply
Notes
pandas.DataFrame.ne
pandas.DataFrame.nlargest
Examples
pandas.DataFrame.notnull
DataFrame.notnull()
Return a boolean same-sized object indicating if the values are not null.
See also:
pandas.DataFrame.nsmallest
Examples
pandas.DataFrame.nunique
DataFrame.nunique(axis=0, dropna=True)
Return Series with number of distinct observations over requested axis.
New in version 0.20.0.
Parameters axis : {0 or 'index', 1 or 'columns'}, default 0
dropna : boolean, default True
Don't include NaN in the counts.
Returns nunique : Series
Examples
>>> df.nunique(axis=1)
0 1
1 2
2 2
pandas.DataFrame.pct_change
Notes
By default, the percentage change is calculated along the stat axis: 0, or 'index', for DataFrame and 1, or 'minor' for Panel. You can change this with the axis keyword argument.
pandas.DataFrame.pipe
Notes
Use .pipe when chaining together functions that expect Series or DataFrames. Instead of writing
>>> f(g(h(df), arg1=a), arg2=b, arg3=c)
You can write
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.DataFrame.pivot
DataFrame.pivot_table generalization of pivot that can handle duplicate values for one in-
dex/column pair
DataFrame.unstack pivot based on the index values instead of a column
Notes
For finer-tuned control, see hierarchical indexing documentation along with the related stack/unstack
methods
Examples
pandas.DataFrame.pivot_table
If list of functions passed, the resulting pivot table will have hierarchical columns whose top level are the function names (inferred from the function objects themselves)
fill_value : scalar, default None
Value to replace missing values with
margins : boolean, default False
Add all row / columns (e.g. for subtotal / grand totals)
dropna : boolean, default True
Do not include columns whose entries are all NaN
margins_name : string, default All
Name of the row / column that will contain the totals when margins is True.
Returns table : DataFrame
See also:
Examples
>>> df
A B C D
0 foo one small 1
1 foo one large 2
2 foo one large 2
3 foo two small 3
4 foo two small 3
5 bar one large 4
6 bar one small 5
7 bar two small 6
8 bar two large 7
pandas.DataFrame.plot
New in version 0.17.0: Each plot kind has a corresponding method on the DataFrame.plot accessor:
df.plot(kind='line') is equivalent to df.plot.line().
Parameters data : DataFrame
x : label or position, default None
y : label or position, default None
Allows plotting of one column versus another
kind : str
line : line plot (default)
bar : vertical bar plot
barh : horizontal bar plot
hist : histogram
box : boxplot
kde : Kernel Density Estimation plot
density : same as kde
area : area plot
pie : pie plot
scatter : scatter plot
hexbin : hexbin plot
ax : matplotlib axes object, default None
subplots : boolean, default False
Make separate subplots for each column
sharex : boolean, default True if ax is None else False
In case subplots=True, share x axis and set some x axis labels to invisible; defaults to True if ax is None otherwise False if an ax is passed in; be aware that passing in both an ax and sharex=True will alter all x axis labels for all axes in a figure!
sharey : boolean, default False
In case subplots=True, share y axis and set some y axis labels to invisible
layout : tuple (optional)
(rows, columns) for the layout of subplots
figsize : a tuple (width, height) in inches
use_index : boolean, default True
Use index as ticks for x axis
title : string or list
Title to use for the plot. If a string is passed, print the string at the top of the
figure. If a list is passed and subplots is True, print each item in the list above the
corresponding subplot.
grid : boolean, default None (matlab style default)
Axis grid lines
legend : False/True/reverse
Place legend on axis subplots
style : list or dict
matplotlib line style per column
logx : boolean, default False
Use log scaling on x axis
logy : boolean, default False
Use log scaling on y axis
loglog : boolean, default False
Use log scaling on both x and y axes
xticks : sequence
Values to use for the xticks
yticks : sequence
Values to use for the yticks
xlim : 2-tuple/list
ylim : 2-tuple/list
rot : int, default None
Rotation for ticks (xticks for vertical, yticks for horizontal plots)
fontsize : int, default None
Font size for xticks and yticks
colormap : str or matplotlib colormap object, default None
Colormap to select colors from. If string, load colormap with that name from
matplotlib.
colorbar : boolean, optional
If True, plot colorbar (only relevant for scatter and hexbin plots)
position : float
Specify relative alignments for bar plot layout. From 0 (left/bottom-end) to 1
(right/top-end). Default is 0.5 (center)
layout : tuple (optional)
(rows, columns) for the layout of the plot
table : boolean, Series or DataFrame, default False
If True, draw a table using the data in the DataFrame and the data will be transposed to meet matplotlib's default layout. If a Series or DataFrame is passed, use passed data to draw a table.
yerr : DataFrame, Series, array-like, dict and str
See Plotting with Error Bars for detail.
Notes
pandas.DataFrame.pop
DataFrame.pop(item)
Return item and drop from frame. Raise KeyError if not found.
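A minimal sketch (frame contents assumed):
>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})   # illustrative data
>>> df.pop('a')
0    1
1    2
Name: a, dtype: int64
>>> df
   b
0  3
1  4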
pandas.DataFrame.pow
Notes
pandas.DataFrame.prod
pandas.DataFrame.product
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns prod : Series or DataFrame (if level specified)
pandas.DataFrame.quantile
Examples
pandas.DataFrame.query
Notes
The result of the evaluation of this expression is first passed to DataFrame.loc and if that fails because of a multidimensional key (e.g., a DataFrame) then the result will be passed to DataFrame.__getitem__().
This method uses the top-level pandas.eval() function to evaluate the passed query.
The query() method uses a slightly modified Python syntax by default. For example, the & and |
(bitwise) operators have the precedence of their boolean cousins, and and or. This is syntactically valid
Python, however the semantics are different.
You can change the semantics of the expression by passing the keyword argument parser='python'.
This enforces the same semantics as evaluation in Python space. Likewise, you can pass
engine='python' to evaluate an expression using Python itself as a backend. This is not recom-
mended as it is inefficient compared to using numexpr as the engine.
The DataFrame.index and DataFrame.columns attributes of the DataFrame instance are
placed in the query namespace by default, which allows you to treat both the index and columns of
the frame as a column in the frame. The identifier index is used for the frame index; you can also use
the name of the index to identify it in a query.
For further details and examples see the query documentation in indexing.
Examples
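For instance (frame contents assumed); the boolean-indexing form is shown for comparison:
>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 2, 1]})   # illustrative data
>>> df.query('a > b')
   a  b
2  3  1
>>> df[df.a > df.b]   # equivalent selection without query
   a  b
2  3  1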
pandas.DataFrame.radd
Notes
pandas.DataFrame.rank
pandas.DataFrame.rdiv
Notes
pandas.DataFrame.reindex
keywords) New labels / index to conform to. Preferably an Index object to avoid
duplicating data
method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional
Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index.
default: don't fill gaps
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap
nearest: use nearest valid observations to fill gap
copy : boolean, default True
Return a new object, even if the passed indexes are the same
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any compatible
value
limit : int, default None
Maximum number of consecutive elements to forward or backward fill
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the equation
abs(index[indexer] - target) <= tolerance.
New in version 0.17.0.
Returns reindexed : DataFrame
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
2010-01-05 89
2010-01-06 88
2010-01-07 NaN
The index entries that did not have a value in the original data frame (for example, 2009-12-29) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
pandas.DataFrame.reindex_axis
Broadcast across a level, matching Index values on the passed MultiIndex level
limit : int, default None
Maximum number of consecutive elements to forward or backward fill
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the equation
abs(index[indexer] - target) <= tolerance.
New in version 0.17.0.
Returns reindexed : DataFrame
See also:
reindex, reindex_like
Examples
pandas.DataFrame.reindex_like
Notes
pandas.DataFrame.rename
Examples
1 2 5
2 3 6
pandas.DataFrame.rename_axis
Examples
pandas.DataFrame.reorder_levels
DataFrame.reorder_levels(order, axis=0)
Rearrange index levels using input order. May not drop or duplicate levels
Parameters order : list of int or list of str
List representing new level order. Reference level by number (position) or by key
(label).
axis : int
Where to reorder levels.
Returns type of caller (new object)
pandas.DataFrame.replace
Notes
Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
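A sketch of both scalar and regex replacement (values assumed):
>>> pd.DataFrame({'a': [0, 1]}).replace(0, 5)   # illustrative data
   a
0  5
1  1
>>> df = pd.DataFrame({'a': ['foo-1', 'bar-2']})
>>> df.replace(r'-\d', '', regex=True)   # regexes only match string values
     a
0  foo
1  bar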
pandas.DataFrame.resample
Notes
To learn more about the offset strings, please see this link.
Examples
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the
left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels.
For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as
illustrated in the example below this one.
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resam-
pling.
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
pandas.DataFrame.reset_index
If the columns have multiple levels, determines which level the labels are inserted
into. By default it is inserted into the first level.
col_fill : object, default
If the columns have multiple levels, determines how the other levels are named.
If None then the index name is repeated.
Returns resetted : DataFrame
pandas.DataFrame.rfloordiv
Notes
pandas.DataFrame.rmod
Notes
pandas.DataFrame.rmul
Notes
pandas.DataFrame.rolling
Size of the moving window. This is the number of observations used for calculat-
ing the statistic. Each window will be a fixed size.
If it's an offset then this will be the time period of each window. Each window will be a variable size based on the observations included in the time-period. This is only valid for datetimelike indexes. This is new in 0.19.0
min_periods : int, default None
Minimum number of observations in window required to have a value (otherwise
result is NA). For a window that is specified by an offset, this will default to 1.
freq : string or DateOffset object, optional (default None) (DEPRECATED)
Frequency to conform the data to before computing the statistic. Specified as a
frequency string or DateOffset object.
center : boolean, default False
Set the labels at the center of the window.
win_type : string, default None
Provide a window type. See the notes below.
on : string, optional
For a DataFrame, column on which to calculate the rolling window, rather than
the index
closed : string, default None
Make the interval closed on the 'right', 'left', 'both' or 'neither' endpoints. For offset-based windows, it defaults to 'right'. For fixed windows, defaults to 'both'.
Remaining cases not implemented for fixed windows.
New in version 0.20.0.
axis : int or string, default 0
Returns a Window or Rolling sub-classed for the particular operation
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
The freq keyword is used to conform time series data to a specified frequency by resampling the data.
This is done with the default parameters of resample() (i.e. using the mean).
To learn more about the offsets & frequency strings, please see this link.
The recognized win_types are:
boxcar
triang
blackman
hamming
bartlett
parzen
bohman
blackmanharris
nuttall
barthann
kaiser (needs beta)
gaussian (needs std)
general_gaussian (needs power, width)
slepian (needs width).
Examples
Rolling sum with a window length of 2, using the triang window type.
Rolling sum with a window length of 2, min_periods defaults to the window length.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
>>> df
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Contrasting to an integer rolling window, this will roll a variable length window corresponding to the time
period. The default for min_periods is 1.
>>> df.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
pandas.DataFrame.round
Examples
pandas.DataFrame.rpow
Notes
pandas.DataFrame.rsub
Equivalent to other - dataframe, but with support to substitute a fill_value for missing data in one
of the inputs.
Parameters other : Series, DataFrame, or constant
axis : {0, 1, 'index', 'columns'}
For Series input, axis to match Series index on
fill_value : None or float value, default None
Fill missing (NaN) values with this value. If both DataFrame locations are miss-
ing, the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns result : DataFrame
See also:
DataFrame.sub
Notes
pandas.DataFrame.rtruediv
Notes
pandas.DataFrame.sample
Examples
>>> s = pd.Series(np.random.randn(50))
>>> s.head()
0 -0.038497
1 1.820773
2 -0.972766
3 -1.598270
4 -1.095526
dtype: float64
>>> df = pd.DataFrame(np.random.randn(50, 4), columns=list('ABCD'))
>>> df.head()
A B C D
0 0.016443 -2.318952 -0.566372 -1.028078
1 -1.051921 0.438836 0.658280 -0.175797
2 -1.243569 -0.364626 -0.215065 0.057736
>>> s.sample(n=3)
27 -0.994689
55 -1.049016
67 -0.224565
dtype: float64
pandas.DataFrame.select
DataFrame.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Parameters crit : function
To be called on each index (label). Should return True or False
axis : int
Returns selection : type of caller
pandas.DataFrame.select_dtypes
DataFrame.select_dtypes(include=None, exclude=None)
Return a subset of a DataFrame including/excluding columns based on their dtype.
Parameters include, exclude : list-like
A list of dtypes or strings to be included/excluded. You must pass in a non-empty
sequence for at least one of these.
Returns subset : DataFrame
The subset of the frame including the dtypes in include and excluding the
dtypes in exclude.
Raises ValueError
If both of include and exclude are empty
If include and exclude have overlapping elements
If any kind of string dtype is passed in.
TypeError
If either of include or exclude is not a sequence
Notes
Examples
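For instance (frame contents assumed):
>>> df = pd.DataFrame({'a': [1, 2], 'b': [1.0, 2.0], 'c': ['x', 'y']})   # illustrative data
>>> df.select_dtypes(include=['number'])
   a    b
0  1  1.0
1  2  2.0
>>> df.select_dtypes(exclude=['number'])
   c
0  x
1  y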
pandas.DataFrame.sem
pandas.DataFrame.set_axis
DataFrame.set_axis(axis, labels)
Public version of axis assignment
pandas.DataFrame.set_index
Examples
pandas.DataFrame.set_value
pandas.DataFrame.shift
Notes
If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you
would like to extend the index when shifting and preserve the original data.
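A minimal sketch without freq (values assumed):
>>> df = pd.DataFrame({'a': [1, 2, 3]})   # illustrative data
>>> df.shift(1)   # data moves down one row; the first row becomes NaN
     a
0  NaN
1  1.0
2  2.0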
pandas.DataFrame.skew
pandas.DataFrame.slice_shift
DataFrame.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Number of periods to move, can be positive or negative
Returns shifted : same type as caller
Notes
While the slice_shift is faster than shift, you may pay for it later during alignment.
pandas.DataFrame.sort_index
pandas.DataFrame.sort_values
pandas.DataFrame.sortlevel
pandas.DataFrame.squeeze
DataFrame.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns scalar if 1-sized, else original object
pandas.DataFrame.stack
DataFrame.stack(level=-1, dropna=True)
Pivot a level of the (possibly hierarchical) column labels, returning a DataFrame (or Series in the case of
an object with a single level of column labels) having a hierarchical index with a new inner-most level of
row labels. The level involved will automatically get sorted.
Parameters level : int, string, or list of these, default last level
Level(s) to stack, can pass level name
dropna : boolean, default True
Whether to drop rows in the resulting Frame/Series with no valid values
Returns stacked : DataFrame or Series
Examples
>>> s
a b
one 1. 2.
two 3. 4.
>>> s.stack()
one a 1
b 2
two a 3
b 4
pandas.DataFrame.std
pandas.DataFrame.sub
Notes
pandas.DataFrame.subtract
Notes
pandas.DataFrame.sum
pandas.DataFrame.swapaxes
pandas.DataFrame.swaplevel
pandas.DataFrame.tail
DataFrame.tail(n=5)
Returns last n rows
pandas.DataFrame.take
pandas.DataFrame.to_clipboard
Notes
pandas.DataFrame.to_csv
pandas.DataFrame.to_dense
DataFrame.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.DataFrame.to_dict
DataFrame.to_dict(orient='dict')
Convert DataFrame to dictionary.
Parameters orient : str {'dict', 'list', 'series', 'split', 'records', 'index'}
Determines the type of the values of the dictionary.
dict (default) : dict like {column -> {index -> value}}
list : dict like {column -> [values]}
series : dict like {column -> Series(values)}
split : dict like {index -> [index], columns -> [columns], data -> [values]}
records : list like [{column -> value}, ... , {column -> value}]
index : dict like {index -> {column -> value}}
New in version 0.17.0.
Abbreviations are allowed. 's' indicates series and 'sp' indicates split.
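For instance (frame contents assumed):
>>> df = pd.DataFrame({'a': [1, 2]})   # illustrative data
>>> df.to_dict()
{'a': {0: 1, 1: 2}}
>>> df.to_dict(orient='records')
[{'a': 1}, {'a': 2}]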
pandas.DataFrame.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.DataFrame.to_feather
DataFrame.to_feather(fname)
Write out the binary Feather format for DataFrames
New in version 0.20.0.
Parameters fname : str
string file path
pandas.DataFrame.to_gbq
destination_table : string
Name of table to be written, in the form dataset.tablename
project_id : str
Google BigQuery Account project ID.
chunksize : int (default 10000)
Number of rows to be inserted in each chunk from the dataframe.
verbose : boolean (default True)
Show percentage complete
reauth : boolean (default False)
Force Google BigQuery to reauthenticate the user. This is useful if multiple ac-
counts are used.
if_exists : {'fail', 'replace', 'append'}, default 'fail'
'fail': If table exists, do nothing. 'replace': If table exists, drop it, recreate it, and insert data. 'append': If table exists, insert data. Create if does not exist.
private_key : str (optional)
Service account private key in JSON format. Can be file path or string contents.
This is useful for remote server authentication (eg. jupyter iPython notebook on
remote host)
pandas.DataFrame.to_hdf
List of columns to create as indexed data columns for on-disk queries, or True to
use all columns. By default only the axes of the object are indexed. See here.
Applicable only to format=table.
complevel : int, 1-9, default 0
If a complib is specified compression will be applied where possible
complib : {'zlib', 'bzip2', 'lzo', 'blosc', None}, default None
If complevel is > 0 apply compression to objects written in the store wherever
possible
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum
dropna : boolean, default False.
If True, all-NaN rows will not be written to the store.
pandas.DataFrame.to_html
pandas.DataFrame.to_json
Examples
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not pre-
served with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
"primaryKey": "index",
"pandas_version": "0.20.0"},
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
pandas.DataFrame.to_latex
pandas.DataFrame.to_msgpack
THIS IS AN EXPERIMENTAL LIBRARY and the storage format may not be stable until a future release.
Parameters path : string file path, buffer-like, or None
If None, return generated string
append : boolean, default False
Whether to append to an existing msgpack
compress : type of compressor (zlib or blosc), default None (no compression)
pandas.DataFrame.to_panel
DataFrame.to_panel()
Transform long (stacked) format (DataFrame) into wide (3D, Panel) format.
Currently the index of the DataFrame must be a 2-level MultiIndex. This may be generalized later
Returns panel : Panel
pandas.DataFrame.to_period
pandas.DataFrame.to_pickle
DataFrame.to_pickle(path, compression='infer')
Pickle (serialize) object to input file path.
Parameters path : string
File path
compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
A string representing the compression to use in the output file
New in version 0.20.0.
pandas.DataFrame.to_records
DataFrame.to_records(index=True, convert_datetime64=True)
Convert DataFrame to record array. Index will be put in the index field of the record array if requested
Parameters index : boolean, default True
Include index in resulting record array, stored in index field
convert_datetime64 : boolean, default True
Whether to convert the index to datetime.datetime if it is a DatetimeIndex
Returns y : recarray
pandas.DataFrame.to_sparse
DataFrame.to_sparse(fill_value=None, kind='block')
Convert to SparseDataFrame
Parameters fill_value : float, default NaN
kind : {'block', 'integer'}
Returns y : SparseDataFrame
pandas.DataFrame.to_stata
variable_labels : dict
Dictionary containing columns as keys and variable labels as values. Each label
must be 80 characters or smaller.
New in version 0.19.0.
Raises NotImplementedError
If datetimes contain timezone information
Column dtype is not representable in Stata
ValueError
Columns listed in convert_dates are neither datetime64[ns] nor datetime.datetime
Column listed in convert_dates is not in DataFrame
Categorical label contains more than 32,000 characters
New in version 0.19.0.
Examples
Or with dates
pandas.DataFrame.to_string
pandas.DataFrame.to_timestamp
pandas.DataFrame.to_xarray
DataFrame.to_xarray()
Return an xarray object from the pandas object.
Notes
Examples
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4,3,2),
...              items=list('ABCD'),
...              major_axis=pd.date_range('20130101', periods=3),
...              minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0,  1],
        [ 2,  3],
        [ 4,  5]],
       [[ 6,  7],
        [ 8,  9],
        [10, 11]],
       [[12, 13],
        [14, 15],
        [16, 17]],
       [[18, 19],
        [20, 21],
        [22, 23]]])
Coordinates:
* items (items) object 'A' 'B' 'C' 'D'
* major_axis (major_axis) datetime64[ns] 2013-01-01 2013-01-02 2013-01-03
pandas.DataFrame.transform
Examples
pandas.DataFrame.transpose
DataFrame.transpose(*args, **kwargs)
Transpose index and columns
pandas.DataFrame.truediv
Notes
pandas.DataFrame.truncate
pandas.DataFrame.tshift
Notes
If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exist, a ValueError is thrown
pandas.DataFrame.tz_convert
pandas.DataFrame.tz_localize
pandas.DataFrame.unstack
DataFrame.unstack(level=-1, fill_value=None)
Pivot a level of the (necessarily hierarchical) index labels, returning a DataFrame having a new level of
column labels whose inner-most level consists of the pivoted index labels. If the index is not a MultiIndex,
the output will be a Series (the analogue of stack when the columns are not a MultiIndex). The level
involved will automatically get sorted.
Parameters level : int, string, or list of these, default -1 (last level)
Level(s) of index to unstack, can pass level name
fill_value : replace NaN with this value if the unstack produces
missing values
Returns unstacked : DataFrame or Series
See also:
DataFrame.stack Pivot a level of the column labels (inverse operation from unstack).
Examples
>>> s.unstack(level=-1)
a b
one 1.0 2.0
two 3.0 4.0
>>> s.unstack(level=0)
one two
a 1.0 3.0
b 2.0 4.0
>>> df = s.unstack(level=0)
>>> df.unstack()
one a 1.0
b 2.0
two a 3.0
b 4.0
dtype: float64
pandas.DataFrame.update
pandas.DataFrame.var
pandas.DataFrame.where
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
pandas.DataFrame.xs
Notes
Examples
>>> df
A B C
a 4 5 2
b 4 0 9
c 9 7 3
>>> df.xs('a')
A 4
B 5
C 2
Name: a
>>> df.xs('C', axis=1)
a 2
b 9
c 3
Name: C
>>> df
                    A  B  C  D
first second third
bar   one    1      4  1  8  9
      two    1      7  5  5  0
baz   one    1      6  6  8  0
      three  2      5  3  5  3
>>> df.xs(('baz', 'three'))
A B C D
third
2 5 3 5 3
>>> df.xs('one', level=1)
A B C D
first third
bar 1 4 1 8 9
baz 1 6 6 8 0
>>> df.xs(('baz', 2), level=[0, 'third'])
A B C D
second
three 5 3 5 3
Axes
index: row labels
columns: column labels
34.4.3 Conversion
34.4.4.1 pandas.DataFrame.__iter__
DataFrame.__iter__()
Iterate over info axis
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
DataFrame.add(other[, axis, level, fill_value])    Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value])    Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value])    Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value])    Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...])       Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...])      Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value])    Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.apply(func[, axis, broadcast, ...])      Applies function along input axis of DataFrame.
DataFrame.applymap(func)                           Apply a function to a DataFrame that is intended to operate elementwise.
DataFrame.aggregate(func[, axis])                  Aggregate using callable, string, dict, or list of string/callables.
DataFrame.transform(func, *args, **kwargs)        Call function producing a like-indexed NDFrame.
DataFrame.groupby([by, axis, level, ...])          Group series using mapper (dict or key function, apply given function to group, return result as series) or by a series of columns.
DataFrame.rolling(window[, min_periods, ...])      Provides rolling window calculations.
DataFrame.expanding([min_periods, freq, ...])      Provides expanding transformations.
DataFrame.ewm([com, span, halflife, alpha, ...])   Provides exponential weighted functions.
DataFrame.dropna([axis, how, thresh, ...])         Return object with labels on given axis omitted where alternately any or all of the data are missing.
DataFrame.fillna([value, method, axis, ...])       Fill NA/NaN values using the specified method.
DataFrame.replace([to_replace, value, ...])        Replace values given in to_replace with value.
DataFrame.pivot([index, columns, values])          Reshape data (produce a pivot table) based on column values.
DataFrame.reorder_levels(order[, axis])            Rearrange index levels using input order.
DataFrame.sort_values(by[, axis, ascending, ...])  Sort by the values along either axis.
DataFrame.sort_index([axis, level, ...])           Sort object by labels (along an axis).
DataFrame.nlargest(n, columns[, keep])             Get the rows of a DataFrame sorted by the n largest values of columns.
DataFrame.nsmallest(n, columns[, keep])            Get the rows of a DataFrame sorted by the n smallest values of columns.
DataFrame.swaplevel([i, j, axis])                  Swap levels i and j in a MultiIndex on a particular axis.
DataFrame.stack([level, dropna])                   Pivot a level of the (possibly hierarchical) column labels, returning a DataFrame (or Series, for an object with a single level of column labels) having a hierarchical index with a new inner-most level of row labels.
DataFrame.unstack([level, fill_value])             Pivot a level of the (necessarily hierarchical) index labels, returning a DataFrame having a new level of column labels whose inner-most level consists of the pivoted index labels.
DataFrame.melt([id_vars, value_vars, ...])         Unpivots a DataFrame from wide format to long format, optionally leaving identifier variables set.
DataFrame.T                                        Transpose index and columns.
DataFrame.to_panel()                               Transform long (stacked) format (DataFrame) into wide (3D, Panel) format.
DataFrame.to_xarray()                              Return an xarray object from the pandas object.
DataFrame.transpose(*args, **kwargs)               Transpose index and columns.
DataFrame.append(other[, ignore_index, ...])       Append rows of other to the end of this frame, returning a new object.
DataFrame.assign(**kwargs)                         Assign new columns to a DataFrame, returning a new object (a copy) with all the original columns in addition to the new ones.
DataFrame.join(other[, on, how, lsuffix, ...])     Join columns with other DataFrame either on index or on a key column.
DataFrame.merge(right[, how, on, left_on, ...])    Merge DataFrame objects by performing a database-style join operation by columns or indexes.
DataFrame.update(other[, join, overwrite, ...])    Modify DataFrame in place using non-NA values from passed DataFrame.
34.4.13 Plotting
DataFrame.plot is both a callable method and a namespace attribute for specific plotting methods of the form
DataFrame.plot.<kind>.
34.4.13.1 pandas.DataFrame.plot.area
34.4.13.2 pandas.DataFrame.plot.bar
34.4.13.3 pandas.DataFrame.plot.barh
34.4.13.4 pandas.DataFrame.plot.box
DataFrame.plot.box(by=None, **kwds)
Boxplot
New in version 0.17.0.
Parameters by : string or sequence
Column in the DataFrame to group by.
**kwds : optional
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.4.13.5 pandas.DataFrame.plot.density
DataFrame.plot.density(**kwds)
Kernel Density Estimate plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.4.13.6 pandas.DataFrame.plot.hexbin
34.4.13.7 pandas.DataFrame.plot.hist
34.4.13.8 pandas.DataFrame.plot.kde
DataFrame.plot.kde(**kwds)
Kernel Density Estimate plot
New in version 0.17.0.
Parameters **kwds : optional
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.4.13.9 pandas.DataFrame.plot.line
34.4.13.10 pandas.DataFrame.plot.pie
DataFrame.plot.pie(y=None, **kwds)
Pie chart
New in version 0.17.0.
Parameters y : label or position, optional
Column to plot.
**kwds : optional
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.AxesSubplot or np.array of them
34.4.13.11 pandas.DataFrame.plot.scatter
DataFrame.boxplot([column, by, ax, ...])           Make a box plot from DataFrame column optionally grouped by some columns or other inputs.
DataFrame.hist(data[, column, by, grid, ...])      Draw histogram of the DataFrame's series using matplotlib / pylab.
34.4.14 Serialization / IO / Conversion
DataFrame.from_csv(path[, header, sep, ...])       Read CSV file (DISCOURAGED, please use pandas.read_csv() instead).
DataFrame.from_dict(data[, orient, dtype])         Construct DataFrame from dict of array-like or dicts.
DataFrame.from_items(items[, columns, orient])     Convert (key, value) pairs to DataFrame.
DataFrame.from_records(data[, index, ...])         Convert structured or record ndarray to DataFrame.
DataFrame.info([verbose, buf, max_cols, ...])      Concise summary of a DataFrame.
DataFrame.to_pickle(path[, compression])           Pickle (serialize) object to input file path.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])  Write DataFrame to a comma-separated values (csv) file.
DataFrame.to_hdf(path_or_buf, key, **kwargs)       Write the contained data to an HDF5 file using HDFStore.
DataFrame.to_sql(name, con[, flavor, ...])         Write records stored in a DataFrame to a SQL database.
DataFrame.to_dict([orient])                        Convert DataFrame to dictionary.
DataFrame.to_excel(excel_writer[, ...])            Write DataFrame to an excel sheet.
DataFrame.to_json([path_or_buf, orient, ...])      Convert the object to a JSON string.
DataFrame.to_html([buf, columns, col_space, ...])  Render a DataFrame as an HTML table.
DataFrame.to_feather(fname)                        Write out the binary feather-format for DataFrames.
DataFrame.to_latex([buf, columns, ...])            Render a DataFrame to a tabular environment table.
DataFrame.to_stata(fname[, convert_dates, ...])    Write a DataFrame to a Stata binary dta file.
DataFrame.to_msgpack([path_or_buf, encoding])      msgpack (serialize) object to input file path.
DataFrame.to_gbq(destination_table, project_id)    Write a DataFrame to a Google BigQuery table.
DataFrame.to_records([index, convert_datetime64])  Convert DataFrame to record array.
DataFrame.to_sparse([fill_value, kind])            Convert to SparseDataFrame.
DataFrame.to_dense()                               Return dense representation of NDFrame (as opposed to sparse).
DataFrame.to_string([buf, columns, ...])           Render a DataFrame to a console-friendly tabular output.
DataFrame.to_clipboard([excel, sep])               Attempt to write text representation of object to the system clipboard; this can be pasted into Excel, for example.
34.4.15 Sparse
34.4.15.1 pandas.SparseDataFrame.to_coo
SparseDataFrame.to_coo()
Return the contents of the frame as a sparse SciPy COO matrix.
New in version 0.20.0.
Returns coo_matrix : scipy.sparse.spmatrix
If the caller is heterogeneous and contains booleans or objects, the result will be of
dtype=object. See Notes.
Notes
The dtype will be the lowest-common-denominator type (implicit upcasting); that is to say if the dtypes (even
of numeric types) are mixed, the one that accommodates all will be chosen.
e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. By numpy.find_common_type convention, mixing int64 and uint64 will result in a float64 dtype.
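A minimal sketch (assumed frame; the conversion requires scipy to be installed):
>>> df = pd.DataFrame({'A': [0.0, np.nan, 2.0], 'B': [np.nan, 1.0, np.nan]})
>>> sdf = df.to_sparse()
>>> coo = sdf.to_coo()  # scipy.sparse.coo_matrix holding the non-NaN entries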
34.5 Panel
34.5.1 Constructor
Panel([data, items, major_axis, minor_axis, ...]) Represents wide format panel data, stored as 3-dimensional
array
34.5.1.1 pandas.Panel
Attributes
pandas.Panel.at
Panel.at
Fast label-based scalar accessor
Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
pandas.Panel.axes
Panel.axes
Return index label(s) of the internal NDFrame
pandas.Panel.blocks
Panel.blocks
Internal property, property synonym for as_blocks()
pandas.Panel.dtypes
Panel.dtypes
Return the dtypes in this object.
pandas.Panel.empty
Panel.empty
True if NDFrame is entirely empty [no items], meaning any of the axes are of length 0.
See also:
pandas.Series.dropna, pandas.DataFrame.dropna
Notes
If NDFrame contains only NaNs, it is still not considered empty. See the example below.
Examples
If we only have NaNs in our DataFrame, it is not considered empty! We will need to drop the NaNs to
make the DataFrame empty:
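A sketch of that case (assumed setup):
>>> df = pd.DataFrame({'A': [np.nan]})
>>> df.empty
False
>>> df.dropna().empty
True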
pandas.Panel.ftypes
Panel.ftypes
Return the ftypes (indication of sparse/dense and dtype) in this object.
pandas.Panel.iat
Panel.iat
Fast integer location scalar accessor.
Similarly to iloc, iat provides integer based lookups. You can also set using these indexers.
pandas.Panel.iloc
Panel.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which
allow out-of-bounds indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position
pandas.Panel.is_copy
Panel.is_copy = None
pandas.Panel.ix
Panel.ix
A primarily label-location based indexer, with integer position fallback.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also
supports floating point label schemes. .ix is exceptionally useful when dealing with mixed positional
and label based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported.
Thus, in such cases, it's usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.Panel.loc
Panel.loc
Purely label-location based indexer for selection by label.
.loc[] is primarily label based, but may also be used with a boolean array.
Allowed inputs are:
A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an
integer position along the index).
A list or array of labels, e.g. ['a', 'b', 'c'].
A slice object with labels, e.g. 'a':'f' (note that contrary to usual python slices, both the start
and the stop are included!).
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.loc will raise a KeyError when the items are not found.
See more at Selection by Label
pandas.Panel.ndim
Panel.ndim
Number of axes / array dimensions
pandas.Panel.shape
Panel.shape
Return a tuple of axis dimensions
pandas.Panel.size
Panel.size
number of elements in the NDFrame
pandas.Panel.values
Panel.values
Numpy representation of NDFrame
Notes
The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes
(even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if
you are not dealing with the blocks.
e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8,
dtype will be upcast to int32. By numpy.find_common_type convention, mixing int64 and uint64 will
result in a float64 dtype.
Methods
pandas.Panel.abs
Panel.abs()
Return an object with absolute value taken; only applicable to objects that are all numeric.
pandas.Panel.add
Panel.add(other, axis=0)
Addition of series and other, element-wise (binary operator add). Equivalent to panel + other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.radd
pandas.Panel.add_prefix
Panel.add_prefix(prefix)
Concatenate prefix string with panel items names.
Parameters prefix : string
Returns with_prefix : type of caller
pandas.Panel.add_suffix
Panel.add_suffix(suffix)
Concatenate suffix string with panel items names.
Parameters suffix : string
Returns with_suffix : type of caller
pandas.Panel.agg
pandas.Panel.aggregate
pandas.Panel.align
Panel.align(other, **kwargs)
pandas.Panel.all
pandas.Panel.any
pandas.Panel.apply
Examples
>>> p = pd.Panel(np.random.rand(4,3,2))
>>> p.apply(np.sqrt)
Reductions can likewise be written against a named axis; for example, a sum over the major axis, or
the shapes of each DataFrame over axis 2 (i.e. the shapes of items x major) returned as a Series, as
sketched below.
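The calls themselves were lost in extraction; plausible forms (assumed) are:
>>> p.apply(lambda x: x.sum(), axis='major')   # reduce over the major axis, returning a DataFrame
>>> p.apply(lambda x: x.shape, axis=(0, 1))    # shape of each items x major slice, as a Series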
pandas.Panel.as_blocks
Panel.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
pandas.Panel.as_matrix
Panel.as_matrix()
pandas.Panel.asfreq
Notes
To learn more about the frequency strings, please see this link.
Examples
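The output below assumes a one-column frame built from a minute-frequency series with missing values (setup assumed):
>>> index = pd.date_range('1/1/2000', periods=4, freq='T')
>>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
>>> df = pd.DataFrame({'s': series})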
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
pandas.Panel.asof
Panel.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of
columns in the case of a DataFrame).
New in version 0.19.0: For DataFrame.
If there is no good value, NaN is returned for a Series, or a Series of NaN values for a DataFrame.
Parameters where : date or array of dates
subset : string or list of strings, default None
if not None use these columns for NaN propagation
Returns if where is scalar:
value or NaN if input is Series
Series if input is DataFrame
if where is Index: same shape object as input
See also:
merge_asof
Notes
Dates are assumed to be sorted. Raises if this is not the case.
pandas.Panel.astype
pandas.Panel.at_time
Panel.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters time : datetime.time or string
Returns values_at_time : type of caller
pandas.Panel.between_time
pandas.Panel.bfill
pandas.Panel.bool
Panel.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a ValueError if the PandasObject does not
have exactly 1 element, or that element is not boolean
pandas.Panel.clip
Examples
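The frame and the threshold series t used below are assumed to be, e.g.:
>>> df = pd.DataFrame(np.random.randn(5, 2))
>>> t = pd.Series([-0.3, -0.2, -0.1, 0.0, 0.1])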
>>> df
0 1
0 0.335232 -1.256177
1 -1.367855 0.746646
2 0.027753 -1.176076
3 0.230930 -0.679613
4 1.261967 0.570967
>>> df.clip(-1.0, 0.5)
0 1
0 0.335232 -1.000000
1 -1.000000 0.500000
2 0.027753 -1.000000
3 0.230930 -0.679613
4 0.500000 0.500000
>>> t
0 -0.3
1 -0.2
2 -0.1
3 0.0
4 0.1
dtype: float64
>>> df.clip(t, t + 1, axis=0)
0 1
0 0.335232 -0.300000
1 -0.200000 0.746646
2 0.027753 -0.100000
3 0.230930 0.000000
4 1.100000 0.570967
pandas.Panel.clip_lower
Panel.clip_lower(threshold, axis=None)
Return copy of the input with values below given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.Panel.clip_upper
Panel.clip_upper(threshold, axis=None)
Return copy of input with values above given value(s) truncated.
Parameters threshold : float or array_like
axis : int or string axis name, optional
Align object with threshold along the given axis.
Returns clipped : same type as input
See also:
clip
pandas.Panel.compound
pandas.Panel.conform
Panel.conform(frame, axis='items')
Conform input DataFrame to align with chosen axis pair.
Parameters frame : DataFrame
axis : {items, major, minor}
Axis the input corresponds to. E.g., if axis='major', then the frame's columns would be items, and the
index would be values of the minor axis
Returns DataFrame
pandas.Panel.consolidate
Panel.consolidate(inplace=False)
DEPRECATED: consolidate will be an internal implementation only.
pandas.Panel.convert_objects
pandas.Panel.copy
Panel.copy(deep=True)
Make a copy of this object's data.
Parameters deep : boolean or string, default True
Make a deep copy, including a copy of the data and the indices. With
deep=False neither the indices nor the data are copied.
Note that when deep=True data is copied, actual python objects will not be
copied recursively, only the reference to the object. This is in contrast to
copy.deepcopy in the Standard Library, which recursively copies object data.
Returns copy : type of caller
pandas.Panel.count
Panel.count(axis='major')
Return number of observations over requested axis.
Parameters axis : {items, major, minor} or {0, 1, 2}
Returns count : DataFrame
pandas.Panel.cummax
pandas.Panel.cummin
pandas.Panel.cumprod
pandas.Panel.cumsum
pandas.Panel.describe
Notes
For numeric data, the result's index will include count, mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile
is the same as the median.
For object data (e.g. strings or timestamps), the result's index will include count, unique, top, and
freq. The top is the most common value. The freq is the most common value's frequency. Timestamps
also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen
from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If include='all' is provided as an option, the result will include a union of attributes of
each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s.describe()
count 3
unique 2
top 2010-01-01 00:00:00
freq 2
first 2000-01-01 00:00:00
last 2010-01-01 00:00:00
dtype: object
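The remaining examples assume a small mixed frame (construction assumed, lost in extraction):
>>> df = pd.DataFrame({'numeric': [1, 2, 3],
...                    'object': ['a', 'b', 'c']})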
>>> df.describe(include='all')
numeric object
count 3.0 3
unique NaN 3
top NaN b
freq NaN 1
mean 2.0 NaN
std 1.0 NaN
min 1.0 NaN
25% 1.5 NaN
50% 2.0 NaN
75% 2.5 NaN
max 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.number])
object
count 3
unique 3
top b
freq 1
>>> df.describe(exclude=[np.object])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
pandas.Panel.div
Panel.div(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rtruediv
pandas.Panel.divide
Panel.divide(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rtruediv
pandas.Panel.drop
pandas.Panel.dropna
pandas.Panel.eq
Panel.eq(other, axis=None)
Wrapper for comparison method eq
pandas.Panel.equals
Panel.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
pandas.Panel.ffill
pandas.Panel.fillna
pandas.Panel.filter
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
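The filtering calls themselves were lost in extraction; plausible forms (assumed) are:
>>> df.filter(items=['one', 'three'])    # select columns by name
>>> df.filter(regex='e$', axis=1)        # select columns by regular expression
>>> df.filter(like='bbi', axis=0)        # select rows whose label contains 'bbi'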
pandas.Panel.first
Panel.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Returns subset : type of caller
Examples
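For a datetime-indexed object ts (assumed), the initial ten days can be selected with:
>>> ts.first('10D')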
pandas.Panel.floordiv
Panel.floordiv(other, axis=0)
Integer division of series and other, element-wise (binary operator floordiv). Equivalent to panel //
other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rfloordiv
pandas.Panel.fromDict
pandas.Panel.from_dict
pandas.Panel.ge
Panel.ge(other, axis=None)
Wrapper for comparison method ge
pandas.Panel.get
Panel.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters key : object
Returns value : type of items contained in object
pandas.Panel.get_dtype_counts
Panel.get_dtype_counts()
Return the counts of dtypes in this object.
pandas.Panel.get_ftype_counts
Panel.get_ftype_counts()
Return the counts of ftypes in this object.
pandas.Panel.get_value
Panel.get_value(*args, **kwargs)
Quickly retrieve single value at (item, major, minor) location
Parameters item : item label (panel item)
major : major axis label (panel item row)
minor : minor axis label (panel item column)
takeable : interpret the passed labels as indexers, default False
Returns value : scalar value
pandas.Panel.get_values
Panel.get_values()
same as values (but handles sparseness conversions)
pandas.Panel.groupby
Panel.groupby(function, axis='major')
Group data on given axis, returning GroupBy object
Parameters function : callable
Mapping function for chosen axis
axis : {major, minor, items}, default major
Returns grouped : PanelGroupBy
pandas.Panel.gt
Panel.gt(other, axis=None)
Wrapper for comparison method gt
pandas.Panel.head
Panel.head(n=5)
pandas.Panel.interpolate
'linear': ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. Default.
'time': interpolation works on daily and higher resolution data to interpolate given length of interval.
'index', 'values': use the actual numerical values of the index.
'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'polynomial' are passed to
scipy.interpolate.interp1d. Both 'polynomial' and 'spline' require that you also specify an order (int),
e.g. df.interpolate(method='polynomial', order=4). These use the actual numerical values of the index.
'krogh', 'piecewise_polynomial', 'spline', 'pchip' and 'akima' are all wrappers around the scipy
interpolation methods of similar names. These use the actual numerical values of the index. For more
information on their behavior, see the scipy documentation and tutorial documentation.
'from_derivatives' refers to BPoly.from_derivatives, which replaces the 'piecewise_polynomial'
interpolation method in scipy 0.18.
New in version 0.18.1: Added support for the 'akima' method; added interpolate method
'from_derivatives' which replaces 'piecewise_polynomial' in scipy 0.18; backwards-compatible with
scipy < 0.18.
axis : {0, 1}, default 0
0: fill column-by-column
1: fill row-by-row
limit : int, default None
Maximum number of consecutive NaNs to fill. Must be greater than 0.
limit_direction : {'forward', 'backward', 'both'}, default 'forward'
Examples
Filling in NaNs
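A minimal sketch (assumed series):
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s.interpolate()
0    0.0
1    1.0
2    2.0
3    3.0
dtype: float64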
pandas.Panel.isnull
Panel.isnull()
Return a boolean same-sized object indicating if the values are null.
See also:
pandas.Panel.iteritems
Panel.iteritems()
Iterate over (label, values) on info axis
This is index for Series, columns for DataFrame, major_axis for Panel, and so on.
pandas.Panel.join
pandas.Panel.keys
Panel.keys()
Get the info axis (see Indexing for more)
This is index for Series, columns for DataFrame and major_axis for Panel.
pandas.Panel.kurt
pandas.Panel.kurtosis
pandas.Panel.last
Panel.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters offset : string, DateOffset, dateutil.relativedelta
Returns subset : type of caller
Examples
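For a datetime-indexed object ts (assumed), the final five days can be selected with:
>>> ts.last('5D')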
pandas.Panel.le
Panel.le(other, axis=None)
Wrapper for comparison method le
pandas.Panel.lt
Panel.lt(other, axis=None)
Wrapper for comparison method lt
pandas.Panel.mad
pandas.Panel.major_xs
Panel.major_xs(key)
Return slice of panel along major axis
Parameters key : object
Major axis label
Returns y : DataFrame
index -> minor axis, columns -> items
Notes
pandas.Panel.mask
See also:
DataFrame.where()
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other
is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
pandas.Panel.max
pandas.Panel.mean
pandas.Panel.median
pandas.Panel.min
pandas.Panel.minor_xs
Panel.minor_xs(key)
Return slice of panel along minor axis
Parameters key : object
Minor axis label
Returns y : DataFrame
index -> major axis, columns -> items
Notes
pandas.Panel.mod
Panel.mod(other, axis=0)
Modulo of series and other, element-wise (binary operator mod). Equivalent to panel % other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rmod
pandas.Panel.mul
Panel.mul(other, axis=0)
Multiplication of series and other, element-wise (binary operator mul). Equivalent to panel * other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rmul
pandas.Panel.multiply
Panel.multiply(other, axis=0)
Multiplication of series and other, element-wise (binary operator mul). Equivalent to panel * other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rmul
pandas.Panel.ne
Panel.ne(other, axis=None)
Wrapper for comparison method ne
pandas.Panel.notnull
Panel.notnull()
Return a boolean same-sized object indicating if the values are not null.
See also:
pandas.Panel.pct_change
Notes
By default, the percentage change is calculated along the stat axis: 0, or Index, for DataFrame and 1,
or minor for Panel. You can change this with the axis keyword argument.
pandas.Panel.pipe
Notes
Use .pipe when chaining together functions that expect Series or DataFrames. Instead of writing
f(g(h(df), arg1=a), arg2=b, arg3=c), you can write:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.Panel.pop
Panel.pop(item)
Return item and drop from frame. Raise KeyError if not found.
pandas.Panel.pow
Panel.pow(other, axis=0)
Exponential power of series and other, element-wise (binary operator pow). Equivalent to panel **
other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rpow
pandas.Panel.prod
pandas.Panel.product
pandas.Panel.radd
Panel.radd(other, axis=0)
Addition of series and other, element-wise (binary operator radd). Equivalent to other + panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.add
pandas.Panel.rank
pandas.Panel.rdiv
Panel.rdiv(other, axis=0)
Floating division of series and other, element-wise (binary operator rtruediv). Equivalent to other /
panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.truediv
pandas.Panel.reindex
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
The index entries that did not have a value in the original data frame (for example, 2009-12-29) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
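A condensed sketch of the examples described above (frame construction assumed):
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301]},
...                   index=['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror'])
>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10', 'Chrome']
>>> df.reindex(new_index)                 # unmatched labels get NaN
>>> df.reindex(new_index, fill_value=0)   # ...or an explicit fill value
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({'prices': [100, 101, np.nan, 100, 89, 88]},
...                    index=date_index)
>>> df2.reindex(pd.date_range('12/29/2009', periods=10, freq='D'),
...             method='bfill')           # backpropagate the next valid value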
pandas.Panel.reindex_axis
Examples
pandas.Panel.reindex_like
Notes
pandas.Panel.rename
Examples
pandas.Panel.rename_axis
Alter index and/or columns using input function or functions. A scalar or list-like for mapper will alter
the Index.name or MultiIndex.names attribute; a function or dict for mapper will alter the labels.
Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is.
Parameters mapper : scalar, list-like, dict-like or function, optional
axis : int or string, default 0
copy : boolean, default True
Also copy underlying data
inplace : boolean, default False
Returns renamed : type of caller
See also:
pandas.NDFrame.rename, pandas.Index.rename
Examples
pandas.Panel.replace
This doesn't matter much for value since there are only a few possible substitution regexes you can use.
str and regex rules apply as above.
dict:
Nested dictionaries, e.g., {'a': {'b': nan}}, are read as follows: look in
column 'a' for the value 'b' and replace it with nan. You can nest regular
expressions as well. Note that column names (the top-level dictionary keys
in a nested dictionary) cannot be regular expressions.
Keys map to column names and values map to substitution values. You can
treat this as a special case of passing two lists except that you are specifying
the column to search in.
None:
This means that the regex argument must be a string, compiled regular
expression, or list, dict, ndarray or Series of such elements. If value is also
None then this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to use to fill holes (e.g. 0), alternately a dict of values specifying which
value to use for each column (columns not in the dict will not be filled). Regular
expressions, strings and lists or dicts of such objects are also allowed.
inplace : boolean, default False
If True, in place. Note: this will modify any other views on this object (e.g. a
column from a DataFrame). Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular expressions. If this is
True then to_replace must be a string. Otherwise, to_replace must be None
because this parameter will be interpreted as a regular expression or a list, dict, or
array of regular expressions.
method : string, optional, {'pad', 'ffill', 'bfill'}
The method to use for replacement, when to_replace is a list.
Returns filled : NDFrame
Raises AssertionError
If regex is not a bool and to_replace is not None.
TypeError
If to_replace is a dict and value is not a list, dict, ndarray, or Series
If to_replace is None and regex is not compilable into a regular expression or is a list,
dict, ndarray, or Series.
ValueError
If to_replace and value are lists or ndarrays, but they are not the same length.
See also:
NDFrame.reindex, NDFrame.asfreq, NDFrame.fillna
Notes
Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are
the same.
Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
pandas.Panel.resample
For a MultiIndex, level (name or number) to use for resampling. Level must be
datetime-like.
New in version 0.19.0.
Notes
To learn more about the offset strings, please see this link.
Examples
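The series used in these examples is assumed to be nine consecutive minute-stamped integers:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)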
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the
left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels.
For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the
summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it
did, the summed value would be 6, not 3). To include this value, close the right side of the bin interval as
illustrated in the example below this one.
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00 3
2000-01-01 00:06:00 12
2000-01-01 00:09:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
Freq: 3T, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
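Pass a custom function via apply; the definition below is assumed (it was lost in extraction), adding 5 to each bin sum:
>>> def custom_resampler(array_like):
...     return np.sum(array_like) + 5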
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for
resampling.
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling
needs to take place.
pandas.Panel.rfloordiv
Panel.rfloordiv(other, axis=0)
Integer division of series and other, element-wise (binary operator rfloordiv). Equivalent to other //
panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.floordiv
pandas.Panel.rmod
Panel.rmod(other, axis=0)
Modulo of series and other, element-wise (binary operator rmod). Equivalent to other % panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.mod
pandas.Panel.rmul
Panel.rmul(other, axis=0)
Multiplication of series and other, element-wise (binary operator rmul). Equivalent to other * panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.mul
pandas.Panel.round
pandas.Panel.rpow
Panel.rpow(other, axis=0)
Exponential power of series and other, element-wise (binary operator rpow). Equivalent to other **
panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.pow
pandas.Panel.rsub
Panel.rsub(other, axis=0)
Subtraction of series and other, element-wise (binary operator rsub). Equivalent to other - panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.sub
pandas.Panel.rtruediv
Panel.rtruediv(other, axis=0)
Floating division of series and other, element-wise (binary operator rtruediv). Equivalent to other /
panel.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.truediv
pandas.Panel.sample
Examples
>>> s = pd.Series(np.random.randn(50))
>>> s.head()
0 -0.038497
1 1.820773
2 -0.972766
3 -1.598270
4 -1.095526
dtype: float64
>>> df = pd.DataFrame(np.random.randn(50, 4), columns=list('ABCD'))
>>> df.head()
A B C D
0 0.016443 -2.318952 -0.566372 -1.028078
1 -1.051921 0.438836 0.658280 -0.175797
2 -1.243569 -0.364626 -0.215065 0.057736
3 1.768216 0.404512 -0.385604 -1.457834
4 1.072446 -1.137172 0.314194 -0.046661
>>> s.sample(n=3)
27 -0.994689
55 -1.049016
67 -0.224565
dtype: float64
pandas.Panel.select
Panel.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Parameters crit : function
To be called on each index (label). Should return True or False
axis : int
Returns selection : type of caller
pandas.Panel.sem
pandas.Panel.set_axis
Panel.set_axis(axis, labels)
public version of axis assignment
pandas.Panel.set_value
Panel.set_value(*args, **kwargs)
Quickly set single value at (item, major, minor) location
Parameters item : item label (panel item)
major : major axis label (panel item row)
minor : minor axis label (panel item column)
value : scalar
takeable : interpret the passed labels as indexers, default False
Returns panel : Panel
If label combo is contained, will be reference to calling Panel, otherwise a new
object
pandas.Panel.shift
pandas.Panel.skew
pandas.Panel.slice_shift
Panel.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Number of periods to move, can be positive or negative
Returns shifted : same type as caller
Notes
While the slice_shift is faster than shift, you may pay for it later during alignment.
pandas.Panel.sort_index
pandas.Panel.sort_values
pandas.Panel.squeeze
Panel.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns scalar if 1-sized, else original object
pandas.Panel.std
pandas.Panel.sub
Panel.sub(other, axis=0)
Subtraction of series and other, element-wise (binary operator sub). Equivalent to panel - other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rsub
pandas.Panel.subtract
Panel.subtract(other, axis=0)
Subtraction of series and other, element-wise (binary operator sub). Equivalent to panel - other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rsub
pandas.Panel.sum
pandas.Panel.swapaxes
pandas.Panel.swaplevel
pandas.Panel.tail
Panel.tail(n=5)
pandas.Panel.take
pandas.Panel.toLong
Panel.toLong(*args, **kwargs)
pandas.Panel.to_clipboard
Notes
pandas.Panel.to_dense
Panel.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.Panel.to_excel
Notes
Keyword arguments (and na_rep) are passed to the to_excel method for each DataFrame written.
pandas.Panel.to_frame
Panel.to_frame(filter_observations=True)
Transform wide format into long (stacked) format as DataFrame whose columns are the Panels items and
whose index is a MultiIndex formed of the Panels major and minor axes.
Parameters filter_observations : boolean, default True
Drop (major, minor) pairs without a complete set of observations across all the
items
Returns y : DataFrame
pandas.Panel.to_hdf
pandas.Panel.to_json
table : dict like {'schema': {schema}, 'data': {data}} describing the data, and the
data component is like orient='records'.
Changed in version 0.20.0.
date_format : {None, 'epoch', 'iso'}
Type of date conversion. 'epoch' = epoch milliseconds, 'iso' = ISO8601. The default
depends on the orient. For orient='table', the default is 'iso'. For all other orients,
the default is 'epoch'.
double_precision : The number of decimal places to use when encoding
floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : string, default 'ms' (milliseconds)
The time unit to encode to, governs timestamp and ISO8601 precision. One of
's', 'ms', 'us', 'ns' for second, millisecond, microsecond, and nanosecond respectively.
default_handler : callable, default None
Handler to call if object cannot otherwise be converted to a suitable format for
JSON. Should receive a single argument which is the object to convert and return
a serialisable object.
lines : boolean, default False
If orient is 'records', write out line delimited json format. Will throw ValueError
if incorrect orient since others are not list-like.
New in version 0.19.0.
Returns same type as input object with filtered info axis
See also:
pd.read_json
Examples
Encoding/decoding a DataFrame using 'records' formatted JSON. Note that index labels are not preserved
with this encoding.
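The frame used here is assumed to be:
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
...                   index=['row 1', 'row 2'],
...                   columns=['col 1', 'col 2'])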
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
"primaryKey": "index",
"pandas_version": "0.20.0"},
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
pandas.Panel.to_long
Panel.to_long(*args, **kwargs)
pandas.Panel.to_msgpack
pandas.Panel.to_pickle
Panel.to_pickle(path, compression='infer')
Pickle (serialize) object to input file path.
Parameters path : string
File path
compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
a string representing the compression to use in the output file
New in version 0.20.0.
pandas.Panel.to_sparse
Panel.to_sparse(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as sparsifying is not supported for Panel objects and will
raise an error.
Convert to SparsePanel
pandas.Panel.to_sql
pandas.Panel.to_xarray
Panel.to_xarray()
Return an xarray object from the pandas object.
Returns a DataArray for a Series
a Dataset for a DataFrame
a DataArray for higher dims
Notes
Examples
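As in the DataFrame.to_xarray entry above, these examples assume:
>>> df = pd.DataFrame({'A': [1, 1, 2],
...                    'B': ['foo', 'bar', 'foo'],
...                    'C': np.arange(4.0, 7.0)})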
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
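The second call again assumes an intermediate re-indexing step:
>>> df = df.set_index(['B', 'A'])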
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4,3,2),
...              items=list('ABCD'),
...              major_axis=pd.date_range('20130101', periods=3),
...              minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
Coordinates:
* items (items) object 'A' 'B' 'C' 'D'
* major_axis (major_axis) datetime64[ns] 2013-01-01 2013-01-02 2013-01-03
pandas.Panel.transpose
Panel.transpose(*args, **kwargs)
Permute the dimensions of the Panel
Examples
>>> p.transpose(2, 0, 1)
>>> p.transpose(2, 0, 1, copy=True)
pandas.Panel.truediv
Panel.truediv(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters other : DataFrame or Panel
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns Panel
See also:
Panel.rtruediv
pandas.Panel.truncate
pandas.Panel.tshift
pandas.Panel.tz_convert
pandas.Panel.tz_localize
pandas.Panel.update
pandas.Panel.var
pandas.Panel.where
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
pandas.Panel.xs
Panel.xs(key, axis=1)
Return slice of panel along selected axis
Parameters key : object
Label
axis : {items, major, minor}, default 1 (major)
Returns y : ndim(self)-1
Notes
Axes
items: axis 0; each item corresponds to a DataFrame contained inside
major_axis: axis 1; the index (rows) of each of the DataFrames
minor_axis: axis 2; the columns of each of the DataFrames
34.5.3 Conversion
Panel.get_value(*args, **kwargs)    Quickly retrieve single value at (item, major, minor) location
Panel.set_value(*args, **kwargs)    Quickly set single value at (item, major, minor) location
34.5.5.1 pandas.Panel.__iter__
Panel.__iter__()
Iterate over info axis
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
Panel.apply(func[, axis])                          Applies function along axis (or axes) of the Panel.
Panel.groupby(function[, axis])                    Group data on given axis, returning GroupBy object.
Panel.dropna([axis, how, inplace])                 Drop 2D from panel, holding passed axis constant.
Panel.fillna([value, method, axis, inplace, ...])  Fill NA/NaN values using the specified method.
Panel.join(other[, how, lsuffix, rsuffix])         Join items with other Panel either on major and minor axes column.
Panel.update(other[, join, overwrite, ...])        Modify Panel in place using non-NA values from passed Panel, or object coercible to Panel.
Panel.from_dict(data[, intersect, orient, dtype])  Construct Panel from dict of DataFrame objects.
Panel.to_pickle(path[, compression])               Pickle (serialize) object to input file path.
Panel.to_excel(path[, na_rep, engine])             Write each DataFrame in Panel to a separate excel sheet.
Panel.to_hdf(path_or_buf, key, **kwargs)           Write the contained data to an HDF5 file using HDFStore.
Panel.to_sparse(*args, **kwargs)                   NOT IMPLEMENTED: do not call this method, as sparsifying is not supported for Panel objects and will raise an error.
Panel.to_frame([filter_observations])              Transform wide format into long (stacked) format as DataFrame whose columns are the Panel's items and whose index is a MultiIndex formed of the Panel's major and minor axes.
Panel.to_xarray()                                  Return an xarray object from the pandas object.
Panel.to_clipboard([excel, sep])                   Attempt to write text representation of object to the system clipboard; this can be pasted into Excel, for example.
34.6 Index
Many of these methods or variants thereof are available on the objects that contain an index (Series/DataFrame)
and those should most likely be used before calling these methods directly.
34.6.1 pandas.Index
class pandas.Index
Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas
objects
Parameters data : array-like (1-dimensional)
dtype : NumPy dtype (default: object)
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
tupleize_cols : bool (default: True)
Notes
An Index instance can only contain hashable objects.
Attributes
34.6.1.1 pandas.Index.T
Index.T
return the transpose, which is by definition self
34.6.1.2 pandas.Index.asi8
Index.asi8 = None
34.6.1.3 pandas.Index.base
Index.base
return the base object if the memory of the underlying data is shared
34.6.1.4 pandas.Index.data
Index.data
return the data pointer of the underlying data
34.6.1.5 pandas.Index.dtype
Index.dtype = None
34.6.1.6 pandas.Index.dtype_str
Index.dtype_str = None
34.6.1.7 pandas.Index.empty
Index.empty
34.6.1.8 pandas.Index.flags
Index.flags
34.6.1.9 pandas.Index.has_duplicates
Index.has_duplicates
34.6.1.10 pandas.Index.hasnans
Index.hasnans = None
34.6.1.11 pandas.Index.inferred_type
Index.inferred_type = None
34.6.1.12 pandas.Index.is_all_dates
Index.is_all_dates = None
34.6.1.13 pandas.Index.is_monotonic
Index.is_monotonic
alias for is_monotonic_increasing (deprecated)
34.6.1.14 pandas.Index.is_monotonic_decreasing
Index.is_monotonic_decreasing
return if the index is monotonic decreasing (only equal or decreasing) values.
34.6.1.15 pandas.Index.is_monotonic_increasing
Index.is_monotonic_increasing
return if the index is monotonic increasing (only equal or increasing) values.
34.6.1.16 pandas.Index.is_unique
Index.is_unique = None
34.6.1.17 pandas.Index.itemsize
Index.itemsize
return the size of the dtype of the item of the underlying data
34.6.1.18 pandas.Index.name
Index.name = None
34.6.1.19 pandas.Index.names
Index.names
34.6.1.20 pandas.Index.nbytes
Index.nbytes
return the number of bytes in the underlying data
34.6.1.21 pandas.Index.ndim
Index.ndim
return the number of dimensions of the underlying data, by definition 1
34.6.1.22 pandas.Index.nlevels
Index.nlevels
34.6.1.23 pandas.Index.shape
Index.shape
return a tuple of the shape of the underlying data
34.6.1.24 pandas.Index.size
Index.size
return the number of elements in the underlying data
34.6.1.25 pandas.Index.strides
Index.strides
return the strides of the underlying data
34.6.1.26 pandas.Index.values
Index.values
return the underlying data as an ndarray
Methods
34.6.1.27 pandas.Index.all
Index.all(*args, **kwargs)
Return whether all elements are True
Parameters All arguments to numpy.all are accepted.
Returns all : bool or array_like (if axis is specified)
A single element array_like may be converted to bool.
34.6.1.28 pandas.Index.any
Index.any(*args, **kwargs)
Return whether any element is True
Parameters All arguments to numpy.any are accepted.
Returns any : bool or array_like (if axis is specified)
A single element array_like may be converted to bool.
34.6.1.29 pandas.Index.append
Index.append(other)
Append a collection of Index options together
34.6.1.30 pandas.Index.argmax
Index.argmax(axis=None)
return a ndarray of the maximum argument indexer
See also:
numpy.ndarray.argmax
34.6.1.31 pandas.Index.argmin
Index.argmin(axis=None)
return a ndarray of the minimum argument indexer
See also:
numpy.ndarray.argmin
34.6.1.32 pandas.Index.argsort
Index.argsort(*args, **kwargs)
Returns the indices that would sort the index and its underlying data.
Returns argsorted : numpy array
See also:
numpy.ndarray.argsort
34.6.1.33 pandas.Index.asof
Index.asof(label)
For a sorted index, return the most recent label up to and including the passed label. Return NaN if not
found.
See also:
34.6.1.34 pandas.Index.asof_locs
Index.asof_locs(where, mask)
where : array of timestamps
mask : array of booleans where data is not NA
34.6.1.35 pandas.Index.astype
Index.astype(dtype, copy=True)
Create an Index with values cast to dtypes. The class of a new Index is determined by dtype. When
conversion is impossible, a ValueError exception is raised.
34.6.1.36 pandas.Index.contains
Index.contains(key)
return a boolean if this key is IN the index
Parameters key : object
Returns boolean
34.6.1.37 pandas.Index.copy
Notes
In most cases, there should be no functional difference from using deep, but if deep is passed it will
attempt to deepcopy.
34.6.1.38 pandas.Index.delete
Index.delete(loc)
Make new Index with passed location(-s) deleted
Returns new_index : Index
34.6.1.39 pandas.Index.difference
Index.difference(other)
Return a new Index with elements from the index that are not in other.
This is the set difference of two Index objects. It's sorted if sorting is possible.
Parameters other : Index or array-like
Returns difference : Index
Examples
34.6.1.40 pandas.Index.drop
Index.drop(labels, errors='raise')
Make new Index with passed list of labels deleted
Parameters labels : array-like
errors : {ignore, raise}, default raise
If ignore, suppress error and existing labels are dropped.
Returns dropped : Index
34.6.1.41 pandas.Index.drop_duplicates
Index.drop_duplicates(keep='first')
Return Index with duplicate values removed
Parameters keep : {first, last, False}, default first
first : Drop duplicates except for the first occurrence.
last : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
Returns deduplicated : Index
34.6.1.42 pandas.Index.dropna
Index.dropna(how='any')
Return Index without NA/NaN values
Parameters how : {any, all}, default any
If the Index is a MultiIndex, drop the value when any or all levels are NaN.
Returns valid : Index
34.6.1.43 pandas.Index.duplicated
Index.duplicated(keep='first')
Return boolean np.ndarray denoting duplicate values
Parameters keep : {first, last, False}, default first
first : Mark duplicates as True except for the first occurrence.
last : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
34.6.1.44 pandas.Index.equals
Index.equals(other)
Determines if two Index objects contain the same elements.
34.6.1.45 pandas.Index.factorize
Index.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable
Parameters sort : boolean, default False
Sort by values
na_sentinel: int, default -1
Value to mark not found
Returns labels : the indexer to the original array
uniques : the unique Index
34.6.1.46 pandas.Index.fillna
Index.fillna(value=None, downcast=None)
Fill NA/NaN values with the specified value
Parameters value : scalar
Scalar value to use to fill holes (e.g. 0). This value cannot be a list-like.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible, or the string 'infer' which
will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
Returns filled : Index
34.6.1.47 pandas.Index.format
34.6.1.48 pandas.Index.get_duplicates
Index.get_duplicates()
34.6.1.49 pandas.Index.get_indexer
Examples
34.6.1.50 pandas.Index.get_indexer_for
Index.get_indexer_for(target, **kwargs)
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or
get_indexer_non_unique as appropriate.
34.6.1.51 pandas.Index.get_indexer_non_unique
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index. The indexer should be then used as an
input to ndarray.take to align the current data to the new index.
Parameters target : Index
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
missing : ndarray of int
An indexer into the target of the values not found. These correspond to the -1 in
the indexer array
34.6.1.52 pandas.Index.get_level_values
Index.get_level_values(level)
Return an Index of values for requested level, equal to the length of the index
Parameters level : int
Returns values : Index
34.6.1.53 pandas.Index.get_loc
34.6.1.54 pandas.Index.get_slice_bound
34.6.1.55 pandas.Index.get_value
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you're doing.
34.6.1.56 pandas.Index.get_values
Index.get_values()
return the underlying data as an ndarray
34.6.1.57 pandas.Index.groupby
Index.groupby(values)
Group the index labels by a given array of values.
Parameters values : array
Values used to determine the groups.
Returns groups : dict
{group name -> group labels}
34.6.1.58 pandas.Index.holds_integer
Index.holds_integer()
34.6.1.59 pandas.Index.identical
Index.identical(other)
Similar to equals, but check that other comparable attributes are also equal
34.6.1.60 pandas.Index.insert
Index.insert(loc, item)
Make new Index inserting new item at location. Follows Python list.append semantics for negative values
Parameters loc : int
item : object
Returns new_index : Index
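Examples
A minimal, assumed example:
>>> pd.Index([1, 2, 4]).insert(2, 3)
Int64Index([1, 2, 3, 4], dtype='int64')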
34.6.1.61 pandas.Index.intersection
Index.intersection(other)
Form the intersection of two Index objects.
This returns a new Index with elements common to the index and other, preserving the order of the calling
index.
Parameters other : Index or array-like
Returns intersection : Index
Examples
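A minimal, assumed example; the result preserves the calling index's order:
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.intersection(idx2)
Int64Index([3, 4], dtype='int64')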
34.6.1.62 pandas.Index.is_
Index.is_(other)
More flexible, faster check like is but that works through views
Note: this is not the same as Index.identical(), which checks that metadata is also the same.
Parameters other : object
other object to compare against.
Returns True if both have same underlying data, False otherwise : bool
34.6.1.63 pandas.Index.is_boolean
Index.is_boolean()
34.6.1.64 pandas.Index.is_categorical
Index.is_categorical()
34.6.1.65 pandas.Index.is_floating
Index.is_floating()
34.6.1.66 pandas.Index.is_integer
Index.is_integer()
34.6.1.67 pandas.Index.is_interval
Index.is_interval()
34.6.1.68 pandas.Index.is_lexsorted_for_tuple
Index.is_lexsorted_for_tuple(tup)
34.6.1.69 pandas.Index.is_mixed
Index.is_mixed()
34.6.1.70 pandas.Index.is_numeric
Index.is_numeric()
34.6.1.71 pandas.Index.is_object
Index.is_object()
34.6.1.72 pandas.Index.is_type_compatible
Index.is_type_compatible(kind)
34.6.1.73 pandas.Index.isin
Index.isin(values, level=None)
Compute boolean array of whether each index value is found in the passed set of values.
Parameters values : set or list-like
Sought values.
New in version 0.18.1.
Support for values as a set
level : str or int, optional
Name or position of the index level to use (if the index is a MultiIndex).
Returns is_contained : ndarray (boolean dtype)
Notes
If level is specified:
if it is the name of one and only one index level, use that level;
otherwise it should be a number indicating level position.
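Examples
An assumed illustration (boolean-array formatting may differ by numpy version):
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.isin(['a', 'c'])
array([ True, False,  True], dtype=bool)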
34.6.1.74 pandas.Index.isnull
Index.isnull()
Detect missing values
New in version 0.20.0.
Returns a boolean array of whether my values are null
See also:
34.6.1.75 pandas.Index.item
Index.item()
return the first element of the underlying data as a python scalar
34.6.1.76 pandas.Index.join
34.6.1.77 pandas.Index.map
Index.map(mapper)
Apply mapper function to an index.
Parameters mapper : callable
Function to be applied.
Returns applied : Union[Index, MultiIndex], inferred
The output of the mapping function applied to the index. If the function returns a
tuple with more than one element a MultiIndex will be returned.
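Examples
A minimal, assumed example; as of 0.20 the result is an Index rather than an ndarray:
>>> pd.Index([1, 2, 3]).map(lambda x: x * 10)
Int64Index([10, 20, 30], dtype='int64')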
34.6.1.78 pandas.Index.max
Index.max()
The maximum value of the object
34.6.1.79 pandas.Index.memory_usage
Index.memory_usage(deep=False)
Memory usage of my values
Parameters deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns bytes used
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False
34.6.1.80 pandas.Index.min
Index.min()
The minimum value of the object
34.6.1.81 pandas.Index.notnull
Index.notnull()
Reverse of isnull
New in version 0.20.0.
Returns a boolean array of whether my values are not null
See also:
34.6.1.82 pandas.Index.nunique
Index.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don't include NaN in the count.
Returns nunique : int
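Examples
An assumed illustration (numpy imported as np):
>>> idx = pd.Index([1, 2, 2, np.nan])
>>> idx.nunique()
2
>>> idx.nunique(dropna=False)
3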
34.6.1.83 pandas.Index.putmask
Index.putmask(mask, value)
return a new Index of the values set with the mask
See also:
numpy.ndarray.putmask
34.6.1.84 pandas.Index.ravel
Index.ravel(order='C')
return an ndarray of the flattened values of the underlying data
See also:
numpy.ndarray.ravel
34.6.1.85 pandas.Index.reindex
Index.reindex(target, method=None, level=None, limit=None, tolerance=None)
Create index with target's values (move/add/delete values as necessary)
Parameters target : an iterable
Returns new_index : pd.Index
Resulting index
indexer : np.ndarray or None
Indices of output values in original index
34.6.1.86 pandas.Index.rename
Index.rename(name, inplace=False)
Set new names on index. Defaults to returning new index.
Parameters name : str or list
name to set
inplace : bool
if True, mutates in place
Returns new index (of same type and class...etc) [if inplace, returns None]
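Examples
A minimal, assumed example:
>>> idx = pd.Index([1, 2, 3], name='x')
>>> idx.rename('y')
Int64Index([1, 2, 3], dtype='int64', name='y')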
34.6.1.87 pandas.Index.repeat
34.6.1.88 pandas.Index.reshape
Index.reshape(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as reshaping is not supported for Index objects and will
raise an error.
Reshape an Index.
34.6.1.89 pandas.Index.searchsorted
sorter : 1-D array-like, optional
Optional array of integer indices that sort self into ascending order. They are
typically the result of np.argsort.
Returns indices : array of ints
Array of insertion points with the same shape as value.
See also:
numpy.searchsorted
Notes
Examples
>>> x = pd.Index([1, 2, 3])                    # assumed setup, consistent with the output shown
>>> x.searchsorted(4)
array([3])
>>> x = pd.Index(['apple', 'cheese', 'milk'])  # assumed setup
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
>>> x.searchsorted(['bread'])
array([1])
34.6.1.90 pandas.Index.set_names
Examples
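A minimal, assumed example:
>>> idx = pd.Index([1, 2, 3])
>>> idx.set_names('quarter')
Int64Index([1, 2, 3], dtype='int64', name='quarter')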
34.6.1.91 pandas.Index.set_value
34.6.1.92 pandas.Index.shift
Index.shift(periods=1, freq=None)
Shift Index containing datetime objects by input number of periods and DateOffset
Returns shifted : Index
34.6.1.93 pandas.Index.slice_indexer
Notes
This function assumes that the data is sorted, so use at your own peril
34.6.1.94 pandas.Index.slice_locs
34.6.1.95 pandas.Index.sort
Index.sort(*args, **kwargs)
34.6.1.96 pandas.Index.sort_values
Index.sort_values(return_indexer=False, ascending=True)
Return sorted copy of Index
34.6.1.97 pandas.Index.sortlevel
34.6.1.98 pandas.Index.str
Index.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python's string methods, with some inspiration from R's stringr package.
Examples
>>> s = pd.Series(['a_b', 'c_d'])  # assumed setup; any string Series works
>>> s.str.split('_')
>>> s.str.replace('_', '')
34.6.1.99 pandas.Index.summary
Index.summary(name=None)
34.6.1.100 pandas.Index.sym_diff
Index.sym_diff(*args, **kwargs)
34.6.1.101 pandas.Index.symmetric_difference
Index.symmetric_difference(other, result_name=None)
Compute the symmetric difference of two Index objects. It's sorted if sorting is possible.
Parameters other : Index or array-like
result_name : str
Returns symmetric_difference : Index
Notes
symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent
to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped.
Examples
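A minimal, assumed example:
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64')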
34.6.1.102 pandas.Index.take
34.6.1.103 pandas.Index.to_datetime
Index.to_datetime(dayfirst=False)
DEPRECATED: use pandas.to_datetime() instead.
For an Index containing strings or datetime.datetime objects, attempt conversion to DatetimeIndex
34.6.1.104 pandas.Index.to_native_types
Index.to_native_types(slicer=None, **kwargs)
Format specified values of self and return them.
Parameters slicer : int, array-like
An indexer into self that specifies which values are used in the formatting process.
kwargs : dict
Options for specifying how the values should be formatted. These options include
the following:
1. na_rep [str] The value that serves as a placeholder for NULL values
2. quoting [bool or None] Whether or not there are quoted values in self
3. date_format [str] The format used to represent date-like values
34.6.1.105 pandas.Index.to_series
Index.to_series(**kwargs)
Create a Series with both index and values equal to the index keys; useful with map for returning an indexer
based on an index
Returns Series : dtype will be based on the type of the Index values.
34.6.1.106 pandas.Index.tolist
Index.tolist()
return a list of the Index values
34.6.1.107 pandas.Index.transpose
Index.transpose(*args, **kwargs)
return the transpose, which is by definition self
34.6.1.108 pandas.Index.union
Index.union(other)
Form the union of two Index objects and sorts if possible.
Parameters other : Index or array-like
Returns union : Index
Examples
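A minimal, assumed example:
>>> idx1 = pd.Index([1, 2, 3])
>>> idx2 = pd.Index([2, 3, 4])
>>> idx1.union(idx2)
Int64Index([1, 2, 3, 4], dtype='int64')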
34.6.1.109 pandas.Index.unique
Index.unique()
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash
table-based unique.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
unique, Index.unique, Series.unique
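Examples
An assumed illustration; note the order of first appearance is kept:
>>> pd.Index([2, 1, 2, 3]).unique()
Int64Index([2, 1, 3], dtype='int64')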
34.6.1.110 pandas.Index.value_counts
Index.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)
Returns object containing counts of unique values.
Parameters normalize : boolean, default False
If True then the object returned will contain the relative frequencies of the unique
values.
sort : boolean, default True
Sort by values
ascending : boolean, default False
Sort in ascending order
bins : integer, optional
Rather than count values, group them into half-open bins, a convenience for
pd.cut, only works with numeric data
dropna : boolean, default True
Don't include counts of NaN.
Returns counts : Series
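Examples
A minimal, assumed example:
>>> pd.Index(['a', 'b', 'a']).value_counts()
a    2
b    1
dtype: int64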
34.6.1.111 pandas.Index.view
Index.view(cls=None)
34.6.1.112 pandas.Index.where
Index.where(cond, other=None)
New in version 0.19.0.
Return an Index of same shape as self and whose corresponding entries are from self where cond is True
and otherwise are from other.
Parameters cond : boolean array-like with the same length as self
other : scalar, or array-like
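Examples
An assumed illustration; entries where cond is False are taken from other:
>>> idx = pd.Index([1, 2, 3, 4])
>>> idx.where(idx > 2, 0)
Int64Index([0, 0, 3, 4], dtype='int64')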
34.6.2 Attributes
34.6.3 Modifying and Computations
Index.take(indices[, axis, allow_fill, ...]) return a new Index of the values selected by the indices
Index.putmask(mask, value) return a new Index of the values set with the mask
Index.set_names(names[, level, inplace]) Set new names on index.
Index.unique() Return unique values in the object.
Index.nunique([dropna]) Return number of unique elements in the object.
Index.value_counts([normalize, sort, ...]) Returns object containing counts of unique values.
34.6.5 Conversion
34.6.6 Sorting
Index.argsort(*args, **kwargs)  Returns the indices that would sort the index and its underlying data.
Index.sort_values([return_indexer, ascending]) Return sorted copy of Index
34.6.9 Selecting
Index.get_indexer(target[, method, limit, ...])  Compute indexer and mask for new index given the current index.
Index.get_indexer_non_unique(target)  Compute indexer and mask for new index given the current index.
Index.get_level_values(level)  Return an Index of values for requested level, equal to the length of the index
Index.get_loc(key[, method, tolerance])  Get integer location for requested label.
Index.get_value(series, key)  Fast lookup of value from 1-dimensional ndarray.
Index.isin(values[, level])  Compute boolean array of whether each index value is found in the passed set of values.
Index.slice_indexer([start, end, step, kind])  For an ordered Index, compute the slice indexer for input labels and step
Index.slice_locs([start, end, step, kind])  Compute slice locations for input labels.
34.7 CategoricalIndex
34.7.1 pandas.CategoricalIndex
class pandas.CategoricalIndex
Immutable Index implementing an ordered, sliceable set. CategoricalIndex represents a sparsely populated
Index with an underlying Categorical.
New in version 0.16.1.
Parameters data : array-like or Categorical, (1-dimensional)
categories : optional, array-like
categories for the CategoricalIndex
ordered : boolean,
designating if the categories are ordered
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
See also:
Categorical, Index
CategoricalIndex.codes
CategoricalIndex.categories
CategoricalIndex.ordered
CategoricalIndex.rename_categories(*args, **kwargs)  Renames categories.
CategoricalIndex.reorder_categories(*args, **kwargs)  Reorders categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs)  Add new categories.
CategoricalIndex.remove_categories(*args, **kwargs)  Removes the specified categories.
CategoricalIndex.remove_unused_categories(*args, **kwargs)  Removes categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs)  Sets the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs)  Sets the Categorical to be ordered
CategoricalIndex.as_unordered(*args, **kwargs)  Sets the Categorical to be unordered
34.7.2.1 pandas.CategoricalIndex.codes
CategoricalIndex.codes
34.7.2.2 pandas.CategoricalIndex.categories
CategoricalIndex.categories
34.7.2.3 pandas.CategoricalIndex.ordered
CategoricalIndex.ordered
34.7.2.4 pandas.CategoricalIndex.rename_categories
CategoricalIndex.rename_categories(*args, **kwargs)
Renames categories.
The new categories have to be a list-like object. All items must be unique and the number of items in the new
categories must be the same as the number of items in the old categories.
Parameters new_categories : Index-like
The renamed categories.
inplace : boolean (default: False)
Whether or not to rename the categories inplace or return a copy of this categorical
with renamed categories.
Returns cat : Categorical with renamed categories or None if inplace.
Raises ValueError
If the new categories do not have the same number of items as the current categories
or do not validate as categories
See also:
reorder_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
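Examples
An assumed sketch (the exact repr varies slightly between versions):
>>> ci = pd.CategoricalIndex(['a', 'b', 'a'])
>>> ci.rename_categories(['x', 'y'])
CategoricalIndex(['x', 'y', 'x'], categories=['x', 'y'], ordered=False, dtype='category')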
34.7.2.5 pandas.CategoricalIndex.reorder_categories
CategoricalIndex.reorder_categories(*args, **kwargs)
Reorders categories as specified in new_categories.
new_categories need to include all old categories and no new category items.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, optional
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns cat : Categorical with reordered categories or None if inplace.
Raises ValueError
If the new categories do not contain all old category items or any new ones
See also:
rename_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.7.2.6 pandas.CategoricalIndex.add_categories
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
new_categories will be included at the last/highest place in the categories and will be unused directly after this
call.
Parameters new_categories : category or list-like of category
The new categories to be included.
inplace : boolean (default: False)
Whether or not to add the categories inplace or return a copy of this categorical with
added categories.
Returns cat : Categorical with new categories added or None if inplace.
Raises ValueError
If the new categories include old categories or do not validate as categories
See also:
rename_categories, reorder_categories, remove_categories,
remove_unused_categories, set_categories
34.7.2.7 pandas.CategoricalIndex.remove_categories
CategoricalIndex.remove_categories(*args, **kwargs)
Removes the specified categories.
removals must be included in the old categories. Values which were in the removed categories will be set to
NaN
Parameters removals : category or list of categories
The categories which should be removed.
inplace : boolean (default: False)
Whether or not to remove the categories inplace or return a copy of this categorical
with removed categories.
Returns cat : Categorical with removed categories or None if inplace.
Raises ValueError
If the removals are not contained in the categories
See also:
rename_categories, reorder_categories, add_categories,
remove_unused_categories, set_categories
34.7.2.8 pandas.CategoricalIndex.remove_unused_categories
CategoricalIndex.remove_unused_categories(*args, **kwargs)
Removes categories which are not used.
Parameters inplace : boolean (default: False)
Whether or not to drop unused categories inplace or return a copy of this categorical
with unused categories dropped.
Returns cat : Categorical with unused categories dropped or None if inplace.
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
set_categories
34.7.2.9 pandas.CategoricalIndex.set_categories
CategoricalIndex.set_categories(*args, **kwargs)
Sets the categories to the specified new_categories.
new_categories can include new categories (which will result in unused categories) or remove old categories
(which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more
items than in the old categories will result in values set to NaN or in unused categories respectively).
This method can be used to perform more than one action of adding, removing, and reordering simultaneously
and is therefore faster than performing the individual steps via the more specialised methods.
On the other hand this method does not do checks (e.g., whether the old categories are included in the new
categories on a reorder), which can result in surprising changes, for example when using special string dtypes
on Python 3, which does not consider an S1 string equal to a single-char Python string.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean (default: False)
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
rename : boolean (default: False)
Whether or not the new_categories should be considered as a rename of the old
categories or as reordered categories.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns cat : Categorical with reordered categories or None if inplace.
Raises ValueError
If new_categories does not validate as categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories
34.7.2.10 pandas.CategoricalIndex.as_ordered
CategoricalIndex.as_ordered(*args, **kwargs)
Sets the Categorical to be ordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to True
34.7.2.11 pandas.CategoricalIndex.as_unordered
CategoricalIndex.as_unordered(*args, **kwargs)
Sets the Categorical to be unordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to False
34.8 IntervalIndex
34.8.1 pandas.IntervalIndex
class pandas.IntervalIndex
Immutable Index implementing an ordered, sliceable set. IntervalIndex represents an Index of intervals that are
all closed on the same side.
New in version 0.20.0.
Warning: the indexing behaviors are provisional and may change in a future version of pandas.
See also:
Index
IntervalIndex.from_arrays(left, right[, ...])  Construct an IntervalIndex from a left and right array
IntervalIndex.from_tuples(data[, closed, ...])  Construct an IntervalIndex from a list/array of tuples
IntervalIndex.from_breaks(breaks[, closed, ...])  Construct an IntervalIndex from an array of splits
IntervalIndex.from_intervals(data[, name, copy])  Construct an IntervalIndex from a 1d array of Interval objects
34.8.2.1 pandas.IntervalIndex.from_arrays
Examples
34.8.2.2 pandas.IntervalIndex.from_tuples
34.8.2.3 pandas.IntervalIndex.from_breaks
Examples
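A minimal, assumed example:
>>> pd.IntervalIndex.from_breaks([0, 1, 2, 3])
IntervalIndex(left=[0, 1, 2],
              right=[1, 2, 3],
              closed='right')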
34.8.2.4 pandas.IntervalIndex.from_intervals
Examples
The generic Index constructor works identically when it infers an array of all intervals:
>>> Index([Interval(0, 1), Interval(1, 2)])
IntervalIndex(left=[0, 1],
right=[1, 2],
closed='right')
34.9 MultiIndex
34.9.1 pandas.MultiIndex
class pandas.MultiIndex
A multi-level, or hierarchical, index object for pandas objects
Parameters levels : sequence of arrays
The unique labels for each level
labels : sequence of arrays
Integers for each level designating which label at each location
sortorder : optional int
Level of sortedness (must be lexicographically sorted by that level)
names : optional sequence of objects
Names for each of the index levels. (name is accepted for compat)
copy : boolean, default False
Copy the meta-data
verify_integrity : boolean, default True
Check that the levels/labels are consistent and valid
Attributes
34.9.1.1 pandas.MultiIndex.T
MultiIndex.T
return the transpose, which is by definition self
34.9.1.2 pandas.MultiIndex.asi8
MultiIndex.asi8 = None
34.9.1.3 pandas.MultiIndex.base
MultiIndex.base
return the base object if the memory of the underlying data is shared
34.9.1.4 pandas.MultiIndex.data
MultiIndex.data
return the data pointer of the underlying data
34.9.1.5 pandas.MultiIndex.dtype
MultiIndex.dtype = None
34.9.1.6 pandas.MultiIndex.dtype_str
MultiIndex.dtype_str = None
34.9.1.7 pandas.MultiIndex.empty
MultiIndex.empty
34.9.1.8 pandas.MultiIndex.flags
MultiIndex.flags
34.9.1.9 pandas.MultiIndex.has_duplicates
MultiIndex.has_duplicates
34.9.1.10 pandas.MultiIndex.hasnans
MultiIndex.hasnans = None
34.9.1.11 pandas.MultiIndex.inferred_type
MultiIndex.inferred_type = None
34.9.1.12 pandas.MultiIndex.is_all_dates
MultiIndex.is_all_dates
34.9.1.13 pandas.MultiIndex.is_monotonic
MultiIndex.is_monotonic = None
34.9.1.14 pandas.MultiIndex.is_monotonic_decreasing
MultiIndex.is_monotonic_decreasing
return if the index is monotonic decreasing (only equal or decreasing) values.
34.9.1.15 pandas.MultiIndex.is_monotonic_increasing
MultiIndex.is_monotonic_increasing = None
34.9.1.16 pandas.MultiIndex.is_unique
MultiIndex.is_unique = None
34.9.1.17 pandas.MultiIndex.itemsize
MultiIndex.itemsize
return the size of the dtype of the item of the underlying data
34.9.1.18 pandas.MultiIndex.labels
MultiIndex.labels
34.9.1.19 pandas.MultiIndex.levels
MultiIndex.levels
34.9.1.20 pandas.MultiIndex.levshape
MultiIndex.levshape
34.9.1.21 pandas.MultiIndex.lexsort_depth
MultiIndex.lexsort_depth = None
34.9.1.22 pandas.MultiIndex.name
MultiIndex.name = None
34.9.1.23 pandas.MultiIndex.names
MultiIndex.names
Names of levels in MultiIndex
34.9.1.24 pandas.MultiIndex.nbytes
MultiIndex.nbytes = None
34.9.1.25 pandas.MultiIndex.ndim
MultiIndex.ndim
return the number of dimensions of the underlying data, by definition 1
34.9.1.26 pandas.MultiIndex.nlevels
MultiIndex.nlevels
34.9.1.27 pandas.MultiIndex.shape
MultiIndex.shape
return a tuple of the shape of the underlying data
34.9.1.28 pandas.MultiIndex.size
MultiIndex.size
return the number of elements in the underlying data
34.9.1.29 pandas.MultiIndex.strides
MultiIndex.strides
return the strides of the underlying data
34.9.1.30 pandas.MultiIndex.values
MultiIndex.values
Methods
all([other])
any([other])
append(other)  Append a collection of Index options together
argmax([axis])  return a ndarray of the maximum argument indexer
argmin([axis])  return a ndarray of the minimum argument indexer
argsort(*args, **kwargs)
asof(label)  For a sorted index, return the most recent label up to and including the passed label.
asof_locs(where, mask)  where : array of timestamps
astype(dtype[, copy])  Create an Index with values cast to dtypes.
contains(key)  return a boolean if this key is IN the index
copy([names, dtype, levels, labels, deep, ...])  Make a copy of this object.
delete(loc)  Make new index with passed location deleted
difference(other)  Compute sorted set difference of two MultiIndex objects
drop(labels[, level, errors])  Make new MultiIndex with passed list of labels deleted
drop_duplicates([keep])  Return Index with duplicate values removed
droplevel([level])  Return Index with requested level removed.
dropna([how])  Return Index without NA/NaN values
duplicated([keep])  Return boolean np.ndarray denoting duplicate values
equal_levels(other)  Return True if the levels of both MultiIndex objects are the same
equals(other)  Determines if two MultiIndex objects have the same labeling information
factorize([sort, na_sentinel])  Encode the object as an enumerated type or categorical variable
fillna([value, downcast])  Fill NA/NaN values with the specified value
format([space, sparsify, adjoin, names, ...])
from_arrays(arrays[, sortorder, names])  Convert arrays to MultiIndex
from_product(iterables[, sortorder, names])  Make a MultiIndex from the cartesian product of multiple iterables
from_tuples(tuples[, sortorder, names])  Convert list of tuples to MultiIndex
get_duplicates()
get_indexer(target[, method, limit, tolerance])  Compute indexer and mask for new index given the current index.
get_indexer_for(target, **kwargs)  guaranteed return of an indexer even when non-unique
get_indexer_non_unique(target)  Compute indexer and mask for new index given the current index.
get_level_values(level)  Return vector of label values for requested level
34.9.1.31 pandas.MultiIndex.all
MultiIndex.all(other=None)
34.9.1.32 pandas.MultiIndex.any
MultiIndex.any(other=None)
34.9.1.33 pandas.MultiIndex.append
MultiIndex.append(other)
Append a collection of Index options together
Parameters other : Index or list/tuple of indices
Returns appended : Index
34.9.1.34 pandas.MultiIndex.argmax
MultiIndex.argmax(axis=None)
return a ndarray of the maximum argument indexer
See also:
numpy.ndarray.argmax
34.9.1.35 pandas.MultiIndex.argmin
MultiIndex.argmin(axis=None)
return a ndarray of the minimum argument indexer
See also:
numpy.ndarray.argmin
34.9.1.36 pandas.MultiIndex.argsort
MultiIndex.argsort(*args, **kwargs)
34.9.1.37 pandas.MultiIndex.asof
MultiIndex.asof(label)
For a sorted index, return the most recent label up to and including the passed label. Return NaN if not
found.
See also:
34.9.1.38 pandas.MultiIndex.asof_locs
MultiIndex.asof_locs(where, mask)
where : array of timestamps
mask : array of booleans where data is not NA
34.9.1.39 pandas.MultiIndex.astype
MultiIndex.astype(dtype, copy=True)
Create an Index with values cast to dtypes. The class of a new Index is determined by dtype. When
conversion is impossible, a ValueError exception is raised.
Parameters dtype : numpy dtype or pandas type
copy : bool, default True
By default, astype always returns a newly allocated object. If copy is set to False
and internal requirements on dtype are satisfied, the original data is used to create
a new Index or the original Index is returned.
New in version 0.19.0.
34.9.1.40 pandas.MultiIndex.contains
MultiIndex.contains(key)
return a boolean if this key is IN the index
Parameters key : object
Returns boolean
34.9.1.41 pandas.MultiIndex.copy
Notes
In most cases, there should be no functional difference from using deep, but if deep is passed it will
attempt to deepcopy. This could be potentially expensive on large MultiIndex objects.
34.9.1.42 pandas.MultiIndex.delete
MultiIndex.delete(loc)
Make new index with passed location deleted
Returns new_index : MultiIndex
34.9.1.43 pandas.MultiIndex.difference
MultiIndex.difference(other)
Compute sorted set difference of two MultiIndex objects
Returns diff : MultiIndex
34.9.1.44 pandas.MultiIndex.drop
34.9.1.45 pandas.MultiIndex.drop_duplicates
MultiIndex.drop_duplicates(keep='first')
Return Index with duplicate values removed
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Drop duplicates except for the first occurrence.
'last' : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
Returns deduplicated : Index
34.9.1.46 pandas.MultiIndex.droplevel
MultiIndex.droplevel(level=0)
Return Index with requested level removed. If MultiIndex has only 2 levels, the result will be of Index
type not MultiIndex.
Parameters level : int/level name or list thereof
Returns index : Index or MultiIndex
Notes
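Examples
A minimal, assumed example; dropping down to a single level returns a plain Index:
>>> mi = pd.MultiIndex.from_arrays([[1, 2], ['a', 'b']])
>>> mi.droplevel(0)
Index(['a', 'b'], dtype='object')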
34.9.1.47 pandas.MultiIndex.dropna
MultiIndex.dropna(how='any')
Return Index without NA/NaN values
Parameters how : {'any', 'all'}, default 'any'
If the Index is a MultiIndex, drop the value when any or all levels are NaN.
34.9.1.48 pandas.MultiIndex.duplicated
MultiIndex.duplicated(keep='first')
Return boolean np.ndarray denoting duplicate values
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Mark duplicates as True except for the first occurrence.
'last' : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
Returns duplicated : np.ndarray
34.9.1.49 pandas.MultiIndex.equal_levels
MultiIndex.equal_levels(other)
Return True if the levels of both MultiIndex objects are the same
34.9.1.50 pandas.MultiIndex.equals
MultiIndex.equals(other)
Determines if two MultiIndex objects have the same labeling information (the levels themselves do not
necessarily have to be the same)
See also:
equal_levels
34.9.1.51 pandas.MultiIndex.factorize
MultiIndex.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable
Parameters sort : boolean, default False
Sort by values
na_sentinel: int, default -1
Value to mark not found
Returns labels : the indexer to the original array
uniques : the unique Index
34.9.1.52 pandas.MultiIndex.fillna
MultiIndex.fillna(value=None, downcast=None)
Fill NA/NaN values with the specified value
Parameters value : scalar
Scalar value to use to fill holes (e.g. 0). This value cannot be a list-like.
34.9.1.53 pandas.MultiIndex.format
34.9.1.54 pandas.MultiIndex.from_arrays
Examples
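A minimal, assumed example:
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
MultiIndex(levels=[[1, 2], ['blue', 'red']],
           labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
           names=['number', 'color'])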
34.9.1.55 pandas.MultiIndex.from_product
Examples
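A minimal, assumed example of the cartesian product:
>>> pd.MultiIndex.from_product([[1, 2], ['a', 'b']])
MultiIndex(levels=[[1, 2], ['a', 'b']],
           labels=[[0, 0, 1, 1], [0, 1, 0, 1]])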
34.9.1.56 pandas.MultiIndex.from_tuples
Examples
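A minimal, assumed example:
>>> tuples = [(1, 'red'), (1, 'blue'), (2, 'red')]
>>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
MultiIndex(levels=[[1, 2], ['blue', 'red']],
           labels=[[0, 0, 1], [1, 0, 1]],
           names=['number', 'color'])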
34.9.1.57 pandas.MultiIndex.get_duplicates
MultiIndex.get_duplicates()
34.9.1.58 pandas.MultiIndex.get_indexer
Examples
34.9.1.59 pandas.MultiIndex.get_indexer_for
MultiIndex.get_indexer_for(target, **kwargs)
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or
get_indexer_non_unique as appropriate.
34.9.1.60 pandas.MultiIndex.get_indexer_non_unique
MultiIndex.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index. The indexer should be then used as an
input to ndarray.take to align the current data to the new index.
Parameters target : MultiIndex or list of tuples
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
missing : ndarray of int
An indexer into the target of the values not found. These correspond to the -1 in
the indexer array
34.9.1.61 pandas.MultiIndex.get_level_values
MultiIndex.get_level_values(level)
Return vector of label values for requested level, equal to the length of the index
Parameters level : int or level name
Returns values : Index
34.9.1.62 pandas.MultiIndex.get_loc
MultiIndex.get_loc(key, method=None)
Get integer location, slice or boolean mask for requested label or tuple. If the key is past the lexsort depth,
the return may be a boolean mask array, otherwise it is always a slice or int.
Parameters key : label or tuple
method : None
Returns loc : int, slice object or boolean mask
34.9.1.63 pandas.MultiIndex.get_loc_level
34.9.1.64 pandas.MultiIndex.get_locs
MultiIndex.get_locs(tup)
Given a tuple of slices/lists/labels/boolean indexer to a level-wise spec produce an indexer to extract those
locations
Parameters key : tuple of (slices/list/labels)
Returns locs : integer list of locations or boolean indexer suitable
for passing to iloc
34.9.1.65 pandas.MultiIndex.get_major_bounds
Notes
This function assumes that the data is sorted by the first level
34.9.1.66 pandas.MultiIndex.get_slice_bound
34.9.1.67 pandas.MultiIndex.get_value
MultiIndex.get_value(series, key)
34.9.1.68 pandas.MultiIndex.get_values
MultiIndex.get_values()
return the underlying data as an ndarray
34.9.1.69 pandas.MultiIndex.groupby
MultiIndex.groupby(values)
Group the index labels by a given array of values.
Parameters values : array
Values used to determine the groups.
Returns groups : dict
{group name -> group labels}
34.9.1.70 pandas.MultiIndex.holds_integer
MultiIndex.holds_integer()
34.9.1.71 pandas.MultiIndex.identical
MultiIndex.identical(other)
Similar to equals, but check that other comparable attributes are also equal
34.9.1.72 pandas.MultiIndex.insert
MultiIndex.insert(loc, item)
Make new MultiIndex inserting new item at location
Parameters loc : int
item : tuple
Must be same length as number of levels in the MultiIndex
Returns new_index : Index
34.9.1.73 pandas.MultiIndex.intersection
MultiIndex.intersection(other)
Form the intersection of two MultiIndex objects, sorting if possible
Parameters other : MultiIndex or array / Index of tuples
Returns Index
34.9.1.74 pandas.MultiIndex.is_
MultiIndex.is_(other)
More flexible, faster check like is but that works through views
Note: this is not the same as Index.identical(), which checks that metadata is also the same.
Parameters other : object
other object to compare against.
Returns True if both have same underlying data, False otherwise : bool
34.9.1.75 pandas.MultiIndex.is_boolean
MultiIndex.is_boolean()
34.9.1.76 pandas.MultiIndex.is_categorical
MultiIndex.is_categorical()
34.9.1.77 pandas.MultiIndex.is_floating
MultiIndex.is_floating()
34.9.1.78 pandas.MultiIndex.is_integer
MultiIndex.is_integer()
34.9.1.79 pandas.MultiIndex.is_interval
MultiIndex.is_interval()
34.9.1.80 pandas.MultiIndex.is_lexsorted
MultiIndex.is_lexsorted()
Return True if the labels are lexicographically sorted
34.9.1.81 pandas.MultiIndex.is_lexsorted_for_tuple
MultiIndex.is_lexsorted_for_tuple(tup)
Return True if we are correctly lexsorted given the passed tuple
34.9.1.82 pandas.MultiIndex.is_mixed
MultiIndex.is_mixed()
34.9.1.83 pandas.MultiIndex.is_numeric
MultiIndex.is_numeric()
34.9.1.84 pandas.MultiIndex.is_object
MultiIndex.is_object()
34.9.1.85 pandas.MultiIndex.is_type_compatible
MultiIndex.is_type_compatible(kind)
34.9.1.86 pandas.MultiIndex.isin
MultiIndex.isin(values, level=None)
Compute boolean array of whether each index value is found in the passed set of values.
Parameters values : set or list-like
Sought values.
New in version 0.18.1.
Support for values as a set
level : str or int, optional
Name or position of the index level to use (if the index is a MultiIndex).
Returns is_contained : ndarray (boolean dtype)
Notes
If level is specified:
if it is the name of one and only one index level, use that level;
otherwise it should be a number indicating level position.
34.9.1.87 pandas.MultiIndex.isnull
MultiIndex.isnull()
Detect missing values
New in version 0.20.0.
Returns a boolean array of whether my values are null
See also:
34.9.1.88 pandas.MultiIndex.item
MultiIndex.item()
return the first element of the underlying data as a python scalar
34.9.1.89 pandas.MultiIndex.join
34.9.1.90 pandas.MultiIndex.map
MultiIndex.map(mapper)
Apply mapper function to an index.
Parameters mapper : callable
Function to be applied.
Returns applied : Union[Index, MultiIndex], inferred
The output of the mapping function applied to the index. If the function returns a
tuple with more than one element a MultiIndex will be returned.
34.9.1.91 pandas.MultiIndex.max
MultiIndex.max()
The maximum value of the object
34.9.1.92 pandas.MultiIndex.memory_usage
MultiIndex.memory_usage(deep=False)
Memory usage of my values
Parameters deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns bytes used
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False
34.9.1.93 pandas.MultiIndex.min
MultiIndex.min()
The minimum value of the object
34.9.1.94 pandas.MultiIndex.notnull
MultiIndex.notnull()
Reverse of isnull
New in version 0.20.0.
Returns a boolean array of whether my values are not null
See also:
34.9.1.95 pandas.MultiIndex.nunique
MultiIndex.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don't include NaN in the count.
Returns nunique : int
34.9.1.96 pandas.MultiIndex.putmask
MultiIndex.putmask(mask, value)
return a new Index of the values set with the mask
See also:
numpy.ndarray.putmask
34.9.1.97 pandas.MultiIndex.ravel
MultiIndex.ravel(order='C')
return an ndarray of the flattened values of the underlying data
See also:
numpy.ndarray.ravel
34.9.1.98 pandas.MultiIndex.reindex
34.9.1.99 pandas.MultiIndex.remove_unused_levels
MultiIndex.remove_unused_levels()
create a new MultiIndex from the current one, removing unused levels, meaning levels that are not expressed
in the labels
The resulting MultiIndex will have the same outward appearance, meaning the same .values and ordering.
It will also be .equals() to the original.
New in version 0.20.0.
Returns MultiIndex
Examples
>>> i = pd.MultiIndex.from_product([range(2), list('ab')])  # assumed setup, consistent with the output shown
>>> i[2:]
MultiIndex(levels=[[0, 1], ['a', 'b']],
           labels=[[1, 1], [0, 1]])
The 0 from the first level is not represented and can be removed
>>> i[2:].remove_unused_levels()
MultiIndex(levels=[[1], ['a', 'b']],
           labels=[[0, 0], [0, 1]])
34.9.1.100 pandas.MultiIndex.rename
Examples
34.9.1.101 pandas.MultiIndex.reorder_levels
MultiIndex.reorder_levels(order)
Rearrange levels using input order. May not drop or duplicate levels
34.9.1.102 pandas.MultiIndex.repeat
34.9.1.103 pandas.MultiIndex.reshape
MultiIndex.reshape(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as reshaping is not supported for Index objects and will
raise an error.
Reshape an Index.
34.9.1.104 pandas.MultiIndex.searchsorted
Notes
Examples
>>> x = pd.Index([1, 2, 3])                    # assumed setup, consistent with the output shown
>>> x.searchsorted(4)
array([3])
>>> x = pd.Index(['apple', 'cheese', 'milk'])  # assumed setup
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
>>> x.searchsorted(['bread'])
array([1])
34.9.1.105 pandas.MultiIndex.set_labels
Examples
34.9.1.106 pandas.MultiIndex.set_levels
Examples
34.9.1.107 pandas.MultiIndex.set_names
Examples
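A minimal, assumed example:
>>> mi = pd.MultiIndex.from_arrays([[1, 2], ['a', 'b']])
>>> mi.set_names(['x', 'y'])
MultiIndex(levels=[[1, 2], ['a', 'b']],
           labels=[[0, 1], [0, 1]],
           names=['x', 'y'])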
34.9.1.108 pandas.MultiIndex.set_value
34.9.1.109 pandas.MultiIndex.shift
MultiIndex.shift(periods=1, freq=None)
Shift Index containing datetime objects by input number of periods and DateOffset
Returns shifted : Index
34.9.1.110 pandas.MultiIndex.slice_indexer
Notes
This function assumes that the data is sorted, so use at your own peril
34.9.1.111 pandas.MultiIndex.slice_locs
Notes
This function assumes that the data is sorted by the first level
34.9.1.112 pandas.MultiIndex.sort
MultiIndex.sort(*args, **kwargs)
34.9.1.113 pandas.MultiIndex.sort_values
MultiIndex.sort_values(return_indexer=False, ascending=True)
Return sorted copy of Index
34.9.1.114 pandas.MultiIndex.sortlevel
34.9.1.115 pandas.MultiIndex.str
MultiIndex.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python's string methods, with some inspiration from R's stringr package.
Examples
>>> s = pd.Series(['a_b', 'c_d'])  # assumed setup; any string Series works
>>> s.str.split('_')
>>> s.str.replace('_', '')
34.9.1.116 pandas.MultiIndex.summary
MultiIndex.summary(name=None)
34.9.1.117 pandas.MultiIndex.swaplevel
MultiIndex.swaplevel(i=-2, j=-1)
Swap level i with level j. Do not change the ordering of anything
Parameters i, j : int, string (can be mixed)
Level of index to be swapped. Can pass level name as string.
Returns swapped : MultiIndex
Changed in version 0.18.1: The indexes i and j are now optional, and default to the two
innermost levels of the index.
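Examples
A minimal, assumed example:
>>> mi = pd.MultiIndex.from_arrays([[1, 2], ['a', 'b']], names=['x', 'y'])
>>> mi.swaplevel()
MultiIndex(levels=[['a', 'b'], [1, 2]],
           labels=[[0, 1], [0, 1]],
           names=['y', 'x'])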
34.9.1.118 pandas.MultiIndex.sym_diff
MultiIndex.sym_diff(*args, **kwargs)
34.9.1.119 pandas.MultiIndex.symmetric_difference
MultiIndex.symmetric_difference(other, result_name=None)
Compute the symmetric difference of two Index objects. It's sorted if sorting is possible.
Parameters other : Index or array-like
result_name : str
Returns symmetric_difference : Index
Notes
symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent
to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped.
Examples
34.9.1.120 pandas.MultiIndex.take
See also:
numpy.ndarray.take
34.9.1.121 pandas.MultiIndex.to_datetime
MultiIndex.to_datetime(dayfirst=False)
DEPRECATED: use pandas.to_datetime() instead.
For an Index containing strings or datetime.datetime objects, attempt conversion to DatetimeIndex
34.9.1.122 pandas.MultiIndex.to_frame
MultiIndex.to_frame(index=True)
Create a DataFrame with the columns the levels of the MultiIndex
New in version 0.20.0.
Parameters index : boolean, default True
return this MultiIndex as the index
Returns DataFrame
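Examples
An assumed sketch (column order follows the level names here):
>>> mi = pd.MultiIndex.from_arrays([[1, 2], ['a', 'b']], names=['x', 'y'])
>>> mi.to_frame(index=False)
   x  y
0  1  a
1  2  b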
34.9.1.123 pandas.MultiIndex.to_hierarchical
MultiIndex.to_hierarchical(n_repeat, n_shuffle=1)
Return a MultiIndex reshaped to conform to the shapes given by n_repeat and n_shuffle.
Useful to replicate and rearrange a MultiIndex for combination with another Index with n_repeat items.
Parameters n_repeat : int
Number of times to repeat the labels on self
n_shuffle : int
Controls the reordering of the labels. If the result is going to be an inner level in
a MultiIndex, n_shuffle will need to be greater than one. The size of each label
must be divisible by n_shuffle.
Returns MultiIndex
Examples
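An assumed illustration; each label is repeated n_repeat times:
>>> idx = pd.MultiIndex.from_tuples([(1, 'one'), (2, 'two')])
>>> idx.to_hierarchical(3)
MultiIndex(levels=[[1, 2], ['one', 'two']],
           labels=[[0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 1]])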
34.9.1.124 pandas.MultiIndex.to_native_types
MultiIndex.to_native_types(slicer=None, **kwargs)
Format specified values of self and return them.
Parameters slicer : int, array-like
An indexer into self that specifies which values are used in the formatting process.
kwargs : dict
Options for specifying how the values should be formatted. These options include
the following:
1. na_rep [str] The value that serves as a placeholder for NULL values
2. quoting [bool or None] Whether or not there are quoted values in self
3. date_format [str] The format used to represent date-like values
34.9.1.125 pandas.MultiIndex.to_series
MultiIndex.to_series(**kwargs)
Create a Series with both index and values equal to the index keys; useful with map for returning an indexer
based on an index
Returns Series : dtype will be based on the type of the Index values.
34.9.1.126 pandas.MultiIndex.tolist
MultiIndex.tolist()
return a list of the Index values
34.9.1.127 pandas.MultiIndex.transpose
MultiIndex.transpose(*args, **kwargs)
return the transpose, which is by definition self
34.9.1.128 pandas.MultiIndex.truncate
MultiIndex.truncate(before=None, after=None)
Slice index between two labels / tuples, return new MultiIndex
Parameters before : label or tuple, can be partial. Default None
None defaults to start
after : label or tuple, can be partial. Default None
None defaults to end
Returns truncated : MultiIndex
34.9.1.129 pandas.MultiIndex.union
MultiIndex.union(other)
Form the union of two MultiIndex objects, sorting if possible
Parameters other : MultiIndex or array / Index of tuples
Returns Index
Examples
>>> # assumed setup for illustration:
>>> # index = pd.MultiIndex.from_tuples([(0, 'a'), (1, 'b')])
>>> # index2 = pd.MultiIndex.from_tuples([(1, 'b'), (2, 'c')])
>>> index.union(index2)
34.9.1.130 pandas.MultiIndex.unique
MultiIndex.unique()
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash
table-based unique.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
unique, Index.unique, Series.unique
34.9.1.131 pandas.MultiIndex.value_counts
34.9.1.132 pandas.MultiIndex.view
MultiIndex.view(cls=None)
this is defined as a copy with the same identity
34.9.1.133 pandas.MultiIndex.where
MultiIndex.where(cond, other=None)
34.9.2 pandas.IndexSlice
Examples
34.10 DatetimeIndex
34.10.1 pandas.DatetimeIndex
class pandas.DatetimeIndex
Immutable ndarray of datetime64 data, represented internally as int64, and which can be boxed to Timestamp
objects that are subclasses of datetime and carry metadata such as frequency information.
Parameters data : array-like (1-dimensional), optional
Optional datetime-like data to construct index with
copy : bool
Make a copy of input ndarray
freq : string or pandas offset object, optional
One of pandas date offset strings or corresponding objects
start : starting value, datetime-like, optional
If data is None, start is used as the start point in generating regular timestamp data.
periods : int, optional, > 0
Number of periods to generate, if generating index. Takes precedence over end
argument
end : end time, datetime-like, optional
If periods is none, generated index will extend to first conforming time on or just
past end argument
closed : string or None, default None
Make the interval closed with respect to the given frequency to the left, right, or
both sides (None)
tz : pytz.timezone or dateutil.tz.tzfile
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
'infer' will attempt to infer fall dst-transition hours based on order
bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that
this flag is only applicable for ambiguous times)
'NaT' will return NaT where there are ambiguous times
'raise' will raise an AmbiguousTimeError if there are ambiguous times
infer_dst : boolean, default False (DEPRECATED)
Attempt to infer fall dst-transition hours based on order
name : object
Name to be stored in the index
Notes
To learn more about the frequency strings, please see this link.
Attributes
34.10.1.1 pandas.DatetimeIndex.T
DatetimeIndex.T
return the transpose, which is by definition self
34.10.1.2 pandas.DatetimeIndex.asi8
DatetimeIndex.asi8
34.10.1.3 pandas.DatetimeIndex.asobject
DatetimeIndex.asobject
return object Index which contains boxed values
this is an internal non-public method
34.10.1.4 pandas.DatetimeIndex.base
DatetimeIndex.base
return the base object if the memory of the underlying data is shared
34.10.1.5 pandas.DatetimeIndex.data
DatetimeIndex.data
return the data pointer of the underlying data
34.10.1.6 pandas.DatetimeIndex.date
DatetimeIndex.date
Returns numpy array of python datetime.date objects (namely, the date part of Timestamps without timezone
information).
34.10.1.7 pandas.DatetimeIndex.day
DatetimeIndex.day
The days of the datetime
34.10.1.8 pandas.DatetimeIndex.dayofweek
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6
34.10.1.9 pandas.DatetimeIndex.dayofyear
DatetimeIndex.dayofyear
The ordinal day of the year
34.10.1.10 pandas.DatetimeIndex.days_in_month
DatetimeIndex.days_in_month
The number of days in the month
New in version 0.16.0.
34.10.1.11 pandas.DatetimeIndex.daysinmonth
DatetimeIndex.daysinmonth
The number of days in the month
New in version 0.16.0.
34.10.1.12 pandas.DatetimeIndex.dtype
DatetimeIndex.dtype = None
34.10.1.13 pandas.DatetimeIndex.dtype_str
DatetimeIndex.dtype_str = None
34.10.1.14 pandas.DatetimeIndex.empty
DatetimeIndex.empty
34.10.1.15 pandas.DatetimeIndex.flags
DatetimeIndex.flags
34.10.1.16 pandas.DatetimeIndex.freq
DatetimeIndex.freq
get/set the frequency of the Index
34.10.1.17 pandas.DatetimeIndex.freqstr
DatetimeIndex.freqstr
Return the frequency object as a string if it's set, otherwise None
34.10.1.18 pandas.DatetimeIndex.has_duplicates
DatetimeIndex.has_duplicates
34.10.1.19 pandas.DatetimeIndex.hasnans
DatetimeIndex.hasnans = None
34.10.1.20 pandas.DatetimeIndex.hour
DatetimeIndex.hour
The hours of the datetime
34.10.1.21 pandas.DatetimeIndex.inferred_freq
DatetimeIndex.inferred_freq = None
34.10.1.22 pandas.DatetimeIndex.inferred_type
DatetimeIndex.inferred_type
34.10.1.23 pandas.DatetimeIndex.is_all_dates
DatetimeIndex.is_all_dates
34.10.1.24 pandas.DatetimeIndex.is_leap_year
DatetimeIndex.is_leap_year
Logical indicating if the date belongs to a leap year
34.10.1.25 pandas.DatetimeIndex.is_monotonic
DatetimeIndex.is_monotonic
alias for is_monotonic_increasing (deprecated)
34.10.1.26 pandas.DatetimeIndex.is_monotonic_decreasing
DatetimeIndex.is_monotonic_decreasing
return if the index is monotonic decreasing (only equal or decreasing) values.
34.10.1.27 pandas.DatetimeIndex.is_monotonic_increasing
DatetimeIndex.is_monotonic_increasing
return if the index is monotonic increasing (only equal or increasing) values.
34.10.1.28 pandas.DatetimeIndex.is_month_end
DatetimeIndex.is_month_end
Logical indicating if last day of month (defined by frequency)
34.10.1.29 pandas.DatetimeIndex.is_month_start
DatetimeIndex.is_month_start
Logical indicating if first day of month (defined by frequency)
34.10.1.30 pandas.DatetimeIndex.is_normalized
DatetimeIndex.is_normalized = None
34.10.1.31 pandas.DatetimeIndex.is_quarter_end
DatetimeIndex.is_quarter_end
Logical indicating if last day of quarter (defined by frequency)
34.10.1.32 pandas.DatetimeIndex.is_quarter_start
DatetimeIndex.is_quarter_start
Logical indicating if first day of quarter (defined by frequency)
34.10.1.33 pandas.DatetimeIndex.is_unique
DatetimeIndex.is_unique = None
34.10.1.34 pandas.DatetimeIndex.is_year_end
DatetimeIndex.is_year_end
Logical indicating if last day of year (defined by frequency)
34.10.1.35 pandas.DatetimeIndex.is_year_start
DatetimeIndex.is_year_start
Logical indicating if first day of year (defined by frequency)
34.10.1.36 pandas.DatetimeIndex.itemsize
DatetimeIndex.itemsize
return the size of the dtype of the item of the underlying data
34.10.1.37 pandas.DatetimeIndex.microsecond
DatetimeIndex.microsecond
The microseconds of the datetime
34.10.1.38 pandas.DatetimeIndex.minute
DatetimeIndex.minute
The minutes of the datetime
34.10.1.39 pandas.DatetimeIndex.month
DatetimeIndex.month
The month as January=1, December=12
34.10.1.40 pandas.DatetimeIndex.name
DatetimeIndex.name = None
34.10.1.41 pandas.DatetimeIndex.names
DatetimeIndex.names
34.10.1.42 pandas.DatetimeIndex.nanosecond
DatetimeIndex.nanosecond
The nanoseconds of the datetime
34.10.1.43 pandas.DatetimeIndex.nbytes
DatetimeIndex.nbytes
return the number of bytes in the underlying data
34.10.1.44 pandas.DatetimeIndex.ndim
DatetimeIndex.ndim
return the number of dimensions of the underlying data, by definition 1
34.10.1.45 pandas.DatetimeIndex.nlevels
DatetimeIndex.nlevels
34.10.1.46 pandas.DatetimeIndex.offset
DatetimeIndex.offset = None
34.10.1.47 pandas.DatetimeIndex.quarter
DatetimeIndex.quarter
The quarter of the date
34.10.1.48 pandas.DatetimeIndex.resolution
DatetimeIndex.resolution = None
34.10.1.49 pandas.DatetimeIndex.second
DatetimeIndex.second
The seconds of the datetime
34.10.1.50 pandas.DatetimeIndex.shape
DatetimeIndex.shape
return a tuple of the shape of the underlying data
34.10.1.51 pandas.DatetimeIndex.size
DatetimeIndex.size
return the number of elements in the underlying data
34.10.1.52 pandas.DatetimeIndex.strides
DatetimeIndex.strides
return the strides of the underlying data
34.10.1.53 pandas.DatetimeIndex.time
DatetimeIndex.time
Returns numpy array of datetime.time. The time part of the Timestamps.
34.10.1.54 pandas.DatetimeIndex.tz
DatetimeIndex.tz = None
34.10.1.55 pandas.DatetimeIndex.tzinfo
DatetimeIndex.tzinfo
Alias for tz attribute
34.10.1.56 pandas.DatetimeIndex.values
DatetimeIndex.values
return the underlying data as an ndarray
34.10.1.57 pandas.DatetimeIndex.week
DatetimeIndex.week
The week ordinal of the year
34.10.1.58 pandas.DatetimeIndex.weekday
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6
34.10.1.59 pandas.DatetimeIndex.weekday_name
DatetimeIndex.weekday_name
The name of day in a week (ex: Friday)
New in version 0.18.1.
34.10.1.60 pandas.DatetimeIndex.weekofyear
DatetimeIndex.weekofyear
The week ordinal of the year
34.10.1.61 pandas.DatetimeIndex.year
DatetimeIndex.year
The year of the datetime
Methods
all([other])
any([other])
append(other)  Append a collection of Index options together
argmax([axis])  Returns the indices of the maximum values along an axis.
argmin([axis])  Returns the indices of the minimum values along an axis.
argsort(*args, **kwargs)  Returns the indices that would sort the index and its underlying data.
asof(label)  For a sorted index, return the most recent label up to and including the passed label.
asof_locs(where, mask)  where : array of timestamps
astype(dtype[, copy])  Create an Index with values cast to dtypes.
ceil(freq)  ceil the index to the specified freq
contains(key)  return a boolean if this key is IN the index
copy([name, deep, dtype])  Make a copy of this object.
delete(loc)  Make a new DatetimeIndex with passed location(s) deleted.
difference(other)  Return a new Index with elements from the index that are not in other.
drop(labels[, errors])  Make new Index with passed list of labels deleted
drop_duplicates([keep])  Return Index with duplicate values removed
dropna([how])  Return Index without NA/NaN values
duplicated([keep])  Return boolean np.ndarray denoting duplicate values
equals(other)  Determines if two Index objects contain the same elements.
factorize([sort, na_sentinel])  Encode the object as an enumerated type or categorical variable
fillna([value, downcast])  Fill NA/NaN values with the specified value
floor(freq)  floor the index to the specified freq
format([name, formatter])  Render a string representation of the Index
get_duplicates()
get_indexer(target[, method, limit, tolerance])  Compute indexer and mask for new index given the current index.
get_indexer_for(target, **kwargs)  guaranteed return of an indexer even when non-unique
get_indexer_non_unique(target)  Compute indexer and mask for new index given the current index.
34.10.1.62 pandas.DatetimeIndex.all
DatetimeIndex.all(other=None)
34.10.1.63 pandas.DatetimeIndex.any
DatetimeIndex.any(other=None)
34.10.1.64 pandas.DatetimeIndex.append
DatetimeIndex.append(other)
Append a collection of Index options together
Parameters other : Index or list/tuple of indices
Returns appended : Index
34.10.1.65 pandas.DatetimeIndex.argmax
34.10.1.66 pandas.DatetimeIndex.argmin
34.10.1.67 pandas.DatetimeIndex.argsort
DatetimeIndex.argsort(*args, **kwargs)
Returns the indices that would sort the index and its underlying data.
Returns argsorted : numpy array
See also:
numpy.ndarray.argsort
34.10.1.68 pandas.DatetimeIndex.asof
DatetimeIndex.asof(label)
For a sorted index, return the most recent label up to and including the passed label. Return NaN if not
found.
See also:
34.10.1.69 pandas.DatetimeIndex.asof_locs
DatetimeIndex.asof_locs(where, mask)
where : array of timestamps
mask : array of booleans where data is not NA
34.10.1.70 pandas.DatetimeIndex.astype
DatetimeIndex.astype(dtype, copy=True)
Create an Index with values cast to dtypes. The class of a new Index is determined by dtype. When
conversion is impossible, a ValueError exception is raised.
Parameters dtype : numpy dtype or pandas type
copy : bool, default True
By default, astype always returns a newly allocated object. If copy is set to False
and internal requirements on dtype are satisfied, the original data is used to create
a new Index or the original Index is returned.
New in version 0.19.0.
34.10.1.71 pandas.DatetimeIndex.ceil
DatetimeIndex.ceil(freq)
ceil the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
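Examples
A minimal, assumed example (timestamps are rounded up to the hour):
>>> idx = pd.DatetimeIndex(['2017-01-01 11:59', '2017-01-01 12:01'])
>>> idx.ceil('H')
DatetimeIndex(['2017-01-01 12:00:00', '2017-01-01 13:00:00'], dtype='datetime64[ns]', freq=None)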
34.10.1.72 pandas.DatetimeIndex.contains
DatetimeIndex.contains(key)
return a boolean if this key is IN the index
Parameters key : object
Returns boolean
34.10.1.73 pandas.DatetimeIndex.copy
Notes
In most cases, there should be no functional difference from using deep, but if deep is passed it will
attempt to deepcopy.
34.10.1.74 pandas.DatetimeIndex.delete
DatetimeIndex.delete(loc)
Make a new DatetimeIndex with passed location(s) deleted.
Parameters loc: int, slice or array of ints
Indicate which sub-arrays to remove.
Returns new_index : DatetimeIndex
34.10.1.75 pandas.DatetimeIndex.difference
DatetimeIndex.difference(other)
Return a new Index with elements from the index that are not in other.
This is the set difference of two Index objects. It's sorted if sorting is possible.
Parameters other : Index or array-like
Returns difference : Index
Examples
34.10.1.76 pandas.DatetimeIndex.drop
DatetimeIndex.drop(labels, errors='raise')
Make new Index with passed list of labels deleted
Parameters labels : array-like
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and existing labels are dropped.
Returns dropped : Index
34.10.1.77 pandas.DatetimeIndex.drop_duplicates
DatetimeIndex.drop_duplicates(keep='first')
Return Index with duplicate values removed
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Drop duplicates except for the first occurrence.
'last' : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
Returns deduplicated : Index
34.10.1.78 pandas.DatetimeIndex.dropna
DatetimeIndex.dropna(how='any')
Return Index without NA/NaN values
Parameters how : {'any', 'all'}, default 'any'
If the Index is a MultiIndex, drop the value when any or all levels are NaN.
Returns valid : Index
34.10.1.79 pandas.DatetimeIndex.duplicated
DatetimeIndex.duplicated(keep='first')
Return boolean np.ndarray denoting duplicate values
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Mark duplicates as True except for the first occurrence.
'last' : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
Returns duplicated : np.ndarray
34.10.1.80 pandas.DatetimeIndex.equals
DatetimeIndex.equals(other)
Determines if two Index objects contain the same elements.
34.10.1.81 pandas.DatetimeIndex.factorize
DatetimeIndex.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable
Parameters sort : boolean, default False
Sort by values
na_sentinel: int, default -1
Value to mark not found
Returns labels : the indexer to the original array
uniques : the unique Index
34.10.1.82 pandas.DatetimeIndex.fillna
DatetimeIndex.fillna(value=None, downcast=None)
Fill NA/NaN values with the specified value
Parameters value : scalar
Scalar value to use to fill holes (e.g. 0). This value cannot be a list-like.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible, or the string 'infer' which
will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
Returns filled : Index
34.10.1.83 pandas.DatetimeIndex.floor
DatetimeIndex.floor(freq)
floor the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
34.10.1.84 pandas.DatetimeIndex.format
34.10.1.85 pandas.DatetimeIndex.get_duplicates
DatetimeIndex.get_duplicates()
34.10.1.86 pandas.DatetimeIndex.get_indexer
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the equation
abs(index[indexer] - target) <= tolerance.
New in version 0.17.0.
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
Examples
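A minimal sketch with made-up dates; unmatched targets map to -1:
>>> import pandas as pd
>>> idx = pd.date_range('2017-01-01', periods=3)
>>> idx.get_indexer(pd.DatetimeIndex(['2017-01-02', '2017-01-05']))
array([ 1, -1])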
34.10.1.87 pandas.DatetimeIndex.get_indexer_for
DatetimeIndex.get_indexer_for(target, **kwargs)
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or
get_indexer_non_unique as appropriate.
34.10.1.88 pandas.DatetimeIndex.get_indexer_non_unique
DatetimeIndex.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index. The indexer should be then used as an
input to ndarray.take to align the current data to the new index.
Parameters target : Index
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
missing : ndarray of int
An indexer into the target of the values not found. These correspond to the -1 in
the indexer array
34.10.1.89 pandas.DatetimeIndex.get_level_values
DatetimeIndex.get_level_values(level)
Return an Index of values for requested level, equal to the length of the index
Parameters level : int
Returns values : Index
34.10.1.90 pandas.DatetimeIndex.get_loc
34.10.1.91 pandas.DatetimeIndex.get_slice_bound
34.10.1.92 pandas.DatetimeIndex.get_value
DatetimeIndex.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you're doing.
34.10.1.93 pandas.DatetimeIndex.get_value_maybe_box
DatetimeIndex.get_value_maybe_box(series, key)
34.10.1.94 pandas.DatetimeIndex.get_values
DatetimeIndex.get_values()
return the underlying data as an ndarray
34.10.1.95 pandas.DatetimeIndex.groupby
DatetimeIndex.groupby(values)
Group the index labels by a given array of values.
Parameters values : array
Values used to determine the groups.
Returns groups : dict
{group name -> group labels}
34.10.1.96 pandas.DatetimeIndex.holds_integer
DatetimeIndex.holds_integer()
34.10.1.97 pandas.DatetimeIndex.identical
DatetimeIndex.identical(other)
Similar to equals, but check that other comparable attributes are also equal
34.10.1.98 pandas.DatetimeIndex.indexer_at_time
DatetimeIndex.indexer_at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM)
Parameters time : datetime.time or string
Returns values_at_time : TimeSeries
34.10.1.99 pandas.DatetimeIndex.indexer_between_time
34.10.1.100 pandas.DatetimeIndex.insert
DatetimeIndex.insert(loc, item)
Make new Index inserting new item at location
Parameters loc : int
item : object
if not either a Python datetime or a numpy integer-like, returned Index dtype will
be object rather than datetime.
Returns new_index : Index
34.10.1.101 pandas.DatetimeIndex.intersection
DatetimeIndex.intersection(other)
Specialized intersection for DatetimeIndex objects. May be much faster than Index.intersection
Parameters other : DatetimeIndex or array-like
Returns y : Index or DatetimeIndex
34.10.1.102 pandas.DatetimeIndex.is_
DatetimeIndex.is_(other)
More flexible, faster check like 'is' but that works through views
Note: this is not the same as Index.identical(), which checks that metadata is also the same.
Parameters other : object
other object to compare against.
Returns True if both have same underlying data, False otherwise : bool
34.10.1.103 pandas.DatetimeIndex.is_boolean
DatetimeIndex.is_boolean()
34.10.1.104 pandas.DatetimeIndex.is_categorical
DatetimeIndex.is_categorical()
34.10.1.105 pandas.DatetimeIndex.is_floating
DatetimeIndex.is_floating()
34.10.1.106 pandas.DatetimeIndex.is_integer
DatetimeIndex.is_integer()
34.10.1.107 pandas.DatetimeIndex.is_interval
DatetimeIndex.is_interval()
34.10.1.108 pandas.DatetimeIndex.is_lexsorted_for_tuple
DatetimeIndex.is_lexsorted_for_tuple(tup)
34.10.1.109 pandas.DatetimeIndex.is_mixed
DatetimeIndex.is_mixed()
34.10.1.110 pandas.DatetimeIndex.is_numeric
DatetimeIndex.is_numeric()
34.10.1.111 pandas.DatetimeIndex.is_object
DatetimeIndex.is_object()
34.10.1.112 pandas.DatetimeIndex.is_type_compatible
DatetimeIndex.is_type_compatible(typ)
34.10.1.113 pandas.DatetimeIndex.isin
DatetimeIndex.isin(values)
Compute boolean array of whether each index value is found in the passed set of values
Parameters values : set or sequence of values
Returns is_contained : ndarray (boolean dtype)
34.10.1.114 pandas.DatetimeIndex.isnull
DatetimeIndex.isnull()
Detect missing values
New in version 0.20.0.
Returns a boolean array of whether my values are null
See also:
34.10.1.115 pandas.DatetimeIndex.item
DatetimeIndex.item()
return the first element of the underlying data as a python scalar
34.10.1.116 pandas.DatetimeIndex.join
34.10.1.117 pandas.DatetimeIndex.map
DatetimeIndex.map(f)
34.10.1.118 pandas.DatetimeIndex.max
34.10.1.119 pandas.DatetimeIndex.memory_usage
DatetimeIndex.memory_usage(deep=False)
Memory usage of my values
Parameters deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns bytes used
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False
34.10.1.120 pandas.DatetimeIndex.min
34.10.1.121 pandas.DatetimeIndex.normalize
DatetimeIndex.normalize()
Return DatetimeIndex with times set to midnight. Length is unaltered.
Returns normalized : DatetimeIndex
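For example (times made up):
>>> import pandas as pd
>>> idx = pd.DatetimeIndex(['2017-01-01 10:00', '2017-01-02 23:30'])
>>> idx.normalize()
DatetimeIndex(['2017-01-01', '2017-01-02'], dtype='datetime64[ns]', freq=None)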
34.10.1.122 pandas.DatetimeIndex.notnull
DatetimeIndex.notnull()
Reverse of isnull
New in version 0.20.0.
Returns a boolean array of whether my values are not null
See also:
34.10.1.123 pandas.DatetimeIndex.nunique
DatetimeIndex.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
34.10.1.124 pandas.DatetimeIndex.putmask
DatetimeIndex.putmask(mask, value)
return a new Index of the values set with the mask
See also:
numpy.ndarray.putmask
34.10.1.125 pandas.DatetimeIndex.ravel
DatetimeIndex.ravel(order='C')
return an ndarray of the flattened values of the underlying data
See also:
numpy.ndarray.ravel
34.10.1.126 pandas.DatetimeIndex.reindex
34.10.1.127 pandas.DatetimeIndex.rename
DatetimeIndex.rename(name, inplace=False)
Set new names on index. Defaults to returning new index.
Parameters name : str or list
name to set
inplace : bool
if True, mutates in place
Returns new index (of same type and class, etc.) [if inplace, returns None]
34.10.1.128 pandas.DatetimeIndex.repeat
34.10.1.129 pandas.DatetimeIndex.reshape
DatetimeIndex.reshape(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as reshaping is not supported for Index objects and will
raise an error.
Reshape an Index.
34.10.1.130 pandas.DatetimeIndex.round
34.10.1.131 pandas.DatetimeIndex.searchsorted
Notes
Examples
>>> import pandas as pd
>>> x = pd.Series([1, 2, 3])
>>> x.searchsorted(4)
array([3])
>>> x = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk'])
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
>>> x.searchsorted(['bread'])
array([1])
34.10.1.132 pandas.DatetimeIndex.set_names
Examples
34.10.1.133 pandas.DatetimeIndex.set_value
34.10.1.134 pandas.DatetimeIndex.shift
DatetimeIndex.shift(n, freq=None)
Specialized shift which produces a DatetimeIndex
Parameters n : int
Periods to shift by
freq : DateOffset or timedelta-like, optional
Returns shifted : DatetimeIndex
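A quick sketch on a made-up daily index; with freq=None the index's own frequency is used:
>>> import pandas as pd
>>> idx = pd.date_range('2017-01-01', periods=3, freq='D')
>>> idx.shift(2)
DatetimeIndex(['2017-01-03', '2017-01-04', '2017-01-05'], dtype='datetime64[ns]', freq='D')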
34.10.1.135 pandas.DatetimeIndex.slice_indexer
34.10.1.136 pandas.DatetimeIndex.slice_locs
34.10.1.137 pandas.DatetimeIndex.snap
DatetimeIndex.snap(freq='S')
Snap time stamps to nearest occurring frequency
34.10.1.138 pandas.DatetimeIndex.sort
DatetimeIndex.sort(*args, **kwargs)
34.10.1.139 pandas.DatetimeIndex.sort_values
DatetimeIndex.sort_values(return_indexer=False, ascending=True)
Return sorted copy of Index
34.10.1.140 pandas.DatetimeIndex.sortlevel
34.10.1.141 pandas.DatetimeIndex.str
DatetimeIndex.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python's string methods, with some inspiration from R's stringr package.
Examples
>>> s.str.split('_')
>>> s.str.replace('_', '')
34.10.1.142 pandas.DatetimeIndex.strftime
DatetimeIndex.strftime(date_format)
Return an array of formatted strings specified by date_format, which supports the same string format as
the Python standard library. Details of the string format can be found in the Python string format doc.
New in version 0.17.0.
Parameters date_format : str
date format string (e.g. %Y-%m-%d)
Returns ndarray of formatted strings
34.10.1.143 pandas.DatetimeIndex.summary
DatetimeIndex.summary(name=None)
return a summarized representation
34.10.1.144 pandas.DatetimeIndex.sym_diff
DatetimeIndex.sym_diff(*args, **kwargs)
34.10.1.145 pandas.DatetimeIndex.symmetric_difference
DatetimeIndex.symmetric_difference(other, result_name=None)
Compute the symmetric difference of two Index objects. The result is sorted if sorting is possible.
Parameters other : Index or array-like
result_name : str
Returns symmetric_difference : Index
Notes
symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent
to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped.
Examples
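A small sketch with made-up integer indexes:
>>> import pandas as pd
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64')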
34.10.1.146 pandas.DatetimeIndex.take
34.10.1.147 pandas.DatetimeIndex.to_datetime
DatetimeIndex.to_datetime(dayfirst=False)
34.10.1.148 pandas.DatetimeIndex.to_julian_date
DatetimeIndex.to_julian_date()
Convert DatetimeIndex to Float64Index of Julian Dates. Julian date 0 is noon on January 1, 4713 BC.
http://en.wikipedia.org/wiki/Julian_day
34.10.1.149 pandas.DatetimeIndex.to_native_types
DatetimeIndex.to_native_types(slicer=None, **kwargs)
Format specified values of self and return them.
Parameters slicer : int, array-like
An indexer into self that specifies which values are used in the formatting process.
kwargs : dict
Options for specifying how the values should be formatted. These options include
the following:
1. na_rep [str] The value that serves as a placeholder for NULL values
2. quoting [bool or None] Whether or not there are quoted values in self
3. date_format [str] The format used to represent date-like values
34.10.1.150 pandas.DatetimeIndex.to_period
DatetimeIndex.to_period(freq=None)
Cast to PeriodIndex at a particular frequency
34.10.1.151 pandas.DatetimeIndex.to_perioddelta
DatetimeIndex.to_perioddelta(freq)
Calculates a TimedeltaIndex of the difference between index values and the index converted to
PeriodIndex at the specified freq. Used for vectorized offsets.
New in version 0.17.0.
Parameters freq : Period frequency
Returns y : TimedeltaIndex
34.10.1.152 pandas.DatetimeIndex.to_pydatetime
DatetimeIndex.to_pydatetime()
Return DatetimeIndex as object ndarray of datetime.datetime objects
Returns datetimes : ndarray
34.10.1.153 pandas.DatetimeIndex.to_series
DatetimeIndex.to_series(keep_tz=False)
Create a Series with both index and values equal to the index keys; useful with map for returning an
indexer based on an index.
Parameters keep_tz : optional, defaults to False
return the data keeping the timezone.
If keep_tz is True:
If the timezone is not set, the resulting Series will have a datetime64[ns]
dtype.
Otherwise the Series will have a datetime64[ns, tz] dtype; the tz will be
preserved.
If keep_tz is False:
Series will have a datetime64[ns] dtype. TZ aware objects will have the tz
removed.
Returns Series
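A minimal sketch (dates made up); the resulting Series is indexed by the same keys it holds:
>>> import pandas as pd
>>> idx = pd.date_range('2017-01-01', periods=2)
>>> s = idx.to_series()
>>> s.index.equals(idx)
True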
34.10.1.154 pandas.DatetimeIndex.tolist
DatetimeIndex.tolist()
return a list of the underlying data
34.10.1.155 pandas.DatetimeIndex.transpose
DatetimeIndex.transpose(*args, **kwargs)
return the transpose, which is by definition self
34.10.1.156 pandas.DatetimeIndex.tz_convert
DatetimeIndex.tz_convert(tz)
Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to time zone
of the TimeSeries. None will remove timezone holding UTC time.
Returns normalized : DatetimeIndex
Raises TypeError
If DatetimeIndex is tz-naive.
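For example, converting a made-up UTC index to US/Eastern:
>>> import pandas as pd
>>> idx = pd.date_range('2017-01-01 12:00', periods=2, freq='H', tz='UTC')
>>> idx.tz_convert('US/Eastern')
DatetimeIndex(['2017-01-01 07:00:00-05:00', '2017-01-01 08:00:00-05:00'], dtype='datetime64[ns, US/Eastern]', freq='H')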
34.10.1.157 pandas.DatetimeIndex.tz_localize
34.10.1.158 pandas.DatetimeIndex.union
DatetimeIndex.union(other)
Specialized union for DatetimeIndex objects. If combining overlapping ranges with the same DateOffset,
this will be much faster than Index.union.
34.10.1.159 pandas.DatetimeIndex.union_many
DatetimeIndex.union_many(others)
A bit of a hack to accelerate unioning a collection of indexes
34.10.1.160 pandas.DatetimeIndex.unique
DatetimeIndex.unique()
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash
table-based unique.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
unique, Index.unique, Series.unique
34.10.1.161 pandas.DatetimeIndex.value_counts
34.10.1.162 pandas.DatetimeIndex.view
DatetimeIndex.view(cls=None)
34.10.1.163 pandas.DatetimeIndex.where
DatetimeIndex.where(cond, other=None)
New in version 0.19.0.
Return an Index of same shape as self and whose corresponding entries are from self where cond is True
and otherwise are from other.
Parameters cond : boolean array-like with the same length as self
other : scalar, or array-like
34.10.3 Selecting
34.10.5 Conversion
DatetimeIndex.to_datetime([dayfirst])
DatetimeIndex.to_period([freq]) Cast to PeriodIndex at a particular frequency
DatetimeIndex.to_perioddelta(freq) Calculates TimedeltaIndex of difference between index values and index converted to PeriodIndex at specified freq.
DatetimeIndex.to_pydatetime() Return DatetimeIndex as object ndarray of datetime.datetime objects
DatetimeIndex.to_series([keep_tz]) Create a Series with both index and values equal to the index keys
34.11 TimedeltaIndex
34.11.1 pandas.TimedeltaIndex
class pandas.TimedeltaIndex
Immutable ndarray of timedelta64 data, represented internally as int64, and which can be boxed to timedelta
objects
Parameters data : array-like (1-dimensional), optional
Optional timedelta-like data to construct index with
unit : string, optional
the unit (D, h, m, s, ms, us, ns) of the data, used when data is an integer/float number
freq : a frequency for the index, optional
copy : bool
Make a copy of input ndarray
start : starting value, timedelta-like, optional
If data is None, start is used as the start point in generating regular timedelta data.
periods : int, optional, > 0
Number of periods to generate, if generating index. Takes precedence over end
argument
end : end time, timedelta-like, optional
If periods is None, generated index will extend to first conforming time on or just
past end argument
closed : string or None, default None
Make the interval closed with respect to the given frequency to the left, right, or
both sides (None)
name : object
Name to be stored in the index
Notes
To learn more about the frequency strings, please see this link.
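Two quick construction sketches (values made up), one from explicit strings and one generated from start/periods:
>>> import pandas as pd
>>> pd.TimedeltaIndex(['1 days', '2 days', '3 days'])
TimedeltaIndex(['1 days', '2 days', '3 days'], dtype='timedelta64[ns]', freq=None)
>>> pd.timedelta_range(start='1 day', periods=3)
TimedeltaIndex(['1 days', '2 days', '3 days'], dtype='timedelta64[ns]', freq='D')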
Attributes
34.11.1.1 pandas.TimedeltaIndex.T
TimedeltaIndex.T
return the transpose, which is by definition self
34.11.1.2 pandas.TimedeltaIndex.asi8
TimedeltaIndex.asi8
34.11.1.3 pandas.TimedeltaIndex.asobject
TimedeltaIndex.asobject
return object Index which contains boxed values.
This is an internal non-public method.
34.11.1.4 pandas.TimedeltaIndex.base
TimedeltaIndex.base
return the base object if the memory of the underlying data is shared
34.11.1.5 pandas.TimedeltaIndex.components
TimedeltaIndex.components
Return a dataframe of the components (days, hours, minutes, seconds, milliseconds, microseconds,
nanoseconds) of the Timedeltas.
Returns a DataFrame
34.11.1.6 pandas.TimedeltaIndex.data
TimedeltaIndex.data
return the data pointer of the underlying data
34.11.1.7 pandas.TimedeltaIndex.days
TimedeltaIndex.days
Number of days for each element.
34.11.1.8 pandas.TimedeltaIndex.dtype
TimedeltaIndex.dtype
34.11.1.9 pandas.TimedeltaIndex.dtype_str
TimedeltaIndex.dtype_str = None
34.11.1.10 pandas.TimedeltaIndex.empty
TimedeltaIndex.empty
34.11.1.11 pandas.TimedeltaIndex.flags
TimedeltaIndex.flags
34.11.1.12 pandas.TimedeltaIndex.freq
TimedeltaIndex.freq = None
34.11.1.13 pandas.TimedeltaIndex.freqstr
TimedeltaIndex.freqstr
Return the frequency object as a string if it's set, otherwise None
34.11.1.14 pandas.TimedeltaIndex.has_duplicates
TimedeltaIndex.has_duplicates
34.11.1.15 pandas.TimedeltaIndex.hasnans
TimedeltaIndex.hasnans = None
34.11.1.16 pandas.TimedeltaIndex.inferred_freq
TimedeltaIndex.inferred_freq = None
34.11.1.17 pandas.TimedeltaIndex.inferred_type
TimedeltaIndex.inferred_type
34.11.1.18 pandas.TimedeltaIndex.is_all_dates
TimedeltaIndex.is_all_dates
34.11.1.19 pandas.TimedeltaIndex.is_monotonic
TimedeltaIndex.is_monotonic
alias for is_monotonic_increasing (deprecated)
34.11.1.20 pandas.TimedeltaIndex.is_monotonic_decreasing
TimedeltaIndex.is_monotonic_decreasing
return whether the index is monotonic decreasing (only equal or decreasing values).
34.11.1.21 pandas.TimedeltaIndex.is_monotonic_increasing
TimedeltaIndex.is_monotonic_increasing
return whether the index is monotonic increasing (only equal or increasing values).
34.11.1.22 pandas.TimedeltaIndex.is_unique
TimedeltaIndex.is_unique = None
34.11.1.23 pandas.TimedeltaIndex.itemsize
TimedeltaIndex.itemsize
return the size of the dtype of the item of the underlying data
34.11.1.24 pandas.TimedeltaIndex.microseconds
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
34.11.1.25 pandas.TimedeltaIndex.name
TimedeltaIndex.name = None
34.11.1.26 pandas.TimedeltaIndex.names
TimedeltaIndex.names
34.11.1.27 pandas.TimedeltaIndex.nanoseconds
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
34.11.1.28 pandas.TimedeltaIndex.nbytes
TimedeltaIndex.nbytes
return the number of bytes in the underlying data
34.11.1.29 pandas.TimedeltaIndex.ndim
TimedeltaIndex.ndim
return the number of dimensions of the underlying data, by definition 1
34.11.1.30 pandas.TimedeltaIndex.nlevels
TimedeltaIndex.nlevels
34.11.1.31 pandas.TimedeltaIndex.resolution
TimedeltaIndex.resolution = None
34.11.1.32 pandas.TimedeltaIndex.seconds
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
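For example, a made-up index split into its integer day and second components:
>>> import pandas as pd
>>> tdi = pd.TimedeltaIndex(['1 days 00:00:05', '2 days 00:00:10'])
>>> tdi.days
Int64Index([1, 2], dtype='int64')
>>> tdi.seconds
Int64Index([5, 10], dtype='int64')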
34.11.1.33 pandas.TimedeltaIndex.shape
TimedeltaIndex.shape
return a tuple of the shape of the underlying data
34.11.1.34 pandas.TimedeltaIndex.size
TimedeltaIndex.size
return the number of elements in the underlying data
34.11.1.35 pandas.TimedeltaIndex.strides
TimedeltaIndex.strides
return the strides of the underlying data
34.11.1.36 pandas.TimedeltaIndex.values
TimedeltaIndex.values
return the underlying data as an ndarray
Methods
all([other])
any([other])
append(other) Append a collection of Index options together
argmax([axis]) Returns the indices of the maximum values along an axis.
argmin([axis]) Returns the indices of the minimum values along an axis.
argsort(*args, **kwargs) Returns the indices that would sort the index and its underlying data.
asof(label) For a sorted index, return the most recent label up to and including the passed label.
asof_locs(where, mask) where : array of timestamps
astype(dtype[, copy]) Create an Index with values cast to dtypes.
ceil(freq) ceil the index to the specified freq
contains(key) return a boolean if this key is IN the index
copy([name, deep, dtype]) Make a copy of this object.
delete(loc) Make a new TimedeltaIndex with passed location(s) deleted.
difference(other) Return a new Index with elements from the index that are not in other.
drop(labels[, errors]) Make new Index with passed list of labels deleted
drop_duplicates([keep]) Return Index with duplicate values removed
dropna([how]) Return Index without NA/NaN values
duplicated([keep]) Return boolean np.ndarray denoting duplicate values
equals(other) Determines if two Index objects contain the same elements.
factorize([sort, na_sentinel]) Encode the object as an enumerated type or categorical variable
fillna([value, downcast]) Fill NA/NaN values with the specified value
floor(freq) floor the index to the specified freq
format([name, formatter]) Render a string representation of the Index
get_duplicates()
34.11.1.37 pandas.TimedeltaIndex.all
TimedeltaIndex.all(other=None)
34.11.1.38 pandas.TimedeltaIndex.any
TimedeltaIndex.any(other=None)
34.11.1.39 pandas.TimedeltaIndex.append
TimedeltaIndex.append(other)
Append a collection of Index options together
34.11.1.40 pandas.TimedeltaIndex.argmax
34.11.1.41 pandas.TimedeltaIndex.argmin
34.11.1.42 pandas.TimedeltaIndex.argsort
TimedeltaIndex.argsort(*args, **kwargs)
Returns the indices that would sort the index and its underlying data.
Returns argsorted : numpy array
See also:
numpy.ndarray.argsort
34.11.1.43 pandas.TimedeltaIndex.asof
TimedeltaIndex.asof(label)
For a sorted index, return the most recent label up to and including the passed label. Return NaN if not
found.
See also:
34.11.1.44 pandas.TimedeltaIndex.asof_locs
TimedeltaIndex.asof_locs(where, mask)
where : array of timestamps
mask : array of booleans where data is not NA
34.11.1.45 pandas.TimedeltaIndex.astype
TimedeltaIndex.astype(dtype, copy=True)
Create an Index with values cast to dtypes. The class of a new Index is determined by dtype. When
conversion is impossible, a ValueError exception is raised.
Parameters dtype : numpy dtype or pandas type
copy : bool, default True
By default, astype always returns a newly allocated object. If copy is set to False
and internal requirements on dtype are satisfied, the original data is used to create
a new Index or the original Index is returned.
New in version 0.19.0.
34.11.1.46 pandas.TimedeltaIndex.ceil
TimedeltaIndex.ceil(freq)
ceil the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
34.11.1.47 pandas.TimedeltaIndex.contains
TimedeltaIndex.contains(key)
return a boolean if this key is IN the index
Parameters key : object
Returns boolean
34.11.1.48 pandas.TimedeltaIndex.copy
Notes
In most cases, there should be no functional difference from using deep, but if deep is passed it will
attempt to deepcopy.
34.11.1.49 pandas.TimedeltaIndex.delete
TimedeltaIndex.delete(loc)
Make a new TimedeltaIndex with passed location(s) deleted.
Parameters loc: int, slice or array of ints
Indicate which sub-arrays to remove.
Returns new_index : TimedeltaIndex
34.11.1.50 pandas.TimedeltaIndex.difference
TimedeltaIndex.difference(other)
Return a new Index with elements from the index that are not in other.
This is the set difference of two Index objects. The result is sorted if sorting is possible.
Parameters other : Index or array-like
Returns difference : Index
Examples
34.11.1.51 pandas.TimedeltaIndex.drop
TimedeltaIndex.drop(labels, errors='raise')
Make new Index with passed list of labels deleted
Parameters labels : array-like
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and existing labels are dropped.
Returns dropped : Index
34.11.1.52 pandas.TimedeltaIndex.drop_duplicates
TimedeltaIndex.drop_duplicates(keep='first')
Return Index with duplicate values removed
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Drop duplicates except for the first occurrence.
'last' : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
Returns deduplicated : Index
34.11.1.53 pandas.TimedeltaIndex.dropna
TimedeltaIndex.dropna(how='any')
Return Index without NA/NaN values
Parameters how : {'any', 'all'}, default 'any'
If the Index is a MultiIndex, drop the value when any or all levels are NaN.
Returns valid : Index
34.11.1.54 pandas.TimedeltaIndex.duplicated
TimedeltaIndex.duplicated(keep='first')
Return boolean np.ndarray denoting duplicate values
Parameters keep : {'first', 'last', False}, default 'first'
'first' : Mark duplicates as True except for the first occurrence.
'last' : Mark duplicates as True except for the last occurrence.
False : Mark all duplicates as True.
Returns duplicated : np.ndarray
34.11.1.55 pandas.TimedeltaIndex.equals
TimedeltaIndex.equals(other)
Determines if two Index objects contain the same elements.
34.11.1.56 pandas.TimedeltaIndex.factorize
TimedeltaIndex.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable
Parameters sort : boolean, default False
Sort by values
na_sentinel: int, default -1
Value to mark not found
Returns labels : the indexer to the original array
uniques : the unique Index
34.11.1.57 pandas.TimedeltaIndex.fillna
TimedeltaIndex.fillna(value=None, downcast=None)
Fill NA/NaN values with the specified value
Parameters value : scalar
Scalar value to use to fill holes (e.g. 0). This value cannot be a list-like.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible, or the string 'infer' which
will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
34.11.1.58 pandas.TimedeltaIndex.floor
TimedeltaIndex.floor(freq)
floor the index to the specified freq
Parameters freq : freq string/object
Returns index of same type
Raises ValueError if the freq cannot be converted
34.11.1.59 pandas.TimedeltaIndex.format
34.11.1.60 pandas.TimedeltaIndex.get_duplicates
TimedeltaIndex.get_duplicates()
34.11.1.61 pandas.TimedeltaIndex.get_indexer
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
Examples
34.11.1.62 pandas.TimedeltaIndex.get_indexer_for
TimedeltaIndex.get_indexer_for(target, **kwargs)
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or
get_indexer_non_unique as appropriate.
34.11.1.63 pandas.TimedeltaIndex.get_indexer_non_unique
TimedeltaIndex.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index. The indexer should be then used as an
input to ndarray.take to align the current data to the new index.
Parameters target : Index
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
missing : ndarray of int
An indexer into the target of the values not found. These correspond to the -1 in
the indexer array
34.11.1.64 pandas.TimedeltaIndex.get_level_values
TimedeltaIndex.get_level_values(level)
Return an Index of values for requested level, equal to the length of the index
Parameters level : int
Returns values : Index
34.11.1.65 pandas.TimedeltaIndex.get_loc
34.11.1.66 pandas.TimedeltaIndex.get_slice_bound
34.11.1.67 pandas.TimedeltaIndex.get_value
TimedeltaIndex.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you're doing.
34.11.1.68 pandas.TimedeltaIndex.get_value_maybe_box
TimedeltaIndex.get_value_maybe_box(series, key)
34.11.1.69 pandas.TimedeltaIndex.get_values
TimedeltaIndex.get_values()
return the underlying data as an ndarray
34.11.1.70 pandas.TimedeltaIndex.groupby
TimedeltaIndex.groupby(values)
Group the index labels by a given array of values.
Parameters values : array
Values used to determine the groups.
Returns groups : dict
{group name -> group labels}
34.11.1.71 pandas.TimedeltaIndex.holds_integer
TimedeltaIndex.holds_integer()
34.11.1.72 pandas.TimedeltaIndex.identical
TimedeltaIndex.identical(other)
Similar to equals, but check that other comparable attributes are also equal
34.11.1.73 pandas.TimedeltaIndex.insert
TimedeltaIndex.insert(loc, item)
Make new Index inserting new item at location
Parameters loc : int
item : object
if not either a Python timedelta or a numpy integer-like, returned Index dtype will
be object rather than timedelta.
Returns new_index : Index
34.11.1.74 pandas.TimedeltaIndex.intersection
TimedeltaIndex.intersection(other)
Specialized intersection for TimedeltaIndex objects. May be much faster than Index.intersection
Parameters other : TimedeltaIndex or array-like
Returns y : Index or TimedeltaIndex
34.11.1.75 pandas.TimedeltaIndex.is_
TimedeltaIndex.is_(other)
More flexible, faster check like 'is' but that works through views
Note: this is not the same as Index.identical(), which checks that metadata is also the same.
Parameters other : object
other object to compare against.
Returns True if both have same underlying data, False otherwise : bool
34.11.1.76 pandas.TimedeltaIndex.is_boolean
TimedeltaIndex.is_boolean()
34.11.1.77 pandas.TimedeltaIndex.is_categorical
TimedeltaIndex.is_categorical()
34.11.1.78 pandas.TimedeltaIndex.is_floating
TimedeltaIndex.is_floating()
34.11.1.79 pandas.TimedeltaIndex.is_integer
TimedeltaIndex.is_integer()
34.11.1.80 pandas.TimedeltaIndex.is_interval
TimedeltaIndex.is_interval()
34.11.1.81 pandas.TimedeltaIndex.is_lexsorted_for_tuple
TimedeltaIndex.is_lexsorted_for_tuple(tup)
34.11.1.82 pandas.TimedeltaIndex.is_mixed
TimedeltaIndex.is_mixed()
34.11.1.83 pandas.TimedeltaIndex.is_numeric
TimedeltaIndex.is_numeric()
34.11.1.84 pandas.TimedeltaIndex.is_object
TimedeltaIndex.is_object()
34.11.1.85 pandas.TimedeltaIndex.is_type_compatible
TimedeltaIndex.is_type_compatible(typ)
34.11.1.86 pandas.TimedeltaIndex.isin
TimedeltaIndex.isin(values)
Compute boolean array of whether each index value is found in the passed set of values
Parameters values : set or sequence of values
Returns is_contained : ndarray (boolean dtype)
34.11.1.87 pandas.TimedeltaIndex.isnull
TimedeltaIndex.isnull()
Detect missing values
New in version 0.20.0.
Returns a boolean array of whether my values are null
See also:
34.11.1.88 pandas.TimedeltaIndex.item
TimedeltaIndex.item()
return the first element of the underlying data as a python scalar
34.11.1.89 pandas.TimedeltaIndex.join
34.11.1.90 pandas.TimedeltaIndex.map
TimedeltaIndex.map(f)
34.11.1.91 pandas.TimedeltaIndex.max
34.11.1.92 pandas.TimedeltaIndex.memory_usage
TimedeltaIndex.memory_usage(deep=False)
Memory usage of my values
Parameters deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns bytes used
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False
34.11.1.93 pandas.TimedeltaIndex.min
34.11.1.94 pandas.TimedeltaIndex.notnull
TimedeltaIndex.notnull()
Reverse of isnull
New in version 0.20.0.
34.11.1.95 pandas.TimedeltaIndex.nunique
TimedeltaIndex.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don't include NaN in the count.
Returns nunique : int
34.11.1.96 pandas.TimedeltaIndex.putmask
TimedeltaIndex.putmask(mask, value)
return a new Index of the values set with the mask
See also:
numpy.ndarray.putmask
34.11.1.97 pandas.TimedeltaIndex.ravel
TimedeltaIndex.ravel(order='C')
return an ndarray of the flattened values of the underlying data
See also:
numpy.ndarray.ravel
34.11.1.98 pandas.TimedeltaIndex.reindex
34.11.1.99 pandas.TimedeltaIndex.rename
TimedeltaIndex.rename(name, inplace=False)
Set new names on index. Defaults to returning new index.
Parameters name : str or list
name to set
inplace : bool
if True, mutates in place
Returns new index (of same type and class, etc.) [if inplace, returns None]
34.11.1.100 pandas.TimedeltaIndex.repeat
34.11.1.101 pandas.TimedeltaIndex.reshape
TimedeltaIndex.reshape(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as reshaping is not supported for Index objects and will
raise an error.
Reshape an Index.
34.11.1.102 pandas.TimedeltaIndex.round
34.11.1.103 pandas.TimedeltaIndex.searchsorted
sorter : 1-D array-like, optional
Optional array of integer indices that sort self into ascending order. They are
typically the result of np.argsort.
Returns indices : array of ints
Array of insertion points with the same shape as value.
See also:
numpy.searchsorted
Notes
Examples
>>> import pandas as pd
>>> x = pd.Series([1, 2, 3])
>>> x.searchsorted(4)
array([3])
>>> x = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk'])
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
>>> x.searchsorted(['bread'])
array([1])
34.11.1.104 pandas.TimedeltaIndex.set_names
Examples
34.11.1.105 pandas.TimedeltaIndex.set_value
34.11.1.106 pandas.TimedeltaIndex.shift
TimedeltaIndex.shift(n, freq=None)
Specialized shift which produces a TimedeltaIndex
Parameters n : int
Periods to shift by
freq : DateOffset or timedelta-like, optional
Returns shifted : TimedeltaIndex
34.11.1.107 pandas.TimedeltaIndex.slice_indexer
Notes
This function assumes that the data is sorted, so use at your own peril
34.11.1.108 pandas.TimedeltaIndex.slice_locs
34.11.1.109 pandas.TimedeltaIndex.sort
TimedeltaIndex.sort(*args, **kwargs)
34.11.1.110 pandas.TimedeltaIndex.sort_values
TimedeltaIndex.sort_values(return_indexer=False, ascending=True)
Return sorted copy of Index
34.11.1.111 pandas.TimedeltaIndex.sortlevel
34.11.1.112 pandas.TimedeltaIndex.str
TimedeltaIndex.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python's string methods, with some inspiration from R's stringr package.
Examples
>>> s.str.split('_')
>>> s.str.replace('_', '')
34.11.1.113 pandas.TimedeltaIndex.summary
TimedeltaIndex.summary(name=None)
return a summarized representation
34.11.1.114 pandas.TimedeltaIndex.sym_diff
TimedeltaIndex.sym_diff(*args, **kwargs)
34.11.1.115 pandas.TimedeltaIndex.symmetric_difference
TimedeltaIndex.symmetric_difference(other, result_name=None)
Compute the symmetric difference of two Index objects. The result is sorted if sorting is possible.
Parameters other : Index or array-like
result_name : str
Returns symmetric_difference : Index
Notes
symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent
to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped.
Examples
34.11.1.116 pandas.TimedeltaIndex.take
34.11.1.117 pandas.TimedeltaIndex.to_datetime
TimedeltaIndex.to_datetime(dayfirst=False)
DEPRECATED: use pandas.to_datetime() instead.
For an Index containing strings or datetime.datetime objects, attempt conversion to DatetimeIndex
34.11.1.118 pandas.TimedeltaIndex.to_native_types
TimedeltaIndex.to_native_types(slicer=None, **kwargs)
Format specified values of self and return them.
Parameters slicer : int, array-like
An indexer into self that specifies which values are used in the formatting process.
kwargs : dict
Options for specifying how the values should be formatted. These options include
the following:
1. na_rep [str] The value that serves as a placeholder for NULL values
2. quoting [bool or None] Whether or not there are quoted values in self
3. date_format [str] The format used to represent date-like values
34.11.1.119 pandas.TimedeltaIndex.to_pytimedelta
TimedeltaIndex.to_pytimedelta()
Return TimedeltaIndex as object ndarray of datetime.timedelta objects
Returns timedeltas : ndarray
34.11.1.120 pandas.TimedeltaIndex.to_series
TimedeltaIndex.to_series(**kwargs)
Create a Series with both index and values equal to the index keys; useful with map for returning an
indexer based on an index.
Returns Series : dtype will be based on the type of the Index values.
34.11.1.121 pandas.TimedeltaIndex.tolist
TimedeltaIndex.tolist()
return a list of the underlying data
34.11.1.122 pandas.TimedeltaIndex.total_seconds
TimedeltaIndex.total_seconds()
Total duration of each element expressed in seconds.
New in version 0.17.0.
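For example (durations made up):
>>> import pandas as pd
>>> pd.TimedeltaIndex(['1 days', '2 days']).total_seconds()
Float64Index([86400.0, 172800.0], dtype='float64')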
34.11.1.123 pandas.TimedeltaIndex.transpose
TimedeltaIndex.transpose(*args, **kwargs)
return the transpose, which is by definition self
34.11.1.124 pandas.TimedeltaIndex.union
TimedeltaIndex.union(other)
Specialized union for TimedeltaIndex objects. If combining overlapping ranges with the same DateOffset,
this will be much faster than Index.union.
Parameters other : TimedeltaIndex or array-like
Returns y : Index or TimedeltaIndex
34.11.1.125 pandas.TimedeltaIndex.unique
TimedeltaIndex.unique()
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash
table-based unique.
Parameters values : 1d array-like
Returns unique values.
If the input is an Index, the return is an Index
If the input is a Categorical dtype, the return is a Categorical
If the input is a Series/ndarray, the return will be an ndarray
See also:
unique, Index.unique, Series.unique
34.11.1.126 pandas.TimedeltaIndex.value_counts
34.11.1.127 pandas.TimedeltaIndex.view
TimedeltaIndex.view(cls=None)
34.11.1.128 pandas.TimedeltaIndex.where
TimedeltaIndex.where(cond, other=None)
New in version 0.19.0.
Return an Index of same shape as self and whose corresponding entries are from self where cond is True
and otherwise are from other.
Parameters cond : boolean array-like with the same length as self
other : scalar, or array-like
34.11.2 Components
34.11.3 Conversion
34.12 Window
34.12.1.1 pandas.core.window.Rolling.count
Rolling.count()
rolling count of number of non-NaN observations inside provided window.
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.2 pandas.core.window.Rolling.sum
Rolling.sum(*args, **kwargs)
rolling sum
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
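A minimal sketch of the rolling-window mechanics on a made-up Series; the first value is NaN because the window is not yet full:
>>> import pandas as pd
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(window=2).sum()
0    NaN
1    3.0
2    5.0
3    7.0
dtype: float64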
34.12.1.3 pandas.core.window.Rolling.mean
Rolling.mean(*args, **kwargs)
rolling mean
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.4 pandas.core.window.Rolling.median
Rolling.median(**kwargs)
rolling median
Parameters how : string, default 'median' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.5 pandas.core.window.Rolling.var
34.12.1.6 pandas.core.window.Rolling.std
34.12.1.7 pandas.core.window.Rolling.min
Rolling.min(*args, **kwargs)
rolling minimum
Parameters how : string, default 'min' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.8 pandas.core.window.Rolling.max
Rolling.max(*args, **kwargs)
rolling maximum
Parameters how : string, default 'max' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.9 pandas.core.window.Rolling.corr
34.12.1.10 pandas.core.window.Rolling.cov
34.12.1.11 pandas.core.window.Rolling.skew
Rolling.skew(**kwargs)
Unbiased rolling skewness
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.12 pandas.core.window.Rolling.kurt
Rolling.kurt(**kwargs)
Unbiased rolling kurtosis
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.13 pandas.core.window.Rolling.apply
34.12.1.14 pandas.core.window.Rolling.quantile
Rolling.quantile(quantile, **kwargs)
rolling quantile
Parameters quantile : float
0 <= quantile <= 1
Returns same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.12.1.15 pandas.core.window.Window.mean
Window.mean(*args, **kwargs)
window mean
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.window, pandas.DataFrame.window
34.12.1.16 pandas.core.window.Window.sum
Window.sum(*args, **kwargs)
window sum
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.window, pandas.DataFrame.window
34.12.2.1 pandas.core.window.Expanding.count
Expanding.count(**kwargs)
expanding count of number of non-NaN observations inside provided window.
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.2 pandas.core.window.Expanding.sum
Expanding.sum(*args, **kwargs)
expanding sum
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
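A quick sketch on a made-up Series; each entry aggregates all observations up to that point:
>>> import pandas as pd
>>> s = pd.Series([1, 2, 3])
>>> s.expanding().sum()
0    1.0
1    3.0
2    6.0
dtype: float64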
34.12.2.3 pandas.core.window.Expanding.mean
Expanding.mean(*args, **kwargs)
expanding mean
Parameters how : string, default None (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.4 pandas.core.window.Expanding.median
Expanding.median(**kwargs)
expanding median
Parameters how : string, default 'median' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.5 pandas.core.window.Expanding.var
34.12.2.6 pandas.core.window.Expanding.std
34.12.2.7 pandas.core.window.Expanding.min
Expanding.min(*args, **kwargs)
expanding minimum
Parameters how : string, default 'min' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.8 pandas.core.window.Expanding.max
Expanding.max(*args, **kwargs)
expanding maximum
Parameters how : string, default 'max' (DEPRECATED)
Method for down- or re-sampling
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.9 pandas.core.window.Expanding.corr
34.12.2.10 pandas.core.window.Expanding.cov
34.12.2.11 pandas.core.window.Expanding.skew
Expanding.skew(**kwargs)
Unbiased expanding skewness
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.12 pandas.core.window.Expanding.kurt
Expanding.kurt(**kwargs)
Unbiased expanding kurtosis
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.2.13 pandas.core.window.Expanding.apply
34.12.2.14 pandas.core.window.Expanding.quantile
Expanding.quantile(quantile, **kwargs)
expanding quantile
Parameters quantile : float
0 <= quantile <= 1
Returns same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.12.3.1 pandas.core.window.EWM.mean
EWM.mean(*args, **kwargs)
exponential weighted moving average
Returns same type as input
See also:
pandas.Series.ewm, pandas.DataFrame.ewm
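A minimal sketch with a made-up smoothing factor (alpha=0.5, adjust left at its default):
>>> import pandas as pd
>>> s = pd.Series([1.0, 2.0, 3.0])
>>> s.ewm(alpha=0.5).mean()
0    1.000000
1    1.666667
2    2.428571
dtype: float64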
34.12.3.2 pandas.core.window.EWM.std
34.12.3.3 pandas.core.window.EWM.var
34.12.3.4 pandas.core.window.EWM.corr
34.12.3.5 pandas.core.window.EWM.cov
34.13 GroupBy
34.13.1.1 pandas.core.groupby.GroupBy.__iter__
GroupBy.__iter__()
Groupby iterator
Returns Generator yielding sequence of (name, subsetted object)
for each group
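For example, iterating over made-up groups:
>>> import pandas as pd
>>> df = pd.DataFrame({'A': ['a', 'a', 'b'], 'B': [1, 2, 3]})
>>> for name, group in df.groupby('A'):
...     print(name, len(group))
a 2
b 1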
34.13.1.2 pandas.core.groupby.GroupBy.groups
GroupBy.groups
dict {group name -> group labels}
34.13.1.3 pandas.core.groupby.GroupBy.indices
GroupBy.indices
dict {group name -> group indices}
34.13.1.4 pandas.core.groupby.GroupBy.get_group
GroupBy.get_group(name, obj=None)
Constructs NDFrame from group with provided name
Parameters name : object
the name of the group to get as a DataFrame
obj : NDFrame, default None
the NDFrame to take the DataFrame out of. If it is None, the object groupby was
called on will be used
Returns group : type of obj
Grouper([key, level, freq, axis, sort]) A Grouper allows the user to specify a groupby instruction for a target
34.13.1.5 pandas.Grouper
Examples
Syntactic sugar for df.groupby('A'):
>>> df.groupby(Grouper(key='A'))
Specify a resample operation on the level 'date' on the columns axis with a frequency of 60s:
>>> df.groupby(Grouper(level='date', freq='60s', axis=1))
Attributes
ax
groups
pandas.Grouper.ax
Grouper.ax
pandas.Grouper.groups
Grouper.groups
GroupBy.apply(func, *args, **kwargs) Apply function and combine results together in an intelligent way.
GroupBy.aggregate(func, *args, **kwargs)
GroupBy.transform(func, *args, **kwargs)
34.13.2.1 pandas.core.groupby.GroupBy.apply
Notes
34.13.2.2 pandas.core.groupby.GroupBy.aggregate
34.13.2.3 pandas.core.groupby.GroupBy.transform
34.13.3.1 pandas.core.groupby.GroupBy.count
GroupBy.count()
Compute count of group, excluding missing values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.2 pandas.core.groupby.GroupBy.cumcount
GroupBy.cumcount(ascending=True)
Number each item in each group from 0 to the length of that group - 1.
Essentially this is equivalent to
>>> self.apply(lambda x: Series(np.arange(len(x)), x.index))
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
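A small sketch with made-up groups:
>>> import pandas as pd
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], columns=['A'])
>>> df.groupby('A').cumcount()
0    0
1    1
2    2
3    0
4    1
5    3
dtype: int64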
34.13.3.3 pandas.core.groupby.GroupBy.first
GroupBy.first(**kwargs)
Compute first of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.4 pandas.core.groupby.GroupBy.head
GroupBy.head(n=5)
Returns first n rows of each group.
Essentially equivalent to .apply(lambda x: x.head(n)), except ignores as_index flag.
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
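A small sketch with made-up data; note that row order and the original index are preserved:
>>> import pandas as pd
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
>>> df.groupby('A').head(1)
   A  B
0  1  2
2  5  6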
34.13.3.5 pandas.core.groupby.GroupBy.last
GroupBy.last(**kwargs)
Compute last of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.6 pandas.core.groupby.GroupBy.max
GroupBy.max(**kwargs)
Compute max of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.7 pandas.core.groupby.GroupBy.mean
GroupBy.mean(*args, **kwargs)
Compute mean of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.8 pandas.core.groupby.GroupBy.median
GroupBy.median(**kwargs)
Compute median of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.9 pandas.core.groupby.GroupBy.min
GroupBy.min(**kwargs)
Compute min of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.10 pandas.core.groupby.GroupBy.nth
GroupBy.nth(n, dropna=None)
Take the nth row from each group if n is an int, or a subset of rows if n is a list of ints.
If dropna, will take the nth non-null row; dropna is either Truthy (if a Series) or 'all', 'any' (if a DataFrame);
this is equivalent to calling dropna(how=dropna) before the groupby.
Parameters n : int or list of ints
a single nth value for the row or a list of nth values
dropna : None or str, optional
apply the specified dropna operation before counting which row is the nth row. Needs
to be None, 'any' or 'all'
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
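A small sketch with made-up data; nth(0) keeps NaN, while dropna='any' would skip to the next non-null row:
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], 'B': [np.nan, 2, 3, 4, 5]})
>>> df.groupby('A').nth(0)
     B
A
1  NaN
2  3.0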
34.13.3.11 pandas.core.groupby.GroupBy.ohlc
GroupBy.ohlc()
Compute open, high, low and close values of a group, excluding missing values. For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.12 pandas.core.groupby.GroupBy.prod
GroupBy.prod(**kwargs)
Compute prod of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.13 pandas.core.groupby.GroupBy.size
GroupBy.size()
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.14 pandas.core.groupby.GroupBy.sem
GroupBy.sem(ddof=1)
Compute standard error of the mean of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
34.13.3.15 pandas.core.groupby.GroupBy.std
34.13.3.16 pandas.core.groupby.GroupBy.sum
GroupBy.sum(**kwargs)
Compute sum of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.17 pandas.core.groupby.GroupBy.var
34.13.3.18 pandas.core.groupby.GroupBy.tail
GroupBy.tail(n=5)
Returns last n rows of each group
Essentially equivalent to .apply(lambda x: x.tail(n)), except ignores as_index flag.
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
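A small sketch with made-up data:
>>> import pandas as pd
>>> df = pd.DataFrame([['a', 1], ['a', 2], ['b', 1], ['b', 2]], columns=['A', 'B'])
>>> df.groupby('A').tail(1)
   A  B
1  a  2
3  b  2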
The following methods are available in both SeriesGroupBy and DataFrameGroupBy objects, but may differ
slightly: the DataFrameGroupBy version usually permits the specification of an axis argument, and
often an argument indicating whether to restrict application to columns of a specific data type.
34.13.3.19 pandas.core.groupby.DataFrameGroupBy.agg
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying the
function along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy behavior
(e.g., np.mean(arr_2d)).
agg is an alias for aggregate. Use it.
Examples
>>> df
A B C
0 1 1 0.362838
1 1 2 0.227877
2 2 3 1.267767
3 2 4 -0.562860
>>> df.groupby('A').agg('min')
B C
A
1 1 0.227877
2 3 -0.562860
Multiple aggregations
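Building on the df above, a sketch of passing a list of aggregations (output omitted; the result's columns become a MultiIndex of (column, aggregation)):
>>> df.groupby('A').agg(['min', 'max'])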
34.13.3.20 pandas.core.groupby.DataFrameGroupBy.all
DataFrameGroupBy.all
Return whether all elements are True over requested axis
Parameters axis : {'index' (0), 'columns' (1)}
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
bool_only : boolean, default None
Include only boolean columns. If None, will attempt to use everything, then use only
boolean data. Not implemented for Series.
Returns all : Series or DataFrame (if level specified)
34.13.3.21 pandas.core.groupby.DataFrameGroupBy.any
DataFrameGroupBy.any
Return whether any element is True over requested axis
Parameters axis : {'index' (0), 'columns' (1)}
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
bool_only : boolean, default None
Include only boolean columns. If None, will attempt to use everything, then use only
boolean data. Not implemented for Series.
Returns any : Series or DataFrame (if level specified)
34.13.3.22 pandas.core.groupby.DataFrameGroupBy.bfill
DataFrameGroupBy.bfill(limit=None)
Backward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.23 pandas.core.groupby.DataFrameGroupBy.corr
DataFrameGroupBy.corr
Compute pairwise correlation of columns, excluding NA/null values
Parameters method : {'pearson', 'kendall', 'spearman'}
'pearson' : standard correlation coefficient
'kendall' : Kendall Tau correlation coefficient
'spearman' : Spearman rank correlation
min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result.
Currently only available for 'pearson' and 'spearman' correlation
Returns y : DataFrame
34.13.3.24 pandas.core.groupby.DataFrameGroupBy.count
DataFrameGroupBy.count()
Compute count of group, excluding missing values
34.13.3.25 pandas.core.groupby.DataFrameGroupBy.cov
DataFrameGroupBy.cov
Compute pairwise covariance of columns, excluding NA/null values
Parameters min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result.
Returns y : DataFrame
Notes
y contains the covariance matrix of the DataFrame's time series. The covariance is normalized by N-1
(unbiased estimator).
34.13.3.26 pandas.core.groupby.DataFrameGroupBy.cummax
DataFrameGroupBy.cummax(axis=0, **kwargs)
Cumulative max for each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.27 pandas.core.groupby.DataFrameGroupBy.cummin
DataFrameGroupBy.cummin(axis=0, **kwargs)
Cumulative min for each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.28 pandas.core.groupby.DataFrameGroupBy.cumprod
34.13.3.29 pandas.core.groupby.DataFrameGroupBy.cumsum
34.13.3.30 pandas.core.groupby.DataFrameGroupBy.describe
DataFrameGroupBy.describe(**kwargs)
Parameters percentiles : list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1.
The default is [.25, .5, .75], which returns the 25th, 50th, and 75th
percentiles.
include ['all', list-like of dtypes or None (default), optional] A white list of data
types to include in the result. Ignored for Series. Here are the options:
'all' : All columns of the input will be included in the output.
A list-like of dtypes : Limits the results to the provided data types. To limit
the result to numeric types submit numpy.number. To limit it instead to
categorical objects submit the numpy.object data type. Strings can also be
used in the style of select_dtypes (e.g. df.describe(include=['O'])).
None (default) : The result will include all numeric columns.
exclude [list-like of dtypes or None (default), optional] A black list of data types to
omit from the result. Ignored for Series. Here are the options:
A list-like of dtypes : Excludes the provided data types from the result. To
select numeric types submit numpy.number. To select categorical objects
submit the data type numpy.object. Strings can also be used in the style
of select_dtypes (e.g. df.describe(exclude=['O'])).
None (default) : The result will exclude nothing.
Notes
For numeric data, the result's index will include count, mean, std, min, max as well as lower, 50 and upper
percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same
as the median.
For object data (e.g. strings or timestamps), the result's index will include count, unique, top, and freq.
The top is the most common value. The freq is the most common value's frequency. Timestamps also include
the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from
among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns.
If include='all' is provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the
output. The parameters are ignored when analyzing a Series.
Examples
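A minimal sketch with made-up data (output omitted; per-group summary statistics are returned, one row per group):
>>> import pandas as pd
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': ['a', 'a', 'b', 'b']})
>>> df.groupby('B').describe()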
34.13.3.31 pandas.core.groupby.DataFrameGroupBy.diff
DataFrameGroupBy.diff
1st discrete difference of object
Parameters periods : int, default 1
Periods to shift for forming difference
axis : {0 or 'index', 1 or 'columns'}, default 0
Take difference over rows (0) or columns (1).
Returns diffed : DataFrame
34.13.3.32 pandas.core.groupby.DataFrameGroupBy.ffill
DataFrameGroupBy.ffill(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.33 pandas.core.groupby.DataFrameGroupBy.fillna
DataFrameGroupBy.fillna
Fill NA/NaN values using the specified method
Parameters value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying
which value to use for each index (for a Series) or column (for a DataFrame). (values not
in the dict/Series/DataFrame will not be filled). This value cannot be a list.
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
Method to use for filling holes in reindexed Series: pad / ffill propagates the last valid
observation forward to the next valid one; backfill / bfill uses the NEXT valid observation
to fill the gap.
axis : {0 or 'index', 1 or 'columns'}
inplace : boolean, default False
If True, fill in place. Note: this will modify any other views on this object, (e.g. a
no-copy slice for a column in a DataFrame).
limit : int, default None
If method is specified, this is the maximum number of consecutive NaN values to
forward/backward fill. In other words, if there is a gap with more than this number
of consecutive NaNs, it will only be partially filled. If method is not specified, this
is the maximum number of entries along the entire axis where NaNs will be filled.
Must be greater than 0 if not None.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible, or the string 'infer' which will
try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
Returns filled : DataFrame
See also:
reindex, asfreq
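Examples
A minimal sketch of filling within groups (df is illustrative, not from the original docstring); note that a forward fill does not cross group boundaries:
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({'key': ['a', 'a', 'b', 'b'],
...                    'value': [1.0, np.nan, np.nan, 4.0]})
>>> df.groupby('key').fillna(method='ffill')
   value
0    1.0
1    1.0
2    NaN
3    4.0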
34.13.3.34 pandas.core.groupby.DataFrameGroupBy.hist
DataFrameGroupBy.hist
Draw histogram of the DataFrame's series using matplotlib / pylab.
Parameters data : DataFrame
column : string or sequence
If passed, will be used to limit data to a subset of columns
by : object, optional
If passed, then used to form histograms for separate groups
grid : boolean, default True
Whether to show axis grid lines
xlabelsize : int, default None
If specified changes the x-axis label size
xrot : float, default None
rotation of x axis labels
ylabelsize : int, default None
If specified changes the y-axis label size
yrot : float, default None
rotation of y axis labels
ax : matplotlib axes object, default None
sharex : boolean, default True if ax is None else False
In case subplots=True, share x axis and set some x axis labels to invisible; defaults
to True if ax is None, otherwise False if an ax is passed in. Be aware that passing in
both an ax and sharex=True will alter all x axis labels for all subplots in a figure!
sharey : boolean, default False
In case subplots=True, share y axis and set some y axis labels to invisible
figsize : tuple
The size of the figure to create in inches by default
layout : tuple, optional
Tuple of (rows, columns) for the layout of the histograms
bins : integer, default 10
Number of histogram bins to be used
kwds : other plotting keyword arguments
To be passed to hist function
34.13.3.35 pandas.core.groupby.DataFrameGroupBy.idxmax
DataFrameGroupBy.idxmax
Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
Parameters axis : {0 or index, 1 or columns}, default 0
0 or index for row-wise, 1 or columns for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns idxmax : Series
See also:
Series.idxmax
Notes
This method is the DataFrame version of ndarray.argmax.
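Examples
A minimal sketch on a plain DataFrame (df is illustrative); within a groupby the same reduction is applied per group:
>>> import pandas as pd
>>> df = pd.DataFrame({'A': [1, 5, 3], 'B': [9, 2, 4]},
...                   index=['x', 'y', 'z'])
>>> df.idxmax()
A    y
B    x
dtype: object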
34.13.3.36 pandas.core.groupby.DataFrameGroupBy.idxmin
DataFrameGroupBy.idxmin
Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
Parameters axis : {0 or index, 1 or columns}, default 0
0 or index for row-wise, 1 or columns for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
Returns idxmin : Series
See also:
Series.idxmin
Notes
This method is the DataFrame version of ndarray.argmin.
34.13.3.37 pandas.core.groupby.DataFrameGroupBy.mad
DataFrameGroupBy.mad
Return the mean absolute deviation of the values for the requested axis
Parameters axis : {index (0), columns (1)}
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns mad : Series or DataFrame (if level specified)
34.13.3.38 pandas.core.groupby.DataFrameGroupBy.pct_change
DataFrameGroupBy.pct_change
Percent change over given number of periods.
Parameters periods : int, default 1
Periods to shift for forming percent change
fill_method : str, default pad
How to handle NAs before computing percent changes
limit : int, default None
The number of consecutive NAs to fill before stopping
freq : DateOffset, timedelta, or offset alias string, optional
Increment to use from time series API (e.g. M or BDay())
Returns chg : NDFrame
Notes
By default, the percentage change is calculated along the stat axis: 0, or Index, for DataFrame and 1, or
minor for Panel. You can change this with the axis keyword argument.
34.13.3.39 pandas.core.groupby.DataFrameGroupBy.plot
DataFrameGroupBy.plot
Class implementing the .plot attribute for groupby objects
34.13.3.40 pandas.core.groupby.DataFrameGroupBy.quantile
DataFrameGroupBy.quantile
Return values at the given quantile over requested axis, a la numpy.percentile.
Parameters q : float or array-like, default 0.5 (50% quantile)
0 <= q <= 1, the quantile(s) to compute
axis : {0, 1, index, columns} (default 0)
0 or index for row-wise, 1 or columns for column-wise
interpolation : {linear, lower, higher, midpoint, nearest}
New in version 0.18.0.
This optional parameter specifies the interpolation method to use, when the desired
quantile lies between two data points i and j:
- linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.
- lower: i.
- higher: j.
- nearest: i or j, whichever is nearest.
- midpoint: (i + j) / 2.
Returns quantiles : Series or DataFrame
If q is an array, a DataFrame will be returned where the index is q, the columns are the
columns of self, and the values are the quantiles.
If q is a float, a Series will be returned where the index is the columns of self and the
values are the quantiles.
Examples
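A deterministic sketch of the behaviour described above (df is illustrative, not from the original docstring):
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]),
...                   columns=['a', 'b'])
>>> df.quantile(.1)
a    1.3
b    3.7
Name: 0.1, dtype: float64
>>> df.quantile([.1, .5])
       a     b
0.1  1.3   3.7
0.5  2.5  55.0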
34.13.3.41 pandas.core.groupby.DataFrameGroupBy.rank
DataFrameGroupBy.rank
Compute numerical data ranks (1 through n) along axis. Equal values are assigned a rank that is the average of
the ranks of those values
Parameters axis : {0 or index, 1 or columns}, default 0
index to direct ranking
34.13.3.42 pandas.core.groupby.DataFrameGroupBy.resample
34.13.3.43 pandas.core.groupby.DataFrameGroupBy.shift
34.13.3.44 pandas.core.groupby.DataFrameGroupBy.size
DataFrameGroupBy.size()
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.13.3.45 pandas.core.groupby.DataFrameGroupBy.skew
DataFrameGroupBy.skew
Return unbiased skew over requested axis, normalized by N-1
Parameters axis : {index (0), columns (1)}
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns skew : Series or DataFrame (if level specified)
34.13.3.46 pandas.core.groupby.DataFrameGroupBy.take
DataFrameGroupBy.take
Analogous to ndarray.take
Parameters indices : list / array of ints
axis : int, default 0
convert : translate neg to pos indices (default)
is_copy : mark the returned frame as a copy
Returns taken : type of caller
34.13.3.47 pandas.core.groupby.DataFrameGroupBy.tshift
DataFrameGroupBy.tshift
Shift the time index, using the index's frequency if available.
Parameters periods : int
Number of periods to move, can be positive or negative
freq : DateOffset, timedelta, or time rule string, default None
Increment to use from the tseries module or time rule (e.g. EOM)
axis : int or basestring
Notes
If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exist, a ValueError is thrown
The following methods are available only for SeriesGroupBy objects.
34.13.3.48 pandas.core.groupby.SeriesGroupBy.nlargest
SeriesGroupBy.nlargest
Return the largest n elements.
Parameters n : int
Return this many descending sorted values
keep : {first, last, False}, default first
Where there are duplicate values:
- first : take the first occurrence.
- last : take the last occurrence.
Notes
Faster than .sort_values(ascending=False).head(n) for small n relative to the size of the Series object.
Examples
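A minimal deterministic sketch on a plain Series (s is illustrative, not from the original docstring); within a groupby the same selection is applied per group:
>>> import pandas as pd
>>> s = pd.Series([10, 3, 7, 1, 9], index=list('abcde'))
>>> s.nlargest(3)
a    10
e     9
c     7
dtype: int64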
34.13.3.49 pandas.core.groupby.SeriesGroupBy.nsmallest
SeriesGroupBy.nsmallest
Return the smallest n elements.
Parameters n : int
Return this many ascending sorted values
keep : {first, last, False}, default first
Where there are duplicate values:
- first : take the first occurrence.
- last : take the last occurrence.
Notes
Faster than .sort_values().head(n) for small n relative to the size of the Series object.
Examples
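A minimal sketch (s is illustrative, not from the original docstring); within a groupby the same selection is applied per group:
>>> import pandas as pd
>>> s = pd.Series([10, 3, 7, 1, 9])
>>> s.nsmallest(2)
3    1
1    3
dtype: int64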
34.13.3.50 pandas.core.groupby.SeriesGroupBy.nunique
SeriesGroupBy.nunique(dropna=True)
Returns number of unique elements in the group
34.13.3.51 pandas.core.groupby.SeriesGroupBy.unique
SeriesGroupBy.unique
Return unique values in the object. Uniques are returned in order of appearance; this does NOT sort. Hash
table-based unique.
34.13.3.52 pandas.core.groupby.SeriesGroupBy.value_counts
34.13.3.53 pandas.core.groupby.DataFrameGroupBy.corrwith
DataFrameGroupBy.corrwith
Compute pairwise correlation between rows or columns of two DataFrame objects.
Parameters other : DataFrame
axis : {0 or index, 1 or columns}, default 0
0 or index to compute column-wise, 1 or columns for row-wise
drop : boolean, default False
Drop missing indices from result, default returns union of all
Returns correls : Series
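Examples
A minimal deterministic sketch (df1 and df2 are illustrative, not from the original docstring):
>>> import pandas as pd
>>> df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 2, 1]})
>>> df2 = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 2, 3, 4]})
>>> df1.corrwith(df2)
a    1.0
b   -1.0
dtype: float64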
34.13.3.54 pandas.core.groupby.DataFrameGroupBy.boxplot
Examples
34.14 Resampling
34.14.1.1 pandas.core.resample.Resampler.__iter__
Resampler.__iter__()
Groupby iterator
Returns Generator yielding sequence of (name, subsetted object)
for each group
34.14.1.2 pandas.core.resample.Resampler.groups
Resampler.groups
dict {group name -> group labels}
34.14.1.3 pandas.core.resample.Resampler.indices
Resampler.indices
dict {group name -> group indices}
34.14.1.4 pandas.core.resample.Resampler.get_group
Resampler.get_group(name, obj=None)
Constructs NDFrame from group with provided name
Parameters name : object
the name of the group to get as a DataFrame
obj : NDFrame, default None
the NDFrame to take the DataFrame out of. If it is None, the object groupby was
called on will be used
Returns group : type of obj
34.14.2.1 pandas.core.resample.Resampler.apply
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying the func-
tion along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy behavior (e.g.,
np.mean(arr_2d)).
agg is an alias for aggregate. Use it.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
...               index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00    1
2013-01-01 00:00:01    2
2013-01-01 00:00:02    3
2013-01-01 00:00:03    4
2013-01-01 00:00:04    5
Freq: S, dtype: int64

>>> r = s.resample('2s')
>>> r
DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
                        label=left, convention=start, base=0]

>>> r.agg(np.sum)
2013-01-01 00:00:00    3
2013-01-01 00:00:02    7
2013-01-01 00:00:04    5
Freq: 2S, dtype: int64

>>> r.agg(['sum', 'mean', 'max'])
                     sum  mean  max
2013-01-01 00:00:00    3   1.5    2
2013-01-01 00:00:02    7   3.5    4
2013-01-01 00:00:04    5   5.0    5
34.14.2.2 pandas.core.resample.Resampler.aggregate
Notes
Numpy functions mean/median/prod/sum/std/var are special cased so the default behavior is applying the func-
tion along axis=0 (e.g., np.mean(arr_2d, axis=0)) as opposed to mimicking the default Numpy behavior (e.g.,
np.mean(arr_2d)).
agg is an alias for aggregate. Use it.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
...               index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00    1
2013-01-01 00:00:01    2
2013-01-01 00:00:02    3
2013-01-01 00:00:03    4
2013-01-01 00:00:04    5
Freq: S, dtype: int64

>>> r = s.resample('2s')
>>> r
DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
                        label=left, convention=start, base=0]

>>> r.agg(np.sum)
2013-01-01 00:00:00    3
2013-01-01 00:00:02    7
2013-01-01 00:00:04    5
Freq: 2S, dtype: int64

>>> r.agg(['sum', 'mean', 'max'])
                     sum  mean  max
2013-01-01 00:00:00    3   1.5    2
2013-01-01 00:00:02    7   3.5    4
2013-01-01 00:00:04    5   5.0    5
34.14.2.3 pandas.core.resample.Resampler.transform
Examples
34.14.3 Upsampling
34.14.3.1 pandas.core.resample.Resampler.ffill
Resampler.ffill(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.fillna, DataFrame.fillna
34.14.3.2 pandas.core.resample.Resampler.backfill
Resampler.backfill(limit=None)
Backward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.fillna, DataFrame.fillna
34.14.3.3 pandas.core.resample.Resampler.bfill
Resampler.bfill(limit=None)
Backward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.fillna, DataFrame.fillna
34.14.3.4 pandas.core.resample.Resampler.pad
Resampler.pad(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.fillna, DataFrame.fillna
34.14.3.5 pandas.core.resample.Resampler.fillna
Resampler.fillna(method, limit=None)
Fill missing values
Parameters method : str, method of resampling (ffill, bfill)
limit : integer, optional
limit of how many values to fill
See also:
Series.fillna, DataFrame.fillna
34.14.3.6 pandas.core.resample.Resampler.asfreq
Resampler.asfreq(fill_value=None)
Return the values at the new freq, essentially a reindex
Parameters fill_value: scalar, optional
Value to use for missing values, applied during upsampling (note this does not fill
NaNs that already were present).
New in version 0.20.0.
See also:
Series.asfreq, DataFrame.asfreq
34.14.3.7 pandas.core.resample.Resampler.interpolate
Interpolate values according to different methods.
Parameters method : {linear, time, index, values, nearest, zero, slinear, quadratic, cubic, barycentric, krogh, polynomial, spline, piecewise_polynomial, from_derivatives, pchip, akima}
- linear: ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. Default.
- time: interpolation works on daily and higher resolution data to interpolate given length of interval.
- index, values: use the actual numerical values of the index.
- nearest, zero, slinear, quadratic, cubic, barycentric, polynomial: passed to scipy.interpolate.interp1d. Both polynomial and spline require that you also specify an order (int), e.g. df.interpolate(method='polynomial', order=4). These use the actual numerical values of the index.
- krogh, piecewise_polynomial, spline, pchip and akima are all wrappers around the scipy interpolation methods of similar names. These use the actual numerical values of the index. For more information on their behavior, see the scipy documentation and tutorial documentation.
- from_derivatives refers to BPoly.from_derivatives, which replaces the piecewise_polynomial interpolation method in scipy 0.18.
New in version 0.18.1: Added support for the akima method; added interpolate method from_derivatives which replaces piecewise_polynomial in scipy 0.18; backwards-compatible with scipy < 0.18.
axis : {0, 1}, default 0
- 0: fill column-by-column
- 1: fill row-by-row
Examples
Filling in NaNs
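A minimal sketch of the default linear method on a plain Series (s is illustrative, not from the original docstring); after a resample, interpolate fills the newly created periods the same way:
>>> import numpy as np
>>> import pandas as pd
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s.interpolate()
0    0.0
1    1.0
2    2.0
3    3.0
dtype: float64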
34.14.4.1 pandas.core.resample.Resampler.count
Resampler.count(_method='count')
Compute count of group, excluding missing values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.14.4.2 pandas.core.resample.Resampler.nunique
Resampler.nunique(_method='nunique')
Returns number of unique elements in the group
34.14.4.3 pandas.core.resample.Resampler.first
34.14.4.4 pandas.core.resample.Resampler.last
34.14.4.5 pandas.core.resample.Resampler.max
34.14.4.6 pandas.core.resample.Resampler.mean
34.14.4.7 pandas.core.resample.Resampler.median
34.14.4.8 pandas.core.resample.Resampler.min
34.14.4.9 pandas.core.resample.Resampler.ohlc
34.14.4.10 pandas.core.resample.Resampler.prod
34.14.4.11 pandas.core.resample.Resampler.size
Resampler.size(_method='size')
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.14.4.12 pandas.core.resample.Resampler.sem
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.14.4.13 pandas.core.resample.Resampler.std
34.14.4.14 pandas.core.resample.Resampler.sum
34.14.4.15 pandas.core.resample.Resampler.var
34.15 Style
34.15.1 Constructor
Styler(data[, precision, table_styles, ...]) Helps style a DataFrame or Series according to the data
with HTML and CSS.
34.15.1.1 pandas.io.formats.style.Styler
Warning: This is a new feature and is under active development. We'll be adding features and possibly
making breaking changes in future releases.
See also:
pandas.DataFrame.style
Notes
Most styling will be done by passing style functions into Styler.apply or Styler.applymap. Style
functions should return values with strings containing CSS 'attr: value' that will be applied to the
indicated cells.
If using in the Jupyter notebook, Styler has defined a _repr_html_ to automatically render itself. Otherwise
call Styler.render to get the generated HTML.
CSS classes are attached to the generated HTML:
- Index and Column names include index_name and level<k> where k is its level in a MultiIndex
- Index label cells include
  - row_heading
  - row<n> where n is the numeric position of the row
  - level<k> where k is the level in a MultiIndex
- Column label cells include
  - col_heading
  - col<n> where n is the numeric position of the column
  - level<k> where k is the level in a MultiIndex
- Blank cells include blank
- Data cells include data
Attributes
env
template
loader
pandas.io.formats.style.Styler.env
pandas.io.formats.style.Styler.template
pandas.io.formats.style.Styler.loader
Methods
pandas.io.formats.style.Styler.apply
Styler.apply(func, axis=0, subset=None, **kwargs)
Apply a function column-wise, row-wise, or table-wise, updating the HTML representation with the result.
Parameters func : function
func should take a Series or DataFrame (depending on axis), and return an
object with the same shape. Must return a DataFrame with identical index and
column labels when axis=None
axis : int, str or None
apply to each column (axis=0 or 'index') or to each row (axis=1 or
'columns') or to the entire DataFrame at once with axis=None
subset : IndexSlice
a valid indexer to limit data to before applying the function. Consider using a
pandas.IndexSlice
kwargs : dict
pass along to func
Returns self : Styler
Notes
The output shape of func should match the input, i.e. if x is the input row, column, or table (depending
on axis), then func(x).shape == x.shape should be true.
This is similar to DataFrame.apply, except that axis=None applies the function to the entire
DataFrame at once, rather than column-wise or row-wise.
Examples
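A sketch of a column-wise style function (highlight_max is an illustrative helper, not part of the API):
>>> import numpy as np
>>> import pandas as pd
>>> def highlight_max(x):
...     # style the maximum of each column with a yellow background
...     return ['background-color: yellow' if v == x.max() else ''
...             for v in x]
>>> df = pd.DataFrame(np.random.randn(5, 2))
>>> df.style.apply(highlight_max)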
pandas.io.formats.style.Styler.applymap
pandas.io.formats.style.Styler.background_gradient
Notes
Tune low and high to keep the text legible by not using the entire range of the color map. These extend
the range of the data by low * (x.max() - x.min()) and high * (x.max() - x.min())
before normalizing.
pandas.io.formats.style.Styler.bar
mid : the center of the cell is at (max-min)/2, or if values are all negative (positive)
the zero is aligned at the right (left) of the cell
New in version 0.20.0.
Returns self : Styler
pandas.io.formats.style.Styler.clear
Styler.clear()
Reset the styler, removing any previously applied styles. Returns None.
pandas.io.formats.style.Styler.export
Styler.export()
Export the styles applied to the current Styler. Can be applied to a second Styler with Styler.use.
New in version 0.17.1.
Returns styles: list
See also:
Styler.use
pandas.io.formats.style.Styler.format
Styler.format(formatter, subset=None)
Format the text display value of cells.
New in version 0.18.0.
Parameters formatter: str, callable, or dict
subset: IndexSlice
An argument to DataFrame.loc that restricts which elements formatter is
applied to.
Returns self : Styler
Notes
Examples
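A sketch of a string formatter and a per-column formatter dict (df and its columns are illustrative):
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(4, 2), columns=['a', 'b'])
>>> df.style.format("{:.2%}")
>>> df['c'] = ['a', 'b', 'c', 'd']
>>> df.style.format({'c': str.upper})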
pandas.io.formats.style.Styler.from_custom_template
pandas.io.formats.style.Styler.highlight_max
pandas.io.formats.style.Styler.highlight_min
pandas.io.formats.style.Styler.highlight_null
Styler.highlight_null(null_color='red')
Shade the background null_color for missing values.
New in version 0.17.1.
Parameters null_color: str
Returns self : Styler
pandas.io.formats.style.Styler.render
Styler.render(**kwargs)
Render the built up styles to HTML
New in version 0.17.1.
Parameters **kwargs:
Any additional keyword arguments are passed through to self.template.
render. This is useful when you need to provide additional variables for a
custom template.
New in version 0.20.
Returns rendered: str
the rendered HTML
Notes
Styler objects have defined the _repr_html_ method which automatically calls self.render()
when it's the last item in a Notebook cell. When calling Styler.render() directly, wrap the result
in IPython.display.HTML to view the rendered HTML in the notebook.
Pandas uses the following keys in render. Arguments passed in **kwargs take precedence, so think
carefully if you want to override them:
head
cellstyle
body
uuid
precision
table_styles
caption
table_attributes
pandas.io.formats.style.Styler.set_caption
Styler.set_caption(caption)
Set the caption on a Styler
New in version 0.17.1.
Parameters caption: str
Returns self : Styler
pandas.io.formats.style.Styler.set_precision
Styler.set_precision(precision)
Set the precision used to render.
New in version 0.17.1.
Parameters precision: int
Returns self : Styler
pandas.io.formats.style.Styler.set_properties
Styler.set_properties(subset=None, **kwargs)
Convenience method for setting one or more non-data-dependent properties for each cell.
New in version 0.17.1.
Parameters subset: IndexSlice
a valid slice for data to limit the style application to
kwargs: dict
property: value pairs to be set for each cell
Returns self : Styler
Examples
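A sketch of setting cell-wide CSS properties (df is illustrative):
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_properties(color="white", align="right")
>>> df.style.set_properties(**{'background-color': 'yellow'})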
pandas.io.formats.style.Styler.set_table_attributes
Styler.set_table_attributes(attributes)
Set the table attributes. These are the items that show up in the opening <table> tag in addition to the
automatic (by default) id.
New in version 0.17.1.
Parameters attributes : string
Returns self : Styler
Examples
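A sketch (df is illustrative); the attributes string is inserted verbatim into the <table> tag:
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_attributes('class="pure-table"')
# ... <table ... class="pure-table"> ...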
pandas.io.formats.style.Styler.set_table_styles
Styler.set_table_styles(table_styles)
Set the table styles on a Styler. These are placed in a <style> tag before the generated HTML table.
New in version 0.17.1.
Parameters table_styles: list
Each individual table_style should be a dictionary with selector and props
keys. selector should be a CSS selector that the style will be applied to (automatically
prefixed by the table's UUID) and props should be a list of tuples
with (attribute, value).
Returns self : Styler
Examples
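A sketch of a hover style (df and the selector are illustrative):
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_styles(
...     [{'selector': 'tr:hover',
...       'props': [('background-color', 'yellow')]}]
... )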
pandas.io.formats.style.Styler.set_uuid
Styler.set_uuid(uuid)
Set the uuid for a Styler.
New in version 0.17.1.
Parameters uuid: str
Returns self : Styler
pandas.io.formats.style.Styler.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
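A minimal sketch, assuming df1 and df2 are existing DataFrames and output.xlsx is an illustrative path:
>>> writer = pd.ExcelWriter('output.xlsx')
>>> df1.to_excel(writer, 'Sheet1')
>>> df2.to_excel(writer, 'Sheet2')
>>> writer.save()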
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.io.formats.style.Styler.use
Styler.use(styles)
Set the styles on the current Styler, possibly using styles from Styler.export.
New in version 0.17.1.
Parameters styles: list
list of style functions
Returns self : Styler
See also:
Styler.export
34.16 General utility functions
34.16.1 Working with options
describe_option(pat[, _print_desc]) Prints the description for one or more registered options.
reset_option(pat) Reset one or more options to their default value.
get_option(pat) Retrieves the value of the specified option.
set_option(pat, value) Sets the value of the specified option.
option_context(*args) Context manager to temporarily set options in the with
statement context.
34.16.1.1 pandas.describe_option
Parameters pat : str
Regexp pattern. All matching keys will have their description displayed.
_print_desc : bool, default True
If True (default) the description(s) will be printed to stdout. Otherwise, the descrip-
tion(s) will be returned as a unicode string (for testing).
Returns None by default, the description(s) as a unicode string if _print_desc
is False
Notes
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [string] This specifies the format for multicolumn headers used by the to_latex
method of a Dataframe when pretty-printing MultiIndex columns. [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.line_width [int] Deprecated. [default: 80] [currently: 80] (Deprecated, use display.width instead.)
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype category. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 20] [currently: 20]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a ... placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] when pretty-printing a long sequence, no more than max_seq_items will
be printed. If items are omitted, they will be denoted by the addition of ... to the resulting string.
If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,deep [default: True] [currently: True]
display.mpl_style [bool] Setting this to default will modify the rcParams used by matplotlib to give plots a
more pleasing visual style by default. Setting this to None/False restores the values to their initial value.
[default: None] [currently: None]
display.multi_sparse [boolean] sparsify MultiIndex display (don't display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or truncate] Whether to print out dimensions at the end of DataFrame
repr. If truncate is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1]
io.excel.xls.writer [string] The default Excel writer engine for xls files. Available options: xlwt (the de-
fault). [default: xlwt] [currently: xlwt]
io.excel.xlsm.writer [string] The default Excel writer engine for xlsm files. Available options: openpyxl
(the default). [default: openpyxl] [currently: openpyxl]
io.excel.xlsx.writer [string] The default Excel writer engine for xlsx files. Available options: xlsxwriter (the
default), openpyxl. [default: xlsxwriter] [currently: xlsxwriter]
io.hdf.default_format [format] default format for writing; if None, then put will default to fixed and
append will default to table [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
mode.chained_assignment [string] Raise an exception, warn, or take no action if trying to use chained assignment.
The default is warn. [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_null [boolean] True means treat None, NaN, INF, -INF as null (old way), False means None
and NaN are null, but INF, -INF are not null (new way). [default: False] [currently: False]
34.16.1.2 pandas.reset_option
Notes
display.encoding [str/unicode] Defaults to the detected encoding of the console. Specifies the encoding to be
used for strings returned by to_string, these are generally strings meant to be displayed on the console.
[default: UTF-8] [currently: UTF-8]
display.expand_frame_repr [boolean] Whether to print out the full DataFrame repr for wide DataFrames
across multiple lines, max_columns is still respected, but the output will wrap-around across multiple
pages if its width exceeds display.width. [default: True] [currently: True]
display.float_format [callable] The callable should accept a floating point number and return a string with
the desired format of the number. This is used in some places like SeriesFormatter. See for-
mats.format.EngFormatter for an example. [default: None] [currently: None]
display.height [int] Deprecated. [default: 60] [currently: 15] (Deprecated, use display.max_rows instead.)
display.html.table_schema [boolean] Whether to publish a Table Schema representation for frontends that
support it. (default: False) [default: False] [currently: False]
display.large_repr [truncate/info] For DataFrames exceeding max_rows/max_cols, the repr (and HTML
repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour
in earlier versions of pandas). [default: truncate] [currently: truncate]
display.latex.escape [bool] This specifies if the to_latex method of a Dataframe escapes special characters.
Valid values: False,True [default: True] [currently: True]
display.latex.longtable [bool] This specifies if the to_latex method of a Dataframe uses the longtable format.
Valid values: False,True [default: False] [currently: False]
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [string] This specifies the format for multicolumn headers used by the to_latex
method of a Dataframe when pretty-printing MultiIndex columns. [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.line_width [int] Deprecated. [default: 80] [currently: 80] (Deprecated, use display.width instead.)
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype category. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 20] [currently: 20]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a ... placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] when pretty-printing a long sequence, no more than max_seq_items will
be printed. If items are omitted, they will be denoted by the addition of ... to the resulting string.
If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,deep [default: True] [currently: True]
display.mpl_style [bool] Setting this to default will modify the rcParams used by matplotlib to give plots a
more pleasing visual style by default. Setting this to None/False restores the values to their initial value.
[default: None] [currently: None]
display.multi_sparse [boolean] sparsify MultiIndex display (don't display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or truncate] Whether to print out dimensions at the end of DataFrame
repr. If truncate is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1]
io.excel.xls.writer [string] The default Excel writer engine for xls files. Available options: xlwt (the de-
fault). [default: xlwt] [currently: xlwt]
io.excel.xlsm.writer [string] The default Excel writer engine for xlsm files. Available options: openpyxl
(the default). [default: openpyxl] [currently: openpyxl]
io.excel.xlsx.writer [string] The default Excel writer engine for xlsx files. Available options: xlsxwriter (the
default), openpyxl. [default: xlsxwriter] [currently: xlsxwriter]
io.hdf.default_format [format] default format for writing; if None, then put will default to fixed and
append will default to table [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
mode.chained_assignment [string] Raise an exception, warn, or take no action if trying to use chained assignment.
The default is warn. [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_null [boolean] True means treat None, NaN, INF, -INF as null (old way), False means None
and NaN are null, but INF, -INF are not null (new way). [default: False] [currently: False]
34.16.1.3 pandas.get_option
Notes
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype category. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 20] [currently: 20]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a ... placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] when pretty-printing a long sequence, no more than max_seq_items will
be printed. If items are omitted, they will be denoted by the addition of ... to the resulting string.
If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,deep [default: True] [currently: True]
display.mpl_style [bool] Setting this to default will modify the rcParams used by matplotlib to give plots a
more pleasing visual style by default. Setting this to None/False restores the values to their initial value.
[default: None] [currently: None]
display.multi_sparse [boolean] sparsify MultiIndex display (don't display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or truncate] Whether to print out dimensions at the end of DataFrame
repr. If truncate is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1]
io.excel.xls.writer [string] The default Excel writer engine for xls files. Available options: xlwt (the de-
fault). [default: xlwt] [currently: xlwt]
io.excel.xlsm.writer [string] The default Excel writer engine for xlsm files. Available options: openpyxl
(the default). [default: openpyxl] [currently: openpyxl]
io.excel.xlsx.writer [string] The default Excel writer engine for xlsx files. Available options: xlsxwriter (the
default), openpyxl. [default: xlsxwriter] [currently: xlsxwriter]
io.hdf.default_format [format] default format for writing; if None, then put will default to fixed and
append will default to table [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
mode.chained_assignment [string] Raise an exception, warn, or take no action if trying to use chained assignment.
The default is warn. [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_null [boolean] True means treat None, NaN, INF, -INF as null (old way), False means None
and NaN are null, but INF, -INF are not null (new way). [default: False] [currently: False]
34.16.1.4 pandas.set_option
io.excel.xls.[writer]
io.excel.xlsm.[writer]
io.excel.xlsx.[writer]
io.hdf.[default_format, dropna_table]
mode.[chained_assignment, sim_interactive, use_inf_as_null]
Notes
display.html.table_schema [boolean] Whether to publish a Table Schema representation for frontends that
support it. (default: False) [default: False] [currently: False]
display.large_repr [truncate/info] For DataFrames exceeding max_rows/max_cols, the repr (and HTML
repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour
in earlier versions of pandas). [default: truncate] [currently: truncate]
display.latex.escape [bool] This specifies if the to_latex method of a Dataframe escapes special characters.
Valid values: False,True [default: True] [currently: True]
display.latex.longtable [bool] This specifies if the to_latex method of a Dataframe uses the longtable format.
Valid values: False,True [default: False] [currently: False]
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [string] This specifies the format for multicolumn headers used by the to_latex
method of a Dataframe when pretty-printing MultiIndex columns. [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.line_width [int] Deprecated. [default: 80] [currently: 80] (Deprecated, use display.width instead.)
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype category. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 20] [currently: 20]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a ... placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. None value means unlimited.
In case python/IPython is running in a terminal and large_repr equals truncate this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] when pretty-printing a long sequence, no more than max_seq_items will
be printed. If items are omitted, they will be denoted by the addition of ... to the resulting string.
If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,deep [default: True] [currently: True]
display.mpl_style [bool] Setting this to default will modify the rcParams used by matplotlib to give plots a
more pleasing visual style by default. Setting this to None/False restores the values to their initial value.
[default: None] [currently: None]
display.multi_sparse [boolean] sparsify MultiIndex display (don't display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or truncate] Whether to print out dimensions at the end of DataFrame
repr. If truncate is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the
display text width. Enabling this may affect the performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1]
io.excel.xls.writer [string] The default Excel writer engine for xls files. Available options: xlwt (the de-
fault). [default: xlwt] [currently: xlwt]
io.excel.xlsm.writer [string] The default Excel writer engine for xlsm files. Available options: openpyxl
(the default). [default: openpyxl] [currently: openpyxl]
io.excel.xlsx.writer [string] The default Excel writer engine for xlsx files. Available options: xlsxwriter (the
default), openpyxl. [default: xlsxwriter] [currently: xlsxwriter]
io.hdf.default_format [format] default format for writing; if None, then put will default to fixed and
append will default to table [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
mode.chained_assignment [string] Raise an exception, warn, or take no action if trying to use chained assignment.
The default is warn. [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_null [boolean] True means treat None, NaN, INF, -INF as null (old way), False means None
and NaN are null, but INF, -INF are not null (new way). [default: False] [currently: False]
34.16.1.5 pandas.option_context
class pandas.option_context(*args)
Context manager to temporarily set options in the with statement context.
You need to invoke as option_context(pat, val, [(pat, val), ...]).
Examples
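A minimal sketch; the options and values chosen are illustrative:
>>> import pandas as pd
>>> with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
...     print(pd.get_option('display.max_rows'))
10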
34.16.2 Testing functions
testing.assert_frame_equal(left, right[, ...]) Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...]) Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...]) Check that left and right Index are equal.
34.16.2.1 pandas.testing.assert_frame_equal
34.16.2.2 pandas.testing.assert_series_equal
34.16.2.3 pandas.testing.assert_index_equal
34.16.3 Exceptions and warnings
34.16.3.1 pandas.errors.DtypeWarning
exception pandas.errors.DtypeWarning
Warning that is raised for a dtype incompatibility. This can happen whenever pd.read_csv encounters non-
uniform dtypes in a column(s) of a given CSV file
34.16.3.2 pandas.errors.EmptyDataError
exception pandas.errors.EmptyDataError
Exception that is thrown in pd.read_csv (by both the C and Python engines) when empty data or header is
encountered
34.16.3.3 pandas.errors.OutOfBoundsDatetime
exception pandas.errors.OutOfBoundsDatetime
34.16.3.4 pandas.errors.ParserError
exception pandas.errors.ParserError
Exception that is raised when an error is encountered in pd.read_csv
34.16.3.5 pandas.errors.ParserWarning
exception pandas.errors.ParserWarning
Warning that is raised in pd.read_csv whenever it is necessary to change parsers (generally from c to python)
contrary to the one specified by the user due to lack of support or functionality for parsing particular attributes
of a CSV file with the requested engine
34.16.3.6 pandas.errors.PerformanceWarning
exception pandas.errors.PerformanceWarning
Warnings shown when there is a possible performance impact.
34.16.3.7 pandas.errors.UnsortedIndexError
exception pandas.errors.UnsortedIndexError
Error raised when attempting to get a slice of a MultiIndex and the index has not been lexsorted. Subclass of
KeyError.
New in version 0.20.0.
34.16.3.8 pandas.errors.UnsupportedFunctionCall
exception pandas.errors.UnsupportedFunctionCall
Exception raised when attempting to call a numpy function on a pandas object, for example using
np.cumsum(groupby_object).
34.16.4 Data types related functionality
34.16.4.1 pandas.api.types.union_categoricals
34.16.4.2 pandas.api.types.infer_dtype
pandas.api.types.infer_dtype()
Efficiently infer the type of a passed val, or list-like array of values. Return a string describing the type.
Parameters value : scalar, list, ndarray, or pandas type
Notes
Examples
>>> infer_dtype([pd.Timestamp('20130101')])
'datetime'
>>> infer_dtype([np.datetime64('2013-01-01')])
'datetime64'
>>> infer_dtype(pd.Series(list('aabc')).astype('category'))
'categorical'
34.16.4.3 pandas.api.types.pandas_dtype
pandas.api.types.pandas_dtype(dtype)
Converts input into a pandas only dtype object or a numpy dtype object.
Parameters dtype : object to be converted
Returns np.dtype or a pandas dtype
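Examples
A minimal sketch; the dtype strings passed in are illustrative:
>>> import pandas as pd
>>> pd.api.types.pandas_dtype('int64')
dtype('int64')
>>> pd.api.types.pandas_dtype('datetime64[ns]')
dtype('<M8[ns]')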
Dtype introspection
34.16.4.4 pandas.api.types.is_bool_dtype
pandas.api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a boolean dtype.
Examples
>>> is_bool_dtype(str)
False
>>> is_bool_dtype(int)
False
>>> is_bool_dtype(bool)
True
>>> is_bool_dtype(np.bool)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
>>> is_bool_dtype(pd.Series([1, 2]))
False
>>> is_bool_dtype(np.array([True, False]))
True
34.16.4.5 pandas.api.types.is_categorical_dtype
pandas.api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the Categorical dtype.
Examples
>>> is_categorical_dtype(object)
False
>>> is_categorical_dtype(CategoricalDtype())
True
>>> is_categorical_dtype([1, 2, 3])
False
>>> is_categorical_dtype(pd.Categorical([1, 2, 3]))
True
>>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
True
34.16.4.6 pandas.api.types.is_complex_dtype
pandas.api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a complex dtype.
Examples
>>> is_complex_dtype(str)
False
>>> is_complex_dtype(int)
False
>>> is_complex_dtype(np.complex)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
>>> is_complex_dtype(pd.Series([1, 2]))
False
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True
34.16.4.7 pandas.api.types.is_datetime64_any_dtype
pandas.api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the datetime64 dtype.
Examples
>>> is_datetime64_any_dtype(str)
False
>>> is_datetime64_any_dtype(int)
False
>>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
True
>>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_any_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_any_dtype(np.array([1, 2]))
False
>>> is_datetime64_any_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3],
...                                          dtype=np.datetime64))
True
34.16.4.8 pandas.api.types.is_datetime64_dtype
pandas.api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of
the datetime64 dtype.
Examples
>>> is_datetime64_dtype(object)
False
>>> is_datetime64_dtype(np.datetime64)
True
34.16.4.9 pandas.api.types.is_datetime64_ns_dtype
pandas.api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the datetime64[ns] dtype.
Examples
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype=np.datetime64)) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]"))  # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype=np.datetime64))  # has 'ns' unit
True
34.16.4.10 pandas.api.types.is_datetime64tz_dtype
pandas.api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of
a DatetimeTZDtype dtype.
Examples
>>> is_datetime64tz_dtype(object)
False
>>> is_datetime64tz_dtype([1, 2, 3])
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3])) # tz-naive
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
34.16.4.11 pandas.api.types.is_extension_type
pandas.api.types.is_extension_type(arr)
Check whether an array-like is of a pandas extension class instance.
Extension classes include categoricals, pandas sparse objects (i.e. classes represented within the pandas library
and not ones external to it like scipy sparse matrices), and datetime-like arrays.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is of a pandas
extension class instance.
Examples
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
34.16.4.12 pandas.api.types.is_float_dtype
pandas.api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a float dtype.
Examples
>>> is_float_dtype(str)
False
>>> is_float_dtype(int)
False
>>> is_float_dtype(float)
True
>>> is_float_dtype(np.array(['a', 'b']))
False
>>> is_float_dtype(pd.Series([1, 2]))
False
>>> is_float_dtype(pd.Index([1, 2.]))
True
34.16.4.13 pandas.api.types.is_int64_dtype
pandas.api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the int64 dtype.
Notes
Depending on system architecture, the return value of is_int64_dtype(int) will be True if the OS uses 64-bit integers and False if the OS uses 32-bit integers.
Examples
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
34.16.4.14 pandas.api.types.is_integer_dtype
pandas.api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
Unlike in is_any_int_dtype, timedelta64 instances will return False.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of an integer dtype
and not an instance of timedelta64.
Examples
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
34.16.4.15 pandas.api.types.is_interval_dtype
pandas.api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the Interval dtype.
Examples
>>> is_interval_dtype(object)
False
>>> is_interval_dtype(IntervalDtype())
True
>>> is_interval_dtype([1, 2, 3])
False
>>>
>>> interval = pd.Interval(1, 2, closed="right")
>>> is_interval_dtype(interval)
False
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True
34.16.4.16 pandas.api.types.is_numeric_dtype
pandas.api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a numeric dtype.
Examples
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
34.16.4.17 pandas.api.types.is_object_dtype
pandas.api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of the object dtype.
Examples
>>> is_object_dtype(object)
True
>>> is_object_dtype(int)
False
>>> is_object_dtype(np.array([], dtype=object))
True
>>> is_object_dtype(np.array([], dtype=int))
False
>>> is_object_dtype([1, 2, 3])
False
34.16.4.18 pandas.api.types.is_period_dtype
pandas.api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of the Period dtype.
Examples
>>> is_period_dtype(object)
False
>>> is_period_dtype(PeriodDtype(freq="D"))
True
>>> is_period_dtype([1, 2, 3])
False
>>> is_period_dtype(pd.Period("2017-01-01"))
False
>>> is_period_dtype(pd.PeriodIndex([], freq="A"))
True
34.16.4.19 pandas.api.types.is_signed_integer_dtype
pandas.api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
Unlike in is_any_int_dtype, timedelta64 instances will return False.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a signed integer dtype
and not an instance of timedelta64.
Examples
>>> is_signed_integer_dtype(str)
False
>>> is_signed_integer_dtype(int)
True
>>> is_signed_integer_dtype(float)
False
>>> is_signed_integer_dtype(np.uint64) # unsigned
False
>>> is_signed_integer_dtype(np.datetime64)
False
>>> is_signed_integer_dtype(np.timedelta64)
False
>>> is_signed_integer_dtype(np.array(['a', 'b']))
False
>>> is_signed_integer_dtype(pd.Series([1, 2]))
True
>>> is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_signed_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
34.16.4.20 pandas.api.types.is_string_dtype
pandas.api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the string dtype.
Examples
>>> is_string_dtype(str)
True
>>> is_string_dtype(object)
True
>>> is_string_dtype(int)
False
>>>
>>> is_string_dtype(np.array(['a', 'b']))
True
>>> is_string_dtype(pd.Series([1, 2]))
False
34.16.4.21 pandas.api.types.is_timedelta64_dtype
pandas.api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the timedelta64 dtype.
Examples
>>> is_timedelta64_dtype(object)
False
>>> is_timedelta64_dtype(np.timedelta64)
True
>>> is_timedelta64_dtype([1, 2, 3])
False
>>> is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
True
34.16.4.22 pandas.api.types.is_timedelta64_ns_dtype
pandas.api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
This is a very specific dtype, so generic ones like np.timedelta64 will return False if passed into this function.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the
timedelta64[ns] dtype.
Examples
>>> is_timedelta64_ns_dtype(np.dtype('m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.dtype('m8[ps]')) # Wrong frequency
False
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype='m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype=np.timedelta64))
False
34.16.4.23 pandas.api.types.is_unsigned_integer_dtype
pandas.api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of an
unsigned integer dtype.
Examples
>>> is_unsigned_integer_dtype(str)
False
>>> is_unsigned_integer_dtype(int) # signed
False
>>> is_unsigned_integer_dtype(float)
False
>>> is_unsigned_integer_dtype(np.uint64)
True
>>> is_unsigned_integer_dtype(np.array(['a', 'b']))
False
>>> is_unsigned_integer_dtype(pd.Series([1, 2])) # signed
False
>>> is_unsigned_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
True
34.16.4.24 pandas.api.types.is_sparse
pandas.api.types.is_sparse(arr)
Check whether an array-like is a pandas sparse array.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a pandas sparse array.
Examples
This function checks only for pandas sparse array instances, so sparse arrays from other libraries will return
False.
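For example (SparseArray and SparseSeries are the pandas sparse containers):
>>> import numpy as np
>>> import pandas as pd
>>> from pandas.api.types import is_sparse
>>> is_sparse(pd.SparseArray([1, 2, 3]))
True
>>> is_sparse(pd.SparseSeries([1, 2, 3]))
True
>>> is_sparse(np.array([1, 2, 3]))
False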
Iterable introspection
34.16.4.25 pandas.api.types.is_dict_like
pandas.api.types.is_dict_like(obj)
Check if the object is dict-like.
Parameters obj : The object to check.
Returns is_dict_like : bool
Whether obj has dict-like properties.
Examples
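A minimal illustration:
>>> from pandas.api.types import is_dict_like
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False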
34.16.4.26 pandas.api.types.is_file_like
pandas.api.types.is_file_like(obj)
Check if the object is a file-like object.
For objects to be considered file-like, they must be iterators AND have either a read and/or write method as an attribute.
Note: file-like objects must be iterable, but iterable objects need not be file-like.
New in version 0.20.0.
Examples
>>> from io import StringIO
>>> buffer = StringIO("data")
>>> is_file_like(buffer)
True
>>> is_file_like([1, 2, 3])
False
34.16.4.27 pandas.api.types.is_list_like
pandas.api.types.is_list_like(obj)
Check if the object is list-like.
Objects that are considered list-like are for example Python lists, tuples, sets, NumPy arrays, and Pandas Series.
Strings and datetime objects, however, are not considered list-like.
Parameters obj : The object to check.
Returns is_list_like : bool
Whether obj has list-like properties.
Examples
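A minimal illustration:
>>> from datetime import datetime
>>> from pandas.api.types import is_list_like
>>> is_list_like([1, 2, 3])
True
>>> is_list_like({1, 2, 3})
True
>>> is_list_like("foo")
False
>>> is_list_like(1)
False
>>> is_list_like(datetime(2017, 1, 1))
False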
34.16.4.28 pandas.api.types.is_named_tuple
pandas.api.types.is_named_tuple(obj)
Check if the object is a named tuple.
Parameters obj : The object to check.
Returns is_named_tuple : bool
Whether obj is a named tuple.
Examples
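A minimal illustration:
>>> from collections import namedtuple
>>> from pandas.api.types import is_named_tuple
>>> Point = namedtuple("Point", ["x", "y"])
>>> is_named_tuple(Point(1, 2))
True
>>> is_named_tuple((1, 2))
False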
34.16.4.29 pandas.api.types.is_iterator
pandas.api.types.is_iterator(obj)
Check if the object is an iterator.
For example, generators and other objects implementing __next__ are iterators; lists, strings, and datetime objects are iterable but are not iterators.
Parameters obj : The object to check.
Returns is_iter : bool
Whether obj is an iterator.
Examples
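A minimal illustration (generators are iterators; lists and strings are merely iterable):
>>> from pandas.api.types import is_iterator
>>> is_iterator((x for x in []))
True
>>> is_iterator([1, 2, 3])
False
>>> is_iterator("foo")
False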
Scalar introspection
api.types.is_bool
api.types.is_categorical(arr)        Check whether an array-like is a Categorical instance.
api.types.is_complex
api.types.is_datetimetz(arr)         Check whether an array-like is a datetime array-like with a timezone component in its dtype.
api.types.is_float
api.types.is_hashable(obj)           Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
api.types.is_interval
api.types.is_number(obj)             Check if the object is a number.
api.types.is_period(arr)             Check whether an array-like is a periodical index.
api.types.is_re(obj)                 Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)      Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar                  Return True if given value is scalar.
34.16.4.30 pandas.api.types.is_bool
pandas.api.types.is_bool()
34.16.4.31 pandas.api.types.is_categorical
pandas.api.types.is_categorical(arr)
Check whether an array-like is a Categorical instance.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is of a Categorical instance.
Examples
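A minimal illustration:
>>> import pandas as pd
>>> from pandas.api.types import is_categorical
>>> is_categorical([1, 2, 3])
False
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True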
34.16.4.32 pandas.api.types.is_complex
pandas.api.types.is_complex()
34.16.4.33 pandas.api.types.is_datetimetz
pandas.api.types.is_datetimetz(arr)
Check whether an array-like is a datetime array-like with a timezone component in its dtype.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a datetime array-like with
a timezone component in its dtype.
Examples
Although the following examples are both DatetimeIndex objects, the first one returns False because it has no
timezone component unlike the second one, which returns True.
The object need not be a DatetimeIndex object. It just needs to have a dtype which has a timezone component.
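A sketch of both cases described above:
>>> import pandas as pd
>>> from pandas.api.types import is_datetimetz
>>> is_datetimetz(pd.DatetimeIndex([1, 2, 3]))  # tz-naive
False
>>> is_datetimetz(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>> s = pd.Series([], dtype="datetime64[ns, US/Eastern]")
>>> is_datetimetz(s)
True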
34.16.4.34 pandas.api.types.is_float
pandas.api.types.is_float()
34.16.4.35 pandas.api.types.is_hashable
pandas.api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
Some types will pass a test against collections.Hashable but fail when they are actually hashed with hash().
Distinguish between these and other types by trying the call to hash() and seeing if they raise TypeError.
Examples
>>> a = ([],)
>>> isinstance(a, collections.Hashable)
True
>>> is_hashable(a)
False
34.16.4.36 pandas.api.types.is_integer
pandas.api.types.is_integer()
34.16.4.37 pandas.api.types.is_interval
pandas.api.types.is_interval()
34.16.4.38 pandas.api.types.is_number
pandas.api.types.is_number(obj)
Check if the object is a number.
Parameters obj : The object to check.
Returns is_number : bool
Examples
>>> is_number(1)
True
>>> is_number("foo")
False
34.16.4.39 pandas.api.types.is_period
pandas.api.types.is_period(arr)
Check whether an array-like is a periodical index.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a periodical index.
Examples
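A minimal illustration:
>>> import pandas as pd
>>> from pandas.api.types import is_period
>>> is_period([1, 2, 3])
False
>>> is_period(pd.Index([1, 2, 3]))
False
>>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D"))
True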
34.16.4.40 pandas.api.types.is_re
pandas.api.types.is_re(obj)
Check if the object is a regex pattern instance.
Parameters obj : The object to check.
Returns is_regex : bool
Whether obj is a regex pattern.
Examples
>>> is_re(re.compile(".*"))
True
>>> is_re("foo")
False
34.16.4.41 pandas.api.types.is_re_compilable
pandas.api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
Parameters obj : The object to check.
Returns is_regex_compilable : bool
Whether obj can be compiled as a regex pattern.
Examples
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False
34.16.4.42 pandas.api.types.is_scalar
pandas.api.types.is_scalar()
Return True if given value is scalar.
This includes:
- numpy array scalars (e.g. np.int64)
- Python builtin numerics
- Python builtin byte arrays and strings
- None
- instances of datetime.datetime
- instances of datetime.timedelta
- Period
- instances of decimal.Decimal
- Interval
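A brief sketch covering a few of these cases:
>>> import numpy as np
>>> import pandas as pd
>>> from pandas.api.types import is_scalar
>>> is_scalar(1)
True
>>> is_scalar(np.int64(1))
True
>>> is_scalar(None)
True
>>> is_scalar([1])
False
>>> is_scalar(pd.Series([1]))
False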
CHAPTER THIRTYFIVE
INTERNALS
35.1 Indexing
In pandas there are a few objects implemented which can serve as valid containers for the axis labels:
- Index: the generic "ordered set" object, an ndarray of object dtype assuming nothing about its contents. The labels must be hashable (and likely immutable) and unique. Populates a dict of label to location in Cython to do O(1) lookups.
- Int64Index: a version of Index highly optimized for 64-bit integer data, such as time stamps
- Float64Index: a version of Index highly optimized for 64-bit float data
- MultiIndex: the standard hierarchical index object
- DatetimeIndex: an Index object with Timestamp boxed elements (implemented as int64 values)
- TimedeltaIndex: an Index object with Timedelta boxed elements (implemented as int64 values)
- PeriodIndex: an Index object with Period elements
There are functions that make the creation of a regular index easy:
- date_range: fixed frequency date range generated from a time rule or DateOffset. An ndarray of Python datetime objects
- period_range: fixed frequency date range generated from a time rule or DateOffset. An ndarray of Period objects, representing timespans
The motivation for having an Index class in the first place was to enable different implementations of indexing. This means that it's possible for you, the user, to implement a custom Index subclass that may be better suited to a particular application than the ones provided in pandas.
From an internal implementation point of view, the relevant methods that an Index must define are one or more of
the following (depending on how incompatible the new object internals are with the Index functions):
- get_loc: returns an "indexer" (an integer, or in some cases a slice object) for a label
- slice_locs: returns the "range" to slice between two labels
- get_indexer: computes the indexing vector for reindexing / data alignment purposes. See the source / docstrings for more on this
- get_indexer_non_unique: computes the indexing vector for reindexing / data alignment purposes when the index is non-unique. See the source / docstrings for more on this
- reindex: does any pre-conversion of the input index then calls get_indexer
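A quick sketch of the first three on a plain Index (the exact array dtype of get_indexer's result can vary by platform):
>>> import pandas as pd
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.get_loc('b')
1
>>> idx.slice_locs('a', 'b')
(0, 2)
>>> idx.get_indexer(['c', 'b', 'z'])  # -1 marks a missing label
array([ 2,  1, -1])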
35.1.1 MultiIndex
Internally, the MultiIndex consists of a few things: the levels, the integer labels, and the level names:
In [1]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']],
   ...:                                    names=['first', 'second'])
In [2]: index
Out[2]:
MultiIndex(levels=[[0, 1, 2], ['one', 'two']],
           labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
In [3]: index.levels
Out[3]: FrozenList([[0, 1, 2], ['one', 'two']])
In [4]: index.labels
Out[4]: FrozenList([[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]])
In [5]: index.names
Out[5]: FrozenList(['first', 'second'])
You can probably guess that the labels determine which unique element is identified with that location at each layer
of the index. It's important to note that sortedness is determined solely from the integer labels and does not check
(or care) whether the levels themselves are sorted. Fortunately, the constructors from_tuples and from_arrays
ensure that this is true, but if you compute the levels and labels yourself, please be careful.
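A small illustration of label-based sortedness, using MultiIndex.is_lexsorted (which inspects only the integer labels):
>>> import pandas as pd
>>> pd.MultiIndex.from_arrays([[0, 0, 1], ['a', 'b', 'a']]).is_lexsorted()
True
>>> pd.MultiIndex.from_arrays([[1, 0], ['a', 'b']]).is_lexsorted()
False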
35.2 Subclassing pandas Data Structures
Warning: There are some easier alternatives before considering subclassing pandas data structures.
1. Extensible method chains with pipe
2. Use composition. See here.
This section describes how to subclass pandas data structures to meet more specific needs. There are two points which need attention:
1. Override constructor properties.
2. Define original properties.
Each data structure has constructor properties that specify data constructors. By overriding these properties, you can retain defined classes through pandas data manipulations.
There are 3 constructors to be defined:
- _constructor: used when a manipulation result has the same dimensions as the original.
- _constructor_sliced: used when a manipulation result has one dimension lower than the original, such as slicing a single column of a DataFrame.
- _constructor_expanddim: used when a manipulation result has one dimension higher than the original, such as Series.to_frame() and DataFrame.to_panel().
The following table shows how pandas data structures define constructor properties by default.
Property                Series               DataFrame  Panel
_constructor            Series               DataFrame  Panel
_constructor_sliced     NotImplementedError  Series     DataFrame
_constructor_expanddim  DataFrame            Panel      NotImplementedError
The example below shows how to define SubclassedSeries and SubclassedDataFrame by overriding the constructor properties.
class SubclassedSeries(Series):

    @property
    def _constructor(self):
        return SubclassedSeries

    @property
    def _constructor_expanddim(self):
        return SubclassedDataFrame


class SubclassedDataFrame(DataFrame):

    @property
    def _constructor(self):
        return SubclassedDataFrame

    @property
    def _constructor_sliced(self):
        return SubclassedSeries
>>> df = SubclassedDataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
>>> df
   A  B  C
0  1  4  7
1  2  5  8
2  3  6  9
>>> type(df)
<class '__main__.SubclassedDataFrame'>
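Slicing a single column likewise yields the subclassed Series:
>>> sliced = df['A']
>>> sliced
0    1
1    2
2    3
Name: A, dtype: int64
>>> type(sliced)
<class '__main__.SubclassedSeries'>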
To let original data structures have additional properties, you should let pandas know what properties are added. pandas maps unknown properties to data names overriding __getattribute__. Defining original properties can be done in one of two ways:
1. Define _internal_names and _internal_names_set for temporary properties which WILL NOT be passed to manipulation results.
2. Define _metadata for normal properties which will be passed to manipulation results.
Below is an example defining two original properties, internal_cache as a temporary property and added_property as a normal property:
class SubclassedDataFrame2(DataFrame):

    # temporary properties
    _internal_names = pd.DataFrame._internal_names + ['internal_cache']
    _internal_names_set = set(_internal_names)

    # normal properties
    _metadata = ['added_property']

    @property
    def _constructor(self):
        return SubclassedDataFrame2
>>> df = SubclassedDataFrame2({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
>>> df
   A  B  C
0  1  4  7
1  2  5  8
2  3  6  9
>>> df.internal_cache = 'cached'
>>> df.added_property = 'property'
>>> df.internal_cache
cached
>>> df.added_property
property
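A short check of the difference (temporary properties are dropped by manipulation results, while _metadata properties are carried over):
>>> sliced = df[['A', 'B']]
>>> sliced.added_property
property
>>> sliced.internal_cache
Traceback (most recent call last):
    ...
AttributeError: 'SubclassedDataFrame2' object has no attribute 'internal_cache'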
CHAPTER THIRTYSIX
RELEASE NOTES
This is the list of changes to pandas between each release. For full details, see the commit logs at http://github.com/pandas-dev/pandas
What is it
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
relational or labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any language.
Where to get it
Source code: http://github.com/pandas-dev/pandas
Binary installers on PyPI: http://pypi.python.org/pypi/pandas
Documentation: http://pandas.pydata.org
Window binary corr/cov operations now return a MultiIndexed DataFrame rather than a Panel, as Panel is
now deprecated, see here
Support for S3 handling now uses s3fs, see here
Google BigQuery support now uses the pandas-gbq library, see here
See the v0.20.1 Whatsnew overview for an extensive list of all enhancements and bugs that have been fixed in 0.20.1.
Note: This is a combined release for 0.20.0 and 0.20.1. Version 0.20.1 contains one additional change for backwards-compatibility with downstream projects using pandas' utils routines. (GH16250)
36.1.1 Thanks
abaldenko
Adam J. Stewart
Adrian
adrian-stepien
Ajay Saxena
Akash Tandon
Albert Villanova del Moral
Aleksey Bilogur
alexandercbooth
Alexis Mignon
Amol Kahat
Andreas Winkler
Andrew Kittredge
Anthonios Partheniou
Arco Bast
Ashish Singal
atbd
bastewart
Baurzhan Muftakhidinov
Ben Kandel
Ben Thayer
Ben Welsh
Bill Chambers
bmagnusson
Brandon M. Burroughs
Brian
Brian McFee
carlosdanielcsantos
Carlos Souza
chaimdemulder
Chris
chris-b1
Chris Ham
Christopher C. Aycock
Christoph Gohlke
Christoph Paulik
Chris Warth
Clemens Brunner
DaanVanHauwermeiren
Daniel Himmelstein
Dave Willmer
David Cook
David Gwynne
David Hoffman
David Krych
dickreuter
Diego Fernandez
Dimitris Spathis
discort
Dmitry L
Dody Suria Wijaya
Dominik Stanczak
Dr-Irv
Dr. Irv
dr-leo
D.S. McNeil
dubourg
dwkenefick
Elliott Sales de Andrade
Ennemoser Christoph
Francesc Alted
Fumito Hamamura
funnycrab
gfyoung
Giacomo Ferroni
goldenbull
Graham R. Jeffries
Greg Williams
Guilherme Beltramini
Guilherme Samora
Hao Wu
Harshit Patni
hesham.shabana@hotmail.com
Ilya V. Schurov
Iván Vallés Pérez
Jackie Leng
Jaehoon Hwang
James Draper
James Goppert
James McBride
James Santucci
Jan Schulz
Jeff Carey
Jeff Reback
JennaVergeynst
Jim
Jim Crist
Joe Jevnik
Joel Nothman
John
John Tucker
John W. O'Brien
John Zwinck
jojomdt
Jonathan de Bruin
Jonathan Whitmore
Jon Mease
Jon M. Mease
Joost Kranendonk
Joris Van den Bossche
Joshua Bradt
Julian Santander
Julien Marrec
Jun Kim
Justin Solinsky
Kacawi
Kamal Kamalaldin
Kerby Shedden
Kernc
Keshav Ramaswamy
Kevin Sheppard
Kyle Kelley
Larry Ren
Leon Yin
linebp
Line Pedersen
Lorenzo Cestaro
Luca Scarabello
Lukasz
Mahmoud Lababidi
manu
manuels
Mark Mandel
Matthew Brett
Matthew Roeschke
mattip
Matti Picus
Matt Roeschke
maxalbert
Maximilian Roos
mcocdawc
Michael Charlton
Michael Felt
Michael Lamparski
Michiel Stock
Mikolaj Chwalisz
Min RK
Miroslav Šedivý
Mykola Golubyev
Nate Yoder
Nathalie Rud
Nicholas Ver Halen
Nick Chmura
Nolan Nichols
nuffe
Pankaj Pandey
paul-mannino
Pawel Kordek
pbreach
Pete Huang
Peter
Peter Csizsek
Petio Petrov
Phil Ruffwind
Pietro Battiston
Piotr Chromiec
Prasanjit Prakash
Robert Bradshaw
Rob Forgione
Robin
Rodolfo Fernandez
Roger Thomas
Rouz Azari
Sahil Dua
sakkemo
Sam Foo
Sami Salonen
Sarah Bird
Sarma Tangirala
scls19fr
Scott Sanderson
Sebastian Bank
Sebastian Gsänger
Sébastien de Menten
Shawn Heide
Shyam Saladi
sinhrks
Sinhrks
Stephen Rauch
stijnvanhoey
Tara Adiseshan
themrmax
the-nose-knows
Thiago Serafim
Thoralf Gutierrez
Thrasibule
Tobias Gustafsson
Tom Augspurger
tomrod
Tong Shen
Tong SHEN
TrigonaMinima
tzinckgraf
Uwe
wandersoncferreira
watercrossing
wcwagner
Wes Turner
Wiktor Tomczak
WillAyd
xgdgsc
Yaroslav Halchenko
Yimeng Zhang
yui-knk
36.2.1 Thanks
Ajay Saxena
Ben Kandel
Chris
Chris Ham
Christopher C. Aycock
Daniel Himmelstein
Dave Willmer
Dr-Irv
gfyoung
hesham shabana
Jeff Carey
Jeff Reback
Joe Jevnik
Joris Van den Bossche
Julian Santander
Kerby Shedden
Keshav Ramaswamy
Kevin Sheppard
Luca Scarabello
Matti Picus
Matt Roeschke
Maximilian Roos
Mykola Golubyev
Nate Yoder
Nicholas Ver Halen
Pawel Kordek
Pietro Battiston
Rodolfo Fernandez
sinhrks
Tara Adiseshan
Tom Augspurger
wandersoncferreira
Yaroslav Halchenko
36.3.1 Thanks
Adam Chainz
Anthonios Partheniou
Arash Rouhani
Ben Kandel
Brandon M. Burroughs
Chris
chris-b1
Chris Warth
David Krych
dubourg
gfyoung
Iván Vallés Pérez
Jeff Reback
Joe Jevnik
Jon M. Mease
Joris Van den Bossche
Josh Owen
Keshav Ramaswamy
Larry Ren
mattrijk
Michael Felt
paul-mannino
Piotr Chromiec
Robert Bradshaw
Sinhrks
Thiago Serafim
Tom Bird
36.4.1 Thanks
adneu
Adrien Emery
agraboso
Alex Alekseyev
Alex Vig
Allen Riddell
Amol
Amol Agrawal
Andy R. Terrel
Anthonios Partheniou
babakkeyvani
Ben Kandel
Bob Baxley
Brett Rosen
c123w
Camilo Cota
Chris
chris-b1
Chris Grinolds
Christian Hudon
Christopher C. Aycock
Chris Warth
cmazzullo
conquistador1492
cr3
Daniel Siladji
Douglas McNeil
Drewrey Lupton
dsm054
Eduardo Blancas Reyes
Elliot Marsden
Evan Wright
Felix Marczinowski
Francis T. O'Donovan
Gábor Lipták
Geraint Duck
gfyoung
Giacomo Ferroni
Grant Roch
Haleemur Ali
harshul1610
Hassan Shamim
iamsimha
Iulius Curt
Ivan Nazarov
jackieleng
Jeff Reback
Jeffrey Gerard
Jenn Olsen
Jim Crist
Joe Jevnik
John Evans
John Freeman
John Liekezer
Johnny Gill
John W. O'Brien
John Zwinck
Jordan Erenrich
Joris Van den Bossche
Josh Howes
Jozef Brandys
Kamil Sindi
Ka Wo Chen
Kerby Shedden
Kernc
Kevin Sheppard
Matthieu Brucher
Maximilian Roos
Michael Scherer
Mike Graham
Mortada Mehyar
mpuels
Muhammad Haseeb Tariq
Nate George
Neil Parley
Nicolas Bonnotte
OXPHOS
Pan Deng / Zora
Paul
Pauli Virtanen
Paul Mestemaker
Pawel Kordek
Pietro Battiston
pijucha
Piotr Jucha
priyankjain
Ravi Kumar Nimmi
Robert Gieseke
Robert Kern
Roger Thomas
Roy Keyes
Russell Smith
Sahil Dua
Sanjiv Lobo
Sašo Stanovnik
Shawn Heide
sinhrks
Sinhrks
Stephen Kappel
Steve Choi
Stewart Henderson
Sudarshan Konge
Thomas A Caswell
Tom Augspurger
Tom Bird
Uwe Hoffmann
wcwagner
WillAyd
Xiang Zhang
Yadunandan
Yaroslav Halchenko
YG-Riku
Yuichiro Kaneko
yui-knk
zhangjinjie
znmean
Yan Facai
36.5.1 Thanks
Andrew Fiore-Gartland
Bastiaan
Benoît Vinot
Brandon Rhodes
DaCoEx
Drew Fustin
Ernesto Freitas
Filip Ter
Gregory Livschitz
Gábor Lipták
Hassan Kibirige
Iblis Lin
Israel Saeta Pérez
Jason Wolosonovich
Jeff Reback
Joe Jevnik
Joris Van den Bossche
Joshua Storck
Ka Wo Chen
Kerby Shedden
Kieran O'Mahony
Leif Walsh
Mahmoud Lababidi
Maoyuan Liu
Mark Roth
Matt Wittmann
MaxU
Maximilian Roos
Michael Droettboom
Nick Eubank
Nicolas Bonnotte
OXPHOS
Pauli Virtanen
Peter Waller
Pietro Battiston
Prabhjot Singh
Robin Wilson
Roger Thomas
Sebastian Bank
Stephen Hoover
Tim Hopper
Tom Augspurger
WANG Aiyong
Wes Turner
Winand
Xbar
Yan Facai
adneu
ajenkins-cargometrics
behzad nouri
chinskiy
gfyoung
jeps-journal
jonaslb
kotrfa
nileracecrew
onesandzeroes
rs2
sinhrks
tsdlovell
36.6.1 Thanks
ARF
Alex Alekseyev
Andrew McPherson
Andrew Rosenfeld
Anthonios Partheniou
Anton I. Sipos
Ben
Ben North
Bran Yang
Chris
Chris Carroux
Christopher C. Aycock
Christopher Scanlin
Cody
Da Wang
Daniel Grady
Dorozhko Anton
Dr-Irv
Erik M. Bray
Evan Wright
Francis T. O'Donovan
Frank Cleary
Gianluca Rossi
Graham Jeffries
Guillaume Horel
Henry Hammond
Isaac Schwabacher
Jean-Mathieu Deschenes
Jeff Reback
Joe Jevnik
John Freeman
John Fremlin
Jonas Hoersch
Joris Van den Bossche
Joris Vankerschaver
Justin Lecher
Justin Lin
Ka Wo Chen
Keming Zhang
Kerby Shedden
Kyle
Marco Farrugia
MasonGallo
MattRijk
Matthew Lurie
Maximilian Roos
Mayank Asthana
Mortada Mehyar
Moussa Taifi
Navreet Gill
Nicolas Bonnotte
Paul Reiners
Philip Gura
Pietro Battiston
RahulHP
Randy Carnevale
Rinoc Johnson
Rishipuri
Sangmin Park
Scott E Lasley
Sereger13
Shannon Wang
Skipper Seabold
Thierry Moisan
Thomas A Caswell
Toby Dylan Hocking
Tom Augspurger
Travis
Trent Hauck
Tux1
Varun
Wes McKinney
Will Thompson
Yoav Ram
Yoong Kang Lim
Yoshiki Vázquez Baeza
Young Joong Kim
Younggun Kim
Yuval Langer
alex argunov
behzad nouri
boombard
brian-pantano
chromy
daniel
dgram0
gfyoung
hack-c
hcontrast
jfoo
kaustuv deolal
llllllllll
ranarag
rockg
scls19fr
seales
sinhrks
srib
surveymedia.ca
tworec
36.7.1 Thanks
Aleksandr Drozd
Alex Chase
Anthonios Partheniou
BrenBarn
Brian J. McGuirk
Chris
Christian Berendt
Christian Perez
Cody Piersall
Data & Code Expert Experimenting with Code on Data
DrIrv
Evan Wright
Guillaume Gay
Hamed Saljooghinejad
Iblis Lin
Jake VanderPlas
Jan Schulz
Jean-Mathieu Deschenes
Jeff Reback
Jimmy Callin
Joris Van den Bossche
K.-Michael Aye
Ka Wo Chen
Loïc Séguin-C.
Luo Yicheng
Magnus Jud
Manuel Leonhardt
Matthew Gilbert
Maximilian Roos
Michael
Nicholas Stahl
Nicolas Bonnotte
Pastafarianist
Petra Chong
Phil Schaf
Philipp A
Rob deCarvalho
Roman Khomenko
Rémy Léone
Sebastian Bank
Thierry Moisan
Tom Augspurger
Tux1
Varun
Wieland Hoffmann
Winterflower
Yoav Ram
Younggun Kim
Zeke
ajcr
azuranski
behzad nouri
cel4
emilydolson
hironow
lexual
llllllllll
rockg
silentquasar
sinhrks
taeold
Datetime accessor (dt) now supports Series.dt.strftime to generate formatted strings for datetime-likes, and Series.dt.total_seconds to compute the duration of each timedelta in seconds (see the sketch after this list). See here
Period and PeriodIndex can handle multiplied freq like 3D, which corresponds to a span of 3 days. See here
Development installed versions of pandas will now have PEP440 compliant version strings (GH9518)
Development support for benchmarking with the Air Speed Velocity library (GH8316)
Support for reading SAS xport files, see here
Documentation comparing SAS to pandas, see here
Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see here
Display format with plain text can optionally align with Unicode East Asian Width, see here
Compatibility with Python 3.5 (GH11097)
Compatibility with matplotlib 1.5.0 (GH11111)
See the v0.17.0 Whatsnew overview for an extensive list of all enhancements and bugs that have been fixed in 0.17.0.
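A brief sketch of the two accessors mentioned above:
>>> import pandas as pd
>>> s = pd.Series(pd.date_range('2015-01-01', periods=3, freq='D'))
>>> s.dt.strftime('%Y/%m/%d')
0    2015/01/01
1    2015/01/02
2    2015/01/03
dtype: object
>>> pd.Series(pd.to_timedelta(['1 day', '2 days'])).dt.total_seconds()
0     86400.0
1    172800.0
dtype: float64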
36.8.1 Thanks
Alex Rothberg
Andrea Bedini
Andrew Rosenfeld
Andy Li
Anthonios Partheniou
Artemy Kolchinsky
Bernard Willers
Charlie Clark
Chris
Chris Whelan
Christoph Gohlke
Christopher Whelan
Clark Fitzgerald
Clearfield Christopher
Dan Ringwalt
Daniel Ni
Data & Code Expert Experimenting with Code on Data
David Cottrell
David John Gagne
David Kelly
ETF
Eduardo Schettino
Egor
Egor Panfilov
Evan Wright
Frank Pinter
Gabriel Araujo
Garrett-R
Gianluca Rossi
Guillaume Gay
Guillaume Poulin
Harsh Nisar
Ian Henriksen
Ian Hoegen
Jaidev Deshpande
Jan Rudolph
Jan Schulz
Jason Swails
Jeff Reback
Jonas Buyl
Joris Van den Bossche
Joris Vankerschaver
Josh Levy-Kramer
Julien Danjou
Ka Wo Chen
Karrie Kehoe
Kelsey Jordahl
Kerby Shedden
Kevin Sheppard
Lars Buitinck
Leif Johnson
Luis Ortiz
Mac
Matt Gambogi
Matt Savoie
Matthew Gilbert
Maximilian Roos
Michelangelo D'Agostino
Mortada Mehyar
Nick Eubank
Nipun Batra
Ondřej Čertík
Phillip Cloud
Pratap Vardhan
Rafal Skolasinski
Richard Lewis
Rinoc Johnson
Rob Levy
Robert Gieseke
Safia Abdalla
Samuel Denny
Saumitra Shahapure
Sebastian Pölsterl
Sebastian Rubbert
Sheppard, Kevin
Sinhrks
Siu Kwan Lam
Skipper Seabold
Spencer Carrucciu
Stephan Hoyer
Stephen Hoover
Stephen Pascoe
Terry Santegoeds
Thomas Grainger
Tjerk Santegoeds
Tom Augspurger
Vincent Davis
Winterflower
Yaroslav Halchenko
Yuan Tang (Terry)
agijsberts
ajcr
behzad nouri
cel4
cyrusmaher
davidovitch
ganego
jreback
juricast
larvian
maximilianr
msund
rekcahpassyla
robertzk
scls19fr
seth-p
sinhrks
springcoil
terrytangyuan
tzinckgraf
36.9.1 Thanks
Andrew Rosenfeld
Artemy Kolchinsky
Bernard Willers
Christer van der Meeren
Christian Hudon
Constantine Glen Evans
Daniel Julius Lasiman
Evan Wright
Francesco Brundu
Gaëtan de Menten
Jake VanderPlas
James Hiebert
Jeff Reback
Joris Van den Bossche
Justin Lecher
Ka Wo Chen
Kevin Sheppard
Mortada Mehyar
Morton Fox
Robin Wilson
Thomas Grainger
Tom Ajamian
Tom Augspurger
Yoshiki Vázquez Baeza
Younggun Kim
austinc
behzad nouri
jreback
lexual
rekcahpassyla
scls19fr
sinhrks
36.10.1 Thanks
Alfonso MHC
Andy Hayden
Artemy Kolchinsky
Chris Gilmer
Chris Grinolds
Dan Birken
David BROCHART
David Hirschfeld
David Stephens
Dr. Leo
Evan Wright
Frans van Dunné
Hatem Nassrat
Henning Sperr
Hugo Herter
Jan Schulz
Jeff Blackburne
Jeff Reback
Jim Crist
Jonas Abernot
Joris Van den Bossche
Kerby Shedden
Leo Razoumov
Manuel Riel
Mortada Mehyar
Nick Burns
Nick Eubank
Olivier Grisel
Phillip Cloud
Pietro Battiston
Roy Hyunjin Han
Sam Zhang
Scott Sanderson
Stephan Hoyer
Tiago Antao
Tom Ajamian
Tom Augspurger
Tomaz Berisa
Vikram Shirgur
Vladimir Filimonov
William Hogman
Yasin A
Younggun Kim
behzad nouri
dsm054
floydsoft
flying-sheep
gfr
jnmclarty
jreback
ksanghai
lucas
mschmohl
ptype
rockg
scls19fr
sinhrks
36.11.1 Thanks
Aaron Toth
Alan Du
Alessandro Amici
Artemy Kolchinsky
Ashwini Chaudhary
Ben Schiller
Bill Letson
Brandon Bradley
Chau Hoang
Chris Reynolds
Chris Whelan
Christer van der Meeren
David Cottrell
David Stephens
Ehsan Azarnasab
Garrett-R
Guillaume Gay
Jake Torcasso
Jason Sexauer
Jeff Reback
John McNamara
Joris Van den Bossche
Joschka zur Jacobsmühlen
Juarez Bochi
Junya Hayashi
K.-Michael Aye
Kerby Shedden
Kevin Sheppard
Kieran O'Mahony
Kodi Arfer
Matti Airas
Min RK
Mortada Mehyar
Robert
Scott E Lasley
Scott Lasley
Sergio Pascual
Skipper Seabold
Stephan Hoyer
Thomas Grainger
Tom Augspurger
TomAugspurger
Vladimir Filimonov
Vyomkesh Tripathi
Will Holmgren
Yulong Yang
behzad nouri
bertrandhaut
bjonen
cel4
clham
hsperr
ischwabacher
jnmclarty
josham
jreback
omtinez
roch
sinhrks
unutbu
36.12.1 Thanks
Aaron Staple
Angelos Evripiotis
Artemy Kolchinsky
Benoit Pointet
Brian Jacobowski
Charalampos Papaloizou
Chris Warth
David Stephens
Fabio Zanini
Francesc Via
Henry Kleynhans
Jake VanderPlas
Jan Schulz
Jeff Reback
Jeff Tratner
Joris Van den Bossche
Kevin Sheppard
Matt Suggit
Matthew Brett
Phillip Cloud
Rupert Thompson
Scott E Lasley
Stephan Hoyer
Stephen Simmons
Sylvain Corlay
Thomas Grainger
Tiago Antao
Trent Hauck
Victor Chaves
Victor Salgado
Vikram Bhandoh
WANG Aiyong
Will Holmgren
behzad nouri
broessli
charalampos papaloizou
immerrr
jnmclarty
jreback
mgilbert
onesandzeroes
peadarcoyle
rockg
seth-p
sinhrks
unutbu
wavedatalab
Åsmund Hjulstad
36.13.1 Thanks
Aaron Staple
Andrew Rosenfeld
Anton I. Sipos
Artemy Kolchinsky
Bill Letson
Dave Hughes
David Stephens
Guillaume Horel
Jeff Reback
Joris Van den Bossche
Kevin Sheppard
Nick Stahl
Sanghee Kim
Stephan Hoyer
TomAugspurger
WANG Aiyong
behzad nouri
immerrr
jnmclarty
jreback
pallav-fdsi
unutbu
36.14.1 Thanks
Aaron Schumacher
Adam Greenhall
Andy Hayden
Anthony O'Brien
Artemy Kolchinsky
behzad nouri
Benedikt Sauer
benjamin
Benjamin Thyreau
Ben Schiller
bjonen
BorisVerk
Chris Reynolds
Chris Stoafer
Dav Clark
dlovell
DSM
dsm054
FragLegs
German Gomez-Herrero
Hsiaoming Yang
Huan Li
hunterowens
Hyungtae Kim
immerrr
Isaac Slavitt
ischwabacher
Jacob Schaer
Jacob Wasserman
Jan Schulz
Jeff Tratner
Jesse Farnham
jmorris0x0
jnmclarty
Joe Bradish
Joerg Rittinger
John W. O'Brien
Joris Van den Bossche
jreback
Kevin Sheppard
klonuo
Kyle Meyer
lexual
Max Chang
mcjcode
Michael Mueller
Michael W Schatzow
Mike Kelly
Mortada Mehyar
mtrbean
Nathan Sanders
Nathan Typanski
onesandzeroes
Paul Masurel
Phillip Cloud
Pietro Battiston
RenzoBertocchi
rockg
Ross Petchler
seth-p
Shahul Hameed
Shashank Agarwal
sinhrks
someben
stahlous
stas-sl
Stephan Hoyer
thatneat
tom-alcorn
TomAugspurger
Tom Augspurger
Tony Lorenzo
unknown
unutbu
Wes Turner
Wilfred Hughes
Yevgeniy Grechka
Yoshiki Vázquez Baeza
zachcp
36.15.1 Thanks
Andrew Rosenfeld
Andy Hayden
Benjamin Adams
Benjamin M. Gross
Brian Quistorff
Brian Wignall
bwignall
clham
Daniel Waeber
David Bew
David Stephens
DSM
dsm054
helger
immerrr
Jacob Schaer
jaimefrio
Jan Schulz
John David Reaver
John W. O'Brien
Joris Van den Bossche
jreback
Julien Danjou
Kevin Sheppard
K.-Michael Aye
Kyle Meyer
lexual
Matthew Brett
Matt Wittmann
Michael Mueller
Mortada Mehyar
onesandzeroes
Phillip Cloud
Rob Levy
rockg
sanguineturtle
Schaer, Jacob C
seth-p
sinhrks
Stephan Hoyer
Thomas Kluyver
Todd Jennings
TomAugspurger
unknown
yelite
36.16.1 Thanks
Acanthostega
Adam Marcus
agijsberts
akittredge
Alex Gaudio
Alex Rothberg
AllenDowney
Andrew Rosenfeld
Andy Hayden
ankostis
anomrake
Antoine Mazières
anton-d
bashtage
Benedikt Sauer
benjamin
Brad Buran
bwignall
cgohlke
chebee7i
Christopher Whelan
Clark Fitzgerald
clham
Dale Jung
Dan Allan
Dan Birken
danielballan
Daniel Waeber
David Jung
David Stephens
Douglas McNeil
DSM
Garrett Drapala
Gouthaman Balaraman
Guillaume Poulin
hshimizu77
hugo
immerrr
ischwabacher
Jacob Howard
Jacob Schaer
jaimefrio
Jason Sexauer
Jeff Reback
Jeffrey Starr
Jeff Tratner
John David Reaver
John McNamara
John W. O'Brien
Jonathan Chambers
Joris Van den Bossche
jreback
jsexauer
Julia Evans
Júlio
Katie Atkinson
kdiether
Kelsey Jordahl
Kevin Sheppard
K.-Michael Aye
Matthias Kuhn
Matt Wittmann
Max Grender-Jones
Michael E. Gruen
michaelws
mikebailey
Mike Kelly
Nipun Batra
Noah Spies
ojdo
onesandzeroes
Patrick O'Keeffe
phaebz
Phillip Cloud
Pietro Battiston
PKEuS
Randy Carnevale
ribonoous
Robert Gibboni
rockg
sinhrks
Skipper Seabold
SplashDance
Stephan Hoyer
Tim Cera
Tobias Brandt
Todd Jennings
TomAugspurger
Tom Augspurger
unutbu
westurner
Yaroslav Halchenko
y-p
zach powers
Series.sort will raise a ValueError (rather than a TypeError) on sorting an object that is a view of
another (GH5856, GH5853)
Raise/Warn SettingWithCopyError (according to the option chained_assignment) in more cases when detecting chained assignment (GH5938, GH6025)
DataFrame.head(0) returns self instead of empty frame (GH5846)
autocorrelation_plot now accepts **kwargs. (GH5623)
convert_objects now accepts a convert_timedeltas='coerce' argument to allow forced dtype conversion of timedeltas (GH5458, GH5689)
Add -NaN and -nan to the default set of NA values (GH5952). See NA Values.
NDFrame now has an equals method. (GH5283)
DataFrame.apply will use the reduce argument to determine whether a Series or a DataFrame
should be returned when the DataFrame is empty (GH6007).
add ability to recognize %p format code (am/pm) to date parsers when the specific format is supplied (GH5361)
Fix performance regression in JSON IO (GH5765)
Fixed a performance regression in Index construction from Series (GH6150)
plot(kind='kde') now accepts the optional parameters bw_method and ind, passed to scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set the bandwidth, and to gkde.evaluate() to specify the indices at which it is evaluated, respectively. See scipy docs. (GH4298)
Added isin method to DataFrame (GH4211)
df.to_clipboard() learned a new excel keyword that lets you paste df data directly into excel (enabled
by default). (GH5070).
Clipboard functionality now works with PySide (GH4282)
New extract string method returns regex matches more conveniently (GH4685)
Auto-detect field widths in read_fwf when unspecified (GH4488)
to_csv() now outputs datetime objects according to a specified format string via the date_format key-
word (GH4313)
Added LastWeekOfMonth DateOffset (GH4637)
Added cumcount groupby method (GH4646)
Added FY5253, and FY5253Quarter DateOffsets (GH4511)
Added mode() method to Series and DataFrame to get the statistical mode(s) of a column/series.
(GH5367)
The new eval() function implements expression evaluation using numexpr behind the scenes. This results
in large speedups for complicated expressions involving large DataFrames/Series.
DataFrame has a new eval() that evaluates an expression in the context of the DataFrame; allows inline
expression assignment
A query() method has been added that allows you to select elements of a DataFrame using a natural query syntax nearly identical to Python syntax (see the sketch after this list).
pd.eval and friends now evaluate operations involving datetime64 objects in Python space because
numexpr cannot handle NaT values (GH4897).
Add msgpack support via pd.read_msgpack() and pd.to_msgpack() / df.to_msgpack() for se-
rialization of arbitrary pandas (and python objects) in a lightweight portable binary format (GH686, GH5506)
Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
Added pandas.io.gbq for reading from (and writing to) Google BigQuery into a DataFrame. (GH4140)
read_html now raises a URLError instead of catching and raising a ValueError (GH4303, GH4305)
read_excel now supports an integer in its sheetname argument giving the index of the sheet to read in
(GH4301).
get_dummies works with NaN (GH4446)
Added a test for read_clipboard() and to_clipboard() (GH4282)
Added bins argument to value_counts (GH3945), also sort and ascending, now available in Series method
as well as top-level function.
Text parser now treats anything that reads like inf (inf, Inf, -Inf, iNf, etc.) as infinity (GH4220, GH4219), affecting read_table, read_csv, etc.
Added a more informative error message when plot arguments contain overlapping color and style arguments
(GH4402)
Significant table writing performance improvements in HDFStore
JSON date serialization now performed in low-level C code.
JSON support for encoding datetime.time
Expanded JSON docs, more info about orient options and the use of the numpy param when decoding.
Add drop_level argument to xs (GH4180)
Can now resample a DataFrame with ohlc (GH2320)
Index.copy() and MultiIndex.copy() now accept keyword arguments to change attributes (i.e.,
names, levels, labels) (GH4039)
Add rename and set_names methods to Index as well as set_names, set_levels, set_labels to
MultiIndex. (GH4039) with improved validation for all (GH4039, GH4794)
A Series of dtype timedelta64[ns] can now be divided/multiplied by an integer series (GH4521)
A Series of dtype timedelta64[ns] can now be divided by another timedelta64[ns] object to yield a
float64 dtyped Series. This is frequency conversion; astyping is also supported.
Timedelta64 supports fillna/ffill/bfill with an integer interpreted as seconds, or a timedelta (GH3371)
Box numeric ops on timedelta Series (GH4984)
Datetime64 supports ffill/bfill
Performance improvements with __getitem__ on DataFrames when the key is a column
Support for using a DatetimeIndex/PeriodIndex directly in a datelike calculation, e.g. s - s.index (GH4629)
Better/cleaned up exceptions in core/common, io/excel and core/format (GH4721, GH3954), as well as cleaned
up test cases in tests/test_frame, tests/test_multilevel (GH4732).
Performance improvement of timeseries plotting with PeriodIndex and added test to vbench (GH4705 and
GH4722)
Add axis and level keywords to where, so that the other argument can now be an alignable pandas
object.
to_datetime with a format of %Y%m%d now parses much faster
It's now easier to hook new Excel writers into pandas (just subclass ExcelWriter and register your engine).
You can specify an engine in to_excel or in ExcelWriter. You can also specify which writers you want
to use by default with config options io.excel.xlsx.writer and io.excel.xls.writer. (GH4745,
GH4750)
Panel.to_excel() now accepts keyword arguments that will be passed to its DataFrame's to_excel() methods. (GH4750)
Added XlsxWriter as an optional ExcelWriter engine. This is about 5x faster than the default openpyxl xlsx
writer and is equivalent in speed to the xlwt xls writer module. (GH4542)
allow DataFrame constructor to accept more list-like objects, e.g. list of collections.Sequence and
array.Array objects (GH3783, GH4297, GH4851), thanks @lgautier
DataFrame constructor now accepts a numpy masked record array (GH3478), thanks @jnothman
__getitem__ with tuple key (e.g., [:, 2]) on Series without MultiIndex raises ValueError
(GH4759, GH4837)
read_json now raises a (more informative) ValueError when the dict contains a bad key and
orient='split' (GH4730, GH4838)
read_stata now accepts Stata 13 format (GH4291)
ExcelWriter and ExcelFile can be used as contextmanagers. (GH3441, GH4933)
pandas is now tested with two different versions of statsmodels (0.4.3 and 0.5.0) (GH4981).
Better string representations of MultiIndex (including ability to roundtrip via repr). (GH3347, GH4935)
Both ExcelFile and read_excel now accept an xlrd.Book for the io (formerly path_or_buf) argument; this requires engine to be set. (GH4961).
concat now gives a more informative error message when passed objects that cannot be concatenated
(GH4608).
Add halflife option to exponentially weighted moving functions (PR GH4998)
to_dict now takes records as a possible outtype. Returns an array of column-keyed dictionaries. (GH4936)
tz_localize can infer a fall daylight savings transition based on the structure of unlocalized data (GH4230)
DatetimeIndex is now in the API documentation
Improve support for converting R datasets to pandas objects (more informative index for timeseries and numeric,
support for factors, dist, and high-dimensional arrays).
read_html() now supports the parse_dates, tupleize_cols and thousands parameters
(GH4770).
json_normalize() is a new method to allow you to create a flat table from semi-structured JSON data. See
the docs (GH1067)
DataFrame.from_records() will now accept generators (GH4910)
DataFrame.interpolate() and Series.interpolate() have been expanded to include interpola-
tion methods from scipy. (GH4434, GH1892)
Series now supports a to_frame method to convert it to a single-column DataFrame (GH5164)
DatetimeIndex (and date_range) can now be constructed in a left- or right-open fashion using the closed
parameter (GH4579)
Python csv parser now supports usecols (GH4335)
Added support for Google Analytics v3 API segment IDs that also supports v2 IDs. (GH5271)
NDFrame.drop() now accepts names as well as integers for the axis argument. (GH5354)
Added short docstrings to a few methods that were missing them + fixed the docstrings for Panel flex methods.
(GH5336)
NDFrame.drop(), NDFrame.dropna(), and .drop_duplicates() all accept inplace as a key-
word argument; however, this only means that the wrapper is updated inplace, a copy is still made internally.
(GH1960, GH5247, GH5628, and related GH2325 [still not closed])
Fixed bug in tools.plotting.andrews_curves so that lines are drawn grouped by color as expected.
read_excel() now tries to convert integral floats (like 1.0) to int by default. (GH5394)
Excel writers now have a default option merge_cells in to_excel() to merge cells in MultiIndex and Hierarchical Rows. Note: using this option it is no longer possible to round trip Excel files with merged MultiIndex and Hierarchical Rows. Set merge_cells to False to restore the previous behaviour. (GH5254)
The FRED DataReader now accepts multiple series (GH3413)
StataWriter adjusts variable names to Stata's limitations (GH5709)
DataFrame.reindex() and forward/backward filling now raises ValueError if either index is not mono-
tonic (GH4483, GH4484).
pandas now is Python 2/3 compatible without the need for 2to3 thanks to @jtratner. As a result, pandas now uses iterators more extensively. This also led to the introduction of substantive parts of Benjamin Peterson's six library into compat. (GH4384, GH4375, GH4372)
pandas.util.compat and pandas.util.py3compat have been merged into pandas.compat.
pandas.compat now includes many functions allowing 2/3 compatibility. It contains both list and itera-
tor versions of range, filter, map and zip, plus other necessary elements for Python 3 compatibility. lmap,
lzip, lrange and lfilter all produce lists instead of iterators, for compatibility with numpy, subscripting
and pandas constructors.(GH4384, GH4375, GH4372)
deprecated iterkv, which will be removed in a future release (it was just an alias of iteritems used to get around 2to3's changes). (GH4384, GH4375, GH4372)
Series.get with negative indexers now returns the same as [] (GH4390)
allow ix/loc for Series/DataFrame/Panel to set on any axis even when the single-key is not currently contained
in the index for that axis (GH2578, GH5226, GH5632, GH5720, GH5744, GH5756)
Default export for to_clipboard is now csv with a sep of '\t' for compat (GH3368)
at now will enlarge the object inplace (and return the same) (GH2578)
DataFrame.plot will scatter plot x versus y by passing kind='scatter' (GH2215)
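As a sketch of the eval()/query() additions noted above (hypothetical frame; both rely on numexpr when available):
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
>>> df.eval('a + b')
0    5
1    7
2    9
dtype: int64
>>> df.query('a > 1 and b < 6')
   a  b
1  2  5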
HDFStore
append_to_multiple automatically synchronizes writing rows to multiple tables and adds a
dropna kwarg (GH4698)
handle a passed Series in table format (GH4330)
added an is_open property to indicate if the underlying file handle is_open; a closed store will now
report CLOSED when viewing the store (rather than raising an error) (GH4409)
a close of a HDFStore now will close that instance of the HDFStore but will only close the actual file
if the ref count (by PyTables) w.r.t. all of the open handles are 0. Essentially you have a local instance
of HDFStore referenced by a variable. Once you close it, it will report closed. Other references (to the
same file) will continue to operate until they themselves are closed. Performing an action on a closed file
will raise ClosedFileError
removed the _quiet attribute, replaced by a DuplicateWarning if retrieving duplicate rows from a table (GH4367)
removed the warn argument from open. Instead a PossibleDataLossError exception will be
raised if you try to use mode='w' with an OPEN file handle (GH4367)
allow a passed locations array or mask as a where condition (GH4467)
add the keyword dropna=True to append to change whether ALL nan rows are not written to the store (default is True, ALL nan rows are NOT written), also settable via the option io.hdf.dropna_table (GH4625)
the format keyword now replaces the table keyword; allowed values are fixed(f)|table(t)
the Storer format has been renamed to Fixed
a column multi-index will be recreated properly (GH4710); raise on trying to use a multi-index with
data_columns on the same axis
select_as_coordinates will now return an Int64Index of the resultant selection set
support timedelta64[ns] as a serialization type (GH3577)
store datetime.date objects as ordinals rather than timetuples to avoid timezone issues (GH2852), thanks @tavistmorph and @numpand
numexpr 2.2.2 fixes incompatibility in PyTables 2.4 (GH4908)
flush now accepts an fsync parameter, which defaults to False (GH5364)
unicode indices not supported on table formats (GH5386)
pass thru store creation arguments; can be used to support in-memory stores
JSON
added date_unit parameter to specify resolution of timestamps. Options are seconds, milliseconds,
microseconds and nanoseconds. (GH4362, GH4498).
added default_handler parameter to allow a callable to be passed which will be responsible for handling otherwise unserializable objects. (GH5138)
Index and MultiIndex changes (GH4039):
Setting levels and labels directly on MultiIndex is now deprecated. Instead, you can use the set_levels() and set_labels() methods (see the sketch after this list).
levels, labels and names properties no longer return lists, but instead return containers that do not
allow setting of items (mostly immutable)
levels, labels and names are validated upon setting and are either copied or shallow-copied.
inplace setting of levels or labels now correctly invalidates the cached properties. (GH5238).
__deepcopy__ now returns a shallow copy (currently: a view) of the data - allowing metadata changes.
MultiIndex.astype() now only allows np.object_-like dtypes and now returns a
MultiIndex rather than an Index. (GH4039)
Added is_ method to Index that allows fast equality comparison of views (similar to np.may_share_memory but no false positives, and changes on levels and labels setting on MultiIndex). (GH4859, GH4909)
Aliased __iadd__ to __add__. (GH4996)
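For illustration, a sketch of the setter methods that replace direct assignment:
>>> import pandas as pd
>>> mi = pd.MultiIndex.from_arrays([[1, 2], ['a', 'b']])
>>> mi = mi.set_names(['x', 'y'])
>>> mi = mi.set_levels([[10, 20], ['c', 'd']])
>>> mi.names
FrozenList(['x', 'y'])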
Infer and downcast dtype if downcast='infer' is passed to fillna/ffill/bfill (GH4604)
__nonzero__ for all NDFrame objects will now raise a ValueError; this reverts to the (GH1073, GH4633) behavior. Added a .bool() method to NDFrame objects to facilitate evaluating single-element boolean Series
All division with NDFrame-likes is now true division, regardless of the future import. You can use // and floordiv to do integer division.
In 0.13.0 there is a major refactor primarily to subclass Series from NDFrame, which is the base class currently
for DataFrame and Panel, to unify methods and behaviors. Series formerly subclassed directly from ndarray.
(GH4080, GH3862, GH816) See Internal Refactoring
Refactor of series.py/frame.py/panel.py to move common code to generic.py
added _setup_axes to created generic NDFrame structures
moved methods
from_axes, _wrap_array, axes, ix, loc, iloc, shape, empty, swapaxes, transpose,
pop
__iter__, keys, __contains__, __len__, __neg__, __invert__
convert_objects, as_blocks, as_matrix, values
__getstate__, __setstate__ (compat remains in frame/panel)
__getattr__, __setattr__
_indexed_same, reindex_like, align, where, mask
fillna, replace (Series replace is now consistent with DataFrame)
filter (also added axis argument to selectively filter on a different axis)
reindex, reindex_axis, take
truncate (moved to become part of NDFrame)
HDFStore
raising an invalid TypeError rather than ValueError when appending with a different block ordering
(GH4096)
read_hdf was not respecting a passed mode (GH4504)
appending a 0-len table will work correctly (GH4273)
to_hdf was raising when passing both arguments append and table (GH4584)
reading from a store with duplicate columns across dtypes would raise (GH4767)
Fixed a bug where ValueError wasn't correctly raised when column names weren't strings (GH4956)
A zero length series written in Fixed format not deserializing properly. (GH4708)
Fixed decoding perf issue on pyt3 (GH5441)
Validate levels in a multi-index before storing (GH5527)
Correctly handle data_columns with a Panel (GH5717)
Fixed bug in tslib.tz_convert(vals, tz1, tz2): it could raise IndexError exception while trying to access trans[pos
+ 1] (GH4496)
The by argument now works correctly with the layout argument (GH4102, GH4014) in *.hist plotting
methods
Fixed bug in PeriodIndex.map where using str would return the str representation of the index (GH4136)
Fixed bug in DataFrame.set_values which was causing name attributes to be lost when expanding the
index. (GH3742, GH4039)
Fixed issue where individual names, levels and labels could be set on MultiIndex without validation
(GH3714, GH4039)
Fixed (GH3334) in pivot_table: margins did not compute if values is the index.
Fix bug in having a rhs of np.timedelta64 or pd.offsets.DateOffset when operating with datetimes
(GH4532)
Fix arithmetic with series/datetimeindex and np.timedelta64 not working the same (GH4134) and buggy
timedelta in numpy 1.6 (GH4135)
Fix bug in pd.read_clipboard on Windows with PY3 (GH4561); it was not decoding properly
tslib.get_period_field() and tslib.get_period_field_arr() now raise if code argument
out of range (GH4519, GH4520)
Fix boolean indexing on an empty Series losing index names (GH4235); infer_dtype now works with empty arrays.
Fix reindexing with multiple axes; if an axes match was not replacing the current axes, it could lead to a
lazy frequency-inference issue (GH3317)
Fixed issue where DataFrame.apply was reraising exceptions incorrectly (causing the original stack trace
to be truncated).
Fix selection with ix/loc and non_unique selectors (GH4619)
Fix assignment with iloc/loc involving a dtype change in an existing column (GH4312, GH5702); the internal
setitem_with_indexer in core/indexing now uses Block.setitem
Fixed bug where the thousands separator was not handled correctly for floating point numbers in csv_import
(GH4322)
Fix an issue with CacheableOffset not properly being used by many DateOffset; this prevented the DateOffset
from being cached (GH4609)
Fix boolean comparison with a DataFrame on the lhs, and a list/tuple on the rhs (GH4576)
Fix error/dtype conversion with setitem of None on Series/DataFrame (GH4667)
Fix decoding based on a passed in non-default encoding in pd.read_stata (GH4626)
Fix DataFrame.from_records with a plain-vanilla ndarray. (GH4727)
Fix some inconsistencies with Index.rename and MultiIndex.rename, etc. (GH4718, GH4628)
Bug in using iloc/loc with a cross-sectional and duplicate indices (GH4726)
Bug with using QUOTE_NONE with to_csv causing Exception. (GH4328)
Bug with Series indexing not raising an error when the right-hand-side has an incorrect length (GH2702)
Bug in multi-indexing with a partial string selection as one part of a MultiIndex (GH4758)
Reindexing on the index with a non-unique index will now raise ValueError (GH4746)
Bug in setting with loc/ix a single indexer with a multi-index axis and a numpy array, related to (GH3777)
Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (GH4771, GH4975)
Bug in iloc with a slice index failing (GH4771)
Incorrect error message with no colspecs or width in read_fwf. (GH4774)
Fix bugs in indexing in a Series with a duplicate index (GH4548, GH4550)
Fixed bug with reading compressed files with read_fwf in Python 3. (GH3963)
Fixed an issue with a duplicate index and assignment with a dtype change (GH4686)
Fixed bug with reading compressed files in as bytes rather than str in Python 3. Simplifies bytes-producing
file-handling in Python 3 (GH3963, GH4785).
Fixed an issue related to ticklocs/ticklabels with log scale bar plots across different versions of matplotlib
(GH4789)
Suppressed DeprecationWarning associated with internal calls issued by repr() (GH4391)
Fixed an issue with a duplicate index and duplicate selector with .loc (GH4825)
Fixed an issue with DataFrame.sort_index where, when sorting by a single column and passing a list for
ascending, the argument for ascending was being interpreted as True (GH4839, GH4846)
Fixed Panel.tshift not working. Added freq support to Panel.shift (GH4853)
Fix an issue in TextFileReader with the Python engine (i.e. PythonParser) with thousands != ',' (GH4596)
Bug in getitem with a duplicate index when using where (GH4879)
Fix type inference code coercing a float column into datetime (GH4601)
Fixed _ensure_numeric not checking for complex numbers (GH4902)
Fixed a bug in Series.hist where two figures were being created when the by argument was passed
(GH4112, GH4113).
Fixed a bug in convert_objects for > 2 ndims (GH4937)
Fixed a bug in DataFrame/Panel cache insertion and subsequent indexing (GH4939, GH5424)
Fixed string methods for FrozenNDArray and FrozenList (GH4929)
Fixed a bug with setting invalid or out-of-range values in indexing enlargement scenarios (GH4940)
Tests for fillna on empty Series (GH4346), thanks @immerrr
Fixed copy() to shallow copy axes/indices as well and thereby keep separate metadata. (GH4202, GH4830)
Fixed skiprows option in Python parser for read_csv (GH4382)
Fixed bug preventing cut from working with np.inf levels without explicitly passing labels (GH3415)
Fixed wrong check for overlapping in DatetimeIndex.union (GH4564)
Fixed conflict between thousands separator and date parser in csv_parser (GH4678)
Fix appending when dtypes are not the same (error showing mixing float/np.datetime64) (GH4993)
Fix repr for DateOffset. No longer show duplicate entries in kwds. Removed unused offset fields. (GH4638)
Fixed wrong index name during read_csv if using usecols. Applies to c parser only. (GH4201)
Timestamp objects can now appear on the left-hand side of a comparison operation with a Series or
DataFrame object (GH4982).
Fix a bug when indexing with np.nan via iloc/loc (GH5016)
Fixed a bug where low memory c parser could create different types in different chunks of the same file. Now
coerces to numerical type or raises warning. (GH3866)
Fix a bug where reshaping a Series to its own shape raised TypeError (GH4554) and other reshaping
issues.
Bug in setting with ix/loc and a mixed int/string index (GH4544)
C and Python parsers can now handle the more common multi-index column format which doesn't have a row
for index names (GH4702)
Bug when trying to use an out-of-bounds date as an object dtype (GH5312)
Bug when trying to display an embedded PandasObject (GH5324)
Allow operations on Timestamps to return a datetime if the result is out-of-bounds (GH5312)
Fix return value/type signature of initObjToJSON() to be compatible with numpy's import_array()
(GH5334, GH5326)
Bug when renaming then set_index on a DataFrame (GH5344)
Test suite no longer leaves around temporary files when testing graphics. (GH5347) (thanks for catching this
@yarikoptic!)
Fixed html tests on win32. (GH4580)
Make sure that head/tail are iloc-based (GH5370)
Fixed bug for PeriodIndex string representation if there are 1 or 2 elements. (GH5372)
The GroupBy methods transform and filter can be used on Series and DataFrames that have repeated
(non-unique) indices. (GH4620)
Fix empty series not printing name in repr (GH4651)
Make tests create temp files in temp directory by default. (GH5419)
pd.to_timedelta of a scalar returns a scalar (GH5410)
pd.to_timedelta accepts NaN and NaT, returning NaT instead of raising (GH5437)
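For example (inputs are illustrative):

import numpy as np
import pandas as pd

pd.to_timedelta('1 days')   # a Timedelta scalar, not a 1-element index
pd.to_timedelta(np.nan)     # NaT, rather than raising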
performance improvements in isnull on larger size pandas objects
Fixed various setitem with 1d ndarray that does not have a matching length to the indexer (GH5508)
Bug in getitem with a multi-index and iloc (GH5528)
Bug in delitem on a Series (GH5542)
Bug fix in apply when using custom function and objects are not mutated (GH5545)
Bug in selecting from a non-unique index with loc (GH5553)
Bug in groupby returning inconsistent types when the user function returns None (GH5592)
Work around regression in numpy 1.7.0 which erroneously raises IndexError from ndarray.item (GH5666)
Bug in repeated indexing of object with resultant non-unique index (GH5678)
Bug in fillna with Series and a passed series/dict (GH5703)
Bug in groupby transform with a datetime-like grouper (GH5712)
Bug in multi-index selection in PY3 when using certain keys (GH5725)
Row-wise concat of differing dtypes failing in certain cases (GH5754)
pd.read_html() can now parse HTML strings, files or urls and returns a list of DataFrames, courtesy of
@cpcloud. (GH3477, GH3605, GH3606)
Support for reading Amazon S3 files. (GH3504)
Added module for reading and writing JSON strings/files: pandas.io.json includes a to_json DataFrame/Series
method and a read_json top-level reader; addresses various issues (GH1226, GH3804, GH3876, GH3867, GH1305)
Added module for reading and writing Stata files: pandas.io.stata (GH1512) includes to_stata DataFrame
method, and a read_stata top-level reader
Added support for writing multi-index columns in to_csv and reading them back in read_csv. The header option
in read_csv now accepts a list of the rows from which to read the index. Added the option tupleize_cols
to provide compatibility for the pre-0.12 behavior of writing and reading multi-index columns via a list of tuples.
The current default is to write lists of tuples and not interpret a list of tuples as a multi-index column. Note: the
default will change in a future release so that multi-index columns are written and read in the new format by default.
(GH3571, GH1651, GH3141)
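A hedged round-trip sketch of the new format (the file name is hypothetical):

import pandas as pd

cols = pd.MultiIndex.from_tuples([('a', 'x'), ('a', 'y')])
df = pd.DataFrame([[1, 2], [3, 4]], columns=cols)
df.to_csv('mi_cols.csv')                                  # writes two header rows
back = pd.read_csv('mi_cols.csv', header=[0, 1], index_col=0)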
Add iterator to Series.str (GH3638)
pd.set_option() now allows N option, value pairs (GH3667).
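For instance (the option names are real; the values are arbitrary, and later pandas versions may restrict passing multiple pairs):

import pandas as pd

# a single call sets two option/value pairs
pd.set_option('display.max_rows', 20, 'display.max_columns', 10)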
Added keyword parameters for different types of scatter_matrix subplots
A filter method on grouped Series or DataFrames returns a subset of the original (GH3680, GH919)
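A minimal sketch with made-up data:

import pandas as pd

df = pd.DataFrame({'key': list('aabbb'), 'val': range(5)})
# keep only rows whose group has more than two members; the result is a
# subset of the original frame, not an aggregate
df.groupby('key').filter(lambda g: len(g) > 2)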
Access to historical Google Finance data in pandas.io.data (GH3814)
DataFrame plotting methods can sample column colors from a Matplotlib colormap via the colormap key-
word. (GH3860)
Fixed various issues with internal pprinting code, the repr() for various objects including TimeStamp and Index
now produces valid python code strings and can be used to recreate the object, (GH3038, GH3379, GH3251,
GH3460)
convert_objects now accepts a copy parameter (defaults to True)
HDFStore
will retain index attributes (freq, tz, name) on recreation (GH3499, GH4098)
will warn with an AttributeConflictWarning if you are attempting to append an index with a
different frequency than the existing one, or attempting to append an index with a different name than the
existing one
support datelike columns with a timezone as data_columns (GH2852)
table writing performance improvements.
support python3 (via PyTables 3.0.0) (GH3750)
Add modulo operator to Series, DataFrame
Add date method to DatetimeIndex
Add dropna argument to pivot_table (GH3820)
Simplified the API and added a describe method to Categorical
melt now accepts the optional parameters var_name and value_name to specify custom column names of
the returned DataFrame (GH3649), thanks @hoechenberger. If var_name is not specified and
dataframe.columns.name is not None, then this will be used as the var_name (GH4144). Also supports
MultiIndex columns.
clipboard functions use pyperclip (no dependencies on Windows, alternative dependencies offered for Linux)
(GH3837).
Plotting functions now raise a TypeError before trying to plot anything if the associated objects have a
dtype of object (GH1818, GH3572, GH3911, GH3912), but they will try to convert object arrays to numeric
arrays if possible so that you can still plot, for example, an object array with floats. This happens before any
drawing takes place, which eliminates any spurious plots from showing up.
Added FAQ section on repr display options, to help users customize their setup.
where operations that result in block splitting are much faster (GH3733)
Series and DataFrame hist methods now take a figsize argument (GH3834)
DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877)
Add unit keyword to Timestamp and to_datetime to enable passing of integers or floats that are in
an epoch unit of D, s, ms, us, ns, thanks @mtkini (GH3969) (e.g. unix timestamps or epoch s, with
fractional seconds allowed) (GH3540)
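For example, with an illustrative Unix timestamp:

import pandas as pd

pd.to_datetime(1349720105, unit='s')   # Timestamp('2012-10-08 18:15:05')
pd.Timestamp(1349720105.5, unit='s')   # fractional seconds are allowed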
DataFrame corr method (spearman) is now cythonized.
Improved network test decorator to catch IOError (and therefore URLError as well). Added
with_connectivity_check decorator to allow explicitly checking a website as a proxy for seeing if there
is network connectivity. Plus, new optional_args decorator factory for decorators. (GH3910, GH3914)
read_csv will now throw a more informative error message when a file contains no columns, e.g., all newline
characters
Added layout keyword to DataFrame.hist() for more customizable layout (GH4050)
Timestamp.min and Timestamp.max now represent valid Timestamp instances instead of the default
datetime.min and datetime.max (respectively), thanks @SleepingPills
read_html now raises when no tables are found and BeautifulSoup==4.2.0 is detected (GH4214)
HDFStore
When removing an object, remove(key) raises KeyError if the key is not a valid store object.
raise a TypeError on passing where or columns to select with a Storer; these are invalid parameters
at this time (GH4189)
can now specify an encoding option to append/put to enable alternate encodings (GH3750)
enable support for iterator/chunksize with read_hdf
The repr() for (Multi)Index now obeys display.max_seq_items rather than numpy threshold print options.
(GH3426, GH3466)
Added mangle_dupe_cols option to read_table/csv, allowing users to control legacy behaviour regarding duplicate
columns (A, A.1, A.2 vs A, A) (GH3468). Note: The default value will change in 0.12 to the no-mangle behaviour.
If your code relies on this behaviour, explicitly specify mangle_dupe_cols=True in your calls.
Do not allow astypes on datetime64[ns] except to object, and timedelta64[ns] to object/int
(GH3425)
The behavior of datetime64 dtypes has changed with respect to certain so-called reduction operations
(GH3726). The following operations now raise a TypeError when performed on a Series and return
an empty Series when performed on a DataFrame, similar to performing these operations on, for example,
a DataFrame of slice objects: sum, prod, mean, std, var, skew, kurt, corr, and cov
Do not allow datetimelike/timedeltalike creation except with valid types (e.g. cannot pass datetime64[ms])
(GH3423)
Add squeeze keyword to groupby to allow reduction from DataFrame -> Series if groups are unique.
Regression from 0.10.1; partial revert on (GH2893) with (GH3596)
Raise on iloc when boolean indexing with a label-based indexer mask; e.g. a boolean Series, even with integer
labels, will raise. Since iloc is purely position based, the labels on the Series are not alignable (GH3631)
The raise_on_error option to plotting methods is obviated by GH3572, so it is removed. Plots now always
raise when data cannot be plotted or the object being plotted has a dtype of object.
DataFrame.interpolate() is now deprecated. Please use DataFrame.fillna() and
DataFrame.replace() instead (GH3582, GH3675, GH3676).
the method and axis arguments of DataFrame.replace() are deprecated
DataFrame.replace's infer_types parameter is removed and conversion is now performed by default.
(GH3907)
Deprecated display.height; display.width is now only a formatting option and does not control triggering of summary,
similar to < 0.11.0.
Add the keyword allow_duplicates to DataFrame.insert to allow a duplicate column to be inserted
if True, default is False (same as prior to 0.12) (GH3679)
io API changes
added pandas.io.api for i/o imports
moved Excel support to pandas.io.excel
added top-level pd.read_sql and to_sql DataFrame methods
moved clipboard support to pandas.io.clipboard
replaced top-level and instance methods save and load with top-level read_pickle and the to_pickle
instance method; save and load will give a deprecation warning.
set FutureWarning to require data_source, and to replace year/month with expiry date in pandas.io options. This
is in preparation to add options data from Google (GH3822)
Implement __nonzero__ for NDFrame objects (GH3691, GH3696)
as_matrix with mixed signed and unsigned dtypes will result in 2 x the lcd of the unsigned as an int, maxing
with int64, to avoid precision issues (GH3733)
na_values in a list provided to read_csv/read_excel will match string and numeric versions e.g.
na_values=['99'] will match 99 whether the column ends up being int, float, or string (GH3611)
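A sketch using in-memory CSV data (contents are made up):

import io
import pandas as pd

data = io.StringIO('a,b\n99,99\n1,foo')
# '99' is treated as NA in the numeric column a and the string column b alike
pd.read_csv(data, na_values=['99'])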
read_html now defaults to None when reading, and falls back on bs4 + html5lib when lxml fails to
parse. A list of parsers to try until success is also valid.
more consistency in the to_datetime return types (given string/array of string inputs) (GH3888)
The internal pandas class hierarchy has changed (slightly). The previous PandasObject now is called
PandasContainer and a new PandasObject has become the baseclass for PandasContainer as well
as Index, Categorical, GroupBy, SparseList, and SparseArray (+ their base classes). Currently,
PandasObject provides string methods (from StringMixin). (GH4090, GH4092)
New StringMixin that, given a __unicode__ method, gets python 2 and python 3 compatible string
methods (__str__, __bytes__, and __repr__). Plus string safety throughout. Now employed in many
places throughout the pandas library. (GH4090, GH4092)
Added experimental CustomBusinessDay class to support DateOffsets with custom holiday calendars
and custom weekmasks. (GH2301)
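A hedged sketch (the holiday date and Sun-Thu weekmask are illustrative, not a real calendar):

import pandas as pd
from pandas.tseries.offsets import CustomBusinessDay

bday = CustomBusinessDay(holidays=['2012-05-01'], weekmask='Sun Mon Tue Wed Thu')
# 2012-04-30 is a Monday; the offset skips the holiday and the Fri/Sat weekend
pd.Timestamp('2012-04-30') + bday      # Timestamp('2012-05-02 00:00:00')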
Fixed an esoteric Excel reading bug; xlrd >= 0.9.0 is now required for Excel support. Should provide Python 3
support (for reading), which has been lacking. (GH3164)
Disallow Series constructor called with MultiIndex which caused segfault (GH4187)
Allow unioning of date ranges sharing a timezone (GH3491)
Fix to_csv issue when having a large number of rows and NaT in some columns (GH3437)
.loc was not raising when passed an integer list (GH3449)
Unordered time series selection was misbehaving when using label slicing (GH3448)
Fix sorting in a frame with a list of columns which contains datetime64[ns] dtypes (GH3461)
DataFrames fetched via FRED now handle . as a NaN. (GH3469)
Fix regression in a DataFrame apply with axis=1, objects were not being converted back to base dtypes correctly
(GH3480)
Fix issue when storing uint dtypes in an HDFStore. (GH3493)
Non-unique index support clarified (GH3468)
Addressed handling of dupe columns in df.to_csv new and old (GH3454, GH3457)
Fix assigning a new index to a duplicate index in a DataFrame would fail (GH3468)
Fix construction of a DataFrame with a duplicate index
ref_locs support to allow duplicative indices across dtypes, allows iget support to always find the index
(even across dtypes) (GH2194)
applymap on a DataFrame with a non-unique index now works (removed warning) (GH2786), and fix
(GH3230)
Fix to_csv to handle non-unique columns (GH3495)
Duplicate indexes with getitem will return items in the correct order (GH3455, GH3457) and handle
missing elements like unique indices (GH3561)
Duplicate indexes with an empty DataFrame.from_records will return a correct frame (GH3562)
Concat to produce a non-unique columns when duplicates are across dtypes is fixed (GH3602)
Non-unique indexing with a slice via loc and friends fixed (GH3659)
sql.write_frame failing when writing a single column to sqlite (GH3628), thanks to @stonebig
Fix pivoting with nan in the index (GH3558)
Fix running of bs4 tests when it is not installed (GH3605)
Fix parsing of html table (GH3606)
read_html() now only allows a single backend: html5lib (GH3616)
convert_objects with convert_dates='coerce' was parsing some single-letter strings into today's
date
DataFrame.from_records did not accept empty recarrays (GH3682)
DataFrame.to_csv will succeed with the deprecated option nanRep, thanks @tdsmith
DataFrame.to_html and DataFrame.to_latex now accept a path for their first argument (GH3702)
Fix file tokenization error with r delimiter and quoted fields (GH3453)
Groupby transform with item-by-item not upcasting correctly (GH3740)
Incorrectly read a HDFStore multi-index Frame with a column specification (GH3748)
read_html now correctly skips tests (GH3741)
PandasObjects raise TypeError when trying to hash (GH3882)
Fix incorrect arguments passed to concat that are not list-like (e.g. concat(df1,df2)) (GH3481)
Correctly parse when passed the dtype=str (or other variable-len string dtypes) in read_csv (GH3795)
Fix index name not propagating when using loc/ix (GH3880)
Fix groupby when applying a custom function resulting in a returned DataFrame was not converting dtypes
(GH3911)
Fixed a bug where DataFrame.replace with a compiled regular expression in the to_replace argument
wasn't working (GH3907)
Fixed __truediv__ in Python 2.7 with numexpr installed to actually do true division when dividing two
integer arrays with at least 10000 cells total (GH3764)
Indexing with a string with seconds resolution not selecting from a time index (GH3925)
csv parsers would loop infinitely if iterator=True but no chunksize was specified (GH3967); the Python
parser was failing with chunksize=1
Fix index name not propagating when using shift
Fixed dropna=False being ignored with multi-index stack (GH3997)
Fixed flattening of columns when renaming MultiIndex columns DataFrame (GH4004)
Fix Series.clip for datetime series. NA/NaN threshold values will now throw ValueError (GH3996)
Fixed insertion issue into DataFrame, after rename (GH4032)
Fixed testing issue where too many sockets were open, thus leading to a connection reset issue (GH3982,
GH3985, GH4028, GH4054)
Fixed failing tests in test_yahoo, test_google where symbols were not retrieved but were being accessed
(GH3982, GH3985, GH4028, GH4054)
Series.hist will now take the figure from the current environment if one is not passed
Fixed bug where a 1xN DataFrame would barf on a 1xN mask (GH4071)
Fixed running of tox under python3 where the pickle import was getting rewritten in an incompatible way
(GH4062, GH4063)
Fixed bug where sharex and sharey were not being passed to grouped_hist (GH4089)
Fix bug where HDFStore will fail to append because of a different block ordering on-disk (GH4096)
Better error messages on inserting incompatible columns to a frame (GH4107)
Fixed bug in DataFrame.replace where a nested dict wasn't being iterated over when regex=False
(GH4115)
Fixed bug in convert_objects(convert_numeric=True) where a mixed numeric and object
Series/Frame was not converting properly (GH4119)
Fixed bugs in multi-index selection with column multi-index and duplicates (GH4145, GH4146)
Fixed bug in the parsing of microseconds when using the format argument in to_datetime (GH4152)
Fixed bug in PandasAutoDateLocator where invert_xaxis incorrectly triggered
MilliSecondLocator (GH3990)
Fixed bug in Series.where where broadcasting a single element input vector to the length of the series
resulted in multiplying the value inside the input (GH4192)
Fixed bug in plotting that wasn't raising on invalid colormap for matplotlib 1.1.1 (GH4215)
Fixed the legend displaying in DataFrame.plot(kind='kde') (GH4216)
Fixed bug where Index slices weren't carrying the name attribute (GH4226)
Fixed bug in initializing DatetimeIndex with an array of strings in a certain time zone (GH4229)
Fixed bug where html5lib wasn't being properly skipped (GH4265)
Fixed bug where get_data_famafrench wasn't using the correct file edges (GH4281)
In [2]: p
Out[2]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 4 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2001-01-02 00:00:00 to 2001-01-05 00:00:00
Minor_axis axis: A to D

In [3]: p.reindex(items=['ItemA']).squeeze()
Out[3]:
                   A         B         C         D
2001-01-02  0.469112 -0.282863 -1.509059 -1.135632
2001-01-03  1.212112 -0.173215  0.119209 -1.044236
2001-01-04 -0.861849 -2.104569 -0.494929  1.071804
2001-01-05  0.721555 -0.706771 -1.039575  0.271860

In [4]: p.reindex(items=['ItemA'], minor=['B']).squeeze()
Out[4]:
2001-01-02   -0.282863
2001-01-03   -0.173215
2001-01-04   -2.104569
2001-01-05   -0.706771
Freq: D, Name: B, dtype: float64
In [6]: ts = pd.Series(np.random.rand(len(idx)), index=idx)

In [7]: ts['2001']
Out[7]:
2001-10-31    0.838796
2001-11-30    0.897333
2001-12-31    0.732592
Freq: M, dtype: float64

In [9]: df['2001']
Out[9]:
                   A
2001-10-31  0.838796
2001-11-30  0.897333
2001-12-31  0.732592
added option display.mpl_style providing a sleeker visual style for plots. Based on
https://gist.github.com/huyng/816622 (GH3075).
Improved performance across several core functions by taking memory ordering of arrays into account. Courtesy
of @stephenwlin (GH3130)
Improved performance of groupby transform method (GH2121)
Handle ragged CSV files missing trailing delimiters in rows with missing fields when also providing explicit
list of column names (so the parser knows how many columns to expect in the result) (GH2981)
On a mixed DataFrame, allow setting with indexers with ndarray/DataFrame on rhs (GH3216)
Treat boolean values as integers (values 1 and 0) for numeric operations. (GH2641)
Add time method to DatetimeIndex (GH3180)
Return NA when using Series.str[...] for values that are not long enough (GH3223)
Display cursor coordinate information in time-series plots (GH1670)
to_html() now accepts an optional escape argument to control reserved HTML character escaping (enabled
by default) and escapes &, in addition to < and >. (GH2919)
Do not automatically upcast numeric specified dtypes to int64 or float64 (GH622 and GH797)
DataFrame construction of lists and scalars, with no dtype present, will result in casting to int64 or float64,
regardless of platform. This is not an apparent change in the API, but is worth noting.
Guarantee that convert_objects() for Series/DataFrame always returns a copy
groupby operations will respect dtypes for numeric float operations (float32/float64); other types will be operated
on, and will try to cast back to the input dtype (e.g. if an int is passed, as long as the output doesn't have nans,
then an int will be returned)
backfill/pad/take/diff/ohlc will now support float32/int16/int8 operations
Block types will upcast as needed in where/masking operations (GH2793)
Series now automatically will try to set the correct dtype based on passed datetimelike objects
(datetime/Timestamp)
timedelta64 are returned in appropriate cases (e.g. Series - Series, when both are datetime64)
mixed datetimes and objects (GH2751) in a constructor will be cast correctly
astype on datetimes to object are now handled (as well as NaT conversions to np.nan)
all timedelta-like objects will be correctly assigned to timedelta64 with mixed NaN and/or NaT
allowed
arguments to DataFrame.clip were inconsistent with numpy and Series clipping (GH2747)
util.testing.assert_frame_equal now checks the column and index names (GH2964)
Constructors will now return a more informative ValueError on failures when invalid shapes are passed
Don't suppress TypeError in GroupBy.agg (GH3238)
Methods return None when inplace=True (GH1893)
HDFStore
added the method select_column to select a single column from a table as a Series.
deprecated the unique method; it can be replicated by select_column(key, column).unique()
min_itemsize parameter will now automatically create data_columns for passed keys
Downcast on pivot if possible (GH3283), adds argument downcast to fillna
Introduced options display.height/width for explicitly specifying terminal height/width in characters.
Deprecated display.line_width, now replaced by display.width. These defaults are in effect for scripts as well, so
unless disabled, previously very wide output will now be output as expand_repr style wrapped output.
Various defaults for options (including display.max_rows) have been revised, after a brief survey concluded they
were wrong for everyone. Now at w=80,h=60.
HTML repr output in IPython qtconsole is once again controlled by the option display.notebook_repr_html, and
on by default.
Fix seg fault on empty data frame when fillna with pad or backfill (GH2778)
Single element ndarrays of datetimelike objects are handled (e.g. np.array(datetime(2001,1,1,0,0))), w/o dtype
being passed
0-dim ndarrays with a passed dtype are handled correctly (e.g. np.array(0.,dtype=float32))
Fix some boolean indexing inconsistencies in Series.__getitem__/__setitem__ (GH2776)
Fix issues with DataFrame and Series constructor with integers that overflow int64 and some mixed-type
lists (GH2845)
HDFStore
Fix weird PyTables error when using too many selectors in a where; also correctly filter on any number of
values in a Term expression (so not using numexpr filtering, but isin filtering)
Internally, change all variables to be private-like (now have leading underscore)
Fixes for query parsing to correctly interpret boolean and != (GH2849, GH2973)
Fixes for pathological case on SparseSeries with 0-len array and compression (GH2931)
Fixes bug with writing rows if part of a block was all-nan (GH3012)
Exceptions are now ValueError or TypeError as needed
A table will now raise if min_itemsize contains fields which are not queryables
Bug showing up in applymap where some object-type columns were converted (GH2909); convert_objects
had an incorrect default
TimeDeltas
Series ops with a Timestamp on the rhs was throwing an exception (GH2898) added tests for Series ops
with datetimes,timedeltas,Timestamps, and datelike Series on both lhs and rhs
Fixed subtle timedelta64 inference issue on py3 & numpy 1.7.0 (GH3094)
Fixed some formatting issues on timedelta when negative
Support null checking on timedelta64, representing (and formatting) with NaT
Support setitem with np.nan value, converts to NaT
Support min/max ops in a DataFrame (abs not working, nor do we error on non-supported ops)
Support idxmin/idxmax/abs/max/min in a Series (GH2989, GH2982)
Bug on in-place putmasking on an integer series that needs to be converted to float (GH2746)
Bug in argsort of datetime64[ns] Series with NaT (GH2967)
Bug in value_counts of datetime64[ns] Series (GH3002)
Fixed printing of NaT in an index
Bug in idxmin/idxmax of datetime64[ns] Series with NaT (GH2982)
Bug in icol, take with negative indices was producing incorrect return values (see GH2922, GH2892);
also check for out-of-bounds indices (GH3029)
Bug in DataFrame column insertion when the column creation fails, existing frame is left in an irrecoverable
state (GH3010)
Bug in DataFrame update, combine_first where non-specified values could cause dtype changes (GH3016,
GH3041)
Bug in groupby with first/last where dtypes could change (GH3041, GH2763)
Formatting of an index that has nan was inconsistent or wrong (would fill from other values), (GH2850)
Unstack of a frame with no nans would always cause dtype upcasting (GH2929)
Fix scalar datetime.datetime parsing bug in read_csv (GH3071)
Fixed slow printing of large DataFrames, due to inefficient dtype reporting (GH2807)
Fixed a segfault when using a function as grouper in groupby (GH3035)
Fix pretty-printing of infinite data structures (closes GH2978)
Fixed exception when plotting timeseries bearing a timezone (closes GH2877)
str.contains ignored na argument (GH2806)
Substitute warning for segfault when grouping with categorical grouper of mismatched length (GH3011)
Fix exception in SparseSeries.density (GH2083)
Fix upsampling bug with closed=left and daily to daily data (GH3020)
Fixed missing tick bars on scatter_matrix plot (GH3063)
Fixed bug in Timestamp(d, tz=foo) when d is date() rather than datetime() (GH2993)
series.plot(kind='bar') now respects the pylab color scheme (GH3115)
Fixed bug in reshape if not passed correct input, now raises TypeError (GH2719)
Fixed a bug where Series ctor did not respect ordering if OrderedDict passed in (GH3282)
Fix NameError issue on RESO_US (GH2787)
Allow selection in an unordered timeseries to work similarly to an ordered timeseries (GH2437).
Fixed .xs when called with axis=1 and a level parameter (GH2903)
Timestamp now supports the class method fromordinal similar to datetimes (GH3042)
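For example (730120 is the proleptic Gregorian ordinal of 2000-01-01):

from datetime import datetime
import pandas as pd

datetime.fromordinal(730120)       # datetime.datetime(2000, 1, 1, 0, 0)
pd.Timestamp.fromordinal(730120)   # Timestamp('2000-01-01 00:00:00')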
Fix issue with indexing a series with a boolean key and specifying a 1-len list on the rhs (GH2745) or a list on
the rhs (GH3235)
Fixed bug in groupby apply when the kernel generates a list of arrays of unequal length (GH1738)
fixed handling of rolling_corr with center=True which could produce corr>1 (GH3155)
Fixed issues where indices can be passed as index/column in addition to 0/1 for the axis parameter
PeriodIndex.tolist now boxes to Period (GH3178)
PeriodIndex.get_loc KeyError now reports Period instead of ordinal (GH3179)
df.to_records bug when handling MultiIndex (GH3189)
Fix Series.__getitem__ segfault when index less than -length (GH3168)
Fix bug when using Timestamp as a date parser (GH2932)
Fix bug creating date range from Timestamp with time zone and passing same time zone (GH2926)
Add comparison operators to Period object (GH2781)
Fix bug when concatenating two Series into a DataFrame when they have the same name (GH2797)
Fix automatic color cycling when plotting consecutive timeseries without color arguments (GH2816)
fixed bug in the pickling of PeriodIndex (GH2891)
Upcast/split blocks when needed in a mixed DataFrame when setitem with an indexer (GH3216)
Invoking df.applymap on a dataframe with dupe cols now raises a ValueError (GH2786)
Apply with invalid returned indices raise correct Exception (GH2808)
Fixed a bug in plotting log-scale bar plots (GH3247)
df.plot() grid on/off now obeys the mpl default style, just like series.plot(). (GH3233)
Fixed a bug in the legend of plotting.andrews_curves() (GH3278)
Produce a series on apply if we only generate a singular series and have a simple index (GH2893)
Fix Python ASCII file parsing when integer falls outside of floating point spacing (GH3258)
fixed pretty-printing of sets (GH3294)
Panel() and Panel.from_dict() now respect ordering when given an OrderedDict (GH3303)
DataFrame where with a datetimelike incorrectly selecting (GH3311)
Ensure index casts work even in Int64Index
Fix set_index segfault when passing MultiIndex (GH3308)
Ensure pickles created in py2 can be read in py3
Insert ellipsis in MultiIndex summary repr (GH3348)
Groupby will handle mutation among an input groups columns (and fallback to non-fast apply) (GH3380)
Eliminated unicode errors on FreeBSD when using MPL GTK backend (GH3360)
Period.strftime should return unicode strings always (GH3363)
Respect passed read_* chunksize in get_chunk function (GH3406)
Restored inplace=True behavior returning self (same object) with deprecation warning until 0.11 (GH1893)
HDFStore
refactored HDFStore to deal with non-table stores as objects, will allow future enhancements
removed keyword compression from put (replaced by keyword complib to be consistent across
library)
warn PerformanceWarning if you are attempting to store types that will be pickled by PyTables
HDFStore
enables storing of multi-index dataframes (closes GH1277)
support data column indexing and selection, via data_columns keyword in append
support write chunking to reduce memory footprint, via chunksize keyword to append
support automagic indexing via index keyword to append
support expectedrows keyword in append to inform PyTables about the expected tablesize
support start and stop keywords in select to limit the row selection space
added get_store context manager to automatically import with pandas
added column filtering via columns keyword in select
added methods append_to_multiple/select_as_multiple/select_as_coordinates to do multiple-table
append/selection
added support for datetime64 in columns
added method unique to select the unique values in an indexable or data column
added method copy to copy an existing store (and possibly upgrade)
show the shape of the data on disk for non-table stores when printing the store
added ability to read PyTables flavor tables (allows compatibility to other HDF5 systems)
Add logx option to DataFrame/Series.plot (GH2327, GH2565)
Support reading gzipped data from file-like object
pivot_table aggfunc can be anything used in GroupBy.aggregate (GH2643)
Implement DataFrame merges in case where set cardinalities might overflow 64-bit integer (GH2690)
Raise exception in C file parser if integer dtype specified and have NA values. (GH2631)
Attempt to parse ISO8601 format dates when parse_dates=True in read_csv for major performance boost in
such cases (GH2698)
Add methods neg and inv to Series
Implement kind option in ExcelFile to indicate whether its an XLS or XLSX file (GH2613)
Documented a fast-path in pd.read_csv when parsing iso8601 datetime strings yielding as much as a 20x
speedup. (GH5993)
Brand new high-performance delimited file parsing engine written in C and Cython. 50% or better performance
in many standard use cases with a fraction as much memory usage. (GH407, GH821)
Many new file parser (read_csv, read_table) features:
Support for on-the-fly gzip or bz2 decompression (compression option)
Ability to get back numpy.recarray instead of DataFrame (as_recarray=True)
dtype option: explicit column dtypes
usecols option: specify list of columns to be read from a file. Good for reading very wide files with many
irrelevant columns (GH1216, GH926, GH2465)
Enhanced unicode decoding support via encoding option
skipinitialspace dialect option
Can specify strings to be recognized as True (true_values) or False (false_values)
High-performance delim_whitespace option for whitespace-delimited files; a preferred alternative to the
\s+ regular expression delimiter
Option to skip bad lines (wrong number of fields) that would otherwise have caused an error in the past
(error_bad_lines and warn_bad_lines options)
Substantially improved performance in the parsing of integers with thousands markers and lines with
comments
Easier handling of European (and other) decimal formats (decimal option) (GH584, GH2466)
Custom line terminators (e.g. lineterminator=~) (GH2457)
Handling of no trailing commas in CSV files (GH2333)
Ability to handle fractional seconds in date_converters (GH2209)
read_csv allow scalar arg to na_values (GH1944)
Explicit column dtype specification in read_* functions (GH1858)
Easier CSV dialect specification (GH1743)
Improve parser performance when handling special characters (GH1204)
Google Analytics API integration with easy oauth2 workflow (GH2283)
Add error handling to Series.str.encode/decode (GH2276)
Add where and mask to Series (GH2337)
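A minimal sketch (values are made up):

import pandas as pd

s = pd.Series([-2, -1, 0, 1, 2])
s.where(s > 0)      # entries failing the condition become NaN
s.where(s > 0, 0)   # ...or take a supplied fill value
s.mask(s > 0)       # mask is the inverse: hides entries where the condition holds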
Grouped histogram via by keyword in Series/DataFrame.hist (GH2186)
Support optional min_periods keyword in corr and cov for both Series and DataFrame (GH2002)
Add duplicated and drop_duplicates functions to Series (GH1923)
Add docs for HDFStore table format
density property in SparseSeries (GH2384)
Add ffill and bfill convenience functions for forward- and backfilling time series data (GH2284)
New option configuration system and functions set_option, get_option, describe_option, and reset_option.
Deprecate set_printoptions and reset_printoptions (GH2393). You can also access options as attributes via
pandas.options.X
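A short sketch of the new API (display.max_rows is a real option; the values are arbitrary):

import pandas as pd

pd.set_option('display.max_rows', 25)
pd.get_option('display.max_rows')        # 25
pd.options.display.max_rows = 50         # equivalent attribute-style access
pd.describe_option('display.max_rows')   # prints the option's documentation
pd.reset_option('display.max_rows')      # restore the default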
Wide DataFrames can be viewed more easily in the console with new expand_frame_repr and line_width con-
figuration options. This is on by default now (GH2436)
Scikits.timeseries-like moving window functions via rolling_window (GH1270)
The default binning/labeling behavior for resample has been changed to closed=left, label=left for daily
and lower frequencies. This had been a large source of confusion for users. See the what's new page for more on
this. (GH2410)
Methods with inplace option now return None instead of the calling (modified) object (GH1893)
The special case DataFrame - TimeSeries doing column-by-column broadcasting has been deprecated. Users
should explicitly do e.g. df.sub(ts, axis=0) instead. This is a legacy hack and can lead to subtle bugs.
inf/-inf are no longer considered as NA by isnull/notnull. To be clear, this is legacy cruft from early pandas. This
behavior can be globally re-enabled using the new option mode.use_inf_as_null (GH2050, GH1919)
pandas.merge will now default to sort=False. For many use cases sorting the join keys is not necessary,
and doing it by default is wasteful
Specify header=0 explicitly to replace existing column names in file in read_* functions.
Default column names for header-less parsed files (yielded by read_csv, etc.) are now the integers 0, 1, .... A
new argument prefix has been added; to get the v0.9.x behavior specify prefix='X' (GH2034). This API
change was made to make the default column names more consistent with the DataFrame constructors default
column names when none are specified.
DataFrame selection using a boolean frame now preserves input shape
If function passed to Series.apply yields a Series, result will be a DataFrame (GH2316)
Values like YES/NO/yes/no will not be considered as boolean by default any longer in the file parsers. This can
be customized using the new true_values and false_values options (GH2360)
obj.fillna() with no arguments is no longer valid; method='pad' is no longer the default option, to be more
explicit about what kind of filling to perform. Add ffill/bfill convenience functions per above (GH2284)
HDFStore.keys() now returns an absolute path-name for each key
to_string() now always returns a unicode string. (GH2224)
File parsers will not handle NA sentinel values arising from passed converter functions
Upsampling period index spans intervals. Example: annual periods upsampled to monthly will span all
months in each year
Period.end_time will yield timestamp at last nanosecond in the interval (GH2124, GH2125, GH1764)
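For example ('A' is the annual frequency alias of this era; newer pandas spells it 'Y'):

import pandas as pd

p = pd.Period('2012', freq='A')
p.start_time   # Timestamp('2012-01-01 00:00:00')
p.end_time     # Timestamp('2012-12-31 23:59:59.999999999'), the last nanosecond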
File parsers no longer coerce to float or bool for columns that have custom converters specified (GH2184)
Change default header names in read_* functions to more Pythonic X0, X1, etc. instead of X.1, X.2. (GH2000)
Deprecated day_of_year API removed from PeriodIndex, use dayofyear (GH1723)
Don't modify the NumPy suppress print option at import time
The internal HDF5 data arrangement for DataFrames has been transposed. Legacy files will still be readable by
HDFStore (GH1834, GH1824)
Legacy cruft removed: pandas.stats.misc.quantileTS
Use ISO8601 format for Period repr: monthly, daily, and on down (GH1776)
Empty DataFrame columns are now created as object dtype. This will prevent a class of TypeErrors that was
occurring in code where the dtype of a column would depend on the presence of data or not (e.g. a SQL query
having results) (GH1783)
Setting parts of DataFrame/Panel using ix now aligns input Series/DataFrame (GH1630)
first and last methods in GroupBy no longer drop non-numeric columns (GH1809)
Resolved inconsistencies in specifying custom NA values in text parser. na_values of type dict no longer
override default NAs unless keep_default_na is set to false explicitly (GH1657)
Enable skipfooter parameter in text parsers as an alias for skip_footer
Perform arithmetic column-by-column in mixed-type DataFrame to avoid type upcasting issues. Caused
downstream DataFrame.diff bug (GH1896)
Fix matplotlib auto-color assignment when no custom spectrum passed. Also respect passed color keyword
argument (GH1711)
Fix resampling logical error with closed=left (GH1726)
Fix critical DatetimeIndex.union bugs (GH1730, GH1719, GH1745, GH1702, GH1753)
Fix critical DatetimeIndex.intersection bug with unanchored offsets (GH1708)
Fix MM-YYYY time series indexing case (GH1672)
Fix case where Categorical group key was not being passed into index in GroupBy result (GH1701)
Handle Ellipsis in Series.__getitem__/__setitem__ (GH1721)
Fix some bugs with handling datetime64 scalars of other units in NumPy 1.6 and 1.7 (GH1717)
Fix performance issue in MultiIndex.format (GH1746)
Fixed GroupBy bugs interacting with DatetimeIndex asof / map methods (GH1677)
Handle factors with NAs in pandas.rpy (GH1615)
Fix statsmodels import in pandas.stats.var (GH1734)
Fix DataFrame repr/info summary with non-unique columns (GH1700)
Fix Series.iget_value for non-unique indexes (GH1694)
Don't lose tzinfo when passing DatetimeIndex as DataFrame column (GH1682)
Fix tz conversion with time zones that haven't had any DST transitions since the first date in the array (GH1673)
Add TypeError when appending HDFStore table w/ wrong index type (GH1881)
Don't raise exception on empty inputs in EW functions (e.g. ewma) (GH1900)
Make asof work correctly with PeriodIndex (GH1883)
Fix extlinks in doc build
Fill boolean DataFrame with NaN when calling shift (GH1814)
Fix setuptools bug causing pip not to Cythonize .pyx files sometimes
Fix negative integer indexing regression in .ix from 0.7.x (GH1888)
Fix error while retrieving timezone and utc offset from subclasses of datetime.tzinfo without .zone and
._utcoffset attributes (GH1922)
Fix DataFrame formatting of small, non-zero FP numbers (GH1911)
Various fixes by upcasting of date -> datetime (GH1395)
Raise better exception when passing multiple functions with the same name, such as lambdas, to
GroupBy.aggregate
Fix DataFrame.apply with axis=1 on a non-unique index (GH1878)
Proper handling of Index subclasses in pandas.unique (GH1759)
Set index names in DataFrame.from_records (GH1744)
Fix time series indexing error with duplicates, under and over hash table size cutoff (GH1821)
Handle list keys in addition to tuples in DataFrame.xs when partial-indexing a hierarchically-indexed DataFrame
(GH1796)
Support multiple column selection in DataFrame.__getitem__ with duplicate columns (GH1943)
Fix time zone localization bug causing improper fields (e.g. hours) in time zones that have not had a UTC
transition in a long time (GH1946)
Fix errors when parsing and working with fixed offset timezones (GH1922, GH1928)
Fix text parser bug when handling UTC datetime objects generated by dateutil (GH1693)
Fix plotting bug when B is the inferred frequency but index actually contains weekends (GH1668, GH1669)
Fix plot styling bugs (GH1666, GH1665, GH1658)
Fix plotting bug with index/columns with unicode (GH1685)
Fix DataFrame constructor bug when passed Series with datetime64 dtype in a dict (GH1680)
Fixed regression in generating DatetimeIndex using timezone aware datetime.datetime (GH1676)
Fix DataFrame bug when printing concatenated DataFrames with duplicated columns (GH1675)
Fixed bug when plotting time series with multiple intraday frequencies (GH1732)
Fix bug in DataFrame.duplicated to enable iterables other than list-types as input argument (GH1773)
Fix resample bug when passed list of lambdas as how argument (GH1808)
Repr fix for MultiIndex level with all NAs (GH1971)
Fix PeriodIndex slicing bug when slice start/end are out-of-bounds (GH1977)
Fix read_table bug when parsing unicode (GH1975)
Fix BlockManager.iget bug when dealing with non-unique MultiIndex as columns (GH1970)
Fix reset_index bug if both drop and level are specified (GH1957)
Work around unsafe NumPy object->int casting with Cython function (GH1987)
Fix datetime64 formatting bug in DataFrame.to_csv (GH1993)
Default start date in pandas.io.data to 1/1/2000 as the docs say (GH2011)
Use moving min/max algorithms from Bottleneck in rolling_min/rolling_max for > 100x speedup. (GH1504,
GH50)
Add Cython group median method for >15x speedup (GH1358)
Drastically improve to_datetime performance on ISO8601 datetime strings (with no time zones) (GH1571)
Improve single-key groupby performance on large data sets, accelerate use of groupby with a Categorical
variable
Add ability to append hierarchical index levels with set_index and to drop single levels with reset_index
(GH1569, GH1577)
Always apply passed functions in resample, even if upsampling (GH1596)
Avoid unnecessary copies in DataFrame constructor with explicit dtype (GH1572)
Cleaner DatetimeIndex string representation with 1 or 2 elements (GH1611)
Improve performance of array-of-Period to PeriodIndex, convert such arrays to PeriodIndex inside Index
(GH1215)
More informative string representation for weekly Period objects (GH1503)
Accelerate 3-axis multi data selection from homogeneous Panel (GH979)
Add adjust option to ewma to disable adjustment factor (GH1584)
Add new matplotlib converters for high frequency time series plotting (GH1599)
Handling of tz-aware datetime.datetime objects in to_datetime; raise Exception unless utc=True given (GH1581)
Switch to klib/khash-based hash tables in Index classes for better performance in many cases and lower memory
footprint
Shipping some functions from scipy.stats to reduce dependency, e.g. Series.describe and DataFrame.describe
(GH1092)
Can create MultiIndex by passing list of lists or list of arrays to Series, DataFrame constructor, etc. (GH831)
Can pass arrays in addition to column names to DataFrame.set_index (GH402)
Improve the speed of square reindexing of homogeneous DataFrame objects by significant margin (GH836)
Handle more dtypes when passed MaskedArrays in DataFrame constructor (GH406)
Improved performance of join operations on integer keys (GH682)
Can pass multiple columns to GroupBy object, e.g. grouped[[col1, col2]] to only aggregate a subset of the value
columns (GH383)
Add histogram / kde plot options for scatter_matrix diagonals (GH1237)
Add inplace option to Series/DataFrame.rename and sort_index, DataFrame.drop_duplicates (GH805, GH207)
More helpful error message when nothing passed to Series.reindex (GH1267)
Can mix array and scalars as dict-value inputs to DataFrame ctor (GH1329)
Use DataFrame columns name for legend title in plots
Preserve frequency in DatetimeIndex when possible in boolean indexing operations
Promote datetime.date values in data alignment operations (GH867)
Add order method to Index classes (GH1028)
Avoid hash table creation in large monotonic hash table indexes (GH1160)
Store time zones in HDFStore (GH1232)
Enable storage of sparse data structures in HDFStore (GH85)
Enable Series.asof to work with arrays of timestamp inputs
Cython implementation of DataFrame.corr speeds up by > 100x (GH1349, GH1354)
Exclude nuisance columns automatically in GroupBy.transform (GH1364)
Support functions-as-strings in GroupBy.transform (GH1362)
Use index name as xlabel/ylabel in plots (GH1415)
Add convert_dtype option to Series.apply to be able to leave data as dtype=object (GH1414)
Can specify all index level names in concat (GH1419)
Add dialect keyword to parsers for quoting conventions (GH1363)
Enable DataFrame[bool_DataFrame] += value (GH1366)
Add retries argument to get_data_yahoo to try to prevent Yahoo! API 404s (GH826)
Improve performance of reshaping by using O(N) categorical sorting
Series names will be used for index of DataFrame if no index passed (GH1494)
Header argument in DataFrame.to_csv can accept a list of column names to use instead of the objects columns
(GH921)
Add raise_conflict argument to DataFrame.update (GH1526)
Support file-like objects in ExcelFile (GH1529)
Fix OverflowError from storing pre-1970 dates in HDFStore by switching to datetime64 (GH179)
Fix logical error with February leap year end in YearEnd offset
Series([False, nan]) was getting cast to float64 (GH1074)
Fix binary operations between boolean Series and object Series with booleans and NAs (GH1074, GH1079)
Couldn't assign whole array to column in mixed-type DataFrame via .ix (GH1142)
Fix label slicing issues with float index values (GH1167)
Fix segfault caused by empty groups passed to groupby (GH1048)
Fix occasionally misbehaved reindexing in the presence of NaN labels (GH522)
Fix imprecise logic causing weird Series results from .apply (GH1183)
Unstack multiple levels in one shot, avoiding empty columns in some cases. Fix pivot table bug (GH1181)
Fix formatting of MultiIndex on Series/DataFrame when index name coincides with label (GH1217)
Handle Excel 2003 #N/A as NaN from xlrd (GH1213, GH1225)
Fix timestamp locale-related deserialization issues with HDFStore by moving to datetime64 representation
(GH1081, GH809)
Fix DataFrame.duplicated/drop_duplicates NA value handling (GH557)
Actually raise exceptions in fast reducer (GH1243)
Fix various timezone-handling bugs from 0.7.3 (GH969)
GroupBy on level=0 discarded index name (GH1313)
Better error message with unmergeable DataFrames (GH1307)
Series.__repr__ alignment fix with unicode index values (GH1279)
Better error message if nothing passed to reindex (GH1267)
More robust NA handling in DataFrame.drop_duplicates (GH557)
Resolve locale-based and pre-epoch HDF5 timestamp deserialization issues (GH973, GH1081, GH179)
Implement Series.repeat (GH1229)
Fix indexing with namedtuple and other tuple subclasses (GH1026)
Fix float64 slicing bug (GH1167)
Support for non-unique indexes: indexing and selection, many-to-one and many-to-many joins (GH1306)
Added fixed-width file reader, read_fwf (GH952)
Add group_keys argument to groupby to not add group names to MultiIndex in result of apply (GH938)
DataFrame can now accept non-integer label slicing (GH946). Previously only DataFrame.ix was able to do so.
DataFrame.apply now retains name attributes on Series objects (GH983)
Numeric DataFrame comparisons with non-numeric values now raise a proper TypeError (GH943). Previously
raised PandasError: DataFrame constructor not properly called!
Add kurt methods to Series and DataFrame (GH964)
Can pass dict of column -> list/set NA values for text parsers (GH754)
Allow user-specified NA values in text parsers (GH754)
Parsers checks for openpyxl dependency and raises ImportError if not found (GH1007)
New factory function to create HDFStore objects that can be used in a with statement so users do not have to
explicitly call HDFStore.close (GH1005)
pivot_table is now more flexible with same parameters as groupby (GH941)
Added stacked bar plots (GH987)
scatter_matrix method in pandas/tools/plotting.py (GH935)
DataFrame.boxplot returns plot results for ex-post styling (GH985)
Short version number accessible as pandas.version.short_version (GH930)
Additional documentation in panel.to_frame (GH942)
More informative Series.apply docstring regarding element-wise apply (GH977)
Notes on rpy2 installation (GH1006)
Add rotation and font size options to hist method (GH1012)
Use exogenous / X variable index in result of OLS.y_predict. Add OLS.predict method (GH1027, GH1008)
Calling apply on grouped Series, e.g. describe(), will no longer yield a DataFrame by default. You will have to
call unstack() to get the prior behavior
NA handling in non-numeric comparisons has been tightened up (GH933, GH953)
No longer assign dummy names key_0, key_1, etc. to groupby index (GH1291)
Fix logic error when selecting part of a row in a DataFrame with a MultiIndex index (GH1013)
Series comparison with Series of differing length causes crash (GH1016).
Fix bug in indexing when selecting section of hierarchically-indexed row (GH1013)
DataFrame.plot(logy=True) has no effect (GH1011).
Broken arithmetic operations between SparsePanel-Panel (GH1015)
Unicode repr issues in MultiIndex with non-ASCII characters (GH1010)
DataFrame.lookup() returns inconsistent results if exact match not present (GH1001)
DataFrame arithmetic operations not treating None as NA (GH992)
DataFrameGroupBy.apply returns incorrect result (GH991)
Series.reshape returns incorrect result for multiple dimensions (GH989)
Series.std and Series.var ignores ddof parameter (GH934)
DataFrame.append loses index names (GH980)
DataFrame.plot(kind=bar) ignores color argument (GH958)
Inconsistent Index comparison results (GH948)
Improper int dtype DataFrame construction from data with NaN (GH846)
Removes default result name in groupby results (GH995)
Series.sum returns 0 instead of NA when called on an empty series. Analogously for a DataFrame whose rows
or columns are length 0 (GH844)
Add to_clipboard function to pandas namespace for writing objects to the system clipboard (GH774)
Add itertuples method to DataFrame for iterating through the rows of a dataframe as tuples (GH818)
Add ability to pass fill_value and method to DataFrame and Series align method (GH806, GH807)
Add fill_value option to reindex, align methods (GH784)
Enable concat to produce DataFrame from Series (GH787)
Add between method to Series (GH802)
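For example (values are made up; both bounds are inclusive by default):

import pandas as pd

s = pd.Series([1, 5, 10])
s.between(2, 8)   # False, True, False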
Add HTML representation hook to DataFrame for the IPython HTML notebook (GH773)
Support for reading Excel 2007 XML documents using openpyxl
Fix memory leak when inserting large number of columns into a single DataFrame (GH790)
Appending length-0 DataFrame with new columns would not result in those new columns being part of the
resulting concatenated DataFrame (GH782)
Fixed groupby corner case when passing dictionary grouper and as_index is False (GH819)
Fixed bug whereby bool array sometimes had object dtype (GH820)
Fix exception thrown on np.diff (GH816)
Fix to_records where columns are non-strings (GH822)
Fix Index.intersection where indices have incomparable types (GH811)
Fix ExcelFile throwing an exception for two-line file (GH837)
Add clearer error message in csv parser (GH835)
Fix loss of fractional seconds in HDFStore (GH513)
Fix DataFrame join where columns have datetimes (GH787)
Work around numpy performance issue in take (GH817)
Improve comparison operations for NA-friendliness (GH801)
Fix indexing operation for floating point values (GH780, GH798)
Fix groupby case resulting in malformed dataframe (GH814)
Fix behavior of reindex of Series dropping name (GH812)
Avoid redundant groupby computation (GH775)
Catch possible NA assignment to int/bool series with exception (GH839)
New merge function for efficiently performing the full gamut of database / relational-algebra operations. Refactored
existing join methods to use the new infrastructure, resulting in substantial performance gains (GH220, GH249,
GH267)
New concat function for concatenating DataFrame or Panel objects along an axis. Can form the union or
intersection of the other axes. Improves performance of DataFrame.append (GH468, GH479, GH273); a sketch of
both functions follows
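A minimal sketch of merge and concat (column names are illustrative):
import pandas as pd
left = pd.DataFrame({'key': ['a', 'b', 'c'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'key': ['b', 'c', 'd'], 'rval': [4, 5, 6]})
joined = pd.merge(left, right, on='key', how='inner')    # database-style join on 'key'
unioned = pd.concat([left, right], axis=0, join='outer')  # union of the column axis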
Handle differently-indexed output values in DataFrame.apply (GH498)
Can pass list of dicts (e.g., a list of shallow JSON objects) to DataFrame constructor (GH526)
Add reorder_levels method to Series and DataFrame (GH534)
Add dict-like get function to DataFrame and Panel (GH521)
DataFrame.iterrows method for efficiently iterating through the rows of a DataFrame
Added DataFrame.to_panel with code adapted from LongPanel.to_long
Label-indexing with integer indexes now raises KeyError if a label is not found instead of falling back on
location-based indexing (GH700)
Label-based slicing via ix or [] on Series will now only work if exact matches for the labels are found or if
the index is monotonic (for range selections)
Label-based slicing and sequences of labels can be passed to [] on a Series for both getting and setting (GH86)
[] operator (__getitem__ and __setitem__) will raise KeyError with integer indexes when a label is
not contained in the index. The prior behavior was to fall back on position-based indexing if a key was not found
in the index, which could lead to subtle bugs. This is now consistent with the behavior of .ix on DataFrame
and friends (GH328); see the sketch below
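A short sketch of the stricter behavior:
import pandas as pd
s = pd.Series([10, 20, 30], index=[1, 2, 3])
s[2]  # label-based lookup: returns 20
try:
    s[0]  # 0 is not a label; no silent fallback to position 0
except KeyError:
    print('0 is not in the index')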
Rename DataFrame.delevel to DataFrame.reset_index and add deprecation warning
Series.sort (an in-place operation) called on a Series which is a view on a larger array (e.g. a column in a
DataFrame) will generate an Exception to prevent accidentally modifying the data source (GH316)
Refactor to remove deprecated LongPanel class (GH552)
Deprecated Panel.to_long, renamed to to_frame
Deprecated colSpace argument in DataFrame.to_string, renamed to col_space
Rename precision to accuracy in engineering float formatter (GH395)
The default delimiter for read_csv is comma rather than letting csv.Sniffer infer it
Rename col_or_columns argument in DataFrame.drop_duplicates (GH734)
Better error message in DataFrame constructor when passed column labels don't match data (GH497)
Substantially improve performance of multi-GroupBy aggregation when a Python function is passed, reuse
ndarray object in Cython (GH496)
Can store objects indexed by tuples and floats in HDFStore (GH492)
Don't print length by default in Series.to_string, add length option (GH489)
Improve Cython code for multi-groupby to aggregate without having to sort the data (GH93)
Improve MultiIndex reindexing speed by storing tuples in the MultiIndex, test for backwards unpickling
compatibility
Improve column reindexing performance by using specialized Cython take function
Further performance tweaking of Series.__getitem__ for standard use cases
Avoid Index dict creation in some cases (i.e. when getting slices, etc.), regression from prior versions
Friendlier error message in setup.py if NumPy not installed
Use common set of NA-handling operations (sum, mean, etc.) in Panel class also (GH536)
Default name assignment when calling reset_index on DataFrame with a regular (non-hierarchical) index
(GH476)
Use Cythonized groupers when possible in Series/DataFrame stat ops with level parameter passed (GH545)
Ported skiplist data structure to C to speed up rolling_median by about 5-10x in most typical use cases
(GH374)
Raise exception in out-of-bounds indexing of Series instead of seg-faulting, regression from earlier releases
(GH495)
Fix error when joining DataFrames of different dtypes within the same typeclass (e.g. float32 and float64)
(GH486)
Fix bug in Series.min/Series.max on objects like datetime.datetime (GH487)
Preserve index names in Index.union (GH501)
Fix bug in Index joining causing subclass information (like DateRange type) to be lost in some cases (GH500)
Accept empty list as input to DataFrame constructor, regression from 0.6.0 (GH491)
Can output DataFrame and Series with ndarray objects in a dtype=object array (GH490)
Return empty string from Series.to_string when called on empty Series (GH488)
Fix exception passing empty list to DataFrame.from_records
Fix Index.format bug (excluding name field) with datetimes with time info
Fix scalar value access in Series to always return NumPy scalars, regression from prior versions (GH510)
Handle rows skipped at beginning of file in read_* functions (GH505)
Handle improper dtype casting in set_value methods
Unary - / __neg__ operator on DataFrame was returning integer values
Unbox 0-dim ndarrays from certain operators like all, any in Series
Fix handling of missing columns (was combine_first-specific) in DataFrame.combine for general case (GH529)
Fix type inference logic with boolean lists and arrays in DataFrame indexing
Use centered sum of squares in R-square computation if entity_effects=True in panel regression
Handle all NA case in Series.{corr, cov}, was raising exception (GH548)
Aggregating by multiple levels with level argument to DataFrame, Series stat method, was broken (GH545)
Fix Cython bug when a converter passed to read_csv produced a numeric array (buffer dtype mismatch when
passed to Cython type inference function) (GH546)
Fix exception when setting scalar value using .ix on a DataFrame with a MultiIndex (GH551)
Fix outer join between two DateRanges with different offsets that returned an invalid DateRange
Cleanup DataFrame.from_records failure where index argument is an integer
Fix DataFrame.from_records failure when passed a dictionary
Fix NA handling in {Series, DataFrame}.rank with non-floating point dtypes
Fix bug related to integer type-checking in .ix-based indexing
Handle non-string index name passed to DataFrame.from_records
DataFrame.insert caused the columns' name(s) field to be discarded (GH527)
Fix DataFrame.to_string to remove extra column white space (GH571)
Format floats to default to same number of digits (GH395)
Added decorator to copy docstring from one function to another (GH449)
Fix error in monotonic many-to-one left joins
Fix __eq__ comparison between DateOffsets with different relativedelta keywords passed
Fix exception caused by parser converter returning strings (GH583)
Fix MultiIndex formatting bug with integer names (GH601)
Fix bug in handling of non-numeric aggregates in Series.groupby (GH612)
Fix TypeError with tuple subclasses (e.g. namedtuple) in DataFrame.from_records (GH611)
36.30.5 Thanks
Craig Austin
Chris Billington
Marius Cobzarenco
Mario Gamboa-Cavazos
Hans-Martin Gaudecker
Arthur Gerigk
Yaroslav Halchenko
Jeff Hammerbacher
Matt Harrison
Andreas Hilboll
Luc Kesters
Adam Klein
Gregg Lind
Solomon Negusse
Wouter Overmeire
Christian Prinoth
Jeff Reback
Sam Reckoner
Craig Reeson
Jan Schulz
Skipper Seabold
Ted Square
Graham Taylor
Aman Thakral
Chris Uga
Dieter Vandenbussche
Texas P.
Pinxing Ye
... and everyone I forgot
Can pass Series to DataFrame.append with ignore_index=True for appending a single row (GH430)
Add Spearman and Kendall correlation options to Series.corr and DataFrame.corr (GH428)
Add new get_value and set_value methods to Series, DataFrame, and Panel for very low-overhead access to
scalar elements. df.get_value(row, column) is about 3x faster than df[column][row] by handling fewer cases
(GH437, GH438). Add similar methods to sparse data structures for compatibility; a sketch follows
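A minimal sketch against this era's API (later pandas versions deprecate these methods in favor of .at / .iat):
import pandas as pd
df = pd.DataFrame({'A': [1.0, 2.0], 'B': [3.0, 4.0]}, index=['x', 'y'])
v = df.get_value('x', 'B')   # low-overhead scalar lookup
df.set_value('y', 'A', 9.0)  # low-overhead scalar assignment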
Add Qt table widget to sandbox (GH435)
DataFrame.align can accept Series arguments, add axis keyword (GH461)
Implement new SparseList and SparseArray data structures. SparseSeries now derives from SparseArray
(GH463)
max_columns / max_rows options in set_printoptions (GH453)
Implement Series.rank and DataFrame.rank, fast versions of scipy.stats.rankdata (GH428)
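A minimal sketch combining the rank and rank-correlation additions above (the rank-correlation methods rely on SciPy):
import pandas as pd
s1 = pd.Series([1, 2, 3, 4, 5])
s2 = pd.Series([2, 1, 4, 3, 5])
print(s1.rank())                       # average ranks, a la scipy.stats.rankdata
print(s1.corr(s2, method='spearman'))  # rank correlation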
Implement DataFrame.from_items alternate constructor (GH444)
DataFrame.convert_objects method for inferring better dtypes for object columns (GH302)
Add rolling_corr_pairwise function for computing Panel of correlation matrices (GH189)
Add margins option to pivot_table for computing subgroup aggregates (GH114)
Add Series.from_csv function (GH482)
Improve memory usage of DataFrame.describe (do not copy data unnecessarily) (GH425)
Use same formatting function for outputting floating point Series to console as in DataFrame (GH420)
DataFrame.delevel will try to infer better dtype for new columns (GH440)
Exclude non-numeric types in DataFrame.{corr, cov}
Override Index.astype to enable dtype casting (GH412)
Use same float formatting function for Series.__repr__ (GH420)
Use available console width to output DataFrame columns (GH453)
Accept ndarrays when setting items in Panel (GH452)
Infer console width when printing __repr__ of DataFrame to console (GH453)
Optimize scalar value lookups in the general case by 25% or more in Series and DataFrame
Can pass DataFrame/DataFrame and DataFrame/Series to rolling_corr/rolling_cov (GH462)
Fix performance regression in cross-sectional count in DataFrame, affecting DataFrame.dropna speed
Column deletion in DataFrame copies no data (computes views on blocks) (GH158)
MultiIndex.get_level_values can take the level name
More helpful error message when DataFrame.plot fails on one of the columns (GH478)
Improve performance of DataFrame.{index, columns} attribute lookup
Fix O(K^2) memory leak caused by inserting many columns without consolidating, had been present since 0.4.0
(GH467)
DataFrame.count should return Series with zero instead of NA with length-0 axis (GH423)
Fix Yahoo! Finance API usage in pandas.io.data (GH419, GH427)
Fix upstream bug causing failure in Series.align with empty Series (GH434)
Function passed to DataFrame.apply can return a list, as long as it's the right length. Regression from 0.4
(GH432)
Don't accidentally upcast scalar values when indexing using .ix (GH431)
Fix groupby exception raised with as_index=False and single column selected (GH421)
Implement DateOffset.__ne__; its absence was causing a downstream bug (GH456)
Fix __doc__-related issue when converting py -> pyo with py2exe
Bug fix in left join Cython code with duplicate monotonic labels
Fix bug when unstacking multiple levels described in GH451
Exclude NA values in dtype=object arrays, regression from 0.5.0 (GH469)
Use Cython map_infer function in DataFrame.applymap to properly infer output type, handle tuple return values
and other things that were breaking (GH465)
Handle floating point index values in HDFStore (GH454)
Fixed stale column reference bug (cached Series object) caused by type change / item deletion in DataFrame
(GH473)
Index.get_loc should always raise Exception when there are duplicates
Handle differently-indexed Series input to DataFrame constructor (GH475)
Omit nuisance columns in multi-groupby with Python function
Buglet in handling of single grouping in general apply
Handle type inference properly when passing list of lists or tuples to DataFrame constructor (GH484)
Preserve Index / MultiIndex names in GroupBy.apply concatenation step (GH481)
36.31.5 Thanks
Ralph Bean
Luca Beltrame
Marius Cobzarenco
Andreas Hilboll
Jev Kuznetsov
Adam Lichtenstein
Wouter Overmeire
Fernando Perez
Nathan Pinger
Christian Prinoth
Alex Reyfman
Joon Ro
Chang She
Ted Square
Chris Uga
Dieter Vandenbussche
Arithmetic methods like sum will attempt to sum dtype=object values by default instead of excluding them
(GH382)
New Cython vectorized function map_infer speeds up Series.apply and Series.map significantly when passed
elementwise Python function, motivated by GH355
Cythonized cache_readonly, resulting in substantial micro-performance enhancements throughout the codebase
(GH361)
Special Cython matrix iterator for applying arbitrary reduction operations with 3-5x better performance than
np.apply_along_axis (GH309)
Add raw option to DataFrame.apply for getting better performance when the passed function only requires an
ndarray (GH309)
Improve performance of MultiIndex.from_tuples
Can pass multiple levels to stack and unstack (GH370)
Can pass multiple values columns to pivot_table (GH381)
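A brief sketch of passing multiple value columns to pivot_table (column names are illustrative):
import pandas as pd
df = pd.DataFrame({'A': ['x', 'x', 'y', 'y'], 'B': ['u', 'v', 'u', 'v'],
                   'v1': [1, 2, 3, 4], 'v2': [5, 6, 7, 8]})
pd.pivot_table(df, values=['v1', 'v2'], index='A', columns='B')  # one table, two value columns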
Can call DataFrame.delevel with standard Index with name set (GH393)
Use Series name in GroupBy for result index (GH363)
Refactor Series/DataFrame stat methods to use a common set of NaN-friendly functions
Handle NumPy scalar integers at C level in Cython conversion routines
Fix bug in DataFrame.to_csv when writing a DataFrame with an index name (GH290)
DataFrame should clear its Series caches on consolidation, was causing stale Series to be returned in some
corner cases (GH304)
DataFrame constructor failed if a column had a list of tuples (GH293)
Ensure that Series.apply always returns a Series and implement Series.round (GH314)
Support boolean columns in Cythonized groupby functions (GH315)
DataFrame.describe should not fail if there are no numeric columns, instead return categorical describe (GH323)
Fixed bug which could cause columns to be printed in wrong order in DataFrame.to_string if specific list of
columns passed (GH325)
Fix legend plotting failure if DataFrame columns are integers (GH326)
Shift start date back by one month for Yahoo! Finance API in pandas.io.data (GH329)
Fix DataFrame.join failure on unconsolidated inputs (GH331)
DataFrame.min/max will no longer fail on mixed-type DataFrame (GH337)
Fix read_csv / read_table failure when passing list to index_col that is not in ascending order (GH349)
Fix failure passing Int64Index to Index.union when both are monotonic
Fix error when passing SparseSeries to (dense) DataFrame constructor
Added missing bang at top of setup.py (GH352)
Change is_monotonic on MultiIndex so it properly compares the tuples
Fix MultiIndex outer join logic (GH351)
Set index name attribute with single-key groupby (GH358)
Bug fix in reflexive binary addition in Series and DataFrame for non-commutative operations (like string
concatenation) (GH353)
setupegg.py will invoke Cython (GH192)
Fix block consolidation bug after inserting column into MultiIndex (GH366)
Fix bug in join operations between Index and Int64Index (GH367)
Handle min_periods=0 case in moving window functions (GH365)
Fixed corner cases in DataFrame.apply/pivot with empty DataFrame (GH378)
Fixed repr exception when Series name is a tuple
Always return DateRange from asfreq (GH390)
Pass level names to swaplevel (GH379)
Don't lose index names in MultiIndex.droplevel (GH394)
Infer a more proper return type in DataFrame.apply when there are no columns or rows, depending on whether the
passed function is a reduction (GH389)
Always return NA/NaN from Series.min/max and DataFrame.min/max when all of a row/column/values are NA
(GH384)
Enable partial setting with .ix / advanced indexing (GH397)
Handle mixed-type DataFrames correctly in unstack, do not lose type information (GH403)
Fix integer name formatting bug in Index.format and in Series.__repr__
Handle label types other than string passed to groupby (GH405)
Fix bug in .ix-based indexing with partial retrieval when a label is not contained in a level
Index name was not being pickled (GH408)
Level name should be passed to result index in GroupBy.apply (GH416)
36.32.5 Thanks
Craig Austin
Marius Cobzarenco
Joel Cross
Jeff Hammerbacher
Adam Klein
Thomas Kluyver
Jev Kuznetsov
Kieran OMahony
Wouter Overmeire
Nathan Pinger
Christian Prinoth
Skipper Seabold
Chang She
Ted Square
Aman Thakral
Chris Uga
Dieter Vandenbussche
carljv
rsamson
The default index_col argument for read_table, read_csv, and ExcelFile.parse is now None. To use one or more of
the columns as the resulting DataFrame's index, these must now be specified explicitly
Parsing functions like read_csv no longer parse dates by default (GH225)
Removed weights option in panel regression which was not doing anything principled (GH155)
Changed buffer argument name in Series.to_string to buf
Series.to_string and DataFrame.to_string now return strings by default instead of printing to sys.stdout
Deprecated nanRep argument in various to_string and to_csv functions in favor of na_rep. Will be removed in
0.6 (GH275)
Renamed delimiter to sep in DataFrame.from_csv for consistency
Changed order of Series.clip arguments to match those of numpy.clip and added (unimplemented) out argument
so numpy.clip can be called on a Series (GH272)
Series functions renamed (and thus deprecated) in 0.4 series have been removed:
asOf, use asof
toDict, use to_dict
toString, use to_string
toCSV, use to_csv
merge, use map
applymap, use apply
combineFirst, use combine_first
_firstTimeWithValue use first_valid_index
Add nrows, chunksize, and iterator arguments to read_csv and read_table. The last two return a new TextParser
class capable of lazily iterating through chunks of a flat file (GH242)
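A minimal sketch of chunked reading ('data.csv' is a placeholder path):
import pandas as pd
total = 0
for chunk in pd.read_csv('data.csv', chunksize=1000):  # lazily yields 1,000-row DataFrames
    total += len(chunk)  # process each chunk without loading the whole file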
Added ability to join on multiple columns in DataFrame.join (GH214)
Added private _get_duplicates function to Index for identifying duplicate values more easily
Added column attribute access to DataFrame, e.g. df.A equivalent to df['A'] if 'A' is a column in the DataFrame
(GH213)
Added IPython tab completion hook for DataFrame columns. (GH233, GH230)
Implement Series.describe for Series containing objects (GH241)
Add inner join option to DataFrame.join when joining on key(s) (GH248)
Can select set of DataFrame columns by passing a list to __getitem__ (GH253)
Can use & and | to intersection / union Index objects, respectively (GH261)
Added pivot_table convenience function to pandas namespace (GH234)
Implemented Panel.rename_axis function (GH243)
DataFrame will show index level names in console output
Implemented Panel.take
Add set_eng_float_format function for setting alternate DataFrame floating point string formatting
Add convenience set_index function for creating a DataFrame index from its existing columns
36.33.6 Thanks
Thomas Kluyver
Daniel Fortunov
Aman Thakral
Luca Beltrame
Wouter Overmeire
Series.describe and DataFrame.describe now return the 25% and 75% quartiles instead of the 10% and 90%
deciles. The other outputs have not changed
Series.toString will print deprecation warning, has been de-camelCased to to_string
Fix broken interaction between Index and Int64Index when calling intersection. Implement
Int64Index.intersection
MultiIndex.sortlevel discarded the level names (GH202)
Fix bugs in groupby, join, and append due to improper concatenation of MultiIndex objects (GH201)
Fix regression from 0.4.1, isnull and notnull ceased to work on other kinds of Python scalar objects like
datetime.datetime
Raise more helpful exception when attempting to write empty DataFrame or LongPanel to HDFStore (GH204)
Use stdlib csv module to properly escape strings with commas in DataFrame.to_csv (GH206, Thomas Kluyver)
Fix Python ndarray access in Cython code for sparse blocked index integrity check
Fix bug writing Series to CSV in Python 3 (GH209)
Miscellaneous Python 3 bugfixes
36.34.5 Thanks
Thomas Kluyver
rsamson
Added fast Int64Index type with specialized join, union, intersection. Will result in significant performance
enhancements for int64-based time series (e.g. using NumPy's datetime64 one day) and also faster operations
on DataFrame objects storing record array-like data.
Refactored Index classes to have a join method and associated data alignment routines throughout the codebase
to be able to leverage optimized joining / merging routines.
Added Series.align method for aligning two series with choice of join method
Wrote faster Cython data alignment / merging routines resulting in substantial speed increases
Added is_monotonic property to Index classes with associated Cython code to evaluate the monotonicity of the
Index values
Add method get_level_values to MultiIndex
Implemented shallow copy of BlockManager object in DataFrame internals
Improved performance of DateRange.union with overlapping ranges and non-cacheable offsets (like Minute).
Implemented analogous fast DateRange.intersection for overlapping ranges.
Implemented BlockManager.take resulting in significantly faster take performance on mixed-type DataFrame
objects (GH104)
Improved performance of Series.sort_index
Significant groupby performance enhancement: removed unnecessary integrity checks in DataFrame internals
that were slowing down slicing operations to retrieve groups
Added informative Exception when passing dict to DataFrame groupby aggregation with axis != 0
Fixed minor unhandled exception in Cython code implementing fast groupby aggregation operations
Fixed bug in unstacking code manifesting with more than 3 hierarchical levels
Throw exception when step specified in label-based slice (GH185)
Fix isnull to correctly work with np.float32. Fix upstream bug described in GH182
Finish implementation of as_index=False in groupby for DataFrame aggregation (GH181)
Raise SkipTest for pre-epoch HDFStore failure. Real fix will be sorted out via datetime64 dtype
36.35.5 Thanks
Uri Laserson
Scott Sinclair
Fixed DataFrame constructor bug causing downstream problems (e.g. .copy() failing) when passing a Series as
the values along with a column name and index
Fixed single-key groupby on DataFrame with as_index=False (GH160)
Series.shift was failing on integer Series (GH154)
unstack methods were producing incorrect output in the case of duplicate hierarchical labels. An exception will
now be raised (GH147)
Calling count with level argument caused reduceat failure or segfault in earlier NumPy (GH169)
Fixed DataFrame.corrwith to automatically exclude non-numeric data (GH144)
Unicode handling bug fixes in DataFrame.to_string (GH138)
Excluding OLS degenerate unit test case that was causing platform specific failure (GH149)
Skip blosc-dependent unit tests for PyTables < 2.2 (GH137)
Calling copy on DateRange did not copy over attributes to the new object (GH168)
Fix bug in HDFStore in which Panel data could be appended to a Table with different item order, thus resulting
in an incorrect result read back
36.36.5 Thanks
Yaroslav Halchenko
Jeff Reback
Skipper Seabold
Dan Lovell
Nick Pentreath
pandas.core.sparse module: Sparse (mostly-NA, or some other fill value) versions of Series, DataFrame, and
Panel. For low-density data, this will result in significant performance boosts, and smaller memory footprint.
Added to_sparse methods to Series, DataFrame, and Panel. See online documentation for more on these
Fancy indexing operator on Series / DataFrame, e.g. via .ix operator. Both getting and setting of values is
supported; however, setting values will only currently work on homogeneously-typed DataFrame objects. Things
like:
series.ix[[d1, d2, d3]]
frame.ix[5:10, ['C', 'B', 'A']], frame.ix[5:10, 'A':'C']
frame.ix[date1:date2]
Significantly enhanced groupby functionality
Can groupby multiple keys, e.g. df.groupby(['key1', 'key2']). Iteration with multiple groupings produces
a flattened tuple
Nuisance columns (non-aggregatable) will automatically be excluded from DataFrame aggregation
operations
Added automatic dispatching to Series / DataFrame methods to more easily invoke methods on groups,
e.g. s.groupby(crit).std() will work even though std is not implemented on the GroupBy class; a sketch follows
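A minimal sketch of the dispatching (crit is an illustrative grouping key):
import pandas as pd
s = pd.Series([1.0, 2.0, 3.0, 4.0])
crit = ['a', 'a', 'b', 'b']
print(s.groupby(crit).std())  # dispatched to Series.std within each group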
Hierarchical / multi-level indexing
New MultiIndex class. Integrated MultiIndex into Series and DataFrame fancy indexing, slicing,
__getitem__ and __setitem__, reindexing, etc. Added level keyword argument to groupby to enable grouping
by a level of a MultiIndex
New data reshaping functions: stack and unstack on DataFrame and Series
Integrate with MultiIndex to enable sophisticated reshaping of data
Index objects (labels for axes) are now capable of holding tuples
Series.describe, DataFrame.describe: produces an R-like table of summary statistics about each data column
DataFrame.quantile, Series.quantile for computing sample quantiles of data across requested axis
Added general DataFrame.dropna method to replace dropIncompleteRows and dropEmptyRows, deprecated
those.
Series arithmetic methods with optional fill_value for missing data, e.g. a.add(b, fill_value=0). If a location is
missing for both it will still be missing in the result though.
fill_value option has been added to DataFrame.{add, mul, sub, div} methods similar to Series
Boolean indexing with DataFrame objects: data[data > 0.1] = 0.1 or data[data > other] = 1.
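A short sketch of both behaviors above (values are illustrative):
import numpy as np
import pandas as pd
a = pd.Series([1.0, np.nan, 3.0], index=['x', 'y', 'z'])
b = pd.Series([10.0, 20.0, np.nan], index=['x', 'y', 'z'])
print(a.add(b, fill_value=0))  # result is NaN only where both sides are missing
data = pd.DataFrame(np.random.randn(4, 3))
data[data > 0.1] = 0.1         # boolean indexing caps values in place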
pytz / tzinfo support in DateRange
tz_localize, tz_normalize, and tz_validate methods added
Added ExcelFile class to pandas.io.parsers for parsing multiple sheets out of a single Excel 2003 document
GroupBy aggregations can now optionally broadcast, e.g. produce an object of the same size with the aggregated
value propagated
Added select function in all data structures: reindex axis based on arbitrary criterion (function returning boolean
value), e.g. frame.select(lambda x: 'foo' in x, axis=1)
DataFrame.consolidate method, API function relating to redesigned internals
DataFrame.insert method for inserting column at a specified location rather than the default __setitem__
behavior (which puts it at the end)
HDFStore class in pandas.io.pytables has been largely rewritten using patches from Jeff Reback and others. It
now supports mixed-type DataFrame and Series data and can store Panel objects. It also has the option to query
DataFrame and Panel data. Loading data from legacy HDFStore files is supported explicitly in the code
Added set_printoptions method to modify appearance of DataFrame tabular output
rolling_quantile functions; a moving version of Series.quantile / DataFrame.quantile
Generic rolling_apply moving window function
New drop method added to Series, DataFrame, etc. which can drop a set of labels from an axis, producing a
new object
reindex methods now sport a copy option so that data is not forced to be copied when the resulting object is
indexed the same
Added sort_index methods to Series and Panel. Renamed DataFrame.sort to sort_index. Leaving
DataFrame.sort for now.
Added skipna option to statistical instance methods on all the data structures
pandas.io.data module providing a consistent interface for reading time series data from several different sources
The 2-dimensional DataFrame and DataMatrix classes have been extensively redesigned internally into a single
class DataFrame, preserving where possible their optimal performance characteristics. This should reduce
confusion from users about which class to use.
Note that under the hood there is a new essentially lazy evaluation scheme with respect to adding
columns to DataFrame. During some operations, like-typed blocks will be consolidated but not before.
Accessing DataFrame columns repeatedly is now significantly faster than it was in DataMatrix in 0.3.0, due to
an internal Series caching mechanism (the cached Series are all views on the underlying data)
Column ordering for mixed type data is now completely consistent in DataFrame. In prior releases, there was
inconsistent column ordering in DataMatrix
Improved console / string formatting of DataMatrix with negative numbers
Improved tabular data parsing functions, read_table and read_csv:
Added skiprows and na_values arguments to pandas.io.parsers functions for more flexible IO
parseCSV / read_csv functions and others in pandas.io.parsers now can take a list of custom NA values,
and also a list of rows to skip
Can slice DataFrame and get a view of the data (when homogeneously typed), e.g. frame.xs(idx, copy=False)
or frame.ix[idx]
Many speed optimizations throughout Series and DataFrame
Eager evaluation of groups when calling groupby functions, so if there is an exception with the grouping
function it will be raised immediately rather than sometime later when the groups are needed
datetools.WeekOfMonth offset can be parameterized with n different from 1 or -1.
Statistical methods on DataFrame like mean, std, var, skew will now ignore non-numerical data. Previously, an
unhelpful error message was generated. A numeric_only flag has been added to DataFrame.sum and
DataFrame.count to enable this behavior in those methods if so desired (disabled by default)
DataFrame.pivot generalized to enable pivoting multiple columns into a DataFrame with hierarchical columns
DataFrame constructor can accept structured / record arrays
Panel constructor can accept a dict of DataFrame-like objects. Do not need to use from_dict anymore (from_dict
is there to stay, though).
The DataMatrix variable now refers to DataFrame, will be removed within two releases
WidePanel is now known as Panel. The WidePanel variable in the pandas namespace now refers to the renamed
Panel class
LongPanel and Panel / WidePanel now no longer have a common subclass. LongPanel is now a subclass of
DataFrame having a number of additional methods and a hierarchical index instead of the old LongPanelIndex
object, which has been removed. Legacy LongPanel pickles may not load properly
Cython is now required to build pandas from a development branch. This was done to avoid continuing to check
in cythonized C files into source control. Builds from released source distributions will not require Cython
Cython code has been moved up to a top level pandas/src directory. Cython extension modules have been
renamed and promoted from the lib subpackage to the top level, i.e.
pandas.lib.tseries -> pandas._tseries
pandas.lib.sparse -> pandas._sparse
DataFrame pickling format has changed. Backwards compatibility for legacy pickles is provided, but it's
recommended to consider PyTables-based HDFStore for storing data with a longer expected shelf life
A copy argument has been added to the DataFrame constructor to avoid unnecessary copying of data. Data is
no longer copied by default when passed into the constructor
Handling of boolean dtype in DataFrame has been improved to support storage of boolean data with NA / NaN
values. Before it was being converted to float64 so this should not (in theory) cause API breakage
To optimize performance, Index objects now only check that their labels are unique when uniqueness matters
(i.e. when someone goes to perform a lookup). This is a potentially dangerous tradeoff, but will lead to much
better performance in many places (like groupby).
Boolean indexing using Series must now have the same indices (labels)
Backwards compatibility support for begin/end/nPeriods keyword arguments in DateRange class has been
removed
More intuitive / shorter filling aliases ffill (for pad) and bfill (for backfill) have been added to the functions that
use them: reindex, asfreq, fillna.
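A quick sketch of the aliases (era-appropriate API; values are illustrative):
import numpy as np
import pandas as pd
s = pd.Series([1.0, np.nan, np.nan, 4.0])
print(s.fillna(method='ffill'))  # alias for method='pad'
print(s.fillna(method='bfill'))  # alias for method='backfill'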
pandas.core.mixins code moved to pandas.core.generic
buffer keyword arguments (e.g. DataFrame.toString) renamed to buf to avoid using the Python built-in name
DataFrame.rows() removed (use DataFrame.index)
Column ordering in pandas.io.parsers.parseCSV will match CSV in the presence of mixed-type data
Fixed handling of Excel 2003 dates in pandas.io.parsers
DateRange caching was happening with high resolution DateOffset objects, e.g. DateOffset(seconds=1). This
has been fixed
Fixed __truediv__ issue in DataFrame
Fixed DataFrame.toCSV bug preventing IO round trips in some cases
Fixed bug in Series.plot causing matplotlib to barf in exceptional cases
Disabled Index objects from being hashable, like ndarrays
Added __ne__ implementation to Index so that operations like ts[ts != idx] will work
Added __ne__ implementation to DataFrame
36.37.5 Thanks
Joon Ro
Michael Pennington
Chris Uga
Chris Withers
Jeff Reback
Ted Square
Craig Austin
William Ferreira
Daniel Fortunov
Tony Roberts
Martin Felder
John Marino
Tim McNamara
Justin Berka
Dieter Vandenbussche
Shane Conway
Skipper Seabold
Chris Jordan-Squire
corrwith function to compute column- or row-wise correlations between two DataFrame objects
Can boolean-index DataFrame objects, e.g. df[df > 2] = 2, px[px > last_px] = 0
Added comparison magic methods (__lt__, __gt__, etc.)
Flexible explicit arithmetic methods (add, mul, sub, div, etc.)
Added reindex_like method
Added reindex_like method to WidePanel
Convenience functions for accessing SQL-like databases in pandas.io.sql module
Added (still experimental) HDFStore class for storing pandas data structures using HDF5 / PyTables in the
pandas.io.pytables module
Added WeekOfMonth date offset
pandas.rpy (experimental) module created, provide some interfacing / conversion between rpy2 and pandas
Exponentially-weighted moment functions in pandas.stats.moments have a more consistent API and accept a
min_periods argument like their regular moving counterparts.
fillMethod argument in Series, DataFrame changed to method, FutureWarning added.
fill method in Series, DataFrame/DataMatrix, WidePanel renamed to fillna, FutureWarning added to fill
Renamed DataFrame.getXS to xs, FutureWarning added
Removed cap and floor functions from DataFrame, renamed to clip_upper and clip_lower for consistency with
NumPy
Fixed bug in IndexableSkiplist Cython code that was breaking rolling_max function
Numerous numpy.int64-related indexing fixes
Several NumPy 1.4.0 NaN-handling fixes
Bug fixes to pandas.io.parsers.parseCSV
Fixed DateRange caching issue with unusual date offsets
Fixed bug in DateRange.union