In-memory to_pickle leads to I/O error #29570

Closed
reidhin opened this issue Nov 12, 2019 · 5 comments · Fixed by #35736
Labels
Bug IO Pickle read_pickle, to_pickle
Milestone

Comments


reidhin commented Nov 12, 2019

Code Sample

# import libraries
import pandas as pd
import io

# show version
print(pd.__version__)
# 0.25.2

# create example dataframe
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]})

# create io-stream to act as surrogate file
stream = io.BytesIO()

# since the compression cannot be inferred from the filename, it has to be set explicitly.
df.to_pickle(stream, compression=None)

# stream.getvalue() can be used as binary load in an api call to save the dataframe in the cloud
print(stream.getvalue())
'''
correct output pandas version 0.24.1:
b'\x80\x04\x95\xe2\x02\x00\x00\x00\x00\x00\x00\x8c\x11pandas.core.frame\x94\x8c\tDataFrame\x94\x93\x94)\x81\x94}\x94(\x8c\x05_data\x94\x8c\x1epandas.core.internals.managers\x94\x8c\x0cBlockManager\x94\x93\x94)\x81\x94(]\x94(\x8c\x18pandas.core.indexes.base\x94\x8c\n_new_Index\x94\x93\x94h\x0b\x8c\x05Index\x94\x93\x94}\x94(\x8c\x04data\x94\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x02\x85\x94h\x15\x8c\x05dtype\x94\x93\x94\x8c\x02O8\x94K\x00K\x01\x87\x94R\x94(K\x03\x8c\x01|\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK?t\x94b\x89]\x94(\x8c\x01A\x94\x8c\x01B\x94et\x94b\x8c\x04name\x94Nu\x86\x94R\x94h\r\x8c\x19pandas.core.indexes.range\x94\x8c\nRangeIndex\x94\x93\x94}\x94(h(N\x8c\x05start\x94K\x00\x8c\x04stop\x94K\x04\x8c\x04step\x94K\x01u\x86\x94R\x94e]\x94h\x14h\x17K\x00\x85\x94h\x19\x87\x94R\x94(K\x01K\x02K\x04\x86\x94h\x1e\x8c\x02i8\x94K\x00K\x01\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C@\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x94t\x94ba]\x94h\rh\x0f}\x94(h\x11h\x14h\x17K\x00\x85\x94h\x19\x87\x94R\x94(K\x01K\x02\x85\x94h!\x89]\x94(h%h&et\x94bh(Nu\x86\x94R\x94a}\x94\x8c\x060.14.1\x94}\x94(\x8c\x04axes\x94h\n\x8c\x06blocks\x94]\x94}\x94(\x8c\x06values\x94h7\x8c\x08mgr_locs\x94\x8c\x08builtins\x94\x8c\x05slice\x94\x93\x94K\x00K\x02K\x01\x87\x94R\x94uaust\x94b\x8c\x04_typ\x94\x8c\tdataframe\x94\x8c\t_metadata\x94]\x94ub.'

erroneous output pandas version 0.25.2:
Traceback (most recent call last):
  File "C:/Projects/bug_report/report_bug.py", line 20, in <module>
    print(stream.getvalue())
ValueError: I/O operation on closed file.
'''

Problem description

Occasionally I would like to save pandas DataFrames in the cloud. This can be done through API calls in which the DataFrame is uploaded as binary content. The binary content can be created by passing an io.BytesIO stream to pandas.DataFrame.to_pickle, and subsequently retrieved with the stream's getvalue method. This works perfectly in pandas 0.24.1. After updating to pandas 0.25.2, however, it no longer works: the io.BytesIO stream is now closed inside to_pickle and can no longer be accessed.
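For context, a minimal sketch of the intended upload flow; the endpoint URL and the use of the requests library are illustrative assumptions and not part of this report:

# import libraries
import io
import pandas as pd
import requests  # assumed HTTP client, only for illustration

# example dataframe
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]})

# pickle the dataframe into an in-memory buffer instead of a file on disk
stream = io.BytesIO()
df.to_pickle(stream, compression=None)

# upload the raw pickle bytes as the request body (hypothetical endpoint)
payload = stream.getvalue()  # raises ValueError on 0.25.x because the buffer is already closed
requests.post("https://example.com/upload/my-dataframe.pkl", data=payload)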

Expected Output

A binary string as produced by pandas 0.24.1; see the code example above.
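As a quick sanity check of the expected behaviour, the bytes returned by getvalue() should deserialize back into the original dataframe. A minimal sketch using the standard-library pickle module (works on 0.24.1, fails on 0.25.x at the getvalue() call):

# import libraries
import io
import pickle
import pandas as pd

# example dataframe
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]})

# pickle into an in-memory buffer
stream = io.BytesIO()
df.to_pickle(stream, compression=None)

# round-trip the raw pickle bytes back into a dataframe
restored = pickle.loads(stream.getvalue())
assert restored.equals(df)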

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 0.25.2
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 41.6.0.post20191030
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
None

@alimcmaster1 alimcmaster1 added the IO Pickle read_pickle, to_pickle label Nov 13, 2019
@yuanyen

yuanyen commented Jan 22, 2020

I got the same problem.

@mroeschke mroeschke added the Bug label Apr 3, 2020
@SebastianB12

I got the same problem in version 1.0.3.

@Zirochkaa

@SebastianB12 I also have the same problem in version 1.0.3 :(

@TomAugspurger
Contributor

Anyone interested in working on this?

@akhmerov

akhmerov commented Jun 2, 2020

By the way, to those interested in a workaround in the meantime: you can use this

from io import BytesIO

class ResilientBytesIO(BytesIO):
    def close(self):
        pass  # refuse to close, to work around the pandas bug

    def really_close(self):
        super().close()  # close for real once the buffer is no longer needed
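For completeness, a minimal usage sketch of this workaround (not part of the original comment), assuming the ResilientBytesIO class defined above:

import pandas as pd

# example dataframe
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]})

# pickle into the non-closing buffer
stream = ResilientBytesIO()
df.to_pickle(stream, compression=None)

# getvalue() still works because close() was a no-op inside to_pickle
payload = stream.getvalue()

# release the buffer once it is no longer needed
stream.really_close()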

@TomAugspurger TomAugspurger added this to the Contributions Welcome milestone Jun 2, 2020
@jreback jreback modified the milestones: Contributions Welcome, 1.2 Sep 5, 2020