How to resume a download? #68

Hello everyone!

I don't have the most stable network connection, and I was wondering if it is possible to resume a data download for a large dataset if the connection is interrupted.

If not, do you have any tips to minimise losses (like downloading in stages or creating some sort of checkpoint system)?

Thanks!

Comments
Hi @Thomas-Frew,

The Toolkit class gives you the ability to provide a "custom" dataset for both historical as well as fundamental data. So, as an example, you can do the following:

```python
from financetoolkit import Toolkit
import pandas as pd

ticker_list = ['AMZN', 'AAPL', 'TSLA', 'MU', 'GOOG', 'TSM']

# Only picking the first two as an example
companies = Toolkit(tickers=ticker_list[:2], api_key="API_KEY")

# Load historical data and save to a variable
historical_data = companies.get_historical_data()

# Write to a pickle file
historical_data.to_pickle("saved_stocks.pickle")

# Load the pickle (in case your current instance is erased)
loaded_data = pd.read_pickle("saved_stocks.pickle")

# Reinitialise the Toolkit with the previously downloaded historical data
companies_2 = Toolkit(tickers=ticker_list[:2], api_key="API_KEY", historical=loaded_data)

# Show Bollinger Bands as an illustration of the capability
companies_2.technicals.get_bollinger_bands()
```

This gets you the Bollinger Bands output without the requirement to download any of the data again. Therefore, if you create a loop over your tickers, write each batch to a pickle and, once you have everything, combine the pickles, you get what you are looking for; see the sketch below.

Working with custom datasets is a little bit experimental, however. I believe it works in pretty much every area, but feel free to let me know if you run into issues.

Note: I disabled the benchmark just to prevent it from showing up every single time. It should be enough to get the benchmark data once to make functions like Jensen's Alpha work.
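A minimal sketch of such a loop, assuming per-ticker pickle files as the checkpoints and a `pd.concat` combination step (neither detail comes from the thread itself, so adjust as needed):

```python
import os

import pandas as pd
from financetoolkit import Toolkit

ticker_list = ['AMZN', 'AAPL', 'TSLA', 'MU', 'GOOG', 'TSM']
API_KEY = "API_KEY"

frames = []

for ticker in ticker_list:
    checkpoint = f"{ticker}.pickle"

    # The checkpoint system: tickers already downloaded in a previous run
    # are read from disk instead of being downloaded again.
    if os.path.exists(checkpoint):
        frames.append(pd.read_pickle(checkpoint))
        continue

    # One ticker per request, so a dropped connection only loses the ticker
    # currently in flight. The benchmark is disabled as in the note above
    # (assuming the benchmark_ticker parameter accepts None).
    company = Toolkit(tickers=[ticker], api_key=API_KEY, benchmark_ticker=None)
    historical = company.get_historical_data()
    historical.to_pickle(checkpoint)
    frames.append(historical)

# Combine the per-ticker pickles into a single dataset. The exact concat
# arguments depend on the shape get_historical_data returns, so adjust the
# axis/keys to match your data.
combined = pd.concat(frames, axis=1, keys=ticker_list)

# Reinitialise the Toolkit with the combined historical data, exactly as in
# the pickle example above.
companies = Toolkit(tickers=ticker_list, api_key=API_KEY, historical=combined)
```

If the connection drops partway through, rerunning the script picks up where it left off, because every completed ticker already has a pickle on disk.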
Hi @Thomas-Frew, did this fix your problem?
Thanks for your help! This was very useful.