Overview
To predict is to supply a vector of 225 scenarios intended to represent possible future values taken by a source of live data (to crawl is to do this frequently for a number of different streams).
0. Install microprediction
pip install microprediction
1. Get a key
- Click here for an instant key.
- Or see the muid instructions for ways to mine a rarer key.
Rarer keys prolong your algorithm's life; see the table of bankruptcy levels in the README.
2. Choose a live data stream
If you are new, see the gallery for some ideas. To programmatically select a suitable stream, you might find the following types of JSON page useful:
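As a sketch of programmatic stream selection, the snippet below filters and ranks a budgets-style mapping of stream names to budgets, skipping the derived z-streams (names beginning with `z1~`, `z2~`, `z3~`). The example dictionary stands in for what a JSON page such as the budgets listing might return; treat its shape as an assumption and inspect the live responses yourself.

```python
# Sketch: choose a "primary" stream, skipping derived z-streams.
# The budgets dict below is a stand-in for an assumed JSON response shape.
def pick_primary_streams(budgets):
    """Return stream names sorted by budget (highest first), excluding z-streams."""
    primary = {name: b for name, b in budgets.items() if not name.startswith('z')}
    return sorted(primary, key=primary.get, reverse=True)

example = {'die.json': 1.0, 'z1~die~70.json': 0.1, 'emojitracker-twitter-fire.json': 0.5}
print(pick_primary_streams(example))  # highest-budget primary streams first
```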
3. Choose a horizon
You can supply a delay parameter chosen from the vector of delays in config.json.
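The delays in config.json are quoted in seconds, and picking one fixes your prediction horizon. The values below are the ones commonly published (an assumption here; read config.json yourself for the authoritative list):

```python
# Assumed delay vector, in seconds; verify against config.json.
DELAYS = [70, 310, 910, 3555]  # roughly 1 min, 5 min, 15 min, 1 hr

def horizon_minutes(delay_seconds):
    """Express a delay parameter as an approximate horizon in minutes."""
    return round(delay_seconds / 60, 1)

print([horizon_minutes(d) for d in DELAYS])
```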
4. Use the MicroWriter to submit scenarios
You can directly POST scenarios using the API, but Python folks will probably prefer:
from microprediction import MicroWriter
mw = MicroWriter(write_key="744fd0bec96112c9c5ce92ee904e1a4e")   # Substitute in your own key
scenarios = [ i*0.001 for i in range(mw.num_predictions) ]       # mw.num_predictions is 225
mw.submit(name='die.json', values=scenarios, delay=70)           # Pick a real stream name and delay
5. Examine performance
Use your write_key to log into the dashboard. To manage things programmatically, the following types of calls may help.
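As a sketch of slicing a performance report programmatically, the snippet below picks out the worst-performing entries. The dictionary shape (delay::stream keys mapping to cumulative rewards) is an assumption for illustration; inspect the actual responses returned by your own calls.

```python
# Sketch only: the report shape below is an assumed illustration.
def worst_streams(performance, n=2):
    """Return the n entries with the most negative cumulative reward."""
    return sorted(performance.items(), key=lambda kv: kv[1])[:n]

report = {'70::die.json': -1.5, '310::die.json': 0.8, '910::faang.json': -0.2}
print(worst_streams(report))
```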
6, 7, 8... Improve it
This we leave in your capable hands. Here is a start:
You are advised to read the MicroWriter code and the package readme. Also likely to be of interest is the MicroCrawler code. There is a series of articles on LinkedIn covering topics such as crawler navigation, predicting bivariate streams, and an overview of the mechanics of prediction and reward.
You can also make a PUT request to submit/STREAM_NAME, with values supplied in the payload as a comma-separated list. See the API documentation.
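The raw PUT submission above can be sketched with the standard library as follows. The base URL, payload field names, and delay handling are assumptions for illustration; consult the API documentation for the authoritative details.

```python
import urllib.parse
import urllib.request

API_BASE = 'https://api.microprediction.org'  # assumed base URL

def build_payload(write_key, values):
    """Join the scenario values into the comma-separated list the API expects."""
    return {'write_key': write_key, 'values': ','.join(str(v) for v in values)}

def submit_via_put(stream_name, write_key, values, delay=70):
    # Field names here are assumptions; check the API docs before relying on them.
    data = urllib.parse.urlencode({**build_payload(write_key, values), 'delay': delay}).encode()
    req = urllib.request.Request(f'{API_BASE}/submit/{stream_name}', data=data, method='PUT')
    return urllib.request.urlopen(req)  # network call; not executed in this sketch

print(build_payload('demo_key', [0.1, 0.2])['values'])  # 0.1,0.2
```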