Python stock market simulator



Ok, moving on to the next contestant: pyalgotrade. Looks good, so I went on with the installation, which also went smoothly. The tutorial seems to be a little bit out of date, as the first command involving the yahoofinance tool fails. No worries, the documentation is up to date, and I find that the missing function has simply been renamed within yahoofinance.

The problem is that this function only downloads data for one year instead of everything from that year to the current date.

After I downloaded the data myself and saved it to csv, I needed to adjust the column names, because apparently pyalgotrade expects Date,Adj Close,Close,High,Low,Open,Volume to be in the header. That is all minor trouble. Following through, I ran the performance test on the SMA strategy that is provided in the tutorial, using my SPY dataset. But then I tried searching the documentation for the functionality needed to backtest spreads and multiple-asset portfolios and just could not find any.
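For reference, a minimal sketch of that column-renaming step with pandas. The raw column names and file names below are made up for illustration; only the target header comes from the text above.

```python
import pandas as pd

# hypothetical raw file downloaded by hand; the source column names are assumptions
df = pd.read_csv("spy_raw.csv")
df = df.rename(columns={
    "date": "Date",
    "adj_close": "Adj Close",
    "close": "Close",
    "high": "High",
    "low": "Low",
    "open": "Open",
    "volume": "Volume",
})

# write the file with exactly the header pyalgotrade expects
cols = ["Date", "Adj Close", "Close", "High", "Low", "Open", "Volume"]
df[cols].to_csv("SPY.csv", index=False)
```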

Then I tried to find a way to feed a pandas DataFrame as input to a strategy, and that turns out not to be possible, which is again a big disappointment. I did not state it as a requirement in the previous post, but now I've come to the realisation that pandas support is a must for any framework that works with time series data.

Pandas was a reason for me to switch from Matlab to Python, and I never want to go back. Conclusion: pyalgotrade does not meet my requirement for flexibility.

It looks like it was designed with classic TA and single-instrument trading in mind.

Saturday, May 20: Yahoo is dead, long live Yahoo!

On 18 May the ichart data API of Yahoo Finance went down, without any notice.

And it does not seem like it is coming back. This has left many, including me, with broken code and without a decent free end-of-day data source. Something needs to be done. I'm now working on updating the tradingWithPython library.

The notebook can be found on GitHub, enjoy!


Posted by sjev.

Saturday, February 20: A simple statistical edge in SPY.

I've recently read a great post on the Turing Finance blog on how to be a quant. In short, it describes a scientific approach to developing trading strategies. For me personally, observing data, thinking with models and forming hypotheses is second nature, as it should be for any good engineer.

In this post I'm going to illustrate this approach by explicitly going through a number of the steps (just a couple, not all of them) involved in the development of a trading strategy. I'll start with observations.

Observations: It occurred to me that most of the time when there is much talk in the media about the market crashing after big losses over a several-day timespan, quite a significant rebound sometimes follows. In the past I've made a couple of mistakes by closing my positions to cut losses short, only to miss out on a recovery in the following days.

General theory: After a period of consecutive losses, many traders will liquidate their positions out of fear of an even larger loss. Much of this behavior is governed by fear rather than by calculated risk. Smarter traders then come in for the bargains.

Hypothesis: Next-day returns of SPY will show an upward bias after a number of consecutive losses.

To test the hypothesis, I've calculated the number of consecutive 'down' days. The return series is near-random, so, as one would expect, the chance of 5 or more consecutive down days is low, resulting in a very limited number of occurrences. A low number of occurrences leads to unreliable statistical estimates, so I'll stop at 5.
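A minimal sketch of how such a streak counter could look in pandas. The `spy` DataFrame and its 'Close' column are assumptions about the data layout, not the original code.

```python
import pandas as pd

def consecutive_down_days(close: pd.Series) -> pd.Series:
    """Length of the losing streak ending at each date."""
    down = close.diff() < 0            # True on a down day
    count = pd.Series(0, index=close.index)
    run = 0
    for i, d in enumerate(down):
        run = run + 1 if d else 0      # extend or reset the streak
        count.iloc[i] = run
    return count

# usage sketch:
# streak = consecutive_down_days(spy["Close"])
# next_day_ret = spy["Close"].pct_change().shift(-1)
# print(next_day_ret.groupby(streak.clip(upper=5)).mean())
```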

Below is a visualisation of next-day returns as a function of the number of down days. But even a tiny edge can be exploited: find a statistical advantage and repeat it as often as possible. The next step is to investigate whether this edge can be turned into a trading strategy.

Given the data above, a trading strategy can be formulated: after 3 or more consecutive losses, go long. Exit on the next close. Below is the result of this strategy compared to pure buy-and-hold. This does not look bad at all! Looking at the Sharpe ratios, the strategy scores a decent 2.
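A hedged sketch of how such a rule could be backtested with pandas. The commission-free close-to-close fills and the variable names are simplifying assumptions, not the original code.

```python
import pandas as pd

def backtest_streak_strategy(close: pd.Series, min_down_days: int = 3) -> pd.DataFrame:
    """Go long at the close after `min_down_days` consecutive losses, exit at the next close."""
    ret = close.pct_change().fillna(0)
    down = ret < 0
    # length of the current losing streak: reset the counter on every up day
    streak = down.astype(int).groupby((~down).cumsum()).cumsum()
    signal = (streak >= min_down_days).astype(int)
    # the position is taken at the close, so it earns the *next* day's return
    strat_ret = signal.shift(1).fillna(0) * ret
    return pd.DataFrame({
        "buy_and_hold": (1 + ret).cumprod(),
        "strategy": (1 + strat_ret).cumprod(),
    })
```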

This is actually pretty good! While the strategy above is not something that I would like to trade, simply because of the long time span, the theory itself provokes further thoughts that could produce something useful.

If the same principle applies to intraday data, a form of scalping strategy could be built. Also, the position exit is just a basic 'next-day-close'. There is much to be improved, but the essence in my opinion is this:

Posted by sjev.

Monday, November 17: Trading VXX with nearest neighbors prediction.

An experienced trader knows what behavior to expect from the market based on a set of indicators and their interpretation. The latter is often done based on memory or some kind of model.

Finding a good set of indicators and processing their information poses a big challenge. Data that does not have any predictive quality only introduces noise and complexity, decreasing strategy performance.

Finding good indicators is a science of its own, often requiring a deep understanding of market dynamics. This part of strategy design cannot be easily automated. Luckily, once a good set of indicators has been found, the trader's memory and 'intuition' can easily be replaced with a statistical model, which will likely perform much better, as computers have flawless memory and can make perfect statistical estimations.

Regarding volatility trading, it took me quite some time to understand what influences its movements. I will not go into a full-length explanation here, but just present a conclusion: two things matter, the volatility premium and the slope of the futures term structure, and together they cause a tailwind in VXX from both the premium and the daily roll along the term structure. But this is just a rough estimation. I've been struggling for a very long time to come up with a good way to combine the noisy data from both indicators.


I've tried most of the 'standard' approaches, like linear regression and writing a bunch of 'if-thens', but all of them gave only very minor improvements compared to using a single indicator. That does not look bad, but what can be done with multiple indicators?
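The post's original code is not reproduced here, but the idea can be sketched with scikit-learn's nearest neighbors regressor. The feature layout, split and parameters below are illustrative assumptions.

```python
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

def knn_prediction(X: pd.DataFrame, y: pd.Series, n_neighbors: int = 10) -> pd.Series:
    """Predict next-day VXX returns from indicator values using nearest neighbors.

    X: one row of indicator values per day (e.g. premium and term-structure slope),
    y: next-day VXX return. Both are assumptions about the data layout.
    """
    split = int(len(X) * 0.7)                   # simple in-sample / out-of-sample split
    model = KNeighborsRegressor(n_neighbors=n_neighbors)
    model.fit(X.iloc[:split], y.iloc[:split])   # the model's 'memory' of past indicator states
    pred = model.predict(X)                     # predicted next-day return for every day
    return pd.Series(pred, index=X.index)
```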

I'll start with some out-of-sample VXX data that I got from MarketSci. Note that this is simulated data, from before VXX was created. First, with 10 points, the strategy is excellent in-sample, but it is flat out-of-sample (the red line in the figure below marks the last in-sample point).

Posted by sjev.

Wednesday, July 16: Simple backtesting module.

My search for an ideal backtesting tool (my definition of 'ideal' is described in the earlier 'Backtesting dilemmas' posts) did not result in something that I could use right away.

However, reviewing the available options helped me understand better what I really want. From there, it was only a small step to writing my own backtester, which is now available in the TradingWithPython library. I have chosen an approach where the backtester contains the functionality that all trading strategies share and that often gets copy-pasted: things like calculating positions and pnl, performance metrics, and making plots.

Strategy-specific functionality, like determining entry and exit points, should be done outside of the backtester.
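As a rough sketch of that separation (the function and column names below are hypothetical, not the actual TradingWithPython API): the strategy code produces a position series, and the shared backtest code turns it into pnl and statistics.

```python
import numpy as np
import pandas as pd

def backtest(price: pd.Series, position: pd.Series) -> pd.DataFrame:
    """Shared plumbing: turn a position series into pnl and an equity curve."""
    pnl = position.shift(1).fillna(0) * price.diff().fillna(0)
    return pd.DataFrame({"price": price, "position": position,
                         "pnl": pnl, "equity": pnl.cumsum()})

def sharpe(pnl: pd.Series) -> float:
    """Annualised Sharpe ratio of daily pnl."""
    return np.sqrt(252) * pnl.mean() / pnl.std()

# the strategy-specific part stays outside the backtester, e.g. a simple entry rule:
# position = (price < price.rolling(200).mean()).astype(int)
# result = backtest(price, position)
```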

A typical workflow would be just that: generate entry and exit signals in your own strategy code, then hand the price and position data to the backtester for pnl, metrics and plots.

Posted by sjev.

Saturday, June 7: Boosting performance with Cython.

Tuesday, May 27: Backtesting dilemmas.

Monday, May 26: Backtesting dilemmas.

A quantitative trader faces quite some challenges on the way to a successful trading strategy. A good trading simulation must:

1. Be a good approximation of the real world. This one is of course the most important requirement.
2. Be flexible: everything that can be quantified should be usable.

3. Be quick to work with: it is all about productivity and being able to test many ideas to find the one that works.
4. Allow for parameter scans, walk-forward testing and optimisations. This is needed for investigating strategy performance and stability as a function of the strategy parameters.

The problem with satisfying all of the requirements above is that 2 and 3 are conflicting. Typically, a third-party point-and-click tool will severely limit the freedom to test custom signals and odd portfolios, while at the other end of the spectrum a custom-coded DIY solution will require tens of hours or more to implement, with a high chance of ending up with cluttered and unreadable code.

The candidates:
- Zipline is widely known and is the engine behind Quantopian.
- PyAlgotrade seems to be actively developed and well-documented.
- pybacktest is a lightweight, vector-based framework that might be interesting because of its simplicity and performance.

First one up for evaluation is Zipline.

My first impression of Zipline and Quantopian is a positive one. Zipline is backed by a team of developers and is tested in production, so code quality should be good. There is good documentation on the site and an example notebook on GitHub. To get the hang of it, I downloaded the example notebook and started playing with it. To my disappointment, I quickly ran into trouble at the first example, Simplest Zipline Algorithm: the dataset is small, but running this example just took forever.

Here is what I measured: This kind of performance would be prohibitive for any kind of scan or optimization. Another problem would arise when working with larger datasets like intraday data or multiple securities, which can easily contain hundreds of thousands of samples.

Unfortunately, I will have to drop Zipline from the list of usable backtesters, as it misses my requirement 4 by a fat margin. In the following post I will be looking at PyAlgotrade. My current system is a couple of years old, an AMD Athlon II X2 with 3 GB of RAM. A basic walk-forward test with 10 steps and a parameter scan over a 20x20 grid would result in a whopping 66 hours with Zipline.

Wednesday, January 15: Starting the IPython notebook from the Windows file explorer.

I organize my IPython notebooks by saving them in different directories. I'm sure the IPython team will solve this in the long run, but in the meantime there is a pretty decent way to quickly access the notebooks from the file explorer. All you need to do is add a context menu entry that starts an IPython notebook server in the desired directory.

Monday, January 13: Leveraged ETFs, where is your decay now?

Many people think that leveraged ETFs underperform their benchmarks in the long term.

This is true for choppy markets, but not in the case of trending conditions, either up or down. For more background please read this post. Let's see what would happen if we had shorted some of the leveraged ETFs exactly a year ago and hedged them with their benchmark.
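A minimal sketch of such a hedged position with pandas. It is a simplification (a fixed hedge, no rebalancing or borrow costs), and the function names are my own, not from the post.

```python
import pandas as pd

def hedged_short(etf: pd.Series, benchmark: pd.Series, leverage: float) -> pd.Series:
    """Equity curve of shorting a leveraged ETF, hedged with its 1x benchmark.

    `leverage` is the ETF's stated daily leverage, e.g. 2 for a 2x fund or -1 for an
    inverse fund; a negative leverage implies a short (negative) hedge in the 1x ETF.
    """
    etf_ret = etf.pct_change().fillna(0)
    bench_ret = benchmark.pct_change().fillna(0)
    # short one unit of the leveraged ETF, hold `leverage` units of the benchmark against it
    daily_pnl = -etf_ret + leverage * bench_ret
    return (1 + daily_pnl).cumprod()
```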

I will be considering these pairs: notice that to hedge an inverse ETF, a negative position is held in the 1x ETF. Here is one example:

Thursday, January 2: Putting a price tag on TWTR.

Here is my shot at a Twitter valuation. I'd like to start with a disclaimer: the reason I've done my own analysis is that my bet did not work out well, and Twitter made a parabolic move in December. So the question that I'm trying to answer here is "should I take my loss or hold on to my shorts?"

The table below summarises user numbers per company and a value per user derived from the market cap. TWTR is currently more valuable per user than FB or LNKD. This is not logical, as both competitors have more valuable personal user data at their disposal. GOOG has been excelling at extracting ad revenue from its users. TWTR has limited room to grow its user base, as it does not offer products comparable to the FB or GOOG offerings.

TWTR has been around for seven years now, and most people wanting an account have had their chance. The rest just do not care. The TWTR user base is volatile and likely to move on to the next hot thing when it becomes available. I think the best reference here would be LNKD, which has a stable niche in the professional market.

By this metric TWTR would be overvalued. There is enough data available on future earnings estimates. One of the most useful sources I've found is here. Using those numbers, while subtracting company spending (which I assume to remain constant), produces these numbers:

With the assumption that a healthy company will have a final P/E ratio of around 30, we can calculate share prices, as sketched below. There are no clear reasons it should be trading higher, and there are many operational risks that could make it trade lower. My guess is that during the IPO enough professionals reviewed the price, setting it at a fair level.
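To make that back-of-the-envelope calculation concrete, here is a tiny sketch of the mechanics. All numbers in the comment are placeholders, not the estimates used in the post.

```python
def fair_price(net_income: float, shares_outstanding: float, pe_ratio: float = 30.0) -> float:
    """Share price implied by earnings and an assumed P/E ratio."""
    eps = net_income / shares_outstanding   # earnings per share
    return eps * pe_ratio

# example with made-up numbers, purely to show the arithmetic:
# fair_price(net_income=500e6, shares_outstanding=600e6)  # -> about 25 per share
```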

What happened next was an irrational market move not justified by new information. Pure emotion, which never works out well. The only thing that remains for me now is to put my money where my mouth is and stick to my shorts.

Thursday, September 19: Trading With Python course available!

The Trading With Python course is now available for subscription! I have received very positive feedback from the pilot I held this spring, and this time it is going to be even better. The course is now hosted on a new TradingWithPython website, and the material has been updated and restructured. I even decided to include new material, adding more trading strategies and ideas.

Sunday, August 18: Short VXX strategy.

Shorting the short-term volatility ETN VXX may seem like a great idea when you look at the chart from quite a distance.

Due to the contango in the volatility futures, the ETN experiences quite some headwind most of the time and loses a little bit of its value every day. This happens due to daily rebalancing; for more information please look into the prospectus. Just look back at the summer when the VIX spiked: I was unfortunate or foolish enough to hold a short VXX position just before the VIX went up. I almost blew up my account back then: a margin call would have meant cashing in the loss.

This is not a situation I'd ever like to be in again. I knew it would not be easy to keep a cool head at all times, but experiencing the stress and pressure of the situation was something else entirely.

Luckily I knew how VXX tends to behave, so I did not panic, but switched sides to XIV to avoid a margin call. To start with, a word of warning: shorting VXX outright carries serious risk. Having said that, let's take a look at a strategy that minimizes some of the risks by shorting VXX only when it is appropriate. VXX experiences most drag when the futures curve is in a steep contango.

The futures curve is approximated by the VIX-VXV relationship. We will short VXX when VXV has an unusually high premium over VIX. First, let's take a look at the VIX-VXV relationship. This strategy is compared with one that trades short every day but does not take the delta into account. The green line represents our VXX short strategy, the blue line is the dumb one. But even more important is that the gut-wrenching drawdowns are largely avoided by paying attention to the forward futures curve.
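A sketch of how such a filter could be expressed in pandas. The threshold value and the use of the VXV/VIX ratio as the 'delta' are assumptions based on the description above, not the course code.

```python
import pandas as pd

def short_vxx_signal(vix: pd.Series, vxv: pd.Series, threshold: float = 1.05) -> pd.Series:
    """Return -1 (short VXX) when VXV trades at an unusually high premium to VIX, else 0."""
    delta = vxv / vix                   # rough measure of how steep the contango is
    return (delta > threshold) * -1

def strategy_returns(vxx: pd.Series, signal: pd.Series) -> pd.Series:
    """Daily strategy returns: yesterday's signal applied to today's VXX return."""
    return signal.shift(1).fillna(0) * vxx.pct_change().fillna(0)
```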

Building this strategy step by step will be discussed during the coming Trading With Python course.

Posted by sjev.

Getting short volume from BATS.

In my last post I went through the steps needed to get the short volume data from the BATS exchange. The code provided was however of the quick-and-dirty variety. I have now packaged everything into a bats module.

Thursday, August 15: Building an indicator from short volume data.

The price of an asset or an ETF is of course the best indicator there is, but unfortunately there is only so much information contained in it. Some people seem to think that the more indicators (RSI, MACD, moving average crossovers, etc.), the better, but if all of them are based on the same underlying price series, they will all contain a subset of the same limited information contained in the price. Producing this kind of analysis requires a great amount of work, for which I simply don't have the time, as I only trade part-time.

So I built my own 'market dashboard' that automatically collects information for me and presents it in an easily digestible form. In this post I'm going to show how to build an indicator based on short volume data.

This post will illustrate the process of data gathering and processing. The BATS exchange provides daily volume data for free on their site. Each day has its own zip file. After downloading and unzipping the txt file, the first several lines show the columns Date, Symbol, Short Volume, Total Volume and Market Center, with one row per symbol (A, AA, AACC, AAN, AAON, ...). In total, a file contains data for several thousand symbols.


This data needs quite some work before it can be presented in a meaningful manner. Luckily, full automation is only a couple of code lines away. First we need to dynamically create the URL from which a file will be downloaded. Then we can use the zipfile and pandas libraries to parse a single file. The only thing left after that is to parse all downloaded files, combine them into a single table and plot the result.
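A sketch of those steps, under stated assumptions: the URL pattern and the file delimiter below are placeholders for illustration (check the BATS/Cboe site for the actual location and format); only the column names come from the sample shown earlier.

```python
import datetime
import io
import urllib.request
import zipfile

import pandas as pd

# the URL pattern is an assumption, not the real endpoint
URL_TEMPLATE = "https://example.com/shortvolume/BATSshvol{date}.txt.zip"

def download_short_volume(day: datetime.date) -> pd.DataFrame:
    """Download and parse one day of BATS short volume data."""
    url = URL_TEMPLATE.format(date=day.strftime("%Y%m%d"))
    raw = urllib.request.urlopen(url).read()
    with zipfile.ZipFile(io.BytesIO(raw)) as zf:
        txt = zf.read(zf.namelist()[0])
    df = pd.read_csv(io.BytesIO(txt), sep="|")          # delimiter is an assumption
    df["ratio"] = df["Short Volume"] / df["Total Volume"]
    return df

# combining several days into one table:
# frames = [download_short_volume(d) for d in business_days]   # business_days: your own list
# table = pd.concat(frames).pivot(index="Date", columns="Symbol", values="ratio")
```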

Sunday, March 17: Trading With Python course, a status update.

I am happy to announce that a sufficient number of people have shown interest in taking the course. Starting today I will be preparing a new website and material for the course, which will start in the second week of April.

Posted by sjev.

Thursday, January 12: Reconstructing VXX from CBOE futures data.

Many people in the blogosphere have reconstructed the VXX to see how it would have performed before its inception.


Doing this by hand, however, is very tedious work, requiring one to download data for each future separately, combine the series in a spreadsheet, and so on. The scripts below automate this process. The first one, downloadVixFutures, fetches the futures data from CBOE. To check the calculations, I've compared my simulated results with the SPVXSTR index data, and the two agree pretty well; see the charts below. For a fee, I can provide support to get the code running or create a stand-alone program; contact me if you are interested.
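The reconstruction itself boils down to rolling daily from the first-month future into the second. Below is a simplified sketch of that weighting; it ignores the treasury-bill component of SPVXSTR and the exact roll calendar, so it is only an approximation of the real methodology, not the script described above.

```python
import pandas as pd

def short_term_index_returns(f1: pd.Series, f2: pd.Series,
                             days_left: pd.Series, days_total: pd.Series) -> pd.Series:
    """Approximate daily returns of a constant one-month rolling VIX futures position.

    f1, f2: settlement prices of the first and second month futures;
    days_left / days_total: remaining and total days in the current roll period.
    """
    w1 = days_left / days_total                # weight shifts linearly from f1 to f2
    w2 = 1.0 - w1
    value = w1 * f1 + w2 * f2                  # value of the blended position
    pnl = w1.shift(1) * f1.diff() + w2.shift(1) * f2.diff()
    return (pnl / value.shift(1)).fillna(0)
```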

Monday, December 26: howto.

The sender/listener pattern allows for class-to-class communication with a very simple structure. Even more important is the ability to separate functionality into different modules, for example running a single 'broker' as a wrapper around some API and letting multiple strategies subscribe to the relevant broker events.

There are some ready-made modules available, but the best way to understand how this process works is to write the whole system from scratch. In many languages this is a very tedious task, but thanks to the power of Python it only takes a couple of lines. The Sender keeps track of interested listeners and notifies them accordingly. In more detail, this is achieved with a dictionary inside the Sender that maps listener functions to events. All listeners have a method that is subscribed to the Sender's events.

The only special thing about the subscribed method is that it should take three parameters: the sender, the event and, optionally, a message, which is the data passed to the function. A nice detail is that if a listener method throws an exception, it is automatically unsubscribed from further events.
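A minimal sketch of such a sender/listener pair; the names and details below are illustrative, the original module may differ.

```python
class Sender:
    """Keeps track of interested listeners and notifies them of events."""

    def __init__(self):
        self.listeners = {}                       # listener function -> event name

    def register(self, listener, event=None):
        """Subscribe a function; event=None means 'all events'."""
        self.listeners[listener] = event

    def dispatch(self, event=None, msg=None):
        """Notify all listeners registered for this event."""
        for listener, registered_event in list(self.listeners.items()):
            if registered_event in (None, event):
                try:
                    listener(self, event, msg)     # the three parameters: sender, event, message
                except Exception:
                    del self.listeners[listener]   # misbehaving listeners are unsubscribed


class Listener:
    def __init__(self, name, sender):
        self.name = name
        sender.register(self.on_event)

    def on_event(self, sender, event, msg=None):
        print(f"{self.name} received event {event!r} with message {msg!r}")


# usage sketch:
# broker = Sender()
# Listener("strategy-1", broker)
# broker.dispatch("orderStatus", msg={"id": 1, "status": "filled"})
```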

Posted by sjev.

Wednesday, December 14: Plotting with guiqwt.

While it's been quiet on this blog, behind the scenes I have been very busy trying to build an interactive spread scanner. To make one, a list of ingredients is needed. After tinkering with matplotlib in PyQt for several days, I must admit its use in interactive applications is far from optimal.

Slow, difficult to integrate, and with little interactivity. PyQwt proved to work a little better, but it had its own quirks and was a bit too low-level for me. Interactive charts are just a couple of code lines away now; take a look at an example here. For this I used guiqwt example code with some minor tweaks. And of course a pretty picture of the result:

Friday, November 4: How to set up a Python development environment.

If you would like to start playing with the code from this blog and write your own, you need to set up a development environment first. You will also need TortoiseSVN, a utility for pulling the source code from Google Code. To get the code, use the 'SVN Checkout' Windows Explorer context menu that becomes available after installing TortoiseSVN.


Checkout like this: change the checkout directory to the location you want, but it should end with tradingWithPython.

Friday, October 28: kung fu pandas will solve your data problems.

I love researching strategies, but sadly I spend too much time on low-level work like filtering and aligning datasets. There had to be a better way than hacking all the filtering code myself, and there is! Some time ago I came across pandas, a data analysis toolkit especially suited for working with financial data.

After just scratching the surface of its capabilities, I'm already blown away by what it delivers. Well, I think he is well on the way! Let's take a look at just how easy it is to align two datasets:
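A quick sketch of aligning two price series on their common dates with pandas. The file names, tickers and the 'Adj Close' column are placeholders, not the original data.

```python
import pandas as pd

# two series with different date indices, e.g. loaded from separate CSV files
spy = pd.read_csv("SPY.csv", index_col="Date", parse_dates=True)["Adj Close"]
iwm = pd.read_csv("IWM.csv", index_col="Date", parse_dates=True)["Adj Close"]

# putting them in one DataFrame aligns them on the index automatically;
# rows where either series is missing can simply be dropped
prices = pd.DataFrame({"SPY": spy, "IWM": iwm}).dropna()
print(prices.head())
```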

Trading With Python course

If you are a trader or an investor and would like to acquire a set of quantitative trading skills, you may consider taking the Trading With Python course. The online course will provide you with the best tools and practices for quantitative trading research, including functions and scripts written by expert quantitative traders. You will learn how to get and process incredible amounts of data, design and backtest strategies, and analyze trading performance.

This will help you make informed decisions that are crucial for a trader's success. Click here to continue to the Trading With Python course website.

By night I am a trader.


I studied applied physics with a specialization in pattern recognition and artificial intelligence. Since then I have been using my technical skills in the financial markets. Before coming to the conclusion that Python is the best tool available, I worked extensively in Matlab, which is covered on my other blog.

You can reach me at.
