Recurrent Neural Networks to Predict Pricing Trends in UK Housing Market

Recurrent Neural Networks (RNN):

RNNs are used when temporal relationships have to be learnt. Common examples include time series data (e.g. stock prices) and sequences of words (e.g. predictive text).

The basic concept of RNNs is that we train an additional set of weights (alongside the standard input-to-output weights) that associates the past state (time: t-1) with the current state (time: t). This can then be used to predict the future state (time: t+1) given the current state (time: t). In other words, RNNs are NNs with state!
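To make the idea of “state” concrete, here is a minimal numpy sketch of the recurrence inside a simple RNN cell. The weight names, sizes and the tanh activation are purely illustrative and are not taken from the code attached to this post:

import numpy as np

n_inputs, n_neurons = 1, 4                      # illustrative sizes only
rng = np.random.default_rng(0)
W_x = rng.normal(size=(n_inputs, n_neurons))    # input -> hidden weights
W_h = rng.normal(size=(n_neurons, n_neurons))   # hidden -> hidden weights (the "extra" set)
b = np.zeros(n_neurons)

h = np.zeros(n_neurons)                         # the state, carried from t-1 to t
for x_t in [0.1, 0.2, 0.15]:                    # a toy scalar time series
    h = np.tanh(np.array([x_t]) @ W_x + h @ W_h + b)   # h(t) depends on x(t) and h(t-1)

At each step the new hidden state mixes the current input with the previous state, which is exactly the extra temporal relationship being learnt.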

When used for standard time series prediction, the input and output values are taken from the same time series (usually as scalar values). This is the degenerate case of single-valued inputs and outputs: we learn the relationship between x(t-1) and x(t) so that we can predict the value of x(t+1) given x(t). This is what I did for this post.

The time series problem can be made richer by making the input a vector of different parameters; the output may still remain a scalar value (a single component of x) or itself be a vector. One reason for doing this is to include all the factors that may impact the value to be predicted (e.g. x(t+1)). In our example of average house prices, we may want to add factors such as the time of year, interest rates, salary levels, inflation etc. to provide some more “independent” variables in the input.

Two final points:

  • Use-cases for RNNs: Speech to Text, Predictive Text, Music Tagging, Machine Translation
  • RNNs include the additional complexity of training in Time as well as Space, so our standard Back-Propagation becomes Back-Propagation Through Time

RNN Structure for Predicting House Prices:

RNN simple time series

The basic time series problem is that we have a sequence of numbers – the average price of houses for a given month and year (e.g. given: X(1), X(2), … X(t-1), X(t) ) with a regular step size and our task is to predict the next number in the sequence (i.e. predict: X(t+1)). In our problem the avg price is calculated for every month since January 1995 (thus step size is 1 month). As a first step we need to define a fixed sequence size that we are going to use for training the RNN. For the input data we will select a sub-sequence of a given length equal to the number of inputs (in the diagram above there are three inputs). For training output we will select a sub-sequence of the same length as the input but the values will be shifted one step in the future.

Thus if the input sub-sequence is X(3), X(4) and X(5) then the output sub-sequence must be X(4), X(5) and X(6). In general, if the input sub-sequence spans time steps a to b (where b > a and b - a + 1 is the sub-sequence length), then the output sub-sequence must span a+1 to b+1.
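In code, cutting the series into these shifted input/output pairs might look like the following numpy sketch (the function name and shapes are my own, not taken from the attached code):

import numpy as np

def make_subsequences(series, n_steps):
    # Slice a 1-D series into (input, target) pairs where the target is the
    # input shifted one step into the future.
    X, y = [], []
    for a in range(len(series) - n_steps):
        X.append(series[a : a + n_steps])             # X(a) ... X(a + n_steps - 1)
        y.append(series[a + 1 : a + n_steps + 1])     # shifted one step ahead
    # shape (n_samples, n_steps, 1) to match the RNN input described below
    return np.array(X)[..., np.newaxis], np.array(y)[..., np.newaxis]

prices = np.arange(10, dtype=np.float32)               # stand-in for the monthly averages
X, y = make_subsequences(prices, n_steps=3)
print(X[0].ravel(), "->", y[0].ravel())                # [0. 1. 2.] -> [1. 2. 3.]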

Once training has completed, if we provide the last sub-sequence as input we will get the next number in the series as output. We can see how well the RNN is able to replicate the signal by starting with a sub-sequence in the middle of the series, moving ahead one time step at a time, and plotting actual vs predicted values for the next number in the sequence.

Remember to NORMALISE the data!
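For example, a simple min–max scaling (one of several reasonable choices; the numbers here are made up) keeps the prices in a range the network can handle and can be inverted after prediction:

import numpy as np

prices = np.array([55000., 72000., 230000., 310000.])   # made-up average prices
p_min, p_max = prices.min(), prices.max()
scaled = (prices - p_min) / (p_max - p_min)             # values now in [0, 1]
restored = scaled * (p_max - p_min) + p_min             # invert after predicting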

The parameters are as below:

n_steps = 36 # Number of time steps (thus a = 0 and b = 35, total of 36 months)

n_inputs = 1 # Number of inputs per step (the avg. price for the current month)

n_neurons = 1000 # Number of neurons in the middle layer

n_outputs = 1 # Number of outputs per step (the avg. price for the next month)

learning_rate = 0.0001 # Learning Rate

n_iter = 2000 # Number of iterations

batch_size = 50 # Batch size

I am using TensorFlow’s BasicRNNCell (complete code at the end of the post) but the basic setup is:

import tensorflow as tf

# Placeholders for the input sub-sequences and the one-step-ahead targets
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

# Basic RNN cell wrapped in a projection layer so each time step emits a single value
cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
    output_size=n_outputs)

# Unroll the RNN over the n_steps time steps
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

# Mean Squared Error loss, minimised with the Adam optimiser
loss = tf.reduce_mean(tf.square(outputs - y))
opt = tf.train.AdamOptimizer(learning_rate=learning_rate)
training = opt.minimize(loss)

saver = tf.train.Saver()
init = tf.global_variables_initializer()
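The n_iter and batch_size parameters come into play in the training loop. The complete code is linked at the end of the post; as an outline, and assuming a hypothetical next_batch helper that returns batch_size random (input, target) sub-sequence pairs, it looks something like this:

with tf.Session() as sess:
    sess.run(init)
    for it in range(n_iter):
        X_batch, y_batch = next_batch(batch_size, n_steps)      # hypothetical helper
        sess.run(training, feed_dict={X: X_batch, y: y_batch})
        if it % 100 == 0:
            mse = sess.run(loss, feed_dict={X: X_batch, y: y_batch})
            print(it, "MSE:", mse)
            if mse < 1e-4:                                       # the error threshold used below
                break
    saver.save(sess, "./housing_rnn")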

Results:

For a sample of 3 runs, using a Mean Squared Error threshold of 1e-4, we get the following error values:

  1. 8.6831e-05
  2. 9.05436e-05
  3. 9.86998e-05

Run 3 fitting and predictions are shown below:

Orange dots represent the prediction by the RNN and Blue dots represent the actual data

 

Run 3 prediction against existing data 3 years before October 2017

Then we start from October 2017 (Month 24 in the figure below) and forecast ahead to October 2018. This predicts a rise in average prices which starts to plateau in the 3rd quarter of 2018. Given that average house prices across a country like the UK are determined by a large number of noisy factors, we should take this prediction with a pinch of salt.

Run 3 Forecasting from Month 24 (October 2017 for the year ahead till October 2018)
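In code, this kind of roll-forward forecast can be sketched by repeatedly feeding the model's own prediction back in as the latest input. This is an outline only: last_window (the final n_steps normalised monthly averages) and the checkpoint path are assumptions, not names from the attached code:

import numpy as np

with tf.Session() as sess:
    saver.restore(sess, "./housing_rnn")
    seq = list(last_window)                            # last n_steps observed values
    forecast = []
    for _ in range(12):                                # forecast 12 months ahead
        X_new = np.array(seq[-n_steps:]).reshape(1, n_steps, 1)
        y_pred = sess.run(outputs, feed_dict={X: X_new})
        next_val = y_pred[0, -1, 0]                    # last time step = next month
        forecast.append(next_val)
        seq.append(next_val)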

For a sample of 3 runs, using a Mean Squared Error threshold of 1e-3, we get the following error values:

  1. 3.4365e-04
  2. 4.1512e-04
  3. 2.1874e-04

With a higher error threshold we find, when comparing against actual data (Runs 2 and 3 below), that the predicted values overlap the actual values far less. This is expected, as we have traded accuracy for a reduction in training time.

predicted avg price vs actual avg price (Run 2)

predicted avg price vs actual avg price (Run 3)

The projections in this case are a lot different: we see a linearly decreasing average price in 2018.

predicted avg price vs actual avg price with forecast

Next Steps:

I would like to add more parameters to the input, but it is difficult to get corresponding historical data for factors such as interest rates, inflation etc.

I would also like to try other types of networks (e.g. LSTM) but I am not sure if that would be the equivalent of using a cannon to kill a mosquito.

Finally, if anyone has any ideas on this I would be happy to collaborate on it!

 

Source code can be found here: housing_tf

Contains HM Land Registry data © Crown copyright and database right 2017. This data is licensed under the Open Government Licence v3.0.

UK Housing Data Analysis: Additional Price Paid Entry

I have been exploring the HM Land Registry Price Paid Data and have discovered a few more things of interest.

The data contains a ‘Price Paid Data Category Type’ field (this is the second to last column at the time of writing this post). As per the description of the schema, this field can have one of two values:

A = Standard Price Paid entry, includes single residential property sold for full market value.
B = Additional Price Paid entry including transfers under a power of sale/repossessions, buy-to-lets (where they can be identified by a Mortgage) and transfers to non-private individuals.

Therefore it seems there is a way of looking at how properties sold for full market value differ from buy-to-lets, repossessions and power of sale transactions. Proper Category B tracking only starts from October 2013.

Before we do this, it is worthwhile using the ‘Property Type’ field to filter out properties of type ‘Other’, which contribute to the overall noise because they are usually high-value properties such as office buildings. The ‘Property Type’ field has the following values:

  • D = Detached
  • S = Semi-Detached
  • T = Terraced
  • F = Flats/Maisonettes
  • O = Other

Data Pipeline for all transactions:

Step 1: Filter out all transactions with Property Type of Other

Step 2: Group using Year and Month of Transaction

Step 3: Calculate Standard Deviations in Price, Average Price and Counts

 

Data Pipeline for Standard and Additional Price Paid Transactions (separate):

Step 1: Filter out all transactions with Property Type of Other

Step 2: Group using Price Paid Data Category Type, Year and Month of Transaction

Step 3: Calculate Standard Deviations in Price, Average Price and Counts
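The pipelines themselves run on MongoDB and Spark (see ‘Tech stuff’ below); purely as an illustration, the same grouping can be sketched in a few lines of pandas. The column indices follow the published Price Paid CSV layout (the raw file has no header row), and the file name and column labels are placeholders of my own:

import pandas as pd

# Column indices per the published schema: 1 = price, 2 = date of transfer,
# 4 = property type, 14 = PPD category type (these mappings are my assumption)
df = pd.read_csv("pp-complete.csv", header=None)
df = df.rename(columns={1: "price", 2: "date", 4: "property_type", 14: "category"})

df = df[df["property_type"] != "O"]                          # Step 1: drop 'Other'
df["month"] = pd.to_datetime(df["date"]).dt.to_period("M")   # Step 2: year-month key

# Step 3: average, standard deviation and count - overall and split by category
overall = df.groupby("month")["price"].agg(["mean", "std", "count"])
by_category = df.groupby(["category", "month"])["price"].agg(["mean", "std", "count"])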

Tech stuff:

I used a combination of MongoDB (aggregation pipelines for standard heavyweight aggregations, such as simple grouping), Apache Spark (Java based, for heavyweight custom aggregations) and Python (for creating graphs and summarising aggregated data).

Results:

In all graphs Orange points represent Category B related data, Blue represents Category A related data and Green represents a combination of both the Categories.

Transaction Counts

Price Paid Data Category A/B Transaction Count

Category B transactions form a small percentage of the overall transactions (approx. 5-8%).

Because Category B data only starts from October 2013, we see a rapid initial increase in Category B transactions, which then settles to a steady rate until 2017, when transactions start to fall as it becomes less lucrative to buy a second house to generate rental income. There is a massive variation in overall and Category A transactions, but here as well we see a downward trend in 2017.

We can also see the sharp fall in transactions due to the financial crisis around 2008.


Average Price

Price Paid Data Category A/B Average Price

Here we find an interesting result: Category B prices are consistently lower than pure Category A prices. But given the relatively small number of Category B transactions, the average price of combined transactions stays fairly close to the average price of Category A transactions. This also seems to point to the fact that in the case of buy-to-let, repossession and power of sale conditions the price paid is below the Category A average. Several reasons could exist for such a result:

  1. People buy cheaper properties as buy-to-let and use more expensive properties as their main residence.
  2. Under stressful conditions (e.g. forced sale or repossession) there is urgency to sell and therefore full market rate may not be obtainable.

Standard Deviation of Prices

Price Paid Data Category A/B Price Standard Dev.

The variation in price for Category B properties is quite high when compared with Category A (the standard price paid transaction). This can point to a few things about the Category B market:

  1. A lot more speculative activity is carried out here therefore the impact of ‘expectation’ on price paid is very high – particularly:
    1. ‘expected rental returns’: The tendency here will be to buy cheap (i.e. lowest possible mortgage) and profit from the difference between monthly rental and mortgage payments over a long period of time.
    2. ‘expected profit from a future sale’: The tendency here will be to keep a shorter horizon and buy cheap then renovate and sell at a higher price – either through direct value add or because of natural increase in demand.
  2. For a Standard transaction (Category A) the incentive to speculate may not be present as it is a basic necessity.

Contains HM Land Registry data © Crown copyright and database right 2017. This data is licensed under the Open Government Licence v3.0.

Tic Tac Toe: Statistical Analysis with arbitrary Grid Size

The game of tic tac toe (noughts and crosses) is a very simple yet interesting game.

If you don’t know anything about the game visit this link first: https://en.wikipedia.org/wiki/Tic-tac-toe

Some terms I will be using in this post:

  • Grid – the x by x square grid used to play the game, most common size is 3 by 3
  • Grid Reference – the reference to each entry (or slot) in the Grid; for an x by x grid there will be x-squared grid references, therefore for 3 by 3 there are 9 references and for 4 by 4 there are 16
  • Grid Value – each grid reference (or slot) can be empty, ‘x’ or ‘o’
  • Move Sequence – a complete set of moves that defines a game; for a Grid Reference length of 9 (3 by 3 grid) there are Factorial(9) possible Move Sequences. A valid example of a move sequence for a 3 by 3 grid:
    [1, 3, 4, 6, 5, 7, 9, 8, 2] : alternate moves by player 1 and 2

    Each player takes a symbol (‘x’ or ‘o’) and sticks with it. In my post Player 1 always uses ‘o’ and Player 2 ‘x’; this choice has no impact on the final result

  • Set of Move Sequences – a set of move sequences; this can be the full set (all possible move sequences) for grids up to 3 by 3 (2 by 2 is a trivial case, as we shall see). As we move to larger grids the total number of possible move sequences increases at a factorial rate, so we can only sample from the full set.
    Grid size, Number of Grid Slots, Total number of move sequences
    2x2, 4 slots, 24 move sequences
    3x3, 9 slots, 362,880 move sequences
    4x4, 16 slots, >20 trillion move sequences

The interesting thing about regular tic tac toe is that it is deceptively simple: a game with straightforward rules that can throw up some surprising results. The other interesting point is the rapidly exploding set of possible move sequences.

The Target

How do we analyze games, develop strategies and generally have some fun with this ‘friendly’ problem? Do the strategies remain the same as we scale up the grid?

Approach

I built a tic tac toe simulator in Python to generate move sequences and process them to determine the final result of each game.

I also used Matplotlib for plotting results and anytree for generating brute-force move sequences. It is very easy to extend the code and incorporate optimum strategy results to build a tic tac toe bot.

The Python file is attached at the end of this post.
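To give a flavour of what the simulator has to do, here is a minimal sketch of scoring a single move sequence on an arbitrary grid. It is not the attached code: the 1-based grid reference mapping and the assumption that a win requires a complete row, column or diagonal are my reading of the rules described in this post:

import numpy as np

def winner(moves, width):
    # Replay a move sequence (1-based grid references, players alternate,
    # player 1 first) and return 1, 2 or 0 for a draw.
    grid = np.zeros((width, width), dtype=int)
    for turn, ref in enumerate(moves):
        player = 1 + turn % 2
        r, c = divmod(ref - 1, width)
        grid[r, c] = player
        lines = [grid[r, :], grid[:, c]]               # the row and column just played
        if r == c:
            lines.append(np.diag(grid))                # main diagonal
        if r + c == width - 1:
            lines.append(np.diag(np.fliplr(grid)))     # anti-diagonal
        if any(np.all(line == player) for line in lines):
            return player
    return 0                                            # no line completed: a draw

print(winner([1, 3, 4, 6, 5, 7, 9, 8, 2], 3))           # 1: player 1 completes the main diagonal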

Data Collection

The process of collecting data is very simple.

A set of move sequences is generated by sampling without replacement (sample size: 10,000) from all possible move sequences; for the 3 by 3 grid, brute force generates the complete set of move sequences instead.

Sampling

The sampling is carried out using a Markov Chain – starting with an initial move sequence (a list of grid references or slots – for 3 by 3 this will be [1, 2, 3, 4, 5, 6, 7, 8, 9]) we (with uniform probability) pick any two moves and switch them. Thus the next state depends only on the previous state.

As an example:

Pick the first and last move to switch.

Sequence at Step t: [1, 3, 4, 6, 5, 7, 9, 8, 2]
Sequence at Step t+1: [2, 3, 4, 6, 5, 7, 9, 8, 1]

The sample size for grids larger than 3 by 3 is 10,000.
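A minimal sketch of this swap-based Markov chain sampler (the function name and details are illustrative, not the attached code) could look like:

import random

def sample_move_sequences(width, n_samples, seed=42):
    # Start from the identity ordering and repeatedly swap two randomly chosen
    # moves; keeping samples in a set gives sampling without replacement.
    random.seed(seed)
    state = list(range(1, width * width + 1))        # e.g. [1, 2, ..., 9] for 3 by 3
    samples = set()
    while len(samples) < n_samples:
        i, j = random.sample(range(len(state)), 2)   # pick two positions uniformly
        state[i], state[j] = state[j], state[i]      # next state depends only on current
        samples.add(tuple(state))
    return [list(s) for s in samples]

sequences = sample_move_sequences(width=4, n_samples=10000)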

Using the sampling we can estimate the following given arbitrary grid size:

  • Is there any advantage in starting first?
  • How many games end in a draw or have second movers as winners?
  • Which are the ‘best’ starting squares?
  • How are these factors affected by grid size?

Results

I experimented with grids of size 3 by 3, 4 by 4, 5 by 5 and 6 by 6 (here we are going beyond the Age of the Universe w.r.t. number of move sequences!) and pulled out some interesting statistics!

3 by 3 Grid:

Here the first mover has a CLEAR advantage. This is not surprising, because the first mover gets 5 moves compared to the second mover's 4. But the advantage is QUITE SIGNIFICANT.

First Mover Advantage:

  1. The First mover wins 58.4 % of the games (using brute force: 58.492%)
  2. The Second mover wins 28.8% of the games (using brute force: 28.809%)
  3. There are 12.7% instances of a Draw (using brute force: 12.698%)

2 by 2 Grid (An Interlude)

If you are wondering about the 2 by 2 grid (and why we called it a trivial case) then try out a few simple games of 2 by 2 tic tac toe as a thought experiment. In case you are impatient the 2 by 2 grid becomes a trivial case because there is no possibility of a draw or the second mover winning. The first mover wins in all 24 move sequences! You could brute force it on a piece of paper.

3 by 3 Grid Plots

The graph below shows a 3 by 3 grid histogram of winning percentages for First Movers (Orange), Second Movers (Blue) and Draws (Green).

3×3 Grid: First movers in Orange, Second Movers in Blue, Draws in Green

The above histogram clearly shows the difference between the three.

Strategy for Tic Tac Toe!

For the first mover the best starting square by far is 5 (right in the middle, it opens up 4 winning lines – the most of any starting square; the corners open up 3 and the edge squares only 2). The second best option is then the corners (1, 3, 7, 9).

For the second movers the best starting squares are 2, 4, 6, or 8 (a diamond pattern).
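The line counts behind that advice are easy to verify by counting, for each square, how many of the 8 winning lines of the 3 by 3 grid pass through it (a small illustrative snippet, not part of the attached simulator):

lines = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9},      # rows
         {1, 4, 7}, {2, 5, 8}, {3, 6, 9},      # columns
         {1, 5, 9}, {3, 5, 7}]                 # diagonals
for square in range(1, 10):
    print(square, sum(square in line for line in lines))
# square 5 lies on 4 lines, the corners (1, 3, 7, 9) on 3, the edges (2, 4, 6, 8) on 2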

4 by 4 Grid:

When we increase the size to a 4 by 4 grid we get 16 slots with > 20 trillion move sequences. It is difficult to analyse this (and probably pointless) using the brute force approach. To do this quickly we can use the sampling approach where we randomly sample move sequences (as described before) and measure how many of these lead to draws, first mover wins and second mover wins. This will obviously be an approximate measure.

The result is presented in the histogram below. Once again First movers in Orange, Second Movers in Blue and Draws in Green.

This time we see that there is a high chance of a draw (~42%); first movers still win more often than second movers, but the difference is less dramatic (~31% for first movers and ~26% for second movers). We also see that the first and second mover distributions are starting to overlap at the edges.

4×4 Grid: First movers in Orange, Second Movers in Blue, Draws in Green

5 by 5 Grid:

For a larger 5 by 5 Grid it is impossible to do a brute force calculation, as the full number of move sequences reaches a stunning Factorial(25)!

Using the same sampling approach we can again plot the percentages as a histogram.

As expected the chance of a draw has now increased to ~60%! First movers win only about ~25% of the time whereas second movers win only about ~13% of the time.

5×5 Grid: First movers in Orange, Second Movers in Blue, Draws in Green

6 by 6 Grid:

Finally doing the same for a 6 by 6 Grid we find that there is a ~75% chance of a draw! There is also almost no difference between first movers and second movers (or very little difference) and both are in the 12 – 15% range. Thus as we increase the grid size it becomes more difficult to win and there is no real advantage in moving first.

6×6 Grid: First movers in Orange, Second Movers in Blue, Draws in Green

Next Steps:

Obviously the experiments given in this post are not the final word. They can be further improved by implementing parallel processing so that we can increase the sample size (especially for larger grids).

To answer the questions we started with:

  • Is there any advantage in starting first? – Yes, definitely! Right up till grid size of 5 by 5.
  • How many games end in a draw or have second movers as winners? – More games end in a draw as the grid size increases. It is almost always bad to be the second mover!
  • Which are the ‘best’ starting squares? – For the 3 by 3 Grid we have:
    • For the first mover the best starting square by far is 5 (right in the middle, it opens up 4 winning lines – the most of any starting square; the corners open up 3 and the edge squares only 2). The second best option is then the corners (1, 3, 7, 9)
    • For the second movers the best starting squares are 2, 4, 6, or 8 (a diamond pattern).
  • How are these factors affected by grid size? – As the grid size increases a draw becomes more likely.

Larger Sample Sizes:

What if we increase the sample size from 10000 to 100000 (a factor of 10)?

Let us see what happens to the spread of the first movers, second movers and draws in case of 5 by 5 Grid and 4 by 4 Grid:

Grid Width 5×5: Large Sample, First movers in Orange, Second Movers in Blue, Draws in Green

Grid Width 4×4: Large Sample, First movers in Orange, Second Movers in Blue, Draws in Green

The spread is now far less compared to the 10,000-sample graph. This means we get the percentage values with a higher degree of accuracy. The spread looks more like the Burj Khalifa than the Shard!

 

Tic Tac Toe python code file can be found HERE!