Recommendation engines are an essential feature of global marketplaces, no matter whether they offer books, mobile apps, or music. As the number of products offered within such marketplaces has grown into the millions, human users simply cannot handle that amount of information anymore.

Historically, humans have used different approaches to decide what to buy next. Many of us rely on friends with similar taste who recommend a good book to read next. Most of us are also influenced by the advertising and marketing that large enterprises use to promote their latest products.

Even if you are a very critical consumer and decide to scroll through all the books available on Amazon, you will never reach the end of the list in a single lifetime. Recommender systems and algorithms are one way of learning your personal taste and suggesting products that you did not know before.

One well-known methodology for calculating recommendations is to use the collected preferences of all other users.

Collaborative filtering, also known as crowd intelligence, is one methodology for predicting the taste of a single user from the collected knowledge of millions of users.
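The core idea can be sketched in a few lines of numpy: users whose rating vectors point in a similar direction have a similar taste, and a user's neighbors in this sense are the "crowd" whose opinions we borrow. The ratings below are made up purely for illustration:

```python
import numpy as np

# Hypothetical ratings (rows = users, columns = items, 0 = not rated)
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine_similarity(a, b):
    # Cosine of the angle between two rating vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# How close are users 1 and 2 to user 0?
sim_01 = cosine_similarity(ratings[0], ratings[1])
sim_02 = cosine_similarity(ratings[0], ratings[2])
print(sim_01, sim_02)  # user 1's taste is much closer to user 0 than user 2's
```

With such a similarity in hand, a neighborhood-based recommender would predict a user's missing ratings from the ratings of the most similar users.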

Imagine a simple rating matrix U x B, where U represents the individual users and B all books within a marketplace. The matrix entries are the ratings that users have assigned to the individual books. This is typically a sparse matrix: there are far more unrated entries in the matrix than actual user ratings.

Here is an example of such a user/book rating matrix:

```
                     Billy Sarah Klara Joseph Bob Sue
Harry Potter         3 5
Sherlock Holmes      5 3 5
Alice in Wonderland  3
Game of Thrones      1 5 3
Pretty Little Liars  5 4
Twilight             1 4 2 5
The Innocent         1 1
Pride and Prejudice  5
```
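In code, such a sparse matrix is commonly stored as a dense array with 0 standing in for "not rated". A tiny illustration (the value placements below are hypothetical, not the actual blanks of the table above), including a quick check of how sparse the matrix is:

```python
import numpy as np

# Hypothetical excerpt of a rating matrix: rows = books, columns = users,
# 0 marks a missing rating
example_ratings = np.array([
    [0, 3, 0, 0, 5, 0],
    [5, 0, 0, 3, 5, 0],
    [0, 3, 0, 0, 0, 0],
])

rated = np.count_nonzero(example_ratings)
density = rated / example_ratings.size
print(rated, density)  # only a third of the entries carry an actual rating
```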

The goal of our recommendation system is to estimate all the missing ratings within this sparse matrix. We will use Google's TensorFlow to implement it, as it is one of the most mature machine learning frameworks.

To find K latent factors, or hidden features, connecting the list of books with the list of users, we have to model a product of two matrices P and Q that approximates our rating matrix:

R ≈ P x Qᵀ

where P = Books x Features and Q = Users x Features, matching the rating matrix above with books as rows and users as columns.
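In numpy terms, the shapes work out as follows. This is only a sketch with randomly initialized factors, using the dimensions of the example matrix above (8 books, 6 users, K = 2):

```python
import numpy as np

N, M, K = 8, 6, 2          # 8 books, 6 users, 2 hidden features

P = np.random.rand(N, K)   # books x features
Q = np.random.rand(M, K)   # users x features

# P * Q^T reproduces the shape of the full rating matrix
R_hat = P @ Q.T
print(R_hat.shape)  # (8, 6)
```

Because K is much smaller than the number of books or users, the factorization is forced to compress the ratings into a few hidden features, which is what lets it generalize to the missing entries.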

The TensorFlow model is a simple multiplication of two variable nodes P and Q, while the loss is calculated as the sum of squared differences between the actual ratings and the product of P and Qᵀ:

```
import numpy as np
import tensorflow as tf

# prepare data
# BOOKS is the user/book rating matrix from above (missing ratings encoded as 0)
R = np.array(BOOKS)
N = len(BOOKS)      # number of books
M = len(BOOKS[0])   # number of users
K = 2               # number of hidden features

P = np.random.rand(N, K)
Q = np.random.rand(M, K)

# input placeholders
ratings = tf.placeholder(tf.float32, name='ratings')

# model variables
tP = tf.Variable(P, dtype=tf.float32, name='P')
tQ = tf.Variable(Q, dtype=tf.float32, name='Q')

# build model
pMultq = tf.matmul(tP, tQ, transpose_b=True)
squared_deltas = tf.square(pMultq - ratings)
loss = tf.reduce_sum(squared_deltas)

tf.summary.scalar('loss', loss)
tf.summary.scalar('sumP', tf.reduce_sum(tP))
tf.summary.scalar('sumQ', tf.reduce_sum(tQ))
```

The final step is to create a TensorFlow GradientDescentOptimizer that learns both variable matrices P and Q:

```
# create an optimizer towards minimizing the loss value
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
```
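Under the hood, the optimizer nudges P and Q a small step against the gradient of the loss on every iteration. A minimal numpy equivalent of that update rule, run on made-up data, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((8, 6)) * 5       # stand-in rating matrix, values in [0, 5)
P = rng.random((8, 2))           # books x features
Q = rng.random((6, 2))           # users x features
lr = 0.01                        # same learning rate as above

def loss(P, Q):
    return np.sum((P @ Q.T - R) ** 2)

before = loss(P, Q)
for _ in range(100):
    err = P @ Q.T - R            # residual matrix
    # gradient of sum(err^2) is 2*err@Q w.r.t. P and 2*err.T@P w.r.t. Q
    P, Q = P - lr * 2 * err @ Q, Q - lr * 2 * err.T @ P
print(loss(P, Q) < before)       # the loss shrinks over the iterations
```

This is exactly the computation that GradientDescentOptimizer automates: TensorFlow derives the gradients from the graph instead of us writing them out by hand.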

And we start the TensorFlow training loop as shown below:

```
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# write summaries to the TensorBoard log directory
summary_op = tf.summary.merge_all()
file_writer = tf.summary.FileWriter('./logs', sess.graph)

# now run the training loop to reduce the loss
for i in range(5000):
    # also execute a summary operation together with the train node for TensorBoard
    _, summary = sess.run([train, summary_op], {ratings: R})
    file_writer.add_summary(summary, i)

print(np.around(sess.run(pMultq), 3))
```

The result is a matrix of approximated book ratings for all the given users:

```
[[ 1.38  1.60  0.31  1.49  2.50  1.69]
 [ 3.12 -0.54 -0.81  3.49  4.99  1.78]
 [-0.18  0.84  0.34 -0.23 -0.16  0.28]
 [ 1.92  0.29 -0.27  2.13  3.17  1.40]
 [-0.55  4.82  1.86 -0.78 -0.17  1.96]
 [ 1.02  4.49  1.43  0.99  2.34  2.84]
 [ 0.49 -0.05 -0.11  0.55  0.79  0.30]
 [-0.31  3.70  1.40 -0.47  0.04  1.58]]
```
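Each column of this matrix is one user's predicted taste across all books, so a simple recommendation is the book with the highest predicted rating in that user's column (in a real system you would first mask out the books a user has already rated). A small sketch on the printed result:

```python
import numpy as np

books = ["Harry Potter", "Sherlock Holmes", "Alice in Wonderland",
         "Game of Thrones", "Pretty Little Liars", "Twilight",
         "The Innocent", "Pride and Prejudice"]
users = ["Billy", "Sarah", "Klara", "Joseph", "Bob", "Sue"]

# approximated ratings from the training run above (rows = books, columns = users)
predicted = np.array([
    [ 1.38,  1.60,  0.31,  1.49,  2.50,  1.69],
    [ 3.12, -0.54, -0.81,  3.49,  4.99,  1.78],
    [-0.18,  0.84,  0.34, -0.23, -0.16,  0.28],
    [ 1.92,  0.29, -0.27,  2.13,  3.17,  1.40],
    [-0.55,  4.82,  1.86, -0.78, -0.17,  1.96],
    [ 1.02,  4.49,  1.43,  0.99,  2.34,  2.84],
    [ 0.49, -0.05, -0.11,  0.55,  0.79,  0.30],
    [-0.31,  3.70,  1.40, -0.47,  0.04,  1.58],
])

# highest predicted rating per user column
top_books = {user: books[i] for user, i in zip(users, predicted.argmax(axis=0))}
print(top_books["Sarah"])  # Pretty Little Liars
```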

If you would like to read more about TensorFlow and neural networks, refer to my Kindle eBook here.
