Did you ever wonder why the artwork Netflix uses for different shows sometimes changes when you log in to your account from day to day? One day it’s a picture of the whole bridge crew, the next day it’s Worf glaring at me judgmentally. It’s because machine learning is operating behind the scenes to try to guess what will make you pick a particular show to watch. They’re personalizing the experience for you, based on your history of picking shows.
It is powerful machine learning, and you can use it too.
From Netflix’s blog about this technology:
For artwork personalization, the specific online learning framework we use is contextual bandits. Rather than waiting to collect a full batch of data, waiting to learn a model, and then waiting for an A/B test to conclude, contextual bandits rapidly figure out the optimal personalized artwork selection for a title for each member and context.
There are a few interesting challenges to getting this right. For one, you first need to reduce your choices from millions to 50 or fewer. Artwork selection is a beautiful use case because there are certainly fewer than 50 pieces of artwork representing each show. The fewer choices this machine learning model needs to make, the better.
The second thing you need to do is reward the model when it gets it right. For Netflix, the reward comes when you click the show. Maybe they showed you 10 different images for the same show, and on the 10th one you finally clicked. There may have been other factors that went into you clicking that show as well, which you can capture and throw at the machine learning model: time of day, how long you’ve been online, geolocation, and so on, just in case the image isn’t the only deciding factor. The model will figure out which factors are more important than others.
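To make the reward idea concrete, here’s a toy sketch of the bandit loop in Python. This is not Netflix’s actual system (theirs is a contextual bandit with far more context features); it’s the simplest possible version, where each piece of artwork is an “arm” and a click is the reward:

```python
class ImageBandit:
    """Toy multi-armed bandit: one arm per piece of artwork.

    Tracks impressions and clicks, and serves the image with the
    highest observed click-through rate so far.
    """

    def __init__(self, images):
        self.stats = {img: {"shown": 0, "clicked": 0} for img in images}

    def choose(self):
        # Serve the image with the best click-through rate so far;
        # images never shown get priority so every arm is tried once.
        def score(img):
            s = self.stats[img]
            return s["clicked"] / s["shown"] if s["shown"] else float("inf")

        best = max(self.stats, key=score)
        self.stats[best]["shown"] += 1
        return best

    def reward(self, img):
        # Called when the user clicks the show: this is the reward signal.
        self.stats[img]["clicked"] += 1
```

In the real, contextual version, `choose` would also take the user’s context (time of day, profile, etc.) and learn a different preference per context, but the shape of the loop — show, observe, reward — is the same.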
By the way, this is why Netflix has different profiles. It’s pretty universally accepted by scientists that Star Trek: The Next Generation is a great show, but not everyone has accepted this truth yet. Netflix has no idea if it’s you or your kids picking the shows to watch at any given time, so they encourage you to use profiles so they can separate out the traffic.
So how can I use it?
You can’t; it’s too hard.
The amount of work you have to do to get it going is insane. You need data scientists to build the algorithms, machine learning experts to understand the flow of the models and the data, rockstar developers to integrate it into a product or solution, and DevOps gods to deploy it and get it to scale.
Or you can use out-of-the-box machine learning tools like Suggestionbox from Machine Box. It has everything you need to solve this problem inside a Docker container wrapped in a simple API. I suggest you start with this and experiment before going deep into AI complexity hell.
But can I use it for my use case?
There are actually many different and wonderful ways you can use collaborative filtering powered by machine learning to personalize things for your customers.
- Personalize the layout and location of elements on your website to increase engagements.
- Order or display news articles based on who is visiting your feed.
- Personalize product images. Or don’t… it’s up to you.
- Pick the hero photo for an apartment, hotel room, car, or other item you’re trying to sell.
- Some fun fifth thing.
You get the picture by now.
Personalization can be subtle, it can be dynamic, and it can greatly increase a metric like clicks, engagements, purchases, etc. And it’s a great use case for machine learning.
Each and every visitor to your site provides the model with information it can then use to choose an element, image, layout, ad, product, or news article that they’re more likely to engage with. Tools like Suggestionbox give you a model that learns as more people engage with it and adapts on the fly using online learning, which gives you (among other things) the ability to do far more than just A/B testing.
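“Online learning” just means the model updates after every single interaction instead of being retrained in batches. As a rough sketch (not how Suggestionbox works internally — its API is a black box to us here), a minimal online learner looks like a logistic regression that takes one gradient step per impression:

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


class OnlineClickModel:
    """Toy online logistic regression: predicts click probability from
    context features (e.g. hour of day, session length) and updates its
    weights after every single impression -- no batch retraining."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

    def update(self, x, clicked):
        # One gradient step per impression: this is the "online" part.
        error = (1.0 if clicked else 0.0) - self.predict(x)
        self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
```

Because the model updates on every impression, there’s no waiting for an A/B test window to close before the system starts shifting toward what works.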
Isn’t this controversial?
It can be; that’s really up to you. I don’t advocate collecting too much personal information from users to train a model. Netflix ran into this issue when users started to feel a bias creeping into the recommendations, and in Netflix’s case that happened despite them not collecting personal information. Does that mean we have to guard against our users’ own collective bias? Maybe.
Part of a good implementation of this process is a system that occasionally randomizes results, so that users are not only fed things the model thinks they want to see. For the model to learn, it needs to explore, which means showing users random choices and seeing how they engage. This also helps us avoid baking too many assumptions into the model.
With great machine learning comes great responsibility.
Image from Netflix’s blog: https://medium.com/netflix-techblog/artwork-personalization-c589f074ad76