"Cracking the YouTube algorithm" Is A Fake Myth
(Read on Twitter here)
There's no such thing as "cracking the YouTube algorithm."
For a simple reason:
Even the developers who wrote the code don't fully understand how it makes its decisions.
YouTube's recommendation algorithm is built on "deep learning," a type of machine learning that uses neural networks with many layers to model complex patterns.
Deep learning models, especially complex ones like YouTube's recommendation system, are often called "black boxes" because no one can fully explain how they arrive at specific decisions, not even the developers themselves.
When a deep learning model is trained, it learns to make predictions by adjusting the weights and biases in its many artificial neurons based on the data it's trained on.
For a complex model, there can be millions or even billions of these weights, and the YouTube algorithm is one of the most complex models in the world right now.
The process by which the model arrives at a particular decision involves many layers of computation, and tracing a decision back through those layers to understand why the model made it is, in practice, impossible.
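To make that concrete, here's a toy sketch in Python (NumPy). It is not YouTube's actual code, just the generic shape of a neural network: made-up features, two tiny layers, and a single invented "did the viewer watch it?" label. It shows how a prediction flows through layers of weights and how one training step nudges every weight by a small amount.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recommender": score one video for one viewer.
# Real systems have billions of weights; this one has a few dozen.
x = rng.normal(size=8)                            # made-up (viewer, video) features

W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # layer 1: weights and biases
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # layer 2: weights and biases

def forward(x):
    h = np.maximum(0, W1 @ x + b1)  # layer 1: every feature mixes with every weight
    return W2 @ h + b2              # layer 2: the score depends on ALL weights jointly

score = forward(x)

# One training step (the "learning"): nudge every weight a tiny bit in the
# direction that reduces the error against what the viewer actually did.
watched = 1.0                          # pretend the viewer watched the video
error = score - watched                # how wrong the prediction was
h = np.maximum(0, W1 @ x + b1)         # recompute the hidden layer for the gradients
grad_W2 = error[:, None] * h[None, :]  # gradient of 0.5 * error**2 with respect to W2
grad_b2 = error
grad_h = (W2.T @ error) * (h > 0)      # backpropagate through the ReLU
grad_W1 = grad_h[:, None] * x[None, :]
grad_b1 = grad_h

lr = 0.01                              # learning rate: every nudge is tiny
W1 -= lr * grad_W1; b1 -= lr * grad_b1
W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("score before/after one update:", score, forward(x))
```

Now imagine billions of weights instead of a few dozen, and millions of updates like this one. There is no single weight you can point to and say "this is why that video got recommended"; the decision lives in all of them at once.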
That's why developers working on the YouTube algorithm will always tell you to make content for the audience, not for the algorithm: that's what the algorithm is ultimately trained to do, follow what the audience wants (as long as the content complies with YouTube's policies, of course).
Here's the counter-intuitive part: deep learning models do not "learn" or "understand" in the way humans do. They find patterns in the data they're trained on, but they don't know why those patterns exist or what they mean in a broader context.
This is why, for example, an image recognition model can recognize a cat in a picture but doesn't understand what a cat is.
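In code, "recognizing a cat" boils down to something like this (the labels and numbers below are invented; a real model would produce the logits from the pixels):

```python
import numpy as np

# Invented raw outputs (logits) of a hypothetical image classifier for one picture.
labels = ["cat", "dog", "car"]
logits = np.array([4.2, 1.1, -0.3])

# "Recognizing a cat" is just this: turn the logits into probabilities (softmax)
# and pick the biggest one. There is no concept of "cat" anywhere in the model.
probs = np.exp(logits) / np.exp(logits).sum()
print({lbl: round(float(p), 3) for lbl, p in zip(labels, probs)})  # {'cat': 0.947, 'dog': 0.043, 'car': 0.011}
print("prediction:", labels[int(np.argmax(probs))])                # prediction: cat
```

The model maps pixels to a big number next to the word "cat". Everything we mean by "cat" simply isn't in there.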
These models make autonomous decisions based on the patterns they've learned, and, by the way, this is one of the major challenges in AI right now: if we don't know why a model is making the decisions it's making, it's dangerous to trust those decisions, especially in high-stakes settings.
The only thing we can do is understand the core rules, analyze the emergent behavior, and learn from it.
Note: This post was liked by Todd Beaupré, the product lead at YouTube.