The web runs on a protocol called HTTP. A protocol, in this sense, is an agreement: it allows any party wishing to take part in an exchange to have a clear expectation of how things are done. For the web, it tells interested parties how to communicate, how to send and receive data. It has occurred to me that machine learning currently lacks a protocol.
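That "clear expectation" is concrete: an HTTP/1.1 exchange is just text in an agreed shape, so any two parties can produce and parse it the same way. A minimal sketch, with a hand-written request and response for illustration (no network involved):

```python
# A protocol fixes the shape of every message in advance.
# An HTTP/1.1 request is: request line, headers, blank line, optional body.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

# A server speaking the same protocol answers in an equally fixed shape:
# status line, headers, blank line, body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>hello</html>"
)

# Because the shape is agreed upon, parsing is mechanical:
# split headers from body at the blank line, then read the status line.
head, _, body = response.partition("\r\n\r\n")
status_line = head.split("\r\n")[0]
version, status, reason = status_line.split(" ", 2)
print(status, body)  # 200 <html>hello</html>
```

Neither side needs to know anything about the other's implementation; the agreement on the wire format is the whole contract.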
Instead, we have frameworks, which implement protocols defined by the people who built them. And while there is nothing wrong with these “framework protocols”, the fact that there are several of them is a problem. It implies that there are a number of ways to achieve the same thing: read data, transform it, train a model, export it, load it, and get predictions out of it.
Now, the absence of a protocol in itself doesn't strictly represent a problem.
There is a version of the world where machine learning can work without protocols.
In this version, every tool that exists is consistent and simple.
Consistent, in that it uses the same models for representing data as every other tool; simple, in that it provides a single piece of functionality.
This world exists.
There is a fundamental tool for representing data, and there are single-purpose tools: OpenAI Gym, and others like it.
In this world, however, these single-purpose tools are disjoint. They provide building blocks to be stitched together. And so the frameworks that are out there stand out, because they often give a complete picture of a problem and its solution. We turn to them because we prefer the cohesive over having to figure things out ourselves every time. And for doing so, we are left locked into an ecosystem that promises everything in exchange for adhesion. Every now and then, we're forced to skip rope to keep up with the latest redesign of the framework, which is often incompatible with its previous versions.
There are too many ways to create a neural network, and very few ways of testing one. Too many ways to create reinforcement learning environments, and very few ways of debugging them. A framework does not define a standard in the way that a protocol does. And without a protocol, there is no contract, social or otherwise. So all we're left with is a choice: accept the decisions of the framework creators, or make our own. Whichever choice we make, we'll be stuck with high-interest technical debt. And this debt isn't borne by developers of machine learning solutions alone; it is also borne by consumers, which includes users of applications, governments, regulators, and research communities of interest. It is harder to share, evaluate, validate, introspect, and build on top of work that has already been done. And that is why I think that machine learning needs a protocol.
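As a thought experiment, here is what a minimal contract could look like, sketched with Python's `typing.Protocol`. The method names (`fit`, `predict`) are my own illustration, not a proposed standard; the point is that conformance is structural, like a protocol on the wire, rather than tied to any one framework's class hierarchy:

```python
from typing import Protocol, Sequence, runtime_checkable


@runtime_checkable
class Model(Protocol):
    """A hypothetical contract: any object with these methods conforms,
    regardless of which framework produced it."""

    def fit(self, inputs: Sequence[float], targets: Sequence[float]) -> None: ...
    def predict(self, inputs: Sequence[float]) -> Sequence[float]: ...


# A trivial implementation that satisfies the contract without inheriting
# from anything: it simply predicts the mean of the training targets.
class MeanModel:
    def __init__(self) -> None:
        self.mean = 0.0

    def fit(self, inputs: Sequence[float], targets: Sequence[float]) -> None:
        self.mean = sum(targets) / len(targets)

    def predict(self, inputs: Sequence[float]) -> Sequence[float]:
        return [self.mean for _ in inputs]


model = MeanModel()
model.fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(isinstance(model, Model))  # True: the contract is checked structurally
print(model.predict([10.0]))     # [4.0]
```

Anything written against `Model` would then work with any conforming implementation, which is exactly the kind of social-and-technical contract the frameworks don't give us today.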