Collect “clusters” of similar colors into separate lists:
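One possible approach (a sketch, assuming FindClusters can operate directly on color objects, using their coordinates in color space; the random sample colors are just for illustration):

```wolfram
(* make some random colors, then gather similar ones into separate lists *)
colors = Table[RandomColor[], 100];
FindClusters[colors]
```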
Create a graph of connections based on nearness in “color space”:
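A sketch of one way to do this, assuming NearestNeighborGraph accepts colors and measures nearness by their positions in color space:

```wolfram
(* connect each color to its nearest neighbors in color space *)
colors = Table[RandomColor[], 50];
NearestNeighborGraph[colors]
```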
Show nearby colors successively grouped together:
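Hierarchical clustering shows successive groupings; a sketch using Dendrogram, assuming it can take a list of colors and display them as the leaves of the tree:

```wolfram
(* build a tree in which nearby colors are successively merged *)
colors = Table[RandomColor[], 20];
Dendrogram[colors]
```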
Make a rasterized image of each letter in the alphabet:
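One way to sketch this, using Alphabet (which gives the list of lowercase letters as strings) and Rasterize; the Style size of 50 is an arbitrary choice to make the letters legible:

```wolfram
(* turn each letter of the alphabet into a bitmap image *)
Rasterize[Style[#, 50]] & /@ Alphabet[]
```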
How come I’m getting different results from the ones shown here?
It’s based on artificial neural networks inspired by the way brains seem to work. It’s been trained with millions of example images, from which it’s progressively learned to make distinctions. And a bit like in the game of “twenty questions”, by using enough of these distinctions it can eventually determine what an image is of.
At least 10,000—which is more than a typical human. (There are about 5000 “picturable nouns” in English.)
If the general area (like everyday images) is one it already knows well, then as few as a hundred. But in areas that are new, it can take many millions of examples to achieve good results.
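As an illustration of learning from a handful of examples, here is a sketch using Classify on a tiny made-up training set (the labels and test input are purely illustrative, and the result of so little training data is not guaranteed):

```wolfram
(* train a classifier from just a few labeled examples *)
c = Classify[{1 -> "odd", 2 -> "even", 3 -> "odd", 4 -> "even", 5 -> "odd"}];
c[7]
```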
Can a single graph have several disconnected parts?
There’s no easy answer. When it’s given a collection of things, it’ll learn features that distinguish them—though it’s typically primed by having seen many other things of the same general type (like images).
The Wolfram Language stores its latest machine learning classifiers in the cloud—but if you’re using a desktop system, they’ll automatically be downloaded, and then they’ll run locally.