Machines are notoriously awful at classification—but so are humans
Classification is the brain fuel that powers behavior. People often prefer grander names for it—identification, judgment, prejudice, conclusion—but whatever the label, human brains rarely take a conscious step without first employing it.
Unsurprisingly, such a deep-seated mental operation has proven difficult to pass down to our inventions, from A.I. assistants to self-driving cars. A machine’s worldview—its contextual frame of reference—is so difficult to build out to human scale that machines end up classifying pedestrians as bicycles and advertisements as pedestrians. Perhaps practical A.I. isn’t ready for the real world after all.
Or perhaps the real world is not as easily classified as we’d like to believe.
Consider the example of a curbside advertisement and ask a relatively simple question: Is this a pedestrian?
An algorithm without a strong worldview may indeed ignore the context and classify this as a pedestrian, when it is clearly just a print advertisement featuring a pedestrian. Except it isn’t.
It’s an image of an advertisement of a pedestrian, and by no means is that a matter of semantics. What we see in our web browsers is a contextual frame, and anything occurring outside that frame is beyond our own worldview, which can lead to gross misclassification. What we’d confidently classify here as a photograph of an outdoor advertisement is a forgery that only exists on-screen. An algorithm would’ve known better, based on the pixels alone.
The distinction between humans and machines is not that we’re better at classifying but that we’re more interested in being confident than accurate.
That prioritization creates its fair share of traffic collisions, particularly in scenarios where pedestrians appear out of context. To that extent, it could be argued that human drivers would struggle to complete a single trip if held to the same unreasonable performance standards we place on A.I.
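The confidence-versus-accuracy trade-off can be made concrete. The sketch below uses hypothetical labels and confidence scores (not from any real system): a classifier that commits to every prediction stays "confident" across the board, while one that abstains below a confidence threshold answers less often but is right more often when it does.

```python
# Toy sketch with invented data: each tuple is
# (true_label, predicted_label, confidence).
predictions = [
    ("pedestrian", "pedestrian", 0.92),
    ("advertisement", "pedestrian", 0.55),  # out-of-context: the ad shows a person
    ("bicycle", "bicycle", 0.88),
    ("pedestrian", "bicycle", 0.51),
]

def accuracy(preds, threshold=0.0):
    """Accuracy over only the predictions whose confidence clears the threshold.

    Returns (accuracy, number_of_predictions_kept); accuracy is None if
    the model abstains on everything.
    """
    kept = [(true, pred) for true, pred, conf in preds if conf >= threshold]
    if not kept:
        return None, 0
    correct = sum(1 for true, pred in kept if true == pred)
    return correct / len(kept), len(kept)

print(accuracy(predictions))        # commit to everything: (0.5, 4)
print(accuracy(predictions, 0.8))   # abstain when unsure:  (1.0, 2)
```

On this toy data, answering everything yields 50% accuracy over four predictions, while abstaining below 0.8 confidence yields 100% accuracy over two. Humans, the argument goes, rarely choose the second mode.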
Such misguided prioritization bodes poorly for us as the world becomes increasingly digital, and at some point, we may find ourselves relying on A.I. to distinguish reality from fabrication.
Certainly, one could dismiss the impact of a limited digital worldview; it seems trivial that half our species misclassified the color of a dress and refused to acknowledge it. But the contextual barrier of our screens has also played a role in choosing world leaders, as we have aligned ourselves with ideas generated by bots and manipulative agents masquerading as authoritative news sources and fellow citizens. Yet, like the dress, many of us prefer to maintain confidence over accuracy, even in the face of contrary evidence.
Perhaps we should reconsider A.I.’s struggles in the broader, healthier context that classification is hard and humans have only mastered it by cutting corners and ignoring the consequences. Given the chance to rewire a brain, we might do well to curb the hubris.
Source: medium.com