When was the last time you followed a group you didn’t agree with? When was the last time you made sure to look at counter-arguments when reading the news or social media posts? How about when you were coming to a conclusion? In today’s society, the vast amount of information out there can be seriously overwhelming, not to mention wrong or skewed in some way. There are tons of conspiracies and misinformation. Worse still, there is plenty of *true* information that can also cause problems if we don’t think about it holistically.
If we look at our likes or the groups we follow, the vast majority of us will find that similar beliefs and political views are in those likes and groups. By default, the content we see is going to be mostly from people or groups we already agree with. Similarly, when marketers sell or solicit to us, they take these metrics and tailor their offerings to what we agree with.
This is great for shopping, but when it comes time to understand, learn, or communicate with those who don’t agree with us, we put ourselves at a major disadvantage. Many of us wouldn’t have it any other way; we don’t want to see random content spouting a bunch of nonsense. But if that ‘nonsense’ reflects the views of 40% of a population, we’d be remiss not to at least come to understand the *truth* in those positions. If we only see our own content, discerning what is true becomes difficult (remember the Monty Hall problem we spoke about?).
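The Monty Hall problem is a good reminder of how unreliable unexamined intuition can be. As a quick illustration (a minimal sketch, not part of the original discussion), we can simulate the game and watch the counter-intuitive answer emerge:

```python
import random

def monty_hall(trials=100_000):
    """Simulate Monty Hall; return win rates for staying vs. switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining unopened door
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay ≈ {stay:.2f}, switch ≈ {switch:.2f}")  # roughly 0.33 vs 0.67
```

Most people’s gut says the two doors are a 50/50 bet; the simulation shows switching wins about twice as often. Our intuitions about the information we consume can be just as far off.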
Garbage In, Garbage Out. Usually…
We can all agree that if we believe or follow ‘wrong’ or ‘skewed’ information, that isn’t a good thing. We know that with bad directions, for example, finding our way could be difficult. In computer terms, the matter of ‘garbage data in’ will generally equal ‘garbage data out.’ Sure, we aren’t computers, but the information we read daily is very similar. We process and make assessments based on what we take in.
The exception to that rule depends on whether we know it’s garbage data or not. If we know that it could be bad data, we can make use of that and learn from it. For example, if a friend named Jake makes a post saying: “The coronavirus is going to wipe out 20% of the population,” firstly, why would we believe or not believe his statement? If he were a doctor currently working for the CDC, we’d probably place much more weight on what he’s saying, right?
However, if he is a computer technician working at the same company we work for, chances are he doesn’t have quite the means to make that claim. How, then, can Jake’s potentially bad information help us?
Context, Context, Context
If we want to know how bad the coronavirus is and whether it will wipe out 20% of the population, we definitely shouldn’t count on Jake, the IT guy. Yet, what we did learn is that Jake thinks this is the case. That could be relevant, depending on what we want to use this information for:
1. We can use the information to talk to Jake, to relate to Jake.
2. We can determine what sources Jake is using to come to that conclusion.
3. It could help us understand the overall sentiment about the coronavirus situation.
4. And on and on…
It may seem obvious, but we often forget this when reading posts, articles, or watching the news or YouTube. All bad information can be useful depending on the context. The key is to know or have the idea that the information could be bad. If we assume the information is good, then it’s going to be garbage in, garbage out.
Even the word ‘skeptic’ sounds like a bad thing, but the idea is simply to maintain a healthy questioning attitude toward everything. It allows us to make use of information whether it’s true or not. Healthy skepticism doesn’t put us into analysis paralysis, and it doesn’t have us constantly arguing with others.
Just because we question something doesn’t mean we have to be vocal or explicit in our questioning; it also doesn’t mean we have to get everything right before making a decision. It means holding in our minds that things are unknown or uncertain instead of marking them as good/bad or right/wrong. In fact, we’ll come to find that the large majority of things are never 100% sure bets; they just approach 100% in our experience or understanding. This leads us to how things that are *true* can also cause problems.
Overfitting & Underfitting
Let’s imagine that we see a new post every week about a race crime somewhere in the world. It’s the same race against another race every time. Assume that the crimes did occur, and there’s a high probability that the crimes were, in fact, motivated by some form of racism. What if this is the only information we have about crime in general? What invalid conclusions would we draw?
This is underfitting. Imagine having 100 photos of hairless Sphynx cats and having to teach a kid or an AI what a cat is based only on those photos. Drawing conclusions from these photos would yield skewed results unless we accounted for their limitations. If we increase the variance in our data, we take in more possibilities. For example, instead of only having hairless cats, we’d include many other types; instead of only seeing the race crimes, we’d make sure to acknowledge all the other non-race crimes to keep a balanced perspective.
The slightly harder concept to understand is overfitting. Here it refers to taking in information that has high ‘variance’ while having little ability to identify trends in that information. Instead of teaching only hairless cats, imagine we wanted to teach the kid or AI 100 distinct animal species from 100 photos. There would be lots of ‘variance’ because every photo shows something completely different, but having only one instance of a dog and one instance of a cat could make it hard to distinguish them from the other 98 animals. They wouldn’t be able to tell why a hyena isn’t a dog. Initially, they would depend completely on our explanation until we gave them ways to identify trends.
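In machine learning terms, the two failure modes can be made concrete with a toy experiment (a minimal sketch; the data and the two deliberately bad ‘models’ are invented for illustration): an underfit model ignores the trend entirely, while an overfit one memorizes its examples, noise and all.

```python
import random

random.seed(0)

# The hidden "truth" is y = x^2, observed with noise
def sample(n):
    return [(x, x * x + random.gauss(0, 1))
            for x in (random.uniform(-3, 3) for _ in range(n))]

train, test = sample(30), sample(30)

def mse(predict, data):
    """Mean squared error of a prediction function on a data set."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# Underfit: ignore x entirely and always predict the training mean --
# too simple to capture any trend at all.
mean_y = sum(y for _, y in train) / len(train)
underfit = lambda x: mean_y

# Overfit: memorize the training set -- return the y of the nearest
# training point, noise included.
def overfit(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

for name, model in [("underfit", underfit), ("overfit", overfit)]:
    print(f"{name}: train error {mse(model, train):.2f}, "
          f"test error {mse(model, test):.2f}")
```

The underfit model is bad everywhere; the overfit one looks perfect on the data it memorized but stumbles on anything new. That mirrors the text: too little variety and we can’t generalize, too much undigested variety and we just parrot what we were shown.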
Similarly, what if we had to decide what’s best for our country’s national security? With so many factors, it can be difficult to draw conclusions ourselves. We naturally default to an ‘authority’ or other subject-matter expert to tell us what we should know, what trends we should look for, and what we should conclude. When we can’t identify valid trends on our own and depend on what we’ve been told, we are encountering ‘overfitting’ to an extent.
With the massive amount of information in the world, it makes sense to look for and listen to experts, authorities, and even the echo chambers. They can at least provide us with ways to find and identify trends for ourselves. We just need to make sure to constantly branch out and balance the information we receive. If we don’t, we could become ‘underfit’ and unable to draw valid conclusions, or ‘overfit’ and subject to whatever others say without even knowing it.
In both cases, we could slowly become extremists, holding very specific ‘truths’ that cannot support reasonable generalizations. We have to realize that the individual pieces of information used to form a generalization may all be true, but the generalization itself is, by definition, only an approximation. The better we balance the information we receive, the better our ability to generalize.
Let’s take the time to open our minds to other truths:
1. Join a social media group that represents a large part of the population but that we disagree with.
2. Remember that bad information can be useful.
3. Remember that even 100% true information can yield skewed results.