STATISTICS! HAZARDOUS TO YOUR COPING HEALTH?
I read the other day that 20% (1 in 5) of Americans suffer some type of mental illness. You could find a number like that upsetting, especially if you are struggling with anxiety over some personal crisis but don’t feel you are “mentally ill.” After all, there are about 250 million adult Americans. If 20% are mentally ill to some degree, that’s 50 million people. If there are that many, it might be easier for you to grudgingly decide you are mentally ill (our blog of December 17, 2017 might help you with this issue), and that maybe you should seek psychiatric counseling or medication or both (which may be totally inappropriate for you).
The 1 in 5 figure has been around for many years, and it may be close to accurate depending on how “mental illness” is defined, the sample on which the number is based (is it representative of the population?), and what sort of protocol was followed to arrive at the number. Before you passively accept such statistics, however, you owe it to yourself to devote some critical thinking to where it came from. Obviously, 250 million people were not assessed, so who decided on the 1 in 5? Bottom line, be wary and don’t get sucked into statistical statements.
The piece I read also claimed that of the 1 in 5 who were mentally ill, 1 in 5 of them were not seeking help for their illness. Now here’s my problem with that comment: if 20% of those who are mentally ill are not seeking help, how can we possibly know they exist? Think about these figures and the assumptions being made. Suppose we have 1,000 people. If 1 in 5 (20%) are mentally ill, that’s 200 people. Now, of those 200 people, we’re told that 1 in 5 of them (20%) don’t seek help for their mental illness. That means 40 people (20% of 200) are mentally ill but walking around untreated. Says who? Did someone go around asking those original 1,000 people (1) “Are you mentally ill?” and (2) “Are you seeking help?” I doubt it, but how else would we know how many are mentally ill and not seeking treatment?
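The arithmetic in that 1,000-person example is easy to check with a few lines of code. This is just a sketch of the claim's own numbers, not of any verified data:

```python
# Working through the claim's figures: 1 in 5 of 1,000 people,
# then 1 in 5 of those. These percentages come from the claim
# being questioned, not from any real assessment.
population = 1000
mentally_ill = population // 5       # 20% of 1,000 = 200
untreated = mentally_ill // 5        # 20% of 200 = 40
print(mentally_ill, untreated)       # 200 40
```

Note that the code can only restate the assumptions; it cannot tell us where the 40 "untreated" people came from, which is exactly the problem.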
A few years ago I saw a similar report from a university counseling center. The center said that in the previous year, 5% of the student body came in for alcohol abuse counseling. OK, that’s easy enough to calculate. But the report went on to say, “That means 10% of our student body has an alcohol problem but is not seeking help for it.” Huh? How can anyone know the number of students with a problem if they never identify themselves? That 10% number is obviously based on an assumption, an estimate of how many students on campus have a drinking problem. Well, how about saying that! I wish more reporters would use the word “estimate,” as in, “It is estimated that 1 in 5 Americans suffer from mental illness.” I also wish they would make some effort to define what they mean by “mental illness.”
By the way, the categorical, definitive 1 in 5 figure leads us to some interesting extrapolations. The NRA claims it has 5 million members. That means 1 million members (1 in 5) are mentally ill, but 1 in 5 of those, which would be 200 thousand, are not seeking help. (If you’re a progressive Democrat that thought might keep you awake tonight!)
There are roughly 222 million licensed drivers in the US. Are 44.4 million mentally ill (seems so sometimes, doesn’t it?) and 8.9 million of them not seeking help? (Think about that next time you see someone alongside the road looking like they need assistance!) There are 435 voting members in the US House of Representatives. Are 87 mentally ill? (No comment.) Including the President, about 375 folks are on the White House staff. Are 75 mentally ill? (Bet I can name one!)
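These extrapolations all follow the same mechanical pattern, which a short sketch makes obvious. The group sizes are the approximate figures quoted above; the point is how absurd a blanket 20% rate becomes when applied to any group at all:

```python
# Applying the flat "1 in 5" rate (and "1 in 5 of those untreated")
# to arbitrary groups, as the paragraphs above do. The group sizes
# are the article's rough figures, not official counts.
groups = {
    "NRA members": 5_000_000,
    "licensed drivers": 222_000_000,
    "House members": 435,
    "White House staff": 375,
}
for name, size in groups.items():
    ill = size * 0.20          # the blanket 20% figure
    untreated = ill * 0.20     # 20% of those, per the claim
    print(f"{name}: {ill:,.0f} 'ill', {untreated:,.0f} 'untreated'")
```

The formula never asks whether the group resembles the population the rate was estimated from, which is the whole objection.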
Enough about being mentally ill. Have you ever heard or read a pitch for a medicine that said something like, “Compared to a placebo this antidepressant produced 25% more improvement in mood”? You’re down in the dumps and you think, “Wow, I’m getting a prescription for this drug.” But what if you read the complete study and discovered that the placebo produced mood improvement in 4 of every 100 participants, and the drug produced improvement in 5 of every 100? That increase of 1 per 100 is a 25% increase, but hearing the 4 to 5 increase makes me doubt you would be ready to rush to the phone and ask your health provider for a prescription.
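The gap between a relative improvement ("25% more") and an absolute one ("1 more person per 100") can be made explicit with the paragraph's own hypothetical numbers:

```python
# Relative vs. absolute improvement, using the hypothetical
# 4-in-100 placebo vs. 5-in-100 drug figures from the text.
placebo_rate = 4 / 100
drug_rate = 5 / 100
absolute_gain = drug_rate - placebo_rate      # ~0.01, i.e., 1 more per 100
relative_gain = absolute_gain / placebo_rate  # ~0.25, i.e., "25% more"
print(f"absolute: {absolute_gain:.0%}, relative: {relative_gain:.0%}")
```

Advertisers naturally quote whichever of the two numbers sounds more impressive; here the relative figure is 25 times larger than the absolute one.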
What if you read about a drug that significantly improved mental stability, mood, and anxiety levels for in-patient psychiatric clients? You think, “Well, I’m having some issues with my mood and anxiety, so I’m getting some of this stuff!” Before you rush to get a prescription, remember that you might not be a member of the sub-population in the material you read; that is, you are not hospitalized. There was a similar fallacy in the OJ Simpson trial. A defense attorney pointed out that only 1 time in 1,000 does an abuser kill the abused. That rarity produces reasonable doubt about OJ’s guilt, right? Well, the statistic overlooks the fact that Nicole Simpson was not just a woman who was abused; she was an abused woman who was also murdered. In that latter sub-population, almost 90% of the time, the murderer is the abuser. Specifying the population is always important, and before you “buy into” some impressive-sounding numbers, ask yourself if you’re in the population specified.
OK, one more, this one having to do with the so-called lie-detector (see our blog of January 7, 2018 for some additional insight into this instrument). We’re not really dealing with a mental health issue here, but it clearly shows a statistical misrepresentation, a problem that could exist with any measurement technique.
Lie-detector (polygraph) advocates often claim accuracy levels of 90% and higher for the machine. That is, the device will identify a liar 9 times out of ten; only one will be missed. OK, let’s say you own a company with 500 employees, and some of them are stealing materials. That 90% accuracy sounds great so you hire a polygraph operator to find the crooks.
Let’s assume that 50 of your employees are stealing, and 450 are totally honest. (Their identities, of course, are known only to God!) Let’s further assume that everyone will say “No,” when hooked up to the machine and asked something along the lines of, “Have you ever stolen any company materials while at work?”
OK, 50 people are thieves and they are lying. The 90% machine will correctly identify 45 of them, but miss the other 5 (a crook is scored as honest). Meanwhile, of the 450 honest folks, 405 will pass the polygraph but the other 45 honest workers will be identified as liars (crooks). Note that the machine has identified a total of 90 people as lying thieves, and you fire those 90. How many should you have fired, though? Unfortunately, only 45 of the 90 you fired are really stealing from you. You fired them but you also fired 45 honest employees. That means 50% of the employees you fired should not have been. That 90% accurate machine gave you a true yield of 50% accuracy. Bummer, especially if you were fired!
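The whole polygraph example reduces to a few lines of integer arithmetic. This sketch just retraces the numbers from the scenario above (500 employees, 50 thieves, a test that is right 90% of the time in both directions):

```python
# The polygraph arithmetic from the example above.
employees, thieves = 500, 50
honest = employees - thieves                  # 450 honest workers

true_positives = thieves * 90 // 100          # 45 thieves correctly flagged
false_negatives = thieves - true_positives    # 5 thieves missed
true_negatives = honest * 90 // 100           # 405 honest workers cleared
false_positives = honest - true_negatives     # 45 honest workers flagged

flagged = true_positives + false_positives    # 90 people fired
precision = true_positives / flagged          # fraction fired who were guilty
print(flagged, precision)                     # 90 0.5
```

The driver of the result is the base rate: because honest workers outnumber thieves 9 to 1, even a 10% error rate on the large honest group produces as many false accusations as true ones.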
The lesson here? Whether we’re talking polygraph, mammogram, MRI, or any other evaluation test, don’t be swayed by accuracy numbers. There will always be false positives (you’re not one but the test says you are) and false negatives (you are one but the test says you aren’t). Again, think critically and be a wise “consumer” when choosing. Don’t passively accept the message; seek other opinions. You always want to make the best choice for yourself, and you want to do all you can to make sure that choice is an informed one.