Ch. 8: Evaluating and Interpreting Information


·         Appreciate the Role of Critical Thinking in Evaluating Research Findings
o   The critical thinking process consists of a series of questions, each requiring a decision, and an error can creep in at any point along the way.
·         Assess the Dependability of Information Sources
o   You need to know how current, reputable, and trustworthy a source is before assuming any of its information is usable. The newest information is not automatically the best, and the internet, while useful, can be used by anyone to spread false information. Because online information (including databases) is compiled by humans, it can reflect the biases, priorities, and interests of whoever put it together. Groups that fund research can commission studies designed to support their positions. Think about an oil company that pays for environmental research: its findings might paint a rosier picture than research funded by an environmental group, whose findings might skew negative. The truth likely falls somewhere between the two. It is important to consider what a group stands to gain from the research. Also, never rely on a single source for all of your information; check it against other, comparable sources.
·         Assess the Quality of your Evidence
o   First you have to decide whether there is enough evidence to reach a judgment at all. Then you have to distinguish between hard evidence (facts, statistics) and soft evidence (opinions, speculation, and unscientifically analyzed data). If claims seem exaggerated, they probably are. Check how the information is framed: which sounds more positive, a 90-percent survival rate or a 10-percent mortality rate? Is the issue clouded by over-the-top language, euphemisms, or demeaning language?
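The framing point above can be made concrete with a little arithmetic. Here is a minimal Python sketch (the patient counts are invented for illustration) showing that a survival rate and a mortality rate are the same data framed two ways:

```python
# Hypothetical outcome data -- the counts are invented for illustration.
patients = 200
survived = 180

survival_rate = survived / patients                # 0.90
mortality_rate = (patients - survived) / patients  # 0.10

# Same underlying numbers, two framings with very different emotional weight.
print(f"Survival rate:  {survival_rate:.0%}")   # prints "Survival rate:  90%"
print(f"Mortality rate: {mortality_rate:.0%}")  # prints "Mortality rate: 10%"
```

Neither framing is false; the choice between them is rhetorical, which is exactly why framing deserves scrutiny.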
·         Interpret Your Findings Accurately and Without Bias
o   Now that you’ve done the research, you have to put the puzzle together. Interpreting research doesn’t necessarily yield a solid yes or no; in fact, findings usually suggest something in between, which is why critical evaluation matters.
·         Understand that “Certainty” in Research is an Elusive Goal
o   There are three levels of certainty in research. A conclusive answer is the truth, the way things actually are; it is what people will “ultimately” agree on. For centuries the belief was that the earth sat at the center of the universe; the ultimate truth is that it does not. For the most part, research leaves us with a belief, a theory, or what is called a present understanding. While conclusive answers are desirable, they are rare. Most answers are either probable or inconclusive: probable answers have the greatest potential to be true, while inconclusive answers show that the truth will be more complex and difficult to find than anticipated. How certain are we?
o   Assumptions are part of everyday life as well as research; we rely on underlying assumptions to do research at all. If every time you wanted to drive you had to re-invent the wheel, you probably wouldn’t go many places. The same holds for research when we assume things such as mice and humans being biologically similar. Assumptions become a problem only when we fail to evaluate them, so ask other people to help you identify your own. Personal bias is another issue: everyone is biased in some way. You can manage personal biases by examining your attitudes, so that you don’t end up rationalizing your bias in your research. It is also important to consider other possible interpretations of the same information.
·         Recognize Common Errors in Reasoning and Statistical Analysis
o   Interpreting information leads to inferences based on what we already know. Inferences can be useful as long as they are evaluated: Can these findings be generalized? Does X really cause Y? Can we trust these numbers, and what exactly do they mean? Then there are the three major reasoning errors. Faulty generalization is based on limited evidence (the Iraq War); without critically assessing a piece of information, we can reach a completely inaccurate conclusion, and not all evidence reveals some greater truth. Faulty causal reasoning seeks to explain why something happened or what will happen but ignores other causes and effects, confuses correlation with causation, or results from rationalizing. A definite cause can be apparent, like how the moon’s gravity causes the tides to rise and fall. Unless you’re Bill O’Reilly…
                "...tide goes in, tide goes out. Never a miscommunication. You can't explain that."
o   Faulty statistical analysis occurs when someone attempts (unsuccessfully) to determine the meaning of collected numbers. While numbers seem more accurate and objective than other evidence, faulty analysis can make them misleading. There are several common fallacies.
§  The sanitized statistic is a statistic that has been manipulated to obscure facts.
§  The meaningless statistic is an exact number used to quantify something vague or inexact.
§  The undefined average reports a mean, median, or mode as “the average” without saying which, even though each can give a very different impression of the same data.
§  The distorted percentage figure is a percentage reported without the underlying numbers (such as the sample size), or with the margin of error ignored.
§  The bogus ranking occurs when items are compared on the basis of poorly defined criteria.
§  Confusion of correlation with causation: correlation mathematically measures the strength of a relationship between two things; causation means one thing actually produces an effect in the other. A strong correlation alone does not establish causation.
§  The biased meta-analysis occurs when a group of studies is analyzed together; because a human compiles the information, bias can be reflected in which studies are included and which are omitted.
§  The fallible computer model is fallible because its predictions are only as good as the complex assumptions built into it.
§  Misleading terminology is a problem because terminology often means something different or something more than what the common person might realize.
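A couple of these fallacies can be demonstrated numerically. The Python sketch below (with invented salary and seasonal figures) uses the standard statistics module to show how the “undefined average” lets one data set yield three very different “averages,” and how two series can correlate perfectly with no causal link between them:

```python
import statistics

# "The undefined average": one data set, three different "averages".
# Hypothetical salaries (in $1000s) at a small company; numbers invented.
salaries = [30, 32, 32, 35, 38, 40, 250]  # one executive salary skews the mean

print(f"mean   = {statistics.mean(salaries):.1f}")  # pulled up by the outlier
print(f"median = {statistics.median(salaries)}")    # resistant to the outlier
print(f"mode   = {statistics.mode(salaries)}")      # the most common value
# A report citing only the mean (~65.3) paints a far rosier picture than
# the median (35), yet each is legitimately "the average".

# "Confusion of correlation with causation": two quantities that both rise
# in summer correlate perfectly even though neither causes the other.
ice_cream_sales = [10, 20, 30, 40, 50]
drownings = [2, 4, 6, 8, 10]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from its definition."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson_r(ice_cream_sales, drownings):.2f}")  # prints "r = 1.00"
# The shared cause (hot weather) drives both series; the correlation
# itself proves nothing about ice cream causing drownings.
```

The mean/median gap is why “average salary” claims deserve the follow-up question: which average?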
·         Understand that Research Carries the Potential for Error
o   Research is conducted by humans and therefore always carries the potential for error. A survey is reliable only when it produces consistent results, and valid only when it measures what it is intended to measure. Each type of research study also has its own limitations. Epidemiological studies are limited because they reveal correlations rather than prove causes. Laboratory studies are limited because mice are not people and because reactions in isolated cells don’t necessarily apply to an entire organism. Human exposure studies are limited by the particular groups that take part in them.
o   The public is often given a distorted picture because of deceptive reporting. Stories are sometimes suppressed to keep the public from knowing how bad a situation may be.
