Despite the fact that Gorski was talking about medical research in particular, I'm sure you would agree that things like publication bias and politics are just as prevalent in other fields as well. So don't act like research in medical journals is particularly tainted, but research from sociology or criminology journals is A-OK.
Interestingly, the medical field does have this problem to a larger extent than other areas, because much of the research is performed by MDs, who are not, strictly speaking, scientists, and whose grounding in the scientific method often leaves a lot to be desired. And, by the same token, much of the peer review is also performed by MDs, who may suffer from the same problem.
Unlike others here, at least you concede that the mainstream academic consensus is not on your side: it is a fact that there is extensive peer-reviewed research in mainstream, reputable journals supporting gun control.
<...>
But on to your theory of community-wide bias, which, I must note, puts you in the company of every other fringe group whose theories go against mainstream academic opinion.
Hold the phone. Your assertions only work if you look exclusively at the medical/public health community, while deliberately ignoring everybody else, including the field of criminology.
Especially criminology. Because, in actual fact, there is no "mainstream academic consensus" outside the medical/public health community, and it's open to question to what extent such a consensus exists even within that community. It's remarkable how much of the research that concludes, time and time again, that Guns Are Bad is produced by a fairly small number of researchers--names like Arthur Kellermann, John Sloan, Garen Wintemute, and the HSPH troika of David Hemenway, Matthew Miller and Deborah Azrael--and even when none of them are directly involved with a given study, you can bet they'll pop up in the references. So your "mainstream academic consensus" really consists of a comparatively small bunch of researchers from only one of the fields that takes an interest in this subject matter. That makes it a very different proposition from global warming or the theory of evolution, where just about everybody from a number of different fields broadly agrees that the phenomenon in question occurs (with disputes occurring mainly over how and why).
You should see how many MDs--even ones who perform research--are creationists, by the way.
If the gun research were really all crap, then in order to get it all published in the variety and quality of journals in which it has appeared, tens and maybe hundreds of highly accomplished and reputable researchers would have to be involved.
"Highly accomplished and reputable"? You're begging the question, since whether the researchers are, in fact, accomplished and their reputations are deserved forms part of the matter under discussion. It's not as if even the most reputable journals don't publish some utter bilge from time to time; last year, both Nature Neuroscience and the NEJM published articles that drew breathlessly positive (but, upon closer inspection, utterly unsupported) conclusions about acupuncture. Papers in which the authors' conclusions are not actually supported by their data are published with depressing frequency. I will again quote Amy Tuteur, MD, in the comments of
this blog post:
<...> a lot of bad analysis gets by reviewers at leading medical journals. That’s why it is so important to read a scientific paper in full, not just the abstract. All too often, the data in the paper does not support the conclusion in the abstract.
But more importantly, the crappiness of the research in question very often lies not in the research and the resulting paper itself, but in the way the findings are (re)presented in non-peer-reviewed media, such as accompanying press releases. It's quite possible for a paper to be based on sound methodology and provided with appropriate qualifying phrases (at least in the main body) about its limitations, only to have those limitations blithely ignored when the study is discussed outside the journal in which it was published.
By way of illustration, take the caveat in the Branas study that the researchers "did not account for the potential of reverse causation between gun possession and gun assault"; that's good science in that it tells the informed reader what the study's findings do not indicate. You might legitimately ask "if you didn't look at that, what was the point of this study in the first place?", but at least you know there are unresearched possibilities. But what do we get in the press release?
The research team concluded that, although successful defensive gun uses are possible and do occur each year, the chances of success are low. People should rethink their possession of guns or, at least, understand that regular possession necessitates careful safety countermeasures, write the authors. Suggestions to the contrary, especially for urban residents who may see gun possession as a defense against a dangerous environment should be discussed and thoughtfully reconsidered.
Such conclusions are utterly unwarranted given the researchers' failure to "account for the potential of reverse causation." That points to another problem with the medical/public health field's shakiness concerning the scientific method, incidentally: there is a strong tendency to base conclusions only on the findings of studies on a particular topic, without taking into account the general body of scientific evidence. That's why study after study gets published examining the purported efficacy of homeopathy, "therapeutic touch" and various other forms of "complementary and alternative medicine" and concluding that "more research is necessary," even though, for any of them to work, a large chunk of what we understand about the laws of nature would have to be discarded.
And so it is with the Branas study: decades of criminological research indicate that both perpetrators and victims of assaultive shootings are disproportionately likely to be individuals involved on a regular and frequent basis in criminal activity. It should hardly come as a surprise that such persons, considering it highly likely that a competitor might try to shoot them, would carry firearms, so you'd think it would be a fairly obvious step to look into whether that possibility might explain your findings. The same applies to just about every study asserting that "keeping a firearm in one's home/about one's person is associated with an elevated risk of (someone in the household) being shot."
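The point is easy to demonstrate with a toy simulation (all the probabilities below are invented purely for illustration, not taken from any real dataset). In the model, criminal involvement raises both the chance of carrying a gun and the chance of being shot, while carrying a gun has no causal effect on being shot whatsoever. A naive case-control comparison of the kind criticized above still finds a large "association" between gun possession and gun assault:

```python
import random

random.seed(42)

# Toy model (all probabilities invented for illustration):
# "criminal involvement" raises both the chance of carrying a gun
# and the chance of being shot; carrying a gun has NO causal effect
# on being shot in this model.
N = 100_000
counts = {(g, s): 0 for g in (0, 1) for s in (0, 1)}

for _ in range(N):
    criminal = random.random() < 0.05
    carries = random.random() < (0.60 if criminal else 0.05)
    shot = random.random() < (0.20 if criminal else 0.002)
    counts[(carries, shot)] += 1

# Naive case-control odds ratio: odds of being shot for gun carriers
# versus non-carriers, ignoring criminal involvement entirely.
a, b = counts[(1, 1)], counts[(1, 0)]  # carriers: shot / not shot
c, d = counts[(0, 1)], counts[(0, 0)]  # non-carriers: shot / not shot
odds_ratio = (a / b) / (c / d)
print(f"odds ratio (carriers vs non-carriers): {odds_ratio:.1f}")
```

The naive odds ratio comes out far above 1 even though, by construction, the gun does nothing. Stratifying by criminal involvement would collapse the association toward 1, and that is exactly the analysis the quoted caveat admits was never done.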
You see, the easiest thing in the world is to take a statistical study, crunch some of the numbers in a different way, pick out some wording you don't like, and conclude that it is bunk. I don't know if you thought up this stuff yourself, <...>
If it's so easy, why would you suggest I hadn't, except to get in a gratuitous denigrating remark?
But it has all the hallmarks of a lone internet loon attempting to discredit mainstream peer-reviewed research. Here are some common signs:
-Mumble something about "causation" and "correlation".
-Claim that the authors forgot to control for something.
-Bring up a past and irrelevant example of improper use of statistics.
The only biggie you seem to have missed is the use of official-sounding logical phrases like "modus ponens" or "fallacy of composition".
Leaving aside more gratuitous denigrating remarks (how does one "mumble" while typing, anyway?), by a staggering coincidence, those are also the common signs of a valid critique.
<...> believe me, anyone with even a modest background working with data can very easily whip up a persuasive-sounding critique of a study, enough to seem convincing to a lay audience and send true believers into ecstatic convulsions.
Have you considered the possibility that such critiques may sound persuasive because they're correct? You don't need a fashion management degree to see that the Emperor is naked.
At the end of the day, the peer review system is indeed flawed, but it is far superior to the "some crap I read on the internet" standard.
Certainly, but it would be a hell of a lot more convincing if the medical/public health research community ever got beyond retrospective studies. Because that's how scientific progress is made: you conduct retrospective studies (which are not very rigorous but are comparatively inexpensive) to develop hypotheses, and then you test those hypotheses using more rigorous (and more expensive) methods, such as prospective studies or, in medicine, randomized controlled trials. But in over three decades, the medical/public health research community never seems to have developed any hypotheses, much less put them to the test.
If there were really errors as severe as you and the pro-gun sites would have us believe (and not in just one paper, but in every gun paper!), then it would almost surely not have gotten past the referees.
What you've got is essentially an argument from incredulity--you refuse to believe that supposedly reputable researchers and medical journals would put out study after study that is essentially garbage--but to maintain that incredulity, you have to hand-wave away every example of studies that did get published and are garbage.