Recently I shared a study on social media (Ghorbanpour et al.). It seemed to be an unusually low-quality paper, and soon after I posted it I was informed that it was published in a suspected “predatory journal” — a fraudulent journal that will publish anything for pay (literally anything, even gibberish). They are scams, ripping off academics who are desperate to publish or perish. I just blogged about predatory journals a few weeks ago:
The scientific literature is severely polluted with actual non-science, with an insane number of papers that were published under entirely false pretenses, the fruit of fraud.
Although I’ve been aware of this debacle for several years, I have not paid close enough attention to know how to identify predatory journals. I assumed I didn’t have to worry about it. But the journal in this case was on a list of sixty suspected predatory journals in the rehabilitation field specifically, put together by Manca et al. That’s my turf! If that list is trustworthy (which seems likely), it’s a depressing but invaluable resource for me. Since then I’ve learned about other lists (see PredatoryJournals.com and BeallsList.weebly.com).
My next job was to audit my own bibliographic database for those bogus journals. The bibliographic database for PainScience.com contains 2450 papers. How many are the spawn of predatory journals? How heavily have I relied on their unreliable conclusions? Not a comfortable chore! But an unavoidable one.
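The audit itself is mechanical enough to sketch in a few lines of code: normalize journal names, then cross-check each bibliography entry against a blacklist. This is only an illustration of the idea, not my actual tooling; the journal names and citation keys below are placeholders (except the two journals actually named in this post).

```python
# A minimal sketch of the audit: cross-check bibliography entries against
# a blacklist of suspected predatory journals (e.g. Manca et al's list).
# The entries and the second blacklist journal are placeholders.

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so minor formatting
    differences in journal names don't hide a match."""
    return "".join(ch for ch in name.lower()
                   if ch.isalnum() or ch == " ").strip()

blacklist = {
    normalize("Journal of Physical Therapy Science"),
    normalize("Some Other Suspect Journal"),  # placeholder entry
}

# A bibliography as (citation_key, journal) pairs -- placeholder data.
bibliography = [
    ("Ravichandran2017", "Journal of Physical Therapy Science"),
    ("Gross2015", "Cochrane Database of Systematic Reviews"),
]

flagged = [key for key, journal in bibliography
           if normalize(journal) in blacklist]
print(flagged)  # citation keys that need review
```

The normalization step matters more than it looks: bibliographic databases are full of inconsistent capitalization and stray punctuation, and an exact-string comparison would miss real matches.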
Here’s what I found lurking in my bibliography
- 14 journals on Manca’s list appear to be indexed by PubMed (making it much more likely that they’d be in my database)
- 11 papers in my bibliography are in one of those journals. Not too bad…
- 9 of those papers are from one journal! The Journal of Physical Therapy Science.
- 8 of those were rated by me, all mediocre at best. (I use a five-point scale for overall paper quality/credibility. I hadn’t given any of them better than a 3, and two had been given 2 stars, which is a bit unusual. The mediocrity of a 3-star rating embraces lots of common, normal flaws in studies and papers: all science is flawed and limited, so typical methodological inadequacies are not cause for a demotion.)
- 4 had proper summaries (meaning I invested significant time in understanding and explaining the paper). I’d specifically noted that one of them was poorly written.
Obviously I was well aware that these were shabby papers. Which isn’t really surprising, because these days — for many reasons — most papers are “guilty until proven innocent.” I deal with bad papers all the time. I just didn’t know that these papers weren’t really published in any meaningful sense.
And how were those bad sources cited?
My real concern was that I might discover that I’d used some worthless studies to support a personal bias. Was I using non-science to make any important points?
Nope! I mostly passed this test. Here’s how they were used:
- Both Iqbal and Kim are cited on two pages as examples of “a handful of very weak studies” that support deep cervical flexor training for neck pain. I checked the only other reference I have for that, Gupta. It wasn’t on Manca’s list, but the journal’s website looked super sketchy, and I was quickly able to find it on another list of suspected predatory journals. (All of these journals have a web presence, since they have to be findable to function as a scam, and most of them look like shite. So it’s often just a matter of a quick peek at the website. You know how a lot of phishing emails are meant to look like they’re from a reputable company, but are actually hilariously clumsy, with obvious errors and glitches that you would never find in a real email from a real company? Predatory journal websites are like that. Dog help us if the scammers ever figure out how to look professional.) So all three are worthless, and there is literally no real science supporting DCF training. I am Jack’s complete lack of surprise.
- Both Hyong and Yoo are also double-cited, along with three others, to support the claim that “just the right exercises do indeed preferentially engage the [vastus medialis muscle]”. Collectively I characterized those studies as “all admittedly small, but also all quite straightforward and probably adequate.” But not those two! So a bit wrong there, but not horribly.
- Cheng is just barely cited (at the end of a footnote for another citation) as having conclusions “similar to” Gross, which is in turn presented as a mostly inconclusive “garbage in, garbage out” review of studies of exercise for neck pain. So no real harm done there.
- Both Ravichandran and Amin are in a list of 17 studies of massage for trigger points. The low quality of Amin is made clear even in summary: “no control, mixed results, poorly written paper.” Ravichandran is presented as just a “small negative RCT.”
That last one is the most interesting of the batch: I actually read Ravichandran et al quite carefully just a few months ago, and wrote a thorough summary of it, in which I slammed the authors for spinning their data to make the results look more positive than they were. I think it’s “one of the few clearly negative trials” of massage for trigger points, which rubs my bias the wrong way: I want to believe that massage helps trigger points! So I’m actually smugly pleased to see this paper discredited.
So that’s a dozen citations to papers that are completely useless, but — phew — I didn’t rely on any of them heavily for anything that mattered.
The remaining papers, which I had not yet gotten around to citing, and now never will:
- “Effects of McGill stabilization exercises and conventional physiotherapy on pain, functional disability and active back range of motion in patients with chronic non-specific low back pain”
- “Core strength training for patients with chronic low back pain”
- “Effects of friction massage of the popliteal fossa on blood flow velocity of the popliteal vein”
- “Analysis of vastus lateralis and vastus medialis oblique muscle activation during squat exercise with and without a variety of tools in normal adults”
All of these will remain in my bibliography, but their quality will be prominently questioned, and all will have the 1-star ratings that I apply only to “bad example” papers. I’ll remove most of the citations to them; the few that remain will serve as examples of the lack of support for a claim.
No doubt there’s more
I did my initial search for predatory journals in my own database before I discovered other lists of predatory journals, so I have more auditing to do, and I fully expect to find more of these festering pustules in my bibliography. However, based on these preliminary results, I suspect I won’t be too horrified by what I find.
And I will now be systematically checking the origins of every significant new citation. PainScience.com will never knowingly cite anything from a predatory journal ever again, except as a bad example.
Could these papers have some value?
Is it overkill to disqualify them entirely? It is theoretically possible for a good paper to end up in a predatory journal, but there’s no way for us to separate those from the rest. I think publication in a predatory journal almost completely undermines the credibility of a paper. Even in legit journals, with flawed but earnest peer review, we have an appalling problem with underpowered crappy little trials, the p-hacking epidemic, and so on. Peer review is deeply flawed, and in some journals it’s not much better than the rubber stamp at a predatory journal, but in any half-decent journal it’s a lot better than nothing.
Without it, I think the value of a paper and the credibility of its authors drops to near-zero. They might have good intentions, but they certainly don’t have good judgement. It casts doubt on the value of all their research, wherever and whenever it is published — it’s a serious stain on their record.
Taking out the trash: purging predatory journals from my bibliography
Originally Published At: Pain Science