Looks like it boiled down to the IRB's take on the matter.
The school asserted that Boghossian had unethically
conducted research on human subjects with his
experiment. According to the school’s Institutional
Review Board, Boghossian would have needed to obtain
“informed consent” from the individuals reviewing his
hoax articles in order for his actions to have been
considered ethical.
Does this catch-22 mean no more investigative meta-research would be tolerated by universities? Even if those human subjects are kept anonymous, are they still afforded IRB protections? Is there any way to establish a priori that those subjects aren't being exposed to anything high-risk? Or is even asking that a bad direction to take?
I wonder how this could be done ethically but in a way that doesn't allow peer reviewers to cheat. One thing that comes to mind is ABX testing. Show peer reviewers both a paper which they should accept, and a paper which they should reject, and then give them questionable papers to classify.
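A minimal sketch of how such a control-paper test could be scored, assuming each reviewer is given a few known-good and known-bad controls alongside real submissions (the reviewer names and verdicts below are invented purely for illustration):

    # Score reviewers against control papers of known quality.
    # All names and decisions here are made up for illustration.
    controls = {
        "ctrl_good_1": "good",   # known-good control: should be accepted
        "ctrl_bad_1": "bad",     # known-bad control: should be rejected
        "ctrl_bad_2": "bad",
    }

    decisions = {
        "reviewer_A": {"ctrl_good_1": "accept", "ctrl_bad_1": "reject", "ctrl_bad_2": "accept"},
        "reviewer_B": {"ctrl_good_1": "accept", "ctrl_bad_1": "reject", "ctrl_bad_2": "reject"},
    }

    for reviewer, verdicts in decisions.items():
        correct = sum(
            1 for paper, verdict in verdicts.items()
            if (controls[paper] == "good") == (verdict == "accept")
        )
        print(f"{reviewer}: {correct}/{len(verdicts)} controls classified correctly")

The same tally, aggregated across a venue, would give a rough false-accept rate for known-bad papers without singling out any individual reviewer.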
At what point does it become unethical simply to be a peer reviewer for a broken journal? That's the implicit question that should be asked in this sort of ethics review. It does no good to punish just the submitter of papers if the entire journal is unhealthy.
Low-impact-factor journals that accept papers claiming 1,000 hours of ethnographic field study aren't "broken" just because the data was fake, because it isn't the premise of peer review to reliably spot fake data.
You can make the argument that there are too many low-impact journals and schools shouldn't be allocating resources towards keeping them going; I don't think it's a super strong argument but it's at least coherent. But more importantly: it won't have anything to do with this hoax; if you're willing to fake data, you can get accepted in higher-impact journals outside the social sciences as well.
I'm not sure how fake data is substantially different from any other part of a fake paper. Just like how randomly-generated sentences might sound reasonable, faked data might look like real data. In both cases, we expect peer review to engage and to consider how data was sourced. This happened in the "Who's Afraid of Peer Review?" incident [0], where the offending papers were intentionally designed to have many red flags that would have helped reviewers understand that the data was bogus.
More worryingly, data cannot model itself, so the quality of data should be largely irrelevant to any models which purport to explain it; as a result, a paper's contribution to science needs to be structured to work with other results, and cannot simply proclaim that its modelling is correct because some given equation fits the observed data. To pick a situation where your expertise shines, imagine that somebody submits a paper about a 3-SAT solver, and they include data that shows their solver doing not just extremely well on standard problem sets, but asymptotically scaling better than exponentially. The reason that you might doubt the honest presentation of the data is because you know, from having studied the field, that such behavior is unlikely.
No, we don't expect peer reviewers to investigate the sourcing of data. A journal reviewer might be assigned a workload of dozens of papers to be completed, in snippets of spare time, over the course of a month or two. Peer reviewers aren't equipped to do that kind of vetting; that's what replication is for, and replications are their own research projects. It's really surprising what kinds of expectations people seem to have about peer review and what it can reasonably accomplish.
"Who's Afraid Of Peer Review" targeted fee-charging open-access journals, which are financially incentivized to accept random papers. Journal reviewers are generally unpaid postgraduate academics.
Spot on about unreasonable expectations; omne ignotum pro magnifico est ("everything unknown is taken as grand") always rears its head when we guess at unfamiliar processes and procedures. It's funny how human nature, when it assumes, assumes big, grand, labor-intensive things! I'm frequently guilty of it myself. The point about how little time and cost peer review can reasonably absorb is news to me, and it makes much more sense of the level of vetting that's actually employed.
No, that's not the only difference. In fact, the two projects were more dissimilar than similar.
Notably, the point of "Who's Afraid Of Peer Review" is that no reasonable reviewer could accept the paper for any reason; the papers (they were all practically identical) contained self-contradicting data; in fact, the plots in the paper were contradictory, so you couldn't even just skim the abstract and the graphics and then reasonably accept the paper. And, of course, the WAOPR papers were claiming to have cured cancer.
The WAOPR papers were designed to be trivial to spot. That's not the case for the S2 papers; the "dog park" paper, for instance, intricately describes 1000 hours of field work.
And, of course, there's the distinction that WAOPR papers targeted commercial services asking for money to publish papers, not volunteer peer review time.
You're saying it's reasonable to publish "Mein Kampf rewritten using woke language", and "a study on how men should regularly use dildos on themselves to improve their social attitudes"?
If it was just the dog park paper, I could see your point, but come on, those papers are completely farcical.
I don't want to quibble with your experiences of peer review; your experiences are valid. But what does a journal provide, then, exactly? Are they relics of the era before cheap self-publication?
"A panel of experts from this field read this paper and felt it likely to make significant contributions". For high-impact journals, you can add "so much so that it was selected to be among the 25% of submissions this cycle to be published".
What peer review does not say is "a panel of experts carefully vetted this paper to ensure that its conclusions are accurate". In fact, part of the premise of replication projects is that peer review doesn't say that.
I've only been a reviewer a couple times, and only in computer science. Other people on HN have experience reviewing for other hard science venues. Maybe their experiences are different. But I think it's notable that you don't hear that in comments about what this hoax exposed; has there been any HN comment from a reviewer saying that they were expected to rigorously vet submissions, the way a PhD board does with a thesis?
The problem is not so much the system as how it's presented: the media presents "peer-reviewed study" as a gold standard, practically synonymous with correct. And unfortunately academia absolutely encourages this practice.
The message that peer review is largely meaningless hasn't really been made by academia, for obvious reasons, and so stupid papers getting accepted by journals will continue to be interesting and newsworthy until people get the message that peer review doesn't mean much.
The world will then move on to "replicated" as a gold standard. This will be better, but not by much. Just a few weeks ago Imperial College London published a press release claiming their Report 9 results from their COVID-19 simulator had been replicated. It was only worth a press release because, after the code was open-sourced, it was discovered to be filled with non-deterministic behaviour, even with fixed RNG seeds.
Unfortunately the press release was fraudulent. The report they cited as evidence of replication was by a friendly academic. He said he was able to replicate the results, then admitted every number he got out was different, some by 10-25%. This is the output of a computer simulation, so the allowable difference is 0%. Despite this, Nature and other outlets proceeded to report that ICL's COVID model had been "replicated".
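To make the 0% point concrete: a simulation that is genuinely deterministic for a fixed seed reproduces its numbers exactly on every run, so there is no tolerance band to argue about. A toy sketch (not the ICL model, just an illustration of the property):

    import random

    def toy_epidemic(seed, steps=100):
        # Toy stochastic process standing in for any seeded simulation.
        rng = random.Random(seed)
        infected = 1.0
        for _ in range(steps):
            infected *= 1.0 + rng.uniform(-0.05, 0.15)
        return infected

    run1 = toy_epidemic(seed=42)
    run2 = toy_epidemic(seed=42)
    assert run1 == run2  # same seed, same code: identical output, 0% difference

If two runs with the same seed differ at all, the code is non-deterministic, and "the numbers came out close" is not a replication of the computation.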
In the end, academia is strongly incentivised to appear credible, not to actually be credible. Being credible requires that your findings can reliably be turned into something useful, because in the realm of pure theory you can never be sure your findings actually hold up in reality. Corporate research has this attribute (eventually); academic research doesn't, because academics are rewarded for writing clever-sounding papers, not for being correct.
I'm inclined to say the biggest piece of value peer review adds is that it ensures that new research takes into account the most relevant past research. It also means that the argumentation in most peer-reviewed research passes appropriate "smell tests", which is rather weaker than saying the argumentation is sound.
> Does this catch-22 mean no more investigative meta-research would be tolerated by universities?
It's not a Catch-22, it's plain old ultra-wokeness & wrongthink. Similar studies on "human subjects" (e.g. sending fake CVs to employers to spot biases in hiring) are lauded as exemplary and important just because they present socially favourable results.
Social studies are one big hoax, most research is p-hacked & fake, and the rest is heavily politicized.
The PSU Grievance Studies Affair [1] had the same objective as the 1996 Sokal Affair: to test peer review by submitting obviously bad papers that should be rejected, and seeing how many of them are accepted.
The motivation behind this is entirely reasonable. Bad "true negative" as well as known-good "true positive" articles ought to be regularly submitted to every peer review process as a test of accuracy. A great example of a similar idea was the 2014 NIPS consistency experiment [2].
However, there is a problem with the "dog rape" hoax article [3] as a test of the peer review system. Specifically, the author in that paper claims to have spent over 1000 hours carefully cataloguing the behavior of ten thousand people and dogs over the course of one year. The paper then proceeds to produce silly but surprising statistics: female dogs are 70% more likely to be leashed than male dogs, 100% of dogs with shock collars are male, 847 instances of dogs fighting were observed, and so on.
The problem here is that the data was falsified: the observations and experiments claimed never happened, the numbers were all made up. The article was likely accepted because of its data, but the data was a lie. If this data had been real, it might have been a minor but useful study relevant to economists, sociologists, or urban planners, regardless of how silly or made-up the conclusions at the end were.
Peer review works based on the assumption that the author is telling the truth about what experiments they conducted and what numbers they measured. The grievance studies authors could have submitted fake and silly hoax papers without falsifying data: that's what Sokal did, and that would have been a valid experiment testing the peer review system. But that is not what the authors chose to do.
An ongoing misconception here is that academic journals only publish 'facts'
Not really.
They publish research articles. Depending on the field, and the field's philosophical perspective on the construction of knowledge, those can take differing forms. Those differing forms are not just valid but important, and understanding the differences is also important. An article about a medical treatment necessarily does different things, and looks different, than an article in a humanities journal (for example). They make meaning from information in different ways - and that is what all forms of research do: make meaning. Research at its core is not about the discovery of facts...this is basic philosophy of science, basic Thomas Kuhn. Standards like replicability may be of use for making meaning in some fields, but they are not necessarily equally useful in others, often because the level of contextual situating that needs to occur reaches towards the impossible.
Serious members of the field would not have taken the Grievance Studies articles as capital-F Fact; they would instead have interpreted them as a perspective, an argument, an interpretation. Specifically, the entire field of critical studies exists, in varying forms, to critique the ways in which the "discovery of facts" is a reductivist way of looking at meaning-making that privileges certain perspectives over others - by treating some perspectives as reality.
So the critique here, looking back at Sokal as well, isn't "hahaha, I pulled one over on you"; it's "we assumed you were giving us a new perspective in good faith that we could collectively learn from," and "now you are standing here laughing at us because you acted like a jerk." It's two different worldviews: one willing, by choice, to be open to outside critique and perspectives, and the other supremely self-confident that critique and perspective are unnecessary.
In effect, it's like a legal opinion that separates matters of interpreting law and matters of finding fact.
As a Portland State donor, I'm disappointed at PSU's refusal to back legitimate scientific inquiry. Sending bogus garbage to journals occasionally is a reasonable way to test that peer-review systems are working.
The events I normally donate to support are canceled due to the pandemic, though, so I probably wouldn't donate anyway. Guess I can't really take a moral stance here.
If Mark McLellan remains VP for Research and Graduate Studies, it casts a cloud over academic excellence at PSU. What is his background? How long has he been in this position? What would cause him to make such a politically driven bad decision?
It is inconceivable that someone with so little understanding of the toxic academic publishing environment could be influencing the education of 26,000 students.
As an aside for everyone coming to the comments before reading the article: PSU in the title is Portland State University, not Penn State University, which is what I immediately thought of when I saw PSU.
Basically, Portland State University is retaliating against a professor who showed that the peer review and acceptance process for various journals was woefully broken.
Understood, but in the process he claimed to do a large experiment that he didn't do. He faked data, he broke the trust that the entire research enterprise is built on.
I think you're going to find that most people commenting on this have never reviewed a journal article and don't understand the process. You see it in replication crisis threads as well: the belief that academic research is premised on reviewers replicating results before things get published.
The idea that academic journals are based on a presumption of good faith is totally alien to a lot of HN commenters.
Reasonable critics aren't expecting reviewers to replicate the results, but the dog park paper illustrates that reviewers can't even recognize clearly bogus data. I mean, have you even read the methods the paper said were employed?
I read several of the hoax papers, and skimmed one that included dog park data. Which paper are you referring to, where was it submitted, and where was it accepted? The hoax authors were exceptionally dishonest in their presentation of the results: virtually all of their papers were rejected, the accepts were generally in very low-impact venues, and the papers that got accepted were generally not the lurid ones they highlighted in their summary. I ask for specifics because it is not an interesting research result if they created a paper with clearly bogus data that was then rejected.
The problem with the experiment, of course, is that reviewing takes a fuckload of time and effort, and most fields barely keep up with the legitimate workload they have. They are literally taking time and resources from program committees, and they do have IRB obligations in order to do that.
It was accepted by Gender, Place & Culture. In fact it was not only accepted, it received an award for exemplary scholarship.
Out of the 20 papers submitted [1], 9 were rejected, 7 were published, 3 were asked to revise and resubmit for publication, and 1 was still under review when the hoax was revealed. The fact that totally bogus papers had roughly a coin-flip chance of being accepted (7 published plus 3 revise-and-resubmits out of the 19 that received a decision) is astounding.
I'm astounded that a reviewer didn't bat an eye at someone claiming to have observed a thousand instances of dog-humping and to have identified the genders of the dogs as well as the sexual orientation of the owners - and that the journal published the paper anyway when the authors claimed to have accidentally thrown away the original data. And to put the cherry on top, it decided to give this paper an award for exemplary scholarship.
They didn't claim "a thousand instances". They claim 1000 hours of field study. That number shocks people, but over the course of a year, it's a half-time job for a researcher, and so your argument is left at "that seems like a dumb thing for a researcher to allocate half their time to". I mean, sure, I agree, but so what? That's not a very interesting argument, or a damning one.
I think you're missing the series of events. The authors never submitted any false data. They claimed that this was what their research found, and when the journal asked for the actual research data, the authors claimed that they had written the findings down on pen and paper and lost the only physical copy. The journal published the paper anyway, without ever even seeing the data.
What you're saying now is that you think it's standard practice for peer reviewers to request the raw data for papers they're reviewing? The paper authors signed an actual contract affirming that they hadn't fabricated the data.
It's normal practice for publications to publish papers from authors who explicitly say that they don't even retain the data used to produce the paper? If that's the case what's even the point of peer review?
At this point it seems like you're saying that peer review doesn't actually involve any sort of review. Fortunately, though, other academics don't share your experience that peer review is incapable of identifying faulty research. Because if it did, then there'd be little to no reason to put trust in academia.
Adversarial review with respect to the veracity of the paper author? Unless you're making claims that would upend the field, no, it does not involve any of that sort of review. "I spent 1000 hours taking notes in a dog park" is not an extraordinary claim.
We're saying the same things back and forth to each other at this point and can probably wrap it up.
I didn't say adversarial review. I said publishing papers from authors who explicitly disclaim that they do not have access to their own data any more. Because that's what this publication did. They asked for the data, the hoaxsters claimed that they lost the data, and the journal published the paper anyway.
If what you say is true, that reviewers don't bother reviewing the actual data, then mistakes like missing a decimal point and reporting figures an order of magnitude off would not be caught. That would be astounding, but fortunately most of my coworkers who have experience in academia do not corroborate your claim that reviewers don't bother to look at the data used to produce the paper.
I work in academia in a STEM field. I've never seen or heard of a reviewer asking for access to the raw data used to produce a paper. Reviewers typically operate under the assumption that you're not trying to deliberately mislead them about how you collected and evaluated your data (and I think they have to, at least with how the system currently works).
What often happens is that, although the time they get to spend on a single paper is limited, reviewers still come up with important criticisms that end up leading to substantial changes (sometimes multiple rounds of them) or even an outright rejection.
> The hoax authors were exceptionally dishonest in their presentation of the results: virtually all of their papers were rejected, the accepts were generally in very low-impact venues, and the papers that got accepted were generally not the lurid ones they highlighted in their summary.
I think the other poster summarizes quite well why this charge of dishonesty is ironic. I'll just add a link to the paper itself if you'd like to read it [1], and review one part:
> From 10 June 2016, to 10 June 2017, I stationed myself on benches that were in central observational locations at three dog parks in Southeast Portland, Oregon. Observation sessions varied widely according to the day of the week and time of day. These, however, lasted a minimum of two and no more than 7 h and concluded by 7:30 pm (due to visibility). I did not conduct any observations in heavy rain. [...] The usual caveats of observational research also apply here. While I closely and respectfully examined the genitals of slightly fewer than ten thousand dogs [...]
So in the span of one year, this lone "researcher" claims to have "closely" inspected the genitals of ~10,000 dogs. That's 1,000 hours to inspect 10,000 dogs, which amounts to 10 dogs per hour, during which they took detailed notes on the dogs' and owners' names, genders, and other associated information while documenting the dogs' behaviour (6 minutes per dog+owner!). That stretches credulity, to say the least.
Also, for the data to be meaningful, there must have been at least 10,000 unique dogs visiting these three dog parks during the given time span. That also beggars belief, even for Portland, which features a high rate of dog ownership. Portland has ~264,000 households; if ~70% of households own a dog, that's ~185,000 dogs spread across ~32 dog parks, or roughly 5,800 unique dogs per park on average.
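Running the back-of-the-envelope numbers (using the rough figures quoted above, which are assumptions rather than verified statistics):

    # Rough figures quoted above (assumptions, not verified statistics).
    hours_claimed = 1_000
    dogs_inspected = 10_000
    dogs_per_hour = dogs_inspected / hours_claimed      # 10 dogs per hour
    minutes_per_dog = 60 / dogs_per_hour                # 6 minutes per dog+owner

    households = 264_000
    dog_ownership_rate = 0.70                           # assumes one dog per owning household
    dog_parks = 32
    dogs_citywide = households * dog_ownership_rate     # ~185,000 dogs
    dogs_per_park = dogs_citywide / dog_parks           # ~5,775 unique dogs per park

    print(dogs_per_hour, minutes_per_dog, round(dogs_citywide), round(dogs_per_park))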
The basic math just doesn't add up. And then the researcher disclaims their ability to determine canine breeds, but makes claims like, "NB: the phrase ‘dog rape/humping incident’ documents only those incidents in which the activity appeared unwanted from my perspective – the humped dog having given no encouragement and apparently not enjoying the activity."
So apparently they have quite a bit of insight into canine behavioural psychology. There is a lot about the methods and the data that makes no sense, and this paper received accolades.
> They are literally taking time and resources from program committees, and they do have IRB obligations in order to do that.
That's a legitimate concern. Unfortunately, the hoax itself reveals that these program committees may not be doing much meaningful work with those resources anyway, which seems like a far more important matter.
Edit: I would add that some way to verify that peer review is doing its job should be part of the publishing process. Periodic random hoaxes seem like a good way of doing it. It will make everyone, particularly reviewers, more skeptical and cautious.
> (Revise-and-resubmit, by the way, is a nice way of saying "reject").
No, it's a nice way of saying, "this is good work, you just need to massage your presentation".
1000 hours over the course of a year is the equivalent of a half-time job, which makes sense if you're a researcher publishing in journals, in that it is your actual job. There are way more than 10,000 dogs in Portland. You're shooting the data down because you're motivated to find its flaws, which I agree are apparent on close inspection, but that's not what motivates a paper reviewer. Why would a reviewer for a gender studies journal have any intuition for the usage of a dog park? It's not an epidemiology or even an animal studies venue.
(Here's a sharper way of asking the same question: tell me, as quickly as you can, how many dogs visit the largest Portland dog park; bear in mind that this is a waste of your time while you're tracking that stat down, because that's what the reviewer is thinking, too).
R&R means reject (it's a form of rejection). At Usenix, if I wanted you to "massage your presentation", I would accept conditional on those changes (actually: at Usenix WOOT, we would have assigned a reviewer to shepherd the paper --- we would have helped you massage your presentation).
Ultimately, to make a case that journals are accepting bad papers, you have to look at their accepts, not their rejects, no matter how those rejects are worded.
The whole point of the experiment was to test whether the research enterprise is trustworthy. Submitting fake experiments to the journals and seeing if they get published is the experiment.
It seems to me that you're rejecting the entire premise of testing publications' abilities to detect fraudulent research.
The whole point of peer review and publications like these is to ensure a quality of published research. If someone conducts a study and determines that a significant portion of fraudulent research is published, then this demonstrates that the publication is not doing a good job of detecting fraudulent research. Some may respond that these authors were deliberately attempting to get their fraudulent papers published, and that these pieces were only published because they were bad faith actors. And? Who's to say that other authors aren't doing the same? The whole point of the system is to detect bad faith actors, so pointing out that the authors of the grievance studies papers were not publishing in good faith is no excuse.
This is like saying pen testing or red team exercises are fatally flawed. They're not. But they can definitely be embarrassing when they reveal deficiencies, so it's understandable why many would want to reject the results of these exercises. But if an organization wants to improve, it needs to react constructively to the issues that were revealed, not dismiss the exercise as flawed. Unfortunately, in this situation the latter seems to be happening.
No, it's not. Just the idea that you'd think it would be reasonable to "red team" a program committee shows how far off you are. The red team will succeed every time; program committees generally operate on a presumption of good faith. Reviewers are busy people with their own research commitments; they might spend an hour, tops, if they're generous, on any paper.
Of course, the purpose of the "Sokal Squared" "experiment" was to demonstrate how un-rigorous social science research is, but this is common in hard science fields as well, which just amplifies the dishonesty of the whole enterprise.
Good program committees don't operate on the presumption of good faith. In fact, I addressed this in my comment so I'm unsure why you're continuing to make this faulty argument.
This is like a website saying that a vulnerability shouldn't be criticized because only bad faith actors would take advantage of it. Is that a reassuring response? Of course not. The whole point of being secure is to be secure from bad faith actors.
The whole point of Boghossian and the other researchers was to demonstrate how easily these publications can be taken advantage of by bad faith actors. Pointing out that these publications published these papers because the authors were bad faith actors is no excuse. How many other bad faith actors got fraudulent research published in these journals? We don't know, and we can't know. But we do know that these publications are extremely vulnerable to them.
It seems to me you aren't actually disagreeing with the conclusions made by these hoax papers: that these publications operate on a system of blind trust and can be easily exploited. It seems like you're saying that social science publications can't operate on any system other than blind trust, so we shouldn't think of this hoax as revealing anything significant. Personally, though, if these publications operate on a system of blind trust then that makes them inherently untrustworthy - hoax papers or not.
This thread is being rate limited, reply in edit:
At this point you're not even contending the claim that these publications fail to block bad research, and are instead claiming that other publications are just as ineffective. At this point you're accepting the thesis of these authors: these publications operate on blind faith and are incapable of identifying false papers.
If you submitted a paper solving NP-hard problems in polynomial time, it wouldn't be published - at least not without extensive scrutiny and checking whether the most famous question in computer science had indeed been answered.
Also, regarding the claim that papers don't receive feedback: I suggest you read through the responses the authors of the hoax papers received. They did indeed receive feedback on their submissions, contrary to your comment, and some of it was truly astounding. Several reviews praised the content but explicitly rejected the paper on the grounds of the race and gender of the author. In fact, I'd say the feedback was more important than the count of papers published.
My belief is that this is just false. My basis for that belief is experience with ACM and Usenix program committees, which I've participated in directly, and have talked to peers who have as well; I also chaired one cycle of a Usenix proceeding, and so got to see how a fairly large set of reviewers actually worked. I am not including my experience as a reviewer for non-academic conferences.
To be specific: my experience is that reviewers allocate a very small amount of time to reviewing any particular paper, certainly not enough to reliably spot faked research results. Most of the work of reviewing is simply ranking papers, reducing an intractable set of submissions down to a tractable short list. Submitters are lucky to get any substantive feedback at all (hence the "reviewer #2" phenomenon).
If you have countervailing experience, describe it. Where have you reviewed, where you believe that program committee could withstand "red teaming"?
You responded in an edit to your post. But your post doesn't address anything I said. My argument, at this point, boils down to: you don't seem to understand how journal peer review works, or what its purpose is. When confronted with the reality of what peer review is meant to accomplish, you reject the system entirely. That's a coherent (though: bad) argument, but it is not the argument the Areo Hoax authors made. If you can't find support for the Areo Hoax's own arguments, you're not actually making a point relevant to this thread.
What did I fail to address? Your argument is that peer review doesn't actually involve any sort of review capable of spotting bad research. That publications operate on blind faith, and so failure to spot bad fraudulent research is a non-issue.
And I pointed out that this isn't a rebuttal to what these hoaxsters sought to reveal. In fact, it's an admission that their claims are true: These publication do operate on blind faith and are highly vulnerable to false claims.
You seem to be under the impression that peer review isn't meant to spot ineffective or false claims, and so this hoax reveals nothing. You're entitled to your own opinions, but many others disagree. Most understand the purpose of peer review to be ensuring academic rigor and catching bad research, rather than operating on blind faith. So the revelation that many publications operate on blind faith is indeed a significant result.
In short, these hoaxsters are saying, "Look! These publications are incapable of spotting blatantly wrong research."
And you're responding, "But most publications can't spot blatantly wrong research.".
The second statement does not disprove the former. You're just claiming that the conclusions these hoaxsters made about social science publications can also be made against other publications. And that may be your perspective, but others do not have the same pessimistic attitude towards peer review.
As I said: the good (not "blind") faith that social science venues presume is also presumed by hard science journals, an important fact you've decided to pretend I didn't point out, and one that refutes the Areo Hoax authors' arguments. I think we can be done now.
No, journals that go as far as accepting chapters of Mein Kampf with a few terms swapped out go beyond "good faith" and into the realm of "blind faith".
You're confused. I can't say this better than Stefan Savage did, and though I'm paraphrasing I think the gist will come through anyways: there is nothing magical about being "published" in a journal. What's important about ideas is that other people care about and build on them. Getting published in a journal is just one step on one path towards finding an audience for your idea among academics in a particular field.
The cite record in every field is littered with retractions, revisions, replication failures, and even outright fabrications.
A paper being published in a journal is meant to indicate some degree of review and passing a level of oversight. Yes, there are plenty of journals that fail to do this. And every once in a while people share stories revealing just how poor the review process is in those journals. Like one person who had a paper published that solely consisted of the sentence, "take me off your fucking mailing list" [1].
At this point you're agreeing with the point these hoaxers set out to prove: these journals are poor at reviewing content and the content published in them should not be treated as having any level of authenticity or credibility.
> The cite record in every field is littered with retractions, revisions, replication failures, and even outright fabrications.
Again, at this point you're not even disagreeing with the claim the hoax paper authors are making. You're just saying that the same observations can be made in other fields.
The point is neither you nor Stefan are even disagreeing with the point that the hoaxers are making: that these journals are not capable of rejecting even the most blatantly bad submissions.
Stefan is just saying that the same point the hoaxers are making can also be made towards other fields. That may be the case, but it doesn't make the hoaxer's statements any less true.
"This journal accepts blatantly bad submissions"
"But other journals also accept blatantly bad submissions"
The second sentence does not do anything to disprove the first.
He "faked data" that was literally impossible to acquire in the manner described in the paper. Whether reviewers would actually catch the obviously faked data was entirely the point. Reviewers are not supposed to check work, not trust blindly.
No. Submitting a flawed paper in good faith to a journal isn't unethical; journals exist to evaluate papers. Deliberately submitting bad papers --- in fact, going out of your way to shade what's bad about your papers to try to get them further into the process --- is a waste of volunteer time. Reviewers have offered to evaluate good faith papers; they have not offered to have their time spent as guinea pigs. Hence IRBs, to tell you not to do stuff like this.
This website seems to be an extremely partisan culture war site basically focused on issues academics discuss that conservatives don't like? And then "cancelling" them?