
> The hoax authors were exceptionally dishonest in their presentation of the results: virtually all of their papers were rejected, the accepts were generally in very low-impact venues, and the papers that got accepted were generally not the lurid ones they highlighted in their summary.

I think the other poster summarizes quite well why this charge of dishonesty is ironic. I'll just add a link to the paper itself in case you'd like to read it [1], and walk through one part:

> From 10 June 2016, to 10 June 2017, I stationed myself on benches that were in central observational locations at three dog parks in Southeast Portland, Oregon. Observation sessions varied widely according to the day of the week and time of day. These, however, lasted a minimum of two and no more than 7 h and concluded by 7:30 pm (due to visibility). I did not conduct any observations in heavy rain. [...] The usual caveats of observational research also apply here. While I closely and respectfully examined the genitals of slightly fewer than ten thousand dogs [...]

So in the span of one year, this lone "researcher" claims to have "closely" inspected the genitals of ~10,000 dogs. That's 1,000 hours to inspect 10,000 dogs, or 10 dogs per hour, during which they also took detailed notes on the dogs' and owners' names, genders, and other associated information, while documenting the dogs' behaviour (6 minutes per dog and owner!). That stretches credulity, to say the least.
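
To make the arithmetic concrete, here it is as a quick sketch (the ~1,000-hour total is my own estimate from the paper's stated schedule; the dog count is the paper's own claim):

    # Throughput implied by the paper's numbers. The ~1,000-hour total
    # is my estimate from its stated schedule, not a figure it gives.
    hours_observed = 1_000
    dogs_inspected = 10_000   # "slightly fewer than ten thousand"

    dogs_per_hour = dogs_inspected / hours_observed   # 10.0
    minutes_per_dog = 60 / dogs_per_hour              # 6.0 min per dog+owner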

Also, for the data to be meaningful, there must have been at least 10,000 unique dogs visiting these three dog parks during that time span. This also beggars belief, even for Portland, which has a high rate of dog ownership: Portland has ~264,000 households, and if ~70% of them own a dog, that's ~185,000 dogs spread across ~32 dog parks, or only ~5,800 unique dogs per park on average.
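
The same sketch for the park population (all inputs are rough approximations, and the division generously assumes every dog in the city actually visits a dog park):

    # Rough population check; all inputs are approximate, and this
    # generously assumes every dog in Portland visits a dog park.
    households = 264_000
    ownership_rate = 0.70   # share of households with a dog (approximate)
    dog_parks = 32

    total_dogs = households * ownership_rate   # ~184,800
    dogs_per_park = total_dogs / dog_parks     # ~5,775 unique dogs per park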

The basic math just doesn't add up. The researcher also disclaims any ability to determine canine breeds, yet makes claims like: "NB: the phrase ‘dog rape/humping incident’ documents only those incidents in which the activity appeared unwanted from my perspective – the humped dog having given no encouragement and apparently not enjoying the activity."

So apparently they have quite a bit of insight into canine behavioural psychology. There is a lot about the methods and the data that makes no sense, and yet this paper received accolades.

> They are literally taking time and resources from program committees, and they do have IRB obligations in order to do that.

That's a legitimate concern. Unfortunately, the hoax itself reveals that these program committees may not be doing much meaningful work with those resources anyway, which seems like a far more important matter.

Edit: I would add that some way to verify that peer review is doing its job should be part of the publishing process. Periodic random hoaxes seem like a good way of doing that; they would make everyone, particularly reviewers, more skeptical and cautious.

> (Revise-and-resubmit, by the way, is a nice way of saying "reject").

No, it's a nice way of saying, "this is good work, you just need to massage your presentation".

[1] http://norskk.is/bytta/menn/dog_park.pdf



1000 hours over the course of a year is the equivalent of a half-time job, which makes sense if you're a researcher publishing in journals, in that it is your actual job. There are way more than 10,000 dogs in Portland. You're shooting the data down because you're motivated to find its flaws, which I agree are apparent on close inspection, but that's not what motivates a paper reviewer. Why would a reviewer for a gender studies journal have any intuition for the usage of a dog park? It's not an epidemiology or even an animal studies venue.
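
A quick check of that claim, assuming ~50 working weeks in the year:

    # "Half-time job" check, assuming ~50 working weeks per year.
    hours_per_week = 1_000 / 50   # 20.0, i.e. half of a 40-hour week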

(Here's a sharper way of asking the same question: tell me, as quickly as you can, how many dogs visit the largest Portland dog park; bear in mind that this is a waste of your time while you're tracking that stat down, because that's what the reviewer is thinking, too).

R&R means reject (it's a rejection outcome). At Usenix, if I wanted you to "massage your presentation", I would accept conditional on those changes (actually, at Usenix WOOT, we would have assigned a reviewer to shepherd the paper and help you massage your presentation).

Ultimately, to make a case that journals are accepting bad papers, you have to look at their accepts, not their rejects, no matter how those rejects are worded.



