Oct 12, 2015 12:00
Tags: art, temperature, fraud, death, viadrdoug, patriarchy, society, women, law, trees, movies, care, 3d, economics, usa, engineering, crows, josswhedon, stereotypes, cooking, research, reviews, architecture, nazis, links, hitler, history, healthcare, technology, sexism, nature, rivers, achievement, intelligence, students, design, marijuana, banking, money, legalisation, internet, perspective, disney, animation, photos, avengers, tax, viaswampers, replication, disabilities
But given the current worries about the quality of research, I suspect that replication is something there will be more and more demand for.
This is the sort of thing a DVCS improves on, of course: if you'd started a similar project today, you might (I'm guessing) have naturally used git rather than svn, in which case any local copy lying around on any machine or backup you could find would automatically have come with the complete history, and the availability of the upstream server wouldn't be so critical.
Of course that wouldn't solve the rest of the problems, like the scripts no longer running under up-to-date Perl, and similar bit-rot. But it would be a start, at least.
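To illustrate the history point (a minimal sketch; the repository URL is hypothetical): every git clone carries the full commit history with it, readable entirely offline:

```python
import subprocess

# Cloning with git copies the complete history, not just the
# latest snapshot. (The URL below is a hypothetical example.)
subprocess.run(["git", "clone", "https://example.org/project.git"], check=True)

# The entire commit log is now readable locally, with no need
# for the upstream server to still exist.
log = subprocess.run(
    ["git", "-C", "project", "log", "--oneline"],
    capture_output=True, text=True, check=True,
)
print(f"{len(log.stdout.splitlines())} commits available offline")
```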
Today, for example, hosting repos on GitHub makes sense, and some of my papers refer to it as the code repository. In seven years, will it still be there? Bet not.
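One partial hedge (a sketch; the repository name and URL are hypothetical) is to cite the exact commit rather than just the repository, so the reference stays pinned even if the code later has to be mirrored elsewhere:

```python
import subprocess

# Record the exact commit the paper refers to. Even if GitHub
# disappears, the hash identifies the cited version unambiguously
# in any surviving mirror. (Repo path and URL are hypothetical.)
sha = subprocess.run(
    ["git", "-C", "my-paper-code", "rev-parse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"Cite as: https://github.com/example/my-paper-code/tree/{sha}")
```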
In maths, for example, it increasingly seems that everybody who is anybody posts their papers on the arXiv, so I suppose the right answer would be for the arXiv to provide a means of hosting a git repository alongside the PDF, and for any paper on there with a vital computational component (which in maths, I expect, would be less about replicability and more in the 4CT 'computer-assisted proof' sort of space) to take advantage of that. (Bet they don't, though.)
In a discipline where papers are still mostly in hard-copy journals, that might be (even) harder to arrange...
Of course, journals themselves don't last forever, and they do fuck up. I discovered after about five years that the journal holding my most cited paper had screwed up and never put it online (a link error). Nobody seemed to have noticed, since the paper was available on my web site and, IIRC, on the arXiv, so it continued to be cited at the journal, where it wasn't available except in hard copy.
The scenario you want to avoid is that the paper is still out there claiming some result, and the critical supporting code isn't.
There are exceptions, of course, but how many? Yes, Perelman's paper on the Poincaré conjecture -- but he's Perelman. He could scrawl it on a loo wall and someone would put it online for him, and it would live on.
So papers will (mostly) survive the arXiv dying anyway, if the journal they're in survives. :-)
Rationale: the point of a journal is not the physical publication and distribution of the paper, which the arXiv does better anyway; the real added value is the selection and peer review which winnows the great mass of proto-papers out there into ones that are judged by sensible people to be both correct and important.
So you upload your preprint to the arXiv, you submit to the journal by sending them a link, and if it passes peer review, then a link to that arXiv entry appears on the journal website.
TBPH, though, the likelihood of the arXiv going down without warning and with no backup is negligible.
(Goodness knows how many people analyse arXiv anyway as part of their research -- I bet a good chunk of the network science community is holding a local copy.)
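Pulling arXiv metadata programmatically is certainly easy enough; here's a minimal sketch against arXiv's public query API (the endpoint is real; the search terms are just illustrative, and a genuine local copy would more likely come from arXiv's bulk-data access):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Query arXiv's public API for a handful of papers.
# (Category and result count are illustrative choices.)
url = ("http://export.arxiv.org/api/query"
       "?search_query=cat:math.CO&start=0&max_results=5")
with urllib.request.urlopen(url) as resp:
    feed = resp.read().decode("utf-8")

# The response is an Atom feed; each paper is an <entry>.
ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(feed)
for entry in root.findall("atom:entry", ns):
    print(entry.find("atom:title", ns).text.strip())
```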