> For scholarly publishing, the secret sauce - the essential thing - is a mechanism for review.
I’d venture to say that’s close but not phrased quite right. The secret sauce of academic journal publishing is that we get young academics hired. If you can figure out a way for a young junior faculty member to get tenure without publishing in a traditional journal then you’re golden. But as it stands today, the name of the journal is the currency on your CV. Peer review is part of the mechanics of how that works, but it’s the prestige that’s the key part, not the review mechanism itself.
Tenure committees can’t read the actual papers of all the applicants to form an opinion on the quality of the research. They need some form of proxy. Today that’s the impact factor and reputation of the journal title, almost exclusively. It’s entirely possible that that will change and something else (or most likely a variety of something elses) will take its place. There are a lot of people working on alternatives right now. I don’t know what the winners are going to be, but it’s not going to be number of tweets or Facebook likes. If you want to break the existing system you have to figure out how to get a young academic without an established reputation noticed in her field and hired. If you can do that without the need for a journal at all then you can put a crack in the foundation of the industry.
> Tenure committees can’t read the actual papers of all the applicants to form an opinion on the quality of the research.
I once discussed this with the dean of a humanities faculty in Norway. He said they did not take h-index and similar metrics into account while debating tenure.
So I asked: "How do you evaluate the applicants, then?". "We read their papers" he replied. A room full of professors laughed loud.
He then went on telling us about how they shortlist the candidates and how many paper each person in the committee has to read. The figures where something like one paper or small book read in depth for each candidate and five papers read more casually. The candidates choose the reading list.
To me it does not seem a great amount of work for somebody that is about to appoint another scholar as their peer. It also seems to me as a very decent way to treat applicants: as people that did contribute interesting knowledge, not data points.
I am a mathematics professor who has served on several hiring committees. In our context this would be impossible.
In particular, I was asked to help hire an algebraic geometer -- algebraic geometry not being my specialty. For me to read and understand the highly technical papers of each of 500 job applicants, outside my area of expertise, would take an astronomical amount of effort -- completely out of the question.
And not only that, but distinguishing thorny unsolved problems from fairly routine technical exercises is quite a challenge -- one reason that high-end journals solicit multiple reviews. It is sometimes possible to fool many (but not all) of the experts.
We rely principally on recommendation letters (generic, and submitted via a centralized system, so candidates can get one set of letters for all jobs to which they apply). But, yes, which journals candidates have published in is important.
>high-end journals solicit multiple reviews. It is sometimes possible to fool many (but not all) of the experts.
I thought this was a debated point. Don't a lot of people assert that the prestigious high-end journals don't really review papers for correctness etc., but instead focus on how "ground-breaking" or "newsworthy" they are?
Experts are sometimes necessary to identify and translate the papers into something that you can understand, including whether a result is ground-breaking or newsworthy.
> In particular, I was asked to help hire an algebraic geometer -- algebraic geometry not being my specialty.
Why were you selected then?
> For me to read and understand the highly technical papers of each of 500 job applicants
Why did you, as a professor, waste your time even reading the 500 applicants' names? Wouldn't your time be better spent if somebody else did some background research, filtered out 95% of the candidates, and provided you with only a short list of the best candidates that you could carefully inspect?
Even in a large department, individual specialties may have one or only a few experts. Sometimes none if the department is looking to branch out or fill a gap. There's also typically competition between groups for hires, so no group gets the final decision on which candidate to present (to the department, then the dean, and on up the chain...)
As for the filtering, there aren't any widely accepted ways for a nonspecialist to rank candidates. Hiring committees are a standard part of a professor's job.
This should be the norm and not something that is laughed at. For an absurd comparison, who would hire a programmer based on counting the LOC in his GitHub repository? :D
Then again I suspect that quite a few companies don't read the code of applicants either and prefer job interview testing of sorts.
I wouldn’t mind phone screening either. Ask the candidate to name his best/worst paper beforehand and discuss it over the phone (or alternatively pick one without the candidate knowing beforehand). Talk about methodology, his theoretical grounding, and where he thinks the field is headed.
The current hiring process already does this over 2 days of 1-on-1 interviews with over half the faculty. Those faculty can ask whatever questions they'd like (talking about specific papers and methods is very common). But there are a lot of smooth talkers who aren't very productive yet can market themselves well in an interview. So looking at actual published work is also important.
Given the choice between pure interview, versus counting the number of peer-reviewed publications in top conferences/journals, I'd most certainly pick the latter. In the worst case, you'd get someone who can get a lot of work done and published, versus someone who can talk about "big" problems and sound confident.
> The figures where something like one paper or small book read in depth for each candidate and five papers read more casually. The candidates choose the reading list.
This should be the rule, not the exception. I hope we're moving towards that in the post-journal and open peer review era.
As a commenter already pointed out, you have to solicit reviews from experts in the field, and there is unlikely to be another one of those people at your school.
Open peer review is a nice idea, but when the people who can review your work are roughly 50 or so persons whom you have met already...
> the people who can review your work are roughly 50 or so persons whom you have met already...
I don't see the problem with this. If the reviews are signed, public, and citable, then it doesn't matter if it is your best friend who reviewed your paper. What he says in the review is public. He engages his name as much as if he was writing a paper, he can't just say "this is good" without being sure that it is and arguing why it is. Just as people would not be able to just say "this is bad" without arguing and pointing out mistakes. I'm pretty sure that such a system would indeed improve the quality of the reviews, and thus the quality of the papers.
If the reviews were signed, public, and citable, there would be far fewer critical reviews, especially in small fields. Cue all the arguments in favour of anonymous participation on the net.
I'm not entirely sure about that. But even if it is the case, so what?
People who know each other could continue to send everything that would have gone into a critical review via private email, as they do today, to help improve the papers.
If what you assume is true, then an article with no reviews must be one that would have had critical reviews, so just ignore it. No one can force you to put your name to a review saying something you don't believe. If you only want articles with positive reviews, that's okay, since those reviews still exist in your model.
Nonetheless, a review is not an upvote or a downvote: it's a small summary of the paper, plus comments, plus an overall impression of what the contributions are and what their quality is. So I think that signed, public, and citable reviews would actually be better than what we have today, if only because the shitty reviews we sometimes get or write would just disappear.
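To make that concrete, here is a minimal sketch of what such a signed, public, citable review record could look like. The schema is invented for illustration, and the content-hash "signature" is a stand-in for the real cryptographic signature a working system would need:

```python
# Minimal sketch of a signed, public, citable review record.
# The schema is invented for illustration; a real system would use a
# proper cryptographic signature instead of a bare content hash.
import json
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class Review:
    paper_doi: str    # the paper under review
    reviewer: str     # public identity: the reviewer signs with their name
    summary: str      # a small summary of the paper
    comments: str     # detailed comments for the authors
    impression: str   # overall view of the contributions and their quality

    def citable_id(self) -> str:
        # Stable identifier derived from the full content, so the review
        # can be cited and any later tampering is detectable.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

review = Review(
    paper_doi="10.0000/example.doi",
    reviewer="Jane Doe (Univ. of Example)",
    summary="Proves X under assumption Y.",
    comments="Lemma 2 needs a tighter bound.",
    impression="Correct; a modest but real contribution.",
)
print(review.citable_id()[:16])  # short form, suitable for citing
```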
Astronomy is a bit unique in that there is no real hierarchy of journals in our field. There are really only four main "bread and butter" journals: Astronomical Journal, Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, and Astronomy & Astrophysics. A paper published in any one of these journals has roughly equal weight and "impact".
I think this unique status will help OJA succeed - astronomers are already used to judging papers from all four journals with roughly equal weight. If some senior people get behind it and start making big, important papers available via OJA, I think it really might take off.
Page charges can be seriously expensive. I think many academics would be happy to avoid the whole mess and just post to the arxiv and get peer review via OJA. It's already more or less standard practice to post to the arxiv either upon submission or acceptance, this just formalizes that practice and adds an element of certification of the result.
That doesn't help with tenure committees. I guess in astronomy this is less of a problem, but the subfields of computer science are so different that those on your committee will probably never have read the journal you are publishing in. They go by reputation alone.
I absolutely agree with Doug. The key is prestige. Our approach to that is to turn peer review itself into a measurable research output -- see our recent partnership with PeerJ for an example (http://blog.publons.com/post/85660504608/publons-partners-wi...)
I'm a physicist and I see a few problems with this. Currently, refereeing is essentially community service. Have you talked to any editors at journals such as the Physical Review or New Journal of Physics? I would imagine that the largest cost of running a journal (let's say an online journal) is the cost of the editors' salaries. Even if you scale your referee system (which is cute) and add in statistics on referees (do they often accept/reject? areas of expertise? time to submit a review? etc.), your editors will still have to take time to resolve the inevitable disputes between authors and referees. As the number of papers grows, you will need to either have more editors or start paying editors to do this full time.
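Such referee statistics are at least cheap to compute once reviews are kept as structured records; a rough sketch, with made-up field names and values:

```python
# Rough sketch of per-referee statistics (accept rate, median days to
# review). The record format and values are made up for illustration.
from statistics import median

reviews = [
    {"referee": "A", "recommendation": "accept", "days_to_review": 21},
    {"referee": "A", "recommendation": "reject", "days_to_review": 35},
    {"referee": "B", "recommendation": "accept", "days_to_review": 14},
]

def referee_stats(reviews, referee):
    mine = [r for r in reviews if r["referee"] == referee]
    accepts = sum(r["recommendation"] == "accept" for r in mine)
    return {
        "n_reviews": len(mine),
        "accept_rate": accepts / len(mine),
        "median_days": median(r["days_to_review"] for r in mine),
    }

print(referee_stats(reviews, "A"))
# {'n_reviews': 2, 'accept_rate': 0.5, 'median_days': 28.0}
```

The part that still needs an editor, resolving disputes, is exactly what such statistics can't automate.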
Another problem that you will run into is the question of curated content. There is a place for refereed content that is technically correct but not necessarily impactful (for example, Nature's Scientific Reports), but such a journal will have a low impact and will have a hard time attracting impactful papers (which are still valuable for career advancement). Otherwise, your editors can start to curate the content based on impact, but this requires even more time and expertise from your editors. I think you'll eventually run into some of the same problems as traditional journals...
Have you thought about working with existing journals to improve the refereeing process?
As for some of the other issues that you bring up, such as attaching code -- I think this will have to be something required by funding agencies. I have found the "new" supplementary material sections of journals to be a source of improvement...
It seems there's been quite a bit of talk about collaboration/review in science and academia as of late. Makes sense, as there are some real unsolved problems here.
Shameless plug: I'm also working on tackling peer review, but from a bit of a different angle. Penflip.com (https://www.penflip.com/) is like GitHub for non-programmers. It hosts public and private writing projects backed by git repos, but the interface is stripped down and simplified. Command line access is unnecessary (though still possible) thanks to an in-browser writing interface.
While still relatively early in development, I think Penflip has big potential in academia and science. If anybody here is interested in this space, I would love to hear your thoughts on my project.
You say Penflip is for non-programmers, but your project page says stuff like:
* Markdown support
* Built on Git
and non-programmers have no idea what these are and why they should care. You should probably do some user testing with non-programmers to figure out what claims are relevant on your front page.
EDIT: pricing is confusing as well.
It says "Plans can be paid monthly or annually.", but the paying plans mention that they have to be "(paid annually)" which does not make sense. Can you clarify ? So can you actually pay per month or do you have to pay per year ?
Additionally, what is the license for public projects ? GitHub makes it clear that they have to be open-source. How about on Penflip ? Can they be open, while retain a copyright license ?
Also, why don't you have some intermediate plans ? Imagine I'm writing a book, having the 8 dollars plan for 50 projects seems completely overkill, I'd probably want a plan in there with 2 to 5 project or something like that. 50 seems like a company/organization plan.
And it's not clear what "premium support" means in the pricing, nor why we should care about it.
This being said, it's a good project, but I see many ways you could improve on how you communicate around it.
I second this. I found out about Penflip earlier today and was interested in using it for my fiction writing hobbies. But the 50-repo plan is overkill for me. I would also like to see some smaller plans, maybe for 10 repos?
Why does the pdf for the example YC application (despite having a lot of text)[^1] only say:
Congratulations, you’ve successfully created a SparkleShare repository!
Any files you add or change in this folder will be automatically synced to
ssh://git@penflip.com/loren/yc-application and everyone connected to it.
SparkleShare is an Open Source software program that helps people
collaborate and share files. If you like what we do, consider buying us a
beer: http://www.sparkleshare.org/
Have fun! :)
It seems like you should give a nod to Prof. MacFarlane if you are going to rely so heavily on his project. Do you support all of the features of pandoc's extended markdown format?
> It seems like you should give a nod to Prof. MacFarlane if you are going to rely so heavily on his project.
Good call. How is this typically done? As with any modern app, I'm utilizing countless open source projects. What's the protocol here?
> Do you support all of the features of pandoc's extended markdown format?
I'm using GitHub-flavored markdown + footnotes + raw_tex, some custom LaTeX templates, and a looooooot of hacking to make various bits and pieces play nicely. My conversion script is 300 lines long; I'm not just handing a file off to pandoc. Needless to say, pandoc is still invaluable here.
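The core of it boils down to a pandoc call along these lines; this is a stripped-down sketch with placeholder file and template names, not my actual script:

```python
# Stripped-down sketch of the Markdown -> PDF step, built on pandoc.
# File and template names are placeholders, not the real pipeline.
import subprocess

def to_pdf(src="chapter.md", out="chapter.pdf", template="custom.latex"):
    subprocess.run(
        [
            "pandoc",
            "--from", "markdown_github+footnotes+raw_tex",
            "--template", template,  # custom LaTeX template
            "-o", out,
            src,
        ],
        check=True,  # raise if pandoc exits nonzero
    )

to_pdf()
```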
I came here to comment that it is a little strange that a project organized around web publishing would not have hyperlinked footnotes. I had forgotten how distracting it was to have to scroll down to the footnote and then scroll back up and try to get back to where you were in the paper.
Halfway through this comment I realized that I could not recall if there was any discussion about uniform publication style/format. Would papers in the OJA all have the same format (including hyperlinked footnotes!) or would it be a potpourri of different formatting quirks?
> if our editorial board had been paid for their work (as many are)
To my knowledge, the vast majority of editorial boards are composed of researchers who are not paid by the publisher (they have their normal salary, the same as if they did not take part in the editorial board), nor even have a contract with the publisher. I don't think it's true that many editorial boards of academic journals are getting any money out of this job.
In traditional journals, the outcome of a review is a yes or no on whether the article gets published. My feeling is that this is stressful and unnecessarily constraining for an open journal on the internet. Why not assign points and let readers set their own point-sum threshold when they subscribe? The review mechanism could then be more automatic and wouldn't need an editor.
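A toy sketch of the idea, with illustrative papers, point values, and threshold:

```python
# Toy sketch of point-based filtering instead of accept/reject.
# Papers, point values, and the threshold are all illustrative.
papers = {
    "paper-1": [3, 4, 5],  # points assigned by individual reviews
    "paper-2": [1, 2],
    "paper-3": [5, 5, 4],
}

def feed(papers, threshold):
    # Show only papers whose summed review points clear the
    # reader's chosen threshold.
    return [p for p, pts in papers.items() if sum(pts) >= threshold]

print(feed(papers, threshold=10))  # ['paper-1', 'paper-3']
```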
My wife has peer-reviewed a number of articles, and she doesn't just provide a yes or no. The final outcome is one of four choices: publishable in current form, publishable after minor revisions, not publishable in current form, or not suitable for the journal. In addition to this, she is expected to provide at least half a page of critique and feedback that is (anonymously) passed on to the authors as justification for her decision.
For example, the last paper she reviewed was the third revision of a paper she had rejected the first time around. The authors had obviously taken her (and others') comments on board and produced a better paper because of it. Had the paper instead been published online in its original form, but with a very low score, neither the authors nor the readers would have benefited.