
My Ph.D. thesis advisor and I are trying to combat this trend in our field. We are developing a suite of benchmarks to evaluate and compare different methods/algorithms. The goal is to require everyone to run their code against a set of accepted standard problems in the field so that results can be compared fairly. As reviewers we can and must hold authors accountable to their claims: you can't just claim your algorithm is superior without comparing it to the work of others on a set of community-agreed benchmark problems. It isn't an easy task, but with some work it is possible to establish fair metrics, and once those metrics are in place it should no longer be possible to present your ideas in a biased light.
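
To make that concrete, here is a minimal sketch of what such a harness might look like (Python, with made-up names like STANDARD_PROBLEMS and toy sorting methods standing in for a field's real instances and algorithms): every method runs on the same fixed problems and is scored with the same metric.

  import time

  # Toy stand-ins for a community-agreed problem set: name -> (instance, expected answer).
  STANDARD_PROBLEMS = {
      "already_sorted": (list(range(100)), list(range(100))),
      "reversed": (list(range(100, 0, -1)), list(range(1, 101))),
      "duplicates": ([5, 3, 5, 1, 3], [1, 3, 3, 5, 5]),
  }

  def insertion_sort(xs):
      # A deliberately naive competing method, just for the comparison.
      for i in range(1, len(xs)):
          j = i
          while j > 0 and xs[j - 1] > xs[j]:
              xs[j - 1], xs[j] = xs[j], xs[j - 1]
              j -= 1
      return xs

  def run_benchmark(solvers, problems=STANDARD_PROBLEMS):
      # Every method runs on the same instances and is scored with the same metrics
      # (correctness and wall-clock time), so no author gets to pick only the cases
      # that flatter their algorithm.
      results = {}
      for solver_name, solver in solvers.items():
          for problem_name, (instance, expected) in problems.items():
              start = time.perf_counter()
              output = solver(list(instance))  # copy so solvers can't mutate shared data
              elapsed = time.perf_counter() - start
              results[(solver_name, problem_name)] = {"correct": output == expected, "seconds": elapsed}
      return results

  if __name__ == "__main__":
      for key, stats in run_benchmark({"builtin_sort": sorted, "insertion_sort": insertion_sort}).items():
          print(key, stats)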


That's an interesting approach that I do think can improve some things, but I think the underlying problem is that incentives need to change. It's not only about metrics, but about giving honest opinions: which use-cases do you really think your algorithm is suited for, rather than presenting it in the most optimistic possible light? If academia weren't as ultra-competitive as it has become over the past two decades or so, I think there would be a better chance of getting honest and useful answers to such questions in papers. One still finds them sometimes in the papers of people who no longer have to play "the game": papers by senior full-professor types are often quite interesting precisely because they can say what they really think.



