
Great post! The author touches on Chebychev polynomials, which underpin quite a few numerical-analysis tricks, including conjugate gradient [1], accelerated gradient descent [2], and Chebychev semi-iterative methods, which find the best linear combination of past iterates to use during an iterative solve (sadly I can't find a good reference).
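
If it helps to see the polynomials concretely: here's a minimal numpy sketch (my own illustration, not from the post or the notes) of the three-term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), checked against the identity T_n(cos t) = cos(n t):

    # Minimal sketch of the Chebychev three-term recurrence, assuming numpy.
    import numpy as np

    def chebyshev_T(n, x):
        # Evaluate T_n at x via T_{n+1} = 2x T_n - T_{n-1}.
        t_prev, t_curr = np.ones_like(x), x  # T_0 = 1, T_1 = x
        if n == 0:
            return t_prev
        for _ in range(n - 1):
            t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        return t_curr

    x = np.linspace(-1, 1, 5)
    n = 4
    # Should print True: T_n(cos t) = cos(n t) on [-1, 1].
    print(np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x))))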

There are a number of facts/folklore about Chebychev polynomials that tend to be stated without proof in a lot of papers, so a few years ago I wrote up some notes [3,4] covering the most common claims. Maybe someone will find them useful!

[1] (p. 35) http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradi...

[2] http://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent....

[3] https://benrbray.com/static/notes/chebychev-polynomials_dec1...

[4] https://benrbray.com/static/notes/minimax-approx_nov16.pdf



They're also quite useful in a wide range of numerical methods for PDEs, such as discontinuous Galerkin (and spectral methods in general).
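
For a concrete taste of the spectral-methods connection, here's a small numpy sketch (my own, following the standard Chebyshev differentiation matrix construction as in Trefethen's "Spectral Methods in MATLAB", not anything from the post) that differentiates a smooth function on the Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/N); for smooth f the error should be near machine precision even for modest N:

    import numpy as np

    def cheb_diff_matrix(N):
        # Chebyshev-Gauss-Lobatto nodes and first-derivative matrix (N >= 1).
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
        D -= np.diag(D.sum(axis=1))  # diagonal = negative row sums
        return D, x

    # Differentiate f(x) = exp(x) spectrally; max error should be tiny (~1e-12).
    D, x = cheb_diff_matrix(16)
    print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))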



