It's not the same as with exploitable bugs, because exploitable bugs are fundamentally preventable. Not necessarily in aggregate, but individually: any given bug can be patched with enough time or effort. There's no benefit to keeping them secret once their threat can be neutralized.
As I outlined in another comment in this thread, algorithms that do not or cannot impose significant authorization constraints (quantified as a time or monetary cost per query) cannot be "fixed." This is fundamentally why reverse engineering, e.g., HMAC signing schemes, search-result ranking, spam filtering, or front-page listing algorithms is possible: the generous usability requirements leave no room for authorization that would mitigate reversing the algorithm, even when it isn't embedded in an untrustworthy client.
Suppression is essentially all you can do to prevent reverse engineering, and suppressing the knowledge of how to reverse engineer an algorithm is in effect the same as suppressing the algorithm itself.
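To make the cheap-query point concrete, here's a minimal sketch. Everything in it is hypothetical (the "filter," its hidden threshold, and all names are invented for illustration): when an algorithm answers unauthenticated queries at negligible cost, an outsider can recover its internal decision boundary with a handful of probes, no source access required.

```python
# Hypothetical black-box "spam filter": the attacker only sees accept/reject.
HIDDEN_THRESHOLD = 0.37  # secret internal parameter, unknown to the prober

def is_flagged(score: float) -> bool:
    """Oracle: answers cheap, unauthenticated queries."""
    return score >= HIDDEN_THRESHOLD

def recover_threshold(oracle, lo=0.0, hi=1.0, probes=30):
    """Bisect on the oracle's yes/no answers to locate its cutoff."""
    for _ in range(probes):
        mid = (lo + hi) / 2
        if oracle(mid):
            hi = mid  # cutoff is at or below mid
        else:
            lo = mid  # cutoff is above mid
    return (lo + hi) / 2

estimate = recover_threshold(is_flagged)
print(f"recovered threshold ~{estimate:.4f} in 30 queries")
```

Thirty probes pin the threshold to within 2^-30 of the search interval, which is why per-query cost (rate limits, fees, proof-of-work) is the only lever that meaningfully slows this down.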