This is fundamental not only in maths but in theoretical computer science in general, even for proving how "hard" a problem is. E.g., look into (Turing/many-one) reductions [1].
Many problems can be reduced to SAT, and then you can employ an off-the-shelf SAT/SMT solver to solve them for you. Etc etc.
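As a toy illustration of such a reduction (a real workflow would emit DIMACS CNF and hand it to a solver like MiniSat or Z3; the function names here are my own), this sketch encodes graph 3-coloring as CNF-SAT and checks satisfiability by brute force:

```python
from itertools import product

def three_coloring_to_cnf(n_vertices, edges):
    """Reduce graph 3-coloring to CNF-SAT.
    Variable var(v, c) is true iff vertex v gets color c.
    Literals are nonzero ints; a negative int is a negated variable."""
    var = lambda v, c: v * 3 + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, c) for c in range(3)])        # vertex gets some color
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])   # ...and at most one
    for u, v in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])         # endpoints differ
    return clauses

def brute_force_sat(clauses):
    """Stand-in for a real SAT solver: try every assignment."""
    n_vars = max(abs(lit) for clause in clauses for lit in clause)
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits   # satisfying assignment found
    return None           # unsatisfiable
```

A triangle is 3-colorable, so its CNF encoding is satisfiable; K4 is not, so its encoding is unsatisfiable. The point is that the *reduction* is the interesting part; the solver is a black box.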
But in general, being able to reduce problem A to problem B implies that A is not harder than B. For example, to show that a problem P is undecidable, you can reduce the Halting problem to P.
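To make "reduce A to B" concrete, here is the classic textbook many-one reduction from Independent Set to Clique (G has an independent set of size k iff its complement graph has a clique of size k). The brute-force clique checker is just scaffolding to demonstrate that the reduction preserves yes/no answers:

```python
from itertools import combinations

def independent_set_to_clique(n, edges, k):
    """Many-one reduction: map an Independent Set instance (G, k)
    to the Clique instance (complement of G, k)."""
    complement = [(u, v) for u, v in combinations(range(n), 2)
                  if (u, v) not in edges and (v, u) not in edges]
    return n, complement, k

def has_clique(n, edges, k):
    """Brute-force Clique decider (only to illustrate the reduction)."""
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset(pair) in edge_set
                   for pair in combinations(subset, 2))
               for subset in combinations(range(n), k))
```

So a decider for Clique, combined with the cheap translation step, decides Independent Set too: Independent Set is "no harder than" Clique.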
TSP (in its decision version) is NP-complete and can be reduced to SAT, then solved with a SAT solver.
But something like determining whether a player has a winning strategy in generalized chess (on an n×n board, for arbitrary n) is EXPTIME-complete, so it provably cannot be solved in polynomial time, and in particular cannot be reduced in polynomial time to any polynomial-time-solvable problem.
Edit: Someone ITT also mentioned ILP (integer linear programming). Also a good example. E.g., many optimization problems can be mapped to ILP.
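As a minimal sketch of mapping an optimization problem to ILP (the function name is my own; a real solver such as CPLEX, Gurobi, or `scipy.optimize.milp` would use branch-and-bound instead of enumeration), here is 0/1 knapsack formulated as "maximize c·x subject to Ax ≤ b, x ∈ {0,1}ⁿ":

```python
from itertools import product

def solve_01_ilp(costs, constraint_rows, bounds):
    """Maximize costs.x subject to each row.x <= bound, x in {0,1}^n.
    Brute-force enumeration: fine for tiny instances, exponential in general."""
    n = len(costs)
    best_value, best_x = None, None
    for x in product((0, 1), repeat=n):
        feasible = all(sum(a * xi for a, xi in zip(row, x)) <= b
                       for row, b in zip(constraint_rows, bounds))
        if feasible:
            value = sum(c * xi for c, xi in zip(costs, x))
            if best_value is None or value > best_value:
                best_value, best_x = value, x
    return best_value, best_x
```

E.g. items with values (6, 10, 12) and weights (1, 2, 3) under capacity 5 become one constraint row; the optimum picks the second and third items for value 22. Once a problem is phrased this way, decades of ILP solver engineering apply to it for free.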
[1] https://en.wikipedia.org/wiki/Turing_reduction