I would gladly work on "defense tech" too, as in actually defending people. But what you mean by "defense tech" is actually offense tech. Look at how these things are actually used in the world.
When an entire field needs to hide behind a euphemism to avoid seeming ugly, that is certainly some kind of sign.
There is no inherent difference between a technology used defensively and one used offensively. Self-driving car tech will feed into self-driving tank tech, and vice versa. Military satellite tech gave you GPS and Google Maps, which will in turn be used for military purposes. A technology (including an algorithm) can be used for good or ill. That is just the nature of the beast.
You can certainly work against what you see as the overreach of American militarism. But that is a political/economic problem which needs a political/economic solution. Your (not) working on cryptanalysis algorithms has nothing to do with it.
To play devil's advocate: let's say my morals aren't bothered by killing members of the Taliban / Al-Qaeda / Enemy of the Week, because I consider them a bunch of fanatics and a threat to Western civilization. So I work on guidance systems and weapons to improve their effectiveness and accuracy, making them more precise and less likely to kill civilians or cause collateral damage when employed.
Now what? Where do you derive your morals from, and what makes them inherently correct and mine reprehensible?
The missile guidance systems used for offensive targeting are similar (though not identical) to the guidance systems used for intercepting incoming ballistic missiles.
Are you then suggesting that guidance systems for interceptor missiles (like THAAD or the MIM-104 Patriot) should not have been created?
That is one way these things are actually used in the world. I would think a case like that could reasonably be seen as morally ambiguous.
Yes, seriously. My "sense of morals" is very different from yours (in that I'd gladly work on defense tech, say). So what?