I feel that alignment is not just hard but impossible, at least if you want something truly useful. Maybe the only thing you can do is let an AI develop on its own and observe its nature from a distance, say in a simulated world running at high speed that it does not know is simulated. You can hope it will develop principles that do align with your own, that its essential nature will be good. Sometimes I wonder if that is what a greater intelligence is doing to us.