This is a well-worn and often tiresome debate. I'll still throw my 2c in...
Should you pick a complex framework from day one? Probably not, unless your team has extensive experience with it.
My objection is to the idea that managing infrastructure with a bespoke process and custom tooling will always be less effort to maintain than established tooling. It's the idea of stubbornly rejecting the "complexity" bogeyman, even when the process you built yourself is far from simple, and takes a lot of time away from your core product anyway.
Everyone loves the simplicity of copying a binary over to a VPS and restarting a service. But then you want to solve configuration and secret management, and you want multiple servers for availability/redundancy, which means gradual deployments, load balancing, rollbacks, etc. You probably also want a staging environment, so you need to easily replicate this workflow. Then your team eventually grows and they find that it's impossible to run a prod-like environment locally. And then, and then...
You're forced to solve each new requirement with your own special approach, instead of relying on standard solutions others have figured out for you. It eventually becomes a question of sunk cost: do you want to abandon all this custom tooling you know and understand, in favor of "complexity" you don't? The difficult thing is that the more you invest in it, the harder it will be to migrate away from it.
My suggestion is: start by following practices that will make a later transition to standard tooling easier. This means deploying with containers from day 1, adopting the 12-factor methodology, etc. And when you do start to struggle with some feature you need, switch to established tooling sooner rather than later. You're likely to find that your fear of the unknown was unwarranted, and you'll spend less time working on infra in the long run.
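To make the 12-factor point concrete, here's a minimal sketch of what config-from-environment looks like in practice. The variable names (PORT, DATABASE_URL) and defaults are just illustrative, not any real app's API:

```shell
#!/bin/sh
# Hypothetical launch script following 12-factor config: everything
# environment-specific comes from env vars, so the same artifact runs
# unchanged in dev, staging, and prod -- only the environment differs.

: "${PORT:=8080}"                                   # default for local dev
: "${DATABASE_URL:=postgres://localhost:5432/app}"  # placeholder default

echo "starting app on port ${PORT}"
echo "database: ${DATABASE_URL}"
```

The point is that switching from a VPS to containers to k8s later only means changing where those env vars are injected, not the app itself.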
There's no correct answer here. Your choice seems reasonable _if_ you already have some previous familiarity with managing k8s. If not, you might want to consider starting with a managed k8s solution from a cloud provider. The bulk of the work will be containerizing your stack, and getting familiar with all the concepts. You don't want to do all that while also keeping k8s running. After that you would be able to relatively easily migrate to a self-hosted cluster if you need to.
If you do want to self-host, k3s could also be an option, like a sibling comment suggested. It's simpler to start with, though it still has a learning curve since it's a lightweight version of k8s. I reckon that you would still want to run at least 3 nodes for redundancy/failover, and maybe a couple more for just DB workloads. But you can certainly start with one to setup your workflow, and then scale out to more nodes as needed.
k3s single node + ArgoCD/Flux is what I would do if I had to build the infrastructure for a small startup by myself.
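For the record, the bootstrap for that setup is short. A rough sketch, assuming a fresh Linux box with root access; the URLs are the ones from the k3s and Argo CD docs, but verify them yourself before piping anything to sh:

```shell
# Install single-node k3s (server + agent on one machine).
curl -sfL https://get.k3s.io | sh -

# Install Argo CD into its own namespace, per the Argo CD getting-started docs.
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# From here, point an Argo CD Application at your git repo and let it sync.
```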
Unfortunately it's HN so people are more likely to do everything in bash scripts and say a big "fuck you" to all new hires that would have to learn their custom made mess
This is exactly the setup I’ve been considering. Feels like the best of both worlds: you learn the standard tooling and can easily upgrade to full-blown distributed k8s, but you retain the flexibility and low cost of a single VM.
Also leaning towards putting it behind a Cloudflare tunnel and having managed Postgres for both k3s and application state.
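The tunnel side of that is just a small cloudflared config file. A rough sketch — the tunnel ID, hostname, and port are all placeholders:

```yaml
# /etc/cloudflared/config.yml (all values below are placeholders)
tunnel: <your-tunnel-id>
credentials-file: /etc/cloudflared/<your-tunnel-id>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # e.g. your k3s ingress
  - service: http_status:404         # required catch-all rule
```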
Have been running k3sup-provisioned nodes on Hetzner for services, and even a Stackgres-managed Postgres cluster on another node (yes, it backs up to the cloud). And it's been great. Incredibly low cost, and I do not have to think about running out of compute or memory for everything I need for a tiny startup.
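For anyone curious, provisioning with k3sup is roughly a one-liner per node over SSH. A sketch — the IPs (RFC 5737 examples) and key path are placeholders:

```shell
# Bootstrap the first server node (placeholder IP and key path).
k3sup install --ip 203.0.113.10 --user root --ssh-key ~/.ssh/id_ed25519

# Join an additional node to the cluster.
k3sup join --ip 203.0.113.11 --server-ip 203.0.113.10 --user root
```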
The other aspect of this is that it's literally impossible to hire someone from industry who is already familiar with your home-grown SDLC systems. But you can find plenty of "cloud engineers" who do understand these "complex" cloud systems and can deploy and maintain them via terraform. It's a turn-key skill set.
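And that turn-key skill set maps to surprisingly little code. A hedged sketch of what a managed cluster looks like in terraform (GKE here; the name, region, and node count are made up, and a real setup also needs a provider block, networking, etc.):

```hcl
# Minimal managed Kubernetes cluster sketch; all values are placeholders.
resource "google_container_cluster" "primary" {
  name               = "startup-cluster"
  location           = "us-central1"
  initial_node_count = 3
}
```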