
k8s is complex; if you don't need the following, you probably shouldn't use it:

* Service discovery

* Auto bin packing

* Load Balancing

* Automated rollouts and rollbacks

* Horizontal scaling

* Probably more I forgot about

You also have secret and config management built in. If you use k8s you also have the added benefit of making it easier to move your workloads between clouds and bare metal. As long as you have a k8s cluster you can mostly move your app there.
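As a sketch of that built-in config and secret management (the resource names and values here are made up), it's just two more manifests that apply unchanged on any cluster:

```yaml
# Hypothetical app settings; applied with `kubectl apply -f` on any cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
stringData:
  DB_PASSWORD: change-me  # written as plain text, stored base64-encoded in etcd
```

Pods then reference these via envFrom or volume mounts, so rotating a value doesn't mean rebuilding images.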

Problem is, most companies I've worked at in the past 10 years needed several of the features above, and they decided to roll their own solution with Ansible/Chef, Terraform, ASGs, Packer, custom scripts, custom apps, etc. The solutions have always been worse than what k8s provides, and the result is a bespoke tool that you can't hire for.

For what k8s provides, it isn't complex, and it's all documented very well, AND it's extensible so you can build your own apps on top of it.

I think there are more SWE on HN than Infra/Platform/Devops/buzzword engineers. As a result there are a lot of people who don't have a lot of experience managing infra and think that spinning up their docker container on a VM is the same as putting an app in k8s. That's my opinion on why k8s gets so much hate on HN.



There are other out of the box features that are useful:

* Cert manager.

* External-dns.

* Monitoring stack (e.g. Grafana/Prometheus.)

* Overlay network.

* Integration with deployment tooling like ArgoCD or Spinnaker.

* Relatively easy to deploy anything that comes with a helm chart (your database or search engine or whatnot).

* Persistent volume/storage management.

* High availability.
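As a sketch of how a couple of those extras compose (assuming cert-manager and an nginx ingress controller are already installed; the hostname and issuer name are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # cert-manager watches this annotation
spec:
  ingressClassName: nginx
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls  # cert-manager provisions and renews this cert
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

cert-manager solves the ACME challenge and keeps the TLS secret renewed; external-dns can likewise pick up the host rule and publish the DNS record.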

It's also about using containers, which means there's a lot less to manage on the hosts.

I'm a fan of k8s. There's a learning curve but there's a huge ecosystem and I also find the docs to be good.

But if you don't need any of it - don't use it! It is targeting a certain scale and beyond.


I started with kubernetes and have never looked back. Being able to bring up a network copy, deploy a clustered database, deploy a distributed fs all in 10 minutes (including the install of k3s or k8s) has been a game-changer for me.

You can run monolithic apps with no-downtime restarts quite easily in k8s using a RollingUpdate deployment strategy, which is very useful when applications take minutes to start.
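A minimal sketch of that in a Deployment (names, image, and ports are illustrative); the readiness probe is what keeps traffic off a replacement pod until the slow-starting app is actually up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 2
  selector:
    matchLabels: {app: monolith}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never take an old pod down before its replacement is Ready
      maxSurge: 1
  template:
    metadata:
      labels: {app: monolith}
    spec:
      containers:
        - name: app
          image: registry.example.com/monolith:v2  # hypothetical image
          readinessProbe:
            httpGet: {path: /healthz, port: 8080}
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30  # tolerate multi-minute startup
```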


In the same vein here.

Every time I see one of these posts and the ensuing comments I always get a little bit of inverse imposter syndrome. All these people saying "Unless you're at 10k+ user scale you don't need k8s". If you're running a personal project with a single-digit user count, then sure, though only on a cost-to-performance basis would I call k8s unreasonable. At any larger scale, however, I struggle to reconcile this position with the reality that anything with a consistent user base should have zero-downtime deployments, load balancing, etc. Maybe I'm just incredibly OOTL, but when did these features, simple to implement and essentially free from a cost standpoint, become optional? Perhaps I'm misunderstanding the argument, and it's really that you should use a Fly- or Vercel-esque platform that provides some of these benefits without needing to configure k8s. Still, the problem with this mindset is that vendor lock-in is a lot harder to correct once a platform is in production and being used consistently, without prolonged downtime.

Personally, I would do early builds with Fly and once I saw a consistent userbase I'd switch to k8s for scale, but this is purely due to the cost of a minimal k8s instance (especially on GKE or EKS). This, in essence, allows scaling from ~0 to ~1M+ with the only bottleneck being DB scaling (if you're using a single DB like CloudSQL).

Still, I wish I could reconcile my personal disconnect with the majority of people here who regard k8s as overly complicated and unnecessary. Are there really that many shops out there who consider the advantages of k8s above them or are they just achieving the same result in a different manner?

One could certainly learn enough k8s in a weekend to deploy a simple cluster. Now, I'm not recommending this for someone's company's production instance, due to the footguns if improperly configured, but the argument that k8s is too complicated to learn seems unfounded.

/rant


I've been in your shoes for quite a long time. By now I've accepted that a lot of folks on HN and other similar forums simply don't know / care about the issues that Kubernetes resolves, or that someone else in their company takes care of those for them.


It’s actually much simpler than that

k8s makes it easier to build over engineered architectures for applications that don’t need that level of complexity

So while you are correct that it is not actually that difficult to learn and implement k8s, it's also almost always completely unnecessary, even at the largest scale.

Given that you can do the largest-scale stuff without it, and you should do most small-scale stuff without it, the number of people for whom all of the risks and costs balance out is much smaller than the amount of promotion it has received.

And given that orchestration layers are a critical part of infrastructure, handing over or reshaping that much of a multilayer computing environment is a non-trivial one-way door.


With the simplicity and cost of k3s and alternatives it can also make sense for personal projects from day one.


100%

I can bring up a service, connect it to a postgres/redis/minio instance, and do almost anything locally that I can do in the cloud. It's a massive help for iterating.

There is a learning curve, but you learn it and you can do so damn much so damn easily.


+1 on the learning curve; it took me 3 attempts (gave up twice) before I spent 1 day reading the docs, then wasted a week moving some of my personal things to it.

Now I have a small personal cluster with machines and VPSes (in some regions I don't have enough deployments to justify an entire machine) with a distributed multi-site fs that's about as certified for workloads as any other cloud. CDN, GeoDNS, and nameservers are all handled within the cluster. Any machine can go offline while connectivity remains the same, minus the ~5-minute timeout before the downed pods of monolithic services are rescheduled.

Kubernetes is also an amazing way to learn things like BGP and IPAM via Calico, MetalLB, and whatever else you want to try.


To this I would also add the ability to manage all of your infrastructure with k8s manifests (e.g. Crossplane).


For anyone who thinks this is a laundry list - running two instances of your app with a database means you need almost all of the above.

The _minute_ you start running containers in the cloud you need to think of "what happens if it goes down/how do I update it/how does it find the database", and you need an orchestrator of some sort, IMO. A managed service (I prefer ECS personally as it's just stupidly simple) is the way to go here.


Eh, you can easily deploy containers to EC2/GCE and have an autoscaling group/MIG with healthchecks. That's what I'd be doing for a first pass or if I had a monolith (a lot of business is still deploying a big ball of PHP). K8s really comes into its own once you're running lots of heterogeneous stuff all built by different teams. Software reflects organizational structure so if you don't have a centralized infra team you likely don't want container orchestration since it's basically your own cloud.


I did this. It's not easier than k8s, GKE, EKS, etc. It's harder because you have to roll it yourself.

If you do this just use GKE autopilot. It’s cheaper and done for you.


By containers on EC2, do you mean installing Docker on AMIs? How do you deploy them?

I really do think Google Cloud Run/Azure Container Apps (and then in AWS-land ECS-on-fargate) is the right solution _especially_ in that case - you just shove a container on and tell it the resources you need and you're done.


From https://stackoverflow.com/questions/24418815/how-do-i-instal... , here's an example that you can just paste into your load balancing LaunchConfig and never have to log into an instance at all (just add your own runcmd: section -- and, hey, it's even YAML like everyone loves)

  #cloud-config
  
  apt:
    sources:
      docker.list:
        source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
        keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
  
  packages:
    - docker-ce
    - docker-ce-cli
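As a sketch, the runcmd: section the parent mentions might look like this (the image name and port mapping are made up; runcmd runs once at first boot, and --restart=always brings the container back after reboots):

```yaml
runcmd:
  - [ systemctl, enable, --now, docker ]
  # pull and run the app container; Docker restarts it on failure or reboot
  - [ docker, run, -d, --restart=always, -p, "80:8080", "registry.example.com/myapp:latest" ]
```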


Sure, you can use an AWS ASG, but I assume you also tie that into an AWS ALB/NLB. Then you use ACM for certs, and now you are locked into AWS three times over.

Instead you can do those three and more in k8s, and it would be the same manifests regardless of which k8s cluster you deploy to: EKS, AKS, GKE, on-prem, etc.

Plus you don't get service discovery across VMs, and you don't get a CSI, so good luck if your app is stateful. How do you handle secrets and configs? How do you deploy everything, Ansible, Chef? The list goes on and on.

If your app is simple, sure, but I haven't seen a simple app in years.
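On the stateful point: in k8s the portable part is just a PersistentVolumeClaim, and whatever CSI driver the cluster has (EBS on EKS, Persistent Disk on GKE, local-path on k3s) satisfies it. A sketch with a made-up name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  # storageClassName omitted: the cluster's default class provisions the volume
```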


I've never worked anywhere that has benefitted from avoiding lock-in. We would have saved thousands in dev-hours if we just used an ALB instead of tweaking nginx and/or caddy.

Also, if you can't convert an ALB into an Azure Load Balancer, then you probably have no business doing any sort of software development.


I don't disagree about avoiding lock-in, and I'm sure it was hyperbole, but if you really spent thousands of dev-hours (approx 1 year) on tweaking nginx, you needed different devs ;)

ALB costs get very steep very quickly too, but you're right - start with ALB and then migrate to nginx when costs get too high


Second paragraph is totally right - start with an ALB and move when you need it. At the point you are running into issues with either perf or cost of your ALB you have good problems.


It's worth bearing in mind that, although any of these can be accomplished with any number of other products as you point out, LB and horizontal scaling in particular have been solved problems for more than 25 years (or longer, depending on how you count).

For example, servers (aka instances/VMs/VPSes) with load balancers (aka fabric/mesh/Istio/Traefik/Caddy/nginx/HAProxy/ATS/ALB/ELB/oh just shoot me) in front existed for apps that are LARGER than fits on a single server (virtually the definition of horizontally scalable). These apps are typically monoliths, or perhaps app tiers that have fallen out of style (like the traditional n-tier architecture of app server / cache / database; swap out whatever layers you like).

However, k8s is actually more about microservices. Each microservice can act like a tiny app on its own, but they are often inter-dependent and, especially at the beginning, it's often seen as not cost-effective to dedicate their own servers to them (along with the associated load balancing, redundancy, cross-AZ setup, etc.). And you might not even know what the scaling pain points for an app are, so this gives you a way to scale up easily without dedicating somewhat expensive instances or support staff to running each one; your scale point is the entire k8s cluster itself.

Even though that is ALL true, it's also true that k8s' sweet spot is actually pretty narrow, and many apps and teams probably won't benefit from it that much (or not at all, and it actually ends up being a net negative; and that's not even talking about the much lower security isolation between containers compared to instances. Yes, of course, k8s can schedule/orchestrate VMs as well, but no one really does that, unfortunately.)

But, it's always good resume fodder, and it's about the closest thing to a standard in the industry right now, since everyone has convinced themselves that the standard multi-AZ configuration of 2014 is just too expensive or complex to run compared to k8s, or something like that.


> k8s is complex, if you don't need the following you probably shouldn't use it:

I use it (specifically, the canned k3s distro) for running a handful of single-instance things like for example plex on my utility server.

Containers are a very nice UX for isolating apps from the host system, and k8s is a very nice UX for running things made out of containers. Sure it's designed for complex distributed apps with lots of separate pieces, but it still handles the degenerate case (single instance of a single container) just fine.


If you don't need any of those things then your use of k8s just becomes simpler.

I find k8s an extremely nice platform to deploy simple things in that don't need any of the advanced features. All you do is package your programs as containers and write a minimal manifest and there you go. You need to learn a few new things, but the things you do not have to worry about that is a really great return.
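For a simple program, that minimal manifest really is about this size (the image name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello:1.0  # your packaged container
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service  # gives the app a stable in-cluster DNS name, hello.default.svc
metadata:
  name: hello
spec:
  selector: {app: hello}
  ports: [{port: 80, targetPort: 8080}]
```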

Nomad is a good contender in that space, but I think HashiCorp is letting it slowly become EOL, and there are basically zero Nomad-as-a-Service providers.


If you don't need any of those things, going for a "serverless" option like Fargate or whatever other cloud equivalents exist is a far better value prop. Then you never have to worry about k8s support or upgrades (of course, ECS/Fargate is shit in its own ways, in particular deployments being tied to new task definitions...).


Those all seem important to even moderately sized products.


As long as your requirements are simple the config doesn't need to be complex either. Not much more than docker-compose.

But once you start using k8s you probably tend to scope creep and find a lot of shiny things to add to your set up.


Some ways to tell if someone is a great developer are easy. JetBrains IDE? Ample storage space? Solving problems with the CLI? Consistently formatted code using the language's packaging ecosystem? No comments that look like this:

    # A verbose comment that starts capitalized, followed by a single line of code, cuing you that it was written by a ChatBot.
Some ways to tell if someone is a great developer are hard. You can't tell if someone is a brilliant shipper of features, choosing exactly the right concerns to worry about at the moment, like doing more website authoring and less devops, with a grand plan for how to make everything cohere later; or, if the guy just doesn't know what the fuck he is doing.

Kubernetes adoption is one of those, hard ones. It isn't a strong, bright signal like using PEP 8 and having a `pyproject.toml` with dependencies declared. So it may be obvious to you, "People adopt Kubernetes over ad-hoc decoupled solutions like Terraform because it has, in a Darwinian way, found the smallest set of easily surmountable concerns that should apply to most good applications." But most people just see, "Ahh! Why can't I just write the method bodies for Python function signatures someone else wrote for me, just like they did in CS50!!!"


> For what k8s provides, it isn't complex, and it's all documented very well

I had a different experience. Some years ago I wanted to set up a toy k8s cluster over an IPv6-only network. It was a total mess: the documentation did not cover this case (at least I hadn't found it back then), and there was a lot of code to dig through to learn that it was not really supported at the time, since some code was hardcoded with AF_INET assumptions (I think it's all fixed nowadays). And maybe it's just me, but I really had a much easier time navigating the Linux kernel source than digging through the k8s and CNI codebases.

This, together with a few very trivial crashes of "normal" non-toy clusters that I've seen (like two nodes suddenly failing to talk to each other, typically for simple textbook reasons like conntrack issues), resulted in an opinion "if something about this breaks, I have very limited ideas what to do, and it's a huge behemoth to learn". So I believe that simple things beat complex contraptions (assuming a simple system can do all you want it to do, of course!) in the long run because of the maintenance costs. Yeah, deploying K8s and running payloads is easy. Long-term maintenance - I'm not convinced that it can be easy, for a system of that scale.

I mean, I try to steer away from K8s until I find a use case for it, but I've heard that when K8s fails, a lot of people just tend to deploy a replacement and migrate all payloads there, because it's easier to do so than troubleshoot. (Could be just my bubble, of course.)



