I am terrified of containers. In 2012 I used lxc-destroy on a container, and it managed to destroy my entire filesystem. It seems beyond belief to me that something like that could happen, but it did.
It's definitely unfair of me to bring it up at this point--it was a while back--but until everyone universally says they're solid, I'm not touching them.
Which filesystem--the host's or the container's? Either way, it's more likely that the fs was already corrupted in that case. Did you stop using that filesystem afterwards?
EDIT: Anyway, it was probably an issue with the tooling around containers, not with the kernel's container implementation itself
(though in theory it could have been a kernel bug that corrupted the rootfs, and what you're reporting here is it destroying your host system).
I've also experienced problems with a lack of host/container separation with lxc on Arch Linux, e.g. shutting down a container shut down the host. I suspected the problem was an improperly mounted/unmounted /dev or /sys in the guest.
I've had a much smoother experience with lxc on Ubuntu than on Arch. The core lxc developers work for Canonical, and Ubuntu's lxc bootstrap scripts are much more refined. Lxc support for Arch Linux is provided by the community, and at least when I tried last year, there were minor problems here and there.
On Arch Linux, I've found that systemd-nspawn support is much better than lxc's. The commands mkarchroot and arch-nspawn (in the devtools package) make running Arch in Arch straightforward.
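For anyone curious, the flow is roughly this--a minimal sketch assuming an Arch host with the devtools package installed and root privileges; the chroot path is a made-up example:

```shell
#!/bin/sh
# Hypothetical chroot location -- adjust to taste.
CHROOT="$HOME/chroots/demo"

if command -v mkarchroot >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    mkdir -p "$CHROOT"
    # Bootstrap a minimal Arch root filesystem into $CHROOT/root...
    mkarchroot "$CHROOT/root" base
    # ...then run commands inside it via systemd-nspawn.
    arch-nspawn "$CHROOT/root" pacman -Syu --noconfirm
else
    echo "devtools not available (or not root); skipping"
fi
```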
Up until Linux 3.4 the reboot(2) syscall was not container-aware, and hence shutting down from inside a container would shut down the host.
It has since been fixed and attached to the PID namespace, meaning that all processes in the caller's PID namespace get shut down in the same manner as if the host had called reboot()--the namespace's init is terminated, and the signal reported to its parent encodes the request (SIGHUP for restart, SIGINT for halt/power-off)--while the host itself is untouched.
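You can poke at that behaviour from an unprivileged process using a user+PID namespace. A rough sketch--the function name and exit-code scheme are mine, the constants are the published values from <sched.h> and <linux/reboot.h>, and on many systems seccomp or sysctls will block the unshare, in which case it just reports that:

```python
import ctypes
import os
import signal

libc = ctypes.CDLL(None, use_errno=True)

# Values from <sched.h> and <linux/reboot.h>.
CLONE_NEWUSER = 0x10000000
CLONE_NEWPID = 0x20000000
LINUX_REBOOT_CMD_POWER_OFF = 0x4321FEDC


def reboot_demo():
    """Power off a fresh PID namespace and report what the host saw."""
    if not hasattr(libc, "unshare"):
        return "unsupported"  # not Linux
    # Do everything in a throwaway child so the calling process's
    # own PID namespace is left untouched.
    pid = os.fork()
    if pid == 0:
        # Unprivileged user+PID namespace; may be denied by policy.
        if libc.unshare(CLONE_NEWUSER | CLONE_NEWPID) != 0:
            os._exit(42)
        inner = os.fork()
        if inner == 0:
            # This grandchild is PID 1 of the new namespace. The glibc
            # reboot() wrapper supplies the magic numbers for us.
            libc.reboot(ctypes.c_uint(LINUX_REBOOT_CMD_POWER_OFF))
            os._exit(1)  # only reached if reboot() failed
        _, status = os.waitpid(inner, 0)
        # Relay the termination signal (SIGINT for power-off) upward.
        os._exit(os.WTERMSIG(status) if os.WIFSIGNALED(status) else 43)
    _, status = os.waitpid(pid, 0)
    code = os.WEXITSTATUS(status)
    if code == 42:
        return "blocked"
    if code == signal.SIGINT:
        return "namespace init killed by SIGINT; host untouched"
    return "unexpected exit code %d" % code


if __name__ == "__main__":
    print(reboot_demo())
```

On a kernel with unprivileged user namespaces enabled this prints the SIGINT line; the host keeps running either way.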
Definitely not doubting you, but I'm surprised--though my first experience was in 2013, and I was doing some "crazy" stuff to get familiar... like bind-mounting parts of my host fs, experimenting with btrfs and the like. It definitely seems safe now.
Didn't know enough at the time to be able to diagnose what had happened, and I had just started playing with containers, so doubtless I fucked up somewhere. But it seems so utterly weird that (a) it could happen at all and (b) a newbie could so easily stumble onto something that'd cause everything to go to shit.
I just ran lxc-destroy on what I thought was a container. Then I started running into lots of issues with Chrome on the host, so I closed Chrome to try to restart it--at which point its binary couldn't be found. So I blindly rebooted, hoping that would fix it. And... there was nothing.
I had nasty side effects between host and guest on my first 'play' with lxc. Fortunately it was just processes being killed and locking me out, but I'd never have expected shutting down a virtual instance to interact with the host like that.