Right, we agree! I'm just adding some more information.
> "With LD_PRELOAD and ptrace() an explicit permission isn't granted to the debugger by the specific debuggee, therefore regardless of who the user is or file acls, permission must be denied."
Explicit permission is in fact granted because they are running as the same user. There is a permission check when gdb attaches, which it does using ptrace(2).
Regarding LD_PRELOAD or other linker instructions, again there is an explicit permissions check: In this case, the permission is the execute permission on the original binary. If you have permission to execute then you can set any linker parameter you wish, or even use a non-system provided linker! There's nothing stopping a user from compiling their own linker and using it to run a system binary, so long as it's running as them and not setuid as another user.
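This is easy to see in practice. The sketch below (Python, with a deliberately bogus preload path) shows that LD_PRELOAD is just an environment variable handed to the dynamic loader; on glibc the loader warns about the unloadable object and runs the binary anyway, because the only gate is the execute permission on the binary itself.

```python
import subprocess

# LD_PRELOAD is just an environment variable consumed by the dynamic
# loader. Pointing it at a nonexistent object (hypothetical path below)
# does not stop execution: glibc's ld.so warns and ignores it, because
# the only permission check is the execute bit on the binary.
result = subprocess.run(
    ["/bin/true"],
    env={"LD_PRELOAD": "/nonexistent/evil.so"},
    capture_output=True,
)
print(result.returncode)  # 0: /bin/true ran despite the bogus preload
```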
> "Even search order hijacking is a vulnerability even if it is the same user running"
It's not, and this kind of thinking is evidence of a faulty security model.
Linker configuration is just one small aspect of invoking an executable. You can always simply copy and edit the executable instead, as I mentioned in my previous post. Or you can always use your own linker. Or bring your own statically linked binary, with no linking to system libraries at all.
Looking for this kind of thing might be useful as a signal on an otherwise locked-down system (is development happening? Why?), but it has no place in a cohesive permissions model for a general-function workstation. It really cannot be a functional part of a well designed security architecture for a system which allows for software development.
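The "just copy the executable" point above can be shown concretely. This sketch copies a system binary to a user-owned location and runs the copy; the copy's name is arbitrary, and no permission beyond the original execute bit is involved.

```python
import os
import shutil
import subprocess
import tempfile

# The execute permission is on the file, and the user owns their copy:
# duplicating /bin/echo and running the duplicate needs no special grant.
workdir = tempfile.mkdtemp()
mycopy = os.path.join(workdir, "myecho")  # arbitrary name for the copy
shutil.copy("/bin/echo", mycopy)
os.chmod(mycopy, 0o755)

out = subprocess.run([mycopy, "hello"], capture_output=True, text=True)
print(out.stdout.strip())  # hello
```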
> "Isn't that what I said by "drop privilege"? I've only used pam_exec so I never had to deal with this with pam but in general when I need root, I get whatever api handle, socket,etc.... and change uid/gid to a low-priv user."
Typically "dropping privileges" means setting the ruid (real uid), irrevocably becoming the lower-privilege user. This is the "grab a resource and give up access" model. This can't happen with PAM because it's a function not a separate process and it needs to return holding the same permission state it was called with.
pam_exec would be forking and setting the uid in the child process prior to exec.
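A rough sketch of that fork-then-setuid-then-exec pattern, assuming the process starts as root; uid 65534 ("nobody") and /bin/true are stand-ins for a real low-privilege account and command.

```python
import os

def run_as_uid(uid, argv):
    """Fork, drop to `uid` in the child, then exec -- roughly the
    pam_exec pattern described above. Switching uids requires root."""
    pid = os.fork()
    if pid == 0:
        try:
            os.setgid(uid)
            os.setuid(uid)       # irrevocable: sets ruid, euid, and saved uid
            os.execv(argv[0], argv)
        finally:
            os._exit(127)        # only reached if setuid/execv failed
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if os.geteuid() == 0:
    print(run_as_uid(65534, ["/bin/true"]))  # 0: child ran as uid 65534
else:
    print("need root to switch uids")
```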
In a single process scenario where a function must return with the process state unchanged, the euid (effective uid) can be set to a lower-privilege user. The ruid remains 0, so privileges aren't permanently dropped as they can be restored at any time with a subsequent seteuid(0) call. It's more a temporary lowering of privileges. This is not threadsafe, of course, as privilege is process-global state.
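The temporary-lowering variant can be sketched like this (again assuming the process starts as root, with 65534 as a stand-in unprivileged uid): the euid drops, the ruid and saved uid stay 0, so seteuid(0) restores full privilege.

```python
import os

def with_lowered_privileges():
    """Temporarily lower the euid while the ruid stays 0, as described above."""
    if os.geteuid() != 0:
        return "not running as root"
    os.seteuid(65534)          # e.g. "nobody": euid drops, ruid stays 0
    lowered = os.geteuid()     # 65534 -- privileged syscalls now fail
    os.seteuid(0)              # allowed again because ruid/saved uid are 0
    restored = os.geteuid()
    return f"lowered to {lowered}, restored to {restored}"

print(with_lowered_privileges())
```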
> "and that marvelous modular architecture is not faulty and should not be replaced by a giant complex systemd process. "
The one item I still disagree on is ptrace and LD_PRELOAD's permission being explicit. It is not; it is implicit, just as you described, because in both cases the permission is implied by who owns the process and by fs acls. On my Mac, for example, I couldn't troubleshoot something in my browser process with gdb, either as my user or as root, because the browser's publisher didn't give gdb explicit permission to debug it. It turns out the "builtin" lldb had permission, so I was able to attach and debug.
Implying permissions from ownership is what I am disagreeing with. Basic unix acls give read, write, or execute permissions; preloading a shared library or using ptrace to attach to a process is not explicitly defined in the acl. Read and write apply to the file, not the process, and execute can only mean execve, clone, fork, and related syscalls.
In an ideal world, in addition to PKI/signatures, ELF headers would contain explicit permissions like this. So, for example, htop/top would disallow LD_PRELOAD and the kernel/LSM would police against permission header tampering without root or LSM-specific credentials even by the owner (ideally not root). This would make a lot of usermode rootkits useless (forcing them to use other means of course). Execute acl only defines the permission to start the program not what others can do to the program during its runtime.
I understand your reasoning, but I still disagree. I'll add a bit more information as to why:
> " It is not, it is implicit just as you described because for both cases their permission is implied because of who owns the process and fs acls. On my Mac for example, I couldn't troubleshoot something in my browser process both as user and root with gdb because the browser's publisher didn't give gdb an explicit permission to debug it but it turns out the "builtin" lldb had permission so I was able to attach and debug."
You're just describing extra permissions in osx - I believe related to code signing. Any system can introduce additional privilege checks (linux has yama/ptrace_scope), but this doesn't change the underlying permissions model.
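For reference, the Yama knob is itself just a file under /proc/sys, and its current mode can be read directly (0 = classic same-uid rule, 1 = only trace descendants, 2 = admin only, 3 = no attach at all); this is a sketch that tolerates kernels built without Yama.

```python
from pathlib import Path

# kernel.yama.ptrace_scope controls same-user ptrace attachment on Linux.
knob = Path("/proc/sys/kernel/yama/ptrace_scope")
if knob.exists():
    state = knob.read_text().strip()  # one of: 0, 1, 2, 3
else:
    state = "yama not enabled"
print(state)
```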
Furthermore, the permission is absolutely implied and this can be directly demonstrated on linux: You can simply open up /proc/$pid/mem and read/write values into the process's memory. This is sufficient to do anything. All memory can be read from and written to (provided the pages are writeable).
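Here's a self-contained demonstration of that on Linux: a process opening its own /proc/$pid/mem and reading and writing a buffer through it with ordinary file I/O, no ptrace() call involved (ctypes is used only to learn the buffer's address).

```python
import ctypes
import os

# A process can open its own memory as a file and modify it with plain
# read()/write() -- no ptrace() attachment needed for self-access.
buf = ctypes.create_string_buffer(b"hello")  # lives in this process
addr = ctypes.addressof(buf)

with open(f"/proc/{os.getpid()}/mem", "r+b", buffering=0) as mem:
    mem.seek(addr)
    assert mem.read(5) == b"hello"           # read our own memory
    mem.seek(addr)
    mem.write(b"HELLO")                      # ...and overwrite it

print(buf.value.decode())  # HELLO
```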
> "preloading a shared library or using ptrace to attach to a process is not explicitly defined in the acl"
It is. Specifically: The executable handler for /bin/ls will be (on my system) /lib64/ld-linux-x86-64.so.2. The explicit ACL is the execute bit on /lib64/ld-linux-x86-64.so.2 (to run that linker with all its options) and the executable /bin/ls (the invoking file). There are also permission checks on all the libraries which will be linked.
You can test this by copying the linker:
cp /lib64/ld-linux-x86-64.so.2 /tmp/mylinker
and then if you like, modifying it and running it explicitly:
/tmp/mylinker /bin/ls
There are explicit permission checks on all of these things -- they're just generally set to be accessible by everyone.
Users can also compile their own linkers and execute system binaries however they wish -- it does not matter. The permission is to run code -- any code.
> "read and write apply to the file, not the process"
As I mentioned above, this is explicitly not true. Memory pages on linux are represented as a flat file and they have standard unix permissions allowing the user to read their own memory in their own processes.
This is also true of a process's file descriptors, including ephemeral descriptors such as pipes, sockets, and what have you. It's all accessible using basic filesystem syscalls like open(), seek(), read(), write().
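A quick sketch of the file-descriptor side of this: every open descriptor appears under /proc/$pid/fd and can be reopened with a plain open()/read().

```python
import os
import tempfile

# Every open descriptor of a process is visible under /proc/<pid>/fd
# and can be reopened with ordinary filesystem syscalls.
f = tempfile.NamedTemporaryFile(mode="w+")
f.write("secret")
f.flush()

with open(f"/proc/{os.getpid()}/fd/{f.fileno()}") as via_proc:
    data = via_proc.read()
print(data)  # secret
```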
> "In an ideal world, in addition to PKI/signatures, ELF headers would contain explicit permissions like this."
There are indeed ways to do this but-
> "So, for example, htop/top would disallow LD_PRELOAD "
No! Why? There is no reason for this. It's not a security feature and it cannot serve a useful purpose. Anything you might do with LD_ vars and top you could do equally well by just copying and editing top.
> "and the kernel/LSM would police against permission header tampering without root or LSM-specific credentials even by the owner (ideally not root)"
These are features that you would use for a locked down device, but these types of features have no use on a development workstation where users are allowed to write code.
> "This would make a lot of usermode rootkits useless (forcing them to use other means of course). Execute acl only defines the permission to start the program not what others can do to the program during its runtime."
I think you're describing how to lock down an appliance. I'm talking about a development workstation.
> I think you're describing how to lock down an appliance. I'm talking about a development workstation.
No, any production system, even a dev workstation, would have IDEs and browsers restricted, but their new code or anything else they need would be permitted.
I suppose the entire non-LSMed Linux/Unix acl system is broken by modern standards (Windows as well in many similar regards). A process being able to rwx its own memory makes sense. A separate process being able to rwx any process's memory, so long as the target process is owned by the same user, violates the principles of mandatory access control, because it isn't mandatory for arbitrary processes with no explicitly defined relationship to alter each other's memory. If I could summarize this: the permissions model is not granular enough to apply on a per-process, runtime basis.
"You can alter the memory anyway, so who cares about preloading and ptrace?" has been the thinking all along, but that thinking has not adapted to modern threats and mitigations. I suppose an LSM is the only option, and I wouldn't suggest fixing the root cause here due to political impracticality, but it would be nice if every desktop and server distro enforced MAC adequately, at least per-process, and this shouldn't be considered locked down, just normal.
Modifying a file, using the glibc LD_PRELOAD feature, ptrace, etc. are distinct things, even if writing and then executing a file somewhere in the vfs can eventually achieve the same result. It shouldn't be left to the admin to figure out the implications; in an ideal world the implications would be explicit by design. If there is no explicit permission to specifically allow an operation that constitutes a security risk, the OS should prevent that operation. You only described how badly broken things are.
> No! Why? There is no reason for this. It's not a security feature and it cannot serve a useful purpose. Anything you might do with LD_ vars and top you could do equally well by just copying and editing top.
Yes, but then the attacker would have to tamper with top (if some other security measure permits it) or use a kernel mode rootkit instead. Security controls address specific risks; in this case LD_PRELOAD is frequently abused by even the most trivial coinmining malware, so that specific risk would be addressed, as should any other risk, in the best way possible. There is no way outside of an LSM for a developer to say "please don't permit preloading, ptrace, or memory tampering unless it is this explicitly defined process, because bad guys like to abuse these features against me, unless the user is explicitly warned about the dangers and permits the feature".
In reality you can, and people do, with selinux and the like, which are fairly cumbersome to manage by an admin (not an end user). But allowing devs to declare specific risky features, else defaulting to a deny, would not be burdensome on devs or admins, and it would make the effective security model and interface user friendly.
The same thing can be said of windows process access, debug, thread/dll/code injection, etc. These security models are outdated and user unfriendly. "If you get pwned, all bets are off" is not a good way to look at security these days.
> "No, any production system, even a dev workstation would have IDEs and browsers restricted but their new code or anything else theh need would be permitted."
This doesn't sound like a net positive policy.
> "I suppose the entire non-LSMed Linux/Unix acl system is broken by modern standards."
Hey look, I've worked in infosec too. These ideas come with a cost. You're starting to include language, terminology and design choices that are clearly derived from a large one-size-fits-all corporate environment. I've worked with similar environments and I understand why they exist -- but they are not the norm and are certainly not representative of an ideal computing environment (in terms of security, or in general).
> "You only described how badly broken things are."
I described a development system where things can be changed. You're describing a calcified system where safety is imposed by controls and invariance. Those costs are worth paying for some environments but absolutely not for others.
> "Yes but then the attacker would have to tamper with top (if some other security measure permits it) or use a kernel mode rootkit instead."
No, there are dozens of ways to supply a modified behavior in top. If you're mucking with the environment, why not just change PATH? After all, the LD_ variables are just linker paths. This is a very silly thing to fixate on.
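The PATH point works exactly like the LD_ variables: it's user-controlled lookup order. A sketch, using the hypothetical command name "sometool" as the shadowed program:

```python
import os
import shutil
import subprocess
import tempfile

# A directory prepended to PATH shadows any later command of the same
# name -- the same "search order" mechanism as the LD_ variables.
hijack_dir = tempfile.mkdtemp()
fake = os.path.join(hijack_dir, "sometool")  # hypothetical command name
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho hijacked\n")
os.chmod(fake, 0o755)

env = dict(os.environ,
           PATH=hijack_dir + os.pathsep + os.environ.get("PATH", "/usr/bin:/bin"))
out = subprocess.run(["sometool"], env=env, capture_output=True, text=True)
print(out.stdout.strip())  # hijacked
```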
> " There is no way outside of an LSM for a developer to say "please don't permit preloading"
Uh, of course there is. LD_* environment variables are just inputs to the linker. You can trivially change the linker to ignore these variables, or behave differently with them, or whatever.
> "ptrace"
Also really easy to limit with seccomp. This is SOP for containers - it's built into docker. It happens by default for most use cases.
> "But allowing devs to declare specific risky features else defaulting to a deny"
Look, here's the thing: There's always going to be a lowest-hanging fruit. I know there's a fairly common compulsion in the security industry to point at the low-hanging fruit and say "that's the problem! remove it!" but this is a fallacy. It's just not true.
What actually matters is whether or not the permission model is cohesive and readily understandable by a developer. Locking down syscalls and stripping features from a containerized app is a great idea in certain contexts. It's a terrible idea in others.
The ideas you're proposing aren't creating strict controls - they're just chasing misbehaviors. This makes a lot of sense for a large organization playing the numbers game, but it cannot lead to a cohesive, consistent, general purpose security architecture.
> "This doesn't sound like a net positive policy."
Sounds like? Well it is net positive regardless of how it sounds. Intuition is not a good way to direct or measure security.
> Hey look, I've worked in infosec too. These ideas you have come with a cost. You're starting to include language, terminology and design choices that are clearly derived from a large one-size-fits-all corporate environment. I've worked with similar environments and I understand why they exist -- but they are not the norm and are certainly not representative of an ideal computing environment (in terms of security, or in general)
> The ideas you're proposing aren't creating strict controls - they're just chasing misbehaviors. This makes a lot of sense for a large organization playing the numbers game, but it cannot lead to a cohesive, consistent, general purpose security architecture.
So first of all, I heavily resent that sentiment! I also had someone else last week refuse to argue from a technical standpoint and essentially say "that's corporate stuff, it's not popular". Do you hear yourself? Who cares? Second, it absolutely is not a corporate thing; this is what android, macos, and ios are doing, and they are very much popular and modern. When an app needs access to a camera, you need to give permission, right? It wasn't that way at the start, and they learned the hard way that allowing a process to access anything because of implied permissions led to disaster. This is exactly what I am suggesting, except I am making it holistic. This is "zero trust" but at the application level.
You're stuck in the past and on what linux cliques and popular personas consider popular. I care about bad guys causing harm on any platform; I've seen these risky features getting abused by threat actors, and that is my motivation. From a corporate perspective I don't care about hardening, I care about logs/monitoring, because I can afford to log every time LD_PRELOAD or ptrace is used and do something about it. A guy using ubuntu for gaming and web browsing doesn't get, like you do, that installing a program/package without carefully understanding its capabilities, its authorship, the unix permissions model, syscalls, procfs, and all kinds of subsystems means he's screwed. For that guy, defaults matter, and the defaults for risky features should align with user expectations, not developer expectations.
> No, there are dozens of ways to supply a modified behavior in top. If you're mucking with the environment, why not just change PATH? After all, the LD_ variables are just linker paths. This is a very silly thing to fixate on
It's not silly, because bad guys abuse it. If there are other means, default them to deny/off as well. Stop offloading default security to end users/admins.
> Also really easy to limit with seccomp. This is SOP for containers - it's built into docker. It happens by default for most use cases.
Don't you need to write the app with seccomp support? How can a user turn that off if they want to debug it? And how would an end user know to do this, or even a developer? If your default is to allow ptrace, what motivates them to limit ptrace?
> Look, here's the thing: There's always going to be a lowest hanging fruit. I know there's a fairly common compulsion in the security industry to point at the lowhanging fruit and say "that's the problem! remove it!" but this is a fallacy. It's just not true.
I don't think what I am saying is getting across. Preloading and ptrace are low-hanging, frequently abused things for sure, and that's why I mentioned them, but my suggestion is to overhaul everything, not just these things, so please stop making assumptions about my intentions/background. Any device in /dev, any risky file in procfs or sysfs, any abusable syscall, environment variable, file acl, or network connection: default them all to a deny, and give users a friendly means by which to disable the restriction. If you don't like my suggestion, that's fine, so long as the current mess isn't considered acceptable. Security should be the default, and risky exposures should be identified and disabled by default by the OS maintainer, not the end user or admin.
The corporate way would be how things are now: pay an admin, a consultant, and a manager to painstakingly maintain, audit, and service file acl restrictions on a fleet of servers and users, which isn't ideal for lone admins, newbies, and desktop users in general.
> What actually matters is whether or not the permission model is cohesive and readily understandable by a developer. Locking down syscalls is and stripping features from a containerized app is a great idea in certain contexts. It's a terrible idea in others.
No, what actually matters is the security model and default security exposures being easily understandable by a minimally experienced user or admin, and easily alterable by experienced admins/devs to suit their needs, and that isn't how things are today. Your comment about context is spot on; what is bad is your assumption that defaults should be insecure to suit lazy developers, instead of defaults being secure and developers loosening restrictions as needed.
> "they're just chasing misbehaviors"
No, these are real security exposures. Of course, if restrictions can be applied at a more fundamental level without impacting users, that would also be great. For example, with preloading, I get that there are different ways of using that feature, so I simply patched it out of glibc on past systems. But it would have been great if there was at least a compile flag to disable it, and even better if it was a runtime option, disabled by default until I need it. Assumptions are dangerous to everyone.
Lastly, this has to be the longest HN thread I've replied to. You do have solid arguments, I only wish we could have this discourse in a more suitable place.
> "Well it is net positive regardless of how it sounds."
You can't just state that lol. I am certain it is at a minimum unsettled as I've had many discussions with peers across the field about the relative tradeoffs. Including the dangers of superficial measures which aren't in sync with the underlying security architecture.
> " this is what android,macos,ios are doing, they are very much popular and modern"
It's what consumption platforms are doing. It's typically disabled on development platforms because the cost is too high.
> "When an app needs access to a camera you need to give permission right?"
But this isn't what you're describing and this isn't new. Unix has had permissions on devices since its inception.
Using multiple per-app users is becoming more mainstream, but this sort of thing is in sync with what I'm describing above. It has nothing to do with the underlying security model as to how users interact with resources they own, or this wishy-washy concept of intra-user security -- which, again, isn't how these controls are implemented on platforms like Android. Android creates a new user for every application.
Unix has been doing this since the dawn of time, giving individual users or apps permission to specific devices.
What you're talking about is something entirely different: How users interact with their own objects which are already owned by them.
> "Stop offloading default security to end users/admins."
Default security is fine, but you are talking about intra-user controls on objects vs using the system's security architecture (as above: Android uses separate users per app instead)
> "Don't you need to write the app with seccomp support? "
No, you don't! This is something done in a config file when spinning up a container and it will work for arbitrary processes within the container. You can basically configure away certain syscalls and so on if you know they aren't/shouldn't be used.
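As a concrete example, this is roughly what a custom Docker seccomp profile looks like (a minimal sketch; Docker's default profile is far larger). It allows everything except the ptrace-family syscalls, which fail with an errno instead:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["ptrace", "process_vm_readv", "process_vm_writev"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

Saved as a file (say, no-ptrace.json), it's applied at container start with `docker run --security-opt seccomp=no-ptrace.json ...` -- no cooperation from the app required.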
You can even make this part of an app architecture (aka: android app packaging) to standardize the permissions interface. When apps publish a dockerfile, this is what they're doing.
> "How can a user turn that off if they want to debug it? And how would an end user know to do this or even a developer, if your default is to allow ptrace what motivates them to limit ptrace?"
These features exist for packaged apps and limit access from the app's perspective. Other debug enabled processes (aka: my workstation shell) can still call ptrace and attach. It's all part of the security model.
> "I don't think what I am saying is getting across. Preloading and ptrace are low hanging frequently abused things for sure and that's why I mentioned them but my suggestion is to overhaul everything "
I think we might be talking past each other. It's fine to lock down developed applications to limit scope. This is the entire point of containers!
What I'm talking about though is access from already privileged execution units. We don't need to protect chsh from LD_PRELOAD because if it's running setuid then it's already ignoring it, and if it isn't, there's no threat because we already control all the related objects.
> "what is bad is your assumption that defaults should be insecure "
My suggestions are secure, and in fact are how android, osx, etc currently operate! I have a shell on an android right now. The browser is running as a separate user in a locked down environment. The entirety of /bin/ is not and I'm allowed to play with the linker parameters in development -- but not from within a production app.
I think we agree that locking down a production app is reasonable.
I think where we're getting off the rails is regarding features for a developer who wants to work on their own system -- like LD_PRELOAD. I'm sure you agree that a developer workstation should support this, right? Or an android with developer mode enabled? Etc? Because it does work that way on these modern platforms.
> "Lastly, this has to be the longest HN thread I've replied to. You do have solid arguments, I only wish we could have this discourse in a more suitable place. "
> "and that marvelous modular architecture is not faulty and should not be replaced by a giant complex systemd process. "
Agreed!