Google’s in-house desktop Linux (computerworld.com)
286 points by signa11 on Aug 1, 2022 | hide | past | favorite | 228 comments


Correction: The first official distro internally was "grhat." It was born out of need. We used LDAP+kerberos for auth and our homedirs mounted on first login. This had all kinds of fun problems (looking at you, nscd!) but worked well enough most of the time. Goobuntu came a couple years later. In between, lots of people ran their own installed versions and we worked together to get things working (even Slackware).


> This had all kinds of fun problems (looking at you, nscd!)

"can someone telnet to 10200 on my box to reset nscd!?"


>> LDAP+kerberos for auth and our homedirs mounted on first login

My first "real" job as a sysadmin had this kind of setup, albeit with Sun servers and workstations running Solaris 8. I was impressed by how well it actually worked most of the time, given all the quirkiness of kerberos/nfs/nscd/etc.


My first job at University was dealing with this type of setup (Solaris workstations) except we weren't dealing with LDAP, Kerberos, and NFS, but rather AD and Microsoft DFS.

It did NOT go that well, but was a good learning experience.


I've used kerberos + some auth (actually AD in addition to LDAP) and automounted home (and data dirs) a number of times (various clusters, departmental servers, corps). I'd say it was fairly mature (the OS stack supported it, it didn't crash all the time, etc) but still had (and has) sharp edges when it comes to high throughput computing.


Same, though much newer! UW CSE did the same when I worked there as a help desk tech, student directories mounted on login to the machines. IIRC, it was just a samba share linked into kerberos auth somehow.


Doing a rolling distro sounds like a great idea for Google, at their scale, and they also have the expertise and resources to handle any downsides of that.

For early startups, I've been doing "Debian Stable" as the default, partly so that we don't have to spend any time on any surprise distro changes when we're focused on MVP, etc.

And in 2 years, the startup will probably have a lot more resources, and we can look at whether that still makes sense, though we might just end up doing an in-place APT `dist-upgrade`, or coast on `oldstable` awhile if the timing is not right yet.
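An in-place upgrade between Debian stable releases is mostly a standard, well-documented procedure. A minimal sketch, assuming a bullseye-to-bookworm jump (the release names are examples; consult the actual release notes before doing this for real):

```shell
# Sketch of an in-place Debian stable-to-stable upgrade (run as root).
# Release names here are examples -- substitute the actual ones.
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list  # point APT at the new release
apt update                       # refresh package indexes
apt upgrade --without-new-pkgs   # minimal upgrade first, per the release notes
apt full-upgrade                 # then the full upgrade (modern apt's dist-upgrade)
reboot
```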

I also try to use the same distro on workstations and in production, to permit lightweight efficiencies, like experimenting and debugging outside of containers on workstations, without special tooling, while still having a close match to production. So, Debian Stable everywhere.

A recent Ubuntu LTS would also work for this (and sometimes be easier for things like Nvidia SDK), though Debian has been arguably a bit better for security and stability lately.

For things not in the Debian Stable we're using, such as if we need a bleeding-edge version of some key thing, and we have big security/reliability requirements... I manage non-Debian-packaged third-party dependencies in our own Git repo, and track and vet updates. This also means trying to minimize these dependencies, more than we would if we were pulling in 100 packages casually from a language-specific package manager, since each package is additional work.


Debian Stable (like Ubuntu LTS) doesn't have newer package versions. It might sound OK for Windows people, who are accustomed to having a release once every few years, but for more hardcore Linux users it wouldn't be the best user experience.

Even just git has many releases during a Debian Stable/Ubuntu LTS cycle that enhance the user experience, so holding back users is not the best idea. And in most cases security fixes land first in the newest versions and are backported afterwards, so a fix is more likely to be in HEAD than in some random older release.


> for more hardcore Linux users it wouldn't be the best user experience.

The "best" user experience you're referring to is getting surprises like your OpenSSH 9 suddenly stopping working with your key agent [1] out of the blue, because they decided to change the protocol (no big deal, right?) without any hint of this whatsoever during normal usage, and you just casually upgraded your packages because, well, when did Linux package upgrades ever hurt anybody?

(Just the latest incredibly frustrating surprise your fellow "Windows person" literally had to spend hours tracking down. And that I actually remember. Most definitely not the only one.)

[1] https://www.openssh.com/agent-restrict.html


... or when an ill-advised and insufficiently tested change to the internals of glibc suddenly causes threaded applications to misbehave ... great that it was fixed quickly, but for the "regular users" who suddenly found a variety of applications malfunctioning (but they each typically only used one, and hence blamed the application they used), a strong argument against rolling releases.


Did Arch users have this problem? I think the OP's point is that rolling distros like that keep all packages closer in line with upstream and avoid these kinds of issues. And if a version has problems, you can choose from any historic version far more easily than on Debian. Now of course it doesn't save you from having to troubleshoot your own issues, which very much is a time sink.


Yes in fact I first saw this on Arch. This had nothing to do with the distro.


Debian has a backports repository, providing newer versions of software for the existing stable release. This "Debian Stable has old software" canard is way out of date.
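Enabling backports is a one-line APT source plus an explicit opt-in per package. A sketch, assuming Debian 12 "bookworm":

```shell
# Sketch: enable backports on Debian 12 (bookworm) and pull a newer git from it.
echo 'deb http://deb.debian.org/debian bookworm-backports main' \
    > /etc/apt/sources.list.d/backports.list
apt update
# Backports are never installed automatically; you opt in per package with -t:
apt install -t bookworm-backports git
```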


Not only that, but there are so many ways to get newer versions of software on Debian Stable when required: backports, Nix, pkgsrc, Docker containers, Appimages, flatpak, snaps...

Debian Stable tends to hand pick the latest LTS version of packages everywhere applicable, so it's actually a good base to standardize on. Packages might be a bit older but they are reliable, predictable, and kept secure by a constant roll of security updates.

In other words, if it doesn't work on the latest Stable, you're probably moving too fast for a good part of the industry. Lots of customer and production stacks are not in the "move fast, release early, release often, and link the crap to the very latest bleeding edge dependency" bandwagon.


If the argument is to enable backports and let it update, what’s your use case for stable vs using testing?

The whole point of stable vs testing is that versions are fixed, reviewed security updates make it in, and it's all tested and vetted.

On the other hand, testing and backports haven’t been tested as well and could have issues.

If we use stable and incorporate backports from testing, is it still stable?


If you are selective about the specific packages you are updating, you're closer to stable than you might think.


I'm a linux user for over 25 years. I want my development environment to match production and that almost never involves new software.


If we're in a ... contest: I've used Linux for exactly 25 years, and I always use the newest software I can get my hands on, in some cases even betas (like with Mesa or some Intel drivers). For the last 15 years it has been Debian, a combination of testing/unstable (some packages from unstable) and a lot of self-compiled ones for software that doesn't exist in Debian at all or has old versions (even in unstable). I've had one case where an update broke my system; it was fixed a few days later (after a weekend). I probably don't update often enough (only when I remember to do apt-get update && apt-get dist-upgrade) to get more frequent breakages, or don't use that much software.

If I had known about Arch earlier I would have switched, because it sounds like my kind of distro, but I have too many customizations and too many kids to have time for that right now.

I never quite get why people want their development environment to be similar to production. Do you install an IDE on production? Or do you just code in vim/emacs/ed without any plugins (because production doesn't need them)?


Again, system vs. apps, though. My production servers don't have Slack, Spotify, GUIs, etc.


My production servers run docker containers. Where are Slack, Spotify, GUIs, etc. run? In containers, of course.


I've never seen Slack, Spotify, or a DE run in containers, so you must be going very much out of your way for that to be the case. And if you're going that far, why do you care about the distro in the first place?


> Even just the git has many releases during Debian Stable...

It just so happens the software you mentioned, git, is in stable backports, so it is updated frequently. Regardless, I've been on Debian stable since 2007, and running "bleeding edge" stuff is trivial, depending on your desired workflow. (For example, I just install Debian testing packages in a schroot.) I used to bounce from distribution to distribution in the old days (Red Hat, Mandrake, Slackware, Gentoo, etc.) but once I found Debian Stable I could concentrate on getting work done.
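The schroot approach boils down to a few one-time steps: bootstrap a testing tree, describe it to schroot, then run commands inside it. A rough sketch (the user name is a placeholder):

```shell
# Sketch: a Debian testing environment via debootstrap + schroot (run as root).
debootstrap testing /srv/chroot/testing http://deb.debian.org/debian

# Tell schroot about it ("alice" is a placeholder user name):
cat > /etc/schroot/chroot.d/testing <<'EOF'
[testing]
type=directory
directory=/srv/chroot/testing
users=alice
root-users=alice
EOF

# Run a command (or an interactive shell) inside the chroot:
schroot -c testing -- apt-get install -y some-testing-package
```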


I thought distros were stable until I tried Debian stable... It's stable! I use it everywhere, even my dev machine. Might switch to testing though on the dev machine, 2-3 years of no user software updates is a long time. I know it's unreasonable to ask, but a yearly new stable would be the best of all worlds.


May I ask where you feel it the most?

For me it’s definitely been browsers and IDE’s, but I just install those to /opt.


A notoriously finicky piece of software is Akonadi. If you use KDE's otherwise impeccable PIM software (underappreciated, if you ask me!) you are probably very familiar with Akonadi's random hangs and crashes and corruption. Debian is the first distro where it has been working steadily for more than a few weeks. Steadier than KDE's own Neon and flatpaks.


I wonder what kinds of issues people who prefer stable run into on other distros? What can break? For me it happened once, years ago: X didn't start up, fixed a few days after an update.

Most Linux breakage was my own doing (e.g. overwriting the MBR by mistake).

I use testing+unstable (I didn't know that backports exists, thanks) + self-compile some software that doesn't exist in either or is too old. I also always compile the kernel, preferring it to have fewer modules and to be configured towards desktop performance.


I use Debian stable for my workstation, but get everything I care about from somewhere else. Go, Node, Emacs, Postgres... all installed from some other package repo. (I actually like Homebrew a lot on Linux.)

So I might be two years out of date on ls, but it probably hasn't changed in the last 2 years anyway.


I work for a smaller company running most of our stuff on older Ubuntu LTS versions. While there are things we want to update faster than the distribution does, most of them are leaf packages near the stuff we're directly working on and are easy enough to either package ourselves or find a more recent package for from a trustworthy source.

It’s really nice to have the rest of the distribution more or less set it and forget it.


For a general desktop OS, though, that does not sound like a huge problem at all. Long-term stable is always behind; that's part of the selling point, in a way.


> it might sound OK for Windows people

Windows users might be accustomed to having infrequent OS version releases, but they're used to their _apps_ being current.


Debian is not a good distro for development. It has an intrusive package management philosophy that expects to be able to manage everything on the system; installing up-to-date things from a language package manager is harder on Debian than elsewhere.

(And, as another reply mentioned, you'll miss out on a lot of usability improvements to development tools by running stuff that's 2 years old or more)

> For things not in the Debian Stable we're using, such as if we need a bleeding-edge version of some key thing, and we have big security/reliability requirements... I manage non-Debian-packaged third-party dependencies in our own Git repo, and track and vet updates. This also means trying to minimize these dependencies, more than we would if we were pulling in 100 packages casually from a language-specific package manager, since each package is additional work.

Making it harder to have up-to-date dependencies is costing you far more than you're saving by not upgrading frequently.


> Making it harder to have up-to-date dependencies is costing you far more than you're saving by not upgrading frequently.

Not if you're separating where you live from where you work, in a manner of speaking. Applying global changes to your system (e.g. using the package manager to install specific versions of packages required by your production app) is a terrible, terrible idea that will cost you more than isolation (standardized Docker, dev VMs or similar, which ought to reflect your running environment). Debian stable provides a good workstation environment; the actual work should not be done against Debian-blessed packages. The opposite is true: your workstation packages should not be dictated by the needs of your prod software.


If you do that then you undermine the idea of being able to do lightweight debugging etc., and there's no particular merit to matching your servers. You need a reasonably stable workstation environment, but it doesn't really matter if different developers use whatever they're most comfortable with as long as it makes them productive, and people will probably want more up-to-date tools than what's available in Debian Stable. (In practice a lot of people who follow this model end up using Macbooks, and it's... fine, not an awful way of working, but not the clear best either).

> Applying global changes to your system (e.g. using package manager to install a specific versions of packages required by your production app) is a terrible, terrible idea that will cost you more than isolation (standardized docker, dev VMs or similar, which ought to reflect your running environment).

Why? Your system is (or should be) a single-purpose tool. It absolutely should be managed and reproducible (using something like puppet), but keeping everything flat so that you're working on the same "bare metal" that your application works on has more advantages than disadvantages IME. (E.g. it's hugely helpful to be able to fire up a random REPL, integrated with your IDE or similar, and have the same versions of your libraries available that your application uses).


> Why? Your system is (or should be) a single-purpose tool.

Stability. My assumption is that one would be using their workstation for tasks much broader than the production environment: development, debugging, slideshows, video conferencing, screen recording, etc. Without execution boundaries, it's almost inevitable that there will be incompatibilities.

Stability of the prod software may also be impacted when something works on a dev environment (and only on that specific dev environment) because the engineer inadvertently added an implicit dependency on some aspect of their environment.


It's a tradeoff to be sure, but IMO if you're a developer then development and especially debugging should be the priority. I've worked at one place where developers had a second "office machine" that we used via RDP for doing emails/presentation/etc., kind of an inversion of the usual suggestion of setting up your main system with a stable environment and using a VM to match your deployment environment. I found that way more productive, because using a remote or VM imposes much less overhead on everyday office stuff than it does on debugging where you want to be able to connect your IDE to the running process, add packet filters to your kernel or whatever. But I guess YMMV.


Maybe.

For my development needs I prefer Debian Stable and install the dev tools locally in my home directory rather than using the packaged versions.

Since I don't share those boxes, it works well for me. I keep up with the frequent Android Studio / VSCode updates that way.


> For my development needs I prefer Debian Stable and install the dev tools locally in my home directory rather than using the packaged versions.

At that point you end up manually doing the job of a package manager.


Either you want the package manager to expect to manage everything or you don't, pick a side.


I want a system that's better at playing nice between system-managed and developer-managed packages. Distributions that stick closer to upstream (e.g. Slackware, Gentoo, or especially FreeBSD with the built-in ports system) are better than Debian for developer machines IME.


> installing up to date things from a language package manager is harder on Debian than elsewhere

For better or for worse, I think this is partly where my preference to use rbenv, pyenv, asdf, etc comes from.

The other part was from hosing my system-installed pip packages for the Nth time because long ago I didn't fully understand pip (still don't, but I didn't then either) and I used a mix of "pip install" and "sudo pip install" when the former didn't work. I could only work on one thing at a time with this approach, and switching was a huge task because I had to figure out all the places where packages had landed and either remove them individually or blow the whole thing away.
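The standard way out of that pip install / sudo pip install mess (besides pyenv and friends) is a per-project virtual environment, so nothing ever touches the system site-packages. A minimal sketch:

```shell
# Sketch: per-project virtualenvs keep pip away from system packages.
python3 -m venv .venv        # create an isolated environment in the project dir
. .venv/bin/activate         # python/pip now resolve inside .venv
pip install requests         # lands in .venv, never in /usr/lib/python3/...
deactivate                   # back to the system environment
# Cleanup is just `rm -rf .venv` -- no hunting down stray packages.
```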


This is a great point! Quite insightful and detailed on how Ubuntu could have been the greatest program the world could have benefitted from. I think it was too short-lived and misunderstood for its time.


> Besides, the "effort to upgrade our Goobuntu fleet usually took the better part of a year. With a two-year support window, there was only one year left until we had to go through the same process all over again for the next LTS. This entire process was a huge stress factor for our team, as we got hundreds of bugs with requests for help for corner cases."

Ummm.. No? Ubuntu LTS has a 5-year support window. Not 2.

Or maybe it didn't all the way back then? I don't recall.

A rolling distro does make sense though, especially in a large org where you have relatively homogeneous use patterns. All the edge cases you can find centrally and pre-empt before rolling out the updates.


Desktop was 3 years from 6.06 onwards; only server had 5 years. Might have changed recently. This meant you had a year to upgrade the old LTS before it went end of life. (From 2008, releases moved back to April; 6.06 was the first LTS and I assume they took a little longer getting it out. I think the version before was 5.10.)


They changed it in 2012. Desktop gets 5 years of support and server gets 10.

https://en.wikipedia.org/wiki/Ubuntu_version_history#Version...


> Ummm.. No? Ubuntu LTS has a 5-year support window. Not 2.

Internal users demanded the upgrade every two years anyway.


For all the talk I've heard over the years about their in-house Linux distro, their desktop application support, and sometimes even web application support, for Linux is nonexistent.

Most of their server estate will be Linux based, including what's powering their desktop applications. But almost none of their efforts give back and enrich the desktop ecosystem. That said, they do contribute a lot towards frameworks, libraries and the kernel; certainly they are among the top contributors in recent years, alongside Microsoft and Facebook. So I do recognize my entitled call for more... with good cause.


As a Fedora user writing this from my brand new Google Pixel phone, who uses Gmail and YouTube, and pretty much nothing else from Google in 2022, what is missing?


A lot of the stuff that annoys me and gets in my way while using Linux is still the same basic quality-of-life stuff that has been broken forever. Bluetooth pairing fails occasionally. Plugging in a new display doesn't immediately work. A USB microphone fails to register as an input sound device.

I cannot imagine that Google engineers would live with such deeply broken systems as daily drivers. So whatever that secret is, whether it consists of drivers, known hardware configurations, or even just config files they use to ensure optimal operation – that's what I'd like them to Open Source. I'm happy to buy whatever laptop spec Google uses for gLinux for myself to go along with their OS.


> I cannot imagine that Google engineers would live with such deeply broken systems as daily drivers. So whatever that secret is, whether it consists of drivers, known hardware configurations, or even just config files they use to ensure optimal operation – that's what I'd like them to Open Source.

Hey there, Google engineer here (opinions are my own, etc etc). I work in the ChromeOS platform team. We have hundreds if not thousands of very talented (not me) engineers who work on low-level firmware, drivers, kernel, performance optimization, etc on Linux. ChromeOS is still Linux and almost entirely open source[0], we also upstream most of it[1]. A lot of the improvements that ChromeOS has had for multi-monitor support, plug-and-play devices, wayland, keyboard/touchpad firmware, etc have all entered mainline Linux kernel and should be perfectly usable by the whole Linux ecosystem and other distributions.

I don't work with the gLinux team anymore (I used to in the past) so I don't know what exactly they do with their stuff, but regardless of the "it's not a real linux distro!" hate ChromeOS might get, we still run on a fully open Linux environment ourselves.

[0] https://source.chromium.org/chromiumos/chromiumos/codesearch

[1] The stuff we don't upstream is usually because either we can't (ChromeOS-specific hacks that the Linux kernel mainline wouldn't accept) or we haven't been able to yet due to needing to clean up patches before they are accepted. Regardless, we'll still open source it in our kernel tree.


> We have hundreds if not thousands of very talented (not me) engineers who work on low-level firmware, drivers, kernel, performance optimization, etc on Linux.

I am interested in working on this. Do you know if there are positions open, and where and to which group to apply?


You can check on our careers page, we do post all available positions there, you can filter for stuff like chromeos or firmware or kernel engineer, etc. Example: https://careers.google.com/jobs/results/?distance=50&hl=en_U...


A quick question, as it's very hard to get an answer on this: does Google ask leetcode stuff for firmware and kernel engineers too?


From an enterprise management perspective nobody (as far as I know) has made some kind of tooling to manage linux desktops in a similar way admins can do with ChromeOS or Windows. I think that is the real dealbreaker for linux in the enterprise for general use.


Googler here, opinions are my own. While nothing prevents us from switching desktop environments (many of us do so), basically anything outside the default GNOME/X11 configuration is not supported. (It used to be Cinnamon but we're switching away.)

Bluetooth accessories are not guaranteed to work. People struggle with screen configurations all the time. Nvidia driver updates are still a nightmare. I'm pretty sure there's no secret sauce.


I'm curious about the change from Cinnamon to GNOME. Are you aware of any particular reason why? I feel like attitudes towards GNOME haven't particularly changed over the years, but that might just be my bubble.


They want to switch to Wayland eventually but Cinnamon doesn’t support Wayland.


Thanks!


So even Google cannot truly do "Linux on the desktop" right?


That’s really nice to hear, actually.


The secret is using hardware that's known to work with Linux. Inside Google, I'm sure IT does this (or writes/fixes drivers when it must). Outside google, you have to do this yourself. Find someone with a working setup and duplicate exactly.

> Bluetooth

Bluetooth is a dumpster fire and Linux bluetooth is a radioactive biohazardous dumpster fire. Stay away. My workaround: Linux -> S/PDIF -> 1Mii B03Pro bluetooth transmitter. The chipset drivers that output 3.5mm and S/PDIF are ancient, stable, and dumb as rocks, so they actually work. S/PDIF beats 3.5mm because 3.5mm can detect connectedness and punish you by scrambling your sound settings every time you bump a cable. S/PDIF can't do this, so software can't screw it up.

Also, even if you get linux bluetooth working it will tend to scramble if you reboot into a different OS. Just say no. Use an external bluetooth transmitter.


> I cannot imagine that Google engineers would live with such deeply broken systems as daily drivers. So whatever that secret is, whether it consists of drivers, known hardware configurations, or even just config files they use to ensure optimal operation – that's what I'd like them to Open Source. I'm happy to buy whatever laptop spec Google uses for gLinux for myself to go along with their OS.

Then your imagination could be improved in a few ways:

1. Google has complete say over the software you're allowed to run at work. They can mandate using whatever tools they want as terms of your employment.

2. Google has complete say over the hardware they purchase for you. Random driver incompatibility isn't a thing. They pick hardware that's been validated to work so they don't get random IT complaints.

3. This is an IT / compliance / security thing. The grand total of UX people dedicated to this probably amounts to designing the splash screens. The grand total for bug fixing is for issues impacting an obscene number of users. If you f'ed up your OS install you'll basically have to fix it yourself (IT will offer to reimage your machine and you'll be responsible for making sure you correctly managed your backups by hand; you got limited storage space, at least when I worked there, so you had to make sure to exclude certain folders from backups).

4. They do have a slightly better story for Android development but most of that relies on Google3 / Blaze. I think they're doing some work to migrate to Bazel finally but I imagine they'll always have a bit better internal dev story.

About the only value-add they could be giving here is the hardware configurations but they're not really different from stock Linux laptops you'd buy as a consumer.


> Plugging in a new display doesn't immediately work. A USB microphone fails to register as an input sound device.

I have these problems with the Dell laptop running Windows 10 which I use for work, but not with my own desktop or laptop running Fedora. So, my experience is more or less the opposite apparently. This makes it harder for me to buy the theory that there's something uniquely inferior about these Linux desktops.


When I switched from Linux to Macs for my daily drivers ten years ago, I was really shocked by how much stuff just works. But there's plenty of stuff that doesn't work, even on Mac. When I am struggling to get something working with one of my Macs, I often joke to myself that stuff like this is why Linux on the desktop will never be popular.

For example, my work laptop, a Macbook Pro, is always on my desk in my office. When I'm working, I have an external monitor plugged into it. If the laptop goes to sleep, then a bit later the monitor goes to sleep. But when the monitor goes to sleep, the laptop wakes up. Then a few minutes later the laptop goes to sleep and starts the cycle again. So every evening when I'm done working I have to unplug the laptop and manually put it to sleep. Then every once in a while (rarely, but multiple times this year) something else keeps waking the laptop up, or maybe I accidentally wake it up and don't manually put it to sleep again, who knows. When that happens, I have a dead Macbook Pro on my desk in the morning, and I say, "[Stuff] like this is why Linux on the desktop will never be popular."


External monitor handling is one area my Linux Thinkpad works smoother than my work Macbook Pro.

The previous time I used a Mac (2015ish), it was the other way around.


I'm honestly a little disappointed at how smoothly everything works on my Linux machines. It ruins the impression that I'm doing eldritch magic, issuing gnomic commands inherited from the primal hackers, immersing my very self in the unknowable gnostic wisdom of Unix.

Maybe I need to run OpenBSD.


> Bluetooth pairing

BT is just a disaster everywhere. My experience is that for "normal" use cases (input devices & headphone audio, basically) Linux is no worse than anywhere else, but still bad. Apple users think it's great because Apple made AirPods work, but that's a testament to Apple's integration engineering and not the underlying technology.

> Plugging in a new display doesn't immediately work

I haven't seen this with Intel drivers on a major distro in a long time. But yes, the driver story for other hardware remains somewhat weak. And obviously the farther you get from "Gnome on Ubuntu/Fedora" into the weeds of desktop choices, the weirder the feature set is going to get.

> A USB microphone fails to register as an input sound device

No idea here. I haven't seen a USB audio failure in a LONG time (the closest I can think of is a Razer headset that has two chat/game output streams and Linux sees only one by default). Most likely you had a broken piece of hardware that worked in Windows only by installing its own driver. And that sucks, but poor standards compliance is just something we all live with.


My impression, from being in the industry far too long, is that Bluetooth was one of a variety of what were once being touted as personal area (wireless) networks, with Bluetooth seemingly being pushed for relatively simple use cases like keyboards. But none of those other protocols panned out, so Bluetooth ended up being the general non-WiFi wireless standard for things the original spec never contemplated. The standard has been updated over time of course, but I imagine a lot of early-on decisions still have an impact on things.


> but that's a testament to Apple's integration engineering and not the underlying technology.

So no one outside of apple is smart enough to get it working.

What bothers me about non-Apple stuff is: I can pair my AirPods to my phone, TV, and laptop. Even if they are connected to the laptop, if I press connect on the Apple TV or phone they switch.

With any non-Apple headphones (Bose, Audio-Technica, Sony, Asus) I have to press the Bluetooth pairing button and connect on the device again. So switching between two laptops is a pain.

Why can’t other brands do what apple can? It’s not the tech problem when someone solves it…


> So no one outside of apple is smart enough to get it working.

Pretty much, yeah. And the reason is that Bluetooth is a disaster of complexity. One implementer thinks function X in state Y means "frob", but another interprets that as a "glim" command. Apple makes it work by controlling variables: their controller, their driver stack, their library framework, their app integration. If one team doesn't understand why another team's layer is doing something, they walk across campus and ask them.


> So no one outside of apple is smart enough to get it working.

You have to have the capability and be given/assigned the time to do it properly.

Things that are genuinely a mess (e.g. bluetooth?) often only work smoothly when there was a mountain of effort put into it - that consumers can't see.


> So no one outside of apple is smart enough to get it working.

Eeehhh... Specs are big, complex documents with fuzzy terms like SHALL, MAY, MUST and weird corner cases that no one thought of, being read by engineers with varying linguistic prowess, implementing them in disparate hardware and software vendors with radically different manufacturing/release pressures, different skillsets, different interpretations of the same damned specs... It's a wonder any of this shit works at all.

Apple's the only vendor out there with something approaching a consistent experience because they're the vendor with the greatest degree of control/integration over the entire stack. BT, ACPI, all the same.


I have a bluetooth audio receiver in the home office, bt in my car and bt headphones and none of it has given me any trouble. Just works. The only issue I have is which phone the car decides to pair with when we get in to go somewhere together. Oh, I guess we also have one of those bluetooth battery powered speakers for taking to the park. It works fine too. I connect to the bluetooth audio receiver with my linux, mac and windows laptops on an almost daily basis, no problem.

What problems are you running into?


I get terrible USB mic issues but only with native MS Teams (linux is terrible, Windows is not much better). These issues are weirdly resolved using the web client through Chrome on both platforms.


I heard (here, long ago) that Chrome contains its own USB driver.


> I cannot imagine that Google engineers would live with such deeply broken systems as daily drivers.

Oh yes, yes they do.


> I cannot imagine that Google engineers would live with such deeply broken systems as daily drivers.

Heh, most use Macs. Linux laptop users have the same problems, don't worry. My laptop hard locks up on suspend once every couple weeks, plus all the things you mentioned. Oh, I also have a weird audio latency bug.


Plugging in a new display doesn't immediately work on gLinux. It may be me, because I'm on i3wm which is not the official wm.

I have a manually crafted autorandr config but it's kind of hit and miss.
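FWIW, autorandr can record profiles from a working xrandr layout rather than being hand-crafted; a sketch (the profile names here are made up):

```shell
# Set up each layout once with xrandr/arandr, then snapshot it:
autorandr --save docked    # e.g. laptop + external display
autorandr --save mobile    # laptop panel only

# Later, apply whichever saved profile matches the connected outputs:
autorandr --change

# Or fall back to a named profile when nothing matches:
autorandr --change --default mobile
```

autorandr also ships udev/systemd hooks that run `--change` on display hotplug, which helps with the hit-and-miss part under i3.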


Known good hardware, I expect. There's definitely some hardware out there which is sorta-kinda supported on Linux, but only really works for one kernel version on one distro, and even then only supports the bare minimum of requirements.

> Bluetooth pairing fails occasionally.

I never had this issue on Linux until my most recent laptop, which shipped with a Qualcomm 1103 card, which gave me no end of issues with Wifi and Bluetooth (resume from sleep failing, weird power drain, random drop outs).

I swapped out the wifi/BT module for an Intel wireless card (AX210), and all the above problems immediately resolved themselves.

Maybe it's just the bluetooth hardware?

> Plugging in a new display doesn't immediately work.

I've honestly not had to worry about this in at least ten years across ~8 different machines. It's just worked on all three major graphics vendors. I haven't used Wayland yet, so I don't know if that's a factor.


I've had a couple of things that wouldn't always pair. Some headphones, a Bluetooth mouse if I recall.

I chalk it up to the freedom I get from using Linux and just move on.


> Bluetooth pairing fails occasionally

Anecdata: my AirPods have been pairing seamlessly with Xubuntu out of the box for over a year. I've similarly had no problems pairing with my Bose portable speaker or soundbar.


> So whatever that secret is

A large IT team.


I would imagine not only large, but also quite capable. If the IT staff hired at Google go through a similarly tough gauntlet as the software devs, SREs, and other product-producing roles, then I can imagine those IT staffers really know what they're doing... at least they'd have to be good enough to get in the door at Google, which I guess is not easy.


Google used to actually have a handful of native applications on Linux. I remember uploading to Google Play Music with a Linux client (not web), and stuff like Google Drive was "promised" or something like that.

It was all really short lived though. I remember that uploader broke fairly quickly because of, of course, some ABI change in a library dependency, and of course there was never another release.

It's not really what's missing... it's just google's 'here today, gone tomorrow'. They really shouldn't be called engineers.


Is there a native Google Play client on any desktop platform?


Well, their contributions and 'desktop support' only go towards... ChromeOS.

The fact that this effort is going towards that OS tells us they don't care about open source, and especially not the Linux desktop. Their contributions are only to the kernel, which is fine, but it's clear that the wider Linux desktop ecosystem still has the same 20-year-old issues today.


Having never worked at a place the size of Google, I'm surprised that their IT team directly manages 100,000 machines. In my (apparently naive) mind, if you're smart enough to work at Google, you're smart enough to manage your own OS.

Is it a security thing? Are Googlers not allowed to directly control their own computers?


(recent Xoogler here) There is quite a lot of freedom, though if you do custom stuff it's just not supported. I usually replaced the desktop, installed a custom terminal app, etc. There is also a lot of freedom in IDE choice - you can use literally whatever you want, it's just that the support is crappy if you don't use a supported one. And Vim support was IIRC completely volunteer based - there was no vim team, just people working on plugins and building releases as 20% etc. I contributed one plugin which sadly not a lot of people used, but Googlers out there - try BlazeDebugCurrentTest (it's part of the Blaze plugin) or something like that!

Actually one of my favorite Google experiences starting out was that I wanted to work with an Apple Magic Keyboard, and to get proper support for it I needed to compile some kernel driver I found on github. I asked IT support if I can do that, and the answer was "you're a SWE, review the code, make sure it looks legit and doesn't contain any security holes, and then it's your responsibility if you mess anything up". Which I did, and it worked just fine. It was in 2018 so it's not like I'm referring to some early days thing.


Vim support has only gotten better since 2018. With CiderLSP (LSP support for all of google3), it was a breeze navigating the massive codebase.


Before WFH I used CLion for cpp. When we were locked down I found Cider to be annoying (I think mostly because you couldn't tab switch to a Chrome window on MacOS), so I took a couple of days to really master vim and configure all the plugins (and wrote the one plugin I was missing), and it worked just fine for a while; I would just ssh to my workstation and use that. Later on they fixed the tab switching thing so I found myself drifting more towards Cider, then Cider-V came along, and I was back at the office, so I was mostly using that, though by that time I wasn't writing cpp anymore.


> Are Googlers not allowed to directly control their own computers?

First, if the computer is provided by the company, for company work, they're not "their own computers." They're the company's computers. And I don't know about Google, but yes it is VERY common in most mid- to large-size organizations that employees are not allowed to change things on the machines they use for work.

The IT department is usually required by either contracts or regulation to follow certain security standards and many of those prohibit end-user modification of machines. And further, these are often enforced through annual, semi-annual, or quarterly audits. Failure to follow these standards can result in the loss of a sales contract, a critical certification, or fines.

So, just like a manufacturing worker is not allowed to modify a machine on the factory floor (even to fix it, or improve it, regardless of their skill to do so), employees are not allowed to modify the organization's computing equipment.

And, it's important to note, the IT department generally has NO say in these rules since they are a business or legal requirement.

Edited to add: I've also worked in companies where the developers who write the in-house software might be masters of the business problem domain and their programming language, but have exactly ZERO knowledge of hardware or the most basic system administration principles. It doesn't cross their mind that RAM and disk space are actually finite things, for example, and that it makes for a Bad Day when one of their programs starts to consume ALL of it. These are the people who should not really be managing their own operating systems at work.


Which business and legal requirements, do you think, bind most small and medium enterprises to lock down workstations while engineers across Silicon Valley have root on their MacBooks from huge important companies?


Google doesn’t let devs store IP on their workstations except in very rare circumstances (per this thread and the BeyondCorp papers), and in those rare circumstances I’m willing to bet you either don’t have root or there’s way more auditing turned on.

Most bigcos are the same way: there’s a way to get root, but there’s also security software or policies to counteract damage you could do. With most hardcore “zero-trust” setups you consider a device’s posture based on its current state and don’t take into account whether or not the user has admin rights (why do you care what rights they used to install bad software; you just care that they installed bad software). No admin rights typically means nobody has had time to do something better for end users, or there’s a regulatory requirement (at the same time, bigcos can throw a sandboxed VM up in their data center that can’t access PII if devs need more access).


Money for lawyers


I gave a demo in Mountain View of some excellent viz on their big screen in some room. I wanted to make a tiny change (edit: not plug in a new device; being polite, I suggested moving a device of theirs already on that network closer to the overhead projector)... the alarm, consternation, and definite NO that rippled through the three quickly-relaying employees there was like an electric current. I was amazed that they couldn't change even the smallest thing, ever.

Of course, I was trying to get my brilliant colleague, who has no degree, noticed with that code; zero response, a stonewall. Later I read that Google got over one million job applications that year.

weird place


> I suggested to move a device of theirs already on that network, closer to the overhead projector

What you didn't know is that it wouldn't work.

There is also an expectation that you don't just randomly start changing things in shared conference rooms. If there is an issue, you open a GUTS ticket and someone comes and solves the problem. Chances are if you discovered a real issue, there are 90 other rooms with the same issue that also would be updated.


What were you trying to connect? Connecting to an ethernet port wouldn't work as they use ethernet authentication. Connecting to the projector's input ports instead of using Meet to present? Certainly something I wouldn't have protested if somebody external was giving a demo, back when I worked there.


What kind of change?


There are probably a lot of brilliant minds at Google who have their very first taste of Linux there. Designers, management types, even developers, who suddenly need a Linux install to run a specific tool or dev build. Having a consistent distro for them to just turn on and go is very useful.

If something doesn't work, they can communicate back to the (well-bearded) developer who sent them that weird build. The developer surely has some crazy esoteric distro on their bring-your-own device, but they also have desktops running the consistent distro available to them all around the office to reproduce the issue on. It's a bridge between worlds.

I work in academia and we manage our own Ubuntu spin for this reason, despite everyone we ship it to being very conventionally smart.

> if you're smart enough to work at Google, you're smart enough to manage your own OS.

I also like to think of myself as "smart" for self-managing most OS and cloud things. But the role of smarts is probably minor compared to the decade+ of experience I have doing it. When it's something I'm good at, I think it's because of intelligence. When it's something I can't do, I assume the people who can have lots of practice.


> The developer surely has some crazy esoteric distro on their bring-your-own device

Nope. If you want access to more than just the basic corp resources, you’re doing it from a fully managed and approved OS on a company owned machine or VM.


the word is not "smart", the word is "autonomy", and there is none in that network environment (see post above for a real example).


> if you're smart enough to work at Google, you're smart enough to manage your own OS.

No disrespect to you but this is a really naive take on how IT management works.

For one thing, not everyone at Google is an OS expert. There are people working for Google in marketing, sales, support, data science, graphic design, hardware design, and other fields where you can't depend on every person to "do the right thing" when it comes to "managing their own OS."

And even if they can, is that where you want them spending their time? Manually managing their OS? Don't you want them to do the job they were hired to do instead? Their computer is a work tool, not tinkertown.

Basically, the opposite of what you're saying is true: because 100,000 people work at Google, each person represents another way to potentially mess something up and cause problems for the whole company.

Assign each employee with probabilities:

- Probability of falling for a phishing scam

- Probability of installing malware

- Probability of an employee missing an announcement regarding OS/security policy

- Probability of an employee opening a support ticket with IT

These probabilities are very low for computer-savvy, educated, highly qualified people. The thing is, multiply those probabilities by 100,000 and now you've got a big problem.

It's much easier to run with zero IT automation if you've only got 10 people in your company and they all know each other.
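The multiplication argument is worth making concrete; the per-employee probabilities below are invented purely for illustration:

```python
# Tiny per-employee annual probabilities still produce a steady incident
# stream across a 100,000-person fleet (all numbers are made up).
EMPLOYEES = 100_000

annual_risk = {
    "falls for phishing":     0.001,   # 0.1% of people per year
    "installs malware":       0.0005,
    "misses a policy change": 0.02,
    "files an IT ticket":     0.05,
}

for event, p in annual_risk.items():
    print(f"{event}: ~{p * EMPLOYEES:,.0f} per year")
```

Even the rarest event here still happens ~50 times a year at that scale, which is why fleet-wide automation and enforcement pay off.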


Most of the people you listed don't run Linux desktops at Google. Of those that do, most are able to manage their own OS.

As for your probabilities, many of the issues you describe aren't big issues at Google. Credential phishing is mitigated by mandatory FIDO tokens, binary whitelisting heavily reduces malware installs, and security policy isn't needed on these desktops (it's really just needed for mobile devices, incl. laptops).


> Of those that do, most are able to manage their OS.

I just explained why it doesn’t matter that “most” people can manage their OS: Google has thousands of Linux users. If 1% of Google developers make a critical mistake, that could be a hundred workstations or more.

On top of that, I still disagree. Many developers I’ve met are not that good at using the OS.

Remember that writing applications is a completely separate skill. You can learn to code completely separate from learning anything else about the computer. I had a college professor that taught C on a chalkboard, in this millennium.

Google’s own developer interviews are very CS and algorithm heavy and to my knowledge never test your abilities with OS management.

When you say “really just needed for mobile devices including laptops” that’s basically all devices isn’t it? You’d have to be incredibly specialized to need or prefer a desktop workstation.

I haven’t even brought up compliance controls and certifications yet, either. If you’re SOC 2 or HIPAA compliant you have to manage workstations to disable functionality and restrict system preferences. For example, there are typically restrictions on USB port data transmission and idle screen lock delay time.


Most could, sure. I could do so. But why would I want to? It would take research and investment into something I don't give a shit about. I don't particularly care what Linux variant is being used, as long as it works. The less I have to think about it, the more I can spend time on other, more important stuff.


Software distribution becomes distinctly more complicated if you're running multiple distros. As is, I can install a lot of software from an internal apt. Saying "hey now we need to maintain an internal yum and pacman and also deploy internal packages to those" or alternatively come up with some bespoke cross platform binary distribution method seems effortful.
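For the curious, an internal apt repo is just one extra source entry on every managed machine; the names and URLs below are invented for illustration:

```shell
# Hypothetical internal repo: one signed apt source, after which internal
# packages install through the ordinary Debian tooling.
echo 'deb [signed-by=/usr/share/keyrings/corp-archive.gpg] https://apt.corp.example stable main' \
  | sudo tee /etc/apt/sources.list.d/corp.list

sudo apt update
sudo apt install corp-cli corp-editor-plugins   # invented package names
```

Multiply that by yum and pacman and you also multiply the signing, mirroring, and testing infrastructure behind it, which is the real cost.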


I am a google engineer, and a fairly successful one.

I would absolutely despise it if I had to manage my own OS, just like I despise thinking about my keyboard layout or text editor.

Hacker News overrepresents the hacker type which loves the feeling of full control but I'd estimate that at least 33% of engineers, including many very talented ones, don't want that, and only want to focus on the concrete problems they are trying to solve.


Did the even more successful engineers like the experience more or less than you?


It’s for security and for homogeneity; a lot of Google is set up around the principle that “works on my machine” is terrible, and around removing needless cleverness. You have root and can run anything you want*, but you have to go out of your way to configure anything differently than others, and the result is (hopefully) that it just works the same everywhere. The monorepo also only runs natively on these Linux machines, through a magical FUSE interface, so most development is either the web IDE or ssh-ing into the Linux box if you aren’t sitting in front of it. There were big economies of scale running this way, and this setup definitely felt pretty great and efficient, I gotta say.

* By default, a tool called “Santa” keeps a naughty and nice list of runnable programs but all it took to get a program added to the nice list is any other googler vouching for it in an automated web tool.


Santa is open-source: https://github.com/google/santa


I work at Google, and I’m also someone who has used and messed around with Gentoo for like 12 years. When it comes to my day job, I’d rather not spend time messing around with my own OS (my workflow aside) and would rather leave it to other people. Multiply that by 100k or so engineers and the productivity boost is insane.

We’re root on both desktops and laptops though; I’d just rather use my time for work, and I’m sure my employer would rather I do that too.


> I rather not spend time messing around with my own OS (my workflow aside) and leave it to other people.

I don't really understand what "messing around" means in your context. Exotic things like Slackware or Alpine aside, all major distros now offer unattended updates and an easy upgrade path between major releases.

Your vscode, neovim, JetBrains whatever, or emacs doesn't work differently from one distro to another, and since nowadays much external stuff is distributed as static binaries, containers, Flatpaks, or AppImages, everything is quite straightforward and not distro-specific.


I never worked at Google, but based on my experience elsewhere it is partly security and partly standardization of guaranteed minimum functionality.

For example, security aside, as an engineer working on X that depends on internal tools Y, Z and W, I do not particularly care what variant of Linux is under the hood as long as I can run my favorite WM, editor and user apps. But I do want support on Y, Z and W if they misbehave. If I run a non-standard OS or distro I will likely get a lukewarm support because those teams will put debugging problems in "non-standard configurations" as a low priority.

Personally, I learned to live with most Linux distros. When I did roll out a non-standard setup, I had a standby system in a standard configuration that I could show failures on before asking for help. My 2c.


> Having never worked at a place the size of Google, I'm surprised that their IT team directly manages 100,000 machines. In my (apparently naive) mind, if you're smart enough to work at Google, you're smart enough to manage your own OS.

Smart isn't really the key point here. You can be smart and make choices that solve your immediate needs but may not be good for the company in the long run. The distro provides some standardization and a baseline amid other variance.

> Is it a security thing?

Yes, one of the forms of standardization is audit tooling that keeps track of all kinds of data about the machines. In the time I worked there, I once received outreach because a service I had started on the machine had kept an outgoing connection to a third-party service provider open for an extended period of time. I explained what it was, and I did not get in trouble, but I also shut it down. There are many other forms of audit systems.

> Are Googlers not allowed to directly control their own computers?

It depends on your role, but many roles have root on their machines. You can of course do just about anything from there; however, some things may get undone by periodic defensive scripts (e.g. removal of certain software), or in other cases trigger some kind of outreach as described above. For certain other policies, you may need to file a bug with a business justification to request specific policies - you can also do this for entire teams if it applies to their work.


Google was the only employer I had where I had root on my machine, so I would say that developers have control over their machine.


I had root on my Linux desktop box at my university job, but I was in IT, administering a Linux service. Most of my coworkers were using Macs only.


Was this pre or post 2009?


I have root on my Google machine right this second.


I have root right now on my Linux and MacOS systems.


Post.


We have root on our own (personal) boxes, so we can essentially do what we want (within reason).

There's a tremendous value in having a somewhat homogenous environment. We have our own package repo and .deb packages to access internal resources. Even little stuff like editor plugins customized to our internal stuff we have packages for.

Plus, we hire a great deal of folks that aren't Linux experts. Having a sane set of defaults gets people going faster.

It makes a lot of sense to have our own distro when we're talking this kind of scale.


I've worked at large tech companies where very capable software engineers struggled to install or upgrade their OS, let alone be familiar with the intricacies of Linux. I was surprised at first too, but I do think there's a large class of engineers who think about code but don't think at all about the other stuff, and engineering is broad enough that that's totally fine.

Not to mention all the non-engineers, even in tech roles like PM, design, etc.


The way I see it, the engineers aren't being paid to install and repair hardware and operating system stuff. Even if they are perfectly capable, it makes sense to have IT handle that and let the engineers get on with their engineering.


I think it depends on what the priorities are. I care a lot about my own systems, but when I'm working I really don't want to have to care about anything but the activity I'm getting paid for.


This. There are a lot of I-shaped people out there who don't have the first clue about anything beyond their day-to-day.


Software Engineers are worth more to the company when they're writing code, not when they're debugging issues with the OS. It's much more worthwhile to standardize the machines people have and let people focus on doing their actual work more effortlessly.


Googler opinion, it feels like the right balance between letting me be in control while automating away the boring stuff that I don't want to deal with.


> if you're smart enough to work at Google, you're smart enough to manage your own OS

Regardless of the scale, that is a recipe for your highly capable, highly paid engineers losing hours, days, or worse to annoying configuration issues. Sometimes it's the right thing to do (e.g. very early stage, no IT infra), but if you can have an IT person who is actually good/experienced at this sort of thing sort out the core problems, it's way more efficient.

It's astonishing how much collective time can be wasted on "well, it works on my machine" issues. That makes it worth coming up with some sort of systematic way to (mostly) avoid them.


Why would I want to manage my own Linux distro? I use it to code for work, that's it.

> Is it a security thing? Are Googlers not allowed to directly control their own computers?

This and simplicity I think. There's definitely some controls over what kind of things you install on the MacBooks as well, though it's actually not that restrictive.


> if you're smart enough to work at Google, you're smart enough to manage your own OS.

Oh boy, have I got some stories to tell you!

Although to be fair, the question of who "manages" a workstation is not always as simple as "who has the technical ability to manage it". It's the intersection of who is legally responsible for the device and its data (this is BY FAR the most important part), who has the time to maintain it (that's often NOT the user), and who technically can maintain it to whatever standard it has to be maintained to (this gets complicated when you factor in reporting, patching, etc.).

In my experience, a common approach on non-Windows systems is to give the users privileged access to do what they need to do while also monitoring that closely, managing patching for them, and if you have some legal requirement to do so; running some kind of AV for reporting purposes. This tends to strike a happy medium between giving IT/Security peace of mind to some extent while also not totally ruining the developer experience.


It's also a matter of efficiency. I can spend more time on projects that matter instead of on system administration. The people doing the system administration are more efficient since they don't need to spend extra time figuring out what to do. It's not a matter of being smart enough to do it, but instead it's a matter of time efficiency.


Not everyone is interested in playing with their own customized OS configurations. Perhaps 90% of the employees would be okay if everything "just works". This is especially true due to lots of internal tooling and figuring out the right configurations for most of them would be a very painful time-consuming process for newcomers.


"In my (apparently naive) mind, if you're smart enough to work at Google, you're smart enough to manage your own OS."

This is a bit like saying if you can operate on human brains, you should absolutely be able to take apart your car and put it back together.


I wish people never used analogies. It adds nothing to the conversation, and you're always inclined to pick the most ridiculous one. A neurosurgeon disassembling and reassembling a whole car does indeed sound crazy.


Try doing support when everyone has a different setup

Or your software has all these customization options

Or your product has a bunch of different configurations

What was already hard just became a thousand times more painful


I've worked with Xoogler SWEs who didn't know how to use the command line. They're not hiring for computer literacy.



A recent DebConf talk on supporting the same:

https://debconf22.debconf.org/talks/11-scalable-support-for-...


Does anyone know what they do for malware and virus detection/protection? There are a lot of commercial products, but they are all quite opaque in how they work; vendors don't discuss the CPU/latency impacts of their solutions (or perhaps they don't know what these impacts even are...). It would be great to know what approaches Google takes to protect themselves and their employees.


I'd really love if more companies embraced desktop linux, much less maintaining their own distro configuration.


> Whenever Sieve spots a new version of a Debian package, it starts a new build. These packages are built in package groups since separate packages often must be upgraded together. Once the whole group has been built, Google runs a virtualized test suite to ensure no core components and developer workflows are broken.

That must have required an impressive amount of effort!

> Better still, thanks to the rolling release schedule, Google can patch security holes on the entire fleet quickly without compromising stability. Previously, security engineers had to carefully review each Debian Security Advisory (DSA) to make sure the fix was in.

I can only imagine Google upstreams an incredible number of patches from this process.
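The "package group" idea, clustering packages that must be upgraded together, can be sketched as connected components over the dependency graph. This is a toy illustration, not how Sieve actually works; the package names and dependency map are invented:

```python
from collections import defaultdict

def package_groups(depends: dict[str, set[str]]) -> list[set[str]]:
    """Union-find over the undirected dependency graph: packages linked
    by a dependency edge land in the same build group."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for pkg, deps in depends.items():
        find(pkg)              # register even dependency-free packages
        for dep in deps:
            union(pkg, dep)

    groups: dict[str, set[str]] = defaultdict(set)
    for pkg in parent:
        groups[find(pkg)].add(pkg)
    return list(groups.values())

deps = {
    "libfoo1":    set(),
    "libfoo-dev": {"libfoo1"},   # must move in lockstep with the library
    "foo-utils":  {"libfoo1"},
    "bar":        set(),         # unrelated: builds in its own group
}
print(package_groups(deps))  # one three-package group plus {"bar"}
```

Once a whole group builds, it can be promoted together, roughly the gate the article describes before the virtualized test suite runs.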


Don't a lot of Googlers[1] use Macs? And is it still not encouraged to run Windows there, or was that just something created by the media?

[1]: Are they still called Googlers?


As of a few years ago, most engineers in my group had Apple laptops but Linux desktops, and the laptops were mostly used as "terminals" via remote desktop and Chrome (you were not supposed to store source code on a laptop anyway, in case it was stolen, with some exceptions). I often used a Linux laptop (or Chromebook) and mostly just ssh'd into a "screen" session running lots of emacs, which worked OK on the Google bus.
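That laptop-as-terminal workflow is easy to replicate anywhere (the hostname and session name below are invented):

```shell
# Reattach to a persistent screen session on the workstation,
# creating it if needed; -t allocates a tty for the remote command.
ssh -t ws.corp.example screen -dR dev
```

Everything long-running (emacs, builds) lives inside the session, so a dropped wifi connection on the bus just means reattaching.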

The only folks with Apple desktops were people doing iOS work and I think they may have had linux desktops as well.

It was fairly easy to get another linux "desktop" which was really just a virtualized linux machine in the cloud.

[1] Yes, still called googlers.


Lots of people have Mac laptops but you can't use them for anything but remote into a gLinux desktop or virtual machine to do development. Or you can use a web-based development environment. Basically, at Google, a laptop is just a web browser.


So a MacBook Air is just a really expensive web browser with a weird keyboard layout? It's probably OK for US keyboards, but I hate the way they hide keyboard mappings essential for development (square and curly brackets, pipe, backslash, tilde, etc.) on German keyboards, with no plausible explanation other than that some designer didn't want the keyboard to look cluttered.


[Bias disclaimer: I have previously worked on Windows and ChromeOS]

When your choices are:

1) A 16" MBP with an awful keyboard layout (death to the Command key, and put Ctrl in the corner) but a great screen and battery life

2) A Chromebook that's straightforward, reliable, but for some reason not available in a size >13" with a resolution >1080p (so a dealbreaker for development for many people)

3) A Linux laptop with all of the fun bluetooth, driver, and battery quirks that implies

Many people will choose #1. OS X is by far my least-favorite desktop OS for a wide variety of UX reasons, but when my workflow for the most part requires browser windows, terminals, and enough pixels to use many of them at once the 16" MBP is a pretty good choice (when I'm not paying, at least).


> MBP with an awful keyboard layout

OSX lets me remap caps lock to control without digging in the registry or playing "find where they moved Xorg.conf." It's just an option in the preferences.

Emacs movement shortcuts like C-a and C-e (move-beginning-of-line and move-end-of-line) work everywhere out of the box. ^C can be kill and ^p can be previous-line because they don't collide with copy and print, so terminals Just Work instead of each having their own convention to memorize. Also, for some reason the OSX terminal is the only one that reliably gets SIGWINCH and Unicode correct, and has for a decade. It's weird that Linux terminals are so bad at this, but whatever.

This is what good design looks like.


>OSX lets me remap caps lock to control without digging in the registry or playing "find where they moved Xorg.conf." It's just an option in the preferences.

For native apps. Sort of.

* Though you can assign Ctrl to the Fn key, you cannot assign the Fn action to any key so you can't swap them without giving up the ability to use the Fn key at all.

* This doesn't work for web apps. Google Docs etc. will still use Command key shortcuts and will thus be different locally versus remoted into a machine.

* Some key mappings break in text boxes. I've remapped Find to Control+F, which works most of the time, but not if my cursor is in a text box, because then it moves the cursor forward. Control+B for bold and Control+A for select-all likewise break. To make it worse, this only happens for certain text boxes, and heck if I can figure out the pattern.

In all, it's an incredibly frustrating experience.


> Some key mappings break in text boxes. I've remapped Find to Control+F, which works most of the time, but not if my cursor is in a text box, because then it moves the cursor forward. Control+B for bold and Control+A for select all likewise break. To make it worse, this only happens in certain text boxes, and heck if I can figure out the pattern.

Don't remap Cmd+<key> shortcuts to Ctrl+<key>. The great thing about the Cmd key is that it doesn't conflict with anything else. Ctrl-f is forward, ctrl-b is back, ctrl-a is beginning of line, etc.; these have been around in emacs (readline?) for a long time, and it's great that native OS textboxes implement them. This is why it's inconsistent: bad apps use their own snowflake controls and fail to implement them.

Cmd-<key> shortcuts don't conflict, which is why they're used. Ctrl-c meaning both SIGINT and "copy to clipboard" is a disaster in terminals; the cmd-c life is much better. By trying to remap Cmd to Ctrl you're working against the OS, and the OS will fight back because it's not designed to do that.


The entire industry outside of Apple has standardized on Ctrl-based keyboard shortcuts. When using my Mac, 80% of my work is remoted into a Linux machine where I need Ctrl+ hotkeys. I have more than 30 years of experience using machines with Ctrl+ hotkeys. All of my machines outside of the work Mac use those hotkeys. I have no interest in switching now.


> OSX lets me remap caps lock to control without digging in the registry or playing "find where they moved Xorg.conf." It's just an option in the preferences.

In Linux you just have to add 'include "capslock(escape)"' to an xkb symbols file and it works on any Linux distro. I set it up in my dotfile installer in 2013 and have never had to fiddle with the interface since.

I never have to wonder "where did they move the option in the interface this time?". It's just a script I run when I set up a new distro install.
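For anyone who wants to try this without a dotfile installer, here's a minimal sketch (assumes an X11 session; the option names come from xkeyboard-config, and `caps:ctrl_modifier` is the variant to use if you want caps-as-control like the macOS preference discussed above):

```shell
# One-off for the current X session: turn Caps Lock into Escape.
setxkbmap -option caps:escape

# Or caps-as-control instead:
#   setxkbmap -option caps:ctrl_modifier

# Persistent on Debian-family distros: set XKBOPTIONS in /etc/default/keyboard,
#   XKBOPTIONS="caps:escape"
# then re-run: sudo dpkg-reconfigure keyboard-configuration
```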


I think it's mostly MacBook Pros (not Airs).

Most of Google's services have a corresponding iOS app, which requires a Mac to build / run a simulator. So if you're not actually picky about your computer, it's more practical to have a Mac, as it can handle debugging for Web/Android/iOS.

If you're at a desk you can have a dock with a different keyboard.


Can you post a pic of what you are talking about? My mappings for everything you said are one keyboard click or a shift click away.


You can build stuff on your Mac. The code is coming from a FUSE mount, but the builds can run locally.


> is it still not encouraged to run Windows there

You can get a Windows machine, but they are not trusted devices and you can't access a lot of stuff. (At least that was the case a few years ago when I left)


> is it still not encouraged to run Windows there

My experience was that the only encouragement one way or another was toward trying ChromeOS first, and switching away if you wanted to later.


Yes, yes Windows discouraged, yes.


Not related to Google at all, but I've been in Windows places with Sun, OS/2 and NeXT providing the data. Not a single install was newer than the 90s.

Consider this happened after 2011.

Having a few different Linux distros at Google, okay then.


I would love to see a similar article diving into Amazon's use of RHEL5, and the eventual slow move to "Amazon Linux".

At one point, I had a RHEL5 desktop in Seattle, which I could (and had to) develop on remotely from the Toronto office[0]. The software libraries we used depended on something that RHEL5 had and newer versions of Red Hat didn't, as I understood it. Eventually a new and fantastic manager joined the Toronto office and convinced senior management to at least ship the desktops to Toronto.

[0] At the time, the 'office' was the warehouse in Mississauga.


I was always surprised how fast they got NoMachine to work on the VPN.

In today's world with tools like Nix and more hermetic build systems that include libc, the need to build on RHEL5 wouldn't exist.

Of course it's always good practice to test on the environment you're going to eventually deploy.


I find it very interesting that they're using Debian Testing as essentially a rolling-release distro. I did this in the past for a few years, and stopped (switched to Fedora), but I am pleased to see there is at least one other group of people who think that's a reasonable setup.


Debian testing/unstable deserves more user and developer influx from Ubuntu castaways.

Google's change to debian-testing reflects my own rite of distribution passage. I used manual /etc/apt/sources.list upgrades within Ubuntu so often that, after Unity was abandoned, there was no reason to stay on Canonical's flavour.

After a year on testing I went to unstable, and I subscribe to bugs affecting me from time to time. Usually there's either a quick workaround or a flatpak as a last resort. Not something I'd install on the family PC, but testing or unstable is a good choice for developers.

Contributing to Debian and adjacent projects like GNOME and freedesktop.org got easier (or standardized?) since they each run a GitLab instance.


My personal dev machines have been on unstable for a couple of years now. I tried testing first, and honestly have been broken there more than I've ever been broken on unstable (which is 0 times).


I’ve been doing this for years. I generally suggest holding off on upgrades in the weeks immediately following a Debian stable release: when everything that has been held back from testing suddenly arrives at once, bugs sometimes get missed. But otherwise, it’s generally a really solid experience.

Of course, it’s not officially supported by anyone, so when the (very) occasional bug does hit, you have to sort out rolling back to something that works yourself. Sounds like Google automated this side of things?
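Absent official support, a manual rollback can be sketched with snapshot.debian.org (the package name, version, and snapshot date below are placeholders, not from the article):

```shell
# Add a point-in-time snapshot of the Debian archive as an extra apt source.
echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20220101T000000Z/ testing main' |
  sudo tee /etc/apt/sources.list.d/snapshot.list
sudo apt update

# Downgrade the broken package to its known-good version and pin it there.
sudo apt install somepackage=1.2.3-1
sudo apt-mark hold somepackage
```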


Google could also simply buy Canonical, and then decide what to do with Ubuntu. And doing some damage to the Microsoft monopoly, as a side bonus.


Is there any Google branding in the OS? I’d love to see some screenshots of it in action.


It’s a normal GNOME desktop with a stylized penguin wallpaper and a link to IT.


Weird. Why not just use Arch Linux?

They could have simply done what Manjaro did: start as Arch, add some packages and over time fork completely and manage your own testing and stable (by getting inputs from mainstream Arch)


Their servers use a modified version of Debian. They already have knowledge and tooling around dpkg, debs, etc.


Because Debian-testing is wonderful?


Where can I download gLinux? I thought the article said it's available for all...


Regular users really wouldn't want glinux. It's really just debian with a lot of things added for Google's corporate fleet management.


It says you can't get it in the second paragraph.


Can you create a Debian derivative and not release the source code?


Of course; even GPL only requires that you share source code with people that you distribute binaries to.

EDIT: The only FOSS license I know about that doesn't do this is AGPL, which Google is known to be extremely averse to.


There's a few other supra-GPL copylefts that impinge on the freedom to privately fork:

* The OpenWatcom license requires source code publication on use. This was approved by the OSI but not FSF, which means its one of the few times a license can be described as Open Source but not Free Software.

* SSPL extends AGPL's copyleft clause to include support utilities, which didn't pass muster at either OSI or FSF (which is inconsistent with the OSI's prior opinion on OpenWatcom).

IMHO, Google's not wrong to reject AGPL. The license makes it very difficult to use modern fork-and-pull-request workflows unless you write all your code to be a quine. And the lack of people using it makes it mostly useful as an exception sales vector rather than a legitimate renegotiation of the copyright bargain like GPL is. But Google's objection to it is rather weird, based on some hypothetical scenario of "GPL virality" making them publish internal tools. This is a misreading of underlying copyright law; I have yet to see a court demand specific performance of any source code publication requirement[0]. They will give you money damages and possibly an injunction prohibiting use of the specific application in question - not your entire internal stack.

[0] Practical example: that one time Atari hired a subcontractor to republish old Humongous Entertainment games on Wii and wound up infringing the GPL on SCUMMVM. Atari actually considered GPL compliance, but then realized that this would violate their obligation from Nintendo not to disclose game source at all.

http://sev-notes.blogspot.com/2009/06/gpl-scummvm-and-violat...


> The license makes it very difficult to use modern fork-and-pull-request workflows unless you write all your code to be a quine.

What do you mean? AGPL means that you have to open source your server stack, which of course proprietary server-software-based companies don't want to do.


The thing with the GPL is that you must provide source to users of the software, not to everyone publicly. So if compiled binaries are only available internally, the source code can be kept internal too.


Definitely. The GPL triggers on distribution, so as long as you don't distribute you're in the clear. Company-internal use doesn't usually count as distribution.


It's not distributed outside Google so yes, you can.

Read the GPL...


What is “the source code” of Debian?


The article says it's just Debian testing with some (presumably proprietary) Google provisioning and dev tools bolted on.


Google can keep their in-house distro. Just release their damn Linux Google Drive client, and I'll be happy.


Perhaps https://rclone.org/ would be of use to you. It handles other cloud storage providers as well, comes with convenient encryption functionality, and its synchronisation is far more reliable and controllable (at least this is my experience using it with OneDrive and S3).
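A quick sketch of what that looks like in practice (assumes you've already created a remote named "gdrive" via the interactive `rclone config`):

```shell
# One-way sync of a local directory up to Google Drive.
rclone sync ~/Documents gdrive:Documents --progress

# Or expose the whole Drive as a FUSE filesystem (requires fuse/fusermount).
rclone mount gdrive: ~/GoogleDrive --daemon
```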


For the longest time, Drive never actually enforced user quotas. This was recently "fixed" and they are getting things under control.

Quota enforcement was a blocker for official Drive Linux support because it would have made the abuse issues even worse. (Not saying it's going to happen now, but one blocker has been cleared.)


Could you explain what you mean by this? Why would an official Linux client lead to more abuse compared to the current situation of several unofficial clients in common use?


A working google drive client for the mac would be nice too. It sucks less than it did a few months ago, but is still so bad it makes Dropbox look good!


I've been very happy with insync for years, it works very well.


Google released BSD-licensed Drive integration a decade ago. The only reason there isn't a "Linux Drive client" in the sense that you implied is the complete lack of initiative among open source developers.

If you want a mature, maintained Drive integration on Linux you can have it right now with ChromeOS.

https://source.chromium.org/chromium/chromium/src/+/main:chr...


If open source developers were going to do all the work, why would they do it for Google's walled garden, rather than Syncthing?

And that's why we now have N open source third party syncthing interfaces and no clone of the GDrive client.


Exactly. Which is fine.

* Google doesn't see enough gain in supporting a Drive client for Linux

* Linux users don't see gain in feeding the beast

... so nobody dedicating their finite lives to solving this problem is win-win.


It is not reasonable to demand that open-source developers build a product for one of the world's largest companies because it is "too hard" for that company to ship a functioning product.


If the competitor offers https://www.dropbox.com/install-linux, developed in the open with a GPL license, is it reasonable to tell other people to take on the cost of building and maintaining a client for your paid service?


> is it reasonable to tell other people to take on the cost of building and maintaining a client for your paid service?

Google barely squeaked by with $76 billion in revenue last year. It only has 156,500 employees.

You can't possibly expect something like that from a company this resource-constrained.


They're actually not even as "resource constrained" as this comment implies, as the numbers here are off.

Per https://abc.xyz/investor/, Alphabet made $75 billion in Revenue in Q4. They made $76 billion in net income for all of 2021.


$480,000 per employee is quite decent, but hardly radical these days.


Which aspect of chromiumos drive integration and sync engine is not "developed in the open"? The dropbox source is distributed as a tarball and if you want to contribute to it "contact us". That doesn't meet my definition of "in the open".


I didn't claim that ChromiumOS's code wasn't developed in the open. The fact remains that Google neither supports nor provides a Google Drive client for Debian and its derivatives or Redhat and its derivatives, while its competitor does. It is reasonable for people to complain about this.


Do they use M1 MacBooks?


Yes. I’m a Googler who uses a 15” M1 MacBook Pro


Do you run gLinux on it?


I do not, it’s not an option. If it’s a Mac it has to run macOS


But I thought Google was all in om ChromeOS and chromebooks?


Those target a demographic consisting of schools and non-developers. Internal Software Engineers have different needs.


Why can't we get it? Is this not a violation of the GPL?


The GPL has a clause that mentions internal use.


What a laugh.

I left in 2017, and I was one of the last SWEs to use a Linux laptop. Everyone had moved to a Mac, after the Christmas Windows security disaster (which I won't detail here). It was pretty much forbidden for a SWE to use Windows, and for a while Linux had a lot of users but over time it became all Mac. Finally in 2016 I caved in and drank the Kool-Aid too.

I'd go to the Tech Stop for something or other, and the staff would never object, since, after all, it was the official Google Linux. But you could tell that they didn't see this very often.

Side note: the Tech Stop staff were some of the best, nicest, and most competent people I ever encountered in that role.

Furthermore, whatever I needed seemed to almost always require them to keep it overnight and wipe the machine completely.


> I left in 2017, and I was one of the last SWEs to use a Linux laptop.

Categorically not true, not then, not now.


Yeah, agreed. This isn't even close to true, not sure where he got that from.


(Opinions are my own)

I am a gLinux user. Before google I just installed linux on my laptop at previous companies.


[flagged]


I think it's a misunderstanding of the phrase "one of the last" which can have a connotation that there are no longer any remaining


Obviously YMMV but my team is at least 75% Linux laptops, so my experience doesn't match yours at all.


Seeing as MacOS build support for AOSP was dropped entirely over a year ago ( https://source.android.com/setup/build/initializing ) I'm going to guess your corner of the world is very not representative.


"over a year ago" was 2021.


Perhaps read the article before commenting, especially if you aim to criticize what is written? The word "laptop" isn't even mentioned in the article; it talks about desktops only and how Google runs its own Linux distro on those...


Perhaps you should consider before commenting that almost no one has a "desktop" machine anymore (meaning, a machine that's not portable).


I have.


This is wrong.


Does a Chromebook classify as Linux? I had a Mac laptop, but granted, I could not do anything in the shell except "remote" into an actual (or Ganeti-provided) machine; it was forbidden to check out code locally on your laptop (makes sense).

But I clearly remember my team at LAX (part of Ads-Quality) had a mix of Mac, Linux, and I think two or three people with Windows laptops... But all were dumb terminals. Also, that was 2014-2017.


do you know what the situation is right now?


I strongly requested a Linux laptop (ThinkPad X1 Carbon) but got a Chromebook. It would have been easier to get a Mac, and all of my coworkers got new Macs when they joined. I'm going to push hard for a Linux laptop again when my hardware refresh date comes.

I think I fell into some bureaucratic limbo because I joined near the beginning of covid lockdowns. I don't think my recruiter fully understood my request for a Linux laptop either, and there was a general shortage of them at the same time.


But you got a Linux machine! What you wanted, however, is a GNU/Linux machine.


Google ran out of Linux laptops? Even at my humble place of employment we can just give any old computer to IT and they will install Linux on it without any fuss. I would imagine the IT wizards at Google also know how to do it.


Google is particular about the hardware for security reasons. I don't know enough to say what those security reasons are, but there are only a few approved models from Lenovo. And you can't bring your own, even if it's an identical model, so I assume Google has some special hardware auditing and purchase program.


yep, i held off on my laptop refresh until i was assured a thinkpad, but several of my colleagues made the switch to chromebook at the time.


There are still SWEs that use the Linux laptops. There are dozens of us!


I love it.

Like I said: it was not unheard of to use one -- just a bit unusual.


People still use glinux but as a remote host. Everything is done on the browser so laptop choice is less important.


> Everything is done on the browser

Not quite. There is security hardware there, as I recall.



