Yes, still having hiccups for me. I started having issues about 24 hours ago.
What is the reason for this spate of outages? GitHub is a mature, stable product. What core features could its engineers possibly need to be working on that whatever they did broke core backend functionality so badly?
Once Tableau was acquired by Salesforce, shit that had been stable as can be for YEARS started to break as Tableau (I assume) began to interface with SF systems or requirements. Last week, CLI functionality was down for 3-4 days, depending on the cluster you're on. Nothing like crushing production reporting for a week at the end of the month because Salesforce wants X, Y, and Z added to a mature product that had seen no issues for the 3 years prior.
My guess is that it's the same sort of thing happening here w/Microsoft.
Sign up with Objective Uptime Inc. and tell them which vendors you have SLAs with. OU will monitor them on your behalf, compare the actual global availability to what's displayed on the vendor's status page, and if the outage exceeds agreed limits, OU can even automatically file suits on your behalf for breach of contract and misrepresentation (assuming the vendor's status page was showing all peachy).
"When a metric becomes a target, it ceases to be a good metric"
I regret that uptime became a clause in SLAs, or a reputation/marketing thing. I don't care about how many 9s are after your decimal, I just want to know if your service is down or if something is wrong on my end.
One of the many status page/"is it down" services could probably get some good PR out of publishing a GitHub status page that's based upon live results from several geographic locations.
They have reported incidents for the last two days but seem to scope each one to a short window, which is clearly not the reality for many folks out there.
Again? Last time this happened was 12 - 13 hours ago [0]
If you are self-hosting like GNOME, WireGuard, Redox OS, Wine, etc., it seems it's business as usual. But for those who went all in on GitHub, it's pretty much a recurring disaster.
At this point, those who self-host might as well claim better uptime than GitHub, since GitHub is still unreliable even after two years of warnings against the 'centralize everything' [1] nonsense.
Every time there's an outage with $SCMProvider, there's always a comment like this.
Git is not GitHub. GitHub is not git.
You can absolutely do git in a distributed way.
That doesn't help when your business workflow relies on GitHub for access control, artifact hosting, issue management, PR coordination & approval, and all the thousand other things that people use GitHub for.
I mean... it still is? I can work on my codebase and commit on my local machine. Push remote to Gitlab if Github is down. When Github is back up I can push to Github. What's the problem?
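The workflow above amounts to keeping a second remote configured and pushing to whichever host is up. A minimal sketch (the repo, remote names, and URLs here are illustrative, not real projects):

```shell
# Set up a throwaway repo to demonstrate; in practice you'd run the
# `git remote add` lines once in your existing repo.
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial"   # committing locally never needs a remote

# Two remotes: the usual one, plus a fallback mirror (hypothetical URLs).
git remote add origin https://github.com/example/demo.git
git remote add backup https://gitlab.com/example/demo.git
git remote -v

# If GitHub is down:   git push backup main
# When it's back up:   git push origin main
```

Nothing about the local history changes; the same commits can be pushed to either remote whenever it is reachable.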
To answer most comments here, of course git is still decentralized, it's just that almost no one uses it like that. We chose the easier route, as usual. Just like most people don't host their texts and photos on their own site.
Note that your repo stays usable even while GitHub is down, and your team can sync their work afterwards. Or if you prefer, you can switch to a different remote without much trouble.
On a different note, I do host my own Git server, but between maintenance downtime, power outages, and ISP issues, my availability isn't nearly as good as GitHub's.