Hacker News | zootm's comments

Using logging facades means that libraries don't need to update -- which is great -- but libraries were never directly vulnerable anyway. log4j is, to my knowledge, still by far the most common actual implementation of logging in the Java ecosystem. The assertion that it's not popular for logging only holds if you assume that logging facades are logging implementations, which they are not.

My completely-unverified guess would be that there are more people immune to this issue because they never migrated past log4j 1.x than there are who are immune because they picked up Logback or something similar.
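The facade/implementation split can be sketched roughly like this (hypothetical names, not the actual SLF4J or log4j API): libraries compile only against the facade interface, and which backend actually handles log events is a deployment-time decision, which is why a facade by itself was never the vulnerable component.

```java
// Minimal sketch of the facade idea (hypothetical names, not SLF4J itself):
// callers depend only on the Logger interface; the backend binding is
// chosen by the factory, so swapping implementations never touches callers.
interface Logger {
    void info(String message);
}

final class ConsoleBackend implements Logger {
    public void info(String message) {
        System.out.println("INFO " + message);
    }
}

final class LoggerFactory {
    // A real facade discovers the binding on the classpath at runtime;
    // here it is hardcoded purely for illustration.
    static Logger getLogger(String name) {
        return new ConsoleBackend();
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        Logger log = LoggerFactory.getLogger("demo");
        log.info("hello"); // prints "INFO hello"
    }
}
```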

> You shouldn't need to pour over code to figure it out.

This is true, but as sibling comments have pointed out, a lot of other software you might be deploying, without having written it or configured its logging, is written in Java.


Most people using e.g. Spring Boot or Quarkus would end up using the defaults that come with those frameworks. For Spring Boot, the default is actually Logback. However, you can switch it to Log4j 2: https://spring.io/blog/2021/12/10/log4j2-vulnerability-and-s...
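For reference, the switch usually amounts to a dependency swap; a rough build.gradle sketch using the commonly documented Spring Boot starter coordinates (versions managed by the Boot plugin):

```groovy
// Replace Spring Boot's default Logback binding with Log4j 2.
configurations.all {
    exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging'
}
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-log4j2'
}
```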

Log4j 2 never quite got the same status that v1 had. V1 should be considered a bit obsolete at this point. It still works, of course, but it has some performance issues that both Log4j 2 and Logback try to address.

The issue with high profile vulnerabilities like this is that there are a lot of projects where dependencies are rarely updated.

I update aggressively on my own projects to stay on top of changes and keep the effort related to mitigating compatibility issues at a minimum. A nice side effect is that you get all the latest security, performance, and other fixes. In my experience, updates get harder the further you fall behind. So, the longer you wait, the more likely you will have a lot of fallout from updates and the more likely it is that you will be exposed to pretty serious bugs that have since been addressed upstream.

If you are like me, I can recommend the excellent refreshVersions plugin for Gradle. It makes staying on top of dependency updates a breeze. I run it every few weeks and spend a few minutes updating miscellaneous libraries and verifying everything still works: run the command, update to the suggested versions, create a pull request, and merge when it works.

Occasionally there are issues with specific libraries, but 95% of the updates are completely painless and the remainder are usually pretty easy to deal with. And if there are show-stopper issues, I want to know about them and document why we can't update.

I would recommend doing the same for packaged software. I work with a lot of customers running ancient versions of whatever for no other reason than that they seem a combination of fearful, ignorant, and indifferent about what might break, because they can't be bothered to even try. Mostly, updating to more recent versions isn't that big of a deal, and it tends to address a multitude of performance, security, and other issues.


> For spring boot, the default is actually logback.

I did not know this, thanks for letting me know!

> I update aggressively on my own projects to stay on top of changes and keep the effort related to mitigating compatibility issues at a minimum. A nice side effect is that you get all the latest security, performance, and other fixes. In my experience, updates get harder the further you fall behind.

This is absolutely a best practice, though I think people struggle with it for all sorts of reasons. In general one of the downsides of maintaining a diverse codebase is that this constant update cycle becomes more and more difficult, and it's one of the things that I find drives towards more consistent tooling within a team.

> I work with a lot of customers running ancient versions of whatever for no other reason than that they seem a combination of fearful, ignorant, and indifferent about what will break because they can't be bothered to even try.

While I agree this is something people need to get over, we have to take some blame for this as an industry. A lot of people have bad experiences with upstream Shiny Object Syndrome.


"A bit obsolete" is underselling a 9-year-old release that's been EOL for 6...


That's my observation. My favourites are the projects where the last commit was 5 years ago with a copy-pasted log4j stanza that must have been 4 years old at that point, which now can't practically be upgraded at all because of the bitrot and loss of organisational knowledge. I've seen one that needed somewhat special measures just to regain access to the source code...


Looks like the paper referenced is from May, if you have a subscription to Nature: https://www.nature.com/articles/s41586-020-2278-9


If you do not have a subscription to Nature: https://sci-hub.se/https://www.nature.com/articles/s41586-02...


Nice, it appears that my parents' ISP is doing DPI and a MITM attack to block Sci-Hub (and I do have a subscription to Nature).


Use the Telegram bot @scihubot; that will probably evade the DPI trap. Just send the DOI or the full article name to the bot and you'll either get a PDF back or a message saying the article isn't yet in the database.


My ISP's DNS resolver just returns NXDOMAIN for that.

     $ dig @8.8.8.8 sci-hub.se. +short
     186.2.163.219


What happens if you try the .st TLD?

.se is Sweden, I was surprised to see that they haven't taken down sci-hub.


sci-hub.st works here (Telekom in Germany).


Is it possible they are simply blocking DNS or the IP(s)?

MITM should be impossible on HTTPS. If they somehow obtained legitimate certs for sci-hub, you should really notify someone at Mozilla and/or Google.


They appear to do deep packet inspection looking at the SNI and insert an invalid certificate from Allot, redirecting the connection to a very short "Vodafone can't show you this" website: https://pastebin.com/RHwPWBug

It appears to work with ESNI activated in Firefox. Interesting to see these techniques in use...


> Por causas ajenas a Vodafone, esta web no esta disponible

"Due to causes beyond Vodafone's control, this website is not available."

How so? This is plain false. I bet they do not even inform their customers that the connectivity service they sell is endangered by Deep Packet Inspection.


Virgin Media give you a message for blocked porn, but not the VPNs they stop you accessing.

It's quite insidious - the VPN blocks are textbook government overreach.

If we ever have a written constitution in the UK, we need rules stopping the government fucking about with this stuff (as "protecting the children" seems to be the entropic end-state of all policy).


That just means the decision to block the site was made by someone outside of Vodafone. Assuming they face meaningful penalties for noncompliance, I wouldn't consider it false. But it would be nice if these kinds of messages identified who is to blame.


Sometimes people use the phrase MITM loosely to mean sniffing the SNI and then blocking the connection in some fashion.


Apologies if the usage is incorrect. They appear to indeed sniff the SNI and then inject a one-line website with a self-signed certificate: https://pastebin.com/RHwPWBug


It's still possible that instead they are just hijacking the IP space. You could probably distinguish the two by running traceroute, or with something like:

     curl https://186.2.163.219 --header "host: sci-hub.se" -k

This should not send an SNI that can be sniffed, but still sends the correct host header inside the encrypted HTTP stream, so the connection would work minus a cert failure, and the ISP's DPI wouldn't be able to detect it. If that curl fails, they are probably doing IP address hijacking; if the domain-fronted request works, they are probably sniffing the SNI.


Good idea! I tried the domain-fronting curl request and it works, so they seem to be indeed sniffing the SNI...


Thanks both for taking the time to acknowledge and explain the subtlety. Helps somebody like me who’s casually following along to better understand both scenarios.


That isn't true except in the case of cert pinning. This sort of MITM (or redirection at the very least) regularly happens via employers, ISPs, and many others.


ISPs can't deeply inspect TLS traffic. Employers of course can, because they can insert their own trusted certificates on their own boxes.

Now, if by 'inspect' or MITM in this case you are talking about inspecting the TLS headers, that's possible, and based on the other comments it is exactly what this ISP was doing - checking the SNI of the TLS requests and blocking based on those.

But an ISP that hasn't somehow broken TLS isn't in a position to check the encrypted contents of your packets (e.g. HTTP headers, bodies etc). Your employer can very well have installed a TLS MITM device that is trusted by your company-issued device to actually inspect the contents of your encrypted packets (by acting as a proxy - you actually have a TLS tunnel with the MITM device, and it has a separate tunnel with the TLS server).

Certificate pinning can block even these types of employer MITM inspection, and it can also protect against rogue CAs issuing illicit certs. But if your ISP is in possession of PKI certs for google.com and outlook.com, then the CA that issued them will soon be removed from the trusted list.


You didn't read my comment that is right next to yours. Or the one that you replied to, where I mentioned cert pinning. You are repeating my comments back to me.

The device does not always need to have a special trust relationship with the client browser, since a trust relationship can already exist with FTU.


I'm amazed that my comment was downvoted on a tech board, where presumably people are informed about these things.

Of course TLS connections are regularly inspected. A simple Google search will show you edge boxes you can purchase to perform this on your network. IT people know all about this. No, this does not mean that the cryptography used by TLS has been broken.


Which ISP do you use?


It's Vodafone Spain.


And the accompanying overview/summary: https://www.nature.com/articles/d41586-020-01455-w


“Third party”. In this case it just means “not Amazon”.


I've worked at Amazon since 2007 and I can confirm it has gotten a lot better over that time, especially more recently.

I have, in the deep deep past, experienced serious strife with the older policies, and obviously nothing is perfect, but it's gotten to the point where I don't really think about the policy.


I think it's worth noting that the paper is from 1992, where much of the work you reference (such as widespread use of overloading in Haskell) won't have existed yet.

I don't think it's unreasonable to point out that some of the assertions didn't stand the test of time, but I've (maybe unfairly?) read your comment as critical of the paper despite that context.


Work on the specification stopped in 2010 but it's not been removed from the browsers that supported it.


I think this is mostly answered in the README. This is probably the most relevant section:

> The structure validation is simplistic by necessity, as it defers to the type system: a few elements will have one or more required children, and any element which accepts children will have a restriction on the type of the children, usually a broad group as defined by the HTML spec. Many elements have restrictions on children of children, or require a particular ordering of optional elements, which isn't currently validated.

It's not complete or thorough at present.

Regarding your link: I'm impressed someone took the time to write a DTD for HTML5!


Thanks! The DTD is described in the talk/paper I gave at XML Prague 2017 linked from [1]. Meanwhile, I've got a revised version for HTML 5.2 with basic coverage of the validator.nu test suite, though not published on the site yet.

[1]: http://sgmljs.net/blog/blog1701.html


Since the macro is procedural, I suppose it's feasible to pass a DTD, XML Schema, or RELAX NG to typed-html so it would work with any XML, ideally with checking of ID/IDREF or key/keyref.


The question, then, is whether Rust's macros (or whatever typed-html's technique is called) can encode static/compile-time checking of regular content models, with SGML-like content exceptions and, on top of that, SGML/HTML-style omitted-tag inference.


Isn't the former what Mozilla Persona was trying to provide? A shame that was canned.


We built a successor project under https://portier.github.io/. Alive and kicking :)


If you go to the Privacy section of the iOS app it's mentioned at the bottom with a link to the webpage ("Further customise your privacy..."). Not ideal you can't do it from within the app though.


Also it’s like 6pt text at the bottom of a long form. I really wish it was prominent.


No user-generated-data (UGD) company wants to make it easy to reduce the amount of data it offers.


They do have the counter pressure of not losing users or inspiring government action. I’m not sure how this plays, for example, with the EU data protection laws but I’d bet there are some smart lawyers looking at this.


This isn't the case, he works for Amazon. Unless he also works for Intel but that seems unlikely.

Edit: Just to make this clearer you can see his email in the Signed-off-by of the patch under discussion: https://lkml.org/lkml/2018/1/20/163


Thanks for pointing that out - I was under the impression he still worked for Intel; before commenting I checked, and all the top Google results suggested as much. I've corrected my comment.

