Too bad the spec is stupid and requires password managers to be identifiable so servers can deny the "insecure ones".
It's already a pain to use KeePassXC for OTP since they all want you to use their apps, but it's still doable (the worst offender being Steam, where you have to hack your own app to extract the OTP secret). With passkeys you won't have a choice but to use The Google Authenticator™ etc., because eventually some exec will figure out they can block every provider except their own to boost app-download KPIs.
I really like the concept of passkeys; the simple fact of using asymmetric keys is so much better than handing over the secret to prove you have it. But the spec is hostile and designed for vendor lock-in.
No, the spec is for companies that need to enforce higher levels of security so that you can e.g. only enable Yubikeys in your env.
I hate big tech just like anybody else but this is just spreading FUD right now.
Also, execs can already force you to use only their apps - banking apps for approving transactions are already a thing, at least in Europe, no FIDO passkey needed.
But didn't the author hint that this could get blocked?
My general read on passkeys and their implementers is that exportability is seen as a risky feature, and there's a push to make it as opaque as possible, likely through attestation or similar mechanisms.
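For anyone unfamiliar with the mechanism being argued about: during WebAuthn registration the authenticator data embeds a 16-byte AAGUID identifying the authenticator model/provider, which is what would let a relying party allowlist or deny specific providers. Here's a toy Python sketch, not a real relying-party implementation; the allowlist contents are placeholders, and a real server would also have to consider attestation policy (with "none" attestation some providers zero the AAGUID). The byte offsets follow the WebAuthn spec: rpIdHash (32) + flags (1) + signCount (4), then the AAGUID.

```python
# Toy illustration of AAGUID-based allowlisting during WebAuthn registration.
# Not production code; offsets per the WebAuthn authenticator data layout.
import uuid

# Hypothetical allowlist; real AAGUIDs are published by authenticator vendors.
ALLOWED_AAGUIDS = {
    uuid.UUID("00000000-0000-0000-0000-000000000000"),  # placeholder only
}

def aaguid_from_authenticator_data(auth_data: bytes) -> uuid.UUID:
    """Extract the AAGUID from raw WebAuthn authenticator data."""
    if len(auth_data) < 53:
        raise ValueError("authenticator data too short to contain an AAGUID")
    # rpIdHash (bytes 0-31) + flags (32) + signCount (33-36), AAGUID at 37-52
    return uuid.UUID(bytes=auth_data[37:53])

def registration_allowed(auth_data: bytes) -> bool:
    """Deny registration unless the authenticator's AAGUID is on the allowlist."""
    return aaguid_from_authenticator_data(auth_data) in ALLOWED_AAGUIDS
```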
Nothing ready-to-go that I'm aware of. ATP will just observe in the next weekly crawl that a shop is no longer returned by the storefinder API call or sitemap crawl, and that shop will simply not be present in the next weekly dataset generated.
To set up archives of shop-specific pages (e.g. a record of opening hours, address, etc. at a point in time), one could monitor the latest builds at https://alltheplaces.xyz/builds.html and, when a new build completes, compare it against the previous build. Then for any feature whose attributes have changed (address, phone number, opening hours, etc.), archive the `website` and/or `source_uri` attribute pages again to ensure the latest snapshot is captured. Any new feature would get the same treatment, so the page for a newly observed shop/feature is archived for the first time.
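A minimal sketch of that diffing step, assuming each build is a GeoJSON FeatureCollection whose features carry a stable `ref` plus properties like `website`, `phone`, `opening_hours` (the exact property names and the file paths below are assumptions, not verified against real ATP builds):

```python
# Compare two ATP-style GeoJSON builds and list pages worth re-archiving.
import json

WATCHED_KEYS = ("addr:full", "phone", "opening_hours", "website")  # assumed keys

def load_features(path):
    """Index feature properties by their stable identifier."""
    with open(path) as f:
        collection = json.load(f)
    return {feat["properties"].get("ref"): feat["properties"]
            for feat in collection.get("features", [])
            if feat["properties"].get("ref")}

def pages_to_archive(old_path, new_path):
    """Return URLs of pages whose shop attributes changed or are newly observed."""
    old, new = load_features(old_path), load_features(new_path)
    urls = set()
    for ref, props in new.items():
        prev = old.get(ref)
        changed = prev is None or any(props.get(k) != prev.get(k) for k in WATCHED_KEYS)
        if changed:
            for key in ("website", "source_uri"):
                if props.get(key):
                    urls.add(props[key])
    return urls

if __name__ == "__main__":
    # Hypothetical filenames; feed the output to your archiver of choice,
    # e.g. the Wayback Machine's Save Page Now: https://web.archive.org/save/<url>
    for url in sorted(pages_to_archive("previous_build.geojson", "latest_build.geojson")):
        print(url)
```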
I'm also aware ArchiveTeam projects tend to commence once the impending collapse of a retail chain is known and someone realises there is a website not archived which would be useful to preserve. Monitoring of ATP feature counts for brands across time may give some hint of how a brand is performing and whether it is growing or shrinking, without having to find press releases and financial statements of the brand. Even if a brand suddenly announces bankruptcy (it happens all the time), generally the website will remain online for at least a few months whilst a new buyer is sought or whilst each retail location has a fire sale to get rid of remaining merchandise.

It's also worthwhile to be aware of acquisitions of retail chains, as this often results in the new parent company changing websites soon after the acquisition closes, possibly removing useful content that once existed. Websites also change "just because", and this could be observed after-the-fact by seeing when ATP spiders break and get replaced/fixed.
Your best bet is probably to look for Wikidata entries that are marked defunct, and match them up to something like name-suggestion-index to get broad categories.
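A rough sketch of the Wikidata side, not a tested pipeline: query the public SPARQL endpoint for brands carrying a dissolution date (P576), then match the returned QIDs against name-suggestion-index entries (which carry `brand:wikidata` tags). The instance-of filter on "brand" (Q431289) is an assumption; defunct chains may be modelled other ways too.

```python
# Query Wikidata for brands with a dissolution date (P576).
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?brand ?brandLabel ?dissolved WHERE {
  ?brand wdt:P31 wd:Q431289 ;        # instance of: brand (assumed filter)
         wdt:P576 ?dissolved .       # dissolved/abolished date
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 200
"""

resp = requests.get(SPARQL_ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "defunct-brand-checker/0.1"})
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    qid = row["brand"]["value"].rsplit("/", 1)[-1]
    # These QIDs can then be looked up against name-suggestion-index data
    # to recover the brand's category and OSM-style tags.
    print(qid, row["brandLabel"]["value"], row["dissolved"]["value"])
```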
I'd encourage you to look at the Software Heritage archive as an example of the broader diversity in software sources outside of GitHub. Even that doesn't cover everything: many repos aren't yet archived there, there are repo formats not yet supported, code that isn't in any repo at all, and people who refuse or block archiving of their code.