Hacker News | mawax's comments

https://archive.is/Zr2D6

For those of us who can't open the link due to an ISP DNS block.


And for those who can't open archive.is due to their ISP's DNS block :( https://web.archive.org/web/20260219023129/https://annas-arc...


A few years ago I switched from KeePass, with the database stored in Dropbox, to a SaaS password manager. My primary reasons were:

- No more sync conflicts when using multiple devices

- Backups are taken care of

- It's harder to steal the database

- Slightly better browser and mobile extensions for auto-filling passwords


Did not know these existed. Just ordered a duo sim from my carrier, thanks!


Probably not down, but blocked by your ISP. Try a VPN. Same thing happens here.


Yes, blocked. This is what I see in Germany without a VPN:

https://notice.cuii.info/

"Their business model is based on copyright infringement"

Well, where do you complain that Anna's Archive isn't a business?


Amazingly, I don't even get this page. I just see the default "this page is not available" from my browser. I'm with Vodafone, and I wonder if it is legal to pretend a site doesn't exist without notifying me.


Pretty sure it's a DNS-level block. So just using a private DNS resolver would be enough, no need for a full-blown VPN. It's just that VPNs usually also use their own DNS instead of the ISP's.

I recommend NextDNS or similar to bypass those DNS blocks and also block ads at a very deep level that works on mobile and even inside apps.
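A minimal sketch of why switching resolvers works: a DNS query is just a small UDP packet (RFC 1035 wire format), and nothing in the packet names the resolver — you pick one only when you send it, so the ISP's resolver can simply be bypassed. The encoding below is standard; the actual send is left commented out to keep the sketch offline:

```python
import struct

def build_dns_query(domain, qtype=1, query_id=0x1234):
    """Encode a DNS query (RFC 1035 wire format) for an A record.
    The packet never names a resolver: you choose one only when you
    send it, e.g. to your ISP's resolver or to a public one."""
    header = struct.pack(">HHHHHH",
                         query_id,    # transaction ID
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no answer/authority/additional
    # A name is encoded as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# 12-byte header, then the name: 7"example" 3"com" 0
assert packet[12:25] == b"\x07example\x03com\x00"

# To actually resolve, send the same bytes to whichever resolver you trust:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(packet, ("9.9.9.9", 53))  # a public resolver instead of the ISP's
```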


I'd rather complain about somebody else deciding which websites I'm allowed to open.


The comparison misses the mark: unlike humans, LLMs don't consolidate short-term memory into long-term memory over time.


That is easily fixed: ask it to summarize its learnings, store them somewhere, and make them searchable through vector indexes. An LLM is part of a bigger system that needs not just a model, but context and long-term memory. Just as a human needs to write things down.

LLMs are actually pretty good at creating knowledge: if you give one a trial-and-error feedback loop, it can figure things out, then summarize the learnings and store them in long-term memory (markdown, RAG, etc.).
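The store-then-search loop described here can be sketched with a toy in-memory vector store. A real system would use an embedding model and a vector database; the bag-of-words "embedding" below is just a stand-in, and the stored summaries are made-up examples:

```python
import math
import re
from collections import Counter

class MemoryStore:
    """Toy long-term memory: store text summaries, retrieve by similarity."""

    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    @staticmethod
    def _embed(text):
        # Bag-of-words vector: word -> count. Stand-in for a real embedding model.
        return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[w] * b[w] for w in a if w in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def store(self, summary):
        # Called after the LLM summarizes what it learned in a session.
        self.entries.append((summary, self._embed(summary)))

    def search(self, query, k=1):
        # Retrieve the k most similar past summaries to inject into context.
        qv = self._embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(qv, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = MemoryStore()
memory.store("The build fails unless NODE_ENV is set before running webpack.")
memory.store("Customer prefers invoices grouped by project, not by month.")
print(memory.search("why does webpack build fail?")[0])  # retrieves the webpack note
```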


You’re making the assumption that there’s one, and only one, objective summarization. This is entirely different from “writing things down.”


Why do you assume I assume that?


My bad if I misunderstood. I assumed so from your use of “it” and of approximation methods.


This runs into the limitation that nobody has RL'd the models to do this really well.


Over time though, presumably LLM output is going into the training data of later LLMs. So in a way that's being consolidated into the long-term memory - not necessarily with positive results, but depending on how it's curated it might be.


> presumably LLM output is going into the training data of later LLMs

The LLM vendors go to great lengths to assure their paying customers that this will not be the case. Yes, LLMs will ingest more LLM-generated slop from the public Internet. But as businesses integrate LLMs, a rising percentage of their outputs will not be included in training sets.


The LLM vendors aren't exactly the most trustworthy on this, but regardless of that, there's still lots of free-tier users who are definitely contributing back into the next generation of models.


For sure, although I'm fairly certain there is a difference in kind between the outputs of free and paid users (and then again to API usage).


Please describe these "great lengths". They allowing customer audits now?

The first law of Silicon Valley is "Fake it till you make it", with the vast majority never making it past the "Fake it" stage. Whatever the truth may be, it's a safe bet that what they've said verbally is a lie that will likely have little consequence even if exposed.


> great lengths to assure

is not incompatible with

> "Fake it till you make it"

I don't know where they land, but they are definitely telling people they are not using their outputs to train. If they are, it's not clear how big of a scandal would result. I personally think it would be bad, but I clearly overindex on privacy & thought the news of ChatGPT chats being indexed by Google would be a bigger scandal.


You did hear that it did happen (however briefly) though, yeah?

https://techcrunch.com/2025/07/31/your-public-chatgpt-querie...


That's my point. It is a thing that is known and obviously a big negative, yet it failed to leave a lasting mark of any kind.


Ah, the eternal internal corporate search problem.


That's only if you opt out.


ChatGPT training is (advertised as) off by default for their plans above the prosumer level, Team & Enterprise. API results are similarly advertised as not being used for training by default.

Anthropic policies are more restrictive, saying they do not use customer data for training.


Is this not a tool that could be readily implemented and refined?


My knowledge graph MCP disagrees.


You can just deploy a function.

You open vscode, install the Azure Functions extensions, walk through the wizard to pick your programming language and write the code. Then create and deploy it from vscode without ever leaving your IDE.


> You open vscode, install the Azure Functions extensions, walk through the wizard to pick your programming language and write the code. Then create and deploy it from vscode without ever leaving your IDE.

You are talking about something entirely different. Provisioning a function app is not the same as deploying the function app. How easy it is to upload a zip is immaterial to the discussion.


The vscode extension can both provision the resource and deploy it.

Edit: And yes, it will create every resource it needs if you want it to, except for the subscription.


> The vscode extension can both provision the resource and deploy it.

On top of having to have an Azure subscription, you need to provision:

- a resource group

- a service plan

- a function app

You do not get to skip those with Azure.
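For concreteness, provisioning that set of resources from the command line looks roughly like this (resource names and region are made up, and the commands need an authenticated Azure subscription, so this is a sketch rather than something runnable as-is; with `--consumption-plan-location` the plan is created implicitly):

```shell
# Resource group to hold everything
az group create --name demo-rg --location westeurope

# Storage account (required by Functions for state and triggers)
az storage account create --name demostorage123 \
    --resource-group demo-rg --sku Standard_LRS

# Function app on a consumption plan (plan is created for you)
az functionapp create --name demo-func --resource-group demo-rg \
    --storage-account demostorage123 \
    --consumption-plan-location westeurope \
    --runtime node --functions-version 4
```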

And by the way, the only time anyone uses vscode, or even Visual Studio, to deploy an app is for personal projects or sandbox environments. Even so, you use the IDE to pick existing resources to deploy to.


You're really trying, aren't you :-)

All of this can easily be automated/cloned if it is something you do often. An RG is a collection of (hopefully) related resources. Plans and the App are provisioned together in the web UI wizard if that's the route you take.


> You're really trying, aren't you :-)

I'm trying to educate you on the topic, but you seem to offer resistance.

I mean, I haven't even mentioned the fact that in order to provision an Azure Function you are also forced to provision a storage account. As if the absurdity of the whole plan concept wasn't enough.

> All of this can easily be automated/cloned if it is something you do often.

Irrelevant. It's completely beside the point how you can automate deploying all those resources.

The whole point is that Azure follows an absurdly convoluted model that forces users to manage many layers of low-level infrastructure details even when using services that supposedly follow serverless computing models. I mean, why on earth would anyone have to provision a storage account to be able to deploy an Azure Function? Absurd.


I've provisioned many Azure Functions apps; there's nothing you can educate me on, here.

Why do you care about a storage account so much?

https://learn.microsoft.com/en-us/azure/azure-functions/func...

Since you didn't know about the [Flex] Consumption plan, there's your education.

And as to why they require a storage account:

https://learn.microsoft.com/en-us/azure/azure-functions/stor...

Voilà, education!


Which is exactly the opposite of how to effectively manage applications, code, and change at any scale beyond a home project.


One thing I noticed about all of the public clouds is an insistence by small-scale users to avoid the user-friendly interface and go straight to the high scale templating or provisioning APIs because of a perception that that’s “more proper”.

You won’t get any benefits until you have dozens of instances of the same(ish) thing, and maybe not even then!

Especially in the dev stage it is perfectly fine to use the wizards in VS or VS Code.

The newer tooling around Aspire.NET and “azd up” makes this into true IaC with little effort.

Don’t overthink things!

PS: As a case in point I saw an entire team get bogged down for months trying to provision something through raw API calls that had ready-to-run script snippets in the docs and a Portal wizard that would have taken that team all of five minutes to click through… If they’re very slow with a mouse.


That was not the point. Parent was complaining about how complicated provisioning and deploying through the Azure portal was.

At scale you'd use IaC such as Bicep.


> That was not the point. Parent was complaining how complicated provisioning and deploying through the Azure portal was.

No, I wasn't. I was pointing out the fact that Azure follows an absurd, brain-dead model of what the cloud is, which needlessly and arbitrarily imposes layers of complexity.

Case in point: the concept of a service plan. It's straight up stupid to have a so-called cloud provider force customers to manage how many instances packing X RAM and Y vCPUs you need to have to run a function-as-a-service app, and then have to manage how that is shared with app services and other function apps.

Think about the backlash that AWS would experience if they somehow decided to force users to allocate EC2 instances to run lambda functions, and on top of that create another type of resource to group together lambdas to run on each EC2 instance.

To let the absurdity of that sink in, it's far easier, simpler, and much cheaper to just provision virtual private servers on a small cloud provider, stitch them together with a container orchestration service, and just deploy apps in there.
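A rough sketch of that VPS-plus-orchestrator setup using Docker's built-in Swarm mode (the address and image are placeholders, and the commands need Docker installed on the servers, so this is illustrative only):

```shell
# On the first VPS: initialise a swarm (the built-in orchestrator)
docker swarm init --advertise-addr 203.0.113.10

# On each additional VPS: join using the token printed by the init command
# docker swarm join --token <token> 203.0.113.10:2377

# Back on the manager: deploy an app as a replicated service
docker service create --name web --replicas 3 -p 80:80 nginx:alpine
docker service ls   # shows the service spread across the nodes
```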


> Case in point: the concept of a service plan. It's straight up stupid to have a so-called cloud provider force customers to manage how many instances packing X RAM and Y vCPUs you need to have to run a function-as-a-service app, and then have to manage how that is shared with app services and other function apps.

You're not forced to, you can use a consumption plan.

https://azure.microsoft.com/en-us/pricing/details/functions/...


> You're not forced to, you can use a consumption plan.

Pray tell, what do you think is relevant in citing how many plans you can pick and choose from to just run a simple function? I mean, are you trying to argue that instead of one type of plan, you have to choose another type of plan?


The consumption plan is the default plan, so technically you don't have to choose anything, just go with the defaults.

But it disproves your point that you're "forced" to have an app service plan.

At this point you're simply arguing to argue after having been shown to be incorrect multiple times. Good luck.


The author addresses this: "See this jittering after the animation completes?"

