
I don't understand how AI scrapers make up such a large percentage of traffic to websites, as people claim it does.

In principle, if you post a webpage, presumably, it's going to be viewed at least a few dozen times. If it's an actually good article, it might be viewed a few hundred or even thousands of times. If each of the 20 or so large AI labs visit it as well, does it just become N+20?

Or am I getting this wrong somehow?



> I don't understand how AI scrapers make up such a large percentage of traffic to websites, as people claim it does.

I think a lot of people confuse scraping for training with on-demand scraping for "agentic use" / "deep research", etc. Today I was testing the new GLM-experimental model on their demo site. It had "web search", so I enabled that and asked it for something I have recently researched myself for work. It gave me a good overall list of agentic frameworks, after some Google searching and "crawling" of the ~6 sites it found.

As a second message I asked for a list of repo links, how many stars each repo has, and general repo activity. It went on and "crawled" each of the 10 repos on GitHub, couldn't read the stars, but then searched and found a site that reports them, and it "crawled" that site ten times, once per framework.

All in all, my two-message chat session performed ~5-6 searches and 20-30 page "crawls". Imagine what they do when traffic increases. Now multiply that by every "deep research" provider (Perplexity, Google, OpenAI, Anthropic, etc.). Now think how many "vibe-coded" projects like this exist. And how many are poorly coded and re-crawl each link every time...


Yeah it seems the implementation of these web-aware GPT queries lacks a(n adequate) caching layer.

Could also be framed as an API issue, as there are no technical limitations preventing a search provider from serving relevant snapshots of the body of the search results. Then again, there might be legal issues behind not providing that information.


Caching on the client side is an obvious improvement, but probably not trivial to implement at the provider level: what do you cache, are you allowed to, how do you deal with auth tokens (if supported), a small difference in the search might invalidate the cache, and so on.
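For illustration, the simplest client-side variant is a TTL cache keyed by URL, so repeated "crawls" within one session hit memory instead of the origin site. A minimal sketch, assuming nothing about any provider's actual stack; the class name and TTL value are made up:

```python
import time

class PageCache:
    """TTL cache for fetched pages, keyed by URL. Illustrative only:
    a real crawler would also honor Cache-Control headers, bound the
    cache size, and decide what it is allowed to store."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (fetch_time, body)

    def get(self, url, fetch):
        now = time.time()
        hit = self.store.get(url)
        if hit and now - hit[0] < self.ttl:
            return hit[1]          # fresh copy: no request to the origin
        body = fetch(url)          # miss or stale: fetch and remember
        self.store[url] = (now, body)
        return body
```

With this in place, the "crawled that site ten times" pattern above would hit the origin once and the cache nine times.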

Another content-creator avenue might be to move to two-tier content serving, where you serve pure HTML as the public interface and only allow "advanced" features that take many CPU cycles for authenticated / paying users. It suddenly doesn't make sense to use a huge, heavy, resource-intensive framework for things that might be crawled a lot by bots / users doing queries with LLMs.
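The two-tier idea can be sketched as a request handler that only runs the expensive path for authenticated users; every name here (the token check, the render functions) is an illustrative assumption, not a real framework API:

```python
def serve(path, auth_token, static_pages, expensive_render, valid_tokens):
    """Two-tier serving sketch: anonymous requests (including bots)
    get cheap pre-rendered HTML; only authenticated users trigger
    the CPU-heavy dynamic path."""
    if auth_token in valid_tokens:
        return expensive_render(path)              # paid/advanced tier
    return static_pages.get(path, "<h1>404</h1>")  # cheap static tier
```

Crawler traffic then costs you roughly what serving a static file costs, regardless of how heavy the authenticated experience is.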

Another idea was recently discussed here, and covers "micropayments" for access to content. Probably not trivial to implement either, even though it sounds easy in theory. We've had an entire web3.0 hype cycle on this, and yet no clear easy solutions for micropayments... Oh well. Web4.0 it is :)


A caching layer sounds wonderful. Improves reliability while reducing load on the origin servers.

I worry that such caching layers might run afoul of copyright, though :(

Though an internal caching layer would work, surely?


If you run a website, you'll realize it's very difficult to get human traffic. Worse, trying to understand what those eyeballs are doing is a swamp; there are legitimate privacy concerns, for example. Maybe all you care about is whether your articles about sewing machines are getting more traction than your articles about computing Pi, but you can't answer that without navigating all the legal complications of your analytics platform of choice, which wants to make sure you suffer for not letting it collect private information on your visitors to sell to third parties and dump ads onto them.

Were it not for the bots, you would be fine just running grep on your access logs. But no, bot traffic leaves noise everywhere, and for small websites that noise is more than enough to bury the signal and to account for most of the traffic bill.
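The grep-on-access-logs approach still works if you first split bot traffic out. A minimal sketch, assuming the common "combined" log format (the user agent is the last quoted field) and an illustrative, incomplete list of crawler markers:

```python
# Known crawler user-agent substrings; an illustrative, incomplete list.
BOT_MARKERS = ("GPTBot", "ClaudeBot", "Amazonbot", "Bytespider", "CCBot")

def tally(log_lines):
    """Count bot vs. human requests in combined-format access logs.
    The user agent is taken as the last double-quoted field."""
    bots = humans = 0
    for line in log_lines:
        ua = line.rsplit('"', 2)[-2] if line.count('"') >= 2 else ""
        if any(marker in ua for marker in BOT_MARKERS):
            bots += 1
        else:
            humans += 1
    return bots, humans
```

Substring matching on user agents is crude (agents can lie), but it is enough to see what share of the log is self-declared crawlers.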


On multiple client sites that have > 1 million unique real visitors per month, we are seeing some days where ~25-30% of requests are from AI crawlers. Thankfully we block almost all of this traffic, but it is a huge pain because it adds extra load to your servers and messes up your analytics data for what is a terrible return: traffic from AI sources has a horrendous conversion rate, even worse than social media traffic.
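Blocking by user agent is the simplest version of this (the commenter's actual setup, likely a CDN or WAF rule, isn't described). A hedged sketch with an illustrative blocklist, run before any expensive work:

```python
# Crawler user-agent substrings to refuse; illustrative, not exhaustive.
BLOCKED_AGENTS = ("GPTBot", "ClaudeBot", "Amazonbot", "PerplexityBot")

def gate(user_agent):
    """Return an HTTP status code for a request based on its user agent.
    Blocked crawlers get 403 before the application does any real work."""
    if any(bot in user_agent for bot in BLOCKED_AGENTS):
        return 403  # refuse declared crawler traffic early
    return 200
```

This only stops crawlers that identify themselves; anything spoofing a browser user agent needs rate limiting or a WAF instead.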


I don't understand it either. I track requests, and AI crawlers are there but not as abusive as people claim. The most annoying requests are from attackers trying to find my ".git" directory. But I highly doubt those guys would respect any rules anyway.


It varies vastly with what type of website you have, how many pages you have, and how often they are updated. We routinely see thousands of requests per minute coming from AI bots, and the scraping lasts for hours. Enough to make up 20-30% of overall requests to the server.


I have a symbol server hosting a few thousand PDBs for a FOSS package.

Every day, Amazonbot tries to scan every single PDB directory listed, for no real reason. That causes 10k+ requests each day, when legitimate traffic sits at maybe 50 requests a day.
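Amazon states that Amazonbot honors robots.txt, so a rule like the following should at least ask it to stay away; whether any given crawler actually complies is another matter, and the `Disallow: /` here blocks it from the whole site, so narrow the path if some crawling is welcome:

```
User-agent: Amazonbot
Disallow: /
```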


Pages are dynamic; they change often. If a page is worth scraping once, it is worth scraping again and again and again to keep up to date with any changes.


Maybe they're vibe-coding the scrapers.


Your speculation assumes a low page count.


Post a URL to a page on your website on something like Mastodon and tail your logs.


Related: Please Don’t Share Our Links on Mastodon

https://news.ycombinator.com/item?id=40222067



