One alternative is to run a web crawler that stores the index in a series of SQLite database files, split by topic, by site, or by any other criterion. Users could then download sets of those SQLite databases and run queries on them. Not completely distributed, but it hides some information in the noise of "search sets" and mirrors, and individual queries run locally. You could mirror the main repository and run searches on your own server or machine. You could also swap the database files over P2P, etc.
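A minimal sketch of the local-query side, assuming the crawler ships per-topic files with an FTS5 full-text table (the table name, columns, and URLs here are hypothetical; a real file would be opened from disk rather than built in memory):

```python
import sqlite3

# Hypothetical "search set": one SQLite file per topic, containing an
# FTS5 table of crawled pages. Built in memory here for illustration;
# a user would instead open a downloaded file, e.g. sqlite3.connect("python.db").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE pages USING fts5(url, title, body)")
conn.executemany(
    "INSERT INTO pages VALUES (?, ?, ?)",
    [
        ("https://example.com/a", "Intro to SQLite", "sqlite is a small embedded database"),
        ("https://example.com/b", "P2P basics", "swapping index files over a peer to peer network"),
    ],
)

# The query runs entirely locally; no central search server sees it.
rows = conn.execute(
    "SELECT url, title FROM pages WHERE pages MATCH ? ORDER BY rank",
    ("sqlite",),
).fetchall()
print(rows)  # matching pages, best-ranked first
```

Since each file is a self-contained index, mirroring or swapping them over P2P is just ordinary file transfer, and a query only reveals which search sets a user downloaded, not what they searched for.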

