Hacker News | epitactic's comments


Bloomberg only has this excerpt:

> The tweet, which said Uighur women were no longer “baby-making machines,” was originally shared on Jan. 7, but wasn’t removed by Twitter until more than 24 hours later.

The complete tweet was archived on https://archive.is/nxC3r#selection-3959.0-3959.255

Chinese Embassy in US (verified account) @ChineseEmbinUS · 5 hours ago

"Study shows that in the process of eradicating extremism, the minds of Uygur women in Xinjiang were emancipated and gender equality and reproductive health were promoted, making them no longer baby-making machines. They are more confident and independent."



> zipfiles where the first entry is an uncompressed file with the name "mimetype" that has the mimetype

The EPUB format also adopted this convention: https://www.w3.org/publishing/epub3/epub-spec.html#sec-intro...

"The EPUB Publication's resources are bundled for distribution in a ZIP-based archive with the file extension .epub. As conformant ZIP archives, EPUB Publications can be unzipped by many software programs, simplifying both their production and consumption.

The container format not only provides a means of determining that the zipped content represents an EPUB Publication (the mimetype file), "
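
Because the mimetype entry must be the archive's first entry and stored uncompressed, its bytes sit at a predictable place and a reader can sniff the format cheaply. A minimal sketch in Python, using only the stdlib (the reader-side check is my own illustration, not code from the EPUB spec):

```python
import io
import zipfile

EPUB_MIMETYPE = b"application/epub+zip"

def sniff_epub(data: bytes) -> bool:
    """Return True if the zip's first entry is an uncompressed
    'mimetype' file containing the EPUB media type."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        infos = zf.infolist()
        if not infos:
            return False
        first = infos[0]
        return (first.filename == "mimetype"
                and first.compress_type == zipfile.ZIP_STORED
                and zf.read(first) == EPUB_MIMETYPE)

# Build a minimal EPUB-like archive: the mimetype entry comes first and
# is stored without compression, so its magic bytes sit at a fixed
# offset and even `file`-style sniffing can find them.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("mimetype", EPUB_MIMETYPE,
                compress_type=zipfile.ZIP_STORED)
    zf.writestr("OEBPS/content.opf", b"<package/>",
                compress_type=zipfile.ZIP_DEFLATED)

print(sniff_epub(buf.getvalue()))  # True
```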


This project looks amazingly promising, thank you for creating it and I wish you the best of luck in its success.

One humble suggestion/idea to think about, related to:

> It uses trust-based Peers to share the local cache. Peers can receive, interchange, and synchronize their downloaded media. This is especially helpful in rural areas, where internet bandwidth is sparse; and redundant downloads can be saved. Just bookmark Stealh as a Web App on your Android phone and you have direct access to your downloaded wikis, yay!

Trusted peers with a shared web cache is a good start, but how about _trustless_ peers? Is this possible?

Possibly using something like https://tlsnotary.org - which uses TLS to provide cryptographic proof of the authenticity of saved HTTPS pages (but unfortunately only works with TLS 1.0)


I'm still reading through the code and the paper, but this actually sounds amazing.

I planned on integrating a self-signing intermediary certificate for TLS anyway, so that peer-to-peer communication can be encrypted without a third-party handshake.

It sounds like this would integrate very nicely as a hashing/verification mechanism for shared caches. Thanks much for the hint!


> https://blast.ncbi.nlm.nih.gov/Blast.cgi?RID=2PUA1EJK114&CMD...

This link is no longer working ("Error: Results for RID 2PUA1EJK114 not found", at least for me), mirror: http://archive.md/zyYYY


Thanks for this; I found a review of Authentic8 Silo: https://uk.pcmag.com/password-managers/3921/authentic8-silo

Looks like they have been around a while (5+ years), and from their website https://www.authentic8.com, they focus on the endpoint-security angle:

"The Browser for a Zero Trust Web"

> Traditional browsers run on blind trust. Silo assumes zero trust by running the browser in the cloud.

> Web code can’t be trusted. Organizations know that every page view means risk to the business. Silo restores your trust in the web through isolation, control and audit of the browser.

> Isolate: Silo executes all web code on our servers. Nothing touches your endpoint, and untrusted endpoints can’t corrupt your environment or your data.

> Mitigate risk: Shift your attack surface area off your network and devices to disposable, anonymous cloud infrastructure.

I am intrigued, and wonder how well they are doing and how well it works. Somewhat expensive: I've heard $10/month and $100/year for individuals. There is no free live demo online, but one is available on request.

With the Epitactic Cloud Browser, I'm only running the VPS temporarily as a demo; the way I envision it, end users run their own instance, either on a home server or a virtual server, maintaining control and privacy.


In the sense that it proxies traffic through the cloud, almost. The target websites won't see your IP address (although I could add an X-Forwarded-For header passing the origin address, like archive.is does: http://archive.is/faq - cloudbrowser.website does not currently do this) or other details of your web browser environment.

Almost no other metadata is transferred through. There are two exceptions I can think of:

1) Browser window size. This is actually a significant fingerprinting leak, since desktop users can resize their browser dimensions down to the pixel. Cloud Browser uses it to generate an appropriately sized image, matching the Chrome instance in the cloud to the end user's browser. It is less of a problem on mobile devices, where the browser window is fixed, but it could still help fingerprint the device type.

If you want to avoid this, disabling JavaScript will prevent Cloud Browser from using window.innerWidth, innerHeight, and devicePixelRatio, and it will default to 800x600x1, which may not match your device. The best way to solve this is probably to run your own Cloud Browser instance, configured for the device you will browse from.
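
To make that fallback concrete, here is a hedged sketch of how a server might pick the screenshot viewport from client-reported hints, defaulting to 800x600 at device pixel ratio 1 when JavaScript is disabled and nothing arrives. The parameter names (`w`, `h`, `dpr`) are illustrative, not Cloud Browser's actual API:

```python
def choose_viewport(params: dict) -> tuple[int, int, float]:
    """Pick the headless-Chrome viewport from client-reported hints
    (e.g. window.innerWidth/innerHeight/devicePixelRatio forwarded as
    query parameters), falling back to 800x600 at DPR 1 when no valid
    hints are present."""
    try:
        width = int(params["w"])
        height = int(params["h"])
        dpr = float(params.get("dpr", 1))
    except (KeyError, ValueError):
        return 800, 600, 1.0
    # Clamp to sane bounds so a hostile client can't request a huge render.
    width = max(100, min(width, 3840))
    height = max(100, min(height, 2160))
    dpr = max(0.5, min(dpr, 3.0))
    return width, height, dpr

print(choose_viewport({}))                         # (800, 600, 1.0)
print(choose_viewport({"w": "1280", "h": "720"}))  # (1280, 720, 1.0)
```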

Interestingly, Firefox is implementing a "letterboxing" feature, from TorBrowser, to reduce fingerprinting from this technique: https://nakedsecurity.sophos.com/2019/03/08/firefox-browser-...

2) Time of access. The time Cloud Browser accesses a website will be shortly after the end user accesses it, as you would expect from a proxy. This could allow some forms of fingerprinting, e.g. inferring work hours from your browsing habits, or correlating with other non-cloud website accesses.

If you are concerned about this, Cloud Browser makes it very easy to share the cached pages offline, in a time-independent manner. That is, you can access the files in cache/ offline as needed. The online browser will try to load from the cache first, but automatically refreshes with a live version when one is available. You could also set up a cron job to fetch the websites you commonly visit on a fixed schedule, then browse only through the cache while offline, so websites couldn't see when you read them.
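
The cron-driven prefetch could look something like this sketch (the cache/ layout and file naming here are my own assumptions, not Cloud Browser's actual on-disk format):

```python
import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("cache")  # hypothetical cache layout

def cache_path(url: str) -> pathlib.Path:
    """Map a URL to a stable on-disk name, so repeat fetches overwrite
    the same file and the offline browser can find it again."""
    return CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest()[:16] + ".html")

def prefetch(url: str) -> pathlib.Path:
    """Fetch a page into the cache on a fixed schedule, so the origin
    site only ever sees the schedule, never your actual reading times."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = cache_path(url)
    with urllib.request.urlopen(url) as resp:
        path.write_bytes(resp.read())
    return path

# A crontab entry refreshing every six hours might look like:
#   0 */6 * * * /usr/bin/python3 prefetch.py
# Example (requires network):
#   prefetch("https://example.com/")
```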

I've thought about developing this feature further; it could lead to a better user experience and avoid some of the problems with running Cloud Browser on a VPS. The VPS would still be needed to run headless Chrome, but it could upload the static HTML and images as plain files to any static hosting site for quick and easy browsing. You would need to "subscribe" to the websites you want to visit, however, and they would have to be refreshed periodically.


Deepstream.live sounded neat, but unfortunately it now seems to be down. The same poster also posted about webautomation.guru: https://news.ycombinator.com/item?id=18951821 titled "Show HN: Use Chrome Headless in the Cloud from the Browser", similar to mine, but it too is down for me. It looked a lot more advanced than cloudbrowser.website, though!

These remote browser services seem to be difficult to keep running... (expensive if not profitable, I assume. My VPS is good for a few more weeks.)


Yes, indeed it could. I haven't tested it, but according to Wikipedia, image maps were introduced in HTML 3.2, which was published as a W3C Recommendation in 1997 (!), so in principle it should work. Support may go back even earlier: an RFC proposing client-side image maps for HTML 2.0 was published in 1996: https://tools.ietf.org/html/rfc1980, "A Proposed Extension to HTML: Client-Side Image Maps".

Cloud Browser does use a few modern features, a bit of CSS and (optional) JS, but not for anything strictly essential. Again I haven't tested it on any old browsers, but if anyone does I welcome bug reports/patches at https://gitlab.com/epitactic/cloudbrowser/issues.

(I wonder if it would work on NCSA Mosaic? https://news.ycombinator.com/item?id=18428682 - well, Mosaic added the img tag, but I'm not sure imagemap was available yet.)


Sanitizing the DOM is another option, but the main problem I was trying to avoid by taking screenshots is the arms race between the cloud browser and websites, as also seen with ad blockers. A static screenshot in contrast is "up" a level, agnostic to the complex details of rendering an HTML5 website.
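
The screenshot step itself needs little more than headless Chrome's built-in flags. A sketch of building that invocation (the binary name varies by platform, and this is my illustration, not Cloud Browser's actual code):

```python
import subprocess  # used only in the commented example below

def screenshot_cmd(url: str, out: str,
                   width: int = 800, height: int = 600) -> list[str]:
    """Build a headless-Chrome command line that renders a page to a
    PNG, sidestepping DOM sanitization entirely: the client only ever
    receives pixels."""
    return [
        "chromium",  # or "google-chrome" / "chrome", per platform
        "--headless",
        "--disable-gpu",
        f"--screenshot={out}",
        f"--window-size={width},{height}",
        url,
    ]

# Example (requires a Chromium binary on PATH):
#   subprocess.run(screenshot_cmd("https://example.com/", "page.png"),
#                  check=True)
```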

Another inspiration is image boards such as 4chan, where screenshots are a very common means of sharing information, including articles on websites, or even tweets. Even though it may not be technically ideal, and annotated text seems like it would be more efficient, in practice images as the lowest-common-denominator seem to be a reasonably effective format for sharing information.

On the other hand, if someone does come up with a true "browser in browser" implementation like you propose, I would be very interested in trying it out. Could be a promising idea, but a lot of work to get right.

