
The main benefit of using CSS Transitions and Animations on the accelerated properties (transform & opacity) is that the animation will be run by the compositor (at least in WebKit browsers; I think Blink's new animation engine changes this).

Running the animation in the compositor means:

1. You avoid a style recalc on every frame (which can be a big deal).

2. The animation keeps running even if the WebProcess gets contended (by painting new content, handling slow JavaScript, or running GC).

It's fine to mess with transform and opacity from JavaScript -- and for gesture handling you have to, so it needs to be fast -- but it has more overhead than a CSS Animation would have.
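For concreteness, a minimal sketch of the kind of animation that stays on the compositor (the class and keyframe names are invented): only transform and opacity change, so no per-frame style recalc, layout, or paint is needed.

```css
/* Hypothetical slide-and-fade entrance. Because only transform and
   opacity are animated, the compositor can run every frame itself. */
.toast-enter {
  animation: toast-in 300ms ease-out;
}
@keyframes toast-in {
  from { transform: translateY(20px); opacity: 0; }
  to   { transform: translateY(0);    opacity: 1; }
}
```

Animating width, top, or margin instead would pull the animation back onto the main thread, since those properties require layout.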


This is generally true in Gecko as well, although neither Gecko nor WebKit are guaranteed to run animations on the compositor if they decide that it isn't worth it in some circumstance.


My understanding is that human-neanderthal interbreeding rarely resulted in viable offspring (though some did, which is supposedly where blonde/ginger hair and blue eyes in modern humans come from).

Is that similar here? Are there traits in modern goats or sheep that came from the other?


Blue eyes are the result of a more recent mutation [1]. Blonde hair has appeared independently several times [2].

[1] http://www.sciencedaily.com/releases/2008/01/080130170343.ht...

[2] http://www.nytimes.com/2012/05/08/science/another-genetic-qu...


As a Scot I might be a bit touchy about this (Scots have the highest proportion of redheads and I carry the gene even though my hair and eyes are very dark):

"A DNA study has concluded that some Neanderthals also had red hair, although the mutation responsible for this differs from that which causes red hair in modern humans."

http://en.wikipedia.org/wiki/Red_hair#Origins


...where do you have that "understanding" from?! Is there any actual information about the 'Homo sapiens sapiens' <-> 'Homo sapiens neanderthalensis' contact and relationship, or are you just spreading someone's wild guesses?



hterm -- the terminal implementation that the Chrome ssh plugin and crosh are based on -- is pretty hackable, too.


I imagine not shipping a derivative SDK is to prevent one vendor from making an incompatible "Android+" (and the next vendor, etc) which would cause fragmentation. Instead they get to expose extra APIs through the updater app in the one true Google Android SDK (and you use the same mechanism to get the Google APIs).


I disagree that the Shadow DOM is pretty awesome. I think scoping style is valuable, but building components that are exposed as new tags is not appealing given the vast complexity of the implementation and the limitations of tags.

Markup has a very weak type system (strings and children), which makes building complex UIs more painful than it has to be (this also holds for markup-driven toolkits like Angular and Knockout -- where the equivalent of main() is HTML markup and mostly declarative). Markup isn't a real programming language, and it's very weak compared to a true declarative programming language.

JavaScript, however, is a real programming language with all of the constructs you need for building extensible systems. For building anything complex (which is where Shadow DOM should shine) you will need extensive JS, and you will need your Shadow DOM components to expose rich interfaces to JS... At which point, why are you still trying to do markup first? It's something that's more "in your way" than helpful.


Thank you! I thought I was the only one who felt this way. I truly do not understand why Google feels that application composition should happen at the presentation layer rather than the code layer, particularly when the presentation layer is as weakly typed as HTML. This was tried and failed in the very first round of server-side web frameworks back in the mid-late 90s. More recently, the complexity of Angular's transclusion logic should have clued them in that this is an unwieldy idea.

I agree that some kind of style scoping construct would be a good addition, and far simpler than ShadowDOM. Simple namespacing would be a good start. It would be a more elegant solution to the kludgy class prefixing that has become common (".my-component-foo", ".my-component-bar", etc.)
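To make the kludge concrete, this is the manual-prefixing pattern in question (the names are invented): every selector carries the component name by convention, because nothing actually scopes it.

```css
/* Manual namespacing: each rule repeats the "my-component-" prefix
   because there is no scoping construct doing it for us. */
.my-component-foo { color: #333; }
.my-component-bar { margin-bottom: 8px; }
.my-component-foo .my-component-title { font-weight: bold; }
```

A real scoping or namespacing construct would let those inner rules be written plainly, with the prefix applied once at the boundary.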


Well, for one thing, div-soups are hard to read, create deeply nested DOMs, and lack semantics or transparency. If you're trying to preserve the nature of the web, which is transparent, indexable data, one where "View Source" actually has some use to you, then having a big ball of JS instantiate everything is very opaque.


The phrase "div-soup" makes me reach for my revolver. It seems to be a straw man that means "Either you're for Web Components or you're for the worst of web engineering."

- How does ShadowDOM (or Web Components more generally) make your DOM shallower? It's still the same box model. Deeply nested DOM structures are usually the result of engineers who don't understand the box model and so over-decorate their DOMs with more markup than is semantically or functionally necessary. Nothing in ShadowDOM (or, again, Web Components) changes this.

- Are custom elements really more transparent than divs? If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>? Proponents of Web Components seem to imagine that once you can define your own elements, you'll magically become a better engineer, giving your elements nice, clear semantics and cleanly orthogonal functionality. If people didn't do that with the existing HTML, why will custom elements change them? At least with divs, I can be reasonably sure that I'm looking at a block element. Custom elements, I got nuthin'. They're not transparent. They're a black box.

- Finally (and more importantly), we already solved the "div-soup" problem. It was called XHTML. Custom elements in encapsulated namespaces! Composable libraries of semantically-meaningful markup! How's that working out today? It's not.

TL;DR: a common presentation DTD is the strength of the web, not its weakness. Attempts to endow web applications with stronger composition/encapsulation should not be directed at the DOM layer but at the CSS and JS layers above and below it.


1. Shadow DOM scopes down what CSS selectors can match, so deep structures can hide elements from expensive CSS rules.

2. Custom Elements promote a declarative approach to development, as opposed to having JS render everything.

3. XHTML was not the same as Shadow DOM/Custom Elements. XHTML allowed you to produce custom DSL variants of XHTML, but you still ended up having to implement them in native code, as trying to polyfill SVG, for example, would be horrendously inefficient.

4. The weakness of the web is the lack of composability due to lack of encapsulation. Shit leaks, and leaks all over the page. Some third party JS widget can be completely fucked up by CSS in your page and vice versa.

A further weakness is precisely the move to presentation-style markup. Modern web apps are using the document almost as if it were a PostScript environment, and frankly, that sucks. We are seeing an explosion of "single page apps" that store their data in private data silos, fetch it via XHRs, and render into a div-soup.

The strength of the web was publishing information in a form where a URL represented the knowledge. Now the URL merely represents a <script> tag that then fires off network requests to download data and display it after the fact. Search engines have had to deal with this new world by making crawlers effectively execute URLs. I find this to be a sad state of affairs, because whether you agree or not, the effect is to diminish the transparency of information.

You give me an HTML page, and I can discover lots of content in the static DOM itself, and I can trace links from that document to other sources of information. You give me a SinglePageApp div-soup app that fetches most of its content via XHR? I can't do jack with that until I execute it. The URL-as-resource has become URL-as-executable-code.

IMHO, the Web needs less Javascript, not more.


Both are needed! JavaScript is great for portability of apps that would otherwise be done in a native environment (you wouldn't want to index these anyway). Isn't there a standard MIME type to execute JS directly in browsers? There should be if not. If you care about being searchable and having designs that are readable on a variety of devices, powerful and degradable markup is very useful.


Or search engines could use URLs with a custom browser that man-in-the-middles XHR and WebSockets to effectively crawl APIs, since the APIs theoretically are semantic by default.

Execute the URL, index all XHR and WebSocket data, follow the next link, and repeat.


> If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>?

You can extend the semantics of existing elements so you'd actually have <div is="custom-element-with-some-unknown-semantics-but-its-still-mostly-a-div">. Unextended tags are for when nothing in the existing HTML spec mirrors the base semantics you want.

Of course nothing stops people who did bad things before from doing bad things in the future, but it doesn't make tag soup worse.
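A sketch of the two flavors being contrasted (all the element names here are invented for illustration):

```html
<!-- Customized built-in: still "mostly a div" -- a block element
     with extended behavior layered on top. -->
<div is="user-card" data-user-id="42">
  <h2>Jane Doe</h2>
</div>

<!-- Fully custom tag: for when nothing in the existing HTML spec
     mirrors the base semantics you want. -->
<user-card data-user-id="42"></user-card>
```

With the `is=` form, a crawler or reader that knows nothing about "user-card" still sees an ordinary div.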


The Custom Elements[1] and Shadow DOM[2] specifications have little to do with each other. The former is useful for defining new elements in HTML, along with properties and methods. The latter can be used to encapsulate the style/dom of that element's internals. So each technology is useful by itself and can be used standalone. When used together, that's when magic happens :)

[1]: http://w3c.github.io/webcomponents/spec/custom/ [2]: http://w3c.github.io/webcomponents/spec/shadow/


While you're perfectly allowed to disagree, it sounds like what you're saying is this:

"Collections of div-soup activated by jQuery plugins are the way to write maintainable web applications that make sense"

It's not as though Javascript has no role whatsoever in custom elements, but really, there's a lot to be said about how this way of working will be a huge improvement over the current jQuery + div-soup status quo.


No, that's not what I meant.

I'm saying that DOM through its relationship to HTML has weaknesses that make it unsuitable for building application components out of. "jQuery-enabled div-soup" is an example of how mixing presentation with model and logic yields unmaintainable results.

I have been interested in React.js recently, since it provides an interface to create reusable components and to use them inside a rich programming language with full types. I think that's a better example of a competing idea.

My experience is with building single page apps from scratch, so maybe there's a common use-case (embedding a twitter widget, or a 3rd party comment system in a blog) that Shadow DOM and Custom Components will address that I'm not familiar with.


FB React is a good example because it's living more in the presentation layer. But Custom Elements offer some things that React doesn't (as far as I'm aware).

One is better encapsulation; another is a well-defined styling system (although obviously this article shows that this is not a super simple problem to solve, I'm certain that a good way of doing this will be around before too long). Finally, and most importantly, it's just baked into the platform itself, so interop between different frameworks is less of a pain.

For instance, suppose you want to use a particular Ember component in your Angular app. You probably don't want to include the entire Ember environment, and you want it to play nicely with Angular's idea of data binding and local scopes. Can you even do this? If you can, how much effort does it take, and how much does it degrade the application?

So, we've got: interoperable components/widgets. Easily style-able widgets. Elements with some semantic purpose. Simplified documents. Reusable templates (which, once HTML imports are pref'd on by default, should be easily host-able on CDN hosts).

There are a lot of benefits to baking this into the platform, despite making the platform an even bigger, crazier mess than it already is. It should hopefully give us better (and better designed) tools to work with.

Granted, I'm not saying it's going to solve every (web) problem ever, nothing ever does.


Yes, yes, a million times yes.

The biggest problem with this weak type system is obvious with CSS3 matrix transforms. They are the biggest bottleneck preventing fast updates to many DOM elements. Without fast updates of many elements across an entire page (window), pulling off the awesome smoothly animated effects found in modern desktop and mobile operating systems is pretty much impossible, especially in a system that implements immediate-mode graphics on top of retained mode (the DOM).

Marshalling matrix3d updates from numbers in an array of length 16 to a stringified array to apply it to an element, only to have the browser have to convert that stringified array back into an array of 16 numbers is insanity.

If you want performance, you need a more robust type system than just strings and children. I'm an engineer at Famo.us and we would absolutely love it if we could do something in JavaScript like element.set3DTransform(someLength16Array); We could simultaneously update thousands of DOM elements if arrays and typed arrays were accepted instead of stringified values.
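The round trip being described can be sketched in a few lines (`setMatrix3d` is a name made up for illustration; `set3DTransform` is the wished-for API, not a real one):

```javascript
// Today: 16 numbers -> string -> (inside the browser) 16 numbers again.
function setMatrix3d(el, m) {
  // m is a length-16 array in column-major order.
  el.style.transform = "matrix3d(" + m.join(",") + ")";
}

// The hypothetical no-round-trip API the comment wishes for:
// el.set3DTransform(new Float32Array(16));
```

Every animation library updating thousands of elements per frame pays this stringify-then-reparse cost today.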


Yeah, the CSS OM is really horrible too. CSS Animations is another area where you end up feeding huge generated strings into the DOM -- in theory Web Animations is meant to improve this, though personally I feel like the API is too high level and ends up being really large because of this :(.

In your example, I think it'd only be a small patch (for WebKit, where my experience is) to optimize "elem.style.transform = new WebKitCSSMatrix(a,b,c,d)" without intermediate stringification. Mozilla doesn't expose a CSSMatrix type unfortunately. I've done some similar things for other CSS properties in WebKit -- have you considered submitting a patch? I've found the WK guys super receptive to small optimizations which don't change observable behavior (i.e.: you can't tell if the WebKitCSSMatrix was stringified or not currently) like that.


We're not über familiar with the internals of the browser or how to go about submitting a patch that fixes this. We did talk to people at Mozilla about this, but we still have to follow up on that.

Do you contribute to this area of Webkit? I'd love to chat more about this with you. Email is in my profile. Use the famo.us one.


It's interesting that the advantage for LLVM is that it was used to create a compiler for a processor with a secret and proprietary instruction set.

If NVIDIA had to use GCC (surely they'd have just done their own instead, but for the sake of argument) then we'd all get to learn more about their architecture and maybe make compilers for different languages that natively target their processors...


What makes you think you have the right to learn about a proprietary architecture? Just because you bought the product doesn't mean NVIDIA has to tell you how it works.


The "translateZ(0)" description is a bit misleading -- I wish he'd provided numbers for the improvement. In general, using composited layers is more expensive (since the CPU still does the rendering of the image, must upload it to a texture, etc).

It might be a win if the thing you apply it to:

1. Never changes, but the content around it changes often.

2. Is hard to render (lots of shadows, etc).

Avoiding the layout and paint thrashing is a really good optimization though. You should be able to insert as many things into the DOM as you like without triggering a layout, so long as you don't read back (like consulting offsetLeft). I think the Chrome inspector will mark read-backs with a little exclamation point in the timeline with a tooltip "synchronous layout forced" and a backtrace to your JS...
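A sketch of the read-back pattern, using plain objects as stand-ins for DOM elements (in a real page, each `offsetLeft` read after a style write forces a synchronous layout):

```javascript
// Layout thrashing: interleaving writes and reads forces a
// synchronous layout on every loop iteration in a real browser.
function thrashed(items) {
  const lefts = [];
  for (const it of items) {
    it.style.width = "100px";   // write (invalidates layout)
    lefts.push(it.offsetLeft);  // read (forces layout right now)
  }
  return lefts;
}

// Batched: do all the writes first, then all the reads --
// at most one layout for the whole pass.
function batched(items) {
  for (const it of items) it.style.width = "100px";
  return items.map(it => it.offsetLeft);
}
```

Both functions produce the same values; only the batched version keeps the timeline free of "synchronous layout forced" warnings.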


The translateZ deal just throws the browser into hardware rendering, which will run much smoother with any GFX hardware that will support it.

The same thing works with all of the other 3d transforms: Putting in a BS value for Z will cause the element to use hardware acceleration.
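For reference, the trick being described is just a no-op 3D transform (a sketch; whether it actually promotes the element is up to each engine's heuristics and can change between versions):

```css
/* A do-nothing transform used purely to nudge the engine into
   giving the element its own composited layer. */
.promoted {
  transform: translateZ(0);
}
```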


No, translateZ just makes it a composited layer. Hardware comes much later in the pipeline and possibly in another process.

The content of the layer isn't hardware rendered. It's rendered by the CPU and uploaded to a texture. In WebKit and probably Blink there's a fast path for images, canvas and video so that they can be directly uploaded or (on some platforms like Mac) bound to a texture avoiding an upload copy.

Microsoft and (maybe) Mozilla have a "hardware rendering" path via Direct2D, but Chrome and WebKit don't; they have compositors which can use the graphics hardware to perform compositing, but not rendering.


For what it's worth, WebKit on OS X uses hardware acceleration for both rendering and compositing by way of Core Animation.


Which technologies do benefit from GPU rendering? Aren't Quartz calls rasterized on CPU?


Core Animation layers have a mode in which Core Graphics calls targeting them are both processed asynchronously by another thread and rasterized via OpenGL.


I presume you mean the "drawsAsynchronously" property. I'm extremely curious, does it really push the rasterization on the GPU? I mean, do you have shaders written, that do all the stuff that CPU normally does? Bezier paths, clipping, stroking, filling?


Oh, nice! For some reason I thought that was only for canvas.


It was only used for canvas in the initial release before being deployed more widely.


The translateZ trick does not work in general. It works right now in Chrome (and probably Safari). It does not work in Firefox, and it may not work in Chrome in the future. (Because you are trying to trick the browser by gaming its heuristics, and those heuristics might change.)

That hardware rendering is smoother is also not true in general, just some cases, which the browser will try to guess for you.


You should be careful when adding translateZ. If you go beyond the GPU's memory it's going to be extremely slow and has a high chance of crashing the app.


How easy is that to do accidentally? 128MB is enough for 16 screenfuls at 1080p. Can you really trigger the creation of that many hardware-composited layers without intending to?
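The back-of-envelope math behind that estimate (assuming 4 bytes per RGBA pixel and a 128 MiB texture budget):

```javascript
// Bytes for one full-screen RGBA layer at 1080p.
const bytesPerScreen = 1920 * 1080 * 4; // 8,294,400 bytes (~7.9 MiB)

// How many such layers fit in a 128 MiB budget.
const budget = 128 * 1024 * 1024;
const screenfuls = budget / bytesPerScreen; // ~16.2
```

So a page only needs a dozen-odd screen-sized promoted layers (a long scrolling list where every row got translateZ, say) before the budget is gone.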


Model S has a fast pulsing green light around the charge port when it's charging.


Chromium is _huge_. If I just wanted to use the HTTP library (with tls and spdy) then how would I build just that, and cleanly integrate the build into my own project in a way that won't require constant revisiting every time I update my chromium sources?


I wouldn't go there. If I needed high-level HTTP access I'd go with libcurl [0]. If I wanted an HTTP parser I'd consider Joyent's http-parser library from Node.js (no dependencies at all) [1]. If I wanted a SPDY library, I'd consider spdylay (used by aria2, I think) [2].

All of these libraries are MIT licensed, well documented, and designed to be very focused libraries with very little feature-creep.

For TLS: PolarSSL has worked for me, but it's GPL and tends to break ABI quite a bit [3].

[0] http://curl.haxx.se/

[1] https://github.com/joyent/http-parser

[2] http://tatsuhiro-t.github.io/spdylay/

[3] https://polarssl.org/


You would only link with the net library (and its dependent libraries - like base and crypto)


And what about source and binary compatibility? What are the current guarantees? (his second question)

Edit: to be clear after you get your app up and running with the current version how much time has to be spent on ongoing maintenance down the road?


You build the libraries and link them into your application. If you maintain a cadence of updating your Chrome checkout every month, you will mostly be OK (though this can vary, of course).


So every month, if I update, I will be required to fix my application to work with the new versions of the library. Perhaps, say, 25% of my time will be spent _only_ on maintenance? And from what I recall, things are not deprecated for a while but rapidly changed with little to no warning. This was a major headache.


Why not just stick with a stable version for a while? Updating every month sounds like a huge headache.


Because the Chrome team makes huge, breaking changes to their code APIs with breathtaking rapidity. If you don't update that often, it'll just get harder and harder and harder to do.

Google can do this (in general) because they have a single source tree (although IIRC Chromium isn't actually part of that). Point is: when you only care about your own code, as the Chrome devs do, third parties are in a really bad place to try and stay current.


We're already half way there: http://kripken.github.io/clangor/demo.html (Clang compiled by Emscripten).


JS is getting out of hand!

