This is a really impressive demo. Most virtual globes (e.g. Google Earth) separate the terrain, surface image and building data. Normally, these are sent to the client separately and merged in the graphics card: the surface image is texture mapped onto the terrain, and then the building data is drawn separately on top. Special routines are used to draw trees (e.g. billboards).
What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model. They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.
This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.
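For anyone wondering what "chunked level of detail" actually means here, this is a minimal sketch of the selection step (my own illustration in TypeScript, not Nokia's or Google's actual code): each chunk of the globe carries a pre-baked mesh plus a set of finer child chunks, and the client only refines where the screen-space error would otherwise be too large.

    // Minimal chunked-LOD sketch (illustrative only, not Nokia's actual code).
    // Each chunk covers a patch of the globe and stores a pre-built mesh that
    // already bakes terrain, imagery, buildings and trees together.
    interface Chunk {
      centre: [number, number, number]; // chunk centre in world coordinates
      radius: number;                   // bounding-sphere radius
      geometricError: number;           // max error of this chunk's mesh, in metres
      children: Chunk[];                // finer chunks, empty at the deepest level
    }

    // Decide which chunks to draw for the current camera position.
    function selectChunks(chunk: Chunk, cameraPos: [number, number, number],
                          maxScreenError: number, out: Chunk[]): void {
      const dx = chunk.centre[0] - cameraPos[0];
      const dy = chunk.centre[1] - cameraPos[1];
      const dz = chunk.centre[2] - cameraPos[2];
      const distance = Math.max(Math.sqrt(dx * dx + dy * dy + dz * dz) - chunk.radius, 1);

      // Crude screen-space error estimate: geometric error shrinks with distance.
      const screenError = chunk.geometricError / distance;

      if (screenError <= maxScreenError || chunk.children.length === 0) {
        out.push(chunk);        // coarse enough (or a leaf): draw this chunk's baked mesh
      } else {
        for (const child of chunk.children) {
          selectChunks(child, cameraPos, maxScreenError, out);   // refine into finer chunks
        }
      }
    }

The point is just that each chunk's mesh is whatever the server baked - terrain, imagery, buildings and trees all in one - which is what keeps the client side simple.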
P.S. I'm currently working on open source WebGL globes like OpenWebGlobe (www.openwebglobe.org) and WebGLEarth (www.webglearth.org). If you're interested in this sort of thing, I recommend reading www.virtualglobebook.com .
What? It's not just merging the different datasets into models, it's a complete, accurate 3D model of the terrain from C3 Technologies (now owned by Apple, btw); they take thousands of low-altitude photos and do a Photosynth-esque reconstruction.
I think the future for WebGL is very bright, especially as it becomes more widely available on mobile, and a full screen mode with mouse capture gets added on the desktop (critical for games).
Mmmh, sorry, but I have to disagree with you. I can't let this stand without some corrections; non-expert readers might take it at face value.
In short, your post is mainly uninformative.
>> Normally, these are sent to the client separately and merged in the graphics card
It means nothing. Moreover, you can't really know what the batching and draw-call scheme is in Google Earth or in this Nokia 3D Maps.
>> Special routines are used to draw trees (e.g. billboards).
Mmmh. Ok, ok. 3D applications are complicated; there are special routines for a lot of things, btw...
>> What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model.
I agree with that, loosely speaking. It does not mean there is one mesh, and there isn't one; that can be confusing.
What you mean is that there is one skin.
>> They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.
You don't know what their chunking and LOD algorithm is, and I guess it might be very innovative. Or not. Well, you are talking about the general chunked-LOD approach, so you're right, but it can be very confusing. The actual LOD algorithm is probably very innovative.
>> This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.
This is just false. Please don't take it badly, but people may be misled by that, and it is as false as possible. There is actually not much point in explaining why, but it would look something like this: "there isn't necessarily more server-side preparation just because the mesh is not constructed on the fly; it is not necessarily faster on the client - you can't say that - it depends heavily on the draw-call scheme, the vertex complexity, the texture fetches, etc. etc.; and the data being static, you are right about that, but even that is highly doubtful - 3D programmers are smart. Take the example of moving BSPs in the Quake engine: who could have said that BSPs could move...".
Anyway, thank you very much for the links and references, they are pretty interesting.
edit: please note that I'm trying my best to make this post as constructive as possible.
>>> Normally, these are sent to the client separately and merged in the graphics card
> It means nothing. You moreover can't really know what the batching and draw calls scheme is in google earth nor in this nokia 3D maps.
Disclaimer: I haven't looked at this in detail; my statements about the grouping of single-LOD objects were based on watching how the image changed as data was progressively loaded.
Hehe, I should have been direct the first time. You are positive trolling. Your post is just spammy, uninformative technical bullshit.
>> these are sent to the client separately
In different socket packets? On different ports? In different 'gameobjects'? In different 'cache lines'? In different 'structs'/'classes', or in different meshes? Sorry, but in that case, meshes and classes are not 'send-able' over a network...
>> merged in the graphics card
Sorry, but graphics cards have no "merge" function at all - I mean, it makes no sense, again.
The client is the WebGL application running in the user's browser. It requests the appropriate data from Nokia's server for the area that the user is looking at. This is what I mean by "being sent to the client".
By "merged in the graphics card" I mean that the final image is composed in the graphics card.
Have a look at vterrain.org to get an idea of how these things work.
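To make "requests the appropriate data" a bit more concrete, something along these lines happens on the client. The URL scheme and tile format below are invented for illustration; Nokia's real endpoints aren't public.

    // Illustrative sketch of "the client requests data for the area the user
    // is looking at". The URL is made up; Nokia's actual tile service is not
    // known here.
    async function fetchTile(zoom: number, x: number, y: number): Promise<ArrayBuffer> {
      const url = `https://tiles.example/3d/${zoom}/${x}/${y}.bin`; // hypothetical
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Tile ${zoom}/${x}/${y} failed: ${response.status}`);
      }
      // Binary chunk data (mesh + baked texture) to be uploaded to WebGL buffers.
      return response.arrayBuffer();
    }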
You were going to mislead people with an aggregate of technical nonsense and truisms mixed with some good links.
I knew that your initial post was as empty as what you are saying now: the server sends data (and the right data, Bob!) to the client, then the graphics card composes the images (Dude!) and the CPU executes some special instructions.
Good-Boy, right?
But.
It's a fact: people penalized me. I understand, an (ultra)positive attitude is preferred over this kind of technical and scientific honesty and rigour. Even my bug report has been down-voted a lot =) Bugs are negative, gnaaaa :>
I'd suggest that you probably got downvoted for saying things like "This is just false ... There is actually no benefit to explain why ...". Most of your responses provided very little information other than to disagree with the post you replied to.
Technical corrections can add a lot of value, but that kind of attitude overshadows any useful value you might have added to the discussion, and it makes it look like you don't intend to add any value at all. Effectively, it made your entire post look like a verbose "Nuh-uh!".
The mapping team at Nokia is by far the best software development team in the organization (maybe with the exception of Trolltech/qt), and it's surviving the MSFT integration. It's (largely) the legacy of the successful acquisition of Gate5 in Berlin -- and somehow the team there was able to resist full assimilation into the Borg. I was talking to a Nokian today who commented that in Nokia, "Berlin is the new Helsinki".
Your friend is right. I freelanced there earlier this year to create a prototype of Ovi Maps on Windows Phone. I couldn't stay on to see the production version through (commitments in London), but what they shipped on the Lumia (Maps and Drive) is awesome.
[Edit] Earlier LAST year, this year's only a few days old ;-)
I wonder if MS will ever support WebGL? When they had 95% market share they could afford to not support new tech safe in the knowledge that the rest of the industry wouldn’t bother coding to it. Now they’re sub-50% in a lot of sectors and there are a lot of visually impressive tech demos coming out that they don’t support.
I think that Microsoft will find themselves forced to support WebGL.
Why?
Games.
WebGL is really impressive technology. The combination of a widely-deployed, widely-used language (JavaScript) with high-performance graphics (WebGL) makes for a surprisingly capable platform for cross-platform game development. Once WebGL arrives on mobile and a full screen mode gets added on the desktop, I think it will become very, very popular.
Of course, the graphics capabilities aren't as good as full-power OpenGL or Direct3D, but they're plenty good enough for a lot of applications.
This is why I think that Microsoft will be forced to support it: they'll have a hard time convincing the public to buy into Microsoft's platform if the public can't play their favourite games on it.
In the meantime, Chrome Frame provides WebGL inside Internet Explorer and you don't even need admin rights to install it :-)
They probably won't, which is why in an earlier thread I said that IEN will always be IE6. My assumption would be that they'd do something along the lines of webDirectX and we'd have to create a shim to give it a common interface.
Agreed. Given DirectX and Direct3D, MS is unlikely to support a derivative of OpenGL. That may eventually change if WebGL becomes widely adopted, forcing their hand, but current lack of support in IE9+ is a major inhibition to adoption. I doubt they would create a competing standard (such as WebDirectX).
Instead, MS is pushing performance improvements and hardware acceleration for Canvas and SVG. This is NVIDIA, but to give an example of the possibilities:
IMO, focusing on these isn't a bad thing, because these 2D technologies are substantially easier to use (e.g., SVG is declarative and integrates with CSS). Though, WebGL is obviously more expressive.
We may also see some WebGL-derived technologies make their way back into CSS + SVG. Similar to SVG filters for CSS:
(And the irony behind "WebGL is a derivative of OpenGL" is that on Windows (at least for Chrome and Firefox), WebGL is actually all based on Direct3D, via ANGLE:
I would love to see shaders on CSS, but GLSL is such an ugly layer to add on top of a fairly nice design. Notice how the SVG filters are so much simpler to specify than the GLSL-on-CSS proposal.
I would much rather that Adobe designed a more restricted, declarative little language which would easily compile to GLSL, than bolt an almost-turing-complete C variant on top of CSS which is hard to reason about, hard to guarantee safety (most of the webgl-crashes-video-drivers issues have still not been solved, aside from the hamfisted "we will block webgl if we see this set of drivers" solution), and hard to interoperate.
I disagree. I think their hand is going to be forced because the future is mobile, which IE does not control.
I wouldn't even be surprised if MS adopts Webkit.
For MS to leverage IE 'dominance' would be a losing game. They no longer have control over the web. I have no doubt that MS will attempt to control the web with W8 but I think they will lose and do so quickly.
Hm, I’m not sure whether it’s in Microsoft’s best interest to adopt Webkit. I think they will stay with their own rendering engine.
But – as is already obvious – dominance is indeed no longer something they have, will realistically achieve ever again or are able to leverage. If they want to have any say at all when it comes to the web’s future they have to play the standards game. They have to cooperate.
Microsoft is keenly aware of that (though maybe not entirely comfortable), as is evident from the direction they took with IE.
The Microsoft Trident engine is getting more powerful with each iteration. Knowing that it has been around since IE4 in 1997 and seeing where it is today show how extensible it is.
The real problem lies not inside the engine but inside Microsoft themselves. Specifically, within the .NET group. I know everyone on Hacker News loves Ruby, so I'll use that as an example. Microsoft wanted the dynamic language stylings that Ruby offered, so they spent 3 years developing IronRuby that ran on the .NET CLR. Then they suddenly dropped it without warning. Why? Because they had extracted everything they wanted from it. Keeping the technology up to date would not give them anything more than what they already had. Microsoft benefited from it, and when they no longer did, they dropped it. Everything that happens inside Microsoft's core is to strengthen their sellers: Windows and Office. If Windows or Office needs a new technology, they will take it, use it, and .NET-ify it until it becomes proprietary.
If they were to swap Trident for Webkit, it would be the same thing: IE11 would be built on Webkit for a few years, their development staff would learn from it, and the next release would see Trident 7 (IE12) back in form. Microsoft takes with only nominal giving because that's great for business. They can learn from outside technologies, then use that knowledge to lock people in tighter with better tech.
It's been a while since we've seen Microsoft in true form, pioneering and leveraging their weight to shape the market for their benefit. What we have right now is Microsoft in damage control mode. Moving to Webkit would be more of that, strengthening Trident by sucking the essence out of Webkit or Gecko; directing the flow of HTML5 (and pushing for MSHTML6 afterwards) would be the return of the powerhouse. It'll be interesting to see where things go, but even as someone who sees Microsoft as the best tool for the job in certain situations, I wouldn't place any bets on Microsoft being the dominant force on the web... ever. Luckily (for them), desktops aren't going anywhere anytime soon.
edit - I should add that, to your point (and mine), Microsoft already does use Webkit where it is advantageous for them: Mac OSX. Instead of continuing development on IE for OSX, they switched Office to Webkit for the Mac. I'd have to believe Trident would have suffered without that move (circa Office 2011).
Can anyone fill us in on how they're collecting such accurate 3D detail for all these buildings? I mean are they flying airplanes with 360 degree cameras over the major cities at low altitude, for instance?
The LIDAR data is also used in the Nokia City Scene app http://www.youtube.com/watch?v=_MxnUAVhdnU Worth noticing is that you can click on every building i.e. 3D information is combined with regular streetview data.
It's based on C3 Technologies' product, which was unfortunately acquired by Apple, so don't count on Nokia's contract being extended. It uses a custom aerial camera system and photogrammetry toolchain to create 3D data with minimal human intervention.
Australian company Nearmap started with exactly the same goals and have a similar product (custom aerial photography system with automated processing), but they don't seem to have figured the 3D photogrammetry part out yet.
It's based on a virtual cityscape, which is painted with images taken much like Street View's. The 3D models are built with data from Navteq's Journey View system, using lidar (http://en.wikipedia.org/wiki/LIDAR). Photos are then stitched and rendered onto the 3D models.
Thanks for the details! This technique works surprisingly well. (One of the artifacts I was able to find is at the base of the Bay Bridge in SF's SOMA -- there's a vertical wall that has the street surface projected up along it rather than an actual hole underneath the bridge. That does seem like a challenging case for airborne lidar + stitching.)
Another impressive spot is the top of the Stratosphere Tower in Las Vegas -- it manages to capture the spike at the top fairly well. It'd be interesting to know how much hand-editing they did for sites of interest like that, and how they represent hand-edits in a way that can be re-applied when new lidar datasets come in.
Fantastic. Is there a way to create a link to a given viewpoint location/direction/zoomlevel? That would make it possible to share views of the world, always nice.
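A common way other map sites handle this (no idea whether Nokia's demo exposes it) is to encode the camera state in the URL fragment, roughly like this; the names and format below are mine, purely illustrative.

    // Hypothetical shareable-view scheme, not tied to Nokia's site.
    interface ViewState {
      lat: number;      // degrees
      lon: number;      // degrees
      heading: number;  // degrees
      zoom: number;
    }

    // Serialise the camera state into the URL fragment so the view can be shared.
    function viewToHash(v: ViewState): string {
      return `#${v.lat.toFixed(5)},${v.lon.toFixed(5)},${v.heading.toFixed(1)},${v.zoom}`;
    }

    // Restore a view from a shared link; returns null if the hash is malformed.
    function hashToView(hash: string): ViewState | null {
      const parts = hash.replace(/^#/, "").split(",").map(Number);
      if (parts.length !== 4 || parts.some(Number.isNaN)) return null;
      const [lat, lon, heading, zoom] = parts;
      return { lat, lon, heading, zoom };
    }

    // Usage: write the hash when the camera stops moving, read it once on page load.
    // window.location.hash = viewToHash(currentView);
    // const restored = hashToView(window.location.hash);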
When zoomed into an area for which there is 3D building coverage, it feels almost game-like. And I say that from a vantage point of some relevance. :)
I wonder how photography will be affected by this sort of technology in the not so distant future, as the images and point cloud data increase in definition. For instance, instead of waiting for the perfect weather conditions for the desired picture, the "photographer" could simply manipulate lighting and such, then render the scene in high definition.
No problem. I believe mine is a late 2010 model. It has the 1.4 GHz Core 2 Duo with 4 GB of RAM. It's 11'' rather than 13''. Performance is generally excellent and it's probably the best computer I've ever owned in terms of outright utility simply because it's approximately the size of a Kindle DX and I can take it anywhere without a thought. The fact that it's usable and responsive essentially instantly after opening the lid is also huge.
The only real problem comes with pushing pixels around, as the other reply mentioned. I can watch QT video fine, but if I go to Vimeo or YouTube, I can't really get a good playback out of Flash. Generally Flash is bad on this machine. I'm a long time Mac user so I'm not too unused to this, but it seems a little worse than on an iMac or something like that.
I think if you use an IDE for dev work you should check it out at an Apple Store or something to make sure the resolution works for you. That's really the only thing I'd consider. My 11'' is really just too small to me, having been spoiled by dual 30''s on my desktop. If you use a text editor or vim/emacs it'll probably be fine, but IntelliJ or Eclipse or whatever just have too many windows to manage in the space, in my opinion.
My choice is between MBP 13" and Air 13", and looking at the online store, Air actually has the better resolution (~1440x900 vs. ~1280x800). I was surprised at this, but I have used Xcode on 1440x900 on a 15" MBP and I was ok with it.
I have the 4GB 11" Air, and it's quickly becoming my favorite dev machine, even when I'm next to a monster dual quad core Xeon box. The Intel card is fairly slow at pushing pixels around (so if you develop graphics you'll notice slow texture access, etc.), but is mostly capable.
The simple deciding factor is a SSD in the laptop. It's hard to overstate how much of a performance difference these make in compiling, localhost-served webpages, etc.
It looks as good as or better than Google Earth, particularly the trees, but the (texture) caching seems to be limited, which could be mitigated by using local storage.
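For what it's worth, a rough sketch of that local-storage idea; localStorage only holds a few MB of strings, so a real client would more likely use IndexedDB, and the cache key and URL below are made up.

    // Fetch a tile texture, caching it in localStorage as a data URL.
    // Illustrative only: key/URL are hypothetical, and quota is small.
    async function getCachedTexture(key: string, url: string): Promise<string> {
      const cached = window.localStorage.getItem(key);
      if (cached !== null) {
        return cached;                       // cache hit: no network request needed
      }

      const response = await fetch(url);
      const blob = await response.blob();

      // Convert the image blob to a data URL so it can be stored as a string.
      const dataUrl = await new Promise<string>((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result as string);
        reader.onerror = () => reject(reader.error);
        reader.readAsDataURL(blob);
      });

      try {
        window.localStorage.setItem(key, dataUrl);
      } catch {
        // Quota exceeded: skip caching rather than failing the render.
      }
      return dataUrl;
    }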
Well, what Nokia has done is better than the way Google has it set up. At least for advanced machines that can run it. But I'm sure Google will catch up when it's a reliable standard.
In my area (Canary Wharf in London), with Google I can see the Crossrail works going on, but there's nothing happening on the Nokia maps, so Google is more recent here.
I was convinced the CIA was watching me, because I've been checking Google Maps for over a year now and the same white van's been parked outside my house for all that time. I just saw that on Nokia maps it's left. Thank goodness!
First time I saw Google Street View, I was sitting on my balcony with my laptop. I looked at the Google image for my street, and it was me sitting on the balcony with my laptop. I had to do a double-take before I realized the picture was taken a few weeks prior.
Amazing how well the software renders thousands of objects. On close inspection, I find the post-apocalyptic aesthetic of the rendering geometry very appealing. http://i.imgur.com/dNYer.jpg
It really kind of irks me when people complain about standards in a web demo. HTML5 is not standardized. These demos are no more than POCs written to show what the technology is capable of and where the organization sees themselves going forward.
When you see run-the-business type web apps being written in non standard technology, then you can complain. When you see a neat toy being written in non standard technology, take it for what it is.
Is it a demo of proprietary technology that happens to be baked into a couple of specific web browsers and video card drivers, or is it a demo of what's possible using a (new, emerging) set of "web" standards?
Because "WebGL" certainly sounds like the name of a standard to me, and very few people expect a "web" demo to care about which brand of video card they have installed.
And since you brought it up, "HTML5 is not standardized" is a little disingenuous. Regardless of its ratification state, companies claim and market HTML5-ness precisely to signal their commitment to open standards as opposed to proprietary technology. Or maybe it's come to mean "anything that isn't Flash." In any case, the implicit promise there is that users will enjoy app functionality with minimal-to-zero worries about client-side configuration or component choice.
Well, maps.nokia.com still goes to their Javascript implementation so I'd say the WebGL form isn't the run-the-business site as of yet.
WebGL has a 1.0 specification, but it is still not a standard as defined by the W3C/Web Hypertext Application Technology Working Group (WHATWG). A specification is one of the steps towards reaching a standard, so WebGL and HTML5 are well on their way but not there yet. A standard-sounding name doesn't make something a standard (much like "4G"-advertised mobile service that doesn't actually reach the 100 Mbps/1 Gbps the standard dictates).
At this point (much like with the aforementioned 4G) HTML5 means about as much as "Web 2.0" does. It's a set of competing implementations with many cross-platform features that are almost guaranteed to make it into the final standard, and a few vendor-specific implementations that are hoping to make it (if they prove their worth). Your assertion of 'Flash-like content that isn't implemented in Flash' (to paraphrase) is quite accurate in current implementations.
To sum it up, the core HTML5 that companies actually market towards is all but standard (offline storage, AJAX-like content control, Canvas, etc). The really cool things that make the front page of Hacker News and Reddit and require you to be running the beta Chrome or nightly Firefox are generally things that the vendor is hoping will make the standard. Marketing is a powerful thing, but not always accurate.
That's all reasonable enough, but does it really excuse this UX?
1. User visits website with late-model hardware/OS.
2. Website says "this site requires browser foo."
3. User installs browser foo and reloads website.
4. Website says "error - check system configuration."
A technology demo with highly-specific client requirements, especially on the web, especially when the demo plays the look-mom-no-plugins card, should try to enumerate the actual requirements. In this case, the requirement that after installing latest Chrome, the fool at the keyboard navigate to chrome:flags and hit the big "turn WebGL on" toggle.
I understand why these types of doc omissions happen, but it's really a pretty serious bug. Every user that hits 1-4 above is a user who is actively dissuaded from caring about the technology that the rest of the site was designed (at non-trivial expense) to promote.
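For example, the page could at least tell "no WebGL at all" apart from "WebGL blocked or disabled for this GPU" before showing a generic configuration error. A quick sketch of that check (not what Nokia's page actually does):

    // Distinguish "no WebGL API at all" from "WebGL present but blocked/disabled".
    function describeWebGLSupport(): string {
      if (!("WebGLRenderingContext" in window)) {
        return "This browser has no WebGL support; try a current Chrome or Firefox.";
      }

      const canvas = document.createElement("canvas");
      const gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
      if (!gl) {
        // The API exists but context creation failed: typically a blacklisted
        // GPU/driver, or WebGL switched off behind a flag (e.g. chrome:flags).
        return "WebGL is present but disabled or blocked for this GPU/driver; " +
               "check hardware acceleration settings or the browser's flags page.";
      }

      return "WebGL is available.";
    }

Even a one-line hint like that beats "error - check system configuration".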
Chrome detects whether your graphics card is on a whitelist of cards known to work with WebGL. You can override (force on) this in chrome:flags, I believe.
Are you in Australia? Just kidding, since I don't have the latest version of Firefox, despite clicking the check for updates box, I can not check this out. I even tried downloading chrome but it failed. Naturally google needs javascript to let me download a file, and even when it was enabled it did not download.