> Android Studio is unaffected because deployments performed with adb, which Android Studio uses behind the scenes to push builds to devices, are unaffected.
So, simply sending a download link for an APK to a friend is not enough anymore - I now have to teach them how to install and use adb.
EDIT
> we are also introducing a free developer account type that will allow teachers, students, and hobbyists to distribute apps to a limited number of devices without needing to provide a government ID.
Depending on how they implement that, this would at least partially improve the situation. Sounds like no ID is required, but I assume the whole ordeal with registering each app is still mandatory.
From what I understand, the APK route still works fine - you just have to be willing to attach your identity to it via their verification + signing process.
I work for IPinfo. I did not know that our site was blocked by Firefox Enhanced Tracking Protection. Not sure what I can do here. The project takes the IP addresses you have provided from your traceroute and gets the information related to them from our website using a frontend HTTP call.
Enhanced Tracking Protection is using the Disconnect domain list. ipinfo.io is listed in services-relay.json and mdl-services-relay.info, which I believe makes the Disconnect.me product route requests to these domains through their proxies to prevent IP fingerprinting.
It should be noted that IPinfo doesn't get blocked with tracking protection set to "standard". Users have to set tracking protection to "strict" to run into this issue, and when they do, they get warned that this setting may break sites.
I don't think Mozilla/Disconnect will make an exception, because privacy infringement is a potential risk with a service like yours if used by malicious websites. I wouldn't put too much effort into this: the people affected are a fraction of a fraction of the general web audience, and they've already seen a warning that websites may break because of their choice.
You could put it under a "PostgreSQL OR Apache-2.0 at your option" dual-license, so all contributors give you their code under both licenses, instead of needing to re-license later. The Rust project does this (MIT OR Apache-2.0) to get the patents clause from Apache while retaining compatibility with MIT and GPL.
If you do this, you need to have a very explicit policy for contributors to say they are contributing under both licenses, though this is something you need to have anyway if you are licensing under Apache 2.0 (a contributor could theoretically claim retroactively that their contributions were all MIT licensed and that they never gave you or any of your users a patent grant). (Most Rust projects do this.)
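In the Rust ecosystem this convention usually shows up as an SPDX "OR" expression in the crate manifest, paired with a CONTRIBUTING.md statement that submissions are offered under both licenses. A sketch for the PostgreSQL/Apache case (crate name and version are placeholders):

```toml
# Cargo.toml (illustrative): the SPDX "OR" expression means
# "either license, at the recipient's option"
[package]
name = "example-crate"   # placeholder name
version = "0.1.0"
license = "PostgreSQL OR Apache-2.0"
```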
For other patent-shield licenses such a combination also removes most of the protections of the patent shield (a patent troll user can use the software under MIT and then sue for patent infringement). However, the Apache 2.0 patent shield is comparatively weak (when compared to GPLv3 and MPLv2) because it only revokes the patent license rather than the entire license and so it actually acts like a permissive license even after you initiate patent litigation. This makes the above problem even worse -- if you don't actually have any patents in the software then a patent troll can contribute code under MIT and then sue all of your users without losing access to the software even under just Apache 2.0 (I don't know if this has ever happened, but it seems like a possibility).
IMHO, most people really should just use MPLv2 if they want GPLv2 compatibility and patent grants. MPLv2 even includes a "you accept that your contributions to this project are under MPLv2" clause, avoiding the first problem entirely. It would be nice if there were an Apache 3.0 that had a stronger patent shield but still remained a permissive license (MPLv2 is a weak file-based copyleft), but I'm more of a copyleft guy, so whatever.
> However, the Apache 2.0 patent shield is comparatively weak (when compared to GPLv3 and MPLv2) because it only revokes the patent license rather than the entire license and so it actually acts like a permissive license even after you initiate patent litigation.
Isn't the idea that you could then sue the suer for infringing your patent?
Sure, that is the original point of the article, after all. I was speaking about the problem in general (I suspect most Rust projects--if not most projects in general--with this setup do not have patents).
It also requires actively pursuing a patent case, which may result in the patent being rendered invalid, while a termination clause for the whole license just requires a far more clear-cut copyright infringement claim (possibly achievable purely through the DMCA system, out of court). But I'm not a lawyer; maybe counter-suits are common in such situations, and either approach works just as well in practice.
Great, but Unlicense doesn't grant patent rights so you have the exact same problem as MIT (actually it's even worse because Unlicense explicitly states that it is only concerned with copyrights multiple times).
That's a bad analogy. No one is complaining about Google providing Android security updates.
This is like a car manufacturer preventing the installation of all unapproved aftermarket accessories by claiming they're protecting you from a stalker installing a tracker on your car.
I don’t actually think it’s that bad. If all of a sudden we started hearing an awful lot about Android phones having viruses, to the point where almost everyone had a friend who got a virus on their Android, I think the market would actually shift. We’d probably see more people moving to iPhones.
Nixpkgs pulls source code from places like pypi and crates.io, so verifying the integrity of those packages does help the Nix ecosystem along with everyone else.
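Concretely, nixpkgs pins upstream artifacts by content hash, so a tampered pypi or crates.io tarball fails the build instead of being silently used. An illustrative sketch (package name and version are placeholders; `lib.fakeHash` is the usual trick for obtaining the real hash, which the failing build then prints):

```nix
# Illustrative only: fetchPypi pins the sdist by content hash.
fetchPypi {
  pname = "example-package";   # hypothetical package
  version = "1.0.0";
  hash = lib.fakeHash;         # replace with the real sha256-... hash
}
```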
Compute shaders, which can draw points faster than the native rendering pipeline. Although I have to admit that WebGPU implements things so poorly and restrictively that this benefit ends up being fairly small. Storage buffers, which come along with compute shaders, are still fantastic from a dev-convenience point of view, since they allow implementing vertex pulling, which is much nicer to work with than vertex buffers.
For gaussian splatting, WebGPU is great since it allows implementing sorting via compute shaders. WebGL-based implementations sort on the CPU, which means "correct" front-to-back blending lags behind for a few frames.
But yeah, when you put it like that, it would have been much better if they had simply added compute shaders to WebGL, because other than that there really is no point in WebGPU.
Access to slightly more recent GPU features (e.g. WebGL2 is stuck on a feature set that was mainstream ca. 2008, while WebGPU is on a feature set that was mainstream ca. 2015).
GL programming only feels 'natural' if you've been following GL development closely since the late 1990s and have learned to accept all the design compromises made for the sake of backward compatibility. If you come from other 3D APIs and never touched GL before, it's one "WTF were they thinking" after another (just look at VAOs as an example of a really poorly designed GL feature).
While I would have designed a few things differently in WebGPU (especially around the binding model), it's still a much better API than WebGL2 from every angle.
The limited feature set of WebGPU is mostly down to Vulkan 1.0 drivers on Android devices, I guess, but there's no realistic way to design a web 3D API and ignore shitty Android phones, unfortunately.
It's not about feeling natural - I fully agree that OpenGL is a terrible and outdated API. It's about the completely overengineered and pointless complexity in Vulkan-like APIs and WebGPU. Render Passes are entirely pointless complexity that should not exist. It's even optional in Vulkan nowadays, but still mandatory in WebGPU. Similarly static binding groups are entirely pointless, now I've got to cache thousands of vertex and storage buffers. In Vulkan you can nowadays modify those, but not in WebGPU. Wish I could batch those buffers into a single one so I don't need to create thousands of bind groups, but that's also made needlessly cumbersome in WebGPU due to the requirement to use staging buffers. And since buffer sizes are fairly limited, I can't just create one that fits all, so I have to create multiple buffers anyway - might as well have a separate buffer for all nodes. Virtual/sparse buffers would be helpful in single-buffer designs by growing those as much as needed, but of course they also don't exist in WebGPU.
The one thing that WebGPU is doing better is that it does implicit syncing by default. The problem is, it provides no options for explicit syncing.
I mainly software-rasterize everything in CUDA nowadays, which makes the complexity of graphics APIs appear insane. CUDA allows you to get things done simply and easily, but it still has all the functionality to make things fast and powerful. The important part is that the latter is optional, so you can get things done quickly and still make them fast.
In CUDA, allocating a buffer and filling it with data is a simple cuMemAlloc and cuMemcpy. When calling a shader/kernel, I don't need bindings and descriptors, I simply pass a pointer to the data. Why would I need that anyway, the shader/kernel knows all about the data, the host doesn't need to know.
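A minimal sketch of that workflow, using the runtime API (cudaMalloc/cudaMemcpy are the runtime-API counterparts of the driver-API cuMemAlloc/cuMemcpy named above); requires a CUDA toolchain and GPU to run:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The kernel receives a raw pointer -- no bind groups, descriptors,
// or pipeline state objects involved.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = float(i);

    // Allocate and upload: one call each.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch: the device pointer is passed directly as an argument.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[3] = %f\n", host[3]);  // 3.0 scaled by 2 -> 6.0
    return 0;
}
```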
> Render Passes are entirely pointless complexity that should not exist. It's even optional in Vulkan nowadays.
AFAIK Vulkan only eliminated pre-baked render pass objects (which were indeed pointless), and now simply copies Metal's design of transient render passes, e.g. there are still 'render pass boundaries' between vkCmdBeginRendering() and vkCmdEndRendering(), and the VkRenderingInfo struct that's passed into the vkCmdBeginRendering() function (https://registry.khronos.org/vulkan/specs/latest/man/html/Vk...) is equivalent to Metal's MTLRenderPassDescriptor (https://developer.apple.com/documentation/metal/mtlrenderpas...).
E.g. even modern Vulkan still has render passes, they just didn't want to call those new functions 'Begin/EndRenderPass' for some reason ;) AFAIK the idea of render pass boundaries is quite essential for tiler GPUs.
WebGPU pretty much tries to copy Metal's render pass approach as much as possible (e.g. it doesn't have pre-baked pass objects like Vulkan 1.0).
> The one thing that WebGPU is doing better is that it does implicit syncing by default.
AFAIK also mostly thanks to the 'transient render pass model'.
> Why would I need that anyway, the shader/kernel knows all about the data, the host doesn't need to know.
Because old GPUs are a thing and those usually don't have such a flexible hardware design to make rasterizing (or even vertex pulling) in compute shaders performant enough to compete with the traditional render pipeline.
> Similarly static binding groups are entirely pointless
I agree, but AFAIK Vulkan's 1.0 descriptor model is mostly to blame for the inflexible BindGroups design.
> but that's also made needlessly cumbersome in WebGPU due to the requirement to use staging buffers
Most modern 3D APIs also switched to staging buffers though, and I guess there's not much choice if you don't have unified memory.
> AFAIK the idea of render pass boundaries is quite essential for tiler GPUs.
I've been told by a driver dev of a tiler GPU that they are, in fact, not essential. They pick that info up by themselves by analyzing the command buffer.
> Most modern 3D APIs also switched to staging buffers though, and I guess there's not much choice if you don't have unified memory.
Well, I wouldn't know since I switched to using CUDA as a graphics API. It's mostly nonsense-free, and faster than the hardware pipeline for points, and about as fast for splats. Seeing how Nanite also software-rasterizes as a performance improvement, CUDA may even be great for triangles. I've only implemented a rudimentary triangle rasterizer so far, but it can draw 10 million small textured triangles per millisecond. Still working on the larger ones, but that's low priority since I focus on point clouds.
In any case, I won't touch graphics APIs anymore until they make a clean break to remove the legacy nonsense. Allocating buffers should be a single line, providing data to shaders should be as simple as passing pointers, etc.