The problem is those self-same authoritarian strongmen are very successfully using sockpuppeting to change national discourses in ways that benefit them and are detrimental to the targeted countries. Hybrid war is real and has been ongoing for more than a decade. LLMs make it way more cost effective.
Being able to limit the influence of external bad actors is the main goal of ID verification. Age verification is a useful side effect that makes it easier to sell to the general public.
Big Tech has had at least a decade to fix this, did nothing of note, and is all out of ideas. Privacy advocates had the same time to figure out a "least bad" technical solution, but got so obsessed with railing against it happening at all, that nothing got any traction.
So governments are here to legislate, for better or worse. They know it's a trade-off between being undermined by external forces vs. the systems being abused by future governments, but their take is that a future authoritarian government will end up implementing something similar anyway.
> Being able to limit the influence of external bad actors is the main goal of ID verification. Age verification is a useful side effect that makes it easier to sell to the general public.
How? People already sell their accounts to spammers. Why would that change?
Depending on the implementation, I could see that having a rate-limiting effect. There are only finitely many IDs, so scaling up sockpuppeting would quickly saturate them, whereas today it's trivial to spin up a new anonymous account. For example, I think the EU ID system has an upcoming way to create pseudonymous identifiers that identify a user per website.
This presents the problem of governments being able to gatekeep speech, which I'm quite uncomfortable with, but maybe there's some safeguard within the eIDAS proposal that addresses this?
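To make the per-site identifier idea concrete: a minimal sketch of how such pseudonyms could be derived, assuming a trusted issuer holding a master secret (the scheme and names here are illustrative, not the actual eIDAS design):

```python
import hmac
import hashlib

def site_pseudonym(master_secret: bytes, user_id: str, site: str) -> str:
    """Derive a stable per-site pseudonym: the same user gets the same
    identifier on every visit to one site, but identifiers for different
    sites cannot be linked without the issuer's secret."""
    msg = f"{user_id}|{site}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).hexdigest()

secret = b"issuer-master-secret"  # hypothetical issuer key
a = site_pseudonym(secret, "alice", "example.org")
b = site_pseudonym(secret, "alice", "example.org")
c = site_pseudonym(secret, "alice", "other.net")
assert a == b   # stable on the same site
assert a != c   # unlinkable across sites
```

Because each site sees only its own pseudonym, banning an abusive identity rate-limits the person behind it on that site without revealing who they are, which is where the finitely-many-IDs property comes from.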
The internet is for the free exchange of ideas! Why would we want to limit it because some random gov somewhere is writing comments? Allow your citizens to think!
> Being able to limit the influence of external bad actors is the main goal of ID verification.
How does automatically determining your age serve the goal of ID verification? It seems like most sites are choosing this as the first option. If the point was to link your ID, why wouldn't they ask everyone to provide it?
> "Democracy" is when "bad actors" (as defined by the establishment) are shut out of all online discourse.
The point of ID laws is not to stop "bots" or "sockpuppets", it's to enable governments to shut down the speech of their political adversaries by painting them as dangerous. That is not democracy, that is authoritarianism, even if you absolutely hate the people that are being shut up.
Western countries are not in the midst of polarized political crises because of "external bad actors" or "sockpuppets". They're in these crises because of fundamental contradictions in values and desired policies between different segments of the populace.
The Europeans are currently full steam ahead in attempting to "fix" the situation by criminalizing dissent, which will, in the end, only exacerbate the political crisis by making the democratic system illegitimate.
The Internet is already all but dead. We could fix it (as I propose). Or we let it die.
I'm fine with either outcome.
> criminalizing dissent
When has that not been true? Serious question.
Socrates was compelled to commit suicide. Jesus was nailed to a cross. Journalists and activists are routinely murdered. How many political prisoners are there right now?
Probably the lack of pictures. Maybe the moderation. Maybe the slightly niche audience.
It could die if it becomes profitable to spammers. Or maybe it's dead now and one or both of us are LLMs.
But as long as the content quality meets my personal utility threshold, it makes sense for me to visit it, regardless of whether it is a victim of DIT. Ultimately it's probably up to webmasters to judge whether the traffic on their site is profitable, or of high enough quality to justify the operating costs of a hobby.
No ads. No algorithmic hate machine. Active moderation.
Two other fine examples of thriving online communities are metafilter and ravelry.
I'm sure there's many more on the web. I just don't get out much.
And many, many not on the web, using Discord, Telegram, old-school BBSes, etc. But, as dead Internet theory notes, they're not publicly visible and therefore not discoverable, since they're not being indexed.
Do you truly believe that ID "verification" will do anything in a world where IDs are leaked by the tens of thousands to the millions?
You are shifting the onus onto the platforms, when the problem is pretty simple: with a few exceptions, we've failed as a species to learn how to think.
Also, do you think the TLAs don't know who the bots most likely are, given all the surveillance data they're gathering? That the NSA doesn't have detailed telemetry of these influence ops?
So let me ask you: what have they done about it? And why not?
> Being able to limit the influence of external bad actors is the main goal of ID verification.
Then they should say so. Elected officials lying to and misleading the public when their real intentions differ is almost criminal. It's not a behavior anyone should ever support. I will not vote for people who do that.
The stated reason is also true in most cases. Imgur was caught harvesting and selling children’s data for advertising purposes, and TikTok and others are known to do the same. There’s only so long you can avoid fixing a problem before states start to step in.
I don't think "we technically don't lie, we're only actively deceiving you" is a good defense strategy. The politicians you defend need to decide on a narrative for the justification, meandering between different ones is not increasing credibility and a problem on its own.
The point is that if you convert away from COBOL to a more modern language, you can also move away from Z-series hardware to commodity x86 and ARM servers. That's why this announcement affected IBM's share price.
IEEE 754-2008 defines decimal floating point arithmetic that is compatible with COBOL and is usually implemented using the Intel Decimal Floating Point Math Library on commodity hardware.
For a typical core banking ledger application, the performance cost of a software implementation of DFP (vs. having DFP hardware instructions) is pretty low, and greatly outweighed by the benefits of being able to use commodity hardware and more maintainable languages.
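The reason decimal arithmetic matters for ledgers is that binary floats can't represent most decimal fractions exactly. This isn't the Intel DFP library itself, but Python's standard `decimal` module illustrates the same IEEE 754-2008 decimal semantics on commodity hardware:

```python
from decimal import Decimal

# Binary floats accumulate representation error on cent amounts:
assert 0.1 + 0.2 != 0.3  # actually 0.30000000000000004

# Decimal arithmetic is exact for the same values:
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")

# A ledger-style sum of many small postings stays exact,
# which is what COBOL's fixed/decimal types guarantee:
total = sum(Decimal("0.01") for _ in range(1000))
assert total == Decimal("10.00")
```

A software decimal implementation like this runs each operation in tens of nanoseconds on modern x86/ARM, which is the "pretty low cost" trade-off described above.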
Are there ARM or Intel servers capable of the reliability and availability of the Z-Series in Parallel Sysplex operation where processing can continue uninterrupted even if one of a pair of data centers becomes unavailable?
If a change of platform is the real objective, why not compile the COBOL for the ARM or Intel server?
It doesn't appear to be equivalent. If one site in a stretched cluster suddenly becomes unavailable, the same batch processing application would not already be running on the alternative site. The application would have to be restarted after the VM has been moved.
Not really, because only the OS core is swapped in this way. Apps and data live in their own partitions/subvolumes, which are mutable and shared between OS versions.
The OS core is deployed as a single unit and is a few GB in size, pretty small when internal storage is into the hundreds of GB.
Loads of GPUs with Vulkan support use TBDR. The Adreno GPU in the Steam Frame's Snapdragon SoC, for one.
There is also a Vulkan driver for the M1/M2 GPU already, used in Asahi Linux. There's nothing special about Apple's GPU that makes writing a Vulkan driver for it especially hard. Apple chooses to provide a Metal driver only for its own reasons, but they're not really technical.
No. For best performance, you have to batch your draw calls and memory-access patterns with TBDR in mind. Dropping a Steam PC game's (indie, AA/AAA) render pipeline, optimized specifically for Nvidia/AMD/Intel, onto a TBDR GPU is going to give poor performance. That's the context of this discussion. Round pegs DO fit into square holes, you just have to make sure the hole is bigger than would normally be necessary. ;)
The Steam Frame is more for streaming PCVR than for running existing PCVR games natively.
I already run stuff that was very much not made with TBDR in mind, on TBDR GPU architectures, and the performance is perfectly fine.
For sure, you can squeeze a few percentage points more out if you optimize for TBDR, and there are some edge cases where it's possible to make TBDR architectures behave pathologically, but it's not that big a deal in the real world.
I also disagree that the Steam Frame is primarily for streaming. If it were, why put such a powerful SoC in it, or use it as the prototype device for x86 emulation with Fex?
The Adreno 750 is a 3 TFlops GPU that _should be_ substantially faster than a PS4 or a Steam Deck. It'll play plenty of low-end PCVR games pretty well on its own, if Fex's x86 emulation is performant, which it is.
Like the Meta Quest 2, it's a crossover device that a lot of people will just use standalone.
It's not just Chrome, it's everything, though apps that have a large number of dependencies (including Chrome and the myriad Electron apps most of us use these days) are for sure more noticeable.
My M4 MacBook Pro loads a wide range of apps - including many that contain no Chromium code at all - noticeably more slowly than exactly the same apps on a 4-year-old Ryzen laptop running Linux, despite being approximately twice as fast at running single-threaded code, having a faster SSD, and maybe 5x the memory bandwidth.
Once they're loaded they're fine, so it's not a big deal for the day to day, but if you swap between systems regularly it does give macOS the impression of being slow and lumbering.
Disabling Gatekeeper helps but even then it's still slower. Is it APFS, the macOS I/O system, the dynamic linker, the virtual memory system, or something else? I dunno. One of these days it'll bother me enough to run some tests.
That's the story the proponents of the AI bubble would have you believe, either because they are sucking in all available funding for their own enrichment, or because they've been huffing their own hype gas for so long that they have no brain cells of their own left.
It is, however, complete nonsense, and the next few years of failed promises on AGI will eventually bring people to their senses, if a market crash and sustained economic depression doesn't do that first. It would be funny if it wasn't going to cause suffering for millions of people, whether we succeed at AGI or not.
I _like_ AI, I find LLMs and many other aspects of it useful, and I am optimistic about the long-term prospects of AI. But the rush to get to AGI is completely out of control at this point, and the fallout when the bubble pops will set AI, and our societies, back a long time.
Having bought a few Matter devices now, I have discovered that, in practice, Matter is just as full of vendor extensions as ZigBee, and the quirks ecosystem that allows for interoperability despite vendor extensions is far less mature than with ZigBee.
Maybe this will get better with time, but we're half a decade into the Matter era and the end-user experience is _worse_ than with ZigBee. In that sense, Matter has failed.
It's really just a performance tradeoff, and where your acceptable performance level is.
Ollama, for example, will let you run any available model on just about any hardware. But using the CPU alone is _much_ slower than running it on any reasonable GPU, and obviously CPU performance varies massively too.
You can even run models that are bigger than available RAM too, but performance will be terrible.
The ideal case is to have a fast GPU and run a model that fits entirely within the GPU's memory. In these cases you might measure the model's processing speed in tens of tokens per second.
As conditions become less ideal, the processing speed decreases. On a CPU alone, with a model that fits in RAM, you'd max out in the low single digits of tokens per second, and on lower-performance hardware you start talking about seconds per token instead. If the model does not fit in RAM, the measurement becomes minutes per token.
For most people, their minimum acceptable performance level is in the double digit tokens per second range, which is why people optimize for that with high-end GPUs with as much memory as possible, and choose models that fit inside the GPU's RAM. But in theory you can run large models on a potato, if you're prepared to wait until next week for an answer.
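A rough rule of thumb behind these numbers: autoregressive decoding streams the entire model's weights through memory once per generated token, so memory bandwidth divided by model size gives an upper bound on tokens per second. A quick back-of-envelope sketch (the bandwidth and model-size figures below are illustrative assumptions, not benchmarks):

```python
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound estimate: each decoded token reads every weight once,
    so throughput can't exceed memory bandwidth / model size.
    Real throughput is lower due to compute and overheads."""
    return bandwidth_gb_s / model_size_gb

# Illustrative hardware numbers (assumed):
gpu = est_tokens_per_sec(900, 14)  # high-end GPU VRAM, 14 GB quantized model
cpu = est_tokens_per_sec(60, 14)   # dual-channel desktop DDR5, same model
print(f"GPU ~{gpu:.0f} tok/s, CPU ~{cpu:.0f} tok/s")
```

Under these assumptions the GPU lands in the tens of tokens per second and the CPU in the low single digits, matching the ranges described above; swapping from disk drops bandwidth by another couple of orders of magnitude, which is where minutes per token comes from.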
> It's really just a performance tradeoff, and where your acceptable performance level is.
I am old enough to remember developers respecting the economics of running the software they create.
Running Ollama locally, paired occasionally with Ollama Cloud when required, is a nice option if you use it enough. I have twice signed up and paid $20/month for Ollama Cloud, and I love the service, but I use it so rarely (because local models are so often sufficient) that I cancelled both times.
If Ollama ever implements a pay as you go API for Ollama Cloud, then I will be a long term customer. I like the business model of OpenRouter but I enjoy using Ollama Cloud more.
I am probably in the minority, but I wish subscription plans would go away and Claude Code, gemini-cli, codex, etc. would all be only available pay as you go, with ‘anti dumping’ laws applied to running unsustainable businesses.
I don’t mean to pick on OpenAI, but I think the way they fund their operations actually helps threaten the long term viability of our economy. Our government making the big all-in bet on AI dominance seems crazy to me.
Which is as-designed. Vulkan (and DX12, and Metal) is a much more low-level API, precisely because that's what professional 3D engine developers asked for.
Closer to the hardware, more control, fewer workarounds because the driver is doing something "clever" hidden behind the scenes. The tradeoff is greater complexity.
Mere mortals are supposed to use a game engine, or a scene graph library (e.g. VulkanSceneGraph), or stick with OpenGL for now.
The long-term future for OpenGL is to be implemented on top of Vulkan (specifically the Mesa Zink driver that the blog post author is the main developer of).
To what hardware? Ancient desktop GPUs vs modern desktop GPUs? Ancient smartphones? Modern smartphones? Consoles? Vulkan is an abstraction of a huge set of diverging hardware architectures.
And a pretty bad one, in my opinion. If you need to make an abstraction due to fundamentally different hardware, then at least make an abstraction that isn't terribly overengineered for little to no gain.
Closer to AMD and mobile hardware. We got abominations like monolithic pipelines and layout transitions thanks to the former, and render passes thanks to the latter.
Luckily all of these are out or on their way out.
Not really, other than on desktops, because as we all know mobile hardware gets the drivers it gets on release date, and that's it.
Hence, on Android, even with Google nowadays enforcing Vulkan, you're better off sticking with OpenGL ES if you want a less painful experience with driver bugs, outside of Pixel and Samsung phones.
Trying to fit both mobile and desktop in the same API was just a mistake. Even applications that target both desktop and mobile end up having significantly different render paths despite using the same API.
I fully expect it to be split into Vulkan ES sooner or later.
100%. Metal actually describes itself as a high-level graphics library for this very reason. I’ve never actually used it on non-Apple hardware, but the abstractions for vendor support are there, and they are definitely abstract. There is no real getting-your-hands-dirty exposure of the underlying hardware.
Wow, what a brain fart. So much of Metal has improved since the M-series that I forgot it was even the same framework. Even the stack is different now that we have metal-cpp and Swift/C++ interop with unified memory access.
> fewer workarounds because the driver is doing something "clever" hidden behind the scenes.
I would be very surprised if current Vulkan drivers are any different in this regard, and if yes then probably only because Vulkan isn't as popular as D3D for PC games.
Vulkan is in a weird place that it promised a low-level explicit API close to the hardware, but then still doesn't really match any concrete GPU architecture and it still needs to abstract over very different GPU architectures.
At the very least there should have been different APIs for desktop and mobile GPUs (not that the GL vs GLES split was great, but at least that way the requirements for mobile GPUs don't hold back the desktop API).
And then there's the issue that also ruined OpenGL: the vendor extension mess.