We need laws and social norms under which filming a stranger and uploading it online is considered a serious, unacceptable offense regardless of the device. I find it absurd that today it is completely acceptable to just film an unaware stranger and put the video online, especially since the majority of such videos are about making fun of people or humiliating them.
You shouldn't expect privacy in public spaces. That's the nature of public spaces. In the US, freedom of the press means that anywhere public, you have no expectation of privacy and should comport yourself as such; don't do anything or wear anything in public you wouldn't want to be recorded.
This is why paparazzi exist and how they operate. It's the dirty, dingy cost of having a free press, freedom of travel, freedom to hold public officials accountable, subject to the same laws you are; you can't waffle or restrict or grant exceptions, because those inevitably, invariably get abused by those in power.
The difference is public vs. private spaces. The Supreme Court in the US has defended the right to record video in public. But if someone walks into my home, or my 3rd space, etc. with one of these actively recording, that should absolutely be criminalized and enforced.
>the majority of the videos are about making fun of them or humiliate them
That's just nonsense. Your feeds seem to be polluted by what you are seeking out, as I've never seen a video on any service that shows humiliation of anyone.
I watch a lot of 1st Amendment audit videos, and those are never about humiliation, though many people end up looking very ignorant of the law concerning recording in public, which is protected by the 1st Amendment.
Cameras in phones are pretty much locked down today, assuming you have an updated version of the OS from a respectable manufacturer. Apps will not be able to access the camera feed (or the microphone) without explicit consent and a visual indicator.
The manufacturer might access it; Apple states they don't, and I'm not sure about Google and Samsung. A bad actor with 0-days might too.
Funnily enough, it's the OS and manufacturer I don't trust on my phone; with my PC I trust them a lot more, as it's much more open and I can choose the OS.
For reference, Samsung screenshots everything shown on their televisions at regular intervals and sends these to their South Korean data centres for advertisers to use. It's called Automatic Content Recognition (ACR), which any sane country should be outright banning under international espionage laws.
Giving an Android phone to elderly/non-technical people is asking for trouble imho. They will eventually tap their way into installing suspicious apps, adware or even straight up malware. It's inevitable, they are not aware of what they do and how to avoid the many risks of the digital world.
I remember having the same struggles as OP when setting up a cheap Android phone for my grandma; the amount of bloat, adware and misleading content I had to remove was incredible (and some couldn't even be removed). The irony was that after a few months of light usage, the phone was in an even worse state, full of downloaded apps and suspicious websites open in the browser. She would swear she never even noticed any of them.
This is one of the cases in which giving them an iPhone with its walled garden has great benefits. You can also set up parental controls on top of that already locked-down ecosystem.
> Plain Google search is still the main vector of scams
How incredibly sad this fact is. And even sadder are all the second-level implications of how it got to this point. And sadder still that anything is unlikely to be done about it in the foreseeable future.
My mother can no longer do the stuff she used to on her iOS phone because it is so complicated compared to the iPhone 4 I gave her a long time ago.
I screen her emails with her consent. It's very easy to do with Fastmail, which imports her Yahoo mail into a folder she doesn't see; I then move okay emails to her inbox.
If your relatives are significantly tech illiterate, I'd skip the smartphone entirely and go for a locked-down Linux desktop + feature phone. The most dangerous apps are big legitimate ones.
If you do go for a smartphone, my experience tells me that there's no difference between Android and iOS. The biggest sources for shady apps are the Google Play Store and Apple App Store. Shady stuff on the web can be easily defeated using an adblocking browser, which is essential for older relatives.
> If your relatives are significantly tech illiterate, I'd skip the smartphone entirely and go for a locked-down Linux desktop + feature phone. The most dangerous apps are big legitimate ones.
You know, they are adults with free will, and they want a smartphone like everyone else, to use WhatsApp, read the news, search things on Google, etc.
Hell, my 95-year-old grandma convinced a nurse to install TikTok on her phone because she saw her using it and also wanted to try it. It's not like we can isolate them from the world.
Sir no sir. I believe entirely the opposite. If they're tech illiterate then they don't have the entrenched knowledge that is the only thing keeping most people within the Windows ecosystem.
A Linux install that meets the basic needs of the user is perfecto!
Less so recently just due to time constraints, but I'm generally the technical person in my family group, and I've lost enough touch with Windows that troubleshooting it is increasingly difficult. If they need me to 'format and reinstall' they're getting Linux unless they have a very specific need that only Windows can cater to.
It's getting less silly every month! So many people in that boat only use the web browser anyway.
With a well-supported hardware configuration and a working web browser, even a non-techie may have a more stable experience than they would with Windows.
That has as much to do with the decline of Windows as with the ascent of desktop Linux, but still.
FYI: you can also set up parental controls on Android.
Parental control is also a hot buggy mess on iOS currently. Our daughter has an iPhone with parental controls set up, and a bunch of whitelisted apps regularly refuse to start at random moments (blocked by parental controls). We hoped that iOS 26 would finally fix it, but nope.
It doesn't really matter, both phone ecosystems are a mess, but in different ways.
It's always crazy to me to see these kinds of smug takes defending huge corporations as if they're your friends.
It's not all good or bad, there's a security issue with side loading, as well as shovelware on the play store. However, there is no world where I would argue that these justify limiting consumer grade hardware to walled gardens.
It's less about defending and more about being annoyed with all the over-confident, uninformed opinions people frequently post in reaction to any news article on the subject.
Funny, as someone who uses Android, sideloads apps, and is the "tech guy" for some older people, I went "yep, Google's own Play Store is full of shitty apps".
I recommend getting an Android phone (there are cheap Google Pixels out there) and trying to sideload an app. Also, browse the web a bit without an adblocker. I'd be surprised if, by the end of the experiment, you still thought sideloading is the reason their grandma's phone is full of crap.
It's too much energy to keep up with things that become obsolete and get replaced in a matter of weeks or months. My current plan is to ignore all of this new information for a while; then, whenever the race ends and some winning new workflow/technology actually becomes the norm, I'll spend the time needed to learn it.
Are we moving to some new paradigm the same way we did when we invented compilers? Amazing, let me know when we're there and I'll adapt to it.
I had a similar rule about programming languages. I would not adopt a new one until it had been in use for at least a few years and grew in popularity.
I haven't even gotten around to learning Golang or Rust yet (mostly because they passed the threshold of popularity after I had kids).
It's a collectors' market; the value is in the demand and scarcity. Same as with all other collectibles, like baseball cards and such. Or even wines: there are some so old they've become undrinkable but cost as much as a car. In a collectors' market, the price is detached from any practical purpose of the item.
Also consider that most Magic cards are valuable only because of their collector status. The valuable ones are mint first editions, and nobody is buying those to play them.
So who fuels this collectors' market? Nostalgic 30-somethings who now have disposable income and want to buy the things they wanted as children. Same as with videogame collectors and such. You don't need an original copy of Super Mario to play it, but people still spend thousands to buy one.
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
My hypothesis is that developers work on different things, and while these models might work very well for some domains (React components?), they fail quickly in others (embedded?). So on one side we have developers working on X (which LLMs are good at) claiming it will revolutionize development forever, and on the other side developers working on Y (which LLMs are bad at) claiming it's just a fad.
I think this is right on, and the things LLMs excel at (React components was your example) are really the things for which there's just a ridiculous amount of training data. This is why LLMs are not likely to get much better at code. They're still useful, don't get me wrong, but the 5x expectations need to get reined in.
A breadth and depth of training data is important, but modern models are excellent at in-context learning. Throw them documentation and outline the context for what they're supposed to do and they will be able to handle some out-of-distribution things just fine.
I would love to see some detailed failure cases of people who used agentic LLMs and didn't make it work. Everyone is asking for positive examples, but I want to see the other side.
- on some topics, I get the x100 productivity that is pushed by some devs; for instance, this Saturday I was able to build two features I had been rescheduling for years because, for lack of knowledge, they would have taken me many days to make; but after a few back-and-forths with an LLM everything was working as expected. Amazing!
- on other topics, no matter how I expose the issue to an LLM, at best it tells me it's not solvable; at worst it tries to push an answer that doesn't make any sense and pushes an even worse one when I point that out...
And when people ask me what I think about LLMs, I say: "it's nice and quite impressive, but it still can't be blindly trusted and needs a lot of oversight, so I suggest caution".
I guess it's the classic glass-half-empty-or-half-full situation.
I believe what Wikipedia tries to do (simplifying here) is report the "opinion" of reputable sources which should have an informed view on the matter. If reputable sources believe it's a genocide, then Wikipedia will report it; if not, it will not.
Calling these sources biased because they do not corroborate your view of the situation is your subjective opinion and doesn't mean they actually are biased. The whole point of considering them reputable sources is that they should be as unbiased as possible (even though 100% neutrality is impossible); if they had "significant bias" as you claim, they would not be considered reliable sources to begin with.
Actually there's a Wikipedia guideline (WP:BIASED) along the lines of "bias doesn't necessarily make a source unreliable", which in practice is taken to mean that bias doesn't matter.
Of course, in practice editors have their own biases and decisions come down to popularity contests. Wikipedia's own biases seem to get worse over time, as more neutral editors give up, so we end up with some weird things like:
- Almost all conservative news sources having low reliability ratings.
- Daily Mail for example is deprecated, the lowest possible rating outside of literal spam.
- Al Jazeera, which seems largely controlled by the Qatari monarchy, has the highest reliability rating and is the most-used source in Israel-Palestine. Even their blog is the top source on many articles, despite news blogs being against policy.
- Al-Manar, the Hezbollah mouthpiece which is very unashamedly biased (e.g. referring to their terrorists as "men of god"), has a somewhat low reliability rating, but still higher than several conservative sources like the Daily Mail.
There's also a tricky situation where some political factions consistently report closer to reality than others. This makes it hard to be both reality-focused* and politically neutral at the same time.
* It's not this page, but there's a separate Wikipedia policy which says that editors should only insert content which is true.
Circular reasoning that is completely ignorant of the last 2 years of analysis of media reporting on Gaza.
The evidence of media bias is extensive and extremely blatant: it spans framing ("[horrible event, war crimes, etc.] happened, according to Hamas" vs. no such qualification for Israeli claims; "20 people killed in Gaza" without mentioning who or what killed them), dehumanisation ("2 people killed" when reporting children's deaths in Gaza vs. "2 teenagers in hospital" when talking about IDF soldiers), selective reporting (remember the pogroms in Amsterdam that got debunked on social media while every head of state was sending their condolences?), constant repetition of Israel's "right to self-defence" while Palestinian context goes unmentioned, etc., etc., etc.
If you need something more visual/real-time, Newscord has been reporting on this consistently: https://newscord.org/editorials
The media might largely be a reputable source when it doesn't contradict the preferred narrative, and the Gaza genocide was probably the strongest example of this we could have had.
I'm not sure why I even wrote this out, because two years in, calling it "subjective opinion" is obviously not a position based on facts or reason.
The only reason I can find for anyone to be bored by the inside is if they visited on a cloudy day. The way the light enters through the stained glass and colors the environment (and how the light changes during the day) is astonishing; I've never experienced anything similar, tbh.
I don't want or need the whole thing to be flat, but I do prefer it to be stable. For instance, if the plateau were a bit thicker so that the camera lens sat flush with the surface (even just an extra bar of sorts inside the plateau), the phone would never rock back and forth when I put it down and tap at it on a table.
The problem with the one-sided camera bump is that the phone is unstable. It wobbles when you touch it, making using it while lying “flat” on the table incredibly annoying.