The problem with Google Maps is that they only let you download rectangular chunks of the map one at a time (chunks that aren't large enough to, say, cover a whole state), and then those chunks must be kept regularly updated or they eventually "expire" without you having any say in the matter.
Google Maps also doesn't support offline contour lines / hillshading (what Google calls "terrain"), which is a big deal for hiking and other outdoor activities. With OsmAnd, you can literally have all of North America in your pocket, with contour lines, offline navigation, and offline Wikipedia articles for every POI.
You definitely get a warning. But there's also an auto-update feature, so you can set it and forget it as long as your device connects to the internet once every x months.
Pointing out more options for people to evaluate is great and all. But at least for me, "offline maps" implies not needing the permission of a surveillance company to use it, and not phoning home to that company when the app regains connectivity. And I'd say that's an appropriate definition in the context of the top level comment about privacy.
I went to Grenada in 2014 and used offline maps to drive around the island. In 2015 the tablet suffered an accident and I powered it off to deal with it later. Five years later I powered it back on without internet connectivity. Turns out the maps had expired. So much for "many years."
While I agree this is potentially an issue, most people with most devices won't be away from the internet for years. If you want it to be apocalypse-proof for when the internet goes down, you should probably get a paper map.
That is the antithesis of "many years," and it's why Google Maps is not sufficient. You know what's still around many years later? My paper printouts from MapQuest.
There is plenty of fairly reliable evidence that masks work, and the better the compliance, the better they work. In nurse studies you get much better results than in population studies, for instance. Now that I'm looking, I'm hard-pressed to find any studies that go against this conclusion.
Maybe someone with more knowledge than me can explain - flatpaks seem more secure than anything you would ever install on Windows by a long shot. It's also fairly trivial for me (and I'm by no means a hardcore user) to use a completely immutable version of Linux such as Silverblue. The other complaints in these links also seem suspect. If the Linux kernel is insecure because it's monolithic, doesn't that make ChromeOS just as insecure? What about Android? What about the "96.3% of the top one million web servers [that] are running Linux"?
Also, there's something to be said for security through obscurity. My bet is I could go through my entire junk mail folder opening every attachment on Linux without a problem, but it'd take fewer than 10 of them on Windows before I was fully owned. If you're careful on Linux, aren't you far, far safer than if you're careful on Windows?
Almost all popular applications on Flathub come with filesystem=host, filesystem=home, or device=all permissions - that is, write permissions to the user's home directory (and more). This effectively means that all it takes to "escape the sandbox" is echo download_and_execute_evil >> ~/.bashrc. That's it.
This includes Gimp, VSCode, PyCharm, Octave, Inkscape, Steam, Audacity, VLC, ...
To make matters worse, the users are misled to believe the apps run sandboxed. For all these apps flatpak shows a reassuring "sandbox" icon when installing the app (things do not get much better even when installing in the command line - you need to know flatpak internals to understand the warnings).
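You can check this yourself from the command line - a rough sketch, using GIMP's Flathub ID as an example (the exact permission names vary per app):

    # show the app's static permissions (look for filesystems=host or =home)
    flatpak info --show-permissions org.gimp.GIMP

    # revoke home-directory access for this app only, for the current user
    flatpak override --user --nofilesystem=home org.gimp.GIMP

    # undo the override if the app stops working
    flatpak override --user --reset org.gimp.GIMP

Of course an image editor still has to read and write your files somehow, which is exactly why apps like these ship with broad filesystem access in the first place.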
I guess I just don't buy it completely. Given that I myself have had a hard time giving a Flatpak permission to access even an unimportant network drive (Flatseal is a godsend for giving/denying permissions in any way you please), while the same app on Windows will happily write anything to C:\Windows\System32, I feel like we're talking about entirely different beasts. But perhaps I'm naive. I also feel like there would be a very large vested interest in making people feel more unsafe on Linux than they do on Windows/macOS, for obvious reasons.
And given that the version of Fedora I use is immutable and even I have a hard time messing with it to the point of pain/exploit with full access to the system (and I've tried for fun in VMs) I feel like a trusted flatpak app I download from a trusted source is going to have a damn near impossible time doing much of anything. While I feel like a simple website hack that serves me a bad .exe could/would cripple every single file it can find on my network on a Windows machine.
You're right. I'm entirely unconvinced by anyone in this thread that Linux isn't still WAY safer all around.
You can come up with theoretical threats all day that Linux is susceptible to, sure.
But at the end of the day, there is not a single serious cloud company (or just about any tech company that isn't MS) genuinely looking at "we should switch to Windows or macOS for the backbone of our company." And it's Linux that gets the downstream security that comes with that.
Flatpak permissions are very broad by default in most applications. Even if you manually override them using Flatseal, some permissions like the X.org or PulseAudio sockets are very problematic because these legacy protocols were not designed to be secure. Even if you manage to lock down permissions and only use modern apps that support Wayland and PipeWire, the Flatpak sandbox still exposes a lot of kernel attack surface because it blocks very few syscalls. I think they should add something similar to Windows' Win32k lockdown (ProcessSystemCallDisablePolicy) and disable insecure components like io_uring.
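The socket side at least can be locked down per app, for whatever that's worth - a rough sketch (the app ID is just an example, and this does nothing about the kernel attack surface):

    # drop the legacy X11 and PulseAudio sockets for a single app
    flatpak override --user --nosocket=x11 --nosocket=pulseaudio org.inkscape.Inkscape

    # check which overrides are in effect
    flatpak override --user --show org.inkscape.Inkscape

Anything that isn't Wayland/PipeWire-native will simply break after that, which is part of the problem.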
As for immutable distros, AFAIK Silverblue and others are immutable in the sense of package management, but there is actually no process to ensure the integrity of the full boot chain, because the initrd can be trivially modified by the host and is unsigned. There is a UKI (Unified Kernel Image) proposal that will likely be the path going forward (at least in the Red Hat world), but I think it's still years away.
In my opinion, if you want to use Linux desktop securely, just use Qubes.
I fully agree with using Qubes, but I also think for most people in most cases that's akin to putting a bank vault door on the front of your house. I guess the question I would ask is: gun to your head, you have a choice between running a random Setup.exe on Windows, a .sh/.deb/.rpm on Linux, or a Flatpak. Which one are you choosing? 10/10 times I'm choosing the Flatpak myself. It might not be perfect, but it does seem better than most alternatives everyone uses all day every day.
> for most people in most cases that's akin to putting a bank vault door on the front of your house
If we are talking about a device in which you do banking, shopping, manage sensitive or work data, etc. then I think security should be a priority. For more casual use, I agree Qubes would be overkill.
> Which one are you choosing?
I'd rather execute Setup.exe inside Windows Sandbox (or run it while denying the UAC prompt), or run a random macOS binary (provided SIP is not disabled), than run a Flatpak. To be clear, I think Flatpak is an improvement, I'm glad it exists, and I hope it continues evolving. But in my opinion, the Linux desktop still has a long way to go to catch up to Windows and macOS on security.
But wouldn't you agree that reading about this topic now - with the counter-argument of the post-1960 consensus (though I have a hard time thinking most things this debatable are ever strictly consensus) and the follow-up DNA evidence - is far more informative and convincing than what you would have read in 1920? It seems the people guessing in 1920 had about as much chance of being right as the people guessing in 1960, with neither having the relevant evidence to back their claim.
Come on: if you're excavating an ancient village and find a layer of charcoal littered with arrowheads and skulls and find totally different pottery before and after the charcoal layer, then unless your brain has been cordycepted by fashionable academic nonsense, you're going to conclude that someone conquered that village and replaced its people --- not that the charcoal layer represents some kind of ceremonial swords-to-plowshares peaceful pottery replacement ceremony. For 50 years, academics insisted on the latter interpretation. If you'd read old books, you'd know the post-1960s consensus was nonsense even without ancient DNA. Ancient DNA merely created a body of evidence so totally compelling that not even diffusionists (the "pots not people" crowd) could stick to their stories and keep a straight face.
You really can't see the reason behind this? People are suing for libel / slander / copyright infringement / everything else under the sun. If you don't put guardrails up and it hallucinates bogus medical advice, so many people would just blindly accept it. Remember when 4chan told everyone they could upgrade their iPhone to be waterproof? The general public and an LLM that tells you the best way to commit a crime or how not to overdose on fentanyl just do not mix.
I think the pull for most of us who use ChatGPT is that Google lies far, far more often than ChatGPT ever will, or is otherwise inconclusive / doesn't give the relevant information you're looking for. The amount of SEO clickbait and Quora/Stack Overflow answers that are either just incorrect or highly opinionated makes Google very difficult to use for many things. As someone new to/learning Fedora, it gives me the right answer 95% of the time; Google gives me the right answer in the top 5 links far less often.
It really is astonishing how much you can get done this way. I've been setting up a home lab for myself, and the answers GPT-4 gives are miles ahead of the Stack Overflow results or the apps' documentation or whatever else. Rarely (very rarely) it will give me a wrong answer, but then I paste in the error message or describe the problem I had and it almost always comes up with the correct answer on the second try. The final step, when it's still not working, is asking where I might learn more, and GPT always gives me a better link than Google.
I'm convinced the people who say it's nothing but a BS machine have never tried to use it step by step for a project. Or they tried to use it for a project most humans couldn't do, and got upset when it was only 95% perfect.
I disagree with that. It's very useful for writing boilerplate and documentation, but two thirds of the time, when I'm in front of a bug I'm too lazy to understand and ask ChatGPT, with context and all, the answer is wrong. I can fiddle with it to reduce that to a third of the time, but in the end, only the questions that are really, really hard to figure out on your own are left.
Still, it's way better and more efficient than Google. Less efficient than not being lazy and using my two brain cells, tbh.
My newest use is:
"Hello, I'm working on X. I use Y tech, my app does Z, and I want to implement W. Can you provide a plan for how and where to start?"
I agree with this. This is my primary use as a new analyst. Weird things that would take lots of time to dig through Stack Overflow to find, I can find pretty quickly if I feed it the parameters I'm working within and what I'm trying to get to. Usually it just fills the gap that Google was filling before, but much better, in my opinion.
Sorry I'm a bit late. Depends. Professionally, it's a mix of Python, TypeScript (those I practically never use ChatGPT for - or rather, I use it for questions I'd usually ask Google/Reddit/SO), and Terraform/Terragrunt on AWS with some Cisco config and some other hardware stack I don't remember, but that requires custom Terraform providers. I automate the deployment of the hardware, so I think writing custom providers and Terraform is roughly a third of what I do, and I cannot use ChatGPT for that; its output is way too bad.
Personally, a lot of bash, C, and AWK at the moment (TypeScript + HTML/CSS until last April; now I'm back to the basics). The figures I gave in my post were more for that.
The last time I used it was yesterday: I wanted to hack something in an old game I use Steam+Proton for. I knew it was a weird Wine prefix, so I asked ChatGPT about it. I might have asked poorly, but after fiddling I had the answer (tbh I had to look up how to get the game ID, so in the end I lost more time than I saved). Then, when it still didn't work because the path was shit, I entered all the necessary context into ChatGPT-4, and it couldn't find the easy "USER=steamuser" env variable to add before launching Wine. I stopped after 10 minutes, looked at an example Wine cfg file, understood the issue, and fixed the problem myself.
I mean, it's probably good for really basic stuff, so it could have helped me when I was starting, but 80% of the stuff I code automatically without really thinking about it, and when I have to stop and think, ChatGPT isn't helping. Also, tbh, VS Code is really, really good and fixes my old, time-consuming task of "what's this argument again?"
Oh come on. I fed Unreal Engine C++ code to ChatGPT-4 and it couldn't understand inheritance in Slate classes, and therefore kept offering me the same broken solution for a parameter with the wrong type.
The Unreal Engine code is documented and publicly available for OpenAI to ingest, and it still gets the basics wrong.
I wasted hours trying to get it to explain to me what I didn't know. If it doesn't understand the internals of Unreal, I have no hope for it on bigger and better codebases.
It doesn't parse, it doesn't explain, it does not grok. It guesses at best and the blood sucking robot-horse is not telling the truth.
>It doesn't parse, it doesn't explain, it does not grok. It guesses at best and the blood sucking robot-horse is not telling the truth.
In my experience with coding (I've only done javascript and python myself) you have to tell it to explain and grok. It takes on the role you give it. Even just saying something like "you are a professional unreal developer specializing in C++, I am your apprentice writing code to (x). I want you to parse the following code in chunks, and tell me what might be wrong with it" before typing your prompt can help the output immensely. It starts to parse things because it's taken on the role of a teacher.
People love to hate on the idea of "prompt engineering" but it really is important how you prime the thing before asking it a question. The other thing I do is feed it the code slowly, and in logical steps. Feeding it 20 lines of code with a particular purpose / question you'll get a much better answer than feeding 200 lines of code with "what's wrong here?" You still need to know 90% of what's going on, and it becomes very good at helping out with that 10% you're missing. But for all I know it is just really bad at C++, that wouldn't surprise me. The things I'm using it for are definitely more simple.
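If you're hitting the OpenAI API rather than the web UI, the same priming goes into the system message - a minimal sketch, assuming a GPT-4-class chat model and an OPENAI_API_KEY in your environment (model names and limits change):

    # role priming via the system message; the code goes in the user message
    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4",
        "messages": [
          {"role": "system", "content": "You are a professional Unreal developer specializing in C++. I am your apprentice. I will send code in small chunks; explain what each chunk does and what might be wrong with it."},
          {"role": "user", "content": "<first ~20 lines of code here>"}
        ]
      }'

Same idea either way: the role comes first, and the code goes in small, purposeful chunks after it.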
I do think this is why I sometimes get amazing results, and other times I have to go over a snippet of code so often I just give up and do it myself. It's a matter of how the question was asked in the first place.
Knowing that, it makes sense that your prompt should be as specific as possible if you want the results to be as specific as possible.
The best results I got were from feeding it Lisp code that I wanted translated to C (to compile it). It took very little effort on my part because I described what each of the snippets did separately, and what to expect when they were combined and used together.
Through this, I learned that C doesn't have anything akin to Lisp's (ATOM). ChatGPT stated clearly that its version of ATOM should only be expected to work in the code it was writing, and might not work as expected if copied out for another use of Lisp's (ATOM).
I asked it to give examples of where it wouldn't work, and it gave me a code snippet using (ATOM) that its version would not have handled correctly, even though that version worked correctly for my original purpose.
Having said that, I myself learned that working with code function by function with ChatGPT, and being explicit about what you need, gives very good results. Focusing on too many things at one time can derail the whole session. One or two intermingling functions works great though.
GPT4 works best when you assume that you're the professional dev with decades of experience, whereas GPT4 is a bright and broadly-informed co-op student lacking in experience in getting stuff working. You have to have a solution in mind, and coach it with specifics. And recognize the tipping point where it takes you more keystrokes of English to say what should be done, than keystrokes in Vim to do it yourself.
I did prompt engineer, using the "you are an expert, describe to a student with examples" framing in many different variations.
In my testing prompts did not unlock an ability in GPT to grok the structure of code.
Empirical testing of LLMs is going to prove and map out their weaknesses.
It is wise to infer from intuition and examples what it can handle, and leave the empirical map of its capabilities to the academics, for the provable conclusions.
My observation (which could be wrong) is that ChatGPT as a programmer's aid is only useful for the simple cases. Not so much for complex stuff, and certainly not for something as complex as the Unreal engine.
Do you have some sample chat logs of interactions like this you can share? I'm curious to see what kind of stuff it's coming up with, and how you're prompting it.
I don't tend to keep the chat logs, as the amount of them gets unwieldy very quickly. But examples of things I've done with it that are useful:
I wanted to create a web app, something I haven't done in a very long time. Just a simple throwaway back-of-the-napkin app for personal use. I described what I wanted it to do and asked what might be a good frontend/backend. It listed a few, I narrowed it down further, and I ended up deciding on Flask/Quasar.
After helping me set up VS Code with the proper extensions for fancy editing and guiding me through the basic Quasar/Flask setup, it was able to help me immensely in creating a basic login page for the app. Then it easily integrated the OpenAI API into it, with all the proper Quasar sliders for tokens/temperature/etc. Then it created a pretty good CSS template for the app as well, and a color scheme I was able to describe as "something on Adobe Color that is professional and x and x (friendly, warm, whatever you want to put in)". Everything worked flawlessly with very little fuss, and I'd never used Flask or Quasar before in my life. You can also delve VERY deep into how to make the app more secure, as I did for fun one evening even though it's not going to be internet-facing.
Another thing I did was go over some pfSense documentation with it. I had some clarifying questions about HAProxy, as well as setting up ACME certificates with my specific DNS provider. It was extremely helpful with both. It also taught me about nitty-gritty settings in the Unbound DNS resolver in a way that's much more informative than the documentation, and helped me set up some internal domains with certificates for Pi-hole, Xen Orchestra, etc. It also helped me separate out my networks (IoT, guest network, etc.) and taught me about Avahi so I can reach my Hue lights through mDNS. These are things I always wanted to do, I just never felt like going down a Google rabbit hole getting mostly wrong answers.
The last example I'll give: it was able to help me set up a docker-compose Plex stack within Portainer that uses my Nvidia GPU for acceleration. The only things I had to change from the instructions it gave were the Nvidia driver version numbers, and I grabbed the latest docker-compose file. I'd never used Portainer in my life before, nor do I have experience with Nvidia drivers within Linux, and I feel like learning it was many times faster being able to ask a chatbot questions vs. trying to google everything. Granted, I still had to RTFM for the basics, as everyone should always do.
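For anyone curious, the docker run equivalent of the compose file it walked me through looks roughly like this - a sketch, assuming the linuxserver.io Plex image and the NVIDIA Container Toolkit are already installed (paths, IDs, and GPU flags will vary with your setup):

    # run Plex with GPU transcoding enabled (volume paths and PUID/PGID are placeholders)
    docker run -d --name=plex \
      --gpus all \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e PUID=1000 -e PGID=1000 \
      -p 32400:32400 \
      -v /path/to/plex/config:/config \
      -v /path/to/media:/media \
      lscr.io/linuxserver/plex:latest

Portainer just wraps the same thing in a compose stack; the GPU passthrough flags were the part I'd never have guessed from the docs alone.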
I think perhaps my use cases are a bit more "basic" than many HN users'. Like I said, I'm not asking it to do problems most humans wouldn't be able to do, as I know it isn't quite there yet. But for things like XCP-ng, Portainer, Linux scripts, learning software you've never used before, or even just framing a problem I'm having in steps I hadn't thought of, it's been invaluable to me. For me it's like documentation you can ask clarifying questions of. And almost none of the things I've asked it about would work at all if the answer were wrong; I would know immediately.
If posting on twitter is your idea of how to get things done in politics (as many people believe) then it not being on twitter is a boon for us all. Actual civic engagement is what's needed, not posturing for your party (it's a lot like the sports you mentioned, root for your own team) via small text blurbs.
I guess the part where I'm confused is how you think de-platforming politics at mass-population scale will somehow increase civic engagement.
Like, isn't "small text blurbs" better than nothing? ... Do you think billionaires bought and crippled Twitter and Reddit for our sake?
We recently got hard evidence that Facebook and Twitter censored true information at the White House's request, to drive people toward a desired political outcome... On Threads, that wouldn't even have been necessary. They just call any debate they don't like 'political' and de-platform it. Silent, and deadly.
Reddit and Twitter have been suicided. That's not actually a boon, as toxic as they were pre-Musk and pre-SpezGate. Now a platform where political engagement is explicitly shadowbanned rises up to offer ever-more vacuous bullshit heavily dosed with pro-corporate propaganda, with the glimpses of substance all expressly filtered out. That's not a boon either, though that's what seems to be the spin.
I do hope your optimism isn't misplaced. Maybe driving the engaged people toward defederated platforms will work out, and we can leave the sports-minded to their own devices - but realistically, those people will be weaponized against change even more easily once the agitators have been sectioned off.
It's like the free speech zones Bush brought in - say whatever the fuck you like, in this little cage two miles from any TV cameras.
> Like, isn't "small text blurbs" better than nothing?
I honestly don't think they are. Small text blurbs are not actually informative. You can't engage in nuance, you can't really argue a position, and you can't really have a good debate. They're just good for bloviating, and there are lots of places where you can bloviate anyway.
Something that's particularly interesting: on Pod Save America, they referenced a study (which I forget right now) that tracked protest effectiveness vs. size, and right around the time that Twitter/Facebook started taking off, protest size skyrocketed while protest effectiveness plummeted.
As it turns out, social media short-circuited traditional methods of organizing. Those traditional methods - actually knocking on doors and talking with your neighbours - were also the social glue that formed civic groups which did stuff after the protest ended, like voter drives or lobbying politicians: political tactics that actually worked.
That was also within a few years of George W. Bush making moves to cut protest effectiveness, such as the free speech zones. Not to mention there are plenty of examples of social media creating very effective protests, which then inspired heavy-handed tactics in retaliation, with examples of this from all over the world.
Do you think what happened to OWS was just because of tweets' lack of effectiveness? I don't believe that, and neither should you.
I did try to find the study you mentioned, as well as the podcast, but came up empty. Got a link handy?
Hm, I never realized the parallel between politics and sports lol (and in more than just the ra-ra fandom sense).
I think the reason I enjoy sports so much more than politics - my ministry is the NBA and WNBA - is that it's inherently a game, so you don't impose any of the IRL stress politics does. And to elaborate on why I find them similar: in sports, you take stock of the landscape, analyze your standing, and strategize about improving, e.g. optimal use of resources. And those are true over stretches of years or during a single game.
As the parent alluded to, Twitter is definitely not the place for this nuanced discourse, and unfortunately the stereotypical nerd who would be more wont to honestly engage probably didn't grow up playing a sport - or at the very least not such a mainstream American sport as basketball - and has no interest in the topic.
But it’s whatever. I still find great joy theorizing, seeing the results of different implementations, being presented with ideas I hadn’t thought of (these r like lineup combos and play calls), team/player growth, etc
What you're talking about is a kind of willpower, which funnily enough has a biological basis also. Genetics seems to play a role, as do many medications. The marshmallow test seems to show willpower largely stays the same over four decades. It's not mind over matter; it's having a mind primed to do it in the first place. It won't be as easily taught to someone who grabs the marshmallow instantly as a preschooler. And we now seem to be able to induce it artificially with Ozempic, which is fascinating.
I don't think Ozempic operates on willpower... it slows your digestion process, which makes you feel fuller longer and can make you incredibly ill if you overeat.
I very rarely use Google anymore, as it's a worse bullshit machine. I just ask ChatGPT and tell it to give me sources. I also never use cookbooks anymore, as it's much more interesting to have a chat about what I can make with the ingredients in my fridge. It's also absolutely fantastic at making meal plans (tell it restrictions like lactose, protein you want per day, calories per day, and any other preferences you have) and workout plans (tell it the equipment you have in your house and your goals). If you're wanting it to code the next big thing by itself, it probably won't do it properly, but for everyday things it's a very useful assistant. More useful than anything else out there by far.
Try using Perplexity.AI [no affiliation], which automatically provides a citation for every sentence it produces. A typical paragraph will have five citations.