Take a quick scroll through; it features programming with pictures, which is much more useful and motivating for a beginner than lines of text.
First impressions matter. So avoid excessive keywords and syntax that only scare beginners into thinking programming is a voodoo-magic minefield. I'm looking at you, Python.
This is an example of how the internet was originally intended: Every user of the internet has a public address that any other user can send and receive messages from.
The design works just like postal addressing. Your postal address contains the directions to your building from any location on earth. Even if you live in a dormitory building with many other residents, I can still send you a letter directly by adding "door number: 42" to your dorm's postal address.
IP addressing uses numbers instead of English terms like "door" and "street". So I can't simply add a "door number" to your building's IP address; your building has to be given enough addresses so that each resident's computer can have its own. When your computer has a public IP address, I can send Internet packets directly to you.
Harvard was early to the slicing of the IPv4 address pie, so they had enough addresses for each of their residents, including Zuck. Anyone with internet access could put Zuck's IPv4 address on an Internet packet and it would end up on his computer. Most of these packets would be HTTP requests to facebook.com, to which his computer would reply with a page from the Facebook website.
This is the internet working as intended.
But we ran out of IPv4 addresses in 2012, which has forced internet service providers to adopt an address-sharing scheme called network address translation (NAT) that makes it impossible to send letters directly to other people's computers. Imagine I weren't allowed to put any room number or name on my letters. If I sent a letter to your dormitory, the staff there wouldn't know what to do with it and would be forced to return it to sender or discard it. This is what NAT does, and it has turned the glory of the Internet into a centralized monster of control and censorship.
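The postal analogy maps neatly onto how a NAT actually works. Here is a toy model (my own illustration, not a real implementation): outbound packets get a public port mapped back to the internal sender, and inbound packets with no matching entry are simply dropped.

```python
class ToyNAT:
    """Toy model of a NAT translation table (illustration only)."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # public port -> (private ip, private port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        # An outgoing packet gets a fresh public port mapped back to the sender.
        port = self.next_port
        self.next_port += 1
        self.table[port] = (private_ip, private_port)
        return (self.public_ip, port)

    def inbound(self, public_port):
        # Incoming packets are delivered only if a mapping already exists;
        # unsolicited traffic is dropped, like a letter with no room number.
        return self.table.get(public_port)
```

Replies to connections you initiated find their way back through the table, but nobody on the outside can start a conversation with you.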
If you want to host a website with a public IPv4, only established cloud providers that obtained enough IPv4 addresses before it was too late can help you (primarily Amazon, Google and Microsoft).
The successor to IPv4, IPv6, brings enough address space for every person, their dog, their dog's fleas, and their dog's fleas' microbes. We can go back to hosting websites from our dormitories, sending chat messages directly to our friends (not via Google, Facebook, and Microsoft), and starting new ISPs that missed out on the IPv4 pie but actually have a chance at competing with the likes of Comcast.
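The scale of that claim checks out with simple arithmetic: IPv4 has 2^32 addresses, IPv6 has 2^128, so every single IPv4 address could be replaced by 2^96 IPv6 addresses.

```python
ipv4_space = 2 ** 32    # ~4.3 billion addresses total
ipv6_space = 2 ** 128   # ~3.4 x 10^38 addresses total

# how many IPv6 addresses exist per IPv4 address
per_ipv4 = ipv6_space // ipv4_space
print(per_ipv4 == 2 ** 96)  # True
```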
IPv6 reintroduces the equity to the internet that Facebook benefited from at its inception.
And how many people remember public IPv4 addresses, beyond a couple of easy-to-remember ones like 1.1.1.1?
RFC 1918 address space is easily remembered because people mostly use 192.168.x.x. But IPv6 has the same idea, and when written in shorthand it isn't significantly larger.
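Python's standard `ipaddress` module shows the shorthand point: a private (ULA) IPv6 address in compressed form can come out shorter than its RFC 1918 counterpart. The specific addresses here are made up for illustration.

```python
import ipaddress

lan_v4 = ipaddress.ip_address("192.168.1.10")
lan_v6 = ipaddress.ip_address("fd00:0:0:0:0:0:0:10")  # a private ULA address

# runs of zero groups collapse to "::" in the compressed notation
print(lan_v6.compressed)                          # fd00::10
print(len(str(lan_v4)), len(lan_v6.compressed))   # 12 8
```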
When I worked at a company with about 5-6 servers and a couple fixed remote workstations, all the programmers knew all the IP addresses by heart, if there were names for anything but the www host I didn’t know them.
Obviously doesn’t scale, but I would assume this was normal back when you only interacted with say <10 servers.
That’s a non-issue nowadays. Basically any cheap router supports Avahi/Zeroconf/Bonjour … and allows you to reach any other machine of the network directly by its host name instead of its IP. There is no reason to learn the IP address of your first MySQL server when you can reach it through "mysql-1" or "mysql-1.local".
You basically just need a router and an OS from the last two decades and your machines to have a defined host name (which your OS installer takes care of).
I don't think that's true. I've never seen a router that lists hostnames I can actually ping. Sometimes they do, but half the entries are always empty. It's a very client-dependent solution.
> Basically any cheap router supports Avahi/Zeroconf/Bonjour … and allows you to reach any other machine of the network directly by its host name instead of its IP.
I regularly run into instances where local hostname resolution is unreliable.
To improve reliability, I set up a local DNS server to hand out a domain name with the IP address. Even then, whether a client requires a bare hostname or an FQDN to resolve a local address can vary over time.
They are easy enough to remember for a few seconds if you need to configure it somewhere. I always ping 8.8.8.8 to verify my internet connectivity. I don't think people should underestimate how much IP addresses are entered manually on a daily basis.
NAT was a thing well before IP addresses became scarce. It is a key enabler of the internet's ease of use, as well as of the very ability to connect nearly double-digit billions of devices with about 200 million live addresses.
The end-to-end principle is mostly undermined by stateful firewalls and a total lack of secure-by-design in software development; this will not change with IPv6.
Who uses the indicator lights? - people looking behind a computer or at switchgear. They aren't regular users, they understand how networks operate and what the green/yellow lights mean. They're typically debugging a problem, for which you want the truest indication of what's happening on the wire.
For regular users who just care about working internet, you can notify them of connection failure where they actually look: the screen.
> X-Plane is the most advanced flight simulator in the world. The product of 20 years of obsessive labor by a hardcore aeronautics enthusiast who uses capslock a lot when talking about planes, it actually simulates the flow of air over every piece of an aircraft’s body as it flies. This makes it a valuable research tool, since it can accurately simulate entirely new aircraft designs—and new environments.
I splurged and subscribed to X-Plane's global scenery and all planes option on Android.
While it's a limited experience compared to a 3-axis stick and throttle on a mid-high end PC, it's still a lot of fun to pick out places like Gibraltar, Saint Maarten, and other unique situations and fly in and out on HUD / trail view.
(I had my toe in the door, did three hours of PPL instruction, never could follow up on it. And many hours of sims all the way from Sublogic days.)
Heh, I had almost my entire body through the door, but stubbed my toe on the threshold! ~50 hours logged, a handful of "cross country" solos, did my night landings etc. Got sick for about a month, holidays, wedding coming up, just never got back into it. I actually lost my logbook for a number of years but I'm looking at it now on my shelf.... one of these days.
I have about ten hours and loved it, but I just can't justify the cost. It's so expensive when I have so little disposable income, despite making six figures.
Add in `kj` too, and you can press both keys at the same time and it will work as if it were a single key press. The added benefit is that it's basically a no-op from Vim's standpoint, just in case you were already in normal mode.
`jk` sits under the strongest two fingers of the right hand on the home row, since that's what you'll be doing the most. `h` can be hit once or twice with no real effort, but there are better ways to move left and right within a line, such as `bBeEfFwWIA`. I also remap `H` and `L` to move all the way to either end of the line, but I might be doing it wrong.
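A minimal .vimrc sketch of the mappings described above (the exact `H`/`L` targets are my guess at the commenter's setup):

```vim
" leave insert mode with jk or kj, so either key order works
inoremap jk <Esc>
inoremap kj <Esc>
" jump all the way to the start/end of the current line
nnoremap H 0
nnoremap L $
```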
Protobufs/nanopb would be my go-to for minimal message size.
If you want small code size, CBOR seems like a good bet:
> The Concise Binary Object Representation (CBOR) is a data format whose design goals include the possibility of extremely small code size, fairly small message size, and extensibility without the need for version negotiation. [1]
This [2] C-implementation fits in under 1KiB of ARM code.
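As a rough illustration of why a CBOR codec can be so small (a sketch of the RFC 8949 header rules, not a full codec): every item starts with one byte holding a 3-bit major type and a 5-bit length/value field, and larger arguments simply follow in big-endian order.

```python
def encode_head(major: int, n: int) -> bytes:
    """Encode a CBOR item head: 3-bit major type, then the argument."""
    mt = major << 5
    if n < 24:
        return bytes([mt | n])             # argument fits in the head byte
    if n < 0x100:
        return bytes([mt | 24, n])         # 1-byte argument follows
    if n < 0x10000:
        return bytes([mt | 25]) + n.to_bytes(2, "big")  # 2-byte argument
    raise NotImplementedError("larger arguments omitted for brevity")

def encode_uint(n: int) -> bytes:
    return encode_head(0, n)               # major type 0: unsigned integer

def encode_text(s: str) -> bytes:
    data = s.encode("utf-8")
    return encode_head(3, len(data)) + data  # major type 3: text string
```

For example, `encode_uint(10)` is the single byte `0x0a`, and `encode_text("hi")` is `b'\x62hi'`; that regularity is what keeps real implementations tiny.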
CBOR is also used in WebAuthn. Usage in a web spec means to me that someone smart considered it a sane choice, and more importantly that the format is here to stay.
It's great that CBOR is being accepted more widely, but I'm personally curious why WebAuthn chose CBOR instead of JSON. WebAuthn is a web browser feature, so why would the W3C introduce a new data exchange format in their specs? Maybe WebAuthn needed a binary data type?
That’s strange, because CBOR is almost literally MessagePack that got an RFC and has extensions. I can’t remember what MessagePack does for online streaming and indefinite lengths.
Looked at it again; it seems memory management is a bit of an issue. It supports a memory allocation callback, but not just handing it a buffer to work with (though I guess allocation should be predictable).
Also, I don't know how they got "code sizes appreciably under 1 KiB". On my STM32F1, in release mode with -Os, it adds about 12 kB.
For reference, I’m using TinyCBOR because it’s included with Amazon FreeRTOS.
You’re on your own for malloc, which for me is great because FreeRTOS Heap4 management is quite good. So I malloc an object I’m decoding into and parse away.
There are two options for parsing arrays and strings/bytestrings, and I just chose the one where I specify the pointer to use, versus them using normal malloc and then free() later.
I really like this setup. I made a deinit(bad_message) that works wherever it failed (parse, validate, eval, etc.); it goes through and looks for pointers that I would previously have malloc’ed.
There is another popular library but I forget what it’s called.
The link explains that CouchDB can have replicas on mobile phones and in web browsers, meaning clients don't always have to be connected to the internet.
> The Couch Replication Protocol lets your data flow seamlessly between server clusters to mobile phones and web browsers, enabling a compelling offline-first user-experience
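A toy sketch of the offline-first idea behind such replication (my own simplification, not the actual Couch Replication Protocol): each side tracks a sequence number, and a sync ships only the documents changed since the last checkpoint, so a client can go offline and catch up later.

```python
def replicate(source: dict, target: dict, checkpoint: int) -> int:
    """Copy docs changed on source since the last checkpoint.

    source/target map doc_id -> (seq, body); returns the new checkpoint.
    """
    new_checkpoint = checkpoint
    for doc_id, (seq, body) in source.items():
        if seq > checkpoint:               # changed since the last sync
            target[doc_id] = (seq, body)
            new_checkpoint = max(new_checkpoint, seq)
    return new_checkpoint
```

Re-running the sync with the returned checkpoint copies nothing new, which is what makes repeated catch-up cheap.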
> Speaking at the BlueHat security conference in Israel last week, Microsoft security engineer Matt Miller said that over the last 12 years, around 70 percent of all Microsoft patches were fixes for memory safety bugs.
It seems like the Windows ecosystem hasn’t benefited from the improvements made by the research community. For example, Valgrind doesn’t run on Windows. Cross platform applications (like the ones studied in the referenced paper) don’t have nearly the same level of memory problems. I think that presenting the 70% figure as inherent to developing in C/C++ is misleading because of this. In fact, a brand new project could probably reach 0 (or very close to it) memory bugs in C++ by following modern testing practices and using the variety of dynamic and static analyzers that exist today.
Microsoft has lots of state-of-the-art dynamic and static analysis tooling for Windows. You don't hear much about it because a lot of it is closed source. E.g. here's some info about some of the static annotations they use in the kernel:
https://docs.microsoft.com/en-us/windows-hardware/drivers/de...
The links in the sidebar point to a lot of other stuff. If you look at the publications of MSR's software researchers, many of whom are very good, you will see lots of papers about finding bugs in Windows, some of which have been productized.
> In fact, a brand new project could probably reach 0 (or very close to it) memory bugs in C++ by following modern testing practices and using the variety of dynamic and static analyzers that exist today.
A bold claim to offer without evidence. Unfortunately even the best organizations have so far failed to achieve this.
> Microsoft has lots of state-of-the-art dynamic and static analysis tooling for Windows.
Right. If you look at the linked article, the Microsoft Engineer claimed 70% of security bugs in Microsoft products are caused by memory errors. Does Microsoft apply the same tools to all their products or only Windows? Do these tools even exist for other products?
> A bold claim to offer without evidence.
If one writes a new C++ program, tested with >75% code coverage, tested under Valgrind, passing Coverity checks and Clang static analysis, and they followed the best practices for hardening the host kernel, and told me that they still had an exploitable memory bug, I would be surprised. Notice that performing all those steps is still less effort than learning Rust and building the program in that. And you’d still have to harden your kernel and test anyway.
The evidence? NGINX and Linux are written in C. If the situation were so dire, why isn’t every computer in the world compromised right this second?
And the project has some of the best testing and practices in the world: constant fuzzing, significant test coverage [0], no doubt memory sanitizers, etc.
It's increasingly clear that large projects written in memory-unsafe languages will contain memory unsafety.
> The evidence? NGINX and Linux is written in C. If the situation was so dire, why isn’t every computer in the world compromised right this second?
Not hyperbole. Most of these bugs are never known to be exploited by attackers.
> Check the stats
In your first link, there was one memory corruption vulnerability in Chrome last year. If we're looking at RCEs, CVE-2019-5762 and CVE-2019-5756 appear to have the same root cause (a memory bug), and CVE-2018-6118, CVE-2018-6111, and CVE-2017-15401 (which is also the memory corruption vulnerability) are also memory bugs. So it looks like Chrome had ~4 serious memory vulnerabilities last year.
Don't have time to dig right now, but it appears similar observations hold for [1].
> Most of these bugs are never known to be exploited by attackers.
You have moved the goalposts. Of course there are lots of reasons why a bug might not be exploited by attackers, e.g. "the attackers exploited some other bug" or "no-one uses that software". That is not reassuring.
> In your first link, there was one memory corruption vulnerability in Chrome last year.
I don't know how you determined that, but it's just wrong.
https://www.cvedetails.com/vulnerability-list/vendor_id-1224...
Bugs 2, 3, 4, 8, 9, 10, 14 and 15 are obviously memory safety vulnerabilities. Many of the others probably are too, if you dig into them.
> Or that the exploit is so difficult it is practically impossible to attack.
"That bug is so difficult to exploit, it is practically impossible to use in an attack" does not have a good track record in the face of determined and ingenious attackers. Worse, once the attackers figure out how to overcome the difficulties, that knowledge spreads and is often packaged into kits that make it easier for the next bug.
> The parent was talking about vulnerabilities, not bugs.
I have no idea what you're talking about. Bugs 2, 3, 4, 8, 9, 10, 14 and 15 in that list are serious memory safety vulnerabilities that were found in Chrome last year, contrary to your assertion that Chrome only had four last year.
> If you look at the linked article, the Microsoft Engineer claimed 70% of security bugs in Microsoft products are caused by memory errors. Does Microsoft apply the same tools to all their products or only Windows? Do these tools even exist for other products?
They recently released an AddressSanitizer port for MSVC, and they've had Valgrind-like functionality for Windows userspace for over a decade (see https://www.usenix.org/legacy/events/vee06/full_papers/p154-...), but
I don't know of any public source describing what tools they use across their product range, so I don't know. They're well resourced, well motivated, and not stupid, so it would be surprising if they don't use the technology available.
I know that highly capable organizations, e.g. the Chrome and Firefox teams, do use state-of-the-art tools and practices in their browsers and get similar results to the Microsoft 70% number.
> I would be surprised
Check out Firefox and Chrome, for example, and be surprised.
> learning Rust
This isn't about Rust, but FWIW learning Rust doesn't seem so bad when you compare it to just the learning required to keep up with the ever-growing complexity of C++. (See e.g. Scott Meyers refusing to handle errata for his books because his C++ knowledge is obsolete after a few years out of the game.) Not to mention learning how to use and deploy in CI all the static and dynamic analysis tools you need to keep your C++ code safe(-ish).
> I know that highly capable organizations, e.g. the Chrome and Firefox teams, do use state-of-the-art tools and practices in their browsers and get similar results to the Microsoft 70% number.
Unfortunately, the thread’s grown too long and it’s starting to get difficult to track references and arguments. The paper “Have things changed now? An empirical study of bug characteristics in modern open source software” specifically studies Firefox and finds nowhere near the 70% number (18%).
You're citing a paper from 2006. I'm not even going to read it.
As a former Mozilla distinguished engineer (left Mozilla in 2016), I assure you memory safety bugs are the majority of exploitable Firefox security bugs.
https://htdp.org/2021-11-15/Book/part_prologue.html