Hacker News | new | past | comments | ask | show | jobs | submit | beckford's comments

> One reason is that far more of their students actually applied than would be expected.

This sentence is buried midway through the article. It would be good for a future post to expand on it: how much is explained by students simply applying more frequently to their local schools? It was the only plausible answer to "why" that I saw the article offer.


> students simply applying more frequently to their local schools

Cheaper to live at home nowadays


If you were looking at the parse link in the author's comment, you were looking at a spec (called a module interface in OCaml/OxCaml, similar to an interface in Java). The parse implementation is at https://github.com/avsm/httpz/blob/240051dd5f00281b09984a14a...

That said, I would be happy if all I needed to type in was a spec.


Since the link is paywalled, the TLDR paragraph is:

> Boasberg said in the ruling that the FTC had incorrectly excluded YouTube and TikTok from the market where it challenged Meta's dominance. "Even if YouTube is out, including TikTok alone defeats the FTC’s case," the judge said.

I agree. Meta does not have a monopoly since they compete with TikTok.


> The first main disadvantage is that they require the kernel to support syscall tracing, which essentially means they only work on Linux. I have Ideas™ for how to get this working on macOS without disabling SIP, but they're still incomplete and not fully general; I may write a follow-up post about that. I don't yet have ideas for how this could work on Windows, but it seems possible.

On Windows, Linux, and also macOS with SIP disabled (as implied, disabling is a bad idea), the https://github.com/jacereda/fsatrace executable exists today and can trace filesystem access. It is used by the Shake build system.

In particular, https://neilmitchell.blogspot.com/2020/05/file-tracing.html mentions that Shake copies system binaries to temporary folders to work around the SIP protection. That blog post also mentions other problems and solutions (like library preloading).


Good advice but still:

> investors warn the good times won’t last forever

Umm ... these aren't short-term investors. They are investing because they think at least one of their bets will pay off.


Introduction paragraph:

Metalang99 is a firm foundation for writing reliable and maintainable metaprograms in pure C99. It is implemented as an interpreted FP language atop preprocessor macros: just #include <metalang99.h> and you are ready to go. Metalang99 features algebraic data types, pattern matching, recursion, currying, and collections; in addition, it provides means for compile-time error reporting and debugging. With our built-in syntax checker, macro errors should be perfectly comprehensible, enabling convenient development.


Not OP, but I think it does run Postgres as a process. However, IMHO the general use case for SQL is for external actors (humans, machines) to get access to the underlying data in a structured way. So I see a benefit for a true in-process embedding of Postgres if the process exposed a Postgres TCP/IP port 5432, etc. (Hook your software up to a query tool, a reporting interface, etc.)

Beyond that, why care whether the "embedding" involves a spawned process? It still works great for integration tests which I suspect is the main use case, and for specialized data analysis software where a spawned process is no big deal.


Can you have a socket that's only shared between a parent and child process?

This sounds like it could be pretty useful.


Sure, socketpairs on Linux.


For something like this I would love to see a formal spec to go along with the examples.


I do appreciate that they lead with the examples. They convey 90% of the important information. TBH, having worked with YAML just enough to get by with k8s deployments, I could immediately spot how this would be an improvement.


Yeah, I don't disagree. I'd go further and say the examples are on-point for a "human oriented" language. But the formal spec reveals how simple or complicated this language is. (And I'm also writing this from the perspective of someone who uses a programming language that does not have a HOML implementation).


Since Cap'n Web is a simplification of Cap'n Proto RPC, it would be amazing if eventually the simplification traveled back to all the languages that Cap'n Proto RPC supports (C++, etc.). Or at least could be made to be binary compatible. Regardless, this is great.


Yeah I now want to go back and redesign the Cap'n Proto RPC protocol to be based on this new design, as it accomplishes all the same features with a lot less complexity!

But it may be tough to justify when we already have working Cap'n Proto implementations speaking the existing protocol, that took a lot of work to build. Yes, the new implementations will be less work than the original, but it's still a lot of work that is essentially running-in-place.

OTOH, it might make it easier for Cap'n Proto RPC to be implemented in more languages, which might be worth it... idk.


Disclaimer: I took over maintenance of the Cap'n Proto C bindings a couple years ago.

That makes sense. There is some opportunity though, since Cap'n Proto RPC had always lacked a JavaScript RPC implementation. For example, I had always been planning on using the Cap'n Proto OCaml implementation (which had full RPC) and using one of the two mature OCaml->JavaScript frameworks to get a JavaScript implementation. Long story short: not now, but I'd be interested in seeing if Cap'n Web can be ported to OCaml. I suspect other language communities may be interested. Promise chaining is a killer feature and was (previously) difficult to implement. Aside: promise chaining is quite undersold in your blog post; it is co-equal to capabilities in my estimation.


I tried using the C library recently but was turned off by the lack of bounds checking. I’m not sure how anyone could reasonably accept packets over the wire which allow arbitrary memory access. Am I misunderstanding? Any hope this can be fixed?


You mean redesign Cap'n Proto to not have a schema? Or did you mean the API, not the protocol?


Here is the Cap'n Proto RPC protocol:

https://github.com/capnproto/capnproto/blob/v2/c%2B%2B/src/c...

That's just the RPC state machine -- the serialization is specified elsewhere, and the state machine is actually schema-agnostic. (Schemas are applied at the edges, when messages are actually received from the app or delivered to it.)

This is the Cap'n Web protocol, including serialization details:

https://github.com/cloudflare/capnweb/blob/main/protocol.md

Now, to be fair, Cap'n Proto has a lot of features that Cap'n Web doesn't have yet. But Cap'n Web's high-level design is actually a lot simpler.

Among other things, I merged the concepts of call-return and promise-resolve. (Admittedly, CapTP did it that way before I even designed Cap'n Proto. Splitting them into two separate concepts in Cap'n Proto was a complete mistake on my part, but it seemed to make sense at the time.)

What I'd like to do is go back and revise the Cap'n Proto protocol to use a similar design under the hood. This would make no visible difference to applications (they'd still use schemas), but the state machine would be much simpler, and easier to port to more languages.


I was trying to port Cap'n Proto to modern C# as a side project when I was unemployed, since the current implementation is years old and new C# features have been released that would make it much nicer to use.

I love the no-copy serialization and object capabilities, but wow, the RPC protocol is incredibly complex. It took me a while to wrap my head around it, and I often had to refer to the C++ implementation to really get it.


Obviously C is the ultimate compiler of compilers.

But I would call Rust, Haxe, and Hack production compilers. (As mentioned by sibling, Rust has bootstrapped itself since its early days. But that doesn't diminish that OCaml was the choice before bootstrapping.)


Most C compilers are written in C++ nowadays.


Yes, C and C++ have an odd symbiosis. I should have said C/C++.


Most C and C++ developers take umbrage at combining them. Since C++11, and especially C++17, the languages have diverged significantly. C remains largely compatible (outside of things like uncast malloc), since most of its rules are still valid in C++; but both have gained fairly substantial incompatibilities with each other. Writing a pure C++ application today will look nothing like a modern C app.

RAII, iterators, templates, object encapsulation, smart pointers, data ownership, etc. are entrenched in C++; while C is still raw pointers, no generics (no, _Generic doesn't count), procedural, void* casting, manual malloc/free, etc.

I code in both, and enjoy each (generally for different use cases), but certainly they are significantly differing experiences.


Unfortunately we still have folks writing C++ in the style of pre-C++98 with no desire to change.

It is like adopting TypeScript, but the only thing they do is rename the file extension for better VS Code analysis.

Another one is C++ "libraries" that are plain C with extern "C" blocks.


Sure, and we also still have people coding in K&R-style C. Some people are hard to change in their ways, but that doesn't mean the community/ecosystem hasn't moved on.

> Another one is C++ "libraries" that are plain C with extern "C" blocks.

Sure, and you also see "C Libraries" that are the exact same. I don't usually judge the communities on their exceptions or extremists.

