Hacker News | pbaam's comments

https://ziglang.org/documentation/master/#opaque

> opaque {} declares a new type with an unknown (but non-zero) size and alignment. It can contain declarations the same as structs, unions, and enums

So it can contain methods, just like structs. The only thing it cannot have is fields (and the example above has none). An opaque type can only be used through a pointer; it's like a `typedef void* ...` in C, but with the possibility of adding declarations within a namespace

Edit: the documentation doesn't contain any example of an opaque type with declarations inside, but here is one: https://codeberg.org/andrewrk/daw/src/commit/38d3f0513bf9bfc...


I guess he means that in order to achieve incremental compilation they need to write their own code generation and linker for every architecture and object format. This is needed because incremental compilation here doesn't just mean storing object files in a cache directory (which is how it has always worked). They also want to cache every analyzed function and declaration, so they have to serialize compiler state to a file. But after analysis is done, LLVM starts code generation from scratch (which is the time-expensive part, even in debug builds)


Yes, but isn't that an implementation detail? Shouldn't they prioritize getting to 1.0 (the language itself) and then work on implementation details like that? I mean, it's a monumental task to write a compiler and linker from scratch!


Well, if your compilations turn out to be sub-millisecond, it's not an implementation detail :)* As of now it is only supported for x86_64 Linux (ELF only) and it has some bugs; incremental compilation is in its very early stages. Andrew explained in the 2024 roadmap video[1] why they are going so deep on the multi-platform toolchain (beyond incremental compilation):

- Fast build times (closely related to IC, but LLVM gives very slow development iterations, even for the compiler development)

- Language innovations: besides IC, async/await is a feature Andrew determined was not feasible to implement with LLVM's coroutines. Async will likely not make it into 1.0, as noted in the 0.13 release notes. It hasn't been discarded yet, but neither is it the priority.

- There are architectures that don't work very well in LLVM: SPARC and RISC-V are the ones I remember

My personal view is that a language meant to compete with C cannot have a hard dependency on a C++ project. That, and it's appealing to have an alternative to LLVM when you want to do some JIT but don't want to bring in a heavy dependency

[1] https://www.youtube.com/watch?v=5eL_LcxwwHg

* There is also the `--watch` flag for `zig build`, which re-runs every step (IC helps here) every time a file is saved.

[edit: formatting]


Ironically, all major production C compilers evolved to be written in C++.

Also, if they value compilation speed that much, maybe they shouldn't push so hard on always compiling from source, without any support for Zig binary libraries.


> Shouldn't they prioritize getting to 1.0 (the language itself)

Nope. Different languages have different priorities and different USPs. For Zig, sub-second / incremental compilation and a cross-compiling toolchain are flagship features. Without those, there is no point in releasing 1.0.


The cat command can be omitted there, as tee reads from standard input by default, even when stdin points to a terminal. I was going to mention an actually useful (and unavoidable in bash) use of cat with ssh, which is to pass standard input through untouched and redirect it to a remote file:

  <file ssh 'cat >file'
And you could just use scp, but I've found clients without scp and servers with the SFTP subsystem disabled.


I want to paste from my clipboard, not copy another file.


I know, it's intentionally unrelated. But if you read my first sentence, you can do what you are interested in without using cat.

  sudo tee somefile > /dev/null
And you will be able to paste from your clipboard or write anything you want, without cat or piping.


It will still work. If you look at the type signature of @cImport in the language reference[1], it returns a type, just like @import, so you can call @typeInfo on it. But instead of writing

  const win32 = @cImport({
    @cInclude("windows.h");
    @cInclude("winuser.h");
  });
You will write:

  const win32 = @import("win32");
Where the module "win32" is declared in build.zig.

[1] https://ziglang.org/documentation/master/#cImport


What a coincidence: some days ago I was reading some HN posts related to lighttpd and I found [1]. The link is dead and it has inappropriate content, so use archive.org. The author doesn't go into much detail about why nginx being purchased is a problem, focusing instead on how to configure lighttpd. And the first comment predicts the hypothetical case of F5 being problematic.

[1] https://news.ycombinator.com/item?id=19413901


I have been using lighttpd, which can also host static content and do proxying. On top of that, lighttpd supports CGI/FastCGI/etc. out of the box, and it takes only 4 MB of memory by default at start, so it works for both low-end embedded systems and large servers.


I've recently needed to build a Docker image to run a static site. I compiled busybox with only its httpd server. It runs with 300 KB of RAM in a scratch image with tini.

I didn't compile FastCGI support into my build, but it can be enabled.
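A sketch of what such an image might look like (the binary names, paths, and flags here are assumptions for illustration, not the poster's actual build):

```dockerfile
# Build steps omitted: assume busybox was compiled statically with only
# its httpd applet enabled, and tini-static was fetched from its releases.
FROM scratch
COPY busybox-httpd /bin/httpd
COPY tini-static   /tini
COPY site/         /www/
EXPOSE 80
# tini (PID 1) forwards signals and reaps zombies; httpd stays in the
# foreground (-f), listening on port 80 (-p) with /www as docroot (-h).
ENTRYPOINT ["/tini", "--"]
CMD ["/bin/httpd", "-f", "-p", "80", "-h", "/www"]
```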


Yes, busybox httpd or civetweb is even smaller; both are around 300 KB.

For tini, do you mean https://github.com/krallin/tini? How large is your final Docker image? Why not just use alpine in that case, which is musl + busybox?


Yep, that tini. The Docker image is about 1.90 MB. It's a repack of https://homer-demo.netlify.app/ and I pre-gzipped a few of the compressible file extensions too, so they can be served compressed.

In this case, I didn't need alpine. I generally aim to get the image as minimal as possible without too much hassle. I end up doing stuff like this a lot when I feel a community image may be too bloated and something like alpine or distroless can be used. Entry-point scripts have all kinds of env vars and a shell dependency; I'd rather rebuild the image to cater to my needs, execute the binary directly, and mount in any config via k8s.


I used it to avoid having to learn lots of stuff about web configuration that bigger servers might require. Between lighttpd and DO droplets, I could run a VM per static site for $5 a month each with good performance. I'm very grateful for lighttpd!


> Sniffing the traffic from the device showed that it was connecting out to tcp.goodwe-power.com:200001

Is 200001 the right port number? Very good read anyway.


Seems it's corrected now, one zero less :-)


As ports are 16-bit ints, I assume not.


I remember this option was mentioned in a 3 hour video[1] where Daniel Stenberg himself went through most of the curl command line options.

[1] https://www.youtube.com/watch?v=V5vZWHP-RqU


They want to replace LLVM with their own backends. Zig's master branch can now be compiled without LLVM (and without CMake; see bootstrap.c) on x86_64 Linux, because they implemented their own ELF linker and x86 code generation. The talk explains why they want this: most of the compile time is spent in LLVM, not in AST lowering or semantic analysis. Andrew also said that LLVM's coroutines weren't good enough to implement async/await.


(Discussion of the LLVM-free compiler begins at 10:30 in the video.)

As someone who hasn't been following this: improved compile times seem achievable, but they surely can't hope to compete with LLVM in terms of optimisation, can they?

Is the new backend intended to be used for quicker dev builds, or for final release builds too? From a look here [0] it seems to be the latter - full removal of LLVM for all builds - which surprises me.

[0] https://github.com/ziglang/zig/issues/16270


> but they surely can't hope to compete with LLVM in terms of optimisation, can they?

This has been discussed more than once on Zig's Discord server. Quoting Andrew and Matthew Lugg's discussion in #compiler-devel about pull request 17892:

> mlugg: Shout-out to the people on Twitter and HN who are probably still saying "why would you try to compete with LLVM, LLVM is perfect and can do no wrong"

> andrewrk: worse, they're saying "LLVM is not great but it's the best mankind can achieve"

I think it's very appealing to have a project that focuses on fast build times and wants to seriously compete with LLVM in terms of the optimization pass pipeline, especially when you don't have a beefy computer. That said, for the time being there are no optimizations in Zig's own x86 backend (it also doesn't yet pass all behavior tests, as was pointed out in the talk, but it can build the Zig compiler itself and some other projects).

Cuik[1] is a project mentioned in the Q&A section which illustrates how a compiler can be fast and produce optimised builds at the same time.

[1] https://github.com/RealNeGate/Cuik


It's a full removal of LLVM code being linked into the compiler. Currently Zig calls LLVM's API to build with it. Instead the compiler will gain the ability to emit LLVM IR into files. Those files can be passed to a separate install of LLVM to produce final machine code.

As for the new backend vs LLVM, the new backend can be used for everything if it meets your needs. Initially LLVM will produce more optimized builds than Zig can by itself, but that is likely to change over time. LLVM isn't some magical blessing from the heavens; it's just software made by people. There's nothing besides effort and competence preventing another compiler from matching its optimization performance. Plus, while a lot of research effort went into discovering LLVM's optimizations, they have now been discovered, and they can be copied.


There is, however, no reason to imagine you can get similar optimizations yet go much faster.

Some of the optimization problems are just plain hard; indeed, the optimal choices are often undecidable for non-trivial cases, so LLVM is already trading time for better results. I have other reasons not to like LLVM, but I don't like the habit of blaming LLVM for how slow your compiler is, and Zig isn't alone in doing that.

The segue into optimisation was Andrew noting that there are too many bugs. I haven't seen Andrew speak often before, so maybe it was a joke I didn't get, but the impression I got reminded me of Herb Sutter's introduction of Cpp2 / CppFront, his "new syntax" (a C++ successor language by another name). Herb gives genuine complaints people have about C++, but rather than explain why his proposal would fix them (it wouldn't), he decides they're wrong and rewrites them, then explains how his proposal fixes these made-up complaints instead. So instead of "it's much too unsafe", Herb decides the "real" problem is that it's not easy enough to write; see, if it were easier you wouldn't have bugs, right? An audience member even called him out for that, but Herb was undeterred.

Like I said, maybe it's just a joke and I didn't get it. But if Andrew seriously thinks that "Zig has too many bugs => make the compiler faster" makes any sense, that's a problem.


Hi, core team member here (I'm quoted in a parent comment!). The problem with LLVM is not that optimization is slow - it's perfectly acceptable for release builds to take arbitrarily long for optimal binaries. The problem is how long it takes to emit debug builds.

Take building the Zig compiler itself in Debug mode. This process takes about 30 seconds running through the Zig pipeline (semantic analysis and generating LLVM IR), and then 90 seconds just waiting for LLVM to emit the binary. OTOH, when using our self-hosted x86_64 backend (which is now capable of building the compiler, although it's incomplete enough that it's not necessarily integrated into our development cycle quite yet), that 30 seconds is essentially the full build (there are a couple of extra seconds at the end flushing the ELF file).

I can tell you from first-hand experience that when fixing bugs, a huge amount of time is wasted just waiting for the compiler to build - lots of bugs can be solved with relative ease, but we need to test our fixes! Rebuilds are also made more common by the fact that LLVM has an unfortunate habit of butchering the debug information for some values even in debug builds, so we often have to rebuild with debug prints added to understand a problem. Making rebuilds 75% faster by just ditching LLVM would make a huge difference. Introducing incremental compilation (which we're actively working on) would make these rebuilds under a second, which would improve workflows a crazy amount. This would hugely increase our development velocity wrt both bugfixes and proposal implementation.

It's also important to note that we have quite a few compiler bugs which are [caused by upstream LLVM bugs](https://github.com/ziglang/zig/issues?q=is%3Aissue+is%3Aopen...). LLVM often ships with regressions which we report before releases come out and they simply don't fix. In the long term, eliminating the use of LLVM as our main code generation backend will mean that all bugs encountered are our own, and thus can be solved more easily.


To specifically address concerns about optimized builds: the LLVM backend will still be available as an optional dependency configured via Zig's build system. Until Zig's own codegen is mature enough for optimized builds, LLVM can be used to generate the final optimized binary. But there is a possibility that Zig's own backend will implement features (e.g. coroutines for async/await) that are not compatible with LLVM. That would force LLVM out for good.


> Currently Zig calls LLVM's API to build with it. Instead the compiler will gain the ability to emit LLVM IR into files. Those files can be passed to a separate install of LLVM to produce final machine code.

If anything this will further worsen LLVM-powered build-times, surely? What's the motivation here? Does LLVM have API-stability issues that are avoided when using files?

> while a lot of research effort has been put into finding LLVM's optimizations, they've been found and can be copied.

This strikes me as understating the amount of effort that goes into the major compilers. We're talking about hundreds of thousands of developer-hours of work. Targeting only x86-64 will greatly reduce the workload, but still.

Learning about the optimisations performed by modern compilers is the easy part, building a serious compiler is a lot of development work. The optimisations performed by LLVM are presumably pretty similar to those performed by GCC, but that doesn't mean LLVM was easy to develop.

I stumbled across this comment, by someone apparently familiar with LLVM (hanna-kruppe), in a thread discussing moving Rust away from LLVM. [0]

> It's hard to overstate how many people are agreeing on using LLVM and how much this consensus helps all involved: there's mountains of experience, shared code, interoperability, cooperation, etc. in and around LLVM and its community. Any rewrite that does not have the full backing of the LLVM community automatically loses this.

Also, here's an old HackerNews thread discussing Zig's announcement to move away from LLVM. [1]

[0] https://users.rust-lang.org/t/proposal-rllvm-rust-implementa...

[1] https://news.ycombinator.com/item?id=36529456


> If anything this will further worsen LLVM-powered build-times, surely? What's the motivation here?

The key motivation is that this will allow Zig to drop its dependencies on the LLVM libraries, instead using a separate LLVM installation to compile the bitcode file. This is nice because it simplifies the build process and drops the Zig compiler binary size by a full order of magnitude - see https://github.com/ziglang/zig/issues/16270 for more details on that. It also allows us to implement incremental compilation on the bitcode file itself to drop compile times a little, which isn't really possible to do through the LLVM API since it doesn't implement certain operations.

In terms of speed, there's no reason to expect this will worsen our build times; in fact, we expect it will be faster. As with any common C++ API, LLVM's IRBuilder comes with a lot of overhead from how LLVM is written. What we're going to do here is essentially the same work that IRBuilder is doing, but in our own code, for which we will be focusing on performance.

You can find more details on this at https://github.com/ziglang/zig/issues/13265.

> ...but that doesn't mean LLVM was easy to develop.

To be clear, we aren't saying it will be easy to reach LLVM's optimization capabilities. That's a very long-term plan, and one which will unfold over a number of years. The ability to use LLVM is probably never going away, because there might always be some things it handles better than Zig's own code generation. However, trying to get there seems a worthy goal; at the very least, we can get our self-hosted codegen backends to a point where they perform relatively well in Debug mode without sacrificing debuggability.


Thanks for the detailed reply.

> You can find more details on this at https://github.com/ziglang/zig/issues/13265.

Thanks for the link, my thoughts mirror those of certik in the thread, which Andrew answered well.

> at the very least, we can get our self-hosted codegen backends to a point where they perform relatively well in Debug mode without sacrificing debuggability

Perhaps a useful point of comparison: the lightweight qbe compiler backend achieved compile times around a quarter of GCC's and Clang's, with the generated code taking very roughly 170% as long to execute as code from GCC or Clang. qbe has roughly 0.1% of the lines of code of those "big" compilers. [0] This should presumably be possible for Zig too, and could be a big win for Zig developers.

Closing the performance gap with LLVM, though, would presumably be extremely challenging and, respectfully, I can't see the Zig project achieving this. Compiler optimisation seems to be a game of diminishing returns. Even if this were achieved, optimised compilation would surely remain much slower than unoptimised.

[0] https://archive.fosdem.org/2022/schedule/event/lg_qbe/attach... (Relevant discussion: https://news.ycombinator.com/item?id=11555527 )


Lua's codebase is also great for getting started

