If it's just a snippet, and require.js made up 20% of the filesize, then why not just have a single file, or assume a single closure and concatenate a few files together? You'll get even better filesize savings from minifying inside a single closure. The formalised step up from this is SMASH, which is how d3 is built: https://github.com/mbostock/smash
IMO, a big upside of using a define / require pattern, even when the final library is fully concatenated, is that you have a clear picture of all of the variables that are available in a particular module.
It's a little ambiguous when using something like Smash, since you're not actually importing a module onto a variable.
As someone unfamiliar with the codebase, it's unclear to me where many of the variables are coming from (d3_interpolate, d3_rgb_names, d3_interpolateRgb, etc).
I'd imagine it makes managing dependencies a bit trickier.
Well yes, it's not ideal for all development, but you professed a need for smaller filesize, which it's ideal for. Naming conventions can alleviate most of the variable issues; it's just another way to solve the problem, with its own advantages and disadvantages.
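A hedged sketch of what that naming-convention style looks like (the `lib_` prefix and helpers here are made up, but the shape mirrors d3's `d3_`-prefixed internals): everything shares one closure, and the prefix signals which names are private helpers rather than imports.

```javascript
// Single-closure build: internal helpers share one scope and are
// marked by a naming convention instead of explicit imports.
var lib = (function () {
  // "lib_" prefix = internal helper, by convention.
  function lib_clamp(x) {
    return Math.min(255, Math.max(0, x));
  }

  function lib_rgb(r, g, b) {
    return 'rgb(' + lib_clamp(r) + ',' + lib_clamp(g) + ',' + lib_clamp(b) + ')';
  }

  // Only the public surface escapes the closure.
  return { rgb: lib_rgb };
})();

console.log(lib.rgb(300, -5, 128)); // → rgb(255,0,128)
```

A minifier can freely mangle `lib_clamp` and `lib_rgb` since they never leave the closure, which is where the filesize win over a runtime module loader comes from.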
(I don't use it myself, due to the current project being too large, so I use require.js in conjunction with amdclean which removes a lot of the module loader overheads.)
Rust relies on a newer version of LLVM than Emscripten is based on (as Emscripten-fastcomp is based on the PNaCl fork of LLVM, which is lagging behind). Once the PNaCl fork is updated and Emscripten rebases, or they rebase onto LLVM proper, then it should be doable fairly easily.
Not who you're replying to, but there's a similar class at Cambridge where you implement pong/game of life on an FPGA board. Some of the practical course notes are available publicly, though the full computer design notes referenced are not. There may be some interest in the basic sources and approach however: http://www.cl.cam.ac.uk/teaching/0910/ECAD+Arch/
(I linked to the course as I knew it a few years ago; it has since changed.)
Yes, this seems like a much better approach. I mean, creating components based on path state is completely trivial once you have state driven by a router, and it lets you be explicit about how state is moving around. react-router seems to be taking a more convention-over-configuration approach (this.props.activeRouteHandler? Really? Why does the top-level routing need to specify the deeply nested components? What if there are multiple components in the interface that depend on the route? etc. etc.).
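A rough sketch of the alternative being argued for (the route table and component names are illustrative, not react-router's API): the current path is plain state, and any number of components can independently derive what to render from it.

```javascript
// Route state lives in one place, driven by the router.
const appState = { path: '/users/42' };

// A plain data route table: pattern → render function.
const routes = [
  { pattern: /^\/users\/(\d+)$/, render: (id) => `UserPage(${id})` },
  { pattern: /^\/$/, render: () => 'HomePage' },
];

// Any component can call this; no top-level handler wiring needed.
function renderForPath(path) {
  for (const { pattern, render } of routes) {
    const match = path.match(pattern);
    if (match) return render(...match.slice(1));
  }
  return 'NotFound';
}

console.log(renderForPath(appState.path)); // → UserPage(42)
```

Because the path is just state, a sidebar, a breadcrumb, and the main view can all consult it without the top-level route having to thread handlers down to them.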
There are actually two main React libraries for ClojureScript, Om[1] and Reagent[2]. There are also other libraries for doing the templating if you prefer hiccup[3] or enlive[4] style templates, which I find a bit more readable than straight function calls.
Overall though, React seems to be shaping up as a great solution for building reactive templates that isn't tied into larger frameworks like the solutions within AngularJS or Ember.js.
Making the type system pluggable and a separate pass also means you can do cool things like interfacing your type system with your database [1]. And the Heterogeneous and Value types mean you can be very flexible and precise with your requirements while also coding in a mostly idiomatic Clojure way. (Though things are obviously still raw or unimplemented, the base is looking good and shaping up quickly.)
Yes, although that's not the only way to do these things: F# has type providers, for instance. Although I've been burnt by C# web services and Hibernate often enough to be sceptical of the idea.
Type providers, while awesome, are simply code generators that run interleaved with the type checking pass.
What's happening in that linked Gist is a bit different: it's generating type signatures that are devoid of their own behavior. In order to recreate type providers (or Hibernate-like things), you'd also need to give a type as a parameter to a macro for code generation. Since you can invoke the type checker at any time, you can interleave code generation and type checking in much the same way as F# does.
I don't think it's a good idea to generate code and strongly coupled types at the same time. An alternative approach would be to take some common source data structure, generate code and then generate types to validate that code. Those are two separate transformations, not one like with type providers. Type providers mean that type checking, compilation, and execution are coupled. In this Gist, you could run the code with or without type checking and you could ignore the code generation completely and run type checking for internal consistency. Much more flexible.
You can define functions inside the database and then use them inside a transaction [1] so that you can get atomic update, if that's the question. You can also mark attributes as noHistory [2] if you don't care about the past state.