Porffor compiles the JS to WASM, so it would be kind of a waste. Though there might be no reason the two projects cannot share some logic, like parsing the JS and such. I kind of doubt this is why it's being funded. It sounds like a useful project.
No. Completely green field. Well, it _was_. I believe they’ve recently accepted that third party libraries will be allowed in the non-Serenity OS version.
Given that defunkt's current project is a game engine/IDE, makes sense he's interested in an ultra-fast compiler for a popular language that builds compact sub-MB native executables...
I have thought about doing this and I just can't get around the fact that you can't get much better performance in JS. The best you could probably do is transpile the JS into V8 C++ calls.
The really cool optimizations come from compiling TypeScript, or something close to it. You could use types to get enormous gains. Anything without typing gets the default slow JS calls. Interfaces can get reduced to vtables or maybe even straight calls, possibly on structs instead of maps. You could have Int and Float types that degrade into Number and otherwise just sit inside registers.
The main problem is that both TS and V8 are fast-moving, non-standard targets. You could only really do such a project with a big team. Maintaining compatibility would be a job by itself.
At least without additional extensions, TypeScript would help less than you think. It just wasn’t designed for the job.
As a simple example - TypeScript doesn’t distinguish between integers and floats; they’re all just numbers. So all array accesses need casting. A TypeScript designed to aid static compilation likely would have that distinction.
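A minimal sketch of the problem (the function and names here are made up for illustration):

  function sum(xs: Float64Array): number {
    let total = 0;
    for (let i = 0; i < xs.length; i++) {
      // To TypeScript, `i` is just `number` (an IEEE double), so an AOT
      // compiler must either prove it only ever holds an integer or emit
      // a float-to-int conversion before every indexed access.
      total += xs[i];
    }
    return total;
  }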
But the big elephant in the room is TypeScript’s structural subtyping. The nature of this makes it effectively impossible for the compiler to statically determine the physical structure of any non-primitive argument passed into a function. This gives you worse-than-JIT performance on all field access, since JITs can perform dynamic shape analysis.
It’s advertised as that, and it’s a cool project, but while it’s definitely a statically typed language that reuses TypeScript syntax, it’s not clear to me just what subset of the actual TypeScript type system is supported. That’s not necessarily bad; TypeScript itself is very unclear about what its type system actually is. I just think the tagline is misleading.
Probably a better way to think about AssemblyScript is first as a DSL for WASM, and second as providing a subset of TS syntax and authoring semantics to achieve that. The syntax and semantics are closer to TS than the type system is. At least that was my experience when I explored it some time back.
The tagline on the site seems to be "A TypeScript-like language for WebAssembly", which seems pretty clear to me that it's not pretending to be a strict subset or anything.
The 2019 paper[1] says: “STS primitive types are treated according to JavaScript semantics. In particular, all numbers are logically IEEE 64-bit floating point, but 31-bit signed tagged integers are used where possible for performance. Implementation of operators, like addition or comparison, branch on the dynamic types of values to follow JavaScript semantics[.]”
I think the even bigger elephant in the room is that TypeScript's type system is unsound. You can have a function whose parameter type is annotated to be String and there's absolutely no guarantee that every call to that function will pass it a string.
This isn't because of `any` either. The type system itself deliberately has holes in it. So any language that uses TypeScript type annotations to generate faster/smaller code is opening itself to miscompiling code and segfaults, etc.
It might be useful for an interpreter though. I believe that in V8 you have this probabilistic mechanism in which, if the interpreter "learns" that an array consistently contains e.g. numbers, it will optimize for numbers and start accessing the array in a more performant way. TypeScript could be used to inform the interpreter even before execution.
(My supposition, I'm not an interpreter expert)
So - I know this in theory, but avoided mentioning it because I couldn’t immediately think of any persuasive examples (whereas subtype polymorphism is a core, widely used, wholly unrestricted property of the language) that didn’t involve casts or any/unknown or other things that people might make excuses for.
Do you have any examples off the top of your head?
Here's an example I constructed after reading the TS docs [1] about flow-based type inference and thinking "that can't be right...".
It yields no warnings or errors at the compile stage but gives a runtime error based on wrong flow-based type inference. The crux of it is that something can be a Bird (with a "fly" function) but can also have any other members, like "swim", because of structural typing (flying is the minimum expected of a Bird). The presence of a spurious "swim" member in the bird causes tsc to infer, in a conditional that checks for a "swim" member, that the animal must be a Fish or Human, when it is not (it's just a Bird with an unrelated, non-function "swim" member).
  type Fish = { swim: () => void };
  type Bird = { fly: () => void };
  type Human = { swim?: () => void; fly?: () => void };

  function move(animal: Fish | Bird | Human) {
    if ("swim" in animal) {
      // TSC wrongly infers here that the presence of "swim" implies animal must be a Fish or Human
      onlyForFishAndHumans(animal);
    } else {
      animal; // narrowed to Bird
    }
  }

  function onlyForFishAndHumans(animal: Fish | Human) {
    // (receives the bird, which is not a Fish or Human)
    if (animal.swim) {
      animal.swim(); // Error: attempt to call "not-callable".
    }
  }

  const someObj = { fly: () => {}, swim: "not-callable" };
  const bird: Bird = someObj;
  move(bird);
  // runtime error: [ERR]: animal.swim is not a function
This narrowing is probably not the best. I'm not sure why the TS docs suggest this approach. You should really check the type of the key to be safer, though it's still not perfect.
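For example, reusing the types from the example above, adding a runtime typeof check catches the bad Bird even though the static narrowing is still wrong (a sketch, not a complete fix):

  function saferMove(animal: Fish | Bird | Human) {
    if ("swim" in animal && typeof animal.swim === "function") {
      // TS still narrows `animal` to Fish | Human here, but the typeof
      // check now rejects the Bird whose `swim` is a non-callable string.
      animal.swim();
    }
  }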
Compilers don't really have the option of just avoiding non-idiomatic code, though. If the goal is to compile TypeScript ahead of time, the only options are to allow it or to break compatibility, and breaking compatibility makes using ahead-of-time TypeScript instead of some native language that already exists much less compelling.
> I think the even bigger elephant in the room is that TypeScript's type system is unsound.
Can you name a single language that is used for high-performance software and whose type system is sound? To speed up the process, note that none of the obvious candidates have sound type systems.
Java, C#, Scala, Haskell, and Dart are all sound as far as I know.
Soundness in all of those languages involves a mixture of compile-time and runtime checks. Most of the safety comes from the static checking, but there are a few places where the compiler defers checking to runtime and inserts checks to ensure that it's not possible to have an expression of type T successfully evaluate to a value that isn't an T.
TypeScript doesn't insert any runtime checks in the places where there are holes in the static checker, so it isn't sound. If it wasn't running on top of a JavaScript VM which is dynamically typed and inserts checks everywhere, it would be entirely possible to segfault, violate memory safety, etc.
I can't speak for the others, but Java allows assigning arrays of subtypes to variables declared as an array of a supertype, which isn't sound:
class A {}
class B1 extends A {}
class B2 extends A {}
A[] arr = new B1[1];
arr[0] = new B2();
In the above example, the only way that assigning an array of `B1` to a variable typed as an array of `A` is safe is if only valid `B1` objects are ever put into it, at which point there's no reason not to just type the variable as a `B1` array. It still compiles fine, though!
Because the context here is the idea of using the type system to justify removing those sorts of dynamic checks to generate better code.
The dynamic checks in the Java case are a well-defined and narrowly-targeted part of the language semantics: you get an exception on mismatched array writes, out-of-bounds access, etc., but when an expression produces a value it always matches its type.
TypeScript defers these kinds of type system violations to the underlying JavaScript engine, which makes things work out (sometimes with an exception, but sometimes just proceeding with a value that doesn't match the expression's type) using precisely the dynamism we wanted to get rid of. And this can leak out and cause arbitrarily-far-away parts of the program not to match their types, either.
> Because the context here is the idea of using the type system to justify removing those sorts of dynamic checks to generate better code
It's more specific than that; the discussion is about writing an ahead-of-time compiler, which necessarily wouldn't be running on a JavaScript engine. The compiler could just as easily emit code that always throws a runtime exception instead of emitting an equivalent to whatever the JavaScript would do.
Okay, I think I understand now. My intuition was that "soundness" refers to whether the compiler catches all invalid usage of types, and that soundness is violated if that doesn't happen; it sounds like the way you're using the term measures whether the invalid usage is caught either at compile time or at run time, and soundness is violated only if it's not caught by any of the checks. I don't know whether my narrower understanding of soundness is incorrect or not, but it's at least more clear to me now why you grouped Java and JavaScript differently in terms of soundness.
All of these have, at the very least, escape hatches that make the type system unsound overall, and probably other issues: https://counterexamples.org/ lists a few for at least Scala and Haskell. Perhaps this is not a satisfying answer to you, but an "unsound type system" is a technical, precise notion, and this is what people who parrot "typescript is unsound" are referring to. You cannot just reply "well there are a few runtime checks so it's all good."
> an "unsound type system" is a technical, precise notion
Yup. Milner's "can't go wrong", progress and preservation, etc.
> You cannot just reply "well there are a few runtime checks so it's all good."
Sure I can. I really like how Shriram Krishnamurthi describes soundness in Programs and Programming Languages [1]. I can't think of a better definition for soundness than:
"The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t."
The "we obtain the value v" part is critical. If an expression of type e doesn't produce a value at all (it terminates or throws an exception), then we have also satisfied soundness.
Indeed, note that he also says:
"Any rich enough language has properties that cannot be decided statically (and others that perhaps could be, but the language designer chose to put off until run-time to reduce the burden on the programmer to make programs pass the type-checker). When one of these properties fails—e.g., the array index being within bounds—there is no meaningful type for the program. Thus, implicit in every type soundness theorem is some set of published, permitted exceptions or error conditions that may occur. The developer who uses a type system implicitly signs on to accepting this set."
A term like "soundness" for a programming language should be useful. We could, for example, define "evenality" as a property of programming languages where we say that a language whose built-in atomic types have names that are all an even number of letters has evenality and other languages don't. That's a well-defined concept and we could neatly partition extant languages into whether they have evenality or not. But who cares?
When it comes to soundness, the above definition from PAPL is useful for (at least) two concrete reasons:
1. When a user is reading code, if they see an expression has some type T, they can safely reason that any value the expression evaluates to will have type T and when they are reasoning about code surrounding that expression, they can rely on that fact.
2. Likewise, when a compiler is compiling code, it can safely assume that if an expression has type T, then all subsequent code that depends on the value of that expression can assume it has type T. The compiler can optimize safely and correctly based on that assumption.
Neither of these properties require that all type checks are performed at compile time. If the runtime throws an exception on out of bounds array indices, that still correctly preserves the soundness invariant that the type of an array element access is the type of the array element. The reader might have to think about the fact that the expression could throw. But they don't have to think about it evaluating to the wrong type.
If that's not your definition of soundness and you require a sound language to have zero runtime checks, then I'm not aware of any widely-used language that meets that requirement, nor do I see how it's a particularly useful term.
Note that it's not the case that every language is sound according to the above definition. C, C++, TypeScript, and Dart 1.0 (but not 2.0 and later) are all unsound. In the first two, it's possible to completely reinterpret memory as another type which leads to the majority of software security issues in the world. In the latter two, the only reason that doesn't happen is because the underlying execution environment doesn't rely on the static types of expressions at all.
JVM bytecode is a "language" and is proven to be sound. The languages that compile to that language, on the other hand, are a different kettle of fish.
This is specifically about type systems. It's easy to have a sound type system when you have no type system.
Also, I'm not too familiar with JVM bytecode, but if I load an i64 into two registers and then perform floating-point addition on those registers, does the type system prevent me from compiling/executing the program?
Can you say more about "proven to be sound"? Are you talking about a sound type system?
Fun fact: said type system has a 'top' type that is both the top type of the type system and the top half of a long or double, as those two actually take two slots while everything else, including references, takes only one. Made some sense when everything was 32-bit, less so today.
I doubt it's been proved to be sound. It shows up a lot on https://counterexamples.org/, although if I skim the issues seem to have been fixed since then.
I've run into messages of the sort "you can't use these features together" a few times before, and I assume at least sometimes those were lessons they had to learn the hard way.
I'm a little behind the times on Haskell (haven't used it for some years) – there always were extensions that made it unsound, but the core language was pretty solid.
Outside of really funky code, especially in code originally written in TS, you can assume the interface matches the actual underlying object. You could easily flag non-recognized-member accesses to interfaces and then degrade them back to plain object accesses.
Suppose you have some interface with fields a and c. If your function takes in an object with that interface and operates on the c field, what you want to be able to do is compile that function to access c at “the address pointed to by the pointer to the object, plus 8” (assuming 64-bit fields). Your CPU supports such addressing directly.
Because of structural subtyping, you can’t do that. It’s not an unrecognized member, but your caller might pass in an object with fields a, b, and c. This is entirely idiomatic. Now c is at offset 16, not 8. Because the physical layout of the object is different, you no longer have a statically known offset to the known field.
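To make that concrete (hypothetical layouts, same 8-byte fields as above):

  interface HasC { a: number; c: number }

  function readC(o: HasC) { return o.c; }

  const narrow = { a: 1, c: 2 };      // layout (a, c): c at offset 8
  const wide = { a: 1, b: 5, c: 2 };  // layout (a, b, c): c at offset 16
  readC(narrow);
  readC(wide); // entirely idiomatic TS, so readC can't be compiled down
               // to a single fixed-offset load for o.c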
I would bet that, especially outside of library code, 95+% of the typed objects are only interacted with using a single interface. These could be turned into structs with direct calls.
Outside of this, you can unify the types. You would take every interface used to access the object and create a new type that has all of their members. You can then either create vtables or monomorphize where it is used in calls.
At any point that analysis cannot determine the actual underlying shape, you drop to the default any.
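A rough sketch of the idea (a hypothetical example, not any real compiler's pipeline):

  interface Named { name: string }
  interface Aged { age: number }

  function greet(x: Named) { return "hi " + x.name; }
  function year(x: Aged) { return 2024 - x.age; }

  const person = { name: "Ada", age: 36 };
  greet(person); // accessed through Named
  year(person);  // accessed through Aged
  // Unify the two interfaces into one shape { name, age } with fixed
  // offsets; greet/year can then be monomorphized against that struct
  // (direct field loads) or dispatch through a per-shape vtable. Any
  // object whose shape can't be determined drops to the default any path.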
Which is exactly the kind of optimization JIT compilers are able to perform, and an AOT compiler can't do safely without PGO data. And even then, it can't re-optimize if the PGO happened to miss a critical path that breaks all the assumptions.
> Because of structural subtyping, you can’t do that
In practice v8 does exactly what you're saying can't be done, virtually all the time for any hot function. What you mean to say is that typescript type declarations alone don't give you enough information to safely do it during a static compile step. But modern JS engines, that track object maps and dynamically recompile, do what you described.
Oh, I thought JIT in your comment meant a single compilation. Either way, having TS type guarantees would obviously make optimizing compilers like v8's stronger, right? You seem to be arguing there's no value to it, and I don't follow that.
My claim is that the guarantees that TS provides aren't strong enough to help a compiler produce stronger optimizations. Types don't just magically make code faster - there's specific reasons why they can make code faster, and TypeScript's type system wasn't designed around those reasons.
A compiler might be able to wring some things out of it (I'm skeptical about obviouslynotme's suggestions in a cousin comment, but they seem insistent) or suppress some checks if you're happy with a segfault when someone did a cast...but it's just not a type system like, say, C's, which is more rigid and thus gives the compiler more to work with.
Contributor to Porffor here!
I actually disagree; there's quite a lot that can be improved in JS during compile time. There's been a lot of work creating static type analysis tools for JS that can do very, very thorough analysis. An example that comes to mind is [TAJS](https://www.brics.dk/TAJS/), although it's somewhat old.
> there's quite a lot that can be improved in JS during compile time
I wonder how much performance gain you expect to achieve. For simple CPU-bound tasks, C/Rust/etc. is roughly three times as fast as V8, and Julia, which compiles full scripts and has good type analysis, is about twice as fast. There is not much room left. C/Rust/etc. can be much faster with SIMD, multi-threading, and fine control of memory layout, but an AOT JS compiler might not gain much from these.
In my mind, the big room for improvement is eliminating the cost to call from JS into other native languages. In Node/V8 you pay a memcpy when you pass or return a string from C++ land. If an ahead-of-time compiler for JS can use escape analysis or other lifetime analysis for string or byte array data, you could make I/O, or at least writes from JavaScript to, for example, SQLite, about twice as fast.
Honestly, I’m fine with only some speed up compared to V8, it’s already pretty fast…
My issue with desktop/mobile apps using web tech (JS) is mostly the install size and RAM hunger.
The "node" binary on my laptop is 45MB in size. I guess the browser component may take more disk space than JS runtime. Similarly, I am not sure whether JS runtime or webpage rendering takes more RAM. If it is the latter, an AOT compiler won't help much.
Yea, I came here to say this. Actually, I was able to transpile a few TypeScript files from my project into assembly using GPT just for fun, and it worked pretty well. If someone simply implemented a strict TypeScript-like linter enforcing a subset of JavaScript and TypeScript that transpiles into AssemblyScript, I think that would work better for AOT, because then you can have the more critical portions of the application in AOT and the non-critical parts in JIT, and you get the best of both worlds or something like that. Making JS backwards compatible and AOT sounds way too complicated.
ECMAScript 4 was an attempt to add better types to the language, which sadly failed a long time ago.
It'd be nice if TS at least allowed specifying types like integer, so some of the newer TS-aware runtimes could take advantage of the additional info, even if the main TS->JS compilation just treated `const val: int` the same as `const val: number`.
Yeah, that is why I said TS (or something similar). TS made some decisions that made sense at the time but do not help compilation. The complexity of its typing system is another problem: I'm pretty sure that it is Turing-complete. That doesn't make the approach infeasible, but it increases the complexity of compiling it by a whole lot. When you add onto this the fact that "the compiler is the spec," you really get bogged down. It would be much easier to recognize a sensible subset of TS. You could probably even have the type checker throw a WTFisThisGuyDoing flag and just immediately downgrade it to an any.
Because JS code can arbitrarily modify a type, any language trying to specify what the outputs of a function can be also has to be Turing complete.
There are of course still plenty of types that TS doesn't bother trying to model, but it does try to cover even funny cases like field names going from kebab-case to camelCase.
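For instance, this (real, working) type-level rewrite of kebab-case keys is just recursion over template literal types:

  type KebabToCamel<S extends string> =
    S extends `${infer Head}-${infer Tail}`
      ? `${Head}${Capitalize<KebabToCamel<Tail>>}`
      : S;

  type Test = KebabToCamel<"foo-bar-baz">; // "fooBarBaz"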
You say you have "thought about doing this"..."[but] you can't get much better performance", then describe the approach requiring things that are described first-thing, above the fold, on the site.
Did the site change? Or am I missing something? :)
You can do inference and only fall back to Dynamic/any when something more specific can't be globally inferred in the program. For an optimization pass this is an option.
At windmill.dev, when users deploy their code, we use Bun build (which is similar to esbuild) to bundle their scripts and all their dependencies into a single JS file to load, which improves cold start and memory usage. We store the bundles on S3 because of their size.
If we could bundle everything to native that would completely change the game since as good as bun's cold start is, you can't beat running straight native with a small binary.
It's awesome to see how more JS runtimes try to approach Wasm.
This project reminds me of Static Hermes (the JS engine from Facebook to improve the speed of React Native projects on iOS and Android).
I've spent a bit of time trying to review each, so hopefully this analysis will be useful for some readers. What are the main commonalities and differences between Static Hermes and Porffor?
* They both aim for JS test262 conformance [1]
* Porffor supports both Native and Wasm outputs while Static Hermes is mainly focused on Native outputs for now
* Porffor is self-hosted (Porffor is written in pure JS and can compile itself), while Static Hermes relies on LLVM
* Porffor currently doesn't support async/promise/await while Static Hermes does (with some limitations)
* Static Hermes is written in C++ while Porffor is mainly JS
* They both support TypeScript (although Static Hermes does it through transpiling the TS AST to Flow, while Porffor supports it natively)
* Static Hermes has a fallback interpreter (to support `eval` and other hard-to-compile JS scenarios), while Porffor only supports AOT compiling (although, as I commented in another thread here, it may be possible to support `eval` in Porffor as well)
In general, I'm excited to see if this project can gain some traction so we can speed up JavaScript engines on the Edge!
Context: I'm Syrus, from Wasmer [3]
For the record, Static Hermes fully supports compiling JS to WASM. We get it basically for free, because it is an existing LLVM backend. See https://x.com/tmikov/status/1706138872412074204 for example.
Admittedly, it is not our focus, we are focusing mainly on React Native, where WASM doesn't make sense.
The most important feature of Static Hermes is our type checker, which guarantees runtime soundness.
Porffor is very interesting, I have been watching it for some time and I am rooting for it.
Contributor for Porffor here! I think this is a great comparison, but Porffor does technically support promises, albeit synchronously. It's a similar approach to Kiesel, https://kiesel.dev/.
Not sure what you mean by synchronously, but if you mean what I think you mean, then that is not correct behaviour. This is important to ensure predictability.
This type of test does work as expected. The "sync" means that it does not feature a full event loop (yet) so cannot easily support async I/O or some more "advanced" use cases.
Yes, it does. Promise continuations always run in the microtask queue per the standard. I guess if someone mutates the promise prototype it’s not guaranteed, but the spec does guarantee this order.
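The classic demonstration of the required ordering:

  console.log("before");
  Promise.resolve().then(() => console.log("continuation"));
  console.log("after");
  // Per spec this must print: before, after, continuation. The .then
  // callback runs from the microtask queue, never synchronously inside
  // the .then() call itself.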
Good comparison and thanks! A few minor clarifications:
- Porffor isn't fully self-hosted yet but should be possible hopefully! It does partially compile itself for builtins (eg Array.prototype.filter, Math.sin, atob, ...) though.
- As of late, Porffor does now support basic async/promise/await! Not very well yet though.
Just wanted to say I really appreciated the high-quality comparison. How something compares to existing work is my #1 question whenever I read an announcement like this.
Yeah... It is unclear to me how not using LLVM is a good thing. You'd inherit millions of man-hours of optimization work, code gen, and general thought process.
In this case, being self-contained will help implement things like `eval()` and `Function()` since Porffor can self-host. That would be much harder with an LLVM-based solution.
There's a subset of JS that's trivially compilable, it's the long tail of other stuff that's hard. But cool to see research happening on where that boundary lies and how much benefit can be had for that subset
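For example (my own illustrative split, not Porffor's actual boundary):

  // Trivially compilable: monomorphic arithmetic over typed arrays.
  function dot(a: Float64Array, b: Float64Array): number {
    let s = 0;
    for (let i = 0; i < a.length; i++) s += a[i] * b[i];
    return s;
  }

  // The long tail: eval, `with`, getters that mutate object shapes
  // mid-loop, Proxy traps, prototype patching at runtime, ...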
If it’s interested in behaving as though “the ECMAScript host is a web browser”, of course it does, it’s part of the spec: https://tc39.es/ecma262/multipage/additional-ecmascript-feat.... And given how trivial it is (function() { return "<blink>" + this + "</blink>"; }), it makes a fair amount of sense to implement it even if the ECMAScript host is not a web browser, in which case it’s optional (see the top of the linked document). I wouldn’t expect it to have any association with “a sense of humor or playfulness” whatsoever.
The reason I would argue that it does imply a sense of humor is that on any web browser that supports WASM, the <blink> tag itself has been deprecated and non-functional for ages. In fact, it doesn't even have an entry on MDN, only an indirect reference through String.blink()
<blink>, sure, but String.prototype.blink is still part of the spec, and unlike the HTML Standard, the ECMAScript specs are much more pseudocode that you largely just copy and turn into actual code as necessary, to the point that, if as an ECMAScript host it’s playing the web browser, I’d be extremely surprised (as in, “wait, what!? This is literally the weirdest technical thing I’ve seen all year, maybe this decade”) if that one method wasn’t there. When you’re implementing specs written like this, you just do it all; you don’t—you never—pick and choose based on “that thing is obsolete and no one uses it anyway”.
> you don’t—you never—pick and choose based on “that thing is obsolete and no one uses it anyway”
This might make sense for a complete implementation, but in reality...no one would implement a useless API.
Also, if truly no one used it, it would simply not exist (e.g. `with` in module mode), a new implementation of some spec will always pick and choose based on utility and use-cases.
What subtleties am I missing that makes "ahead-of-time JS engine" a better description than "JS-to-Wasm compiler"? (If it's mostly a framing strategy, that's cool too.)
Yep! Also as it is technically more of an engine/runtime (sometimes) than "just" a compiler, folks in the JS space are more familiar with engine as a term :)
I'm a bit suspicious of the versioning scheme described here[0]
If some change were required which introduced a regression on some Test262 tests, it could cause the version number to regress as well. This means Porffor cannot have both a version number which increases monotonically and the ability to introduce necessary changes which cause Test262 regressions.
Presumably the idea is that any work that causes Test262 regressions is temporary, takes place in a separate branch, and is only merged to main once the branch also contains all the necessary fixes to make the regressions go away again. A new version number would only be used once that merge happens.
Exactly. The versioning system is definitely unique and controversial, but I think it fits for a fast moving project like this, so I don't have to really consider versioning which could slow development. When it becomes more stable, I'll likely move to a more traditional semver scheme from 1.0.
There's the commit hash. Basically the "version number" is the commit hash, the human-generated (version) numbers added to it are merely progress indicators, which might be randomly useful. But for a project that has 1 branch, 0 tags and nearly 2000 commits, that's not really important.
Yes, the entire exercise isn't important. It just breaks the monotonicity that version numbers typically have. At that point, just call your version <progress>.sha
Uncaught ReferenceError: help is not defined
at exports.<computed> [as main] (file:///opt/homebrew/lib/node_modules/porffor/compiler/wrap.js:494:19)
at REPLServer.run (file:///opt/homebrew/lib/node_modules/porffor/runner/repl.js:98:27)
at bound (node:domain:432:15)
at REPLServer.runBound [as eval] (node:domain:443:12)
at REPLServer.onLine (node:repl:927:10)
at REPLServer.emit (node:events:532:35)
at REPLServer.emit (node:domain:488:12)
at [_onLine] [as _onLine] (node:internal/readline/interface:416:12)
at [_line] [as _line] (node:internal/readline/interface:887:18)
Porffor can compile to real native binaries without just packaging a runtime like existing solutions.
Any language that allows generating and interpreting its own code at runtime will have the "eval problem". From some other comments here, it sounds like Porffor's solution is to simply ignore it.
The most interesting bit about Porffor, in my eyes, is that it lets JavaScript compete with something like Blazor (or allows JS to stand its ground). Blazor kind of makes using any JS in your project redundant, since all your front-end logic can be done in C#. The reason I say this is, obviously, there are JS devs today, but if WASM tooling in other languages grows, it will make JS redundant or feel incomplete / outcompeted.
I won't be surprised to see a SPA framework that uses Porffor once it is more mature, or even the major ones using it as part of their tooling.
WASM is the next step after SPAs, essentially.
If you have never touched Blazor, I recommend you check it out via a YouTube video if you don't do any C#; it is impressive. Kudos to Microsoft for it. I have had 0 need or use for JavaScript since using it.
I think QuickJS only compiles to bytecode and then embeds it together with the interpreter in an executable. The JS itself is still interpreted. Others please correct me if I'm wrong.
Since Porffor can compile itself (you can run the compiler inside of Porffor), any calls to eval could be compiled to Wasm (via executing the Porffor compiler in Porffor JS engine) and executed performantly on the same JS context *
I haven't used it, but reading their landing page, Porffor says their runtime is vastly smaller because it is AOT. If the compiler had to be bundled with the executable, then the size of the executable would grow much larger.
You don't need to modify an already running program, you can plug new functions into an existing Wasm program via a table, and even attach variables via globals or function arguments.
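A sketch using the standard WebAssembly JS API (the export name `main` and the import layout are made up):

  // Extend a running Wasm program with a newly compiled function by
  // growing a shared funcref table and installing the new export into it.
  async function installCompiled(table: WebAssembly.Table, bytes: BufferSource) {
    const { instance } = await WebAssembly.instantiate(bytes, { env: { table } });
    const idx = table.grow(1); // grow() returns the index of the new slot
    table.set(idx, instance.exports.main as any);
    return idx; // existing code can now reach the function via call_indirect
  }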
I'd recommend checking the work on making SpiderMonkey emit Wasm as a backend