This is the old debate whether applications should be tightly or loosely coupled. As the article concludes, there are tradeoffs to consider - the broker is a SPOF indeed, but on the other hand it'll take care of all the queueing logic (which is not as simple as it seems - acknowledgements, backpressure, redelivery, timeouts, etc.). It's also much easier to scale the system out if applications are loosely coupled.
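To illustrate why the queueing logic is not as simple as it seems, here's a toy sketch (a hypothetical illustration with a fake clock, nothing like what a real broker such as RabbitMQ does internally) of just the redelivery-after-timeout part:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy at-least-once queue: a message stays eligible for redelivery
   until it is acked; "deadline" is when its in-flight lease expires. */
typedef struct {
    const char *body;
    bool acked;
    long deadline;      /* fake clock; 0 = never delivered */
} msg_t;

/* Hand out the first message that is unacked and whose lease has
   expired. Marking it in flight and re-offering it later is already
   more state than a naive FIFO pop. */
msg_t *deliver(msg_t *q, size_t n, long now, long timeout) {
    for (size_t i = 0; i < n; i++) {
        if (!q[i].acked && q[i].deadline <= now) {
            q[i].deadline = now + timeout;
            return &q[i];
        }
    }
    return NULL;  /* everything acked or in flight: consumer must wait */
}

void ack(msg_t *m) { m->acked = true; }
```

And even this leaves out persistence, ordering after redelivery, backpressure and duplicate suppression - exactly the things the broker gives you for free.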
Furthermore, a good broker supports some kind of failover. We run a RabbitMQ cluster behind a load balancer, and it works great - e.g. we once had a hardware failure and the applications didn't notice it at all. Of course we also have to monitor the queues on the broker (one of the few ways to bring down Rabbit is to fill up its disk). But having a central broker for connecting apps also means you have a central point you can keep an eye on to find your bottlenecks (which queues build up).
Loosely coupled applications are also much more unix-like, in the sense that you can make each of them do one thing well instead of baking in your own custom restart logic (over a network? please don't).
I feel like it's an endless battle, trying to move away from a SPOF; even in your RabbitMQ cluster the load balancer is a SPOF. Who watches the watchmen? At a certain point you just have to accept that you're not gaining a whole lot by adding more complexity. Process supervision over a network (a la fleet, flynn or mesos) is a pretty neat solution. I've been playing around with this stuff the past week and I'm having a lot of trouble coming up with the perfect setup haha
A load balancer need not (and should not) be a SPOF: you run two (or more) of them with IP takeover (e.g. via keepalived or similar, which continuously votes on the master), or you make the clients try to connect to two or more IPs. Or both.
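For reference, the IP-takeover setup looks roughly like this in keepalived (a minimal sketch; the interface name, router id and address are placeholders for your own values):

```
vrrp_instance VI_1 {
    state MASTER            # the backup box says BACKUP here
    interface eth0
    virtual_router_id 51
    priority 100            # backup uses a lower priority, e.g. 50
    advert_int 1
    virtual_ipaddress {
        192.0.2.10          # the floating IP clients connect to
    }
}
```

If the master stops sending VRRP advertisements, the backup claims 192.0.2.10 within a few seconds, and clients only see a reconnect.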
It's really quite trivial to ensure your load balancer is not a SPOF if you're ok with clients having to reconnect on failure, and only slightly more hassle if you want connections to survive one of your load balancer boxes failing.
Integration is very easy (check out the examples in their git repo), but this is still alpha-quality software - there are known limitations; e.g. the one I ran into is that a large JS expression involving function calls can make Duktape run out of bytecode registers.
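For a feel of how small the integration surface is, the classic minimal embedding looks roughly like this (a sketch from memory - compile the duktape.c from their distribution alongside it, and check the examples in the repo for the current API):

```c
#include <stdio.h>
#include "duktape.h"

int main(void) {
    duk_context *ctx = duk_create_heap_default();
    if (!ctx)
        return 1;

    /* Evaluate a JS expression and read the result off the value stack. */
    duk_eval_string(ctx, "1 + 2 * 3");
    printf("result = %d\n", (int) duk_get_int(ctx, -1));

    duk_destroy_heap(ctx);
    return 0;
}
```

No separate build system, no generated bindings - the whole engine is one .c and one .h file.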
SpiderMonkey is easily embeddable like I said. The Mozilla Public License is less liberal than BSD/MIT, but for most practical purposes it's the same.
It's not tiny though (for a ballpark feeling, I found an old Windows binary build of about 800 KB online).
Looking at the current SpiderMonkey docs, things look like they've only improved. Back in the day, I just took SpiderMonkey from the "js" subdirectory of the Firefox repo. They split it off and documented build steps since then.
I'm not saying there's no space for contenders here, but don't underestimate the quality of what's out there already. SpiderMonkey really is remarkably easy to interface with.
After a quick glance it also seems that newer versions of SpiderMonkey only offer a C++ API, so you can no longer embed it in C apps without some glue code. I checked jsapi.h in SpiderMonkey 24 - maybe some trick is needed to use SM from C?
New versions of SpiderMonkey only expose a C++ API (because of exact rooting). We'll need to fix that at some point as part of Servo, since this is a hazard for Rust; currently we use an old version of SpiderMonkey for this reason.
The funny thing is that most people choose Elixir over Erlang because of the syntax - personally I can't stand Ruby but love the Erlang syntax. When I see Erlang code, everything just clicks - almost like I'm reading prose.
So, if you want to harness the power of Erlang, I'd suggest you start with the real thing first. The only drawback is that once you get used to functional programming, guards, pattern matching, OTP, cheap processes, links and all the other goodness, there's no way back.
If you still feel Erlang is weird after spending a couple of days working with it, feel free to jump on the Elixir bandwagon. :)
I would say it goes much deeper than syntax, and Elixir and Erlang can co-exist happily just like Scala and Java. After jumping into Elixir, you'll eventually have to become familiar with Erlang/OTP conventions, but starting with Erlang is not a requirement. You're right that guards/pattern matching/OTP leave you with language envy once you go back to your previous language of choice. Elixir does remove some pain points of Erlang, particularly around metaprogramming, polymorphism, and string handling. I will say after getting into the ecosystem, I can't believe the Erlang folks have been quietly "building the future" all these years while the rest of us largely ignored their innovations.
Personally I don't find Ruby and Elixir to be very similar. Maybe some of the terms are the same, but in general they feel very different. Elixir has a much more consistent syntax than Erlang (imo), its macros are more powerful, it has a great build system right out of the box (inspired by leiningen for Clojure), and it can make use of any Erlang library with no additional effort. I'm also a huge fan of Elixir's pipe operator, which I think makes a lot of code much cleaner and easier to read. I love Erlang, but I wouldn't call the use of Elixir over Erlang a bandwagon - there are good reasons why one would favor Elixir as a starting point. There are probably reasons why one would choose Erlang instead too, but I feel like if you're at a point where you can choose one or the other, Elixir makes the most sense.
Erlang syntax is a) Prolog inspired and b) more than 20 (closer to 30) years old. While it generally gets the job done without much hassle, it's very possible to improve on it. The macro system you mention is the reason I'm going to give Elixir a whirl, despite being happy with Erlang otherwise.
Anyway, the situation of Elixir and Erlang looks more like JavaScript and CoffeeScript to me than like Java and Scala. Of course, Elixir is similar to Scala in terms of implementation - they both compile to bytecode instead of transpiling like CS. But the features Elixir brings to the BEAM are less groundbreaking and more practical, just like in CoffeeScript and unlike Scala, which transforms the JVM experience so much that the Java underneath is almost invisible.
There's a difference, too - JS is being reworked and Harmony will bring many improvements which Coffee has today, but I'm not aware of "next generation Erlang" being actively worked on (Joe Armstrong does erl2, but I don't know how active it is). So while there are people who don't use Coffee because "it will be in the standard in a year anyway" this argument does not hold true for Erlang and Elixir.
Just some random thoughts, I like them both and am actively learning Elixir while maintaining a project in Erlang (and I also like Scala!) and I hope they can both thrive. It's a symbiosis, really - Elixir brings a new wave of developers to Erlang, and Erlang gives a solid foundation to Elixir.
Respectfully, I'm not sure I buy this comparison. Yeah, CoffeeScript adds some cleaner syntax and OO niceties onto JS, but it's still basically just a higher-level abstraction that gets boiled down to JS. Elixir is a multi-paradigm language that gets compiled to the same kind of bytecode as Erlang and is interoperable with Erlang. It's more like JRuby and Java both living on the JVM, with JRuby able to call Java code - or maybe it's a bit like ClojureScript and Clojure?
Interesting to see how mainstream intermittent fasting has become. It's worth noting where it started.
A few years ago a few hardcore fitness fanatics started playing with the idea of using controlled fasting for weight loss and/or body recomposition. Two people that should be mentioned are Lyle McDonald[1] and Martin Berkhan[2]. Martin especially made IF popular via his blog, laying out the principles he used as a fitness consultant with his clients.
Most research (and especially the commercial IF knockoffs) only takes part of these principles, but the "diet" part is only one piece of the body recomposition picture. It's almost worthless without the rest (high-intensity, low-volume weight training, basic compound movements, progressive overload, no focus on cardio).
Quite a few people/companies are trying to rip off customers with knockoff IF programs and supplements. If you want to give IF a shot, read through Martin's blog and try the original Leangains protocol.
Another recommendation for Leangains.com - been following it for over a year and have seen good results. Although I haven't followed the protocol to the letter - I eat a good amount of junk food during the week, but also good food as well. The point is to narrow your window of time that you eat during the day. I believe I could be doing even better if I concentrated on eating better and cutting out the crap, but now I have a pretty good balance of being in shape and being able to eat what I want.
Dr Mercola also has some really good articles on IF - www.mercola.com
I also have a couple of years of Leangains under my belt. I do fasted early-morning training (modified StrongLifts) with a feeding window of 10am-6pm. It gives me energy, I don't ever really feel hungry, and my muscle gains have been good. The only thing is you really have to watch your macros - you can still gain fat on IF. I made it down to 9% BF by watching caloric intake, and when I decided to just try eating as much of whatever I wanted to get some bulk, I gained weight and BF (now 16%). Recomp (losing fat and gaining muscle at the same time) is hard and slow.
Best part is that it has been a cinch to stick to. I simply don't get hungry during the 16h fast.
The whole "clean eating" thing is mostly a fad. Nobody has managed to define what food is "clean" and what is not (see e.g. what the typical bodybuilder thinks is clean, and what someone doing paleo, what the average dietitian recommends, etc). The only important things are 1) the number of calories you eat, and 2) what nutrients your diet provides.
When I'm cutting, I don't make a fuss about "junk food", as long as I'm below my calorie intake limit and I'm not missing any important nutrient. Usually hovering around 20% junk food.
If you're only looking at weight loss/gain, then yes. Both are terrible choices as a diet, though. The latter will provide more nutrients, so we can call it 'healthier', but you'll still lack important ones, and significantly undereat if you're an adult male of average weight and height.
The really nice thing about this is that dispatching to the optimized version is done at runtime, via stubs GCC adds at compile time. No extra dispatching/CPU-detection logic is needed in the application to take advantage of advanced CPU features. Very cool.
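Concretely, a minimal sketch of the mechanism using GCC's target_clones attribute (needs a reasonably recent GCC on x86 Linux; the function and its targets here are just an example):

```c
/* GCC emits one clone of sum() per listed target, plus an ifunc
   resolver stub that picks the best clone once, at load time, based
   on the CPU it's actually running on. Callers just call sum() -
   there is no CPUID logic anywhere in the application code. */
__attribute__((target_clones("avx2", "default")))
int sum(const int *v, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}
```

On an AVX2-capable machine the loop runs in the vectorized clone; everywhere else it transparently falls back to the baseline build.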
I have no experience with Akka, but what you write makes sense. Even Clojure concurrency primitives feel weird sometimes when one wants to mix them with low-level Java libraries.
Another option is to do it at the language level - see my comment on Erlang.
Interesting analogy: Erlang implements an M:N scheduling model, where it starts an OS thread for each CPU core (by default) and preemptively schedules its own lightweight processes on top of these threads. Unlike OS threads, these are very lightweight: a newly spawned process needs only a few hundred machine words of memory. Erlangers have been using the one-process-per-request model for a very long time, and Erlang applications achieve massive concurrency this way.
To me it seems it is possible to do M:N right, but you need more abstraction and a different design. M:N seems to work well in Erlang-like cases (very lightweight processes on top of kernel threads).