Since Chris is lurking: will Mojo on GPUs be more like using Jax (relying on compiler), Triton (more control, but abstracted), or more like CUDA (close to maximal control)? Combination? Nvidia and AMD support out of box?
Modular is enabling all of the above for different audiences. MAX provides an operator-graph level abstraction like PyTorch or JAX have, and we expect a bunch of high level libraries like nn.Module to get built out over time by the community. You can also go directly to the GPU with a classical CUDA-like programming model for maximal control.
In between we have something we're cooking that I think will be pretty interesting for GPU kernel authors, but it isn't public yet. :-)
The nice thing about this is that it is one system that scales, instead of a bunch of different/inconsistent tech built by different teams over many years, held together with duct tape. A simple, consistent system makes it much easier to do the kinds of research and experimentation that power the AI ecosystem.
I understand what you are saying, so I will address the core of your question instead of just listing things a child can do.
An LLM right now is still just a facsimile of one facet of human cognition. Arguably, without the other senses, it can't experience or understand things the same way we can. This leaves the AI unable to form a sentience comparable to a child's, at least not in the same way, and not in a way that would be recognized by most people.
Someone already gave you the sense of time as an example. These senses are things we were born with, and they are critical to our perception of our surroundings. Until an AI is trained on at least the majority of these senses, a child will always have something they can do that the AI can't.
Fascinating. Saw a paper pop up on one of my Semantic Scholar feeds just the other day with the title "Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution". Here's the link https://www.biorxiv.org/content/10.1101/2021.07.28.454038v2
Thanks for sharing! I hadn't heard of The Feynman Method, but I'll work on integrating it into my learning habits. One of my main interests is philosophy, and I'm very much guilty of relying on jargon to obscure the actual depth of my understanding of certain concepts.
The Scott Alexander tweet resonates with me a little too much haha! I'm very guilty of that. It's a terrible habit that I'm actively working to break.
I still think I prefer Wikiwand https://en.wikipedia.org/wiki/Wikiwand but I like that yours doesn't redirect from Wikipedia (and presumably track you). If you could steal some design elements from Wikiwand I would personally switch over. (More space-efficient ToC, pulls images to the top to make articles more visually interesting, etc)
This is an exciting idea, but the style of the paper is off-putting. It's written in the style of an academic paper, while clearly eschewing associated norms like not giving blatant opinion or waxing philosophical.
"The rationale is not to customize the ranking according to the implicit interests of the user, but to offer a mechanism to define multiple rankings, plural, open and explicit, for only if it is so, can it be trusted."
Please put opinions in a blogpost and uphold the (reasonable) norms of the research community, for only if it is so, can your work be trusted.
I've also had the idea of creating a site that restricts commenting to users who have actually read the article, so I'm excited about your project.
Couple of questions:
1) You mention a paywall, but your video says that users who sign up before the paywall stay in forever. Are you honoring that? (The video was posted 5 days ago.) I see that the median comment count on your site is about 3, so I should hope so.
2) How exactly are you paying these authors? We can post articles from anyone, and then just trust that you're able to get in contact with them and hand them their money?
1. Yes, we are honoring that! It's a reward for being an early adopter and it's important to keep the community "starter culture" intact as we move to a paid membership model.
2. We don't want you to have to trust us; that information will be transparent. Check out our current writers leaderboard [1] as a prototype of how that will look. Minutes read to completion is our basis for payments to writers, so you can imagine a pie chart on your account page showing who your $5 went to that month, alongside a community-wide distribution that would look similar to the current leaderboards.
As you can imagine, there will be cases where we can't get in touch with writers, or they're not interested, or something like that. We'll probably have to have some sort of time-out period after which uncollected funds are reallocated to writers who have verified with us. The important thing is that we're committed to making these rules and decisions transparent.
Why is there no tagging or other organization (besides per author)? Skimming through the frontpage I either have no idea what an article is about or it appears to be some lowest-common-denominator politics article. With reddit I can go read my niche subreddits with topics I actually care about (yeah, I know you don't have enough users to replace this quite yet, but still). With no other hooks for following my interests I feel like this is still abusable with clickbait titles.
Could you at least have moderator-written abstracts for the AOTDs? Even better would be an abstract for every article (not sure how you would accomplish this).
> Why is there no tagging or other organization (besides per author)?
The only reason is because I'm the only developer and simply haven't had time to build it yet! In fact right now I'm working on a new "Discover" screen that will allow filtering by publisher and topic. We gather "tag" and "description" metadata for articles even though it isn't currently displayed in the UI.
> With no other hooks for following my interests I feel like this is still abusable with clickbait titles.
It's not a perfect filter by any means, but keep in mind that even if an article title looks like clickbait, it will only rank highly if users are finishing it, so the real garbage usually doesn't float up: people choose to abandon those articles. Again, not perfect by any means, but it is something to consider when an article is more than a few minutes long and has a lot of reads on it.
> Could you have like moderator-written abstracts for the AOTDs, at least?
Yes! When the AOTD email goes out (midnight PST every night) it includes the "description" metadata provided by the publisher if present (seems to be available about 90% of the time). We should definitely show that in the UI on the web app too for the AOTD at least (and maybe have some expandable toggle to show it for other articles).
> The only reason is because I'm the only developer and simply haven't had time to build it yet!
I see you're using React on the frontend, and you only have an iOS app but not (yet) an Android app. Are you not using React Native? With RN it's fairly easy to target both platforms. I know RN is a bit of a shitshow, but not supporting the biggest mobile platform in the world is arguably worse.
> We gather "tag" and "description" metadata for articles even though it isn't currently displayed in the UI.
If you're taking this metadata directly from the articles I'm skeptical about the accuracy/completeness. Anyway, it's better than nothing I guess
> It's not a perfect filter by any means, but keep in mind that even if an article title looks like clickbait it will only rank highly if users are finishing it
This is something, but high-quality articles with clickbait titles will still outcompete high-quality articles without them, so the incentive remains: write clickbait titles. And if someone is only 0.25x as likely to finish a bad article having clicked it, but 10x more likely to click a clickbait title, clickbait still nets 2.5x the completions, which is a major problem beyond "not perfect", imo.
I know it's a big ask for you to solve every single problem with online reading/discussion but this one is so tangled with the rest it's kinda hard to ignore.
Today's AOTD: "If Everyone Else is Such an Idiot, How Come You're Not Rich?"
From the past few weeks, some selections:
- "Meet the social media echo chamber that is radicalizing you & your friends. - Alexa Rohn"
- "Racism Is Terrible. Blackness Is Not."
- "A White Woman, Racism and a Poodle"
- "The American Press Is Destroying Itself"
- "Dear Fuck Up: How Do I Figure Out What I Want in Life When Every Day Feels the Same?"
- "You Should Be Feeling Miserable"
- "Tom Cotton: Send In the Military"
- "The Sickness in Our Food Supply"
I'm sure some of these are great, but be honest: did the title have anything to do with people clicking through?
You weren't kidding about being dialed in on this space!
> Are you not using React Native?
Not even! Our iOS app just uses WKWebViews for the main UI so yes it would be pretty trivial to do the same thing on Android. Only excuse is that it's in our backlog with 100 other things that we also really want to build.
> If you're taking this metadata directly from the articles I'm skeptical about the accuracy/completeness.
Haha yes, it's an absolute clusterfuck that I'm currently trying to clean up enough to make useful. The descriptions are usually actually pretty solid, but there is so much noise in the tags/topics.
> This is something, but still high-quality articles with clickbait titles will outcompete high-quality articles without clickbait titles.
Yes, yes, yes! I think providing more context in the way of the description could help to cut down on this, but you're very right that there is no easy or complete fix (at least not that I can imagine!). Something else to think about would be looking at the ratio of clicks to completions instead of just the sum of completions alone. That way in your example the 10x likelihood of a click could be cancelled out by the low 0.25x completion rate.
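To make the ratio idea concrete, here's a toy sketch (my own illustrative numbers, reusing the 10x-clicks / 0.25x-finish-rate example from upthread) comparing the two ranking metrics: total completions vs. completion ratio.

```python
# Toy model of the ranking discussion above: a clickbait article gets
# 10x the clicks of a plain one, but readers are only 0.25x as likely
# to finish it. Which article wins under each metric?

def total_completions(clicks: int, finish_rate: float) -> float:
    # Ranking signal used today: how many readers finished the article.
    return clicks * finish_rate

def completion_ratio(clicks: int, finish_rate: float) -> float:
    # Proposed signal: completions divided by clicks, computed from
    # the observable counts the way a site would measure it.
    return (clicks * finish_rate) / clicks

plain     = {"clicks": 100,  "finish_rate": 0.40}
clickbait = {"clicks": 1000, "finish_rate": 0.10}  # 10x clicks, 0.25x finish rate

# Ranking by raw completions: clickbait still wins (~100 vs ~40).
assert total_completions(**clickbait) > total_completions(**plain)

# Ranking by completion ratio: the plain article wins (0.10 vs 0.40).
assert completion_ratio(**clickbait) < completion_ratio(**plain)
```

Under the sum-of-completions metric the clickbait article dominates despite most readers bailing; under the ratio metric the abandonment cancels the extra clicks out, as suggested above.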
Fair point. I should mention that I'm not a Nim expert, and am not defending its design. It's true that 'nil' appears in Nim under both of its pointer types (GC traced "references" and untraced "pointers") -- which is unlike, say, Rust's safe references vs. unsafe (nullable) pointers. So 'nil' is not simply a systems level construct; otherwise you might expect Nim to have nillable pointers, but not nillable references.
You need nil/null when doing systems work -- and Nim is capable of that -- but I don't know the rationale for allowing 'nil' in higher level language features.
"We need the nil state to disarm pointers but that only means these can be nil and so would require a nil check before deref. Doesn't seem to be too hard nor too cumbersome."
`not nil` was deemed too breaking to introduce for 1.0, or even as a default, because if you tagged your input as `T not nil`, the compiler was not powerful enough to prove that T was indeed not nil except in the most obvious cases.
The new battleplan is to integrate the Z3 theorem prover into Nim as DrNim:
1: DrNim is basically additional annotations that are ignored by default; you can choose to opt in via a flag.
It's similar to MyPy for Python and its type annotations. The danger is if it becomes non-optional.
That said, contrary to many languages, the core language doesn't need:
- threading
- async
which are huge parts to maintain, but thanks to metaprogramming you can implement them as libraries with very nice syntax as well. It also lets compiler devs focus on what they know well (compilers) and leave those two highly specialized areas to people who are, or become, experts in them.
2: There are 2 dimensions there: nil is used for C FFI, for higher-level primitives you can use Option.
And C FFI was needed to bootstrap Nim with a useful set of functionality; it serves as an escape hatch so that people could start building early on and not wait for the language.
So both from a functionality and a time-to-market point of view it was necessary.
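Not Nim, but since the thread already draws the MyPy comparison, here's a rough Python sketch (hypothetical `find_user` helper, names are mine) of the two regimes described above: raw nil (`None`) as the low-level escape hatch vs. an Option-style type that makes the empty case part of the signature.

```python
# Optional[int] plays the role of Nim's Option[T] for higher-level code:
# "may be absent" is declared in the type, and a checker like MyPy will
# push you to handle the None case before dereferencing.
from typing import Optional

def find_user(name: str, db: dict) -> Optional[int]:
    # dict.get returns None when the key is missing, rather than raising.
    return db.get(name)

db = {"alice": 1}

uid = find_user("bob", db)
if uid is None:   # the explicit "nil check before deref" from the quote above
    uid = -1

assert uid == -1
assert find_user("alice", db) == 1
```

The point mirrors the comment: nothing stops you from passing `None` around at the systems/FFI boundary, but higher-level code can opt into the explicit, checkable form.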
I agree that it's a sharp edge in the language. Or at least it seems to be -- I've written a little bit of Nim, and never encountered a case where I was actually bitten by 'nil'. The language does have a 'non nil' annotation that can be used when defining structs (and maybe in other parts of the language, I don't recall), so there must have been some thinking about the dangers of nil pointers.
I would still recommend playing with the language for a while. It's quite practical, and the performance is very good. I've used it to write some utilities where I might otherwise use Python, but either needed more speed or needed to use it in a restricted environment where the Python interpreter wasn't available. The experience was mostly positive, and I would use it again.
That's an old comment, by the way. At that time the newruntime idea was being tested (it has since been replaced by --gc:arc). One needs to be more careful when quoting others' words, especially taken out of context.