So… it's typical. Find me an article on LessWrong that says climate change should be taken as seriously as AI. It might exist (probably with trigger warnings all over it), but let's see what the comments are like.
Someone arguing on the forums that EA cares too much about climate change supports your point, and someone arguing on the forums that EA doesn't care enough about climate change also supports your point? What exactly would they have to say to damage your point?
Zero chance private GitHub repos make it into OpenAI training data. Can you imagine the shitshow if GPT-4 started regurgitating your org's internal codebase?
Automatic kernel fusion (compilation) is a very active field, and most major frameworks support some easy-to-use compilation (e.g. jax's jit, or torch.compile, which iirc uses OpenAI's Triton under the hood). Often you can still do better than the compiler by writing fused kernels yourself (either in CUDA C++ or in something like Triton, Python that compiles down to CUDA), but compilers are getting pretty good.
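For intuition, here's a pure-Python stand-in for what "fusion" means (toy functions, not any real framework's API): a compiler like torch.compile or jax.jit takes the chain of separate elementwise ops on the left and emits the single combined kernel on the right, avoiding the intermediate memory traffic.

```python
def unfused(xs):
    # three separate "kernels": each is a full pass over the data,
    # producing a temporary that a GPU would write to and read back from memory
    t1 = [x * 2.0 for x in xs]
    t2 = [t + 1.0 for t in t1]
    return [max(t, 0.0) for t in t2]

def fused(xs):
    # hand-fused equivalent: one pass, no temporaries -- this is the kind of
    # kernel a fusing compiler aims to generate automatically
    return [max(x * 2.0 + 1.0, 0.0) for x in xs]
```

The two are formally equivalent (same outputs for all inputs); the fused version just does the work in one sweep, which on a GPU means one kernel launch and far less memory bandwidth.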
edit: not sure why op is getting downvotes, this is a very reasonable question imo; maybe it's the characterization of kernel compilation as "AI" vs. just "software"?
Both AI and compilers are just software, and right now the optimizers are written manually, which is kind of weird, because the whole point of LLMs is to generate sequences of tokens that minimize some scalar-valued loss function. In the case of compilers, the input is high-level Python code expressing tensor operations, and the output is whatever combination of kernels executes fastest on GPUs while remaining formally equivalent to the tensor operations expressed in Python (or whatever higher-level language is used to write the tensor specifications). Everything in this loop has a well-defined input, a well-defined output, an associated scalar-valued metric (execution time), and even a normalization factor (output length, with shorter sequences being "better").
The whole thing seems obviously amenable to gradient-based optimization and data augmentation with synthetic code generators. It's surprising that no one is pursuing such approaches to improving the optimization pipeline in kernel compilation/fusion, because it's just another symbol game with much better-defined metrics than natural language modeling.
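The "well-defined scalar metric" part can be made concrete with a toy autotuning loop (all names here are hypothetical; real systems like Triton's autotuner do a much fancier version of this search, and it's search rather than gradient descent, but the loss being minimized is exactly wall-clock time subject to formal equivalence):

```python
import timeit

def candidate_a(xs):
    # explicit loop variant of the same computation
    out = []
    for x in xs:
        out.append(x * x + 1.0)
    return out

def candidate_b(xs):
    # list-comprehension variant; formally equivalent to candidate_a
    return [x * x + 1.0 for x in xs]

def autotune(candidates, xs, reference):
    # keep only candidates whose output matches the reference (equivalence
    # check), then score each by execution time -- the scalar loss -- and
    # return the fastest one
    valid = [f for f in candidates if f(xs) == reference]
    return min(valid, key=lambda f: timeit.timeit(lambda: f(xs), number=100))
```

A gradient-based or learned version would replace the exhaustive `min` over candidates with a model that proposes kernel variants, but the objective stays the same: shortest execution time among formally equivalent programs.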
New hires' comp is much higher than existing employees', especially if you've hit your cliff. 7 figures for E6 can happen if you joined recently, have good counter-offers, and negotiate. It's not super uncommon but it's also not the median E6 comp.
> the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.” [...] The company argued that it would be sufficient to instead rely on another part of the Act, that mandates AI providers sufficiently label AI-generated content and be clear to users that they are interacting with an AI system.
This sounds pretty reasonable? I don't think it's hypocritical to be talking about the doom of humanity while also arguing that GPT-3, a 3-year-old model, should not be classified as "high-risk" in that sense.
Even if you disagree, questioning Altman's leadership and calling him an "empty soul" over this kind of regulatory detail is not adding substance to the discussion imo.
Given his past and present moves (not what he says publicly), calling him an "empty soul" is putting it kindly.
Also, the playing-dumb card and the "I'm just a tech bro full of innocent dreams" story have already been done to death by the bros from the previous cycle (think Zuckerberg and his peers).
It's a scumbag move that most of his peers make too, and it should at least be publicized and called out.
When Altman says high risk, he means only Microsoft should be allowed to run an AI in case the plot of Terminator happens.
When the EU says high risk, it means that an AI pacemaker should be explainable enough that you can guarantee it won't randomly kill people. It also means that low-risk applications such as AI holiday recommendations or fiction writing should be more or less unregulated.
DeepMind is high status as well. However, both of these companies usually look for experienced researchers rather than new grads. Google was famous for hiring bright-eyed young geniuses and building their careers.