Hacker News | ironrabbit's comments

Which part of SB1047 do you think is too onerous or not "slow regulation" enough?


This is a forum post from a student with 18 total karma


so… it is typical. Find me an article on LessWrong that says climate change should be taken as seriously as AI. It might exist (probably with trigger warnings all over it), but let's see what the comments are like.


Here's the second result of your google search, which has 34 upvotes vs. 14:

https://forum.effectivealtruism.org/posts/pcDAvaXBxTjRYMdEo/...


… which also supports my point.


Someone arguing on the forums that EA cares too much about climate change supports your point, and someone arguing on the forums that EA doesn't care enough about climate change also supports your point? What exactly would they have to say to damage your point?


Leadership making climate change a priority and not making excuses for why it isn't.


Zero chance private GitHub repos make it into OpenAI training data. Can you imagine the shitshow if GPT-4 started regurgitating your org's internal codebase?


Org-specific AI is, almost certainly, the killer app. This will have to be possible at some point, or OpenAI will be left in the dust.


You are downvoted but I agree.


Automatic kernel fusion (compilation) is a very active field, and most major frameworks support some easy-to-use compilation (e.g. JAX's jit, or torch.compile, which IIRC uses OpenAI's Triton under the hood). Often you can still beat the compiler by writing fused kernels yourself (either in CUDA C++ or in something like Triton, a Python DSL that compiles down to CUDA), but compilers are getting pretty good.
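To make the fusion idea concrete, here's a toy pure-Python stand-in (not real GPU code, and not what any of these compilers literally emit): the unfused version runs two separate "kernels" and materializes an intermediate array between them, while the fused version does one pass with no intermediate.

```python
def unfused(x, w, b):
    # Two separate "kernel launches": the intermediate t is fully
    # materialized between them, costing an extra round trip to memory.
    t = [xi * wi for xi, wi in zip(x, w)]      # kernel 1: elementwise multiply
    return [ti + bi for ti, bi in zip(t, b)]   # kernel 2: elementwise add

def fused(x, w, b):
    # One fused "kernel": each element is read once, combined, and written
    # once; no intermediate buffer ever exists. Fusing chains of elementwise
    # ops like this is the bread and butter of torch.compile / XLA.
    return [xi * wi + bi for xi, wi, bi in zip(x, w, b)]

x, w, b = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
assert unfused(x, w, b) == fused(x, w, b) == [8.0, 14.0]
```

On a GPU the win comes from memory bandwidth, not arithmetic: the fused version touches each input and output exactly once instead of writing and re-reading the intermediate.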

edit: not sure why op is getting downvotes, this is a very reasonable question imo; maybe the characterization of kernel compilation as "AI" vs. just "software"?


Both AI and compilers are just software, and right now the optimizers are written manually, which is odd, because the whole point of LLMs is to generate sequences of tokens that minimize some scalar-valued loss function. For these compilers, the input is high-level code (Python, or whatever language is used to write the tensor specifications) expressing tensor operations, and the output is whatever runs fastest on GPUs: a combination of kernels formally equivalent to the tensor operations expressed at the higher level. Everything in this loop has a well-defined input, a well-defined output, an associated scalar-valued metric (execution time), and even a normalization factor (output length, with shorter sequences being "better").

The whole thing seems obviously amenable to gradient-based optimization and data augmentation with synthetic code generators. It is surprising that no one is pursuing such approaches to improving the kernel compilation/fusion/optimization pipeline, because it is just another symbol game, with much better-defined metrics than natural language.
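The search-over-equivalent-programs loop described above can be sketched in miniature. This is a toy, not a real compiler: the "programs" are hypothetical lambda rewrites of one expression, the scalar cost is a hand-assigned op count standing in for execution time, and equivalence is checked against the specification on random inputs.

```python
import random

def ref(a, x, y):
    # the specification: the tensor-op program we want optimized
    return a * x + a * y

# candidate rewrites, each paired with a scalar cost (op count here,
# standing in for measured execution time)
candidates = [
    (lambda a, x, y: a * x + a * y, 3),  # literal form: 2 muls + 1 add
    (lambda a, x, y: a * (x + y), 2),    # factored form: 1 mul + 1 add
    (lambda a, x, y: a * x - a * y, 3),  # bogus rewrite, should be rejected
]

def best_equivalent(candidates, ref, trials=100):
    # keep only candidates that agree with the spec on random inputs,
    # then pick the one minimizing the scalar cost
    rng = random.Random(0)
    tests = [(rng.random(), rng.random(), rng.random()) for _ in range(trials)]
    ok = [(f, c) for f, c in candidates
          if all(abs(f(*t) - ref(*t)) < 1e-9 for t in tests)]
    return min(ok, key=lambda fc: fc[1])

f, cost = best_equivalent(candidates, ref)
assert cost == 2                          # the factored form wins
assert abs(f(2.0, 3.0, 4.0) - 14.0) < 1e-9
```

A learned system would replace the hand-enumerated candidate list with a generator trained on the (program, cost) signal, which is exactly the "well-defined scalar metric" the comment is pointing at.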


thanks for explaining pretty concisely w/out being rude :)


The median OAI employee with a 900k comp is probably L5, not L6 or L7


New hires' comp is much higher than existing employees', especially if you've hit your cliff. 7 figures for E6 can happen if you joined recently, have good counter-offers, and negotiate. It's not super uncommon but it's also not the median E6 comp.


You can filter the results on levels.fyi. New offers going back a few years are less than 7 figures (even including sign-on bonus).


All engineers and researchers, even junior, are "Member of Technical Staff" at OAI


> the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.” [...] The company argued that it would be sufficient to instead rely on another part of the Act, that mandates AI providers sufficiently label AI-generated content and be clear to users that they are interacting with an AI system.

This sounds pretty reasonable? I don't think it's hypocritical to be talking about the doom of humanity while also arguing that GPT-3, a three-year-old model, should not be classified as "high-risk" in that sense.

Even if you disagree, questioning Altman's leadership and calling him an "empty soul" over this kind of regulatory detail is not adding substance to the discussion imo.


Given his past and present moves (not what he says publicly), calling him an "empty soul" is pretty kind.

Also, the playing-dumb card and the "I'm just a tech bro full of innocent dreams" story have already been done to death by the bros from the previous cycle (think Zuckerberg and his peers).

It's a scumbag move that most of his peers actually pull too, and it should at least be publicized and called out.


Could you elaborate on why calling him an empty soul is "pretty kind"?


Because an empty soul has no will, and so no malice or greed.

Kind of like the usual "I'm not an asshole, I have Asperger's…"

Sam Altman is none of these things.


When Altman says high risk, he means only Microsoft should be allowed to run an AI in case the plot of Terminator happens.

When the EU says high risk, they mean that an AI pacemaker should be explainable enough that you can guarantee it won't randomly kill people. They also mean that low-risk applications such as AI holiday recommendations or fiction writing should be more or less unregulated.

Which one is reasonable?


I think lots of us are feeling the same way about ClosedAI, regulations aside.

Scary to think how much power they have choosing winners and losers. I never got GPT4 API access.

Until the local models get up to speed, we are at the whim of this company deciding who wins and who loses.


Not really. If the intent is to truly ensure content is labeled, OpenAI can't guarantee that, since you can just copy-paste their output.

They want the law, and they want to promise safety, while not being impacted by the overbearing regulation they've invited onto the rest of us.


Edited to remove criticism


Character has their own models, and anecdotally I've heard they have one of the better LM training codebases out there.


OpenAI is probably the highest-status employer among my network, but I don't know any university students.


DeepMind is high status as well. However, both of these companies usually look for experienced researchers rather than new grads. Google was famous for hiring bright-eyed young geniuses and building their careers.

