awal2's comments — Hacker News

Ok, so there's a lot of tension here:

1. A lot of people want these systems to be open, and don't want the power that comes along with them to be locked up in the hands of a few rich people.

2. But some people also think these systems are powerful and don't want them in the hands of bad-faith actors (spammers, scammers, propagandists).

3. A lot of people also want these systems to be weakly safe and not have negative externalities when used in good faith (avoid spitting out racism when prompted with innocent questions). This is already hard.

4. Even better would be for the system to be strongly safe and be really hard to use for bad-faith purposes, but this seems unreasonably hard.

5. It's often easier to develop the "unsafe" version of something first and then figure out the details of safety once it's actually able to do something. This is basically where OpenAI is now.

6. The details around liability for the harms caused by this kind of thing are not clear at all.

So OpenAI is in this position where it has built this thing that is not yet weakly safe. People have very different ideas about how potentially harmful this could be, ranging from very dismissive ("there's tons of racism on the internet already, who cares?") to the very not dismissive ("rich white tech people are exacerbating inequities by subjecting us to their evil racist AI systems!").

What should OpenAI do with this thing? Keep it locked up so that it doesn't hurt anybody? Release it to the world and push accountability onto the end users? Brush aside the ethical questions and use the hype generated by the above tensions to get as rich as possible? So far their answer seems to be somewhere cautiously in the middle.

My personal opinion is that these questions will be very important for real AGI, but this ain't it, so the issues may not be as bad as they seem. On the other hand, maybe this is a useful test case for how to deal with these problems for when we do actually get there? Also from past experience, it's probably not a good idea for them to allow open access to something that spits out unprompted racism. I would like to see OpenAI more open, but I also realize that it's very hard for them to make any decision in this space without making people unhappy and generating a lot of bad press and accusations.


By naming it "OpenAI" they've implied what their value system is, namely point 1 in your above list. Putting "open" in the name implies libre/free/open sensibilities, in that the harm from releasing technology that could be used harmfully is outweighed by democratizing the technology and allowing everyone to use it for a variety of reasons, including for good and to combat bad actors.

Openness, in the libre/free sense, also means minimizing gatekeeping and avoiding putting the creators in the position of judging what's good and what's not.

All the other points you list are ancillary. OpenAI is a prime example of "open-washing". OpenAI got good will from the community by implying they were open (free/libre) and then hid behind all the other points you listed to not commit to openness.

If they wanted to have a discussion about the moral hazard of AI, and their business model was to create a walled garden where only approved scientists, engineers and researchers had access to the data and code, that's their prerogative; just don't name it "OpenAI".


Freedom of speech is in a fragile place in our culture if we start seeing certain words or opinions (even bad ones!) as "unsafe".

This is not a criticism, just an observation of where we're at and how dramatically attitudes have shifted.


At least in the US, many words and opinions have been off-limits for a hundred years or more. Death threats are the obvious one, but various types of political or religious views have been off limits too (which specific ones has varied, of course). In the Jim Crow South you definitely would get beaten or worse if you espoused views of racial equality, and communist/socialist politics have at various points been enough to get you fired and blacklisted in many industries (still are, sometimes).


I mean, just seven years after the Bill of Rights and the First Amendment were ratified in 1791 came the Sedition Act of 1798, a law passed by a Federalist Congress prohibiting certain kinds of speech critical of the government. You might notice this is precisely the thing the First Amendment said was off limits. Didn't matter.

Free speech in the US has practically never been a principled protection for the marginalized, but rather a tool for the wealthy to maintain power.


You can do sprite matching using new fangled ML tools and save the same amount of developer time.


When all you have is a hammer


Don't really agree with this. It's like saying "why did you use a phone to do that math when a simple calculator or even pen and paper would work?"

Sure, it would work, but the phone also works and does way more. You might be able to use some trivial processing on Duck Hunt, but it won't work on anything slightly more complex, so why bother learning a method that only works on the most basic of games when you can develop something that can be applied everywhere?
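For what it's worth, the "trivial processing" being debated here is roughly classical template matching: slide the sprite over the frame and pick the position with the smallest pixel difference. A minimal sketch (the frame and sprite below are hypothetical toy grayscale arrays, not actual Duck Hunt data):

```python
def find_sprite(frame, sprite):
    """Return (row, col) of the best match of `sprite` inside `frame`,
    found by exhaustive sum-of-squared-differences search."""
    fh, fw = len(frame), len(frame[0])
    sh, sw = len(sprite), len(sprite[0])
    best_ssd, best_pos = None, None
    for r in range(fh - sh + 1):
        for c in range(fw - sw + 1):
            # Sum of squared differences over the sprite-sized window.
            ssd = sum(
                (frame[r + i][c + j] - sprite[i][j]) ** 2
                for i in range(sh)
                for j in range(sw)
            )
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

# Toy example: an 8x8 blank frame with a 2x2 "duck" pasted at (3, 5).
frame = [[0] * 8 for _ in range(8)]
sprite = [[9, 7], [7, 9]]
for i in range(2):
    for j in range(2):
        frame[3 + i][5 + j] = sprite[i][j]

print(find_sprite(frame, sprite))  # -> (3, 5)
```

This works exactly as long as sprites render pixel-identically, which is the crux of the argument: it's simple and fast for a game like Duck Hunt, but breaks under scaling, rotation, or lighting changes, which is where the learned approaches earn their keep.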


The reputation of a university takes decades to shape and is tied to the reputations of individual professors. For the other schools to "learn to replicate" Berkeley's success, all of their professors would have to become the top experts in their fields overnight. If the professors knew how to do that, they would have done it already, and then a lot of them would have tried to move to Berkeley because it has a better reputation.

I'm not saying the other schools have no experts, just that it's a situation where talent tends to concentrate.


Monocular depth estimation has gotten really good recently though[0]. Not saying this one paper/method is 100% sufficient, but we're closing the gap in this one capability (depth estimation from pure vision) quite rapidly.

[0] https://roxanneluo.github.io/Consistent-Video-Depth-Estimati...


I've worked with these (Azure Kinect), and they are quite nice, but they are much bigger and heavier and require a lot more power.


I don't think I witnessed much crime until I moved to Seattle either. In the six years I've been here, though, I've witnessed:

1. Somebody broke into my aunt's place while nobody was home and took some jewelry and a cell phone. Police came by, handed her a form, said nice things for five minutes, then took off. Never heard anything about it again.

2. My car window smashed after being parked on the street overnight. I called the police. They pointed me to a website where I could report it so that they could keep track of statistics, but said they don't investigate individual cases. Never heard anything about it again.

3. I saw someone in a car pulling the key cylinder out and doing a thing that looked like what people do when they hot-wire a car in movies (I know, I'm surprised this is still a thing; it was an older car). Walked to a safe distance, called the cops. Never heard anything about it again.

4. I saw a person trying to use a screwdriver to pry the lock off a neighbor's garage as I was walking to the bus. Walked a safe distance, called the police. Never heard anything about it again.

5. People broke into the parking garage at my apartment complex and broke into cars several times. Neighbors called the police. Each time, somebody came by, handed over a form, and said there was nothing they could do. Never heard anything about it again.

I hate that this app exists. Seems terrible for all the reasons. I'm also not a hard core law-and-order person, and I don't think the answer is beefing up local law enforcement. But I can also empathize with people who live here and feel unsafe, and are looking for someone who will actually provide some level of security, although I think it's misguided to turn to this kind of app/service.


> and I don't think the answer is beefing up local law enforcement.

Yes, I think the number of police and the funding police receive is definitely not the problem here. It doesn't matter how many cops you have or how many expensive toys those cops have. If the DA refuses to charge the people the police arrest, then the police will stop arresting people because they know it's a waste of their time. The root of the problem is with the priorities of the electorate and the people they choose to elect (DA is elected in Seattle.)


This is a popular take, but (with respect) I don't think a more aggressive DA is the answer for the following reasons:

A. The problem is too big to just arrest and charge everybody. There aren't enough police to track all the petty crime even if you had a DA with an appetite to charge them.

B. Even if you start heavily prosecuting the few people you do have the resources to bring in, this isn't enough of a deterrent to stop other people who are committing small time property crime. People aren't doing it simply because they can get away with it, they're doing it out of desperation.

In my opinion (feel free to disagree), the problem won't go away until there is real upward mobility for the lowest end of the economic spectrum, so that people have something better to do than break into people's houses, cars and garages.

There's a perverse dynamic where if you have zero money it's probably easier to be in the city of Seattle than pretty much anywhere else nearby (more shelters, more services, and more other people already living in tents and vans). At the same time, the city is incredibly expensive, and there aren't a lot of entry-level jobs for people trying to get out of poverty, so making the jump from zero to stable seems like it must be really, really hard here. So there's a huge wealth/income gap without any real bridge across it, which leads to a lot of desperation, and (I think) that has more to do with it than whatever soft policies are in the DA's office.


The bottleneck in fighting crime is investigation and detective work, which requires highly skilled people and a lot of time.

But most people seem to think more police = more grunts on the streets with guns. Or let's give them bigger guns. Or give them more power to harass random passersby.


Those may not actually be crimes in Seattle.


You live in Seattle, there's your main problem. The local government keeps the police department hogtied; cops can't do much outside of their narrow rules of engagement. On top of that, the city's residents don't seem too enthusiastic about the 2nd Amendment either. So who or what is going to protect the flock? Nothing, it seems.


SPD is infamously brutal [1]. They just don't give a f** about actually helping people, just destroying homeless encampments and covering their badges at protests.

Source: Lived here for 5 years and have never seen an SPD officer do anything helpful or solve any crime, but have seen them be brutal at multiple protests. My wife literally saw someone be set on fire at a local park, but I never saw a cop there until they decided to sweep a homeless encampment a year and a half later.

[1]

https://www.seattlepi.com/local/crime/article/12000-complain...

https://www.king5.com/article/news/what-the-federal-consent-...


What are the police supposed to do in this situation? If you let them protest unconstrained they will vandalize and loot stores, not to mention block traffic. If you tell them to disperse, they won't. If you try to politely move them they will resist. Seems like tear gas and moderate physical force is completely warranted.

I agree with you that the police are useless in terms of stopping or punishing most crime though.


What has "hogtied" the police is 30 years of the war on drugs. The framing of citizens as "the flock" is also needlessly condescending. Police are, first and foremost, public servants - not a domestic security force. Following up on the crimes listed in the parent doesn't require guns, SWAT teams, or no-knock warrants. Just phone calls and paperwork. Unfortunately individuals who think they're "sheep dogs" find that shit boring.


Vigilantism gets innocent people killed. Cops not showing up and not investigating has nothing to do with "rules of engagement" (law enforcement is not working in a war zone); it is just plain incompetence or refusal to do one's job on the part of the SPD. That being said, the picture non-US residents get of US law enforcement is basically incompetent neglect of duty anyway.


The El Camion taco truck on Sand Point Way is pretty good.


Right. I think the question though is why isn't somebody fixing this situation? There's money sitting on the table for them when they figure it out and get their act together.


> I think the question though is why isn't somebody fixing this situation?

They are. It's called "Use ROCm": TensorFlow support, PyTorch support, etc.

Yeah, it's limited to Linux, and it's limited to a few cards. But within those restrictions, ROCm does work.


As far as I know, AMD doesn't have an incentive to improve this limited offering, because even if they did get ROCm/HIP/etc. working, they don't have chips with a good enough cost-to-compute ratio to get people to buy them.


Frontier and El Capitan will be the first exascale systems on the planet (now that Project Aurora has slipped schedule).

Both will get the bulk of their compute from AMD MI100 GPUs. The Frontier team seems to have been given MI100 development boards, because AMD is already talking about having ported some code over to the MI100 and tested it.


You're on HN so you're probably aware of the costs and difficulty involved in staffing an organization large enough to tackle these issues in an effective time frame.

Nvidia has quite a head start. You're not just talking about some simple driver support either. You're talking about runtime compilation/JIT(to target various flavors of HW), tooling support, library optimizations, API stability and maintenance... AMD can catch up, but unless they come up with a new approach it's going to take a long time and a lot of smart people to do so.


> AMD can catch up, but unless they come up with a new approach it's going to take a long time and a lot of smart people to do so.

I think they will. AMD has the challenger mindset. They rose from the ashes and now actually compete with Intel and they can tackle NVIDIA as well.


The fundamental truth of ML at the moment seems to be that gathering data takes at least as much effort and infrastructure as actually training the models (and often much more).

