subb's comments | Hacker News

I suspect a lot of people, but especially nerdy folks, might mix up knowledge and intelligence, because they've been told "you know so much stuff, you are very smart!"

And so when they interact with a bot that knows everything, they associate it with smart.

Plus we anthropomorphise a lot.

Is Wikipedia "smart"?


What is the definition of intelligence?


Ability to create an internal model of the world and run simulations/predictions on it in order to optimize the actions that lead to a goal. Bigger, more detailed models and more accurate predictions are more intelligent.


How do you know if something is creating an internal model of the world?


Look at the physical implementation of how it computes.


So you are making the determination based on the method, not on the outcome.


Did I ever promise otherwise? Intelligence is inherently computational, and needs a physical substrate. You can understand it both by interacting with the black box and opening up the box.


Definitely not _only_ knowledge.


Right, so a dictionary isn't intelligent. Is a dog intelligent?


There's a spectrum of human involvement in producing a thing, and art is possibly the last thing I want to see automated.

In the end, art is about human connection. There's a difference between a print of some generated AI slop found online, a painting made in a Chinese factory for a big store, and the scribble your friend made when they went through depression.

You can make a game with all three processes. They are not the same.


Except color is a construction of your eye-brain derived from stimuli, surround, memories, etc.

It's definitely not something you can plug into a three-value model. Those are a good space for encoding stimuli, however.

The distinction between brain-color and physical-color is what screws everyone up.


> It's definitely not something you can plug into a three-value model.

What do you mean? And what is screwed up? We use 3 dimensions because most of us are trichromats, and because (not coincidentally) most digital display devices have 3 primaries. Three-value models definitely are sufficient for many color tasks & goals. They work so well that outside of science and graphics research it's hard to find good reasons to need more, especially for art & design work. It'd be more interesting to identify cases where a 3D color model or color space doesn't work… what cases are you thinking of? 3D cone response is neither physical (spectral) color nor perceptual ("brain") color; it lands much closer to the physically-based side of things, but it completely physically justifies using 3D models without needing to understand the brain or perception, does it not?
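To make the "3D is physically justified" point concrete, here's a minimal metamerism sketch, with made-up Gaussian lobes standing in for the real cone fundamentals / CIE tables (assuming numpy is available): two physically different spectra that produce the same three responses are, by construction, indistinguishable to a three-sensor observer.

    import numpy as np

    wl = np.arange(380, 781, 1.0)        # visible wavelengths, nm

    def bell(mu, sigma):
        # Illustrative smooth "sensitivity" lobe; NOT the real CIE
        # color matching functions or cone fundamentals.
        return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

    # Three cone-like response curves (rough L/M/S peak positions).
    S = np.stack([bell(565, 50), bell(540, 45), bell(445, 30)])

    spd = bell(550, 80) + 0.2            # a smooth broadband spectrum

    # Any spectral component in the null space of S is invisible to
    # these 3 sensors, so adding it yields an exact metamer.
    null_basis = np.linalg.svd(S)[2][3:]   # rows orthogonal to all of S
    metamer = spd + 0.05 * null_basis[0]   # scaled small, spectrum stays >= 0

    print(S @ spd)       # physically different spectra...
    print(S @ metamer)   # ...identical three-channel responses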


They are very useful for encoding stimuli, but a stimulus is "not yet" color. When you have an image that is more than a single patch of RGB values, a lot of things will influence what color you compute from the exact same RGB.

Akiyoshi's color constancy demonstrations are good examples of this. The RGB model (and any three-value "perceptual" model) fails to predict the perceived color here. You are seeing different colors, but the RGB values are the same.

https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...
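For anyone who wants to reproduce the basic effect locally, here's a tiny sketch (assuming Pillow is installed; the filename and values are arbitrary) that renders the classic simultaneous-contrast setup: two patches with the exact same RGB on different surrounds.

    from PIL import Image, ImageDraw

    # Two patches with the SAME RGB value on different surrounds.
    img = Image.new("RGB", (400, 200))
    draw = ImageDraw.Draw(img)

    draw.rectangle([0, 0, 199, 199], fill=(30, 30, 30))       # dark surround
    draw.rectangle([200, 0, 399, 199], fill=(225, 225, 225))  # light surround

    patch = (128, 128, 128)                                   # identical stimulus
    draw.rectangle([70, 70, 129, 129], fill=patch)
    draw.rectangle([270, 70, 329, 129], fill=patch)

    img.save("contrast_demo.png")  # left patch appears lighter than the right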


Here you’re talking about only perception, and not physical color. You could use 100 dimensional spectral colors, or even 1D grayscale values, and still have the same result. So this example doesn’t have any bearing on whether a 3D color space works well for humans or not. Do you have any other examples that suggest a 3D color space isn’t good enough? I still don’t understand what you meant.


Yes exactly. I'm intentionally using "color" as a perceptual thing, not as a physical thing. If we are talking about a color model, then it needs to model perception. As such, RGB, as a predictor of perception, can often fail because it doesn't account for anything beyond what hits the retina, not what happens after. For one, it lacks spatial context: placing the same RGB value against a different surround will feel different, like in the example above. But if you had a real color (as in, perceptual) picker in Photoshop, you would get a different value.

It's excellent at compressing the visible part of the EM spectrum, however. This is what I meant by stimulus encoding.


Still not seeing why you claimed color is definitely not something you can plug into a 3D model. We can, and do, use 3D color models, of course. And some of them are designed to be closer to perceptual in nature, such as the LAB space @zeroq mentioned at the top of this sub-thread. No well-known perceptual color space I know of, and no color space in Photoshop, accounts for context/surround/background, so I don't understand your claim about Photoshop immediately after talking about the surround problem. But FWIW everyone here knows that RGB is not a perceptual color space and doesn't have a specification or standard, and everyone here knows that color spaces don't solve all perceptual problems.

I find it confusing to claim that cone response isn’t color yet, that’s going to get you in trouble in serious color discussions. Maybe better to just be careful and qualify that you’re talking about perception than say something that is highly contestable?

The claim that a color model must model perception is also inaccurate. Whether to incorporate human perception is a choice that is a goal in some models. Having perceptual goals is absolutely not a requirement to designing a practical color model, that depends entirely on the design goals. It’s perfectly valid to have physical color models with no perceptual elements.


The problem is that we mix up the physical and the perceptual, including in our language. If you look at the physical stuff, there's nothing in this specific range of EM radiation that is different from UV or IR light (or beyond). The physical stuff is not unique; our reading of it is. Therefore, color is not a physical thing.

And so when I say "color" I only mean it to be the construction that we make out of the physical thing.

We project these constructions back outside of us (e.g. "the apple is red"), but we must not fool ourselves that the projection is the thing, especially when we try to be more precise about what is happening.

This is why I'm saying a 3D color model is very far from modelling color (the brain thing) at all. But! It's not purely physical either, otherwise it would just be a spectral band or something. So it's pseudo-perceptual: the physical stuff, tailored for the very first bits of anatomy we have for reading that physical stuff. It's stimulus encoding.

If you build a color model, it's therefore always perceptual, and needs to be evaluated against what you are trying to model: perception. You create a model to predict things. RGB and all the other models based on three values in a vacuum will always fail at predicting color (brain!) when the stimulus's surround is more complex.


There’s a valid point in there somewhere, but you’re also saying some stuff that seems hyperbolic and getting harder to agree with. You’re right that perception is complicated, and I agree with you when you say 3D models don’t capture all of perception. That is true. That does not imply that people can’t use 3D models for lots of color tasks. Again, it always depends on your goals. You’re making abstract and general claims without stating your goals.

It’s fine for you to think of perception when you say color, but that’s not what everyone means, and therefore, you’re headed for miscommunication when you make assumptions and/or insist on non-standard definitions of these words.

Physical color is of course a thing. (BTW, it seems funny to say it’s not a thing after you introduced the term physical-color to this thread.) Physical color can mean, among other things, the wavelength distribution of light power. A physical color model is also a thing, it can include the quantized numerical representation of a spectral power distribution. Red can mean 700nm light. Some people, especially researchers and scientists, use physical color models all the time. You’re talking about meanings that are more specific than the general terms you’re using, so maybe re-familiarizing yourself with the accepted definitions of color and color model would help? https://en.wikipedia.org/wiki/Color_model

Again, it’s fine to talk about perception and human vision, but FWIW the way you’re talking about this makes it seem like you’re not understanding the specific goals behind 3D color spaces like LAB. Nobody is claiming or fooling themselves to think they solve all perception problems or meet all possible goals, so it seems like a straw man to keep insisting on something that was never an issue in this thread. If you want to talk about 3D models not being good enough for perception, then please be more precise about your goals. That’s why I asked what use cases you’re thinking of, and we haven’t discussed a goal that justifies needing something other than a 3D color model - color constancy illusions do not make that point.


Unfortunately, it seems like we will not reach any agreement here.


Honestly I haven't read the whole thread, but I think you're mixing in stuff like green and blue being called by the same word in some languages, or ancient Greek completely missing a word for blue.

What I was thinking is along the lines of showing a real life scene to ten random people - like a view of a city park outside of an office window - and then showing them a picture of said scene on a computer screen using only 256 colors (quantization) and asking them if it looks the same.

Or modeling a photorealistic 3D scene of a room in a video game, then switching off the light and asking the player whether the scene still looks realistic after we changed the colors, or whether we stumbled into the uncanny valley.

The simplest hands-on experiment I can think of is putting yourself in the shoes of an oil painter and thinking about creating a gradient between two colors, let's say blue and green (or any other pair, it doesn't really matter). Now try to imagine said gradient in your mind, and then try to recreate it with a graphics program like Photoshop. If you go down this route the gradient will seem odd. Unnatural.

All the standards we've commonly used for the last 30 years, like RGB, HSL, HSV, etc., fall flat. They are not so far off as to call them "uncanny" (as in "uncanny valley"), but they seem wrong if you look close enough.
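One mechanical, well-understood part of that wrongness is that sRGB values are gamma-encoded, so naive interpolation darkens the midpoints. A small sketch below compares a naive lerp with a linear-light lerp; this is only one of the issues, and it says nothing about the perceptual effects discussed elsewhere in the thread.

    # Why a naive RGB gradient can look "off": sRGB values are
    # gamma-encoded, so averaging them darkens the midpoints.

    def srgb_to_linear(c):          # c in [0,1], exact sRGB EOTF
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    def lerp(a, b, t):
        return a + (b - a) * t

    blue, green = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)

    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        naive = tuple(lerp(a, b, t) for a, b in zip(blue, green))
        linear = tuple(linear_to_srgb(lerp(srgb_to_linear(a), srgb_to_linear(b), t))
                       for a, b in zip(blue, green))
        print(t, [round(v, 3) for v in naive], [round(v, 3) for v in linear])
    # The linear-light midpoint is noticeably brighter than the naive one.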

To actually simulate mixing two blobs of oil paint you need arcane algorithms like Kubelka-Munk (yet another groundbreaking discovery in IT made by reading 100-year-old research).
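For flavor, here's a hedged sketch of the single-constant Kubelka-Munk idea (the three-band reflectance numbers are made up): convert reflectance to the absorption/scattering ratio K/S, mix those linearly by concentration, and convert back.

    import numpy as np

    def ks(R):
        # Kubelka-Munk: reflectance -> absorption/scattering ratio K/S.
        return (1.0 - R) ** 2 / (2.0 * R)

    def reflectance(q):
        # Inverse: K/S ratio -> reflectance.
        return 1.0 + q - np.sqrt(q ** 2 + 2.0 * q)

    def mix(R1, R2, c):
        # Single-constant K-M mixing: blend in K/S space, not in RGB.
        return reflectance((1.0 - c) * ks(R1) + c * ks(R2))

    # Made-up reflectances of a "blue" and a "yellow" paint,
    # sampled at just 3 bands (blue, green, red) for brevity.
    blue   = np.array([0.70, 0.30, 0.08])
    yellow = np.array([0.06, 0.80, 0.85])

    print(mix(blue, yellow, 0.5))   # green band dominates the mix
    # Unlike RGB averaging, K-M mixing of blue and yellow stays green,
    # which is why paint mixes green while light mixes toward gray.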

All in all - take a look at this video. I know it's 40 minutes long, but this topic has been a pet peeve of mine for almost 20 years and it's the best and most comprehensible take on the subject: https://www.youtube.com/watch?v=gnUYoQ1pwes


That video is excellent, thanks for sharing. BTW it does back up the point @subb was making, that the experience of color is a perceptual thing; “light isn’t what makes something a color. As we’ve seen, colors are ultimately a psychological phenomenon.” Which is true.

FWIW I suspect the issue in this thread is that color models and color spaces are not necessarily modeling perception. The word color is overloaded and has multiple meanings. Just because color experience is perception, that doesn’t mean “color” is always referring to perception nor that phrases like “spectral color” or “color model” are referring to perceived experience, and they’re often not.

A color model is any numeric representation that captures the information needed to recreate a color, and it can be a physical or spectral color model, a stimulus model (cone response), or a perception model. Being able to recreate a color does not imply that the information is perceptual. Spectral “color” measurements are just pure physics, and spectral color models are just modeling pure physics.

By and large, the color matching experiments that led to our CIE standards mostly measured average cone response for an average observer, and were never intended nor designed to capture effects like adaptation and surround. This is why many of the 3D color spaces we have that trace lineage to those experiments, especially the "perceptual" ones, are primarily modeling cone response and not perception. CIE color spaces do involve some kind of very averaged-out perception of color, in a static, unchanging, well-adapted, no-surround kind of way, which is for example why the "red" color matching function goes negative. [1]

There are people doing stuff like adaptation and spatial tone mapping in video games and research, and they’re using more tools than just 3D color spaces for that. That’s the kind of discussion I was hoping @subb would get into, i.e., what specific cases require going beyond the CIE models.

[1] https://yuhaozhu.com/blog/cmf.html
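As a small illustration of the kind of adaptation that static CIE-style spaces don't model by themselves, here is a minimal von Kries sketch (all numbers are made up; real pipelines first convert XYZ to LMS with a matrix such as Bradford, which is omitted here):

    import numpy as np

    def von_kries(lms, lms_white_src, lms_white_dst):
        # Scale each cone channel by the ratio of destination and
        # source white points: the simplest model of adaptation.
        return lms * (lms_white_dst / lms_white_src)

    stimulus = np.array([0.40, 0.35, 0.20])   # some cone response (made up)
    tungsten = np.array([1.10, 1.00, 0.60])   # warm illuminant white (made up)
    daylight = np.array([0.95, 1.00, 1.09])   # cooler illuminant white (made up)

    print(von_kries(stimulus, tungsten, daylight))
    # The S (blue-ish) channel is boosted: the model predicts that a patch
    # seen under tungsten corresponds to a bluer stimulus under daylight,
    # which a static 3D color space on its own does not express.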


I experimented with some photo printing services and came across one professional-level service that offered pigment inkjet printing (vs the much more common dye inkjet printing). Their printers had 12 colors of ink vs the traditional 4. I did some test photos and visually they looked stunning.


Have you looked at the actual ink colors? Printing is a very different story. They’re not using 12 primaries, they’re using multiple gradations of the same primary. I don’t know which ink set you used, but 5 different grayscale values is common in a 12-ink set. Here’s an example of a 12 ink set:

https://www.amazon.com/Xeepton-Cartridge-Replacement-PFI4100...

There’s only 1 extra color there: red. There are multiple blacks, multiple cyans, multiple yellows, and multiple magentas. The reason printers use more than 3 inks is for better tone in gradations, better gloss and consistency. It’s not because there’s anything wrong with 3D color models. It’s because they’re a different medium than TVs. Note that most color printers take 3D color models as input, even when they use more than 3 inks.


I believe they had the standard CMYK, four shades of black, as well as red, orange, green, and either violet or blue. But it has been a bunch of years so this is from memory. I honestly don't remember the name of it. What I do remember is that they did not have a web-based ordering system. Instead they had a piece of desktop software you had to install. And you had to prove that you are a professional photographer before they would let you create an account. I am not a professional photographer but I did enough amateur photography that I managed to fake my way into it and placed a few orders. Quality was definitely better for all options compared to Nations Photo Lab but so was the price, and the ordering setup was much more complex, so I didn't continue using them. They did have a lot more specialty options than any other printer I have seen.


"You only need three colors" is a bit of a cheat, because it doesn't really work out in reality. You can use three colors to get a good color gamut (as your screen is doing right now), but to represent close to every color we can see you would need to choose a red and blue close to the edge of what we can perceive, which would make it very dim. And because human vision is weird you would need some negative red as well, which doesn't really exist.

Printing instead uses colors that are in the range we can perceive well, and whenever you want a color that is beyond what a combination of the chosen CMYK tones can represent you just add more colors to widen your gamut. Also printed media arguably prints more information than just color (e.g. "metal" colors with different reflectivity, or "neon" colors that convert UV to visible light to appear unnaturally bright)
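The "negative red" point can be seen numerically: converting an XYZ color through the standard XYZ-to-linear-sRGB matrix and getting a negative channel means the color is real and visible, but outside what the three sRGB primaries can mix. (The sample XYZ values below are made up for illustration.)

    import numpy as np

    # Standard XYZ (D65) -> linear sRGB matrix.
    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    def in_srgb_gamut(xyz):
        rgb = XYZ_TO_SRGB @ xyz
        # A negative channel means the color needs "negative light" from
        # a primary: we can see it, but 3 primaries can't mix it.
        return rgb, bool(np.all(rgb >= 0) and np.all(rgb <= 1))

    print(in_srgb_gamut(np.array([0.30, 0.32, 0.30])))  # muted color: in gamut
    print(in_srgb_gamut(np.array([0.15, 0.40, 0.45])))  # saturated: R goes negative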


Which is interesting because I am printing digital photos which I edit on an RGB screen.


I paid for college in part by doing digital prepress. We had CMYK and 8 and 12 color separations.

CMYK always has a dramatic color shift from any on-screen colorspace. Vivid green is really hard to get. Neons are (kinda obviously) impossible. And, hilariously/ironically (given how prevalent they are), all manner of skin tones are tough too.

Photoshop and Illustrator let you work in CMYK, which is directionally correct. Ask your printer if they accept those natively.


You see this all the time with professional lighting fixtures as well!

For example, the ETC Source4 LED Lustr X8 has: Deep Red, Red, Amber, Lime, Green, Cyan, Blue, Indigo[0]

RGB LEDs are pretty crappy at rendering colours as they miss quite a lot of the colour spectrum, so the solution is to just add more to fill in the gaps!

[0] https://www.etcconnect.com/WorkArea/DownloadAsset.aspx?id=10...


Printing is a whole other beast.

My favorite part: if you're preparing an ad for a newspaper you need to keep the sum of all your CMYK components under a value of 120 or so, otherwise the ink will oversaturate the paper and soak through.
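Mechanically, that limit is a total-area-coverage (TAC) clamp. A naive hedged sketch follows; real prepress uses smarter tricks such as GCR/UCR (replacing CMY overlap with black) rather than proportional scaling.

    # Enforce a total-area-coverage limit on CMYK percentages by
    # scaling all channels down proportionally (naive version).
    TAC_LIMIT = 120.0  # percent, the figure from the comment above

    def clamp_tac(c, m, y, k):
        total = c + m + y + k
        if total <= TAC_LIMIT:
            return c, m, y, k
        scale = TAC_LIMIT / total
        return c * scale, m * scale, y * scale, k * scale

    print(clamp_tac(80, 70, 60, 90))  # 300% total -> scaled down to 120%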


Fun fact: there's a guy with a similar background to mine and a similar dedication to color, yet way more productive, and he came out with this incredible piece of art: the Rebelle app.

As with most recent technological breakthroughs, it uses math from a 1931 paper to magically blend colors in ways that seem so realistic it's almost uncanny.


You did send a specific wavelength to your retina, but that wasn't violet. Because violet is a construct by your brain.

Color is not a property of wavelength. There's nothing special about photons wiggling in the 380 to 750nm range.

In general it's not necessary to be this pedantic, but given the topic here, I think it's important to realize this. It takes a while, because we are so good at projecting our internal experience outward.

Remember the blue / black dress?


> did send a specific wavelength to your retina, but that wasn't violet.

It was, by definition

> Color is not a property of wavelength.

Sure, it's a label

> There's nothing special about photons wiggling in the 380 to 750nm range.

There is - they activate different receptors your brain relies on, hence leading to a distinct (from other wavelengths) sensation


The waves aren't inherently special, your retina is.

What if we were sensitive to the 200 to 500nm range? What would be blue, violet and red then?

Our eyes and brain are the ones constructing what we perceive as color. It doesn't exist outside of us.

Here's good article on the subject: https://anthonywaichulis.com/regarding-perception-photograph...


>What if we were sensitive to the 200 to 500nm range?

https://www.youtube.com/watch?v=A-RfHC91Ewc


In my personal conception, violet is the kind of colour at the lower edge of the rainbow, which is a single wavelength. And purple is what the brain constructs. However, of course, the names of the colours are themselves vague.


Maybe that's a language issue, because purple and violet are color names around here.

And as such, they are both a construct of the brain, as any other colors, like... white.

What we label as "violet wavelength" is only a narrow projection of our experience outward. Case in point: we don't have such colorful (eh) names for other EM wavelengths.

I say narrow because you could take this pure laser, change the surround, and you would inevitably perceive it differently, even though the power and wavelength are the same.


Hmm, if you talk to a colorist, violet and purple are two different colors, one more toward the red and the other more toward the blue. That's still a construct built from two wavelengths of color. So, a made-up color of our brain that doesn't exist.


"Violet" is a spectral color, which means that it is a color formed by a single wavelength of light. And it is a member of the rainbow (the spectrum).

"Purple" is a mixture of red and blue.


Violet is a real wavelength, below blue on the spectrum. Where it becomes invisible to the human eye, it starts getting called ultraviolet.

Magenta and purples are constructs by the brain, as you mention.


No, they are all constructed, including blue.

If I shine some wavelength to your eyeball and you say "it looks blue", but then I change the surrounding and now it looks white, I don't think you would conclude that the original wavelength is blue.

We have many examples like this, which show that vision is not at all an accurate wavelength-measurement device.


Consider that just after the cone cells, there are other cells doing computation / aggregation of cone signals. Don't forget that color is a brain construct.

For those reasons (and others), there's often a strong disconnect between stimulus and perception, which means there's no such thing as a perceptually uniform color space.


The eyes even do edge detection before sending signals to the brain.


Is this correct? I thought edge detection was done in the primary visual cortex.


Sidestepping what counts as an "edge", quite a lot of work is done in the retina, including differential computation across cones: some "aggregator" cells will fire when they detect lines, movement, etc.

You can read up on ganglion cells, bipolar cells and amacrine cells and see that a lot of preprocessing is done before the signal even reaches the brain!
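The classic model of those center-surround receptive fields is a difference of Gaussians, which behaves like an edge/contrast detector before any cortex is involved. A hedged sketch, assuming numpy and scipy are available (the sigmas are arbitrary):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround(image, center_sigma=1.0, surround_sigma=3.0):
        # Difference of Gaussians: a small excitatory center minus a
        # larger inhibitory surround, the textbook ganglion-cell model.
        return gaussian_filter(image, center_sigma) - gaussian_filter(image, surround_sigma)

    # Synthetic image: dark left half, bright right half.
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0

    response = center_surround(img)
    # Response is ~0 in uniform regions and peaks at the boundary;
    # this prints a column index adjacent to the step at column 32.
    print(np.abs(response[32]).argmax())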



Capitalism is generally good at the startup scale, to figure out who has the best ideas. Once we have collectively decided on (or were forced into) a single implementation or a few of them: good job, you won! Now you become a non-profit / state-owned company / worker coop.


Games are not the real world. When you play a game, you are looking at an image. For the current topic, motion blur in games is like motion blur with a camera, not like when you turn your head.

It's photorealism, not realism.


Games however are not movies either. Specifically, they are interactive and thus have different requirements on their visual looks. Most games are also expending significant effort trying to make the experience as immersive as possible, and simulating looking through a film camera works against that. Camera motion blur, film grain, lens flares and other such effects should have no place in games where you play a human or human-like character rather than a robot.


Let me know when we have the tech to render the real sun's power on your TV screen.

Reproduction of reality is not the goal, because it's unachievable.


With enough money...

Larry Ellison used to have a TV projector in his house, with the light output for a drive-in movie theater but aimed at a small screen, so he could watch movies in broad daylight. That was before everyone got bright screens.


I don't see how the result would avoid being either uncomfortably bright or still having shitty contrast due to the unavoidably high "black" levels from the ambient light.


Modern HDR screens already cause the same subconscious perception as the sun would. You flinch, your eyes adjust, you even move a little bit back and feel a kind of warmth that isn't actually there.


Anyone permanently burned their retina yet?


Well HDR screens are becoming more common so we're moving in that direction.

Also, simulating realistic processes is not incompatible with tonemapping the result to be able to display it on limited screens.


Watching Sunshine (2007) will be a real experience when that happens.


The information travelling down the optic nerve is already processed heavily by the retina. At a minimum, you have compression by differentiation, i.e. a bunch of rods and cones are bundled together by comparing their signals.

But I suppose your point is still possibly valid - just even more complex.


You cannot watch RAW footage without some transformation, unfortunately.

It's better to think of RAW as pure data, just like data captured by an infrared telescope needs transformation before it can be displayed on a screen.

Displaying raw as-is means clipping the data to some range, which is a transformation in itself (and a pretty bad one).
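A toy numeric illustration (the scene-linear values are made up): naive clipping collapses everything above 1.0 to the same value, while even a simple global tonemap such as Reinhard's x/(1+x) preserves the ordering of the highlights.

    import numpy as np

    # Made-up scene-linear values spanning a wide dynamic range, the way
    # RAW sensor data (after demosaic) might: deep shadow to bright sun.
    raw = np.array([0.01, 0.1, 0.5, 2.0, 16.0])

    clipped  = np.clip(raw, 0.0, 1.0)   # "display as-is": highlights flatten
    reinhard = raw / (1.0 + raw)        # simple global tonemap, keeps ordering

    print(clipped)    # [0.01 0.1 0.5 1. 1.]  <- 2.0 and 16.0 now identical
    print(reinhard)   # [0.0099 0.0909 0.3333 0.6667 0.9412] <- highlights kept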

