
> 1) The function you are learning needs to be continuous

Seems like a bad limitation when you try to model reasoning based on facts and logic; many things there are just true or false, with no spectrum to it. There is no "kinda true" in those circumstances: you should only get 1 or 0, never any value in between.



Perceptrons are binary classifiers that output 0 or 1 based on a threshold.

While not practical to find or use, any supervised feed-forward network is effectively a parametric linear regression.

Think of an Excel line graph, drawing lines between points, with everything above the line being 'true', i.e. when the soma fires.

That is how perceptrons work.
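To make the threshold idea concrete, here is a minimal sketch of a perceptron (the weights and the AND example are mine, not from the thread): it fires exactly when the weighted sum crosses the threshold.

```python
def perceptron(inputs, weights, threshold):
    """Classic threshold unit: 1 if the weighted sum reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# An AND gate as a perceptron: fires only when both inputs are on.
and_weights = [1.0, 1.0]
print(perceptron([1, 1], and_weights, 1.5))  # 1
print(perceptron([1, 0], and_weights, 1.5))  # 0
print(perceptron([0, 0], and_weights, 1.5))  # 0
```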

Single-layer perceptrons cannot represent functions that are not linearly separable, such as XOR or a band-pass response.

A single biological neuron can use the timing of pulses, band-pass filtering, changes in the rate of pulses, etc., before a signal ever reaches the soma.

Not all problems can be reduced to decision problems, and not all of them can be solved by constant-depth threshold circuits, which is effectively what hard attention is.

An LLM can act as a very reliable threshold or majority gate, as an example, but it cannot generalize PARITY.
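The contrast can be sketched in a few lines (a toy illustration at n = 3, under my own weight grid, not a proof about LLMs): MAJORITY is a single threshold gate, while a brute-force search finds no single threshold gate for PARITY.

```python
import itertools

def threshold_gate(bits, weights, t):
    return 1 if sum(w * b for w, b in zip(weights, bits)) >= t else 0

n = 3
inputs = list(itertools.product([0, 1], repeat=n))

# MAJORITY of 3 bits is one threshold gate: all weights 1, threshold 2.
majority_ok = all(
    threshold_gate(x, [1] * n, 2) == (1 if sum(x) >= 2 else 0) for x in inputs
)
print(majority_ok)  # True

# PARITY of 3 bits: grid search over weights and thresholds finds no single gate.
grid = [i / 2 for i in range(-6, 7)]
parity_ok = any(
    all(threshold_gate(x, [w1, w2, w3], t) == sum(x) % 2 for x in inputs)
    for w1, w2, w3, t in itertools.product(grid, grid, grid, grid)
)
print(parity_ok)  # False: PARITY is not a single threshold gate
```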

Basically, statistical learning inherits the same limits as statistics.

"This statement is 'False'" is a good paradox to use as a lens.



