This also assumes that IQ testing has remained static. It has not: IQ tests continue to evolve, there is more than one of them, and they do not all agree. That is, the tests themselves might be responsible for some of the variance.
AMD. The final holdout was HDMI 2.1 support, which had been blocked by the HDMI group. The group has relented, and support is now landing in the kernel (expected in 7.2).
I sort of figured that HDMI stupidity was strategically a good thing, as it brought the dynamic between the HDMI consortium and VESA, specifically how they treat end users, more into the public eye.
That is, more people being subtly pushed toward DisplayPort is not a bad thing.
Purely rumor, but supposedly Valve put a ton of pressure on them (no idea by what means; again, this is all rumor) because they wanted support for the Steam Machine release.
Unless you're on the absolute newest stuff with DisplayPort 2.1, HDMI 2.1 has more bandwidth than DP 1.4. That covers Nvidia's 2000 through 4000 series: no DisplayPort 2.1 until the RTX 5000s.
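For a rough sense of why, here's a back-of-the-envelope sketch using the raw link rates and line-code overheads from the published specs (a sanity check, not an exhaustive comparison; real throughput also depends on DSC, lane counts, etc.):

```python
# Approximate effective (post-line-coding) payload rates, in Gbps.
# Raw rates and encodings per the public HDMI/DisplayPort specs.
def effective_gbps(raw_gbps, payload_bits, line_bits):
    """Payload rate after line-code overhead (e.g. 8b/10b sends 10 bits per 8)."""
    return raw_gbps * payload_bits / line_bits

hdmi_2_1 = effective_gbps(48.0, 16, 18)     # FRL, 16b/18b      -> ~42.7 Gbps
dp_1_4   = effective_gbps(32.4, 8, 10)      # HBR3, 8b/10b      -> ~25.9 Gbps
dp_2_1   = effective_gbps(80.0, 128, 132)   # UHBR20, 128b/132b -> ~77.6 Gbps

print(f"HDMI 2.1 ~{hdmi_2_1:.1f}, DP 1.4 ~{dp_1_4:.1f}, DP 2.1 ~{dp_2_1:.1f} Gbps")
```

So HDMI 2.1 carries roughly 60% more usable bandwidth than DP 1.4, and only DP 2.1 at its top (optional) UHBR20 rate leapfrogs it.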
And then monitors released during this time generally do the same too.
Also, if you want to use it through a capture card, HDMI ones are far more common and cheaper.
It is futile to expect the TV to be smart and support all sorts of apps and hardware only to be abandoned by the manufacturer years down the line. The only correct way to buy a TV, imho, is to hunt for a dumb one with excellent display properties and pair it with a streaming device such as a Google TV Streamer, an Apple TV, or a DIY x86 HTPC.
TVs are made with a BOM of something like $10 for the SoC, so it's the cheapest crap available.
Then again, none of the streaming services stream at anything remotely close to 100 Mbps, so I doubt they consider it necessary to upgrade to GbE.
Some people have TVs or displays that only use HDMI. I personally wouldn't recommend HDMI if DisplayPort is available, but if HDMI is your only option, then having it work properly will be important.
My monitor has 1 DisplayPort and 2 HDMI inputs, and I use 2 computers with it. They can't share the DisplayPort. All comparable monitors (last time I checked) are the same. So it'd be nice if both worked.
The cable length limitations are also a pain in the ass for not-uncommon A/V system configurations. The recommended max is 6', and the best you might get working stably, if the device and cable gods smile on you, is 15'. 6' is the bare minimum acceptable for just about any A/V setup (in practice it means your devices need to be within about a meter of the screen's port[s], which is pretty close), and even 15' is still too short to be useful for, say, a projector, or a "the A/V receiver or HDMI switch is over in that cabinet, the TV is on this wall across the room" situation.
For 4K at 60 Hz, you'd need HDMI 2.0 or DP 1.2. At those speeds, both kinds of cable should be able to reach 25 feet, and I can find reputable brands selling both kinds at that length.
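A quick sanity check on why those versions are enough (this ignores blanking intervals and audio, so it understates the true requirement a bit, but the headroom is comfortable either way):

```python
# Uncompressed active-pixel data rate for 4K @ 60 Hz, 8 bits/channel RGB.
# Ignores blanking and audio, so the real link requirement is slightly higher.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 60, 24
needed_gbps = width * height * refresh_hz * bits_per_pixel / 1e9  # ~11.9 Gbps

# Effective payload rates after 8b/10b line coding (raw rate * 0.8).
hdmi_2_0_gbps = 18.0 * 0.8   # ~14.4 Gbps
dp_1_2_gbps   = 21.6 * 0.8   # ~17.28 Gbps (HBR2, 4 lanes)

print(f"need ~{needed_gbps:.1f} Gbps; "
      f"HDMI 2.0 gives ~{hdmi_2_0_gbps:.1f}, DP 1.2 gives ~{dp_1_2_gbps:.2f}")
```

Both links clear the ~12 Gbps requirement, which is why 4K60 is exactly where HDMI 2.0 and DP 1.2 drew the line.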
Yep. That's likely because that's an active cable. Active DisplayPort cables exist, too. Here is one vendor selling active UHBR10 cables [0]. If you don't NEED UHBR, then you'll find your selection to be much, much larger. I've been using some Monoprice-branded 50 and 100 ft active fiber-optic HBR3 DisplayPort cables for years with no problem.
If Wall Street were so wise, it would only reward meaningful layoffs. Laying off 10% of a company by stack-ranking every team accomplishes nothing, particularly if the company just hires the same number of people back next quarter.
If a tree has a dead branch, you cut it off. Cutting off 10% of the leaves evenly distributed among branches will remove some dead leaves, but it leaves the source of the problems unaddressed.
The only time layoffs work is when they go along with cutting the product being worked on completely from the company. Everything else should be managed by not growing too big when times are good, and by not replacing people who leave when times are bad. There will be ups and downs in your market; figure out what they are and ensure your headcount matches the long-term trend, riding out the bad times with no profit, cutting all other costs, and knowing things will get better again.
I think it will eventually be its own dialect of English. Telling LLMs what to do works better in not-quite-normal English, and I think this will continue until it isn't recognizable as natural English anymore but is instead a new fuzzy programming language (probably more than one).
I believe new (programming) languages will emerge both for LLMs to parse and take instructions from, and for them to generate code in. The former is because English is a nuanced language evolved for human usage, which LLMs don't quite need; its only advantage is the metric ton of training material. The same goes for Rust, Go, and the other languages LLMs code best in, which all have concepts geared toward human convenience.
Did you specifically re-enable JavaScript? uBlock Origin in medium mode blocks all the tracking JavaScript, and I'd think advanced mode would follow the same basic starting point.
Cheating is a social issue, not a technical one. Communities are the solution.
Private servers are a nice way to do this, and they do still exist in places. My favorite online game uses them along with server-side anti-cheat, and while cheating occasionally happens, it has never been an ongoing issue. I've maybe seen a cheater once or twice in all my many hours playing the game over 10 years (Elite Dangerous, in case you were curious).
They are bookmarks. The more people who bookmark something, the more attention it has. That attention is what you care about, and it's why stars have become a metric people use (that, and the fact that there is little else).
Open source can be a hobby, but it can also be a portfolio. So not a paid job, but a way to get paid jobs. Tech interviewing is so incredibly broken that you really need every option working for you and cannot necessarily afford to "just stop".
Open models running locally are the answer. Relying on proprietary, closed software always puts that company's priorities above your own. You have given up control.
While running them locally presently doesn't make sense economically, you don't need to run them locally to address this issue. There is a lot of competition in hosting open models, and you have a variety of services to choose from. Run the open models now; reward that ecosystem instead of continuing to reward closed systems that dream of rent-seeking.
You don't need to run the model locally if you don't care about sharing your data. Personally, I am happy to share data with Kimi or DeepSeek if it means we get better OSS models. For private stuff, though, local is king.
It'll be a while yet before open models that are good enough will be viable for local use. Heck, I've been trying to use Qwen 3.5 39B A3B on my system, which is modest but no slouch, and have only been able to get ~4.5 tok/s after optimization, and it really runs my system hot (fans instantly go crazy). It's just not practical for serious work.
I've been using Qwen 3.5 and then 3.6 27B Q4 on Ollama with a single 7900 XTX and the Codex CLI, and I have been blown away by how genuinely useful it is. I've been able to ask it to do long, multi-step problems, and it can do things that would likely have taken me days to iron out in a matter of hours, or sometimes even minutes.
I get about 30 tok/s, which is far from blazing, but given the capability it has it is absolutely viable for accelerating my work.