Hacker News | DimitriBouriez's comments

The extension is no longer available on Chrome. So, I am looking for a way to automatically redirect x.com to xcancel.com


Just write a simple userscript with Tampermonkey or Greasemonkey. You can even paste the request into ChatGPT and it will give you one.
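A minimal sketch of such a userscript (the metadata values are illustrative, not a published script; the rewrite logic is pulled into a plain function so it can run outside a browser too):

```javascript
// ==UserScript==
// @name         x.com -> xcancel.com
// @match        https://x.com/*
// @run-at       document-start
// @grant        none
// ==/UserScript==

// Swap the hostname, keeping path, query string, and hash intact.
function toXcancel(href) {
  const url = new URL(href);
  url.hostname = "xcancel.com";
  return url.toString();
}

// In the browser, redirect immediately (document-start fires before render).
if (typeof window !== "undefined" && window.location) {
  window.location.replace(toXcancel(window.location.href));
}
```

Because `@run-at document-start` runs before the page renders, the redirect happens before x.com loads any content.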



This is also an MV2 extension and therefore also not available on Chrome.


Not sure what search you're using, but Googling `xcancel redirect` gave me plenty of results on the first page. You could be stuck in a search bubble. Maybe try searching in private browsing mode?

This one is manifest v3: https://chromewebstore.google.com/detail/xcancelcom-redirect...

This userscript (despite the name) also redirects to xcancel: https://greasyfork.org/en/scripts/450008-twitter-to-nitter-r...



Chrome has disabled support for MV2 extensions (unless you set an enterprise policy) and is entirely removing support for MV2 extensions in v139: https://developer.chrome.com/docs/extensions/develop/migrate...

It releases in August.
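For reference, the MV3 replacement for this kind of redirect is the `declarativeNetRequest` API: instead of script logic, the extension ships a static rule file. A rough sketch of what a `rules.json` could look like (the rule id and file layout are assumptions, and the manifest would still need to declare the `declarativeNetRequest` permission and register this file under `declarative_net_request.rule_resources`):

```json
[
  {
    "id": 1,
    "action": {
      "type": "redirect",
      "redirect": { "transform": { "host": "xcancel.com" } }
    },
    "condition": {
      "requestDomains": ["x.com", "twitter.com"],
      "resourceTypes": ["main_frame"]
    }
  }
]
```

The `transform` form rewrites only the host, so the path and query string of the original URL are preserved.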


You're welcome to quit using spyware as your primary browser if you're dissatisfied with the spyware's treatment of your rights as a user.

If you choose not to, you should probably cede the right to complain about problems solved with MV2 extensions, since those problems are now self-inflicted.


Hope those Google guys fix it! It would be a shame to lose half of the browser's appeal.


Ok.


For this specific case there's a "Nitter Redirect" extension.


One thing to consider: we don’t know if these LLMs are wrapped with server-side logic that injects randomness (e.g. using actual code or external RNG). The outputs might not come purely from the model's token probabilities, but from some opaque post-processing layer. That’s a major blind spot in this kind of testing.


The core of an LLM is completely deterministic. The randomness seen in LLM output is purely the result of post-processing the output of the pure neural-net part of the LLM, a step that exists explicitly to inject randomness into the generation process.

This is what the "temperature" parameter of an LLM controls. Setting the temperature to 0 effectively disables that randomness, but the result is a very boring output that's likely to end up caught in a never-ending loop of useless text.
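The mechanism can be sketched in a few lines: logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution and T = 0 degenerates to greedy argmax decoding (the logit values below are made up for illustration):

```javascript
// Temperature-scaled sampling from a logit vector (illustrative sketch).
function sampleWithTemperature(logits, temperature, rand = Math.random) {
  if (temperature === 0) {
    // T = 0: greedy decoding, always pick the highest-logit token.
    return logits.indexOf(Math.max(...logits));
  }
  // Divide logits by T, then softmax (subtract max for numerical stability).
  const scaled = logits.map(l => l / temperature);
  const m = Math.max(...scaled);
  const exps = scaled.map(s => Math.exp(s - m));
  const total = exps.reduce((a, b) => a + b, 0);
  // Inverse-CDF sampling over the resulting distribution.
  let r = rand() * total;
  for (let i = 0; i < exps.length; i++) {
    r -= exps[i];
    if (r <= 0) return i;
  }
  return exps.length - 1;
}
```

With `temperature = 0` the same prompt always yields the same token; raising it spreads probability mass over more candidates.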


You're right, although tests like this have been done many times locally as well. This issue comes from the fact that RL usually kills the token prediction variance, disproportionately narrowing it to 2-3 likely choices in the output distribution even in cases where uncertainty calls for hundreds. This is also a major factor behind fixed LLM stereotypes and -isms. Base models usually don't exhibit that behavior and have sufficient randomness.
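The narrowing effect described above can be made concrete by comparing the Shannon entropy of a broad next-token distribution with a collapsed one (the probability values are toy numbers, not measured from any real model):

```javascript
// Shannon entropy (in bits) of a discrete probability distribution.
function entropyBits(probs) {
  return probs.reduce((h, p) => (p > 0 ? h - p * Math.log2(p) : h), 0);
}

// Toy distributions: a base model spreading mass over candidates vs.
// an RL-tuned model collapsing most mass onto one or two tokens.
const baseLike = [0.25, 0.25, 0.25, 0.25];   // 2 bits of entropy
const rlTuned  = [0.90, 0.08, 0.015, 0.005]; // well under 1 bit
```

Lower entropy means fewer effective choices per step, which is exactly the loss of output variance being described.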


Agreed. These tests should be performed on local models.

