The only people using MSSQL Server are people deep, deep in the Microsoft ecosystem. Think government work, and those unlucky enough to work at a pure Microsoft shop where every problem looks like a Microsoft or Azure solution.
It's not a dominant database anywhere on the outside.
We're a B2B shop migrating to MSSQL, from SQL Anywhere. Managed MSSQL in Azure is fairly easy operationally, especially since we don't have a dedicated DBA and our support staff aren't SQL gurus.
However, since we now have the tools to run on both, and experience migrating, we might be moving to PostgreSQL at some point in the not-too-distant future. Managed MSSQL in Azure is not cheap.
We started with SQL Anywhere way before it was SAP. SAP was the primary driver to move away, but MSSQL also fits nicely with our customers, some of which want to run their own MSSQL instance.
I think this is more your bias; it's also regional, as different parts of the world seem to use these things way more than others. A really quick search shows it's used a LOT outside of the areas you mentioned. The one place it's not really used? Startups. It's the #3 DB in the world.
Sure, it's #3, but the whole point is that new installs have mostly stalled, and outside of companies absolutely mainlining the .NET Framework Kool-Aid, no one is building greenfield on it. I've worked for several .NET Core companies; all of them have either converted or are in the process of converting away from MSSQL to a MySQL flavor or PostgreSQL.
In fact, Microsoft is investing heavily in Postgres, which is why they bought the Postgres sharding company Citus; looking at the Postgres commit history, they have several employees actively working on it. They also contributed DocumentDB, which is Mongo over Postgres.
It will take a long time to die and Microsoft will still continue to do little work on the product and stack your money in their vault while giggling.
I know of a couple of rather fancy, proprietary 2-way radio trunking systems products that use local MS SQL on the back end, to keep track of configs for individual subscriber radios and system configurations for the radio repeaters.
(What's that? Well, if you ever walk into a place like a gigantic oil refinery, you'll see a bunch of people working there. If you look long enough, you'll notice that each of them have an expensive-looking radio ("walkie talkie") on their hip. Some of those radios may be my fault -- and of those that are, there's an MS SQL database that knows exactly how it was programmed. But I didn't pick it; that's just how the system operates.)
I don't mind Radio Management, per se. It's a nice idea. It just feels broken and internally-disjointed when it isn't falling flat on its face.
We almost got into bits of the P25 side to help service $giant_government_entity's system, but the GTR 8000 training was complete ass. Mostly what we got out of it was long periods of the dude fretting about the clutch job that his Hyundai was in the shop for and talking on the phone about that, interspersed with a repeated slogan of "I was a Navy man. I don't know what makes sense to you, but I do things by memorizing steps instead of understanding how they work."
Sometimes, he'd get around to mentioning some of those steps.
Much waste, very disappoint.
We all very thoroughly failed the test at the end of that week.
Yep. Cabinet Vision as of 3 years ago required installing both SQL Server 2016 and 2019, plus the 2010-vintage Microsoft Jet database, plus PowerShell 2.0, plus .NET 3.5.
It’s completely dominant in its industry and has no real competition. Pricing starts at $200 a month for the most basic, single user setup and goes up (way up) from there.
Yep, there is plenty of that type of software, industry niche software that doesn't have market cap to interest competitors, that will require MSSQL and Windows so Microsoft will continue to sell it/develop it.
This is like saying nobody eats at McDonald’s because they have more competition now. It’s not wrong from a certain perspective but that’s still a huge number of customers.
Sure and likely Windows Server and MSSQL will still exist in 2056 because there will be enough money in it. Hell, AS/400 is still kicking but I'm not sure anyone would consider that anything but legacy.
I mean, sure, that can happen, but that obviously depends on what the test is testing; it's not like it's bad in all cases to say "now plus 1 year". In the case in question it's really just "the cookie is far enough in the future that it hasn't expired", so "expire X years in the future from now" is fine.
Arguably you should have a fixed start date for any given test, but time is quite hard to abstract out like that (there are enough time APIs that you'd want OS support, but Linux, for example, doesn't support clock namespaces for the realtime clock, only a few monotonic clocks).
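A minimal sketch of what's being described, in Python. The `make_session_cookie` helper and the cookie's shape are made up for illustration; the point is that the test pins a start date and asserts only the relative property ("hasn't expired yet"), so it gives the same answer no matter when it runs:

```python
import datetime

def make_session_cookie(now):
    # Hypothetical helper: the only requirement under test is that the
    # cookie hasn't expired, so "now plus one year" is a fine expiry.
    return {"name": "session", "expires": now + datetime.timedelta(days=365)}

def test_cookie_not_expired():
    # Fixed start date, so the assertion below is relative to it
    # rather than to the wall clock of whatever machine runs the test.
    now = datetime.datetime(2024, 1, 1)
    cookie = make_session_cookie(now)
    assert cookie["expires"] > now
```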
Not a good idea for CI tests. It will just make things flaky and gum up your PR/release process. Randomness or any form of nondeterminism should be in a different set of fuzzing tests (if you must use an RNG, a deterministic one is fine for CI).
Only if it becomes obvious why it is flaky. If it's just sometimes broken but really hard to reproduce then it just gets piled on to the background level of flakiness and never gets fixed.
To get around this, I have it log the relevant inputs, so it can be reproduced.
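In Python, that pattern might look like the following sketch (the `dedupe_sorted` function under test and the data shape are hypothetical):

```python
import random

def dedupe_sorted(items):
    # Hypothetical function under test.
    return sorted(set(items))

def test_dedupe_sorted():
    data = [random.randrange(100) for _ in range(20)]  # nondeterministic input
    try:
        result = dedupe_sorted(data)
        assert len(result) == len(set(data))
        assert all(a < b for a, b in zip(result, result[1:]))
    except AssertionError:
        # Log the exact inputs so a rare failure can be replayed
        # by feeding the same list back into dedupe_sorted().
        print(f"failing input: {data!r}")
        raise
```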
The whole concept of allowing a flaky unit test to exist is wild and dangerous to me. It creates a culture of ignoring real failures in what should be deterministic code.
Well, if people can't reproduce the failures, people won't fix them.
So, yes, logging the inputs is extremely important. So is minimizing any IO dependency in your tests.
But then that runs against another important rule, that integration tests should test the entire system, IO included. So, your error handling must always log very clearly the cause of any IO error it finds.
I remember having a flaky test with random number generation a few years ago - it failed very rarely (like once every few weeks) and when I finally got to fixing it, it was an actual issue (an off by one error).
Generate fuzz tests using random values with a fixed seed, sure, but using random values in tests that run on CI seems like a recipe for hard-to-reproduce flaky builds unless you have really good logging.
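One common shape for this in Python: pick a seed once, log it, and let an environment variable override it when replaying a CI failure locally. The `TEST_SEED` variable name is just a convention assumed here, not a standard:

```python
import os
import random

# Replay a CI failure by exporting TEST_SEED=<seed from the CI log>;
# otherwise pick a fresh seed and log it so the run can be reproduced.
SEED = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
print(f"random seed: {SEED}")

rng = random.Random(SEED)  # private RNG, independent of global state

def test_reversing_twice_is_identity():
    data = [rng.randrange(1000) for _ in range(50)]
    assert list(reversed(list(reversed(data)))) == data
```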
If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.
humans are very good at overlooking edge cases, off by one errors etc.
so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases
you could say there is an "adding more random -> cost" ladder, like
- no randomness, no cost, nothing gained
- a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)
- (limited) prop testing, high cost (the test runs multiple times with many random values), decent chance to find incorrect edge cases (<- can be barely doable in unit tests, if limited enough; often feature-gated as too expensive)
- (full) prop testing/fuzzing, very very high cost, very high chance incorrect edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)
I've learnt that if a test only fails sometimes, it can take a long time for somebody to actually investigate the cause; in the meantime it's written off as just another flaky test. If there really is a bug, it will probably surface in production sooner than it gets fixed.
Flaky tests are a very strong signal of a bug, somewhere. Problem is it's not always easy to tell if the bug's in the test or in the code under test. The developer who would rather re-run the test to make it pass than investigate probably thinks it's the test which is buggy.
people often take flaky tests way less seriously than they should
I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically, this was also not related to any random test data but more to load/race-condition things, which failed when too many tests that created full separate tenants for isolation happened to run at the same time).
And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is, if you can, because too many libs/tools/etc. do not allow that). At least for "merge approval" runs. That many CI systems suck badly the moment your project and team size isn't around the size of a toy project doesn't help either.
Can't one get randomness and determinism at the same time? Randomly generate the data, but do so when building the test, not when running the test. This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook. Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.
Most test frameworks I have seen that support non-determinism in some way print the random seed at the start of the run, and let you specify the seed when you run the tests yourself. It's a good practice for precisely the reasons you wrote.
Absolutely for things like (pseudo) random-number streams.
Some tests can be at the mercy of details that are hard to control, e.g. thread scheduling, thermal-based CPU throttling, or memory pressure from other activity on the system.
There's another good reason that hasn't been detailed in the comments so far: expressing intent.
A test should communicate its reason for testing the subject, and when an input is generated or random, it clearly communicates that this test doesn't care about the specific _value_ of that input, it's focussed on something else.
This has other beneficial effects on test suites, especially as they change over the lifetime of their subjects:
* keeping test data isolated, avoiding coupling across tests
* avoiding magic strings
* and as mentioned in this thread, any "flakiness" is probably a signal of an edge-case that should be handled deterministically
and
* it's more fun [1]
If it was math_multiply(), then adding the jitter would fail - that would have to be multiplied in.
Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
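Without reaching for a framework, the "this relation must hold true" idea can be sketched by hand in Python. The `math_multiply` function and `check_property` helper are made up for illustration, and the seed is fixed so CI stays deterministic:

```python
import random

def math_multiply(x, y):
    # Stand-in for the function under test.
    return x * y

def check_property(prop, cases=200, seed=0):
    # Minimal hand-rolled property test: many random inputs,
    # one relation that must hold for all of them.
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    for _ in range(cases):
        x, y = rng.randint(-10**6, 10**6), rng.randint(-10**6, 10**6)
        assert prop(x, y), f"property failed for x={x}, y={y}"

# Scaling one factor must scale the product (the "jitter" has to be multiplied in).
check_property(lambda x, y: math_multiply(2 * x, y) == 2 * math_multiply(x, y))
# The relation is symmetric in its arguments.
check_property(lambda x, y: math_multiply(x, y) == math_multiply(y, x))
```

Frameworks like Hypothesis do the same thing with better input generation and automatic shrinking of failing cases.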
Damn, must be why only white hair is growing on my head now.
>Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
So the concept of random is still there but expressed differently ? (= Am I partially right ?)
Yes, the randomness is still there, just less manually specified by the developer. That said, I haven't actually used it myself, only seen stuff on it before, so I had the wrong term: it's "property-based testing" you want to look for.
Randomness is useful if you expect your code to do the correct thing with some probability. You test lots of different samples and if they fail more than you expect then you should review the code. You wouldn't test dynamic random samples of add(x, y) because you wouldn't expect it to always return 3, but in this case it wouldn't hurt.
Are you joking? This is the kind of thing that leads to flaky tests. I was always counseled against the use of randomness in my tests, unless we're talking generative testing like quickcheck.
or, maybe, there is something hugely wrong with your code, review pipeline, or tests if adding randomness to unit-test values makes your tests flaky, and this is a good way to find it
or, maybe, it signals insufficient thought about the boundary conditions that should or shouldn't trigger test failures.
doing random things to hopefully get a failure is fine if there's an actual purpose to it, but putting random values all over the place in the hopes it reveals a problem in your CI pipeline or something seems like a real weak reason to do it.
What is today right now in Australia? How about where you live? You have not thought enough about what you’re saying and are probably not aware of all the weird time issues we have in our world.
How much of the total volume of the device was the case/housing?
I suppose the glue-everything approach is partly due to the desire of making a device very thin. There's no room for strong, load-bearing outer case, the internals are load-bearing.
I suspect manufacturing has something to do with gluing too. Afaik screws are expensive compared with glue, and their assembly involves slow humans or expensive robots.
It's been a long time, but the gasket itself was probably a millimetre or two thick, squeezed extremely tightly by the screws in the battery cover. It ran on AA or AAA batteries, and they took up about half or a third of the depth.
Honestly I'd expect that to be SIGNIFICANTLY easier to waterproof than a laundry machine. Partly because laundry is sometimes done warm, and warm softens materials (like gaskets), but mostly because laundry has surfactants that considerably reduce surface tension, making it far easier to slip past gaps.
There is a good reason waterproofing claims are specific about the kind of liquid (usually just fresh or salt water, usually without significant movement (i.e. jets, like you get in a shower)).
Samsung still make the rugged Xcover range which has both replaceable batteries and waterproofing. And 3.5mm jacks too.
These devices are mostly sold in enterprise environments (eg field use, factories) and as such get a lot of wear and tear. But they hold up well. They're not ultra rugged but a good compromise. We use tons of them in our factories, we replaced DECT handheld phones with the Xcovers loaded with ms teams. Not an ideal setup (teams for mobile kinda sucks) but at least this way they can easily communicate with people in the offices.
Yes, but IP67 is not nearly as water resistant as IP68, which all modern phones are for the most part.
I'm not knowledgeable enough to know if IP68 could be achieved in a phone without glue. There's no clamping mechanism for the backs, they're just press-fit with small clips.
From a mechanical perspective, IP68 is perfectly achievable, and watches have been achieving it for a long time. However... with what sort of margins for the manufacturer, and at what sort of cost for the consumer? Additionally, a lot of them require pretty careful adherence to instructions, torques, and tolerances to achieve the same waterproof rating.
Personally I’d be very happy to have a phone that says, if you swap the battery you might lose the ip68 rating unless you follow the resealing process within tolerances.
My phone (a Furiphone FLX1, which is kind of a variant of the Gigaset GX6) has a removable back with a gasket and is IP68. One of their promotional videos had them change the battery on video, then boot the phone and unlock it underwater.
Who cares though? Sealing the battery in makes the device less drop resistant. I somehow managed to avoid water damage to my phones for decades, while none of my phones managed to avoid being dropped in a way that would most likely be fatal to them if their batteries were sealed in - and yet most of them survived to this day.
A phone needs to handle some rain droplets falling on its screen, anything more than that is a gimmick that's not worth the downsides it comes with.
> A phone needs to handle some rain droplets falling on its screen, anything more than that is a gimmick that's not worth the downsides it comes with.
I submerge my phone as a matter of normal use because I can. I take it into pools and hot tubs, and I clean it in the sink -- I personally wouldn't trade that for a battery door.
No, it doesn't require a battery door, even for phones that don't meet the exception you mentioned.
Over a decade ago, I replaced a phone screen over a few hours, involving a couple dozen screws. During that, I had to remove the battery. (Replacing only the battery would have been easier.) I'm a layman, and all the screws were Phillips. That's sufficient to be replaceable.
I’ve done it and seen it many times. People throw their phones to each other in pools and the beach for photos all the time. One of the best things about modern phones is the waterproofing. IP68 level is amazing.
> A phone needs to handle some rain droplets falling on its screen, anything more than that is a gimmick that's not worth the downsides it comes with
It’s actually the opposite - a user replacement battery is a gimmick not worth the downsides.
Apple know this, and they know their customers a lot better than you do.
Your position is niche at best, anachronistic really.
Apple has vested interest in getting their customers to switch to a new phone often, and the average time to upgrade is absurdly low these days (less than 4 years), which is greatly influenced by battery wear and fall damage, so I don't think this argument is very persuasive.
It's not really the old kind of replace-ability, though. The only requirement is that you should be able to change it with commercially available tools.
A lot of normal people daily-use their phones near water and even jump into pools with them. I would bet you $100 that if you asked people "replaceable battery or waterproofing to the same level you have it now", ~nobody will pick the former.
Not once in my life have I thought "I would like to jump into this pool with my phone", while I did sometimes replace the battery on the go, which actually made my life easier. It's an absurd take. If anything, I'd be more concerned about beverage spills, but these are still easier to avoid than drops.
Well, you are the exception. Especially if you live in a hot area where a lot of people have backyard pools. Being in and out of the water constantly is very normal in Florida, for example.
Most of the suburban kids in Houston had wristband attachments for their phones in the pool, or would be in a floaty taking stupid pics of each other as kids do. Trying to keep a modern phone dry takes away a lot of utility.
Not a lot of people live in hot areas with plenty of backyard pools, but I can understand that waterproof phones could become more popular there than in the rest of the world based on this property alone (right now they're popular because there's not much choice).
Those people are doing a very stupid thing. I don't think that the world should be ordered around "let's make it so people can do stupid things without consequence".
Those people are the public buying the phones. Companies make phones that more people will buy. Turns out your desire for a bulky phone with a replaceable battery is less common than their desire for a phone that does not get destroyed when dropped into a pool.
Maybe as a society it's better for people to have replacement insurance than to have sealed batteries that make phones so disposable. I wonder if we've defined IP68 as a "must have" without considering the alternatives. I'm thinking the percentage of people who actually "use" IP68 over the course of their phone is pretty small...yet that "requirement" drives a huge design choice.
I suspect it's a moot point. Makers have every incentive to drive replacement cycles.
I replaced my phone because of the battery life, and I would have replaced the battery if it would have been easy, to offer a counter anecdote.
I had to make the choice of getting another phone (used, in great condition, as I do) or paying half of what I'd paid for it to get the battery replaced, knowing the phone would still be heavily used and more likely to fail in other ways because of that use.
If labor cost and decreased reliability weren't factors, swapping the battery would have been the choice.
Now the question is: are there more people like me or more people who need a sealed, hard to repair phone? I don't know but if I did I'd accept keeping the current situation.
Spills and drops were traditionally the most common causes of mobile device insurance claims. We've only seen that change for phones in recent years because of their IP ratings.
While manufacturers do have an incentive to get people to buy new phones, many of them with first party insurance do have an incentive not to pay out as many claims.
Japan only, but KDDI/Kyocera never stopped making IP-rated phones with removable batteries. The TORQUE G07 (2026) is IP65/68/69 rated, with a coin-key-locked removable back cover.
It also officially supports submersion in seawater, as well as cleaning with soapy water. Most glued phones support neither.
It's just a consumer phone sold through KDDI retail channels. Not a B2B thing. And it exists because enough consumers in Japan buy one.
The original claim in this tree is that waterproof phones with removable backs are somehow impossible and glued-shut designs are somehow superior. That's total BS, so I posted a counterexample. Torque phones being rugged in addition to being waterproof, unlike iPhones that are merely purified-water-proof, has nothing to do with the feasibility of one-upping them with removable backs and rubber gaskets.
Not really comparable perhaps, but I had an Ericsson T18s or similar that went through a full 60C cotton wash cycle (being on at the start of the wash) and was fine after drying off.
The thing is - if the battery had been destroyed, that could have been replaced...
I was wading through water with a 3310 in my pocket in 2006. Battery was fine and it worked after it was dried. There was a problem with the keyboard though but that was a cheap swap. And this was a phone without any water resistance.
He also ate nothing but McDonald's - three meals a day, even if he was already "full". In one scene, he literally vomits, then continues eating the food.
Literally zero people do what Spurlock did in that film.
You have clearly never met fresh-out-of-basic or back-from-deployment sailors, then.
They build used car dealerships and strip clubs within walking distance of bases. Sailors blow thousands in an evening at the club, and then drive home in $75k vehicles purchased at predatory interest rates.
Despite significant, potentially life-changing enlistment and re-enlistment bonuses, housing stipends and more - many (or most) enlistees leave the service in debt or near penniless.
You're exaggerating or very out of date. They train people specifically against both those now. The official slides stop just short of "no the stripper does not actually like you" but you'd have a hard time making it that far without having that beat into your skull. The number who don't listen is dwindling. The interest rates those guys are getting on vehicles they don't need are no worse than anyone else off the street's would be these days.
The problem is that our public health care system could cover the entire country at no additional cost…if our health care spending per capita was inline with other nations with better health outcomes.
That's a nice bit of trivia but it doesn't really affect the comment you're replying to. It's still food, full of flavor and calories, and able to be used by a home cook (by making a pie).
If you researched this regulation even a little, you'd see the crops are rarely destroyed. They are far more often exported, diverted to secondary markets, donated, or carried over into next season's stock.
It's interesting to me how people are quick to comment about things they know nothing about...
> It's still food, full of flavor and calories
Tart cherries have about 1-2 calories per cherry, and do not taste good without a lot of sugar. That's why they are used in commercial processing, not generally sold as a fruit in grocery stores.
Coming back later, I realized earlier I looked up the calories but I didn't compare them to anything else. So while tart cherries "only" have 50 calories per 100g, sweet cherries are up around 60, not very different. An apple also has about 50-60 per 100g. So does an orange.
Fruit isn't super dense in calories to begin with because it has so much water, but it's still a meaningful amount, and tart cherries are pretty standard among fruit.
So we're moving goalposts? Where did I say people in need don't need any fruit?
People in need don't need single/one calorie tart cherries that are rarely eaten on their own. Consuming tart cherries typically involves processing that is more costly in terms of ingredients and time than simply using the pre-processed versions. Tart cherries are sometimes donated and are rarely destroyed.
Which argument will you come up with next?
You've bounced all over the place in this thread. Just let it rest...
> So we're moving goalposts? Where did I say people in need don't need any fruit?
You gave calories as a reason people don't need this fruit.
But that logic would apply to almost any fruit.
So I said it would be bad to say people in need don't need fruit, while pointing out that contrast. I'm not accusing you of thinking that, I'm accusing you of using flawed logic.
> People in need don't need single/one calorie tart cherries
There's plenty of calories in a reasonable serving, and again that argument would apply to almost any fruit. It's like complaining about a single blueberry having too few calories.
> are rarely eaten on their own. Consuming tart cherries typically involves processing that is more costly in terms of ingredients and time than simply using the pre-processed versions.
They can cook with them. Lots of things are rarely eaten on their own and need to be processed, costing more ingredients and time than the pre-processed form. This includes flour!
> Tart cherries are sometimes donated and are rarely destroyed.
This is true and has nothing to do with my point.
> Which argument will you come up with next?
If you bring up a new reason to imply that donating tart cherries is unreasonable (even though it does happen!), I might disagree with that reason. Otherwise I have had one single argument and it hasn't changed: Donating tart cherries is a good idea.
I don't know why you're so fixated on whether people eat something directly. That doesn't affect what all2 was saying or what voxl was saying or what anyone else has been saying, but you keep acting like it does.
So you understood the crop we're discussing is rarely destroyed - and more often donated, diverted to secondary markets (ie. sold in grocery stores), or exported - yet still felt compelled to say a home cook could use them?
What was even the point of your snarky comment then?
> So you understood the crop we're discussing is rarely destroyed - and more often donated, diverted to secondary markets (ie. sold in grocery stores), or exported - yet still felt compelled to say a home cook could use them?
In the context of someone talking about home cooks using them, and you acting like "People do not eat tart cherries directly." is a counterargument, yes I felt compelled to correct that.
The incorrect thing you were implying had nothing to do with how often they're actually destroyed. So why would that stop me?
People do not eat tart cherries directly. The overwhelming majority of people will never process them into something edible either.
"People in need" are not going to spend time and money processing tart cherries into juice concentrate or pie filling... especially when a can of either is cheaper than the raw ingredients to make your own.
Your point is ridiculous, absurd and pedantic beyond any reasonable purpose.
Most of what you are saying is correct, but I feel the need to respond to your far too many repeated assertions that "People do not eat tart cherries directly": Except for when they do!
I grow several varieties of sour cherries in my yard, and frequently use them whole and without further processing. Usually I use them in a recipe like this: https://en.wikipedia.org/wiki/Clafoutis. Sometimes I pit them first, sometimes I don't. Sometimes I'll even happily snack on them raw.
No, like most small fruit, you aren't going to eat them because you are desperate for calories. But they actually aren't any harder to prepare or use than lots of other tasty things that people traditionally grow.
Tart cherries are supply-controlled because they are processed into other goods, like pie filling, and can be stored for long duration (multiple seasons). The supply-control regulation is designed to prevent a surplus crop from depressing the market to the point where it's no longer viable to grow tart cherries - reducing future supply, ie. the regulation is designed to provide a consistent, stable supply.
Surplus tart cherry crops are rarely destroyed. In the event of a surplus, they are often exported, diverted to secondary markets, donated, or carried over into next season's stock.
Yup. Food regulation in the US exists exactly to make sure the shelves stay stocked no matter what. Without such regulations, you'd see random items being unavailable, along with price shocks.
One thing people often don't realize is that food takes time to grow. It requires long-term thinking to make sure supplies are sufficient. Left to their own devices, farmers will often chase after last season's cash crop. That is bad. It's far better for farmers to stick to more predictable growing and for more dedicated incentives to be issued.
Did you intend to be so insulting, condescending, and dismissive? "Left to their own devices, farmers will often chase after last season's cash crop. That is bad. It's far better for farmers to stick to more predictable growing and for more dedicated incentives to be issued."
I grew up on a farm and lived around farmers. This is my lived experience.
I saw first-hand farmers tear up a barley field to plant wheat when the price got high enough.
Farming is a game of speculation. Planting last year's cash crop can be a successful strategy, just like buying AAPL today will likely yield good returns. Yet it's a very hard market to predict, with a lot of luck involved. Maybe only a few chase the cash crop and you win big. Maybe everyone does and you lose. Maybe there's a natural or political disaster that pumps up your crop.
There was nothing insulting, condescending, or dismissive about my comment. Highly speculative markets, like food, have booms and busts that can swing wildly. That's bad for something like food. The free market does not work for crops.
I'd argue that this should be refined to something like "farmers that speculate heavily struggle in an under-regulated free market".
Financial stability in highly volatile markets depends on appropriate planning, saving, and distribution. I say this from the investment perspective, but I would venture to guess that it also applies to hard goods like food-stuffs.
The nature of farming is speculation. It's inescapable. In a completely free market there's no way to guarantee success. Even with the best planning and saving you can't know what the rest of the market is doing and because of the long tail, you are locked in to harvesting and selling your crop no matter what.
You can speculate and be the farmer that always plants and grows wheat. You'll see booms and busts based on that. You can also switch up what you are growing based on your best guess about demand. Both strategies can be successful.
Funnily, one way to make farming less risky is a futures contract. And, if you know anything about futures commodity trading you know they are some of the most risky forms of trading.
It's true though; these regulations exist because speculation and profit-chasing in agriculture are what led to the Dust Bowl and worsened the Great Depression. We really, really don't want a repeat of that.
The amazing thing about people failing to learn from history is that everybody thinks they're too smart to (a) learn history or (b) follow rules enacted to prevent the disasters of yesteryear.
Learning from history is important but it’s much more important to do so in an inclusive manner. In fact, inclusive language is more important than anything else.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it a lot lately. We've already asked you a whole bunch of times not to do this. Eventually we ban accounts that won't stop.
Sure, but I think you should strive to run your community in a way where you’re policing the “I don’t endorse X, but I don’t understand why more people don’t do X” that this comment espouses https://news.ycombinator.com/item?id=47773488
You’re busy policing this while people are out there saying “Destroy their things and firebomb their houses”. So is it just that I made a mistake in my phrasing? Should I just frame the same comments in the style “I would never endorse X, but I don’t understand why others don’t do X”?
I can do that easily without LLM assistance if you like. But if you want your community to be exclusively endorsers of violence against enemies of a chosen tribe, then you should ban me so you can keep your little tribe of Ted Kaczynski fanboys.
This is one of those cases where the word "but" negates everything that precedes it.
If you think we haven't been moderating the type of posts you're talking about, you haven't been tracking HN moderation lately*—which is fine, why should/would you? But in that case you shouldn't be taking snarky swipes at the mods based on galactically mistaken assumptions.
More importantly, you shouldn't be pointing fingers at others instead of taking responsibility for your own bad behavior. Even if you were right in what you said, it wouldn't justify your breaking the rules. Moreover you have a longstanding pattern of doing this and we've been cutting you slack for years.
Okay, admittedly when I read these things I lose my mind and become a viral host for the nonsense because I feel the need to retaliate against what is clearly some kind of Blue Tribe mobbery. Clearly it’s a mistaken belief that you allow targeted mob-forming on your platform. Actually you’re just drowning under the load. Fine. What I can edit out I shall and I’ll try to keep in mind that you’re trying and failing, and doing this is just participating in the crap.
I’ll follow your comments for a mod log to see and I’ll refrain.
I do think it would justify breaking any rules that allow targeted mob-forming but since that’s not happening I’m happy to stand off.
I'd argue Intel fell in large part because of its own complacency and incompetence. If Intel had taken AMD seriously, they'd probably still be a serious competitor today.
CUDA was built during the time AMD was focusing every resource on becoming competitive in the CPU market again. Today they dominate the CPU industry - but CUDA was first to market and therefore there's a ton of inertia behind it. Even if ROCm gets very good, it'll still struggle to overcome the vast amount of support (read "moat") CUDA enjoys.
True. After all, Nvidia didn't build TensorFlow or PyTorch. That stuff was bound to be built on the first somewhat viable platform. ROCm is probably far ahead of where CUDA was back then, but the goal has moved.