
Parts of it once were, yes.

Night shift seems to have a very strong causal effect on my sleep cycles. Up until about ten years ago I was a night owl, rarely falling asleep before midnight and rarely waking up before 8. Then I started getting serious about light hygiene and using night shift, and now I'm a serious day person, rarely staying awake after 11 and rarely waking up after 7. But the real clincher is that when I travel I don't change the time zone on my computer (because it screws up my calendar), yet my sleep cycle continues to track my home time zone for a very long time. I live in California, but at the moment I'm in Hawaii. I've been here three weeks so far. At home I'd fall asleep around 11 and wake up around 7, but here I'm getting sleepy at 9 and waking up at 5.

My wife, on the other hand, is a hard-core night owl even with night shift. So apparently there is a lot of individual variation.

This article has inspired me to do a control experiment by switching night shift off. Check back here in a week or so for the results.


I remember when I found Flux (the third-party predecessor to night shift) sometime in 2013. It worked within a week: I'd been staying up until 3am for most of the year, and I started going to bed at midnight.

> light hygiene

Awesome, hadn’t come across this term before.

You might appreciate the concept of chronotypes.

https://en.wikipedia.org/wiki/Chronotype

The DOAC podcast recently hosted Dr. Michael Breus on the same topic.

Apple Podcast link, or conjure your own:

https://podcasts.apple.com/au/podcast/the-diary-of-a-ceo-wit...


Bear in mind that chronotypes, as stated in the wiki, only vary by about 2-3 hours from each other. This is just to say that there is no biologically nocturnal person; we are all diurnal mammals, after all.

Yep, when you expose night owls to only natural light, they go to sleep earlier, like early birds.

Do they?

I'm saying this off the top of my head, but I would guess that in the case of owls they have evolved to be nocturnal. So their physiology is probably synchronized to natural light, but in a way that keeps their activity nocturnal.

Hamsters are also nocturnal; you can force them to be diurnal in lab settings, but their physiology ends up in a constant state of jet lag.


Yeah, like anything, proponents like the guy in the podcast I referenced probably overhype the importance / impact.

> Night shift seems to have a very strong causal effect on my sleep cycles.

> light hygiene and using night shift

The OP article is primarily about separating the variables you lumped together.


>inspired me to do a control experiment

Delightful, see ya the 27th!


This analysis is not quite fair. It takes into account locality (i.e. the speed of light) when designing UUID schemes but not when computing the odds of a collision. Collisions only matter if the colliding UUIDs actually come into causal contact with each other after being generated. So just as you have to take locality into account when designing UUID trees, you also have to take it into account when computing the odds of an actual local collision. A naive application of the birthday paradox doesn't work because it ignores locality, so a fair calculation of the required size of a random UUID is going to come out a lot smaller than the ~800 bits the article arrives at. I haven't done the math, but I'd be surprised if the actual answer is more than 256 bits.
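To put rough numbers on that last claim, here is a back-of-the-envelope birthday bound. The 1e30 figure for how many IDs ever come into causal contact with each other is purely an illustrative guess on my part, not something from the article:

    // p(collision) ≈ n^2 / 2^(b+1) for n random b-bit IDs (only valid while the result is << 1).
    // Computed in log2 space so astronomically large counts don't overflow.
    function collisionProbability(nIds, bits) {
      const log2p = 2 * Math.log2(nIds) - (bits + 1);
      return Math.pow(2, log2p);
    }

    collisionProbability(1e30, 256);  // ≈ 4e-18: effectively never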

(Gotta say here that I love HN. It's one of the very few places where a comment that geeky and pedantic can nonetheless be on point. :-)


Reminds me of a time many years ago when I received a whole case of Intel NICs all with the same MAC address.

It was an interesting couple of days before we figured it out.


How does that happen? Was it an OEM bulk kind of deal where you were expected to write a new MAC for each NIC when deploying them?

There's a fun hypothesis I've read about somewhere; it goes something like this:

As the universe expands, the gap between galaxies widens until they start "disappearing", since no information can travel between them anymore. Therefore, if we assume that intelligent lifeforms exist out there, it is likely that they will slowly converge on the place in the universe with the highest mass density for survival. IIRC we even know approximately where this is.

This means a sort of "grand meeting of alien advanced cultures" before the heat death. Which in turn also means that previously uncollided UUIDs may start to collide.

Those damned Vogons thrashing all our stats with their gazillion documents. Why do they have a UUID for each xml tag??


It is counterintuitive, but information can still travel between places so distant that the expansion between them is faster than the speed of light. It's just extremely slow (so I still vote for going to the party at the highest-density place).

We do see light from galaxies that are receding from us faster than c. At first the photons headed in our direction are moving away from us, but as the universe expands, at some point they find themselves in a region of space that is no longer receding faster than c, and they start approaching.


That's not exactly it. Light gets redshifted instead of slowing down, because light will be measured to be the same speed in all frames of reference. So even though we can't actually observe it yet, light traveling towards us still moves at c.

It's a different story entirely for matter. Causal and reachable are two different things.

Regardless, such extreme redshifting would make communication virtually impossible - but maybe the folks at Blargon 5 have that figured out.


I think I missed something: how do galaxies getting further away (divergence) imply that intelligent species will converge anywhere? It isn’t like one galaxy getting out of range of another on the other side of the universe is going to affect things in a meaningful way…

A galaxy has enough resources to be self-reliant, there’s no need for a species to escape one that is getting too far away from another one.


You'll run out of resources eventually. Moving to the place with the most mass gives you the most time before you run out.

Yes that's the idea. The expansion simply means that the window of migration will close. Once it's closed, your galaxy is cut off and will run out of fuel sooner than the high-density area.

Well, eventually there are no galaxies, just a bunch of cosmic rays. Some clusters of matter will last longer.

I think for this to work, either life would have to be plentiful near the end, or you'd need FTL travel.


Social aspect. There is no need but it's more fun to spend the end of the Universe with other intelligences than each in its own place.

I think I sense a strange Battle Royale type game…

Assuming these are advanced enough aliens, they'll also be bringing with them all the mass they can, to accentuate the effect? I'm imagining things like Niven's ringworld star propulsion.

You must consider both time and locality.

From now until protons decay and matter does not exist anymore is only 10^56 nanoseconds.
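For scale, taking that figure at face value:

    10^56 ns = 10^47 s ≈ 3 x 10^39 years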


If protons decay. There isn't really any reason to believe they're not stable.

And recent DESI data suggest that dark energy is not constant and that the universe will experience a big crunch at a little more than double its current age, for a total lifespan of about 33 billion years, so there's no need to get wild with the orders of magnitude of years into the future. The infinite expansion to heat death over 10^100 years is looking less likely; 10^11 years should be plenty.

https://www.sciencedaily.com/releases/2026/02/260215225537.h...


It's not obvious to me that this makes things better rather than worse. Sure, the time bound helps, but in the run-up to a crunch won't we get vastly more devices in causal range, at an ever-increasing rate?

Who’s there doing the counting? I would assume the temperatures at those extremes won’t support life in its known forms.

Perhaps some Adamsesque (as in Douglas Adams) creature whose sole purpose is to collect all unique UUIDs and give them names.


The run-up to the crunch is a looong time, lots of which is probably very habitable. In 5 billion years life can arise from scratch, become conscious, and exterminate itself.

Protons can decay because the distinction between matter and energy isn't permanent.

Two quarks inside the proton interact via a massive messenger particle. This exchange flips their identity, turning the proton into a positron and a neutral pion. The pion then immediately converts into gamma rays.

Proton decayed!
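Schematically, the canonical GUT channel (via a heavy X boson; hypothetical, never observed):

    p -> e+ + pi0, followed by pi0 -> 2 gamma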


This destroys a baryon, an operation which is prohibited by the standard model.

Baryon number is an accidental symmetry, not a fundamental one. Unlike charge or color, it is not protected by a gauge principle and is just a consequence of the field content and renormalizability at low energies.

The standard model is almost certainly an effective field theory and a low-energy approximation of a more comprehensive framework. In any ultraviolet completion, such as a GUT, quarks and leptons inhabit the same multiplets. At these scales, the distinction between matter types blurs, and the heavy gauge bosons provide the exact mediation mechanism described to bypass the baryon barrier.

Furthermore, the existence of the universe is an empirical mandate for baryon-violation. If baryon number were a strict, immutable law, the Sakharov conditions could not be met, and the primordial matter-antimatter symmetry would have resulted in a total annihilation. Our existence is proof that baryon number is not conserved. Even within the current framework, non-perturbative effects like sphalerons demonstrate that the Standard Model vacuum itself does not strictly forbid the destruction of baryons.


The sum of the conserved quantities, e.g. color charge, electric charge and spin, is null for the set of 8 particles formed by the 3 u quarks, the 3 d quarks, the electron and the neutrino, i.e. for the components of a proton plus a neutron plus an electron plus a neutrino.
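(A quick sanity check on the electric charge, added for concreteness: the u quark carries +2/3 and the d quark -1/3, so

    3(+2/3) + 3(-1/3) + (-1) + 0 = 0

The color charges cancel as well, since each nucleon contains one quark of each color.)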

This is the only case of a null sum for these quantities where no antiparticles are involved. The sum is also null for 2 particles where one is the antiparticle of the other, allowing their generation or annihilation. It is also null for the 4 particles that take part in any weak interaction, like the decay of a neutron into a proton, which involves a u quark, a d antiquark, an electron and an antineutrino; this is what allows the transmutations between elementary particles that cannot happen just through generation and annihilation of particle-antiparticle pairs.

Thus generation and annihilation of groups of such 8 particles are not forbidden by the known laws. The Big Bang model is based on equal quantities of these 8 particles at the beginning, which is consistent with their simultaneous generation at the origin.

On the other hand, the annihilation of such a group of 8 particles, which would lead to the disappearance of some matter, appears to be an extraordinarily improbable event.

For annihilation, all 8 particles would have to come simultaneously within a distance of each other much smaller than the diameter of an atomic nucleus, inside which quarks move at very high speeds, not much less than the speed of light, so they are never close to each other.

The probability of a proton colliding simultaneously with a neutron, with an electron and with a neutrino, while at the same time the 6 quarks composing the nucleons also gather at the same internal spot, seems so low that such an event is extremely unlikely to ever have happened in the entire Universe since its beginning.


Protons (and mass and energy) could also potentially be created. If this happens, the heat death could be avoided.

Conservation of mass and energy is an empirical observation; there is no theoretical basis for it. We just don't know of any process we can implement that violates it, but that doesn't mean one doesn't exist.


All of physics is "just" based on empirical observation. It's still a pretty good tool for prediction.

Conservation laws result from continuous symmetries in the laws of physics, as proven by Noether's theorem.

Time translation symmetry implies energy conservation, but time translation symmetry is only an empirical observation on a local scale and has not been shown to be true on a global universe scale.

That's such an odd way to use units. Why would you do 10^56 * 10^-9 seconds?

This was my thought. Nanoseconds are an eternity. You want to be using Planck units for your worst-case analysis.

Planck units are a mathematical convenience, not a physical limit. For instance, the Planck mass is on the order of an eyelash or grain of sand.

Planck units are physical limits. The Planck mass is the limit of the mass of an elementary particle before it would form a black hole.

"Plank units are not physical limits on reality itself" is what I should have said. We can obviously have larger or smaller masses.

The plank time is a limit on a measurement process, not the smallest unit of time.


> Planck units are not physical limits on reality itself

We don't actually know that. They might be. Planck units are what happens when GR meets QM and we just don't know yet what happens there.

But as a heuristic, they probably put pretty good bounds on what we can reasonably expect to be technologically achievable before humans go extinct.


Nope. What you say is a myth.

The Planck mass is just the square root of the product of the natural units of angular momentum and velocity divided by the Newtonian constant of gravitation.
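In symbols, with the standard values given only for scale:

    m_P = sqrt(hbar * c / G) ≈ 2.18 x 10^-8 kg ≈ 1.22 x 10^19 GeV/c^2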

This Planck mass expresses a constant related to the conversion of the Newtonian constant of gravitation from the conventional system of units to a natural system of units, which is why it appears, instead of the classic Newtonian constant, inside a much more complex expression that computes the Chandrasekhar limit for white dwarfs.

The Planck mass has absolutely no physical meaning (other than expressing, in a different system of units, a constant equivalent to the Newtonian constant of gravitation), unlike some other true universal constants, like the so-called fine-structure constant (Sommerfeld's constant), which is the ratio between the speed of an electron revolving around a nucleus of infinite mass in the state with the lowest total energy and the speed of light (i.e. that electron speed measured in natural units). The fine-structure constant is a measure of the intensity of the electromagnetic interaction, like the Planck mass or the Newtonian constant of gravitation are measures of the intensity of the gravitational interaction.

The so-called "Planck units" have weird values because they are derived from the Newtonian constant of gravitation, which is extremely small. Planck has proposed them in 1899, immediately after computing for the first time what is now called as Planck's constant.

He realized that Planck's constant provides an additional value that would be suitable for a system of natural fundamental units, but his proposal was a complete failure because he did not understand the requirements for a system of fundamental units. He started from the proposals made by Maxwell a quarter of a century before him, but of the 2 alternatives Maxwell had proposed for defining a unit of mass, Planck chose the bad one: using the Newtonian constant of gravitation.

Any system of fundamental units where the Newtonian constant of gravitation is chosen by convention, instead of being measured, is impossible to use in practice. The reason is that this constant can be measured only with great uncertainty. Saying by law that it has a certain value does not make the uncertainty disappear; it just moves it into the values of almost all other physical quantities. In the Planck system of units, no absolute value is known with a precision good enough for modern technology. The only accurate values are relative, i.e. the ratios between 2 physical quantities of the same kind.

The Planck system of units is only good for showing how a system of fundamental units MUST NOT be defined.

Because the Planck units of length and time happen by chance to be very small, beyond the range of any experiments that have ever been done in the most powerful accelerators, absolutely nobody knows what would happen if a physical system could be that small, so claims that some particle could be that small and would collapse into a black hole are more ridiculous than claiming to have seen the Loch Ness Monster.

The Einsteinian theory of gravitation is based on averaging the distribution of matter, so we can be pretty sure that it cannot be valid in the same form at the elementary-particle level, where you must deal with instantaneous particle positions, not with mass averaged over a great region of mostly empty space.

It became possible to use Planck's constant in a system of fundamental units only much later than 1899, i.e. after 1961, when the quantization of magnetic flux was measured experimentally. The next year, in 1962, an even better method was discovered, with the prediction of the Josephson effect. The Josephson effect would have been sufficient to make the standard kilogram unnecessary, but metrology was further simplified by the discovery of the von Klitzing effect in 1980. Despite the fact that this would have been possible much earlier, the legal system of fundamental units has depended on Planck's constant only since 2019, but in a good way, not in the way proposed by Planck.


If you go far beyond nanoseconds, energy becomes a limiting factor. You can only achieve ultra-fast processing if you dedicate vast amounts of matter to heat dissipation and energy generation. Think on a galactic scale: you cannot even have molecular reactions occurring at femtosecond or attosecond speeds constantly and everywhere without overheating everything.

Maybe. It's not clear whether these are fundamental limits or merely technological ones. Reversible (i.e. infinitely efficient) computing is theoretically possible.

Reversible computing is not infinitely efficient, because irreversible operations, e.g. memory erasure, cannot be completely avoided.

However, computing efficiency could be greatly increased by employing reversible operations whenever possible, and there is a good chance this will be done in the future, but the efficiency will remain far from infinite.
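The relevant floor here is the Landauer limit, quoted just for context: erasing one bit dissipates at least

    E_min = k_B * T * ln 2 ≈ 3 x 10^-21 J at T = 300 K

so only the unavoidable erasures, not every operation, have to pay that cost.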


If you have a black hole as an infinite heat sink this helps a great deal.

Black holes have a maximum growth rate.

By infinite I mean a black hole gets COLDER as you add mass and energy to it.

The nanosecond is a natural unit for processors operating around a GHz, as it's roughly the time of a clock cycle.

If a CPU takes 4 cycles to generate a UUID and the CPU runs at 4 GHz it churns out one every nanosecond.


I got a big laugh at the "only" part of that. I do have a sincere question about that number though: isn't time relative? How would we know that number to be true or consistent? My incredibly naive assumption would be that with less matter, time moves faster, sort of accelerating; so as matter "evaporates" the process accelerates and converges on that number (or close to it)?

Times for things like "age of the universe" are usually given as "cosmic time" for this reason. If it's about a specific object (e.g. "how long until a day on Earth lasts 25 hours") it's usually given in "proper time" for that object. Other observers/reference frames may perceive time differently, but in the normal relativistic sense rather than a "it all needs to wind itself back up to be equal in the end" sense.

The local reference frame (which is what matters for proton decay) doesn't see the outside world moving significantly slower or faster depending on how much mass is around it, until you start adding a lot of mass very close around.

If we think of the many worlds interpretation, how many universes will we be making every time we assign a CCUID to something?

> many worlds interpretation

These are only namespaces. Many worlds can have all the same (many) random numbers and they will never conflict with each other!


We don't "make" universes in the MWI. The universal wavefunction evolves to include all reachable quantum states. It's deterministic, because it encompasses all allowed possibilities.

Humpf…

You just had to collapse my wave function here…


That's Copenhagen, not MWI! :P

In that interpretation the total number of worlds does not change.

Proton decay is hypothetical.

So is the need for cosmologically unique IDs. We're having fun.

This is the right critique. The whole article is a fun thought experiment but it massively overestimates the problem by ignoring causality. In practice, UUID collisions only matter within systems that actually talk to each other, and those systems are bounded by light cones. 128 bits is already overkill for anything humans will build in the next thousand years. 256 bits is overkill for anything that could physically exist in this universe.

Ah but if we are considering near-infinitesimal probabilities, we should metagame and consider the very low probability that our understanding of cosmology is flawed and light cones aren’t actually a limiting factor on causal contact.

Sorry, your laptop was produced before FTL was invented, so its MAC address is only recognized in the Milky Way sector.

If we allow FTL information exchange, don't we run into the possibility that the FTL-accessible universe is infinite, so unique IDs are fundamentally not possible? Physics doesn't really do much with this because the observable universe is all that 'exists' in a Russell's Teapot sense.

Would this take into account IDs generated by objects moving at relativistic speeds? It would be a right pain to travel for a year to another planet, arrive 10,000 years late, and have a bunch of id collisions.

I have to confess I have not actually done the math.

Oh no! We should immediately commence work on a new UUID version that addresses this use case.

Maybe the definitions are shifting, but in my experience “on point” is typically an endorsement in the area of “really/precisely good” — so I think what you mean is “on topic” or similar.

Pedantry ftw.



Don't forget that today's observable universe includes places that will never be able to see us because of the expansion of the universe being faster than the speed of light. There's a smaller sphere for the portion of the universe that we can influence.

Hanson's Grabby Aliens actually fits really well here if you're looking for some math to build off of.

The answer is 42. I have it from a good source!

I am in Honolulu right now and the power has gone out twice in the last three days because of high winds.

Yep. TFR was issued three hours ago.

https://tfr.faa.gov/tfr3/?page=detail_6_2233


Um, where is the car? All the images are of (parts of) the interior, and the captioning is bizarre. Ooohh! It has a steering wheel! (And it's an input! Who knew?)

They only shared the interior, not the exterior.

Singapore is on the equator, so winter is not even a well-defined concept there.

> You need a Monad type in order to express that certain things are supposed to happen after other things

This is the kind of explanation that drives me absolutely batshit crazy because it is fundamentally at odds with:

> Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.

So, I think I understand flatmap, assuming that this is what you mean:

https://www.w3schools.com/Jsref/jsref_array_flatmap.asp
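i.e., for a plain JS array, something like this (the values are arbitrary, just for illustration):

    [1, 2, 3].flatMap(x => [x, x * 10]);     // [1, 10, 2, 20, 3, 30]
    [1, 2, 3].map(x => [x, x * 10]).flat();  // same result: map, then flatten one level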

But this has absolutely nothing to do with "certain things are supposed to happen after other things", and CANNOT POSSIBLY have anything to do with that. Flatmap is a purely functional concept, and in the context of things that are purely functional, nothing ever actually happens. That's the whole point of "functional" as a concept. It cleanly separates the result of a computation from the process used to produce that result.

So one of your "simple" explanations must be wrong.


Because you're not used to abstract algebra. JavaScript arrays form a monad with flatmap as the bind operator. There are multiple ways to make a monad with list-like structures.

And you are correct. Monads have nothing to do with sequencing (I mean, no more than any other non-commutative operator -- remember x^2 is not the same as 2^x).

Haskell handles sequencing by reducing to weak head normal form which is controlled by case matching. There is no connection to monads in general. The IO monad uses case matching in its implementation of flatmap to achieve a sensible ordering.

As for JavaScript flat map, a.flatMap(b).flatMap(c) is the same as a.flatMap(function (x) { return b(x).flatMap(c);}).

This is the same as promises: a.then(b).then(c) is the same as a.then(function (x) { return b(x).then(c)}).

Literally everything for which this is true forms a monad and the monad laws apply.
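Here's a concrete check of that law with plain arrays; the particular values are arbitrary and just illustrative:

    const a = [1, 2];
    const b = x => [x, x + 10];
    const c = x => [x, x * 100];

    a.flatMap(b).flatMap(c);          // [1, 100, 11, 1100, 2, 200, 12, 1200]
    a.flatMap(x => b(x).flatMap(c));  // the same array: both groupings agree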


Nota bene: then does not give you a monad, because the implementation of then makes map and flatmap coincide. This is because `then` turns the return value of the callback into a flat promise, even if the callback itself returns a promise.

That is to say, then checks the return value and takes on map or flatmap behavior depending on whether the callback returned a Promise (or other thenable) or not.
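For example, both of these produce a promise of 2, never a nested promise:

    Promise.resolve(1).then(x => x + 1);                   // resolves to 2
    Promise.resolve(1).then(x => Promise.resolve(x + 1));  // also resolves to 2; then flattens automatically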


OK, so the defining characteristic of a monad M is that:

a.M(b).M(c) = a.M(function(x){return b(x).M(c)})

So the next question is: why should I care about that pattern in particular?


You don't? There's nothing special about monads. I don't know why everyone cares so much about them.

There are a few generic transforms you can use to avoid boilerplate. You can reason about your code more easily if your monad follows all the monad laws. But let's be real: most programmers don't know how to reason about their code anyway, so it's a moot point.


I can do you one better and specify that the normal base-2 integer represented by the bits is the number of up-arrows. But as /u/tromp has already pointed out, that is not very interesting.


How do you do that?


Settings -> General -> Software Update -> Beta Updates

It's the same on macOS and iOS, pick "macOS Sequoia Public Beta" or the corresponding release for your device. Apple still pushes security updates for those releases, and I haven't heard of any problems with the kind of minor updates that ship late in a major release's lifecycle, so I think the risk of running this way is low. This kicks the can a year or two down the road, at which point hopefully there are better workarounds.


Thanks!

