The claim of being 7x faster than rsync is very dubious. I would like to know the test conditions for such a result.
I use rsync over SSH every day, and even between computers 7 to 10 years old it saturates a 2.5 Gb/s Ethernet link.
So to need something faster than rsync, and to be able to test it, one must use at least 10 Gb/s Ethernet, and I do not know how fast a CPU must be to reach link speed there.
For 7x faster, one would need at least 25 Gb/s Ethernet, and that assumes the worst case for rsync, i.e. that it is no faster on higher-speed Ethernet than what I see on cheap 2.5 Gb/s Ethernet.
If on a higher-speed Ethernet the link speed cannot be reached because an old CPU is too slow for AES-GCM or for AES with UMAC, then using multiple connections will not improve the speed either. And if the speed is not limited by encryption, then tuning TCP parameters, such as window sizes, would probably have the same effect as opening multiple connections, even with plain rsync over SSH.
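To make the TCP-tuning point concrete: on Linux the relevant knobs are the socket buffer limits. A sketch of a sysctl fragment (the file name and values are illustrative only, to be tuned for the actual bandwidth-delay product of the link):

```
# /etc/sysctl.d/90-highspeed.conf -- example values, not recommendations
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 131072 67108864
net.ipv4.tcp_wmem = 4096 131072 67108864
```

A 64 MB maximum window is enough for roughly 25 Gb/s at a 20 ms RTT (25e9/8 × 0.02 ≈ 62.5 MB), i.e. a single connection can in principle fill even a fast long-haul link once the window is no longer the bottleneck.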
If the transfers go over the Internet, then the speed is throttled by some ISP and is not determined by your computers. There are cases where a small number of connections, e.g. 2 or 3, achieves higher aggregate throughput than 1, but in most cases I have seen, ISPs limit the aggregate throughput of the traffic going to one IP address, so opening more connections gives you the same throughput as fewer.
> I use rsync over SSH every day, and even between computers 7 to 10 years old it saturates a 2.5 Gb/s Ethernet link.
What are you rsyncing? Is it Maildirs for 5000 users? Or a multi-TB music and movie archive? The former might benefit greatly if the filesystem and its flash backing store are bottlenecked on metadata lookups, not bandwidth. The latter, not so much.
I too would like to know the test conditions. This is probably one of those tools that is lovely for the right use case, useless for the wrong one.
Anecdote: I have rsync’d maildirs, and I recall achieving a ~7x performance improvement by combining rsync with GNU parallel (it is trivial to fan out one job per maildir).
Except for extremely cheap embedded computers, which save a few cents by omitting the battery and the quartz resonator or oscillator for a real-time clock (a timer suitable for use as an RTC exists even in the cheapest microcontrollers), NTP is not a major security risk, because when used correctly it should only compensate for the drift of the internal RTC, which should be trusted more than NTP for the coarse time value.
Any unexplained discrepancy between the internal RTC and what NTP reports should be interpreted as possible spoofing of the NTP packets, and such suspect values should not be used to update the internal clock. If the accumulated NTP adjustments to the internal clock exceed a few seconds per day, the sysadmin must be alerted, as that can happen only when the hardware is defective or something is wrong with NTP.
Unfortunately, many bad implementations blindly follow whatever NTP says, without checking the timestamps for plausibility. Even the better implementations are not optimal, because they merely limit the time steps of the internal clock to some value, e.g. 1 second, instead of recognizing outliers in the NTP timestamps and ignoring them.
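The plausibility check described above is simple to sketch. All names and thresholds here are illustrative, not taken from any real NTP daemon: accept an offset only if the local RTC could actually have drifted that far since the last sync, and alert once accumulated corrections exceed a daily budget.

```python
MAX_DRIFT_PPM = 100          # a cheap quartz crystal rarely drifts more than this
ALERT_SECONDS_PER_DAY = 5    # accumulated corrections beyond this warrant an alert

def plausible(offset_s: float, interval_s: float) -> bool:
    """Accept an NTP offset only if the local RTC could have drifted that
    far since the last sync; larger offsets are treated as outliers or
    possible spoofing and are ignored rather than applied."""
    max_drift_s = interval_s * MAX_DRIFT_PPM / 1e6
    return abs(offset_s) <= max_drift_s

def needs_alert(total_adjust_s: float) -> bool:
    """Flag the sysadmin when the day's accumulated adjustments are too large."""
    return abs(total_adjust_s) > ALERT_SECONDS_PER_DAY

# With hourly polls (3600 s), 100 ppm allows at most 0.36 s of drift:
assert plausible(0.2, 3600)       # within the drift budget: accept
assert not plausible(5.0, 3600)   # far outside: likely spoofed, ignore
assert needs_alert(12.0) and not needs_alert(1.5)
```

The point is that the RTC gives you an independent reference: NTP then only trims drift, and anything it reports that the hardware could not plausibly have drifted to is rejected outright instead of being slewed toward.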
That's a valid defence, and protecting against the clock being moved too far too fast is certainly a no-brainer, but you also need to threat-model the attack: what are the chances of a real attacker (not a theoretical weakness) carrying out an NTP-based attack, and what would they gain from it? For example, an attacker who controls your local network and can spoof NTP on it (which means it's probably already game over before you even start) could move the clock around and reuse an expired certificate, provided they have also compromised the device and captured its private key. You can probably dream up some scenario where this matters and where everything else that level of compromise enables wouldn't be easier, but it's so far down the list of things attackers could do that it doesn't even rate.
And in the few places where clock distribution is critical, e.g. where you need IEEE 1588 PTP (Precision Time Protocol), you're typically running over tightly controlled private links/networks or protocols (IEC 61850, IEC 62439) and/or using GPS or rubidium clocks as your time source, not NTP. Or you sync using nonces, not time. Or whatever. It's just really hard to come up with a real-world scenario where this is a significant threat, let alone one where the answer is to run NTP over TLS rather than use some better mechanism for the job.
Trump has already claimed that he destroyed all of Iran's nuclear capability in the previous US attack on Iran.
Claiming now that this new attack has the same purpose makes it certain that the USA lied about either the previous attack or the current one.
When the government of a country is a proven liar, none of its allegations about how dangerous another country is are credible.
Moreover, just before the attack, during the negotiations between the USA and Iran, it was reported that Iran had accepted most of the new American demands regarding its nuclear capabilities, demands whose goal was to prevent it from making any weapons; but its willingness to make concessions did not help it at all to avoid a surprise attack before the end of the negotiations.
The Iranians claim that the previous attack did not completely eliminate their research efforts and that they are continuing on. Anyone who values the American way of life should most certainly ensure that Iran does not achieve nuclear capability.
The problem with modern wars is that those who start them claim they do so for survival, but the claim is not based on any real action of the adversary or on any evidence that the adversary is dangerous; it rests on beliefs that the adversary might want to endanger the attacker's survival at some indefinite future time, and perhaps might even be able to do so.
Nobody who starts a war today admits to doing so for reasons other than "survival", e.g. to steal various kinds of resources from the attacked.
It has become difficult to distinguish those who truly fight for survival from those who only claim to do this.
Yes, agreed. Mainland China is not under any threat from Taiwan, for instance.
However, the Iranians chant Death To America regularly and openly. They have both an active nuclear program and a means to deliver a nuclear weapon. They are also heavy funders of anti-American militias and groups. It is incumbent upon the Americans to ensure that the Iranians do not achieve their nuclear ambitions.
Iran launched a rocket with a 1-ton payload (i.e. nuclear-capable) and a 2,000 km range two days ago. That rocket can threaten US assets and allies even into Europe. And, of course, any small ship or even a container ship could carry a nuclear weapon into an American port.
Those are derived from crude oil only because that has long been the cheapest way to make them, not because oil is necessary in any way.
And it was the cheapest way only because most prices are fake: they do not correspond to the cost of closed cycles for the materials used to make a product.
All those things can be made mostly from energy, air, water, and a few abundant minerals and metals. Technologies to make them this way have existed for almost a century (e.g. synthetic hydrocarbons to replace oil), but they are still very inefficient. The inefficiency, however, is mostly due to the negligible amounts of money allocated to developing such technologies (as long as the use of fossil oil is permitted, there is no way for synthetic hydrocarbons to be cheaper), in comparison with the enormous amounts wasted on various fads, like AI datacenters.
While the ancient Romans liked transparent crystals, especially emeralds and beryls, those were not the most valued gems.
The most valued gems in Ancient Rome were the higher-quality varieties of noble opals and pearls, which are not transparent, but which show a variable play of colors depending on the ambient light and the viewing angle.
Once when this happened to me a couple of years ago, it was the opposite.
My e-mails were put into the junk folder as spam by Microsoft's defaults, without the customer knowing anything about it.
After I managed to notify him about this, he found the e-mails there and marked them as "not spam", and from then on he received my e-mails.
So initially the customer did nothing and was not aware that some of the e-mails sent to him were being classified as spam, and he had to make an active effort to override Microsoft's default behaviour.
There was absolutely nothing suspicious about the content of the e-mails classified as spam; their only fault was not coming from one of the few major e-mail providers.