
I am no expert in cryptography, but the following does not sound too bad as a default, tbh. And I think it is misleading to focus solely on E2EE and not mention the distributed aspect.

https://telegram.org/faq#q-do-you-process-data-requests

> To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data.

> Thanks to this structure, we can ensure that no single government or block of like-minded countries can intrude on people's privacy and freedom of expression.

> Telegram can be forced to give up data only if an issue is grave and universal enough to pass the scrutiny of several different legal systems around the world.

> To this day, we have disclosed 0 bytes of user data to third parties, including governments.
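For context, the "keys split into parts" claim resembles plain XOR secret sharing, where every share is required to reconstruct the key. A minimal sketch of that primitive (this says nothing about Telegram's actual, unpublished implementation):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split key into n XOR shares; ALL n shares are needed to rebuild it,
    and any n-1 of them reveal nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # key ^ r1 ^ ... ^ r(n-1)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)
shares = split_key(key, 3)  # e.g. one share per jurisdiction
assert recover_key(shares) == key
```

Under this kind of scheme, a court order against any single share-holder yields nothing usable, which is presumably the point of the FAQ's claim.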



You can coherently argue that encryption doesn't matter, but you can't reasonably argue that Telegram is a serious encrypted messaging app (it's not an encrypted messaging app at all for group chats), which is the point of the article. The general attitude among practitioners in the field is: if you have to reason about how the operator will handle legal threats, you shouldn't bother reasoning about the messenger at all.


[flagged]


> you can install a reproducible build of Telegram and be sure it's end-to-end encrypting things.

This is incorrect. The construction for group chats in Telegram is not E2E at all, and the construction for DMs is considered dubious by many cryptographers.

It does not matter that you can reproduce a non-E2E message encryption scheme; you must still trust the servers, into which you have no visibility.

Trustworthy E2EE is table stakes for that reason. Reproducible builds aren't, because we can evaluate a bunch of different builds collected in the wild and detect differences in implementation; this is the same thing we'd do if reproducible builds were in effect.
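That "builds collected in the wild" check boils down to comparing fingerprints across distribution channels. A toy sketch (all source names hypothetical):

```python
import hashlib
from collections import Counter

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def find_outliers(builds: dict[str, bytes]) -> list[str]:
    """Flag any source whose binary differs from the majority fingerprint."""
    fps = {source: fingerprint(blob) for source, blob in builds.items()}
    majority_fp, _ = Counter(fps.values()).most_common(1)[0]
    return sorted(s for s, fp in fps.items() if fp != majority_fp)

# Hypothetical binaries collected from different distribution channels
builds = {
    "play-store": b"official build",
    "apk-mirror": b"official build",
    "sketchy-mirror": b"official build + backdoor",
}
print(find_outliers(builds))  # → ['sketchy-mirror']
```

This only detects divergence between copies, not a backdoor shipped identically to everyone, which is where reproducible builds would add value.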

There are lots of reasons splitting jurisdictions makes sense but you wrote a whole bunch of words that fall back to “hope Telegram doesn’t change their protections in the face of governmental violence”.


The reproducible build of Telegram lets you evaluate the code doing end-to-end encryption. Once you satisfy yourself it's doing this kind of encryption without implementation-level backdoors, then you don't need to worry about servers reading it (except for #5 above).

I didn't claim it encrypted "group chats". I said "things". If you want me to be specific, the "things" are individual 1-1 end-to-end encrypted chats.


Reproducible builds are not required to evaluate the encryption algorithm used in Telegram.

Software auditors use deployed binaries as a matter of course.

They'd do so even if reproducible builds were on offer, because even with reproducible builds the code and the binary aren't promised to be the same, and validating that they are can be more problematic than the normal case of auditing binaries.


Can you show me any audits of Telegram binaries or similar software?



It's interesting how, all these years later, cryptographers can still only be dubious; nobody has actually cracked the implementation (or if they have, they haven't publicized it, for whatever reason).


> On the question of balancing privacy and security, there are in fact solutions, but you have to get away from the idea of a centralized police force / centralized government, and think in terms of a free market of agencies, that can decrypt limited evidence only with a warrant and only if they provide a good reason. The warrant could be due to an AI at the edge flagging stuff, but the due process must be followed and be transparent to all

What does this mean? How can "we" move away from centralized states to "a free market of agencies"? How can there be a "market" of police forces, even in principle? Who are the customers in this imagined market? Who enforces the laws to keep it a free market?

At first glance, this sounds like libertarian fan fiction, to be honest, but I am curious.


Have you read the article I link to in that point? After you read it, you'll have a better idea, and then if you have a specific point, we can discuss.


I read it just now, and I saw nothing at all about a free market of LEAs, either police or intelligence agencies. It only speaks about some silly idea of filming every private scene while relying on magic encryption with keys that are stored... somewhere?... And somehow the keys to decrypt these most private moments are only accessible when it would be nice to do so. It's clearly not a serious idea, so it gives me a good idea of how wild the speculation gets about the broader trends.


Well, at least you got the broad strokes.

Since the parent post was flagged, I do not see any sense to continue, since no one will really see this conversation.

I do interviews with the top people. I build solutions. I give it away for free. I discuss the actual substance. And in the end it's just flagged before a serious conversation can be had. HN is not what it once was.


> if you have to reason about how the operator will handle legal threats, you shouldn't bother reasoning about the messenger at all.

That's true.

You need to run your own platform, people. XMPP is plenty simple, plenty powerful, and plenty safe -- and even your metadata is in your control.

Just self host. There's no excuse in 2024.

Wake up people!

Why should the arrest of someone else affect YOU?


"You need to run your own platform, people." What problem does this solve?

I'm someone who's been on the business end of a subpoena for a platform I ran, and narcing on my friends under threat of being held in contempt is perhaps the worst feeling I'm doomed to live with.

"XMPP is ..." not the solution I'd recommend, even with something like OMEMO. Is it on by default? Can you force it to be turned on? The answer to both of those is, as it turns out, "no," which makes it less than useful. (This is notwithstanding several other issues OMEMO has.)


Note in particular that the Ethernet connection to xmpp.ru/jabber.ru's server was physically intercepted by German law enforcement (or whatever-you-think-they're-actually-enforcing enforcement), allowing them to obtain fraudulent certificates from Let's Encrypt and snoop on all traffic. This was only noticed when the enforcement forgot to renew the certificate. https://news.ycombinator.com/item?id=37961166


> The answer to both of those is, as it turns out, "no"

This is not true; it depends on the client. Conversations has OMEMO enabled by default.


I don't see any practical difference between "it depends" and "no" here.


This is like saying we shouldn't use TCP/IP because it's not encrypted. How it actually works is that encryption is enforced by the application - indeed the only place you can reasonably enforce it. See for example the gradual phasing out of HTTP in browsers by various means.

What this means in practice is that you shouldn't focus on whether XMPP (or Matrix, or whatever) protocols are encrypted, but whether the applications enforce it. Just as there are many web browsers to choose from, there are many messaging apps. Use (and recommend) apps that enforce encryption if that's what you want.


I'm not sure I agree, particularly given that there's some incentive for us to get our relatives using these messenger protocols and clients. The Web made it work because everyone came together and gathered consensus (well, modulo some details) that enforcing HTTPS is, ultimately, a good idea given the context.

So far, I'm not seeing that same consensus from the XSF and client vendors. If the capital investment can be made to encourage that same culture, the comparison can perhaps be a little closer.


The consensus comes from the people using the clients, not from the standards bodies. It's the same for HTTPS, where the users (in this case the server admins) decided it would be a good idea to use encryption.

There are even apps like Quicksy which have a more familiar onboarding experience using the mobile phone number as the username, while still being federated with other standards-compliant servers. There is little reason to use walled-garden apps like Signal these days.


As if it were that simple. Where are you going to host that self-hosted instance? What protections against law enforcement inspections do you have? What protections against curious/nefarious hackers? How are you going to convince every single person you interact with to use it?

Gung-ho evangelism rarely converts the way a reasonable take on the subject does.


  > Just self host. There's no excuse in 2024.
I hate to break it to you, but there's plenty of excuses. We live in a bubble on HN.

May I remind you what the average person is like, with this recently famous Reddit post:

https://archive.is/hM2Sf

If you want self-hosting to happen, with things like Matrix and so on, the hard truth is that it can't merely be easy for someone who can program; it has to be trivial for someone who says "wow, can you hack into <x>" when they see you use a terminal.


You're assuming end-to-end encryption doesn't exist, and that the only way to be safe is to have someone close to you self-hosting.

Self-hosting is terrible in that it gives Mike, the unbeknownst creepy tech guy in the group, 100% control over the metadata of his close ones: who talks to whom, when, etc. It's much better either to get rid of that with a Tor-only p2p architecture (you'll lose offline messaging) or to outsource hosting to some organization that has no interest in your metadata.

The privacy concern Green raised was the confidentiality of messages. There is none for Telegram, and Telegram should have moderated content for illegal stuff because of that. They made a decision to become a social media platform like Facebook, but they also chose not to cooperate with the law. Durov was asked to stop digging his hole deeper back in 2013, and now he's reaping what he sowed.


Or better use a P2P IM like Jami: https://jami.net


Sadly, you still have to pipe all messages through Apple’s notification API if you want notifications on iOS


Metadata? Yes. The plaintext of the messages is not piped through the notification API.

https://www.medianama.com/2023/12/223-signal-push-notificati...


Wasn’t this the exact rhetoric used to justify PRISM during the Snowden revelations?


Yes: End-to-end encryption is technically quite difficult, but politically and legally feasible (at least currently, at least in most countries).

Simply not cooperating with law enforcement is technically moderately difficult, but politically and legally impossible.

Between a difficult and an impossible option, the rational decision is to pick the difficult one.


Indeed. Even being charitable and assuming that they're not lying (they say elsewhere that they've shared zero bytes with law enforcement, despite this being demonstrably false), in reality if say, they were to arrest the founder in an EU country (France, perhaps), all they need to do is threaten him with twenty years in prison and I'm sure he'll gladly give up the keys from all the different locations they supposedly have.


Is there a nice solution for multiparty (n >= 3) end-to-end encryption?


Arguably WhatsApp's protocol scales reasonably well (nice description in this survey paper: [1]), at least well enough for maximum WhatsApp group sizes (times up to four devices per participant).

[1] https://eprint.iacr.org/2017/713.pdf


MLS scales best for large n, but WhatsApp/Signal or Matrix do pretty well for < 1k people
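Rough per-message costs behind those scaling claims, as a back-of-envelope sketch (pairwise fan-out sends one ciphertext per recipient; sender-key schemes broadcast a single ciphertext; an MLS/TreeKEM key update touches on the order of log2(n) tree nodes):

```python
import math

def ciphertexts_per_message(n: int, scheme: str) -> int:
    """Ciphertexts generated per group message for n members."""
    if scheme == "pairwise":    # Signal-style fan-out: one per recipient
        return n - 1
    if scheme == "sender-key":  # WhatsApp/Megolm: one broadcast ciphertext
        return 1
    raise ValueError(f"unknown scheme: {scheme}")

def mls_update_cost(n: int) -> int:
    """MLS/TreeKEM: a member's key update touches O(log n) tree nodes."""
    return math.ceil(math.log2(n))

print(ciphertexts_per_message(1000, "pairwise"))    # → 999
print(ciphertexts_per_message(1000, "sender-key"))  # → 1
print(mls_update_cost(1024))                        # → 10
```

The asymptotic win for MLS shows up in membership changes and key rotation, which is why it matters mostly for very large n.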



A possible implementation using existing infrastructure, where at least the client is open: modify the messaging client so that when it receives multiple private connections, it routes every incoming message to all connected members. Now if you have, say, 10 users who want group-encrypted chats, have one of them run the modded client too, so that any user connecting to a private chat with that client essentially enters a room with the other users. Of course this requires trust between members, and adding another encryption layer on all clients might turn out to be necessary so that you don't need to worry about the carrier telling the truth (all p2p connections encrypted, etc.).
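A toy model of that fan-out idea (names hypothetical; real connections, authentication, and the extra encryption layer are all abstracted away):

```python
class RelayClient:
    """Toy model of the modded client described above: every incoming
    message on a private connection is fanned out to all other
    connected members, forming an ad-hoc room."""

    def __init__(self):
        self.members = {}  # member name -> inbox (stand-in for a pvt chat)

    def connect(self, name: str) -> None:
        self.members[name] = []

    def receive(self, sender: str, message: str) -> None:
        # Forward to everyone except the original sender.
        for name, inbox in self.members.items():
            if name != sender:
                inbox.append((sender, message))

relay = RelayClient()
for user in ("alice", "bob", "carol"):
    relay.connect(user)
relay.receive("alice", "hi all")
print(relay.members["bob"])  # → [('alice', 'hi all')]
```

Note that the relay operator sees every message unless the members layer their own encryption on top, which is exactly the trust issue the comment flags.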


Have the room owner create an AES-256 key, send it to all party members via 1:1 E2EE, and encrypt room messages with that AES key.
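A minimal sketch of that scheme. A SHA-256 XOR keystream stands in for AES-256-GCM so the snippet runs on the Python stdlib; it is a toy cipher, not a secure one, and the 1:1 E2EE distribution step is modeled as a plain dict:

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher standing in for AES-256-GCM. NOT secure;
    no authentication, no vetted design, illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Room owner generates one 256-bit group key...
group_key = secrets.token_bytes(32)

# ...and distributes it to each member over the existing 1:1 e2ee channels
# (modeled here as a dict; in reality each copy would itself be encrypted
# with the pairwise session keys).
members = {name: group_key for name in ("alice", "bob", "carol")}

# Any member encrypts a room message once with the shared key...
nonce = secrets.token_bytes(12)
ciphertext = keystream_encrypt(group_key, nonce, b"meeting at noon")

# ...and every other member can decrypt it (XOR cipher is its own inverse).
assert keystream_encrypt(members["bob"], nonce, ciphertext) == b"meeting at noon"
```

The appeal is that each message is encrypted once rather than once per recipient; the trade-off, raised in the replies, is that a single long-lived shared key sacrifices forward secrecy unless it is rotated.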


This kills the forward secrecy.

IIRC Signal just has each group member send each group message to each recipient with the standard pairwise encryption keys. It's the message's headers that let the recipient know it's intended for the group and not a 1:1 chat.


this is pretty much what Matrix does, if I understand correctly.

Additionally, the key is regularly rotated to provide some degree of forward secrecy and to avoid encrypting for people who have left the group chat.


> this is pretty much what Matrix does, if I understand correctly.

I think it has senders encrypt messages with each room member's public key, rather than a single shared key. (At least, that's what the behavior I've seen suggests to me.)

Here's the spec, in case you want to comb through it:

https://spec.matrix.org/v1.11/client-server-api/#end-to-end-...


> When creating a Megolm session in a room, clients must share the corresponding session key using Olm with the intended recipients, so that they can decrypt future messages encrypted using this session. An m.room_key event is used to do this. Clients must also handle m.room_key events sent by other devices in order to decrypt their messages.

https://spec.matrix.org/v1.11/client-server-api/#mmegolmv1ae...

Olm is the public-key encryption scheme, similar to the Signal Protocol. It is used to exchange room_key messages, but not the room messages themselves.

MEGOLM as linked in the specification: https://gitlab.matrix.org/matrix-org/olm/blob/master/docs/me...
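The core Megolm idea, a shared outbound session key that is hash-ratcheted forward per message, can be sketched roughly as follows (heavily simplified; real Megolm uses a four-part ratchet plus message signing, and the class/label names here are made up):

```python
import hashlib

class MegolmLikeSession:
    """Sketch of a Megolm-style outbound session. The shared state is
    hash-ratcheted forward after every message, so a device that learns
    the state at index i cannot recover keys for indices < i."""

    def __init__(self, seed: bytes):
        self.state = seed   # shared with each recipient device via Olm
        self.index = 0

    def next_message_key(self) -> tuple[int, bytes]:
        key = hashlib.sha256(b"msg-key" + self.state).digest()
        # One-way ratchet: old state is unrecoverable from the new one.
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        self.index += 1
        return self.index - 1, key

session = MegolmLikeSession(seed=b"session key shared via m.room_key")
i0, k0 = session.next_message_key()
i1, k1 = session.next_message_key()
assert (i0, i1) == (0, 1) and k0 != k1  # fresh key per message
```

This is why the scheme gives only partial forward secrecy: anyone holding the state at index 0 can derive all later keys until the session is replaced.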


simplex.chat


The entire platform is a joke. It pretends to have no identifiers and heavily markets queues (a basic programming construct) as a solution to the privacy problem.

You ask the authors how they solved the problem of server needing to know to which client connection an incoming ciphertext needs to be forwarded, and they'll run to the hills.

They're lying by omission about their security, and misleading about what constitutes a permanent identifier.


That you don't like the design is well known. But this is not the reason to lie.

You understand the design quite well from our past conversations; you simply don't like the fact that we don't recognise user IP address as a permanent user identifier on the protocol level. It is indeed a transport identifier, not a protocol-level identifier of the kind all other messaging networks have for users (in addition to transport identifiers).

Message routing protocol has anonymous pairwise identifiers for the connections between users (graph edges), but it has no user identifiers: messaging servers have no concept of a user, and no user accounts.

Also, recently we added a second step in message routing that protects both user IP addresses and transport sessions: https://simplex.chat/blog/20240604-simplex-chat-v5.8-private...

In general, if you want to meaningfully engage in design criticism, I would be happy to, and it would help. But simply spitting out hate online because you don't like something or somebody is not a constructive approach: you undermine your own reputation and you also mislead people.

> You ask the authors how they solved the problem of server needing to know to which client connection an incoming ciphertext needs to be forwarded, and they'll run to the hills

This is very precisely documented, and this design was recently audited by Trail of Bits (in July 2024); we are about to publish their report. So either you didn't understand, or you are lying.

> They're lying by omission about their security, and misleading about what constitutes as a permanent identifier.

You would have to substantiate this claim, as otherwise it is slander. We are not lying about anything, by omission or otherwise. You, on the other hand, are lying here.

That you are spiteful for some reason is not a good enough reason.

Factually, at this point SimpleX Chat is one of the most private and secure messengers, see the comparisons of e2e encryption properties in SimpleX Chat and other messengers: https://simplex.chat/blog/20240314-simplex-chat-v5-6-quantum...


>> You ask the authors how they solved the problem of server needing to know to which client connection an incoming ciphertext needs to be forwarded, and they'll run to the hills

> This is very precisely documented, and this design was recently audited by Trail of Bits (in July 2024), we are about to publish their report.

I’ve looked at SimpleX in the past and am also curious about this. Is there a high-level summary?


>you simply don't like the fact that we don't recognise user IP address as a permanent user identifier on the protocol level

So how exactly are all those DMCA letters finding their way to the correct household if an IP address doesn't deanonymize you?

>Message routing protocol has anonymous pairwise identifiers for the connections between users

I'm so tired of you avoiding the obvious question. What does this identifier look like?

Given that you say it's anonymous, it's probably not a username. So: is it a random string, or an RSA/DH/Ed25519/Ed448/ECDSA key pair? Is it permanent? If not, how often does it change? How does it change? Are the identifiers advancing in a hash ratchet, etc.? Can it change while the IP address stays the same?

Give me an example of the identifier. If it is a collection of data, explain every single segment of it.

It's clear the server does not tell one user connection from another by its IP address. So, until you explain what information exactly the server uses to tell one user's connection apart from another, it makes no sense to discuss this further.

>You would have to substantiate this claim, as otherwise it is slander.

As per the above, IPv4 addresses have been used to identify individual subscriptions, and you're not making it clear that if a user wants ambiguity about the identity of who's behind an IP address, they should not live in a single-person household. You're also not defaulting to Tor, so by default every single-household user not behind a NAT can be identified by their IPv4 address. You pretending that IPv4 addresses don't matter doesn't change reality.

Also as for your vague threats of SLAPP lawsuits, let me give you a quick lesson on the Finnish law:

"Edellä 1 momentin 2 kohdassa tarkoitettuna kunnianloukkauksena ei pidetä arvostelua, joka kohdistuu toisen menettelyyn politiikassa, elinkeinoelämässä, julkisessa virassa tai tehtävässä, tieteessä, taiteessa taikka näihin rinnastettavassa julkisessa toiminnassa ja joka ei selvästi ylitä sitä, mitä voidaan pitää hyväksyttävänä."

or

"Defamation as referred to in subsection 1, point 2 above is not considered to be criticism directed at another person's conduct in politics, business, public office or task, science, art or similar public activities and which does not clearly exceed what can be considered acceptable."

https://www.finlex.fi/fi/laki/ajantasa/1889/18890039001

Tldr: Criticism of businesses is legal in Finland. Feel free to consult your lawyers on the matter.


As per

https://github.com/simplex-chat/simplexmq/blob/stable/protoc...

"""

SMP is initialized with an in-person or out-of-band introduction message, where Alice provides Bob with details of a server

* IP address / host name [alice-land.net or similar]

* port

* and hash of the long-lived offline certificate*

* a queue ID

* Alice's E2EE public key.

"""

The queue ID is of interest here. The text also says

"""

When setting up a queue, the server will create separate sender and recipient queue IDs (provided to Alice during set-up and Bob during initial connection)."

"""

---

So the queue is basically a pair of port numbers generated by the server. The server gives Alice queue number pair (a11cea11ce, b0bb0bb0b0b).

Alice's introduction message tells Bob that if Bob puts messages into the server's port number a11cea11ce, they are sent to her. And upon first connecting to Alice, Bob receives from the server queue number b0bb0bb0b0b, and he knows that when he puts messages into that port, they go to Alice. Since these are used to deliver packets, they cannot be used by other SimpleX users. So SimpleX has created the functional equivalent of emails user46453451@simplex.com and user2646453@simplex.com for the two users, and the users can use these addresses to converse.

The server knows that port a11cea11ce is read by 414.414.414.414 (Alice), and if it's suddenly 414.414.414.224, or a tor.tor.tor.tor exit node, it's still the same user, and the server now knows that user is trying to become anonymous. If, OTOH, Alice registered via Tor, then if she ever fails at connecting to port a11cea11ce without Tor, she must assume her queue number is permanently deanonymized.

But there is queue rotation. https://github.com/simplex-chat/simplexmq/blob/56986f82c89b0...?

The problem is, if Alice requests a new queue number, she is still connected via the same session that was already associated with the deanonymized IP address. The service can just add the new queue pair as the newest queues associated with Alice's account. Alice cannot re-anonymize her account again; the server always knows it's the same user, unless Alice creates a completely new account from behind Tor and starts using new queues. This is why SimpleX should force connections through Tor, but it leaves proxy configuration to the user. Yet it claims it's supposedly more private than the competition, like Cwtch.

"But the server doesn't know who the people behind the IP-addresses are"

Neither does Signal know who the people behind phone numbers or IP-addresses. But that's not the problem.

The server can accumulate data, and Alice's friendly fascist government can come around asking for these logs. Suppose Bob was apprehended and his device was confiscated, and the local FSB wants to know who is behind queue ID a11cea11ce. A malicious server will have recorded Alice's accidental connection to the service without Tor, so the queue ID a11cea11ce can be correlated with Alice's IP address, and the ISP can tell which subscriber was using that IP address at the time. Again, defaulting to Tor in clients would have removed the temptation to aggregate user data on the server side.
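That correlation concern can be made concrete with a toy server log (hypothetical names and addresses; this models the attack described above, not SimpleX's actual code):

```python
from collections import defaultdict

TOR_EXITS = {"tor-exit-1", "tor-exit-2"}  # stand-in for known exit nodes

class MaliciousServerLog:
    """Toy model: a server that records every (queue_id, source_ip) pair
    deanonymizes a queue the moment its owner connects once without Tor."""

    def __init__(self):
        self.seen = defaultdict(set)  # queue_id -> set of observed IPs

    def record(self, queue_id: str, ip: str) -> None:
        self.seen[queue_id].add(ip)

    def deanonymized_queues(self) -> dict:
        """Queues observed from at least one non-Tor address."""
        return {q: ips - TOR_EXITS
                for q, ips in self.seen.items() if ips - TOR_EXITS}

log = MaliciousServerLog()
log.record("a11cea11ce", "tor-exit-1")
log.record("a11cea11ce", "tor-exit-2")
log.record("a11cea11ce", "414.414.414.414")  # one accidental clearnet hit
print(log.deanonymized_queues())  # → {'a11cea11ce': {'414.414.414.414'}}
```

One slip is enough: all past and future traffic on that queue is then linkable to the clearnet address.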

So, reading your documentation, it's not exactly clear why my reasoning was wrong. Please do tell me how a malicious server can't correlate IP addresses with queues when your threat model states https://github.com/simplex-chat/simplexmq/blob/stable/protoc...

"""

The server can

* perform the correlation of the queue used to receive messages (matching multiple queues to a single user) via either a re-used transport connection, user's IP Address, or connection timing regularities.

* learn a recipient's IP address, track them through other IP addresses they use to access the same queue, and infer information (e.g. employer) based on the IP addresses, as long as Tor is not used.

"""

Your service doesn't seem to be adding anything groundbreaking to the mix with queues. It's not using usernames, or registered accounts, or phone numbers. It's generating queue numbers that are just random identifiers, which can be used to cross-correlate how many ciphertexts each user sent to one another, and it can bind all IP addresses to those users. It can't be the case that both the IP address and the queue number change at the same time, yet the service knows to which user it should deliver an incoming ciphertext.

The bottom line is this. The server that always knows where to relay ciphertexts, can always eventually deanonymize the user, unless the system defaults to Tor.

What is being said on the front page:

>Other apps have user IDs: Signal, Matrix, Session, Briar, Jami, Cwtch, etc. SimpleX does not, not even random numbers.

doesn't match the fine-print under threat model.

So explain to me, why isn't a list of tuples a malicious server secretly builds over time

[
  (queue-number1, Tor-IP-address1),
  (queue-number1, Tor-IP-address2),
  (queue-number2, Tor-IP-address2),
  (queue-number3, Tor-IP-address2),
  (queue-number3, Deanonymized IPv4 address),
  (queue-number3, Tor-IP-address3),
  (queue-number4, Tor-IP-address3),
  (queue-number4, Tor-IP-address3),
  (queue-number4, Tor-IP-address4),
  (queue-number4, Tor-IP-address5),
]

not a valid user ID that can be used to deanonymize Alice based on queue-number4 on Bob's confiscated device?

Why is this better than Cwtch, where it's just a long-term onion address that was never deanonymized? Cwtch didn't require a 20-page manual to set up, either.


I wonder if this is practically relevant at all.

Given that users can access their messages without interacting with anyone at Telegram, automatic aggregation of the cloud data for single endpoints must be in place.

Consequently, the data can be accessed from a single jurisdiction anyway.


Wouldn’t being forced to give up the password and logging in be a violation of the 5th amendment, at least in the US? I think it’s a mixed bag of rulings right now, but it seems like it would make sense for it to fall that way at the end of the day.


Even if you have a password in Telegram as a second factor, Telegram can bypass it anyway, and the user isn't even asked.


The problem with this approach is that it relies on governments accepting your legal arguments. You can say "no, these are separate legal entities and each one requires a court order from a different country" all you want, but you also need to get the courts themselves to agree to that fact.


The problem with this claim is that it's hardly verifiable. Telegram's backend is closed source, and the only thing you can be sure of is that their backend sees every message in plaintext.


[flagged]


Crypto is really hard. You have to trust that whoever implemented the crypto is smart and diligent, and you have to trust that whoever operates the crypto is smart and diligent, and you have to trust both of those parties.

Centralization means that it's very easy to trust that whoever implements and operates the crypto is smart. Do I trust them? I don't know. I trust myself, but I don't think I am independently capable of operating or implementing crypto - if I want to make assertions like "this is end-to-end-encrypted" and ensure those assertions remain true, I will need a several million dollar a year budget, at a minimum. "Decentralized" means you've got tons of endpoints that need securing, and they can share crypto implementations, but the operations are duplicated. Which means it's more expensive, and you're trusting more operators, especially if you want resiliency.

Yes, something like Signal or Whatsapp means you've got a single point of failure, but something like Matrix, you've got many points of failure and depending on how it's configured every point of failure can allow a different party to break the confidentiality of the system.

Decentralization is great for resiliency but it actively works against reliable and confidential message delivery.


It's always very easy to trust, as long as you're allowed to be mistaken in your trust. That's literally how people fall for all kinds of things, including wars, advertising, etc. It's much harder to fool all the people all the time than to corrupt some of the people (the ones in charge) all the time:

https://www.npr.org/sections/parallels/2014/04/02/297839429/...

The mistake Moxie makes (and you do as well; you should really click on the links I posted to understand why) is assuming that "no one wants to run a server". In fact, an entire industry of professional "hosting companies" exists for Wordpress, Magento, etc. It's a free market of hosting.

You can't trust the software they're hosting, that's true. Which is why we have things like Subresource Integrity on the Web, IPFS, and many other ways to ensure that the thing you're loading is in fact bit-for-bit the same as the thing that was just audited by 3 different agencies, and battle-tested over time.

Think UniSwap. I'd rather trust UniSwap with a million dollars than Binance. I know exactly what UniSwap will do, both because it's been audited and because it's been battle-tested with billions of dollars. No amount of "trust me bro" will make me trust Binance to that extent. The key is "Smart contract factories":

https://community.intercoin.app/t/intercoin-smart-contract-s...

In short, when you decouple the infrastructure layer (people running Ethereum nodes) from the app layer (the smart contracts), all of a sudden you can have, for the first time in human history, code you can trust. And again, there is a separation of responsibilities: one group of people runs nodes, another writes smart contracts, another audits them, another makes front-end interfaces on IPFS, etc. And they can all get paid, permissionlessly and trustlessly.

Look at Internet Computer canisters, for instance, or TON network smart contracts. There are many examples beyond today's slow, clunky blockchains.


What do web3 and cryptocurrencies have to do with the discussion?

Decentralized protocols have existed for a very long time. Email has existed since the '70s. The telephone is also arguably decentralized and has existed for even longer.


The technology has potential to be decentralized, but telephones were famously considered a "natural monopoly" and ended up centralized under Ma Bell.

Government split Ma Bell into multiple smaller pieces, but they still operated as a cartel and kept prices high. They had centralized telephone switchboard operators etc.

It was only when the authors of decentralized file-sharing networks like Kazaa (who built them to get around yet another government-enforced centralized regime: intellectual property, the RIAA, MPAA, etc.) went legit that we got Skype and other Voice-over-IP consumer products. And seemingly overnight, prices dropped to zero and we got packet-switched networks over dumb hubs that anyone can run.

That's the key. We need to relegate these centralized platforms (X, Meta, etc.) to being glorified hubs running nodes and earning some crypto, akin to IPFS nodes earning Filecoin, or BitTorrent nodes earning BTT, etc.

Everything centralized gets enshittified.

Clay Shirky gave a talk about this in 2005: https://www.ted.com/talks/clay_shirky_institutions_vs_collab...

And Cory Doctorow recently: https://doctorow.medium.com/https-pluralistic-net-2024-04-04...


> and earning some crypto

You are not answering my main concern. Again, you sneak crypto into the discussion. Why?

We have decentralized stuff: email, XMPP, Matrix, the fediverse. All of this works without the web3/crypto stuff. These things are not perfect, including in their decentralized aspect (sometimes to the point of doubting that decentralization really works well, although I personally think decentralization is a good thing).

I didn't downvote you, but I suspect this is exactly why you are being downvoted, since you asked. Many of us just think crypto and this web3 stuff is bullshit, and it gets mentioned totally off topic, without any convincing link to the discussion, every single time.


Because crypto is literally how entities on a decentralized network get paid in an autonomous network. It's not via cash transfers. It's not via bank transfers. Or having accounts in some central bank.

Look at FileCoin and IPFS, for instance. Once you automate the micropayments and proofs of spacetime, it becomes a cryptocurrency. And then the providers of services can sell it to the next consumers.

Just because you hear the word "crypto" doesn't mean it's automatically off-topic, when it's literally the thing that is inevitably used by decentralized systems to do proper accounting and reward the providers for providing any services. Without it, you'll still be sitting -- as you are -- with no viable alternatives to Twitter and Facebook.


> Because crypto is literally how entities on a decentralized network get paid in an autonomous network

That doesn't ring true. What is an autonomous network? Those things run on the internet, largely backed by infrastructure funded by traditional money. Moreover, emails, tor nodes, xmpp servers, matrix homeservers, fediverse hosts... None of those need cryptocoins to fund themselves, and they are indeed for the most part funded using traditional money. Micropayments are also not something needed for decentralization.

Decentralization is way more than just about decentralizing money and many of us don't trust crypto coins.


> it's literally the thing that is inevitably used by decentralized systems to do proper accounting and reward the providers for providing any services.

Or just like, advertisement. ActivityPub, Matrix, PeerTube, NextCloud and Urbit are all fully decentralized and let any instance host monetize themselves however they want.

Decentralized services, even for-profit ones, are not synonymous with cryptocurrency. Stop spreading misinformation to promote an unrelated topic.


Urbit uses NFTs as IDs, which can be transferred

"Urbit IDs aren’t money, but they are scarce, so each one costs something. This means that when you meet a stranger on the Urbit network, they have some skin in the game and are less likely to be a bot or a spammer." https://urbit.org/overview

Who pays for the hosting of ActivityPub and Matrix instances?

What if one instance abuses other instances too much? How do you prevent it?

What if some spammer abuses Nextcloud? Oh, look at that, Nextcloud and Sia announce "cloud storage in the blockchain": https://nextcloud.com/blog/introducing-cloud-storage-in-the-...

Now we come to your ActivityPub stuff, including PeerTube. The question is, who pays for storage? What are the economics of storage?

I literally go into detail here: https://community.intercoin.app/t/who-pays-for-storage-nfts-...

I met the founders of LBRY / Odysee and other tokens that are actually being used for streaming. LBRY is a genuine utility token in active use, for instance.

You are totally ignoring the part that people need to get paid for storing stuff, and at the same time the payment needs to happen automatically.

Any other examples?


> Who pays for the hosting of ActivityPub and Matrix instances?

And

> What if one instance abuses other instances too much? How do you prevent it?

Simple, they get blocked by other instances?

How do cryptos change anything about these three questions?

> Oh, look at that, Nextcloud and Sia announce "cloud storage in the blockchain"

That just means Sia wrote a Nextcloud integration for their stuff and somehow Nextcloud decided to showcase this integration. That doesn't mean Nextcloud has much to do with blockchain stuff. Nextcloud integrates with anything and its dog.

> What if some spammer abuses Nextcloud?

What kind of spam are you imagining and how do you think crypto coins are going to solve this? You don't use cryptos for this, you use good old system administration and in particular antispam systems, which don't use coins.

> You are totally ignoring the part that people need to get paid for storing stuff, and at the same time the payment needs to happen automatically.

We're not.

First, they don't always need to, some people run stuff out of advocacy for instance.

Second, getting paid with regular money is not an unsolved problem; there are plenty of options, many of which also come with some built-in guarantees against fraud. It's literally how the whole world works. Now, I can't say I'm a huge fan of our financial system, but that's a social issue in need of a social solution, not a technical one.

I'm stopping here, it's pretty clear that I won't get a solid, reasonable argument in favor of cryptos here. And that since your top comment is flagged to death, nobody reads us anyway.


I could go on for literal days. The blockchain isn't a panacea, and rationally most solutions don't ultimately settle on a bespoke transactional network with audited consensus protocols. It is stupid, overdesigned, and as a sign of its poor fitness it dies over time. I'm not relaying some "evil villain" speech from someone that wants to see decentralized services die, this is a reality check from your peers who also hate centralization. Borderline intervention if it has to be - you've echoed this same sentiment in several threads while apparently ignoring the self-evident failure of protocols that embody your vision.

This was a cutting-edge and untested concept in maybe 2011. You missed the boat by a decade and a half.

> LBRY is a genuine utility token being used for instance.

Yeah, last I heard of their brand was when a member of my graduating class became a neo-nazi for six months and incessantly uploaded videos detailing his hallucinations to the internet. You're in good company, it sounds like.


Ugh.

> I could go on for literal days

I believe there should be a way to have a rational discussion about this, point by point, maybe threaded. This ain't it. But whatever.

I am not married to blockchains, I have literally criticized blockchains. I have said that smart contracts and distributed systems are what we need, whether that's Internet Computer canisters, SAFE network datachains, IOTA DAGs, Hashgraphs, or Intercloud (something I designed). I don't know why people on HN love to do this strawman over and over.

Blockchain is a settlement layer. I don't even say it's needed for day-to-day micropayments. I explain how the systems could work, which could use any smart contracts for their settlement layer, I don't care about the underlying technology for those, but the Web2 elements are there: https://qbix.com/ecosystem#DIGITAL-MEDIA-AND-CONTENT

> The last I heard of [LBRY]

Yeah, they got sued by Gary Gensler's SEC even though their coin was one of the few actually being used as a utility token. They were forced to shut down their entire company, and only the network survived. Similarly with Telegram, with the SEC. I will wager that you're reflexively on the side of Gary Gensler and the government "because blockchain". But I would have liked to see this innovation grow, not be killed by governments.


> Many people on HN silently downvote anything that has to do with crypto and decentralization.

I primarily downvote them because I haven't seen anything come out of that space that seems like it's remotely capable of actually achieving decentralization (for which I also see a dire need in today's structure of the Internet and the applications running on it).

95% of the time, these things are built as a Potemkin village of technical decentralization backed up by complete administrative centralization, with the path to actual decentralization "very high on our public roadmap available here, we promise!!!"


I downvoted for whataboutism


I respect someone who downvotes and explains why.

I wish the downvote button would require at least a private message to the person of why they are being downvoted. (Upvote could have an optional message).

Otherwise it's the most toxic feature on HN as it promotes extreme groupthink activism.


A public message seems better. There's zero accountability in private messages - you can just smash your keyboard. You can't leave such a message if it's public.


Maybe hijack the key and message before it gets distributed. Or just get after the pieces themselves if they are from Chinese or Russian authorities. Or just threaten to close the local data center if they do not collect the pieces from elsewhere, see if they can be convinced to hand over what they have, regardless where they put it.

We can be null in cryptography, but handing over both the secret and the key to that secret to the very same person is quite a trustful step, even when they say 'I promise I will not peek or let others peek, pinky promise!' - with an 'except if we have to or if we change our mind' in the small print or between the lines.


https://www.spiegel.de/netzwelt/apps/telegram-gibt-nutzerdat...

> Translated: Contrary to what has been publicly stated so far, the operators of the messenger app Telegram have released user data to the Federal Criminal Police Office (BKA) in several cases.

https://torrentfreak.com/telegram-discloses-user-details-of-...

> Telegram has complied with an order from the High Court in Delhi by sharing user details of copyright-infringing users with rightsholders.

Anyways just some examples in which their structure doesn't matter. In the end, user data is still given away. It's also why e2ee should be the sole focus. Everything else is "trust me bro it's safe" levels of security.


>To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions.

This is utter bullshit I debunked back in 2021.

https://security.stackexchange.com/questions/238562/how-does...


In practice it also didn't work: only one government was needed to arrest the guy. And now all they need is a hammer or some pliers. No need for multiple governments to coordinate.


Well I'm sure France isn't taking Durov to some black site at this point. But since there's no such thing as distributed computation of a single AES block operation, each server must by definition have access to the SQL database's encryption key, and that key can be confiscated from whichever node is interacting with the database. Last I heard the servers in the EU were in the Netherlands, so if needed, perhaps the authorities there will handle it after court proceedings.


> The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data.

Or the CEO and owner, staring down the barrel of a very long time in prison, obtains the keys from his employees and provides them to the authorities.

Would he do this? To me, it matters little how much I trust someone and believe in their mental fortitude. I could instead rely on mathematical proofs to keep secrets, which have proven to be far better at it than corporations.
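Telegram has never published how exactly its keys are split, but the basic point holds for any plain secret-splitting scheme: the shares protect nothing against whoever can gather all of them in one place. A minimal sketch, assuming a simple XOR split (the function names are illustrative, not Telegram's actual mechanism):

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n XOR shares; all n are required to reconstruct it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR each random share into the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
shares = split_key(key, 3)        # e.g. one share per jurisdiction
assert combine(shares) == key     # whoever holds all shares holds the key
```

Any subset of fewer than all shares reveals nothing about the key, which is the advertised property. But the scheme has no defense at all against the operator itself collecting the shares, which is exactly the scenario where a CEO under legal pressure instructs his own employees.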


I am wondering if there was any incident that disproved the “we have disclosed 0 bytes of user data to third parties, including governments.” statement.



Splitting stuff between multiple companies doesn't really protect anyone if the boss of all companies is held hostage.

Also

> To this day, we have disclosed 0 bytes of user data to third parties, including governments.

Didn't they conclude an agreement with the Russian government in 2021?


Clearly the investigating authorities are not buying that argument because, well, it's completely absurd. Both technically and legally, Telegram are in control of those keys, regardless of where they are hosted.


> Telegram can be forced to give up data

That's all you need to know. Matrix and Signal can't be forced in any way.


The admins of Matrix instances sure can be forced to give up data. The metadata is not encrypted, and many rooms are not either.


Metadata is indeed an open issue on Matrix. I believe addressing it is on their to-do list.

Many rooms are not encrypted because they are public rooms, where there would be no point in it. Encryption has been the default for quite a while now.


> I believe addressing it is on their to-do list.

I doubt that it's very high on that list, as the problem seems very hard. Very hard as in: do we even know it's possible? "Metadata" includes a lot of stuff, but basically the originator, the destination and the timing of the messages, and the participants of a room, are all quite difficult to hide in a federated system.

I do believe there is a plan for getting rid of the association of one user in multiple rooms, but that's but a small bit of metadata. I think it is part of the puzzle for supporting changing homeservers.


I was referring to the metadata that are typical complaints about Matrix, like usernames and reactions.

> "Metadata" includes a lot of stuff, but basically the originator, the destination and the timing of the messages

Indeed. AFAIK, sender/recipient correlation cannot actually be protected at the software level, because packet switched networking necessarily reveals it. The common way I'm aware of to mitigate this problem is at the network level, by trying to avoid common routes that would allow monitoring many users' traffic from any one place.

Concretely, that might mean having everyone use Tor (which some folks suggest already) or going fully peer-to-peer (which some messengers do already, and Matrix has been experimenting with).

Signal tries to improve the situation with Sealed Sender, but I'm pretty confident that can't protect against the Signal servers being compromised, nor against network monitoring. When trying to think of how it's useful at all, the only thing that comes to mind is that it might strengthen the Signal Foundation's position when a government demands logs. (And if that is why they implemented it, I suppose they must be keeping logs, at least for a short period.)

Related:

https://www.ndss-symposium.org/ndss-paper/improving-signals-...


With Telegram, even the data can be accessed. Also: https://news.ycombinator.com/item?id=41351227



That’s Telegram's CEO saying how he and his employees were “persuaded and pressured” by US FBI agents to integrate open-source libraries into Telegram (1). There are a lot of questions to ask, like whether the open-source libraries are indeed compromised, among other things. I take it that this arrest was the final straw to pressure him to give up and hand over some “needed” data, as all the accusations I read are laughable. Instagram is full of human trafficking and minor exploitation, drug dealers, and worse. The same goes for other social media, and I don’t see Elon or Zuck getting arrested. I am confident that this arrest is to obtain specific information, and after that, he will be released, or spend 20 years if he doesn’t comply.

(1) https://youtu.be/1Ut6RouSs0w?t=1082


Or he's trained in the art of lying

"At St. Petersburg State University, Mr. Durov studied linguistics. In lieu of military service, he trained in propaganda, studying Sun Tzu, Genghis Khan and Napoleon, and he learned to make posters aimed at influencing foreign soldiers."

https://www.nytimes.com/2014/12/03/technology/once-celebrate...

You really think the FBI would casually go to Durov and start telling him which libraries to deploy in his software?

This "They're trying to influence me that means its working" 5D-chess is the most stupid way to assess security of anything.

There's nothing to backdoor because it's already backdoored:

Code does not lie about what it does. And the Telegram clients' code doesn't hide that it doesn't end-to-end encrypt the data it sends to Telegram's servers. That's the backdoor. It's there. Right in front of you. With a big flashing neon light that says backdoor. It's so obvious I can't even write a paper about it, because no journal or conference would accept me stating the fucking obvious.


I do wonder if this would hold up, though. If Telegram stored each character of your chat in a different country, couldn't a single country still force them to hand over the data, and either fine them or force them to stop operating if they wouldn't share the full chat? It seems like a loophole, but I don't know what the precedent is.



