That's a bit misleading. WhatsApp uses Signal's end-to-end encryption scheme, but not Signal's networking protocol, which remains proprietary. Otherwise, we could have cross-messaging between Signal and WhatsApp.
WhatsApp just implemented cross-messaging using the open Signal Protocol, as required by the EU. We will see whether the Signal messenger enables interop with WhatsApp; it is not required to do so.
Coding Dissent: Art, Technology, and Tactical Media
This presentation examines artistic practices that engage with sociotechnical systems through tactical interventions. The talk proposes art as a form of infrastructural critique and counter-technology. It also introduces a forthcoming HackLab designed to foster collaborative development of open-source tools addressing digital authoritarianism, surveillance capitalism, propaganda infrastructures, and ideological warfare.
In this talk, media artist and curator Helena Nikonole presents her work at the intersection of art, activism, and tactical technology — including interventions into surveillance systems, wearable mesh networks for off-grid communication, and AI-generated propaganda sabotage.
Featuring projects like Antiwar AI, the 868labs initiative, and the curatorial project Digital Resistance, the talk explores how art can do more than just comment on sociotechnical systems — it can interfere, infiltrate, and subvert them.
This is about prototypes as politics, networked interventions as civil disobedience, and media hacks as tools of strategic refusal. The talk asks: what happens when art stops decorating crisis and starts debugging it?
The talk will also introduce an upcoming HackLab initiative — a collaboration-in-progress that brings together artists, hackers, and activists to develop open-source tools for disruption, resilience, and collective agency — and invites potential collaborators to get involved.
I agree the ELIZA effect is strong; additionally, I think there is some kind of natural selection at work.
I feel like LLMs are specifically selected to impress people who have a lot of influence, people like investors and CEOs, because an "AI" that does not impress this segment of the population does not get adopted widely.
This is one of the reasons I think AI will never really be an expert: it does not need to be. It only needs to adopt a skill (for example, coding) well enough to pass the examination of the groups that decide whether it gets used. It needs to be "good enough to pass".
I got this wild idea a short while ago, and your comment helped cement it: one of the reasons languages like Lisp are not "successful" probably has something to do with this impressability factor. If the people with the money (and the decision) do not understand the tech, or are not even able to fake that understanding, will they bet their money on it?
> If the people with money (and the decision) do not understand the tech or are not able to even fake that understanding, will they bet their money on it?
LLMs calculate probable text/code, and a lot of it in a very short time.
Probable text/code is not the same as correct/proper text/code.
The result is a huge mass of probable, and only maybe correct, text/code.
That is very dangerous, because it looks correct even when it is not.
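The "probable but not correct" point can be sketched with a toy example. The probability table below is entirely made up for illustration, not real model output; it just shows how picking the most probable continuation yields fluent text that is factually wrong.

```python
# Hypothetical next-token probabilities for one prompt. In this made-up
# training data, the wrong completion is the most frequent one.
next_token_probs = {
    ("the", "capital", "of", "australia", "is"): {
        "Sydney": 0.55,    # most probable, but incorrect
        "Canberra": 0.35,  # the correct answer
        "Melbourne": 0.10,
    },
}

def greedy_complete(prompt):
    """Return the most probable next token, the way greedy decoding does."""
    probs = next_token_probs[tuple(prompt)]
    return max(probs, key=probs.get)

print(greedy_complete(["the", "capital", "of", "australia", "is"]))
# Prints "Sydney": probable, fluent, and wrong.
```

The model optimizes for likelihood, not truth, so the output reads well regardless of whether it is correct.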
It is likely that the cost of software will increase because of the unmanageable mess that this creates.