"There is a five-step process that has been used, with minor variations, to sell every major product and policy of the last century. It works for breakfast. It works for engagement rings. It works for regime change wars.
1. Simplify. Reduce a complicated reality to one sentence. No qualifiers.
2. Find the emotional lever - and make it visual. The best simple stories aren’t sentences. They’re images. A cocktail on the beach. A vial held up to the light. A mushroom cloud over a city. The image arrives before the critical mind can engage.
3. Route through authority. Doctors, institutions, heads of state. The claim doesn’t need to be true. It needs to come from someone trusted.
4. Make questioning it feel wrong. Frame the story so that scepticism looks like moral failure.
5. Act before verification. By the time anyone checks the facts, the action is irreversible.
This process was first documented in 1928 by a man named Edward Bernays, in a book titled _Propaganda_. He later rebranded the concept as “public relations,” which was itself a masterclass in the discipline he was naming."
Fine-tuning does not make a model any smaller. It can make a smaller model more effective at a specific task, but a larger model with the same architecture, fine-tuned on the same dataset, will always be more capable in a domain as general as programming or software design. Of course, as architectures and related tooling improve, the smallest model that is "good enough" will continue to get smaller.
I have no affiliation with the website, but it is pretty neat if you are learning LLM internals.
It explains:
Tokenization, Embedding, Attention, Loss & Gradient, Training, Inference and comparison to "Real GPT"
Pretty nifty, even if you are not interested in the Korean language.
By "modified" this person of course means that they swapped out the list of X0,000 names from English to Korean names. That is seemingly the only change.
The attached website is a fully AI-generated "visualization" based on the original blog post, with little added.
Genuine question: why is it so difficult, even with 21st-century technology, to accurately detect landmines for the purpose of destroying them after a war?
In order to be effective, landmines need to be very close to the surface, so they should be easy enough to detect. Researchers in Japan have successfully detected sub-surface bamboo shoots using low-power radar, since those shoots are more valuable than ones that have already emerged above the surface.
For a safe and fast detection mechanism, a UAV flying close to the ground could be deployed to scan the suspected minefield.
Something is missing and doesn't add up here; perhaps someone can help explain the situation?
>Accurate and low latency attitude estimation is critical for stable flight of the flapping-wing platform. We implemented a quaternion-based Madgwick filter (80) on the ESP32-S3 (dual-core 240 MHz with floating-point unit) to fuse IMU measurements in real time. This approach was selected for its low computational cost, fast convergence, and robustness under dynamic motion, outperforming complementary filters in accuracy and avoiding the high complexity and matrix operations of extended Kalman filters.
Bravo, quaternions are the (only) way to go; the sooner UAV/UAS system designers realize this, the better.
You can chain normalized quaternions to combine or diff transformations. For example, you can take the "difference" between the desired attitude quaternion and the predicted attitude quaternion (by multiplying one by the other's conjugate, not by literal subtraction) to get an attitude error quaternion, which you can then feed to control algorithms designed to drive that error to zero. This is even more important when multiple frames of reference are involved, as quaternions can be used to transform between them.
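A minimal sketch of that error-quaternion computation (function names are mine; quaternions are in [w, x, y, z] order, and for unit quaternions the conjugate is the inverse):

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate; equals the inverse for a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_error(q_desired, q_predicted):
    """Error quaternion rotating desired onto predicted attitude.
    Identity [1, 0, 0, 0] means zero error."""
    return q_mul(q_conj(q_desired), q_predicted)

# Example: desired attitude is level (identity), predicted is a
# 90-degree yaw about z; the error quaternion encodes that 90-degree
# rotation, and its vector part (roughly axis * sin(angle/2)) is what
# a controller would drive to zero.
q_des = np.array([1.0, 0.0, 0.0, 0.0])
q_pred = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
err = attitude_error(q_des, q_pred)
```

Chaining works the same way: composing a transform with the conjugate of an equal transform yields the identity quaternion, which is what makes frame-to-frame conversions compose cleanly.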