Alignment with the rest of the system, when sqlite is used as a component in a system that mainly speaks JSON.
Performance, when records are complex and usually not accessed.
Ease, avoiding/postponing table design decisions.
Flexibility, for one of many corner-cases (since sqlite is a very broad tool).
Incremental enhancement, e.g. when starting with sqlite as a replacement for an ndjson file and incrementally taking advantage of transactions and indexes on fields [1,2,3].
For example, several of these could apply when doing structured logging to a sqlite database.
[1]: https://www.sqlite.org/expridx.html
[2]: https://www.sqlite.org/gencol.html
[3]: https://antonz.org/json-virtual-columns/
See also: https://www.sqlite.org/json1.html
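The incremental path above can be sketched with Python's built-in sqlite3 module, assuming a SQLite build with the JSON1 functions (standard in modern builds); the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real log would use a file
conn.execute("CREATE TABLE log (entry TEXT)")  # one JSON document per row

# Stage 1: ndjson-style, just append JSON blobs.
conn.execute("INSERT INTO log VALUES (?)",
             ('{"level": "error", "msg": "disk full"}',))
conn.execute("INSERT INTO log VALUES (?)",
             ('{"level": "info", "msg": "started"}',))

# Stage 2: query into the JSON with json_extract.
rows = conn.execute(
    "SELECT json_extract(entry, '$.msg') FROM log "
    "WHERE json_extract(entry, '$.level') = 'error'"
).fetchall()
print(rows)  # [('disk full',)]

# Stage 3: add an expression index (per [1]) once queries get hot;
# a generated column (per [2]) would be the next step.
conn.execute("CREATE INDEX log_level ON log (json_extract(entry, '$.level'))")
```

No table design decision was needed up front; the index and any generated columns can be bolted on later without rewriting the rows.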
I got good use out of typeguard's run-time type checking [0] when I recently invoked it via its pytest plugin [2]: for all code exercised by the test suite, you get a failing test whenever an actual type differs from an annotated type.
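The underlying idea can be sketched with the standard library alone; this is a toy illustration of what such run-time checking does (it is not typeguard's implementation, and it only handles simple non-generic return types):

```python
import typing


def check_return(func):
    """Toy decorator: verify a function's return value against its
    annotated return type (simple, non-generic types only)."""
    hints = typing.get_type_hints(func)
    expected = hints.get("return")

    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if expected is not None and not isinstance(result, expected):
            raise TypeError(
                f"{func.__name__} returned {type(result).__name__}, "
                f"annotated as {expected.__name__}"
            )
        return result

    return wrapper


@check_return
def halve(n: int) -> int:
    return n / 2  # bug: / always returns float, contradicting the annotation


# halve(4) raises TypeError at run time, even though the code "works"
```

typeguard does this far more thoroughly (generics, arguments, yield types), and the pytest plugin applies it to everything imported during the test run.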
Firefox supports custom search engines. The most bang-for-the-buck custom search engine must be https://duckduckgo.com/?q=%s with the keyword being the letter d. Then you get all those 13,000+ bangs without having to configure individual custom search engines. E.g. type "d !drive term" in the URL bar. And "d !w hacker news" sends you directly to https://en.wikipedia.org/wiki/Hacker_News
Firefox keyword search has one little-known killer feature: you can combine it with data URIs and JavaScript to run small "command line snippets" stored in your bookmarks from your browser bar.
To get started, create a keyword search from any form (like the search bar on duckduckgo.com) and edit the URL of the entry in the bookmark manager to point to
data:text/html,<script>alert("%s")</script>
instead.
What you can do with this is (fortunately) limited by cross-origin restrictions, but there are some useful applications. For example, I use this snippet
The reason magnetic field strength falls off as 1/r^3 is interesting: the Biot–Savart law says that the field falls off as 1/r^2 from a magnetic source, but in reality sources tend to be better approximated by magnetic dipoles than by magnetic monopoles. A "north pole" is always accompanied by a "south pole", and at a distance there are "interaction effects" such that part of the field strength is "canceled out".
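The cancellation can be made concrete for the on-axis field of two opposite poles of strength $\pm q_m$ separated by a small distance $d$, each contributing a $1/r^2$ term of opposite sign (a sketch; real dipoles are current loops, but the scaling is the same):

```latex
B(r) \;\propto\; \frac{q_m}{(r - d/2)^2} - \frac{q_m}{(r + d/2)^2}
      \;\approx\; \frac{2\, q_m d}{r^3}
      \qquad (r \gg d)
```

The leading $1/r^2$ terms cancel exactly, and what survives is the first-order difference between the two poles' distances, which is where the extra power of $1/r$ comes from.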
Git uses SHA-1 (a hardened variant since 2017) and is now doing per-repo upgrades to SHA-256 [0]. Lots of repos are presumably still on SHA-1 (and users on older versions of git).
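For concreteness, a git object id is the hash of a short header plus the content, which is easy to reproduce with hashlib (plain sha1 here; git's hardened SHA-1 produces the same ids for non-colliding inputs, and the same header scheme is used for the SHA-256 object format):

```python
import hashlib


def git_blob_id(data: bytes, algo: str = "sha1") -> str:
    # Git hashes a header "blob <size>\0" followed by the raw content.
    header = f"blob {len(data)}\0".encode()
    return hashlib.new(algo, header + data).hexdigest()


print(git_blob_id(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a, same as `git hash-object`
```

Every commit, tree, and blob is addressed this way, which is why the choice of hash function is baked so deeply into the repository format.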
As of 2020, chosen-prefix attacks against SHA-1 are now practical. [verbatim from 1] But I don't think second preimage attacks are practical yet.
Linus Torvalds argued in 2006, basically, that it's irrelevant whether git's hash function is second preimage resistant. Selective quoting:
> remember that the git model is that you should primarily trust only your _own_ repository [2]
> [a malicious] collision is entirely a non-issue: you'll get a "bad" repository that is different from what the attacker intended, but since you'll never actually use his colliding object, it's _literally_ no different from the attacker just not having found a collision at all [2]
All that is just to say: git originally chose its hash for the above-mentioned "git model", and thus didn't fully prioritize second preimage resistance. For your suggested search engine, depending on how the database is collected, you might not be able to trust "your own repository" (if it's crowdsourced, I could register another codebase with the same hash as Linux). A second-preimage-resistant hash function would be a requirement for the suggested use case.
Compact summaries are useful when revisiting something that was learnt before. Such a document might be more useful for mathematics than for most subjects, since many people have studied maths but stopped using it, and what they were taught is generally still true and relevant.
The doc would be at least 20 % more useful to me if the pdf had a table of contents. That should be easy to add, assuming it was written with latex. Opinion: when writing a lengthy latex document, the extra 0.5 % of work required to add automated pdf metadata (table of contents, clickable references) has outsized usability effects.
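For a LaTeX document, that 0.5 % is roughly the following in the preamble (hyperref generates the PDF bookmark sidebar from the existing sectioning and makes \ref/\cite clickable; the options shown are illustrative):

```latex
\usepackage{hyperref}
\hypersetup{
  bookmarksnumbered=true,  % section numbers in the PDF sidebar
  hidelinks,               % clickable, but without colored boxes
  pdftitle={Your Title},   % placeholder metadata
  pdfauthor={Your Name}
}
```

Loading it near the end of the preamble is the usual advice, since hyperref patches many other packages.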
I stumbled upon typos:
* "Basel problem formula": pi should be squared.
* The "more general" statement related to Bayes theorem lacks a right parenthesis.
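For reference, the Basel problem formula the first typo refers to is:

```latex
\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}
```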
And PEP 8 doesn't mention sorting imports. It even contains examples that aren't alphabetically sorted. It does mention grouping, though.
Since Python imports can have side effects, the order can matter. But to the extent that it doesn't break anything, alphabetically sorted groups seem deterministic and readable.
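The determinism point is almost trivial to demonstrate: sorting a group of import lines is a pure string sort, so any two people (or tools) produce the same result. A toy illustration with made-up module names:

```python
# An unsorted stdlib import group, as it might appear in a diff.
import_block = """import sys
import os
import json"""

# Plain lexicographic sort: deterministic, so merges never fight over order.
sorted_imports = "\n".join(sorted(import_block.splitlines()))
print(sorted_imports)
# import json
# import os
# import sys
```

Tools like isort automate exactly this (plus the stdlib/third-party/local grouping that PEP 8 does mention), which sidesteps the side-effect caveat by letting you exclude order-sensitive imports explicitly.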