This is why I got an accountant to set up the aspects of my side-hustle business that I'm too stupid^H^H^H^H^H^H lazy^H^H^H^H busy to sort out. Yes, I could have done this all myself, but could I have done it the correct way from the outset? Not a chance. I would have screwed up some detail.
It's silly: lots of people do their own taxes, but they forget that, between what they normally charge per hour and how slowly they do the taxes, it isn't worth it. No one would hire them to do taxes at those rates.
This of course only works if you'd otherwise spend that time on billable hours.
If you do it during downtime or in your free time, you of course save money.
Side note: all this off-topic pedantry about octopi vs octopuses is an old discussion, already resolved. It's clouding the discussion of the full implications of what it actually means to treat octopuses (ha!) as sentient.
A twenty-something wasting 5 hours is wasting the future benefits of that time. The opportunity cost is horrendous: multiples of any later hourly rate, because of how early it's wasted.
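A back-of-envelope sketch of that compounding claim. The $50/hr rate, 7% annual growth, and 40-year horizon are invented assumptions for illustration, not from the comment:

```python
# Rough future value of billing those 5 hours now and letting the
# proceeds compound, vs. the time simply being gone.
def future_value(hours, hourly_rate=50.0, annual_growth=0.07, years=40):
    """Value in `years` of billing `hours` today at `hourly_rate`,
    compounded at `annual_growth` per year."""
    return hours * hourly_rate * (1 + annual_growth) ** years

fv = future_value(5)
multiple = fv / (5 * 50.0)  # roughly 15x the nominal hourly value
```

Under those (made-up) numbers, the 5 hours wasted at age twenty cost roughly fifteen times their nominal billing value by retirement.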
Addiction often comes down to poor social connections and relationships. That's why isolation is so dangerous. Most addictions thrive in people who have poor interconnections with others. This isn't the only factor, but it's a big one. People operate better in tribes or groups of connected people.
The real joke is that most Italians look at US coffee with a sense of either vague or specific disappointment. It doesn't matter what fancy name you come up with.
On the other hand you could try asking an Italian barista for a Latte and see what you end up with.
I can confirm that, in a non-bank financial institution, the date formats involved in an Excel->CSV->XML->certain (shit) magic application pipeline were a considerable pain point :|
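One classic source of that pain is Excel's serial-date representation leaking into CSV exports. A minimal sketch of the conversion (the serial number used in the example is arbitrary):

```python
from datetime import datetime, timedelta

# Excel's 1900 date system stores dates as serial numbers, with day 1 =
# 1900-01-01. For Lotus 1-2-3 compatibility it wrongly treats 1900 as a
# leap year (serial 60 is the phantom 1900-02-29), so for serials >= 61
# the effective epoch is 1899-12-30.
def excel_serial_to_date(serial: int) -> datetime:
    # Only valid for serials after the phantom leap day (serial > 60).
    return datetime(1899, 12, 30) + timedelta(days=serial)

d = excel_serial_to_date(44197)  # 2021-01-01 in Excel's 1900 system
```

When a CSV dump contains raw serials like `44197` in some columns and locale-formatted strings like `01/02/2021` in others, the ambiguity compounds quickly.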
I can see the benefits of this collection of tools within an all-in-one monolith. Ease of deployment is a big one. I can also see the costs. As a stack it's probably better in some ways than how a lot of other businesses operate, and worse in others. There's probably a lot both ways.
The mainframe mindset might be a factor here as well. The giant mainframe where all the magic happens is still a thing to behold, and this is definitely part of banking's history and present. Mainframes are beasts and are still far from any kind of obsolescence. A monolithic Bank Python with a standardised set of libraries etc. would slot right into that mindset and way of thinking.
The part about programming languages frequently not having tables is interesting. The closest, as mentioned, is the hash, but you lose so much in that abstraction, e.g. the relational aspects. The counterargument then becomes the obvious one: why aren't you using a database library, or in a pinch, SQLite? Rightly so. Why would you add relational tables to Python rather than have a generic Python database spec or a collection of database connector libraries? Databases are separate and large projects in themselves.
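To make the "in a pinch, SQLite" option concrete, here's a minimal sketch using the standard-library sqlite3 module. The table and column names are invented for illustration:

```python
import sqlite3

# An in-memory relational table with a join and aggregation -- exactly
# the kind of thing a plain hash/dict can't express directly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE desks  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE trades (id INTEGER PRIMARY KEY, desk_id INTEGER, notional REAL);
    INSERT INTO desks VALUES (1, 'rates'), (2, 'fx');
    INSERT INTO trades VALUES (10, 1, 1e6), (11, 1, 2e6), (12, 2, 5e5);
""")
rows = conn.execute("""
    SELECT d.name, SUM(t.notional)
    FROM trades t JOIN desks d ON t.desk_id = d.id
    GROUP BY d.name ORDER BY d.name
""").fetchall()
# rows now holds per-desk notional totals as (name, sum) tuples.
```

No server, no deployment: the whole "database" lives inside the process, which is the pragmatic middle ground between a hash and a full RDBMS.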
I'd still be deeply disturbed if they were running some old Python 2.5 or similar. Just saying. That would be a source of pity.
> The part about programming languages frequently not having tables is interesting. The closest, as mentioned, is the hash, but you lose so much in that abstraction, e.g. the relational aspects. The counterargument then becomes the obvious one: why aren't you using a database library, or in a pinch, SQLite? Rightly so. Why would you add relational tables to Python rather than have a generic Python database spec or a collection of database connector libraries? Databases are separate and large projects in themselves.
The separate datastore is the problem to be solved here: databases, especially relational databases, are extremely poorly integrated into programming languages, and this makes it really painful to develop anything that uses them. You can just about use them as a place to dump serialized data to and from (not suitable for large systems, because they're not properly distributed), but if you actually want to operate on data, you need it to be in memory where you're running the code, and you want it tightly integrated with your language and IDE and so on.
(It's not even the main benefit, but just as an example of that kind of integration, when you're querying large datasets Minerva works a bit like Hadoop in that it will ship your code to where the data is and run it there)
Funny thing is, databases were tightly integrated into programming languages all the way back in the '80s - that's exactly what dBase was, and why it became so popular. FoxBASE/FoxPro, Clipper, Paradox etc. were all similar in that respect.
And yes, it made for some very powerful high-level tooling. I actually learned to code on FoxPro for DOS, and the efficiency with which you could crank out even fairly complicated line-of-business data-centric apps was amazing, and is not something I've seen in any tech stack since.
> FoxBASE/FoxPro, Clipper, Paradox etc were all similar in that respect.
> the efficiency with which you could crank out even fairly complicated line-of-business data-centric apps was amazing, and is not something I've seen in any tech stack since.
Did you ever get to try Delphi? Those "line-of-business data-centric apps" were exactly what it was all about.
And I'm not quite sure, but I think and hope Free Pascal / Lazarus is close to that in ease and power.
> The separate datastore is the problem to be solved here - databases, especially relational databases, are extremely poorly integrated into programming languages and this makes it really painful to develop anything that uses them.
Hence "Active Record" ORMs like Rails and Django being highly successful. They functionally embed the RDBMS into the language/app (almost literally if using SQlite), which is a huge boon for developer productivity...
...but also a significant footgun, because it means the database is now effectively owned by the Active Record ORM and its (SWE) team, and not by some app-agnostic data team.
Want to reuse that juicy clean data managed by Django? Write a REST API driven by the app; don't try to access the data directly over SQL, although it may be tempting.
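For a feel of the pattern, here's a toy Active Record sketch over the standard-library sqlite3 module. The class and column names are invented, and real Rails/Django ORMs do vastly more (migrations, relations, query builders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

class User:
    """Active Record in miniature: the object owns its row, and
    saving the object writes straight to the database."""

    def __init__(self, name, id=None):
        self.id, self.name = id, name

    def save(self):
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (self.name,))
        self.id = cur.lastrowid  # the row's identity lives on the object
        return self

    @staticmethod
    def find(user_id):
        row = conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(row[1], id=row[0]) if row else None

u = User("ada").save()
```

The footgun is visible even here: the `users` schema exists only as a side effect of this class, so anything else touching that table has to go through it.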
> Hence "Active Record" ORMs like Rails and Django being highly successful. They functionally embed the RDBMS into the language/app (almost literally if using SQlite), which is a huge boon for developer productivity...
Right, those are a step in the right direction, but still a lot more cumbersome than properly integrating your datastore with your application.
The first-blush conversion from Excel to this ecosystem only needs lookup tables. Excel has some static database I/O, but people who only know Excel use it as data input for lookup tables.
The Python results of that first conversion need to be tested against Excel, so it'll need identical lookup tables.
> The part about programming languages frequently not having tables is interesting. The closest, as mentioned, is the hash, but you lose so much in that abstraction, e.g. the relational aspects. The counterargument then becomes the obvious one: why aren't you using a database library, or in a pinch, SQLite? Rightly so. Why would you add relational tables to Python rather than have a generic Python database spec or a collection of database connector libraries? Databases are separate and large projects in themselves.
This is covered in the article, in the distinction between "code-first" and "data-first". Databases mean that you leave the interaction with data to a third party, and the only thing you do is send commands and receive results. This is very different from having all the data in your program and starting from that. I'm not sure if "code-first" is the right word for it. Perhaps another way to put it: when data is the most important thing, you don't want to encapsulate it in a "database object"; you want it right here.
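A rough sketch of that "data is right here" style, with the table as a plain in-memory value you work on with ordinary language constructs rather than by sending commands to a separate system. The field names are invented for illustration:

```python
# The "table" is just a value in the program -- no connection, no query
# protocol, no separate process to talk to.
positions = [
    {"book": "A", "qty": 100},
    {"book": "A", "qty": -40},
    {"book": "B", "qty": 25},
]

# Aggregate it with a plain loop, the same way you'd handle any other data.
net_by_book = {}
for row in positions:
    net_by_book[row["book"]] = net_by_book.get(row["book"], 0) + row["qty"]
```

The trade-off, of course, is that you give up everything the third party provided: indexing, constraints, durability, and concurrent access.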
Hmmm. I'm not keen on giving my DNA profile to China. This seems like an unwise thing to do.
from Newsweek: "China is also getting the DNA of Americans by buying American companies. China's BGI Group may now have the largest database on Americans after acquiring Complete Genomics in 2013. This year, GNC, which holds customer profiles, was sold to a Chinese entity, Harbin Pharmaceutical Group.
Another Chinese technique is to offer low-cost "large-scale genetic sequencing" to ancestry and other businesses. Ancestry firm 23andMe's chief security officer says China is looking at the firm for its genetic data. There were, as of last year, 23 Chinese-associated companies accredited to perform genetic testing of Americans.
While Beijing is hoovering up American genetic data, it is prohibiting the transfer of Chinese data to foreigners. The State Council announced new restrictions in May of last year, and officials are stepping up efforts to punish genetic data transfers."
I wouldn't think this is a good thing, especially given the one-way nature of DNA data transfer to China. The fact that they are outlawing the export of such data on Chinese citizens shows the value they put on their own.
The only conclusion one could draw, were this to be true, is that they're looking for a property to exploit that would kill very few Chinese and very many Americans.
CDE? That's an acronym I haven't heard in a long time. Still being developed? Very nice.
Brings back happy memories of running oddball screen resolutions on the X Window System. 1000x800, I think. Definitely not 1024; the monitor couldn't do 1024. But it could do 1000. The 800 number is hazy but in the ballpark.
One thing I did notice was the fonts. Those pixels.
Classic MacOS screen modes always tried to deliver square pixels, in contrast to IBM/PC-compatible screen modes where pixels were rectangular. Apple also tried to stick to a consistent 72dpi for a long time.
The monitor I had wasn't capable of 1024x768. I got that custom resolution using a manually defined modeline determined through experimentation. It wasn't a simple VESA style display setting like the later 15" monitor I got. A flat CRT was quite a sight.
OK, fair enough. I don't think that's one I saw, then.
I do recall netbooks with 1024×600 screens, which were a bit of a pain. E.g. in Ubuntu Netbook Edition, LibreOffice dialog boxes wouldn't fit on the screen -- they extended off the bottom. Meaning you couldn't reach the "OK" or "Cancel" buttons. >_<
Another fun exercise was configuring an old Dell PowerEdge server I had, connected to a very old mono VGA monitor. *Really* VGA: it maxed out at 640×480 -- but sitting on top of the server, it fit under my table and it didn't draw much power.
Every allegedly text-only server Linux distro I tried -- I remember Ubuntu Server & Debian, but probably more -- had an 800×600 graphical splash screen, which sent my monitor into conniptions so that I then couldn't complete the setup process.
This was a problem with Windows Server 2008, too. It assumed you had unaccelerated SVGA graphics and couldn't and wouldn't do actual VGA mode. I had to identify my motherboard GPU, find a Vista driver for it, install it, accept a compatibility warning, and then I could forcibly choose the monitor type and pick IBM PS/2 VGA. Then Windows believed me and displayed a 640×480 mode that my screen could show.