every researcher who believes in their idea shouts (to grant organizations) that their thing is the best. but gathering actual data to back that up is difficult.
usually research is super inefficient; academia is full of people who are ... true believers. (sometimes only in their own greatness, and that leads to fraud.) that passion is amazing, but it's not really conducive to figuring out the best way to set up a high-throughput process for generating data.
in many cases it would require them to stop most of what they do and let a specialist team build that pipeline for them. (but that runs into cost problems, which immediately leads us back to the grant organizations allocating resources.)
and just an anecdote: I have a friend who worked at a brain research group (they implanted electrodes into rodents, put them into mazes, and then checked whether they dreamed about the maze) and ... it was cool, but IMHO that public money was mostly wasted.
docker got popular because it had better DX (better tooling); it was like a super lightweight VM (and initially people really wanted to put init and SSH into containers).
easy but powerful: it's not just packaging, it's also a very basic deployment system (docker ps), and that better DX allowed a relatively foolproof cross-platform develop-deploy loop.
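a minimal sketch of that packaging half of the loop (base image, file names, and the app itself are all made up for illustration):

```dockerfile
# hypothetical app: everything it needs travels inside the image
FROM python:3.12-slim
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```

the loop itself is then roughly `docker build -t myapp . && docker run -d --name myapp myapp`, with `docker ps` acting as the "very basic deployment system": one command to see what's running, on any machine that has the daemon.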
It would be nice to have some kind of "forever patch" mode on these git forges, where my fork (which, let's say, is a one-line change) gets rebased on top of the original repo periodically.
You can ask an LLM to create a github action for that. The action can fail if the rebase fails and you can either fix it yourself or ask an LLM to do it for you.
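a sketch of such an action, assuming the fork's default branch is `main`; the upstream URL, branch names, and bot identity are all placeholders:

```yaml
# hypothetical workflow: periodically rebase this fork onto upstream
name: sync-fork
on:
  schedule:
    - cron: "0 4 * * 1"   # every Monday at 04:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  rebase:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history is needed for rebasing
      - name: rebase onto upstream
        run: |
          git config user.name  "fork-sync-bot"
          git config user.email "bot@example.invalid"
          git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git
          git fetch upstream
          git rebase upstream/main
          git push --force-with-lease origin HEAD:main
```

on a conflict the `git rebase` step exits non-zero, the run shows as failed, and that failure is the signal to resolve it yourself (or hand it to an LLM).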
(and thanks for calling attention to the interesting part of the code, I haven't even checked the snippets, I assumed it's not really interesting compared to the prose [poetry?])
The machine cannot get authorship, but just as images created by humans with Photoshop and all kinds of machinery are still copyrighted to the human creator - unless they explicitly set up circumstances where the process of creation happens completely without them - code/software produced by a machine instructed by a human should get copyright (either original or derivative).
Unless, that is, the human is too far removed from the output. (And how far is far enough probably very much depends on the circumstances; unless/until case law or Congress gives us some unifying criteria, it's going to be up to how the judge and the jury feel.)
..
For example, someone set up a system where their dog ends up prompting some AI to make video games. This might be the closest to the case of that monkey-selfie photo.
Though there the court ruled only that PETA (as "next friend" of the monkey) could not sue the photographer, because the monkey cannot be a copyright holder; very importantly, it did not rule on the photographer's authorship. (And thus the Wikimedia metadata stating that the image is in the public domain is simply their opinion.)
on the difficulty of generating reliable data: for example, see how many problems this paper from 2014 had: https://pubpeer.com/publications/A32D7989007655CBF8D9DB2A250...
also see how fiendishly difficult it was initially to create transgenic mouse embryos: https://www.astralcodexten.com/i/167092138/prerequisites-dex...