Speaking of corruption, I found this[1] video a while back. On mobile the video shows up fine, but on desktop it's just a gray screen, although the thumbnails work.
I can confirm that the mp4 I downloaded is not corrupt (at least not detectably by my desktop video player). I'm not sure what the problem is, but the file itself isn't corrupt. (Except for the filename, but that's because JavaScript is brain-damaged when it comes to strings.)
Unless what you were encoding wasn't meant to be consumed as a bytestream. If you encoded a resilient optical format (like UPC or QR codes), transcoding shouldn't be a deal breaker. Obviously it's not optimized for backing up a hard disk, though.
Interesting - say you have video at 60fps and encode one QR code per frame: that would be highly resistant to transcoding errors, and very easy to extract the information from again given the standard format.
Wouldn't be terribly efficient, though. Wikipedia says the max capacity is 2953 bytes per QR code [1]. So 2953 bytes/frame * 60 fps ≈ 177 KB/s of data. I guess that's what you get for encoding it in a visual (machine-scannable) format instead of a datastream directly.
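For scale, the arithmetic above works out like this (the 2953-byte figure is Wikipedia's maximum for a version 40 QR code in binary mode at the lowest error-correction level; a transcode-resistant scheme would probably want level H, which cuts capacity sharply):

```python
# Back-of-the-envelope throughput for the QR-code-per-frame scheme.
# 2953 bytes is the max for a version 40 QR code in binary mode at
# the lowest error-correction level; surviving transcoding would
# likely require level H, which drops capacity to 1273 bytes.
MAX_QR_BYTES = 2953
FPS = 60

throughput = MAX_QR_BYTES * FPS        # bytes per second
print(throughput)                      # 177180, i.e. ~177 KB/s

# Time to store 1 GiB at that rate, ignoring all overhead:
hours = (1 << 30) / throughput / 3600
print(round(hours, 2))                 # ~1.68 hours of video
```

So even in the best case, a single gigabyte costs well over an hour and a half of 60fps footage.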
You have sound too. There must be an audio equivalent with a similar level of durability to a QR code. I don't know if YouTube ever drops frames during compression. Perhaps using their 4K support would help fit a bit more data.
Maybe using a Fourier/wavelet/whatever transform would be the way to go, just like in digital watermarking techniques. Both high capacity and robustness would seem easier to achieve that way.
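A toy version of that idea: hide a bit by forcing the sign of one mid-frequency DCT coefficient, which is roughly what the simplest transform-domain watermarks do. The coefficient index and strength here are arbitrary illustrative choices; a real scheme would work on 2-D blocks of pixels with a perceptual model, not a 1-D toy signal.

```python
# Toy transform-domain embedding: hide one bit in a signal by setting
# a mid-frequency DCT coefficient, then recover it from its sign.
# Pure-Python DCT-II for clarity only -- not remotely fast.
import math

def dct(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

COEFF = 7          # mid-frequency slot (illustrative choice)
STRENGTH = 4.0     # bigger = more robust, more visible distortion

def embed(signal, bit):
    X = dct(signal)
    X[COEFF] = STRENGTH if bit else -STRENGTH
    return idct(X)

def extract(signal):
    return 1 if dct(signal)[COEFF] > 0 else 0
```

The appeal for this thread is that lossy codecs also work in transform domains, so a mark planted in low/mid frequencies tends to survive compression far better than raw pixel patterns.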
Interesting. I guess we could experiment with how many bits can be packed into a video frame. Is there a guarantee that YT doesn't change the frame rate?
Another idea is YT as a code repo: essentially you make a movie that shows code files. On retrieval, OCR can be applied to transform the movie back into the code as text.
I know that YouTube used to support only up to 30fps video, but IIRC they now support 60fps. This became a thing because people making videos of themselves playing the newest generation of consoles (PS4 / XBOne) want to upload in high quality, and the consoles now do 1080p at 60fps.
If this is a concern for people (recording at 60fps to upload for 60fps), I doubt that Google would downgrade the framerate except for maybe the lower quality versions of the video (does 60fps really matter for 240p video?).
What they meant was taking arbitrary data and turning it into a valid video file that plays back. For instance, you could read each bit off as audio (zero, one, zero, zero...). The quest is to find the most efficient, yet resilient way to do so.
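A minimal sketch of that bits-as-audio idea, as binary FSK: one sine tone per bit, with two well-separated frequencies so a lossy audio codec is unlikely to blur them together. The frequencies, bit duration, and the brute-force correlation decoder are all illustrative choices, nothing YouTube-specific.

```python
# Binary FSK sketch: encode each bit as a short sine tone, decode by
# checking which tone correlates better with each bit-length window.
import math

SAMPLE_RATE = 44100
BIT_SECONDS = 0.05              # 20 bits/s -- slow but very robust
FREQ = {0: 800.0, 1: 1600.0}    # widely spaced, well inside codec passbands

def encode_bits(bits):
    """Return raw float samples for a bit sequence."""
    n = int(SAMPLE_RATE * BIT_SECONDS)
    samples = []
    for b in bits:
        f = FREQ[b]
        samples.extend(math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

def decode_bits(samples):
    """Classify each window by which tone's energy dominates."""
    n = int(SAMPLE_RATE * BIT_SECONDS)
    out = []
    for start in range(0, len(samples) - n + 1, n):
        window = samples[start:start + n]
        scores = {}
        for b, f in FREQ.items():
            # Correlate against both quadratures so phase shifts
            # (which codecs and resampling introduce) don't matter.
            c = sum(s * math.cos(2 * math.pi * f * i / SAMPLE_RATE)
                    for i, s in enumerate(window))
            q = sum(s * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                    for i, s in enumerate(window))
            scores[b] = c * c + q * q
        out.append(max(scores, key=scores.get))
    return out
```

At 20 bits/s this is comically inefficient, but that's the trade-off the thread is circling: the more redundancy you spend, the better the data survives transcoding.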
They said they would like some (completely new) encoding technique that would get transcoded without any data loss. So my point was that such a _new_ encoding would be rejected by YT at the very first step, before transcoding even happens.
No, they mean new encoding within the video and audio. A watermark is encoded in video, even though it's just visual data. Encoding can mean different things at different levels.