That's only one side of the story. It actually is a very accurate contact model. The problem is that it's also computationally expensive, so we reduce the stiffness of objects to make it faster. It's basically a trade-off between accuracy and speed.
In game engines you have speed and it looks good, but then it's impossible to generalize to real-life applications (because the contact is inaccurate).
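A toy sketch of that trade-off (the penalty-spring contact model and all numbers here are illustrative assumptions, not any particular engine's implementation): under explicit integration, a stiffer and therefore more accurate contact forces a smaller stable time step, i.e. more computation per simulated second.

```python
import math

def simulate(k, dt, steps=2000, m=1.0, g=9.81):
    """Drop a point mass onto penalty-spring ground at y = 0.

    Contact is a one-sided spring of stiffness k with critical
    damping: the stiffer the spring, the less the mass penetrates
    the ground (more accurate), but the smaller the time step must
    be for semi-implicit Euler to stay stable.
    """
    y, v = 1.0, 0.0  # start 1 m above the ground, at rest
    for _ in range(steps):
        f = -m * g
        if y < 0:  # in contact: penalty spring + critical damping
            f += -k * y - 2.0 * math.sqrt(k * m) * v
        v += dt * f / m
        y += dt * v
        if not (-100.0 < y < 100.0):  # diverged: step too large for this k
            return float("inf")
    return y

# Soft contact is stable at a big step, settling near the (large,
# inaccurate) static penetration -m*g/k of about -0.0098 m:
print(simulate(k=1e3, dt=0.01))
# The stiff, accurate contact diverges at the same step size,
# forcing a much smaller dt and hence far more computation:
print(simulate(k=1e7, dt=0.01))
```

The soft spring buys speed by tolerating visible penetration; getting realistic (stiff) contact back means shrinking `dt`, which is exactly the cost the comment above is describing.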
As a 10/10er, I think the fact that people tend to score less goes to show that:
1. The photos are low-res enough to hide the most ridiculous artifacts produced by neural nets
2. The machine-generated images are trying to emulate the likes of van Gogh, not the likes of Leonardo (again hiding the extent of NNs' complete inability to understand what they're doing)
3. Most people simply neither paint nor appreciate visual art, so this is not as powerful as the Turing test! (One point of the Turing test is that a human easily passes it, because all humans speak; they don't all paint.)
It's much simpler than that. This is NOT a Turing test, in that the computer-generated images have been selected beforehand from the best results. What users are grading here is not the quality of the algorithm that generated the pictures; it's the quality of the choice made by the human who selected these specific pictures among all the examples of computer-generated art.
9/10. The low res made it harder to tell which one was computer generated, but it was still pretty clear. Now if they could do it full size, I'd start to be interested.
This is great work, kidzik. Looking forward to seeing more of it. The hi-res results are quite good; even if they land in the uncanny valley, they can be quite a success for DeepArt lovers.
"Stiefvater [...] described it to me as 'black magic,' and doesn’t even fully understand how it works. He downloaded it from Github, where a computer programmer he’s never met posted it."
Would you also mind retracting your claim below that we have "misappropriated" your technology? Perhaps we're having a language problem - "misappropriate" is a crime in English. If you honestly believe a crime has been committed, I would strongly urge you to discuss it with us (somewhere other than this public forum). We've tried a few times now to reach out to your team, and have not heard back.
I am sorry, I didn't mean any legal accusation - it just feels a little odd that the description of the app sounds as if you invented the technology without ever referring to the original work.
Yeah - I think you're right - that's a fair complaint. In the original description I had a link to the paper - but somehow that got lost in the versioning. I'm traveling right now, give me a couple days to fix that.
The "Use style" wording didn't immediately signal to me that the images were clickable. I wondered why the gallery images all had the same description when they were clearly different styles!
Perhaps "Use this style" would be more expressive?
When you do click through, it shows the same image (with face) to be used as the style image, leaving me wondering whether a user-provided style image should just be a texture (newsprint, vegetables etc.) or an already styled image.
The "Buy a painting" button is deceptive, since it's a canvas print, not a painting. I was going to suggest teaming up with instapainting, but I see they beat you to it [1]. I guess you could always go to the source [2].
The resulting modal dialog on clicking "Buy a painting" barely fits in the browser on a 15" retina MBP, and even then the cookie info bar clips the button, so I suspect you may be losing sales from people who can't figure out how to get to the button, especially on smaller screens (you'd be surprised!).
Clicking through that dialog brings me to an ebay page, but not for the image I selected to buy. There are a range of other images shown, but it doesn't seem to be possible to choose one, never mind the one I wanted. I have no idea what I'm buying, or how to buy what I want.
Finally I can't "Buy", I can only bid, as it's an auction, not a "Buy it now".
Oh, and please tell Michael to change his name to Andy. :) [3]
Thank you very much! As these are recent changes which we haven't tested live, your comments are very useful and we are actually implementing them now.
You should add an explanation for the waiting time / queue thing. This was unexpected and also not really clear. At first I thought I could only get it by paying...
Good point, thanks! We actually had one, but we removed it in the current version (it seemed natural to us, but obviously it is not) - we will bring it back.
We simply have limited resources for servers, but still want to provide the highest quality possible in the free version. That's why there is a queue, but we are working hard to make the implementation faster and more efficient :)
The success message (when there is one - there should be one instead of only the submissions queue) should definitely include an explanation of what happens next and how I can influence it. Will I get an email when my pictures are ready? Can I schedule more pictures?
I think you will have more success showing the # of people in the queue and a "jump the queue" feature.
Is there a way to build an "I want it NOW, even if it isn't that great" option (and get the great version later via email)?
The public/private thing could also use some explanation "Show this on the website" vs. "Only give the result to me".
"Delete" would probably be better named "Remove", since you remove the submission from the queue and don't really delete anything that was already created.
Also, "Submit a new image" could let me choose some style images... but I bet that's already on your backlog anyway.
It definitely all requires more explanation; we take it for granted, but for users it is not clear what is happening.
Unfortunately "I want it now" is not possible at the moment, as the computations are really heavy and there are a lot of people who want to use it. But we have some ideas for that, too.
For now our focus is quality, regardless of the cost. We increased the resolution and we are trying to make things faster. Later we will figure out how to make it more sustainable.
On the waiting time point. I was putting together a similar service to yours (though you got there first). We managed to get our processing time down to under 50s for a 300x300 image. How do you cope with such long times? Or, how long is the waiting time?
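For what it's worth, the back-of-envelope wait arithmetic is simple. Everything below (per-job time, worker count, queue length) is a made-up illustration, not the app's actual numbers:

```python
def queue_wait_estimate(jobs_ahead, secs_per_job=50.0, workers=1):
    """Rough expected wait (seconds) before a new job starts:
    the work queued ahead of it, spread over identical parallel
    workers, assuming a simple FIFO queue."""
    return jobs_ahead * secs_per_job / workers

# e.g. 120 queued jobs at ~50 s each, on 4 hypothetical GPUs:
print(queue_wait_estimate(120, secs_per_job=50.0, workers=4) / 60)  # -> 25.0 minutes
```

At ~50 s per 300x300 image on a single worker, even a modest queue turns into hours of wait, which is presumably why the thread above keeps circling back to queue-position displays and paid "jump the queue" options.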
Given that we are discussing it in the comments on the app that just misappropriated the algorithm, I unfortunately have to refuse your request for the moment, sorry. We love open research and open source, but there is always some risk involved.