strangebiology
beetledrink

image
image

currently obsessed with this taxidermied great white that was left to gather dust in an abandoned display in a park in australia. something about it is so simultaneously eerie and melancholy to me

strangebiology

WOW! I’m seeing some speculation in the notes that this is a mislabeled Damien Hirst piece. Definitely a reasonable suspicion, but here’s the Reddit thread with one of the photos in it, and an actual video of the urban explorer discovering it. Scrub to around 18:00 for the shark. Wild.

Related: You might like this “abandoned” shark museum that you can visit on Google Maps. At the time Google photographed it 4 years ago, the website listed it as “temporarily closed,” which I think is more likely than it being truly abandoned.

Source: beetledrink
fruitsoftheweb

The creepiest images generated by BigGAN

lewisandquark

Writing to you from deep inside the Uncanny Valley, I present to you some hand-selected images generated by an algorithm called BigGAN. This algorithm looks at example images of a bunch of different objects and then tries its best to figure out how to generate more of them. Because the algorithm comes in two parts - one that generates the images, and the other that tries to tell the difference between the generated images and the real thing - it’s known as a Generative Adversarial Network (GAN). Each of the images below is from the big set of generated images that the BigGAN authors released along with their paper. I merely came across them, was freaked out by them, and decided to present them here.

As you look at these, remember: the GAN thought these looked a lot like the real thing.
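
(An aside for anyone who’d like to see that two-part setup in code rather than prose: below is a tiny sketch in PyTorch. It is emphatically not BigGAN - no images, no convolutions - and the 2-D “real data” distribution, network sizes, and training schedule are all illustrative choices of mine, just to show a generator and a discriminator being pitted against each other.)

```python
# Toy GAN sketch (my own illustration, not BigGAN): a generator learns to
# produce 2-D points that a discriminator can't tell apart from "real" ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_real(batch_size):
    # Stand-in for "real photos": points from a Gaussian centered at (2, -1).
    return torch.randn(batch_size, 2) * 0.5 + torch.tensor([2.0, -1.0])

# Generator: turns random noise z into fake "data" points.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: outputs a logit saying "real" (>0) or "fake" (<0).
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_real(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator update: label real points 1 and generated points 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call the fakes "real".
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated points should cluster near (2, -1).
with torch.no_grad():
    print(generator(torch.randn(5, 8)))
```

BigGAN works on the same adversarial principle, just scaled up enormously and applied to images instead of 2-D points.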

“Microphone”

Note lack of actual microphone. BigGAN probably saw lots of pictures of humans holding microphones and doesn’t understand that the human is not the microphone. It does understand how to do stage lighting though.

image

“Pan pipes”

Those are not teeth but, apparently, the pipes. No explanation for all the other stuff though.

image

“Teddy Bear”

I regret to tell you they’re almost certainly cursed. At least they’re fuzzy.

image

“Stopwatch”

Its letters and numbers are in some strange unearthly script. It has trouble counting things like watch hands. But its textures are spot on. All of this, unearthly GAN-script included, is very characteristic of GAN images.

image

“Nipple”

I have found it. The worst BigGAN category. As far as I can tell, these were all from baby bottles, but that doesn’t make it better. It seems to have come up with a hybrid bottle-human head, which it does from time to time. (I saw an oboe/human hybrid once, and the authors report that during training it crossed a dog and a tennis ball into a dogball. It also produced catflowers and hendogs.)

image

These results may not LOOK photorealistic, but they’re really impressive for a GAN that can do so many categories, especially in high resolution. Jer Thorpe also calculated that generating them took enough power to run the entire city of Cleveland for 6 days. Hopefully that will get a lot better soon as technology improves, because the aesthetic is often strikingly beautiful. I would love to see a movie with GAN graphics.

More GAN images in my earlier blog post on BigGAN and in these Twitter threads.

And to see even more (and optionally get bonus material every time I post), sign up here.

Source: lewisandquark
fruitsoftheweb

Imaginary worlds dreamed by BigGAN

lewisandquark

These are some of the most amazing generated images I’ve ever seen. Introducing BigGAN, a neural network that generates high-resolution, sometimes photorealistic, imitations of photos it’s seen. None of the images below are real - they’re all generated by BigGAN.

image

The BigGAN paper is still in review so we don’t know who the authors are, but as part of the review process a preprint and some data were posted online. It’s been causing a buzz in the machine learning community. For generated images, their 512x512 pixel resolution is high, and they scored impressively well on a standard metric known as the Inception Score. They were able to scale up to huge processing power (512 TPUv3s), and they’ve also introduced some strategies that help them achieve both photorealism and variety. (They also told us what *didn’t* work, which was nice of them.) Some of the images are so good that the researchers had to check the original ImageNet dataset to make sure the model hadn’t simply copied one of its training images - it hadn’t.
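
One of those strategies is a latent-space “truncation” knob for trading variety against fidelity: sample the noise vector from a narrower slice of its distribution and you get safer, more photorealistic images; widen it and you get more variety but more mistakes. Here’s a rough NumPy sketch of that idea - the threshold values and the resampling loop are my own illustrative assumptions, not code from the preprint.

```python
# Rough sketch of latent-space truncation (details are my assumption):
# resample any noise component whose magnitude exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)

def truncated_z(batch_size, dim, threshold):
    """Draw z ~ N(0, 1) and resample out-of-range components until every
    |z_i| <= threshold. Small thresholds -> safer, more realistic samples;
    large thresholds -> more variety, more weirdness."""
    z = rng.standard_normal((batch_size, dim))
    while True:
        out_of_range = np.abs(z) > threshold
        if not out_of_range.any():
            return z
        z[out_of_range] = rng.standard_normal(out_of_range.sum())

z_safe = truncated_z(4, 128, threshold=0.4)   # conservative samples
z_wild = truncated_z(4, 128, threshold=2.0)   # adventurous samples
print(np.abs(z_safe).max(), np.abs(z_wild).max())
```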

Now, the images above were selected for the paper because they’re especially impressive. BigGAN does well on common objects like dogs and simple landscapes where the pose is pretty consistent, and less well on rarer, more-varied things like crowds. But the researchers also posted a huge set of example BigGAN images and some of the less photorealistic ones are the most interesting.

image

I’m pretty sure this is how clocks look in my dreams. BigGAN’s writing generally looks like this, maybe an attempt to reconcile the variety of alphabets and characters in its dataset. And Generative Adversarial Networks (and BigGAN is no exception) have trouble counting things. So clocks end up with too many hands, spiders and frogs end up with too many eyes and legs, and the occasional train has two ends.

image

And its humans… the problem is that we’re really attuned to look for things that are slightly “off” in the faces and bodies of other humans. Even though BigGAN did a comparatively “good job” with these, we are so deep in the uncanny valley that the effect is utterly distressing.

image

So let’s quickly scroll past BigGAN’s humans and look at some of its other generated images, many of which I find strangely, gloriously beautiful.

Its landscapes and cityscapes, for example, often follow rules of composition and lighting that it learned from the dataset, and the result is both familiar and deeply weird.

image
image

Its attempts to reproduce human devices (washing machines? furnaces?) often result in an aesthetic I find very compelling. I would totally watch a movie that looked like this.

image

It even manages to imitate macro-like soft focus. I don’t know what these tiny objects are, and they’re possibly haunted, but I want them.

image

Even the most ordinary of objects become interesting and otherworldly. These are a shopping cart, a spiderweb, and socks.

image

Some of these pictures are definitely beautiful, or haunting, or weirdly appealing. Is this art? BigGAN isn’t creating these with any sort of intent - it’s just imitating the data it sees. And although some artists curate their own datasets so that they can produce GANs with carefully designed artistic results, BigGAN’s training dataset was simply ImageNet, a huge all-purpose utilitarian dataset used to train all kinds of image-handling algorithms.

But the human endeavor of going through BigGAN’s output and looking for compelling images, or collecting them to tell a story or send a message - like I’ve done here - that’s definitely an artistic act. You could illustrate a story this way, or make a hauntingly beautiful movie set. It all depends on the dataset you collect, and the outputs you choose. And that, I think, is where algorithms like BigGAN are going to change human art - not by replacing human artists, but by becoming a powerful new collaborative tool.
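
(If you want to try that kind of curation yourself on a big folder of generated images, a throwaway script is plenty. The one below is my own sketch, not anything from the BigGAN release, and the biggan_samples folder name is just a placeholder: it pulls a random handful of files and tiles them into one contact sheet you can skim.)

```python
# My own curation helper (not from the BigGAN release): tile a random sample
# of images from a folder into a single contact sheet for quick skimming.
import random
from pathlib import Path
from PIL import Image

def contact_sheet(folder, n=16, cols=4, thumb=256, out="sheet.png", seed=None):
    files = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    if not files:
        raise FileNotFoundError(f"no images found in {folder}")
    picks = random.Random(seed).sample(files, min(n, len(files)))
    rows = (len(picks) + cols - 1) // cols
    sheet = Image.new("RGB", (cols * thumb, rows * thumb), "black")
    for i, path in enumerate(picks):
        tile = Image.open(path).convert("RGB").resize((thumb, thumb))
        sheet.paste(tile, ((i % cols) * thumb, (i // cols) * thumb))
    sheet.save(out)
    return picks  # so you can note which files were the keepers

# Example: skim 16 random samples at a time, changing the seed for new batches.
# contact_sheet("biggan_samples", seed=1)
```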

The BigGAN authors have posted over 1GB of these images, and it’s so fun to go through them. I’ve collected a few more of my favorites - you can see them (and optionally get bonus material every time I post) by entering your email here.

Source: lewisandquark