OpenAI has developed a computer vision system called CLIP that recognizes objects in images and can also read the text in them, but that reading ability leaves it vulnerable to being fooled by simple text labels.
“CLIP’s multimodal neurons generalize across the literal and the iconic, which may be a double-edged sword. Through a series of carefully-constructed experiments, we demonstrate that we can exploit this reductive behavior to fool the model into making absurd classifications. We have observed that the excitations of the neurons in CLIP are often controllable by its response to images of text, providing a simple vector of attacking the model.”
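For a rough sense of the behaviour being exploited, here is a minimal zero-shot classification sketch using OpenAI's public CLIP package. The image file and candidate labels are placeholders for illustration, not the carefully-constructed experiments from the paper - but this is exactly the kind of image-vs-caption scoring that a sticky note reading "iPod" can flip.

```python
# Minimal sketch of CLIP zero-shot classification, the behaviour a
# "typographic attack" exploits. Assumes the openai/CLIP package
# (pip install git+https://github.com/openai/CLIP.git) and a local,
# hypothetical image file of an apple with a handwritten label on it.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["an apple", "an iPod", "a pizza", "a library"]
image = preprocess(Image.open("apple_with_label.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # Similarity scores between the image and each caption, softmaxed.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2%}")
```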
Computer vision software can now detect motorists throwing litter out of their windows and match the footage with the car’s number plate to issue an automatic £90 fine. The first trial of the new system will begin in Maidstone, Kent, UK.
“We develop noninvasive technologies to identify and monitor bears, facilitating their conservation.”
AI autocompletes Windows 95 startup tune
Here’s what happens when the Windows 95 startup sound is fed to OpenAI’s Jukebox model to generate a continuation. To train Jukebox on a range of music genres, OpenAI researchers crawled the web to build a new dataset of 1.2 million songs paired with corresponding lyrics and metadata.
Chants of Fuck the Algorithm
Students gathered outside the Department for Education in London to protest Ofqual’s use of an algorithm to determine this year’s A-level exam results, around 40% of which were downgraded, causing many students to lose their university places. Video credit: Huck
The artist Shardcore has used a Machine Learning technique called the First Order Motion Model to animate celebrities singing the Prince song “Kiss”.
Janelle Shane has run various ‘transfer learning’ experiments with RunwayML, where you train an existing model on a new set of images. Here, the popular StyleGAN v2 ‘faces’ model is trained for a while on photos of her cats.
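If you’re wondering what “training an existing model on a new set of images” actually involves, here is a very rough, generic PyTorch sketch of transfer learning - fine-tuning a pretrained classifier on a folder of new photos. It is not RunwayML’s or StyleGAN2’s actual training code, and the folder name, epoch count and hyperparameters are made up; it just shows the general idea of continuing to train a model that someone else already trained.

```python
# Generic transfer-learning sketch: start from a model pretrained on one
# dataset and keep training it on a new folder of images.
# Assumes a recent torchvision and a hypothetical "new_images/" folder
# arranged one sub-folder per class.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("new_images/", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Load a pretrained ResNet and swap its final layer for the new classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # "trained for a while"
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```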
Coughing Simulations
Various research groups are using software to simulate how droplets from a cough spread in public places. The first clip is by Aalto University, and the airplane one is by researchers at Purdue University. Are these funny or scary?
AI Memes by Imgflip
This AI meme generator autocompletes popular meme templates using Machine Learning. The Neural Network model was trained on memes created by Imgflip users. The generator has an easy-to-use interface (pictured top) where you can choose a template and add optional keywords to influence the generated text. Here is a result I got using the ‘Distracted Boyfriend’ meme, along with some of the most popular results from the AI Memes stream.
A drawing bot continuously doodles on an online canvas. The bot is also there for conversation - as a spectator, you can write messages on the canvas and the bot responds (powered by an ML text generator called GPT-2).
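For a sense of how a GPT-2-powered reply might be wired up, here is a minimal sketch using the Hugging Face transformers library. The prompt format and sampling settings are invented for illustration and are not the drawing bot’s actual code.

```python
# Minimal sketch of a GPT-2 reply loop using the transformers library.
# The "Visitor:/Bot:" prompt format is a made-up convention for this example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def bot_reply(message: str) -> str:
    prompt = f"Visitor: {message}\nBot:"
    out = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
    continuation = out[0]["generated_text"][len(prompt):]
    # Keep only the first line of the continuation as the bot's reply.
    return continuation.split("\n")[0].strip()

print(bot_reply("What are you doodling today?"))
```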
Human-to-Animal translation
A new machine learning model called StarGAN v2 can do image-to-image translation - allowing for experiments in turning humans into animals. Image source here.
Yelp trained a neural net to eliminate bugs from its app’s code and it simply deleted everything - or so the company claims in its App Store release notes. It could be yet another case of the “algorithms ate my homework” excuse for when humans mess up.
No One Is Lonely (owo uwu owo uwu)
Google Translate found poetry.
The Igbo language is tonal, and apparently particularly laden with semantic ambiguity, which could be one reason Translate appears to be overdetermining meaning here.
What crowd surveillance software does for fun in its spare time.
Google Translate’s reading of a Spanish menu - featuring “Puppy Warm” and the “Hamburger of Fear Complete”. A holiday snap by Shardcore.