Google is using AI to compress images better than JPEG
Small is beautiful, as the old saying goes, and nowhere is that more true than in media files. Compressed images are considerably easier to transmit and store than uncompressed ones are, and now Google is using neural networks to beat JPEG at the compression game.
Google began by taking a random sample of 6 million 1,280×720 images on the web. It then broke those down into nonoverlapping 32×32 tiles and zeroed in on the 100 tiles with the worst compression ratios. The goal, essentially, was to focus training on the hardest-to-compress data, on the reasoning that a model that handles those patches well should have an easier time with everything else.
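Here is a minimal sketch of what that tiling-and-selection step could look like. It is not the team's actual pipeline: the zlib-based compressibility score and the helper names are illustrative assumptions standing in for whatever codec the researchers used to rank tiles.

import zlib
import numpy as np

def tile_image(img: np.ndarray, size: int = 32):
    """Split an HxWx3 uint8 image into nonoverlapping size x size tiles."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - h % size, size):
        for x in range(0, w - w % size, size):
            tiles.append(img[y:y + size, x:x + size])
    return tiles

def compression_ratio(tile: np.ndarray) -> float:
    """Raw bytes divided by compressed bytes; lower means harder to compress."""
    raw = tile.tobytes()
    return len(raw) / len(zlib.compress(raw, 9))

# Rank the tiles of one 1,280x720 image and keep the hardest-to-compress 100.
img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)  # stand-in image
tiles = tile_image(img)
hardest = sorted(tiles, key=compression_ratio)[:100]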
The researchers then used TensorFlow, the machine-learning system Google open-sourced last year, to train a set of experimental neural network architectures. They trained the networks for one million steps and then collected a series of technical metrics to determine which models produced the best compression results.
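As a rough illustration of that training step, here is a hedged sketch in TensorFlow. The tiny convolutional autoencoder, its layer sizes, and the synthetic patches are all assumptions made for brevity; the paper's actual models are more elaborate architectures trained far longer.

import tensorflow as tf

# Toy autoencoder: squeeze 32x32x3 patches through a small bottleneck,
# then reconstruct them. Purely illustrative, not the paper's design.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")  # reconstruction loss

# `patches` stands in for the hard-to-compress 32x32 tiles, scaled to [0, 1].
patches = tf.random.uniform((1024, 32, 32, 3))
model.fit(patches, patches, epochs=1, batch_size=32)  # the paper: ~1M steps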
In the end, their models outperformed the JPEG compression standard on average. The next challenge, the researchers said, will be to beat compression methods derived from video codecs on large images, because “they employ tricks such as reusing patches that were already decoded.” WebP, which was derived from the VP8 video codec, is one example of such a method.
The researchers did note, however, that it’s not always easy to define a winner when it comes to compression performance, because technical metrics don’t always agree with human perception.
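To make that concrete, the snippet below computes two widely used objective metrics, PSNR and SSIM, using TensorFlow's built-in image ops. Which metric you optimize can change which reconstruction “wins,” and neither is guaranteed to match what a human viewer prefers; the random test images here are, of course, stand-ins.

import tensorflow as tf

original = tf.random.uniform((1, 64, 64, 3))
# Two hypothetical reconstructions with slightly different noise.
recon_a = tf.clip_by_value(original + tf.random.normal(original.shape, stddev=0.05), 0.0, 1.0)
recon_b = tf.clip_by_value(original + tf.random.normal(original.shape, stddev=0.03), 0.0, 1.0)

for name, recon in [("A", recon_a), ("B", recon_b)]:
    psnr = tf.image.psnr(original, recon, max_val=1.0)   # pixel-level fidelity
    ssim = tf.image.ssim(original, recon, max_val=1.0)   # structural similarity
    print(f"recon {name}: PSNR={float(psnr[0]):.2f} dB, SSIM={float(ssim[0]):.4f}")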
A paper describing the Google team’s work was published last week.
Source: InfoWorld Big Data