Google introduced a new machine learning based process called Rapid and Accurate Image Super-Resolution (RAISR) in November 2016. RAISR takes low resolution images and converts them into high resolution images. The upscaling works by predicting the values of missing pixels from their neighbours. During training, the system first makes a smaller version of an image, stretches it back up using traditional interpolation methods, and then compares the stretched result to the original high resolution image. The algorithm learns the differences between the two, which allows it to preserve the underlying structure of the image and build detail on top of it.
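The training loop described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not Google's implementation: RAISR proper learns many small filters bucketed by local gradient features such as edge angle, strength, and coherence, while the sketch below learns a single linear filter by least squares. The function names and the 5x5 patch size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def downscale(img, factor=2):
    """Box-downsample: average each factor x factor block of pixels."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def cheap_upscale(img, factor=2):
    """Bilinear interpolation, standing in for the 'traditional' upscaler."""
    return zoom(img, factor, order=1)

def learn_filter(high_res_images, patch=5):
    """Learn one linear filter mapping cheap-upscale patches to true pixels.

    RAISR itself learns many such filters, indexed by local gradient
    features; a single least-squares filter keeps the sketch short.
    """
    r = patch // 2
    A, b = [], []
    for hi in high_res_images:
        lo = cheap_upscale(downscale(hi))
        h = min(hi.shape[0], lo.shape[0])
        w = min(hi.shape[1], lo.shape[1])
        for y in range(r, h - r, 4):        # stride 4 keeps the system small
            for x in range(r, w - r, 4):
                A.append(lo[y - r:y + r + 1, x - r:x + r + 1].ravel())
                b.append(hi[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return coeffs.reshape(patch, patch)     # the learned correction filter
```

At inference time, the learned filter is simply correlated with the cheaply upscaled image (for example with scipy.ndimage.correlate), so the expensive learning happens once, offline, and the per-image work stays cheap.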
Google has started rolling out RAISR for images shared on Google+. RAISR can run on mobile phones as well, which lets Google conserve bandwidth when transmitting images: the server sends a lower resolution version, and the image is restored to a full resolution copy on the receiving device. Because only one fourth of the pixels are sent over the internet, Google+ has been able to save up to 75 percent of the bandwidth that would otherwise be required.
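A quick back-of-the-envelope calculation shows where those figures come from. Halving an image's width and height quarters its pixel count; the 2048x1536 photo below is a hypothetical example, and real byte savings also depend on how well each version compresses.

```python
# Halving each dimension quarters the pixel count, which is where the
# "up to 75 percent" figure comes from. The photo size is hypothetical.
full_pixels = 2048 * 1536                   # full resolution copy
sent_pixels = (2048 // 2) * (1536 // 2)     # half-size version actually sent
print(f"pixels sent:     {sent_pixels / full_pixels:.0%}")      # 25%
print(f"bandwidth saved: {1 - sent_pixels / full_pixels:.0%}")  # 75%
```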
Transmitting images places heavy demands on bandwidth, which can be a constraint in places where data plans are costly or connections are spotty. In such cases, the experience of browsing photos can be so poor that people may choose not to view images at all. RAISR is just the kind of technology needed to substantially reduce the amount of bandwidth required to browse photos.
The feature is being rolled out to an unspecified subset of Android users. To the end user, the whole process is almost invisible, showing up only as reduced data consumption. RAISR is already being applied to over one billion images per week, and has cut those users' bandwidth consumption by about a third. Google plans to roll out the technology more widely in the coming weeks, and has said it will keep working to further reduce the time and bandwidth needed to transmit images.