Achieving Super-Resolution

One application of this research breakthrough is overcoming the physical limitations of camera lenses in mobile phones.

Can you take high-resolution, high-quality images on your mobile phone without being limited by the physical lens? Are you able to remaster classic films and video games for today’s 4K world?

One answer to these questions lies in the ability to upscale a low-resolution image to super-resolution by restoring the missing high-frequency components. This is an area that Nanyang Associate Professor Loy Chen Change, from Nanyang Technological University’s School of Computer Science and Engineering, has been focusing on within the field of computer vision.

Unlike conventional interpolation techniques, image super-resolution aims to produce sharper edges and textures for a more pleasing and vivid viewing experience. But it is a very tough nut to crack. As Prof Loy explained, “This is mathematically difficult because there are far too many high-resolution possibilities for a low-resolution pixel.”
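
To see why the inverse mapping is one-to-many, consider a toy example: two very different high-resolution patches can downsample to exactly the same low-resolution pixel. The NumPy sketch below constructs such a pair (the patch values are contrived purely for illustration).

```python
# A minimal sketch of why super-resolution is ill-posed: two very
# different high-resolution 2x2 patches average down to the same
# low-resolution pixel, so the inverse mapping is one-to-many.
import numpy as np

patch_checker = np.array([[0, 255],
                          [255, 0]], dtype=np.float64)   # sharp checkerboard edge
patch_flat = np.array([[127.5, 127.5],
                       [127.5, 127.5]])                   # uniform grey

# Simple 2x downsampling: average each 2x2 block into one pixel.
print(patch_checker.mean())  # 127.5
print(patch_flat.mean())     # also 127.5: identical low-res observation
```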

Prof Loy’s team has been investigating novel deep learning-based algorithms to solve this problem and invented the first deep convolutional network for single image super-resolution in 2014 (Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014).
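
For a flavour of what such a network looks like, here is a minimal PyTorch sketch in the spirit of that ECCV 2014 architecture (widely known as SRCNN): three convolutional layers that refine a bicubic-upscaled input. The layer sizes follow the paper's well-known 9-1-5 configuration, but this is an illustrative approximation rather than the authors' released code.

```python
# A minimal SRCNN-style network: three conv layers mapping a
# bicubic-upscaled low-res image to a sharper one. Illustrative only.
import torch
import torch.nn as nn

class SRCNNSketch(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is the low-res image already upscaled to target size (e.g. bicubic).
        return self.body(x)

# Usage: upscale with interpolation first, then refine with the network.
lr_upscaled = torch.rand(1, 1, 128, 128)  # dummy luminance channel
sr = SRCNNSketch()(lr_upscaled)
print(sr.shape)  # torch.Size([1, 1, 128, 128])
```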

This seminal work has inspired a new wave of technologies that make use of AI upscaling in mobile phone photography. Such technologies, which can be seen in various models of Xiaomi and Vivo mobile phones, typically exploit redundant spatio-temporal information from several photos taken consecutively to generate a single sharp and vivid image. This enables users to capture high-resolution and high-quality images without being limited by the physical lens in their mobile phones.
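
One classical way to exploit that redundancy, sketched below in NumPy, is “shift-and-add”: low-resolution frames captured at slightly different sub-pixel offsets are interleaved onto a finer grid. The frames and shifts here are synthetic; real systems estimate actual camera motion and must handle noise and occlusion.

```python
# A toy shift-and-add sketch of multi-frame super-resolution: several
# low-res frames with known half-pixel shifts are placed onto a 2x finer
# grid. Frame data and shifts are purely illustrative.
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Place each LR frame onto a scale-x finer grid at its sub-pixel offset."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale] += frame
        count[dy::scale, dx::scale] += 1
    return acc / np.maximum(count, 1)  # average where pixels overlap

rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(4)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # offsets on the fine grid
print(shift_and_add(frames, shifts).shape)  # (8, 8)
```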

Image super-resolution can also be applied to re-digitise and restore film stock as well as to remaster classic video games. “The widespread presence of 4K and more recently 8K televisions is driving demand for high-quality versions of existing videos, as existing videos are limited by the resolution they were originally acquired in,” said Prof Loy. “Image super-resolution technology can enhance quality far better than conventional methods such as bilinear or bicubic interpolation, which often give rise to blurry edges and textures.”
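
For reference, the conventional baseline he mentions is essentially a one-liner: the snippet below performs plain bicubic upscaling with Pillow (the file name is hypothetical). It enlarges the image but cannot restore high-frequency detail, which is why the result looks soft.

```python
# Plain bicubic upscaling with Pillow: the conventional baseline that
# super-resolution methods are compared against. Input file is hypothetical.
from PIL import Image

lr = Image.open("low_res.png")
w, h = lr.size
hr_bicubic = lr.resize((w * 4, h * 4), Image.BICUBIC)  # 4x upscale
hr_bicubic.save("upscaled_bicubic.png")
```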

The technology also helps users save considerable bandwidth costs when sending images over a network: low-resolution images are transmitted and then upscaled on the receiving end-user device.
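
For a rough sense of the savings: a 1920 × 1080 frame holds a quarter of the pixels of a 3840 × 2160 (4K) frame, so halving the resolution in each dimension before transmission and upscaling 2× on the receiving device cuts the raw pixel data sent by roughly 75 percent (actual savings depend on compression).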

These developments, however, are “only the tip of the iceberg”. “Given enough time and resources, computer vision could be integrated into most things that the human eye can perceive,” said Prof Loy.

He pointed out that by observing objects and how they move, a human being can grasp an object’s structure and some of its properties, and then generalise that knowledge to unseen samples.

In contrast, modern deep learning systems rely heavily on massive amounts of annotated data to learn effective representations. That can mean hundreds of thousands of hours of manual labelling for each percentage gain in accuracy.

“Can deep models learn meaningful visual representation without labelled data?” asked Prof Loy.

His team is trying to answer this by developing new approaches that let deep models learn from a massive number of images and videos in an unsupervised manner, that is, without explicit annotations.
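
Many label-free approaches work by manufacturing a supervision signal from the data itself. The sketch below shows one well-known example of this idea, a rotation-prediction pretext task; it illustrates the general principle and is not necessarily the specific method the team uses.

```python
# A rotation-prediction pretext task: rotate each unlabeled image by
# 0/90/180/270 degrees and ask a network to predict the rotation, so the
# labels come from the data itself. Illustrative sketch only.
import torch

def rotation_pretext_batch(images: torch.Tensor):
    """Rotate each image by a random multiple of 90 degrees; the rotation id is the label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels  # train any classifier on (rotated, labels)

imgs = torch.rand(8, 3, 32, 32)   # dummy unlabeled images
x, y = rotation_pretext_batch(imgs)
print(x.shape, y.shape)           # torch.Size([8, 3, 32, 32]) torch.Size([8])
```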

As he continues on his research journey, Prof Loy feels that he has been most fortunate to be able to do research in a field that he is truly passionate about and to work with the right people. “I have been very lucky to have met the right people at the right time and at the right place (my collaborators, my postdocs, and my students). I am still working closely with many of these people, and I look forward to an even more exciting research expedition ahead.”
