Almost everyone has shot a video only to find parts of it ruined by motion blur. Now a group of MIT researchers has found a way to recover lost image detail and produce visually clean versions of the parts blurred by movement. The credit goes to an algorithm called the "visual deprojection model", which is based on a convolutional neural network.
When the model is given low-quality footage it has never seen before, it analyzes the blurry regions to infer what motion could have produced the blur in the video. It then synthesizes new frames that combine data from both the sharp and the blurred parts of the footage. For example, if a user films their dog running and rolling on the lawn, the technology can generate a version of that video that clearly reconstructs the pet's movements.
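To make the idea concrete, motion blur can be modeled as a "projection" that averages a short stack of sharp frames along the time axis; the deprojection model is trained to invert that mapping. The sketch below is our own minimal illustration of the projection step (not MIT's code), using a toy moving dot:

```python
import numpy as np

def project(frames: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W) stack of sharp frames into one blurred image
    by averaging along time -- a simple model of motion blur."""
    return frames.mean(axis=0)

# Toy example: a bright dot moving one pixel per frame.
T, H, W = 4, 8, 8
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 4, 2 + t] = 1.0       # dot moves left to right

blurred = project(frames)           # single motion-blurred image
# Each visited pixel now holds 1/T of the dot's energy; a trained network
# is asked to recover the original per-frame positions from this smear.
print(blurred[4, 2:6])              # -> [0.25 0.25 0.25 0.25]
```

The network's job is the hard direction: given only `blurred`, hallucinate a plausible `frames` stack consistent with it, using priors learned from training video.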
During testing by the MIT development team, the model was able to recover 24 frames of a video showing a person's gait and the position of their legs. The researchers are now focused on refining the technology for use in the medical field. They believe the model could be used to convert 2D images, such as X-rays, into 3D images carrying additional information, much like computed tomography but without the extra cost (a 3D CT scan is very expensive).
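The X-ray analogy follows the same logic: a radiograph can be viewed as a 3D volume projected (summed) along the beam axis, so depth information collapses onto a single plane. This toy sketch, again our own assumption-laden illustration rather than the researchers' code, shows why the inverse problem is ambiguous:

```python
import numpy as np

# Toy CT-like volume, shape (depth, H, W).
volume = np.zeros((4, 8, 8))
volume[1, 3, 3] = 1.0               # a dense structure at depth 1
volume[3, 3, 3] = 1.0               # another at depth 3, same (H, W) spot

xray = volume.sum(axis=0)           # 2D "X-ray": depth is collapsed away
# Both structures land on the same pixel, so the depth arrangement cannot
# be read off the image; a deprojection network would have to propose a
# plausible 3D placement from anatomy priors learned during training.
print(xray[3, 3])                   # -> 2.0
```

Many different volumes produce the same `xray`, which is why a learned prior, rather than simple inversion, is needed to go from 2D back to 3D.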
In the future this would benefit not only ordinary patients but also developing nations, where the medical sector needs affordable ways to advance.