An Apple patent application (number 20100110303) for a look-ahead system and method for pan and zoom detection in video sequences has appeared at the US Patent & Trademark Office. It relates generally to the analysis of motion in video sequences and, more particularly, to identifying pan and zoom global motion.
The system and method use motion vectors in a reference coordinate system to identify pans and zooms in video sequences. Identifying pans and zooms enables parameter switching for improved encoding under various video standards (e.g., H.264) and improved retrieval of documentary movies and other video sequences from video databases or other storage devices. The inventors are Adriana Dumitras and Barin G. Haskell.
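As a rough illustration of what "motion vectors in a reference coordinate system" means in practice, the sketch below converts per-block (dx, dy) motion vectors into polar magnitude and orientation. The function name and the use of NumPy are my own assumptions for the example, not details taken from the filing.

```python
import numpy as np

def to_polar(motion_vectors):
    """Convert (dx, dy) block motion vectors to polar form.

    motion_vectors: an N x 2 array of per-block displacements.
    Returns (magnitude, orientation); orientation is in radians,
    measured counter-clockwise from the positive x-axis.
    """
    dx = motion_vectors[:, 0].astype(float)
    dy = motion_vectors[:, 1].astype(float)
    magnitude = np.hypot(dx, dy)
    orientation = np.arctan2(dy, dx)   # range (-pi, pi]
    return magnitude, orientation
```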
Here’s Apple’s background and summary of the invention: “The analysis of motion information in video sequences has typically addressed two largely non-overlapping applications: video retrieval and video coding. In video retrieval systems, the dominant motion, motion trajectories and tempo are computed to identify particular video clips or sequences that are similar in terms of motion characteristics or belong to a distinct class (e.g., commercials).
“In video coding systems, global motion parameters are estimated for global motion compensation and for constructing sprites. In both video retrieval and video coding systems, it is desirable to identify pan and zoom global motion. For video retrieval systems, pan and zoom detection enables classification of video sequences (e.g., documentary movies) for efficient retrieval from video databases. For video coding systems, pan and zoom detection enables the adaptive switching of coding parameters (e.g., the selection of temporal and spatial Direct Modes in H.264).”
“The present invention overcomes the deficiencies of the prior art by providing a look-ahead system and method for pan and zoom detection in video sequences based on motion characteristics.
“One aspect of the present invention includes a method of detecting pan and zoom in a video sequence. The method comprises selecting a set of frames from a video sequence (e.g., by identifying scene cuts), determining a set of motion vectors for each frame in the set of frames, identifying at least two largest regions in each frame in the frame set having motion vectors with substantially similar orientation in a reference coordinate system (e.g., polar coordinates), determining percentages of each frame covered by the at least two largest regions, determining a statistical measure (e.g., variance) of the motion vector orientations in the reference coordinate system for at least one of the two largest regions, and comparing the percentages and statistical measure to threshold values to identify a pan or zoom in the video sequence.
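To make the quoted steps concrete, here is a minimal per-frame sketch of that kind of test. It stands in for the patent's "two largest regions" by grouping blocks into coarse orientation bins rather than by spatial segmentation, uses a circular variance as the statistical measure, and the thresholds are arbitrary placeholders; none of these specifics come from the filing itself.

```python
import numpy as np

def classify_frame(orientations, n_bins=16, pan_coverage=0.6,
                   zoom_coverage=0.3, variance_max=0.05):
    """Illustrative pan/zoom test for one frame.

    orientations: 1-D array of motion-vector orientations (radians),
    one per block, e.g. as returned by to_polar() above.
    """
    n = len(orientations)
    # Group blocks into coarse orientation bins; the two most populated
    # bins stand in for the "two largest regions" of similar orientation.
    bins = np.floor((orientations + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    counts = np.bincount(bins, minlength=n_bins)
    first, second = np.argsort(counts)[::-1][:2]
    coverage1 = counts[first] / n    # fraction of the frame in the largest region
    coverage2 = counts[second] / n   # fraction in the second-largest region

    # Circular variance of the largest region's orientations
    # (0 = perfectly coherent, 1 = uniformly spread).
    theta = orientations[bins == first]
    variance1 = 1.0 - np.abs(np.mean(np.exp(1j * theta)))

    # Pan: one large, directionally coherent region dominates the frame.
    if coverage1 >= pan_coverage and variance1 <= variance_max:
        return "pan"
    # Zoom: motion spreads radially, so no single orientation dominates,
    # yet the two largest regions together cover much of the frame.
    if coverage1 < pan_coverage and min(coverage1, coverage2) >= zoom_coverage:
        return "zoom"
    return "other"
```

In a look-ahead arrangement like the one described, per-frame results of this sort would presumably be aggregated over the selected set of frames before a pan or zoom is declared.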
“Another aspect of the present invention includes a system for detecting pan and zoom sequences in a video sequence. The system comprises: a preprocessor for selecting a set of frames from a video sequence, and a motion analyzer for determining a motion vector for each frame in the set of frames, identifying the two largest regions in each frame having motion vectors with substantially similar orientation in a reference coordinate system, determining percentages of each frame covered by the two largest regions, determining a statistical measure of the motion vector orientations in the reference coordinate system for at least one of the two largest regions, and comparing the percentages and statistical measure to threshold values to identify a pan or zoom in the video sequence.
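The "preprocessor" that selects frames (the filing mentions identifying scene cuts as one example) could be approximated with something as simple as a frame-differencing pass; the grayscale mean-absolute-difference test and the threshold below are placeholder assumptions, not Apple's actual method.

```python
import numpy as np

def select_frames_at_scene_cuts(frames, cut_thresh=30.0):
    """Pick the first frame of the sequence and the first frame
    after each detected scene cut.

    frames: iterable of grayscale frames as 2-D NumPy arrays.
    A cut is flagged when the mean absolute difference between
    consecutive frames exceeds cut_thresh (an arbitrary value).
    """
    selected = []
    prev = None
    for idx, frame in enumerate(frames):
        if prev is None:
            selected.append(idx)          # always keep the first frame
        else:
            diff = np.mean(np.abs(frame.astype(float) - prev.astype(float)))
            if diff > cut_thresh:
                selected.append(idx)      # start of a new scene
        prev = frame
    return selected
```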
“The present invention as defined by the claims herein provides a computationally efficient solution for identifying pans and zooms in video sequences, including but not limited to the enabling of parameter switching for improved encoding in video standards (e.g., H.264) and improved video retrieval of video sequences from databases and other video storage devices.”