DOI: 10.1145/2072298.2071967

Detection and location of near-duplicate video sub-clips by finding dense subgraphs

Published: 28 November 2011

Abstract

Robust and fast near-duplicate video detection is an important task with many potential applications. Most existing systems focus on comparing full-copy videos or partial near-duplicate videos; it is more challenging to find similar content in videos that contain multiple near-duplicate segments at arbitrary locations with varied connections between them. In this paper, we propose a new graph-based method to detect complex near-duplicate video sub-clips. First, we develop a new succinct video descriptor for keyframe matching. A graph is then built to exploit the temporal consistency of the matched keyframes: its nodes are the matched frame pairs, and its edge weights are computed from the temporal alignment and the frame-pair similarities. In this way, validly matched keyframes form a dense subgraph whose nodes are strongly connected, and the graph model preserves the complex connections among sub-clips. Detecting complex near-duplicate sub-clips is thus transformed into the problem of finding all dense subgraphs, which we solve with the graph shift optimization method owing to its robust performance. Experiments are conducted on a dataset with various transformations and complex temporal relations; the results demonstrate the effectiveness and efficiency of the proposed method.
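
As a rough illustration of the pipeline the abstract describes (matched keyframe pairs as graph nodes, edge weights combining pair similarity with temporal-alignment consistency, then dense-subgraph mining), the following Python sketch builds such a graph and peels off strongly connected node groups with plain replicator dynamics, a dominant-set style mode seeker used here as a simpler stand-in for the graph shift optimization of Liu and Yan. The function names, the (q_idx, r_idx, sim) match-tuple format, and the parameters sigma_t and min_size are illustrative assumptions, not the authors' implementation.

import numpy as np

def build_pair_graph(matches, sigma_t=5.0):
    """Affinity matrix over matched keyframe pairs.

    matches: list of (q_idx, r_idx, sim) tuples -- keyframe indices in the
    query and reference videos plus their descriptor similarity.  These
    inputs are hypothetical; any keyframe matcher could produce them.
    """
    n = len(matches)
    A = np.zeros((n, n))
    for a in range(n):
        qa, ra, sa = matches[a]
        for b in range(a + 1, n):
            qb, rb, sb = matches[b]
            # Pairs belonging to the same near-duplicate segment keep a
            # roughly constant temporal offset, so penalize alignment drift.
            drift = abs((qb - qa) - (rb - ra))
            A[a, b] = A[b, a] = np.sqrt(sa * sb) * np.exp(-drift / sigma_t)
    return A

def dense_subgraphs(A, min_size=3, tol=1e-8, iters=500):
    """Iteratively extract dense subgraphs via replicator dynamics.

    Each converged support set corresponds to one candidate near-duplicate
    sub-clip; its vertices are then removed and the search repeats.
    """
    alive = np.ones(A.shape[0], dtype=bool)
    clusters = []
    while alive.sum() >= min_size:
        idx = np.flatnonzero(alive)
        B = A[np.ix_(idx, idx)]
        x = np.full(len(idx), 1.0 / len(idx))   # uniform starting point
        for _ in range(iters):
            Bx = B @ x
            denom = x @ Bx
            if denom < tol:                     # no edges among remaining nodes
                return clusters
            x_new = x * Bx / denom              # replicator-dynamics update
            if np.abs(x_new - x).sum() < tol:
                x = x_new
                break
            x = x_new
        support = idx[x > 1.0 / (10 * len(idx))]   # vertices carrying mass
        if len(support) < min_size:
            break
        clusters.append(support)
        alive[support] = False
    return clusters

Given, for example, matches = [(0, 10, 0.9), (1, 11, 0.85), (2, 12, 0.8), (40, 3, 0.9)], the first three temporally consistent pairs would form one dense subgraph while the stray match is left out; the query and reference indices in each returned support set delimit the location of a detected sub-clip in both videos.
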





    Published In

    MM '11: Proceedings of the 19th ACM international conference on Multimedia
    November 2011
    944 pages
    ISBN:9781450306164
    DOI:10.1145/2072298

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. graph shift
    2. near-duplicate video detection
    3. sub-clip location

    Qualifiers

    • Short-paper

    Conference

    MM '11: ACM Multimedia Conference
    November 28 - December 1, 2011
    Scottsdale, Arizona, USA

    Acceptance Rates

    Overall acceptance rate: 995 of 4,171 submissions (24%)



    Cited By

    • (2019) Multiscale video sequence matching for near-duplicate detection and retrieval. Multimedia Tools and Applications 78(1): 311-336. DOI: 10.1007/s11042-018-5862-3.
    • (2016) Effective Multimodality Fusion Framework for Cross-Media Topic Detection. IEEE Transactions on Circuits and Systems for Video Technology 26(3): 556-569. DOI: 10.1109/TCSVT.2014.2347551.
    • (2015) Video Copy Detection Based on Path Merging and Query Content Prediction. IEEE Transactions on Circuits and Systems for Video Technology 25(10): 1682-1695. DOI: 10.1109/TCSVT.2015.2395771.
    • (2015) Frame-level matching of near duplicate videos based on ternary frame descriptor and iterative refinement. 2015 IEEE International Conference on Image Processing (ICIP): 31-35. DOI: 10.1109/ICIP.2015.7350753.
    • (2015) A convergence theorem for graph shift-type algorithms. Pattern Recognition 48(8): 2751-2760. DOI: 10.1016/j.patcog.2015.02.013.
    • (2015) Fusing cross-media for topic detection by dense keyword groups. Neurocomputing 169: 169-179. DOI: 10.1016/j.neucom.2015.02.083.
    • (2012) An effective multi-clue fusion approach for web video topic detection. Proceedings of the 20th ACM International Conference on Multimedia: 781-784. DOI: 10.1145/2393347.2396311.
