DOI: 10.1145/641007.641065
Article

Annotations as multiple perspectives of video content

Published: 01 December 2002

Abstract

This paper describes a video annotation tool based on a new, flexible model that provides several perspectives on the same video content. The model was designed to support multiple views over the same video data, so that users with different requirements can work with the most appropriate interface. These views, called video-lenses, each highlight a specific aspect of the video content being annotated. Annotations are made through a timeline-based interface with multiple tracks, where each track corresponds to a given video-lens. The MPEG-7 standard is used to store and exchange the annotation information. The annotation tool (VAnnotator) is being developed within Vizard, an ambitious project that aims to define a new paradigm for video navigation, annotation, editing and retrieval. The Vizard project includes users from both the production/archiving area and the consumer electronics area, who help to define and validate the annotation requirements and functionality.
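
To make the one-track-per-lens mapping concrete, the sketch below models video-lenses as timeline tracks of time-anchored annotations and exports them to an MPEG-7-flavoured XML description. This is an illustrative assumption, not the VAnnotator implementation: the class names (VideoLens, TrackEntry) are hypothetical, the paper does not state an implementation language (Python is used here only for readability), and the element names only loosely follow MPEG-7 Multimedia Description Schemes rather than producing schema-valid output.

# Minimal sketch (assumption, not the paper's VAnnotator code): each video-lens
# is one perspective on the same video, rendered as one timeline track whose
# entries are annotations anchored to time intervals.
from dataclasses import dataclass, field
from typing import List
import xml.etree.ElementTree as ET


@dataclass
class TrackEntry:
    """One annotation anchored to a time interval of the video (seconds)."""
    start: float
    duration: float
    text: str


@dataclass
class VideoLens:
    """A perspective (e.g. 'dialogue', 'camera work') shown as one timeline track."""
    name: str
    entries: List[TrackEntry] = field(default_factory=list)


def to_mpeg7_like_xml(video_uri: str, lenses: List[VideoLens]) -> str:
    """Serialize the lenses to a simplified, MPEG-7-flavoured XML string.

    Element names (VideoSegment, MediaTime, FreeTextAnnotation) echo MPEG-7
    MDS conventions but this output is NOT schema-valid MPEG-7; it only
    illustrates the mapping track -> temporal decomposition -> segments.
    """
    root = ET.Element("Mpeg7")
    video = ET.SubElement(root, "Video")
    ET.SubElement(video, "MediaLocator").text = video_uri
    for lens in lenses:
        # One decomposition per video-lens, i.e. one timeline track.
        decomposition = ET.SubElement(video, "TemporalDecomposition", id=lens.name)
        for entry in lens.entries:
            segment = ET.SubElement(decomposition, "VideoSegment")
            time = ET.SubElement(segment, "MediaTime")
            ET.SubElement(time, "MediaTimePoint").text = f"{entry.start:.2f}s"
            ET.SubElement(time, "MediaDuration").text = f"{entry.duration:.2f}s"
            annotation = ET.SubElement(segment, "TextAnnotation")
            ET.SubElement(annotation, "FreeTextAnnotation").text = entry.text
    return ET.tostring(root, encoding="unicode")


if __name__ == "__main__":
    lenses = [
        VideoLens("dialogue", [TrackEntry(12.0, 4.5, "Interview question")]),
        VideoLens("camera", [TrackEntry(0.0, 30.0, "Static wide shot")]),
    ]
    print(to_mpeg7_like_xml("file://clip.mpg", lenses))

A real exporter would emit schema-valid MPEG-7 (ISO/IEC 15938-5) with proper media time formats and a Description Definition Language profile; the sketch only shows how several perspectives over the same video can coexist as parallel tracks in one description.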




Information

Published In

MULTIMEDIA '02: Proceedings of the tenth ACM international conference on Multimedia
December 2002
683 pages
ISBN: 158113620X
DOI: 10.1145/641007
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 December 2002


Author Tags

  1. MPEG-7
  2. authoring paradigms
  3. timeline model
  4. video annotation
  5. video-lens

Qualifiers

  • Article

Conference

MM02: ACM Multimedia 2002
December 1 - 6, 2002
Juan-les-Pins, France

Acceptance Rates

MULTIMEDIA '02 paper acceptance rate: 46 of 330 submissions, 14%
Overall acceptance rate: 995 of 4,171 submissions, 24%



Article Metrics

  • Downloads (last 12 months): 4
  • Downloads (last 6 weeks): 0
Reflects downloads up to 03 Sep 2024

Cited By

  • (2021) From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-10. DOI: 10.1145/3411764.3445458. Online publication date: 6-May-2021
  • (2014) CASAM. Multimedia Tools and Applications 70:2, 1277-1308. DOI: 10.1007/s11042-012-1255-1. Online publication date: 1-May-2014
  • (2014) A Tool for Integrating Log and Video Data for Exploratory Analysis and Model Generation. 12th International Conference on Intelligent Tutoring Systems - Volume 8474, 69-74. DOI: 10.1007/978-3-319-07221-0_9. Online publication date: 5-Jun-2014
  • (2012) Semantic annotation and retrieval of documentary media objects. The Electronic Library 30:5, 721-747. DOI: 10.1108/02640471211275756. Online publication date: 28-Sep-2012
  • (2012) Discrimination of media moments and media intervals. Multimedia Tools and Applications 61:3, 675-696. DOI: 10.1007/s11042-011-0846-6. Online publication date: 1-Dec-2012
  • (2010) Using temporal video annotation as a navigational aid for video browsing. Adjunct Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, 445-446. DOI: 10.1145/1866218.1866263. Online publication date: 3-Oct-2010
  • (2010) Distributed discrimination of media moments and media intervals. Proceedings of the 2010 ACM Symposium on Applied Computing, 1929-1935. DOI: 10.1145/1774088.1774496. Online publication date: 22-Mar-2010
  • (2010) Supporting Collaborative Workflows of Digital Multimedia Annotation. Proceedings of COOP 2010, 79-99. DOI: 10.1007/978-1-84996-211-7_6. Online publication date: 28-Apr-2010
  • (2009) A collective discrimination of moments and segments as an exploitation of the Watch-and-Comment concept. Proceedings of the XV Brazilian Symposium on Multimedia and the Web, 1-8. DOI: 10.1145/1858477.1858496. Online publication date: 5-Oct-2009
  • (2008) EmoPlayer. Interacting with Computers 20:1, 17-28. DOI: 10.5555/1327552.1327934. Online publication date: 1-Jan-2008
