Interactive Exploration of Journalistic Video Footage Through Multimodal Semantic Matching

Sarah Ibrahimi, Shuo Chen, Devanshu Arya, Arthur Câmara, Yunlu Chen, Tanja Crijns, Maurits van der Goes, Thomas Mensink, Emiel van Miltenburg, Daan Odijk, William Thong, Jiaojiao Zhao, Pascal Mettes

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    Abstract

    This demo presents a system for journalists to explore video footage for broadcasts. Daily news broadcasts contain multiple news items, each consisting of many video shots, and searching for relevant footage is a labor-intensive task. Without requiring annotated video shots, our system extracts semantics from footage and automatically matches these semantics to query terms from the journalist. The journalist can then indicate which aspects of the query term should be emphasized, e.g., its title or its thematic meaning. The goal of this system is to support journalists in their search process by encouraging interaction and exploration with the system.
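
    The abstract describes matching extracted footage semantics against query terms, with the journalist weighting aspects such as the title or thematic meaning. As a rough illustration only (the paper's actual method, embeddings, and aspect names are not given here), such aspect-weighted semantic matching could be sketched as a weighted sum of cosine similarities between query and shot embeddings:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def rank_shots(query_aspects, shot_embeddings, weights):
        """Rank shots by a weighted sum of per-aspect similarities.

        query_aspects: {aspect_name: query embedding}
        shot_embeddings: {shot_id: {aspect_name: shot embedding}}
        weights: {aspect_name: emphasis set by the journalist}
        """
        scores = {
            shot_id: sum(
                weights[name] * cosine(query_aspects[name], aspects[name])
                for name in weights
            )
            for shot_id, aspects in shot_embeddings.items()
        }
        return sorted(scores, key=scores.get, reverse=True)

    # Toy 2-D embeddings; "title" and "theme" are hypothetical aspect names.
    query = {"title": [1.0, 0.0], "theme": [0.0, 1.0]}
    shots = {
        "shot_a": {"title": [1.0, 0.0], "theme": [0.0, 1.0]},
        "shot_b": {"title": [0.0, 1.0], "theme": [1.0, 0.0]},
    }
    ranking = rank_shots(query, shots, {"title": 0.7, "theme": 0.3})
    ```

    Shifting the weights toward "title" or "theme" reorders the results, which is the kind of interactive emphasis the abstract mentions.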
    Original language: English
    Title of host publication: Proceedings of the 27th ACM International Conference on Multimedia
    Place of publication: New York, NY, USA
    Publisher: ACM
    Pages: 2196-2198
    Number of pages: 3
    ISBN (Print): 9781450368896
    DOIs
    Publication status: Published - 2019

    Publication series

    Name: MM '19
    Publisher: ACM

    Keywords

    • exploration
    • matching
    • multimodal
    • semantics
    • video
