Enhanced Movie Content Similarity Based on Textual,
Auditory and Visual Information
We examine the ability of low-level multimodal features to capture movie
similarity, in the context of a content-based movie recommendation approach. In
particular, we demonstrate the extraction of multimodal representation models
of movies, based on textual information from subtitles, as well as cues from
the audio and visual channels. With regard to the textual domain, we focus on
topic modeling of movies based on their subtitles, in order to extract topics
that discriminate between movies. Regarding the visual domain,
we focus on the extraction of semantically useful features that model camera
movements, colors and faces, while for the audio domain we adopt simple
classification aggregates based on pretrained models. The three domains are
combined with static metadata (e.g. directors, actors) to show that the
content-based movie similarity procedure can be enhanced with low-level
multimodal information. To our knowledge, this is the first approach that
utilizes a wide range of features from all involved modalities, in order to
enhance the performance of content similarity estimation compared to
metadata-based approaches.
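The sketch below illustrates the general idea of the fusion step rather than the authors' actual pipeline: subtitle texts are turned into topic distributions with a standard bag-of-words plus LDA setup, while the audio, visual and metadata vectors are placeholders standing in for the pretrained-model aggregates, camera/color/face features and cast/crew information described above. The movie names, texts, feature dimensions and modality weights are all hypothetical.

```python
# A minimal sketch (not the paper's implementation) of combining per-modality
# movie similarities: LDA topics from subtitles plus placeholder vectors for
# the other modalities, fused as a weighted sum of cosine-similarity matrices.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical subtitle texts, one string per movie.
subtitles = [
    "the ship sails at dawn captain we must reach the island",
    "the detective finds the missing witness in the old warehouse",
    "the crew abandons the sinking ship near the island",
]
n_movies = len(subtitles)

# Textual modality: topic distributions over the subtitles.
counts = CountVectorizer(stop_words="english").fit_transform(subtitles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_vectors = lda.fit_transform(counts)          # shape: (n_movies, n_topics)

# Placeholder vectors for the remaining modalities; in the paper these would
# come from pretrained audio classifiers, camera/color/face features, and
# static metadata such as directors and actors.
rng = np.random.default_rng(0)
audio_vectors = rng.random((n_movies, 8))
visual_vectors = rng.random((n_movies, 8))
metadata_vectors = np.eye(n_movies)                 # e.g. one-hot cast/crew sets

# Fuse modalities with assumed weights; the weighting scheme is illustrative.
weights = {"text": 0.4, "audio": 0.2, "visual": 0.2, "meta": 0.2}
similarity = (
    weights["text"] * cosine_similarity(topic_vectors)
    + weights["audio"] * cosine_similarity(audio_vectors)
    + weights["visual"] * cosine_similarity(visual_vectors)
    + weights["meta"] * cosine_similarity(metadata_vectors)
)
print(np.round(similarity, 2))                      # pairwise movie similarity
```

With this kind of late fusion, each modality can be computed and inspected independently before the weighted combination, which is one simple way a metadata-only similarity baseline can be enhanced with low-level multimodal information.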