Document type

Article

Version

Published version

Publication date

2016

Publication license

cc-by-sa (c) Rosado Rodrigo, Pilar et al., 2016
Please always use this identifier to cite or link this document: https://hdl.handle.net/2445/100830

Del píxel a las resonancias visuales: la imagen con voz propia

Abstract

The objective of our research is to develop a series of computer vision programs to search for analogies in large datasets (in this case, collections of images of abstract paintings) based solely on their visual content, without textual annotation. We have programmed an algorithm based on a specific model of image description used in computer vision. This approach involves placing a regular grid over the image and selecting a pixel region around each node. Dense features computed over this regular grid with overlapping patches are used to represent the images. By analysing the distances between the whole set of image descriptors, we are able to group them according to their similarity, and each resulting group determines what we call a 'visual word'. This model is called the Bag-of-Words representation. Given the frequency with which each visual word occurs in each image, we apply pLSA (Probabilistic Latent Semantic Analysis), a statistical model that classifies images fully automatically, without any textual annotation, according to their formal patterns.
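The pipeline described in the abstract can be sketched briefly in code. The following Python sketch is illustrative only and is not the authors' implementation: raw pixel patches stand in for whatever dense descriptors the paper actually uses, k-means clustering builds the visual vocabulary, and scikit-learn's LatentDirichletAllocation serves as a stand-in topic model, since pLSA itself is not part of scikit-learn. All function names and parameters (patch_size, step, n_words, n_topics) are assumptions.

```python
# Minimal sketch of a Bag-of-Visual-Words pipeline, assuming grayscale images
# supplied as 2-D NumPy arrays. Illustrative only; descriptors, vocabulary size
# and the topic model are stand-ins for the method described in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation


def dense_patches(image, patch_size=16, step=8):
    """Extract overlapping pixel patches around the nodes of a regular grid."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            patch = image[y:y + patch_size, x:x + patch_size]
            patches.append(patch.ravel().astype(np.float64))
    return np.array(patches)


def bow_histograms(images, n_words=200, patch_size=16, step=8):
    """Cluster all patch descriptors into 'visual words' and build a
    word-frequency histogram for each image (Bag-of-Words representation)."""
    per_image = [dense_patches(img, patch_size, step) for img in images]
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    vocab.fit(np.vstack(per_image))
    hists = np.zeros((len(images), n_words))
    for i, descriptors in enumerate(per_image):
        words = vocab.predict(descriptors)
        hists[i] = np.bincount(words, minlength=n_words)
    return hists


def latent_topics(histograms, n_topics=5):
    """Group images by latent formal patterns from their word counts
    (LDA used here in place of pLSA)."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(histograms)  # image-by-topic mixture weights
```

Images whose topic mixtures come out similar would then be grouped together purely on the basis of visual-word frequencies, mirroring the kind of annotation-free classification the abstract describes.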

Citation

ROSADO RODRIGO, Pilar, FIGUERAS FERRER, Eva, REVERTER COMES, Ferran. Del píxel a las resonancias visuales: la imagen con voz propia. _AusArt. Journal for Research in Art_. 2016. Vol. 4, no. 1, pp. 19-28. [accessed: 24 January 2026]. ISSN: 2340-8510. [Available at: https://hdl.handle.net/2445/100830]
