Authors: Philipsen, Mark Philip; Velling Dueholm, Jacob; Jørgensen, Anders; Escalera Guerrero, Sergio; Moeslund, Thomas Baltzer
Date accessioned: 2018-06-14
Date available: 2018-06-14
Date issued: 2018-01-03
ISSN: 1424-8220
URI: https://hdl.handle.net/2445/122953
Abstract: We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.
Extent: 15 p.
Format: application/pdf
Language: eng
Rights: cc-by (c) Philipsen, Mark Philip et al., 2018
License: http://creativecommons.org/licenses/by/3.0/es
Subjects: Birds; Neural networks (Neurobiology)
Title: Organ Segmentation in Poultry Viscera Using RGB-D
Type: info:eu-repo/semantics/article
Identifiers: 675064; 29301337
Access rights: info:eu-repo/semantics/openAccess
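
Note: the evaluation figure quoted in the abstract is a mean Jaccard index (intersection over union) averaged over the four organ classes. As a rough illustration only, and not the authors' evaluation code, the sketch below computes that metric for hypothetical per-pixel label maps; the array shapes and class ids are assumptions for the example.

import numpy as np

def mean_jaccard(pred, gt, class_ids):
    # Mean Jaccard index (IoU) across the given classes.
    # pred, gt: integer label maps of identical shape (per-pixel class ids).
    scores = []
    for c in class_ids:
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:  # class absent in both maps: skip it
            continue
        intersection = np.logical_and(p, g).sum()
        scores.append(intersection / union)
    return float(np.mean(scores))

# Toy usage with hypothetical 4-class label maps (0 = background).
gt = np.random.randint(0, 5, size=(480, 640))
pred = gt.copy()
pred[::7, ::7] = 0  # corrupt a few pixels to simulate segmentation errors
print(f"mean Jaccard: {mean_jaccard(pred, gt, class_ids=[1, 2, 3, 4]):.2%}")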