Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/215777
Full metadata record
dc.contributor.author: Carós, Mariona
dc.contributor.author: Just, Ariadna
dc.contributor.author: Seguí Mesquida, Santi
dc.contributor.author: Vitrià i Marca, Jordi
dc.date.accessioned: 2024-10-15T07:56:40Z
dc.date.available: 2024-10-15T07:56:40Z
dc.date.issued: 2024-06-13
dc.identifier.issn: 2072-4292
dc.identifier.uri: https://hdl.handle.net/2445/215777
dc.description.abstract: Light Detection and Ranging (LiDAR) systems serve as robust tools for creating three-dimensional representations of the Earth's surface, known as point clouds. Point cloud scene segmentation is essential in a range of applications aimed at understanding the environment, such as infrastructure planning and monitoring. However, automating this process presents notable challenges due to variable point density across scenes, ambiguous object shapes, and substantial class imbalances. Consequently, manual intervention remains prevalent in point classification, allowing researchers to address these complexities. In this work, we study the elements contributing to the automatic semantic segmentation process with deep learning, conducting empirical evaluations on a dataset self-captured with a hybrid airborne laser scanning sensor combined with two nadir cameras (RGB and near-infrared) over 247 km² of terrain characterized by hilly topography, urban areas, and dense forest cover. Our findings emphasize the importance of employing appropriate training and inference strategies to achieve accurate classification of data points across all categories. The proposed methodology not only facilitates the segmentation of point clouds of varying size but also yields a significant performance improvement over preceding methodologies, achieving a mIoU of 94.24% on our self-captured dataset.
dc.format.extent: 28 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.isformatof: Reproduction of the document published at: https://doi.org/10.3390/rs16122153
dc.relation.ispartof: Remote Sensing, 2024, vol. 16, num. 12
dc.relation.uri: https://doi.org/10.3390/rs16122153
dc.rights: cc-by (c) Caros Mariona et al., 2024
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: Articles publicats en revistes (Matemàtiques i Informàtica)
dc.subject.classification: Visualització tridimensional
dc.subject.classification: Teledetecció
dc.subject.classification: Visió per ordinador
dc.subject.other: Three-dimensional display systems
dc.subject.other: Remote sensing
dc.subject.other: Computer vision
dc.title: Effective Training and Inference Strategies for Point Classification in LiDAR Scenes
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
dc.identifier.idgrec: 750788
dc.date.updated: 2024-10-15T07:56:40Z
dc.rights.accessRights: info:eu-repo/semantics/openAccess
Appears in Collections: Articles publicats en revistes (Matemàtiques i Informàtica)

Files in This Item:
File: 867921.pdf
Size: 26.84 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.
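
The abstract above reports results as mean intersection-over-union (mIoU). As a point of reference only, the sketch below shows how a class-averaged IoU score of that kind is typically computed from point-wise labels; it is a minimal generic illustration, not the authors' implementation, and all variable names and example values are hypothetical.

```python
# Generic per-class IoU and mean IoU (mIoU) for point-wise semantic
# segmentation labels. Illustrative only; not code from the cited paper.
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> float:
    """Compute mIoU over `num_classes` from flat integer label arrays."""
    ious = []
    for c in range(num_classes):
        true_c = y_true == c
        pred_c = y_pred == c
        union = np.logical_or(true_c, pred_c).sum()
        if union == 0:  # class absent from both ground truth and prediction: skip
            continue
        intersection = np.logical_and(true_c, pred_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Hypothetical labels for a handful of points
y_true = np.array([0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 2, 2, 2, 1])
print(f"mIoU = {mean_iou(y_true, y_pred, num_classes=3):.4f}")
```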