Title: Implementación de Amazon Web Services para análisis de video [Implementation of Amazon Web Services for video analysis]
Author: Pérez Martínez, Rubén
Director/Tutor: Amorós Huguet, Oscar
Keywords: Digital video
Digital image processing
Computer software
Computer vision
Cloud computing
Bachelor's theses
Issue Date: 27-Jun-2019
Abstract: [en] This project presents different ways to process video in real time, as the images are being captured by the cameras. As a use case we take a real industry product which, in its most demanding configuration, consists of six 4K cameras at 60 fps, an edge server connected to those cameras, and software that composes the camera feeds into a single FullHD video while running computer vision analysis on only two of them. All of the computation can be done at the edge, that is, in the same place as the cameras, which provides a stable, controlled, and cheap high-bandwidth link between the cameras and the server. This yields the final video with low latency with respect to reality and with high image quality. However, depending on the computational requirements, the hardware can be very expensive (as in the case we present), and it is not used intensively, since it is only needed while the processing runs. The other option is the cloud, where there is no hardware to maintain, much of the software already exists, and resources are optimized and centralized. The downside is that stable, high-bandwidth connections are needed to support 4K cameras at 60 fps with good encoding quality, which usually implies high bitrates. In this project we explore different strategies for video analysis using Amazon Web Services as the cloud platform. We also present a hybrid strategy in which the video analysis runs in the cloud while the FullHD composition that uses that analysis is generated at the edge. This reduces both the computational cost at the edge and the cost of the connection to the cloud, since instead of sending the whole video we only send some frames of the two cameras involved in the computer vision analysis.
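
As a rough illustration of the bandwidth argument in the abstract, the sketch below compares the connection cost of streaming all six 4K@60fps cameras to the cloud with the hybrid strategy of uploading only sampled frames of the two cameras used for analysis. This is a minimal back-of-the-envelope sketch: the per-camera bitrate (BITRATE_4K60_MBPS), the compressed frame size (FRAME_SIZE_MB), and the analysis frame rate (ANALYSIS_FPS) are illustrative assumptions, not values taken from the thesis.

# Back-of-the-envelope comparison of the two upload strategies described in
# the abstract: streaming every camera to the cloud versus sending only a few
# frames of the two cameras used for computer vision analysis.
#
# The values marked "assumed" are illustrative guesses, not figures from the
# thesis: the encoded bitrate of a 4K60 camera and the size/rate of the
# sampled frames depend on codec, quality settings, and scene content.

NUM_CAMERAS = 6                  # cameras in the most demanding configuration
ANALYSIS_CAMERAS = 2             # cameras actually used for computer vision

BITRATE_4K60_MBPS = 40.0         # assumed bitrate per 4K@60fps camera (Mbit/s)
FRAME_SIZE_MB = 1.0              # assumed size of one compressed 4K frame (MB)
ANALYSIS_FPS = 5.0               # assumed frames per second sent for analysis


def full_streaming_mbps() -> float:
    """Bandwidth needed to push every camera stream to the cloud."""
    return NUM_CAMERAS * BITRATE_4K60_MBPS


def sampled_frames_mbps() -> float:
    """Bandwidth for the hybrid strategy: only sampled frames of the
    two analysis cameras are uploaded."""
    megabits_per_frame = FRAME_SIZE_MB * 8
    return ANALYSIS_CAMERAS * ANALYSIS_FPS * megabits_per_frame


if __name__ == "__main__":
    full = full_streaming_mbps()
    hybrid = sampled_frames_mbps()
    print(f"Full streaming : {full:6.1f} Mbit/s")
    print(f"Hybrid (frames): {hybrid:6.1f} Mbit/s")
    print(f"Reduction      : {100 * (1 - hybrid / full):.0f}%")

With these assumed numbers the hybrid strategy needs roughly a third of the uplink bandwidth of full streaming; the exact saving depends entirely on the chosen frame rate and compression settings.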
Note: Bachelor's thesis, Enginyeria Informàtica, Facultat de Matemàtiques, Universitat de Barcelona. Year: 2019. Director: Oscar Amorós Huguet
URI: http://hdl.handle.net/2445/146457
Appears in Collections: Programari - Treballs de l'alumnat
Treballs Finals de Grau (TFG) - Enginyeria Informàtica

Files in This Item:
File          Description      Size       Format
codi.zip      Source code      6.29 MB    zip
memoria.pdf   Thesis report    11.17 MB   Adobe PDF


This item is licensed under a Creative Commons License.