Doctoral Theses - Facultat de Matemàtiques
Permanent URI for this collection: https://hdl.handle.net/2445/43181
Recent submissions
Showing 1 - 19 of 19
Tesi
Towards Video Transformers for Automatic Human Analysis (Universitat de Barcelona, 2023-12-19) Selva Castelló, Javier; Escalera Guerrero, Sergio; Clapés i Sintes, Albert; Universitat de Barcelona. Facultat de Matemàtiques [eng] With the aim of creating artificial systems capable of mirroring the nuanced understanding and interpretative powers inherent to human cognition, this thesis embarks on an exploration of the intersection between human analysis and Video Transformers. The objective is to harness the potential of Transformers, a promising architectural paradigm, to comprehend the intricacies of human interaction, thus paving the way for the development of empathetic and context-aware intelligent systems. To do so, we explore the whole Computer Vision pipeline, from data gathering to a deep analysis of recent developments, through model design and experimentation. Central to this study is the creation of UDIVA, an expansive multi-modal, multi-view dataset capturing dyadic face-to-face human interactions. Comprising 147 participants across 188 sessions, UDIVA integrates audio-visual recordings, heart-rate measurements, personality assessments, socio-demographic metadata, and conversational transcripts, establishing itself as the largest dataset for dyadic human interaction analysis to date. This dataset provides a rich context for probing the capabilities of Transformers within complex environments. To validate its utility, as well as to elucidate Transformers' ability to assimilate diverse contextual cues, we focus on the challenge of personality regression within interaction scenarios. We first adapt an existing Video Transformer to handle multiple contextual sources and conduct rigorous experimentation. We empirically observe a progressive enhancement in model performance as more context is added, reinforcing the potential of Transformers to decode intricate human dynamics.
Building upon these findings, the Dyadformer emerges as a novel architecture, adept at long-range modeling of dyadic interactions. By jointly modeling both participants in the interaction, as well as embedding multi-modal integration into the model itself, the Dyadformer surpasses the baseline and other concurrent approaches, underscoring Transformers' aptitude at deciphering multifaceted, noisy, and challenging tasks such as the analysis of human personality in interaction. Nonetheless, these experiments unveil the ubiquitous challenges of training Transformers, particularly in managing overfitting due to their demand for extensive datasets. Consequently, we conclude this thesis with a comprehensive investigation into Video Transformers, analyzing topics ranging from architectural designs and training strategies to input embedding and tokenization, traversing multi-modality and specific applications. Across these, we highlight trends that optimally harness spatio-temporal representations to handle video redundancy and high dimensionality. A culminating performance comparison is conducted in the realm of video action classification, spotlighting strategies that exhibit superior efficacy, even compared to traditional CNN-based methods.
Optimization of neural networks for deep learning and applications to CT image segmentation (Universitat de Barcelona, 2023-07-27) Pezzano, Giuseppe; Radeva, Petia; Ribas Ripoll, Vicent; Universitat de Barcelona. Facultat de Matemàtiques [eng] During the last few years, AI development in deep learning has been moving so fast that even prominent researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation are achieving results so remarkable that people are seriously starting to think they could be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit we have reached a point where we no longer control the flow of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information or, in many cases, explain how or why it returns a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a work that demonstrates how to invert 3x3 convolutions after training a neural network to classify images, with the future aim of having precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, while retaining the ability to reconstruct the original image without detectable error (on 8-bit images) in up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard approach. We also tested the performance of the five most used transformers for image classification on an open-source dataset.
Studying the embedding matrices, we have been able to provide two criteria that can help transformers learn with a training-time reduction of up to 30% and no impact on classification accuracy. The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investment in the last year alone, this field is growing rapidly and promises to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by pneumonia induced by COVID-19 in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception. We customized them with novel loss functions and layers designed to achieve high performance on these particular applications. The errors on the surface of nodule segmentation masks do not exceed 1 mm in more than 99% of cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%. In general, it surpasses the state of the art in all the considered statistics, using UNet as a benchmark. Combined with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation program and judged to be of high interest by worldwide experts. With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.
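(Editorial aside, not from the thesis: the thesis inverts trained 3x3 convolutions, but the underlying principle — a convolution is exactly invertible when the Fourier transform of its kernel has no zeros — can be illustrated in one dimension with a circular convolution. The kernel below is an invented example assumed to have a zero-free spectrum.)

```python
import numpy as np

def circular_conv(x, k):
    """Circular convolution via FFT: pointwise product in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, len(x))))

def circular_deconv(y, k):
    """Invert the convolution by dividing in the Fourier domain.
    This is exact only when the kernel's spectrum has no zeros."""
    return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(k, len(y))))

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.5, 0.25])  # illustrative kernel with a zero-free spectrum
y = circular_conv(x, k)
print(np.allclose(circular_deconv(y, k), x))  # True -> exact reconstruction
```

The same invertibility criterion generalizes to 2D convolutions, which is what makes "precisely invertible" convolutional layers conceivable in the first place.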
Degenerate invariant tori in KAM theory (Universitat de Barcelona, 2023-11-22) Pello García, Juan; Haro, Àlex; Fontich, Ernest, 1955-; Universitat de Barcelona. Facultat de Matemàtiques [eng] The thesis develops an incipient methodology to study bifurcations of invariant curves in one-dimensional, quasiperiodic discrete systems, based on translated curve theorems and KAM theory. The (extended) phase space is a bundle whose base is a torus of dimension 1 and whose fiber is the real line, but both the methodology and the results can be easily adapted to higher-dimensional tori (the dimension being the number of external frequencies). The systems themselves are maps of bundles over translations on the torus with d frequencies. The methodology involves KAM theory, bifurcation theory, and translated curve theorems (in the spirit of Moser, Rüßmann, Herman, Delshams and Ortega). In the project, rigorous results are obtained in an a posteriori format on the existence of families of translated tori in the analytic framework, establishing a methodology to study the bifurcations of translated tori. The a posteriori format is suitable for developing rigorous numerical computations. Complementarily, the algorithms derived from the iterative process associated with this methodology have been implemented on the computer.
Deep Learning-based Solutions to Improve Diagnosis in Wireless Capsule Endoscopy (Universitat de Barcelona, 2023-12-12) Laiz Treceño, Pablo; Seguí Mesquida, Santi; Vitrià i Marca, Jordi; Universitat de Barcelona. Facultat de Matemàtiques [eng] Deep Learning (DL) models have gained extensive attention due to their remarkable performance in a wide range of real-world applications, particularly in computer vision. This achievement, combined with the increase in available medical records, has opened up new opportunities for analyzing and interpreting healthcare data. This symbiotic relationship can enhance the diagnostic process by identifying abnormalities, patterns, and trends, resulting in more precise, personalized, and effective healthcare for patients. Wireless Capsule Endoscopy (WCE) is a non-invasive medical imaging technique used to visualize the entire Gastrointestinal (GI) tract. Currently, physicians meticulously review the captured frames to identify pathologies and diagnose patients. This manual process is time-consuming and prone to errors due to the challenges of interpreting the complex nature of WCE procedures. Thus, it demands a high level of attention, expertise, and experience. To overcome these drawbacks, shorten the screening process, and improve diagnosis, efficient and accurate DL methods are required. This thesis proposes DL solutions to the following problems encountered in the analysis of WCE studies: pathology detection, anatomical landmark identification, and Out-of-Distribution (OOD) sample handling. These solutions aim to achieve robust systems that minimize the duration of the video analysis and reduce the number of undetected lesions. Throughout their development, several DL drawbacks have appeared, including small and imbalanced datasets. These limitations have also been addressed, ensuring that they do not hinder the generalization of neural networks, which would lead to suboptimal performance and overfitting.
To address the previous WCE problems and overcome the DL challenges, the proposed systems adopt various strategies that leverage the strengths of Triplet Loss (TL) and Self-Supervised Learning (SSL) techniques. Mainly, TL has been used to improve the generalization of the models, while SSL methods have been employed to leverage unlabeled data to obtain useful representations. The presented methods achieve state-of-the-art results in the aforementioned medical problems and contribute to the ongoing research to improve the diagnosis of WCE studies.
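(Editorial aside, not from the thesis: the standard triplet loss formulation that TL-based methods build on is short enough to sketch; the example embeddings below are invented.)

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the positive towards the anchor and
    push the negative at least `margin` further away than the positive."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class: already close to the anchor
n = np.array([1.0, 0.0])   # different class: already far away
print(triplet_loss(a, p, n))  # 0.0 -> the margin constraint is satisfied
```

When the negative drifts inside the margin, the loss becomes positive and gradients reshape the embedding space, which is the mechanism behind the improved generalization mentioned above.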
Energy and random point processes on two-point homogeneous manifolds (Universitat de Barcelona, 2023-11-24) Torre Estévez, Víctor de la; Marzo Sánchez, Jordi; Universitat de Barcelona. Facultat de Matemàtiques [eng] We study discrete energy minimization problems on two-point homogeneous manifolds. Since finding N-point configurations with optimal energy is highly challenging, recent approaches have involved examining random point processes with low expected energy to obtain good N-point configurations. In Chapter 2, we compute the second joint intensity of the random point process given by the zeros of elliptic polynomials, which enables us to recover the expected logarithmic energy on the 2-dimensional sphere previously computed by Armentano, Beltrán, and Shub. Moreover, we obtain the expected Riesz s-energy, which is remarkably close to the conjectured optimal energy. The expected energy serves as a bound for the extremal s-energy, thereby improving upon the bounds derived from the study of the spherical ensemble by Alishahi and Zamani. Among other additional results, we get a closed expression for the expected separation distance between points sampled from the zeros of elliptic polynomials. In Chapter 3, we explore the average discrepancies and worst-case errors of random point configurations on the d-dimensional sphere. We find that the points drawn from the so-called spherical ensemble and the zeros of elliptic polynomials achieve optimal spherical L^2 cap discrepancy on average. Additionally, we provide an upper bound for the L^∞ discrepancy for N-point configurations drawn from the harmonic ensemble on any two-point homogeneous space, thereby generalizing the previous findings for the sphere by Beltrán, Marzo and Ortega-Cerdà. We introduce a nondeterministic version of the Quasi-Monte Carlo (QMC) strength for random sequences of points and compute its value for the spherical ensemble, the zeros of elliptic polynomials, and the harmonic ensemble.
Finally, we compare our results with the conjectured QMC strengths of certain deterministic distributions associated with these random point processes. In Chapter 4, our focus shifts to the Green energy minimization problem. Firstly, we extend the work by Beltrán and Lizarte on spheres to establish a close-to-sharp lower bound for the minimal Green energy on any two-point homogeneous manifold, improving on the existing lower bounds on projective spaces. Secondly, by adapting a method introduced by Wolff, we deduce an upper bound for the L^∞ discrepancy of N-point sets that minimize the Green energy.
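(Editorial aside, not from the thesis: the discrete Riesz s-energy being minimized, E_s(X) = Σ_{i≠j} |x_i − x_j|^{−s}, is easy to compute directly; the regular tetrahedron on the unit sphere is a standard small test configuration.)

```python
import numpy as np

def riesz_energy(points, s=1.0):
    """Discrete Riesz s-energy: sum of |x_i - x_j|^(-s) over ordered pairs i != j."""
    n = len(points)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            energy += np.linalg.norm(points[i] - points[j]) ** (-s)
    return 2.0 * energy  # each unordered pair counts twice in the usual definition

# Four vertices of a regular tetrahedron inscribed in the unit sphere S^2
tet = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
print(riesz_energy(tet, s=1.0))  # all 6 pairwise distances equal sqrt(8/3)
```

Random processes such as the spherical ensemble enter precisely here: their expected value of this quantity upper-bounds the minimal (extremal) energy over all N-point configurations.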
Generalizability in multi-centre cardiac image analysis with machine learning (Universitat de Barcelona, 2023-12-15) Campello Román, Víctor Manuel; Lekadir, Karim, 1977-; Seguí Mesquida, Santi; Universitat de Barcelona. Facultat de Matemàtiques [eng] The field of Artificial Intelligence (AI) has undergone a revolution in recent years with the advent of more efficient computing hardware and well-documented software for model development. Many fields are being transformed. Medicine is one of the fields that has seen the appearance of models that can solve complex tasks such as automatic image segmentation or diagnosis. However, there are important challenges that need to be overcome for a successful application in clinical practice. One important challenge is the generalization of models to unseen domains, independently of factors such as the scanner manufacturer, the scanning protocol, the sample size or the image quality. In this thesis, we aim to investigate the effects of the domain shift in medical imaging, specifically for cardiovascular studies, which present a particular challenge since the heart is a moving organ. Furthermore, we aim to contribute methods to overcome or reduce the model performance gap. First, we establish a collaboration with clinical researchers from six different centres in three countries and assemble a large multi-centre dataset to tackle one of the greatest challenges in research: the domain gap problem. We process and annotate the data and develop a benchmark study by organizing an international competition to compare and analyse different techniques to bridge the generalization gap. The dataset is later open-sourced to foster innovation within the research community, becoming the first open multi-centre cardiac dataset.
Then, we perform an exhaustive comparison of domain generalization and adaptation methods, including the best-performing methods in the aforementioned competition, for late gadolinium-enhanced image segmentation for the first time. We show that extensive data augmentation is very important for generalization and that model fine-tuning can match or even surpass multi-centre models. Finally, we investigate the effects of differences in image appearance for the first time in a multi-centre study with cardiovascular imaging and compare several harmonisation techniques, both at the feature and image levels, for improved diagnosis. We show that histogram matching-based harmonisation results in image features (radiomics) that are more generalizable across centres.
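(Editorial aside, not from the thesis: histogram matching can be implemented as a generic quantile mapping; the function below is an illustrative sketch, not the thesis's harmonisation pipeline, and the intensity values are invented.)

```python
import numpy as np

def match_histogram(source, reference):
    """Quantile mapping: remap source intensities so their empirical
    distribution matches the reference's -- a simple harmonisation step
    to reduce scanner-dependent appearance differences."""
    src = np.asarray(source, dtype=float).ravel()
    ref = np.asarray(reference, dtype=float).ravel()
    quantiles = np.argsort(np.argsort(src)) / (src.size - 1)  # empirical CDF ranks
    matched = np.interp(quantiles, np.linspace(0.0, 1.0, ref.size), np.sort(ref))
    return matched.reshape(np.shape(source))

# A dark "scanner A" patch remapped onto the intensity profile of "scanner B"
print(match_histogram([5, 1, 3], [100, 200, 300]))  # [300. 100. 200.]
```

The remapped values preserve the ordering of the source intensities while adopting the reference's value range, which is the property that makes downstream radiomics features more comparable across centres.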
Vector bundles and sheaves on toric varieties (Universitat de Barcelona, 2022-11-15) Salat Moltó, Martí; Miró-Roig, Rosa M. (Rosa Maria); Universitat de Barcelona. Facultat de Matemàtiques [eng] Framed within the areas of algebraic geometry and commutative algebra, this thesis contributes to the study of sheaves and vector bundles on toric varieties. From different perspectives, we take advantage of the theory of toric varieties to address two main problems: a better understanding of the structure of equivariant sheaves on a toric variety, and the Ein-Lazarsfeld-Mustopa conjecture concerning the stability of syzygy bundles on projective varieties. After a preliminary Chapter 1, the core of this dissertation is developed along three main chapters. The plot line begins with the study of equivariant torsion-free sheaves, and evolves to the study of equivariant reflexive sheaves, with an application to the problem of finding equivariant Ulrich bundles on a projective toric variety. Finally, we end this dissertation by addressing the stability of syzygy bundles on certain smooth complete toric varieties, and their moduli space, contributing to the Ein-Lazarsfeld-Mustopa conjecture. In Chapter 2, we focus our attention on the study of equivariant torsion-free sheaves, connected in a very natural way to the theory of monomial ideals. We introduce the notion of a Klyachko diagram, which generalizes the classical staircase diagram of a monomial ideal. We provide many examples to illustrate the results throughout the two main sections of this chapter. After describing methods to compute the Klyachko diagram of a monomial ideal, we use it to describe the first local cohomology module, which measures how far a monomial ideal is from being saturated. Finally, we apply the notion of a Klyachko diagram to the computation of the Hilbert function and the Hilbert polynomial of a monomial ideal.
As a consequence, we characterize all monomial ideals having constant Hilbert polynomial in terms of the shape of the Klyachko diagram. Chapter 3 is devoted to the study of equivariant reflexive sheaves on a smooth complete toric variety. We describe a family of lattice polytopes encoding how the global sections of an equivariant reflexive sheaf change as we twist it by a line bundle. In particular, this gives a method to compute the Hilbert polynomial of an equivariant reflexive sheaf. We study in detail the case of smooth toric varieties with splitting fan. We are able to give bounds for the multigraded initial degree and for the multigraded regularity index of an equivariant reflexive sheaf on a smooth toric variety with splitting fan. From the latter result, we give a method to compute explicitly the Hilbert polynomial of an equivariant reflexive sheaf on a smooth toric variety with splitting fan. Finally, we apply these tools to present a method aimed at finding equivariant Ulrich bundles on a Hirzebruch surface, and we give an example of a rank-3 equivariant Ulrich bundle on the first Hirzebruch surface. Chapter 4 treats the stability of syzygy bundles on certain toric varieties. We contribute to the Ein-Lazarsfeld-Mustopa conjecture by proving the stability of the syzygy bundle of any polarization of a blow-up of a projective space along a linear subspace. Finally, we study the rigidity of the syzygy bundles in this setting, all of which correspond to smooth points in their associated moduli space.
Reinforcement Learning for Value Alignment (Universitat de Barcelona, 2023-06-26) Rodríguez Soto, Manel; López Sánchez, Maite; Rodríguez-Aguilar, Juan A. (Juan Antonio); Universitat de Barcelona. Facultat de Matemàtiques [eng] As autonomous agents become increasingly sophisticated and we allow them to perform more complex tasks, it is of utmost importance to guarantee that they will act in alignment with human values. In the AI literature, this problem is known as the value alignment problem. Current approaches apply reinforcement learning to align agents with values, due to its recent successes at solving complex sequential decision-making problems. However, they follow an agent-centric approach: they expect the agent to apply the reinforcement learning algorithm correctly, without formal guarantees that the learnt behaviour will in fact be ethical. This thesis proposes a novel environment-designer approach for solving the value alignment problem with theoretical guarantees. Our proposed environment-designer approach advances the state of the art with a process for designing ethical environments wherein it is in the agent's best interest to learn ethical behaviours. Our process specifies the ethical knowledge of a moral value in terms that can be used in a reinforcement learning context. Next, our process embeds this knowledge in the agent's learning environment to design an ethical learning environment. The resulting ethical environment incentivises the agent to learn an ethical behaviour while pursuing its own objective. We further contribute to the state of the art by providing a novel algorithm that, following our ethical environment design process, is formally guaranteed to create ethical environments. In other words, this algorithm guarantees that it is in the agent's best interest to learn value-aligned behaviours.
We illustrate our algorithm by applying it in a case study environment wherein the agent is expected to learn to behave in alignment with the moral value of respect. In it, a conversational agent is in charge of conducting surveys, and we expect it to ask the users questions respectfully while trying to get as much information as possible. In the designed ethical environment, the empirical results confirm our theoretical guarantees: the agent learns an ethical behaviour while pursuing its individual objective.
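(Editorial aside; a toy illustration invented here, not taken from the thesis: the essence of the environment-design idea is that the designer adds an ethical reward component so that the value-aligned action becomes the one in the agent's best interest. The action names and reward numbers below are hypothetical.)

```python
# Hypothetical rewards: the agent's individual objective alone favours the
# "unethical" action; the designer's ethical term flips the optimum.
individual = {"respectful": 1.0, "pushy": 2.0}   # task reward only
ethical    = {"respectful": 0.0, "pushy": -5.0}  # designed penalty for the unethical action

def best_action(weight):
    """Greedy choice under the shaped reward: individual + weight * ethical."""
    return max(individual, key=lambda a: individual[a] + weight * ethical[a])

print(best_action(0.0))  # 'pushy'      -> without shaping, the unethical action wins
print(best_action(1.0))  # 'respectful' -> in the designed environment, ethics wins
```

The thesis's contribution is precisely to choose such a shaping with formal guarantees, rather than by hand-tuning weights as in this sketch.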
Uncertainty, interpretability and dataset limitations in Deep Learning (Universitat de Barcelona, 2023-02-17) Pascual i Guinovart, Guillem; Vitrià i Marca, Jordi; Seguí Mesquida, Santi; Universitat de Barcelona. Facultat de Matemàtiques [eng] Deep Learning (DL) has gained traction in the last years thanks to the exponential increase in compute power. New techniques and methods are published on a daily basis, and records are being set across multiple disciplines. Undeniably, DL has brought a revolution to the machine learning field and to our lives. However, not everything has been resolved and some considerations must be taken into account. For instance, obtaining uncertainty measures and bounds is still an open problem. Models should be able to capture and express the confidence they have in their decisions, and Artificial Neural Networks (ANNs) are known to fall short in this regard. Be it through out-of-distribution samples, adversarial attacks, or simply unrelated or nonsensical inputs, ANN models demonstrate an unfounded and incorrect tendency to still output high probabilities. Likewise, interpretability remains an unresolved question. Some fields not only need but rely on being able to provide human interpretations of the thought process of models. ANNs, and especially deep models trained with DL, are hard to reason about. Last but not least, models are becoming deeper and more complex; at the same time, to cope with the increasing number of parameters, datasets are required to be of higher quality and, usually, larger. Not all research, and even fewer real-world applications, can keep up with these increasing demands. Therefore, taking into account the previous issues, the main aim of this thesis is to provide methods and frameworks to tackle each of them. These approaches should be applicable to any suitable field and dataset, and they are employed with real-world datasets as proof of concept.
First, we propose a method that provides interpretability with respect to the results through uncertainty measures. The model in question is capable of reasoning about the uncertainty inherent in the data and leverages that information to progressively refine its outputs. In particular, the method is applied to land cover segmentation, a classification task that aims to assign a type of land cover to each pixel in satellite images. The dataset and application serve to prove that the final uncertainty bound enables the end user to reason about the possible errors in the segmentation result. Second, Recurrent Neural Networks are used as a method to create models robust to dataset limitations, both in size and class balance. We apply them to two different fields: road extraction in satellite images and Wireless Capsule Endoscopy (WCE). The former demonstrates that contextual information along the temporal axis of the data can be used to create models that achieve results comparable to the state of the art while being less complex. The latter, in turn, proves that contextual information for polyp detection can be crucial to obtain models that generalize better and obtain higher performance. Finally, we propose two methods to leverage unlabeled data in the model creation process. Datasets are often easier to obtain than to label, which results in many wasted opportunities with traditional classification approaches. Our approaches, based on self-supervised learning, result in a novel contrastive loss that is capable of extracting meaningful information out of pseudo-labeled data. Applying both methods to WCE data proves that the extracted inherent knowledge creates models that perform better on extremely unbalanced datasets and when data is scarce. To summarize, this thesis demonstrates potential solutions to obtain uncertainty bounds, provide reasonable explanations of the outputs, and combat lack of data or unbalanced datasets.
Overall, the presented methods have a positive impact on the DL field and could have a real and tangible effect on society.
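(Editorial aside, not from the thesis: its contrastive loss is novel, but the generic family it belongs to, InfoNCE-style losses, can be sketched in a few lines; the embeddings below are invented.)

```python
import numpy as np

def info_nce(anchor, candidates, temperature=1.0):
    """Generic contrastive (InfoNCE-style) loss for one anchor:
    candidates[0] is the positive pair, the remaining rows are negatives."""
    z = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ z / temperature                     # cosine similarities
    return -sims[0] + np.log(np.exp(sims).sum())   # cross-entropy w.r.t. the positive

anchor = np.array([1.0, 0.0])
cands = np.array([[0.9, 0.1],    # positive: nearly aligned with the anchor
                  [0.0, 1.0]])   # negative: orthogonal to the anchor
print(info_nce(anchor, cands))
```

Minimizing this quantity pulls positive pairs together and pushes negatives apart in embedding space, which is how such losses extract signal from pseudo-labeled or unlabeled data.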
Causality without Estimands: from Causal Estimation to Black-Box Introspection (Universitat de Barcelona, 2023-05-12) Parafita, Álvaro; Vitrià i Marca, Jordi; Universitat de Barcelona. Facultat de Matemàtiques [eng] The notion of cause and effect is fundamental to our understanding of the real world; ice cream sales correlate with jellyfish stings (both increase during summer), but a ban on ice cream could hardly stop jellyfish. This discrepancy between the patterns that we observe and the results of our actions is essential: without causal knowledge we are mere spectators of the world, unable to understand its inner workings, enact effective change, explain which factors were responsible for a specific outcome, or imagine potential scenarios resulting from alternative decisions. The field of statistics has traditionally stayed in the realm of observations, powerless to measure causal effects except by performing randomized experiments. These consist of dividing a set of individuals into two groups at random and assigning a certain action/treatment to each subgroup, to then compare the outcomes of both. This could be applied, for instance, to measure the impact of large-scale advertisement campaigns on sales, test the effects of smoking on the development of lung cancer, or determine the influence of new pedagogical strategies on eventual career success. However, randomized experiments are not always feasible, as is the case in these examples, due to economic, ethical or timing concerns. Causal Inference is the field that studies how to circumvent this problem: using only observational data, not subject to randomization, it allows us to measure causal effects. Even so, the standard approach to Causal Estimation (CE), estimand-based methods, results in ad hoc models that cannot extrapolate to other datasets with different causal relationships, and often requires training a new model every time we want to answer a different query on the same dataset.
Contrary to this perspective, estimand-agnostic approaches train a model of the observational distribution that acts as a proxy for the underlying mechanism that generated the data; this model needs to be trained only once and can answer any identifiable query reliably. However, this latter approach has seldom been studied, primarily because of the difficulty of defining a good model of the target distribution that satisfies every causal requirement while remaining flexible enough to answer the desired causal queries. This dissertation is focused on the definition of a general estimand-agnostic CE framework, Deep Causal Graphs, that can leverage the expressive modelling capabilities of Neural Networks and Normalizing Flows while still providing a flexible and comprehensive estimation toolkit for all kinds of causal queries. We contrast its capabilities against other estimand-agnostic approaches and measure its performance in comparison with the state of the art in Causal Query Estimation. Finally, we also illustrate the connection between CE and Machine Learning Interpretability, Explainability and Fairness: since the examination of black boxes often requires answering many causal queries (e.g., what is the effect of each input variable on the outcome, or how would the outcome have changed had we intervened on a certain input), estimand-based techniques would force us to train as many different models; in contrast, estimand-agnostic frameworks allow us to ask as many questions as needed with just a single trained model, and are therefore essential for this kind of application.
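(Editorial aside, not from the thesis: the ice cream/jellyfish example can be simulated directly. A hidden common cause — summer — drives both variables, producing a strong correlation, while intervening on ice cream sales leaves stings untouched. All numbers below are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
summer = rng.integers(0, 2, 10_000)                  # hidden common cause (season)
ice_cream = 10 * summer + rng.normal(0, 1, 10_000)   # sales driven by the season
stings = 5 * summer + rng.normal(0, 1, 10_000)       # stings driven by the season too

print(np.corrcoef(ice_cream, stings)[0, 1])  # strong correlation (~0.9)

# Intervention do(ice_cream = 0): stings are still generated by the same
# mechanism, so their distribution does not change -> no causal effect.
stings_after_ban = 5 * summer + rng.normal(0, 1, 10_000)
print(abs(stings.mean() - stings_after_ban.mean()))  # close to 0
```

An estimand-agnostic model of this generating mechanism would, once trained, answer both the observational query (the correlation) and the interventional one (the null effect of the ban) without retraining.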
Diseño de arquitecturas de aprendizaje profundo para la determinación de cubiertas sobre el territorio y el estudio de series temporales (Universitat de Barcelona, 2023-07-04) García Rodríguez, Carlos; Vitrià i Marca, Jordi; Mora Sacristán, Oscar; Universitat de Barcelona. Facultat de Matemàtiques [spa] Land covers are the result of natural and socioeconomic factors and of people's use of the territory over time and space. Information on territorial dynamics is essential for land selection, planning and management. Characterizing the type of land cover and its use is key for many applications: environmental monitoring, forestry, hydrology, agriculture and geology, among others. The information obtained through the analysis of time series of airborne and satellite images makes it possible to identify land-cover dynamics. With the development of cover classification methods based on remote sensing, it is possible to evaluate the static and dynamic attributes of land occupation over time in both large and small regions, providing valuable information for territorial management. Currently these tasks are carried out by photo-interpretation, with high costs in terms of time and resources, not only for their creation but also for their updating. The availability of detailed images over time from the Sentinel-1 and Sentinel-2 missions of the European Copernicus space programme, and the aerial images captured by the Institut Cartogràfic i Geològic de Catalunya (ICGC), provide access to a large amount of high-quality Earth observation data. The research project carried out in this industrial doctorate seeks to develop techniques for analysing these large volumes of data in order to determine land uses and covers automatically.
Automatic classification is combined with human-in-the-loop methodologies. Among the many cover types, special attention is paid to groundwater extraction zones in aquifers, analysing time series of ground movements with DInSAR (Differential Interferometry with Synthetic Aperture Radar) techniques.
Towards Efficient and Realistic Animation of 3D Garments with Deep Learning (Universitat de Barcelona, 2022-11-17) Bertiche, Hugo; Escalera Guerrero, Sergio; Madadi, Meysam; Universitat de Barcelona. Facultat de Matemàtiques [eng] Machine learning has soared thanks to the proliferation of deep learning-based methodologies. 3D vision is one of the many fields that have benefited from this trend. Within this domain, I focused my research on human-centric scenarios. As a starting point, I begin with a 3D human pose and shape reconstruction approach from still images. Relying on a powerful CNN and a novel inverse graphics solution, I define the steps to predict volumetric humans as 3D meshes. As a natural extension, I turn my attention to the modelling of 3D garments for complete human representation. Deep learning models require huge volumes of data. For this reason, next, I explain my work developing the biggest 3D garment dataset, CLOTH3D. This was motivated by the lack of such data for the study of cloth on humans. Additionally, in the same context, I describe a baseline model for 3D garment generation trained on CLOTH3D. After identifying the major drawbacks of the baseline model, I introduce a novel solution for the garment animation problem. Deep learning models usually require data with a fixed dimensionality. Related works proposed expensive data pre-processing to make data uniform, albeit diminishing its quality, among other issues. By focusing purely on garment animation, I designed a fully-convolutional model that does not suffer from the aforementioned problem. This new model can animate even completely unseen outfits. Nonetheless, cloth animation is a tremendously complex problem. In practice, deep models which encode multiple garments end up showing poor quality. Moreover, I noted significant drawbacks in supervised learning schemes for garments.
Motivated by these observations, I devised a novel technique that allows unsupervised training for the 3D garment animation task. As a consequence, this methodology leads to smaller, more robust models that can be obtained in a matter of minutes. Furthermore, it shows an unprecedented level of performance. Because of this, it became the first viable option for deep-learning-based real-time garment animation in real-life applications. Nonetheless, it is a quasi-static approach, and cloth dynamics are crucial for proper garment animation. Finally, the last of my contributions describes how to learn cloth dynamics in an unsupervised manner, making the solution for garment animation complete. Additionally, I establish the foundations of this new unsupervised neural garment animation framework.
Tesi
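As a concrete illustration of the unsupervised idea described in the abstract (minimizing a physical energy of the predicted cloth instead of regressing ground-truth simulations), here is a minimal sketch of one such energy term, an edge-spring (stretch) energy. The function name, signatures, and the choice of a single mass-spring term are illustrative assumptions, not the thesis's actual implementation, which combines several physical terms:

```python
import numpy as np

def spring_energy(verts, edges, rest_lengths, k=1.0):
    """Elastic stretch energy of a cloth mesh's edges.

    verts:        (V, 3) vertex positions predicted by the network
    edges:        (E, 2) vertex-index pairs forming the mesh edges
    rest_lengths: (E,)   edge lengths of the garment at rest
    Minimizing a sum of such terms (stretch, bending, gravity,
    collision) with respect to the network weights trains the model
    with no ground-truth simulations, i.e. unsupervisedly.
    """
    d = verts[edges[:, 0]] - verts[edges[:, 1]]      # edge vectors
    lengths = np.linalg.norm(d, axis=1)              # current lengths
    return 0.5 * k * np.sum((lengths - rest_lengths) ** 2)

# A unit square of cloth at its rest shape has zero stretch energy;
# any deformation that changes edge lengths makes the energy positive.
square = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
rest = np.ones(len(edges))
```

In a deep-learning setting the same expression would be written with a differentiable tensor library so that its gradient flows back into the network, but the energy itself is framework-agnostic.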
Sobre la parte p-fundamental del grupo de Brauer(Universitat de Barcelona, 1960) Vaquer i Timoner, Josep, 1928-2020; Augé Farreras, Juan, 1919-1993; Universitat de Barcelona. Facultat de Matemàtiques[spa] By a well-known theorem of Wedderburn, every normal simple algebra is the (tensor) direct product of a full matrix algebra and a division algebra. Starting from this, R. Brauer divides the normal simple algebras into classes and defines an abelian group structure on the set of classes. When the field of coefficients has characteristic p > 0, the study of the p-fundamental part of the Brauer group of algebra classes reduces to that of the cyclic p-algebras, which were studied by E. Witt in 1936; at the same time, O. Teichmüller obtained some relations between the cyclic p-algebras with respect to the Brauer classification. More recently, E. Witt gave an isomorphism between the p-fundamental part of the Brauer group of algebra classes and a module of Pfaffian forms, in which one can compute very fluidly and which yields a system of generators of the p-fundamental part of the Brauer group from a p-basis of the field of coefficients. The relations between cyclic p-algebras obtained by O. Teichmüller give the vanishing generators and some relations of a very particular form. The aim of this work is to obtain all the relations among the generators mentioned above. This problem is solved first by working in the module of Pfaffian forms, and then, following Teichmüller's own methods, independently of the module of Pfaffian forms. For the general theorems on algebras used in this work, see the book by A. Albert.
To ease the reading, the work opens with a summary of Witt's vector calculus and its application to obtaining the cyclic p-algebras over a field k, and finally the isomorphism between the p-fundamental part of the Brauer group of algebra classes and a module of Pfaffian forms over the ring of Witt vectors. The subject of the present work was proposed to me during my stay in Hamburg (1956/53) by Professor E. Witt, to whom I express my gratitude both for the help he gave me during its execution and for all the teaching I received from him. I also thank Dr. J. Augé for sponsoring this thesis, and the C.S.I.C. and the D.A.A.D., which provided the grant that allowed me to carry out these studies.
Tesi
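For orientation, a cyclic p-algebra of degree p over a field k of characteristic p > 0 can be presented in standard Artin–Schreier symbol notation (a common modern convention, not necessarily the thesis's own):

```latex
% Cyclic p-algebra [a,b) over k, with char k = p > 0, a \in k, b \in k^{\times}:
% the central simple k-algebra generated by u, v subject to
u^{p} - u = a, \qquad v^{p} = b, \qquad v\,u\,v^{-1} = u + 1 .
% Its class in the Brauer group Br(k) is killed by p, and the
% p-fundamental part of Br(k) is generated by such classes.
```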
Estudi sobre la dinàmica dels sistemes estel·lars amb simetria cilíndrica(Universitat de Barcelona, 1986-01-01) Sala Mirabet, Ferran; Orús Navarro, J. J. de; Universitat de Barcelona. Facultat de Matemàtiques[spa] A non-stationary galactic model with cylindrical symmetry has been developed, based on Chandrasekhar's hypotheses, and the functions that determine it have been characterized. The form obtained for the potential allowed the development of a new model under the additional hypothesis of separability of the potential, in which the trajectories of the stars and of the local centroids have been found, and which has been applied to the description of a stellar population in the solar neighbourhood.
Tesi
Contribución al estudio de las extensiones galoisianas de grupo diedral(Universitat de Barcelona, 1975-01-01) Pascual Xufré, Griselda, 1926-2001; Linés Escardó, Enrique; Universitat de Barcelona. Facultat de Matemàtiques[spa] In this memoir we aim to contribute to the study of the arithmetic of non-abelian Galois extensions of number fields whose Galois group G is of one of the following types: dihedral of order 2p^n (p an odd prime), or dihedral of order 2pq (p, q odd primes). The first chapter considers a special type of ideals, called invariant ideals, which coincide with those Ullom calls ambiguous in the case of dihedral extensions of Q of order 2p, and gives a sufficient condition for the existence of normal bases for such ideals. The second chapter is devoted to the study of ramification and the computation of discriminants in the 2p^n case. Chapters three and four study the ring of integers A_N of the extension N, regarded as an A[G]-module, where A is the Dedekind ring over whose field of fractions X the extension N with Galois group G of order 2p^n is constructed, and give some sufficient conditions for A_N to be a free A[G]-module. Chapter five studies ramification and computes discriminants in the case of a dihedral extension of order 2pq, and gives sufficient conditions for A_N to be A[G]-projective.
Tesi
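For orientation, the freeness condition studied in chapters three and four can be stated as follows (a standard formulation, assumed here rather than quoted from the memoir):

```latex
% N a Galois extension with group G, A a Dedekind ring with the
% extension built over its field of fractions, A_N the integral
% closure of A in N. Then A_N is a free A[G]-module (of rank 1,
% "N admits a normal integral basis") iff there is \theta \in A_N with
A_N \;=\; A[G]\cdot\theta
      \;=\; \Bigl\{ \textstyle\sum_{\sigma \in G} a_\sigma\, \sigma(\theta)
            \;:\; a_\sigma \in A \Bigr\}.
```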
Sobre la distribución de los valores de una función representada por una serie de Dirichlet lagunar(Universitat de Barcelona, 1962-10-23) Sunyer i Balaguer, Ferran, 1912-1967; Orts Aracil, José María, 1891-1968; Universitat de Barcelona. Facultat de Matemàtiques[spa] In a series of Notes and Memoirs, the author of this one proved that when the Taylor series representing an entire function, or a function holomorphic in the unit disc, is sufficiently lacunary, the function takes all finite values, ruling out the exceptional value that, according to Picard's theorem, it might present; the possibility of other exceptional cases in other areas of the general theory likewise disappears. Later, in another Memoir, I extended the previous results on entire functions to Dirichlet series. Besides its intrinsic interest, this extension had a twofold one: first, when the Dirichlet series had integer exponents, the latter results sharpened, in a certain sense, those previously obtained for Taylor series; second, they completed some theorems of Pólya and Mandelbrojt on Julia directions. Since the results on Dirichlet series were valid only for entire functions and for series convergent in the whole plane, results were evidently missing for functions holomorphic in a half-plane, corresponding to the theorems for Taylor series convergent only in the unit disc. This Memoir is aimed at filling that gap.
Tesi
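For reference, the series in question have the following general form (standard notation, assumed here rather than taken from the memoir):

```latex
f(s) \;=\; \sum_{n=1}^{\infty} a_n\, e^{-\lambda_n s},
\qquad 0 \le \lambda_1 < \lambda_2 < \cdots, \quad \lambda_n \to \infty .
% "Lacunary" means the exponent sequence (\lambda_n) is sufficiently
% sparse. For integer exponents \lambda_n = n, setting z = e^{-s}
% turns f into a Taylor (power) series, with the half-plane
% \operatorname{Re} s > 0 corresponding to the unit disc |z| < 1.
```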
Discurso sobre la teoria general del movimiento en las máquinas : desarrollado por D. Lauro Clariana en el ejercicio del doctorado correspondiente a la sección de Ciencias exactas(Universitat de Barcelona, 1873-01-01) Clariana Ricart, Lauro, 1842-1916; Universitat de Barcelona. Facultat de MatemàtiquesLauro Clariana Ricart (Barcelona, 1842-1916) was a Spanish mathematician and industrial engineer. In 1873 he obtained the chair of mathematics at the Instituto de Tarragona. That same year he graduated as a doctor in Exact Sciences. In 1881 he was appointed full professor of Infinitesimal Calculus at the Universidad de Barcelona. From 1902 to 1907 he held, on an interim basis, the chair of Integral Calculus and Rational Mechanics at the Escuela de Ingenieros Industriales of that city, and from 1909 he held the chair of Infinitesimal Calculus, a new subject of which he was the initiator. He belonged to the Real Academia de Ciencias y Artes de Barcelona and to the Academia de Ciencias Exactas, Físicas y Naturales in Madrid. He contributed to various scientific journals. At the Barcelona academy he lectured on higher mathematics concerning elliptic functions. He took a distinguished part in the congresses of Paris, Brussels, Munich and Fribourg, where he presented numerous works on various subjects. (Source: Wikipedia.org)
Tesi
Fundamentos de geometría pseudoconforme en "n" dimensiones(Universitat de Barcelona, 1934-06-16) Planas Corbella, José María, 1910-1936; Torroja Miret, Antonio, 1888-1974[spa] In a series of recent works, in which many essential problems of the modern theory of functions of two complex variables are studied in depth, F. SEVERI established the bases of a geometric foundation of that theory; this path has been explored in detail by B. SEGRE, who has also obtained notable results. On this view, the fundamental properties derive from the real representation of complex entities, and especially from the way of treating the infinity of the domain of variability. As SEVERI has shown, the most correct approach is to consider the pair of complex variables (x, y) spread over the V(6/4) of C. SEGRE (that is, over the real sheet of a SEGRE V(6/4) of elliptic type) in a space of eight dimensions; this variety is indeed the minimal algebraic-topological model of that domain. By suitably projecting this variety onto a flat four-dimensional space S(4), we obtain the representation of the points of the complex projective plane by means of the real points of a Euclidean S(4); in this form, the points at infinity of that plane correspond homeomorphically to the real lines of a certain elliptic linear congruence in the improper space of that S(4). All these considerations were already made in a note by B. SEGRE, where their full importance is shown. In this memoir I propose to study these questions in full generality, extending them to the case of "n" variables.
One immediately sees that, apart from some fundamental concepts that carry over at once with slight changes, these are not trivial extensions: a great wealth and variety of new facts appears, showing that it is worth not restricting ourselves to the case n=2 if we wish to attain a complete view of the theory of analytic functions of several variables. There are, moreover, questions already studied in that case which gain new light when the number of dimensions of the ambient space is increased. An elementary example arises when considering the so-called characteristic surfaces: a fundamental property of these surfaces, which we shall prove in Chapter III, is the reduction of the number of dimensions of their osculating spaces at generic points, something that obviously has no meaning in a space of four dimensions. In other words: in that last space every regular piece of analytic surface contains a doubly conjugate system of lines, in the sense of DUPIN. When, instead, surfaces belonging to spaces of more than four dimensions are considered, this no longer holds in general; but the characteristic surfaces do enjoy this property. The fundamental considerations developed here concern two important concepts that correspond, in our real representation of complex entities, to those of analytic variety and analytic transformation of the complex domain: namely, the characteristic varieties and the transformations called "pseudoconformal" by SEVERI. The properties studied here almost always have a local character, and we restrict ourselves to regular pieces of a variety.
Tesi
Un concepto generalizado de conjugación : aplicación a las funciones quasiconvexas(Universitat de Barcelona, 1981-10-29) Martínez Legaz, Juan Enrique; Augé Farreras, Juan, 1919-1993; Universitat de Barcelona. Facultat de Matemàtiques[spa] In this work the concepts of H-convexity and H-conjugation are defined and studied, H being a family of real functions of a real variable closed under pointwise suprema, in such a way that they coincide with the classical notions when H is taken to be the family of translations of R. By means of them, a duality theory in mathematical programming is constructed and the resulting Lagrangians are studied. Among the applications of these notions is the interpretation of some previous theories of quasiconvex conjugation, which are recovered by considering certain families H of increasing functions. Conjugation of multifunctions on abstract sets is also addressed, thus generalizing the known constructions, which require algebraic and order structures.
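As a point of comparison, the general pattern such generalized conjugations follow can be sketched in standard (coupling-based) notation; this is the common modern formulation of generalized convexity, assumed here for illustration rather than the thesis's own definitions:

```latex
% Classical Fenchel conjugate of f : \mathbb{R} \to \overline{\mathbb{R}}:
f^{*}(y) \;=\; \sup_{x \in \mathbb{R}} \bigl( xy - f(x) \bigr).
% Generalized conjugate with respect to a coupling c(x,y):
f^{c}(y) \;=\; \sup_{x} \bigl( c(x,y) - f(x) \bigr),
% and f is "convex" in the generalized sense iff it is a pointwise
% supremum of elementary functions c(\cdot, y) + \beta drawn from the
% chosen family (here, the family H).
```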