Abstract: As humans, we never fully observe the world around us, and yet we are able to build remarkably useful models of it from our limited sensory data. Machine learning systems are often required to operate in a similar setting, that is, to infer unobserved information from observed data: for example, when inferring 3D shape from a single-view image of an object, when reconstructing high-fidelity MR images from a subset of frequency measurements, or when modelling a data distribution from a limited set of data points. These partial observations naturally induce data uncertainty, which may hinder the quality of model predictions. In this talk, I will present our recent work on content recovery and creation from limited sensory data, which leverages active acquisition strategies and user guidance to improve model outcomes. Finally, I will briefly present the findings of a systematic assessment of potential vulnerabilities and fairness risks of the models we develop.