Zhu, Jialin ORCID: https://orcid.org/0000-0002-1826-6566 (2024) Deep Learning for Infinite Virtual Urban Environment Generation. PhD thesis, University of Leeds.
Abstract
The development of information technology has brought significant improvements to people's daily lives, and an increasing number of people now spend time in virtual worlds rather than the physical realm. Virtual urban environments play an essential role in many fields, from 2D previews for urban and regional planning to 3D city modelling for games and Virtual Reality applications. Deep Learning technologies, which have advanced rapidly across many scientific research fields, offer new possibilities for generating virtual urban environments.
Our research concentrates on using Deep Learning to generate virtual urban environments of infinite size, in both 2D and 3D, in a more efficient and user-friendly manner. We demonstrate our results through two pipelines and a generative model.
For 2D virtual environment generation, we propose SSS (Seamless Satellite-image Synthesis), a novel neural architecture that creates scale- and space-continuous satellite textures from cartographic data. Our approach generates seamless textures over arbitrarily large spatial extents that remain consistent through scale-space. We also show applications to texturing procedurally generated maps and to interactive satellite image manipulation.
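As a rough illustration of the seamless-assembly idea only (not the learned SSS architecture itself), the sketch below tiles a map layer with overlapping patches from a placeholder generator and feather-blends the overlaps so no visible seams remain; `fake_generator`, the tile size, and the overlap width are hypothetical choices made for the example.

```python
# Toy seamless assembly of generated tiles via overlap blending (illustrative only).
import numpy as np

def fake_generator(map_tile):
    """Stand-in for a learned map-to-satellite-image generator (hypothetical)."""
    rng = np.random.default_rng(int(map_tile.sum() * 1000) % 2**32)
    base = map_tile[..., None] * np.array([0.4, 0.6, 0.3])
    return np.clip(base + 0.05 * rng.random(map_tile.shape + (3,)), 0.0, 1.0)

def blend_weights(size, overlap):
    """1D weights that ramp up/down inside the overlap band at both tile edges."""
    w = np.ones(size)
    ramp = np.linspace(0.0, 1.0, overlap + 2)[1:-1]   # strictly positive ramp
    w[:overlap] = ramp
    w[-overlap:] = ramp[::-1]
    return w

def stitch(map_image, tile=64, overlap=16):
    """Generate overlapping tiles and feather-blend them into one seamless image."""
    H, W = map_image.shape
    out = np.zeros((H, W, 3))
    acc = np.zeros((H, W, 1))
    w1d = blend_weights(tile, overlap)
    w2d = (w1d[:, None] * w1d[None, :])[..., None]    # (tile, tile, 1) blend mask
    step = tile - overlap
    for y in range(0, H - tile + 1, step):
        for x in range(0, W - tile + 1, step):
            patch = fake_generator(map_image[y:y + tile, x:x + tile])
            out[y:y + tile, x:x + tile] += w2d * patch
            acc[y:y + tile, x:x + tile] += w2d
    return out / np.maximum(acc, 1e-8)                # normalise by accumulated weight

if __name__ == "__main__":
    cartographic = np.random.default_rng(0).random((256, 256))   # placeholder map layer
    satellite = stitch(cartographic)
    print(satellite.shape)                                       # (256, 256, 3)
```

Because neighbouring tiles share an overlap band and are weighted to sum to one, the assembled image can grow to arbitrary spatial extents without seam artefacts at tile borders.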
Turning to 3D generation, NeuroSculpt (Neural Sculptor) is a pipeline that learns to sculpt 3D models of massive urban environments. We train 2D neural networks to deform a theoretically infinite 3D plane into a large-scale 3D surface of the urban environment, avoiding the high memory costs that limit the scale of 3D deep learning architectures. By starting with coarse features and progressing to fine details, we are able to synthesize highly detailed, concave, large-scale models.
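To make the coarse-to-fine plane-deformation idea concrete, here is a minimal sketch assuming PyTorch; `DisplacementNet`, the resolutions, and the guidance maps are hypothetical stand-ins, not the NeuroSculpt pipeline. A small 2D CNN predicts per-vertex displacements, applied first at a coarse resolution and then refined at a finer one.

```python
# Illustrative coarse-to-fine displacement of a flat plane by 2D networks (hypothetical sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Tiny 2D CNN mapping a guidance map to a per-vertex XYZ displacement map."""
    def __init__(self, in_ch=1, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),     # 3 channels: displacement in x, y, z
        )

    def forward(self, x):
        return self.net(x)

def plane_vertices(h, w):
    """Regular grid of (x, y, z=0) vertices covering the unit square."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij")
    return torch.stack([xs, ys, torch.zeros_like(xs)], dim=0)     # (3, h, w)

def sculpt(guidance_coarse, guidance_fine, net_coarse, net_fine):
    """Displace a flat plane in two passes: coarse structure first, then fine detail."""
    h, w = guidance_fine.shape[-2:]
    verts = plane_vertices(h, w)                                  # flat plane, (3, h, w)
    d_coarse = net_coarse(guidance_coarse)                        # (1, 3, hc, wc)
    d_coarse = F.interpolate(d_coarse, size=(h, w), mode="bilinear", align_corners=False)
    d_fine = net_fine(guidance_fine)                              # (1, 3, h, w)
    return verts + d_coarse[0] + d_fine[0]                        # displaced vertex grid

if __name__ == "__main__":
    coarse = torch.rand(1, 1, 32, 32)     # e.g. a low-resolution guidance map
    fine = torch.rand(1, 1, 128, 128)     # higher-resolution guidance
    verts = sculpt(coarse, fine, DisplacementNet(), DisplacementNet())
    print(verts.shape)                    # torch.Size([3, 128, 128])
```

Because the networks operate on 2D maps rather than 3D volumes, memory grows with the resolution of the guidance images rather than with the cube of the scene size, which is the motivation stated above for deforming a plane instead of using 3D architectures.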
Finally, we propose VoxNeRF, a generative model that uses novel neural rendering techniques to render urban building geometries. It takes 3D geometry data as input and generates the corresponding 3D rendering result.
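As a toy illustration of geometry-conditioned neural rendering (a generic volume-rendering sketch, not the VoxNeRF model), the code below samples densities and colours from voxel grids along a single camera ray and alpha-composites them into a pixel colour; the grid resolution, ray parameters, and the solid-cube scene are assumptions made for the example.

```python
# Toy voxel-conditioned volume rendering along one ray (illustrative only).
import numpy as np

def render_ray(origin, direction, sigma_grid, color_grid, n_samples=64, near=0.0, far=1.0):
    """Alpha-composite densities/colours looked up from voxel grids along one ray."""
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]        # sample points in [0, 1]^3
    res = sigma_grid.shape[0]
    idx = np.clip((pts * res).astype(int), 0, res - 1)             # nearest-voxel lookup
    sigma = sigma_grid[idx[:, 0], idx[:, 1], idx[:, 2]]            # per-sample density
    color = color_grid[idx[:, 0], idx[:, 1], idx[:, 2]]            # per-sample RGB
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)                           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # accumulated transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)                  # composited pixel colour

if __name__ == "__main__":
    res = 32
    sigma = np.zeros((res, res, res))
    sigma[12:20, 12:20, 12:20] = 50.0                              # a solid cube as the "geometry"
    color = np.ones((res, res, res, 3)) * np.array([0.8, 0.3, 0.2])
    rgb = render_ray(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]), sigma, color)
    print(rgb)
```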
We evaluated our methods both qualitatively and quantitatively. By utilising the power of Deep Learning, we have been able to develop novel, data-driven, and user-friendly methods for creating virtual cityscapes of unlimited size.
Metadata
Supervisors: Hogg, David; Wang, He; Kelly, Tom
Keywords: Deep Learning, Generative model, Image generation, Mesh deformation, Linear programming, Neural Rendering
Awarding institution: University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering (Leeds) > School of Computing (Leeds)
Depositing User: Mr Jialin Zhu
Date Deposited: 18 Dec 2024 15:32
Last Modified: 18 Dec 2024 15:32
Open Archives Initiative ID (OAI ID): oai:etheses.whiterose.ac.uk:35107
Download
Final eThesis - complete (pdf)
Filename: ZHU_J_Computing_PhD_2024.pdf
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License