To fill in:
- General description of skills, background knowledge, research interests and experience
My primary research interests are computer graphics and artificial intelligence, along with associated subfields such as neural rendering, differentiable rendering, NeRF, 3D reconstruction, and photorealistic rendering. I have independently studied many modern computer graphics courses and attended a cutting-edge graphics summer camp (USTC 2023). Supervised by Professor Xianzhi Li, I have participated in a research project on point cloud 3D reconstruction, aiming for publication at CVPR 2024 (joint author). I am proficient in PyTorch and Python and have substantial experience in deep learning development. I avidly follow NeRF advancements, studying related research papers such as instant-ngp and Plenoxels.
- Notable Achievements
Outstanding Freshman Award Scholarship
- Briefly describe the research work you have completed
I collaborated with a senior student on point cloud anomaly detection in an open-world setting, which entailed writing code, conducting experiments, and drafting the paper, with the aim of publication at CVPR 2024. A substantial portion of my work was implementing the methods outlined in the paper, which honed my collaborative and engineering skills. I have also actively immersed myself in graphics, completing tasks such as building a rendering pipeline and implementing path tracing, and I have followed CVPR and SIGGRAPH tutorials on neural rendering and differentiable rendering. In addition, I have read NeRF-related papers such as NeRF, instant-ngp, and Plenoxels, with a special focus on the insights these advances glean from graphics.
- Resume/CV
TOP 3 list
Faculty supervisor: Nandita Vijaykumar
Faculty Province: Ontario
Faculty University: University of Toronto
Faculty Campus: Toronto
Project Location: Toronto, Ontario
Language: English
Preferred start date: 2024-05-20 (yyyy-mm-dd)
Project ID 34962
Accelerating neural radiance fields and neural rendering
This project will tackle both accelerating neural radiance fields and addressing various challenges (e.g., handling dynamic scenes, complex scenes, editability, and generalizability) in designing more efficient neural radiance fields for novel view synthesis, 3D scene representation, and perception. Neural radiance fields have emerged as a promising direction by representing scenes as MLPs. We will perform cutting-edge research at the intersection of machine learning, graphics, computer vision, and computer systems, with the goal of publications at NeurIPS, SIGGRAPH, CVPR, ICLR, ASPLOS, ISCA, etc.
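For context on the "scenes as MLPs" formulation this description mentions, below is a minimal PyTorch sketch of a NeRF-style network: a positionally encoded 3D point plus a view direction is mapped to a volume density and an RGB color. The class name, layer widths, and frequency count here are illustrative assumptions of mine, not the project's actual design.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    # Map each coordinate to [sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1,
    # so the MLP can fit high-frequency scene detail (as in the NeRF paper).
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device, dtype=x.dtype)
    angles = x[..., None] * freqs                 # (..., 3, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., 3 * 2 * num_freqs)

class NeRFMLP(nn.Module):
    # Illustrative scene-as-MLP: (3D point, view direction) -> (density, RGB).
    def __init__(self, num_freqs=10, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 * 2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)
        self.color = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz))
        sigma = torch.relu(self.density(h))       # non-negative volume density
        rgb = self.color(torch.cat([h, view_dir], dim=-1))
        return sigma, rgb

# Example query for a batch of sample points along camera rays:
# sigma, rgb = NeRFMLP()(torch.rand(1024, 3), torch.randn(1024, 3))
```

Acceleration work in this area typically attacks the cost of querying such an MLP densely along every ray, e.g. via hashed feature grids as in instant-ngp or voxel grids as in Plenoxels.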
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Faculty supervisor: Igor Gilitschenski
Faculty Province: Ontario
Faculty University: University of Toronto
Faculty Campus: Mississauga
Project Location: Toronto, Ontario
Language: English
Preferred start date: 2024-05-01 (yyyy-mm-dd)
Project ID 32960
3D Scene Manipulation
Neural Radiance Fields (NeRFs) have been shown to be effective for 3D reconstruction with 2D supervision. Given enough data (2D images of a scene with ground-truth camera poses), NeRFs are able to generate high-quality novel views of the scene. Although a whole line of research is dedicated to making NeRFs faster, less data-hungry, and more performant, scene manipulation in NeRFs is still an important and much less explored problem. In this project, we will explore how NeRFs can be trained in a way that allows modifying a scene, such as moving objects, changing their appearance, or inpainting new objects / removing existing ones in a NeRF.
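As a reminder of how the 2D supervision mentioned above reaches a 3D representation, here is a hedged PyTorch sketch of standard NeRF volume rendering: per-sample densities and colors along each ray are alpha-composited into a pixel color, which a photometric loss against the ground-truth image can then supervise. The function name and tensor shapes are my own illustrative choices.

```python
import torch

def composite_along_ray(sigmas, rgbs, deltas):
    # Standard NeRF-style volume rendering: alpha-composite per-sample
    # densities and colors along each ray into a single pixel color.
    # Shapes: sigmas/deltas (num_rays, num_samples), rgbs (num_rays, num_samples, 3).
    alphas = 1.0 - torch.exp(-sigmas * deltas)             # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)    # light surviving past each sample
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alphas * trans                               # per-sample contribution
    return (weights[..., None] * rgbs).sum(dim=-2)         # (num_rays, 3)
```

Presumably, manipulation methods of the kind this project targets would edit the fields that produce `sigmas` and `rgbs` (e.g., for moved or inpainted objects) while leaving this compositing step unchanged; that framing is my own summary, not the project's stated approach.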
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Faculty supervisor: David Lindell
Faculty Province: Ontario
Faculty University: University of Toronto
Faculty Campus: Toronto
Project Location: Toronto, Ontario
Language: English
Preferred start date: 2024-05-20 (yyyy-mm-dd)
Project ID 34506
Generative Single-Photon Imaging
Single-photon cameras are an emerging type of sensing technology capable of measuring the precise times at which individual photons arrive at each pixel. As such, these cameras offer superb performance in ultra-low-light settings and for ultrafast imaging. Yet, despite their single-photon sensitivity, using these cameras to image in near-complete darkness remains challenging because photons arrive at the sensor stochastically, and very few photons arrive over any reasonable exposure period. Reconstructing a clean image from only a few captured photons is a highly ill-posed problem. In this project, we will explore how to leverage emerging generative models and pretrained foundation models to enable imaging in near-complete darkness, where camera pixels record far fewer than one photon on average. Specifically, we will perform image reconstruction by sampling the output of the generative model, conditioned on the accumulated photons at each pixel. Using this conditioning mechanism, the generative model will naturally scale to any lighting condition: for well-lit environments and relatively noise-free captured images (i.e., with many photons), the output of the generative model would match the input image used for conditioning; in the extreme case where no photons are captured, the generative model performs unconditional synthesis. For cases where very few photons are captured, the generative model can be repeatedly sampled to synthesize plausible images based on the information provided by each photon. The project will consist of three main steps: (1) creating a simulated dataset of images captured by a single-photon camera; (2) training or fine-tuning pre-trained generative models on this dataset; and (3) fine-tuning and evaluating the model using data captured from a prototype single-photon camera. A successful project will demonstrate this new imaging paradigm for single-photon image reconstruction and ultra-low-light imaging, and will culminate in a paper submission to a top-tier computer vision conference.
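To make step (1) concrete, here is a minimal sketch of simulating single-photon measurements under a simple Poisson photon-arrival model, matching the stochastic arrivals the description emphasizes. The function name, the linear radiance-to-rate mapping, and the omission of sensor effects (dark counts, dead time, crosstalk) are simplifying assumptions on my part, not the project's specified pipeline.

```python
import torch

def simulate_photon_counts(radiance, exposure_scale=0.05, generator=None):
    # Photon arrivals are stochastic; per-pixel counts over an exposure are
    # commonly modeled as Poisson with rate proportional to scene radiance.
    # exposure_scale << 1 emulates near-complete darkness, where the mean
    # count per pixel falls well below one photon.
    rate = radiance.clamp(min=0.0) * exposure_scale
    return torch.poisson(rate, generator=generator)

# Example: a well-lit image becomes a mostly-zero photon-count map that
# could serve as the per-pixel conditioning signal for a generative model.
radiance = torch.rand(1, 3, 256, 256)
counts = simulate_photon_counts(radiance)
```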