Songyin Wu

University of California, Santa Barbara

Short Bio

My name is Songyin Wu (吴松隐). I am a Ph.D. student at the University of California, Santa Barbara, advised by Prof. Lingqi Yan. My research mainly focuses on rendering. I’m especially interested in exploring novel representations for efficient real-time rendering.

Interests

  • Rendering
  • Inverse Graphics

Education

  • BSc in Computer Science, 2017 – 2021

    Turing Class, Peking University

  • Ph.D. Student in Computer Science, 2022 – present

    University of California, Santa Barbara

Recent Publications

GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering

Real-time rendering has been embracing ever more demanding effects, such as ray tracing. However, rendering such effects at high resolutions and high frame rates remains challenging. Frame extrapolation methods boost the frame rate by generating future frames from previous ones and, unlike frame interpolation methods such as DLSS 3 and FSR 3, introduce no additional latency. Extrapolation is the harder task, however, because information is missing in disoccluded regions and future motion is complex; recent methods also carry a high engine-integration cost because they require G-buffers as input. We propose GFFE, a G-buffer-free frame extrapolation method with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and the different types of disocclusion, and design corresponding modules in the extrapolation block to handle them. A lightweight shading-correction network then corrects shading and improves overall quality. GFFE achieves comparable or better results than previous interpolation and G-buffer-dependent extrapolation methods, with more efficient performance and easier integration.
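To make the idea concrete, here is a minimal, hypothetical sketch (in Python/NumPy) of the core extrapolation step: with no G-buffer available, per-pixel motion is assumed to have been estimated from previously rendered frames (the `flow` input), the latest frame is forward-warped one step along it, and uncovered pixels are marked as disocclusions for a correction pass to fill. It is a toy illustration of frame extrapolation in general, not GFFE's actual pipeline.

    # Toy sketch of G-buffer-free frame extrapolation (illustrative only).
    # `flow` is assumed to be a per-pixel motion field estimated from the
    # two most recent rendered frames, e.g. by an optical-flow pass.
    import numpy as np

    def extrapolate_frame(frame_t, flow):
        """Forward-warp frame_t by one step of its estimated motion.

        frame_t : (H, W, 3) float array, the most recent rendered frame.
        flow    : (H, W, 2) float array, per-pixel (dx, dy) motion in pixels.
        Returns the extrapolated frame and a disocclusion mask marking the
        holes a shading-correction network would have to fill.
        """
        H, W, _ = frame_t.shape
        out = np.zeros_like(frame_t)
        covered = np.zeros((H, W), dtype=bool)
        ys, xs = np.mgrid[0:H, 0:W]
        # Destination of each source pixel after one more step of motion.
        xd = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
        yd = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
        out[yd, xd] = frame_t[ys, xs]   # nearest-neighbor forward splat
        covered[yd, xd] = True
        return out, ~covered            # ~covered marks disoccluded pixels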

Unified Gaussian Primitives for Scene Representation and Rendering

Searching for a unified scene representation remains a research challenge in computer graphics. Traditional mesh-based representations are unsuitable for dense, fuzzy elements and introduce additional complexity for filtering and differentiable rendering. Conversely, voxel-based representations struggle to model hard surfaces and suffer from heavy memory requirements. We propose a general-purpose rendering primitive based on the 3D Gaussian distribution for unified scene representation, featuring versatile appearance ranging from glossy surfaces to fuzzy elements, as well as physically based scattering that enables accurate global illumination. We formulate the rendering theory for the primitive based on non-exponential transport and derive efficient rendering operations compatible with Monte Carlo path tracing. The new representation can be converted from different sources, including meshes and 3D Gaussian splatting, and, thanks to its differentiability, further refined via transmittance optimization. We demonstrate the versatility of our representation in rendering applications such as global illumination and appearance editing, while naturally supporting arbitrary lighting conditions. Additionally, we compare our representation to existing volumetric representations, highlighting its efficiency at reproducing detail.
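For background on the notation, the two formulas below contrast a 3D Gaussian density kernel with the classical exponential transmittance that non-exponential transport generalizes; the paper's actual derivation is more involved, so treat this only as a reminder of the standard quantities.

    % Density of a 3D Gaussian primitive with mean mu and covariance Sigma:
    \rho(\mathbf{x}) = c\,\exp\!\Big(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big)
    % Classical exponential transmittance along a ray x(s); non-exponential
    % transport replaces exp(-tau) with a more general decreasing function
    % of the optical depth tau:
    T(t) = \exp\!\Big(-\int_0^t \sigma_t(\mathbf{x}(s))\,ds\Big)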

ExtraSS: A Framework for Joint Spatial Super Sampling and Frame Extrapolation

We introduce ExtraSS, a novel framework that combines spatial super sampling and frame extrapolation to enhance real-time rendering performance. By integrating the two techniques, our approach balances performance and quality, generating temporally stable, high-quality, high-resolution results. Using lightweight warping modules and the ExtraSSNet for refinement, we exploit spatial-temporal information to improve rendering sharpness, handle moving shading accurately, and generate temporally stable results. Computational cost is significantly reduced compared to traditional rendering, enabling higher frame rates and alias-free, high-resolution results.
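As an illustration of the generic building block behind such warping modules, here is a small bilinear backward-warping sketch in Python/NumPy; the motion-vector input `mv` and the function name are assumptions for the example, not ExtraSS's actual interface.

    # Bilinear backward warping with motion vectors (illustrative only).
    import numpy as np

    def backward_warp(prev_frame, mv):
        """Sample prev_frame at the positions given by motion vectors.

        prev_frame : (H, W, 3) float array, the previous rendered frame.
        mv         : (H, W, 2) float array, (dx, dy) mapping each current-
                     frame pixel back to its previous-frame position.
        """
        H, W, _ = prev_frame.shape
        ys, xs = np.mgrid[0:H, 0:W]
        x = np.clip(xs + mv[..., 0], 0.0, W - 1.001)
        y = np.clip(ys + mv[..., 1], 0.0, H - 1.001)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        fx, fy = (x - x0)[..., None], (y - y0)[..., None]
        # Bilinear interpolation of the four neighboring texels.
        return ((1 - fx) * (1 - fy) * prev_frame[y0, x0]
                + fx * (1 - fy) * prev_frame[y0, x0 + 1]
                + (1 - fx) * fy * prev_frame[y0 + 1, x0]
                + fx * fy * prev_frame[y0 + 1, x0 + 1])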

Projects

Microfacet Material Energy Compensation

We proposed a neural-network approach to microfacet material energy compensation. Our method takes only the roughness and F0 parameters of the GGX model and predicts energy-compensated BRDF values. Inference is highly efficient, and the model handles isotropic, anisotropic, and colored materials.
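For reference, one widely used analytic form of the missing multiple-scattering energy, which such a network can learn to approximate directly from roughness and F0, is the Kulla–Conty compensation lobe built from the directional albedo E(mu) of the single-scattering GGX lobe; this is standard background, not our exact model.

    % Directional albedo of the single-scattering lobe:
    E(\mu_o) = \int_{H^2} f_{ss}(\omega_o, \omega_i)\,\cos\theta_i \, d\omega_i
    % Hemispherically averaged albedo:
    E_{avg} = 2\int_0^1 E(\mu)\,\mu\,d\mu
    % Multiple-scattering compensation lobe (Kulla and Conty 2017):
    f_{ms}(\omega_o, \omega_i) = \frac{\big(1 - E(\mu_o)\big)\big(1 - E(\mu_i)\big)}{\pi\,(1 - E_{avg})}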

Experience

Research Scientist Intern

Meta

Jun 2024 – Present · Washington, USA

Research Scientist Intern

Intel Corporation

Jun 2023 – Sep 2023 · California, USA

Research Scientist Intern

Microsoft Research Asia

Dec 2021 – Jul 2022 · Beijing, China