Optimizing Augmented Reality Game Rendering Pipeline Using Convolutional Neural Networks: Multi-layer Convolutional Acceleration and Texture Enhancement


Zheng Hongkai

Abstract

This paper presents a novel approach to optimizing the augmented reality (AR) game rendering pipeline by leveraging Convolutional Neural Networks (CNNs) for real-time texture enhancement and rendering acceleration. The integration of CNNs into the Unreal Engine rendering pipeline enables the enhancement of low-resolution textures, improving visual quality without compromising performance. The CNN model employed in this study achieves significant improvements in both texture quality, as measured by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and rendering speed. The results demonstrate a reduction in rendering time by approximately 25%, while maintaining an average frame rate of 45 FPS, making the system suitable for real-time AR applications. User perception tests confirm that the CNN-enhanced method provides superior visual quality and a more engaging user experience compared to traditional methods. The proposed approach offers a promising solution to the challenges of high-resolution texture rendering and real-time performance in AR games. Future research could explore the use of advanced CNN architectures and hybrid models to further improve rendering efficiency and visual fidelity, particularly for mobile platforms.
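The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of CNN-based texture enhancement step it describes: a small SRCNN-style network that refines a bicubically upscaled low-resolution texture, together with the PSNR metric used for evaluation. The class name, layer sizes, and helper functions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of CNN-based texture enhancement (SRCNN-style); assumes PyTorch.
# Layer sizes and names are illustrative, not the architecture used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureEnhancerCNN(nn.Module):
    """Three-layer convolutional network that refines an upscaled low-res texture."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.feature_extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.nonlinear_map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, low_res: torch.Tensor, scale: int = 2) -> torch.Tensor:
        # Upsample first (bicubic), then let the CNN restore high-frequency detail.
        x = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.feature_extract(x))
        x = F.relu(self.nonlinear_map(x))
        return torch.clamp(self.reconstruct(x), 0.0, 1.0)


def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = F.mse_loss(pred, target)
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()


if __name__ == "__main__":
    model = TextureEnhancerCNN()
    low_res_texture = torch.rand(1, 3, 256, 256)  # stand-in for a game texture
    ground_truth = torch.rand(1, 3, 512, 512)     # stand-in for the high-res reference
    enhanced = model(low_res_texture, scale=2)
    print("PSNR (dB):", psnr(enhanced, ground_truth))
```

SSIM would typically be computed with an existing library such as torchmetrics or scikit-image rather than by hand, and in the pipeline described above the enhanced textures would be fed back into the Unreal Engine renderer; those integration details are beyond what the abstract specifies.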
