Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet

Bibliographic Details
Main Authors: Janapati Venkata Krishna, Priyanka Singh, Regonda Nagaraju, Setti Vidya Sagar Appaji, Attuluri Uday Kiran, K. Spandana
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10849542/
Summary: The metaverse is a fast-growing frontier in virtual reality that requires advanced visual rendering techniques to deliver a better user experience. Most existing approaches struggle with wide-angle views, computational efficiency, personalization, and low energy consumption, all of which are needed for the best possible user experience and engagement. This paper addresses these challenges by proposing a set of models tailored to optimize visual rendering in metaverse deployments. The Cooperative Insect Eye Model (CIEM) draws on the compound eyes of insects to render wide-angle, high-resolution panoramas with low distortion, increasing field-of-view coverage by 20% and reducing rendering time by 15%. The Multi-Scale Residual Attention Network (MSRANet) combines residual learning with attention mechanisms at multiple scales, reducing latency by 25% and improving image quality by 10% to balance high visual fidelity with computational efficiency. Adaptive User Profiling and Vision Enhancement (AUPVE) adapts visual settings dynamically to real-time user data, raising user satisfaction by 30% and session duration by 20%. Anticipatory Scene Rendering (ASR) uses predictive modeling to pre-render scenes based on user behavior, reducing latency by 40% with 85% prediction accuracy for seamless navigation. Finally, Bioinspired Energy-Efficient Rendering (BEER) borrows from the energy-efficient visual processing of the human brain through a spiking neural network, reducing energy consumption by 35% without degrading image quality. Together, these models substantially advance the state of the art in metaverse rendering, with far-reaching implications for future virtual reality environments by making the user experience more immersive, personalized, and efficient.
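The record itself gives no implementation details, so the two sketches below are purely illustrative. The first is a minimal multi-scale residual attention block of the kind the summary attributes to MSRANet, written in PyTorch; the layer layout, channel counts, and the squeeze-and-excitation style channel attention are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of a multi-scale residual attention block (assumed design,
# not the published MSRANet): parallel convolutions at two scales, a 1x1
# fusion, channel attention, and a residual connection.
import torch
import torch.nn as nn

class MultiScaleResidualAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with different receptive-field scales.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Lightweight channel attention (squeeze-and-excitation style).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        fused = self.act(self.fuse(multi_scale))
        # Attention-weighted features plus the identity (residual) path.
        return x + fused * self.attn(fused)

# Example: refine a 64-channel feature map from a rendering pipeline.
features = torch.randn(1, 64, 128, 128)
print(MultiScaleResidualAttentionBlock(64)(features).shape)  # torch.Size([1, 64, 128, 128])
```

The second sketch makes the Anticipatory Scene Rendering idea concrete as a simple look-ahead cache: predict where the user will look next and render that view before it is requested. The linear yaw extrapolation, the render_scene() placeholder, and the cache policy are hypothetical stand-ins for the paper's predictive model.

```python
# Minimal sketch of anticipatory (predict-then-pre-render) scene rendering.
# render_scene() and the linear motion predictor are illustrative placeholders.
from collections import deque

def render_scene(yaw_deg: float) -> str:
    # Stand-in for an expensive render call keyed by viewing angle.
    return f"frame@{round(yaw_deg)}"

class AnticipatoryRenderer:
    def __init__(self, horizon_s: float = 0.1):
        self.history = deque(maxlen=2)  # recent (time, yaw) samples
        self.cache = {}                 # pre-rendered frames keyed by yaw bucket
        self.horizon_s = horizon_s      # how far ahead to predict, in seconds

    def update(self, t: float, yaw: float) -> None:
        self.history.append((t, yaw))
        if len(self.history) == 2:
            (t0, y0), (t1, y1) = self.history
            velocity = (y1 - y0) / (t1 - t0)            # deg/s head-turn rate
            predicted = y1 + velocity * self.horizon_s  # extrapolated yaw
            self.cache.setdefault(round(predicted), render_scene(predicted))

    def frame_for(self, yaw: float) -> str:
        # Serve the pre-rendered frame when the prediction was close enough.
        return self.cache.get(round(yaw), render_scene(yaw))

r = AnticipatoryRenderer()
r.update(0.00, 10.0)
r.update(0.05, 12.0)      # user turning right at ~40 deg/s
print(r.frame_for(16.0))  # likely served from the anticipatory cache
```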
ISSN: 2169-3536