Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet
Main Authors: | Janapati Venkata Krishna, Priyanka Singh, Regonda Nagaraju, Setti Vidya Sagar Appaji, Attuluri Uday Kiran, K. Spandana |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | CIEM; energy-efficient rendering; metaverse; MSRANet; user-centric rendering |
Online Access: | https://ieeexplore.ieee.org/document/10849542/ |
author | Janapati Venkata Krishna; Priyanka Singh; Regonda Nagaraju; Setti Vidya Sagar Appaji; Attuluri Uday Kiran; K. Spandana |
author_facet | Janapati Venkata Krishna; Priyanka Singh; Regonda Nagaraju; Setti Vidya Sagar Appaji; Attuluri Uday Kiran; K. Spandana |
author_sort | Janapati Venkata Krishna |
collection | DOAJ |
description | The metaverse is a fast-growing frontier in virtual reality that demands new visual rendering techniques to deliver a better user experience. Most existing approaches struggle with wide-angle views and computational efficiency, and with providing personalization at low energy consumption for the best possible user experience and engagement. This paper addresses these challenges by proposing a set of models tailored to optimize visual rendering in metaverse deployments. The Cooperative Insect Eye Model (CIEM) takes inspiration from the compound eyes of insects to render wide-angle, high-resolution panoramas with low distortion, increasing field-of-view coverage by 20% and reducing rendering time by 15%. The Multi-Scale Residual Attention Network (MSRANet) combines residual learning with attention mechanisms at multiple scales, achieving a 25% latency reduction and a 10% image-quality improvement while balancing high visual fidelity with computational efficiency. Adaptive User Profiling and Vision Enhancement (AUPVE) adjusts visual settings dynamically based on real-time user data, raising satisfaction by 30% and session time by 20%. Anticipatory Scene Rendering (ASR) uses predictive modeling to pre-render scenes based on user behavior, reducing latency by 40% with 85% prediction accuracy for seamless navigation. Finally, Bioinspired Energy-Efficient Rendering (BEER) borrows from the energy-efficient visual processing of the human brain through a spiking neural network, reducing energy consumption by 35% without degrading image quality. Together, these models substantially advance the state of the art in metaverse rendering, with far-reaching implications for future virtual reality environments, making the user experience more immersive, personalized, and efficient. |
format | Article |
id | doaj-art-8c5c99aa22be44c0ab3d737e8cb51f74 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-8c5c99aa22be44c0ab3d737e8cb51f74 | 2025-02-06T00:00:38Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2025-01-01 | vol. 13, pp. 22176-22196 | DOI 10.1109/ACCESS.2025.3532634 | article 10849542 | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet | Janapati Venkata Krishna (https://orcid.org/0009-0009-7360-3009), IoT & Robotics and AI and AR & VR Departments, Institute of Engineering and Technology, Srinivas University, Mangaluru, India; Priyanka Singh (https://orcid.org/0000-0001-5476-6501), School of Computer Science and Engineering, VIT-AP University, Amaravati, India; Regonda Nagaraju (https://orcid.org/0000-0002-9850-1721), Department of CSE-AI and ML, School of Engineering, Malla Reddy University, Hyderabad, Telangana, India; Setti Vidya Sagar Appaji, Department of CSM, Raghu Engineering College, Visakhapatnam, Andhra Pradesh, India; Attuluri Uday Kiran, Department of CSE, CMR Technical Campus, Hyderabad, Telangana, India; K. Spandana, Department of CSE, CBIT, Hyderabad, India | https://ieeexplore.ieee.org/document/10849542/ | CIEM; energy-efficient rendering; metaverse; MSRANet; user-centric rendering |
spellingShingle | Janapati Venkata Krishna; Priyanka Singh; Regonda Nagaraju; Setti Vidya Sagar Appaji; Attuluri Uday Kiran; K. Spandana | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet | IEEE Access | CIEM; energy-efficient rendering; metaverse; MSRANet; user-centric rendering |
title | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet |
title_full | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet |
title_fullStr | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet |
title_full_unstemmed | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet |
title_short | Design of an Improved Method for Visual Rendering in the Metaverse Using CIEM and MSRANet |
title_sort | design of an improved method for visual rendering in the metaverse using ciem and msranet |
topic | CIEM; energy-efficient rendering; metaverse; MSRANet; user-centric rendering |
url | https://ieeexplore.ieee.org/document/10849542/ |
work_keys_str_mv | AT janapativenkatakrishna designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet AT priyankasingh designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet AT regondanagaraju designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet AT settividyasagarappaji designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet AT attuluriudaykiran designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet AT kspandana designofanimprovedmethodforvisualrenderinginthemetaverseusingciemandmsranet |
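The record's abstract describes CIEM as composing wide-angle panoramas from many narrow views, in the manner of a compound insect eye, but gives no implementation detail. As a rough sketch of that idea only, the function below (a hypothetical `ommatidia_yaws`, not from the paper) computes yaw angles for overlapping narrow-FOV "ommatidium" cameras that together cover a wide field of view; all angle values are illustrative assumptions.

```python
import math

def ommatidia_yaws(total_fov_deg, eye_fov_deg, overlap_deg):
    """Yaw angles (degrees) for narrow-FOV 'ommatidium' cameras whose
    views tile a wide panorama, with adjacent views overlapping so the
    seams can be blended with low distortion. Illustrative sketch only."""
    step = eye_fov_deg - overlap_deg  # new angular coverage added per eye
    n = math.ceil((total_fov_deg - eye_fov_deg) / step) + 1
    start = -total_fov_deg / 2 + eye_fov_deg / 2  # center of leftmost eye
    return [start + i * step for i in range(n)]

# Cover a 180-degree panorama with 60-degree eyes overlapping by 10 degrees.
yaws = ommatidia_yaws(180, 60, 10)
```

With these assumed numbers, four eyes suffice, and the leftmost and rightmost views reach the panorama edges.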
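MSRANet is described in the abstract as residual learning combined with attention at multiple scales. The paper's architecture is not given in this record, so the block below is only a minimal NumPy sketch of that general pattern: sigmoid attention maps computed from average-pooled features at several scales gate the input, with a residual connection preserving it. Function names and scale choices are assumptions.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool an (H, W, C) feature map by factor k (H, W divisible by k)."""
    h, w, c = x.shape
    return x.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample(x, k):
    """Nearest-neighbour upsample by factor k along both spatial axes."""
    return x.repeat(k, axis=0).repeat(k, axis=1)

def msra_block(x, scales=(1, 2, 4)):
    """Multi-scale residual attention sketch: attention maps computed at
    several spatial scales gate the features; the residual connection
    keeps the original signal intact."""
    att = np.zeros_like(x)
    for k in scales:
        pooled = avg_pool(x, k)               # coarser context at scale k
        gate = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid attention map
        att += upsample(gate, k)              # back to full resolution
    att /= len(scales)                        # average across scales
    return x + att * x                        # residual + attended features

x = np.random.rand(8, 8, 3)
y = msra_block(x)
```

The output keeps the input's shape, and because the gates lie in (0, 1) the block can only amplify, never suppress, the residual signal in this simplified form.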
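ASR is summarized as pre-rendering scenes predicted from user behavior. One simple way to realize such a predictor, not necessarily the paper's, is a first-order Markov model over scene transitions; the sketch below (hypothetical `ScenePredictor`, with made-up scene names) counts observed moves and returns the most likely next scenes to pre-render.

```python
from collections import Counter, defaultdict

class ScenePredictor:
    """First-order Markov sketch of anticipatory scene rendering: count
    observed scene transitions, then pre-render the most likely next
    scenes before the user arrives. Illustrative only."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, current, nxt):
        """Record one observed move from scene `current` to scene `nxt`."""
        self.transitions[current][nxt] += 1

    def prerender_candidates(self, current, top_k=2):
        """Return up to top_k most frequent successors of `current`."""
        return [s for s, _ in self.transitions[current].most_common(top_k)]

p = ScenePredictor()
for a, b in [("plaza", "market"), ("plaza", "market"),
             ("plaza", "arena"), ("market", "plaza")]:
    p.observe(a, b)
candidates = p.prerender_candidates("plaza")  # most frequent successors first
```

A real system would replace the raw counts with a learned model and weight candidates by rendering cost, but the pre-render-top-k loop stays the same.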
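BEER is said to cut energy use via a spiking neural network. The record gives no network details, so as a minimal illustration of why spiking computation is energy-efficient, the sketch below implements a single leaky integrate-and-fire neuron: the membrane potential leaks each step and work is triggered only at sparse spike events. The threshold and leak values are assumptions.

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron sketch: the membrane potential v
    integrates each input, decays by `leak` per step, and emits a spike
    (then resets) only when it crosses `threshold`. Downstream work is
    needed only on the sparse spike events."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of the input
        if v >= threshold:
            spikes.append(1)      # spike: this is the only 'expensive' event
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

spikes = lif_spikes([0.3, 0.4, 0.5, 0.1, 0.9, 0.2])
```

Most time steps produce no spike, so a spiking renderer touches downstream computation far less often than a frame-by-frame one, which is the intuition behind the reported energy savings.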