Showing 1 - 20 results of 679 for search '(transformer OR transformed)-based encoder'
  4.

    An enhanced network for extracting tunnel lining defects using transformer encoder and aggregate decoder by Bo Guo, Zhihai Huang, Haitao Luo, Perpetual Hope Akwensi, Ruisheng Wang, Bo Huang, Tsz Nam Chan

    Published 2025-02-01
    “…We propose a deep network model utilizing an encoder–decoder framework that integrates Transformer and convolution for comprehensive defect extraction. …”
    Article
  5.

    Noise robust aircraft trajectory prediction via autoregressive transformers with hybrid positional encoding by Youyou Li, Yuxiang Fang, Teng Long

    Published 2025-04-01
    “…This study introduces the Noise-Robust Autoregressive Transformer, a novel model that enhances prediction reliability by integrating noise-regularized embeddings within a multi-head attention mechanism equipped with hybrid positional encoding. …”
    Article
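
The abstract above does not spell out what the hybrid positional encoding looks like; the following is a minimal illustrative sketch of one common reading, a fixed sinusoidal table combined with a learned per-position embedding. The class name, dimensions, and initialization are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: fixed sinusoidal encodings summed with a learned
# per-position embedding. d_model is assumed to be even.
import math
import torch
import torch.nn as nn

class HybridPositionalEncoding(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        # Fixed sinusoidal table (Vaswani et al.), stored as a non-trainable buffer.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("sinusoidal", pe)
        # Learned component, initialized near zero so training starts from the fixed code.
        self.learned = nn.Parameter(torch.zeros(max_len, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        return x + self.sinusoidal[:seq_len] + self.learned[:seq_len]
```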
  6.

    The vestibular system implements a linear-nonlinear transformation in order to encode self-motion. by Corentin Massot, Adam D Schneider, Maurice J Chacron, Kathleen E Cullen

    Published 2012-01-01
    “…Although it is well established that the neural code representing the world changes at each stage of a sensory pathway, the transformations that mediate these changes are not well understood. …”
    Article
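
A linear-nonlinear (LN) cascade, the class of model this abstract refers to, can be summarized in a few lines: the stimulus is passed through a linear temporal filter, and a static nonlinearity then maps the filtered signal to a firing rate. The sketch below is purely illustrative; the filter shape, time constant, and rectifying nonlinearity are assumptions, not the fitted models from the study.

```python
# Illustrative LN-cascade sketch: linear temporal filtering followed by a
# static rectifying nonlinearity.
import numpy as np

def ln_model(stimulus: np.ndarray, kernel: np.ndarray,
             gain: float = 1.0, threshold: float = 0.0) -> np.ndarray:
    """Return a predicted firing rate for a 1-D stimulus (e.g., head velocity)."""
    linear_drive = np.convolve(stimulus, kernel, mode="same")   # linear stage
    return gain * np.maximum(linear_drive - threshold, 0.0)     # static nonlinearity

# Example: an exponentially decaying filter applied to a stand-in velocity trace.
t = np.arange(0.0, 0.1, 0.001)            # 100 ms filter at 1 ms resolution (assumed)
kernel = np.exp(-t / 0.02)                # 20 ms time constant (assumed)
stimulus = np.random.randn(1000)          # placeholder self-motion signal
rate = ln_model(stimulus, kernel)
```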
  9.

    Transformer model with external token memories and attention for PersonaChat by Taize Sun, Katsuhide Fujita

    Published 2025-07-01
    “…This paper introduces a transformer model with external token memory and attention (Tmema) that is inspired by humans’ ability to define and remember each object in a chat. …”
    Article
  10.

    Convolutional Swin Encoder by Aditya Majithia, Arthur Paul Pedersen, Michael Grossberg

    Published 2025-05-01
    “…It introduces the Convolutional Swin Encoder (CSE), a novel architecture combining Visual Geometry Group Network (VGGNet) and Swin Transformer blocks. …”
    Article
  14.

    SET: A Shared-Encoder Transformer Scheme for Multi-Sensor, Multi-Class Fault Classification in Industrial IoT by Kamran Sattar Awaisi, Qiang Ye, Srinivas Sampalli

    Published 2025-01-01
    “…Leveraging the transformer architecture, the SET uses a shared encoder with positional encoding and multi-head self-attention mechanisms to capture complex temporal patterns in sensor data. …”
    Article
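
As a rough illustration of the shared-encoder idea described above, the sketch below applies a single transformer encoder, with positional encoding and multi-head self-attention, to windows of multi-sensor readings and classifies the pooled representation. Module names, dimensions, and the pooling choice are assumptions, not the SET authors' code.

```python
# Illustrative sketch: one transformer encoder shared across all sensors and
# fault classes; input windows may not exceed seq_len time steps.
import torch
import torch.nn as nn

class SharedEncoderClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, seq_len: int = 128,
                 d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)            # per-step sensor readings -> d_model
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))   # learned positional encoding (assumed form)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)  # single shared encoder
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features), a window of multi-sensor readings
        h = self.input_proj(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h)                      # multi-head self-attention over time
        return self.head(h.mean(dim=1))          # mean-pool over time, then classify the fault type
```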
  16.

    Urban Sprawl Monitoring by VHR Images Using Active Contour Loss and Improved U-Net with Mix Transformer Encoders by Miguel Chicchon, Francesca Colosi, Eva Savina Malinverni, Francisco James León Trujillo

    Published 2025-04-01
    “…This study explores the effectiveness of combining Mix Transformer encoders with U-Net architectures to improve feature extraction and spatial context understanding in VHR satellite imagery. …”
    Article
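
One off-the-shelf way to realize the combination described above, assuming the segmentation_models_pytorch package, is a U-Net with a Mix Transformer (MiT) encoder. The configuration below is illustrative only and is not the study's actual setup.

```python
# Illustrative configuration only; encoder variant, weights, and class count are assumptions.
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="mit_b0",       # Mix Transformer encoder backbone
    encoder_weights="imagenet",  # pretrained weights
    in_channels=3,               # RGB VHR imagery
    classes=2,                   # e.g., built-up area vs. background
)
```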
  17.

    Medical Report Generation With Knowledge Distillation and Multi-Stage Hierarchical Attention in Vision Transformer Encoder and GPT-2 Decoder by Hilya Tsaniya, Chastine Fatichah, Nanik Suciati, Takashi Obi, Joong-Sun Lee

    Published 2025-01-01
    “…Our approach leverages knowledge distillation with a Vision Transformer (ViT) as the image encoder to capture complex visual features: knowledge is transferred from an ensemble of Convolutional Neural Networks (CNNs), including VGG16, InceptionV3, and DenseNet121, to the ViT, ensuring rich and diverse feature extraction. …”
    Article
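
The generic pairing of a Vision Transformer encoder with a GPT-2 decoder can be wired up as follows, assuming the Hugging Face transformers library. This sketch shows only the encoder-decoder wiring; it does not include the paper's knowledge distillation or multi-stage hierarchical attention, and the checkpoints are illustrative.

```python
# Illustrative wiring of a ViT encoder to a GPT-2 decoder for image-to-text generation.
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, GPT2TokenizerFast

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# GPT-2 has no dedicated decoder-start or padding tokens, so reuse its BOS/EOS tokens.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.eos_token_id
```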
  19.

    Structure-Aware and Format-Enhanced Transformer for Accident Report Modeling by Wenhua Zeng, Wenhu Tang, Diping Yuan, Hui Zhang, Pinsheng Duan, Shikun Hu

    Published 2025-07-01
    “…SAFE-Transformer adopts a dual-stream encoding architecture to separately model symbolic section features and heading text, integrates hierarchical depth and format types into positional encodings, and introduces a dynamic gating unit to adaptively fuse headings with paragraph semantics. …”
    Article
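
The dynamic gating unit described above is not specified further in this excerpt; the sketch below shows one plausible interpretation, a learned sigmoid gate that adaptively mixes heading and paragraph representations. It is an illustrative reading, not SAFE-Transformer's code.

```python
# Illustrative gated-fusion sketch: a sigmoid gate weights two encoding streams.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, heading: torch.Tensor, paragraph: torch.Tensor) -> torch.Tensor:
        # heading, paragraph: (batch, d_model) encodings from the two streams
        g = torch.sigmoid(self.gate(torch.cat([heading, paragraph], dim=-1)))
        return g * heading + (1 - g) * paragraph   # adaptively weight the two sources
```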