Showing 3,981 - 4,000 results of 13,689 for search 'data (visualisation OR visualization)', query time: 0.35s
  1. 3981

    InterDuPa-UAV: A UAV-based dataset for the classification of intercropped durian and papaya trees (Zenodo) by Quang Hieu Ngo, Trong Hieu Luu, Phuc Vinh Nguyen, Ilias El Makrini, Bram Vanderborght, Hoang-Long Cao

    Published 2025-08-01
    “…The growing use of Unmanned Aerial Vehicles (UAVs) in agriculture has made data collection more efficient and cost-effective, enabling the development of advanced solutions to enhance agricultural productivity. …”
    Article
  5. 3985

    Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools by Milan Rybář, Riccardo Poli, Ian Daly

    Published 2025-04-01
    “…We investigated the feasibility of semantic neural decoding to develop a new type of brain-computer interface (BCI) that allows direct communication of semantic concepts, bypassing the character-by-character spelling used in current BCI systems. We provide data from our study to differentiate between two semantic categories of animals and tools during a silent naming task and three intuitive sensory-based imagery tasks using visual, auditory, and tactile perception. …”
    Article
  7. 3987

    Unsupervised clustering based coronary artery segmentation by Belén Serrano-Antón, Manuel Insúa Villa, Santiago Pendón-Minguillón, Santiago Paramés-Estévez, Alberto Otero-Cacho, Diego López-Otero, Brais Díaz-Fernández, María Bastos-Fernández, José R. González-Juanatey, Alberto P. Muñuzuri

    Published 2025-03-01
    “…Abstract Background The acquisition of 3D geometries of coronary arteries from computed tomography coronary angiography (CTCA) is crucial for clinicians, enabling visualization of lesions and supporting decision-making processes. …”
    Article
  8. 3988

    An fMRI Dataset on Occluded Image Interpretation for Human Amodal Completion Research by Bao Li, Li Tong, Chi Zhang, Panpan Chen, Long Cao, Hui Gao, ZiYa Yu, LinYuan Wang, Bin Yan

    Published 2025-07-01
    “…Abstract In everyday environments, partially occluded objects are more common than fully visible ones. Despite their visual incompleteness, the human brain can reconstruct these objects to form coherent perceptual representations, a phenomenon referred to as amodal completion. …”
    Article
  9. 3989

    An fNIRS dataset for Multimodal Speech Comprehension in Normal Hearing Individuals and Cochlear Implant Users by András Bálint, Wilhelm Wimmer, Christian Rummel, Marco Caversaccio, Stefan Weder

    Published 2025-07-01
    “…Participants completed a clinically relevant speech comprehension task using the German Matrix Sentence Test (OLSA) under speech-in-quiet, speech-in-noise, audiovisual and visual speech (i.e., lipreading) conditions. fNIRS recordings covered key cortical regions involved in speech processing, including the prefrontal, temporal, and visual cortices. …”
    Article
  11. 3991

    A multimodal experimental dataset on agile software development team interactions by Diego Miranda, Carlos Escobedo, Dayana Palma, Rene Noel, Adrián Fernández, Cristian Cechinel, Jaime Godoy, Roberto Munoz

    Published 2025-08-01
    “…The resulting dataset includes audio recordings of verbal interactions and non-verbal behaviour data, such as body posture, facial expressions, visual attention, and gestures, captured using MediaPipe, YOLOv8, and DeepSort. …”
    Article
  14. 3994

    A multiple session dataset of NIRS recordings from stroke patients controlling brain–computer interface by Mikhail R. Isaev, Olesya A. Mokienko, Roman Kh. Lyukmanov, Ekaterina S. Ikonnikova, Anastasiia N. Cherkasova, Natalia A. Suponeva, Michael A. Piradov, Pavel D. Bobrov

    Published 2024-10-01
    “…The BCI was controlled by imagined hand movements; visual feedback was presented based on the real-time data classification results. …”
    Article
  16. 3996

    The Eye Movement Database of Passage Reading in Vertically Written Traditional Mongolian by Yaqian Borogjoon Bao, Xingshan Li, Victor Kuperman

    Published 2025-03-01
    “…As one of the very few actively used vertical writing systems, these data offer unique insights into the cognitive and visual processing demands of vertical reading. …”
    Article
  17. 3997

    A Database of Underwater Radiated Noise from Small Vessels in the Coastal Area by Mark Shipton, Juraj Obradović, Fausto Ferreira, Nikola Mišković, Tomislav Bulat, Neven Cukrov, Roee Diamant

    Published 2025-02-01
    “…As such, little data is available for the URN of unidentified vessels of opportunity (VOO), especially for small vessels that do not carry an automatic identification system (AIS). …”
    Article
  18. 3998

    A longitudinal EEG dataset of event-related potential by Yufeng Zhang, Hongxin Zhang, Yixuan Li, Yijun Wang, Xiaorong Gao, Chen Yang

    Published 2025-06-01
    “…This dataset provides comprehensive and high-quality data for the development of EEG-based identity authentication systems. …”
    Article
  19. 3999

    Dataset for Evaluating the Production of Phonotactically Legal and Illegal Pseudowords by Valérie Chanoine, Snežana Todorović, Bruno Nazarian, Jean-Michel Badier, Khoubeib Kanzari, Andrea Brovelli, Sonja A. Kotz, Elin Runnqvist

    Published 2025-05-01
    “…We organized the dataset according to the Brain Imaging Data Structure (BIDS), pre-processed the data, and performed a minimal analysis of Event-Related Fields (ERFs) to ensure the quality and integrity of the dataset. …”
    Article
  20. 4000

    A Dataset on Takeover during Distracted L2 Automated Driving by Jiwoo Hwang, Woohyeok Choi, Jungmin Lee, Woojoo Kim, Jungwook Rhim, Auk Kim

    Published 2025-03-01
    “…The dataset comprises 500 cases including takeover performance, workload, physiological, and ocular data collected across 10 secondary task conditions: (1) no secondary tasks, (2) three visual tasks, and (3) six auditory tasks. …”
    Article