Showing 1 - 20 results of 60 for search '"ImageNet"', query time: 0.06s
  1.
  2.

    Integrating deformable CNN and attention mechanism into multi-scale graph neural network for few-shot image classification by Yongmin Liu, Fengjiao Xiao, Xinying Zheng, Weihao Deng, Haizhi Ma, Xinyao Su, Lei Wu

    Published 2025-01-01
    “…This paper provides a comprehensive performance evaluation of the new model on both the mini-ImageNet and tiered ImageNet datasets. Compared with the benchmark model, classification accuracy increased by 1.07% and 1.33%, respectively; in the 5-way 5-shot task, classification accuracy on the mini-ImageNet dataset improved by 11.41%, 7.42%, and 5.38% over the GNN, TPN, and dynamic models, respectively. …”
    Article
  3.

    Group-based siamese self-supervised learning by Zhongnian Li, Jiayu Wang, Qingcong Geng, Xinzheng Xu

    Published 2024-08-01
    “…When combined with a robust linear protocol, this group self-supervised learning model achieved competitive results in CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-100 classification tasks. Most importantly, our model demonstrated significant convergence gains within just 30 epochs as opposed to the typical 1000 epochs required by most other self-supervised techniques.…”
    Article
  4.

    Improved Deep Support Vector Data Description Model Using Feature Patching for Industrial Anomaly Detection by Wei Huang, Yongjie Li, Zhaonan Xu, Xinwei Yao, Rongchun Wan

    Published 2024-12-01
    “…Features are extracted from a pre-trained backbone network on ImageNet, and each extracted feature is split into multiple small patches of appropriate size. …”
    Article
  5.

    Re-Calibrating Network by Refining Initial Features Through Generative Gradient Regularization by Naim Reza, Ho Yub Jung

    Published 2025-01-01
    “…In empirical evaluation, we applied the proposed methodology to CIFAR, SVHN and ImageNet datasets, utilizing a range of network architectures. …”
    Article
  6.

    Vision Transformer for Banana Ripeness Classification by Arya Pangestu, Bedy Purnama, Risnandar Risnandar

    Published 2024-02-01
    “…The study was conducted using five pre-trained ViT models, namely ViT-B/16, ViT-B/32, ViT-L/16, ViT-L/32, and ViT-H/14, trained on ImageNet-21k and ImageNet-1k. These ViT models were then evaluated and compared with a CNN model. …”
    Article
  7.

    Facial masks and soft‐biometrics: Leveraging face recognition CNNs for age and gender prediction on mobile ocular images by Fernando Alonso‐Fernandez, Kevin Hernandez‐Diaz, Silvia Ramis, Francisco J. Perales, Josef Bigun

    Published 2021-09-01
    “…To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. …”
    Article
  8.

    Alzheimer's Disease Classification from Brain MRI Scans Using ConvNeXt by Yehezkiel Stephanus Austin, Haikal Irfano, Juan Young Christopher, Lintang Cahyaning Sukma, Octo Perdana Putra, Riyadh Ilham Ardhanto, Novanto Yudistira

    Published 2024-12-01
    “…Machine learning and neural network technology can support early detection through a ConvNeXt model trained by transfer learning with initial weights from ImageNet and fine-tuned to classify four levels of Alzheimer's severity from brain MRI scans: Mild Demented, Moderate Demented, Non Demented, and Very Mild Demented. …”
    Article
  9.

    A benchmark of deep learning approaches to predict lung cancer risk using national lung screening trial cohort by Yifan Jiang, Leyla Ebrahimpour, Philippe Després, Venkata SK. Manem

    Published 2025-01-01
    “…We evaluated ten 3D and eleven 2D SOTA deep learning models, which were pretrained on large-scale general-purpose datasets (Kinetics and ImageNet) and radiological datasets (3DSeg-8, nnUnet and RadImageNet), for their lung cancer risk prediction performance. …”
    Article
  10.

    EMNet: A Novel Few-Shot Image Classification Model with Enhanced Self-Correlation Attention and Multi-Branch Joint Module by Fufang Li, Weixiang Zhang, Yi Shang

    Published 2025-01-01
    “…In the five-way one-shot and five-way five-shot experiments on the miniImageNet dataset, EMNet’s classification accuracies were 0.02 and 0.48 percentage points higher than those of RENet, respectively. …”
    Article
  11.

    Evaluation of Novel AI Architectures for Uncertainty Estimation by Erik Pautsch, John Li, Silvio Rizzi, George K. Thiruvathukal, Maria Pantoja

    Published 2024-12-01
    “…Our research evaluates uncertainty in Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) using the MNIST and ImageNet-1K datasets. Using High-Performance Computing (HPC) platforms, including the traditional Polaris supercomputer and AI accelerators such as Cerebras CS-2 and SambaNova DataScale, we assessed the computational merits and bottlenecks of each platform. …”
    Article
  12.

    Monkeypox Skin Lesion Segmentation Using the U-Net Architecture by Ni Putu Dian Astutik, Ign Lanang Wijayakusuma

    Published 2024-11-01
    “…Skin lesion segmentation was therefore performed with a U-Net model, using MobileNetV2 as the backbone and ImageNet weights for transfer learning. The U-Net model reached an accuracy of 88.07%, though some signs of overfitting were observed, likely due to low-quality label information from the watershed labeling process, which necessitates parameter tuning.…”
    Article
  13.

    Compressing fully connected layers of deep neural networks using permuted features by Dara Nagaraju, Nitin Chandrachoodan

    Published 2023-07-01
    “…The authors also showed a 7× reduction in parameters on the VGG16 architecture with the ImageNet dataset, and demonstrated that the proposed method can be used in the classification stage of transfer learning networks.…”
    Article
  14.

    Swin Transformer lightweight: an efficient strategy that combines weight sharing, distillation and pruning by HAN Bo, ZHOU Shun, FAN Jianhua, WEI Xianglin, HU Yongyang, ZHU Yanping

    Published 2024-09-01
    “…Experiments conducted on the ImageNet-Tiny-200 public dataset demonstrate that, with a 32% reduction in model computational complexity, the proposed method incurs only approximately a 3% performance degradation. …”
    Article
  15.

    Advancements in Image Classification: From Machine Learning to Deep Learning by Cheng Haoran

    Published 2025-01-01
    “…This paper systematically reviews the growth of image classification technology, beginning with the introduction of commonly used datasets such as CIFAR-10, ImageNet, and MNIST, and exploring their impact on algorithm development. …”
    Article
  16.

    A geometric approach for accelerating neural networks designed for classification problems by Mohsen Saffar, Ahmad Kalhor, Ali Habibnia

    Published 2024-07-01
    “…The proposed method achieves impressive pruning results on networks trained by CIFAR-10 and ImageNet datasets, with 87.5%, 77.6%, and 78.8% of VGG16, GoogLeNet, and DenseNet parameters pruned, respectively. …”
    Article
  17.

    Effect of Camera Choice on Image-Classification Inference by Jason Brown, Andy Nguyen, Nawin Raj

    Published 2024-12-01
    “…We examine the classification ranking of object classes when these images are input to an independently pretrained ResNet-18 model based on the ImageNet-1k dataset. We find that the camera used can affect the top predicted object class, particularly in scenarios with a more complex background. …”
    Article
  18.

    GO Loss: A Gaussian Distribution-Based Orthogonal Decomposition Loss for Classification by Mengxin Liu, Wenyuan Tao, Xiao Zhang, Yi Chen, Jie Li, Chung-Ming Own

    Published 2019-01-01
    “…Experiments on multiple classification benchmarks, such as MNIST, CIFAR, and ImageNet, demonstrate the effectiveness of GO loss.…”
    Article
  19.

    Probabilistic Automated Model Compression via Representation Mutual Information Optimization by Wenjie Nie, Shengchuan Zhang, Xiawu Zheng

    Published 2024-12-01
    “…Through extensive experiments on CIFAR-10 and ImageNet, we demonstrate that Prob-AMC achieves a superior compression ratio of 33.41× on ResNet-18 with only a 1.01% performance degradation, outperforming state-of-the-art methods in terms of both compression efficiency and accuracy. …”
    Article
  20.

    Novel defense based on softmax activation transformation by Jinyin CHEN, Changan WU, Haibin ZHENG

    Published 2022-04-01
    “…Deep learning is widely used in fields such as image processing, natural language processing, and network mining. However, it is vulnerable to malicious adversarial attacks, and many defensive methods have been proposed accordingly. Most defense methods are attack-dependent and require defenders to generate massive numbers of adversarial examples in advance; the defense cost is high, and it is difficult to resist black-box attacks. Some of these defenses even affect the recognition of normal examples. In addition, current defense methods are mostly empirical, without certifiable theoretical support. Softmax activation transformation (SAT) was proposed in this paper as a lightweight and fast defense scheme against black-box attacks. SAT reactivates the output probabilities of the target model in the testing phase and thereby protects the privacy of the probability information. As an attack-free defense, SAT not only avoids the burden of generating massive adversarial examples but also enables defense in advance of attacks. The activation of SAT is monotonic, so it does not affect the recognition of normal examples. During the activation process, a variable privacy-protection transformation coefficient was designed to achieve dynamic defense. Above all, SAT is a certifiable defense whose effectiveness and reliability can be derived from the softmax activation transformation. To evaluate the effectiveness of SAT, defense experiments against 9 attacks on the MNIST, CIFAR10, and ImageNet datasets were conducted, and the average attack success rate was reduced from 87.06% to 5.94%.…”
    Article