
Browsing by Author "Wu, Yucui"

    Latent-Gated-MoE: A Novel Mixture of Experts with Latent Space Splitting for Multi-class Image Classification
    (Tartu Ülikool, 2025) Wu, Yucui; Roy, Kallol, supervisor; Pisek, Jan, supervisor; Tartu Ülikool, Faculty of Science and Technology; Institute of Computer Science
    This thesis explores a novel mixture of experts (MoE) model for a multi-class image classification task. We call our model Latent-Gated-MoE; it targets the trade-off between computational complexity and accuracy. Large convolutional models such as EfficientNet, while highly accurate, impose considerable training and inference costs. To address these challenges, a novel low-complexity MoE architecture is proposed that places a variational autoencoder (VAE) in front of the routing gate. The VAE latent space is split into five parts, and each part is routed to its corresponding expert. First, a standard MoE model is implemented in which a set of simple expert subnetworks is trained on the whole dataset and combined using a learnable gating mechanism. Then, the traditional gating mechanism is replaced with a VAE-based router, allowing routing decisions to be informed by probabilistic low-dimensional latent representations. In the final stage, a novel architecture is introduced in which the VAE latent vector is explicitly divided into expert-specific subspaces: each expert receives a distinct portion of the latent code, while the router uses the full vector to determine the expert weights. Experiments are conducted on a five-class leaf image classification dataset, using clean and augmented samples to evaluate generalization and robustness. Our results show that the final model achieves competitive classification accuracy while maintaining a significantly smaller model footprint and reduced inference time.
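The routing scheme the abstract describes can be sketched as a forward pass: a VAE encoder produces a latent vector, the router gates over the *full* latent, and each expert sees only its own chunk. The sketch below is a minimal NumPy illustration of that wiring, not the thesis implementation; the layer sizes (`LATENT_DIM = 20`, 64-dimensional inputs) and the linear experts are hypothetical choices made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 5        # matches the five-way latent split described in the abstract
LATENT_DIM = 20      # hypothetical total latent size; split into 5 chunks of 4
CHUNK = LATENT_DIM // N_EXPERTS
N_CLASSES = 5        # five-class leaf image dataset

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical VAE encoder head: maps a 64-dim feature vector to (mu, logvar).
W_mu = rng.normal(scale=0.1, size=(64, LATENT_DIM))
W_lv = rng.normal(scale=0.1, size=(64, LATENT_DIM))

# One small linear expert per latent chunk, plus a router over the full latent.
experts = [rng.normal(scale=0.1, size=(CHUNK, N_CLASSES)) for _ in range(N_EXPERTS)]
W_gate = rng.normal(scale=0.1, size=(LATENT_DIM, N_EXPERTS))

def forward(x):
    """Latent-gated mixture forward pass; x has shape (batch, 64)."""
    mu, logvar = x @ W_mu, x @ W_lv
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization trick
    gates = softmax(z @ W_gate)                  # router sees the FULL latent vector
    chunks = np.split(z, N_EXPERTS, axis=1)      # each expert sees only ONE chunk
    logits = np.stack([c @ w for c, w in zip(chunks, experts)], axis=1)
    return (gates[..., None] * logits).sum(axis=1)  # gate-weighted class logits

probs = softmax(forward(rng.normal(size=(2, 64))))
print(probs.shape)  # (2, 5)
```

In a trained model the encoder, experts, and gate would be learned jointly (with the VAE reconstruction and KL terms in the loss); the point of the sketch is only the data flow: full latent to the router, disjoint latent subspaces to the experts.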

DSpace software copyright © 2002-2026 LYRASIS
