
Browse by keyword "autonomous"

Now showing 1 - 3 of 3
  • Item, Open access
    Development of an Autonomous Open-Source Inventory Performance Robot for the University of Tartu Library
    (Tartu Ülikool, 2025) Zvirgzdina, Robina; Raudmäe, Renno, supervisor; Vunder, Veiko, supervisor; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Bioinseneeria instituut
    Inventory management is crucial to the successful operation of a venue. Manual inventory management and asset tracking can be time-consuming, dull, and inefficient. In large libraries in particular, it can take personnel months to complete a single inventory round of all the books. With modern robotics solutions, it is possible to automate the process and provide more accurate information on the availability and location of a desired book, resulting in higher satisfaction for librarians and visitors, who can reliably find the books they are looking for. This thesis aims to create an open-source solution incorporating a radio frequency identification (RFID) system for the University of Tartu library. The resulting robot uses the Robotont project as its base platform. It should be able to autonomously navigate through the library halls while scanning the bookshelves and adjusting the antenna height so that it stays level with the books. Simultaneously, it should record the book IDs and their locations to produce a heat map containing the most probable location of each book.
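    The heat-map idea in this abstract can be sketched in a few lines: accumulate weighted RFID sightings per book, then take the strongest cell as the most probable shelf location. This is a minimal illustration only; the read schema (tag ID, shelf cell, signal strength) and the RSSI weighting are assumptions, not taken from the thesis.

    ```python
    from collections import defaultdict

    def build_location_heatmap(reads):
        """Aggregate raw RFID reads into per-book location histograms.

        `reads` is a list of (tag_id, shelf_cell, rssi) tuples, where
        `shelf_cell` identifies a discrete shelf position and `rssi` is the
        received signal strength of that sighting (assumed schema).
        """
        heatmap = defaultdict(lambda: defaultdict(float))
        for tag_id, shelf_cell, rssi in reads:
            # Weight each sighting by signal strength so stray reads from
            # neighbouring shelves contribute less than direct passes.
            heatmap[tag_id][shelf_cell] += rssi
        return heatmap

    def most_probable_location(heatmap, tag_id):
        """Return the shelf cell with the highest accumulated weight."""
        cells = heatmap.get(tag_id)
        if not cells:
            return None
        return max(cells, key=cells.get)

    # Three passes over book "B1": two strong reads at cell A3, one weak at A4.
    reads = [("B1", "A3", 0.9), ("B1", "A4", 0.2), ("B1", "A3", 0.7)]
    heatmap = build_location_heatmap(reads)
    print(most_probable_location(heatmap, "B1"))  # A3
    ```

    Accumulating weights rather than keeping only the last read makes the estimate robust to occasional reads leaking in from adjacent shelves.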
  • Item, Open access
    Smart Traffic Control Using Optimised Convolutional Neural Network
    (Tartu Ülikool, 2019) Surrage Reis, Mateus; Anbarjafari, Gholamreza
    The state of the art in image object detection is convolutional neural networks, a computationally expensive base to build on. Running accurate detection on an embedded device requires additional optimization if it must process each frame of a video in real time. This thesis details work done in the development of a smart pedestrian crosswalk (SPC): an Internet of Things-enabled embedded platform for traffic control. By fine-tuning an individual neural network for each SPC post, it was possible to significantly boost the accuracy of a fast, low-accuracy CNN. This was accomplished by exploiting the low variation in possible input images, which are drawn from only three cameras per post. Accuracy improved from 33.1% mAP on general-context images with 80 classes to 60.7% mAP on traffic images alone with seven traffic-relevant classes.
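    Narrowing a general 80-class detector down to a handful of traffic-relevant classes, as this abstract describes, starts with a filtering step like the one below. This is a hedged sketch: the thesis does not list its seven classes here, so the class set, the detection dict layout, and the score threshold are all assumptions for illustration.

    ```python
    # Hypothetical traffic-relevant subset; the thesis reports seven classes
    # but does not enumerate them in the abstract.
    TRAFFIC_CLASSES = {"person", "bicycle", "car", "motorcycle", "bus", "truck",
                       "traffic light"}

    def filter_detections(detections, score_threshold=0.5):
        """Keep only confident detections of traffic-relevant classes.

        `detections` is a list of dicts with `label`, `score`, and `box`
        keys, as a generic 80-class detector might produce (assumed format).
        """
        return [
            d for d in detections
            if d["label"] in TRAFFIC_CLASSES and d["score"] >= score_threshold
        ]

    detections = [
        {"label": "car", "score": 0.91, "box": (10, 10, 80, 50)},
        {"label": "bird", "score": 0.88, "box": (5, 5, 15, 15)},   # irrelevant class
        {"label": "person", "score": 0.30, "box": (40, 20, 60, 90)},  # low confidence
    ]
    print(filter_detections(detections))  # keeps only the car
    ```

    Restricting both the label set and the training imagery shrinks the problem the small CNN must solve, which is what makes the reported mAP gain plausible.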
  • Item, Open access
    VR-Enhanced Remote Inspection Framework for Semi-Autonomous Robot Fleet
    (Tartu Ülikool, 2024) Reynes, Gautier; Valner, Robert; Norbisrath, Ulrich; Tartu Ülikool. Loodus- ja täppisteaduste valdkond; Tartu Ülikool. Tehnoloogiainstituut
    This thesis presents the design and development of a Virtual Reality (VR)-enhanced user interface and communication infrastructure for remote inspection using a semi-autonomous robot fleet. The core of this project is a VR interface that allows operators to immerse themselves in a digital twin of the remote environment, facilitating intuitive and efficient control over robot inspection. The interface supports both third-person and robot point-of-view perspectives, enhancing situational awareness and decision-making in hazardous environments. The software framework is built upon ROS 2 Foxy, and the VR application was designed with a new graphics engine called Wonderland Engine, particularly suited for lightweight WebXR experiences capable of running on a range of VR headsets, such as the Oculus Quest 2. Communication between the interface and the robot fleet is handled by a custom-built WebSocket server. The work is demonstrated using simulated robot scenarios in Gazebo. The demonstration serves as a proof of concept, showcasing the viability of the VR interface in a controlled environment and setting the stage for future real-world applications. This work contributes to the field of VR-enhanced remote inspection by providing an interface that bridges the gap between operators and remote environments. The integration of VR technology with robotic systems opens new possibilities for remote operation, offering a more immersive and intuitive control mechanism that can be adapted to various industrial and research applications.
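    The routing job of the custom WebSocket server described above, fanning robot state out to the VR clients watching each robot, can be modelled transport-agnostically. The sketch below is not the thesis's server: it stands in for the WebSocket layer with plain callables, and the per-robot subscription model and JSON message shape are assumptions.

    ```python
    import json
    from collections import defaultdict

    class FleetRelay:
        """Routes robot state messages to subscribed VR clients.

        A transport-agnostic stand-in for a WebSocket server: each client
        is any callable that accepts a JSON string.
        """

        def __init__(self):
            # robot_id -> list of client callbacks watching that robot
            self._subscribers = defaultdict(list)

        def subscribe(self, robot_id, client):
            """Register a client to receive updates for one robot."""
            self._subscribers[robot_id].append(client)

        def publish(self, robot_id, pose):
            """Serialize the robot's pose once and fan it out to watchers."""
            message = json.dumps({"robot": robot_id, "pose": pose})
            for client in self._subscribers[robot_id]:
                client(message)

    relay = FleetRelay()
    received = []
    relay.subscribe("robot_1", received.append)
    relay.publish("robot_1", {"x": 1.0, "y": 2.0, "theta": 0.0})
    relay.publish("robot_2", {"x": 5.0, "y": 0.0, "theta": 1.57})  # no watchers
    print(received)  # only robot_1's update arrives
    ```

    Keeping routing separate from the transport makes it straightforward to swap the callback for an actual WebSocket send once a server library is in place.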

DSpace software copyright © 2002-2026 LYRASIS
