Author: Khajuria, Tarun
Supervisors: Vicente, Raul; Aru, Jaan
Institution: University of Tartu, Faculty of Science and Technology; Institute of Computer Science
Date issued: 2020
Date available in repository: 2023-11-06
URI: https://hdl.handle.net/10062/94054
Title: A unified account of visual search using a computational model
Type: Thesis (master's thesis)
Language: English
Rights: openAccess; Attribution-NonCommercial-NoDerivatives 4.0 International
Keywords: visual search; attention; computational neuroscience; deep learning; convolutional neural networks; informatics; information technology

Abstract: Visual search is a task humans perform ubiquitously in everyday life. To understand this process better, laboratory experiments have characterised the time humans need to locate a particular target object among others. Based on how this search time depends on the number of objects in the image, two kinds of search are believed to take place: feature search, where the target pops out of the search image and is found instantly via a parallel search mechanism, and conjunction search, involving more complex objects, where the search is serial and the search time increases with the number of objects. In this work, we use a computational model to propose a unified process that can produce either feature- or conjunction-search characteristics depending on the precision of the attention-guidance mechanism. We show that search performance can be partly explained by the precision, or capacity, of the encoding of the distinct features used to guide attention during the search process.
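The abstract's distinction between the two search modes is usually summarised by the linear relation between reaction time and set size, RT = intercept + slope × set size, with a near-zero slope for feature ("pop-out") search and a positive slope for conjunction search. A minimal sketch of that relation, with hypothetical slope and intercept values not taken from the thesis:

```python
# Illustrative sketch only (not the thesis's computational model):
# the classic linear reaction-time model for visual search,
#     RT = intercept + slope * set_size.
# All numeric values below are hypothetical, chosen for illustration.

def search_time_ms(set_size: int, slope_ms: float, intercept_ms: float = 400.0) -> float:
    """Predicted search time (ms) for a display containing `set_size` objects."""
    return intercept_ms + slope_ms * set_size

FEATURE_SLOPE = 0.0       # parallel pop-out search: flat slope (assumed)
CONJUNCTION_SLOPE = 25.0  # serial search: ~25 ms per extra object (assumed)

for n in (4, 8, 16, 32):
    feat = search_time_ms(n, FEATURE_SLOPE)
    conj = search_time_ms(n, CONJUNCTION_SLOPE)
    print(f"set size {n:2d}: feature {feat:.0f} ms, conjunction {conj:.0f} ms")
```

Under this toy parameterisation, feature-search time stays constant at the intercept while conjunction-search time grows linearly with set size, which is the set-size dependence the abstract refers to.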