Multilingual and multi-domain representational patterns across transformer-based models
Date
2024-10-22
Abstract
Artificial intelligence (AI) models often behave like mysterious black boxes: they take in data and generate predictions, but their inner workings remain hidden. Interpreting these AI networks is akin to studying the workings of a complex biological or alien brain. This lack of transparency makes the models hard to trust, since we cannot be sure they are safe, fair, or reliable. For example, a model that works well in one language may fail in another.
Our research aims to make AI models more understandable, with a focus on multilingual and multi-domain models. We uncover two key phenomena in Transformer-based models: multilingual abstraction, where models learn to map input sentences into a shared "mental language" regardless of whether the input is in Estonian or English, and multi-domain specialization, where models learn to internally dedicate separate tools to each domain. These patterns were consistent across different models and datasets.
While our main aim is to provide insights into the inner workings of multilingual and multi-domain models, we also introduce a new methodology for interpreting multilingual models and present a practical application that improves multi-domain machine translation. We hope these insights will help enhance the safety, fairness, or accessibility of AI technology, especially for underrepresented languages and domains.
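To make the multilingual-abstraction finding concrete, the sketch below probes how similar a model's internal representations of an Estonian sentence and its English translation are, layer by layer. This is a minimal illustration, not the methodology of the thesis: the encoder choice (xlm-roberta-base via the Hugging Face transformers library), the mean-pooling step, and the example sentence pair are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the thesis methodology): probe whether a
# multilingual encoder maps a translation pair onto similar representations.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "xlm-roberta-base"  # assumption: any multilingual encoder would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def layer_embeddings(sentence: str) -> torch.Tensor:
    """Return one mean-pooled sentence vector per encoder layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states  # (num_layers + 1) x [1, seq, dim]
    mask = inputs["attention_mask"].unsqueeze(-1)  # exclude padding positions
    pooled = [(h * mask).sum(1) / mask.sum(1) for h in hidden]
    return torch.stack(pooled).squeeze(1)  # [num_layers + 1, dim]

# An Estonian sentence and its English translation.
et = layer_embeddings("Kass magab diivanil.")
en = layer_embeddings("The cat is sleeping on the sofa.")

# If the model builds a shared abstraction, the translation pair should be
# more similar in the middle layers than at the language-specific input layer.
for layer, (a, b) in enumerate(zip(et, en)):
    sim = torch.cosine_similarity(a, b, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity {sim:.3f}")
```

A per-layer similarity that rises toward the middle of the network, a pattern widely reported for multilingual encoders, is the kind of convergence the abstract describes as a shared "mental language".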
Description
The electronic version of this dissertation does not include the publications.