Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT
dc.contributor.author | Kunz, Jenny
dc.contributor.editor | Johansson, Richard
dc.contributor.editor | Stymne, Sara
dc.coverage.spatial | Tallinn, Estonia
dc.date.accessioned | 2025-02-18T09:31:29Z
dc.date.available | 2025-02-18T09:31:29Z
dc.date.issued | 2025-03
dc.description.abstract | Smaller LLMs still face significant challenges even in medium-resourced languages, particularly when it comes to language-specific knowledge -- a problem not easily resolved with machine-translated data. In this case study on Icelandic, we aim to enhance the generation performance of an LLM by specialising it using unstructured text corpora. A key focus is preventing interference with the model's ability to handle longer contexts during this adaptation. Through ablation studies using various parameter-efficient fine-tuning (PEFT) methods and setups, we find that increasing the number of trainable parameters leads to better and more robust language adaptation. LoRAs placed in the feed-forward layers and bottleneck adapters show promising results with sufficient parameters, while prefix tuning and (IA)$^3$ are not suitable. Although improvements are consistent in 0-shot summarisation, some adapted models struggle with longer context lengths, an issue that can be mitigated by adapting only the final layers.
dc.identifier.uri | https://hdl.handle.net/10062/107226
dc.language.iso | en
dc.publisher | University of Tartu Library
dc.relation.ispartofseries | NEALT Proceedings Series, No. 57
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title | Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT
dc.type | Article
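
The placement strategy highlighted in the abstract above, LoRA modules in the feed-forward layers and adaptation restricted to the final transformer blocks, can be sketched with the Hugging Face `peft` library. The snippet below is a minimal illustration, not the paper's actual configuration: the base model id, module names (Llama-style `gate_proj`, `up_proj`, `down_proj`), rank, and layer indices are all assumptions made for the example.

```python
# Sketch: LoRA adapters in the feed-forward (MLP) projections, restricted to
# the last transformer blocks. Model id, module names, rank, and layer indices
# are placeholders (Llama-style naming, 32 layers), not the paper's exact setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM with Llama-style MLP naming
model = AutoModelForCausalLM.from_pretrained(base_model_id)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=64,                     # larger rank -> more trainable parameters
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],  # feed-forward layers only
    layers_to_transform=list(range(24, 32)),               # adapt only the final 8 of 32 layers
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Bottleneck adapters could be restricted to the final layers in an analogous way; the point the abstract makes is that the parameter budget and the placement of the adapted parameters, not just the choice of PEFT method, determine both adaptation quality and robustness to longer contexts.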