Author: Absar, Shayaan
Editors: Tudor, Crina Madalina; Debess, Iben Nyholm; Bruton, Micaella; Scalvini, Barbara; Ilinykh, Nikolai; Arhar Holdt, Špela
Date available: 2025-02-14
Date issued: 2025-03
URI: https://aclanthology.org/2025.resourceful-1.0/
URI: https://hdl.handle.net/10062/107110
Abstract: Code-switching (CS) involves speakers alternating between two (or potentially more) languages during conversation and is a common phenomenon in bilingual communities. The majority of NLP research has been devoted to monolingual language modelling; consequently, most models perform poorly on code-switched data. This paper investigates the effectiveness of Cross-Lingual Large Language Models on the task of POS (Part-of-Speech) tagging in code-switched contexts after fine-tuning. The models are trained on code-switched combinations of Indian languages and English. The paper also investigates whether fine-tuned models can generalise and POS tag code-switched combinations that were not part of the fine-tuning dataset. Additionally, it presents a new metric, the S-index (Switching-Index), for measuring the level of code-switching within an utterance.
Language: en
License: Attribution-NonCommercial-NoDerivatives 4.0 International (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Title: Fine-Tuning Cross-Lingual LLMs for POS Tagging in Code-Switched Contexts
Type: Article
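
Note: the abstract names the S-index but does not define it. As a purely illustrative sketch (the formula below is an assumption, not the paper's definition), one simple way to quantify switching is the fraction of adjacent-token pairs whose language labels differ:

    def s_index(lang_tags):
        """Illustrative switching index: fraction of adjacent-token
        pairs whose language labels differ.

        NOTE: this abstract does not give the paper's S-index formula;
        this definition is an assumption for illustration only.
        """
        if len(lang_tags) < 2:
            return 0.0  # a single token cannot switch
        switches = sum(a != b for a, b in zip(lang_tags, lang_tags[1:]))
        return switches / (len(lang_tags) - 1)

    # Hypothetical Hindi-English utterance, one language tag per token.
    tags = ["hi", "hi", "en", "en", "hi"]
    print(s_index(tags))  # 0.5 -> two switches over four adjacent pairs

Under this assumed definition, a monolingual utterance scores 0 and a strictly alternating utterance scores 1.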