Multi-Task Fine-Tuning for Code-Mixed Machine Translation

Proposed a multi-task fine-tuning scheme in which the model first performs token-level language identification and then machine translation. Fine-tuned a Llama 3.2 1B model with QLoRA. Improved the BLEU score for Romanized Hindi by 5.32% over plain supervised fine-tuning, despite heavy class imbalance (the most imbalanced class accounts for only 0.70% of all tokens).
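
Below is a minimal sketch of how the two tasks and the QLoRA setup could fit together: token-level language identification examples and translation examples are formatted into one instruction-style corpus, and Llama 3.2 1B is loaded in 4-bit with a LoRA adapter. The prompt templates, label names, and hyperparameters here are illustrative assumptions, not the exact values used in this project.

```python
# Sketch of the multi-task data formatting and QLoRA model setup.
# Assumes: transformers, peft, bitsandbytes installed and access to the
# meta-llama/Llama-3.2-1B checkpoint. All hyperparameters are assumed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_ID = "meta-llama/Llama-3.2-1B"  # assumed Hugging Face model id


def make_lid_example(tokens, tags):
    """Task 1: token-level language identification (e.g. en / hi / other)."""
    prompt = (
        "Label the language of each token in the code-mixed sentence.\n"
        f"Sentence: {' '.join(tokens)}\n"
        "Labels:"
    )
    return {"text": prompt + " " + " ".join(tags)}


def make_mt_example(source, target):
    """Task 2: translate the code-mixed sentence into English."""
    prompt = (
        "Translate the code-mixed sentence into English.\n"
        f"Sentence: {source}\n"
        "Translation:"
    )
    return {"text": prompt + " " + target}


def load_qlora_model():
    """Load the base model in 4-bit (NF4) and attach a LoRA adapter."""
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    lora_config = LoraConfig(  # assumed LoRA hyperparameters
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    return get_peft_model(model, lora_config)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = load_qlora_model()
    model.print_trainable_parameters()

    # One example per task; in training the LID stage precedes translation.
    lid = make_lid_example(["mujhe", "coffee", "chahiye"], ["hi", "en", "hi"])
    mt = make_mt_example("mujhe coffee chahiye", "I want coffee.")
    print(lid["text"], mt["text"], sep="\n\n")
```

The mixed corpus of both example types can then be passed to a standard causal-LM fine-tuning loop (e.g. TRL's SFTTrainer on the `text` field); the exact curriculum ordering between the LID and translation stages is a project-specific choice not shown here.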