Identifying task groupings for multi-task learning using pointwise V-usable information

Published in Journal of Biomedical Informatics, 2025

Recommended citation: Li, Y., Miller, T., Bethard, S. and Savova, G., 2025. Identifying task groupings for multi-task learning using pointwise V-usable information. Journal of Biomedical Informatics, p. 104881. https://doi.org/10.1016/j.jbi.2025.104881

Abstract:

Objective
Even in the era of Large Language Models (LLMs), which are claimed to be solutions for many tasks, fine-tuning language models remains a core methodology in deployment for a variety of reasons, among them computational efficiency and performance maximization. Fine-tuning can be single-task, or multi-task joint learning in which the tasks support each other and thus boost performance. The success of multi-task learning can depend heavily on which tasks are grouped together. Naively grouping all tasks, or a random set of tasks, can result in negative transfer, with the multi-task models performing worse than single-task models. Although many efforts have been made to identify task groupings and to measure relatedness among tasks, defining a metric that identifies the best task grouping out of a pool of many potential task combinations remains a challenging research problem. We propose such a metric.

Methods
We propose a metric of task relatedness based on task difficulty as measured by pointwise V-usable information (PVI). PVI is a recently proposed metric that estimates how much usable information a dataset contains with respect to a given model. We hypothesize that tasks whose PVI estimates are not statistically different are similar enough to benefit from the joint learning process. We conduct comprehensive experiments to evaluate the feasibility of this metric for task grouping on 15 NLP datasets in the general, biomedical, and clinical domains. We compare the results of the joint learners against single-task learners, existing baseline methods, and recent large language models, including Llama and GPT-4.
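As a rough illustration of the method: PVI (Ethayarajh et al., 2022) compares the log-probability a model assigns the gold label after fine-tuning on the real inputs against the log-probability assigned by a model from the same family fine-tuned on null (empty) inputs. The sketch below is a minimal, hypothetical rendering of that computation and of the grouping criterion; the function names, the use of Welch's t-test, and the significance threshold are illustrative assumptions, not details taken from the paper.

```python
import math
from itertools import combinations

from scipy import stats

LN2 = math.log(2)


def pvi(logprob_y_given_x: float, logprob_y_given_null: float) -> float:
    """Pointwise V-usable information for one instance (x, y):
    PVI(x -> y) = -log2 g'(y | null) + log2 g(y | x),
    where g is fine-tuned on (input, label) pairs and g' on
    (null input, label) pairs from the same model family V.
    Both arguments are natural-log probabilities of the gold label.
    """
    return (logprob_y_given_x - logprob_y_given_null) / LN2


def group_tasks_by_pvi(task_pvis: dict, alpha: float = 0.05) -> list:
    """Return task pairs whose per-instance PVI distributions are
    not statistically different (Welch's t-test, an assumed choice),
    i.e., candidates for joint multi-task fine-tuning.
    task_pvis maps a task name to a list of per-instance PVI values.
    """
    candidates = []
    for a, b in combinations(sorted(task_pvis), 2):
        _, p_value = stats.ttest_ind(task_pvis[a], task_pvis[b], equal_var=False)
        if p_value >= alpha:  # fail to reject "equal difficulty"
            candidates.append((a, b))
    return candidates
```

A dataset-level difficulty estimate is then simply the mean PVI over its instances; the abstract does not specify which statistical test is used, so the t-test above is only one plausible choice.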

Results
The results show that, by grouping tasks with similar PVI estimates, the joint learners yield competitive results with fewer total parameters than the corresponding single-task models, with consistent performance across domains.

Conclusion
For domain-specific tasks, fine-tuned models may remain the preferable option, and the PVI-based method of grouping tasks for multi-task learning could be particularly beneficial. This metric could be incorporated into the overall recipe for fine-tuning language models.