LLMs tend to lose prior skills when fine-tuned for new tasks. A new self-distillation approach aims to reduce regression and simplify model management.

A new fine-tuning technique aims to solve “catastrophic forgetting,” a limitation that often complicates repeated model updates in enterprise deployments. Researchers at MIT, the Improbable AI Lab, and ETH Zurich have introduced a fine-tuning […]