Paper Review: Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
08 May 2023
My review of the paper "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes."