Few-shot and zero-shot fine-tuning
Few-shot and zero-shot learning are powerful techniques for adapting LLMs to new tasks with minimal or no task-specific training data. Let's implement a few-shot fine-tuning approach:
- We create a prompt that includes a few examples of the task:
```python
def prepare_few_shot_dataset(examples, tokenizer, num_shots=5):
    # Use the first num_shots examples as in-context demonstrations
    few_shot_examples = examples[:num_shots]
    prompt = "\n\n".join(
        [
            f"Input: {ex['input']}\n"
            f"Output: {ex['output']}"
            for ex in few_shot_examples
        ]
    )
    prompt += "\n\nInput: {input
```
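As a minimal sketch of the prompt-assembly step above (assuming each example is a dict with `input` and `output` keys; `build_few_shot_prompt` is a hypothetical standalone helper, not part of the original code), the few-shot prompt can be constructed and inspected like this:

```python
def build_few_shot_prompt(examples, query, num_shots=5):
    """Assemble a few-shot prompt: num_shots labeled demonstrations
    followed by the unlabeled query for the model to complete."""
    few_shot_examples = examples[:num_shots]
    prompt = "\n\n".join(
        f"Input: {ex['input']}\nOutput: {ex['output']}"
        for ex in few_shot_examples
    )
    # Append the new query; the model generates text after "Output:"
    prompt += f"\n\nInput: {query}\nOutput:"
    return prompt


examples = [
    {"input": "great movie", "output": "positive"},
    {"input": "waste of time", "output": "negative"},
]
prompt = build_few_shot_prompt(examples, "loved every minute", num_shots=2)
print(prompt)
```

Setting `num_shots=0` yields the zero-shot variant of the same prompt: only the unlabeled query, with no demonstrations.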