Chain-of-Thought Prompting
Chain-of-thought (CoT) prompting originated from a research paper titled Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, published by Google researchers Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou in 2022.
The key innovation of CoT prompting was encouraging language models to break down complex reasoning problems into intermediate steps before arriving at a final answer. This was done by including few-shot demonstrations in the prompt, each showing a worked example of step-by-step reasoning.
The researchers demonstrated that by prompting LLMs with a few examples of reasoning chains, the models could significantly improve their performance on complex tasks requiring multi-step reasoning, such as arithmetic, commonsense, and symbolic reasoning problems. (The simpler zero-shot trigger phrase "Let's think step by step" came from follow-up work by Kojima et al., also published in 2022.)
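To make the mechanism concrete, here is a minimal sketch of how a few-shot CoT prompt can be assembled. The exemplar question and helper names below are illustrative assumptions, written in the style of the paper's arithmetic examples, not quoted from it:

```python
# A hypothetical exemplar: a question paired with a worked reasoning chain.
COT_EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
            "How many tennis balls does he have now?"
        ),
        "reasoning": (
            "Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
            "5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_cot_prompt(question: str) -> str:
    """Prepend worked reasoning chains so the model imitates the
    step-by-step style before answering the new question."""
    parts = []
    for ex in COT_EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}."
        )
    # The final question is left open; the model continues after "A:"
    # with its own reasoning chain and answer.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

The resulting string would then be sent as the prompt to the model; the demonstrations bias it toward emitting intermediate steps rather than a direct answer.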
Before CoT, most prompting techniques focused on getting direct answers...