Prompt engineering
In-context learning (ICL) is one of the most fascinating properties of LLMs. Traditionally, machine learning (ML) models are trained to solve specific tasks using training data. For example, in a classical classification task, we have input-output pairs (X, y), and the model learns to map the relationship between input X and output y. Any deviation from that task degrades the model's results: if we train a model for topic classification, we must fine-tune it before it can perform well on sentiment analysis. In contrast, ICL requires no model update to apply the model to a new task. ICL is thus an emergent property of LLMs that allows the model to perform a new task at inference time, leveraging its acquired knowledge to map a new input-output relationship.
ICL was first defined in the article Language Models are Few-Shot Learners (https://arxiv.org/abs/2005.14165). The authors describe LLMs as few-shot learners: models that can pick up a new task from only a handful of examples supplied in the prompt, with no gradient updates.
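To make this concrete, the following is a minimal sketch of a few-shot prompt for sentiment analysis. The review texts, labels, and the build_prompt helper are illustrative assumptions, not taken from the source; the resulting string could be sent to any instruction-following LLM.

# A minimal sketch of in-context learning via a few-shot prompt.
# No model weights are updated: the task is specified entirely
# by the examples embedded in the prompt.

FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: The plot was predictable and the acting was flat.
Sentiment: negative

Review: A moving story with a brilliant cast.
Sentiment: positive

Review: {review}
Sentiment:"""

def build_prompt(review: str) -> str:
    """Insert the new input into the few-shot template."""
    return FEW_SHOT_PROMPT.format(review=review)

if __name__ == "__main__":
    # The model is expected to infer the input-output mapping
    # from the two in-context examples and complete the label.
    print(build_prompt("I couldn't stop watching; every scene surprised me."))

Note that the same pretrained model that was never fine-tuned for sentiment analysis can complete this prompt correctly, which is exactly the contrast with the classical train-then-deploy workflow described above.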