Understanding hallucinations and ethical and legal issues
A well-known problem with LLMs is their tendency to hallucinate. Hallucination is defined as the production of nonsensical or unfaithful content, and it is commonly divided into factuality hallucination and faithfulness hallucination. A factuality hallucination is a response produced by the model that contradicts real, verifiable facts. A faithfulness hallucination, on the other hand, is content that is at odds with the instructions or context provided by the user. The model is trained to generate fluent, coherent text, but it has no built-in way to revise its output or to check that it is correct:

Figure 3.29 – Example of LLM hallucination (https://arxiv.org/pdf/2311.05232)
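To make the distinction between the two types concrete, the following is a minimal, illustrative sketch. The example prompts, responses, and the naive weekday-matching heuristic are assumptions introduced for demonstration only; they are not taken from the referenced survey, and real hallucination detection typically relies on entailment (NLI) models or retrieval-based fact checking rather than string matching:

```python
# Illustrative (hypothetical) examples of the two hallucination types.

# Factuality hallucination: the response contradicts a verifiable fact,
# regardless of any context the user supplied.
factuality_example = {
    "prompt": "Who was the first person to walk on the Moon?",
    "response": "Charles Lindbergh was the first person to walk on the Moon.",
    "verified_fact": "Neil Armstrong was the first person to walk on the Moon.",
}

# Faithfulness hallucination: the response conflicts with the context or
# instructions provided by the user, even if it sounds plausible on its own.
faithfulness_example = {
    "context": "The meeting is scheduled for Thursday at 10 a.m. in Room 2.",
    "prompt": "Based only on the context above, when is the meeting?",
    "response": "The meeting takes place on Tuesday at 10 a.m.",
}

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]


def is_unfaithful(context: str, response: str,
                  key_terms: list[str] = WEEKDAYS) -> bool:
    """Toy heuristic: flag the response as unfaithful if it asserts a key term
    (here, a weekday) that never appears in the supplied context."""
    context_lower, response_lower = context.lower(), response.lower()
    return any(term in response_lower and term not in context_lower
               for term in key_terms)


if __name__ == "__main__":
    # Prints True: the response mentions "Tuesday", which the context never states.
    print(is_unfaithful(faithfulness_example["context"],
                        faithfulness_example["response"]))
```

A heuristic this simple would miss most real cases; it is only meant to show that faithfulness is judged against the user-provided context, whereas factuality is judged against the external world.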
The model can also generate toxic content and reproduce stereotypes and negative attitudes toward specific demographic groups. It is important to prevent models from causing harm. Different studies have highlighted different instances of potential harm resulting...