CHAPTER 5
Large Language Models (LLMs)
• Large Language Models (LLMs) are machine learning models, more specifically deep learning algorithms, that can understand, interpret, and generate human-language text.
• They work by analyzing the massive data sets of language on which they are trained. This huge volume of training data is what makes LLMs capable of understanding and generating natural language and other types of content, and of performing a wide range of tasks.
• The LLM, implemented as a computer program, is provided with a large set of examples so that it can recognize and interpret human language or other types of complex data.
• Although LLMs are trained on data gathered from the Internet and other trusted sources, the quality of their output depends entirely on the quality of those samples.
• The quality of the data determines how well LLMs learn natural language; developers can then exploit the data so that the LLM performs as expected.
• LLMs use a machine learning technique called deep learning to understand how characters, words, and sentences function together.
• Deep learning involves the probabilistic analysis of unstructured data,
which eventually enables the deep learning model to recognize
distinctions between pieces of content without human intervention.
• Large language models are built on neural networks (NNs), which are computing systems inspired by the human brain. These neural networks use layers of interconnected nodes, much like neurons.
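• As an illustration of this layered structure, the sketch below builds a tiny two-layer feed-forward network in NumPy. It is only a toy model of "layered nodes", not an LLM; the layer sizes, random weights, and ReLU activation are arbitrary choices for the example.

import numpy as np

# A minimal two-layer network: layered nodes, loosely analogous to neurons.
# Sizes and random weights are illustrative only, not those of a real LLM.
rng = np.random.default_rng(0)

x = rng.normal(size=(1, 8))          # input vector (1 sample, 8 features)
W1 = rng.normal(size=(8, 16))        # weights of the first (hidden) layer
W2 = rng.normal(size=(16, 4))        # weights of the second (output) layer

hidden = np.maximum(0, x @ W1)       # ReLU activation in the hidden layer
output = hidden @ W2                 # output layer (no activation here)
print(output.shape)                  # (1, 4)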
• LLMs are then further trained and fine-tuned to perform the particular task the developer wants, such as interpreting and understanding questions and generating responses, or translating text from one language to another.
• Apart from teaching human languages to Artificial Intelligence (AI) applications, large language models can also be trained for a variety of tasks such as understanding protein structures, DNA research, writing software code, online search, chatbots, and customer service.
• LLMs work much like the human brain: the brain is first taught and trained, and only then becomes capable of performing certain tasks in the expected manner. In the same way, large language models must be pre-trained and then fine-tuned so that they can solve text classification, question answering, document summarization, and text generation tasks.
Large Language Models Transformer Model
• A transformer model is the most common architecture of a large language model. It consists of an encoder and a decoder. A transformer model processes data by tokenizing the input, then applying mathematical operations to all tokens in parallel to discover the relationships between them.
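• To make "tokenizing the input" concrete, here is a toy sketch that maps whitespace-separated words to integer token IDs. Real LLM tokenizers use learned subword vocabularies (for example, byte-pair encoding), so this is only a conceptual illustration.

# Toy tokenizer: map whitespace-separated words to integer IDs.
# Real LLM tokenizers use subword vocabularies (e.g. byte-pair encoding).
text = "large language models process text as tokens"
vocab = {}
token_ids = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)      # assign the next unused ID
    token_ids.append(vocab[word])
print(token_ids)                      # [0, 1, 2, 3, 4, 5, 6]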
• The encoder reads and processes the input text, transforming it
into a format that the model can understand. Imagine it as
absorbing a sentence and breaking it down into its essence.
• On the other side, the decoder takes this processed information
and steps through it to produce the output, like translating the
sentence into another language.
• This interplay is what makes transformers so powerful for tasks like translation, where understanding context and generating accurate responses are key. It enables the computer to see the patterns a human would see when given the same query as input.
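• As a concrete illustration of this encode-then-decode flow, the sketch below uses the Hugging Face transformers library to translate a sentence. The library, the "translation_en_to_de" pipeline task, and the t5-small checkpoint are assumptions made for this example, not details from the slides.

# Encoder-decoder translation sketch using the Hugging Face transformers library.
# Assumes `transformers` is installed and the t5-small checkpoint can be downloaded.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Large language models are trained on huge text corpora.")
print(result[0]["translation_text"])   # German text produced by the decoder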
Large Language Models Key Components
• Large language models are composed of multiple neural network layers. Several kinds of layers, such as recurrent layers, feed-forward layers, embedding layers, and attention layers, work together to process the input text and generate output.
• The embedding layer creates embeddings from the input text that capture the semantic and syntactic meaning of the input, so the model can understand context and relevance.
• The feed-forward network (FFN) layer of a large language model is made up of multiple fully connected layers that transform the input embeddings. In doing so, these layers let the model capture higher-level abstractions and understand the user's intent behind the text input.
• The recurrent layer interprets the words in the input text in sequence, capturing the relationships between words in a sentence.
• The attention mechanism enables a language model to concentrate on the parts of the input text that are relevant to the task under consideration. This layer allows the model to generate the most accurate outputs.
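• The sketch below is a minimal NumPy illustration of two of these components: scaled dot-product self-attention over token embeddings, followed by a position-wise feed-forward transformation. The dimensions and random weights are arbitrary toy values, not those of any real LLM.

import numpy as np

# Minimal sketch: self-attention + feed-forward layer over token embeddings.
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 5, 16, 32

X = rng.normal(size=(seq_len, d_model))       # token embeddings (embedding layer output)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
scores = Q @ K.T / np.sqrt(d_model)           # scaled dot-product scores
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
attended = weights @ V                        # each token attends to relevant tokens

W1, W2 = rng.normal(size=(d_model, d_ff)), rng.normal(size=(d_ff, d_model))
ffn_out = np.maximum(0, attended @ W1) @ W2   # position-wise feed-forward layer
print(ffn_out.shape)                          # (5, 16)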
Large Language Model Types
1. Generic (raw) language model - This model predicts the next word based on the language in the training data and is used for information retrieval tasks (see the sketch after this list).
2. Instruction-tuned language model - This model is trained to
predict responses to the instructions given in the input. This is
useful to perform sentiment analysis, or to generate text or code.
3. Dialog-tuned language model - This model is trained to carry on a dialog by predicting the next response, as chatbots do.
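• To illustrate the first type, the sketch below is a toy "generic (raw) language model" that learns next-word counts from a tiny corpus and predicts the most likely continuation. A real LLM learns these statistics with a neural network over a vast corpus; this is only a conceptual stand-in.

from collections import Counter, defaultdict

# Toy raw language model: predict the next word from counts in the training text.
corpus = "the model predicts the next word the model generates text".split()

next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1            # count which word follows which

def predict_next(word):
    # Return the most frequent continuation observed during training.
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))                    # 'model'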
Large Language Models Working
• LLMs operate by leveraging deep learning techniques and vast
amounts of textual data. These models are typically based on a
transformer architecture, like the generative pre-trained
transformer, which excels at handling sequential data like text input.
• A large language model is based on a transformer model and
works by receiving an input, encoding it, and then decoding it to
produce an output prediction. But before a large language model
can receive text input and generate an output prediction, it requires
training, so that it can fulfill general functions, and fine-tuning,
which enables it to perform specific tasks.
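• A minimal sketch of this receive-encode-decode-predict cycle is shown below, assuming the Hugging Face transformers library (with PyTorch) is installed and the small gpt2 checkpoint can be downloaded; both are assumptions made for the example.

# Input -> encoding -> decoding -> output prediction, sketched with GPT-2.
# Assumes `transformers` and `torch` are installed and gpt2 can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Large language models are", return_tensors="pt")   # encode text into token IDs
output_ids = model.generate(**inputs, max_new_tokens=10)               # decode: predict the next tokens
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))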
• Task 1 - Training - Large language models are pre-trained using
large textual datasets from sites like Wikipedia, GitHub, or others.
These datasets consist of trillions of words, and their quality will
affect the language model's performance. At this stage, the large
language model engages in unsupervised learning, meaning it
processes the datasets fed to it without specific instructions. During this process, the LLM's AI algorithm learns the meanings of words and the relationships between them. It also learns to distinguish words based on context.
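• Pre-training of this kind is usually implemented as next-token prediction. Below is a minimal PyTorch sketch of that objective on dummy data; the vocabulary size, the tiny embedding-plus-linear "model", and the random batch are placeholders for illustration, not details from the slides.

import torch
import torch.nn as nn

# Minimal sketch of the next-token prediction objective used in pre-training.
vocab_size, d_model, seq_len = 100, 32, 8

embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))    # a dummy training sequence
inputs, targets = tokens[:, :-1], tokens[:, 1:]        # each position predicts the next token

logits = head(embed(inputs))                           # (1, seq_len-1, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                        # gradients an optimizer would apply
print(float(loss))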
• Task 2 - Fine-tuning- In order for a large language model to
perform a specific task, such as translation, it must be fine-tuned to
that particular activity. Fine-tuning optimizes the performance of
specific tasks.
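• The sketch below illustrates the idea of fine-tuning: reuse a pretrained network, freeze its weights, and train a small task-specific head on labeled examples. The "pretrained encoder", the dummy features, and the two-class labels are placeholders for illustration.

import torch
import torch.nn as nn

# Fine-tuning sketch: frozen pretrained encoder + trainable task head.
torch.manual_seed(0)
pretrained_encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # stands in for a pretrained LLM
task_head = nn.Linear(32, 2)                                       # e.g. positive/negative sentiment

for p in pretrained_encoder.parameters():
    p.requires_grad = False                                        # freeze the pretrained weights

optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-2)
features = torch.randn(64, 16)                                     # dummy "text features"
labels = torch.randint(0, 2, (64,))                                # dummy task labels

for _ in range(20):                                                # a few fine-tuning steps
    logits = task_head(pretrained_encoder(features))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))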
• Task 3 - Prompt-tuning - This serves a similar function to fine-tuning: it trains a model to perform a specific task through few-shot prompting or zero-shot prompting. A prompt is an instruction given to an LLM. Few-shot prompting teaches the model to predict outputs through the use of examples.
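• For illustration, these are what zero-shot and few-shot prompts might look like for a sentiment task; the wording is an example, not a prescribed format.

# Illustrative prompt strings for the same sentiment task.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery dies too fast.'"
)
few_shot_prompt = (
    "Review: 'I love this phone.' Sentiment: positive\n"
    "Review: 'The screen cracked in a week.' Sentiment: negative\n"
    "Review: 'The battery dies too fast.' Sentiment:"          # the model completes this line
)
print(zero_shot_prompt)
print(few_shot_prompt)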
Large Language Models Use Cases/Applications
• Large language models are useful for business tasks that involve the following sub-tasks:
1. Text summarization.
2. Text translation.
3. Text and image generation.
4. Code writing and debugging.
5. Web search - enabling search engines to respond to queries and assisting developers with writing code.
6. Customer service and sentiment analysis (see the sketch after this list).
7. Virtual assistants/chatbots.
8. Text/Document classification.
9. Automated document review and approval.
10. Knowledge base responses to answer queries.
11. Copywriting and technical writing.
12. Large language models have the ability to understand proteins, molecules, DNA, and RNA. This helps in the development of vaccines and in finding cures for illnesses.
13. LLMs are used across industries for customer service purposes such as conversational AI and chatbots.
14. Marketing teams can use LLMs to perform sentiment analysis and to quickly generate campaign ideas or example pitch text.
15. From searching through massive textual datasets to generating legalese, large language models act as assistants to lawyers and the wider legal community.
16. LLMs can support credit card companies in detecting fraud or
similar issues.
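• As noted in item 6 above, here is a minimal sketch of the sentiment-analysis use case using the Hugging Face transformers pipeline. The library, and the default sentiment model it downloads on first use, are assumptions made for this example.

# Sentiment analysis with an LLM-backed pipeline.
# Assumes `transformers` is installed; a default model is downloaded on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The support team resolved my issue quickly and politely."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]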
Large Language Models Benefits
1. Efficiency - LLMs can significantly improve the efficiency of processes thanks to their ability to understand and process natural language at large scale.
2. Large set of applications - LLMs can be used for language translation, sentence completion, sentiment analysis, question answering, solving mathematical equations, and similar tasks.
3. Cost reduction - With LLMs, tasks such as customer support, data
analysis, and others can be automated, thus reducing operational costs.
4. Continuous improvement - Large language model performance continually improves as more data and parameters are added; in other words, the more it learns, the better it gets. Large language models can also exhibit what is called "in-context learning."
5. Data analysis- LLMs can analyze and interpret vast amounts of data
faster and more effectively than humanly possible, providing
businesses with valuable insights quickly.
6. Real-time response to customers - LLM-based applications can enhance customer interactions by offering personalized assistance and real-time responses.
7. Scalability - LLMs can scale to handle an increasing amount of work thanks to their deep learning architecture.
Large Language Models Limitations
• LLMs face various limitations and challenges:
1. Hallucinations - A hallucination occurs when an LLM produces an output that is false or that does not match the user's intent.
2. Security - LLMs face important security risks when not managed
or surveilled properly. They can leak people's private information,
participate in phishing scams, and produce spam.
3. Ethical concerns and bias - LLMs are trained on vast amounts of
data from many sources, so they might reflect and reproduce the
biases present in those data sets. The data used to train language
models will affect the outputs a given model produces. If the data represents a single demographic or lacks diversity, the outputs produced by the large language model will also lack diversity, and bias will be inherent.
4. Consent - LLMs are trained on trillions of words drawn from many datasets, some of which may not have been obtained with the owners' consent or knowledge. When scraping data from the internet, large language models have been known to ignore copyright licenses, plagiarize written content, and repurpose proprietary content without permission from the original owners or artists.
5. Scaling - Scaling and maintaining large language models is difficult and highly time- and resource-consuming.
6. Deployment - Deploying large language models requires deep
learning, a transformer model, distributed software and hardware,
and overall technical expertise.
