2. Bandwidth
Bandwidth
• Bandwidth is the amount of data that can be transferred from one point to another within a
network in a specific amount of time. It is commonly measured in bits per second.
• The term is used in two different contexts with two different units of measurement: bandwidth in
hertz and bandwidth in bits per second.
Bandwidth in hertz
• It is the range of frequencies contained in a composite signal or the range of frequencies a
channel can pass.
• For example, we can say the bandwidth of a subscriber telephone line is 4 kHz.
Bandwidth in Bits per Second
• The term bandwidth can also refer to the number of bits per second that a channel, a link,
or even a network can transmit.
• For example, one can say the bandwidth of a Fast Ethernet network (or the links in this
network) is a maximum of 100 Mbps. This means that this network can transmit data at up to 100 Mbps.
Relationship
• There is an explicit relationship between the bandwidth in hertz and the bandwidth in bits per
second. Basically, an increase in bandwidth in hertz means an increase in bandwidth in
bits per second.
3. Throughput
• Throughput refers to the amount of data that can be transferred from one
device to another in a given amount of time.
• Throughput (bits/s) = (number of successfully delivered packets × average
packet size in bits) / total time
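As a worked example of the throughput formula, consider hypothetical numbers: 10,000 packets of average size 1,500 bytes delivered successfully over 2 seconds.

```python
# Throughput = (successful packets * average packet size in bits) / total time.
# All numbers here are hypothetical, chosen for easy arithmetic.
packets = 10_000
avg_packet_size_bits = 1_500 * 8   # 1,500 bytes -> 12,000 bits
total_time_s = 2.0

throughput_bps = packets * avg_packet_size_bits / total_time_s
print(f"Throughput: {throughput_bps / 1e6:.1f} Mbps")  # Throughput: 60.0 Mbps
```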
4. Latency (Delay)
• The latency or delay defines how long it takes for an entire message to
completely arrive at the destination from the time the first bit is sent out
from the source.
• We can say that latency is made up of four components: propagation time,
transmission time, queuing time, and processing delay.
• Latency =propagation time + transmission time + queuing
time + processing delay
Propagation time (Propagation delay)
• It measures the time required for a bit to travel from the
source to the destination.
• Propagation Delay = Distance/Propagation speed
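A quick numerical sketch of this formula, using a hypothetical 12,000 km link and a propagation speed of 2 × 10⁸ m/s (a typical figure for light in optical fiber):

```python
# Propagation delay = distance / propagation speed.
distance_m = 12_000_000      # hypothetical 12,000 km link
propagation_speed = 2e8      # metres per second (typical for optical fiber)

propagation_delay_s = distance_m / propagation_speed
print(f"Propagation delay: {propagation_delay_s * 1000:.0f} ms")  # 60 ms
```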
5. Transmission time (or Transmission delay)
• It is the amount of time taken to transmit a whole packet
over a link
• Transmission Delay = Message size (bits) / Bandwidth (bps)
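The transmission-delay formula, and the full latency sum from earlier, can be sketched with hypothetical values: a 2.5 MB message over a 100 Mbps Fast Ethernet link, combined with assumed propagation, queuing, and processing delays.

```python
# Transmission delay = message size (bits) / bandwidth (bps).
message_size_bits = 2.5e6 * 8   # hypothetical 2.5 MB message -> 20,000,000 bits
bandwidth_bps = 100e6           # 100 Mbps link

transmission_delay_s = message_size_bits / bandwidth_bps

# Assumed values for the other three latency components:
propagation_s = 0.06
queuing_s = 0.01
processing_s = 0.001

# Latency = propagation + transmission + queuing + processing.
latency_s = propagation_s + transmission_delay_s + queuing_s + processing_s
print(f"Transmission delay: {transmission_delay_s:.3f} s")  # 0.200 s
print(f"Total latency: {latency_s:.3f} s")                  # 0.271 s
```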
Queuing Time
• The time needed for each intermediate or end device to hold
the message before it can be processed.
• The queuing time is not a fixed factor; it changes with the load
imposed on the network.
• When there is heavy traffic on the network, the queuing time
increases.
• An intermediate device, such as a router, queues the arrived
messages and processes them one by one.
• If there are many messages, each message will have to wait.
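The load-dependence of queuing time can be illustrated with a minimal FIFO sketch: each packet takes a fixed 1 ms to process, and packets that arrive while the device is busy must wait. The arrival times below are hypothetical.

```python
# Minimal sketch of a FIFO queue at a router: each packet needs
# service_ms of processing; a packet arriving while the router is
# busy waits until the router becomes idle.
def queuing_times(arrivals_ms, service_ms=1.0):
    waits = []
    free_at = 0.0  # time at which the router next becomes idle
    for t in arrivals_ms:
        wait = max(0.0, free_at - t)      # time spent waiting in the queue
        waits.append(wait)
        free_at = t + wait + service_ms   # router is busy until this time
    return waits

# Light traffic: packets well spaced -> no queuing time.
print(queuing_times([0, 5, 10]))      # [0.0, 0.0, 0.0]
# Heavy traffic: back-to-back arrivals -> queuing time grows.
print(queuing_times([0, 0.2, 0.4]))   # [0.0, 0.8, 1.6]
```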
6. Bandwidth and delay
• These are two performance metrics of a link.
Jitter
• Another performance issue that is related to delay is jitter.
• It is a problem if different packets of data encounter different
delays and the application using the data at the receiver site
is time-sensitive (audio and video data, for example).
• If the delay for the first packet is 20 ms, for the second is 45
ms, and for the third is 40 ms, then the real-time application
that uses the packets endures jitter.
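One common way to quantify jitter is as the variation in delay between consecutive packets. Applying that to the delays above:

```python
# Delays from the example: 20 ms, 45 ms, and 40 ms.
delays_ms = [20, 45, 40]

# Variation in delay between each pair of consecutive packets.
variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
print(variations)                        # [25, 5]
print(f"Max jitter: {max(variations)} ms")  # Max jitter: 25 ms
```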
7. • For example, let us assume that a real-time video server creates live video images
and sends them online.
• The video is digitized and packetized.
• There are only three packets, and each packet holds 10 s of video information.
• The first packet starts at 00:00:00, the second packet starts at 00:00:10, and the
third packet starts at 00:00:20.
• Also imagine that it takes 1 s (an exaggeration for simplicity) for each packet to
reach the destination (equal delay).
• The receiver can play back the first packet at 00:00:01, the second packet at
00:00:11, and the third packet at 00:00:21.
• Although there is a 1-s time difference between what the server sends and what
the client sees on the computer screen, the action is happening in real time.
• The time relationship between the packets is preserved
8. • But what happens if the packets arrive with different delays?
• For example, say the first packet arrives at 00:00:01 (1-s delay), the second arrives
at 00:00:15 (5-s delay), and the third arrives at 00:00:27 (7-s delay). If the receiver
starts playing the first packet at 00:00:01, it will finish at 00:00:11. However, the
next packet has not yet arrived; it arrives 4 s later.
• There is a gap between the first and second packets and between the second and
the third as the video is viewed at the remote site. This phenomenon is called
jitter.
• The figure below shows the situation.
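The two scenarios above (equal delays versus unequal delays) can be sketched in a few lines: packets are sent every 10 s, each carries 10 s of video, and the receiver plays each packet as soon as it arrives or as soon as the previous one finishes, whichever is later. A gap appears whenever a packet arrives after the previous one has finished playing.

```python
# Compute the playback gaps between consecutive packets.
def playback_gaps(send_times, delays, duration=10):
    gaps = []
    playback_end = None
    for sent, delay in zip(send_times, delays):
        arrival = sent + delay
        # Play on arrival, or after the previous packet ends.
        start = arrival if playback_end is None else max(arrival, playback_end)
        if playback_end is not None:
            gaps.append(start - playback_end)
        playback_end = start + duration
    return gaps

sends = [0, 10, 20]                      # packets sent 10 s apart
print(playback_gaps(sends, [1, 1, 1]))   # equal delays -> [0, 0], no gaps
print(playback_gaps(sends, [1, 5, 7]))   # unequal delays -> [4, 2], jitter
```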
9. Timestamp
• One solution to jitter is the use of a timestamp. If each packet has a timestamp
that shows the time it was produced relative to the first (or previous) packet, then
the receiver can add this time to the time at which it starts the playback.
• In other words, the receiver knows when each packet is to be played.
• Imagine the first packet in the previous example has a timestamp of 0, the second
has a timestamp of 10, and the third has a timestamp of 20.
• If the receiver starts playing back the first packet at 00:00:08, the second will be
played at 00:00:18 and the third at 00:00:28.
• There are no gaps between the packets.
• To prevent jitter, we can time-stamp the packets and separate the arrival time from
the playback time.
• The figure below shows the situation.
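The timestamp mechanism described above amounts to one line of arithmetic: the receiver schedules each packet at (playback start of the first packet + that packet's timestamp), so the spacing between packets is preserved regardless of arrival jitter.

```python
# Schedule each packet relative to when playback of the first begins.
def playback_times(first_play_s, timestamps_s):
    return [first_play_s + ts for ts in timestamps_s]

# Timestamps from the example: 0, 10, and 20 seconds.
# Starting playback at t = 8 s (i.e. 00:00:08):
print(playback_times(8, [0, 10, 20]))   # [8, 18, 28] -> 00:00:08, 00:00:18, 00:00:28
```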