Queue management refers to the algorithms that manage the length of packet queues by dropping packets when necessary.
Passive Queue Management: In Passive Queue Management the packet drop occurs only when the buffer gets full. Ex: Drop Tail.
Active Queue Management: Active Queue Management employs preventive packet drops. It provides an implicit feedback mechanism to notify senders of the onset of congestion. Arriving packets are randomly dropped. Ex: RED.
Droptail:
In Droptail, the router accepts and forwards all arriving packets as long as buffer space is available for them. If a packet arrives and the queue is full, the incoming packet is dropped. The sender eventually detects the packet loss and shrinks its sending window. Drop-tail queues tend to penalize bursty flows and to cause global synchronization between flows.
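The tail-drop behaviour is simple enough to sketch directly. The following is a minimal Python illustration, not a production implementation; the class name and the 64-packet capacity are placeholders standing in for a router's actual buffer.

    from collections import deque

    class DropTailQueue:
        """Minimal drop-tail sketch: accept packets until the buffer is
        full, then drop every new arrival (tail drop)."""

        def __init__(self, capacity=64):
            self.capacity = capacity      # buffer size in packets (illustrative)
            self.buffer = deque()

        def enqueue(self, packet):
            if len(self.buffer) >= self.capacity:
                return False              # buffer full: arriving packet is dropped
            self.buffer.append(packet)
            return True

        def dequeue(self):
            return self.buffer.popleft() if self.buffer else None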
RED:
RED is an active queue management technique used for congestion avoidance. RED monitors the average queue size and drops (or marks, when used in conjunction with ECN) packets based on statistical probabilities. If the buffer is almost empty, all incoming packets are accepted. As the queue grows, the probability of dropping an incoming packet grows too. Once the average queue size reaches the maximum threshold, the probability reaches 1 and all incoming packets are dropped.
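RED's drop decision can be sketched as below, assuming the classic linear drop profile between a minimum and a maximum threshold on an exponentially weighted average of the queue length. The threshold values, EWMA weight, and max_p used here are illustrative defaults, not the settings of any particular router.

    import random

    class REDQueue:
        """Sketch of RED's drop decision: an EWMA of the queue length is
        compared against two thresholds, and the drop probability grows
        linearly between them."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th = min_th, max_th   # thresholds in packets
            self.max_p = max_p                          # drop probability at max_th
            self.weight = weight                        # EWMA weight
            self.avg = 0.0
            self.queue = []

        def on_arrival(self, packet):
            # Update the average queue size (exponentially weighted moving average).
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)

            if self.avg < self.min_th:
                drop_p = 0.0              # queue nearly empty: accept everything
            elif self.avg >= self.max_th:
                drop_p = 1.0              # beyond the max threshold: drop all arrivals
            else:
                drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

            if random.random() < drop_p:
                return False              # packet dropped (or marked, with ECN)
            self.queue.append(packet)
            return True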
REM:
REM is an active queue management scheme that measures congestion not by a performance measure such as loss or delay, but by a separate quantity (a link "price"). REM can achieve high utilization, small queue length, and low buffer overflow probability. Many works have used control theory to derive the stability conditions of REM without considering the feedback delay. The key idea of Random Exponential Marking (REM) is to decouple the congestion measure from performance measures such as loss, queue length, or delay.
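This decoupling can be made concrete with REM's usual price update and exponential marking function. The sketch below assumes the standard form of the algorithm, in which the price rises with the rate and queue mismatches and each arriving packet is marked (or dropped) with probability 1 - phi^(-price); all constants are illustrative.

    PHI = 1.001          # base of the exponential marking function (illustrative)
    GAMMA = 0.001        # step size for the price update (illustrative)
    ALPHA = 0.1          # weight on the queue mismatch (illustrative)
    TARGET_QUEUE = 20    # target queue length in packets (illustrative)

    def update_price(price, queue_len, input_rate, capacity):
        """One REM price update: the price rises when the input rate exceeds
        the link capacity or the queue exceeds its target, and falls otherwise."""
        mismatch = ALPHA * (queue_len - TARGET_QUEUE) + (input_rate - capacity)
        return max(0.0, price + GAMMA * mismatch)

    def mark_probability(price):
        """Each arriving packet is marked with probability 1 - PHI**(-price),
        so the marking probability grows with the link price."""
        return 1.0 - PHI ** (-price)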
Fair Queuing
In fair queuing every flow gets bandwidth proportional to its demand. The main goal of fair queuing is to allocate resources fairly by keeping a separate queue for each flow currently passing through the router. When packets are of equal size, every non-empty queue gets an equal share of the bandwidth, with the queues served in round-robin fashion and each queue itself behaving as a FIFO. If packets differ in size, flows carrying larger packets get more bandwidth than flows carrying smaller ones; such problems are addressed by variants like weighted fair queuing. Maintaining a separate queue for each flow requires the gateway or router to map each packet's source and destination address pair to the related queue on a per-packet basis.
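The per-flow, round-robin service discipline can be sketched in a few lines. The flow identifier below stands in for the source and destination address pair mentioned above; the class and method names are illustrative.

    from collections import defaultdict, deque

    class RoundRobinFairQueue:
        """Sketch of per-flow fair queuing: one FIFO queue per flow, served
        in round-robin order. With equal-size packets, each backlogged flow
        gets an equal share of the link."""

        def __init__(self):
            self.flows = defaultdict(deque)   # flow id -> FIFO of packets

        def enqueue(self, flow_id, packet):
            # The flow id stands in for the source/destination address pair
            # the router would extract from each packet header.
            self.flows[flow_id].append(packet)

        def dequeue_round(self):
            """Serve one packet from every non-empty queue (one round-robin pass)."""
            sent = []
            for flow_id in list(self.flows):
                queue = self.flows[flow_id]
                if queue:
                    sent.append((flow_id, queue.popleft()))
            return sent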
Stochastic Fair Queuing
Stochastic Fair Queuing uses a hash algorithm to divide the traffic over a limited number of queues. Because of this hashing, multiple sessions might end up in the same bucket. SFQ perturbs its hash function periodically, so that any two colliding sessions collide only for a small number of seconds. Among the algorithms discussed, Stochastic Fair Queuing is the best at providing satisfactory bandwidth to the legitimate users (TCP and UDP flows) in the network. It is called stochastic because it does not actually assign a queue to every session; instead, it divides traffic over a restricted number of queues using a hash function. SFQ allocates a fairly large number of FIFO queues. Stochastic Fair Queuing (SFQ) ensures fair access to network resources and prevents a bursty flow from consuming more than its fair share. SFQ shows a lower average loss ratio and higher throughput compared to RED.
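The bucketing and periodic hash perturbation described above can be sketched as follows. The number of buckets, the perturbation interval, and the use of CRC32 as the hash are illustrative choices, not the exact parameters of any particular SFQ implementation.

    import time
    import zlib
    from collections import deque

    class StochasticFairQueue:
        """Sketch of SFQ: flows are hashed into a limited number of buckets,
        each a FIFO served round-robin. The hash is re-salted periodically so
        that two colliding flows only stay colliding for a few seconds."""

        def __init__(self, num_buckets=128, perturb_seconds=10):
            self.buckets = [deque() for _ in range(num_buckets)]
            self.perturb_seconds = perturb_seconds
            self.salt = 0
            self.last_perturb = time.monotonic()
            self.next_bucket = 0

        def _bucket_for(self, flow_id):
            # Re-salt the hash every perturb_seconds to break up collisions.
            now = time.monotonic()
            if now - self.last_perturb > self.perturb_seconds:
                self.salt += 1
                self.last_perturb = now
            key = f"{flow_id}:{self.salt}".encode()
            return zlib.crc32(key) % len(self.buckets)

        def enqueue(self, flow_id, packet):
            self.buckets[self._bucket_for(flow_id)].append(packet)

        def dequeue(self):
            # Round-robin over the buckets, skipping empty ones.
            for _ in range(len(self.buckets)):
                bucket = self.buckets[self.next_bucket]
                self.next_bucket = (self.next_bucket + 1) % len(self.buckets)
                if bucket:
                    return bucket.popleft()
            return None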