To develop an increase rule upon a successful transmission, observe that two factors
must be considered: first, the estimate of the number of other backlogged nodes whose
queues might have emptied during the time it took us to send our packet successfully, and
second, the potential waste of slots that might occur if the increased value of p is too small.
In general, if n backlogged nodes contended with a given node x, and x eventually sent
its packet, we can expect that some fraction of the n nodes also got their packets through.
Hence, the increase in p should at least be multiplicative. pmax is a parameter picked by
the protocol designer, and must not exceed 1 (obviously).
Thus, one possible increase rule is:
p ← min(2p, pmax).    (10.4)

Another possible rule is even simpler:

p ← pmax.    (10.5)

The second rule isn't unreasonable; in fact, under bursty traffic arrivals, it is quite
possible for a much smaller number of other nodes to continue to remain backlogged, and
in that case resetting to a fixed maximum probability would be a good idea.
For now, let’s assume that pmax = 1 and use (10.4) to explore the performance of the
protocol; one can obtain similar results with (10.5) as well.
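The decrease and increase rules can be sketched as a few small functions. This is a minimal illustration (the function names are our own, not part of WSim): on a collision a backlogged node halves p, and on a success it applies either rule (10.4) or rule (10.5).

```python
# Sketch of the backoff rules discussed above (names are illustrative).

P_MAX = 1.0  # upper bound on the transmission probability; must not exceed 1

def on_collision(p):
    """Binary exponential backoff: halve p after a collision."""
    return p / 2

def on_success_multiplicative(p, p_max=P_MAX):
    """Increase rule (10.4): double p, capped at p_max."""
    return min(2 * p, p_max)

def on_success_reset(p, p_max=P_MAX):
    """Increase rule (10.5): reset p to p_max."""
    return p_max

p = 0.125
p = on_collision(p)               # 0.0625
p = on_success_multiplicative(p)  # 0.125
p = on_success_reset(p)           # 1.0
```

Note that rule (10.4) takes several successes to climb back after a deep backoff, while rule (10.5) recovers in a single step; the text observes that both give similar results in this setting.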
10.5.1 Performance

Let's look at how this protocol works in simulation using WSim, a shared medium simulator that you will use in the lab. Running a randomized simulation with N = 6 nodes, each
generating traffic in a random fashion in such a way that in most slots many of the nodes
are backlogged, we see the following result:

Figure 10-4: For each node, the top row (blue) shows the times at which the node successfully sent a packet, while the bottom row (red) shows collisions. Observe how nodes 3 and 0 are both clobbered, getting almost no throughput compared to the other nodes. The reason is that both nodes end up with repeated collisions, and each collision halves the probability of transmitting a packet, so pretty soon both nodes are completely shut out. The bottom panel is a bar graph of each node's throughput.

Node 0 attempts 335 success 196 coll 139
Node 1 attempts 1691 success 1323 coll 367
Node 2 attempts 1678 success 1294 coll 384
Node 3 attempts 114 success 55 coll 59
Node 4 attempts 866 success 603 coll 263
Node 5 attempts 1670 success 1181 coll 489
Time 10000 attempts 6354 success 4652 util 0.47
Inter-node fairness: 0.69
Each line starting with "Node" above reports the total number of transmission
attempts from the specified node, how many of them were successes, and how
many were collisions. The line starting with "Time" reports the total number
of simulated time slots, the total number of packet attempts, the number of successful
packets (i.e., those without collisions), and the utilization. The last line lists the fairness.
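The summary numbers can be reproduced from the per-node statistics. The reported fairness value is consistent with Jain's fairness index, (Σ xᵢ)² / (n · Σ xᵢ²), computed over the per-node success counts; the text does not name the metric, so treating it as Jain's index is our assumption.

```python
# Recomputing utilization and fairness from the per-node success counts above.
# Assumption: the fairness metric is Jain's fairness index (the text does not
# name it, but this formula reproduces the reported 0.69).

successes = [196, 1323, 1294, 55, 603, 1181]  # per-node successes from the run
slots = 10000                                 # total simulated time slots

utilization = sum(successes) / slots
fairness = sum(successes) ** 2 / (len(successes) * sum(x * x for x in successes))

print(f"util {utilization:.2f}")      # util 0.47
print(f"fairness {fairness:.2f}")     # fairness 0.69
```

Jain's index is 1 when all nodes get equal throughput and approaches 1/n when a single node captures the medium, which makes the gap between 0.69 and 1.0 easy to interpret.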
A fairness of 0.69 with six nodes is actually quite poor (in fact, even a value of 0.8 would
be considered poor for N = 6). Figure 10-4 shows two rows of dots for each node: the top
row corresponds to successful transmissions, while the bottom one corresponds to collisions.
The bar graph in the bottom panel shows each node's throughput. Observe how nodes
3 and 0 get very low throughput compared to the other nodes, a sign of significant long-term
unfairness. In addition, there are long periods of time during which both of these nodes
send no packets, because each collision halves their transmission probability, and pretty
soon both nodes are made to starve, unable to extricate themselves from this situation.
Such "bad luck" tends to happen often because a node that has backed off heavily is
competing against a successful backlogged node whose p is a lot higher; hence, the
"rich get richer".

Figure 10-5: Node transmissions and collisions when backlogged vs. slot index, and each node's throughput (bottom row), when we set a lower bound on each backlogged node's transmission probability. Note the "capture effect" when some nodes hog the medium for extended periods of time, starving others. Over time, however, every node gets the same throughput (fairness is 0.99), but the long periods of inactivity while backlogged are undesirable.
How can we overcome this fairness problem? One approach is to set a lower bound on
p, something that’s a lot smaller than the reciprocal of the largest number of backlogged
nodes we expect in the network. In most networks, one can estimate such a number; for
example, we might set the lower bound to 1/128 or 1/1024.
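The fix amounts to a one-line change to the collision rule: clamp p from below so it can never decay toward zero. A minimal sketch, using 1/128 (one of the values suggested above) as the lower bound:

```python
# Backoff with a lower bound on p, the fairness fix discussed above.
# P_MIN = 1/128 is one of the example bounds from the text; the function
# names are illustrative, not WSim's API.

P_MIN = 1 / 128  # well below 1/(largest expected number of backlogged nodes)
P_MAX = 1.0

def on_collision(p, p_min=P_MIN):
    """Halve p on a collision, but never let it fall below p_min."""
    return max(p / 2, p_min)

def on_success(p, p_max=P_MAX):
    """Increase rule (10.4): double p, capped at p_max."""
    return min(2 * p, p_max)

# A heavily backed-off node can now recover: p bottoms out at 1/128
# instead of decaying toward zero after a run of collisions.
p = 1 / 64
p = on_collision(p)   # 1/128
p = on_collision(p)   # still 1/128
```

Because p is bounded below, a node that hits a streak of collisions still transmits in roughly one slot in 128, so it eventually succeeds and its p climbs back up rather than being shut out permanently.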
Setting such a bound greatly reduces the long-term unfairness (Figure 10-5) and the
corresponding simulation output is as follows:
Node 0 attempts 1516 success 1214 coll 302
Node 1 attempts 1237 success 964 coll 273
Node 2 attempts 1433 success 1218 coll 215
Node 3 attempts 1496 success 1207 coll 289