Metrics for Traffic Analysis Prevention
Richard E. Newman¹, Ira S. Moskowitz², Paul Syverson², and Andrei Serjantov³

¹ CISE Department, University of Florida, Gainesville, FL 32611-6120, USA
  nemo@cise.ufl.edu
² Center for High Assurance Computer Systems, Code 5540, Naval Research Laboratory, Washington, DC 20375, USA
  {moskowitz,syverson}@itd.nrl.navy.mil
³ University of Cambridge Computer Laboratory, Cambridge CB3 0FD, United Kingdom
  Andrei.Serjantov@cl.cam.ac.uk

Abstract. This paper considers systems for Traffic Analysis Prevention (TAP) in a theoretical model. It considers TAP based on padding and rerouting of messages and describes the effects each has on the difference between the actual and the observed traffic matrix (TM). The paper introduces an entropy-based approach to the amount of uncertainty a global passive adversary has in determining the actual TM, or alternatively, the probability that the actual TM has a property of interest. Unlike previous work, the focus is on determining the overall amount of anonymity a TAP system can provide, or the amount it can provide for a given cost in padding and rerouting, rather than on the amount of protection afforded particular communications.

1 Introduction
Previous attempts to gauge the anonymity provided by an anonymous communication system have focused on the extent to which the actions of some entity are protected by that system. For example, how well protected is the anonymity of the sender of an arbitrary message, or its recipient, or the connection of sender and recipient, etc. [11, 18]. Various ways to measure such protection have been proposed, from the classic anonymity set to cryptographic techniques [12], probabilistic measures [14], and information-theoretic measures [3, 15]. The focus of this work is a bit different from all of those. Rather than examine how well protected the actions of a particular agent (or pair of agents) are, we will examine how much protection a system provides to all its users collectively. Put too succinctly, previous work has focused on how well the system distributes available anonymity, while we focus on the amount of anonymity there is to distribute. We consider a system of N nodes wanting to send (a large number of) end-to-end encrypted messages to one another over an underlying network.¹ These N sender nodes cooperate to try to prevent the adversary from performing traffic analysis by using padding and rerouting. While fielded Traffic Analysis Prevention (TAP) systems are likely to be limited in their ability to so cooperate, padding and rerouting are commonly proposed means to counter traffic analysis [1, 2, 13, 19]. Yet there has been no theoretical analysis of how much protection is possible using padding and rerouting techniques. Our model allows assessment of upper bounds on what any system can accomplish by such means. Our central means to examine anonymous communication is the traffic matrix (TM), which represents all end-to-end message flows. One can examine the difference between observed traffic matrices and the traffic matrix of an ideal system to determine how much an adversary might gain from observing the system.
Alternatively, the difference between observations on a protected system and an unprotected system can be examined to determine the amount of protection afforded. Traffic matrices allow us to measure the communication costs of TAP methods, which gives us a potential means of comparing the costs and benefits of various TAP methods and systems. This paper uses an information-theoretic, entropy-based approach to measuring the success of a TAP system, much as Shannon used entropy to measure the success of a cryptosystem [16]. The goal of the group of nodes sending messages to one another is to make the number of possible traffic matrices (TMs) large enough, and the probability that the actual TM is determined from what is observed low enough, that the observations are essentially useless to the adversary. If the adversary has no a priori means of excluding any particular TM (which may depend on the measurement interval and the expectations of traffic), then the possible TMs are not just all TMs that are dominated by the observed TM, but all that have a rerouted TM that is dominated by the observed TM. These terms will be made precise in Subsection 2.2. Previous methods of TAP have used either rerouting or padding or both (in addition to padding messages to a constant length and payload encryption) to achieve TAP. In general, the effects of these controls are to (a) increase the total amount of traffic; (b) increase the cryptographic processing load on the involved nodes; (c) mask the true source and destination of individual messages; and (d) make the number of possible true traffic patterns very large. While traditional link encryption with padding to the link speed at the link level is perfect at concealing the true traffic patterns, it has many deficiencies. It requires that all routers in the network participate and remain secure, and that all are willing to saturate their links with apparent traffic, whether or not there is actual traffic to send.
The more efficient "Neutral TM" approach used by Newman-Wolfe and Venkatraman [8, 21] still increases traffic to around twice its original level, depending on the spatial traffic distribution [9, 20]. Onion routing [10, 5, 19] increases traffic greatly as well, by routing a packet through several (usually at least five) onion routers. One might expect this to increase the aggregate traffic by the number of onion routers the packet traverses (i.e., make the total load five times higher in this case).² This paper considers the information that is available in the static, spatial traffic information to a global passive adversary when transport-level padding and rerouting are employed.

Footnote 1: The network graph is not necessarily complete.

2 Adversary Model
As in much previous work, we assume a global passive adversary who can observe all traffic on all links between all nodes, that is, all senders, receivers, and any intermediate relay points the system may contain. Since she observes all message flows, the global passive adversary is very strong, perhaps stronger than any likely real adversary. On the other hand, she mounts no active attacks, which makes her weaker than many likely real adversaries. However, our concern is first to describe means to determine a bound on the anonymity capacity of a system, even if that bound is not likely to be reached in practice. Since we are only addressing TAP, we assume no one can track redirected messages through an intermediate node by recognizing their format or appearance. Similarly, no one is able to distinguish padding messages from "genuine" traffic. Of course, a node that is a redirection intermediary knows which incoming message correlates with which outgoing message, and nodes that generate and/or eliminate padding can recognize it locally. Our adversary is thus best thought of as having a traffic counter on all the wires between nodes. The units of traffic may be generically described as messages. If necessary, traffic may also be measured in bits. The rate at which these counters are checked governs the granularity of the picture of traffic flows that the adversary has. The degree of synchronization on those link clocks (i.e., whatever governs the frequency at which each link is checked) will also determine the granularity of the causal picture that the adversary has. For example, an adversary may be able to recognize or dismiss possible message redirections by observing the relative timing of flows into and out of a node. However, for the purposes of these initial investigations, we will consider the period of observation to be sufficient for all actual traffic, as well as dummy messages and rerouted actual traffic, to be delivered and counted.
Note that there is some degree of noise or uncertainty due to the nature of measurement of traffic: it is not instantaneous but must be measured over some period of observation (window). Both the size of the window and the window alignment will affect the measurements and their variation. This argues for decreased resolution in the measured values (e.g., the difference between 68,273 packets and 67,542 packets may be considered to be below the noise threshold in the measured system; likewise, byte counts may also only be of use up to two or three digits). Study of the levels of "noise" in the measured system and "noise" in the measurement methods is needed to make a valid estimate of the appropriate level of resolution for the measurements. This paper assumes such considerations out of the model.

Footnote 2: The actual load increase depends on the underlying network and the routes taken.

2.1 Network and Adversary Assumptions
For purposes of this paper, we make a number of assumptions.

- All nodes may send, receive, or forward traffic. Thus, we do not differentiate between senders, receivers, and virtual network elements. This is most typically true of a peer-to-peer system; however, this could also reflect communication within an anonymizing network where the outside connections are either invisible or ignored.
- All links (directed edges) have a constant fixed-bound capacity (in messages that can be sent in some unit of time). The number of messages that can be passed over any (simplex) network link is the same. Any padding or redirection a node passes over a link will reduce the number of messages it can initiate over that link.
- All link traffic counters are checked once (simultaneously).

This last assumption means that we do not capture any timing information or causal connections between message flows. Even with this simplifying assumption there is more than enough complexity in the network traffic information for an initial investigation. Further, as we have noted, a primary purpose of this work is to set out means to describe the anonymity capacity of a network. This assumption allows us to consider the temporally coarsest adversary of our model. Any temporal information that a finer adversary could use will only serve to lower such a bound. While such a coarse-grained adversary is inherently interesting and may even be realistic for some settings, obviously the study of an adversary that can take advantage of timing information is ultimately important. Such refinement of assumptions is possible within our general model, and we leave such questions for future work.

2.2 Definitions
Now we define some terms.

Traffic Matrix (TM): An N × N nonnegative integer matrix T in which cell T[i, j] holds the number of messages sent from node i to node j in the period of observation. The diagonal entries are all zero.

Domination: One traffic matrix T dominates another traffic matrix T′ iff ∀i, j ∈ [1..N], T[i, j] ≥ T′[i, j].

Neutral TM: A traffic matrix in which all of the nondiagonal values are equal. The unit neutral TM is the neutral TM in which all the nondiagonal values are ones. The magnitude of a neutral TM is the constant by which the unit TM must be multiplied to equal the neutral TM of interest.

Actual TM, Tact: The end-to-end traffic matrix, neither including dummy messages nor apparent traffic arising from rerouting through intermediate nodes; the true amount of information required to flow among the principals in the period of observation.

Observed TM, Tobs: The traffic matrix that results from treating all and only observed flows on links as reflecting genuine traffic, i.e., all padding is treated as genuine traffic and redirection is treated as multiple genuine one-hop messages.

Routes, flow assignments: If the actual traffic matrix specifies that T[i, j] messages must be sent from node i to node j in a period of time, then these messages must be routed from node i to node j either directly or indirectly. A route from node i to node j is a path in the network topology graph starting at node i and ending at node j. A flow assignment specifies, for each path used to send messages from node i to node j, how many of the messages are delivered using that path.

Link Load: The load on a (simplex) link is the sum of the number of messages delivered by the flow assignments over paths that include that link. For a flow assignment to be feasible, the load on a link must not exceed its capacity.

Total Traffic Load: The total traffic load of an N × N traffic matrix T is

L(T) = Σ_{i,j ∈ [1..N]} T[i, j],

where [1..N] is the set of integers between 1 and N, inclusive. That is, the total (or aggregate) load is just the sum of the link loads.

Feasible TM: These TMs are the only ones for which there are corresponding routes with flow assignments for which the combined flows on a given link in the graph do not exceed its capacity.

3 Observations
First, we notice that, depending upon Tobs, there are limits to what the true traffic matrix can be, no matter what TAP techniques might be used. For example, if a node A in Tobs has a total incoming flow of f_in,Tobs(A), defined as

f_in,Tobs(A) = Σ_{i=1}^{N} Tobs[i, A],

then the total incoming flow for the same node A in Tact is bounded by that same total, that is,

f_in,Tact(A) ≤ f_in,Tobs(A).

This is true because the observed incoming flow includes all of the traffic destined for A, as well as any dummy packets or redirected messages for which A is the intermediate node. For similar reasons, the outgoing flow of any node A in Tact is bounded by the observed outgoing flow of A. The topology (graph connectivity) of the network and the link capacities limit the possible traffic matrices that can be realized. As noted, feasible TMs are the only ones for which there are corresponding routes with flow assignments for which the combined flows on a given link in the graph do not exceed its capacity. Based on the limitations of the network, the set of possible traffic matrices is therefore finite (if we consider integer numbers of packets sent over a period of observation). Define the set of possible traffic matrices for a network represented by a directed graph G = <V, E> with positive integer edge³ weights w : E → N to be

T_<G,w> = {T | T is feasible in <G, w>}.

The graphs we consider are cliques, but a node A may be able to send more data to node B than the link directly from A to B can carry, by sending some of the messages through an intermediate node. Beyond the limits of the network itself, our adversary is able to observe all of the traffic on the links and, from observations over some period of time, form an observed traffic matrix, Tobs. As previously noted, since any traffic matrix T reflects the end-to-end traffic between nodes, Tobs can be thought of as reflecting the pretense that there are no messages sent indirectly, i.e., all messages arrive in one hop.
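As an illustrative sketch (our own code, not from the paper), the Section 2.2 definitions and the flow bounds above can be written directly; all function names here are our own:

```python
# Sketch of the Section 2.2 definitions and the flow bounds above.
# Traffic matrices are N x N lists of nonnegative ints, zero diagonal.

def load(T):
    """Total traffic load L(T): the sum over all cells."""
    return sum(sum(row) for row in T)

def dominates(T, T2):
    """T dominates T2 iff T[i][j] >= T2[i][j] for all i, j."""
    n = len(T)
    return all(T[i][j] >= T2[i][j] for i in range(n) for j in range(n))

def f_in(T, a):
    """Total incoming flow of node a: the column sum."""
    return sum(row[a] for row in T)

def f_out(T, a):
    """Total outgoing flow of node a: the row sum."""
    return sum(T[a])

def respects_flow_bounds(T_act, T_obs):
    """Necessary condition on the true TM: every node's actual in/out
    flow is bounded by its observed in/out flow, since observed flow
    also counts padding and redirected traffic."""
    n = len(T_act)
    return all(f_in(T_act, a) <= f_in(T_obs, a) and
               f_out(T_act, a) <= f_out(T_obs, a)
               for a in range(n))

# A small example: T_obs is a neutral TM of magnitude 4.
T_act = [[0, 3, 1],
         [0, 0, 2],
         [4, 0, 0]]
T_obs = [[0, 4, 4],
         [4, 0, 4],
         [4, 4, 0]]
```

Here T_obs dominates T_act and satisfies the per-node flow bounds, so T_act is not excluded by these observations alone.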
The observed traffic matrix further limits the set of actual traffic matrices possible, as they must be able to produce the observed traffic matrix after the modifications performed by the TAP system. For example, it is not feasible for the total traffic in the actual TM to exceed the total traffic in the observed TM. Let the set of traffic matrices compatible with an observed TM, Tobs, be defined as
T_Tobs = {T | T could produce Tobs by TAP methods}.

Note that T_Tobs ⊆ T_<G,w>, since the observed traffic matrix must be feasible, and that Tact, Tobs ∈ T_Tobs. We now describe the effect of TAP methods in determining T_Tobs. Further details on the TAP transforms themselves are presented in Section 6. A unit padding transform reflects adding a single padding message on a single link and results in incrementing, by one, the value of exactly one cell of a traffic matrix. A unit rerouting transform reflects redirecting a single message via a single other node. So, rerouting one unit of traffic from A to B via C causes the traffic from A to B to decrease by one unit, and the traffic from A to C and from C to B each to increase by one unit. This causes the traffic in the new TM to remain constant for A's row and for B's column, but to increase by one unit for C's column and C's row (C now receives and sends one more unit of traffic than before). The total load therefore increases by one unit also (two unit increases and one unit decrease, for a net increase of one unit; we replaced one message with two).

Footnote 3: Edge weights can be considered the number of packets or the number of bytes that a link can transfer over the period of observation. We can also consider node capacities, which could represent the packet-switching capacity of each node, but for now we consider this to be infinite and therefore not a limitation.

We say that a traffic matrix T is P-derivable from traffic matrix T′ iff T is the result of zero or more unit padding transforms on T′. We say that a traffic matrix T is k-P-derivable from traffic matrix T′ iff T is the result of exactly k unit padding transforms on T′. This is true iff

∀i, j: T′[i, j] ≤ T[i, j] and L(T) = L(T′) + k.
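The two unit transforms and the k-P-derivability test can be sketched as follows (our own code, not the paper's; indices are 0-based):

```python
import copy

def load(T):
    """Total traffic load L(T)."""
    return sum(sum(row) for row in T)

def unit_pad(T, i, j):
    """Unit padding transform: one dummy message on link (i, j)."""
    T2 = copy.deepcopy(T)
    T2[i][j] += 1
    return T2

def unit_reroute(T, a, b, c):
    """Unit rerouting transform: one a->b message goes via c instead.
    a->b drops by one; a->c and c->b each rise by one (net load +1)."""
    assert T[a][b] >= 1, "need a real message to reroute"
    T2 = copy.deepcopy(T)
    T2[a][b] -= 1
    T2[a][c] += 1
    T2[c][b] += 1
    return T2

def k_p_derivable(T, T_prime, k):
    """T is k-P-derivable from T_prime iff every cell of T_prime is
    at most the matching cell of T and L(T) = L(T_prime) + k."""
    n = len(T)
    return (all(T_prime[i][j] <= T[i][j]
                for i in range(n) for j in range(n))
            and load(T) == load(T_prime) + k)

T = [[0, 2, 0],
     [1, 0, 0],
     [0, 0, 0]]
padded = unit_pad(T, 2, 0)
rerouted = unit_reroute(T, 0, 1, 2)  # one 0->1 message goes via node 2
```

Note that after the reroute, the sender's row sum and the receiver's column sum are unchanged, exactly as the text describes.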
Note that the set of P-derivable traffic matrices from some TM T is the union, for k = 0 to L(T), of the sets of k-P-derivable TMs relative to T. We say that a traffic matrix T is R-derivable from another traffic matrix T′ iff T is the result of zero or more unit rerouting transforms on T′. We say that a traffic matrix T is k-R-derivable from another traffic matrix T′ iff T is the result of exactly k unit rerouting transforms on T′. The set of R-derivable traffic matrices from some TM T is the union, for k = 0 to L(T), of the sets of k-R-derivable TMs relative to T. We say that a traffic matrix T is (R,P)-derivable from another traffic matrix T′ iff T is the result of zero or more unit padding or rerouting transforms on T′. We say that a traffic matrix T is k-(R,P)-derivable from another traffic matrix T′ iff T is the result of exactly k unit padding or rerouting transforms on T′. The set of (R,P)-derivable traffic matrices from some TM T is the union, for k = 0 to L(T), of the sets of k-(R,P)-derivable TMs relative to T. In general, padding and rerouting transformations may be described as the addition of specific unit transformation matrices to a given TM. This will be explored further in Section 6. Note that, in most cases, padding and rerouting operations commute.⁴

4 Problem Statement
This section defines the problems considered. In this model, the "sender" consists of all of the N nodes listed in the traffic matrix, which cooperate to try to disguise an actual traffic matrix Tact by performing TAP operations to produce the traffic matrix Tobs observed by the global, passive adversary. This aggregate sender must deliver all of the messages required by Tact in the period of observation, and we assume there is sufficient time to do this.
Footnote 4: If a padding message may then be rerouted, then padding first offers more options for the subsequent rerouting. We do not consider this useful, and limit rerouting to actual traffic.

4.1 Sender

The aggregate sender is given the actual TM, Tact, and must produce the set of TAP transformations on it to create the observed TM, Tobs. The sender may be under some cost constraints (in which case the goal is to create the greatest amount of uncertainty in the adversary possible within the given budget), or may be required to create an observed TM, Tobs, that meets some goal of obfuscation (at a minimum cost).

4.2 Adversary

The adversary may generically ask the following question: "Is Tact ∈ 𝒯?", where 𝒯 ⊆ T_<G,w> is some set of TMs of interest to the adversary. Note that 𝒯 may be a singleton, which means that the adversary has some particular TM in which he has interest, and through a series of such questions, the adversary can attempt to determine the actual TM, Tact, exactly. More often, the adversary may not care about some of the communicating pairs, and may not even care about the detailed transmission rates between the pairs of interest. In general, the property 𝒯 can be given as the union of sets of the form

𝒯_k = {T | α_{i,j,k} ≤ T[i, j] ≤ β_{i,j,k}, ∀i, j = 1, 2, ..., N},

i.e., a range set, in which the values of the cells of the TM are constrained to lie within some range (the bound symbols α and β are reconstructed here from context). So 𝒯 = ∪_k 𝒯_k. Observe that the set of these range sets is closed under set intersection, that is, the intersection of two range sets results in another range set.⁵

Footnote 5: These kinds of properties may be of interest to adversaries exercising a network covert channel.

It may be more apropos to rephrase the question as, "What is the probability that the actual TM has the property of interest, given the observed TM?", i.e., Pr(Tact ∈ 𝒯 | Tobs), since under most circumstances, whether or not Tact is in 𝒯 cannot be known with certainty. Then

Pr(Tact ∈ 𝒯 | Tobs) = Σ_{T ∈ 𝒯} Pr(T | Tobs).

Absent a priori information giving one possible TM (i.e., one consistent with the observations) a greater likelihood of having been the actual TM, we can give all those TMs consistent with the observed TM equal weight, so that

Pr(T | Tobs) = 1 / |T_Tobs|.

This is the maximum entropy result, with

Pr(Tact ∈ 𝒯 | Tobs) = |T_Tobs ∩ 𝒯| / |T_Tobs|.

Adversary possession of a priori information may reduce anonymity in two ways.

1. She may limit T_Tobs further by using knowledge about this instance of Tact,⁶ e.g., "At least one of the nodes did not send any real traffic." Such constraints on T_Tobs may be expressed by using the same techniques as we used to express matrices of interest, 𝒯.
2. She may alter the relative probabilities of the TMs within T_Tobs (which leads to submaximal entropy). Examples of this include the adversary possessing a probability distribution over the total amount of traffic in Tact, or over the total cost which the sender is prepared to incur to disguise the actual traffic matrices (see Section 5.2). Indeed, the adversary may even possess a probability distribution over the Tact that she expects will occur.

So, in the end, it is not necessary to make the observed traffic matrix, Tobs, neutral; it is enough to disguise Tact so that the adversary's knowledge of its properties of interest is sufficiently uncertain.

5 Traffic Analysis Prevention Metrics
This section considers the degree to which the sender can make the adversary uncertain regarding the nature of Tact. First, it considers the costs of performing TAP operations, then the strategies the sender may have, and the effects of these on the adversary's knowledge. Finally, the effects of a priori knowledge by the adversary are evaluated.

5.1 Cost Metrics

Rerouting and padding are not free operations. Unit padding adds one more message from some source to some destination in the period (increasing exactly that cell by one unit and no others). Unit rerouting from node A to node B via node C decreases the traffic from A to B by one unit, but increases the traffic from A to C and from C to B, without changing any other cells. Hence in both cases, in this model, they increase the total load by one unit of traffic. The simplest cost metric for disguising traffic is just the change in the total traffic load from the actual to the observed TM. Let T1 and T2 be two traffic matrices, and define the distance between them to be

d(T1, T2) = |L(T1) − L(T2)|.

In the simplest case, the cost is just the distance as defined above. In general, the cost may be nonlinear in the distance, and may be different for padding than for rerouting.⁷ For the remainder of this paper, we will only consider the simple case.

Footnote 6: We can then estimate the amount of information that the observations give to the adversary in terms of the relative entropy from the knowledge to the observations.

Footnote 7: Padding and rerouting costs may not be the same if node computation is considered. It may be much easier for a node that receives a dummy message to decode

5.2 Sender Strategies
Making changes to the actual traffic matrix by rerouting and padding will increase the total traffic load in the system, and the sender may not wish to incur large costs. Sender strategies may be thought of in terms of two factors. The first factor is whether a neutral traffic matrix is sent every period, or whether a non-neutral observed traffic matrix is acceptable. The second factor is whether or not the sender adapts the costs it is willing to incur to the actual traffic it must send. These are not unrelated, as is explained below. If the observed traffic matrix is always made neutral, then the sender must use a total load sufficient to handle the peak amount of traffic expected (modulo traffic shaping⁸), and must always reroute and pad to that level. Often, the total traffic load of the observed traffic matrix will be many times larger than the total traffic load of the actual traffic matrix, and the sender will just have to live with these costs. The advantage of this is that the adversary never learns anything; the traffic always appears to be uniform and the rates never vary. If the set of actual TMs to be sent is known to the sender in advance, then an adaptive strategy may be used to minimize the total cost. The "peaks" in the actual TMs are flattened using rerouting. Then the maximum matrix cell value over all of the TMs resulting from rerouting is chosen as the amplitude of the neutral TMs to send for that sequence. Mechanisms for dynamically handling changing load requirements are considered in Venkatraman and Newman-Wolfe [21]. Here, the sender may change the uniform level in the neutral traffic matrix, adjusting it higher when there are more data to send and lower when there are fewer. This will reduce the costs of disguising the actual traffic patterns.
However, the sender should avoid making frequent adjustments of small granularity, in order to avoid providing the adversary with too much information about the total actual load.⁹

If non-neutral traffic matrices are acceptable, the sender can either set a cost target and try to maximize the adversary's uncertainty, or set an uncertainty target and try to minimize the cost of reaching it. Regardless, the goal is to keep the amortized cost of sufficiently disguising the actual TMs reasonable. In the former case, a non-adaptive strategy can be employed, in the sense that the cost will not depend on the actual traffic matrix. If the sender always uses the same cost for each period, and the adversary knows this cost, then this severely reduces the entropy for the adversary. Here, the adversary need only consider the intersection of a hypersphere and T_Tobs. That is, the adversary knows that

Tact ∈ {T ∈ T_Tobs | d(T, Tobs) = c},

Footnote 7 (continued): the encrypted header and determine that the remainder of the message is to be discarded, than it is for the node to decrypt and re-encrypt the message body, create an appropriate TAP header and network header, then form the forwarded message and send it on to the true destination.

Footnote 8: In traditional networking, traffic shaping is a form of flow control intended to reduce the burstiness and unpredictability of the traffic that the sources inject into the network, so as to increase efficiency and QoS [6, 4, 17]. In TAP networks it is used to hide traffic flow information [1].

Footnote 9: A "Pump"-type [7] approach may be taken to lessen the leaked information.
where c is the cost (known to the adversary) that the sender incurs each period. A better non-adaptive strategy is to pick a distribution for the costs for each period, then generate random costs from that distribution. Once a cost is picked, the entropy associated with the observed TM (with respect to the properties of interest, if these are known by the sender) can be maximized. The adversary then has to consider the intersection of a ball with T_Tobs rather than a hypersphere. In this fashion, the mean cost per period can be estimated, and yet the adversary has greater uncertainty about the possible actual TMs that lead to the observations. When the total traffic is very low, the sender may be willing to incur a greater cost to pad the traffic to an acceptably high level, and when the actual TM already has a high entropy (for the adversary), it may be that no adjustments to it need to be made (e.g., when it is already a neutral TM with a reasonably high total traffic load). If the cost the sender is willing to incur can depend on the actual traffic, then the sender can set a goal of some minimum threshold of uncertainty on the part of the adversary, as measured by the entropy of the observed traffic matrix, then try to achieve that entropy with minimum cost. If the sender has to live within a budget, then some average cost per period may be set as a goal, and the sender can try to maximize entropy within this average cost constraint. Here, there may be two variants:

- Offline: the sender knows what the traffic is going to be for many periods ahead of time, and can pick a cost for each period that balances the entropy that can be achieved for each period within its cost;
- Online: the sender only knows the amortized cost goal and the history of traffic and costs up until the current time.

In the offline case, the sender can achieve greater entropy if most of the actual TMs in the sequence have high entropy to begin with, or avoid having some observed TMs at the end of the sequence with low entropy because the budget was exhausted too early in the sequence. Online computation will suffer from these possibilities, but the goals can be changed dynamically given the history and remaining budget, if there is any reason to believe that the future actual TMs can be predicted from the recent past TMs.

5.3 Sender and Adversary Knowledge
In the strongest case, the sender may know the sequence of Tact(i)'s, or at least the set (but not the order), ahead of time, and be able to plan how to disguise that particular set of actual TMs. A weaker assumption is that the sender knows the probability distribution for the actual TMs (or for properties they possess) ahead of time, and the actual sequence is close to this (as defined by some error metric). What the adversary sees, and what the adversary knows a priori, determine what the adversary learns from a sequence of observations. For example, if the sender always sends neutral TMs of the same magnitude, the adversary learns very little (only a bound on the total load), but the sender must accept whatever cost is needed to arrive at the neutral TM that is always sent. On the other hand, if the sender sends different TMs each period, then what the adversary learns can depend on what the sender had to disguise and the adversary's knowledge of that. For example, if the sender always has the same actual TM, but disguises it differently each time, and the adversary knows this, then that adversary can take the intersection of all of the sets of TMs consistent with the observed TMs over time to reduce uncertainty over what was actually sent:

Tact ∈ ∩_{i=1}^{k} T_Tobs(i),

where Tobs(i) is the ith observed TM. The entropy (if all TMs are equally probable) is then

S = lg(|∩_{i=1}^{k} T_Tobs(i)|),

where lg is shorthand for log2. Other adversary information (on sender cost budgets or expected traffic pattern properties) may further limit the entropy. If the sender always uses the same cost c for each period, and the adversary knows this cost, then as stated in Section 5.2, the adversary knows that

Tact ∈ {T ∈ T_Tobs | d(T, Tobs) = c}.

The entropy is then

S = lg(|{T ∈ T_Tobs | d(T, Tobs) = c}|).
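A toy sketch of these entropy calculations (our own illustration; real candidate sets would contain traffic matrices, while opaque labels stand in for them here):

```python
from math import log2

def entropy_uniform(candidates):
    """S = lg|candidates| when all candidate TMs are equally probable."""
    return log2(len(candidates))

# Candidate sets T_Tobs(i) for two observation periods, with TMs
# stood in for by opaque labels.
period1 = {"T1", "T2", "T3", "T4"}
period2 = {"T2", "T3", "T5", "T6"}

# Intersecting the observation-compatible sets over time can only
# shrink the candidate set, so the adversary's entropy never grows.
both = period1 & period2
```

Here the intersection halves the candidate set, dropping the entropy from lg 4 = 2 bits to lg 2 = 1 bit.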
If the sender has different actual TMs each period, and has a cost distribution that is randomly applied (and the adversary knows what it is), then the adversary can determine the probability for each T ∈ T_Tobs according to d(T, Tobs). Let

S_c(Tobs) = {T ∈ T_<G,w> | d(T, Tobs) = c}

be the hypersphere at distance c from Tobs of feasible traffic matrices for a graph G. Let

P_c(Tobs) = {T ∈ T_Tobs | d(T, Tobs) = c} = T_Tobs ∩ S_c(Tobs)

be the intersection of the hypersphere at distance c from Tobs and the TMs from which Tobs can be (R,P)-derived, T_Tobs. Let

U = {(c, p_c)}

be the sender's probability distribution for costs (i.e., cost c is incurred with probability p_c). Of course this distribution is dependent on how we do our TAP, and should be considered a dynamic distribution. So

Σ_{c=0}^{∞} p_c = 1.
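The cost distribution U can be sketched as a mapping from cost to probability; drawing a random per-period cost from it follows the non-adaptive randomized strategy of Section 5.2 (the distribution values and names here are our own, purely for illustration):

```python
import random

# Hypothetical sender cost distribution U = {(c, p_c)}: cost c is
# incurred in a period with probability p_c.
U = {0: 0.25, 2: 0.5, 5: 0.25}

def is_distribution(u):
    """Check the normalization condition: the p_c sum to one."""
    return abs(sum(u.values()) - 1.0) < 1e-9

def draw_cost(u, rng=random):
    """Generate a random cost for the next period from U."""
    costs, probs = zip(*sorted(u.items()))
    return rng.choices(costs, weights=probs, k=1)[0]
```

The sender draws a fresh cost each period, so the adversary sees costs scattered across a ball around Tobs rather than a single hypersphere.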
Then the attacker can infer that

prob(T | Tobs, U) = p_c / |P_c(Tobs)| for T ∈ P_c(Tobs),¹⁰

so

Σ_{T ∈ P_c(Tobs)} prob(T | Tobs, U) = p_c.

If the sender adapts the cost to the actual traffic matrix, but still has an amortized cost-per-period goal that the adversary knows, then it may still be possible for the adversary to assign probabilities to the TMs in T_Tobs based on assumptions (or knowledge) of the nature of the distribution of the actual TMs.

6 Transforms
This section formally describes the two types of TAP method considered in this paper: padding and rerouting.

6.1 Padding

If we limit the TAP method to padding only, then every element of $T_{act}$ is pointwise bounded by the corresponding element of $T_{obs}$:

$$T_{act}[i,j] \le T_{obs}[i,j].$$
In fact,

$$T_{obs} = T_{act} + P,$$

where $P$ is a traffic matrix (i.e., it is nonnegative) representing the pad traffic added to the true traffic in $T_{act}$.

6.2 Rerouting
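Under padding alone, the preimages of an observed TM are exactly the matrices pointwise bounded by it. A small brute-force sketch (our own helper, assuming small integer entries and ignoring the zero diagonal):

```python
from itertools import product

def padding_preimages(t_obs):
    """All Tact with Tobs = Tact + P for some nonnegative pad matrix P,
    i.e. all matrices pointwise bounded by Tobs (off-diagonal cells)."""
    n = len(t_obs)
    cells = [(i, j) for i in range(n) for j in range(n) if i != j]
    ranges = [range(t_obs[i][j] + 1) for (i, j) in cells]
    out = []
    for vals in product(*ranges):
        t = [[0] * n for _ in range(n)]
        for (i, j), v in zip(cells, vals):
            t[i][j] = v
        out.append(t)
    return out

t_obs = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(len(padding_preimages(t_obs)))  # 64, i.e. 2^6, Tobs itself included
```

With six off-diagonal cells each holding 0 or 1 messages, the count $2^6$ anticipates the padding term in the worked example of Section 7.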
If the TAP method is limited to rerouting alone, then the true traffic matrix must be a preimage of the apparent traffic matrix under transformation by some rerouting quantities. Rerouting effects will be represented by a rerouting difference matrix, $D_r$, that describes the change in traffic due to rerouting, so that
$^{10}$ There is a little hair here. The probability distribution may have a long tail (i.e., large $c$'s have nonzero $p_c$'s), but for a particular $T_{obs}$, there is a maximum possible distance for TMs in $P_c(T_{obs})$. The adversary must normalize the distribution over the set of possible costs to account for this.

$$T_{obs} = T_{act} + D_r.$$

Note that $D_r$ may have negative elements. For distinct nodes $A, B, C \in [1..N]$ we define the unit reroute matrix as follows. The unit reroute matrix $U_{A,B,C}$ for rerouting one unit of traffic from $A$ to $C$ via $B$ is the $N \times N$ matrix consisting of all zeros except that $U_{A,B,C}[A,C] = -1$, representing a unit decrease in the traffic from $A$ to $C$ due to rerouting, and $U_{A,B,C}[A,B] = U_{A,B,C}[B,C] = 1$, representing a unit increase in the traffic from $A$ to $B$ and from $B$ to $C$ due to rerouting. That is,

$$U_{A,B,C}[i,j] = \begin{cases} -1 & \text{if } i = A \wedge j = C, \\ 1 & \text{if } (i = A \wedge j = B) \vee (i = B \wedge j = C), \\ 0 & \text{otherwise.} \end{cases}$$

The unit reroute matrix $U_{A,B,C}$ has row and column sums equal to zero for all rows and columns except for the intermediate node's:
$$\sum_{j=1}^{N} U_{A,B,C}[i,j] = 0 \quad \forall i \in [1..N],\ i \ne B, \qquad \sum_{i=1}^{N} U_{A,B,C}[i,j] = 0 \quad \forall j \in [1..N],\ j \ne B.$$
For the intermediate node, $B$, the row and column sums are each equal to one:

$$\sum_{i=1}^{N} U_{A,B,C}[i,B] = 1, \qquad \sum_{j=1}^{N} U_{A,B,C}[B,j] = 1.$$

The total change in the traffic load due to a unit reroute is thus one. Reroute quantities may be represented by a 3-dimensional array, $r[A,B,C]$, indicating the number of packets rerouted from source $A$ via intermediate node $B$ to destination $C$. Note that the reroute quantities $r[A,A,A]$, $r[A,A,B]$, and $r[A,B,B]$ are all zero, as they represent either self-communication or rerouting via either the source or the destination node itself. From the reroute quantities and the unit reroute matrices, we may compute the rerouting difference matrix, $D_r$, which represents the net rerouting effects for all rerouting specified by $r$ simultaneously. If $k$ units of traffic are rerouted from $A$ to $C$ via $B$, then a contribution of $k \cdot U_{A,B,C}$ is made by these rerouted packets to $D_r$. Then the matrix representing the net difference due to rerouting is just the elementwise sum of the weighted unit reroute matrices,

$$D_r = \sum_{A,B,C \in [1..N]} r[A,B,C] \cdot U_{A,B,C}.$$

Any rerouting difference matrix $D_r$ of a nonnegative $r$ must have a nonnegative sum over all its elements (or aggregate traffic load); in fact, its aggregate equals the total amount of rerouting performed.
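This construction can be sketched directly (a minimal illustration; node indices are 0-based here, and `r` is a hypothetical dictionary of reroute quantities):

```python
def unit_reroute(A, B, C, N):
    """Unit reroute matrix U_{A,B,C}: one unit of A->C traffic
    rerouted via intermediate node B (0-indexed nodes)."""
    U = [[0] * N for _ in range(N)]
    U[A][C] -= 1            # one fewer direct A->C message
    U[A][B] += 1            # one more A->B message
    U[B][C] += 1            # one more B->C message
    return U

def reroute_difference(r, N):
    """D_r = sum over (A,B,C) of r[A,B,C] * U_{A,B,C}."""
    D = [[0] * N for _ in range(N)]
    for (A, B, C), k in r.items():
        U = unit_reroute(A, B, C, N)
        for i in range(N):
            for j in range(N):
                D[i][j] += k * U[i][j]
    return D

N = 4
U = unit_reroute(0, 1, 2, N)
# Row and column sums vanish except at the intermediate node B=1:
print([sum(U[i]) for i in range(N)])                 # [0, 1, 0, 0]
print([sum(row[j] for row in U) for j in range(N)])  # [0, 1, 0, 0]

r = {(0, 1, 2): 2, (3, 2, 0): 1}   # hypothetical reroute quantities
D = reroute_difference(r, N)
# Aggregate load increase equals total amount of rerouting:
print(sum(map(sum, D)), sum(r.values()))  # 3 3
```

The two printed sum vectors verify the row- and column-sum properties stated above, and the final line checks the aggregate-load identity that follows.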
$$\sum_{i=1}^{N} \sum_{j=1}^{N} D_r[i,j] = \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{N} r[i,j,k].$$

Since each unit reroute matrix represents a unit increase in the total traffic load, it is clear that the total increase in the aggregate traffic load is equal to the total amount of rerouting performed.

6.3 Discussion
Both padding and rerouting cause a net increase in the resultant TM. Thus, for a TM $T$ to be a preimage of an observed TM, $T_{obs}$, its total load is bounded above by the total load of the observed TM:

$$L(T) \le L(T_{obs}).$$
Furthermore, it may be noted that for both transforms, the row and column totals either remain the same or increase. Therefore,
$$\sum_{j=1}^{N} T[i,j] \le \sum_{j=1}^{N} T_{obs}[i,j] \quad \forall i \in [1..N], \qquad \sum_{i=1}^{N} T[i,j] \le \sum_{i=1}^{N} T_{obs}[i,j] \quad \forall j \in [1..N],$$

for any $T \in \mathcal{T}_{T_{obs}}$. An arbitrary $N \times N$ matrix whose sum of elements is nonnegative may not be realizable as a rerouting difference matrix. There may be negative elements in the rerouting difference matrix, so the true traffic matrix $T_{act}$ is not constrained to be pointwise bounded by $T_{obs}$, as it is when only padding is used. However, the row and column traffic bounds and the constraints on the rerouting difference matrices do limit the set of traffic matrices that could give rise to an observed TM. This in turn means that for some TMs, the conditional probability will be zero for a given $T_{obs}$ even if the aggregate traffic bound, or even the row and column traffic constraints, are satisfied. The issue now is the degree to which the uncertainty that can be created by rerouting and padding is adequate to mask the true TM. This is in effect represented by the entropy.

7 Examples
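The row and column conditions above are necessary but, as just noted, not sufficient; they can still be checked mechanically to prune candidate preimages. A sketch (the function name is our own):

```python
def could_be_preimage(T, T_obs):
    """Necessary conditions for T in T_{Tobs}: every row total and
    every column total of T is bounded by the corresponding total of
    Tobs, since padding and rerouting never decrease these totals."""
    n = len(T_obs)
    rows = all(sum(T[i]) <= sum(T_obs[i]) for i in range(n))
    cols = all(sum(row[j] for row in T) <= sum(row[j] for row in T_obs)
               for j in range(n))
    return rows and cols

T_obs = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(could_be_preimage([[0, 2, 0], [1, 0, 1], [1, 0, 0]], T_obs))  # True
print(could_be_preimage([[0, 3, 0], [0, 0, 0], [0, 0, 0]], T_obs))  # False
```

The second candidate is rejected because its first-row total (3) exceeds that of $T_{obs}$ (2), even though its aggregate load does not.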
Consider a simple example: the attacker observes 3 nodes, each sending 1 message to each of the others but, of course, none to themselves. She knows nothing about the padding or rerouting policies of these nodes. Let us see what level of anonymity this gives us. The observed matrix is:

$$T_{obs} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$

The rows (columns) represent a message leaving (going to) nodes $A$, $B$, or $C$, respectively. We now calculate the set of $T_{act}$ which could have resulted in the above $T_{obs}$ after having been subjected to padding or rerouting. We start by considering rerouting. There are six possible traffic matrices that can be rerouted into $T_{obs}$. Consider

$$T_1 = \begin{pmatrix} 0 & 2 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.$$

If we take one message that was sent from $A$ to $B$, and redirect that message via the intermediary node $C$, our new traffic matrix is just $T_{obs}$. Thus, we see that rerouting can hide the true traffic pattern, which is $T_1$, by making the traffic pattern look like $T_{obs}$. In fact, there are five more traffic matrices which can be disguised to look like $T_{obs}$ by rerouting one message. Those traffic matrices $T_2, \ldots, T_6$ are

$$\begin{pmatrix} 0 & 0 & 2 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 1 & 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 2 & 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{pmatrix}.$$

Now consider rerouting two messages. Observe the matrix

$$T_{2,1} = \begin{pmatrix} 0 & 2 & 0 \\ 2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

If that is the true traffic matrix, then we can disguise this traffic pattern by taking one of the messages from $B$ to $A$ and redirecting it through $C$; this results in the above traffic matrix $T_1$, and, as we noted, another rerouting at this level will result in $T_{obs}$. But notice that $T_{2,1}$ will also result in $T_3$ after rerouting one of the $A$ to $B$ messages through $C$. Therefore, we see that this second level of inverse rerouting results in three new traffic matrices. At this point we see there are $6 + 3 = 9$ possible traffic matrices that are hidden by $T_{obs}$. We have been concentrating on rerouting; let us now turn our attention to padding.
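The two reroute chains in this example can be verified with a short script (illustrative; nodes $A$, $B$, $C$ are indexed 0, 1, 2, and `T21` denotes the two-message matrix above):

```python
def unit_reroute(A, B, C, N=3):
    """U_{A,B,C}: one unit of A->C traffic rerouted via B."""
    U = [[0] * N for _ in range(N)]
    U[A][C] -= 1
    U[A][B] += 1
    U[B][C] += 1
    return U

def add(T, U):
    """Elementwise sum of two 3x3 matrices."""
    return [[T[i][j] + U[i][j] for j in range(3)] for i in range(3)]

T_obs = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
T1    = [[0, 2, 0], [1, 0, 1], [1, 0, 0]]
T21   = [[0, 2, 0], [2, 0, 0], [0, 0, 0]]

# One A->B message rerouted via C turns T1 into Tobs:
print(add(T1, unit_reroute(0, 2, 1)) == T_obs)   # True
# One B->A message rerouted via C turns the two-message matrix into T1:
print(add(T21, unit_reroute(1, 2, 0)) == T1)     # True
```

Chaining the two unit reroutes carries the two-message matrix all the way to $T_{obs}$, exactly as argued in the text.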
The traffic after the padding has been applied must equal $T_{obs}$, so each link can be padded by at most 1 message. This gives us six entries in the matrix, with the freedom of one bit for each entry. This results in $2^6$ possible traffic matrices. Since we count $T_{obs}$ itself as a possible traffic matrix, this gives us $2^6 - 1$ additional traffic matrices. So far, we have 1 traffic matrix if we count $T_{obs}$, another $2^6 - 1$ by counting possible traffic matrices obtained by padding, 6 by counting rerouting of 1 message, and another 3 by counting a prior rerouting. We are not done yet. Consider the six traffic matrices $T_1, \ldots, T_6$ that result from rerouting of 1 message. Each one of these may be the result of padding from a sparser traffic matrix. For example, consider $T_2$ and its lower-triangular entries that are ones. If the original traffic matrix was

$$\begin{pmatrix} 0 & 0 & 2 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

we can obtain $T_2$ by two 1-pads. In fact, we see that the entries that are one in $T_2$ give us three degrees of freedom, with one bit for each degree of freedom. This results in $2^3$ possible traffic matrices that turn into $T_2$ after the 1-pads. So as not to count $T_2$ twice, this gives us $2^3 - 1$ unique traffic matrices. The same holds for all six of the one-level rerouting traffic matrices; therefore, we have an additional $6(2^3 - 1)$ possible traffic matrices to consider. So we see that $|\mathcal{T}_{T_{obs}}| = 1 + (2^6 - 1) + 6(2^3 - 1) + 6 + 3 = 2^6 + 3(2^4 + 1) = 115$. This hides the actual traffic matrix behind a probabilistic value of $1/115$. If $T_{obs}$ were a little more exciting, say

$$\begin{pmatrix} 0 & 5 & 5 \\ 5 & 0 & 5 \\ 5 & 5 & 0 \end{pmatrix},$$

the probability of the actual traffic matrix would be much smaller, but this lower probability comes at the cost of excessive reroutes and padding. Therefore, pragmatic choices must be made, as is usually the case, when one wishes to obfuscate one's true business on a network.

8 Conclusions
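The final count, and the entropy it yields under a uniform assumption, can be reproduced arithmetically:

```python
import math

# Counting from the example: Tobs itself, its 2^6 - 1 proper padding
# preimages, the 6 one-reroute preimages each with 2^3 - 1 further
# padding preimages, plus the 6 + 3 reroute preimages themselves.
total = 1 + (2**6 - 1) + 6 * (2**3 - 1) + 6 + 3
print(total)                        # 115
print(round(math.log2(total), 2))  # 6.85 bits if all preimages equally likely
```

So even this tiny observed matrix already hides its preimage behind nearly seven bits of uncertainty, if the adversary treats all 115 candidates as equally probable.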
This paper represents a step in the direction of precisely defining the amount of success a TAP system has in hiding the nature of the actual traffic matrix from a global, passive adversary. Padding and rerouting are considered, with observations on the effects each has on the difference between the actual and the observed TM. The paper introduces an entropy-based approach to the amount of uncertainty the adversary has in determining the actual TM, or alternatively, the probability that the actual TM has a property of interest. If the sender has no cost constraints, then it may adopt a strategy of transmitting neutral TMs, providing the adversary with minimal information. If the sender does have cost constraints, then it may not always be able to send neutral TMs, so it must use other approaches. The goal may be to maintain a certain cost distribution and to maximize the adversary's uncertainty within that budget, or it may be to achieve a minimum degree of uncertainty in the adversary while minimizing the cost of doing so.

Acknowledgements
We thank the anonymous reviewers for helpful comments and suggestions. Andrei Serjantov acknowledges the support of EPSRC research grant GR/N24872 (Wide Area Programming) and the EC FET-GC IST-2001-33234 PEPITO project. Ira Moskowitz, Richard Newman, and Paul Syverson were supported by ONR.

References
1. Adam Back, Ulf Möller, and Anton Stiglic. Traffic analysis attacks and trade-offs in anonymity providing systems. In Ira S. Moskowitz, editor, Information Hiding, 4th International Workshop (IH 2001), pages 245-257. Springer-Verlag, LNCS 2137, 2001.
2. O. Berthold and H. Langos. Dummy traffic against long term intersection attacks. In Paul Syverson and Roger Dingledine, editors, Privacy Enhancing Technologies (PET 2002). Springer-Verlag, LNCS 2482, April 2002.
3. Claudia Diaz, Stefaan Seys, Joris Claessens, and Bart Preneel. Towards measuring anonymity. In Paul Syverson and Roger Dingledine, editors, Privacy Enhancing Technologies (PET 2002). Springer-Verlag, LNCS 2482, April 2002.
4. Leonidas Georgiadis, Roch Guerin, Vinod Peris, and Kumar N. Sivarajan. Efficient network QoS provisioning based on per node traffic shaping. IEEE/ACM Transactions on Networking, 4(4):482-501, 1996.
5. D. Goldschlag, M. Reed, and P. Syverson. Hiding routing information. In Ross Anderson, editor, Information Hiding, First International Workshop, pages 137-150. Springer-Verlag, LNCS 1174, May 1996.
6. F. Halsall. Data Communications, Computer Networks, and Open Systems. Addison-Wesley, 1992.
7. Myong H. Kang, Ira S. Moskowitz, and Daniel C. Lee. A network Pump. IEEE Transactions on Software Engineering, 22(5):329-338, 1996.
8. R. E. Newman-Wolfe and B. R. Venkatraman. High level prevention of traffic analysis. In Proc. IEEE/ACM Seventh Annual Computer Security Applications Conference, pages 102-109, San Antonio, TX, Dec. 2-6, 1991. IEEE CS Press.
9. R. E. Newman-Wolfe and B. R. Venkatraman. Performance analysis of a method for high level prevention of traffic analysis. In Proc. IEEE/ACM Eighth Annual Computer Security Applications Conference, pages 123-130, San Antonio, TX, Nov. 30-Dec. 4, 1992. IEEE CS Press.
10. Onion routing home page. http://www.onion-router.net.
11. Andreas Pfitzmann and Marit Köhntopp. Anonymity, unobservability and pseudonymity - a proposal for terminology. In Hannes Federrath, editor, Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Observability, pages 1-9. Springer-Verlag, LNCS 2009, July 2000.
12. Charles Rackoff and Daniel R. Simon. Cryptographic defense against traffic analysis. In ACM Symposium on Theory of Computing, pages 672-681, 1993.
13. J. Raymond. Traffic analysis: Protocols, attacks, design issues, and open problems. In Hannes Federrath, editor, Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Observability, pages 10-29. Springer-Verlag, LNCS 2009, July 2000.
14. Michael K. Reiter and Aviel D. Rubin. Crowds: Anonymity for web transactions. ACM Transactions on Information and System Security, 1(1):66-92, 1998.
15. Andrei Serjantov and George Danezis. Towards an information theoretic metric for anonymity. In Paul Syverson and Roger Dingledine, editors, Privacy Enhancing Technologies (PET 2002). Springer-Verlag, LNCS 2482, April 2002.
16. C. E. Shannon. Communication theory of secrecy systems. Bell System Technical Journal, 28:656-715, 1949.
17. W. Stallings. Data and Computer Communications (6th Ed.). Prentice-Hall, 2000.
18. Paul Syverson and Stuart Stubblebine. Group principals and the formalization of anonymity. In J. M. Wing, J. Woodcock, and J. Davies, editors, FM'99 - Formal Methods, Vol. I, pages 814-833. Springer-Verlag, LNCS 1708, March 1999.
19. Paul F. Syverson, Gene Tsudik, Michael G. Reed, and Carl E. Landwehr. Towards an analysis of onion routing security. In Hannes Federrath, editor, Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Observability, pages 96-114. Springer-Verlag, LNCS 2009, July 2000.
20. B. R. Venkatraman and R. E. Newman-Wolfe. Performance analysis of a method for high level prevention of traffic analysis using measurements from a campus network. In Proc. IEEE/ACM Tenth Annual Computer Security Applications Conference, pages 288-297, Orlando, FL, December 5-9, 1994. IEEE CS Press.
21. B. R. Venkatraman and R. E. Newman-Wolfe. Capacity estimation and auditability of network covert channels. In Proc. IEEE Symposium on Security and Privacy, pages 186-198, Oakland, CA, May 8-10, 1995. IEEE CS Press.