Introduction to Sensor Networks - Part 2

Introduction to Wireless Sensor Networks - Research Problems (Clustering, Routing, etc.)

Energy-Efficient Communication Protocol Architecture for Wireless Microsensor Networks (LEACH Protocol) [Heinzelman+ 2000, 2002]
– LEACH (Low-Energy Adaptive Clustering Hierarchy) is a clustering-based protocol that uses randomized rotation of the local cluster base stations to evenly distribute the energy load among the sensors in the network
– It is distributed: it requires no control information from the base station (BS), and the nodes do not need global knowledge of the network for LEACH to function
– The energy savings of LEACH are achieved by combining data compression with routing
– Key features of LEACH include:
  – Localized coordination and control of cluster set-up and operation
  – Randomized rotation of the cluster base stations (clusterheads) and of the corresponding clusters
  – Local compression of information to reduce global communication

LEACH [Heinzelman+ 2000, 2002]
– The microsensor network considered has the following characteristics:
  – The base station is fixed and located far from the sensors
  – All sensor nodes are homogeneous and energy constrained
– Communication between the sensor nodes and the base station is expensive, and no high-energy nodes are available to carry it out
– By using clusters to transmit data to the BS, only a few nodes need to transmit over large distances to the BS, while the other nodes in each cluster transmit over small distances
– LEACH achieves superior performance compared to classical clustering algorithms by using adaptive clustering and rotating clusterheads, so that the total energy load of the system is distributed among all the nodes
– By performing local computation in each cluster, the amount of data to be transmitted to the BS is reduced; therefore, a large reduction in energy dissipation is achieved, since communication is more expensive than computation
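The energy argument above can be made concrete with a first-order radio model of the kind used in the LEACH analysis, where transmission cost grows with the square of the distance. The following is a minimal sketch only: the constants E_ELEC and EPS_AMP, the message size, and the distances are illustrative assumptions, not values taken from these slides.

```python
# Minimal sketch of a first-order radio energy model; the constants and
# distances below are illustrative assumptions, not values from the slides.

E_ELEC = 50e-9      # energy per bit for the transceiver electronics (J/bit)
EPS_AMP = 100e-12   # energy per bit per m^2 for the transmit amplifier (J/bit/m^2)

def tx_energy(bits: int, distance_m: float) -> float:
    """Energy to transmit `bits` over `distance_m` (d^2 path loss)."""
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Compare: every node sends directly to a far BS vs. nodes send to a nearby
# clusterhead (CH), which forwards one compressed message to the BS.
bits, nodes = 2000, 20
d_to_bs, d_to_ch = 100.0, 10.0

direct = nodes * tx_energy(bits, d_to_bs)
clustered = (nodes - 1) * tx_energy(bits, d_to_ch) \
            + (nodes - 1) * rx_energy(bits) \
            + tx_energy(bits, d_to_bs)          # CH sends one aggregated message

print(f"direct to BS : {direct:.4e} J")
print(f"via cluster  : {clustered:.4e} J")
```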
LEACH [Heinzelman+ 2000, 2002] Algorithm Overview
– The nodes are grouped into local clusters, with one node in each cluster acting as the local base station (BS) or clusterhead (CH)
– The CHs are rotated in a random fashion among the sensors
– Local data fusion compresses the data being sent from the clusters to the BS, reducing energy dissipation and increasing the network lifetime
– Sensors elect themselves as local BSs at any given time with a certain probability, and these CHs broadcast their status to the other sensor nodes
– Each node decides which CH to join based on the minimum communication energy
– Once the clusters are formed, each CH creates a schedule for the nodes in its cluster so that the radio of each non-clusterhead node can remain turned off at all times except during its transmit time
– The CH aggregates all the data received from the nodes in its cluster before transmitting the compressed data to the BS

LEACH [Heinzelman+ 2000, 2002] Algorithm Overview
– The transmission between a CH and the BS requires high energy
– To evenly distribute energy usage among the sensor nodes, clusterheads are self-elected at different time intervals
– A node decides whether to become a CH depending on the amount of energy it has left
– The decisions to become a CH are made independently of the other nodes
– The system can determine the optimal number of CHs prior to the election procedure, based on parameters such as the network topology and the relative costs of computation vs. communication (the optimal number of CHs considered here is 5% of the nodes)
– It has been observed that nodes die in a random fashion
– No communication takes place between CHs
– Each node has the same probability of becoming a CH

LEACH [Heinzelman+ 2000, 2002] Algorithm Details
– The operation of LEACH proceeds in rounds
– Each round begins with a set-up phase (clusters are selected), followed by a steady-state phase (data transmission to the BS occurs)
1. Advertisement Phase:
– Initially, each node must decide whether to become a CH for the current round, based on the suggested percentage of CHs for the network (set prior to this phase) and the number of times the node has already acted as a CH
– A node n decides by choosing a random number between 0 and 1
– If this random number is less than the threshold T(n), the node becomes a CH for this round
– The threshold is set as follows (a code sketch of this election rule appears below):

    T(n) = P / (1 − P · (r mod 1/P))   if n ∈ G
    T(n) = 0                           otherwise

  where P = desired percentage of CHs, r = current round, and G = the set of nodes that have not been CHs in the last 1/P rounds
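A minimal sketch of the clusterhead self-election rule above, assuming P = 0.05 and a simple per-node counter of rounds since the node last served as CH; the Node class and the counter are hypothetical scaffolding for illustration, not part of the protocol description.

```python
import random

# Minimal sketch of LEACH clusterhead self-election, assuming P = 0.05 and a
# per-node counter of rounds since it last served as CH (the Node class is
# hypothetical scaffolding, not part of the protocol description).

P = 0.05  # desired fraction of clusterheads

class Node:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.rounds_since_ch = int(1 / P)  # start eligible (member of G)

    def in_G(self) -> bool:
        # G = nodes that have not been CH in the last 1/P rounds
        return self.rounds_since_ch >= int(1 / P)

    def threshold(self, r: int) -> float:
        if not self.in_G():
            return 0.0
        return P / (1 - P * (r % int(1 / P)))

    def elect(self, r: int) -> bool:
        is_ch = random.random() < self.threshold(r)
        self.rounds_since_ch = 0 if is_ch else self.rounds_since_ch + 1
        return is_ch

nodes = [Node(i) for i in range(100)]
for r in range(3):
    chs = [n.node_id for n in nodes if n.elect(r)]
    print(f"round {r}: {len(chs)} clusterheads -> {chs}")
```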
LEACH [Heinzelman+ 2000, 2002] Algorithm Details
1. Advertisement Phase (continued):
– The assumptions are that (i) each node starts with the same amount of energy and (ii) being a CH consumes roughly the same amount of energy at each node
– Each node elected as a CH broadcasts an advertisement message to the rest of the nodes
– During this "clusterhead-advertisement" phase, the non-clusterhead nodes hear the advertisements of all CHs and decide which CH to join
– A node joins the CH whose advertisement it hears with the highest signal strength
2. Cluster Set-Up Phase:
– Each node informs its clusterhead that it will be a member of the cluster
3. Schedule Creation:
– Upon receiving the join messages from its members, the CH creates a TDMA schedule that assigns each member an allowed transmission time, based on the total number of members in the cluster (a sketch of this scheduling step appears below)

LEACH [Heinzelman+ 2000, 2002] Algorithm Details
4. Data Transmission:
– Each node transmits its data to its CH according to the TDMA schedule
– The radio of each cluster member can be turned off until its allocated transmission time arrives, minimizing energy dissipation
– The CH must keep its receiver on to receive all the data
– Once all the data has been received, the CH compresses it and sends it to the BS
Multiple Clusters
– To minimize radio interference between nearby clusters, each CH chooses randomly from a list of CDMA spreading codes and informs its cluster members to transmit using that code
– The radio signals of neighboring CHs are then filtered out, avoiding corruption of the transmission
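A minimal sketch of the schedule-creation and steady-state steps, assuming a fixed-length TDMA frame divided evenly among the members that sent join messages; the frame length, slot representation, and helper names are assumptions for illustration, not values given in the slides.

```python
# Minimal sketch of TDMA schedule creation by a clusterhead, assuming a fixed
# frame length divided evenly among joined members. The frame length and the
# slot representation are illustrative assumptions.

FRAME_MS = 100.0  # assumed length of one TDMA frame in milliseconds

def build_tdma_schedule(member_ids: list[int],
                        frame_ms: float = FRAME_MS) -> dict[int, tuple[float, float]]:
    """Return {member_id: (slot_start_ms, slot_end_ms)} for one frame."""
    slot = frame_ms / len(member_ids)
    return {m: (i * slot, (i + 1) * slot) for i, m in enumerate(member_ids)}

def radio_should_be_on(member_id: int, now_ms: float,
                       schedule: dict[int, tuple[float, float]]) -> bool:
    """A member keeps its radio off except during its own slot."""
    start, end = schedule[member_id]
    t = now_ms % FRAME_MS
    return start <= t < end

schedule = build_tdma_schedule([4, 7, 12, 19])
print(schedule)
print(radio_should_be_on(7, now_ms=130.0, schedule=schedule))  # 130 % 100 = 30 -> in slot [25, 50)
```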
An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
– A distributed, randomized clustering algorithm that organizes the sensors of a wireless sensor network into clusters so as to minimize the energy used to communicate information from all nodes to the processing center
– A hierarchy of clusterheads leads to the energy savings
– In the clustered environment, the data gathered by the sensors is communicated to the data processing center through a hierarchy of clusterheads
– The processing center determines the final estimates of the parameters using the information communicated by the clusterheads
– The processing center can be a specialized device or one of the sensors
– Because sensor data is communicated over smaller distances, the energy consumed in the network is much lower than when every sensor communicates directly with the information processing center

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Single-Level Clustering Algorithm
– Each sensor becomes a clusterhead (CH) with probability p and advertises itself as a clusterhead to the sensors within its radio range; these clusterheads are called volunteer clusterheads
– This advertisement is forwarded to all sensors that are no more than k hops away from the clusterhead
– Any sensor that is not itself a clusterhead and receives such an advertisement joins the cluster of the closest clusterhead
– Any sensor that is neither a clusterhead nor has joined any cluster becomes a clusterhead itself; these are called forced clusterheads (a sketch of this election appears after the hierarchical extension below)
– Since advertisement forwarding is limited to k hops, if a sensor does not receive a CH advertisement within a time duration t (where t is the time required for data to reach the CH from any sensor k hops away), the sensor is not within k hops of any volunteer CH
– Therefore, that sensor becomes a forced clusterhead
– Each CH can transmit the aggregated information to the processing center after every t units of time, since all sensors within a cluster are at most k hops away from the CH
– The limit on the number of hops allows the CH to reschedule its transmissions
– This is a distributed algorithm and does not require clock synchronization between the sensors
– The energy consumed for the information gathered by the sensors to reach the processing center depends on the parameters p and k
– Since the objective of this work is to organize the sensors into clusters so as to minimize energy consumption, values of the parameters p and k must be found that achieve this goal

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Single-Level Clustering Algorithm
The assumptions made for deriving the optimal parameters are as follows:
– The sensors are distributed according to a homogeneous spatial Poisson process of intensity λ in 2-dimensional space
– All sensors transmit at the same power level and hence have the same radio range r
– Data exchanged between two communicating sensors not within each other's radio range is forwarded by other sensors
– A distance of d between any sensor and its CH is equivalent to d/r hops
– Each sensor uses 1 unit of energy to transmit or receive 1 unit of data
– A routing infrastructure is in place; when a sensor communicates data to another sensor, only the sensors on the routing path forward the data
– The communication environment is contention- and error-free, so sensors do not have to retransmit any data

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Hierarchical Clustering Algorithm
– This algorithm extends the previous one by allowing more than one level of clustering
– Assume there are h levels in the clustering hierarchy, with level 1 being the lowest level and level h the highest
– The sensors communicate the gathered data to level-1 clusterheads (CHs)
– The level-1 CHs aggregate this data and communicate the aggregated data to level-2 CHs, and so on
– Finally, the level-h CHs communicate the aggregated data, or estimates based on it, to the processing center

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Hierarchical Clustering Algorithm
– The cost of communicating the information from the sensors to the processing center is the energy consumed by the sensors to communicate the information to the level-1 CHs, plus the energy consumed by the level-1 CHs to communicate the aggregated data to the level-2 CHs, ..., plus the energy consumed by the level-h CHs to communicate the aggregated data to the information processing center
Algorithm Details
– The algorithm works in a bottom-up fashion
– First it elects the level-1 clusterheads, then the level-2 clusterheads, and so on

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Hierarchical Clustering Algorithm
Algorithm Details
– Level-1 clusterheads are chosen as follows:
  o Each sensor decides to become a level-1 CH with a certain probability p1 and advertises itself as a clusterhead to the sensors within its radio range
  o This advertisement is forwarded to all sensors within k1 hops of the advertising CH
  o Each sensor receiving an advertisement joins the cluster of the closest level-1 CH; the remaining sensors become forced level-1 CHs
– Level-1 CHs then elect themselves as level-2 CHs with a certain probability p2 and broadcast their decision to become a level-2 CH
– This decision is forwarded to all sensors within k2 hops

An Energy Efficient Hierarchical Clustering Algorithm for Wireless Sensor Networks [Bandyopadhyay+, 2003]
A New, Energy-Efficient, Hierarchical Clustering Algorithm
Algorithm Details
– The level-1 CHs that receive an advertisement from a level-2 CH join the cluster of the closest level-2 CH; the remaining level-1 CHs become forced level-2 CHs
– Clusterheads at levels 3, 4, 5, ..., h are chosen in a similar fashion, with probabilities p3, p4, p5, ..., ph respectively, generating a hierarchy of CHs in which any level-i CH is also a CH at levels i-1, i-2, ..., 1
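A minimal sketch of the single-level (level-1) election under the assumptions above: volunteer CHs are drawn with probability p, advertisements reach at most k hops, and any sensor left uncovered becomes a forced CH. The adjacency-list graph and helper names are illustrative assumptions; the hierarchical version repeats the same step level by level with probabilities p_i and hop limits k_i.

```python
import random
from collections import deque

# Minimal sketch of the single-level clustering step: volunteer CHs with
# probability p, advertisements forwarded at most k hops, uncovered sensors
# become forced CHs. The graph form and helpers are illustrative assumptions.

def hops_within_k(adj: dict[int, list[int]], src: int, k: int) -> dict[int, int]:
    """BFS from src, returning {node: hop_count} for nodes within k hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def single_level_clustering(adj: dict[int, list[int]], p: float, k: int):
    volunteers = [n for n in adj if random.random() < p]
    best = {}  # node -> (hops to closest CH, CH id)
    for ch in volunteers:
        for node, h in hops_within_k(adj, ch, k).items():
            if node not in volunteers and (node not in best or h < best[node][0]):
                best[node] = (h, ch)
    membership = {n: ch for n, (_, ch) in best.items()}
    forced = [n for n in adj if n not in volunteers and n not in membership]
    return volunteers, forced, membership

# Tiny line topology: 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(single_level_clustering(adj, p=0.3, k=2))
```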
Directed Diffusion [Intanagonwiwat+ 2000]
– Motivated by scaling, robustness and energy-efficiency requirements
– Directed diffusion is data-centric in that all communication is for named data
– Data generated by sensor nodes is named using attribute-value pairs
– All nodes in the network are application-aware
– A node requests data by sending interests for named data
– A sensing task is disseminated throughout the sensor network as an interest for named data, via a sequence of local interactions
– Nodes diffusing the interest set up caches and gradients within the network, which channel the delivery of data
– During data transmission, reinforcement and negative reinforcement are used to converge to an efficient distribution
– Intermediate nodes fuse interests and aggregate, correlate or cache data

Directed Diffusion [Intanagonwiwat+ 2000]
– Assumes that sensor networks are task-specific: the task types are known at the time the sensor network is deployed
– An essential feature of directed diffusion is that interest propagation, data propagation and data aggregation are determined by local interactions
– Focused on the design of dissemination protocols for tasks and events
Naming
– Task descriptions are named (a task description specifies an interest for data matching a list of attribute-value pairs) and are also called interests
– Example task: "Every I ms, for the next T seconds, send me the location of any four-legged animal in subregion R of the sensor field."

    task = four-legged animal        // detect animal location
    interval = 20 ms                 // send back events every 20 ms
    duration = 10 seconds            // ... for the next 10 seconds
    rect = [-100, 100, 200, 400]     // from sensors within rectangle

Directed Diffusion [Intanagonwiwat+ 2000]
Naming
– A sensor detecting an animal may generate the following data (see the matching sketch below):

    task = four-legged animal        // type of animal seen
    instance = horse                 // instance of this type
    location = [150, 200]            // node location
    intensity = 0.5                  // signal amplitude measure
    confidence = 0.85                // confidence in the match
    timestamp = 01:30:45             // event generation time

Interests and Gradients
– An interest is generally injected by the sink node
– For each active task, the sink periodically broadcasts an interest message to each of its neighbors (including the rect and duration attributes)
– The sink periodically refreshes each interest by re-sending the same interest with a monotonically increasing timestamp attribute, for reliability purposes
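A minimal sketch of attribute-value naming and of checking whether a data event satisfies an interest, assuming dictionary-based messages and a simple rectangle test on the location attribute. The field names follow the example above, but the matching rule and the reading of the rect corners are simplifying assumptions, not the paper's full operator-based matching.

```python
# Minimal sketch of attribute-value naming in directed diffusion, assuming
# dict-based messages and a simplified matching rule (equal task type and the
# event location inside the interest's rect). The (x1, x2, y1, y2) reading of
# rect is an assumption; the paper defines a richer operator-based matching.

interest = {
    "task": "four-legged animal",
    "interval_ms": 20,
    "duration_s": 10,
    "rect": (-100, 100, 200, 400),   # assumed (x1, x2, y1, y2)
}

event = {
    "task": "four-legged animal",
    "instance": "horse",
    "location": (150, 200),
    "intensity": 0.5,
    "confidence": 0.85,
    "timestamp": "01:30:45",
}

def matches(interest: dict, event: dict) -> bool:
    """Return True if the event satisfies the interest's task and region."""
    if event.get("task") != interest.get("task"):
        return False
    x1, x2, y1, y2 = interest["rect"]
    x, y = event["location"]
    return x1 <= x <= x2 and y1 <= y <= y2

# Prints False under this rect reading: x = 150 lies outside [-100, 100].
print(matches(interest, event))
```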
Directed Diffusion [Intanagonwiwat+ 2000]
Interests and Gradients
– Every node maintains an interest cache in which each item corresponds to a distinct interest (interests are distinct if they differ in their type or interval attributes, or have disjoint rect attributes)
– Interest entries in the cache do not contain information about the sink
– In some cases, this definition of distinct interests allows interest aggregation
– An interest entry contains several gradient fields, up to one per neighbor
– When a node receives an interest, it determines whether the interest exists in its cache (a cache-handling sketch appears below):
  1. If no matching entry exists, the node creates an interest entry; this entry has a single gradient toward the neighbor from which the interest was received, with the specified data rate (individual neighbors are distinguished by locally unique identifiers)
  2. If the interest entry exists but has no gradient for the sender of the interest, the node adds a gradient with the specified value and updates the entry's timestamp and duration fields
  3. If both the entry and a gradient exist, the node simply updates the entry's timestamp and duration fields

Directed Diffusion [Intanagonwiwat+ 2000]
Interests and Gradients
– When a gradient expires, it is removed from its interest entry
– When all gradients for an interest entry have expired, the interest entry is removed from the cache
– After receiving an interest, a node may re-send the interest to a subset of its neighbors
– To those neighbors, the interest appears to originate from the sending node, even though it may have been generated by a distant sink; this is a local interaction
– In this way, interests diffuse throughout the network, and an interest need not be re-sent to all neighbors if the node has recently sent a matching interest
– In directed diffusion, a gradient specifies a data rate (its value) and a direction, whereas in other sensor networks the values could be used to probabilistically forward data along different paths
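A minimal sketch of the interest-cache update rules above, assuming entries keyed by the (task, interval, rect) attributes and gradients keyed by neighbor id; the class, dictionary layout, and field names are illustrative assumptions rather than structures prescribed by the paper.

```python
import time

# Minimal sketch of the interest-cache update rules, assuming entries keyed by
# (task, interval, rect) and gradients keyed by neighbor id. The structures and
# field names are illustrative assumptions, not prescribed by the paper.

class InterestCache:
    def __init__(self):
        # (task, interval, rect) -> {"gradients": {neighbor: {...}}, "timestamp": ..., "duration": ...}
        self.entries = {}

    def on_interest(self, interest: dict, from_neighbor: int) -> None:
        key = (interest["task"], interest["interval_ms"], interest["rect"])
        entry = self.entries.get(key)
        if entry is None:
            # Rule 1: no matching entry -> create one; it will hold a single
            # gradient toward the neighbor the interest arrived from.
            entry = {"gradients": {}, "timestamp": 0.0, "duration": 0.0}
            self.entries[key] = entry
        # Rules 1 and 2: ensure a gradient exists for this neighbor, with the
        # data rate specified in the interest.
        entry["gradients"].setdefault(from_neighbor, {"rate_ms": interest["interval_ms"]})
        # Rules 2 and 3: refresh the entry's timestamp and duration fields.
        entry["timestamp"] = time.time()
        entry["duration"] = interest["duration_s"]

    def expire_gradients(self, neighbor: int) -> None:
        """Drop a neighbor's gradients; remove entries left with no gradients."""
        for key in list(self.entries):
            self.entries[key]["gradients"].pop(neighbor, None)
            if not self.entries[key]["gradients"]:
                del self.entries[key]

cache = InterestCache()
cache.on_interest({"task": "four-legged animal", "interval_ms": 20,
                   "duration_s": 10, "rect": (-100, 100, 200, 400)}, from_neighbor=3)
print(cache.entries)
```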
Directed Diffusion [Intanagonwiwat+ 2000]
Data propagation
– Data messages are unicast individually to the relevant neighbors
– A node receiving a data message from a neighbor checks whether a matching interest entry exists in its cache, according to the matching rules described above:
  1. If no match exists, the data message is dropped
  2. If a match exists, the node checks the data cache associated with the matching interest entry: if the received data message has a matching data-cache entry, the message is dropped; otherwise, the message is added to the data cache and re-sent to the node's neighbors
– The data cache keeps track of recently seen data items, preventing loops
– By checking the data cache, a node can also determine the data rate of the received events

Directed Diffusion [Intanagonwiwat+ 2000]
Reinforcement
– After the sink starts receiving low-data-rate events, it reinforces one neighbor in order to "draw down" higher-quality (higher-data-rate) events
– This is achieved through data-driven local rules
– To reinforce a neighbor, the sink may re-send the original interest with a higher data rate
– When the newly requested data rate is higher than before, the node must in turn reinforce at least one of its own neighbors
– Reinforcement can be propagated from neighbor to neighbor along a particular path (e.g., when a path delivers events faster than others, the sink attempts to use that path to draw down high-quality data)
– In summary, one path, or part of it, is reinforced based on observed losses, delay variances, and so on
– Certain paths are negatively reinforced because their resource levels are low

[Figure adapted from Intanagonwiwat+ 2000]

Stealth Routing [Turgut+ 2009]
Intruder Tracking Sensor Network
– Sensor networks are used to detect and track intruders in a geographic region, the "interest area"
– Observations are disseminated to the sink by hop-by-hop transmission
– Performance metric: track...