It can also consult any other node in order to catch up.

it can be batched. This does mean we need some way to remove a node permanently, and to add a new node into the system permanently. (That is, we can use a different way to recover after a node failure: one that requires the vote of all of the participants to exclude a node or to let the node back into the system. That vote gives an opportunity to erase any pending state.)

4) From one Paxos to many

The algorithm is highly available as a redundant log of one event -> how do we get to a highly available sequence of events?

Terminology: an instance of Paxos refers to one slot in the sequence of events.

a) If there is no failure (and no late messages), it is easy: the original leader (the one with the lowest ID #) can make a proposal for each instance in turn, send a prepare, and get it accepted.

Higher-performance version: it is OK to run multiple instances of Paxos in parallel. For example, suppose two clients each request a transaction, and the transactions are commutative:
- The leader chooses the order, e.g., request A is instance 57, request B is instance 58.
- Send out the prepare and accept requests for each at the same time.
- B might be chosen "before" A! (e.g., with packet loss) -- it is OK to tell the client that B is complete.
- But assuming A is eventually chosen, then A is put in the sequence at slot 57, and B at 58.

Even higher-performance version: the leader sends a single prepare message covering all possible instances first ("I'm about to send proposal #i"), so only the accept request needs to be sent for each instance.

b) If a non-leader fails, it is easy: the leader keeps on chugging with the remainder. When the...
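The parallel-instance case above can be sketched in code. This is a minimal, hypothetical illustration (the class and method names are my own, not from the notes): a leader assigns consecutive instance numbers to requests, outcomes may be learned out of order, and values are applied to the log only once every earlier slot has also been chosen.

```python
# Hypothetical sketch of multi-instance Paxos slot management.
# Network, prepare/accept messages, and quorums are abstracted away:
# on_chosen() stands in for "a majority of acceptors chose this value".

class MultiPaxosLeader:
    def __init__(self, first_instance=57):
        self.next_instance = first_instance  # next free slot in the sequence
        self.chosen = {}                     # instance number -> chosen value
        self.applied_up_to = first_instance - 1  # highest contiguously applied slot
        self.applied = []                    # values applied, in sequence order

    def propose(self, value):
        """Assign the next slot; prepare/accept for it could run in parallel
        with other instances' messages."""
        instance = self.next_instance
        self.next_instance += 1
        return instance

    def on_chosen(self, instance, value):
        """A value was chosen for `instance`, possibly out of order."""
        self.chosen[instance] = value
        # Apply any newly contiguous prefix of chosen values.
        while self.applied_up_to + 1 in self.chosen:
            self.applied_up_to += 1
            self.applied.append(self.chosen[self.applied_up_to])

leader = MultiPaxosLeader()
a = leader.propose("request A")   # instance 57
b = leader.propose("request B")   # instance 58
leader.on_chosen(b, "request B")  # B chosen "before" A (e.g., packet loss)
assert leader.applied == []       # slot 58 cannot be applied until 57 is chosen
leader.on_chosen(a, "request A")
assert leader.applied == ["request A", "request B"]
```

Note how the client for B could already be told its request is complete once slot 58 is chosen, even though the value is not applied until slot 57 fills in.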
