The core of YML. Three components are included in this layer: the YML register, the YML compiler, and the YML scheduler. The YML register is used to register reusable services and third-party services; once registered, these services can be invoked automatically by the YML scheduler. The YML compiler is composed of a set of transformation stages that turn a pseudocode-based program into an application file. The application file consists of a series of events and operations. Events govern the sequencing of operations; in other words, the events table decides which operations can be executed in parallel and which in sequence. Operations refer to the services registered by the YML register. One important contribution of this paper is that a data flow table is generated in the application file. Through the data flow table, the data dependences between operations can be found (see "data flow table" in Fig. 9.6). These data dependences determine whether different operations execute in parallel or in sequence. Based on them, a prescheduling mechanism can be realized (see column "node" in the "IP address table" of Fig. 9.6). Then, in combination with the "IP address table" (Fig. 9.6), data persistence and data replication mechanisms can be realized. The general idea of this part of the work is illustrated in Fig. 9.6. The YML scheduler is a just-in-time scheduler. It is in charge of allocating executable YML services to appropriate computing resources.

[Fig. 9.5 Core part of YML-PC: end users submit pseudo-code and reusable services to the frontend; the core comprises the YML register, the YML compiler (producing the application file), and the YML scheduler with its trust model and monitor; the backend comprises a data server, data manager, and worker coordinator driving workers through XtremWeb and OmniRPC, plus third-party services.]
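A minimal sketch of how such a data flow table can drive prescheduling (the operation/dependence representation here is illustrative, not the actual YML application-file format): an operation B depends on an operation A when A produces a datum that B consumes, and all operations whose dependences are already satisfied form a "wave" that may execute in parallel.

```python
# Hypothetical sketch (not the actual YML compiler output): derive a
# data-flow table from each operation's inputs/outputs, then group the
# operations into waves that may execute in parallel.
from collections import defaultdict

def data_flow_table(operations):
    """operations: {name: (inputs, outputs)} -> {name: set of names it depends on}."""
    producer = {}
    for name, (_, outs) in operations.items():
        for datum in outs:
            producer[datum] = name
    deps = defaultdict(set)
    for name, (ins, _) in operations.items():
        for datum in ins:
            if datum in producer and producer[datum] != name:
                deps[name].add(producer[datum])
    return deps

def parallel_waves(operations):
    """Topologically group operations; operations in one wave share no dependence."""
    deps = data_flow_table(operations)
    done, waves = set(), []
    remaining = set(operations)
    while remaining:
        wave = {op for op in remaining if deps[op] <= done}
        if not wave:
            raise ValueError("cyclic data dependence")
        waves.append(sorted(wave))
        done |= wave
        remaining -= wave
    return waves

# Toy application file: op1 produces A; op2 and op3 both consume A,
# so they can run in parallel; op4 consumes both of their results.
ops = {
    "op1": ([], ["A"]),
    "op2": (["A"], ["B"]),
    "op3": (["A"], ["C"]),
    "op4": (["B", "C"], ["D"]),
}
print(parallel_waves(ops))  # [['op1'], ['op2', 'op3'], ['op4']]
```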
155 9 A Reference Architecture Based on Workflow for Building Scientific Private Clouds

These computing resources are shielded by the YML back-end layer. The YML scheduler always executes two main operations sequentially. First, it checks for tasks that are ready for execution; this check is performed each time a new event is introduced and leads to tasks being allocated to the YML back-end. Second, it monitors the tasks currently being executed: once tasks have started to execute, the scheduler regularly checks whether they have reached the finished state. The scheduler pushes a new task, together with its input data set and the related YML services, to an underlying computing node whenever that node's state is "completed" or "unexpected error."

To make the process described above a reality, two pieces of work are presented in this paper. The first is to introduce monitoring and a prediction model for volunteer computing resources. It is well known that volatility is the key characteristic of volunteer computing resources, and if no regularity of these resources is known, a program with data dependences between its operations cannot run on a Desktop Grid platform: frequent task migration would prevent the program from ever completing. We call
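The two-operation scheduling loop described above can be sketched as follows; the task and node states and the dict-based bookkeeping are illustrative assumptions, not the actual YML scheduler interface.

```python
# Hypothetical sketch of a just-in-time scheduling step: allocate ready
# tasks to idle nodes, then monitor running tasks and reclaim nodes whose
# task has finished or failed unexpectedly.

READY, RUNNING, FINISHED, ERROR = "ready", "running", "finished", "error"

def schedule_step(tasks, nodes):
    """One iteration over tasks {name: state} and nodes {name: 'idle' or task name}."""
    # Operation 1: push ready tasks (with their input data sets and YML
    # services) to idle computing nodes.
    idle = [n for n, held in nodes.items() if held == "idle"]
    for task, state in list(tasks.items()):
        if state != READY:
            continue
        if not idle:
            break
        nodes[idle.pop()] = task
        tasks[task] = RUNNING
    # Operation 2: monitor running tasks; a node whose task has reached the
    # finished (or unexpected-error) state becomes eligible for a new task.
    for node, held in nodes.items():
        if held != "idle" and tasks.get(held) in (FINISHED, ERROR):
            if tasks[held] == ERROR:
                tasks[held] = READY  # resubmit the failed task elsewhere
            nodes[node] = "idle"

# Toy run: one node, two tasks.
tasks = {"t1": READY, "t2": READY}
nodes = {"n1": "idle"}
schedule_step(tasks, nodes)   # t1 dispatched to n1
tasks["t1"] = FINISHED        # back-end reports completion
schedule_step(tasks, nodes)   # n1 reclaimed
schedule_step(tasks, nodes)   # t2 dispatched to n1
print(nodes["n1"])  # t2
```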