Bruce Silver Associates — BPM and Content Management Advisors
Published in conjunction with The 2006 BPMS Report

UNDERSTANDING AND EVALUATING BPM SUITES
Bruce Silver, Bruce Silver Associates

1. What Is a Business Process Management Suite?

1.1 Organizations, Systems, and Processes

A business process is a coordinated chain of activities intended to produce a business result. What distinguishes a process from a simple activity is that the component steps of a process are performed by multiple individuals and systems, requiring a flow of control and data between them.

The enterprise is a tangled web of processes. Processes are the basis of product planning, marketing campaigns, customer transactions, order fulfillment, customer service, supply chain, billing, and financial reporting. In fact, processes largely govern everyday life for most employees. But the typical enterprise is not organized around processes. Instead it is organized around discrete business functions, like marketing, sales, manufacturing, finance, and customer service, organizations which may be replicated in each division or geographical unit. Each organizational unit has its own business systems and hierarchical chain of command.

[Figure 1. Processes cut across traditional stovepipe boundaries. Companies and enterprise applications are organized around discrete business functions, but mission-critical processes cross those boundaries.]

Processes that run entirely within a single unit or business system are frequently already efficient and automated.
A decade of investment in so-called enterprise applications like ERP and CRM has brought tremendous advances in task automation, data integration, and control within individual functions or systems. But the price of that functional automation has been the creation of new stovepipes that increasingly hinder agility and management of the end-to-end processes that extend across organizational and system boundaries.

© Bruce Silver Associates 2005 · 500 Bear Valley Road, Aptos CA 95003 USA · Contact: Bruce Silver, Principal, +1 831 685-8803

It turns out that cross-functional end-to-end processes are the ones that matter most to the bottom line: to overall efficiency, customer satisfaction, compliance, and responsiveness to changing demands. Business process management (BPM), as a management discipline, began twenty years ago as a way to think about the business – how to plan it, understand it, and measure it – in terms of key cross-functional processes rather than through the traditional lens of functional organizations and systems. Dr. Geary Rummler, one of the founding gurus of BPM, always emphasizes "managing the white space in the organization," i.e., the handoffs between the stovepipes in the enterprise org chart.

There are some who believe that a management discipline and a new way of analyzing business is all BPM needs to be. But that thinking aims far too low. The real payback from business process modeling and analysis comes from the ability to execute, measure, and optimize the model using a Business Process Management Suite (BPMS). BPMS actively breaks down the barriers between the stovepipes to bring end-to-end efficiency, agility, compliance, and visibility to cross-functional business processes.

1.2 The Process Management Challenge

BPMS does not simply analyze and reconfigure the white space in the organization, but actively manages the processes that span it.
The processes that really determine bottom-line success and customer satisfaction, like quote-to-cash, loan origination, customer dispute resolution, or SOX 404 compliance, span the boundaries of stovepiped organizational structures and business systems. In fact, the rise of stovepiped enterprise applications has created several problems:

• Inefficiency. Exceptions are handled manually, resulting in processes that are inefficient and take too long to complete.
• Rigidity. Critical enterprise systems are hard to integrate and even harder to change.
• Lack of compliance and control. The same process is done differently in different departments and sites.
• Poor visibility. Business performance can't be measured at the end-to-end process level.
• Inertia. The process rules keep changing, while IT resources are already stretched to the breaking point.

1.3 BPMS's Four Promises to the Business

BPMS attacks these challenges head-on. It does not replace your enterprise applications but coordinates their actions to make end-to-end processes more efficient, more flexible and agile, and more standardized and compliant. BPMS automates, integrates, and optimizes business processes. It does that by defining process models or templates that define the flow of activities, some human and some automated, and then executing that flow and continuously monitoring its performance.

It's efficient because it automates manual tasks and handoffs, and makes sure the most important tasks are done first and on time. It's agile because executable process models are not built with complex code but composed graphically like a flowchart, so they can be built quickly and easily changed. A key element of BPMS is integration middleware that allows the process to control discrete activities of external applications without writing code.
It's compliant because process logic is based on rules, reflecting policies and best practices, and process components based on those rules can be shared and reused across the organization, so you're always following the rules and standards in every office of your enterprise. And it makes processes visible end-to-end by aggregating data from disparate business systems along with human workflow statistics, and displaying key performance indicators in management and administrative dashboards. Performance management allows process bottlenecks to be relieved in real time, while maintaining a secure audit trail to ensure provable compliance.

The automation, integration, and visibility BPMS brings to cross-functional processes translate into tangible return on investment. BPMS makes processes run faster and employees more productive. Besides automating manual procedures, it streamlines handoffs and prioritizes tasks, and it provides deadlines, notifications, and user-defined escalation actions that put the focus on customer value rather than simply first-in, first-out.

BPMS also brings standardization and control. You can enforce business rules, and prove to an auditor that you did. You can replicate best practices globally across all offices in the enterprise. You can track service level agreements (SLAs) and compliance with regulations, even when the process spans departments or outsourced functions. Exceptions are handled explicitly within the process model, and all actions are logged automatically for performance management and auditability.

Third, BPMS lowers the cost of developing and maintaining business solutions. Because executable processes can be designed and maintained with little programming, and process model components can be reused easily in new variants, BPMS reduces the demand on critical IT resources as well as total cost of operations.
Beyond those hard benefits, BPMS adds strategic value as well.

• Agility: the ability to bring new products and services to market quickly and to respond rapidly to changing demands. While enterprise applications are hard to change, the process model that coordinates their actions is easy to change.
• Business integration. Through its standards-based integration middleware, BPMS allows you to extend the scope of process automation and management across the IT barriers that historically have separated departments, front and back office, and your customers and suppliers – despite the diversity of system platforms, API languages, and data models they represent.
• Global visibility. BPMS makes performance visible at the process level, tracking process data and aggregating it in tables of key performance indicators and graphical dashboards at various levels for process owners, system administrators, and business executives.

1.4 Using the 2006 BPMS Report

The past few years have seen great strides in BPMS development. Today there are many BPMS vendors out there, all using the same basic elevator pitch but actually selling very different products based on different visions of what BPMS should do and how it should do it. While BPMS offerings vary somewhat in architecture and technical details, for the prospective buyer the key difference among them is in the types of process they are primarily designed for. This isn't something BPMS vendors like to talk about – why limit the pool of potential buyers?

BPMS vendors all also promise a degree of "process without programming" – the idea that much of the agility bottleneck in business can be traced to scarce programming resources, which can be eliminated with BPMS's point-click development tools and component reuse.
But while one vendor's point-click tool is aimed at business analysts, another's is really meant for Java programmers. Again, you'd never know it from the brochures and webinars.

Understanding the real issues in BPMS product selection requires boring through the marketing veneer. That's what the 2006 BPMS Report does. We look at several leading BPMS offerings, hold them up to a common framework for analysis, and in the end show you which ones are best suited for your particular process types, which we call use cases, and how each matches up with the modeling and design skills available in your organization. To support that analysis, this chapter of the report explains what BPMS is, its core technology building blocks, the differing requirements of the major process use cases, and the common format for product evaluation. Each of the other chapters describes a specific BPMS offering in that common format, and analyzes its strengths, its applicability to specific process use cases, and the level of technical skill required to build and maintain solutions.

2. BPMS Technology Overview

2.1 Process Modeling

Some in the BPM community like to distinguish between process modeling and process design, the former meaning an abstract description of the process used for documentation and analysis, and the latter meaning an executable process management application programmed by IT. While this accurately reflects the traditional software development lifecycle, advances in BPMS are blurring the lines. The key idea of the 2003 book Business Process Management: The Third Wave by Smith and Fingar is the notion of an executable process model. While executable models are built graphically without programming, like a flowchart, they can be deployed to a process engine to actually automate, integrate, and monitor the end-to-end process.
Executable models are central to BPMS, and for that reason it is more helpful to think of two types of modeling and their associated tools: analytical modeling and executable modeling.

2.1.1 Analytical Modeling

Analytical process modeling has a history as long as BPM itself. In fact, many vendors of analytical modeling tools tend to support the original notion of BPM as a strictly analytical discipline separate from application design. Analytical models are abstract in that activities are not bound to a particular implementation. They provide a number of functions valuable both to traditional BPM and modern BPMS:

• Capture of existing process flow in a structured diagrammatic notation often associated with a particular methodology, such as swimlanes to indicate lines of visibility between collaborating organizations or systems.
• Diagramming new or modified processes using the same notation, which associates resources, processing time, cost, and inputs and outputs with each process activity.
• Simulation of process performance based on various scenarios of instance volume, resource allocation, branching ratios at decision points in the flow, and other process parameters.
• Analysis and reporting of expected average cycle time (service level), throughput, and cost for various scenarios, and optimization of the parameters.
• Documentation of the process in some exportable format.

Analytical modeling may be provided externally to the BPMS in a dedicated tool (typically part of an enterprise architecture tool suite), or by the BPMS itself. The trend today is to bring it inside the BPM suite, and some analysts consider it an essential BPMS component. In 2004, BPMI.org (now part of OMG) published a diagramming standard for analytical modeling called the Business Process Modeling Notation (BPMN), and many analytical process modeling tools now support it, at least as an option.
2.1.2 Executable Modeling

Executable process modeling is a part of every BPMS. It means the design of an executable application through graphical composition of process activities. Like analytical models, executable process models use a structured graphical notation, but instead of being bound to an analytical methodology or standard graphical notation, the variety of activity shapes in executable process diagrams usually depends on details of the particular BPMS. As a result, the look and feel of executable process diagrams varies considerably from one BPMS to the next.

While activities in analytical models have properties, like expected duration and resource cost, that enable analysis through simulation, activity properties in executable models provide the implementation detail needed for process execution. For interactive steps, these include things like:

• Roles, mapped to specific users and groups at runtime.
• Task user interfaces, specified as forms or in some cases full web applications, including screenflows, built in the BPMS design environment.
• Deadlines, with defined actions upon expiration.

For automated steps (typically integration actions), activity properties include things like:

• The executable component to run, which could be a program, script, object method, or integration adapter.
• Process data sent to and returned from the executable component.
• Exception handling and transaction management properties, enabling rollback, compensation, or other defined actions if the activity fails to execute normally.

Underneath the graphical notation of the design tool, BPMS maintains an XML-based process description. Although two basic standards for these process description languages exist (XPDL from the Workflow Management Coalition and BPEL from OASIS), each BPMS implements capabilities that go beyond the standards, so models are rarely 100% portable between BPMS platforms.
2.2 Process Automation

2.2.1 Process Engine

A key feature of executable process models is their ability to be transferred, or deployed, to a runtime BPMS component called the process engine. When triggered by conditions specified by the model, the engine creates a new instance of the process and executes it automatically. For each instance, the process engine automates and monitors process flow. When routed to an interactive step, the instance becomes a work item accessible from the assigned participant's worklist.

For automated steps, the process engine invokes an executable object as specified by the model, typically an integration component generated in the BPMS design tool. Integration components typically receive a set of data elements from the process engine and use them to invoke an API or object method on an external business system, such as a DBMS or enterprise application, typically via an integration adapter. Data returned from the invocation is passed back to the process by the integration component. Conversion of process data to the formats required by the integration component is performed by a data transformation engine within the BPMS runtime. Definition of transformations (also called data mappings) is typically done graphically in the BPMS design tool.

2.2.2 Flow Control

Not all instances of a process follow the same path of activities. Routing rules based on logical expressions of process data determine the flow at branch points. In some BPMS (and associated process modeling languages), routing decisions are evaluated in a separate activity, called a Decision or Switch. In others, routing rules are executed by default by each step in the process. The flow may also split into parallel segments, giving a single process multiple threads of control.
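The mechanics of a parallel split can be caricatured in a few lines of Python. This is a toy illustration, not any vendor's engine; the order process, branch behaviors, and merge policy are invented for the example, and it shows only one of the split styles discussed next (each segment working on its own copy of the instance):

```python
from copy import deepcopy

def parallel_split(instance, branches):
    """Run each parallel segment on its own copy of the process
    instance (one style of split), then merge the copies at the join."""
    copies = []
    for branch in branches:
        segment_copy = deepcopy(instance)  # each thread of control gets a copy
        branch(segment_copy)
        copies.append(segment_copy)
    merged = dict(instance)
    for segment_copy in copies:            # join: fold changed fields back in
        for key, value in segment_copy.items():
            if instance.get(key) != value:
                merged[key] = value
    return merged

# Hypothetical order process: credit check and inventory reservation in parallel.
order = {"order_id": 42, "credit_ok": None, "reserved": None}
joined = parallel_split(
    order,
    [lambda inst: inst.update(credit_ok=True),
     lambda inst: inst.update(reserved=True)],
)
```

Because each branch mutates only its own copy, the branches cannot interfere with each other; the price is that conflicting updates to the same field must be reconciled at the join.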
In some BPMS, each parallel segment operates on a separate copy of the instance, and the copies are merged at a subsequent join step. In others, all parallel segments operate on the original instance, which necessitates some form of concurrency control. In addition to parallel splits, the process may create an instance of an independent subprocess – "independent" meaning the calling process continues its own thread of execution after the call. The subprocess has its own lifetime independent of the parent, and possibly synchronizes state or data with it through a return call or message. While parallel threads of control are a feature of all BPMS, each offering implements them differently, with attendant advantages and disadvantages for particular process use cases.

BPMS offerings also differ in the constraints they place on allowed flow topologies. Some allow flow to loop back to a previous activity; most do not. Some require all parallel segments from a split to recombine at the same join; others do not. Some even allow steps to be inserted dynamically into the flow at runtime for a particular process instance; most do not. Often these constraints are built into the process modeling languages, and can be enforced by model validation rules.

2.2.3 Events and Exceptions

Another area where BPMS capabilities vary widely is in the handling of events. Events are signals received by the process engine either from an external system, typically as a message, or from another part of the BPMS, such as a rule engine. Typically a process can respond to a particular type of event in one of two ways: it can launch a new instance of a process, or it can complete an activity defined to wait for the event or message. In workflow-oriented BPMS, the latter may be called a WaitForEvent activity; in service orchestration languages like BPEL, it is called a Receive activity.
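The two event responses just described can be sketched in Python. The engine class, event names, and instance representation are invented for the illustration; real engines also handle correlation, persistence, and security, all omitted here:

```python
class ToyEngine:
    """Toy sketch of event handling: an event either launches a new
    process instance or completes an activity that is waiting for it
    (a WaitForEvent step in workflow terms, Receive in BPEL)."""

    def __init__(self, start_event):
        self.start_event = start_event  # event type that triggers instantiation
        self.instances = []             # each instance: {"id", "waiting_for"}

    def on_event(self, event_type):
        # Response 1: launch a new instance of the process.
        if event_type == self.start_event:
            instance = {"id": len(self.instances) + 1, "waiting_for": None}
            self.instances.append(instance)
            return instance
        # Response 2: complete an activity defined to wait for this event.
        for instance in self.instances:
            if instance["waiting_for"] == event_type:
                instance["waiting_for"] = None
                return instance
        return None  # no subscriber; engines differ on what happens here

engine = ToyEngine(start_event="OrderReceived")
inst = engine.on_event("OrderReceived")   # launches instance 1
inst["waiting_for"] = "CreditChecked"     # instance pauses at a wait step
engine.on_event("CreditChecked")          # event completes the waiting activity
```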
The list of events a BPMS can listen for and respond to (without custom programming) varies widely from one offering to the next, again reflecting each vendor's emphasis on particular process use cases.

Unlike traditional EAI processes, which are data exchanges typically lasting milliseconds, processes in BPMS are inherently "long-running," meaning the process engine must persist their state in a database. The state of a process means not only where the instance is in the flow, but the current values of all process data elements. Because there may be parallel threads of control, process state actually means the aggregated state of all of them. While all BPMS can do this in the case of a parallel split, not all engines automatically "know" the combined state of all their independent child processes; determining this may require custom programming.

Finally, the process engine provides exception handling and transaction management. Exceptions come in two types: system exceptions and business exceptions. A system exception is typically a fault raised by an automated step in the process. For example, the data passed to the integration component may not match the required structure or schema, or the external system invoked may be down, resulting in a timeout. A business exception is an event issued by a participating user or system, such as a change or cancellation of an order in process. Upon detection of a system exception, the process engine suspends the normal flow of control and triggers a special exception handler flow defined in the modeling tool and attached to an individual activity, a group of activities, or the process as a whole.
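The suspend-and-divert behavior of a scope-level exception handler is loosely analogous to structured exception handling in an ordinary programming language. The sketch below makes that analogy concrete; the activity names and the simulated ERP timeout are invented, and the handler here simply records what happened rather than running a real compensating flow:

```python
def run_scope(activities, exception_handler):
    """Run a block of activities in order; if one raises a system
    exception, suspend the normal flow and divert to the handler
    flow attached to the scope (a rough analogy, not real BPEL)."""
    completed = []
    try:
        for name, action in activities:
            action()
            completed.append(name)
    except Exception as fault:
        exception_handler(fault, completed)  # handler sees what already ran
        return "exception_flow"
    return "normal_flow"

def post_to_erp():
    raise TimeoutError("ERP system down")    # simulated system exception

handler_log = []
outcome = run_scope(
    [("validate_order", lambda: None), ("post_to_ERP", post_to_erp)],
    exception_handler=lambda fault, done: handler_log.append(
        (type(fault).__name__, list(done))),
)
```

Note that the handler receives the list of already-completed activities: that is exactly the information a compensation mechanism needs, which is the subject of the next paragraph.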
The process model also allows a block of activities to be grouped as a single transaction scope, meaning that if an exception occurs within that block, the effects of any completed activity within the scope must be "undone," a feature called compensation. In most (not all) BPMS process models, a compensating activity can be defined for each normal activity in a transaction scope for this purpose. Unlike classical "ACID" transactions using protocols like two-phase commit, compensation works with long-running transaction scopes. Some process engines, particularly those that run in a J2EE container, also support ACID transaction recovery (rollback, retry) for those activities where it is applicable.

2.3 Process Architecture and Standards

While vendors generally agree about BPMS's core functional capabilities, key promises to the business, and even the basic component block diagram, these common elements actually mask important architectural differences between BPMS offerings deriving from a workflow orientation and those coming from an integration or infrastructure perspective. While rarely mentioned by industry analysts, these differences underlie the current debate over BPM standards, and they will definitely impact the future course of BPMS evolution. For that reason, users need to understand them and take them into consideration in the BPMS buying decision.

2.3.1 Workflow Architecture

The essential difference comes down to what a process activity is, and what the process engine conceptually does. On one side is workflow architecture, based generally on the reference model of the Workflow Management Coalition (WfMC) and its associated process definition language, XPDL. While the workflow diagram represents the process as a flow of activities, in this architecture the process engine actually routes process instances to a sequence of queues.
Instances are selected and accessed from queues by a client executable program that implements the assigned process activity on the instance. The client retrieves the instance, executes the process activity, and returns a modified instance back to the process engine. This architecture is well suited to human interaction: queues can be easily shared by workgroups, and they assume activities are long-running. For interactive activities the client program is typically a browser-based application that can retrieve the work item from the process engine and display or key-enter work item data using electronic forms.

If the activity does not require human interaction but only integration with an external business system, workflow architecture implements it as an "automated step." In an automated step, the client program retrieves the work item, transforms selected data elements into the format required by the external application, executes the integration action via an adapter or other EAI middleware, transforms the returned data back to the work item data format, and finally returns the work item to the process engine. To the process engine, however, automated steps and interactive steps look basically the same.

2.3.2 Service Orchestration Architecture

On the other side is service orchestration, which is based on service-oriented architecture (SOA) and its associated process definition language from OASIS, called BPEL. In BPEL, process activities represent service invocations, i.e., message-based requests for action performed by an external system, usually returning a response message. In concrete terms, the process engine does not route work items to queues, but exchanges messages with service endpoints, typically URLs. This architecture emphasizes integration more than human interaction.
Data transformations and specific operations on external systems are specified explicitly in the BPEL model and performed by the process engine directly. On the other hand, human users are not easily addressed as service endpoints. To deal with human interaction, BPMS vendors following the service orchestration model typically provide a task manager service that the process engine can invoke to create interactive tasks, and which notifies the process engine when a task is complete. Queues, roles, and other workflow constructs can be defined and implemented by the task manager service, safely removed from the BPEL process engine.

In other words, classical workflow architecture defines human interaction details in the process model and executes them on the process engine, but leaves integration details outside the model and opaque to the engine. Service orchestration architecture defines integration details in the process model and executes them on the process engine, but leaves human interaction outside and opaque to the engine. Of course, complete BPMS offerings based on either architecture must provide both human interaction and application integration, and bring their specification together somehow in the overall design environment.

2.3.3 Modeling Language Standards

While SOA and BPEL have grabbed all the media and analyst attention, far more BPMS products today are based on the workflow architecture. One reason is that the center of gravity in BPM remains streamlining human work, not agile business integration. The XPDL-based products tend to have richer functionality there, and in newer versions have added support for agile integration middleware, including web services and integration adapters. While workflow-centric offerings like Savvion and Fuego keep the focus on the human element, they accommodate service orchestration by allowing BPEL subprocesses to be embedded in an end-to-end workflow.
Another reason is that service orchestration is really an infrastructure game where the major software companies have an inherent edge. So it's no surprise that BPMS on that side of the ledger comes from names like IBM, Microsoft, Oracle, and SAP. As BPMS moves from a point solution to true enterprise infrastructure, the service orchestration side will gain increasing advantage. Undoubtedly that's why the analysts assume BPEL will "win." But it's not really clear that BPMS is what's driving BPEL evolution in the standards committees. Service orchestration is not just about BPMS. It's also a next-generation agile programming paradigm, often called composite applications, and that – not BPM – appears to be closer to the heart of the BPEL technical committee. From day one, BPM has sought to make process design directly accessible to business analysts, but today's BPEL process models are for the most part undecipherable to non-programmers. For BPEL to win the hearts and minds of the BPM community, that has to change.

2.4 Business Rules

While many BPMS vendors have in the past promoted their own process models as examples of "business rules," process engines and true business rule engines are actually very different. Today, however, BPMS and business rule engine (BRE) providers have come to realize that the two technologies can work together effectively to make processes more effective and agile. It is important to understand how business rules differ from traditional process rules to see how new BPMS offerings are leveraging the benefits of BRE integration.

The rule logic in a process engine typically directs the flow of work at a branch point in the diagram: if Amount > $10,000, route to Manager Approval. Typically the rule is expressed in the process model as a Boolean (true/false) condition enabling a particular branch or path in the flow.
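The Manager Approval rule above can be expressed in a few lines of Python, just to make the branch-point mechanics concrete. The `route` helper and the path names are invented for the example; in a real BPMS the condition lives in the process model and is evaluated by the engine:

```python
def route(instance, rules, default):
    """Evaluate routing rules at a branch point; the first Boolean
    condition that evaluates true selects the outgoing path."""
    for condition, path in rules:
        if condition(instance):
            return path
    return default

# The Manager Approval rule from the text, expressed as a Boolean condition.
branch_rules = [(lambda inst: inst["Amount"] > 10_000, "Manager Approval")]

big_order = route({"Amount": 25_000}, branch_rules, default="Auto Approve")
small_order = route({"Amount": 500}, branch_rules, default="Auto Approve")
```

Note that the threshold 10,000 is hard-wired into the condition at this one branch point, which is exactly the limitation the comparison with business rule engines turns on.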
Process rules of this sort can and do enforce policies, procedures, and best practices, but they differ from true business rules in a number of important ways. Since they are defined as Boolean expressions of workflow variables, process rules are by necessity simple. True business rules, on the other hand, can be extremely complex, and can chain to other rules. They can, for example, compute a "score" based on dozens of criteria using data stored in multiple business systems, and then apply a threshold to that score to pay a claim, approve credit, or determine that a transaction is compliant with regulations.

Process rules are bound to a specific point of use in the process model, typically a branch point in the flow diagram. If the same rule is to be used globally, it must be replicated at each point of use. That means that if the threshold for Manager Approval should change to $15,000, the change must be implemented separately in every branch point in every process template where the rule applies. Business rules, in contrast, are defined globally and stored centrally. If the Manager Approval threshold were defined using a BRE, the change would have to be made only once, and it would be immediately applicable in any step in any business process that referenced it.

Compared to business rules, process rules are also limited in their effect: they can only enable a branch in the flow diagram. But when integrated with a BPMS, business rules can do more: start a new process, release a pending task, update process data, assign a task to a particular user, or send a notification.

Another advantage of BRE technology is that rule parameters can be modified by business people, not IT. While past workflow offerings geared their tools to non-technical designers, today's more powerful BPMS technology – based on J2EE, .NET, and web services – has made the process design environment less business-friendly.
So when a process rule changes, even though no programming is required, the change must be implemented by IT. That tempers the agility benefit. Most business rule engines, in contrast, allow rule designers to expose rule parameters that are likely to change, so that business people can change them through a simple web interface, on the fly.

BRE technology improves process agility in an even more significant way. When a rule change is deployed to the BRE, it takes effect immediately in all processes invoking the rule. The process model on the BPMS does not need to change at all. In contrast, changing a process rule means creating a new version of the process model and redeploying it to the process engine. Usually the new rule takes effect only for new instances created with the new model version; in-flight instances continue to use the old rule.

For all of these reasons, a BRE is increasingly seen as a key component of a BPM solution, and BPMS vendors are beginning to compete on the basis of their BRE integration. Some vendors provide a BRE natively within the BPMS, while others elect to integrate with a separate third-party rule engine. When a BPMS is integrated with a separate "best of breed" BRE, the tricky part is getting the BRE design tool to understand the BPMS's process data model so it can be referenced in business rules. While any BPMS can usually "talk" to any BRE, a special integration module between BRE vendor X and BPMS vendor Y may be required to use process data within business rule definitions. Business rule management native to the BPMS environment does not have this problem.

BREs are adding a powerful new dimension to BPM by providing centralized global management of rules, vastly more complex rule logic, increased agility, and more business-oriented rule languages. But, as always, it's not a simple checkoff item.
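The centralized-change benefit can be sketched in miniature. The repository class, rule name, and parameter names below are all invented for the illustration; a real BRE adds rule chaining, versioning, and a business-friendly rule language on top of this basic idea:

```python
class RuleRepository:
    """Toy sketch of centralized rule management: each rule is defined
    once, and a parameter change takes effect immediately for every
    process that references the rule."""

    def __init__(self):
        self.params = {}
        self.rules = {}

    def define(self, name, fn, **params):
        self.rules[name] = fn
        self.params.update(params)

    def set_param(self, key, value):
        # Stands in for the business user's on-the-fly web interface.
        self.params[key] = value

    def evaluate(self, name, **facts):
        return self.rules[name](self.params, facts)

repo = RuleRepository()
repo.define("needs_manager_approval",
            lambda p, f: f["amount"] > p["approval_threshold"],
            approval_threshold=10_000)

before = repo.evaluate("needs_manager_approval", amount=12_000)  # old threshold
repo.set_param("approval_threshold", 15_000)   # one change, made centrally
after = repo.evaluate("needs_manager_approval", amount=12_000)   # new threshold
```

Contrast this with the process-rule case, where the same threshold change would have to be edited into every branch point that copied the condition, followed by redeployment of each affected process model.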
Buyers need to understand how the integration is performed, how the BRE understands and uses process data, and what kinds of actions can be triggered by a business rule. These dimensions vary widely from one BPMS to the next.

2.5 Application Integration

Application integration middleware allows process activities to directly execute APIs, object methods, or web service operations on external applications and information systems with little or no programming. The essential components of that middleware include a communications bus, integration adapters, and data transformation.

The communications bus supports reliable transport of requests and responses between the process engine and the target business systems. While some integration actions can be performed synchronously as remote procedure calls over the network, the general case requires a communications bus based on message queuing, such as IBM WebSphere MQ or Tibco Rendezvous. On J2EE application servers, standard capabilities such as Java Message Service (JMS) now provide a common API to enterprise messaging from multiple bus providers. As integration becomes increasingly service-oriented, a new generation of message queuing infrastructure known as the enterprise service bus (ESB) is becoming popular.

Figure 2. Integration adapters let designers introspect enterprise applications and generate integration components that BPMS can invoke as "services." Source: iWay

Integration adapters are software components that expose the native APIs and object methods of enterprise applications and information systems as service requests and responses for access by the process engine. In the BPMS design tool, the adapters allow users to introspect the available APIs and methods of the target system (Figure 2).
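The introspection step just described can be mimicked in miniature: enumerate the callable operations of a target system and build a catalog the designer can browse. The `TargetSystem` class and its methods are invented stand-ins for a real application's API, not any adapter product's interface:

```python
# Hypothetical sketch of an adapter's "introspection" step: discover the
# public operations of a target system and record their parameter lists,
# much as an adapter builds a catalog of invokable services.
import inspect

class TargetSystem:
    """Stand-in for an enterprise application exposed through an adapter."""
    def check_credit(self, customer_id: str, amount: float) -> bool:
        return amount < 50000

    def get_order_status(self, order_id: str) -> str:
        return "SHIPPED"

def introspect(system):
    """Return {operation_name: [parameter names]} for public methods."""
    catalog = {}
    for name, method in inspect.getmembers(system, predicate=inspect.ismethod):
        if not name.startswith("_"):
            catalog[name] = list(inspect.signature(method).parameters)
    return catalog

catalog = introspect(TargetSystem())
```

In a real BPMS the discovered operations would then be wrapped as integration components with request/response schemas rather than a plain dictionary.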
Selected functions are then turned into integration components that can be attached to process activities and invoked at runtime, either synchronously or asynchronously.

Data transformations map process data into the elements and formats required by the integration component. Each component has a request format – typically an XML schema, but in some cases a list of Java parameters. The BPMS design environment provides a graphical tool to build the mappings, typically based on XSLT or XQuery, and a transformation engine to perform them at runtime.

Figure 3. Service-oriented integration middleware enables integration with minimal coding, improving agility. [The figure illustrates four design steps: (1) diagram the flow as an orchestration of services, with process activities sending and receiving messages; (2) "introspect" resource methods and events using an integration adapter provided by the BPMS, select the method to be invoked, and let the BPMS create an integration component/service whose input and output parameters are defined as request/response schemas – point-click design, not code; (3) define data transformations between process variables and the request/response schemas (this requires extensions in BPEL); (4) make the model executable by binding to an adapter, protocol, and endpoint, mapping process variables to the request and response schemas.]

Integration middleware allows the process to coordinate the functions of multiple external systems, even though they may be based on different platforms, languages, and data models, and to do it with little code (Figure 3). BPMS provides standard activity types that can execute an integration component (i.e., issue it a service request) and receive the response. First, the process performs a data transformation to populate the request variable.
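The transformation step above – mapping process variables into the request format an integration component expects – can be sketched as follows. A real BPMS would generate XSLT or XQuery for this; here plain Python with the standard `ElementTree` library stands in, and the element and field names are illustrative only:

```python
# Sketch: map process variables into the XML request schema assumed by a
# hypothetical credit-check integration component.
import xml.etree.ElementTree as ET

process_variables = {"customer": "ACME-042", "order_total": 1280.50}

def build_credit_check_request(variables):
    root = ET.Element("CreditCheckRequest")
    ET.SubElement(root, "CustomerId").text = variables["customer"]
    # The (invented) target schema wants the amount as a two-decimal string.
    ET.SubElement(root, "Amount").text = f"{variables['order_total']:.2f}"
    return ET.tostring(root, encoding="unicode")

request_xml = build_credit_check_request(process_variables)
```

The point of the graphical mapping tools is that this kind of field-by-field plumbing is configured, not hand-coded.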
The integration component, via the adapter and communications bus, translates the request into the native API of the target system, executes it, and returns the response to the process.

2.6 Performance Management

While the process is running, the BPMS logs snapshots of process data and timestamps of step completions for use in performance management. Typically performance data is stored separately from the database of live process data used by the engine to execute the model. Some BPMS offerings provide special OLAP schemas for their performance management database, allowing key metrics to be sliced and diced in a variety of ways. Others provide elaborate wizards through which aggregation and graphical display of key performance indicators can be defined without programming.

Performance management supports both administrative monitoring of tasks and queues for bottlenecks and SLA conformance, and management reporting of high-level business metrics, such as the percentage of orders completed on time or claims processed in one pass. Most BPMS offerings support design of graphical dashboards that let management tell at a glance how the process is performing.

2.7 The Cycle of Continuous Process Improvement

Putting all these components together, BPMS supports a cycle of continuous process improvement. Figure 4 illustrates this cycle, the key handoffs between business and IT, and the typical separation of function between the BPMS and supporting technology. The first step is analytical process modeling and simulation, using tools aimed at the business analyst. Traditionally this function has been provided by enterprise architecture tools, but today it is becoming a feature built directly into the BPMS design environment. If analytical modeling is performed in an external tool, the model must be exported to the BPMS, typically as XPDL or BPEL.
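A management metric of the kind described in section 2.6 – the percentage of orders completed on time – is just an aggregation over the timestamps the engine logs. A minimal sketch, with invented sample log records and an assumed 48-hour SLA:

```python
# Compute an on-time-completion KPI from per-instance start/end timestamps,
# as a performance-management database would. Data below is fabricated.
from datetime import datetime, timedelta

sla = timedelta(hours=48)
completed_instances = [
    {"id": 1, "start": datetime(2006, 3, 1, 9, 0), "end": datetime(2006, 3, 2, 15, 0)},
    {"id": 2, "start": datetime(2006, 3, 1, 10, 0), "end": datetime(2006, 3, 4, 10, 0)},
    {"id": 3, "start": datetime(2006, 3, 2, 8, 0), "end": datetime(2006, 3, 3, 8, 0)},
]

def percent_on_time(instances, sla):
    on_time = sum(1 for i in instances if i["end"] - i["start"] <= sla)
    return 100.0 * on_time / len(instances)

kpi = percent_on_time(completed_instances, sla)
```

The OLAP schemas mentioned above exist so that the same underlying records can be sliced by process type, team, time period, and so on without writing code like this by hand.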
Within the enterprise architecture toolset, system architects also model the data structures required to build the process application, typically using UML or a similar IT modeling standard. Based on these IT models, developers build any custom functions required and expose them as components accessible to the process designer. These may include integration components created using adapters and introspection.

Figure 4. BPMS encourages business-IT collaboration in process design and maintenance, enabling the cycle of continuous process improvement. [The figure shows analytical modeling (BPMN) and KPI modeling feeding the BPMS modeling tool and performance-management dashboards; the BPMS design tool covering process design (implementation detail; data and transformation; integration; exception handling; UI design) and business rule design; an EA tool for IT component modeling (UML) and an IDE for component design; and the process engine connecting through a business framework, rules, and integration adapters to ERP, CRM, SQL, and legacy systems.]

In the BPMS design environment there is typically a handoff between business and IT, but it is more "collaborative" than traditional software development because of the common process design tool. Many BPMS design tools allow flow composition, task role assignment, and business rule definition to be performed by business analysts. Process steps are simply dragged onto the diagram from a palette of reusable process components, some of which are provided out-of-the-box by the BPMS and others created by IT.

The more technical part of process design is typically done by IT. Even though true programming is rarely necessary, knowledge of data structures, XML schemas and transformations, events and exceptions, scripting, and web user interface design may be required. The key point is that the design handoff between business and IT, if required, remains within the BPMS environment, which greatly enhances collaboration.
The completed executable model is then validated, tested, and deployed to the process engine. At runtime, the engine executes the steps, directing interactive tasks to human participants and invoking automated steps through the integration middleware. Performance data is saved in a database that can be monitored in real time and mined using advanced analytics. The results of this analysis can then be fed back into the analytical models to further improve the process.

3. Business Process Styles (Use Cases)

While the principles of BPMS are applicable to virtually any business process, different types of processes emphasize different sets of capabilities. For example, a complex collaborative process such as a new product launch would not be expected to emphasize the same BPMS features as a repetitive high-volume process for paying health insurance claims. While BPMS vendors like to say their tools are suitable for any kind of process, in reality each offering is optimized for a particular set of process styles or use cases. Finding a BPMS that matches your specific use case is a key part of the vendor selection process.

Figure 5. BPMS capability requirements depend on process use case.

Use Case | Example | Key Artifact | Instance Volume | Human Activity | Application Integration | Business Rules | Content Management | Events and Exceptions
Basic workflow | Benefits enrollment | E-forms | M | M | L | L | L | L
Content lifecycle | Documentation, SOX | Content methods and events | M | M | L | M | H | L
Complex collaborative | New product launch | Ad hoc flow, team room | L | H | L | L | H | L
Case management | Mortgage origination, underwriting | Electronic folder, runtime-added flows | M | H | M | H | H | M
Production workflow | Health insurance claims | Shared queues | H | H | M | H | H | L
Transactional/straight-through | Order processing, trade settlement | Integration infrastructure, automated exception handling | H | L | H | H | L | H

Figure 5 summarizes the characteristics of the six use cases considered in the 2006 BPMS Report.
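The Figure 5 matrix lends itself to a small data structure that can be used to rank use cases against a profile of your own process. The L/M/H strings below are as read from the figure, and the distance-based scoring scheme is purely an illustration of how such a matrix might assist selection:

```python
# Rank the six use cases by closeness to a process profile given as six
# L/M/H ratings (instance volume, human activity, app integration,
# business rules, content management, events/exceptions).
LEVEL = {"L": 0, "M": 1, "H": 2}

use_cases = {
    "Basic workflow":                 "MMLLLL",
    "Content lifecycle":              "MMLMHL",
    "Complex collaborative":          "LHLLHL",
    "Case management":                "MHMHHM",
    "Production workflow":            "HHMHHL",
    "Transactional/straight-through": "HLHHLH",
}

def best_fit(profile):
    """Return the use case whose ratings are closest to the given profile."""
    def distance(ratings):
        return sum(abs(LEVEL[a] - LEVEL[b]) for a, b in zip(profile, ratings))
    return min(use_cases, key=lambda name: distance(use_cases[name]))
```

For example, a high-volume, low-human-touch, integration- and rule-heavy profile maps to the transactional/straight-through row.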
The lines between them are somewhat fuzzy and exceptions always occur, but it is important to understand how your own processes fit this use-case framework in order to guide BPMS vendor selection. The essential characteristics of each use case are:

• Key artifact. An identifying feature of the BPMS used for this type of process.
• Instance volume. Hourly or daily volume of instances of a particular process type. Low means tens; High means thousands.
• Human activity. Intensity of interactive steps in running processes. Low means tens of human steps per hour (aggregated); High means thousands.
• Application integration. Importance of integration middleware and infrastructure. Low means relatively unimportant, with no message bus required. High means very important, including adapters and a message bus.
• Business rules. Importance of a true business rule engine to automate complex decisions based on process data. Low means relatively unimportant; High means important.
• Content management. Importance of a content repository supporting versioning, search, access control, and records retention, integrated with the process engine. Low means relatively unimportant, with no repository required; High means a scalable repository is generally required.
• Events and exceptions. Importance of facilities for responding to events and exceptions, including modeled handlers, propagation between parent and child processes, and transaction management. Low means relatively unimportant, leveraging interactive (manual) decision-making; High means very important, primarily automated.

3.1 Basic Workflow

Basic workflow is the simplest type of process. It consists primarily of routing electronic forms to individual users for data entry, review, and approval. Examples include purchase requisitions, HR benefits enrollment, and new hire processing.
A transaction, if any, typically occurs at the end, and may not even be automated by the process, so little application integration is typically required. Routing is performed by process rules operating on form data or user decisions at runtime. All BPMS can automate this type of process, but some are not cost-effective solutions to basic workflow problems. Look for:

• Rich forms user interface
• Process design and maintenance by non-programmers
• Easy deployment and maintenance

3.2 Content Lifecycle

Content lifecycle processes are similar to basic workflow, but in addition to electronic forms the process routes documents for creation, review and revision, approval, publishing and distribution, and archival retention. Examples include technical documentation, web content management, and Sarbanes-Oxley 404 compliance. BPMS requirements are similar to basic workflow, the major difference being the need to integrate with an enterprise content management (ECM) repository. While any BPMS can in principle integrate with a third-party repository like any other external business system, content lifecycle processes work best when the BPMS is content-aware, meaning core ECM capabilities, such as document viewing and check-in/check-out from the task user interface, are easily composed by the process designer without custom integration. Look for:

• Support for document attachments and viewers
• Support for content management library services: check-in/check-out, versioning, metadata search
• Support for scalable ECM repositories

3.3 Complex Collaborative

Complex collaborative processes have a more unstructured character. Each instance is like a project. Examples include launching a new product or putting together a deal. While there may be a series of milestones to accomplish before completion, the sequence of steps cannot always be specified precisely at design time.
Flow may be ad hoc and loop back, or may involve online team rooms, collaborative document editing, and team calendars. Most BPMS are not well suited to this, except for those that come from team collaboration software vendors. Look for:

• Support for collaborative document review and discussion
• Support for unstructured/ad hoc flow and offline work
• Integration of online team rooms

3.4 Case Management

Case management is another special type of process that only some BPMS support well. Its key artifact is an electronic folder of case documents, which are added while the case process is running. The classic example is mortgage origination, but others include underwriting for insurance and loans, financial advisory processes, and many types of benefits adjudication. What makes case management different is that the number and identities of documents may not be known at design time, and each document may have its own workflow (create, review, approve) in addition to the overall case process. Many case management processes involve complex business rules and require integration of a business rule engine. Look for:

• Electronic case folder of independent work objects
• Ability to add case objects and flows at runtime
• Content management integration

3.5 Production Workflow

Production workflow processes are high-volume flows involving pools of users drawing work from shared queues. Examples include health insurance claims, credit card applications, and accounts payable. Often the processes are document-intensive, and process tasks involve capturing data from documents, reviewing documents, and approving transactions based on document data. In addition to scalable performance, production workflow requires rich queue management features, the ability to flexibly assign user roles and groups to those queues, and performance management metrics geared to work team productivity.
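The shared-queue pattern behind production workflow can be sketched in a few lines: a process rule dispatches each task to a role-specific queue, and any user acting in that role draws the oldest waiting item. Queue names, the routing rule, and the $5,000 threshold are all invented for illustration:

```python
# Toy shared-queue work distribution: tasks are queued per role, and users
# in a role pull work in FIFO order.
from collections import deque

queues = {"claims-examiner": deque(), "supervisor": deque()}

def dispatch(task, amount):
    """Process rule: route large claims to the supervisor queue."""
    role = "supervisor" if amount > 5000 else "claims-examiner"
    queues[role].append(task)
    return role

def draw_work(role):
    """A user in the given role pulls the oldest waiting task, if any."""
    return queues[role].popleft() if queues[role] else None

dispatch("claim-001", 1200)
dispatch("claim-002", 9800)
dispatch("claim-003", 300)
```

A production-grade BPMS layers onto this the queue-management, role-assignment, and team-productivity reporting features described above.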
Look for:

• Support for shared queues and rule-driven task assignment
• Performance optimization through simulation, analytics, and real-time escalation
• Business rules
• High-performance document retrieval/parsing

3.6 Transactional/Straight-Through

Transactional, or so-called straight-through, processes are high-volume flows with human interaction generally limited to approvals and exception resolution. Examples include automated order processing and trade ticket settlement. Most BPMS from integration or infrastructure vendors emphasize this type of process. Look for:

• Rich integration infrastructure, including adapters and an enterprise service bus
• Complex business objects, data transformation, and business rules
• Comprehensive event management and automated exception handling
• Industry solution templates with prebuilt objects, transformations, adapters, protocols, performance metrics, and reports corresponding to industry standards and best practices

4. Principles of Product Selection

BPMS buyers are confronted with a wide selection of available offerings but are given little useful guidance from vendors on how to make an intelligent choice. A practical approach is to ask three basic questions of each prospective BPMS:

1. Is the BPMS a good fit to my process use case? This requires upfront analysis of process characteristics to understand the features and functions most critical to success. Only then can BPMS capabilities be properly weighted in the selection decision.

2. Does the BPMS support my preferred process lifecycle? This requires understanding the roles, responsibilities, and skills of your business analysts and in-house IT. What level of technical ability is required to build and maintain solutions? What kind of handoff or collaboration is required between business and IT, and between functional units in the enterprise?
What facilities are required to support process change and reuse?

3. Does the BPMS meet our standard IT criteria? These include issues of platform fit, architecture and standards, scalability and reliability, and vendor maturity and focus.

4.1 Use Case Fit

The most important consideration is BPMS fit to your process use case. Many factors, large and small, figure in the analysis of use case fit, but for this report we focus on the attributes referenced in Figure 5:

1. Instance Volume

All BPMS vendors say their product is "scalable," but what does that mean? You may have an occasional process, such as employee purchase requisitions or travel expense reports, that at some point touches 100,000 users in your enterprise, but only a couple of times a year on average. A vendor may claim "the largest process implementation in existence," but that says little about true performance scalability. Rather than the number of participants, a better measure of scalability requirements is the number of instances created per hour, or the number of instances simultaneously in process. Processes like online order processing, trade ticket settlement, or health insurance claims have high instance volumes. On the other hand, even though they may be mission-critical, processes like new product launch or technical document production generally have low instance volumes. Others may simply depend on the scale of the proposed implementation.

In a demo, all BPMS offerings can model virtually any process, but in production, if the engine can't handle the instance volume, the implementation is going to fail. While there is no generally accepted standard metric for performance scalability, some BPMS vendors publish lab tests showing the number of "process transactions per hour" on a given server platform. A process transaction typically means execution of a single activity on the process engine.
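The "process transactions per hour" metric above supports a rough sizing exercise: multiply your expected instance volume by the number of engine-executed activities per instance, then compare against a vendor's published lab figure. All numbers below are hypothetical:

```python
# Back-of-the-envelope throughput sizing using the process-transaction metric.
instances_per_hour = 2000        # e.g., claims entering the process at peak
activities_per_instance = 12     # steps the engine executes per instance

required_tph = instances_per_hour * activities_per_instance  # transactions/hour

# Compare against a (hypothetical) vendor lab figure on comparable hardware.
vendor_lab_tph = 60000
headroom = vendor_lab_tph / required_tph
```

A headroom factor comfortably above 1 on comparable hardware is the minimum to look for; as the text notes, reference customers with similar volumes are better evidence than lab numbers.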
The best way to assess performance scalability for your application is to look for reference customers of the BPMS that have successfully implemented a process similar to yours in type and instance volume.

2. Rich Human Workflow

In some processes, human workflow is a major emphasis, with the cost and speed of human labor representing a significant part of the key performance indicators for the process as a whole. In others, human participation is required only on an exception basis, to handle instances that fail some validation rule or that require special approval. The workflow features required by the former would in general be overkill for the latter. Human workflow is a fundamental component of BPMS and all offerings support it, but some provide a rich set of workflow features, while others offer just a perfunctory capability. By "rich" we mean features like:

• Worklists that can display, and be sorted or filtered by, user-defined properties.
• Queues and worklists shared by a pool of participants acting in the same role.
• User models based on organizational tables and directories, so a Manager Approval step gets automatically routed to the right manager.
• Rich task user interfaces, including web forms and screenflows, customizable without programming. These interfaces may allow users to perform actions such as database queries, correspondence generation, or starting an instance of another process.
• Advanced work management capabilities like task delegation, replication, annotation, and voting.
• Privileged user actions such as returning an instance to a previous step, suspending or restarting the instance, or manually re-routing it to an exception queue.

3. Agile Integration

The ability to integrate external systems with minimal programming is a key advance of BPMS over the workflow technology of the 1990s. Process agility generally requires an integration middleware framework within the BPMS, but the capabilities of that middleware can vary greatly.
If the process is mostly human workflow and the only integration required is an occasional database query, filesystem read or write, or web service invocation, simple out-of-the-box wizards and adapters for these activity types are all that is needed. On the other hand, if the process requires integration with enterprise applications such as ERP or CRM, legacy systems, J2EE components, or B2B protocols, more extensive middleware is appropriate, including:

• Integration adapters specific to the enterprise application, legacy technology, or other component or protocol, with the capability to introspect the external system and create integration components that can be invoked by the process engine.
• The ability to invoke integration both synchronously and asynchronously, and to correlate responses (callbacks) from asynchronous calls.
• Support for enterprise messaging, either through a standard interface like JMS or through specific support for common backbones like IBM MQ or newer ESBs. The backbone is not part of the BPMS, but its use is transparently supported by the process implementation.
• A data transformation tool and engine, supporting both XML-to-XML and XML-to-non-XML transformations.
• Event adapters and listeners for inbound integration. Event adapters allow external systems to issue events to the process. Event listeners monitor message queues and relay received events to the process engine.
• The ability to expose BPMS processes as web services for invocation by external applications.

4. Business Rules

Processes where decisions are based on complex sets of interrelated business rules, or where changes to the rules are frequent or must be applied immediately, are best handled by a BPMS that includes or integrates a true business rule engine (BRE). Examples of such decisions might be credit approval, underwriting, claims approval, fraud detection, pricing, or calculation of discounts or commissions.
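A credit-approval decision of the kind listed above can be sketched as chained rules: scoring rules each contribute points, and a threshold rule consumes the resulting score. This is an illustrative toy, not any BRE's rule language; every criterion, weight, and threshold is invented:

```python
# Sketch of a chained scoring decision: mini-rules feed a score, and a
# threshold rule turns the score into an approve/decline decision.
def score_applicant(facts):
    scoring_rules = [                       # (condition, points) pairs
        (lambda f: f["years_as_customer"] >= 5, 30),
        (lambda f: f["late_payments"] == 0, 40),
        (lambda f: f["requested_amount"] < f["annual_income"] * 0.2, 30),
    ]
    return sum(points for cond, points in scoring_rules if cond(facts))

def approve_credit(facts, threshold=60):
    """Threshold rule: chains on the output of the scoring rules."""
    return score_applicant(facts) >= threshold

decision = approve_credit({"years_as_customer": 7, "late_payments": 1,
                           "requested_amount": 8000, "annual_income": 50000})
```

A real BRE would express the same logic declaratively, pull the facts from multiple business systems, and expose the weights and threshold as business-editable parameters.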
In many processes, however, a BRE may be unnecessary. Other BPMS functions where a BRE can add value include assignment of tasks to human participants, dynamic selection of service providers or trading partners, and determining the correct response to an exception, performance problem, or external event.

5. Content and Collaboration

Many process types are content-centric, meaning the creation, review, or processing of documents or other content types plays a critical role. Most BPMSs are data-centric, meaning they have no explicit notion of content separate from the data they manage in the BPMS runtime database. For many content-centric processes, the BPMS should not only integrate an enterprise content management (ECM) repository supporting metadata and content search, access control, versioning, check-in/check-out locking, and web viewers, but should ideally be content-aware, meaning that common repository methods (e.g., check-in, check-out, modify metadata) are built into the BPMS design environment. A data-centric BPMS can integrate with a third-party ECM repository like any other external business system, but if it is not content-aware, additional programming may be required.

Case management is a particular type of content-centric process in which the specific content items required by an instance are determined at runtime based on the specifics of that instance. Examples include various types of lending, underwriting, and benefits administration processes. Each content item may have its own creation/acquisition, review, and approval process within the larger case process. Because of the historical importance of case management in traditional workflow, many BPMS today support it to some degree.

Support for collaborative or "unstructured" work tends to go hand-in-hand with content.
This support may include the ability to create and integrate instances of online team room activities with the process, or to allow ad hoc routing of process instances at runtime from one process step to any other process step. While such collaboration features are important in certain kinds of processes, very few BPMS support them today.

6. Events and Exceptions

Events are signals received by a running process instance, either from an external system or process, or from another part of the BPMS (e.g., the rule engine or deadline manager). Events are used for many purposes, including:

• an in-flight request for a process change (e.g., cancellation or modification of an order)
• a change in the state of an external system (e.g., a customer address changed in the ERP system)
• expiration of a timer or deadline
• synchronization of state with another process (e.g., order approved)
• an exception in the running process instance.

For basic workflow processes, automated event processing is relatively unimportant. Generating an occasional event may require programming to the BPMS API. For high-volume automated processes involving extensive integration, a common message-based event framework is much more important, perhaps including an event broker that filters and sorts inbound messages. The richness of event processing is determined by factors such as:

• What inbound event channels are supported, e.g., message queues, web services, Java API?
• Are adapters available for external systems to generate events?
• What determines the type of an event and its correlation to specific process instances?
• What actions can the BPMS take upon receipt of an event?

Exceptions occur in any process, but the degree to which exception management must be automated varies greatly. In basic workflow processes, simple routing by a process rule or user action to an administrator exception queue is sufficient.
At the other extreme, straight-through processes require most exceptions to be resolved automatically, with any necessary compensation, rollback, or other transaction management actions applied by the BPMS. A BPMS with extensive exception management features may include:

• Explicit support for different exception types, including process faults, rule-based exceptions, external events, timeouts, and user-initiated exceptions.
• Exception handler actions or flows defined explicitly in the process model.
• Compensation handler actions or flows defined explicitly in the process model.
• Grouping of activities into transaction scopes in the process model, so that an exception automatically rolls back or compensates all completed activities in the scope.
• Support for true ACID transaction management where activity resources permit it.

4.2 Process Lifecycle Support

Matching a BPMS's process lifecycle support capabilities ranks next to use case fit in importance for product selection. Process lifecycle here means:

• How are process requirements understood, analyzed, and translated into an executable implementation?
• What are the respective roles, responsibilities, and assumed skills of the various actors in the analysis, design, and maintenance phases of the process lifecycle?
• What are the assumed handoffs and/or collaborations between business and IT in the lifecycle?
• What is the assumed software development lifecycle model, from traditional waterfall to iterative/RAD methodologies?
• What is the expected frequency of change to process models, and how are those changes to be implemented in production?

Each BPMS makes different assumptions about each of these lifecycle issues, reflecting its own understanding of its target market, but may not make those assumptions explicit to you. This section describes some of the factors you need to consider.

1.
Analytical Modeling

Recall from Figure 4 that analytical modeling may be performed either externally to the BPMS or within it. If external, it is typically done in an enterprise architecture tool like Popkin System Architect, Casewise Corporate Modeler, or IDS Scheer ARIS. In addition to modeling, simulating, and optimizing process models, these EA tools support modeling of other IT components, and incorporate their own process data model, separate from the one defined by the BPMS. While dedicated analytical modeling tools often have more advanced simulation and analysis capabilities than native BPMS tools, there is the issue of connecting the two with as little redundancy and loss as possible. If a BPMS assumes integration with an external analytical modeling tool, you need to understand the modeling skills and roles in each environment and the mechanism for the handoff. Alternatively, if the BPMS assumes its internal analytical modeling is sufficient, you need to understand any limitations that entails, and how process modeling is integrated (if at all) with other aspects of enterprise architecture.

2. Design Environment Integration

The broad scope of BPMS generally requires multiple types of design, largely within the BPMS. These include:

• Analytical modeling and simulation, as described above.
• Overall process design, emphasizing the flow of control between activities executed by the process engine, and the definition of process data.
• Integration design, including creation of integration components via introspection of external systems, customization of integration adapters, and data transformation design.
• Custom component design. In a service-oriented BPMS, for example, design of process activities (services) is external to the BPMS design environment.
• Business rule design, including rule definition and management, integration with external information systems, and integration with the process engine.
• Workflow task management, including queues and worklists, participant groups and roles, etc.
• Task user interface design, including electronic forms and screenflows.
• Performance data design, optimized for aggregation and analysis.
• Report and dashboard design, based on performance data, including alerts, graphs, and tables.

Each type of design typically has its own design tool, which may or may not be integrated by the BPMS into a unified design environment. A comprehensive unified modeling environment greatly simplifies the design, testing, and deployment process. In some BPMS this is impossible, because a particular design or execution component comes from a third party or is a legacy offering from the BPMS vendor. Also, some enterprise architects favor a best-of-breed approach to design tools rather than unified suites from a single vendor.

Another lifecycle fit issue is the skill set assumed for each of these tools. Some BPMS assume most of these design functions are performed by Java programmers. To them, unifying the BPMS design environment means running it inside a programmer IDE such as Eclipse. Others go to great lengths to ensure that programming is needed only rarely, if at all, and provide extensive wizards and point-click configuration dialogs for virtually all design functions.

3. Process Structure and Component Reuse

The complex nature of an end-to-end process means that it may in fact be composed of multiple, technically independent BPMS processes in some nested or chained arrangement. With nesting, one process serves as the top-level or end-to-end shell, and various sections of this process are invoked as subprocesses. Subprocesses can be either embedded in the calling process or independent. Embedded means that the calling activity does not complete until the subprocess finishes and returns. It is synchronous, like a subroutine call in a program.
Independent means that the lifetime of the called subprocess is independent of the calling process. The subprocess is invoked asynchronously, and synchronizes its state (if needed) with the calling process via messages or shared data. For example, an order management process may invoke a fulfillment process as an independent subprocess. If the fulfillment process finds that the items are in stock and reports an estimated ship date, order management can issue an invoice and complete, but the fulfillment process keeps on going (until the items are all shipped).

In a chained arrangement, order management might run only until it launches fulfillment, at which point it terminates. When fulfillment is done, it launches billing and then terminates, and so on. The chain of independent fragments makes up the end-to-end process. Within the process engine, embedded subprocesses are part of the calling process instance and typically share its data, but independent or chained subprocesses are actually separate instances of their own BPMS process and have their own instance data.

The reason for allowing, or even encouraging, such independent nested and chained process structures is that in cross-functional processes, the process logic in each section of the process is owned by a different organization, possibly in a different location. BPMS allows each piece to be designed and maintained independently of the others (subject to some constraints), but must then deal with the challenge of managing the end-to-end process as a whole, including state tracking and propagating exceptions between parent and child processes. The degree of support for this type of process integration varies widely across BPMS offerings.

A related issue is process component reuse. Since they represent company policies or best practices, subprocesses should be reusable in end-to-end processes created by others.
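To make the embedded/independent distinction concrete, here is a minimal Python sketch. It is not any vendor's API: the function names, the `order` data shape, and the use of a thread plus a message queue to stand in for asynchronous invocation and message-based state synchronization are all invented for illustration.

```python
import queue
import threading

def fulfillment(order):
    """Subprocess: check stock and report an estimated ship date."""
    return {"order_id": order["id"], "ship_date": "2006-03-01"}

def order_management_embedded(order):
    # Embedded subprocess: synchronous, like a subroutine call.
    # The calling activity does not complete until fulfillment returns,
    # and both share the same instance data (the `order` dict).
    result = fulfillment(order)
    order["ship_date"] = result["ship_date"]
    return "invoice issued for " + order["id"]

def order_management_independent(order):
    # Independent subprocess: a separate instance with its own copy of
    # the data, launched asynchronously; state is synchronized via messages.
    inbox = queue.Queue()
    child = threading.Thread(target=lambda: inbox.put(fulfillment(dict(order))))
    child.start()
    msg = inbox.get()  # wait only for the ship-date message
    # Order management can now invoice and complete; in a real BPMS the
    # fulfillment instance would keep running until all items ship.
    child.join()
    return "invoice issued for %s, est. ship %s" % (order["id"], msg["ship_date"])

print(order_management_embedded({"id": "A-100"}))
print(order_management_independent({"id": "A-101"}))
```

In the independent case, note that the child works on its own copy of the order data, mirroring the point above that independent subprocess instances have their own instance data.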
This is implicit in the BPMS promise, but offerings vary widely in their facilities for making it happen. What is required is a repository of process components – which could be data objects, transformations, integration components, or entire subprocesses – accessible to all processes in the design environment. While all BPMS provide a catalog of the components used in the particular process being defined, not all support this kind of general component repository accessible to any process.

4. Solution Value Out Of The Box

Increasingly, standards are evolving in specific industries for the data objects, protocols, and best-practice metrics used in key processes. While any BPMS provides tools that allow you to implement these standards in process designs, some offer vertical solution frameworks that give them to you pre-built, out of the box. Tailored to specific industry processes, these solution components include:

• Complex data objects
• Skeleton process flows
• Adapters to enterprise applications and technology components (e.g., EDI) commonly used in the industry
• B2B protocols and trading partner management
• Transformation mappings
• Web applications, including task user interfaces
• Performance management reports and dashboards

5. Testing and Deployment

Whether it involves programming or not, process design is a form of application development, requiring rigorous testing followed by transfer (deployment) from the development environment to the production or staging environment. Ideally, integration testing of the entire process, including integration, events, and rules, can be performed directly in the development environment, without requiring deployment. This generally requires a version of the runtime environment to be available on a single PC, along with a debugger that can trace and inspect instances as they run through the process.
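As a sketch of what tracing an instance in a local test environment might look like, the toy harness below records each activity an instance passes through, so a test can inspect the path and resulting data without deploying to production. The engine, process definition, and activity names are all invented for illustration; no real BPMS debugger API is implied.

```python
class TracingEngine:
    """Runs a process definition (an ordered list of named activity
    functions) and records each step, so a test can inspect the
    instance's path the way a process debugger would."""

    def __init__(self):
        self.trace = []

    def run(self, process, data):
        for name, activity in process:
            self.trace.append(name)  # what the debugger's trace would show
            data = activity(data)
        return data

# A toy order process: validate the quantity, then compute the total.
order_process = [
    ("validate", lambda d: {**d, "valid": d["qty"] > 0}),
    ("price",    lambda d: {**d, "total": d["qty"] * 10}),
]

engine = TracingEngine()
result = engine.run(order_process, {"qty": 3})
print(engine.trace)     # the path the instance took
print(result["total"])  # the resulting instance data
```

A unit test would then assert both on `engine.trace` (the path) and on the returned data, which is the in-development integration testing the text describes.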
In the production environment, runtime components may be distributed across different machines as required for scalability, reliability, and security. Some BPMS offerings allow even complex deployments to be automated in one click. This is very helpful in allowing new versions of the process to be installed quickly and reliably.

6. Performance Management

The process engine records snapshots of instance data and other runtime data for use in process monitoring, performance management, and the audit trail. Most BPMS automatically record timestamps for each activity of running instances, and aggregate statistics on queue lengths, cycle times, and work performance by selected groups and individuals. In addition, user-defined key performance indicators typically require tracking specific process data elements and correlating them with external data, such as the percentage of orders fulfilled within some target time, or the fraction of claims (by dollar value) processed with no human intervention. Performance management requires careful design of the data elements logged: where in the process they are captured, and how they are correlated with external data, aggregated and filtered, and reported in tables and graphical dashboards.

Today most BPMS provide some level of performance management within the design and execution environment. Some provide only a fixed set of metrics, while others allow process designers to define their own. Some provide special OLAP schemas (cubes) that allow custom slice-and-dice queries on specific data elements to be defined on the fly by users. In addition to simple reporting, many BPMS allow the performance management component to issue alerts to managers, administrators, or process owners when certain metrics exceed some threshold value, enabling immediate corrective action.
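As an illustration of the kind of user-defined KPI described above, the sketch below computes the percentage of orders fulfilled within a target cycle time from logged start and end timestamps. The log records, field names, timestamp format, and the 24-hour target are all invented; a real BPMS would draw these from the engine's performance data store.

```python
from datetime import datetime

# Timestamps a process engine might log per instance (start and end of
# the end-to-end process, here as ISO-style strings).
log = [
    {"order": "A-1", "start": "2006-01-02T09:00", "end": "2006-01-02T17:00"},
    {"order": "A-2", "start": "2006-01-02T10:00", "end": "2006-01-04T10:00"},
    {"order": "A-3", "start": "2006-01-03T08:00", "end": "2006-01-03T20:00"},
]

def cycle_hours(rec):
    """Cycle time of one instance, in hours, from its logged timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(rec["end"], fmt) - datetime.strptime(rec["start"], fmt)
    return delta.total_seconds() / 3600

def pct_within_target(records, target_hours):
    """KPI: percentage of instances completed within the target cycle time."""
    hit = sum(1 for r in records if cycle_hours(r) <= target_hours)
    return 100.0 * hit / len(records)

print(pct_within_target(log, 24))  # 2 of the 3 sample orders are within 24h
```

Threshold alerting of the sort mentioned above is then a one-line check, e.g. fire an alert whenever `pct_within_target(log, 24)` drops below an agreed service level.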
The range of performance management capability varies significantly across BPMS offerings. Like other parts of process modeling, performance management is considered by some to be distinct from BPMS, more closely aligned with the analytics found in business intelligence tools. And as with analytical modeling, BPMS vendors differ in whether they provide performance management reporting and analytics entirely within their own product or integrate with a third-party BI tool.

7. Process Change and Optimization

Fundamental to BPMS's agility promise is the understanding that processes change over time, either in response to changing business demands or to optimize the implementation based on runtime experience. Generally what needs to change is the process logic, not the internals of individual activities. BPMS allows process logic to be changed quickly with no programming, creating a new version of the process model. Versions are managed in a process model repository within the BPMS. Implementing the new process logic requires deploying the new version, often in parallel with the existing version. Usually instances already in flight continue to follow the old version, and new instances follow the new version.

If changes in process logic must be implemented immediately, even for instances in flight, it can be done with a business rule engine. No versioning or redeployment of the process model is required. When the process instance invokes a rule on the BRE, the latest version of the rule is executed.

4.3 Traditional IT Values

The third set of considerations for BPMS technology selection revolves around traditional IT concerns: platform and architectural fit, system scalability and reliability, and the BPMS vendor's financial strength, market focus, and past experience.

1. Architecture and Standards

Consistency with established platforms and standards in the organization is always a factor.
Although the web and XML web services are smoothing over some of the barriers between platforms, familiar platform issues like Windows vs. Unix, J2EE vs. .Net, and supported DBMSs will usually rule out certain BPMS offerings right off the bat. The ability to leverage expensive existing infrastructure like integration middleware, business rule engines, or EA modeling tools may also favor certain BPMS contenders.

As discussed previously, BPMS architecture and modeling language standards should be taken into consideration as well. In the workflow architecture, the end-to-end process and human workflow fragments are modeled using the WfMC's XPDL. Increasingly, XPDL-based BPMS also support BPEL-based orchestrations as short-running subprocesses, typically for application integration. On the other hand, BPMS based on service orchestration use BPEL for the top-level process and all service invocations, and typically provide a task management service to handle human workflow. To date, BPEL models are very fine-grained, and their tools require a higher degree of technical knowledge on the part of the process designer than do XPDL-based offerings. However, the increasing ability of some offerings to generate BPEL under the covers from business-friendly analytical modeling tools is changing these dynamics.

Other standards to keep an eye on include new web services standards and new versions of existing core standards, including SOAP, WSDL, and XPath. The landscape is shifting rapidly here.

2. Scalability and Reliability

Most of the analysis in the 2006 BPMS Report is based on capabilities documented by the BPMS vendor.
However, the only good way to understand critical issues like performance scalability (measured in concurrent instances, process transactions per hour, or any other way), reliability (bug-free operation), availability (uptime), development cycle, and total cost of ownership is to talk to reference customers of each BPMS under consideration. The reference customer should be implementing a process similar in character and scale to your own. THIS IS REQUIRED DUE DILIGENCE BEFORE YOU BUY. This report doesn't provide much of that information. Another source of this information, although anecdotal, is analyst firms like Gartner; it's probably the best information those analysts can give you about BPMS.

3. Vendor Strength and Focus

Another factor is the vendor itself. Unfortunately, most BPMS vendors are relatively small companies (under $100 million). However, as most of them have many customers in the Global 1000, this is a matter of your company policy, and I'll leave it at that. Beyond sheer company size is the issue of product maturity and market focus. Is the BPMS a new offering, or has it existed for a few years already? How many customers in your industry does it have? How many implementations of your process solution has it done? If the BPMS vendor has done lots of installs in health insurance and brokerage, but you're the first mortgage loan process they've tried, you might want to think again.

5. Using the 2006 BPMS Report

With the preceding discussion as background, we're ready to talk about the BPMS evaluation format in the 2006 BPMS Report. The same analytical framework is applied to each BPMS offering and is presented in the same format. Sections of each report chapter include:

1. Vendor and Product Overview
This section includes company and product background, industry focus, and overall approach to the BPMS market. It describes typical customer implementations, and outlines the basic components of the BPM suite offering.

2. Environment and Architecture
This section describes the system component architecture, design and runtime environment, and supported platforms and standards.

3. Analytical Modeling
This section describes tools for business analysts to model business processes and simulate their performance in various scenarios.

4. Process Structure and Data
This section describes core concepts and terminology used in executable process design, business objects or other forms of process data, and the structure of end-to-end processes in terms of BPMS constructs, including activity types, subprocesses, workflows, events, scopes, control and message flows, and choreographies.

5. Human Workflow
This section describes support for human workflow, including task assignment to groups and roles, task user interface and form design, worklists, and task participation through the process portal.

6. Integration Framework
This section describes integration architecture, capabilities, and design tools, including adapters, enterprise service bus, and event listeners, with special attention to support for web services and integration of existing business systems. It also describes tools for resource introspection, data mapping, and transformation.

7. Business Rules
This section describes support for business rule engines, either native or third party, including rule design, rule usage patterns, and integration with process actions.

8. Content, Collaboration, and Case Management
This section describes support for content-centric processes, through native content methods and events, integration with content management repositories, support for team collaboration, and case management capabilities, including dynamic addition of case items and their attached subprocesses at runtime.

9. Events and Exceptions
This section describes the BPMS's ability to listen for and receive various types of events from external systems or from other parts of the BPMS itself. In addition, it describes handling of exceptions, both system faults and business exceptions, including exception signaling, handlers, and flows; compensation; and transaction management, including scopes, coordination, and recovery (both ACID and long-running).

10. Performance Management
This section describes process monitoring and performance management capabilities, including technical architecture and design tools, out-of-the-box OLAP cubes and dashboards, and capabilities for automated escalation actions and optimization.

11. Industry Solutions and Services
This section describes available pre-built solution frameworks for specific vertical and cross-industry process applications.

12. Analysis
Based on the previous sections, the report assesses the BPMS's strengths and suitability for the major use cases outlined in Figure 5, as well as the assumed skills and roles in the process lifecycle.

About the Author

Dr. Bruce Silver is an independent industry analyst and consultant specializing in BPM and content management technology. He has advised both users and vendors of BPMS technology for many years, as VP at the analyst firm BIS Strategic Decisions (which became Giga Information Group, now Forrester Research) and later through his own firm, Bruce Silver Associates. He currently writes the monthly BPMS Watch column on and the Change Agent column for Intelligent Enterprise magazine. He is also the BPMS Technology Track chair at the Brainstorm BPM Conference series and a frequent speaker on BPMS product selection. In addition to the 2006 BPMS Report, Bruce Silver Associates offers numerous free white papers on BPMS and content management technology, available from Comments on this report as well as requests for future reports are welcome.
Write to ...