Decision Node

UML—Unified Modeling Language

Tim Weilkiens, in Systems Engineering with SysML/UML, 2007

3.6.6 Decision and Merge Nodes

Definitions

A decision node is a node in an activity at which the flow branches into several optional flows. There is exactly one incoming edge and an arbitrary number of outgoing edges, each of which has a condition.

A merge node is a node in an activity at which several flows are merged into one single flow. There is an arbitrary number of incoming edges and exactly one outgoing edge.

A flow within an activity is generally controlled by conditions: if XY is true, then do A; otherwise do B. What we need for this is a decision node. The notation is a rhombus with one incoming edge and an arbitrary number of outgoing edges, each labeled with a condition in square brackets (Figure 3.49).

FIGURE 3-49. Example of decision nodes.

A condition is a Boolean expression in any language. Never more than one condition may be true at a time, and exactly one condition must always be met. We can use the [else] condition to ensure the latter: this path is chosen whenever all other conditions are false (Figure 3.49).

The notation for merge nodes is the same as that for decision nodes. A merge node is the counterpart that merges several optional flows into one single flow. As flows are merged, no conditions are tested, and there is no waiting for special events (Figure 3.50).
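These routing rules can be sketched in code. The following Python fragment is an illustrative sketch, not from the book; the function name and the order-routing guards are hypothetical. It enforces the two rules above: guards must be mutually exclusive, and an [else] edge catches the case where no guard holds.

```python
def decision_node(token, guarded_flows, else_flow=None):
    """Route a token down exactly one outgoing edge, as a UML decision node does.

    guarded_flows: list of (guard, target) pairs; each guard is a Boolean predicate.
    else_flow: the [else] edge, taken when no guard holds.
    """
    matches = [target for guard, target in guarded_flows if guard(token)]
    if len(matches) > 1:
        raise ValueError("guards must be mutually exclusive")
    if matches:
        return matches[0]
    if else_flow is not None:
        return else_flow
    raise ValueError("no guard matched and no [else] edge was given")

# Hypothetical example: route an order token by its value.
route = decision_node(
    {"value": 250},
    guarded_flows=[
        (lambda t: t["value"] > 100, "manual_approval"),
        (lambda t: t["value"] <= 100, "auto_approval"),
    ],
)
```

If the two guards above were not mutually exclusive (say, `> 100` and `>= 100`), the sketch would raise instead of silently picking one path.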

FIGURE 3-50. Example of merge nodes.

URL: https://www.sciencedirect.com/science/article/pii/B9780123742742000031

Business Modeling

Raul Sidnei Wazlawick, in Object-Oriented Analysis and Design for Information Systems, 2014

2.4.2 Control flow nodes

Two control flow nodes are common in the activity diagram: decision nodes and parallelism nodes.

The decision nodes (branch and merge nodes) are represented by diamonds. The flows coming out of the decision node must have guard conditions (a logical expression between square brackets). Two or more flows may leave a decision node, but it is important that the guard conditions are mutually exclusive, that is, only one of them may be true at a time.

The parallelism nodes (fork and join nodes) are represented by bars. The flows coming out of a fork node are performed in parallel.

The diagram in Figure 2.8 is still a very crude representation of the real process of selling books. Just to illustrate a possible evolution of that diagram, let's examine the situation when some of the ordered books are not available in stock. It would be necessary to order them from one of the publishers and add them to the customer order after arrival. Figure 2.9 shows this situation by indicating that if some books are not available, then they have to be ordered from publishers, and the order is sent only after they arrive.

Figure 2.9. Example of decision control flow.

Just below the Confirm payment activity there is a branch node represented by the diamond. As noted before, the decision node allows only one exit flow to be followed. The guard conditions ([some books missing] and [all books in stock]) are mutually exclusive. From the branch node, the flow follows one of the alternate paths until reaching the merge node that is depicted just above the Send books activity.

Later, the analyst could discover that the model is still not satisfactory. For instance, even if some books are not in stock, the order could be sent in two or more deliveries. Thus, two paths could be performed in parallel: sending the books that are available in stock and, if necessary, ordering the other books, and sending them later, as shown in Figure 2.10.

Figure 2.10. Example of parallel flows.

The bar below the Confirm payment activity in Figure 2.10 represents a fork node because it starts two parallel paths, and the other bar represents a join node because it synchronizes the parallel paths into a single path.

The single path after a join node can only be followed if all paths that come to the join node have been followed. It can be seen that in that model, if all the books are in stock, only one of the parallel paths will have activities to be performed, because the other path immediately goes to the merge and join nodes.
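The join rule described above (the single outgoing path continues only after all incoming parallel paths finish) can be sketched with Python threads. This is an illustrative sketch, not from the book; the two activities are hypothetical stand-ins for the parallel paths of Figure 2.10.

```python
from concurrent.futures import ThreadPoolExecutor

def send_available_books(order):
    return f"sent {order['in_stock']} book(s) from stock"

def order_missing_books(order):
    if not order["missing"]:
        return "nothing to order"   # this path goes straight to the join
    return f"ordered {order['missing']} book(s) from publishers"

def fork_join(order):
    # Fork node: both paths start in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(send_available_books, order)
        f2 = pool.submit(order_missing_books, order)
        # Join node: the single outgoing flow continues only
        # after *both* incoming flows have finished.
        return f1.result(), f2.result()

results = fork_join({"in_stock": 3, "missing": 0})
```

When all books are in stock, the second path does no real work and reaches the join immediately, mirroring the observation about Figure 2.10.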

The fork, join, decision, and merge nodes, as well as the initial and activity final nodes, may be placed inside swim lanes. However, this does not affect their semantics. Thus, the choice of a swim lane to place such nodes is only due to visual suitability.

Another node that can be useful sometimes is the flow final node, which terminates a path (parallel or not) that is being performed. The difference between it and the activity final node is that in the case of the activity final node, all flows of the activity diagram are terminated when a single flow reaches it, while in the case of the flow final node only one flow (among other parallel flows) is terminated, but the activity continues.

In the use case of Figure 2.7, only one actor (customer) and one worker (clerk) participate in the use case Sell books. However, after detailing the activities related to that use case, as done in Figure 2.10, it was discovered that two more actors are necessary: the Publisher and the Credit card operator. Additionally, the Publisher is an actor that has to be linked to the use case Buy books. Thus, depending on the level of detail desired at this point of the project, the business use case diagram may be updated to reflect those discoveries, as seen in Figure 2.11.

Figure 2.11. Business use case diagram updated with information discovered with the activity diagram.

Other options for detailing a business use case are the UML sequence diagram and communication diagram. However, those diagrams are used to represent messages being sent to elements. It is not always quite natural to find names to label those messages in the case of a business process, and meaningless names such as "seek data" and "check result" may end up being chosen. Thus, the activity diagram is a more natural choice to describe what happens in the real world, inside an organization of people. However, as will be seen in Section 5.8, sequence diagrams are very useful to detail system use cases, because system use cases usually consist of a sequence of information flows being exchanged between actors and a system.

URL: https://www.sciencedirect.com/science/article/pii/B9780124186736000028

Coverage-Based Software Testing

W. Masri, F.A. Zaraket, in Advances in Computers, 2016

3.3 Branch Coverage

The branch coverage criterion defines TR to include all the branches (edges originating from decision nodes) in all the CFGs of the functions in the subject program. Thus, for T to satisfy branch coverage, T should exercise each branch of each control structure. For example, given an if statement, the body of the if should be executed in at least one instance and skipped in at least one other instance. Given an if-else, the body of the if should be executed in at least one instance and the body of the else executed in at least one other instance. And given a loop, it should iterate one or more times in at least one instance and zero times in at least one other instance.
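As an illustrative sketch (not from the chapter), the following hypothetical function and test set show what a branch-adequate suite must exercise: both arms of each if, and both the zero-iteration and the iterating case of the loop.

```python
def clamp_sum(values, limit):
    """Sum the non-negative values, clamping the result at limit."""
    total = 0
    for v in values:        # loop: must iterate >= 1 time and also 0 times
        if v < 0:           # if: must be taken and also skipped
            continue
        total += v
    if total > limit:       # if-else: both arms must be exercised
        return limit
    return total

# A branch-adequate test set: together, these calls cover every branch.
cases = [
    ([], 10),           # loop body never entered
    ([5, -2, 4], 10),   # loop iterates; 'v < 0' taken and skipped; else arm
    ([8, 7], 10),       # 'total > limit' arm
]
results = [clamp_sum(vs, lim) for vs, lim in cases]
```

Removing any one of the three cases would leave some branch (an edge out of a decision node in the CFG) unexercised.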

URL: https://www.sciencedirect.com/science/article/pii/S0065245816300274

Decision Analysis Fundamentals

Richard E. Neapolitan, Xia Jiang, in Probabilistic Methods for Financial and Marketing Informatics, 2007

Formulation of the Theory

To create a dynamic influence diagram from a dynamic Bayesian network, we need only add decision nodes and a value node. Figure 5.30 shows the high-level structure of such a network for T = 2. The chance node at each time step in that figure represents the entire DAG at that time step, and so the edges represent sets of edges. There is an edge from the decision node at time t to the chance nodes at time t + 1 because the decision made at time t can affect the state of the system at time t + 1. The problem is to determine the decision at each time step which maximizes expected utility at some point in the future. Figure 5.30 represents the situation where we are determining the decision at time 0 which maximizes expected utility at time 2. The final utility could, in general, be based on the earlier chance nodes and even the decision nodes. However, we do not show such edges to simplify the diagram. Furthermore, the final expected utility is often a weighted sum of expected utilities independently computed for each time step up to the point in the future we are considering. Such a utility function is called time-separable.
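As a minimal sketch of a time-separable utility function, assuming hypothetical per-step expected utilities and discount-style weights (neither appears in the text):

```python
def time_separable_utility(step_utils, weights):
    """Final expected utility as a weighted sum of the expected
    utilities computed independently for each time step."""
    assert len(step_utils) == len(weights)
    return sum(w * u for w, u in zip(weights, step_utils))

# Hypothetical expected utilities at t = 0, 1, 2 and geometric weights.
u = time_separable_utility([10.0, 8.0, 6.0], [1.0, 0.9, 0.81])
```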

Figure 5.30. The high-level structure of a dynamic influence diagram.

In general, dynamic influence diagrams can be solved using the algorithm presented in Section 5.2.2. The next section contains an example.

URL: https://www.sciencedirect.com/science/article/pii/B9780123704771500220

Methods and Methodology for an Incremental Test Generation from SDL Specifications

Athmane Touag, Anne Rouger, in SDL '99, 1999

2.1.2 SDL_EFSM computation

The transformation of the SDL process into an SDL_EFSM mainly consists of treating an SDL decision node as a specific state of the SDL_EFSM (we call it a decision state), and SDL decision responses as specific trigger conditions of SDL_EFSM transitions. These specific trigger conditions are semantically equivalent to SDL continuous signals. Figure 1(b) gives an example of the transformation of the decision node in Figure 1(a). The hypotheses described before are also assumed for the SDL_EFSM.

The behaviours of the SDL process reachability graph and those of the SDL_EFSM are equivalent. The proof consists in showing, for each SDL process and SDL_EFSM transition, that the SDL process queue has the same content as the SDL_EFSM queue and vice versa.

URL: https://www.sciencedirect.com/science/article/pii/B9780444502285500126

Game Theory

Theodore L. Turocy, Bernhard von Stengel, in Encyclopedia of Information Systems, 2003

VI.B. Strategies in Extensive Games

In an extensive game with perfect information, backward induction usually prescribes unique choices at the players' decision nodes. The only exception is if a player is indifferent between two or more moves at a node. In that case, any of these best moves, or even a random selection among them, could be chosen by the analyst in the backward induction process. Since the eventual outcome depends on these choices, this may affect a player who moves earlier, since the anticipated payoffs of that player may depend on the subsequent moves of other players. In this case, backward induction does not yield a unique outcome; however, this can only occur when a player is exactly indifferent between two or more outcomes.

The backward induction solution specifies the way the game will be played. Starting from the root of the tree, play proceeds along a path to an outcome. Note that the analysis yields more than the choices along the path. Because backward induction looks at every node in the tree, it specifies for every player a complete plan of what to do at every point in the game where the player can make a move, even though that point may never arise in the course of play. Such a plan is called a strategy of the player. For example, a strategy of player II in Figure 7 is "buy if offered high-quality service, don't buy if offered low-quality service." This is player II's strategy obtained by backward induction. Only the first choice in this strategy comes into effect when the game is played according to the backward-induction solution.
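To make this concrete, here is a small backward-induction sketch in Python. The game tree mirrors the high/low-quality service example described in the text, but the payoff numbers are hypothetical, chosen so that the induced plan matches the strategy quoted above. Note how the recursion records a move at every decision node, on and off the path of play, yielding a complete strategy.

```python
# Internal node: (label, player, {move: subtree}); leaf: (payoff_I, payoff_II).
# Payoffs are hypothetical illustrations, not the figures from the article.
tree = ("root", "I", {
    "High": ("after-High", "II", {"buy": (2, 2), "don't": (0, 1)}),
    "Low":  ("after-Low",  "II", {"buy": (3, 0), "don't": (1, 1)}),
})

def backward_induction(node, plan):
    """Return the payoffs reached from `node` under backward induction,
    recording the chosen move at every decision node in `plan`."""
    if len(node) == 2:                  # leaf: (u_I, u_II)
        return node
    label, player, moves = node
    idx = 0 if player == "I" else 1     # which payoff this player maximizes
    best_move, best = None, None
    for move, child in moves.items():
        payoffs = backward_induction(child, plan)
        if best is None or payoffs[idx] > best[idx]:
            best_move, best = move, payoffs
    plan[label] = (player, best_move)   # part of the player's full strategy
    return best

plan = {}
outcome = backward_induction(tree, plan)
```

With these payoffs, the plan at the two nodes of player II is exactly "buy after High, don't buy after Low", and player I chooses High at the root, even though only the moves along the resulting path are actually played.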

With strategies defined as complete move plans, one can obtain the strategic form of the extensive game. As in the strategic form games shown before, this tabulates all strategies of the players. In the game tree, any strategy combination results in an outcome of the game, which can be determined by tracing out the path of play arising from the players adopting the strategy combination. The payoffs to the players are then entered into the corresponding cell in the strategic form. Figure 8 shows the strategic form for our example. The second column is player II's backward induction strategy, where "buy if offered high-quality service, don't buy if offered low-quality service" is abbreviated as H: buy, L: don't.

Figure 8. Strategic form of the extensive game in Fig. 7.

A game tree can therefore be analyzed in terms of the strategic form. It is not hard to see that backward induction always defines a Nash equilibrium. In Fig. 8, it is the strategy combination (High; H: buy, L: don't).

A game that evolves over time is better represented by a game tree than by the strategic form. The tree reflects the temporal aspect, and backward induction is succinct and natural. The strategic form typically contains redundancies. Figure 8, for example, has eight cells, but the game tree in Fig. 7 has only four outcomes. Every outcome appears twice, which happens when two strategies of player II differ only in the move that is not reached after the move of player I. All move combinations of player II must be distinguished as strategies since any two of them may lead to different outcomes, depending on the action of player I.

Not all Nash equilibria in an extensive game arise by backward induction. In Fig. 8, the rightmost bottom cell (Low; H: don't, L: don't) is also an equilibrium. Here the customer never buys, and correspondingly Low is the best response of the service provider to this anticipated behavior of player II. Although H: don't is not an optimal choice (so it disagrees with backward induction), player II never has to make that move, and is therefore not better off by changing her strategy. Hence, this is indeed an equilibrium. It prescribes a suboptimal move in the subgame where player II has learned that player I has chosen High. Because a Nash equilibrium obtained by backward induction does not have such a deficiency, it is also called subgame perfect.

The strategic form of a game tree may reveal Nash equilibria which are not subgame perfect. Then a player plans to behave irrationally in a subgame. He may even profit from this threat as long as he does not have to execute it (that is, the subgame stays un-reached). Examples are games of market entry deterrence, for example, the so-called "chain store" game. The analysis of dynamic strategic interaction was pioneered by Selten, for which he earned a share of the 1994 Nobel prize.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000769

Obtaining supported decision trees from text for health system applications

Boris Galitsky, in Artificial Intelligence for Healthcare Applications and Management, 2022

Abstract

In this chapter, we automatically build a decision tree (DecT) from textual data relying on discourse analysis. We refer to such a tree as supported, as each decision node is supported with a text fragment the respective rule is extracted from. A DecT can be built from multiple documents as well as traditional attribute-value data. We propose a step-by-step procedure for building discourse trees (DTs) from text and then constructing a DecT from multiple DTs. We also evaluate the correctness and completeness of the formed DecTs in making diagnoses for a series of diseases, given various textual descriptions of symptoms. We conclude the chapter with an extensive survey of expert systems leveraging DecT algorithms.

URL: https://www.sciencedirect.com/science/article/pii/B9780128245217000132

Policies, Access Control, and Formal Methods

Elisa Bertino, in Handbook on Securing Cyber-Physical Critical Infrastructure, 2012

23.4.3 Binary Decision Diagrams

Binary decision diagrams are data structures for representing Boolean functions. A binary decision diagram is a rooted, directed, acyclic graph. Nonterminal nodes in such a graph are called decision nodes; each decision node is labeled by a Boolean variable and has two child nodes, referred to as low child and high child. The edge from a decision node to its low (high) child represents an assignment of the variable equal to 0 (1). The terminal nodes are of two types: 0-terminal, representing the Boolean value false, and 1-terminal, representing the Boolean value true. An example of a binary decision diagram encoding the following simple access control policy is shown in Figure 23-7:

Permit if (filename = fileA) AND (time < 17:00 OR age > 18).

In the graphical representation, a dotted edge denotes the connection to a low child, whereas a continuous edge denotes the connection to a high child. The terminal symbols denote the case in which the policy permits access (symbol Y) and the case in which the policy does not apply to the request (symbol NA). Notice that to determine which requests are authorized by this policy, one simply traverses the graph backwards starting from the Y terminal node.
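The forward traversal just described can be sketched as a small evaluator. This is an illustrative sketch, not the chapter's data structure: each decision node is encoded as a (variable, low child, high child) triple, with the policy's three conditions as Boolean variables.

```python
# Decision node: (variable, low_child, high_child); terminals are the
# strings "NA" (policy does not apply) and "Y" (permit), as in Figure 23-7.
bdd = ("filename=fileA",
       "NA",                            # low edge: condition is false
       ("time<17:00",
        ("age>18", "NA", "Y"),          # time >= 17:00, so check the age
        "Y"))                           # time < 17:00 alone suffices

def evaluate(node, assignment):
    """Walk the diagram: follow the high edge when the node's variable
    is true in the request, the low edge otherwise."""
    while isinstance(node, tuple):
        var, low, high = node
        node = high if assignment[var] else low
    return node

request = {"filename=fileA": True, "time<17:00": False, "age>18": True}
decision = evaluate(bdd, request)
```

For this request the walk visits all three decision nodes and ends at the Y terminal, so the policy permits access.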

Figure 23-7. An example of an access control policy encoded by a binary decision diagram.

Approaches based on the binary decision diagram technique have been proposed for policy analysis (like in the EXAM system) and for impact analysis of policy changes.

URL: https://www.sciencedirect.com/science/article/pii/B9780124158153000236

Artificial intelligence and machine learning for the healthcare sector

Pratiyush Guleria, Manu Sood, in Cognitive and Soft Computing Techniques for the Analysis of Healthcare Data, 2022

3.1.3.5 Decision tree

The decision tree is a supervised ML algorithm for data classification and regression. In a decision tree, if-then rules are applied to the data set to form a tree-like structure with decision nodes and leaf nodes. The tree relates the input features to the target class in order to estimate the probability of an event. The information gain is calculated for each candidate attribute at a node, and the node is split on the feature with the highest information gain, so that the tree achieves the best possible result when performing predictions. The entropy calculation is an important factor for building the decision tree. The equation for finding the entropy for building the decision tree is shown in Eq. (1.6), and the equation for finding the information gain metric is shown in Eq. (1.7).

(1.6) $e(s) = -\sum_{j=1}^{n} p_j \log_2 p_j$

Here $e(s)$ is the entropy of the sample set $s$, and $p_j$ is the probability (relative frequency) of the $j$th class in $s$.

(1.7) $\mathrm{infogain}(s, \mathrm{attr}_i) = e(s) - \sum_{v \in \mathrm{Values}(\mathrm{attr}_i)} p(\mathrm{attr}_i = v)\, e(s_v)$

The information gain ($\mathrm{infogain}$) calculated for a particular attribute ($\mathrm{attr}_i$) measures how much knowledge about the target function is gained by knowing the value of that attribute; the subtracted summation term is the conditional entropy.
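Eqs. (1.6) and (1.7) can be computed directly. The following sketch uses a hypothetical toy data set (the symptom/diagnosis rows are invented for illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Eq. (1.6): e(s) = -sum_j p_j * log2(p_j) over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Eq. (1.7): entropy of s minus the conditional entropy given attr."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Hypothetical toy data: does the 'fever' feature predict the diagnosis?
rows = [{"fever": "yes"}, {"fever": "yes"}, {"fever": "no"}, {"fever": "no"}]
labels = ["flu", "flu", "healthy", "healthy"]
g = info_gain(rows, labels, "fever")
```

Here the split on `fever` yields two pure subsets, so the conditional entropy is 0 and the information gain equals the full entropy of the set (1 bit): the attribute with the highest such gain would be chosen as the decision node.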

URL: https://www.sciencedirect.com/science/article/pii/B9780323857512000074

Modeling Flow-Based Behavior with Activities

Sanford Friedenthal, ... Rick Steiner, in A Practical Guide to SysML (Third Edition), 2015

9.9.3 Modeling Probabilistic Flow

When appropriate, a flow can be tagged with a probability to specify the likelihood that a given token will traverse a particular flow among available alternative flows. This is typically encountered in flows that emanate from a decision node, although probabilities can also be specified on multiple edges going out of the same object node (including pins). Each token can only traverse one edge with the specified probability. If probabilistic flows are used, then all alternative flows must have a probability and the sum of the probabilities of all flows must equal 1.

Probabilities are shown on either activity flow symbols or parameter set symbols as a property/value pair, probability = probability value, enclosed in braces and floating near the appropriate symbol.
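A minimal sketch of the probabilistic-flow rule (each token traverses exactly one alternative edge, and the probabilities of all alternatives must sum to 1), with hypothetical flow names and probabilities not taken from the book:

```python
import random

def traverse(flows):
    """Send a token down exactly one of the alternative flows, chosen
    with the probability tagged on each edge."""
    targets, probs = zip(*flows)
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities of all alternative flows must sum to 1")
    return random.choices(targets, weights=probs, k=1)[0]

# Hypothetical probabilities for a transmit-style decision node.
flows = [("transmission succeeded", 0.95), ("transmission failed", 0.05)]
outcome = traverse(flows)
```

The sum-to-1 check mirrors the well-formedness rule quoted above; an incomplete set of alternatives is rejected rather than silently renormalized.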

Figure 9.18 shows the activity diagram for Transmit MPEG, first introduced in Figure 9.15. In this example, the probability of successful transmission has been added. The two flows that correspond to successful and unsuccessful transmission have been labeled with their relative probability of occurrence.

FIGURE 9.18. Probabilistic flow.

URL: https://www.sciencedirect.com/science/article/pii/B9780128002025000096