I spent some time trying to find an MDD formulation that handles it in a few MDD calls, but that explodes in complexity once a third event is added to a check. The conclusion for now is that it's better to do these checks in many smaller steps.
@freijnen So I have been implementing the confluence check in MDD for the past week, and while implementing the reversible check the program concludes the example is not confluent.
For the record, this is the CIF input file:
```
controllable a, b, c;

automaton A:
  location p:
    initial;
    edge a goto q; // Event 1
  location q:
    marked;
end

automaton B:
  disc int[0..2] v;
  location p:
    initial;
    edge b when A.p do v := v + 2 goto q; // Event 2
  location q:
    edge c when A.q do v := v - 2 goto p; // Event 3
end
```
Program output says:
```
Loading CIF specification "confluence_reversible1.cif"...
Checking for finite response...
Iteration 1.
    The following events have been encountered in a controllable-event loop of automaton B: (b c), which is controllable unconnectable.
CONCLUSION: The specification has finite response.
Checking for confluence...
Mutual exclusive event pairs: (a, c), (b, c)
Pairs that FAILED ALL CHECKS: (a, b)
CONCLUSION: ERROR, the specification may NOT have confluence.
```
The check that I am doing to conclude reversibility here is:
```java
// 'event213Done' is the state after doing the sequence event2, event1, event3 (i.e. b, a, c).
// 'event1Done' is the state after doing event1 (i.e. a).
if (event213Done != Tree.ZERO && event213Done == event1Done) {
    foundReversible = true;
    break;
}
```
As you can see, the location pointers are equal, but the variable v is missing in the event1Done tree (meaning it can have any value), while it is 0 in the event213Done tree. As the second state forbids some values of v, the states are not equal.
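To make the comparison concrete, here is a minimal sketch (explicit-state Python, not the actual MDD trees): if you read each symbolic state as the set of values of `v` it allows, the two states clearly differ even though the location pointers agree. The names are illustrative, not from the implementation.

```python
# Illustrative sketch: represent the variable part of a symbolic state
# as the set of values of v it permits, for v in domain [0..2].
DOMAIN = {0, 1, 2}

# After just 'a': v is untouched and unconstrained, so any domain value is allowed.
event1_done = frozenset(DOMAIN)

# After 'b', 'a', 'c': v went 0 -> 2 -> 0, so only v = 0 remains.
event213_done = frozenset({0})

# Same location pointers, but different variable constraints,
# hence the states compare as unequal.
assert event1_done != event213_done
assert event213_done.issubset(event1_done)
```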
After some thinking, I believe the program is correct here. What do you think?
It has been a while, I must say. I'm not sure what is and what is not taken into account.
Basically, I think what you are doing is correct.
The difficult thing here is the implicit guard on edge b: b may not be executed when v > 0, since v + 2 would then fall outside the domain int[0..2]. If you take that into account, it should also be taken into account when determining when a and b are enabled together. Because then a and b can only be enabled together when A.p and B.p and v = 0. If you start from there and check event1Done, you should end up in a situation where v = 0.
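The implicit guard can be checked by enumeration; a small sketch (my own illustration, not the checker's MDD computation), assuming the explicit guard A.p holds:

```python
# Sketch of the implicit guard on edge b: 'do v := v + 2' is only allowed
# when the result stays inside v's domain int[0..2].
DOMAIN = range(0, 3)  # int[0..2]

def b_enabled(v):
    # Explicit guard 'A.p' assumed to hold; check only the range constraint.
    return (v + 2) in DOMAIN

enabled_values = [v for v in DOMAIN if b_enabled(v)]
print(enabled_values)  # [0] -- only v = 0 allows b to fire
```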
I don't quite understand what you mean. The reversible example in your paper is
There is no condition on v (x in the example). There is a condition for enabledness of the b edge, namely that A must be in its initial location (at least, that is what I consider LP = 3 in the example to express). I implemented that as when A.p.
The starting condition in the program is just the LP values of the global event-enable guards (commonEnabled is the state predicate in which both event1 and event2 are enabled). That includes the A.p condition.
In general, I don't see why you may require v to be 0. We're grabbing event pairs from anywhere in the specification; the paper doesn't say to compute the feasible variable values from the initial state when arriving in (A.p, B.p). Just because we happen to be in the initial state here doesn't justify the sudden need for additional requirements, imho.
I think what you are saying makes sense. But I find it difficult to decide whether this is what we want.
If I look at the example: simply put, if I execute a, the value of v does not change. If I execute bac, the value of v does not change either, as -2 cancels +2. Conclusion 1: a and bac do not change the value of v.
Your algorithm also draws conclusion 2: v = 0, which implies that v must already have been 0 before. Conclusion 2 does not change conclusion 1, I think. Why is it relevant here?
It concludes v = 0 because that's the only way to add 2 to v and stay within its domain. If you enlarge the domain, you'll get more valid values.
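A quick sketch of that point (the function name is hypothetical): the set of v-values for which b is executable grows with the domain's upper bound.

```python
def values_enabling_b(upper):
    # Values of v in [0..upper] for which v + 2 still fits the domain.
    return [v for v in range(upper + 1) if v + 2 <= upper]

print(values_enabling_b(2))  # [0]        -- domain int[0..2]
print(values_enabling_b(4))  # [0, 1, 2]  -- enlarged domain int[0..4]
```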
I don't see how to decide in the program that the example is confluent, other than by a full implementation of algebraic reasoning over variable values. For extra fun, also implement the PLC behavior for under- and overflow.
At that point it's likely simpler to just compute ranges of values in all states.
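For this tiny example, the "compute ranges of values in all states" alternative can be sketched as an explicit-state fixed point (my own illustration, assuming the initial state A.p, B.p, v = 0; the real tool would do this symbolically):

```python
# Enumerate reachable (A-location, B-location, v) triples of the example,
# then read off the feasible v-values per location pair.
DOMAIN = range(0, 3)  # int[0..2]

def step(state):
    a, b, v = state
    succ = []
    if a == 'p':                                   # edge a: A.p -> A.q
        succ.append(('q', b, v))
    if b == 'p' and a == 'p' and v + 2 in DOMAIN:  # edge b: when A.p do v := v + 2
        succ.append((a, 'q', v + 2))
    if b == 'q' and a == 'q' and v - 2 in DOMAIN:  # edge c: when A.q do v := v - 2
        succ.append((a, 'p', v - 2))
    return succ

reachable = {('p', 'p', 0)}
frontier = list(reachable)
while frontier:
    s = frontier.pop()
    for t in step(s):
        if t not in reachable:
            reachable.add(t)
            frontier.append(t)

ranges = {}
for a, b, v in sorted(reachable):
    ranges.setdefault((a, b), set()).add(v)
print(ranges)  # v = 0 in the B.p locations, v = 2 in the B.q locations
```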
So remove this check? What other options do we have?
We could check for identity of all data values between start and end through both paths, but that also fails, since values v > 0 don't survive in the result through bac, while they do survive through just a.
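That failure mode can be shown directly; a minimal sketch (hypothetical helper names), comparing which starting values of v survive each path:

```python
DOMAIN = range(0, 3)  # int[0..2]

def path_a(v):
    # 'a' leaves v untouched and has no range constraint on v.
    return v

def path_bac(v):
    # 'b' needs v + 2 in the domain, then 'a', then 'c' subtracts 2 again.
    if (v + 2) not in DOMAIN:
        return None  # path not executable for this starting value
    return (v + 2) - 2

survive_a = [v for v in DOMAIN if path_a(v) is not None]
survive_bac = [v for v in DOMAIN if path_bac(v) is not None]
print(survive_a)    # [0, 1, 2] -- every value survives through a
print(survive_bac)  # [0]       -- v > 0 does not survive through bac
```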
I am not even sure any more that both alternative paths should actually be considered 'the same'. For infinite math, sure (though solving that in general doesn't look feasible to me at all), but any practical implementation does have a limit somewhere.