OASIS eXtensible Access Control Markup Language (XACML) TC


wd-19 indeterminate policy target handling

  • 1.  wd-19 indeterminate policy target handling

    Posted 05-06-2011 16:52
    I withdraw my objection to the Section 7 changes made by Erik in the 3.0 core spec wd-19. I'm still concerned that the policy evaluation specification (in section 7) may cause unexpected variations in the results from two seemingly "equivalent" policies, but I need to produce some theoretical or empirical evidence to demonstrate this (or to relieve my concern). In any case, the wd-19 changes probably do not make this any better or worse. Regards, --Paul


  • 2.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-07-2011 03:07
    Hi Paul and TC,

    I think the toothpaste is out of the tube on this one: too much has been invested in the analysis for one member to unilaterally shut down the issue by "withdrawal". In any event, that's my opinion; regardless, based on yesterday's meeting, I believe there is more to be said on this issue, and hopefully we can channel it to a clean resolution. That being said, following is additional analysis I have done and some conclusions that I believe we can reach agreement on, described in terms that everyone can follow (for "clarity" I will just add an "s" for the plural of "Policy"). There are two arguments I would like to make.

    Argument 1:

    First, there are three "types" of Policys:
    - Policys{P}, where all Rules have Effect="Permit", and therefore these Policys can never return a "Deny".
    - Policys{D}, where all Rules have Effect="Deny", and therefore these Policys can never return a "Permit".
    - Policys{DP}, which contain a mix of Rules, some "Permit" and some "Deny"; there is no a priori way to look at such a Policy and know whether it can return a Permit or a Deny.

    Each of the three types therefore has an inherent property, which can be determined simply by inspection of the Policy, without regard to evaluation of any Attributes. In fact, two of the three types retain their "property" regardless of attribute evaluation: Policy{P} is always Policy{P}, and can never become Policy{D} or Policy{DP}; the same can be said for Policy{D}. I would therefore refer to these as "static properties".

    The third type, Policy{DP}, has a run-time characteristic: if the current values of the Attributes happen to exclude all the Deny Rules or all the Permit Rules, then for a single evaluation the Policy{DP} can effectively behave as a Policy{P} or a Policy{D}. On subsequent evaluations the Policy{DP} can again, by happenstance, behave as any one of the three types. I would therefore consider this a "runtime property" if we allow its definition to be subject to Attribute evaluation.

    The problem we are discussing therefore reduces to the evaluation of Policy{DP} elements alone. We can then ask whether we want our combining algorithms to be subject to runtime Attribute values that, on any given evaluation, can cause a Policy{DP} to behave as a Policy{D} or a Policy{P}, rendering the property of the Policy indeterminate until runtime values are plugged in. I would suggest that it is this indeterminacy that makes Policys not comparable for "equivalence", because the Policys themselves have a built-in uncertainty depending on how one regards this property. For the purpose of equivalence, this runtime characteristic could be considered a "performance optimization", a property of the Policy Engine, whereas the inherent D and P properties can be considered a characteristic of the Policy language, independent of runtime, which could be included in an equivalence algorithm.

    Argument 2:

    There is one additional argument I would like to add for consideration. In XACML 2.0, there is a statement in section 7.10 for Policy Evaluation which says: 'If the target value is "No-match" or "Indeterminate" then the policy value SHALL be "NotApplicable" or "Indeterminate", respectively, regardless of the value of the rules. For these cases, therefore, the rules need not be evaluated.'

    By comparison, in XACML 3.0 WD-19, the corresponding statement in section 7.11 has been modified to say: 'If the target value is "No-match" then the policy value SHALL be "NotApplicable", regardless of the value of the rules. For this case, therefore, the rules need not be evaluated.' The "Indeterminate" part of the statement has been modified to say: 'If the target value is "Indeterminate", then the policy value SHALL be determined as specified in Table 7, in section 7.13.'

    The meaning of the spec has therefore changed: in order to select an entry in Table 7, the rules now do have to be evaluated, which is not obvious unless one does a very careful and complete reading of the proposed changes.

    Additional consideration:

    One other side effect of concern: if we allow the Policy property (P, D, or DP) to be subject to runtime determination, then when an Indeterminate is obtained at the top of the tree, it would be necessary to evaluate the complete subtree in order to determine what this property is. By comparison, the static property can be determined at any time by processing the tree once and recording the property for all subsequent evaluations.

    My conclusions:

    Bottom line: my recommendation is that we define the D/P/DP property as a static characteristic of the Policy definition, which would presumably allow it to be used in "equivalence" determinations. I would also recommend that runtime optimization be a configurable option, with the understanding that if this option is activated, any presumption of equivalence should be disregarded as far as runtime behavior is concerned.

    Comments and suggestions welcome.

        Thanks,
        Rich

    ---------------------------------------------------------------------
    To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
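    The static P/D/DP classification in Argument 1 can be determined by inspection alone, without evaluating any Attributes. A minimal illustrative sketch of such an inspection (not part of the spec; the helper function and the simplified, namespace-free element handling are assumptions for illustration only):

    ```python
    # Hypothetical sketch: statically classifying a Policy as {P}, {D}, or {DP}
    # by inspecting the Effect of its Rules only -- no Attribute evaluation.
    import xml.etree.ElementTree as ET

    def classify_policy(policy_xml: str) -> str:
        """Return 'P', 'D', or 'DP' from the Effect attributes of the Rules."""
        root = ET.fromstring(policy_xml)
        # Collect the distinct Effect values of all Rule elements.
        effects = {el.attrib["Effect"] for el in root.iter() if el.tag.endswith("Rule")}
        if effects == {"Permit"}:
            return "P"    # static property: can never return Deny
        if effects == {"Deny"}:
            return "D"    # static property: can never return Permit
        return "DP"       # mixed: outcome not knowable a priori

    policy = ('<Policy><Rule RuleId="r1" Effect="Permit"/>'
              '<Rule RuleId="r2" Effect="Deny"/></Policy>')
    print(classify_policy(policy))  # DP
    ```

    Because the classification depends only on the Policy text, it can be computed once when the tree is loaded and cached, which is exactly the "process the tree once and record the property" approach suggested above.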


  • 3.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-09-2011 04:28
    Hi again, Paul, Erik, Hal, and TC:

    I have spent some additional time looking at this problem, and I am now leaning toward leaving the spec as is, at least as far as I have analyzed it. For anyone interested, my reassessment is based on the following.

    The intention has always been to maintain consistency with XACML 2.0 while also enabling the "D" and "P" types of Indeterminate to propagate up the PolicySet hierarchy, in addition to the "DP" type, which was all that was propagated up in 2.0. Even though D and P were determined and used on the first hop up in 2.0, they were unnecessarily cut off at that point and information was lost. I inadvertently lost sight of this big picture when looking at the details from the top down. However, to work from the top down one has to allow the existing algorithms at the bottom level to remain the same, and assuming that the Rules do not need to be evaluated directly contradicts the existing XACML 2.0 algorithms, which first evaluate the Rule and then look at its Effect if the result was Indeterminate.

    Bottom line: I withdraw this sidebar issue about not needing to evaluate the Rules when the Policy or PolicySet Target produces an Indeterminate. The 2.0 spec was able to say that because it did not propagate the D and P properties up; to do the complete job of propagating all the D and P properties, we do need to evaluate the Rules, and I believe the changes in the spec to this effect are correct.

        Thanks,
        Rich
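    The behavioral point conceded here — that under WD-19 the rules must still be combined even when a Policy's Target is Indeterminate, so that the D/P/DP qualifier can propagate — can be sketched as follows. This is an illustrative reading of the Table 7 approach described in this thread, not normative spec text; the function and parameter names are invented for the sketch:

    ```python
    # Illustrative sketch: deriving a Policy's value from its Target result
    # and the combined decision of its Rules, per the WD-19 direction.
    def policy_decision(target, combine_rules):
        """target: 'Match', 'No-match', or 'Indeterminate'.
        combine_rules: zero-argument callable returning the combined rule
        decision (i.e. the output of the policy's combining algorithm)."""
        if target == "No-match":
            return "NotApplicable"      # rules need not be evaluated
        combined = combine_rules()      # rules ARE evaluated, even when the
        if target == "Match":           # target is Indeterminate
            return combined
        # Target is Indeterminate: qualify the result, Table 7 style.
        mapping = {
            "Permit": "Indeterminate{P}",
            "Deny": "Indeterminate{D}",
            "Indeterminate{P}": "Indeterminate{P}",
            "Indeterminate{D}": "Indeterminate{D}",
            "Indeterminate{DP}": "Indeterminate{DP}",
            "NotApplicable": "NotApplicable",
        }
        return mapping[combined]

    print(policy_decision("Indeterminate", lambda: "Deny"))  # Indeterminate{D}
    ```

    The sketch makes the 2.0-vs-3.0 contrast concrete: only the "No-match" branch can still skip rule evaluation, because the Indeterminate branch needs the combined decision to choose among Indeterminate{P}, {D}, and {DP}.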


  • 4.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-11-2011 09:38
    Hi All,

    Rich, when you say "leave it as it is", I assume you mean the new working draft, which evaluates the children of policy sets. If so, I think everybody is in agreement. I will still post an updated draft which moves the definitions of the text from the appendix to section 9, so everything is in one place.

    Best regards,
    Erik


  • 5.  RE: [xacml] wd-19 indeterminate policy target handling

    Posted 05-11-2011 12:54
    Erik, also in case you haven't already, update the cross-references to account for the renumbered sections 7.14 and onward.

    Regards,
    --Paul

    From: Erik Rissanen [mailto:erik@axiomatics.com]
    Sent: Wednesday, May 11, 2011 04:38
    To: xacml@lists.oasis-open.org
    Subject: Re: [xacml] wd-19 indeterminate policy target handling


  • 6.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-11-2011 14:44
    Hi Erik, Yes, I am referring to WD-19, when I am saying leave it as it is . However (sorry), that being said, I do think there is some additional clean-up required on this overall issue. There is also an additional unrelated typo we found in section A.13. I will just list the following clarifications to WD-19 and Erik's current email: from the appendix to section 9 ??   I think you mean section 7.13 (or earlier (see below)). Since we are introducing Ind{P,D,DP} in section 7, I think it needs to also be included in Table 4 for Rule evaluation, and possibly other places. I think we also need to consider having more explanation in the old C.1 section about the extended Inds , which describes the underlying cause: which imo is that when you have an -overrides type comb-alg, the relative weight of the return values suddenly has a precedence that would otherwise not be there, namely: for example for a deny-overrides Policy:    Ind{P} < P < Ind{D} < D which means if a Rule that evaluates to D is encountered, the processing for the policy can end, since that is the final answer, no matter what follows. However, until a D is encountered processing must continue. When all Rules are processed, the answer is the greatest value in the precedence chain above. Also, considering the above bullet, I think the current algorithms should be modified back to look more like the original 2.0 algs. For example compare the denyOverridesCombiningAlgorithm of section C.2 (new) w C.10 (legacy): In C.2 the parameter to the algorithm is Decision[] decisions , where as in C.10 the parameter to the algorithm is Rule[] rules . Also, in section C.10, within the loop, the first thing is:   decision = evaluate(rules[i]);   if (decision == Deny) return Deny; I think it is important to retain this logic so it can be shown where the breakout occurs, which cuts off unnecessary evaluation of subsequent rules. 
Also, we can retain the criteria for choosing Ind{D} vs Ind{P} where the "if (effect(rules[i]) ..." is evaluated.

    Finally, rather than passing in (Rule[] rules) or (Policy[] policies), we might want to consider using a neutral term, such as (Node[] nodes) or (Child[] children), where Node or Child could refer to either a Rule or a Policy.

    And, finally, the typo in section A.3.14, under rfc822Name-match: in cs-01 line 4992 (next to last para), the phrase "matches a value in the first argument" should say "matches a value in the second argument". I think this is just a typo, especially when compared with the next para.

        Thanks,
        Rich

    On 5/11/2011 5:37 AM, Erik Rissanen wrote:

    Hi All, Rich, when you say "leave it as it is", I assume you mean the new working draft which evaluates the children of policy sets. If so, I think everybody is in agreement. I will still post an updated draft which moves the definitions of the text from the appendix to section 9, so everything is in one place. Best regards, Erik

    On 2011-05-09 06:27, rich levinson wrote:

    Hi again, Paul, Erik, Hal, and TC: I have spent some additional time looking at this problem and I am now leaning toward leaving the spec as is, at least as far as I have analyzed it. For anyone interested, my reassessment is based on the following: The intention has always been to maintain consistency with XACML 2.0, while at the same time enabling the D and P types of Indeterminates to propagate up the PolicySet hierarchy in addition to the DP, which was all that was propagated up in 2.0. Despite the fact that D and P were determined and used on the first hop up, they were unnecessarily cut off at that point and information was lost. It appears that I inadvertently lost sight of this big picture when looking at the details from the top down.
However, in order to go from the top down one has to allow the existing algorithms on the bottom level to remain the same, and assuming that the Rules do not need to be evaluated is in direct contradiction with the existing XACML 2.0 algorithms, which first evaluate the Rule and then look directly at the effect later if there was an indeterminate. Bottom line: I withdraw this sidebar issue about not needing to evaluate the Rules when the Policy or PolicySet Target produces an Indeterminate. In 2.0 the spec was able to say that because it did not propagate the D and P properties up; however, to do the complete job of propagating all the D and P properties, we do need to evaluate the Rules, and the changes in the spec to this effect I believe are correct.

        Thanks,
        Rich

    On 5/6/2011 11:06 PM, rich levinson wrote:

    Hi Paul and TC, I think the toothpaste is out of the tube on this one: i.e. I think too much has been invested in the analysis for one member to unilaterally shut down the issue by "withdrawal". In any event, that's my opinion, but, regardless, based on yesterday's mtg, I believe there is more to be said on this issue, and hopefully we can channel it to a clean resolution. That being said, following is additional analysis I have done and some conclusions that I believe we can reach agreement on, and that I think I can describe in terms that everyone can follow (for clarity I will just add an "s" for the plural of "Policy"). There are 2 arguments I would like to make.

    Argument 1: First, there are 3 types of Policys:

    Policys{P}, where all Rules have Effect="Permit", and therefore these Policys can never return a Deny.
Policys{D}, where all Rules have Effect="Deny", and therefore these Policys can never return a Permit.

    Policys{DP}, where there are a mix of Rules, some of which are Permit and some of which are Deny, and therefore there is no a priori way to look at such a Policy and know whether or not it can return either a Permit or a Deny.

    Therefore, the 3 types of Policys each have an inherent property, which can be determined simply by inspection of the Policy without regard to evaluation of any Attributes. In fact, 2 out of 3 of the types retain their property regardless of evaluation of the attributes: i.e. Policy{P} is always Policy{P}; it can never change its property and become either Policy{D} or Policy{DP}, and the same can be said for Policy{D}. I would therefore refer to these as "static" properties. The third type, Policy{DP}, has a run-time characteristic, where if current values of the Attributes happen to exclude all the Rules of either D or P, then the current run-time property of the Policy{DP} for a single evaluation can effectively become either Policy{P} or Policy{D}. On subsequent evaluations the Policy{DP} can again by happenstance become any one of the 3 types. I would therefore consider this a "runtime" property if we allow its definition to be subject to Attribute evaluation. Therefore, I think we can say that the problem we are discussing reduces to only the evaluation of Policy{DP} elements. We can then ask whether we want our combining algorithms to be subject to runtime values of Attributes that on any given evaluation can cause a Policy{DP} to become a Policy{D} or a Policy{P}, thus rendering the property of the Policy indeterminate until runtime values are plugged in. I would also suggest that it is this indeterminacy which would cause Policys not to be comparable for "equivalence", because the Policys themselves have a built-in uncertainty depending on how one regards this property.
I would also suggest that for the purpose of equivalence this runtime characteristic could be considered a performance optimization , which could be a property of the Policy Engine, whereas the inherent D and P properties can be considered a Policy language characteristic independent of runtime, which could be included in an equivalence algorithm. Argument 2: There is one additional argument I would like to add for consideration. In XACML 2.0, there is a statement in section 7.10 for Policy Evaluation, which says: 'If the target value is No-match or “Indeterminate” then the policy value SHALL be  “NotApplicable” or “Indeterminate”, respectively, regardless of the value of the rules.  For these cases, therefore, the rules need not be evaluated.' By comparison, in XACML 3.0, WD 19, the corresponding statement in section 7.11 has been modified to say: 'If the target value is No-match then the policy value SHALL be   NotApplicable , regardless of the value of the rules.  For this case, therefore, the rules need not be evaluated.' The Indeterminate part of this statement has been modified to say: 'If the target value is Indeterminate , then the policy value SHALL be  determined as specified in Table 7, in section 7.13.' Therefore, the meaning of the spec has been changed, because in order to select an entry in Table 7, now the rules do have to be evaluated, which is not obvious unless one does a very careful and complete reading of the changes that are being proposed. Additional Consideration: One other side effect that I think is of concern, is that if we allow the Policy property (P, D, or DP) to be subject to runtime determination then when an Indeterminate is obtained at the top of the tree, then it would be necessary to evaluate the complete subtree in order to determine what this property is. By comparison, the static property can be determined at any time by processing the tree once and recording the property for all subsequent evaluations. 
My Conclusions: Bottom line: my recommendation is that we define the D,P,DP property in such a way that it is a static characteristic of the Policy definition, which presumably allows it to be used in equivalence determinations. I would also recommend that runtime optimization be a configurable option, and it will be clear that if this option is activated, any presumption of equivalence should be disregarded as far as runtime behavior is concerned. Comments, suggestions welcome.

        Thanks,
        Rich

    On 5/6/2011 12:51 PM, Tyson, Paul H wrote:

    I withdraw my objection to the Section 7 changes made by Erik in the 3.0 core spec wd-19. I'm still concerned that the policy evaluation specification (in section 7) may cause unexpected variations in the results from two seemingly "equivalent" policies, but I need to produce some theoretical or empirical evidence to demonstrate this (or to relieve my concern). In any case, the wd-19 changes probably do not make this any better or worse. Regards, --Paul

    ---------------------------------------------------------------------
    To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
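[Editor's note] The precedence chain discussed in this thread for a deny-overrides policy, N/A < Ind{P} < P < Ind{D} < D, can be sketched as a ranking that the combiner takes the maximum over. This is an illustrative sketch only, not text from the spec, and it omits the combined Indeterminate{DP} case for brevity:

```python
# Illustrative sketch (not spec text): the deny-overrides precedence chain
# N/A < Ind{P} < P < Ind{D} < D, rendered as a ranking.  Ind{DP} is
# omitted for brevity.
PRECEDENCE = {
    "NotApplicable": 0,
    "Indeterminate{P}": 1,
    "Permit": 2,
    "Indeterminate{D}": 3,
    "Deny": 4,
}

def deny_overrides(decisions):
    """Combine child decisions: a Deny is final and ends processing
    immediately; otherwise the greatest value in the chain wins."""
    result = "NotApplicable"
    for d in decisions:
        if d == "Deny":
            return d  # the final answer, no matter what follows
        if PRECEDENCE[d] > PRECEDENCE[result]:
            result = d
    return result
```

Note how the early return on Deny is exactly the "breakout" being argued for above: once the biased decision appears, the remaining children cannot change the answer.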


  • 7.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-16-2011 15:43
    Hi Rich, I am about to post a new draft, but I noticed I need to get some clarifications on your comments before I do that. See inline. On 2011-05-11 16:43, rich levinson wrote:

    > Hi Erik, Yes, I am referring to WD-19 when I say "leave it as it is". However (sorry), that being said, I do think there is some additional clean-up required on this overall issue. There is also an additional unrelated typo we found in section A.3.14. I will just list the following clarifications to WD-19 and Erik's current email: "from the appendix to section 9"?? I think you mean section 7.13 (or earlier (see below)).

    Yes.

    > Since we are introducing Ind{P,D,DP} in section 7, I think it needs to also be included in Table 4 for Rule evaluation, and possibly other places.

    Yes, to make it clearer. Though the text in appendix C (to be moved to Section 7) does contain the base case for the rule already, it's better to put it in the table to avoid confusion.

    > I think we also need to consider having more explanation in the old C.1 section about the extended Indeterminates, which describes the underlying cause: which, in my opinion, is that when you have an "-overrides" type combining algorithm, the relative weight of the return values suddenly has a precedence that would otherwise not be there. For example, for a deny-overrides Policy:
    >
    >     Ind{P} < P < Ind{D} < D
    >
    > which means if a Rule that evaluates to D is encountered, the processing for the policy can end, since that is the final answer, no matter what follows. However, until a D is encountered, processing must continue. When all Rules are processed, the answer is the greatest value in the precedence chain above.

    I am not sure about this. I don't want to fill up the normative sections with long examples. We could say something short like: "The extended Indeterminate values allow combining algorithms to treat Indeterminate smarter, so that in some cases an Indeterminate can be ignored when the extended Indeterminate value shows that the section of the policy which produced the Indeterminate could not have influenced the final decision even if there had not been an Indeterminate." And clearly mark this non-normative.

    > Also, considering the above bullet, I think the current algorithms should be modified back to look more like the original 2.0 algorithms. For example, compare the denyOverridesCombiningAlgorithm of section C.2 (new) with C.10 (legacy): in C.2 the parameter to the algorithm is "Decision[] decisions", whereas in C.10 the parameter to the algorithm is "Rule[] rules". Also, in section C.10, within the loop, the first thing is:
    >
    >     decision = evaluate(rules[i]);
    >     if (decision == Deny) return Deny;
    >
    > I think it is important to retain this logic so it can be shown where the breakout occurs, which cuts off unnecessary evaluation of subsequent rules. Also, we can retain the criteria for choosing Ind{D} vs Ind{P} where the "if (effect(rules[i]) ..." is evaluated.

    Is this just to refactor the algorithms to look prettier, or are there actual errors in them? If there are no errors, could we just keep them as they are, so we don't risk breaking them?

    > Finally, rather than passing in (Rule[] rules) or (Policy[] policies), we might want to consider using a neutral term, such as (Node[] nodes) or (Child[] children), where Node or Child could refer to either a Rule or a Policy. And, finally, the typo in section A.3.14, under rfc822Name-match: in cs-01 line 4992 (next to last para), the phrase "matches a value in the first argument" should say "matches a value in the second argument". I think this is just a typo, especially when compared with the next para.

    Yes, this is a typo. I will fix it.
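[Editor's note] For readers following the A.3.14 discussion, here is a rough sketch of how rfc822Name-match behaves as described there: the first argument is the string pattern, and it is matched against a value in the second argument (the rfc822Name). This is an illustrative reading, not the normative spec text, and the exact edge-case behavior of leading-dot domain patterns should be checked against A.3.14 itself:

```python
def rfc822name_match(pattern, name):
    """Rough reading of A.3.14: the string pattern (first argument) is
    matched against the rfc822Name (second argument).  Local-parts are
    case-sensitive; domain-parts are case-insensitive."""
    local, _, domain = name.partition("@")
    if "@" in pattern:
        # full-name pattern: local-part and domain must both match
        p_local, _, p_domain = pattern.partition("@")
        return local == p_local and domain.lower() == p_domain.lower()
    if pattern.startswith("."):
        # leading dot: match a domain suffix
        return domain.lower().endswith(pattern.lower())
    # otherwise: match the whole domain
    return domain.lower() == pattern.lower()
```

Read this way, the corrected wording ("matches a value in the second argument") is clearly the right one: the second argument supplies the name being tested, not the pattern.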


  • 8.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-16-2011 18:46
    Hi Erik, Responses inline: On 5/16/2011 11:42 AM, Erik Rissanen wrote:

    > Hi Rich, I am about to post a new draft, but I noticed I need to get some clarifications on your comments before I do that. See inline. On 2011-05-11 16:43, rich levinson wrote:
    >
    >> Hi Erik, Yes, I am referring to WD-19 when I say "leave it as it is". However (sorry), that being said, I do think there is some additional clean-up required on this overall issue. There is also an additional unrelated typo we found in section A.3.14. I will just list the following clarifications to WD-19 and Erik's current email: "from the appendix to section 9"?? I think you mean section 7.13 (or earlier (see below)).
    >
    > Yes.

    <Rich> ok </>

    >> Since we are introducing Ind{P,D,DP} in section 7, I think it needs to also be included in Table 4 for Rule evaluation, and possibly other places.
    >
    > Yes, to make it clearer. Though the text in appendix C (to be moved to Section 7) does contain the base case for the rule already, it's better to put it in the table to avoid confusion.

    <Rich> That's what I was thinking as well, except possibly even more so, because one of the effects of this change is that, at least for this type of processing (analyzing the Indeterminate to determine D, P, or DP), the Rule, Policy, and PolicySet processing is now the same, which I believe makes the whole thing easier to understand, although it becomes more abstract by no longer differentiating between these element types. But I think the next bullet is the more important one to discuss, so let's leave this more philosophical aspect until later.
</>

    >> I think we also need to consider having more explanation in the old C.1 section about the extended Indeterminates, which describes the underlying cause: which, in my opinion, is that when you have an "-overrides" type combining algorithm, the relative weight of the return values suddenly has a precedence that would otherwise not be there. For example, for a deny-overrides Policy:
    >>
    >>     Ind{P} < P < Ind{D} < D
    >>
    >> which means if a Rule that evaluates to D is encountered, the processing for the policy can end, since that is the final answer, no matter what follows. However, until a D is encountered, processing must continue. When all Rules are processed, the answer is the greatest value in the precedence chain above.
    >
    > I am not sure about this. I don't want to fill up the normative sections with long examples. We could say something short like: "The extended Indeterminate values allow combining algorithms to treat Indeterminate smarter, so that in some cases an Indeterminate can be ignored when the extended Indeterminate value shows that the section of the policy which produced the Indeterminate could not have influenced the final decision even if there had not been an Indeterminate." And clearly mark this non-normative.

    <Rich> This is somewhat subjective in terms of how best to explain the situation, and I was considering writing a separate email totally focused on that, since with the resolution of the Policy/Target concept of Indeterminate, I think I can explain it all fairly concisely in a systematic way. To add to what's above, for biased combining algorithms:

        deny-biased or deny-overrides:     N/A < Ind{P} < P < Ind{D} < D
        permit-biased or permit-overrides: N/A < Ind{D} < D < Ind{P} < P
        non-biased (ex. first-applicable): ( N/A = Ind{*} ) < ( D = P )

    (i.e. (D = P) means whichever one comes first is good, and both of them override either N/A or Ind{*}.) While these statements might at first be confusing, I think that ultimately they dictate the processing algorithm and so from that perspective can provide a sanity check on both the p-code and the text. But, again, this falls into the explanatory category; the significant point, I think, is in the next bullet. </>

    >> Also, considering the above bullet, I think the current algorithms should be modified back to look more like the original 2.0 algorithms. For example, compare the denyOverridesCombiningAlgorithm of section C.2 (new) with C.10 (legacy): in C.2 the parameter to the algorithm is "Decision[] decisions", whereas in C.10 the parameter to the algorithm is "Rule[] rules". Also, in section C.10, within the loop, the first thing is:
    >>
    >>     decision = evaluate(rules[i]);
    >>     if (decision == Deny) return Deny;
    >>
    >> I think it is important to retain this logic so it can be shown where the breakout occurs, which cuts off unnecessary evaluation of subsequent rules. Also, we can retain the criteria for choosing Ind{D} vs Ind{P} where the "if (effect(rules[i]) ..." is evaluated.
    >
    > Is this just to refactor the algorithms to look prettier, or are there actual errors in them? If there are no errors, could we just keep them as they are, so we don't risk breaking them?

    This is not just to refactor the algorithms. When I looked over the new algorithms in detail, I noticed there was a substantive difference between them, i.e. comparing C.10 and C.2: in C.10 (legacy deny-overrides), line 6131, there is a loop that begins:

        for (i=0; i<lengthOf(rules); i++) {
            decision = evaluate(rules[i]);
            if (decision == Deny)
                return Deny;
            ...
        }

    With the new algorithms, this logic also applies to Policies as well as rules.
Furthermore, starting from the top of the PolicySet tree and working down, this logic is recursive, and the processing should be identical for PolicySet, Policy, and Rule (although the evaluate function will be different for each). A second important point here is that it is significant that the first "if" statement causes a return, as opposed to the subsequent "if" statements, which set various booleans but allow the loop to continue processing. The importance is that in a biased algorithm, if the decision to which the algorithm is biased is found, then processing of the loop can end right there, and there is no need to evaluate the rest of the rules, policies, or policysets within the current node: i.e. if I am in a deny-overrides node, and when I evaluate the next node it returns a Deny, I am done. Clearly, in the case of Policy and PolicySet, there is no ability to get either a Permit or Deny until the leaf Rules are processed. However, if I am in a deny-overrides PolicySet with, for example, 10 child Policy elements, and I evaluate the first Policy and it in turn evaluates its child Rules, then if its first, or any other, rule returns a Deny, the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements, since the decision will be a Deny regardless of what they return. Therefore, I think a better algorithm would look like this for C.2 (note the evaluate operation in the first line of the for loop, which is not currently in C.2):

        Decision denyOverridesCombiningAlgorithm(Node[] nodes) {
            ...
            for (i=0; i<lengthOf(nodes); i++) {
                Decision decision = evaluate(nodes[i]);
                if (decision == Deny) {
                    return Deny;
                }
                // I believe the rest of the logic remains the same as currently in C.2:
                ...
            } // end for loop
            // logic should also be same as C.2 after loop
        } // end algorithm

    Now, if we look again at C.10 and try to see how it relates to the new algorithms, I think it would go something like this, where Rule is a subclass of Node:

        Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) {
            Boolean atLeastOneError = false;
            Boolean atLeastOneErrorD = false;
            Boolean atLeastOneErrorP = false;
            Boolean atLeastOneErrorDP = false;
            Boolean atLeastOnePermit = false;
            for (i=0; i<lengthOf(nodes); i++) {
                Decision decision = evaluate(nodes[i]);
                if (decision == Deny) {
                    return Deny;
                }
                // the next 2 ifs are the same as C.10:
                if (decision == Permit) {
                    atLeastOnePermit = true;
                    continue; // i.e. skip the rest of the logic in the current
                              // iteration of the loop and start the next iteration
                }
                if (decision == NotApplicable) {
                    continue;
                }
                if (decision == Indeterminate) { // this can only be returned for rules
                    if (effect((Rule)nodes[i]) == Deny) { // cast to Rule to get effect
                        atLeastOneErrorD = true;
                    }
                    else {
                        atLeastOneErrorP = true;
                    }
                    continue;
                }
                // the following is same as C.2 and will evaluate the 3 types
                // of Indeterminate, which can only be returned for Policy and PolicySet
                ... same as lines 5762->5776 (not repeated here)
            } // end for loop
            if (atLeastOneErrorD == true &&
                  (atLeastOneErrorP == true || atLeastOnePermit == true)) {
                atLeastOneErrorDP = true;
            }
            if (atLeastOneErrorDP == true) {
                return Indeterminate(DP);
            }
            if (atLeastOneErrorD == true) {
                return Indeterminate(D);
            }
            if (atLeastOnePermit == true) {
                return Permit;
            }
            if (atLeastOneErrorP == true) {
                return Indeterminate(P);
            }
            return NotApplicable;
        } // end algorithm

    The representation above clearly shows:

        N/A < Ind{P} < P < Ind{D} < D

    by simply following the return statements up the algorithm. I think the above algorithm also shows the origin of D,P,DP coming from the effect of the rules, and then being percolated through Policy and PolicySet. So, basically, although the above algorithm may look a little more complicated than the current C.2, I think it retains 2 things from C.10 that the current C.2 drops: it retains the break of the loop when the biased Decision is returned, and it retains the logic that creates the breakout of Indeterminate to Ind(D,P,DP). Comments?

        Thanks,
        Rich

    </>

    >> Finally, rather than passing in (Rule[] rules) or (Policy[] policies), we might want to consider using a neutral term, such as (Node[] nodes) or (Child[] children), where Node or Child could refer to either a Rule or a Policy. And, finally, the typo in section A.3.14, under rfc822Name-match: in cs-01 line 4992 (next to last para), the phrase "matches a value in the first argument" should say "matches a value in the second argument". I think this is just a typo, especially when compared with the next para.
    >
    > Yes, this is a typo. I will fix it.
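[Editor's note] As a cross-check on the pseudocode in this message, here is a rough, runnable rendering of the rule-combining case in Python. The Rule container and its evaluate callback are hypothetical stand-ins for illustration, not part of the spec; the point is only to exercise the Deny breakout and the Ind{D}, Ind{P}, Ind{DP} bookkeeping:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    effect: str                  # "Deny" or "Permit" (hypothetical container)
    evaluate: Callable[[], str]  # returns Deny/Permit/NotApplicable/Indeterminate

def deny_overrides_rule_combining(rules: List[Rule]) -> str:
    at_least_one_error_d = False
    at_least_one_error_p = False
    at_least_one_permit = False
    for rule in rules:
        decision = rule.evaluate()
        if decision == "Deny":
            return "Deny"  # breakout: the final answer for deny-overrides
        if decision == "Permit":
            at_least_one_permit = True
            continue
        if decision == "NotApplicable":
            continue
        if decision == "Indeterminate":
            # the rule's Effect decides the extended Indeterminate flavour
            if rule.effect == "Deny":
                at_least_one_error_d = True
            else:
                at_least_one_error_p = True
    if at_least_one_error_d and (at_least_one_error_p or at_least_one_permit):
        return "Indeterminate{DP}"
    if at_least_one_error_d:
        return "Indeterminate{D}"
    if at_least_one_permit:
        return "Permit"
    if at_least_one_error_p:
        return "Indeterminate{P}"
    return "NotApplicable"
```

Following the return statements from the bottom up reproduces the chain N/A < Ind{P} < P < Ind{D} < D, with Ind{DP} produced when errors on both sides (or an error-on-Deny plus a Permit) are seen.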
4DCAA0A5.3010808@oracle.com type= cite >     Thanks,     Rich On 5/11/2011 5:37 AM, Erik Rissanen wrote: 4DCA58DB.4080406@axiomatics.com type= cite > Hi All, Rich, when you say leave it as it is , I assume you mean the new working draft which evaluates the children of policy sets. If so, I think everybody is in agreement. I will still post an updated draft which moves the definitions of the text from the appendix to section 9, so everything is in one place. Best regards, Erik On 2011-05-09 06:27, rich levinson wrote: 4DC76D48.7030208@oracle.com type= cite > Hi again, Paul, Erik, Hal, and TC: I have spent some additional time looking at this problem and I am now leaning toward leaving the spec as is, at least as far as I have analyzed it. For anyone interested, my reassessment is based on the following: The intention has always been to maintain consistency with XACML 2.0, while at the same time enabling the D and P types of Indeterminates to propagate up the PolicySet hierarchy in addition to the DP which was all that was propagated up in 2.0, despite the fact that D and P were determined and used on the first hop up, they were unnecessarily cut off at that point and information was lost. It appears that I inadvertently lost sight of this big picture when looking at the details from the top down. However, in order to go from the top down one has to allow the existing algorithms on the bottom level to remain the same, and obviously by assuming that the Rules do not need to be evaluated is a direct contradiction with the existing XACML 2.0 algorithms which first evaluate the Rule, then look directly at the effect later if there was an indeterminate. Bottom line: I withdraw this sidebar issue, about not needing to evaluate the Rules when the Policy or PolicySet Target produces an Indeterminate. 
In 2.0 the spec was able to say that because it did not propagate the D and P properties up; however, to do the complete job of propagating all the D and P properties, we do need to evaluate the Rules, and the changes in the spec to this effect I believe are correct.     Thanks,     Rich

On 5/6/2011 11:06 PM, rich levinson wrote: Hi Paul and TC, I think the toothpaste is out of the tube on this one: i.e. I think too much has been invested in the analysis for one member to unilaterally shut down the issue by "withdrawal". In any event, that's my opinion, but, regardless, based on yesterday's mtg, I believe there is more to be said on this issue, and hopefully we can channel it to a clean resolution. That being said, following is additional analysis I have done and some conclusions that I believe we can reach agreement on, and that I think I can describe in terms that everyone can follow (for clarity I will just add an "s" for the plural of "Policy"). There are 2 arguments I would like to make. Argument 1: First, there are 3 types of Policys: Policys{P}, where all Rules have Effect="Permit", and therefore these Policys can never return a Deny. Policys{D}, where all Rules have Effect="Deny", and therefore these Policys can never return a Permit. Policys{DP}, where there is a mix of Rules, some of which are Permit and some of which are Deny, and therefore there is no a priori way to look at such a Policy and know whether or not it can return either a Permit or a Deny. Therefore, the 3 types of Policys each have an inherent property, which can be determined simply by inspection of the Policy w/o regard to evaluation of any Attributes. In fact, 2 out of 3 of the types retain their property regardless of evaluation of the attributes: i.e. Policy{P} is always Policy{P}; it can never change its property and become either Policy{D} or Policy{DP}; i.e.
same can be said for Policy{D}. I would therefore refer to these as "static" properties. The third type, Policy{DP}, has a run-time characteristic: if current values of the Attributes happen to exclude all the Rules of either D or P, then the current run-time property of the Policy{DP} for a single evaluation can effectively become either Policy{P} or Policy{D}. On subsequent evaluations the Policy{DP} can again by happenstance become any one of the 3 types. I would therefore consider this a "runtime" property if we allow its definition to be subject to Attribute evaluation. Therefore, I think we can say that the problem we are discussing reduces to only the evaluation of Policy{DP} elements. We can then ask whether we want our combining algorithms to be subject to runtime values of Attributes that on any given evaluation can cause a Policy{DP} to become a Policy{D} or a Policy{P}, thus rendering the property of the Policy indeterminate until runtime values are plugged in. I would also suggest that it is this indeterminacy which would cause Policys not to be comparable for "equivalence", because the Policys themselves have a built-in uncertainty depending on how one regards this property. I would also suggest that for the purpose of equivalence this runtime characteristic could be considered a "performance optimization", which could be a property of the Policy Engine, whereas the inherent D and P properties can be considered a Policy language characteristic, independent of runtime, which could be included in an equivalence algorithm. Argument 2: There is one additional argument I would like to add for consideration. In XACML 2.0, there is a statement in section 7.10 for Policy Evaluation which says: 'If the target value is “No-match” or “Indeterminate” then the policy value SHALL be “NotApplicable” or “Indeterminate”, respectively, regardless of the value of the rules. For these cases, therefore, the rules need not be evaluated.'
By comparison, in XACML 3.0 WD-19, the corresponding statement in section 7.11 has been modified to say: 'If the target value is “No-match” then the policy value SHALL be “NotApplicable”, regardless of the value of the rules. For this case, therefore, the rules need not be evaluated.' The Indeterminate part of this statement has been modified to say: 'If the target value is “Indeterminate”, then the policy value SHALL be determined as specified in Table 7, in section 7.13.' Therefore, the meaning of the spec has been changed, because in order to select an entry in Table 7 the rules now do have to be evaluated, which is not obvious unless one does a very careful and complete reading of the changes that are being proposed. Additional Consideration: One other side effect that I think is of concern is that if we allow the Policy property (P, D, or DP) to be subject to runtime determination, then when an Indeterminate is obtained at the top of the tree it would be necessary to evaluate the complete subtree in order to determine what this property is. By comparison, the static property can be determined at any time by processing the tree once and recording the property for all subsequent evaluations. My Conclusions: Bottom line: my recommendation is that we define the D,P,DP property in such a way that it is a static characteristic of the Policy definition, which presumably allows it to be used in equivalence determinations. I would also recommend that runtime optimization be a configurable option, and it will be clear that if this option is activated, any presumption of equivalence should be disregarded as far as runtime behavior is concerned. Comments, suggestions welcome.     Thanks,     Rich

On 5/6/2011 12:51 PM, Tyson, Paul H wrote: I withdraw my objection to the Section 7 changes made by Erik in the 3.0 core spec wd-19.
I'm still concerned that the policy evaluation specification (in section 7) may cause unexpected variations in the results from two seemingly equivalent policies, but I need to produce some theoretical or empirical evidence to demonstrate this (or to relieve my concern). In any case, the wd-19 changes probably do not make this any better or worse. Regards, --Paul --------------------------------------------------------------------- To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
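Rich's two arguments above lend themselves to a small, non-normative sketch: the static D/P/DP property of Argument 1 can be computed by inspecting Rule Effects alone, and the Argument 2 point, that Table 7 demotes the combined rule result when the Target is Indeterminate, can be modeled as a simple mapping. The function names and string encodings here are illustrative, not from the spec:

```python
def static_property(rule_effects):
    """Argument 1: classify a Policy by inspection of its Rules' Effects
    alone, with no Attribute evaluation needed."""
    effects = set(rule_effects)
    if effects == {'Permit'}:
        return 'P'   # can never return Deny
    if effects == {'Deny'}:
        return 'D'   # can never return Permit
    return 'DP'      # mixed: outcome depends on runtime Attribute values

def policy_value_indeterminate_target(combined):
    """Argument 2 (the Table 7 idea): when the Policy Target is
    Indeterminate, the rules are still combined, and a definite combined
    decision is then demoted to the matching extended Indeterminate."""
    if combined == 'Permit':
        return 'Indeterminate{P}'
    if combined == 'Deny':
        return 'Indeterminate{D}'
    return combined  # NotApplicable and Indeterminate{...} pass through
```

The second function makes Rich's point concrete: to know whether the demoted value is Indeterminate{D} or Indeterminate{P}, the combined rule decision has to be computed first, so the rules do have to be evaluated.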


  • 9.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-16-2011 19:13
Hi Erik, One more point that may not be obvious at first look at C.2. Even though C.2 does have the statement:

    if (decision == Deny) {
        return Deny;
    }

the problem is that the evaluate would have to be done for all the nodes before the algorithm is even called, because the parameter is (Decision[] decisions), which to me at least implies that all the evaluation has already been done to determine those decisions. The change I am recommending is passing in (Node[] nodes), which have not been evaluated, and do not get evaluated until they are encountered in the loop.     Thanks,     Rich
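Rich's recommendation can be illustrated with a short, hypothetical Python sketch (the names are invented for illustration): the combiner receives unevaluated children plus an evaluate callback, so a Deny short-circuits evaluation of the remaining children.

```python
def deny_overrides_lazy(nodes, evaluate):
    """Sketch of passing unevaluated nodes: each child is evaluated only
    when the loop reaches it, and a Deny stops the loop immediately.
    (The Indeterminate bookkeeping is elided to keep the sketch short.)"""
    saw_permit = False
    for node in nodes:
        decision = evaluate(node)
        if decision == 'Deny':
            return 'Deny'   # remaining nodes are never evaluated
        if decision == 'Permit':
            saw_permit = True
    return 'Permit' if saw_permit else 'NotApplicable'

# Toy model: a "node" is just its eventual decision; count the evaluations.
calls = []
def evaluate(node):
    calls.append(node)
    return node

result = deny_overrides_lazy(['NotApplicable', 'Deny', 'Permit', 'Permit'], evaluate)
# Only the first two children are ever evaluated.
```

With (Decision[] decisions) as the parameter, by contrast, all four children would have had to be evaluated before the combiner was even called.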


  • 10.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-17-2011 07:35
Hi Rich, I need to parse your other email carefully, and I will respond to it. :-) But regarding this comment: the fact that the algorithm is defined as if all children are evaluated does not mean that an implementation has to work like that. This is explicitly stated in the spec, as is the rule that no XACML function may have side effects. So out-of-order evaluation, lazy evaluation, etc. are all permitted, as long as the result is the same. The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation. Best regards, Erik
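Erik's point, that lazy or out-of-order evaluation is permitted precisely because its results must agree with the spec's all-children formulation, can be checked exhaustively on a toy combiner. This is a hedged sketch with invented names; the real algorithms also track the extended Indeterminate values:

```python
import itertools

def combine_spec(decisions):
    """Spec-style deny-overrides: defined over the full list of child decisions."""
    if 'Deny' in decisions:
        return 'Deny'
    if 'Permit' in decisions:
        return 'Permit'
    return 'NotApplicable'

def combine_lazy(nodes, evaluate):
    """Implementation-style: stops evaluating children once a Deny is seen."""
    saw_permit = False
    for node in nodes:
        decision = evaluate(node)
        if decision == 'Deny':
            return 'Deny'
        if decision == 'Permit':
            saw_permit = True
    return 'Permit' if saw_permit else 'NotApplicable'

# Because evaluation has no side effects, both formulations agree on every input:
values = ['Deny', 'Permit', 'NotApplicable']
for combo in itertools.product(values, repeat=3):
    assert combine_spec(list(combo)) == combine_lazy(combo, lambda n: n)
```

The exhaustive check at the bottom is exactly Erik's criterion: the optimization is invisible from the outside, so the spec can keep the simplest formulation.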
I will just list the following clarifications to WD-19 and Erik's current email: from the appendix to section 9 ??   I think you mean section 7.13 (or earlier (see below)). Yes. <Rich> ok </> 4DD145FB.6000509@axiomatics.com type= cite > 4DCAA0A5.3010808@oracle.com type= cite > Since we are introducing Ind{P,D,DP} in section 7, I think it needs to also be included in Table 4 for Rule evaluation, and possibly other places. Yes, to make it clearer. Though, the text in appendix C (to be moved to Section 7) does contain the base case for the rule already, but it's better to put in the table to avoid confusion. <Rich> That's what I was thinking as well, except possibly even more so, because one of the effects of this change is that, at least for this type of processing (analyzing the Indeterminate to determine D, P, or DP), the Rule, Policy, and PolicySet processing is now the same, which I believe makes the whole thing easier to understand, although it becomes more abstract by no longer differentiating between these element types. But I think the next bullet is the more important one to discuss, so let's leave this more philosophical aspect until later. </> 4DD145FB.6000509@axiomatics.com type= cite > 4DCAA0A5.3010808@oracle.com type= cite > I think we also need to consider having more explanation in the old C.1 section about the extended Inds , which describes the underlying cause: which imo is that when you have an -overrides type comb-alg, the relative weight of the return values suddenly has a precedence that would otherwise not be there, namely: for example for a deny-overrides Policy:    Ind{P} < P < Ind{D} < D which means if a Rule that evaluates to D is encountered, the processing for the policy can end, since that is the final answer, no matter what follows. However, until a D is encountered processing must continue. When all Rules are processed, the answer is the greatest value in the precedence chain above. I am not sure about this. 
I don't want to fill up the normative sections with long examples. We could say something short like The extended Indeterminate values allow combining algorithms to treat Indeterminate smarter, so that in some cases an Indeterminate can be ignored when the extended Indeterminate value shows that the section of the policy which produced the Indeterminate could not have influenced the final decision even when there would not have been an Indeterminate. And clearly mark this non-normative. <Rich> This is somewhat subjective in terms of how to best explain the situation, and I was considering writing a separate email totally focused on that since w the resolution of the Policy/Target concept of Indeterminate, I think I can explain it all fairly concisely in a systematic way. To add to what's above: For biased combining algorithms: deny-biased or deny-overrides: N/A < Ind{P} < P < Ind{D} < D permit-biased or permit-overrides: N/A < Ind{D} < D < Ind{P} < P non-biased (ex. first-applicable): ( N/A = Ind {*} )  <  ( D = P ) (i.e. (D = P) means either one that comes first is good, and both of them override either N/A or Ind{*}) While these statements might at first be confusing, I think that ultimately they dictate the processing algorithm and so from that perspective can provide a sanity check on both the p-code and the text. But, again, this falls into the explanatory category, the significant point I think is in the next bullet. </> 4DD145FB.6000509@axiomatics.com type= cite > 4DCAA0A5.3010808@oracle.com type= cite > Also, considering the above bullet, I think the current algorithms should be modified back to look more like the original 2.0 algs. For example compare the denyOverridesCombiningAlgorithm of section C.2 (new) w C.10 (legacy): In C.2 the parameter to the algorithm is Decision[] decisions , where as in C.10 the parameter to the algorithm is Rule[] rules . 
Also, in section C.10, within the loop, the first thing is:   decision = evaluate(rules[i]);   if (decision == Deny) return Deny; I think it is important to retain this logic so it can be shown where the breakout occurs, which cuts off unnecessary evaluation of subsequent rules. Also, we can retain the criteria for choosing Ind{d} vs Ind{p} where the   if (effect(rules[i]) ... is evaluated. Is this just to refactor the algorithms to look prettier, or are there actual errors in them? If there are no errors, could we just keep them as they are, so we don't risk breaking them? This is not just to refactor the algorithms. When I looked over the new algorithms in detail, I noticed there was a quantitative difference between them. i.e. comparing C.10 and C.2: In C.10 (legacy deny-overrides), line 6131, there is a loop that begins: for (i=0; i<lenghtOf(rules); i++ ) {     decision = evaluate(rules[i])     if (decision == Deny)          return Deny;     ... } With the new algorithms, this logic also applies to Policies as well as rules. Furthermore, starting from the top of the PolicySet tree, and working down, this logic is recursive, and the processing should be identical for PolicySet, Policy, and Rule (although the evaluate function will be different for each). A second important point here is that it is significant that the first if stmt causes a return as opposed to the subsequent if stmts that set various booleans, but allow the loop to continue processing. The importance is that in a biased algorithm, if the decision to which the algorithm is biased is found, then processing of the loop can end right there, and there is no need to evaluate the rest of the rules, policies, or policysets, within the current node. i.e. if I am in a deny-overrides node, if, when I evaluate the next node, it returns a Deny, I am done. Clearly in the case of Policy and PolicySet, there is no ability to get either a Permit or Deny until the leaf Rules are processed. 
However, if I am in a deny-overrides PolicySet with, for example, 10 child Policy elements, and I evaluate the first Policy, which in turn evaluates its child Rules, then if its first (or any other) rule returns a Deny, the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements, since the decision will be a Deny regardless of what they return. Therefore, I think a better algorithm for C.2 would look like this (note the evaluate operation in the first line of the for loop, which is not currently in C.2):

    Decision denyOverridesCombiningAlgorithm(Node[] nodes) {
        ...
        for (i = 0; i < lengthOf(nodes); i++) {
            Decision decision = evaluate(nodes[i]);
            if (decision == Deny) {
                return Deny;
            }
            // I believe the rest of the logic remains the same as currently in C.2:
            ...
        } // end for loop
        // logic after the loop should also be the same as in C.2
    } // end algorithm

Now, if we look again at C.10 and try to see how it relates to the new algorithms, I think it would go something like this, where Rule is a subclass of Node:

    Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) {
        Boolean atLeastOneErrorD = false;
        Boolean atLeastOneErrorP = false;
        Boolean atLeastOneErrorDP = false;
        Boolean atLeastOnePermit = false;
        for (i = 0; i < lengthOf(nodes); i++) {
            Decision decision = evaluate(nodes[i]);
            if (decision == Deny) {
                return Deny;
            }
            // the next two "if"s are the same as C.10:
            if (decision == Permit) {
                atLeastOnePermit = true;
                continue; // i.e. skip the rest of the logic for the current
                          // iteration of the loop and start the next iteration
            }
            if (decision == NotApplicable) {
                continue;
            }
            if (decision == Indeterminate) { // this can only be returned for rules
                if (effect((Rule)nodes[i]) == Deny) { // cast to Rule to get effect
                    atLeastOneErrorD = true;
                } else {
                    atLeastOneErrorP = true;
                }
                continue;
            }
            // the following is the same as C.2 and will evaluate the 3 types
            // of Indeterminate, which can only be returned for Policy and PolicySet
            ... same as lines 5762->5776 (not repeated here)
        } // end for loop
        if (atLeastOneErrorD == true &&
            (atLeastOneErrorP == true || atLeastOnePermit == true)) {
            atLeastOneErrorDP = true;
        }
        if (atLeastOneErrorDP == true) {
            return Indeterminate(DP);
        }
        if (atLeastOneErrorD == true) {
            return Indeterminate(D);
        }
        if (atLeastOnePermit == true) {
            return Permit;
        }
        if (atLeastOneErrorP == true) {
            return Indeterminate(P);
        }
        return NotApplicable;
    } // end algorithm

The representation above clearly shows

    N/A < Ind{P} < P < Ind{D} < D

by simply following the return statements up the algorithm. I think the above algorithm also shows the origin of D, P, and DP coming from the effect of the rules, and then being percolated through Policy and PolicySet. So, basically, although the above algorithm may look a little more complicated than the current C.2, I think it retains 2 things from C.10 that the current C.2 drops:
- it retains the breakout of the loop when the biased Decision is returned
- it retains the logic that creates the breakout of Indeterminate into Ind(D,P,DP)
Comments?
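As a sanity check on the pseudocode above, here is a runnable transcription in Python. The data model is invented for illustration: each node is a (decision, effect) pair standing in for an already-evaluated child (in a real PDP the evaluate call happens inside the loop, which is what makes the early Deny return skip the remaining evaluations):

```python
# Sketch: each node is (decision, effect) — an already-evaluated child.
# "effect" is only consulted for unqualified Indeterminate, i.e. for Rules.
def deny_overrides(nodes):
    error_d = error_p = error_dp = permit = False
    for decision, effect in nodes:
        if decision == "Deny":
            return "Deny"                  # biased breakout: stop here
        if decision == "Permit":
            permit = True
        elif decision == "Indeterminate":  # only Rules return this
            if effect == "Deny":
                error_d = True
            else:
                error_p = True
        elif decision == "Indeterminate{D}":
            error_d = True
        elif decision == "Indeterminate{P}":
            error_p = True
        elif decision == "Indeterminate{DP}":
            error_dp = True
        # NotApplicable: nothing to record
    if error_d and (error_p or permit):
        error_dp = True
    if error_dp:
        return "Indeterminate{DP}"
    if error_d:
        return "Indeterminate{D}"
    if permit:
        return "Permit"
    if error_p:
        return "Indeterminate{P}"
    return "NotApplicable"
```

For instance, a rule with Effect=Deny that errors, combined with a Permit, yields Indeterminate{DP}, exactly the widening step after the loop.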
Thanks, Rich </> Finally, rather than passing in (Rule[] rules) or (Policy[] policies), we might want to consider using a neutral term, such as (Node[] nodes) or (Child[] children), where Node or Child could refer to either a Rule or a Policy. And, finally, the typo in section A.3.14, under rfc822Name-match: in cs-01 line 4992 (next to last paragraph), the phrase "matches a value in the first argument" should say "matches a value in the second argument". I think this is just a typo, especially when compared with the next paragraph. Yes, this is a typo. I will fix it. Thanks, Rich On 5/11/2011 5:37 AM, Erik Rissanen wrote: Hi All, Rich, when you say "leave it as it is", I assume you mean the new working draft, which evaluates the children of policy sets. If so, I think everybody is in agreement. I will still post an updated draft which moves the definitions of the text from the appendix to section 9, so everything is in one place. Best regards, Erik On 2011-05-09 06:27, rich levinson wrote: Hi again, Paul, Erik, Hal, and TC: I have spent some additional time looking at this problem and I am now leaning toward leaving the spec as is, at least as far as I have analyzed it. For anyone interested, my reassessment is based on the following: The intention has always been to maintain consistency with XACML 2.0, while at the same time enabling the D and P types of Indeterminates to propagate up the PolicySet hierarchy in addition to the DP, which was all that was propagated up in 2.0. Even though D and P were determined and used on the first hop up, they were unnecessarily cut off at that point and information was lost. It appears that I inadvertently lost sight of this big picture when looking at the details from the top down.
However, in order to go from the top down, one has to allow the existing algorithms on the bottom level to remain the same, and assuming that the Rules do not need to be evaluated directly contradicts the existing XACML 2.0 algorithms, which first evaluate the Rule and then look at its effect if there was an indeterminate. Bottom line: I withdraw this sidebar issue, about not needing to evaluate the Rules when the Policy or PolicySet Target produces an Indeterminate. In 2.0 the spec was able to say that because it did not propagate the D and P properties up; however, to do the complete job of propagating all the D and P properties, we do need to evaluate the Rules, and the changes in the spec to this effect I believe are correct.     Thanks,     Rich On 5/6/2011 11:06 PM, rich levinson wrote: Hi Paul and TC, I think the toothpaste is out of the tube on this one: i.e. I think too much has been invested in the analysis for one member to unilaterally shut down the issue by "withdrawal". In any event, that's my opinion, but, regardless, based on yesterday's meeting, I believe there is more to be said on this issue, and hopefully we can channel it to a clean resolution. That being said, following is additional analysis I have done and some conclusions that I believe we can reach agreement on, and that I think I can describe in terms that everyone can follow (for clarity I will just add an "s" for the plural of "Policy"). There are 2 arguments I would like to make. Argument 1: First, there are 3 types of Policys: Policys{P}, where all Rules have Effect="Permit", and therefore these Policys can never return a Deny.
Policys{D}, where all Rules have Effect="Deny", and therefore these Policys can never return a Permit. Policys{DP}, where there is a mix of Rules, some of which are Permit and some of which are Deny, and therefore there is no a priori way to look at such a Policy and know whether it can return either a Permit or a Deny. Therefore, the 3 types of Policys each have an inherent property, which can be determined simply by inspection of the Policy without regard to evaluation of any Attributes. In fact, 2 out of 3 of the types retain their property regardless of evaluation of the attributes: i.e. Policy{P} is always Policy{P}; it can never change its property and become either Policy{D} or Policy{DP}, and the same can be said for Policy{D}. I would therefore refer to these as static properties. The third type, Policy{DP}, has a run-time characteristic, where if current values of the Attributes happen to exclude all the Rules of either D or P, then the current run-time property of the Policy{DP} for a single evaluation can effectively become either Policy{P} or Policy{D}. On subsequent evaluations the Policy{DP} can again, by happenstance, become any one of the 3 types. I would therefore consider this a runtime property, if we allow its definition to be subject to Attribute evaluation. Therefore, I think we can say that the problem we are discussing reduces to only the evaluation of Policy{DP} elements. We can then ask whether we want our combining algorithms to be subject to runtime values of Attributes that on any given evaluation can cause a Policy{DP} to become a Policy{D} or a Policy{P}, thus rendering the property of the Policy indeterminate until runtime values are plugged in. I would also suggest that it is this indeterminacy which would cause Policys not to be comparable for "equivalence", because the Policys themselves have a built-in uncertainty depending on how one regards this property.
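The static property described above can be computed in a single inspection pass over a Policy. A minimal sketch (the function name and the list-of-effects representation are invented for illustration):

```python
def classify_policy(rule_effects):
    """Static D/P/DP property of a Policy, from its Rules' Effects only:
    'P' if every Rule is Permit, 'D' if every Rule is Deny, else 'DP'.
    Depends only on the policy text, never on request attributes."""
    effects = set(rule_effects)
    if effects == {"Permit"}:
        return "P"
    if effects == {"Deny"}:
        return "D"
    return "DP"
```

A policy tree could be walked once with this function and the result cached, which is exactly the "processing the tree once and recording the property" idea below.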
I would also suggest that, for the purpose of equivalence, this runtime characteristic could be considered a "performance optimization", which could be a property of the Policy Engine, whereas the inherent D and P properties can be considered a Policy language characteristic independent of runtime, which could be included in an equivalence algorithm. Argument 2: There is one additional argument I would like to add for consideration. In XACML 2.0, there is a statement in section 7.10 for Policy Evaluation, which says: "If the target value is No-match or Indeterminate then the policy value SHALL be NotApplicable or Indeterminate, respectively, regardless of the value of the rules. For these cases, therefore, the rules need not be evaluated." By comparison, in XACML 3.0, WD 19, the corresponding statement in section 7.11 has been modified to say: "If the target value is No-match then the policy value SHALL be NotApplicable, regardless of the value of the rules. For this case, therefore, the rules need not be evaluated." The Indeterminate part of this statement has been modified to say: "If the target value is Indeterminate, then the policy value SHALL be determined as specified in Table 7, in section 7.13." Therefore, the meaning of the spec has been changed, because in order to select an entry in Table 7 the rules now do have to be evaluated, which is not obvious unless one does a very careful and complete reading of the changes that are being proposed. Additional Consideration: One other side effect of concern is that if we allow the Policy property (P, D, or DP) to be subject to runtime determination, then when an Indeterminate is obtained at the top of the tree it would be necessary to evaluate the complete subtree in order to determine what this property is. By comparison, the static property can be determined at any time by processing the tree once and recording the property for all subsequent evaluations.
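The difference between the 2.0 and wd-19 wording can be sketched as follows. This is a non-normative illustration (function names invented; the Table 7 mapping is as I read wd-19): under 2.0 an Indeterminate target short-circuited, while under wd-19 the rules must still be combined so the Indeterminate can be qualified:

```python
# Non-normative sketch of the section 7.11 / Table 7 behavior described above.
# combine_rules is a callable so it is visibly NOT invoked for No-match.
def evaluate_policy_wd19(target_value, combine_rules):
    if target_value == "No-match":
        return "NotApplicable"        # rules need not be evaluated
    combined = combine_rules()        # the rules ARE evaluated
    if target_value == "Indeterminate":
        # Table 7 (as I read it): carry the D/P qualifier of the
        # combined rules, but report Indeterminate overall.
        if combined in ("Deny", "Indeterminate{D}"):
            return "Indeterminate{D}"
        if combined in ("Permit", "Indeterminate{P}"):
            return "Indeterminate{P}"
        if combined == "Indeterminate{DP}":
            return "Indeterminate{DP}"
        return "NotApplicable"
    return combined                   # target matched
```

The callable makes the point of Argument 2 concrete: for No-match the rules are never touched, but for Indeterminate they must be, since the qualified result depends on them.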
My Conclusions: Bottom line: my recommendation is that we define the D,P,DP property in such a way that it is a static characteristic of the Policy definition, which would presumably allow it to be used in equivalence determinations. I would also recommend that runtime optimization be a configurable option, making it clear that, if this option is activated, any presumption of equivalence should be disregarded as far as runtime behavior is concerned. Comments, suggestions welcome.     Thanks,     Rich On 5/6/2011 12:51 PM, Tyson, Paul H wrote: I withdraw my objection to the Section 7 changes made by Erik in the 3.0 core spec wd-19. I'm still concerned that the policy evaluation specification (in section 7) may cause unexpected variations in the results from two seemingly "equivalent" policies, but I need to produce some theoretical or empirical evidence to demonstrate this (or to relieve my concern). In any case, the wd-19 changes probably do not make this any better or worse. Regards, --Paul --------------------------------------------------------------------- To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php


  • 11.  RE: [xacml] wd-19 indeterminate policy target handling

    Posted 05-17-2011 09:08
    From: Erik Rissanen [ mailto:erik@axiomatics.com ] Sent: Tuesday, May 17, 2011 9:35 AM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling > The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation. +1 We can leave it up to vendors to come up with some nice performance tricks. Thanks, Ray


  • 12.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-17-2011 13:37
This is not a performance issue. It is a change from XACML 2.0 that implies the combining algorithm has as input a set of decisions, as opposed to 2.0, where the combining algorithm had as input a set of Rules, Policies, or PolicySets that had yet to be evaluated. The change implies that the algorithm is working on a different state, which is not the case. Thanks, Rich On 5/17/2011 5:07 AM, remon.sinnema@emc.com wrote: > From: Erik Rissanen [ mailto:erik@axiomatics.com ] > Sent: Tuesday, May 17, 2011 9:35 AM > To: xacml@lists.oasis-open.org > Subject: Re: [xacml] wd-19 indeterminate policy target handling > >> The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation. > +1 We can leave it up to vendors to come up with some nice performance tricks. > > Thanks, > Ray
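Rich's distinction between the two input conventions can be made concrete. In the sketch below (names and counter are invented; Indeterminate handling is omitted for brevity), the decisions-as-input style presumes every child was already evaluated, while the nodes-as-input style evaluates inside the loop and can stop early:

```python
# Eager style: the caller has already evaluated every child.
def combine_decisions(decisions):
    for d in decisions:            # Indeterminate handling omitted
        if d == "Deny":
            return "Deny"
    return "NotApplicable"

# Lazy style: children are evaluated inside the loop, so the
# breakout on Deny skips the remaining evaluations entirely.
def combine_nodes(nodes, counter):
    for evaluate in nodes:
        counter[0] += 1            # count actual evaluations
        if evaluate() == "Deny":
            return "Deny"
    return "NotApplicable"
```

With three children whose first evaluates to Deny, combine_nodes performs one evaluation; the eager style requires all three before combining even starts. Both return the same decision, which is why the thread treats this as a question of description rather than of results.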


  • 13.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-18-2011 08:02
Rich, Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19? Best regards, Erik


  • 14.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-18-2011 15:13
Hi Erik, The algorithm with the proposed changes, in first draft form from my earlier email, was this:

    Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see #1 below
        Boolean atLeastOneErrorD = false;
        Boolean atLeastOneErrorP = false;
        Boolean atLeastOneErrorDP = false;
        Boolean atLeastOnePermit = false;
        for (i = 0; i < lengthOf(nodes); i++) {
            Decision decision = evaluate(nodes[i]); // see #2 below
            if (decision == Deny) {
                return Deny; // loop breakout (#2 below)
            }
            // the next two "if"s are the same as C.10:
            if (decision == Permit) {
                atLeastOnePermit = true;
                continue; // i.e. skip the rest of the logic for the current
                          // iteration of the loop and start the next iteration
            }
            if (decision == NotApplicable) {
                continue;
            }
            // see #3 below
            if (decision == Indeterminate) { // this can only be returned for rules
                if (effect((Rule)nodes[i]) == Deny) { // cast to Rule to get effect
                    atLeastOneErrorD = true;
                } else {
                    atLeastOneErrorP = true;
                }
                continue;
            }
            // the following is the same as C.2 and will evaluate the 3 types
            // of Indeterminate, which can only be returned for Policy and PolicySet
            ... same as lines 5762->5776 (not repeated here)
        } // end for loop
        if (atLeastOneErrorD == true &&
            (atLeastOneErrorP == true || atLeastOnePermit == true)) {
            atLeastOneErrorDP = true;
        }
        if (atLeastOneErrorDP == true) {
            return Indeterminate(DP);
        }
        if (atLeastOneErrorD == true) {
            return Indeterminate(D);
        }
        if (atLeastOnePermit == true) {
            return Permit;
        }
        if (atLeastOneErrorP == true) {
            return Indeterminate(P);
        }
        return NotApplicable;
    } // end algorithm

It is intended to produce the same results in every case as the current algorithm. The differences that it embodies (which do not impact the final results) are:
- it uses nodes as input rather than decisions, where a node can be any of {Rule, Policy, PolicySet}
- it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm)
- it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established
- it should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy)
I think it needs one more pass to get the syntax of the Indeterminates consistent with the current definitions in the doc, but otherwise I am pretty sure it does the same as the current one. (I will try to clean it up a bit later today, but I am busy until then)
    Thanks,
    Rich
On 5/18/2011 4:01 AM, Erik Rissanen wrote: Rich, Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19? Best regards, Erik


  • 15.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 06:02
Hi Erik, As I indicated in the previous email, this 2nd draft is a slight cleanup of the syntax, with some additional comments at the end:

    Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see #1 below
        Boolean atLeastOneErrorD = false;
        Boolean atLeastOneErrorP = false;
        Boolean atLeastOneErrorDP = false;
        Boolean atLeastOnePermit = false;
        for (i = 0; i < lengthOf(nodes); i++) {
            Decision decision = evaluate(nodes[i]); // see #2 below
            if (decision == Deny) {
                return Deny; // loop breakout (#2 below)
            }
            // the next two "if"s are the same as C.10:
            if (decision == Permit) {
                atLeastOnePermit = true;
                continue; // i.e. skip the rest of the logic for the current
                          // iteration of the loop, and start the next iteration
            }
            if (decision == NotApplicable) {
                continue;
            }
            // Ind{} (no qualifier) can only be returned for rules (#3 below)
            if (decision == Indeterminate) {
                // cast node to Rule, then get its effect
                if (effect((Rule)nodes[i]) == Deny) {
                    atLeastOneErrorD = true;
                } else {
                    atLeastOneErrorP = true;
                }
                continue;
            }
            if (decision == Indeterminate{D}) {
                atLeastOneErrorD = true;
            }
            if (decision == Indeterminate{P}) {
                atLeastOneErrorP = true;
            }
            if (decision == Indeterminate{DP}) {
                atLeastOneErrorDP = true;
            }
        } // end for loop
        if (atLeastOneErrorD == true &&
            (atLeastOneErrorP == true || atLeastOnePermit == true)) {
            atLeastOneErrorDP = true;
        }
        if (atLeastOneErrorDP == true) {
            return Indeterminate{DP};
        }
        if (atLeastOneErrorD == true) {
            return Indeterminate{D};
        }
        if (atLeastOnePermit == true) {
            return Permit;
        }
        if (atLeastOneErrorP == true) {
            return Indeterminate{P};
        }
        return NotApplicable;
    } // end algorithm

It is intended to produce the same results in every case as the current C.2 algorithm. The differences that it embodies are:
- it uses nodes as input rather than decisions, where a node can be any of {Rule, Policy, PolicySet}
- it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm)
- it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established; i.e. the qualifiers D and P originate from the effect of rules, and DP is a result of combining. The only place an unqualified Indeterminate (Indeterminate{}) can appear is in the decision that results from evaluation of a Rule, or from the evaluation of a Target. However, the unqualified Ind from a Target will always be combined into a qualified decision, as shown in WD19 Table 7. Also note that the above algorithm should be consistent with Table 4 in section 7.10, because it is the statement at the beginning of the loop, evaluate(nodes[i]), which, when the nodes are rules, will produce a decision that is an unqualified Ind{}. However, an unqualified Ind{} can never escape the algorithm, because after the end of the loop only a qualified Ind{D,P,DP} can be returned.
- it should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy). This objective needs to be qualified by the fact that in 2.0 deny-overrides and permit-overrides were not completely symmetric, as deny-overrides did not allow any Indeterminate to be returned, whereas permit-overrides did. I believe the TC decided, when we changed to qualified Indeterminates, that we would drop this anomaly as unnecessary, so it does not appear in the new algorithms.
Note that evaluate(nodes[i]) is recursive, and this algorithm should be viewed as being applied starting with the top PolicySet and processing all children as required by the evaluations. Note also that there is an intermediate layer of selecting a combining algorithm before the next recursive evaluate(nodes[i]) is called. Note also that the recursion must proceed down to the leaf Rules, because evaluate(nodes[i]) will not get any results until a Rule is reached, which effectively stops the recursion.
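The recursive shape just described might be sketched like this (the tree representation is invented; the combining layer is a bare deny-overrides without Indeterminate handling, just to show where the recursion bottoms out at leaf Rules and where the sibling breakout occurs):

```python
# Invented tree model: a Rule is ("rule", decision);
# a Policy or PolicySet is ("policy", [children]).
def evaluate(node):
    kind, payload = node
    if kind == "rule":
        return payload                 # leaf: the recursion stops here
    return deny_overrides_simple(payload)

def deny_overrides_simple(children):   # bare deny-overrides, no Ind handling
    permit = False
    for child in children:
        decision = evaluate(child)     # recursive descent
        if decision == "Deny":
            return "Deny"              # breakout: remaining siblings skipped
        if decision == "Permit":
            permit = True
    return "Permit" if permit else "NotApplicable"
```

In a tree whose first child Policy contains a Deny rule, the second child Policy is never descended into, which is the breakout-plus-recursion behavior the comments above describe.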
While the above comments might appear complicated, they are only included as guidance for anyone who is interested in delving deeply into the mechanisms that are implicitly present in the evaluation of XACML PolicySets. Bottom line: the proposal is the algorithm. The comments in the list that follows the algorithm are there to help people understand it. I believe the algorithm should be able to be inserted as is in Section C.2, and, if there is agreement, corresponding algorithms can be prepared for sections C.3 -> C.7. Note that C.8, C.9, and the legacy sections can probably remain as they are, since they do not appear to deal with qualified Indeterminates.
    Thanks,
    Rich


  • 16.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 10:03
    Hi Rich, If it has the same results as the current specification, I would prefer to not make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule for instance. It should not be necessary since the base case for a Rule is already covered in another section. Best regards, Erik On 2011-05-19 08:01, rich levinson wrote: 4DD4B23B.3050106@oracle.com type= cite > Hi Erik, As I indicated in prev email, this 2nd draft is a slight cleanup of the syntax, with some additional comments at the end: Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see 1 below Boolean atLeastOneErrorD = false; Boolean atLeastOneErrorP = false; Boolean atLeastOneErrorDP = false; Boolean atLeastOnePermit = false; for ( i=0; i<lengthOf(nodes); i++ ) { Decision decision = evaluate(nodes[i]); // see #2 below if (decision==Deny) { return Deny; // loop breakout (#2 below) } // the next two if s are the same as C.10: if (decision==Permit) { atLeastOnePermit = true; continue; // i.e. 
skip the rest of the logic for current // iteration of loop, and start next iteration } if (decision==NotApplicable) { continue; } // Ind{} (no qualifier) can only be returned for rules (#3 below) if (decision==Indeterminate) { // cast node to Rule, then get its effect if ( effect((Rule)nodes[i])==Deny) ) { atLeastOneErrorD = true; } else { atLeastOneErrorP = true; } continue; } it (decision == Indeterminate{D}) { atLeastOneErrorD = true; } it (decision == Indeterminate{P}) { atLeastOneErrorp = true; } it (decision == Indeterminate{DP}) { atLeastOneErrorDP = true; } } // end for loop if (atLeastOneErrorD==true && (atLeastOneErrorP==true atLeastOnePermit==true) { atLeastOneErrorDP = true; } if (atLeastOneErrorDP==true) { return Indeterminate{DP}; } if (atLeastOneErrorD==true) { return Indeterminate{D}; } if (atLeastOnePermit==true) { return Permit; } if (atLeastOneErrorP == true) { return Indeterminate{P}; } return NotApplicable; } // end algorithm It is intended to produce the same results in every case as the current C.2 algorithm. The differences that it embodies are: it uses nodes as input rather than decisions, where a node can be any of: {Rule, Policy, PolicySet} it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm) it preserves(and makes explicit) the logic whereby the D or P status of Indeterminate is established; i.e. the qualifiers D,P originate from the effect of rules. DP is a result of combining. The only place an unqualified Indeterminate (Indeterminate{}) can appear is in the decision that results from evaluation of a Rule, or from the evaluation of a Target. However, the unqualified Ind from a Target will always be combined to a qualified decision, as shown in WD19 Table 7. 
Also note that the above algorithm should be consistent with Table 4 in section 7.10, because it is the statement at the beginning of the loop, evaluate(nodes[i]), which, when the nodes are rules, will produce a decision that is an unqualified Ind{}. However, an unqualified Ind{} can never escape the algorithm, because after the end of the loop only qualified Ind{D,P,DP} can be returned. It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy). This objective needs to be qualified by the fact that in 2.0 deny-overrides and permit-overrides were not completely symmetric: d-o did not allow any Indeterminate to be returned, whereas p-o did. I believe the TC decided, when we changed to qualified Indeterminates, that we would drop this anomaly as being unnecessary, so it does not appear in the new algorithms. Note that evaluate(nodes[i]) is recursive, and this algorithm should be viewed as being applied starting with the top PolicySet and processing all children as required by the evaluations. Note also that there is an intermediate layer of selecting a combining algorithm before the next recursive evaluate(nodes[i]) is called. Note also that the recursion must proceed down to the leaf Rules, because evaluate(nodes[i]) will not get any results until a Rule is reached, which effectively stops the recursion. While the above comments might appear complicated, they are included only as guidance for anyone who is interested in delving deeply into the mechanisms that are implicitly present in the evaluation of XACML PolicySets. Bottom line: the proposal is the algorithm. The comments that appear in the list that follows the algorithm are to help people understand the algorithm. I believe the algorithm should be able to be inserted as-is in Section C.2, and, if there is agreement, corresponding algorithms can be prepared for sections C.3 -> C.7. 
Note C.8, C.9, and the legacy sections can probably remain as they are, since they do not appear to deal with qualified Indeterminates.     Thanks,     Rich

On 5/18/2011 11:12 AM, rich levinson wrote:

Hi Erik, The algorithm with proposed changes in my earlier email, in first-draft form, was this:

Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see #1 below
    Boolean atLeastOneError   = false;
    Boolean atLeastOneErrorD  = false;
    Boolean atLeastOneErrorP  = false;
    Boolean atLeastOneErrorDP = false;
    Boolean atLeastOnePermit  = false;
    for ( i=0; i<lengthOf(nodes); i++ ) {
        Decision decision = evaluate(nodes[i]); // see #2 below
        if (decision==Deny) {
            return Deny; // loop breakout (#2 below)
        }
        // the next two "if"s are the same as C.10:
        if (decision==Permit) {
            atLeastOnePermit = true;
            continue; // i.e. skip the rest of the logic for the current
                      // iteration of the loop, and start the next iteration
        }
        if (decision==NotApplicable) {
            continue;
        }
        // see #3 below
        if (decision==Indeterminate) { // this can only be returned for rules
            if ( effect((Rule)nodes[i])==Deny ) { // cast to Rule to get effect
                atLeastOneErrorD = true;
            } else {
                atLeastOneErrorP = true;
            }
            continue;
        }
        // the following is the same as C.2 and will evaluate the 3 types
        // of Indeterminate, which can only be returned for Policy and PolicySet
        ... same as lines 5762->5776 (not repeated here)
    } // end for loop
    if (atLeastOneErrorD==true && (atLeastOneErrorP==true || atLeastOnePermit==true)) {
        atLeastOneErrorDP = true;
    }
    if (atLeastOneErrorDP==true) { return Indeterminate(DP); }
    if (atLeastOneErrorD==true)  { return Indeterminate(D); }
    if (atLeastOnePermit==true)  { return Permit; }
    if (atLeastOneErrorP==true)  { return Indeterminate(P); }
    return NotApplicable;
} // end algorithm

It is intended to produce the same results in every case as the current algorithm. The differences that it embodies (that do not impact the final results) are:

- it uses nodes as input rather than decisions, where a node can be any of {Rule, Policy, PolicySet};
- it preserves the original logic from 2.0 that shows the evaluate being done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit "biased decision" type of the algorithm);
- it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established.

It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy). I think it needs one more pass to get the syntax of the Indeterminates consistent with the current definitions in the doc, but otherwise I am pretty sure it does the same as the current. (I will try to clean it up a bit later today, but I am busy until then.)     Thanks,     Rich

On 5/18/2011 4:01 AM, Erik Rissanen wrote:

Rich, Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19? Best regards, Erik

On 2011-05-17 15:36, rich levinson wrote:

This is not a performance issue. It is a change from XACML 2.0 that implies that the combining algorithm has as input a set of decisions, as opposed to 2.0, where the combining algorithm had as input a set of Rules, Policies, or PolicySets that had yet to be evaluated. The change implies that the algorithm is working on a different state, which is not the case.     Thanks,     Rich

On 5/17/2011 5:07 AM, remon.sinnema@emc.com wrote: From: Erik Rissanen [ mailto:erik@axiomatics.com ] Sent: Tuesday, May 17, 2011 9:35 AM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation. 
+1 We can leave it up to vendors to come up with some nice performance tricks. Thanks, Ray --------------------------------------------------------------------- To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail.  Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
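To make the combining logic above concrete, here is a minimal, runnable sketch of the proposed node-based deny-overrides algorithm. This is an illustration, not the spec's normative pseudocode; the `Rule` class, `evaluate` stub, and string decision constants are assumptions made for the sketch.

```python
from dataclasses import dataclass

# Decision values, including the qualified Indeterminates of XACML 3.0
# (string constants are an assumption of this sketch).
DENY, PERMIT, NOT_APPLICABLE = "Deny", "Permit", "NotApplicable"
IND, IND_D, IND_P, IND_DP = "Ind", "Ind{D}", "Ind{P}", "Ind{DP}"

@dataclass
class Rule:
    effect: str    # DENY or PERMIT
    decision: str  # the decision this node evaluates to (stubbed)

def evaluate(node):
    """Stand-in for the recursive evaluate(nodes[i]) in the proposal."""
    return node.decision

def deny_overrides(nodes):
    at_least_one_error_d = False
    at_least_one_error_p = False
    at_least_one_error_dp = False
    at_least_one_permit = False
    for node in nodes:
        decision = evaluate(node)
        if decision == DENY:
            return DENY  # loop breakout: Deny always wins
        if decision == PERMIT:
            at_least_one_permit = True
            continue
        if decision == NOT_APPLICABLE:
            continue
        if decision == IND:
            # Unqualified Indeterminate can only come from a Rule;
            # qualify it by the rule's effect.
            if node.effect == DENY:
                at_least_one_error_d = True
            else:
                at_least_one_error_p = True
            continue
        if decision == IND_D:
            at_least_one_error_d = True
        if decision == IND_P:
            at_least_one_error_p = True
        if decision == IND_DP:
            at_least_one_error_dp = True
    # An error on the Deny side combined with any Permit-side outcome
    # widens the result to Indeterminate{DP}.
    if at_least_one_error_d and (at_least_one_error_p or at_least_one_permit):
        at_least_one_error_dp = True
    if at_least_one_error_dp:
        return IND_DP
    if at_least_one_error_d:
        return IND_D
    if at_least_one_permit:
        return PERMIT
    if at_least_one_error_p:
        return IND_P
    return NOT_APPLICABLE
```

For example, a Deny-effect rule that errors out combined with a Permit result yields `Ind{DP}`, matching the flag-combining step after the loop.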


  • 17.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 12:45
    Hi Erik, In principle, I would agree that if the p-code produces the same results from an end-user perspective, then the details of the implementation (namely, the p-code translated to a native language) would be incidental. However, the way the spec is currently set up, the verbal description (lines 5727-5738) is defined to be "a non-normative informative description of this combining algorithm," whereas the p-code following line 5739 is defined to be "the normative specification of this combining algorithm." Therefore, I think it is necessary to raise an issue as to which aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm?     Thanks,     Rich


  • 18.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 13:32
    To TC: In order to make this issue concrete, I gave an example earlier that we can discuss that I think will address this problem: http://lists.oasis-open.org/archives/xacml/201105/msg00036.html ... if I am in a deny-overrides PolicySet with, for example, 10 child (deny-overrides) Policy elements, and I evaluate the first Policy, which in turn evaluates its child Rules: if its first, or any other, Rule returns a Deny, then the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements, since the decision will be Deny regardless of what they return. I think that C.2, as written, insists that the other 9 Policy elements must be calculated, since the input to the algorithm is an array of Decisions, presumably the Decisions resulting from evaluating the child Policies of the PolicySet. The question is: how are the child Policies selected for input? I would think that is exactly what the combining algorithm determines; i.e. it combines the Policies according to a script and determines which decision from which Policy governs its result. If that is not the case, then what determines the contents of the input array? I.e. I believe the algorithm must clearly state what the inputs are that it is processing. As written, it appears to me that C.2 effectively requires running the algorithm on the child Policies before passing the results in to run the algorithm.     Thanks,     Rich
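The 10-child-Policy scenario above can be sketched as follows. This is a simplified illustration (the evaluation counter and three-value decision set are assumptions of the sketch, not spec machinery); it shows that under deny-overrides the first Deny makes the remaining nine evaluations unnecessary.

```python
DENY, PERMIT, NOT_APPLICABLE = "Deny", "Permit", "NotApplicable"

def deny_overrides_lazy(children, evaluate, counter):
    """Evaluate children one at a time; stop at the first Deny."""
    saw_permit = False
    for child in children:
        counter["evals"] += 1          # count how many children we touched
        decision = evaluate(child)
        if decision == DENY:
            return DENY                # remaining children are never evaluated
        if decision == PERMIT:
            saw_permit = True
    return PERMIT if saw_permit else NOT_APPLICABLE

# Ten child Policies; the first already yields Deny.
children = [DENY] + [PERMIT] * 9
counter = {"evals": 0}
result = deny_overrides_lazy(children, evaluate=lambda d: d, counter=counter)
print(result, counter["evals"])        # Deny after evaluating only 1 child
```

The combined result is identical to evaluating all ten children first; only the amount of work differs.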


  • 19.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 13:43
    Hi Rich, Yes, I noticed too that there is nothing which says what the input array is. It should be said that it's the result of evaluating all the child nodes which are to be combined. But just because the semantics of the algorithm are specified like this, does not mean that an implementation has to actually act like this to get the result. As I said in my other email, the way you wrote it is not necessarily the most efficient implementation and a PDP may be smarter than that. The spec is not intended to put restrictions on the implementation, as long as it gets the same result as specified. Best regards, Erik On 2011-05-19 15:31, rich levinson wrote: 4DD51BA2.6000709@oracle.com type= cite > To TC: In order to make this issue concrete, I gave an example earlier that we can discuss that I think will address this problem: http://lists.oasis-open.org/archives/xacml/201105/msg00036.html ... if I am in a deny-overrides PolicySet, and there are 10 child (deny-override) Policy elements, for example, and I evaluate the first Policy and it in turn evaluates its child Rules, if its first, or any other, rule returns a Deny, then the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements since the decision will be a Deny regardless of what they return. I think that C.2, as written, insists that the other 9 Policy elements must be calculated, since the input to the algorithm is an array of Decisions, presumably the Decisions resulting from evaluating the child Policies of the PolicySet. The question is: how are the child Policies selected for input? I would think that is exactly what the combining algorithm determines. i.e. it combines the Policies according to a script and determines which decision from which Policy governs its result. If that is not the case, then what determines the contents of the input array? i.e. I believe the algorithm must clearly state what the inputs are that it is processing. 
As written, it appears to me that C.2 effectively requires running the algorithm on the child Policies before passing the results in to run the algorithm.     Thanks,     Rich On 5/19/2011 8:44 AM, rich levinson wrote: 4DD51097.6030209@oracle.com type= cite > Hi Erik, In principle, I would agree that if the p-code produces the same results from an end user perspective that the details of the implementation (namely the p-code translated to a native language) would be incidental. However, the way the spec is currently set up, the verbal description, lines 5727-5738, is defined to be: a non-normative informative description of this combining algorithm. whereas the p-code, following line 5739 is defined to be: the normative specification of this combining algorithm. Therefore, I think it is necessary to raise an issue as to what aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm.     Thanks,     Rich On 5/19/2011 6:02 AM, Erik Rissanen wrote: 4DD4EAC0.8050300@axiomatics.com type= cite > Hi Rich, If it has the same results as the current specification, I would prefer to not make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule for instance. It should not be necessary since the base case for a Rule is already covered in another section. 
    Best regards, Erik

    On 2011-05-19 08:01, rich levinson wrote:
    Hi Erik, As I indicated in my previous email, this 2nd draft is a slight cleanup of the syntax, with some additional comments at the end:

        Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) {  // see #1 below
            Boolean atLeastOneErrorD  = false;
            Boolean atLeastOneErrorP  = false;
            Boolean atLeastOneErrorDP = false;
            Boolean atLeastOnePermit  = false;
            for ( i = 0; i < lengthOf(nodes); i++ ) {
                Decision decision = evaluate(nodes[i]);  // see #2 below
                if (decision == Deny) {
                    return Deny;  // loop breakout (#2 below)
                }
                // the next two "if"s are the same as C.10:
                if (decision == Permit) {
                    atLeastOnePermit = true;
                    continue;  // i.e. skip the rest of the logic for the current
                               // iteration of the loop, and start the next iteration
                }
                if (decision == NotApplicable) {
                    continue;
                }
                // Ind{} (no qualifier) can only be returned for Rules (#3 below)
                if (decision == Indeterminate) {
                    // cast node to Rule, then get its Effect
                    if (effect((Rule)nodes[i]) == Deny) {
                        atLeastOneErrorD = true;
                    } else {
                        atLeastOneErrorP = true;
                    }
                    continue;
                }
                if (decision == Indeterminate{D})  { atLeastOneErrorD  = true; }
                if (decision == Indeterminate{P})  { atLeastOneErrorP  = true; }
                if (decision == Indeterminate{DP}) { atLeastOneErrorDP = true; }
            }  // end for loop
            if (atLeastOneErrorD == true &&
                (atLeastOneErrorP == true || atLeastOnePermit == true)) {
                atLeastOneErrorDP = true;
            }
            if (atLeastOneErrorDP == true) { return Indeterminate{DP}; }
            if (atLeastOneErrorD  == true) { return Indeterminate{D};  }
            if (atLeastOnePermit  == true) { return Permit; }
            if (atLeastOneErrorP  == true) { return Indeterminate{P};  }
            return NotApplicable;
        }  // end algorithm

    It is intended to produce the same results in every case as the current C.2 algorithm. The differences that it embodies are:

    1. It uses nodes as input rather than decisions, where a node can be any of {Rule, Policy, PolicySet}.
    2. It preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm).
    3. It preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established; i.e. the qualifiers D and P originate from the Effect of Rules, and DP is a result of combining. The only place an unqualified Indeterminate (Indeterminate{}) can appear is in the decision that results from evaluation of a Rule, or from the evaluation of a Target. However, the unqualified Ind from a Target will always be combined to a qualified decision, as shown in WD-19 Table 7. Also note that the above algorithm should be consistent with Table 4 in section 7.10, because it is the statement at the beginning of the loop, evaluate(nodes[i]), which, when nodes are Rules, will produce a decision that is an unqualified Ind{}. However, an unqualified Ind{} can never escape the algorithm, because after the end of the loop only qualified Ind{D,P,DP} can be returned.
    4. It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy). This objective needs to be qualified by the fact that in 2.0 deny-overrides and permit-overrides were not completely symmetric, as d-o did not allow any Indeterminate to be returned, whereas p-o did. I believe the TC decided, when we changed to qualified Indeterminates, that we would drop this anomaly as being unnecessary, so it does not appear in the new algorithms.

    Note that the evaluate(nodes[i]) is recursive, and this algorithm should be viewed as being applied starting with the top PolicySet, and processing all children as required by the evaluations.
    Note also that there is an intermediate layer of selecting a combining algorithm before the next recursive evaluate(nodes[i]) is called. Note also that the recursion must proceed down to the leaf Rules, because evaluate(nodes[i]) will not get any results until a Rule is reached, which effectively stops the recursion. While the above comments might appear complicated, they are only included as guidance for anyone who is interested in delving deeply into the mechanisms that are implicitly present in the evaluation of XACML PolicySets. Bottom line: the proposal is the algorithm. The comments that appear in the list that follows the algorithm are to help people understand the algorithm. I believe the algorithm should be able to be inserted as is in Section C.2, and, if there is agreement, corresponding algorithms can be prepared for sections C.3 -> C.7. Note that C.8, C.9, and the legacy sections can probably remain as they are, since they do not appear to deal with qualified Indeterminates.     Thanks,     Rich

    On 5/18/2011 11:12 AM, rich levinson wrote:
    Hi Erik, The algorithm with my proposed changes, in first-draft form from my earlier email, was this:

        Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) {  // see #1 below
            Boolean atLeastOneError   = false;
            Boolean atLeastOneErrorD  = false;
            Boolean atLeastOneErrorP  = false;
            Boolean atLeastOneErrorDP = false;
            Boolean atLeastOnePermit  = false;
            for ( i = 0; i < lengthOf(nodes); i++ ) {
                Decision decision = evaluate(nodes[i]);  // see #2 below
                if (decision == Deny) {
                    return Deny;  // loop breakout (#2 below)
                }
                // the next two "if"s are the same as C.10:
                if (decision == Permit) {
                    atLeastOnePermit = true;
                    continue;  // i.e. skip the rest of the logic for the current
                               // iteration of the loop, and start the next iteration
                }
                if (decision == NotApplicable) {
                    continue;
                }
                // see #3 below
                if (decision == Indeterminate) {  // this can only be returned for Rules
                    if (effect((Rule)nodes[i]) == Deny) {  // cast to Rule to get Effect
                        atLeastOneErrorD = true;
                    } else {
                        atLeastOneErrorP = true;
                    }
                    continue;
                }
                // the following is the same as C.2 and will evaluate the 3 types
                // of Indeterminate, which can only be returned for Policy and PolicySet
                ... same as lines 5762->5776 (not repeated here)
            }  // end for loop
            if (atLeastOneErrorD == true &&
                (atLeastOneErrorP == true || atLeastOnePermit == true)) {
                atLeastOneErrorDP = true;
            }
            if (atLeastOneErrorDP == true) { return Indeterminate(DP); }
            if (atLeastOneErrorD  == true) { return Indeterminate(D);  }
            if (atLeastOnePermit  == true) { return Permit; }
            if (atLeastOneErrorP  == true) { return Indeterminate(P);  }
            return NotApplicable;
        }  // end algorithm

    It is intended to produce the same results in every case as the current algorithm. The differences that it embodies (that do not impact the final results) are: (1) it uses nodes as input rather than decisions, where a node can be any of {Rule, Policy, PolicySet}; (2) it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm); (3) it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established. It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy). I think it needs one more pass to get the syntax of the Indeterminates consistent with the current definitions in the doc, but otherwise I am pretty sure it does the same as the current.
    (I will try to clean it up a bit later today, but I am busy until then.)     Thanks,     Rich

    On 5/18/2011 4:01 AM, Erik Rissanen wrote:
    Rich, Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19? Best regards, Erik

    On 2011-05-17 15:36, rich levinson wrote:
    This is not a performance issue. It is a change from XACML 2.0 that implies that the combining algorithm has as input a set of decisions, as opposed to 2.0, where the combining algorithm had as input a set of Rules, Policies, or PolicySets that had yet to be evaluated. The change implies that the algorithm is working on a different state, which is not the case.     Thanks,     Rich

    On 5/17/2011 5:07 AM, remon.sinnema@emc.com wrote:
    From: Erik Rissanen [ mailto:erik@axiomatics.com ] Sent: Tuesday, May 17, 2011 9:35 AM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling
    The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation.
    +1 We can leave it up to vendors to come up with some nice performance tricks. Thanks, Ray

    ---------------------------------------------------------------------
    To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
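To make Erik's reading above concrete (the input array is the result of evaluating all the child nodes to be combined), the following is a minimal sketch of a deny-overrides combiner over precomputed decisions. It is an illustration only, not the normative C.2 pseudocode: the function name and the string encoding of decisions are invented for the sketch.

```python
# Illustrative sketch only: a deny-overrides combiner whose input is the
# array of already-evaluated child decisions, matching the reading of C.2
# discussed above. The string encoding ("Deny", "Indeterminate{D}", etc.)
# is an assumption of this sketch, not spec syntax.
def deny_overrides(decisions):
    at_least_one_error_d = False
    at_least_one_error_p = False
    at_least_one_error_dp = False
    at_least_one_permit = False
    for decision in decisions:
        if decision == "Deny":
            return "Deny"                     # Deny always wins
        elif decision == "Permit":
            at_least_one_permit = True
        elif decision == "Indeterminate{D}":
            at_least_one_error_d = True
        elif decision == "Indeterminate{P}":
            at_least_one_error_p = True
        elif decision == "Indeterminate{DP}":
            at_least_one_error_dp = True
        # NotApplicable contributes nothing
    if at_least_one_error_d and (at_least_one_error_p or at_least_one_permit):
        at_least_one_error_dp = True
    if at_least_one_error_dp:
        return "Indeterminate{DP}"
    if at_least_one_error_d:
        return "Indeterminate{D}"
    if at_least_one_permit:
        return "Permit"
    if at_least_one_error_p:
        return "Indeterminate{P}"
    return "NotApplicable"
```

Note that the sketch deliberately says nothing about how or when the decisions list was produced; that is precisely the point under dispute in this thread.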


  • 20.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 14:24
    Hi Erik, Another way to see what the problem is, for the example in the previous email, is the following: Let's say I had a really smart PDP that knew the algorithm was deny-overrides, so that when it first encountered a child Policy in the PolicySet that produced a Deny, it would submit only that Decision to the algorithm. That sounds like it might be reasonable as an optimization. However, let's assume instead that someone changed the combining algorithm to permit-overrides. Now it would seem the PDP would have to be smart enough to pick a different result to submit to the algorithm; i.e. it would have to look for a Permit and then stop processing the rest of the Policies. The overall point is that, as written, I think the algorithms are going to end up causing a lot more questions of the type we have been discussing. The proposal I submitted, I believe, makes all these types of problems go away: http://lists.oasis-open.org/archives/xacml/201105/msg00043.html     Thanks,     Rich
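Rich's "really smart PDP" example can be sketched as a lazy variant that takes unevaluated child nodes plus an evaluate callback, so that once a Deny is seen the remaining children are never evaluated. Everything here (the callback, the node representation, the string decisions) is an illustrative assumption, not spec text.

```python
# Illustrative sketch of the short-circuit behavior discussed above:
# deny-overrides over *nodes*, evaluating each child on demand and
# stopping at the first Deny. The node representation and the evaluate
# callback are assumptions of the sketch.
def deny_overrides_lazy(nodes, evaluate):
    error_d = error_p = error_dp = permit = False
    for node in nodes:
        decision = evaluate(node)   # child evaluated only when reached
        if decision == "Deny":
            return "Deny"           # short-circuit: later children skipped
        elif decision == "Permit":
            permit = True
        elif decision == "Indeterminate{D}":
            error_d = True
        elif decision == "Indeterminate{P}":
            error_p = True
        elif decision == "Indeterminate{DP}":
            error_dp = True
        # NotApplicable contributes nothing
    if error_d and (error_p or permit):
        error_dp = True
    if error_dp:
        return "Indeterminate{DP}"
    if error_d:
        return "Indeterminate{D}"
    if permit:
        return "Permit"
    if error_p:
        return "Indeterminate{P}"
    return "NotApplicable"
```

For a permit-overrides PolicySet the same trick would apply with Permit as the breakout decision, which is Rich's point that the profitable shortcut depends on which combining algorithm is in force.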


  • 21.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 15:04
    Hi Rich, Sorry, but I don't understand your second point. The PDP would just the different algorithm code, right? Regards, Erik On 2011-05-19 16:23, rich levinson wrote: 4DD527E8.9030608@oracle.com type= cite > Hi Erik, Another way to see what the problem is, for the example in prev email, is the following: Let's say I had a really smart PDP that knew the algorithm was deny-overrides, and so when it first encountered a child Policy in the PolicySet that produced a deny, then it would submit only that Decision to the algorithm. That sounds like it might be reasonable as an optimization . However, let's assume instead that someone changed the combining-algorithm to permit-overrides. Now it would seem the PDP would have to smart enough to pick a different result to submit to the algorithm. i.e. it would have to look for a Permit and then stop processing the rest of the Policiesl. The overall point is that, as written, I think the algorithms are going to end causing a lot more question of the type we have been discussing. The proposal I submitted, I believe, makes all these types of problems go away: http://lists.oasis-open.org/archives/xacml/201105/msg00043.html     Thanks,     Rich On 5/19/2011 9:42 AM, Erik Rissanen wrote: 4DD51E54.6010802@axiomatics.com type= cite > Hi Rich, Yes, I noticed too that there is nothing which says what the input array is. It should be said that it's the result of evaluating all the child nodes which are to be combined. But just because the semantics of the algorithm are specified like this, does not mean that an implementation has to actually act like this to get the result. As I said in my other email, the way you wrote it is not necessarily the most efficient implementation and a PDP may be smarter than that. The spec is not intended to put restrictions on the implementation, as long as it gets the same result as specified. 
Best regards, Erik On 2011-05-19 15:31, rich levinson wrote: 4DD51BA2.6000709@oracle.com type= cite > To TC: In order to make this issue concrete, I gave an example earlier that we can discuss that I think will address this problem: http://lists.oasis-open.org/archives/xacml/201105/msg00036.html ... if I am in a deny-overrides PolicySet, and there are 10 child (deny-override) Policy elements, for example, and I evaluate the first Policy and it in turn evaluates its child Rules, if its first, or any other, rule returns a Deny, then the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements since the decision will be a Deny regardless of what they return. I think that C.2, as written, insists that the other 9 Policy elements must be calculated, since the input to the algorithm is an array of Decisions, presumably the Decisions resulting from evaluating the child Policies of the PolicySet. The question is: how are the child Policies selected for input? I would think that is exactly what the combining algorithm determines. i.e. it combines the Policies according to a script and determines which decision from which Policy governs its result. If that is not the case, then what determines the contents of the input array? i.e. I believe the algorithm must clearly state what the inputs are that it is processing. As written, it appears to me that C.2 effectively requires running the algorithm on the child Policies before passing the results in to run the algorithm.     Thanks,     Rich On 5/19/2011 8:44 AM, rich levinson wrote: 4DD51097.6030209@oracle.com type= cite > Hi Erik, In principle, I would agree that if the p-code produces the same results from an end user perspective that the details of the implementation (namely the p-code translated to a native language) would be incidental. 
However, the way the spec is currently set up, the verbal description, lines 5727-5738, is defined to be: a non-normative informative description of this combining algorithm. whereas the p-code, following line 5739 is defined to be: the normative specification of this combining algorithm. Therefore, I think it is necessary to raise an issue as to what aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm.     Thanks,     Rich On 5/19/2011 6:02 AM, Erik Rissanen wrote: 4DD4EAC0.8050300@axiomatics.com type= cite > Hi Rich, If it has the same results as the current specification, I would prefer to not make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule for instance. It should not be necessary since the base case for a Rule is already covered in another section. Best regards, Erik On 2011-05-19 08:01, rich levinson wrote: 4DD4B23B.3050106@oracle.com type= cite > Hi Erik, As I indicated in prev email, this 2nd draft is a slight cleanup of the syntax, with some additional comments at the end: Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see 1 below Boolean atLeastOneErrorD = false; Boolean atLeastOneErrorP = false; Boolean atLeastOneErrorDP = false; Boolean atLeastOnePermit = false; for ( i=0; i<lengthOf(nodes); i++ ) { Decision decision = evaluate(nodes[i]); // see #2 below if (decision==Deny) { return Deny; // loop breakout (#2 below) } // the next two if s are the same as C.10: if (decision==Permit) { atLeastOnePermit = true; continue; // i.e. 
skip the rest of the logic for current // iteration of loop, and start next iteration } if (decision==NotApplicable) { continue; } // Ind{} (no qualifier) can only be returned for rules (#3 below) if (decision==Indeterminate) { // cast node to Rule, then get its effect if ( effect((Rule)nodes[i])==Deny) ) { atLeastOneErrorD = true; } else { atLeastOneErrorP = true; } continue; } it (decision == Indeterminate{D}) { atLeastOneErrorD = true; } it (decision == Indeterminate{P}) { atLeastOneErrorp = true; } it (decision == Indeterminate{DP}) { atLeastOneErrorDP = true; } } // end for loop if (atLeastOneErrorD==true && (atLeastOneErrorP==true atLeastOnePermit==true) { atLeastOneErrorDP = true; } if (atLeastOneErrorDP==true) { return Indeterminate{DP}; } if (atLeastOneErrorD==true) { return Indeterminate{D}; } if (atLeastOnePermit==true) { return Permit; } if (atLeastOneErrorP == true) { return Indeterminate{P}; } return NotApplicable; } // end algorithm It is intended to produce the same results in every case as the current C.2 algorithm. The differences that it embodies are: it uses nodes as input rather than decisions, where a node can be any of: {Rule, Policy, PolicySet} it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased decision type of the algorithm) it preserves(and makes explicit) the logic whereby the D or P status of Indeterminate is established; i.e. the qualifiers D,P originate from the effect of rules. DP is a result of combining. The only place an unqualified Indeterminate (Indeterminate{}) can appear is in the decision that results from evaluation of a Rule, or from the evaluation of a Target. However, the unqualified Ind from a Target will always be combined to a qualified decision, as shown in WD19 Table 7. 
Also note, that the above algorithm should be consistent w Table 4 in section 7.10, because it is the statement at the beginning of the loop, evaluate(nodes[i]), which, when nodes are rules, will produce a decision that is an unqualified Ind{}. However, an unqualified Ind{} can never escape the algorithm because after the end of the loop on qualified Ind{D,P,DP} can be returned. It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy) This objective needs to be qualified by the fact that in 2.0 deny-overrides and permit-overrides were not completely symmetric, as d-o did not allow any Indeterminate to be returned, whereas p-o did. I believe the TC decided when we chg'd to qualified Indeterminates that we would drop this anomaly as being unnecessary, so it does not appear in new algs. Note that the evaluate(nodes[i]) is recursive, and this algorithm should be viewed as being applied starting with the top PolicySet, and processing all children as required by the evaluations. Note also that there is an intermediate layer of selecting a combining algorithm before the next recursive evaluate(nodes[i]) is called. Note also that the recursion must proceed down to the leaf Rules, because evaluate(nodes[i]) will not get any results until a Rule is reached which effectively stops the recursion. While the above comments might appear complicated, they are only included for guidance for anyone who is interested in delving deeply into the mechanisms that are implicitly present in the evaluation of XACML PolicySets. Bottom line: the proposal is the algorithm. The comments that appear in the list that follows the algorithm are to help people understand the algorithm. I believe the algorithm should be able to be inserted as is in Section C.2, and, if there is agreement, corresponding algorithms can be prepared for sections C.3 -> C.7.  
Note C.8, C.9, and the legacy sections can probably remain as they are, since they do not appear to deal with qualified Indeterminates.

    Thanks,
    Rich

On 5/18/2011 11:12 AM, rich levinson wrote:

Hi Erik, The algorithm w proposed changes in my earlier email in first draft form was this:

Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see #1 below
    Boolean atLeastOneError = false;
    Boolean atLeastOneErrorD = false;
    Boolean atLeastOneErrorP = false;
    Boolean atLeastOneErrorDP = false;
    Boolean atLeastOnePermit = false;
    for ( i=0; i<lengthOf(nodes); i++ ) {
        Decision decision = evaluate(nodes[i]); // see #2 below
        if (decision==Deny) {
            return Deny; // loop breakout (#2 below)
        }
        // the next two ifs are the same as C.10:
        if (decision==Permit) {
            atLeastOnePermit = true;
            continue; // i.e. skip the rest of the logic for the current
                      // iteration of the loop and start the next iteration
        }
        if (decision==NotApplicable) {
            continue;
        }
        // see #3 below
        if (decision==Indeterminate) { // this can only be returned for rules
            if ( effect((Rule)nodes[i])==Deny ) { // cast to Rule to get effect
                atLeastOneErrorD = true;
            } else {
                atLeastOneErrorP = true;
            }
            continue;
        }
        // the following is the same as C.2 and will evaluate the 3 types
        // of Indeterminate, which can only be returned for Policy and PolicySet
        ... same as lines 5762->5776 (not repeated here)
    } // end for loop
    if (atLeastOneErrorD==true && (atLeastOneErrorP==true || atLeastOnePermit==true)) {
        atLeastOneErrorDP = true;
    }
    if (atLeastOneErrorDP==true) { return Indeterminate(DP); }
    if (atLeastOneErrorD==true) { return Indeterminate(D); }
    if (atLeastOnePermit==true) { return Permit; }
    if (atLeastOneErrorP == true) { return Indeterminate(P); }
    return NotApplicable;
} // end algorithm

It is intended to produce the same results in every case as the current algorithm.
The differences that it embodies (that do not impact the final results) are:

- it uses nodes as input rather than decisions, where a node can be any of: {Rule, Policy, PolicySet}
- it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased-decision type of the algorithm)
- it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established
- it should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the property does not apply to policy)

I think it needs one more pass to get the syntax of the Indeterminates consistent w the current defns in the doc, but otherwise I am pretty sure it does the same as the current. (I will try to clean it up a bit later today, but I am busy until then.)

    Thanks,
    Rich

On 5/18/2011 4:01 AM, Erik Rissanen wrote:

Rich, Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19? Best regards, Erik

On 2011-05-17 15:36, rich levinson wrote:

This is not a performance issue. It is a change from XACML 2.0 that implies that the combining algorithm has as input a set of decisions, as opposed to 2.0, where the combining algorithm had as input a set of Rules, Policies, or PolicySets that had yet to be evaluated. The change implies that the algorithm is working on a different state, which is not the case.

    Thanks,
    Rich

On 5/17/2011 5:07 AM, remon.sinnema@emc.com wrote:

From: Erik Rissanen [ mailto:erik@axiomatics.com ] Sent: Tuesday, May 17, 2011 9:35 AM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling

The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation.
+1 We can leave it up to vendors to come up with some nice performance tricks. Thanks, Ray

---------------------------------------------------------------------
To unsubscribe from this mail list, you must leave the OASIS TC that
generates this mail.  Follow this link to all your TCs in OASIS at:
https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php


  • 22.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 21:39
Hi Erik, Just to answer your question (see mtg minutes for overall issue disposition), I will try again to make the point:

Assume the current processing location is that the PolicySet has just been called for evaluation, and it knows it needs to obtain a decision by sending an array of Decisions from its children to the combining algorithm. So, the code that knows it needs to send the array of Decisions to the combining algorithm now needs to evaluate each Policy that it wants a Decision from and add that Decision to the array it is going to pass.

- If it wants to send Decisions from all its children, then there are no issues, because it doesn't have to know very much to take this approach.
- However, if it wants to send the minimum Decisions that it can get away with evaluating, then it must decide which Policies to evaluate, and it will have to have some basis for selecting a subset of the full set of 10 Policies in the example.
- One way to achieve this is for the code to know it is calling a deny-overrides combining algorithm; it can then evaluate Policies only until it finds a Deny, and then submit just that one Decision, or whatever set of Decisions it has accumulated up to that point.
- However, I claim that requiring the caller to know about the algorithm it is calling, and to prune its policy selection accordingly, has a multitude of undesirable effects, such as:
  - It becomes very unclear to anyone analyzing the PolicySet what criteria are going to be used to select the Decisions to send to the algorithm, which makes the PolicySet nearly impossible to analyze by simply looking at the Policy without knowing details of the implementation.
  - Things get even more obscure when one considers changing the algorithm for the PolicySet. If I change to permit-overrides, then which Policies will be selected to send to the algorithm? Clearly I can't use the same criteria as above, where I stopped when I found the first Deny.
The overall point is that this structure of passing in the processed nodes, as opposed to the unprocessed nodes, puts the burden on the implementer to decide which nodes to send in, and, even worse, in my opinion, makes it nearly impossible for an administrator to clearly say what the behavior is going to be based only on the XACML language of the PolicySet and its descendants. On the other hand, I believe the proposal I sent earlier, which requires some modest changes, with some defined benefits, makes all these potential issues go away. This is for the record only, not to impact the decisions made at the meeting today.

    Thanks,
    Rich

On 5/19/2011 11:03 AM, Erik Rissanen wrote:

Hi Rich, Sorry, but I don't understand your second point. The PDP would just use the different algorithm code, right? Regards, Erik

On 2011-05-19 16:23, rich levinson wrote:

Hi Erik, Another way to see what the problem is, for the example in the prev email, is the following: Let's say I had a really smart PDP that knew the algorithm was deny-overrides, and so when it first encountered a child Policy in the PolicySet that produced a Deny, it would submit only that Decision to the algorithm. That sounds like it might be reasonable as an optimization. However, let's assume instead that someone changed the combining algorithm to permit-overrides. Now it would seem the PDP would have to be smart enough to pick a different result to submit to the algorithm, i.e. it would have to look for a Permit and then stop processing the rest of the Policies. The overall point is that, as written, I think the algorithms are going to end up causing a lot more questions of the type we have been discussing.
The proposal I submitted, I believe, makes all these types of problems go away: http://lists.oasis-open.org/archives/xacml/201105/msg00043.html

    Thanks,
    Rich

On 5/19/2011 9:42 AM, Erik Rissanen wrote:

Hi Rich, Yes, I noticed too that there is nothing which says what the input array is. It should be said that it's the result of evaluating all the child nodes which are to be combined. But just because the semantics of the algorithm are specified like this does not mean that an implementation has to actually act like this to get the result. As I said in my other email, the way you wrote it is not necessarily the most efficient implementation, and a PDP may be smarter than that. The spec is not intended to put restrictions on the implementation, as long as it gets the same result as specified. Best regards, Erik

On 2011-05-19 15:31, rich levinson wrote:

To TC: In order to make this issue concrete, I gave an example earlier that we can discuss that I think will address this problem: http://lists.oasis-open.org/archives/xacml/201105/msg00036.html

... if I am in a deny-overrides PolicySet, and there are 10 child (deny-overrides) Policy elements, for example, and I evaluate the first Policy and it in turn evaluates its child Rules, if its first, or any other, rule returns a Deny, then the first Policy will return a Deny, and there is no need to evaluate the other 9 Policy elements, since the decision will be a Deny regardless of what they return. I think that C.2, as written, insists that the other 9 Policy elements must be calculated, since the input to the algorithm is an array of Decisions, presumably the Decisions resulting from evaluating the child Policies of the PolicySet. The question is: how are the child Policies selected for input? I would think that is exactly what the combining algorithm determines; i.e.
it combines the Policies according to a script and determines which decision from which Policy governs its result. If that is not the case, then what determines the contents of the input array? I believe the algorithm must clearly state what the inputs are that it is processing. As written, it appears to me that C.2 effectively requires running the algorithm on the child Policies before passing the results in to run the algorithm.

    Thanks,
    Rich

On 5/19/2011 8:44 AM, rich levinson wrote:

Hi Erik, In principle, I would agree that if the p-code produces the same results from an end-user perspective, then the details of the implementation (namely the p-code translated to a native language) would be incidental. However, the way the spec is currently set up, the verbal description, lines 5727-5738, is defined to be "a non-normative informative description of this combining algorithm", whereas the p-code following line 5739 is defined to be "the normative specification of this combining algorithm". Therefore, I think it is necessary to raise an issue as to what aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm?

    Thanks,
    Rich

On 5/19/2011 6:02 AM, Erik Rissanen wrote:

Hi Rich, If it has the same results as the current specification, I would prefer to not make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule, for instance. That should not be necessary, since the base case for a Rule is already covered in another section.
Best regards, Erik


  • 23.  RE: [xacml] wd-19 indeterminate policy target handling

    Posted 05-20-2011 06:36
    Rich, From: rich levinson [ mailto:rich.levinson@oracle.com ] Sent: Thursday, May 19, 2011 11:38 PM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling >> Just to answer your question (see mtg minutes for overall issue disposition), I will try again to make the point: Assume the current processing location is that the PolicySet has just been called for evaluation, and it "knows" it needs to obtain a decision by sending an array of Decisions from its children to the combining algorithm. << The current WD presents a conceptual explanation of how the algorithm should work, not an actual implementation. Therefore, the "array" of Decisions is not an actual array in the programming language sense. Instead, the algorithm could be implemented in a programming language to receive an iterator of Decisions, and the iterator could be lazy, meaning it will only evaluate the next Decision when that Decision is requested by the combining algorithm. This is an implementation detail, one that we should leave to vendors implementing the spec. All we should care about is that we have a description that allows vendors to implement the algorithms in a consistent manner, and that there is at least one efficient implementation imaginable. Since I've given one above, I think our work here is done. Do you agree? >> So, the code that knows it needs to send the array of Decisions to the combining algorithm now needs to "evaluate" each Policy that it wants a Decision from and add that Decision to the array it is going to pass. << This is not true, see my comment about lazy iterators above. >> * If it wants to send Decisions from all its children, then there are no issues because it doesn't have to "know" very much to take this approach. 
* However, if it wants to send the minimum Decisions that it can get away with evaluating then it must decide which Policies to evaluate, and it will have to have some basis for selecting a subset of the full set of 10 Policies in the example. << No, the combining algorithm should drive which subset to evaluate. >> o One way to achieve this is for the code to know it is calling a deny-overrides combining algorithm, and it can then only evaluate Policies until it finds a Deny, and then only submit that one Decision or whatever set of Decisions it has accumulated up to that point. o However, I claim that by requiring the caller to know about the algorithm that it is calling and to prune its policy selection accordingly has a multitude of undesirable effects, such as: * It now becomes very unclear to anyone analyzing the PolicySet, what criteria are going to be used to select the Decisions to send to the algorithm, and as a result makes the PolicySet nearly impossible to analyze by simply looking at the Policy and not knowing details of the implementation. << Not at all. The current WD specifies the conceptual workings of the algorithm, and that should be enough to analyze the PolicySet. >> * Things get even more obscure when one considers changing the algorithm for the PolicySet. If I change to permit-overrides, then which Policies will then be selected to send to the algorithm? o Clearly I can't use the same criteria as above where I stopped when I found the first Deny. * The overall point is that this structure of passing in the processed nodes as opposed to the unprocessed nodes puts the burden on the implementer to decide which nodes to send in << No, see my comment about lazy iterators above. >> , and, even worse, in my opinion, makes it nearly impossible for an administrator to clearly say what the behavior is going to be based only on the XACML language of the PolicySet and its descendants. << I don't understand that claim at all. 
The current description is clear to me as to what effect the combining algorithm should have. What do you think an administrator will not understand? >> On the other hand, I believe the proposal I sent earlier, which requires some modest changes, with some defined benefits, makes all these potential issues go away. << I agree with that statement, but I still prefer a conceptual description over one that is implementation-centric. We should not steer vendors in a certain direction, but give them as much freedom as possible to come up with innovative solutions. Thanks, Ray
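Ray's lazy-iterator point can be sketched with a Python generator (all names are illustrative, not spec text): the caller hands the combining algorithm what looks like a sequence of Decisions, but each child Policy is evaluated only when the algorithm actually pulls its decision, so the short-circuit still happens without the caller knowing which algorithm it is feeding:

```python
# Sketch of the lazy-iterator idea: the "array of Decisions" is a
# generator, so a child Policy is only evaluated when the combining
# algorithm asks for its decision.

evaluated = []                     # records which children were evaluated

def make_policy(name, decision):
    def evaluate():
        evaluated.append(name)     # side effect lets us observe laziness
        return decision
    return evaluate

def lazy_decisions(policies):
    for policy in policies:
        yield policy()             # evaluate on demand, one at a time

def deny_overrides(decisions):
    # Simplified deny-overrides over any iterable of decisions.
    permit = False
    for decision in decisions:
        if decision == "Deny":
            return "Deny"          # stop: remaining children never run
        if decision == "Permit":
            permit = True
    return "Permit" if permit else "NotApplicable"

policies = [make_policy("P1", "Permit"),
            make_policy("P2", "Deny"),
            make_policy("P3", "Permit")]   # P3 is never evaluated

result = deny_overrides(lazy_decisions(policies))
print(result, evaluated)           # Deny ['P1', 'P2']
```

Here the generator is abandoned as soon as deny_overrides returns on P2's Deny, so P3 is never evaluated: the combining algorithm, not the caller, drives which children get evaluated, while the conceptual signature still reads as "a sequence of Decisions".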


  • 24.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-20-2011 22:07
Hi Ray (and Paul, and TC),

(Part 1 is to Ray/TC, Part 2 is a rsp to Paul re: yesterday's mtg about whether this discussion is part of the issue raised on the list, and Part 3 is some items I found in the spec that might be quick fixes. Decided to combine them into 1 email because none of the parts really represent a separate issue and all are loosely related to the current issue.)

Hi Ray, Thanks for taking the time to comment on the email where I tried to explain my ongoing concern (which I basically agreed at yesterday's meeting to defer to an implementation guide document that we have had an ongoing commitment to produce at some unspecified future time :) ). In any event, the primary reason I am looking for closure on this issue is that I get questions from developers on a regular basis about XACML, and now that 3.0 is imminent those questions are turning in that direction and I need to be able to explain what's going on.

That being said, maybe we can rephrase the issue as to what it means when it says on line 5739: "The following pseudo-code represents the normative specification of this combining algorithm." The first line of the algorithm that follows appears to be some kind of interface signature:

    Decision denyOverridesCombiningAlgorithm(Decision[] decisions)

It sounds to me like what you are saying is that when a software development organization sees that in the specification, said organization should be completely comfortable implementing the parameter, Decision[] decisions, by passing in an Iterator over a collection of Decision objects that have not yet been resolved. In other words, if a development organization first implements this algorithm by passing in a precalculated array, as represented by the normative interface signature, and then complains that it is too inefficient, I can tell them to pass in an Iterator over a collection of unprocessed nodes and just calculate the Decision objects in the first line of the loop.
For example, I could suggest that the first line of the loop look something like:

    Decision denyOverridesCombiningAlgorithm(Iterator decisions)
    ...
    for ... {
        Decision decision = evaluate(decisions.next())

where Iterator.next() returns a PolicySet, Policy, or Rule, and evaluate() produces the Decision. If that is what you are suggesting and people agree, then I am ok with that as well, I think, assuming people really believe that this is an acceptable view of the specification.

    Thanks,
    Rich

*******************************  end part 1

Part 2: re: Paul's comment at yesterday's mtg that the original question was about the Target, and not what yesterday's discussion was about:

Also, to Paul's comment at yesterday's meeting that this discussion was originally about the Target, which I agree with, my response at the mtg was that the aspect we are currently discussing arose from the analysis of the Target. In particular, one of the key issues w the Target was whether or not the underlying Rules (or Policies or PolicySets) needed to be evaluated. My initial inclination was that they did not, but the following discussion convinced me otherwise.

What ultimately convinced me was the realization that, w/o looking at the rules, since the rules are the only place where D and P are defined, there was no way to then assign D, P, or DP to the Indeterminate found by the Target.

Then, based on that understanding, it became clear to me that it was not sufficient to just look at the list of D's and P's, because, if the Target had not been Indeterminate, then the rules would be evaluated and the result might be completely different than if one did not evaluate the Rules. It seemed to me a bad idea that the collection of rules could produce a different result dependent on whether the Target was Indeterminate or not.
Therefore, since we have to evaluate the Rules when the Target is not Indeterminate (and not NotApplicable), then for consistency, if we wanted a D, P, or DP to come out of the Rules, then in both cases, regardless of the result of the Target, we had to evaluate the Rules. It was while going thru this analysis of what it meant whether or not we evaluated the rules that this other aspect of the issue that we are discussing came up.

*******************************  end part 2

Part 3: some minor issues that could possibly be quickly addressed:

1. line 569: "A policy comprises four main components" should be: "A policy comprises five main components" (because Advice was added)

2. I found some lack of clarity wrt "Named Attribute" in a few spots: section 5.9 (1609-10) is somewhat ambiguous wrt what it is trying to say, and clarification is needed in section 5.29 as to what is the "named attribute" vs the "attribute"; i.e. the named attribute appears to be specified by the subelements of the AttributeDesignator element (which turns out to be consistent w the defn in sec 7.3, but not particularly w the glossary). Also, lines 47-50 should probably change: "the identity of the attribute holder (which may be of type: subject, resource, action or environment)" to: "the category of the entity holding the attribute (which may be of category: subject, resource, action, environment, or other)". Note: see 2b below for usage of the term "entity".

2a. Named attribute: the best defn appears in sec 7.3, lines 3220-3221: "A named attribute is the term used for the criteria that the specific attribute designators use to refer to particular attributes in the <Attributes> elements of the request context." Recommend ref'ing this in section 5.29, end of 1st para.

2b.
I think, in general that the Attributes element is missing a noun. i.e. an     Attributes element is a collection of Attribute elements, where the     Attributes element specifies the Category to which this collection     belongs. The question is: Category of what?     i.e. there is no word to collectively refer to Subject, Resource, Action,     Environment, etc. Simply saying they are the name of a Category     is incomplete.     In any event, my suggestion is we start referriing to a collection of     a specific category of attributes as collectively referring to an entity .     This is pretty much consistent with the document as is, but could     use a couple of tweaks, which could be quickly identified if people     agree to this change.     4. line 3093: should last word of first sentence be undefined         rather than defined ? *******************************  end part 3 On 5/20/2011 2:34 AM, remon.sinnema@emc.com wrote: 2B109622CFC23D4BA8349254A011FBE001497D4E20@MX27A.corp.emc.com type= cite > Rich, From: rich levinson [ mailto:rich.levinson@oracle.com ] Sent: Thursday, May 19, 2011 11:38 PM To: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling Just to answer your question (see mtg minutes for overall issue disposition), I will try again to make the point: Assume the current processing location is that the PolicySet has just been called for evaluation, and it knows it needs to obtain a decision by sending an array of Decisions from its children to the combining algorithm. << The current WD presents a conceptual explanation of how the algorithm should work, not an actual implementation. Therefore, the array of Decisions is not an actual array in the programming language sense. 
Instead, the algorithm could be implemented in a programming language to receive an iterator of Decisions, and the iterator could be lazy, meaning it will only evaluate the next Decision when that Decision is requested by the combining algorithm. This is an implementation detail, one that we should leave to vendors implementing the spec. All we should care about is that we have a description that allows vendors to implement the algorithms in a consistent manner, and that there is at least one efficient implementation imaginable. Since I've given one above, I think our work here is done. Do you agree? So, the code that knows it needs to send the array of Decisions to the combining algorithm now needs to evaluate each Policy that it wants a Decision from and add that Decision to the array it is going to pass. << This is not true, see my comment about lazy iterators above. * If it wants to send Decisions from all its children, then there are no issues because it doesn't have to know very much to take this approach. * However, if it wants to send the minimum Decisions that it can get away with evaluating then it must decide which Policies to evaluate, and it will have to have some basis for selecting a subset of the full set of 10 Policies in the example. << No, the combining algorithm should drive which subset to evaluate. o One way to achieve this is for the code to know it is calling a deny-overrides combining algorithm, and it can then only evaluate Policies until it finds a Deny, and then only submit that one Decision or whatever set of Decisions it has accumulated up to that point. 
o However, I claim that requiring the caller to know about the algorithm that it is calling, and to prune its policy selection accordingly, has a multitude of undesirable effects, such as: * It now becomes very unclear to anyone analyzing the PolicySet what criteria are going to be used to select the Decisions to send to the algorithm, and as a result makes the PolicySet nearly impossible to analyze by simply looking at the Policy and not knowing details of the implementation. << Not at all. The current WD specifies the conceptual workings of the algorithm, and that should be enough to analyze the PolicySet. * Things get even more obscure when one considers changing the algorithm for the PolicySet. If I change to permit-overrides, then which Policies will then be selected to send to the algorithm? o Clearly I can't use the same criteria as above, where I stopped when I found the first Deny. * The overall point is that this structure of passing in the processed nodes, as opposed to the unprocessed nodes, puts the burden on the implementer to decide which nodes to send in << No, see my comment about lazy iterators above. , and, even worse, in my opinion, makes it nearly impossible for an administrator to clearly say what the behavior is going to be based only on the XACML language of the PolicySet and its descendants. << I don't understand that claim at all. The current description is clear to me as to what effect the combining algorithm should have. What do you think an administrator will not understand? On the other hand, I believe the proposal I sent earlier, which requires some modest changes, with some defined benefits, makes all these potential issues go away. << I agree with that statement, but I still prefer a conceptual description over one that is implementation-centric. We should not steer vendors in a certain direction, but give them as much freedom as possible to come up with innovative solutions. Thanks, Ray
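The lazy-iterator idea described above can be sketched in code. This is a minimal illustration only, not spec text: the `Decision` enum, `PolicyNode` interface, and the `lazyIterator`/`denyOverridesShortCircuit` names are all hypothetical stand-ins for the spec's evaluation machinery.

```java
import java.util.Iterator;
import java.util.List;

// Sketch of a "lazy iterator of Decisions": the combining algorithm sees
// only Decisions, but each child Policy is evaluated only when next() is
// actually called.
public class LazyDecisions {
    enum Decision { PERMIT, DENY, NOT_APPLICABLE, INDETERMINATE }

    // Hypothetical stand-in for a child PolicySet/Policy/Rule node.
    interface PolicyNode {
        Decision evaluate();
    }

    static Iterator<Decision> lazyIterator(List<PolicyNode> children) {
        Iterator<PolicyNode> nodes = children.iterator();
        return new Iterator<Decision>() {
            public boolean hasNext() { return nodes.hasNext(); }
            public Decision next() { return nodes.next().evaluate(); } // evaluation deferred to here
        };
    }

    // Simplified consumer: a deny-overrides combiner that returns on the
    // first Deny, so children after that Deny are never evaluated at all.
    static Decision denyOverridesShortCircuit(Iterator<Decision> decisions) {
        boolean sawPermit = false;
        while (decisions.hasNext()) {
            Decision d = decisions.next();
            if (d == Decision.DENY) return Decision.DENY;
            if (d == Decision.PERMIT) sawPermit = true;
        }
        return sawPermit ? Decision.PERMIT : Decision.NOT_APPLICABLE;
    }
}
```

With this shape, the caller never needs to know which combining algorithm it is feeding; the short-circuiting lives inside the algorithm, which is the efficiency argument being made above.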


  • 25.  RE: [xacml] wd-19 indeterminate policy target handling

    Posted 05-21-2011 09:58
Rich, From: rich levinson [ mailto:rich.levinson@oracle.com ] Sent: Saturday, May 21, 2011 12:06 AM To: Sinnema, Remon Cc: xacml@lists.oasis-open.org Subject: Re: [xacml] wd-19 indeterminate policy target handling >> That being said, maybe we can rephrase the issue as to what it means when it says on line 5739: "The following pseudo-code represents the normative specification of this combining algorithm." The first line of the algorithm that follows appears to be some kind of interface signature:

  Decision denyOverridesCombiningAlgorithm(Decision[] decisions)

<< Yes, but remember this is *pseudo*-code, not actual code. IMHO, the implementation doesn't necessarily need to have the exact same interface, as long as it's conceptually equivalent, IOW, as long as it gives the same results. I think this is what Erik has been saying all along. >> It sounds to me like what you are saying is that when a software development organization sees that in the specification, said organization will be completely comfortable implementing the parameter, Decision[] decisions, by passing in an Iterator to a collection of Decision objects that have not yet been resolved. << Yes, because arrays and iterators are conceptually equivalent in this case, since the random access part of an array (that an iterator doesn't have) isn't needed for implementing any of the combining algorithms. >> In other words, if a development organization first implements this algorithm by passing in a precalculated array as represented by the normative interface signature, and they then complain that it is too inefficient, I can tell them to pass in an Iterator to a collection of unprocessed nodes and just calculate the Decision objects in the first line of the loop. For example, I could suggest that the first line of the loop look something like:

  Decision denyOverridesCombiningAlgorithm(Iterator decisions)
  ...
  for ...
  {
      Decision decision = evaluate(decisions.next())

where Iterator.next() returns a PolicySet, Policy, or Rule, and "evaluate()" produces the Decision. << Well, the iterator should return Decisions, not unprocessed nodes, because that's what the algorithm needs to work on. The implementation of the iterator should do the evaluation when its next() method is called. The combining algorithms shouldn't need to know anything about evaluating PolicySets, Policies, or Rules. >> If that is what you are suggesting and people agree, then I am ok with that, as well, I think, assuming people really believe that this is an acceptable view of the specification. << I'm interested to see if anybody has any objections to this view, since this is actually how I implemented it ;) Thanks, Ray
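The claim above that arrays and iterators are conceptually equivalent here rests on the combiner needing only sequential access, never random access. A small sketch (hypothetical names, and a deliberately simplified combiner that omits the Indeterminate handling):

```java
import java.util.Arrays;
import java.util.Iterator;

// The combiner only ever walks its input front to back, so it can be
// written once against Iterator and fed either a precomputed array or a
// lazily evaluated stream of Decisions; the combiner cannot tell the
// difference.
public class ArrayIteratorEquivalence {
    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    static Decision denyOverrides(Iterator<Decision> decisions) {
        boolean sawPermit = false;
        while (decisions.hasNext()) {
            Decision d = decisions.next();
            if (d == Decision.DENY) return Decision.DENY; // no need to look further
            if (d == Decision.PERMIT) sawPermit = true;
        }
        return sawPermit ? Decision.PERMIT : Decision.NOT_APPLICABLE;
    }

    // Array form of the pseudo-code signature: just adapt to an Iterator.
    static Decision denyOverrides(Decision[] decisions) {
        return denyOverrides(Arrays.asList(decisions).iterator());
    }
}
```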


  • 26.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-22-2011 21:11
Hi Ray/TC, I want to emphasize that I consider this a serious issue, and hope that people can bear with what some might consider a waste of time picking at details of marginal relevance to the specification. In particular, to emphasize the seriousness, please consider the following points: Section 2.1, Requirements, lists the basic requirements for a policy language for expressing information systems security policy, of which the first item is: "To provide a method for combining individual rules and policies into a single policy set that applies to a particular decision request." In order to test compliance with the specification, it is necessary to be able to define meaningful tests that an implementation can execute to demonstrate such compliance. I also want to say that I consider the fact that Ray has implemented to this spec, and has provided detail in a prev email as to the nature of that impl, as valuable input for analyzing the integrity of the algorithm that we are discussing, which is lines 5740-5799 of section C.2 of WD19, which is declared on line 5739 to represent "the normative specification of this combining algorithm". All that being said, there are a few points that Ray mentioned that I would like to respond to in the interests of advancing this discussion toward some kind of closure where we are all satisfied that the spec is meeting our objectives (which, while unstated, I assume are well understood in conventional software engineering terms). To the specific points: Ray said: "remember this is *pseudo*-code, not actual code". I agree, but would also like to point out that pseudo-code, in order to be meaningful, must be consistent in its use of terminology; i.e. the terms used cannot arbitrarily change their meaning partway through the algorithm.
In order to test this consistency, please look again at the algorithm signature:

  Decision denyOverridesCombiningAlgorithm(Decision[] decisions)

If the p-code is considered to be consistent, then I think it should be self-evident that what is returned is an instance of Decision, where Decision may be implicitly defined as one of the values returned on any of lines:

  5751 return Deny
  5780 return Indeterminate{DP}
  5784 return Indeterminate{DP}
  5788 return Indeterminate{D}
  5792 return Permit
  5796 return Indeterminate{P}
  5798 return NotApplicable

Since this set of return values actually represents the full set of possible return values defined by XACML (where the D,P,DP can be considered internal combining-algorithm intermediate values that are removed from the final result, which is just Indeterminate with no qualifier, when any of the qualified Indeterminates are the preliminary final decision), then for consistency, the input, Decision[] decisions, of necessity is a collection of Decision instances, each of which can be one of the list enumerated above (or, as discussed below, conceptually can be some kind of object that will resolve to one of these instances when pulled from the input collection). The next point was that arrays and iterators are "conceptually equivalent", at least in terms of p-code. For the moment, as I prev indicated, I will accept that premise, because I understand the point to be that the distinction between these two forms of collection is not particularly relevant to the concerns of the combining algorithm. The next point was that "the iterator should return Decisions, not unprocessed nodes". As mentioned above, I agree with this point as being required in order for the p-code to be self-consistent.
However, it appears to me that, given that, the iterator must be envisioned to be operating on a collection of yet-to-be-resolved objects, such as an object that is equivalent to "evaluate(node)", where the evaluate is not executed until the iterator performs the next() operation. To me, this is not an ordinary construct that one can assume is meant by the algorithm specification. On the one hand the algorithm explicitly says the output is one of the enumerated Decision instances listed above, but implicitly says that a collection of these same objects on input may be regarded as unresolved instances, to which some sort of handle and evaluation mechanism is attached. This is the primary issue that I have with Ray's response: while I understand what is being said, I do not think it is reasonable to expect a reader of the spec to consider such an abstraction to be implicit in the specification of the algorithm; i.e. a clearly defined output object, with no further explanation, is expected to be able to be envisioned as some kind of object with which there is little general familiarity. So, I guess what I am looking for is some kind of definition up front that will state that the Decision[] decisions input represents the set of values from the child nodes of the current node, and that either the decisions independently obtained from those nodes (for "static processing") or the unprocessed nodes themselves (for "dynamic processing") may be passed in. If we can reach agreement on that then I think there are a couple of other minor details that we should address:
recommend that text similar to the 2nd para above be included in section C.1 to describe the assumptions about the input Decision[].
recommend that "normative" be applied to the definitions on lines 5701-5712.
recommend that a statement be added that Indeterminate is what is returned as the final decision (i.e.
the D,P,DP qualifiers are stripped off - I don't recall having seen that mentioned anywhere).
recommend that section 7 be updated in some places mentioned before in earlier emails so that the terminology is consistent. Explicitly identifying each spot pending agreement on the above.
    Thanks,
    Rich
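The qualifier-stripping rule requested above (extended Indeterminate{D}/{P}/{DP} values are internal to combining and collapse to plain Indeterminate in the final decision) amounts to a small mapping. A sketch with hypothetical enum names:

```java
// Sketch: qualified Indeterminates exist only inside the combining
// algorithms; the decision returned as the final result carries no
// qualifier. Enum and method names here are made up for illustration.
public class FinalDecision {
    enum Decision {
        PERMIT, DENY, NOT_APPLICABLE,
        INDETERMINATE, INDETERMINATE_D, INDETERMINATE_P, INDETERMINATE_DP
    }

    static Decision stripQualifier(Decision d) {
        switch (d) {
            case INDETERMINATE_D:
            case INDETERMINATE_P:
            case INDETERMINATE_DP:
                return Decision.INDETERMINATE; // qualifier dropped at the boundary
            default:
                return d; // Permit, Deny, NotApplicable pass through unchanged
        }
    }
}
```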


  • 27.  Re: [xacml] wd-19 indeterminate policy target handling / wd-20 /proposed resolution

    Posted 05-25-2011 21:57
To TC, In an attempt to bring the combining algorithm issue to a satisfactory close, and following the guidance of the TC at last week's meeting: http://lists.oasis-open.org/archives/xacml/201105/msg00053.html (see note about continuing discussion on a separate impl strategy thread), I have uploaded a proposed implementor's guidance section in the context of resurrecting the old Implementor's Guide, which we have been talking about doing for a long time, but there has not been a threshold issue to initiate the effort. I considered this issue significant enough to pass that threshold, and thus uploaded a proposed starting point, as a modification to the long-dormant original Implementor's Guide. I believe I have captured the issue and presented information to implementors and users alike that should explain the issue that people may encounter and the strategy for addressing it. I believe I have incorporated everyone's input, and this can serve the purpose of handling concerns regarding this issue that may come up in the future. In addition, the doc contains a ref to some previous work that was done on this exact same issue, which should provide another reference point for understanding both the issue and possible solution spaces for addressing it. With this explanatory note, I believe the current spec can be used pretty much as is (there are a few suggested cosmetic changes to facilitate consistency in some of the earlier emails that may be useful, although none are critical enough to require separate action). The information in the first draft is intended to be the minimum necessary to bridge the conceptual issues that have been raised against the current spec, and to set the perspective so that developers and users can clearly understand what their options are. The text was written with the changes currently in WD-20, so everything should be current and consistent.
    Thanks,
    Rich


  • 28.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 13:36
Hi Rich, None of the specs are meant to normatively specify exactly how an implementation must be built, only that it must produce the results as specified. I thought it said so in the spec actually, but I cannot find that reference now, except for A.3.17. If the case were that one must do the actual implementation like it is described in the pseudo-code, then I would not be pleased with what you suggested either, since that is not how I would implement it. ;-) Best regards, Erik

On 2011-05-19 14:44, rich levinson wrote:

Hi Erik, In principle, I would agree that if the p-code produces the same results from an end user perspective, then the details of the implementation (namely the p-code translated to a native language) would be incidental. However, the way the spec is currently set up, the verbal description, lines 5727-5738, is defined to be: "a non-normative informative description of this combining algorithm", whereas the p-code, following line 5739, is defined to be: "the normative specification of this combining algorithm". Therefore, I think it is necessary to raise an issue as to what aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm?
    Thanks,
    Rich

On 5/19/2011 6:02 AM, Erik Rissanen wrote:

Hi Rich, If it has the same results as the current specification, I would prefer not to make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule, for instance. That should not be necessary, since the base case for a Rule is already covered in another section.
Best regards, Erik

On 2011-05-19 08:01, rich levinson wrote:

Hi Erik, As I indicated in prev email, this 2nd draft is a slight cleanup of the syntax, with some additional comments at the end:

Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes)  // see #1 below
{
    Boolean atLeastOneErrorD = false;
    Boolean atLeastOneErrorP = false;
    Boolean atLeastOneErrorDP = false;
    Boolean atLeastOnePermit = false;
    for ( i=0; i<lengthOf(nodes); i++ )
    {
        Decision decision = evaluate(nodes[i]); // see #2 below
        if (decision==Deny)
        {
            return Deny; // loop breakout (#2 below)
        }
        // the next two "if"s are the same as C.10:
        if (decision==Permit)
        {
            atLeastOnePermit = true;
            continue; // i.e. skip the rest of the logic for current
                      // iteration of loop, and start next iteration
        }
        if (decision==NotApplicable)
        {
            continue;
        }
        // Ind{} (no qualifier) can only be returned for rules (#3 below)
        if (decision==Indeterminate)
        {
            // cast node to Rule, then get its effect
            if ( effect((Rule)nodes[i])==Deny )
            {
                atLeastOneErrorD = true;
            }
            else
            {
                atLeastOneErrorP = true;
            }
            continue;
        }
        if (decision==Indeterminate{D})  { atLeastOneErrorD  = true; }
        if (decision==Indeterminate{P})  { atLeastOneErrorP  = true; }
        if (decision==Indeterminate{DP}) { atLeastOneErrorDP = true; }
    } // end for loop
    if (atLeastOneErrorD==true && (atLeastOneErrorP==true || atLeastOnePermit==true))
    {
        atLeastOneErrorDP = true;
    }
    if (atLeastOneErrorDP==true) { return Indeterminate{DP}; }
    if (atLeastOneErrorD==true)  { return Indeterminate{D}; }
    if (atLeastOnePermit==true)  { return Permit; }
    if (atLeastOneErrorP==true)  { return Indeterminate{P}; }
    return NotApplicable;
} // end algorithm

It is intended to produce the same results in every case as the current C.2 algorithm.
The differences that it embodies are:
1. it uses nodes as input rather than decisions, where a node can be any of: {Rule, Policy, PolicySet}
2. it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit biased-decision type of the algorithm)
3. it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established; i.e. the qualifiers D,P originate from the effect of rules. DP is a result of combining. The only place an unqualified Indeterminate (Indeterminate{}) can appear is in the decision that results from evaluation of a Rule, or from the evaluation of a Target. However, the unqualified Ind from a Target will always be combined to a qualified decision, as shown in WD19 Table 7. Also note that the above algorithm should be consistent with Table 4 in section 7.10, because it is the statement at the beginning of the loop, evaluate(nodes[i]), which, when nodes are rules, will produce a decision that is an unqualified Ind{}. However, an unqualified Ind{} can never escape the algorithm, because after the end of the loop only qualified Ind{D,P,DP} can be returned.
4. It should reduce to the 2.0 algorithms when the constraints that were implicit in 2.0 are applied (i.e. that the "property" does not apply to policy). This objective needs to be qualified by the fact that in 2.0 deny-overrides and permit-overrides were not completely symmetric, as d-o did not allow any Indeterminate to be returned, whereas p-o did. I believe the TC decided when we changed to qualified Indeterminates that we would drop this anomaly as being unnecessary, so it does not appear in the new algs.

Note that the evaluate(nodes[i]) is recursive, and this algorithm should be viewed as being applied starting with the top PolicySet, and processing all children as required by the evaluations.
Note also that there is an intermediate layer of selecting a combining algorithm before the next recursive evaluate(nodes[i]) is called. Note also that the recursion must proceed down to the leaf Rules, because evaluate(nodes[i]) will not get any results until a Rule is reached, which effectively stops the recursion. While the above comments might appear complicated, they are only included as guidance for anyone who is interested in delving deeply into the mechanisms that are implicitly present in the evaluation of XACML PolicySets.

Bottom line: the proposal is the algorithm. The comments that appear in the list that follows the algorithm are to help people understand the algorithm. I believe the algorithm should be able to be inserted as is in Section C.2, and, if there is agreement, corresponding algorithms can be prepared for sections C.3 -> C.7. Note C.8, C.9, and the legacy sections can probably remain as they are, since they do not appear to deal with qualified Indeterminates.

    Thanks,
    Rich

On 5/18/2011 11:12 AM, rich levinson wrote:

Hi Erik,

The algorithm w proposed changes in my earlier email in "first draft" form was this:

Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) { // see #1 below
    Boolean atLeastOneError = false;
    Boolean atLeastOneErrorD = false;
    Boolean atLeastOneErrorP = false;
    Boolean atLeastOneErrorDP = false;
    Boolean atLeastOnePermit = false;
    for ( i=0; i<lengthOf(nodes); i++ ) {
        Decision decision = evaluate(nodes[i]); // see #2 below
        if (decision==Deny) {
            return Deny; // loop breakout (#2 below)
        }
        // the next two "if"s are the same as C.10:
        if (decision==Permit) {
            atLeastOnePermit = true;
            continue; // i.e. skip the rest of the logic for the current
                      // iteration of the loop, and start the next iteration
        }
        if (decision==NotApplicable) {
            continue;
        }
        // see #3 below
        if (decision==Indeterminate) { // this can only be returned for rules
            if (effect((Rule)nodes[i])==Deny) { // cast to Rule to get effect
                atLeastOneErrorD = true;
            }
            else {
                atLeastOneErrorP = true;
            }
            continue;
        }
        // the following is same as C.2 and will evaluate the 3 types
        // of Indeterminate, which can only be returned for Policy and PolicySet
        ... same as lines 5762->5776 (not repeated here)
    } // end for loop
    if (atLeastOneErrorD==true &&
          (atLeastOneErrorP==true || atLeastOnePermit==true)) {
        atLeastOneErrorDP = true;
    }
    if (atLeastOneErrorDP==true) {
        return Indeterminate(DP);
    }
    if (atLeastOneErrorD==true) {
        return Indeterminate(D);
    }
    if (atLeastOnePermit==true) {
        return Permit;
    }
    if (atLeastOneErrorP==true) {
        return Indeterminate(P);
    }
    return NotApplicable;
} // end algorithm

It is intended to produce the same results in every case as the current algorithm. The differences that it embodies (that do not impact the final results) are:

  • it uses "nodes" as input rather than decisions, where a "node" can be any of: {Rule, Policy, PolicySet}
  • it preserves the original logic from 2.0 that shows the evaluate done in each iteration, which enables the loop breakout as soon as a certain final result is obtained (i.e. the explicit "biased" decision type of the algorithm)
  • it preserves (and makes explicit) the logic whereby the D or P status of Indeterminate is established
  • it should reduce to the 2.0 algorithms when the "constraints" that were implicit in 2.0 are applied (i.e. that the property does not apply to policy)

I think it needs one more pass to get the syntax of the Indeterminates consistent w the current defns in the doc, but otherwise I am pretty sure it does the same as the current.
(I will try to clean it up a bit later today, but I am busy until then.)

    Thanks,
    Rich

On 5/18/2011 4:01 AM, Erik Rissanen wrote:

Rich,

Does the algorithm with your proposed changes lead to a different result in any case than the algorithm which is in WD-19?

Best regards,
Erik

On 2011-05-17 15:36, rich levinson wrote:

This is not a performance issue. It is a change from XACML 2.0 that implies that the combining algorithm has as input a set of decisions, as opposed to 2.0 where the combining algorithm had as input a set of Rules, Policies, or PolicySets that had yet to be evaluated. The change implies that the algorithm is working on a different state, which is not the case.

    Thanks,
    Rich

On 5/17/2011 5:07 AM, remon.sinnema@emc.com wrote:

From: Erik Rissanen [mailto:erik@axiomatics.com]
Sent: Tuesday, May 17, 2011 9:35 AM
To: xacml@lists.oasis-open.org
Subject: Re: [xacml] wd-19 indeterminate policy target handling

The spec should strive for the simplest possible explanation of the behavior, not the most efficient implementation.

+1. We can leave it up to vendors to come up with some nice performance tricks.

Thanks,
Ray

---------------------------------------------------------------------
To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail. Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php
---------------------------------------------------------------------
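For anyone who wants to experiment with the combining behavior debated in this thread, the following is a minimal executable sketch (in Python, which is of course not the spec's notation) of the flag-based deny-overrides combination applied to already-evaluated child decisions, i.e. closer to the WD-19 C.2 style than to Rich's node-based draft. The Decision enum and function name are invented for the example; this is an illustration, not the normative text.

```python
from enum import Enum

class Decision(Enum):
    PERMIT = "Permit"
    DENY = "Deny"
    NOT_APPLICABLE = "NotApplicable"
    INDETERMINATE_D = "Indeterminate{D}"
    INDETERMINATE_P = "Indeterminate{P}"
    INDETERMINATE_DP = "Indeterminate{DP}"

def deny_overrides(decisions):
    """Combine already-qualified child decisions, deny-overrides style.

    Mirrors the flag structure of the pseudo-code above: any Deny wins
    immediately; otherwise the flags are resolved after the loop in the
    order Indeterminate{DP}, Indeterminate{D}, Permit, Indeterminate{P},
    NotApplicable.
    """
    at_least_one_error_d = False
    at_least_one_error_p = False
    at_least_one_error_dp = False
    at_least_one_permit = False
    for d in decisions:
        if d is Decision.DENY:
            return Decision.DENY  # "loop breakout": deny overrides everything
        if d is Decision.PERMIT:
            at_least_one_permit = True
        elif d is Decision.INDETERMINATE_D:
            at_least_one_error_d = True
        elif d is Decision.INDETERMINATE_P:
            at_least_one_error_p = True
        elif d is Decision.INDETERMINATE_DP:
            at_least_one_error_dp = True
        # NotApplicable contributes nothing
    # A D-side error combined with any P-side outcome widens to DP
    if at_least_one_error_d and (at_least_one_error_p or at_least_one_permit):
        at_least_one_error_dp = True
    if at_least_one_error_dp:
        return Decision.INDETERMINATE_DP
    if at_least_one_error_d:
        return Decision.INDETERMINATE_D
    if at_least_one_permit:
        return Decision.PERMIT
    if at_least_one_error_p:
        return Decision.INDETERMINATE_P
    return Decision.NOT_APPLICABLE
```

For example, combining Indeterminate{D} with a Permit yields Indeterminate{DP}, because the error on the deny side might have overridden the permit.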


  • 29.  RE: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 13:41
Perhaps this issue could be cleared by adding a note near each block of pseudo code in Appendix C to the effect that "it is not necessary to implement this pseudo-code, only that the results are the same".

Regards,
--Paul

From: Erik Rissanen [mailto:erik@axiomatics.com]
Sent: Thursday, May 19, 2011 08:36
To: xacml@lists.oasis-open.org
Subject: Re: [xacml] wd-19 indeterminate policy target handling

Hi Rich,

None of the specs are meant to normatively specify exactly how an implementation must be built, only that it must produce the results as specified. I thought it said so in the spec actually, but I cannot find that reference now, except for A.3.17. If it were the case that one must do the actual implementation as described in the pseudo code, then I would not be pleased with what you suggested either, since that is not how I would implement it. ;-)

Best regards,
Erik

On 2011-05-19 14:44, rich levinson wrote:

Hi Erik,

In principle, I would agree that if the p-code produces the same results from an end-user perspective, then the details of the implementation (namely the p-code translated to a native language) would be incidental. However, the way the spec is currently set up, the verbal description, lines 5727-5738, is defined to be: "a non-normative informative description of this combining algorithm," whereas the p-code, following line 5739, is defined to be: "the normative specification of this combining algorithm." Therefore, I think it is necessary to raise an issue as to what aspects of this combining algorithm are normative. For example, is it necessary to calculate all the decisions prior to entering the algorithm?

    Thanks,
    Rich

On 5/19/2011 6:02 AM, Erik Rissanen wrote:

Hi Rich,

If it has the same results as the current specification, I would prefer to not make any changes at this stage. There is always the risk that we introduce some error by making changes. Also, I prefer the way the current algorithm is more uniformly described. It does not need to do a cast to a Rule, for instance. That should not be necessary, since the base case for a Rule is already covered in another section.

Best regards,
Erik
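Erik's point above, that the cast to a Rule is unnecessary because the Rule base case is handled elsewhere, and Rich's point that the D/P qualifiers originate from the Rule's Effect, can be illustrated with a small sketch of that base case: an error during Rule evaluation is qualified by the Rule's own Effect, so an unqualified Indeterminate never reaches the combining algorithm. This is an illustrative reading of the Rule evaluation table, in Python with invented names, not spec text.

```python
def evaluate_rule(effect, target_match, condition_result):
    """Sketch of the Rule evaluation base case.

    effect: "Permit" or "Deny" (the Rule's Effect attribute)
    target_match: True, False, or "Indeterminate"
    condition_result: True, False, or "Indeterminate"
    Returns a decision string. Any error is qualified by the Effect,
    so the result is never a bare "Indeterminate".
    """
    if target_match is False:
        return "NotApplicable"
    if target_match == "Indeterminate" or condition_result == "Indeterminate":
        # the unqualified error is qualified by the Rule's Effect
        return "Indeterminate{D}" if effect == "Deny" else "Indeterminate{P}"
    return effect if condition_result is True else "NotApplicable"
```

So a Deny rule whose condition errors contributes Indeterminate{D} to its parent, and the combining algorithm never needs to inspect the Rule itself.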


  • 30.  Re: [xacml] wd-19 indeterminate policy target handling

    Posted 05-19-2011 13:57
I would suggest adding it at the top of section 7 or 10, so it applies to everything, not just the combining algorithms.

Best regards,
Erik

On 2011-05-19 15:40, Tyson, Paul H wrote:

Perhaps this issue could be cleared by adding a note near each block of pseudo code in Appendix C to the effect that "it is not necessary to implement this pseudo-code, only that the results are the same".

Regards,
--Paul