OASIS eXtensible Access Control Markup Language (XACML) TC

 View Only
  • 1.  Behavior of deny-overrides

    Posted 01-30-2007 15:26
    All,
    
    Olav Bandmann has discovered that a couple of the 2.0 combining
    algorithms have "weird" effects sometimes. I am not sure if this was
    already known, or if anything should be done about it.
    
    This is a bit difficult to explain, but I will make an attempt. One can
    define a soundness criterion for policy combining algorithms: A
    combining algorithm is sound iff a constituent policy cannot cause a
    combined decision which the constituent policy would never evaluate to
    in isolation. By this definition, the deny-overrides algorithm is not
    sound. For instance:
    
    P1 has a single rule with a Permit effect. This policy in isolation can
    never evaluate to a Deny.
    
    PS is a policy set with a number of policies:
    
          PS
         /  \
       P2 ... Pn
    
    PS uses a deny-overrides policy combining algorithm. Assume that for a
    request R, PS, as it is, will evaluate to Permit. Now, insert P1 into PS:
    
          PS
        /  |  \
      P2 ... Pn  P1
    
    Let’s evaluate R against PS again and let’s say that P1 evaluates to
    Indeterminate. This will cause PS to be Deny. P1 made PS into a Deny,
    although P1 can never evaluate to Deny in isolation!
    
    There is a similar behavior in the only-one-applicable algorithm.
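The scenario above can be sketched as a small model. This is an illustration only, not the spec's normative pseudocode; the decision names are the standard XACML ones, while the function itself is mine:

```python
# Decision values as plain strings for the sketch.
DENY, PERMIT, INDETERMINATE, NOT_APPLICABLE = (
    "Deny", "Permit", "Indeterminate", "NotApplicable")

def deny_overrides(decisions):
    """Model of the 2.0 deny-overrides policy-combining behavior
    being discussed: any Deny -- or any Indeterminate -- forces Deny."""
    saw_permit = False
    for d in decisions:
        if d == DENY:
            return DENY
        if d == INDETERMINATE:
            # The step at issue: an error in one child is treated
            # as if that child might have denied.
            return DENY
        if d == PERMIT:
            saw_permit = True
    return PERMIT if saw_permit else NOT_APPLICABLE

# Erik's example: P2..Pn permit, then P1 errors out.
print(deny_overrides([PERMIT, PERMIT]))                 # Permit
print(deny_overrides([PERMIT, PERMIT, INDETERMINATE]))  # Deny
```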
    
    So, why is this a problem? Because it makes it harder to understand a
    complex policy. You cannot look at the parts in isolation. For instance,
    if several administrators are responsible for parts of a large policy
    set, then their policies could have effects that they did not intend or
    anticipate.
    
    One could argue that the algorithms have a well defined behavior, so
    they are not “wrong” and they behave as intended. That could be said
    about the only-one-applicable algorithm for instance, or if I for some
    reason want to define an algorithm which inverts everything, makes its
    decisions randomly, or whatever.
    
    However, in the case of deny-overrides, the algorithm could have been
    designed so it would have been sound in this respect.
    
    Olav suspects (and so do I) that the motivation for the current design
    was based on concerns about access being allowed in case of an error: If
    PS uses deny-overrides and P1 evaluates to indeterminate, then perhaps
    P1 could have evaluated to Deny if there was no error? So to be safe, we
    make the whole a Deny. However, it would have been better to make it
    indeterminate, and then have a Deny-biased PEP instead, if denying
    access is important in case of uncertainty in policy evaluation.
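The variant Erik suggests could be sketched like this. Again a sketch only, not any standard algorithm: Indeterminate is propagated rather than converted to Deny, and the deny-on-error choice moves to a Deny-biased PEP (both function names are mine):

```python
DENY, PERMIT, INDETERMINATE, NOT_APPLICABLE = (
    "Deny", "Permit", "Indeterminate", "NotApplicable")

def deny_overrides_sound(decisions):
    """Sound variant: a child's error surfaces as Indeterminate
    instead of being rewritten into Deny by the combiner."""
    saw_indeterminate = False
    saw_permit = False
    for d in decisions:
        if d == DENY:
            return DENY
        if d == INDETERMINATE:
            saw_indeterminate = True
        elif d == PERMIT:
            saw_permit = True
    if saw_indeterminate:
        return INDETERMINATE  # the PEP, not the combiner, decides
    return PERMIT if saw_permit else NOT_APPLICABLE

def deny_biased_pep(decision):
    """A Deny-biased PEP grants access only on an explicit Permit."""
    return decision == PERMIT

print(deny_overrides_sound([PERMIT, PERMIT, INDETERMINATE]))  # Indeterminate
print(deny_biased_pep(INDETERMINATE))                         # False
```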
    
    Is this already known? Is it a concern? To fix it, we could define a new
    combining algorithm which does not have this behavior and recommend
    people to use it instead of the old one.
    
    Regards,
    Erik
    
    
    
    


  • 2.  Re: [xacml] Behavior of deny-overrides

    Posted 01-30-2007 17:43
    I believe Barth and Mitchell made the same comments about EPAL's 
    "First-Applicable" rule in one or both of the two following papers:
    
    1. Adam Barth and John C. Mitchell. Enterprise Privacy Promises and
    Enforcement. In WITS 2005: Proceedings of the 2005 ACM Workshop on
    Issues in the Theory of Security, 2005.
    2. Adam Barth, John C. Mitchell, and Justin Rosenstein. Conflict and
    Combination in Privacy Policy Languages. In WPES 2004: Proceedings of
    the 2004 ACM Workshop on Privacy in the Electronic Society, 2004.
    
    Regards,
    Anne
    
    Erik Rissanen wrote On 01/30/07 10:26,:
    > [...]
    
    -- 
    Anne H. Anderson             Email: Anne.Anderson@Sun.COM
    Sun Microsystems Laboratories
    1 Network Drive,UBUR02-311     Tel: 781/442-0928
    Burlington, MA 01803-0902 USA  Fax: 781/442-1692
    
    


  • 3.  Re: [xacml] Behavior of deny-overrides

    Posted 01-30-2007 18:08
    Hi Erik. Formalisms aside, I think the basic issue you're reacting to is
    that the rule and policy versions of deny-overrides work differently (and
    yes, this is well-known). It's certainly possible that this could add
    confusion to policy analysis, though in practice I've never seen it
    happen. Personally, I don't see any problem with the current behavior.
    There is certainly nothing incorrect about it, although it may well be
    unexpected.
    
    A few comments..
    
    > P1 has a single rule with a Permit effect. This policy in isolation can
    > never evaluate to a Deny.
    
    That's not actually true. Using only the standard 2.0 combining
    algorithms, it can never result in Deny. Since I am free to write my own
    combining algorithm, however, the policy certainly can return Deny.
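Seth's point can be made concrete with a toy combiner (a hypothetical illustration, not anything in the standard): nothing stops a custom combining algorithm from inverting its children, so even a policy whose only rule permits can surface as Deny under it.

```python
def inverting_combiner(decisions):
    """Toy custom algorithm that flips its children's decisions."""
    flipped = {"Permit": "Deny", "Deny": "Permit"}
    for d in decisions:
        if d in flipped:
            return flipped[d]
    return "NotApplicable"

print(inverting_combiner(["Permit"]))  # Deny
```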
    
    > [...]
    > So, why is this a problem? Because it makes it harder to understand a
    > complex policy. You cannot look at the parts in isolation. For instance,
    > if several administrators are responsible for parts of a large policy
    > set, then their policies could have effects that they did not intend or
    > anticipate.
    
    I'm not sure that I agree here. As the author of P1, I shouldn't need to
    know what's above me. All I care about is what my Rules return. Likewise, as
    the consumer of P1, I know that I'm using a specific algorithm that may
    or may not re-interpret the results of my children. If I am both
    parties, then I have a global view. It's true that I can't look at just
    a leaf node in isolation to understand why some decision was made, but
    then that is always the case.
    
    > One could argue that the algorithms have a well defined behavior, so
    > they are not “wrong” and they behave as intended. That could be said
    > about the only-one-applicable algorithm for instance, or if I for some
    > reason want to define an algorithm which inverts everything, makes its
    > decisions randomly, or whatever.
    > 
    > However, in the case of deny-overrides, the algorithm could have been
    > designed so it would have been sound in this respect.
    
    Again, I don't think it's an issue of soundness. The question is whether
    you expect the policy and rule variants of a given algorithm to always
    behave in the same manner. Maybe this would have been the place to start
    (see below), but I think most people today understand the current
    behavior.
    
    > Olav suspects (and so do I) that the motivation for the current design
    > was based on concerns about access being allowed in case of an error: If
    > PS uses deny-overrides and P1 evaluates to indeterminate, then perhaps
    > P1 could have evaluated to Deny if there was no error? So to be safe, we
    > make the whole a Deny. However, it would have been better to make it
    > indeterminate, and then have a Deny biased PEP instead, if denying
    > access is important in case of uncertainty in policy evaluation.
    > 
    > Is this already known? Is it a concern? To fix it, we could define a new
    > combining algorithm which does not have this behavior and recommend
    > people to use it instead of the old one.
    
    My advice would be against adding new algorithms or changing the
    existing ones. Already I get regular questions about why there are
    ordered and un-ordered algorithms, something that was added to help with
    the formal logic of the system. In practice, I've never seen anyone get
    tripped up by the current behavior, and anyone who is truly worried
    about debugging their policies for this case can always use a custom
    algorithm that behaves differently. Just my opinion..
    
    
    seth
    
    
    


  • 4.  Re: [xacml] Behavior of deny-overrides

    Posted 01-31-2007 06:21
    Hi Seth.
    
    My comments are inline.
    
    Seth Proctor wrote:
    > Hi Erik. Formalisms aside, I think the basic issue you're reacting to is
    > that the rule and policy versions of deny-overrides work differenly (and
    > yes, this is well-known). It's certainly possible that this could add
    > confusion to policy analysis, though in practice I've never seen it
    > happen. Personally, I don't see any problem with the current behavior.
    > There is certainly nothing incorrect about it, although it may well be
    > unexpected.
    >
    > A few comments..
    >
    >   
    >> P1 has a single rule with a Permit effect. This policy in isolation can
    >> never evaluate to a Deny.
    >>     
    >
    > That's not actually true. Using only the standard 2.0 combining
    > algorithms, it can never result in Deny. Since I am free to write my own
    > combining algorithm, however, the policy certainly can return Deny.
    >   
    I actually meant to write here that it uses a permit-overrides rule
    combining algorithm. Sorry about that. :-)
    
    
    >> [...]
    >> So, why is this a problem? Because it makes it harder to understand a
    >> complex policy. You cannot look at the parts in isolation. For instance,
    >> if several administrators are responsible for parts of a large policy
    >> set, then their policies could have effects that they did not intend or
    >> anticipate.
    >>     
    >
    > I'm not sure that I agree here. As the author of P1, I shouldn't need to
    > know what's above me. All I care about is what my Rules return. Likewise, as
    > the consumer of P1, I know that I'm using a specific algorithm that may
    > or may not re-interpret the results of my children. If I am both
    > parties, then I have a global view. It's true that I can't look at just
    > a leaf node in isolation to understand why some decision was made, but
    > then that is always the case.
    >   
    
    What we (or actually mostly Olav) have been looking at is finding a
    subset of XACML to be used in our applications which has "sound",
    predictable behavior in some respects, where there is a fairly simple
    connection between the local and global behaviors. This simplifies the
    analysis of a full system and gives better assurance of correctness.
    
    In other words, we are looking for a set of algorithms which we know
    will not "re-interpret the results of my children". We found that the
    permit-overrides may do so, and would prefer to have a version which
    doesn't. Using a stricter variant of the algorithm gives you more
    invariants to work with (just in an informal sense here).
    
    In general, it is of course not possible to say how a local decision
    will affect the global end result. (This is precisely why we try to find
    this subset.)
    
    
    >> One could argue that the algorithms have a well defined behavior, so
    >> they are not “wrong” and they behave as intended. That could be said
    >> about the only-one-applicable algorithm for instance, or if I for some
    >> reason want to define an algorithm which inverts everything, makes its
    >> decisions randomly, or whatever.
    >>
    >> However, in the case of deny-overrides, the algorithm could have been
    >> designed so it would have been sound in this respect.
    >>     
    >
    > Again, I don't think it's an issue of soundness. The question is whether
    > you expect the policy and rule variants of a given algorithm to always
    > behave in the same manner. Maybe this would have been the place to start
    > (see below), but I think most people today understand the current
    > behavior.
    >   
    
    You cannot expect it in general, but we are interested in finding a
    subset of XACML where you could.
    
    >> Olav suspects (and so do I) that the motivation for the current design
    >> was based on concerns about access being allowed in case of an error: If
    >> PS uses deny-overrides and P1 evaluates to indeterminate, then perhaps
    >> P1 could have evaluated to Deny if there was no error? So to be safe, we
    >> make the whole a Deny. However, it would have been better to make it
    >> indeterminate, and then have a Deny biased PEP instead, if denying
    >> access is important in case of uncertainty in policy evaluation.
    >>
    >> Is this already known? Is it a concern? To fix it, we could define a new
    >> combining algorithm which does not have this behavior and recommend
    >> people to use it instead of the old one.
    >>     
    >
    > My advice would be against adding new algorithms or changing the
    > existing ones. Already I get regular questions about why there are
    > ordered and un-ordered algorithms, something that was added to help with
    > the formal logic of the system. In practice, I've never seen anyone get
    > tripped up by the current behavior, and anyone who is truly worried
    > about debugging their policies for this case can always use a custom
    > algorithm that behaves differently. Just my opinion..
    >   
    
    Maybe using a custom algorithm is the best solution for us. I definitely
    do not advocate changing the old algorithm; that would surely lead to
    confusion. And I sympathize regarding the unordered vs. ordered ones. :-)
    
    Regards,
    Erik