OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) TC

  • 1.  RE: [tosca] Proposal for requirement "occurrences"

    Posted 02-03-2022 17:47

    2) Though I think 2.9.2 is the right way to do it, I'm willing to compromise with the idea of allowing requirement fulfillments to be postponed to runtime, simply because it seems like you'll never agree that 2.9.2
    should be the only way to achieve it. However, I cannot agree that this feature would be applied automatically based on a heuristic. I would insist on a keyword that a user would have to explicitly set on requirement assignments (and I would discourage
    people from using it). I dislike the word "dangling" for various reasons, so I would suggest something like "runtime: true". (Calin's suggestion of a "scope" keyword doesn't make much sense to me, because the important aspect is the WHEN and not the WHERE.)
    In essence the keyword just tells the processor: skip this requirement; it's not your responsibility. It will instead be fulfilled by the orchestrator on Day 1, at the moment the topology is deployed, assuming the orchestrator supports this feature. I imagine
    many orchestrators can't do this easily, if at all, because they consume the TOSCA topology as an input and cannot go back and change it. (You'll see this in action in my upcoming TOSCA for Ansible demo.) Unless you add an initial pre-orchestration phase
    ... but I wouldn't consider it worth the effort when 2.9.2 does everything needed without this awkwardness.
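
    For reference, a minimal sketch of what such a requirement assignment could look like if the proposed keyword were adopted (the "runtime" keyword below is the hypothetical one suggested above and is not part of any published TOSCA grammar; node and type names are illustrative):

      a:
        type: A
        requirements:
          - dependency:
              runtime: true   # hypothetical keyword: the processor skips fulfillment here and
                              # leaves it to the orchestrator at deployment time (Day 1)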
     
    OK, perhaps we're getting to the core of the confusion/disagreement. Requirement fulfillment is not at all a question of WHEN. The issue of WHEN is extremely straightforward. It is either done on Day 0 (Design
    Time) or on Day 1 (Deployment Time):
     


    If a designer knows at design time which target node will be used to fulfill a given requirement, then the designer will add an explicit relationship to that target node in the topology template (a minimal sketch of this form follows below).
    If a designer doesn't know (or doesn't want to dictate) which target node will be used, then the topology template leaves the requirement dangling, and the requirement must be fulfilled at deployment time based on the target capability type, target node type,
    and node filter specified in the requirement.
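
    For contrast, here is a minimal sketch of the first option, using the standard requirement-assignment grammar to name the target node template explicitly (the A and B types are the ones defined in the snippet that follows; template names are illustrative):

      a:
        type: A
        requirements:
          - dependency:
              node: b    # explicit design-time fulfillment: the target node template is named directly

      b:
        type: B
        properties:
          size: large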
     
    There really aren't any other options that work in general. Let's look at the following code snippet, which has an unfulfilled requirement:
     
    node_types:
      A:
        derived_from: tosca.nodes.Root
        requirements:
          - dependency:
              node: B
              occurrences: [ 1, 1 ]
     
      B:
        derived_from: tosca.nodes.Root
        properties:
          size:
            type: string
            constraints:
              - valid_values: [ small, large ]
     
    topology_template:
     
      inputs:
        size_input:
          type: string
          constraints:
            - valid_values: [ small, large ]
         

      node_templates:

        a:
          type: A
          requirements:
            - dependency:
                node_filter:
                  properties:
                    - size: { equal: large }

        b:
          type: B
          properties:
            size: { get_input: size_input }
     
    Clearly it is not possible to fulfill the dependency requirement until input values are available, which is at deployment time. There is no scenario under which a processor can fulfill the requirement any earlier.
     
    So, I agree with Calin. There is no question of WHEN a requirement can be fulfilled. The real question is WHERE a processor/orchestrator will look for target nodes.
     
    Chris

  • 2.  Re: [tosca] Proposal for requirement "occurrences"

    Posted 02-03-2022 19:07
    On Thu, Feb 3, 2022 at 11:46 AM Chris Lauwers <lauwers@ubicity.com> wrote:

        OK, perhaps we're getting to the core of the confusion/disagreement. Requirement fulfillment is not at all a question of WHEN. The issue of WHEN is extremely straightforward. It is either done on Day 0 (Design Time) or on Day 1 (Deployment Time):

    Yes, but here you are exactly saying that there are two moments in which it can happen. So there definitely is a WHEN question. Our disagreement is that you don't find the answer to WHEN problematic (you say it's "extremely straightforward"). I say it has very profound and negative implications:

    1) A runtime requirement means, by definition, that we do not have a complete topology at design time. That's what I meant when I said "unnecessarily sacrificing Day 0 for Day 1". Again, the cost is tremendous, because Day 0 just happens to be one of TOSCA's strong points. So whatever we end up deciding, I will continue to insist that this is an anti-pattern and should not be used by anyone who cares about design. Is it the end of the world? No. There are a lot of awful features in TOSCA that I strongly advise people not to use. For example: workflows. Also, notifications as they stand in 1.3 are quite pointless, and I would suggest better ways of handling events within the existing grammar. But I'll continue to insist that we need to make runtime requirements (assuming we cannot agree to get rid of them and focus on 2.9.2) 100% explicit with a keyword, and not magically postpone requirement fulfillment to runtime without the designer ever knowing that it even happened. That to me is the true disaster, as it entirely breaks TOSCA for designers: a graph can be broken without the designer even knowing.

    2) We could very well be dealing with entirely different systems handling requirement fulfillment at these two times. This is indeed my most common use case: I use existing orchestrators which don't have a "runtime requirements" feature. In fact, I would say most of them don't have anything like this. (A notable exception is Canonical's Charm ecosystem.) Of course we can always add an extra layer in front of an orchestrator that could create that kind of feature for us. But it's a huge complication, and again not worth it for a feature that is ultimately unnecessary. Meanwhile, 2.9.2 is a feature that almost every orchestrator can handle: provision a new resource vs. use an existing one. The devil is in the details of where this "existing one" comes from, but that's where every orchestrator and platform is vastly different. The kinds of scopes available in different platforms, and the relationships between these scopes, are diverse. Sometimes there are inventories or multiple inventories, sometimes there are pools (that need to be managed), and often there are complex policies regarding all of those and rules on how to decide which scope to use. And again, that's why a "global" scope seems wrong to me. The scope itself might be a runtime decision by the orchestrator. It's just not something that a single keyword can help us with.

    The example you provided of properties based on input values is useful here. You are right that the value of get_input is used in Day 1 deployment, but it's not a "runtime" feature, in that it is well defined as, well, an explicit input. It absolutely can be used in Day 0, during validation. It is not at all like attributes that must be retrieved from the platform. And it's definitely not like an inventory of existing resources.
"get_input" relates to our discussion about design variability. Essentially it means that this specific topology template represents a set of possible topologies. This introduces a challenge to design validation. Ideally we would want to test that the entire set is valid, but this challenge is not always easily met. In some cases it might be clear what the variants are and someone could test by just using different inputs. But it might not always be possible if, say, you have many different inputs that interact in complex matrix ways. The set of variations might be enormous. So, to me, none of this has anything to do with "runtime: true". I would still insist that all these variant's validity is a Day 0 issue that conforms to 2.9.2. Actually, your example is odd to me. You want node template "a" to require a "large" node template, so why not just provide it in the design? Even you agreed that 2.9.2 is semantically equivalent to "dangling". Isn't it obviously the better pattern here? Like so: a: type: A requirements: - dependency: node_filter: properties: - size: { equal: large } b: type: B directives: [ select ] properties: size: large Nothing is "dangling". Design is valid. We exactly fulfilled what "a" needs. Node template "b" should not be variable because "a" here has a clear requirement, so the topology would always look like this. It's a singular topology, not a set. The only "change" introduced is what runtime node is selected to actually implement the "b" template, but it's still within this design. The point is that "b" will always be "large", adhering to the property. It's a single clear design. How then could variability work here? Well, I'd have to change the intent of your example, but let me try to stay as close to it as possible: a: type: A requirements: - dependency: node_filter: properties: - size: { equal: { get_input: size:_input } } b: type: B directives: [ select ] properties: size: small c: type: B directives: [ select ] properties: size: large Clearly we now have two possible topologies depending on the input. "a" will connect to either "b" or "c". Both variations are valid and also easy to validate. We just need to provide the two possible input values and see that indeed in both cases there is no error and both designs are valid. And again, nothing is "dangling".