OASIS Cyber Threat Intelligence (CTI) TC


Re: [cti] The Adaptive Object-Model Architectural Style

  • 1.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 14:17
Sorry to the others if this is off-topic.

Remember that software is good only if it satisfies its users (meets, or exceeds, their requirements). You can write 'perfect/optimized' code, but if the users are not satisfied, it's bad software.

Then: "If you can't explain it simply, you don't understand it well enough." - Albert Einstein

Challenges are exciting, but sometimes difficult. It's about motivation and satisfaction.

There is no programming language better than another (just like OSes); it's up to you to select the best one for your needs. I did a conceptual map for the 'biggest Ruby project on the internet' (the Metasploit Framework); it's just a picture, but it represents 100 pages of documentation. I think we could optimize our approach to solving problems (as with a maturity model).

2015-11-13 17:02 GMT+03:00 John Anderson <janderson@soltra.com>:
> The list returns my mail, so probably you'll be the only one to get my reply.
>
> Funny, I missed that quote from the document. And it's spot on. As an architect myself, I have built several "elegant" architectures, only to find that the guys who actually had to use them just. never. quite. got it. (sigh)
>
> My best architectures have emerged when I've written test code first. ("Test-first" really does work.) I've learned that writing code--while applying KISS, DRY and YAGNI--saves me from entering the architecture stratosphere. That's why I ask architects to express their creations in code, and not only in UML.
>
> I'm pretty vocal about Python, because it's by far the simplest popular language out there today. But this principle applies in any language: "If the implementation is hard to explain, it's a bad idea." (Another quote from the Zen of Python.) Our standard has a lot that's hard to explain, esp. to newcomers. How can we simplify, so that it's almost a no-brainer to adopt?
>
> Again, thanks for the article, and the conversation.
> I really do appreciate your point-of-view,
> JSA
>
> ________________________________________
> From: Jerome Athias <athiasjerome@gmail.com>
> Sent: Friday, November 13, 2015 8:45 AM
> To: John Anderson
> Cc: cti@lists.oasis-open.org
> Subject: Re: [cti] The Adaptive Object-Model Architectural Style
>
> Thanks for the feedback.
> Kindly note that I'm not strongly defending this approach for the CTI TC (at least for now).
> Since you're using quotes:
> "Architects that develop these types of systems are usually very proud of them and claim that they are some of the best systems they have ever developed. However, developers that have to use, extend or maintain them, usually complain that they are hard to understand and are not convinced that they are as great as the architect claims."
>
> This, I hope, could help our developers understand that what they sometimes find difficult is not difficult by design, but because we are dealing with a complex domain, and that the use of abstraction/conceptual approaches/ontologies has benefits.
>
> Hopefully we can reach consensus on a well-balanced, adapted approach.
>
> 2015-11-13 16:24 GMT+03:00 John Anderson <janderson@soltra.com>:
>> Jerome,
>> Thanks for the link. I really enjoy those kinds of research papers.
>>
>> On Page 20, the section "Maintaining the Model" [1] states pretty clearly that this type of architecture is very unwieldy from an end-user perspective; consequently, it requires a ton of tooling development.
>>
>> The advantage of such a model is that it's extensible and easily changed. But I'm not convinced that extensibility is really our friend. In my (greatly limited) experience, the extensibility of STIX and CybOX has made them that much harder to use and understand. I'm left wishing for "one obvious way to do things." [2]
>>
>> If I were given the choice between (1) a very simple data model that's not extensible, but clear and easy to approach, and (2) a generic, extensible data model whose extra layers of indirection make it hard to find the actual data, I'd gladly choose the first.
>>
>> Keeping it simple,
>> JSA
>>
>> [1] The full wording from "Maintaining the Model":
>> "The observation model is able to store all the metadata using a well-established mapping to relational databases, but it was not straightforward for a developer or analyst to put this data into the database. They would have to learn how the objects were saved in the database as well as the proper semantics for describing the business rules. A common solution to this is to develop editors and programming tools to assist users with using these black-box components [18]. This is part of the evolutionary process of Adaptive Object-Models as they are in a sense, "Black-Box" frameworks, and as they mature, they need editors and other support tools to aid in describing and maintaining the business rules."
>>
>> [2] From "The Zen of Python": https://www.python.org/dev/peps/pep-0020/
>>
>> ________________________________________
>> From: cti@lists.oasis-open.org <cti@lists.oasis-open.org> on behalf of Jerome Athias <athiasjerome@gmail.com>
>> Sent: Friday, November 13, 2015 5:20 AM
>> To: cti@lists.oasis-open.org
>> Subject: [cti] The Adaptive Object-Model Architectural Style
>>
>> Greetings,
>>
>> Realizing that the community members have different backgrounds, experience, expectations and uses of CTI in general -- from a high-level (abstract/conceptual/ontology-oriented) point of view, through a day-to-day (experienced user's) point of view, to a technical (implementation/code) point of view -- I found this diagram (and document) interesting, easy to read, and potentially suited to our current effort. So I just wanted to share it.
>>
>> http://www.adaptiveobjectmodel.com/WICSA3/ArchitectureOfAOMsWICSA3.pdf
>>
>> Regards


  • 2.  RE: [Non-DoD Source] Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 14:48
CLASSIFICATION: UNCLASSIFIED

Hi, I am new to this committee. I am very interested in the progress of this set of standards and am going to try to jump in and make a contribution.

Based on the below, I agree that all the comments have merit, but I most agree that simple descriptions and approaches should be used first.

I am seeing comments that suggest this domain is difficult; ref: "Our standard has a lot that's hard to explain" and "dealing with a complex domain". I do believe there are concepts and abstractions that will be tough to deal with, but we should explicitly lay out what those complexities are; we may just find that they are not as complex as rumored. They are rumors, or speculative, until we document them. By documenting the complex abstractions and subsequently solving them, we also reduce the chances that the domain will always be viewed or perceived as complex just because it has always been described as such.

Documenting the challenging portions will also let us gauge our own ability to deal with them correctly. In turn, we may seek additional knowledge or support to tackle the challenges, or we may realize they are not as difficult as perceived.

V/R
- James

~~~~~~~~~~~~~~~~~~~~~~~~~
James Bohling,
Joint Staff J6, DD Cyber and C4 Integration,
Chief, Cyberspace Interoperability Data and Services Division
Tel: 757-836-8079
NIPR: james.t.bohling.civ@mail.mil
SIPR: james.t.bohling.civ@mail.smil.mil
~~~~~~~~~~~~~~~~~~~~~~~~~


  • 3.  Re: [Non-DoD Source] Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 14:59
First things first: welcome, and thanks for your feedback!

I do agree. I guess so far we have been following "A good plan today is better than a perfect plan tomorrow." - George S. Patton

And definitely, we are working on the documentation (and making good progress).

/JA

2015-11-13 17:47 GMT+03:00 Bohling, James T CIV JS J6 (US) <james.t.bohling.civ@mail.mil>:
> CLASSIFICATION: UNCLASSIFIED
>
> Hi, I am new to this committee. I am very interested in the progress of this set of standards and am going to try to jump in and make a contribution. [...]


  • 4.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 15:09
So I’ve been waiting for a good time to outline this, and I guess here is as good a place as any. I’m sure people will disagree, but I’m going to say it anyway :)

Personally I think of these things as four levels:

- User requirements
- Implementations
- Instantiations of the data model (XML, JSON, database schemas, an object model in code, etc.)
- Data model

User requirements get supported in running software. Running software uses instantiations of the data model to work with data in support of those user requirements. The data model and specification define the instantiations of the data and describe how to work with them in a standard way.

The important bit here is that there’s always running software between the user and the data model. That software is (likely) a tool that a vendor or open source project supports and that contains custom code to work specifically with threat intel. It might be a more generic tool like Palantir, or whatever people do RDF stuff with these days. But there’s always something.

This has a couple of implications:

- Not all user requirements get met in the data model. It’s perfectly valid to decide not to support something in the data model if we think it’s fine for implementations to do it in many different ways. For example, de-duplication: do we need a standard approach, or should we let tools decide how to do de-duplication themselves? It’s a user requirement, but that doesn’t mean we need to address it in the specs.

- Some user requirements need to be translated before they get to the data model. For example, versioning: users have lots of needs for versioning. Systems also have requirements for versioning. What we put in the specs needs to consider both.

- This is the important part: some user requirements are beyond what software can do today. I would love it if my iPhone got 8 days of battery life. I could write that into some specification. That doesn’t mean it’s going to happen.
In CTI, we (rightfully) have our eyes on this end state where you can do all sorts of awesome things with your threat intel, but just putting it in the data model doesn’t automatically make that happen. We’re still exploring this domain, and software can only do so much. So if the people writing software are telling us that the user requirements are too advanced (for now), maybe that means we should hold off on putting them in the data model until they are something we can actually implement?

In my mind this is where a lot of the complexity in STIX comes from: we identified user requirements to do all these awesome things, and so we put them in the data model, but we never considered how or whether software could really implement them. The perfect example here is data markings: users wanted to mark things at the field level, most software isn’t ready for that yet, and so we end up with data markings that are effectively broken in STIX 1.2.

This is why many standards bodies have requirements for running code: otherwise the temptation is too great to define specification requirements that are not implementable, and you end up with a great spec that nobody will use.

Sorry for the long rant. Been waiting to get that off my chest for a while (as you can probably tell).

John

> On Nov 13, 2015, at 9:17 AM, Jerome Athias <athiasjerome@GMAIL.COM> wrote:
>
> Sorry to the others if this is off-topic. [...]
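John's distinction between the data model and its instantiations can be sketched in a few lines of Python. The `Indicator` type below is hypothetical and much simplified (not an actual STIX object); the class plays the role of the data model, and the JSON string is one instantiation of it, as XML or a database row would be others:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical, simplified "indicator" type -- illustrative only,
# not the real STIX data model.
@dataclass
class Indicator:
    id: str
    pattern: str
    confidence: int

# Data model: the Indicator class.
ind = Indicator(id="indicator-0001",
                pattern="[ipv4-addr:value = '203.0.113.9']",
                confidence=70)

# Instantiation: a JSON serialization of that model.
doc = json.dumps(asdict(ind), sort_keys=True)

# Running software on the other end rebuilds the model object from the instantiation.
restored = Indicator(**json.loads(doc))
assert restored == ind
```

The point of the sketch: the spec governs `doc` (the bytes on the wire), while each tool is free to choose its own in-memory or on-disk representation.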


  • 5.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 15:22
That seems perfectly fair, and the best way to resolve a problem, if any, is to start by admitting that there is a problem (if any).

There, it seems to me that we did not invest enough time in a clear definition of the requirements, and we did not have the opportunity to reach consensus on which requirements we agree to keep or reject.

I do understand your points, and thanks for sharing them.

One point, if you allow me, is regarding the requirements for the MTI. So far, I'm still not sure that some of mine would be satisfied by JSON (e.g. digital signatures of the content, which, by the way, could potentially be related to Information_Source, Confidence, and data markings).

So could we work on the documentation of these requirements (and so agree on what we need vs. what we want)?

Thank you
Best regards

2015-11-13 18:09 GMT+03:00 Wunder, John A. <jwunder@mitre.org>:
> So I’ve been waiting for a good time to outline this and I guess here is as good a place as any. [...]


  • 6.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 18:18
John, this is really well said.

I feel like we listened to every possible user requirement out there for STIX 1.0, and we tried to create a data model that could solve every possible use case and corner case, regardless of how small. The one thing we sorely forgot to do is figure out what developers can actually implement in code, or what product managers are willing to implement in code.

Let's make STIX 2.0 something that meets 70-80% of the use cases and can actually be implemented in code by the majority of software development shops. Yes, I am talking about a STIX Lite. People can still use STIX 1.x if they want everything. Over time we can add more and more features to the STIX 2.0 branch as software products that use CTI advance and users can do more and more with them.

Let's start with JSON + JSON Schema and go from there. I would love to have to migrate to a binary solution, or something that supports RDF, in the future because we have SO MUCH demand and there is SO MUCH sharing that we really need to do something.

1) Let's not put the cart before the horse.
2) Let's fail fast, and not ride the horse to the glue factory.
3) Let's start small and build massive adoption.
4) Let's make things so easy for development shops to implement that there is no reason for them not to.

Thanks,

Bret

Bret Jordan CISSP
Director of Security Architecture and Standards, Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."

> On Nov 13, 2015, at 08:09, Wunder, John A. <jwunder@mitre.org> wrote:
>
> So I’ve been waiting for a good time to outline this and I guess here is as good a place as any. [...]
However, developers that have to use, extend or maintain them, usually complain that they are hard to understand and are not convinced that they are as great as the architect claims. This, I hope could have our developers just understand that what they feel difficult sometimes, is not intended to be difficult per design, but because we are dealing with a complex domain and that the use of abstraction/conceptual approaches/ontology have benefits Hopefully we can obtain consensus on a good balanced adapted approach. 2015-11-13 16:24 GMT+03:00 John Anderson < janderson@soltra.com >: Jerome, Thanks for the link. I really enjoy those kinds of research papers. On Page 20, the section Maintaining the Model [1] states pretty clearly that this type of architecture is very unwieldy, from an end-user perspective; consequently, it requires a ton of tooling development. The advantage of such a model is that it's extensible and easily changed. But I'm not convinced that extensibility is really our friend. In my (greatly limited) experience, the extensibility of STIX and CybOX have made them that much harder to use and understand. I'm left wishing for one obvious way to do things. [2] If I were given the choice between (1) a very simple data model that's not extensible, but clear and easy to approach and (2) a generic, extensible data model whose extra layers of indirection make it hard to find the actual data, I'd gladly choose the first. Keeping it simple, JSA [1] The full wording from Maintaining the Model : The observation model is able to store all the metadata using a well-established mapping to relational databases, but it was not straightforward for a developer or analyst to put this data into the database. They would have to learn how the objects were saved in the database as well as the proper semantics for describing the business rules. A common solution to this is to develop editors and programming tools to assist users with using these black-box components [18]. 
This is part of the evolutionary process of Adaptive Object-Models as they are in a sense, “Black-Box” frameworks, and as they mature, they need editors and other support tools to aid in describing and maintaining the business rules. [2] From The Zen of Python : https://www.python.org/dev/peps/pep-0020/ ________________________________________ From: cti@lists.oasis-open.org < cti@lists.oasis-open.org > on behalf of Jerome Athias < athiasjerome@gmail.com > Sent: Friday, November 13, 2015 5:20 AM To: cti@lists.oasis-open.org Subject: [cti] The Adaptive Object-Model Architectural Style Greetings, realizing that the community members have different background, experience, expectations and use of CTI in general, from an high-level (abstracted/conceptual/ontology oriented) point of view, through a day-to-day use (experienced) point of view, to a technical (implementation/code) point of view... I found this diagram (and document) interesting while easy to read and potentially adapted to our current effort. So just wanted to share. http://www.adaptiveobjectmodel.com/WICSA3/ArchitectureOfAOMsWICSA3.pdf Regards --------------------------------------------------------------------- To unsubscribe from this mail list, you must leave the OASIS TC that generates this mail.  Follow this link to all your TCs in OASIS at: https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php Attachment: signature.asc Description: Message signed with OpenPGP using GPGMail
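John Anderson's choice between a simple fixed model and a generic, extensible one can be made concrete. The classes below are invented purely for illustration (they are not from STIX, CybOX, or any CTI library); they only sketch where the "extra layers of indirection" in an Adaptive Object-Model style come from:

```python
# Hypothetical sketch: option 1 is a fixed, explicit model; option 2 is an
# Adaptive Object-Model style, where the "schema" itself is data and every
# field access goes through indirection.
from dataclasses import dataclass

# Option 1: simple, non-extensible, obvious.
@dataclass
class Indicator:
    pattern: str
    confidence: int

direct = Indicator(pattern="evil.example.com", confidence=80)
print(direct.pattern)  # the data is right there

# Option 2: generic and extensible; field definitions live in data.
entity_type = {"name": "Indicator",
               "attributes": {"pattern": str, "confidence": int}}

def make_entity(etype, **values):
    # The "schema" is interpreted at runtime instead of compiled in.
    for name, typ in etype["attributes"].items():
        if not isinstance(values[name], typ):
            raise TypeError(f"{name} must be {typ.__name__}")
    return {"type": etype, "values": values}

generic = make_entity(entity_type, pattern="evil.example.com", confidence=80)
print(generic["values"]["pattern"])  # one layer of indirection already
```

Both print the same value, but in option 2 nothing about the available fields is visible in the code itself, which is exactly the maintainability complaint the "Maintaining the Model" section describes.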


  • 7.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 18:26
    I do appreciate the "let's do it", as long as it is not just a "just do it". For the JSON approach, I would just like to see, backed by facts, what percentage of the use cases/requirements it can cover, and when.
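John Wunder's field-level data-markings example earlier in the thread is worth making concrete. The sketch below is a deliberately simplified, hypothetical representation (dotted paths standing in for the XPath-based controlled structures of STIX 1.x); it shows the selector-resolution work every consuming tool would have to implement correctly, and re-run after any edit or merge:

```python
# Hypothetical field-level markings: markings attach to dotted field paths,
# and the most specific (longest) matching path wins. "" marks the whole
# object. This is an illustration, not the actual STIX 1.x mechanism.
MARKINGS = {
    "": "TLP:GREEN",                # default for the whole object
    "indicator.source": "TLP:RED",  # one field is more restricted
}

def effective_marking(path: str) -> str:
    """Walk from the full path up toward the root; return the first hit."""
    while True:
        if path in MARKINGS:
            return MARKINGS[path]
        if not path:
            raise KeyError("no marking found")
        path = path.rsplit(".", 1)[0] if "." in path else ""

for p in ("title", "indicator.pattern", "indicator.source"):
    print(f"{p}: {effective_marking(p)}")
# title: TLP:GREEN
# indicator.pattern: TLP:GREEN
# indicator.source: TLP:RED
```

Even this toy version forces every consumer to track per-field metadata rather than one label per document, which is the gap between what users asked for and what most software was ready to do.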


  • 8.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 18:42
    The reason I push for JSON is that all of the developers and CTOs I have talked to in various organizations, companies, vendors, and open-source groups always ask for "anything but XML". I then ask what they would prefer, and they all say, without exception, "JSON".

So, a novel idea... Let's give them what they want, JSON and a simple STIX 2.0 model, and let's drive for massive adoption. Our number one goal should be adoption, followed by a model that can meet at least 70-80% of the market use cases.

Let's get STIX 2.0 support into every networking product, every security tool, and every security broker. Then, as we gain massive adoption, let's iterate and figure out what we need to do to solve the problems we run into. Let's first get adoption, and I do not mean a few niche groups here and there and one large ecosystem. I am talking about every networking and security product on the planet.

I want to remove as many of the hurdles development shops have against STIX as possible. I want to make it so easy for them to adopt that there is no question of them adopting it. I do not want to see more groups go off and do their own thing, or move over to FB's ThreatExchange or OpenTPX.

It would be a great problem to have SO MUCH adoption and SO MANY STIX documents flowing across the network each day that we had to do something to address the load. That would be a GREAT problem to have.

Thanks,

Bret

Bret Jordan CISSP
Director of Security Architecture and Standards, Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."
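Bret's "JSON + JSON Schema" direction can be sketched with a toy validator. The field names below are invented for illustration and are not the actual STIX 2.0 object model; a real implementation would publish a JSON Schema document and use an off-the-shelf validator (such as the Python `jsonschema` package) rather than this hand-rolled check:

```python
import json
import re

# Hypothetical "STIX Lite" indicator check: required fields, basic types,
# and an id of the form indicator--<uuid>. Purely illustrative.
REQUIRED = {"type": str, "id": str, "pattern": str, "created": str}
ID_RE = re.compile(r"^indicator--[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")

def validate_indicator(doc: str) -> list:
    """Return a list of problems; an empty list means the document passes."""
    obj = json.loads(doc)
    problems = [f"missing required field: {k}" for k in REQUIRED if k not in obj]
    problems += [f"field {k} should be {t.__name__}"
                 for k, t in REQUIRED.items()
                 if k in obj and not isinstance(obj[k], t)]
    if isinstance(obj.get("id"), str) and not ID_RE.match(obj["id"]):
        problems.append("id is not of the form indicator--<uuid>")
    return problems

doc = json.dumps({
    "type": "indicator",
    "id": "indicator--44af6c39-c09b-49c5-9de2-394224b04982",
    "pattern": "[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']",
    "created": "2015-11-13T14:17:00Z",
})
print(validate_indicator(doc))  # → []
```

The point of the sketch is the low barrier to entry: any shop that can parse JSON and check a schema can produce and consume documents like this, which is the adoption argument in a nutshell.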


  • 9.  Re: [cti] The Adaptive Object-Model Architectural Style

    Posted 11-13-2015 19:19





    I am sold on JSON. Is there an argument against JSON? If so, let's hear it so that we can hash through it.


    Aharon









    From: < cti@lists.oasis-open.org > on behalf of "Jordan, Bret" < bret.jordan@bluecoat.com >
    Date: Friday, November 13, 2015 at 10:42 AM
    To: Jerome Athias < athiasjerome@GMAIL.COM >
    Cc: "Wunder, John A." < jwunder@mitre.org >, " cti@lists.oasis-open.org " < cti@lists.oasis-open.org >
    Subject: Re: [cti] The Adaptive Object-Model Architectural Style





    The reason I push for JSON is that all of the developers and CTOs I have talked to in various organizations, companies, vendors, and open-source groups always ask for "anything but XML".  I then ask what they would prefer, and they all say, without exception, "JSON".
     


    So, a novel idea...  Let's give them what they want, JSON and a simple STIX 2.0 model, and let's drive for massive adoption.  Our number 1 goal should be adoption, followed by a model that can meet at least 70-80% of the market use cases.


    Let's get STIX 2.0 support into every networking product, every security tool, and every security broker.  Then, as we gain massive adoption, let's iterate and figure out what we need to do to solve the problems we are running into.  Let's first get
    adoption, and I do not mean a few niche groups here and there and one large ecosystem.  I am talking about every networking and security product on the planet.


    I want to remove as many of the hurdles development shops have against STIX as possible.  I want to make it so easy for them to adopt it that there is no question of their adopting it.  I do not want to see more groups go off and do their own thing or move over to
    FB's ThreatExchange or OpenTPX.


    It would be a great problem to have: SO MUCH adoption, and SO MANY STIX documents flowing across the network each day, that we had to do something to address the load.  That would be a GREAT problem to have.











    Thanks,


    Bret











    Bret Jordan CISSP

    Director of Security Architecture and Standards Office of the CTO

    Blue Coat Systems

    PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
    "Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg." 











    On Nov 13, 2015, at 11:26, Jerome Athias < athiasjerome@GMAIL.COM > wrote:

    I do appreciate the "let's do it", as long as it is not a "just do it".
    For the JSON approach, I would just like to see (backed by facts) what
    percentage of the use cases/requirements it can cover, and when.

    2015-11-13 21:17 GMT+03:00 Jordan, Bret < bret.jordan@bluecoat.com >:
    John this is really well said.

    I feel like we listened to every possible user requirement out there for
    STIX 1.0 and tried to create a data model that could solve every possible
    use case and corner case, regardless of how small.  The one thing we sorely
    forgot to do was figure out what developers can actually implement in code,
    and what product managers are willing to implement in code.

    Let's make STIX 2.0 something that meets 70-80% of the use cases and can
    actually be implemented in code by the majority of software development
    shops.  Yes, I am talking about a STIX Lite.  People can still use STIX 1.x
    if they want everything.  Over time we can add more and more features to the
    STIX 2.0 branch as the software products that use CTI advance and users can
    do more and more with it.

    Let's start with JSON + JSON Schema and go from there.  I would love to be
    forced to migrate to a binary solution, or something that supports RDF, in
    the future because we have SO MUCH demand and there is SO MUCH sharing that
    we really need to do something.
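    To make "JSON + JSON Schema" concrete, here is a minimal sketch of what a pared-down indicator could look like on the wire. Every field name below (`type`, `id`, `pattern`, `valid_from`) is illustrative only, not drawn from any published STIX 2.0 draft:

```python
import json

# A hypothetical "STIX Lite" indicator. Every field name here is an
# illustration, not a committed part of any specification.
indicator = {
    "type": "indicator",
    "id": "indicator--example-0001",
    "pattern": "[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']",
    "valid_from": "2015-11-13T00:00:00Z",
}

# Producers serialize with the standard library; no custom bindings needed.
wire = json.dumps(indicator, sort_keys=True)

# Consumers get plain dicts back -- the appeal of JSON for developers.
parsed = json.loads(wire)
print(parsed["type"])  # -> indicator
```

    The point of the sketch is the round trip: nothing beyond a stock JSON library is required on either side.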

    1) Let's not put the cart before the horse.
    2) Let's fail fast, and not ride the horse to the glue factory.
    3) Let's start small and build massive adoption.
    4) Let's make things so easy for development shops to implement that there
    is no reason for them not to.


    Thanks,

    Bret



    Bret Jordan CISSP
    Director of Security Architecture and Standards Office of the CTO
    Blue Coat Systems
    PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
    "Without cryptography vihv vivc ce xhrnrw, however, the only thing that can
    not be unscrambled is an egg."

    On Nov 13, 2015, at 08:09, Wunder, John A. < jwunder@mitre.org > wrote:

    So I’ve been waiting for a good time to outline this and I guess here is as
    good a place as any. I’m sure people will disagree, but I’m going to say it
    anyway :)

    Personally I think of these things as four levels:

    - User requirements
    - Implementations
    - Instantiation of the data model (XML, JSON, database schemas, an object
    model in code, etc)
    - Data model

    User requirements get supported in running software. Running software uses
    instantiations of the data model to work with data in support of those user
    requirements. The data model and specification define the instantiations of
    the data and describe how to work with them in a standard way.

    The important bit here is that there’s always running software between the
    user and the data model. That software is (likely) a tool that a vendor or
    open source project supports that contains custom code to work specifically
    with threat intel. It might be a more generic tool like Palantir or whatever
    people do RDF stuff with these days. But there’s always something.

    This has a couple of implications:

    - Not all user requirements get met in the data model. It’s perfectly valid
    to decide not to support something in the data model if we think it’s fine
    that implementations do it in many different ways. For example,
    de-duplication: do we need a standard approach or should we let tools decide
    how to do de-duplication themselves? It’s a user requirement, but that
    doesn’t mean we need to address it in the specs.

    - Some user requirements need to be translated before they get to the data
    model. For example, versioning: users have lots of needs for versioning.
    Systems also have requirements for versioning. What we put in the specs
    needs to consider both of these.

    - This is the important part: some user requirements are beyond what
    software can do today. I would love it if my iPhone got 8 days of
    battery life. I could write that into some specification. That doesn’t mean
    it’s going to happen. In CTI, we (rightfully) have our eyes towards this end
    state where you can do all sorts of awesome things with your threat intel,
    but just putting it in the data model doesn’t automatically make that
    happen. We’re still exploring this domain and software can only do so much.
    So if the people writing software are telling us that the user requirements
    are too advanced (for now), maybe that means we should hold off on putting
    it in the data model until it’s something that we can actually implement? In
    my mind this is where a lot of the complexity in STIX comes from: we
    identified user requirements to do all these awesome things and so we put
    them in the data model, but we never considered how or whether software
    could really implement them. The perfect example here is data markings:
    users wanted to mark things at the field level, most software isn’t ready
    for that yet, and so we end up with data markings that are effectively
    broken in STIX 1.2. This is why many standards bodies have requirements for
    running code: otherwise the temptation is too great to define specification
    requirements that are not implementable and you end up with a great spec
    that nobody will use.
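    The de-duplication question from the first implication above is easy for a tool to settle on its own, for example with a last-write-wins policy keyed on object id. A sketch, where the `id` and `modified` field names are assumptions for illustration:

```python
# De-duplicate a stream of intel objects by id, keeping the most recently
# modified copy -- one tool-level policy among many an implementation could pick.
def dedupe(objects):
    latest = {}
    for obj in objects:
        key = obj["id"]
        # ISO-8601 timestamps in the same format compare correctly as strings.
        if key not in latest or obj["modified"] > latest[key]["modified"]:
            latest[key] = obj
    return list(latest.values())

feed = [
    {"id": "indicator--1", "modified": "2015-11-13T10:00:00Z", "rev": 1},
    {"id": "indicator--1", "modified": "2015-11-13T12:00:00Z", "rev": 2},
    {"id": "indicator--2", "modified": "2015-11-13T11:00:00Z", "rev": 1},
]

deduped = dedupe(feed)  # two objects survive; indicator--1 keeps rev 2
```

    That ten-line policy is exactly the kind of thing implementations can vary on without the specs having to say a word.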

    Sorry for the long rant. I've been waiting to get that off my chest for a
    while (as you can probably tell).

    John

    On Nov 13, 2015, at 9:17 AM, Jerome Athias < athiasjerome@GMAIL.COM > wrote:

    Sorry to the others if this is off-topic.

    Remember that software is good only if it satisfies the users (meets,
    or exceeds, their requirements).
    You can write 'perfect/optimized' code; if the users are not
    satisfied, it's bad software.

    Then:
    "If you can't explain it simply, you don't understand it well
    enough." (Albert Einstein)

    Challenges are exciting, but sometimes difficult. It's about
    motivation and satisfaction.

    There is no programming language better than another (just like OSes);
    it is just you who can select the best one for your needs.

    I did a conceptual map for the 'biggest Ruby project on the internet'
    (the Metasploit Framework); it's just a picture, but it represents 100
    pages of documentation.
    I think we could optimize (as with a maturity model) our approach to
    solving problems.









    2015-11-13 17:02 GMT+03:00 John Anderson < janderson@soltra.com >:

    The list returns my mail, so probably you'll be the only one to get my
    reply.

    Funny, I missed that quote from the document. And it's spot on. As an
    architect myself, I have built several "elegant" architectures, only to
    find that the guys who actually had to use them just. never. quite. got it.
    (sigh)

    My best architectures have emerged when I've written test code first.
    ("Test-first" really does work.) I've learned that writing code, while
    applying KISS, DRY and YAGNI, saves me from entering the architecture
    stratosphere. That's why I ask architects to express their creations in
    code, and not only in UML.

    I'm pretty vocal about Python, because it's by far the simplest popular
    language out there today. But this principle applies in any language: "If
    the implementation is hard to explain, it's a bad idea." (Another quote
    from the Zen of Python.) Our standard has a lot that's hard to explain,
    especially to newcomers. How can we simplify, so that it's almost a
    no-brainer to adopt?

    Again, thanks for the article, and the conversation. I really do appreciate
    your point of view,
    JSA


    ________________________________________
    From: Jerome Athias < athiasjerome@gmail.com >
    Sent: Friday, November 13, 2015 8:45 AM
    To: John Anderson
    Cc: cti@lists.oasis-open.org
    Subject: Re: [cti] The Adaptive Object-Model Architectural Style

    Thanks for the feedback.
    Kindly note that I'm not strongly defending this approach for the CTI
    TC (at least for now).
    Since you're using quotes:
    "Architects that develop these types of systems are usually very proud
    of them and claim that they are some of the best systems they have
    ever developed. However, developers that have to use, extend or
    maintain them, usually complain that they are hard to understand and
    are not convinced that they are as great as the architect claims."

    This, I hope, could help our developers understand
    that what sometimes feels difficult is not
    difficult by design, but because we are dealing with a complex domain,
    and that the use of abstraction/conceptual approaches/ontology has
    benefits.

    Hopefully we can reach consensus on a well-balanced, adapted approach.









    2015-11-13 16:24 GMT+03:00 John Anderson < janderson@soltra.com >:

    Jerome,
    Thanks for the link. I really enjoy those kinds of research papers.

    On page 20, the section "Maintaining the Model" [1] states pretty clearly
    that this type of architecture is very unwieldy from an end-user
    perspective; consequently, it requires a ton of tooling development.

    The advantage of such a model is that it's extensible and easily changed.
    But I'm not convinced that extensibility is really our friend. In my
    (greatly limited) experience, the extensibility of STIX and CybOX has made
    them that much harder to use and understand. I'm left wishing for "one
    obvious way to do things." [2]

    If I were given the choice between (1) a very simple data model that's not
    extensible, but clear and easy to approach and (2) a generic, extensible
    data model whose extra layers of indirection make it hard to find the actual
    data, I'd gladly choose the first.

    Keeping it simple,
    JSA


    [1] The full wording from "Maintaining the Model":
    The observation model is able to store all the metadata using a
    well-established
    mapping to relational databases, but it was not straightforward
    for a developer or analyst to put this data into the database. They would
    have to learn how the objects were saved in the database as well as the
    proper semantics for describing the business rules. A common solution to
    this is to develop editors and programming tools to assist users with using
    these black-box components [18]. This is part of the evolutionary process of
    Adaptive Object-Models as they are in a sense, “Black-Box” frameworks,
    and as they mature, they need editors and other support tools to aid in
    describing and maintaining the business rules.

    [2] From "The Zen of Python":
    https://www.python.org/dev/peps/pep-0020/


    ________________________________________
    From: cti@lists.oasis-open.org < cti@lists.oasis-open.org > on behalf of
    Jerome Athias < athiasjerome@gmail.com >
    Sent: Friday, November 13, 2015 5:20 AM
    To: cti@lists.oasis-open.org
    Subject: [cti] The Adaptive Object-Model Architectural Style

    Greetings,

    realizing that the community members have different backgrounds,
    experience, expectations and uses of CTI in general, from a high-level
    (abstracted/conceptual/ontology-oriented) point of view, through a
    day-to-day (experienced) point of view, to a technical
    (implementation/code) point of view...
    I found this diagram (and document) interesting, easy to read, and
    potentially suited to our current effort.
    So I just wanted to share it.

    http://www.adaptiveobjectmodel.com/WICSA3/ArchitectureOfAOMsWICSA3.pdf

    Regards



















  • 10.  JSON or what???

    Posted 11-13-2015 19:32



    Changed the thread title since the topic changed.

    We have had several discussions about JSON in the past, with no resulting complete STIX implementation. XML to JSON, as a format, can be done. I think we should show the JSON validation mechanism(s) that will be used by the CTI/SC to assure producers/consumers
    that we can provide a means of testing schema/spec conformity.
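    One possible shape for such a conformity check is sketched below. A real mechanism would publish full JSON Schema documents and run an off-the-shelf validator; this hand-rolled checker, with an invented mini-schema, only illustrates how producers/consumers could test documents against an agreed shape:

```python
import json

# An invented mini-schema: required field names mapped to expected types.
# A real mechanism would publish full JSON Schema documents instead.
INDICATOR_SCHEMA = {"type": str, "id": str, "pattern": str}

def conforms(document, schema):
    """Return True if every required field is present with the right type."""
    return all(
        isinstance(document.get(field), expected)
        for field, expected in schema.items()
    )

good = json.loads('{"type": "indicator", "id": "x--1", "pattern": "[...]"}')
bad = json.loads('{"type": "indicator", "id": 42}')  # wrong type, missing field

assert conforms(good, INDICATOR_SCHEMA)
assert not conforms(bad, INDICATOR_SCHEMA)
```

    The same pass/fail check, driven by a published schema rather than this toy dict, is what would let producers and consumers demonstrate spec conformity to each other.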

    -Marlon
     

    From: Aharon Chernin [mailto:achernin@soltra.com]
    Sent: Friday, November 13, 2015 02:18 PM
    To: Jordan, Bret <bret.jordan@bluecoat.com>; Jerome Athias <athiasjerome@GMAIL.COM>
    Cc: Wunder, John A. <jwunder@mitre.org>; cti@lists.oasis-open.org <cti@lists.oasis-open.org>
    Subject: Re: [cti] The Adaptive Object-Model Architectural Style
     



    I am sold on JSON. Is there an argument against JSON? If so, let's hear it so that we can hash through it.


    Aharon










  • 11.  Re: JSON or what???

    Posted 11-13-2015 19:38





    If we just convert STIX XML to JSON, I think what we will see is very complex-looking JSON. While I prefer JSON now, I don’t think the switch from XML to JSON alone is going to make anyone’s life that much easier.
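    That point is easy to demonstrate with a naive element-to-dict conversion. The XML fragment below is invented for illustration, but the effect is general: attributes, nesting, and repetition all survive the translation, so the resulting JSON is no simpler than the XML was:

```python
import json
import xml.etree.ElementTree as ET

# An invented fragment, loosely in the STIX 1.x style.
XML = ('<Indicator id="example-1"><Title>Bad hash</Title>'
       '<Observable idref="obs-1"/></Indicator>')

def to_dict(elem):
    """Naively mirror an XML element as a dict: attributes, text, children."""
    node = {"attrib": dict(elem.attrib)}
    if elem.text and elem.text.strip():
        node["text"] = elem.text.strip()
    for child in elem:
        node.setdefault(child.tag, []).append(to_dict(child))
    return node

converted = to_dict(ET.fromstring(XML))
print(json.dumps(converted))
# The syntax is now JSON, but every layer of the XML structure is still there.
```

    A simpler wire format only falls out of a simpler data model, not from the conversion itself.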




    Aharon









    From: "Taylor, Marlon" < Marlon.Taylor@hq.dhs.gov >
    Date: Friday, November 13, 2015 at 11:31 AM
    To: Aharon < achernin@soltra.com >, < bret.jordan@bluecoat.com >, < athiasjerome@GMAIL.COM >
    Cc: < jwunder@mitre.org >, < cti@lists.oasis-open.org >
    Subject: JSON or what???





    Changed the thread title since the topic changed.

    We have had several discussions about JSON in the past, with no resulting complete STIX implementation. XML to JSON, as a format, can be done. I think we should show the JSON validation mechanism(s) that will be used by the CTI/SC to assure producers/consumers
    that we can provide a means of testing schema/spec conformity.

    -Marlon
     

    From : Aharon Chernin [ mailto:achernin@soltra.com ]

    Sent : Friday, November 13, 2015 02:18 PM
    To : Jordan, Bret < bret.jordan@bluecoat.com >; Jerome Athias < athiasjerome@GMAIL.COM >

    Cc : Wunder, John A. < jwunder@mitre.org >;
    cti@lists.oasis-open.org < cti@lists.oasis-open.org >

    Subject : Re: [cti] The Adaptive Object-Model Architectural Style
     



    I am sold on JSON. Is there an argument against JSON? If so, let’s here it so that we can hash through it.


    Aharon









    From: < cti@lists.oasis-open.org > on behalf of "Jordan, Bret" < bret.jordan@bluecoat.com >
    Date: Friday, November 13, 2015 at 10:42 AM
    To: Jerome Athias < athiasjerome@GMAIL.COM >
    Cc: "Wunder, John A." < jwunder@mitre.org >, " cti@lists.oasis-open.org " < cti@lists.oasis-open.org >
    Subject: Re: [cti] The Adaptive Object-Model Architectural Style





    The reason I push for JSON is all of the developers and CTOs I have talked to in various organizations, companies, vendors, and open-source groups always ask for "anything but XML".  I then ask what would you prefer, and they all say, without exception "JSON".
     


    So a novel idea...  Lets give them what they want, JSON and a simple STIX 2.0 model, and lets drive for massive adoption.  Our number 1 goal should be adoption followed up by a model that can meet at least 70-80% of the market use cases.


    Let's get STIX 2.0 support in every networking product, every security tool, and every security broker.  Then, as we gain massive adoption, let's iterate and figure out what we need to do to solve the problems we are running into.  Let's first get
    adoption, and I do not mean a few niche groups here and there and one large ecosystem.  I am talking about every networking and security product on the planet.


    I want to remove as many of the hurdles that development shops have with STIX as possible.  I want to make it so easy for them to adopt that there is no question of adopting it.  I do not want to see more groups go off and do their own thing, or move over to
    FB's ThreatExchange or OpenTPX.


    It would be a great problem to have, where we had SO MUCH adoption and SO MANY STIX documents flowing across the network each day that we had to do something to address the load.  That would be a GREAT problem to have.


    Thanks,


    Bret


    Bret Jordan CISSP

    Director of Security Architecture and Standards Office of the CTO

    Blue Coat Systems

    PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
    "Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg." 

    On Nov 13, 2015, at 11:26, Jerome Athias < athiasjerome@GMAIL.COM > wrote:

    I do appreciate the "let's do it", as long as it is not a "just do it".
    For the JSON approach, I would just like to see, based on facts, what
    percentage of use cases/requirements it can cover, and when.

    2015-11-13 21:17 GMT+03:00 Jordan, Bret < bret.jordan@bluecoat.com >:
    John this is really well said.

    I feel like we listened to every possible user requirement out there for
    STIX 1.0, and we tried to create a data model that could solve every possible
    use case and corner case, regardless of how small.  The one thing we sorely
    forgot to do is figure out what developers can actually implement in code, or
    what product managers are willing to implement in code.

    Let's make STIX 2.0 something that meets 70-80% of the use cases and can
    actually be implemented in code by the majority of software development
    shops.  Yes, I am talking about a STIX Lite.  People can still use STIX 1.x
    if they want everything.  Over time we can add more and more features to the
    STIX 2.0 branch as software products that use CTI advance and users can do
    more and more with it.

    Let's start with JSON + JSON Schema and go from there.  I would love to have
    to migrate to a binary solution, or something that supports RDF, in the future
    because we have SO MUCH demand and there is SO MUCH sharing that we really
    need to do something.
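    Starting with JSON means modeling natively in JSON rather than mechanically converting the existing XML. A hypothetical sketch of the difference (none of these field names or namespaces come from an actual STIX schema; they only illustrate the two styles):

```python
import json

# What a mechanical XML-to-JSON conversion often looks like: namespace
# prefixes, "@" attribute markers, and "#text" wrappers survive the
# translation. (All names and the namespace URI are made up.)
xmlized = {
    "stix:Indicator": {
        "@xmlns:stix": "http://example.org/stix",
        "@id": "example:indicator-1234",
        "stix:Title": {"#text": "Suspicious domain"},
    }
}

# The same facts, modeled natively in JSON: flat, obvious, easy to parse.
native = {
    "type": "indicator",
    "id": "indicator-1234",
    "title": "Suspicious domain",
}

print(json.dumps(xmlized, indent=2))
print(json.dumps(native, indent=2))
```

    Both documents carry the same information, but the second is the one a development shop can read and implement without first learning the XML heritage.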

    1) Let's not put the cart before the horse
    2) Let's fail fast, and not ride the horse to the glue factory
    3) Let's start small and build massive adoption
    4) Let's make things so easy for development shops to implement that there is
    no reason for them not to


    Thanks,

    Bret



    Bret Jordan CISSP
    Director of Security Architecture and Standards Office of the CTO
    Blue Coat Systems
    PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
    "Without cryptography vihv vivc ce xhrnrw, however, the only thing that can
    not be unscrambled is an egg."

    On Nov 13, 2015, at 08:09, Wunder, John A. < jwunder@mitre.org > wrote:

    So I’ve been waiting for a good time to outline this and I guess here is as
    good a place as any. I’m sure people will disagree, but I’m going to say it
    anyway :)

    Personally I think of these things as four levels:

    - User requirements
    - Implementations
    - Instantiation of the data model (XML, JSON, database schemas, an object
    model in code, etc)
    - Data model

    User requirements get supported in running software. Running software uses
    instantiations of the data model to work with data in support of those user
    requirements. The data model and specification define the instantiations of
    the data and describe how to work with them in a standard way.

    The important bit here is that there’s always running software between the
    user and the data model. That software is (likely) a tool that a vendor or
    open source project supports that contains custom code to work specifically
    with threat intel. It might be a more generic tool like Palantir or whatever
    people do RDF stuff with these days. But there’s always something.

    This has a couple implications:

    - Not all user requirements get met in the data model. It’s perfectly valid
    to decide not to support something in the data model if we think it’s fine
    that implementations do it in many different ways. For example,
    de-duplication: do we need a standard approach or should we let tools decide
    how to do de-duplication themselves? It’s a user requirement, but that
    doesn’t mean we need to address it in the specs.

    - Some user requirements need to be translated before they get to the data
    model. For example, versioning: users have lots of needs for versioning.
    Systems also have requirements for versioning. What we put in the specs
    needs to consider both of these.

    - This is the important part: some user requirements are beyond what
    software can do today. I would love it if my iPhone would get 8 days of
    battery life. I could write that into some specification. That doesn’t mean
    it’s going to happen. In CTI, we (rightfully) have our eyes towards this end
    state where you can do all sorts of awesome things with your threat intel,
    but just putting it in the data model doesn’t automatically make that
    happen. We’re still exploring this domain and software can only do so much.
    So if the people writing software are telling us that the user requirements
    are too advanced (for now), maybe that means we should hold off on putting
    it in the data model until it’s something that we can actually implement? In
    my mind this is where a lot of the complexity in STIX comes from: we
    identified user requirements to do all these awesome things and so we put
    them in the data model, but we never considered how or whether software
    could really implement them. The perfect example here is data markings:
    users wanted to mark things at the field level, most software isn’t ready
    for that yet, and so we end up with data markings that are effectively
    broken in STIX 1.2. This is why many standards bodies have requirements for
    running code: otherwise the temptation is too great to define specification
    requirements that are not implementable and you end up with a great spec
    that nobody will use.

    Sorry for the long rant. Been waiting to get that off my chest for awhile
    (as you can probably tell).

    John

    On Nov 13, 2015, at 9:17 AM, Jerome Athias < athiasjerome@GMAIL.COM > wrote:

    Sorry to the others if this is off-topic.

    Remember that software is good only if it satisfies the users (meets,
    or exceeds, their requirements).
    You can write 'perfect/optimized' code; if the users are not
    satisfied, it's bad software.

    Then,
    "If you can't explain it simply, you don't understand it well
    enough." - Albert Einstein

    Challenges are exciting, but sometimes difficult. It's about
    motivation and satisfaction.

    No programming language is better than another (just like OSes);
    only you can select the best one for your needs.

    I did a conceptual map for the 'biggest Ruby project on the internet'
    (the Metasploit Framework); it's just a picture, but it represents
    100 pages of documentation.
    I think we could optimize (as with a maturity model) our approach to
    solving problems.

    2015-11-13 17:02 GMT+03:00 John Anderson < janderson@soltra.com >:

    The list returns my mail, so probably you'll be the only one to get my
    reply.

    Funny, I missed that quote from the document. And it's spot on. As an
    architect myself, I have built several "elegant" architectures, only to
    find that the guys who actually had to use them just. never. quite. got it.
    (sigh)

    My best architectures have emerged when I've written test code first.
    ("Test-first" really does work.) I've learned that writing code--while
    applying KISS, DRY and YAGNI--saves me from entering the architecture
    stratosphere. That's why I ask the architects to express their creations in
    code, and not only in UML.

    I'm pretty vocal about Python, because it's by far the simplest popular
    language out there today. But this principle applies in any language: "If the
    implementation is hard to explain, it's a bad idea." (Another quote from the
    Zen of Python.) Our standard has a lot that's hard to explain, especially to
    newcomers. How can we simplify, so that it's almost a no-brainer to adopt?

    Again, thanks for the article, and the conversation. I really do appreciate
    your point-of-view,
    JSA


    ________________________________________
    From: Jerome Athias < athiasjerome@gmail.com >
    Sent: Friday, November 13, 2015 8:45 AM
    To: John Anderson
    Cc: cti@lists.oasis-open.org
    Subject: Re: [cti] The Adaptive Object-Model Architectural Style

    Thanks for the feedback.
    Kindly note that I'm not strongly defending this approach for the CTI
    TC (at least for now).
    Since you're using quotes:
    "Architects that develop these types of systems are usually very proud
    of them and claim that they are some of the best systems they have
    ever developed. However, developers that have to use, extend or
    maintain them, usually complain that they are hard to understand and
    are not convinced that they are as great as the architect claims."

    This, I hope, could help our developers understand that what they
    sometimes find difficult is not difficult by design, but because we are
    dealing with a complex domain, and that the use of abstraction,
    conceptual approaches, and ontologies has benefits.

    Hopefully we can reach consensus on a well-balanced, adapted approach.









    2015-11-13 16:24 GMT+03:00 John Anderson < janderson@soltra.com >:

    Jerome,
    Thanks for the link. I really enjoy those kinds of research papers.

    On Page 20, the section "Maintaining the Model" [1] states pretty clearly
    that this type of architecture is very unwieldy, from an end-user
    perspective; consequently, it requires a ton of tooling development.

    The advantage of such a model is that it's extensible and easily changed.
    But I'm not convinced that extensibility is really our friend. In my
    (greatly limited) experience, the extensibility of STIX and CybOX has made
    them that much harder to use and understand. I'm left wishing for "one
    obvious way to do things." [2]

    If I were given the choice between (1) a very simple data model that's not
    extensible, but clear and easy to approach and (2) a generic, extensible
    data model whose extra layers of indirection make it hard to find the actual
    data, I'd gladly choose the first.

    Keeping it simple,
    JSA


    [1] The full wording from "Maintaining the Model":
    The observation model is able to store all the metadata using a
    well-established
    mapping to relational databases, but it was not straightforward
    for a developer or analyst to put this data into the database. They would
    have to learn how the objects were saved in the database as well as the
    proper semantics for describing the business rules. A common solution to
    this is to develop editors and programming tools to assist users with using
    these black-box components [18]. This is part of the evolutionary process of
    Adaptive Object-Models as they are in a sense, “Black-Box” frameworks,
    and as they mature, they need editors and other support tools to aid in
    describing and maintaining the business rules.

    [2] From "The Zen of Python":
    https://www.python.org/dev/peps/pep-0020/


    ________________________________________
    From: cti@lists.oasis-open.org < cti@lists.oasis-open.org > on behalf of
    Jerome Athias < athiasjerome@gmail.com >
    Sent: Friday, November 13, 2015 5:20 AM
    To: cti@lists.oasis-open.org
    Subject: [cti] The Adaptive Object-Model Architectural Style

    Greetings,

    Realizing that the community members have different backgrounds,
    experience, expectations and uses of CTI in general, from a high-level
    (abstracted/conceptual/ontology-oriented) point of view, through a
    day-to-day (experienced) point of view, to a technical
    (implementation/code) point of view, I found this diagram (and document)
    interesting, easy to read, and potentially suited to our current effort.
    So I just wanted to share.

    http://www.adaptiveobjectmodel.com/WICSA3/ArchitectureOfAOMsWICSA3.pdf

    Regards


    ---------------------------------------------------------------------
    To unsubscribe from this mail list, you must leave the OASIS TC that
    generates this mail.  Follow this link to all your TCs in OASIS at:
    https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php




  • 12.  Re: JSON or what???

    Posted 11-13-2015 19:47
    We need to do for STIX what we did with TAXII when we produced the JSON representations of TAXII.  Once we do that, we will have a nice, clean-looking JSON version of STIX, not an XML-ized version of STIX in JSON format.  It is a really simple process, and most of the work has already been done by EclecticIQ.  They are using JSON-based STIX today.  If we took what they have done, refined it further, and cleaned it up where necessary, we could have a JSON representation in a few months.

    Working on this could also be a feedback loop into the STIX 2.0 discussions about things that should change in the model to make things easier.  We found that with TAXII, and I would say that JSON TAXII is a LOT cleaner and more tidy than its XML cousin.

    Thanks,

    Bret

    Bret Jordan CISSP
    Director of Security Architecture and Standards Office of the CTO
    Blue Coat Systems
    PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
    "Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."

    On Nov 13, 2015, at 12:38, Aharon Chernin < achernin@soltra.com > wrote:

    If we just convert STIX XML to JSON, I think what we will see is very complex-looking JSON. While I prefer JSON now, I don’t think the switch from XML to JSON is going to make anyone’s life that much easier.

    Aharon