This note is long and I apologize in advance for that. Your argument overlooks two points.

1. The principal threat is not denial of service. The principal threat is unintended actions. The specification is ambiguous regarding the significance of MIME headers for payloads. Thus, one implementation may choose to essentially ignore such MIME headers, depending solely on information in the SOAP envelope. The MITM threat for such implementations would be reduced as a result. However, another implementation may choose to dispatch based on some information in the MIME headers. Such an implementation would put any trading relationship at risk. An adversary could construct a body part that would pass the digest calculation but whose actual content would cause an implementation to take actions other than what was intended (a sketch of this appears further below). You might think this is contrived, but then I'm sure most of the buffer overflow bugs found in a variety of popular applications were thought to be contrived before somebody did it. This might be a denial of service attack for ebXML messaging, but since you dragged MIME into the processing, who knows what it might do. The point is that the entire server machine is at risk, not just this message.

2. Routing is not under control of the application. Also, what is to prevent a peer from re-routing a message or otherwise forwarding it? While it may be true that the MITM attack is reduced for peer-to-peer relationships, it is not at all obvious to me that either end has complete control of the path a message may take. Even if you do, it doesn't make sense from a security view to have two sets of processing rules, i.e., if the path is peer-to-peer process this way, and if the path is multi-hop do it that way. In general, when implementing, it is bad practice to introduce unnecessary changes in the control flow.

In security there are purists who insist on doing everything possible and doing it right. Even if one takes a more pragmatic view, the goal needs to be that whatever you do, you must do it right. Adding digital signatures that leave a clear and present vulnerability, however small, is analogous to putting steel doors on a wooden house that still has windows without bars.

Speaking more directly to your points:

1. (Internal) Header modification is most commonly a problem for one transport, SMTP, when using Relays (or Gateways or other intermediaries).

The fact that something occurs ordinarily *and* is wrong is exactly a reason for detecting it. Otherwise you become complacent and won't notice when it really is a problem.

Helpful CTE changes and companion header changes are presumably not going to happen under HTTP or HTTPS, even when intermediaries are involved, unless they gateway into email.

Presumably won't happen? It's not about whether it will happen or not. In a risk analysis you assume it will. The question is, what is the downside of it happening? Here it is not just ebXML messaging errors; it is the exposure of the entire server. And simply ignore any BEEP profile that has the problem :-)

So a MITM threat based on changed headers is usually not malicious, and not universal across transports.

Which is exactly the reason you need to detect it.
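To make the concern in my point 1 above concrete, here is a minimal sketch (Python, with a hypothetical payload and hypothetical handler types) of what happens when the digest covers only the body-part content, as an XMLDsig reference over the payload typically would, while the receiver dispatches on the MIME Content-Type header. The rewritten header sails straight through the signature check:

    import hashlib

    payload = b'<PurchaseOrder>...</PurchaseOrder>'    # hypothetical payload bytes
    signed_digest = hashlib.sha1(payload).hexdigest()  # what the signature actually protects

    original_part = {'Content-Type': 'application/xml',
                     'Content-ID': '<payload-1>',
                     'body': payload}
    # The MITM leaves the body alone and rewrites only the unsigned header.
    tampered_part = dict(original_part, **{'Content-Type': 'application/x-other-handler'})

    for part in (original_part, tampered_part):
        # The digest check passes either way, because only the body was digested.
        assert hashlib.sha1(part['body']).hexdigest() == signed_digest
        # A receiver that dispatches on the MIME header is now steered by the MITM.
        print('dispatching to the handler registered for', part['Content-Type'])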
In a way, the SMTP situation speaks for not including headers in the scope of the signature if we are mainly concerned with the threat of inadvertently breaking signatures that are otherwise good. In other words, by including headers in the scope of the signature, we risk not getting the payload (potentially validly signed and intact) processed, because a signature check would fail due to the changed headers in the SMTP relay case. This is tantamount to suggesting that I should put a digital signature on the payload and, if it doesn't validate, I should ignore it.

We are opening ourselves up to unintentional denial of service by signing headers! You are protecting yourself from unintended consequences. Denial of service is secondary, i.e., it is not the principal threat.

If headers were important, and they were in danger of being maliciously changed, then using a Suresh/RichSalz method could be optionally adopted (but I think simplicity would favor not going there). The fact that it can be done means they are in danger.

As Chris Ferris mentioned, peer-to-peer transports using transport layer encryption would frustrate header manipulation. Likewise, digital envelopes would discourage header manipulation, even when intermediaries (gateways, relays, MSHes, various FW proxies, etc.) are involved. This presumes the third parties (peers) are not adversaries and have not been compromised by a third party.

This means there exist other ways of avoiding the MITM threat to headers when it is thought to be a live possibility. This means that mandating some redundant encoding of headers is not universally warranted; whether it is even a live possibility depends on the specifics of the transport as well as the packaging. No reason to require unconditionally. Although conceptually it might (and I'm not suggesting I agree here) make sense to conditionally require it, I would look to the implementors for an opinion. Which do they think is more straightforward: always doing it the same way, or adding additional control flows based on circumstances?

2. There could be hijacked relays or even other hijacked intermediaries, and they could change headers maliciously. One suggested change was to alter the content-type header so that the payload processing would benefit the hijacker somehow, by misdirecting it to another application handler. However, the _routing_ function within ebXML has largely dispensed with any strict dependence on the semantics of MIME content-* fields. MIME is mainly being used for its generalized bundling and unbundling capability. The diminished dependence on the MIME apparatus is partly because, whatever the service or action element says, the ebXML payload will most likely just have the same old content-type of text/xml or application/xml (or maybe */*+xml). Why did we have service and action elements, if we were going to use the MIME apparatus for application-level routing? The MIME values were regarded as insufficiently fine-grained. The MIME content-type provides a partly redundant, insufficiently informative label that, within ebXML messaging, can be largely ignored for application routing purposes anyway. So the MITM threat is largely irrelevant for internal content-type headers and application-level misrouting. No reason to require unconditionally.

If it is true that the MIME headers are irrelevant and are strictly for transport, then, as I said to you in private email earlier, the specification should make this the case. Simply declare that all payloads are of type application/octet-stream with a Content-ID header and mandate that only information in the SOAP Envelope is to be used to process the payloads. The fact that the specification is ambiguous on this point is why there is an issue.
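For what such a rule might look like in practice, here is a minimal sketch using Python's standard email package; the service, action, and Content-ID values are hypothetical. Every payload is packaged as application/octet-stream, and the receiver locates payloads only through the cid: references carried in the SOAP envelope's Manifest, deliberately ignoring the MIME Content-Type:

    from email.message import EmailMessage

    # Hypothetical routing information taken from the SOAP header and Manifest.
    envelope = {'service': 'urn:example:procurement',
                'action': 'NewOrder',
                'manifest': ['cid:payload-1']}

    # Package the payload as an opaque octet stream, labeled only by Content-ID.
    package = EmailMessage()
    package.add_attachment(b'<PurchaseOrder>...</PurchaseOrder>',
                           maintype='application', subtype='octet-stream',
                           cid='<payload-1>')

    def payload_by_cid(pkg, cid):
        # Locate a body part by Content-ID; its Content-Type plays no role here.
        for part in pkg.iter_attachments():
            if part['Content-ID'] == f'<{cid}>':
                return part.get_payload(decode=True)
        raise KeyError(cid)

    for ref in envelope['manifest']:
        body = payload_by_cid(package, ref[len('cid:'):])
        # The dispatch decision uses only the envelope, never the MIME headers.
        print('dispatch', envelope['service'], envelope['action'], len(body), 'bytes')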
But then you have the problem of binding the Content-ID to the payload. (See below.)

3. MITM change of headers could at least interfere with processing, and so be an interference-with-service attack. If the goal is interfering with service, however, there are a lot of other attacks that would be easier/faster/cheaper to undertake. Signing headers would not in itself defeat the interference with service, but just offer another way to detect it. Signing provides no remedy for this threat. No reason to require here.

As previously suggested, this is tantamount to saying put a digital signature on it and, if it doesn't validate, just ignore it.

4. Another possible reason to show that the message had not been messed with would be to discourage a certain kind of replay (payload recycling). By binding a SOAP envelope to its payload(s) by signing, we would at least have some evidence about the entity that was replaying the payload. (It would not prevent it, of course. We could always still cut and paste an independently signed payload and repack, resign, resend.)

This binding holds only to the extent that Content-ID values are unique and that you check to make sure you haven't seen the one you just received in a prior payload. It would also require that the Content-ID value be derived from the payload. Since none of this is true, your argument does not apply.

Interestingly, we already decided that some changes are don't-cares with respect to message integrity (the changes intermediaries make to the SOAP envelope, for example). Given this existing agreement (which forced our use of XMLDsig in the first place), why not just decide to exclude the headers from the scope of the signature, and say that ebXML message integrity does not guarantee that the bodypart headers have not changed (been added, deleted, changed in case, given wrong values, etc.)?

Not caring about trace information or other purely informational data is vastly different from caring about information upon which processing may depend.

As Suresh pointed out, the one bodypart header that ebXML messagers do care about a little (content-id) is one whose alteration will most probably throw an exception during processing anyway, if we are using XMLDsig. Since the error is already detected, no reason to require internal header signing here either.

The error is not already detected. There is another point that would have come out when we finally got to canonicalizing the MIME headers, and that is that the content-id needs to be bound to the payload. One way to do this is to include the content-id value in the payload digest calculation. Another way is to ensure that the order of the payloads, the order of the reference elements for those payloads, and the order of the duplicated MIME headers all match. All of that needs to be true for your argument (and Suresh's) to be valid.
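To illustrate the first binding approach just mentioned, here is a minimal sketch in Python. The hash function, the separator, and the values are illustrative assumptions on my part, not the Suresh/RichSalz proposal itself:

    import hashlib

    def bound_digest(content_id, payload):
        # Fold the Content-ID value into the digest so that a payload cannot be
        # swapped behind a still-matching reference without detection.
        h = hashlib.sha256()
        h.update(content_id.encode('ascii'))  # the reference being bound...
        h.update(b'\x00')                     # ...an unambiguous separator...
        h.update(payload)                     # ...and the bytes it names
        return h.hexdigest()

    # The sender signs this value; the receiver recomputes it from the Content-ID
    # it actually dispatched on and the payload bytes it actually received.
    sent = bound_digest('<payload-1>', b'<PurchaseOrder>...</PurchaseOrder>')
    received = bound_digest('<payload-1>', b'<PurchaseOrder>...</PurchaseOrder>')
    assert sent == received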
None of the other headers (content-description, -disposition, etc.) supply info that needs to be trusted in the typical ebXML messaging case. If the payload did happen to be some elaborate MIME structure of many varied content types with embedded multiparts and whatnot, protecting all these headers could get quite complex. Easier to just envelope 'em if there is a viable threat of MITM. Mandating some header protection scheme is again unwarranted.

If it's not relevant, don't include it.

5. Under a CPA, the packaging elements would specify what the content types and layout are supposed to be for a specific conversation. The wrong content-type headers could be detected and warned about. So, again, embedding and signing a header is not universally warranted or necessary.

This assumes that the detection is attempted and that a warning is actually issued. However, the specification is ambiguous on this point.

All that said, if you still want to complicate things by worrying about a threat with very little real disutility, the Suresh/RichSalz procedure seems OK as an optional 2.0 addition. I think there is yet no compelling reason to require its use unconditionally within ebXML messaging, however.

If history has taught us anything about protocol security, it surely has taught us that if you don't include security as a goal from the start, you will be forever backfilling to get it right, because it will be competing with backwards compatibility.

Jim