Subject: Re: [ws-tx] optional features means optional tests?
+1

On 9/22/06 2:35 AM, "Mark Little" <mark.little@jboss.com> wrote:

> Peter, as I mentioned weeks ago, I believe we should take a leaf out of
> the W3C approach to interoperability testing. First there is obviously
> a distinction between test participants (companies taking part) and the
> technical committee who are working on the specification. All mandatory
> features must be supported and demonstrated for interoperability by all
> participants. All optional features may be ignored for interoperability
> testing by participants.
>
> The reason for making this distinction is fairly obvious: if it's
> optional in the specification then it's optional for a reason (e.g.,
> not enough proven customer need for it, or there are multiple ways to
> skin the same cat). Therefore, requiring that all optional features are
> supported by all participants is tantamount to making the feature(s)
> mandatory anyway. Kind of defeats the point! It also increases the
> barrier for acceptance by the wider community (developers primarily).
>
> However, that does leave it entirely possible that no participants will
> test any (or some) of the optional features. This means that there's a
> good argument for not being able to demonstrate that the specification
> is interoperable. The way that's dealt with in W3C is that a
> specification can only progress to standard if all mandatory features
> have been proven to be interoperable by at least 4 different
> implementations, and all optional features by at least 2 different
> implementations.
>
> I would like to think that as a TC we take the same approach.
>
> I also dispute whether supporting an optional feature means "lesser
> functionality for customers". Maybe "less waste" or "less useless
> baggage". Of course both of our definitions are subjective ;-)
>
> Mark.
>
> On 22 Sep 2006, at 10:16, Peter Furniss wrote:
>
>> What would it mean to define a test as optional anyway?
>> Or, put the other way round, what would be the significance of
>> implementation P3 not doing scenario X?
>>
>> mandatory/optional (and the defining words MUST, SHOULD, MAY etc.)
>> have rather different implications depending on the level they
>> concern:
>> - for example, we make it mandatory to distinguish whether a Prepared
>>   received by the coordinator in state None is from a volatile or
>>   durable participant, because we don't think the protocol will work
>>   if an implementation can't do that; the other side expects that
>>   distinction to be made, and different behaviour is expected
>> - we make it mandatory to implement some piece of function (such as
>>   Completion if Activation is implemented) because we don't think an
>>   implementation would be useful if it didn't.
>>
>> The latter is actually rather tricky. For example, we never defined
>> whether a participant implementation must support volatile - but one
>> could imagine a WS-AT implementation embedded in a web-service
>> accessible resource manager that would only need to register durable
>> participants. Such an implementation couldn't do the 1.* tests, could
>> only be PA on any of them, and couldn't do the ones involving
>> volatile. But it would be perfectly useful for its purpose. Of course,
>> it wouldn't be a general-purpose WS-AT implementation, and couldn't be
>> advertised as such. But it is fit for purpose, and within that
>> purpose, can be expected to interoperate with other implementations.
>>
>> I have heard some conformance testing people demand that the
>> "underlying engine" can do all the features, even if the use in a
>> particular environment makes them inaccessible or irrelevant. That has
>> always seemed to me a nonsense - especially when it is expressed as
>> expecting the capabilities to be configurable, so you set things one
>> way to pass the conformance test, another way in use (c.f.
>> would you fly in a plane that had passed the strength tests with the
>> doors welded shut, and the evacuation tests with the doors removed?).
>>
>> In defining mandatory/optional in terms of general function,
>> standardisers are defining the (future) set of conformant
>> implementations. There is also the set of useful implementations -
>> ones that someone might install and use, or even pay for. The
>> standardisers have a choice of what they are trying to do:
>>   a) all members of the set of useful implementations are members of
>>      the conformant set
>>   b) all members of the set of conformant implementations are members
>>      of the useful set
>>
>> Obviously, one aspires to make the two sets close - but you won't get
>> it perfect. Defining various categories of conformance (client-only,
>> server-only - c.f. the first (pre-OASIS) WS-RX workshop, which I think
>> included a client-only implementation) helps, but you still have to
>> decide which should be a subset of which. (And I'm of the view that a)
>> is the right way to go.)
>>
>> End of rant.
>>
>> Back to Mark's question:
>>
>> I propose that all scenarios are open to all implementations. If an
>> implementor chooses not to attempt some, because for their
>> implementation it is not considered useful, that is a decision to be
>> justified between them and their "customers". It indicates lesser
>> functionality for that implementation, but that was their deliberate
>> choice.
>>
>> Peter
>>
>> Mark Little wrote:
>>> Since we didn't get a chance to discuss this on the phone yesterday,
>>> but we did discuss timeframes for WS-BA interop, I think this
>>> particular issue is extremely pertinent now. If we can't reach a
>>> conclusion via discussion, how about just having an electronic
>>> ballot?
>>>
>>> Mark.
>>>
>>> On 21 Sep 2006, at 15:28, Mark Little wrote:
>>>
>>>> I meant the original issue: optional features mean optional tests.
>>>> What happened in the past may be a precedent for the TC to
>>>> consider, but if not then the optional features in WS-BA need to be
>>>> considered for OPTIONAL tests IMO.
>>>>
>>>> Mark.
>>>>
>>>> On 20 Sep 2006, at 19:47, Ian Robinson wrote:
>>>>
>>>>> Per the resolution to i047: "A coordination service that supports
>>>>> an Activation service MUST support the Completion protocol." The
>>>>> Activation service has always been optional.
>>>>>
>>>>> This is, of course, a spec statement. From an AT interop
>>>>> perspective, the majority of the tests focussed on the mandatory
>>>>> 2PC protocol, but there are 2 scenarios that include the Activation
>>>>> and Completion protocols. For AT, I don't believe we categorized
>>>>> interop scenarios as "optional" or not.
>>>>>
>>>>> Regards,
>>>>> Ian
>>>>>
>>>>> Mark Little <mark.little@jboss.com>
>>>>> To: ws-tx@lists.oasis-open.org
>>>>> 20/09/2006 17:51
>>>>> Subject: Re: [ws-tx] optional features means optional tests?
>>>>>
>>>>> I don't believe we came to any agreement on this as a TC. As we
>>>>> approach BA interop I'd at least like to know what is and is not
>>>>> required/mandated. Any chance we can discuss this on the next call?
>>>>>
>>>>> Mark.
>>>>>
>>>>> On 6 Sep 2006, at 13:42, Mark Little wrote:
>>>>>
>>>>>> On 6 Sep 2006, at 11:51, Alastair Green wrote:
>>>>>>
>>>>>>> Completion protocol is not mandatory under any circumstances.
>>>>>>> Activation Service is not mandatory under any circumstances.
>>>>>>
>>>>>> The change from mandatory to optional occurred during that
>>>>>> interop. phase. If it had been earlier, then I would be arguing
>>>>>> for the same point there.
>>>>>>
>>>>>>> In my view, to repeat, the point of these interop tests is to
>>>>>>> prove (very roughly) -- better, to give some confidence -- that
>>>>>>> the words in the spec are capable of being rendered into
>>>>>>> interoperable software.
>>>>>>
>>>>>> But that should not mean that the tests themselves are mandatory.
>>>>>> The distinction between optional and mandatory elements in a
>>>>>> specification, and how they are handled by optional and mandatory
>>>>>> tests, is used well in W3C. Are you suggesting that those
>>>>>> specifications/standards are not interoperable?
>>>>>>
>>>>>>> Besides, how hard is it to do this? Support for mixed outcome at
>>>>>>> a wire level is trivial.
>>>>>>
>>>>>> Fine, but it shouldn't make the interop. tests mandatory. All that
>>>>>> does is make it easier for those companies who wish to participate
>>>>>> in those tests to do so.
>>>>>>
>>>>>> What I want is for us to agree that optional features are covered
>>>>>> by optional tests. Then we can have a discussion about how many
>>>>>> companies we should ideally have to cover optional features in
>>>>>> order to give us a degree of confidence. I refer back to the W3C
>>>>>> approach.
>>>>>>
>>>>>> Mark.
>>>>>>
>>>>>>> Alastair
>>>>>>>
>>>>>>> Mark Little wrote:
>>>>>>>> We need to describe the tests for all features if we want to
>>>>>>>> show interoperability for those features. However, and the
>>>>>>>> specific case I have in mind is mixed outcome, which is not
>>>>>>>> mandatory under any circumstances, it shouldn't be a requirement
>>>>>>>> for anyone in the TC to test against, because then it's
>>>>>>>> effectively a mandatory implementation (at least as far as the
>>>>>>>> TC work is concerned). It does not make sense to have optional
>>>>>>>> features covered by mandatory tests.
>>>>>>>> Likewise, it does not make sense to have optional features that
>>>>>>>> aren't tested by at least 2 different implementations, but
>>>>>>>> that's a separate issue.
>>>>>>>>
>>>>>>>> Mark.
>>>>>>>>
>>>>>>>> On 5 Sep 2006, at 14:41, Alastair Green wrote:
>>>>>>>>
>>>>>>>>> Unlike in WS-AT, where optional Completion protocol was a
>>>>>>>>> mandatory interop test. :-)
>>>>>>>>>
>>>>>>>>> Not sure of the final outcome from the F2F, but this point was
>>>>>>>>> discussed, and it was pointed out that in AT this approach was
>>>>>>>>> not taken.
>>>>>>>>>
>>>>>>>>> In my view the point of interop tests is not conformance, but
>>>>>>>>> to prove that the specs are workable -- a task which applies to
>>>>>>>>> all parts.
>>>>>>>>>
>>>>>>>>> Yrs,
>>>>>>>>>
>>>>>>>>> Alastair
>>>>>>>>>
>>>>>>>>> Mark Little wrote:
>>>>>>>>>> I'm assuming that any optional features in the specification
>>>>>>>>>> that are covered by tests in the interoperability scenarios
>>>>>>>>>> inherently means that those tests are also optional? Certainly
>>>>>>>>>> in W3C interoperability testing, only mandatory features have
>>>>>>>>>> to be tested.
>>>>>>>>>>
>>>>>>>>>> Mark.

--
******************************************************
Sr. Technical Evangelist - Adobe Systems, Inc.       *
Chair - OASIS SOA Reference Model Technical Committee*
Blog: http://technoracle.blogspot.com                *
******************************************************
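[Editorial illustration] The W3C-style exit criteria cited in the thread (every mandatory feature proven interoperable by at least 4 different implementations, every optional feature by at least 2) amount to a simple threshold check over an interop results matrix. The sketch below is a hypothetical illustration only: the `can_progress` helper, the feature names, and the implementation names are invented for the example and are not part of any actual TC process.

```python
# Hypothetical sketch of the W3C-style exit criteria described above:
# a spec may progress only if every mandatory feature was demonstrated
# interoperable by at least 4 implementations, and every optional
# feature by at least 2. Feature/implementation names are invented.

MANDATORY_MIN = 4
OPTIONAL_MIN = 2

def can_progress(results, mandatory, optional):
    """results maps feature name -> set of implementations that
    demonstrated it interoperably. Returns (ok, blocking_feature)."""
    for feature in mandatory:
        if len(results.get(feature, set())) < MANDATORY_MIN:
            return False, feature
    for feature in optional:
        if len(results.get(feature, set())) < OPTIONAL_MIN:
            return False, feature
    return True, None

results = {
    "two-phase-commit": {"ImplA", "ImplB", "ImplC", "ImplD"},  # mandatory
    "completion":       {"ImplA", "ImplB"},                     # optional
    "mixed-outcome":    {"ImplA"},                              # optional
}

print(can_progress(results, ["two-phase-commit"], ["completion"]))
# -> (True, None)
print(can_progress(results, ["two-phase-commit"], ["mixed-outcome"]))
# -> (False, 'mixed-outcome')
```

Note how this captures Mark's position: no individual participant is obliged to exercise an optional feature, yet the feature still needs at least two demonstrators across the whole pool before the spec can progress.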