[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: SV: [ubl-dev] Réf. : Re: [ubl-dev] RE: UML meta-modeling of UN/CEFACT Core Components
Not quite sure. Anyone have opinions about this?

Duane

Anders W. Tell wrote:
> Duane,
>
> What happens to the impact study if one adds tools such as the free JAXB?
>
> /anders
>
> Duane Nickull wrote:
>
>> Your suggestions, while correct for the UML crowd, will result in a
>> large-scale and largely unnecessary amount of coding work to implement
>> the forward changes. I urge you all not to hardcode class names into
>> elements and class attributes into XML attributes without undertaking
>> an impact study on behalf of the poor programmer.
>>
>> I also agree that versioning should be addressed; my argument is that
>> efficiency should be kept in mind when expressing UML models and XML.
>> The effort is not large, and it has large payoffs.
>>
>> Duane
>>
>>> The breaking of forwards and backwards compatibility is managed by a
>>> model (by "model" I mean a conceptual model, not a UML model) for how
>>> forwards and backwards compatibility works. Basically, the second
>>> version of markup is, I would guess, forced on you by a model that
>>> limits extensibility; if you are using XSD for validation, then you
>>> will want to have the Contexts element with multiple context child
>>> nodes.
>>>
>>> Extensibility in the wild is often done via namespaces, so what one
>>> wants is a namespace-handling model that follows the model developers
>>> are most used to: namespaces non-essential to application X are
>>> ignored by application X. This allows developers to extend an XML
>>> dialect with new semantics relevant to them without worrying about
>>> breakage. Unfortunately, this is not the model that XSD is especially
>>> good at handling.
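[Editorial note: the "ignore namespaces you don't recognize" model described above can be sketched as follows. This is a minimal illustration, not anyone's actual implementation; the extension namespace and the `known_children` helper are hypothetical.]

```python
# Sketch of the extensibility model above: an application processes only
# elements in the namespace it understands and silently skips elements
# from foreign (non-essential) namespaces, subtree and all.
# The namespaces and element names here are made up for illustration.
import xml.etree.ElementTree as ET

KNOWN_NS = "{http://mydoc.org}"  # the dialect this application speaks

def known_children(element):
    """Yield only the children in the application's own namespace."""
    for child in element:
        if child.tag.startswith(KNOWN_NS):
            yield child

doc = ET.fromstring(
    '<mydoc xmlns="http://mydoc.org" xmlns:x="http://example.org/ext">'
    '<Value>Canada</Value><x:Extra>safely ignored</x:Extra></mydoc>'
)
tags = [child.tag for child in known_children(doc)]
print(tags)  # only the {http://mydoc.org}Value element survives
```

A 1.0 application written this way keeps working when someone extends the dialect, as long as the extensions live in their own namespace.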
>>> Finally, when you talk about forwards or backwards compatibility, I
>>> think what one should really be discussing is a versioning strategy:
>>> how should version 1.0 processors relate to version 1.1 instances
>>> that are bound to our namespace? The general answer is that in a
>>> version 1.0 instance read by a version 1.0 processor, all markup in
>>> the target namespace must conform to version 1.0 markup, while in a
>>> version 1.1 instance, all markup in the namespace that does not
>>> conform must be ignored, along with its subtree. Sometimes one
>>> extends the dialect with a mustUnderstand construct, so if your
>>> first example were:
>>>
>>> <mydoc version="1.1" xmlns="http://mydoc.org">
>>>   <Contexts mustUnderstand="True">
>>>     <GeopoliticalContext>
>>>       <Value>Canada</Value>
>>>     </GeopoliticalContext>
>>>     <SomeOtherContext>
>>>       <Value>1234</Value>
>>>     </SomeOtherContext>
>>>     <NewVersionContext>
>>>       <Value>282738</Value>
>>>     </NewVersionContext>
>>>   </Contexts>
>>> </mydoc>
>>>
>>> this should cause a mydoc 1.0 processor to fail, because Contexts is
>>> marked mustUnderstand and NewVersionContext is an element introduced
>>> in version 1.1.
>>>
>>> The question is not whether one version is more or less extensible
>>> than the other; they are both extensible. The question is which makes
>>> for a better-understood model for developers, users, and so on. I
>>> have my preferences: I think the Contexts element with multiple child
>>> context nodes is probably preferable to the other example, but a
>>> well-defined versioning strategy and namespace-handling model in the
>>> initial specification is how one handles extensibility and
>>> forwards/backwards compatibility.
>>>
>>> Just my 0.02 kr. (I'm sort of getting screwed on the exchange rate
>>> here.)

--
Senior Standards Strategist
Adobe Systems, Inc.
http://www.adobe.com
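[Editorial note: the mustUnderstand rule described in the quoted message can be sketched as a toy version-1.0 processor. This is a hypothetical illustration built around the example instance above; `process_v10` and the set of known 1.0 elements are assumptions, not an existing API.]

```python
# Sketch of the mustUnderstand rule: unknown elements are normally
# ignored (with their subtrees), but when they appear under a container
# marked mustUnderstand="True", a 1.0 processor must fail instead.
import xml.etree.ElementTree as ET

NS = "{http://mydoc.org}"
V10_ELEMENTS = {NS + "GeopoliticalContext", NS + "SomeOtherContext"}

def process_v10(xml_text):
    """A toy version-1.0 processor for the mydoc dialect."""
    root = ET.fromstring(xml_text)
    contexts = root.find(NS + "Contexts")
    must = contexts.get("mustUnderstand") == "True"
    for child in contexts:
        if child.tag not in V10_ELEMENTS:
            if must:
                raise ValueError("unknown element under a mustUnderstand "
                                 "container: " + child.tag)
            continue  # default rule: ignore the element and its subtree
        # ...normal 1.0 processing of the known context would go here...

instance_11 = """\
<mydoc version="1.1" xmlns="http://mydoc.org">
  <Contexts mustUnderstand="True">
    <GeopoliticalContext><Value>Canada</Value></GeopoliticalContext>
    <NewVersionContext><Value>282738</Value></NewVersionContext>
  </Contexts>
</mydoc>"""

try:
    process_v10(instance_11)
    failed = False
except ValueError:
    # NewVersionContext is new in 1.1, so the 1.0 processor must fail
    failed = True
print(failed)
```

Dropping `mustUnderstand="True"` from the instance makes the same processor fall back to the ignore-and-skip rule, which is the forwards-compatibility behaviour the message argues for.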