Ada FAQ: Programming with Ada (part 2 of 4)

Ada Programmer's Frequently Asked Questions (and answers), part 2 of 4. Please read before posting.
Archive-name: computer-lang/Ada/programming/part2
Comp-lang-ada-archive-name: programming/part2
Posting-Frequency: monthly
Last-modified: 22 May 1996
Last-posted: 23 April 1996

                               Ada Programmer's
                       Frequently Asked Questions (FAQ)

   IMPORTANT NOTE: No FAQ can substitute for real teaching and
   documentation. There is an annotated list of Ada books in the
   companion comp.lang.ada FAQ.

    Recent changes to this FAQ are listed in the first section after the table
    of contents. This document is under explicit copyright.

This is part 2 of a 4-part posting; part 1 contains the table of contents.
Part 3 begins with question 6.
Part 4 begins with question 9.
Parts 3 and 4 should be the next postings in this thread.
Part 1 should be the previous posting in this thread.


5: Object-Oriented Programming with Ada


5.1: Why does Ada have "tagged types" instead of classes?

   (Tucker Taft responds):

   Someone recently asked me to explain the difference between the
   meaning of the term "class" in C++ and its meaning in Ada 9X. Here is
   a synopsis of the answer:

   In C++, the term "class" refers to three different, but related
   things:
     * a language construct, that encapsulates the definitions of data
       members, member functions, nested types, etc.;

     * a particular kind of type, defined by a class construct (or by
       "struct" which is a special case of "class");

     * a set of types consisting of a type and all of its derivatives,
       direct and indirect.


   In Ada 9X, the term "class" refers only to the third of the above
   definitions. Ada 9X (and Ada 83) has three different terms for the
   concepts corresponding to the above three things:
     * a "package" encapsulates the definitions of types, objects,
       operations, exceptions, etc which are logically related. (The
       operations of a type defined immediately within the package where
       the type is declared are called, in 9X, the "primitive operations"
       of the type, and in some sense, define the "primitive" semantics
       of the type, especially if it is a private type.)

     * a "type" is characterized by a set of values and a set of
       primitive operations (there are a million definitions of "type,"
       unfortunately, but you know what I mean...);

     * a "class" is a set of types with similar values and operations; in
       particular, a type and and all of its derivatives, direct and
       indirect, represents a (derivation) class. Also, the set of
       integer types form the integer "class," and so on for the other
       language-defined classes of types in the language.
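
   To make the first bullet concrete, here is a small sketch (the package
   name and its operations are invented for illustration) of a private
   type whose primitive operations are simply the subprograms declared
   with it in the same package specification (body omitted):

      package Stacks is
         type Stack is private;
         -- Push and Top are declared immediately within the package where
         -- Stack is declared, so they are the primitive operations of Stack.
         procedure Push (S : in out Stack; X : in Integer);
         function  Top  (S : Stack) return Integer;
      private
         type Int_Array is array (1 .. 100) of Integer;
         type Stack is record
            Data : Int_Array;
            Last : Natural := 0;
         end record;
      end Stacks;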


   Some OOP languages take an intermediary position. In CLOS, a "class"
   is not an encapsulating construct (CLOS has "packages"). However, a
   "class" is both a type and a set of types, depending on context.
   (Methods "float" freely.)

   The distinction Ada 9X makes between types and classes (= set of
   types) carries over into the semantic model, and allows some
   interesting capabilities not present in C++. In particular, in Ada 9X
   one can declare a "class-wide" object initialized by copy from a
   "class-wide" formal parameter, with the new object carrying over the
   underlying type of the actual parameter. For example:

     procedure Print_In_Bold (X : T'Class) is
       -- Copy X, make it bold face, and then print it.
       Copy_Of_X : T'Class := X;
     begin
        Make_Bold (Copy_Of_X);
        Print (Copy_Of_X);
     end Print_In_Bold;


   In C++, when you declare an object, you must specify the "exact" class
   of the object -- it cannot be determined by the underlying class of
   the initializing value. Implementing the above procedure in a general
   way in C++ would be slightly more tedious.

   Similarly, in Ada 9X one can define an access type that designates
   only one specific type, or alternatively, one can define one that can
   designate objects of any type in a class (a "class-wide" access type).
   For example:

     type Fancy_Window_Ptr is access Fancy_Window;
       -- Only points at Fancy Windows -- no derivatives allowed
     type Any_Window_Ptr is access Window'Class;
       -- Points at Windows, and any derivatives thereof.


   In C++, all pointers/references are "class-wide" in this sense; you
   can't restrict them to point at only one "specific" type.

   In other words, C++ makes the distinction between "specific" and
   "class-wide" based on pointer/reference versus object/value, whereas
   in Ada 9X, this distinction is explicit, and corresponds to the
   distinction between "type" (one specific type) and "class" (set of
   types).

   The Ada 9X approach, we believe (hope ;-), gives somewhat better
   control over static versus dynamic binding, and is less error prone
   since it is type-based, rather than being based on reference vs.
   value.
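
   To make that distinction concrete, here is a hedged sketch (the Draw,
   Show_Specific, and Show_Classwide operations are invented for
   illustration; the Draw bodies are omitted). A call through a parameter
   of a specific type binds statically, while a call through a class-wide
   parameter dispatches at run time:

      package Windows is
         type Window is tagged null record;
         procedure Draw (W : in Window);                  -- primitive operation

         type Fancy_Window is new Window with null record;
         procedure Draw (W : in Fancy_Window);            -- overrides Draw
      end Windows;

      with Windows; use Windows;
      package Window_Demo is
         procedure Show_Specific  (W : in Window);        -- specific parameter
         procedure Show_Classwide (W : in Window'Class);  -- class-wide parameter
      end Window_Demo;

      package body Window_Demo is
         procedure Show_Specific (W : in Window) is
         begin
            Draw (W);  -- statically bound: always calls Window's Draw
         end Show_Specific;

         procedure Show_Classwide (W : in Window'Class) is
         begin
            Draw (W);  -- dispatches at run time on W's tag
         end Show_Classwide;
      end Window_Demo;

   Calling Show_Classwide with a Fancy_Window reaches Fancy_Window's
   Draw; calling Show_Specific with one requires an explicit conversion,
   Window (F), and then uses Window's Draw.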

   In any case, in Ada 9X, C++, and CLOS it makes sense to talk about
   "class libraries," since a given library will generally consist of a
   set of interrelated types. In Ada 9X and CLOS, one could alternatively
   talk about a set of "reusable packages" and mean essentially the same
   thing.


5.2: Variant records seem like a dead feature now. When should I use them
instead of tagged types?

   This is an instance of a much more general question: "When should I
   use what kind of type?" The simple answer is: "When it makes sense to
   do so." The real key to chosing a type in Ada is to look at the
   application, and pick the type that most closely models the problem.

   For instance, if you are modelling data transmission where the message
   packets may contain variable forms of data, a variant record --not a
   hierarchy of tagged types-- is an appropriate model, since there may
   be no relationship between the data items other than their being
   transmitted over one channel. If you choose to model the base type of
   the messages with a tagged type, that may present more problems than
   it solves when communicating across distinct architectures.
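
   As a hedged illustration (the message formats below are invented), such
   packets can be modelled directly with a variant record:

      type Message_Kind is (Ack, Data, Error_Report);

      type Packet (Kind : Message_Kind := Ack) is record
         Sequence : Natural := 0;
         case Kind is
            when Ack =>
               null;
            when Data =>
               Length  : Natural := 0;
               Payload : String (1 .. 128);
            when Error_Report =>
               Code : Integer := 0;
         end case;
      end record;

   The alternatives need share nothing beyond travelling over the same
   channel, and a case statement over Kind handles each form explicitly.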

   [More to be said about variant programming vs. incremental
   programming.]


5.3: What is meant by "interface inheritance" and how does Ada support it?

   This answer intentionally left blank.


5.4: How do you do multiple inheritance in Ada 9X?

   There is a lengthy paper in file
   ftp://sw-eng.falls-church.va.us/public/AdaIC/flyers/9xm-inh.txt

   That document describes several mechanisms for achieving MI in Ada. It
   is not unusual, however, to find complaints about the syntax and the
   perceived burden it places on the developer. This is what Tucker Taft
   had to say when responding to such a criticism on comp.lang.ada:

   Coming up with a syntax for multiple inheritance was not the
   challenge. The challenge was coming up with a set of straightforward
   yet flexible rules for resolving the well known problems associated
   with multiple inheritance, namely:
     * If the same type appears as an ancestor more than once, should all
       or some of its data components be duplicated, or shared? If any
       are duplicated, how are they referenced unambiguously?

     * If the same-named (including same parameter/result profile)
       operation is inherited along two paths, how is the ambiguity
       resolved? Can you override each with different code? How do you
       refer to them later?

     * Etc.


   For answers, you can look at the various languages that define a
   built-in approach to multiple inheritance. Unfortunately, you will
   generally get a different answer for each language -- hardly a
   situation that suggests we will be able to craft an international
   consensus. Eiffel uses renaming and other techniques, which seem quite
   flexible, but at least in some examples, can be quite confusing (where
   you override "B" to change what "A" does in some distant ancestor).
   C++ has both non-virtual and virtual base classes, with a number of
   rules associated with each, and various limitations relating to
   downcasting and virtual base classes. CLOS uses simple name matching
   to control "slot" merging. Some languages require that all but one of
   the parent types be abstract, data-less types, so only interfaces are
   being inherited; however if the interfaces happen to collide, you
   still can end up with undesirable and potentially unresolvable
   collisions (where you really want different code for same-named
   interfaces inherited from different ancestors).

   One argument is that collisions are rare to begin with, so it doesn't
   make much difference how they are resolved. That is probably true, but
   the argument doesn't work too well during an open language design
   process -- people get upset at the most unbelievably trivial and
   rarely used features if not "correctly" designed (speaking from
   experience here ;-).

   Furthermore, given that many of the predominant uses of MI (separation
   of interface inheritance from implementation inheritance, gaining
   convenient access to another class's features, has-a relationships
   being coded using MI for convenience, etc.) are already handled very
   well in Ada 9X, it is hard to justify getting into the MI language
   design fray at all. The basic inheritance model in Ada 9X is simple
   and elegant. Why clutter it up with a lot of relatively ad-hoc rules
   to handle one particular approach to MI? For the rare cases where MI
   is really critical, the last thing the programmer wants in the
   language is the "wrong" MI approach built in.

   So the basic answer is that at this point in the evolution of OO
   language design, it seemed wiser to provide MI building blocks, rather
   than to foist the wrong approach on the programmer, and be regretting
   it and working around it for years to come.

   Perhaps [Douglas Arndt] said it best...

     Final note: inheritance is overrated, especially MI. ...


   If the only or primary type composition mechanism in the language is
   based on inheritance, then by all means, load it up. But Ada 9X
   provides several efficient and flexible type composition mechanisms,
   and there is no need to overburden inheritance with unnecessary and
   complicated baggage.


5.5: Why are Controlled types so, well, strange?

   (Tucker Taft responds):

   We considered many approaches to user-defined finalization and
   user-defined assignment. Ada presents challenges that make it harder
   to define assignment than in other languages, because assignment is
   used implicitly in several operations (by-copy parameter passing,
   function return, aggregates, object initialization, initialized
   allocators, etc.), and because Ada has types whose set of components
   can be changed as a result of an assignment.

   For example:

     type T (D : Boolean := False) is record
       case D is
         when False => null;
         when True => H : In_Hands;
       end case;
     end record;

     X,Z : T;
     Y : T := (True, H => ...);

     ...

     X := Y;   -- "X.H" component coming into existence
     Y := Z;   -- "Y.H" component going out of existence


   With a type like the one above, there are components that can come and
   go as a result of assignment. The most obvious definition of
   assignment would be:

     procedure ":=" (Left : in out In_Hands; Right : in In_Hands);


   Unfortunately, this wouldn't work for the "H" component, because there
   is no preexisting "In_Hands" component to be assigned into in the
   first case, and in the second case, there is no "In_Hands" component
   to assign "from."

   Therefore, we decided to decompose the operation of assignment into
   separable pieces: finalization of the left hand side; simple copying
   of the data from the right hand side to the left hand side; and then
   adjustment of the new left hand side. Other decompositions are
   probably possible, but they generally suffer from not being easily
   composable, or not handling situations like the variant record above.
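
   As a hedged sketch of how those pieces fit together (the Handle type
   and its component are invented for illustration), a controlled type
   overrides Adjust to "fiddle" with the freshly copied data and Finalize
   to clean up the old value:

      with Ada.Finalization;
      package Handles is
         type String_Ptr is access String;

         type Handle is new Ada.Finalization.Controlled with record
            Data : String_Ptr;
         end record;

         procedure Adjust   (H : in out Handle);
         procedure Finalize (H : in out Handle);
      end Handles;

      with Ada.Unchecked_Deallocation;
      package body Handles is
         procedure Free is
            new Ada.Unchecked_Deallocation (String, String_Ptr);

         procedure Adjust (H : in out Handle) is
         begin
            -- The bits of the source have already been copied into H;
            -- make the copy own its own string rather than share one.
            if H.Data /= null then
               H.Data := new String'(H.Data.all);
            end if;
         end Adjust;

         procedure Finalize (H : in out Handle) is
         begin
            Free (H.Data);  -- deallocating a null pointer has no effect
         end Finalize;
      end Handles;

   On an assignment Left := Right, Finalize (Left) runs, the bits of Right
   are copied into Left, and then Adjust (Left) runs; the user writes only
   the Adjust and Finalize parts.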

   Imagine a function named ":=" that returns a copy of its in parameter.
   To do anything interesting it will have to copy the in parameter into
   a local variable, and then "fiddle" with that local variable
   (essentially what "Adjust" does), and then return that local variable
   (which will make yet another copy). The returned result will have to
   be put back into the desired place (which might make yet another
   copy). For a large object, this might involve several extra copies.

   By having the user write just that part of the operation that
   "fiddles" with the result after making a copy, we allow the
   implementation to eliminate redundant copying. Furthermore, some
   user-defined representations might be position dependent. That is, the
   final "fiddling" has to take place on the object in its final
   location. For example, one might want the object to point to itself.
   If the implementation copies an object after the user code has
   adjusted it, such self-references will no longer point to the right
   place.
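
   A hedged sketch of that last point (the Node type is invented): because
   Adjust runs on the object in its final location, it can safely
   re-establish a component that must designate the enclosing object
   itself:

      with Ada.Finalization;
      package Nodes is
         type Node;
         type Node_Ptr is access all Node;

         type Node is new Ada.Finalization.Controlled with record
            Self : Node_Ptr;  -- must point at the enclosing object
         end record;

         procedure Adjust (N : in out Node);
      end Nodes;

      package body Nodes is
         procedure Adjust (N : in out Node) is
         begin
            -- N is already in its final location, so this self-reference
            -- stays valid; a later hidden copy would invalidate it.
            N.Self := N'Unchecked_Access;
         end Adjust;
      end Nodes;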

   So, as usual, once one gets into working out the details and all the
   interactions, the "obvious" proposal (such as a procedure ":=") no
   longer looks like the best answer, and the best answer one can find
   potentially looks "clumsy" (at least before you try to work out the
   details of the alternatives).


5.6: What do "covariance" and "contravariance" mean, and does Ada support
either or both?

   (From Robert Martin) [This is C++ stuff, it should be completely
   re-written for Ada. --MK]


 R> covariance:  "changes with"
 R> contravariance: "changes against"

 R> class A
 R> {
 R>    public:
 R>      A* f(A*);   // method of class A, takes A argument and returns A
 R>      A* g(A*);   // same.
 R> };

 R> class B : public A // class B is a subclass of class A
 R> {
 R>   public:
 R>     B* f(B*);  // method of class B overrides f and is covariant.
 R>     A* g(A*);  // method of class B overrides g and is contravariant.
 R> };

 R> The function f is covariant because the type of its return value and
 R> argument changes with the class it belongs to.  The function g is
 R> contravariant because the types of its return value and arguments do not
 R> change with the class it belongs to.


   Actually, I would call g() invariant. If you look in Sather (one of
   the principal languages with contravariance), you will see that the
   method in the descendant class actually can have arguments that are
   superclasses of the arguments of its parent. So for example:

class A : public ROOT
{
   public:
     A* f(A*);   // method of class A, takes A argument and returns A
     A* g(A*);   // same.
};

class B : public A // class B is a subclass of class A
{
  public:
    B* f(B*);  // method of class B overrides f and is covariant.
    ROOT* g(ROOT*);  // method of class B overrides g and is contravariant.
};


   To my knowledge the uses for contravariance are rare or nonexistent.
   (Anyone?) It just makes the rules easy for the compiler to type
   check. On the other hand, covariance is extremely useful. Suppose you
   want to test for equality, or create a new object of the same type as
   the one in hand:

class A
{
   public:
      BOOLEAN equal(A*);
      A* create();
};

class B: public A
{
   public:
      BOOLEAN equal(B*);
      B* create();
};


   Here covariance is exactly what you want. Eiffel gives this to you,
   but the cost is giving up 100% compile-time type safety. This seems
   necessary in cases like these.

   In fact, Eiffel gives you automatic ways to make a method covariant,
   called "anchored types". So you could declare (in a C++/Eiffel hybrid
   notation):

class A
{
   public:
      BOOLEAN equal(like Current *);
      like Current * create();
};


   This says that equal takes an argument of the same type as the current
   object, and that create returns an object of the same type as the
   current one. Now there is not even any need to redeclare these in
   class B; those transformations happen for free!
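
   In Ada 9X terms (and as the editorial note above suggests, this topic
   deserves an Ada rendering), controlling operands and controlling
   results behave covariantly when a tagged type is derived. A hedged
   sketch with invented names, bodies omitted:

      package Shapes is
         type Shape is tagged null record;

         function Same_Size (Left, Right : Shape) return Boolean;
         function Clone (S : Shape) return Shape;   -- controlling result
      end Shapes;

      package Shapes.Circles is
         type Circle is new Shape with record
            Radius : Float := 0.0;
         end record;

         -- The inherited Same_Size now takes two Circle operands: the
         -- controlling parameters "change with" the derived type.
         -- An inherited function with a controlling result cannot build
         -- the extension's new components, so it must be overridden,
         -- again covariantly:
         function Clone (S : Circle) return Circle;
      end Shapes.Circles;

   A dispatching call to Same_Size checks at run time that both
   controlling operands carry the same tag, which is how Ada keeps this
   covariant behaviour type-safe without anchored types.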


5.7: What is meant by upcasting/expanding and downcasting/narrowing?

   (Tucker Taft replies):

   Here is the symmetric case to illustrate upcasting and downcasting.

     type A is tagged ...;   -- one parent type

     type B is tagged ...;   -- another parent type

     ...

     type C;   -- the new type, to be a mixture of A and B

     type AC (Obj : access C'Class) is
       new A
       with ...;
       -- an extension of A to be mixed into C

     type BC (Obj : access C'Class) is
       new B
       with ...;
       -- an extension of B to be mixed into C

     type C is
       tagged limited record
         A : AC (C'Access);
         B : BC (C'Access);
         ... -- other stuff if desired
       end record;


   We can now pass an object of type C to anything that takes an A or B
   as follows (this presumes that Foobar and QBert are primitives of A
   and B, respectively, so they are inherited; if not, then an explicit
   conversion (upcast) to A and B could be used to call the original
   Foobar and QBert).

     XC : C;
   ...
     Foobar (XC.A);
     QBert (XC.B);


   If we want to override what Foobar does, then we override Foobar on
   AC. If we want to override what QBert does, then we override QBert on
   BC.

   Note that there are no naming conflicts, since AC and BC are distinct
   types, so even if A and B have same-named components or operations, we
   can talk about them and/or override them individually using AC and BC.
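
   For instance (a hedged sketch; the parameter profiles of Foobar and
   QBert are assumed, since they are not shown above), the overridings
   are just new declarations for the extension types, placed after AC and
   BC but before the full declaration of C, which freezes them:

      procedure Foobar (X : in AC);   -- overrides the Foobar inherited from A
      procedure QBert  (X : in BC);   -- overrides the QBert inherited from B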


   Upcasting (from C to A or C to B) is trivial -- A(XC.A) upcasts to A;
   B(XC.B) upcasts to B.

   Downcasting (narrowing) is also straightforward and safe. Presuming XA
   of type A'Class, and XB of type B'Class:

     AC(XA).Obj.all downcasts to C'Class (and verifies XA in AC'Class)
     BC(XB).Obj.all downcasts to C'Class (and verifies XB in BC'Class)


   You can check before the downcast to avoid a Constraint_Error:

     if XA not in AC'Class then -- appropriate complaint

     if XB not in BC'Class then -- ditto


   The approach is slightly simpler (though less symmetric) if we choose
   to make A the "primary" parent and B a "secondary" parent:

     type A is ...
     type B is ...

     type C;

     type BC (Obj : access C'Class) is
       new B
       with ...

     type C is
       new A
       with record
         B : BC (C'Access);
         ... -- other stuff if desired
       end record;


   Now C is a "normal" extension of A, and upcasting from C to A and
   (checked) downcasting from C'Class to A (or A'Class) is done with
   simple type conversions. The relationship between C and B is as above
   in the symmetric approach.

   Not surprisingly, using building blocks is more work than using a
   "builtin" approach for simple cases that happen to match the builtin
   approach, but having building blocks does ultimately mean more
   flexibility for the programmer -- there are many other structures that
   are possible in addition to the two illustrated above, using the
   access discriminant building block.

   For example, for mixins, each mixin "flavor" would have an access
   discriminant already:

     type Window is ...  -- The basic "vanilla" window

     -- Various mixins
     type Win_Mixin_1 (W : access Window'Class) is ...

     type Win_Mixin_2 (W : access Window'Class) is ...

     type Win_Mixin_3 (W : access Window'Class) is ...


   Given the above vanilla window, plus any number of window mixins, one
   can construct a desired window by including as many mixins as wanted:

     type My_Window is
       new Window
       with record
          M1  : Win_Mixin_1 (My_Window'Access);
          M3  : Win_Mixin_3 (My_Window'Access);
          M11 : Win_Mixin_1 (My_Window'Access);
         ... -- plus additional stuff, as desired.
       end record;


   As illustrated above, you can incorporate the same "mixin" multiple
   times, with no naming conflicts. Every mixin can get access to the
   enclosing object. Operations of individual mixins can be overridden by
   creating an extension of the mixin first, overriding the operation in
   that, and then incorporating that tweaked mixin into the ultimate
   window.
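
   A hedged sketch of that last step (it assumes Win_Mixin_1 is a tagged
   type with a primitive operation named Redraw, neither of which is
   shown above):

      type Tweaked_Mixin_1 is new Win_Mixin_1 with null record;
        -- inherits the access discriminant W from Win_Mixin_1

      procedure Redraw (M : in out Tweaked_Mixin_1);
        -- overrides the inherited Redraw (profile assumed)

      type My_Tweaked_Window is
        new Window
        with record
          M1 : Tweaked_Mixin_1 (My_Tweaked_Window'Access);
        end record;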

   I hope the above helps better illustrate the use and flexibility of
   the Ada 9X type composition building blocks.


5.8: How does Ada do "narrowing"?

   Dave Griffith said

      . . . Nonetheless, the Ada9x committee chose a structure-based
     subtyping, with all of the problems that that is known to cause. As
     the problems of structure based subtyping usually manifest only in
     large projects maintained by large groups, this is _precisely_ the
     subtype paradigm that Ada9x should have avoided. Ada9x's model is,
     as Tucker Taft pointed out, quite easy to use for simple OO
     programming. There is, however, no good reason to _do_ simple OO
      programming. OO programming's gains kick in somewhere around 10,000
     LOC, with greatest gains at over 100,000. At these sizes, "just
     declare it tagged" will result in unmaintainable messes. OO
     programming in the large rapidly gets difficult with structure based
     subtyping. Allowing by-value semantics for objects compounds these
     problems. All of this is known. All of this was, seemingly, ignored
     by Ada9x.


   (Tucker Taft answers)

   As explained in a previous note, Ada 9X supports the ability to hide
   the implementation heritage of a type, and only expose the desired
   interface heritage. So we are not stuck with strictly "structure-based
   subtyping." Secondly, by-reference semantics have many "well known"
   problems as well, and the designers of Modula-3 chose to, seemingly,
   ignore those ;-) ;-). Of course, in reality, neither set of language
   designers ignored either of these issues. Language design involves
   tradeoffs. You can complain we made the wrong tradeoff, but to
   continue to harp on the claim that we "ignored" things is silly. We
   studied every OOP language under the sun on which we could find any
   written or electronic material. We chose value-based semantics for
   what we believe are good reasons, based on reasonable tradeoffs.

   First of all, in the absence of an integrated garbage collector,
   by-reference semantics doesn't make much sense. Based on various
   tradeoffs, we decided against requiring an integrated garbage
   collector for Ada 9X.

   Secondly, many of the "known" problems with by-value semantics we
   avoided, by eliminating essentially all cases of "implicit
   truncation." One of the problems with the C++ version of "value
   semantics" is that on assignment and parameter passing, implicit
   truncation can take place mysteriously, meaning that a value that
   started its life representing one kind of thing gets truncated
   unintentionally so that it looks like a value of some ancestor type.
    This is largely because the name of a C++ class means different things
   depending on the context. When you declare an object, the name of the
   class determines the "exact class" of the object. The same thing
   applies to a by-value parameter. However, for references and pointers,
   the name of a class stands for that class and all of its derivatives.
   But since, in C++, a value of a subclass is always acceptable where a
   value of a given class is expected, you can get implicit truncation as
   part of assignment and by-value parameter passing. In Ada 9X, we avoid
   the implicit truncation because we support assignment for "class-wide"
   types, which never implicitly truncates, and one must do an explicit
   conversion to do an assignment that truncates. Parameter passing never
   implicitly truncates, even if an implicit conversion is performed as
   part of calling an inherited subprogram.
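
   A hedged sketch of the difference (the Point and Pixel types are
   invented for illustration):

      type Point is tagged record
         X, Y : Float := 0.0;
      end record;

      type Pixel is new Point with record
         Color : Integer := 0;
      end record;

      P   : Point;
      PX  : Pixel;
      Any : Point'Class := PX;  -- class-wide: keeps the whole Pixel value

      ...

      -- P := Any;              -- illegal: no implicit truncation
      P := Point (PX);          -- legal: explicit conversion keeps only
                                -- the Point part of the value
      Any := P;                 -- legal statically, but raises
                                -- Constraint_Error here: the tags differ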


5.9: What is the difference between a class-wide access type and a "general"
class-wide access type?

      What exactly is the difference between

type A is access Object'Class;

     and

type B is access all Object'Class;

      In the RM and Rationale only definitions like B are used. What's the
      use of A-like definitions?


   (Tucker Taft answers)

   The only difference is that A is more restrictive, and so presumably
   might catch bugs that B would not. A is a "pool-specific" access type,
   and as such, you cannot convert values of other access types to it,
   nor can you use 'Access to create values of type A. Values of type A
   may only point into its "own" pool; that is only to objects created by
   allocators of type A. This means that unchecked-deallocation is
   somewhat safer when used with a pool-specific type like A.

   B is a "general" access type, and you can allocate in one storage
   pool, and then convert the access value to type B and store it into a
   variable of type B. Similarly, values of type B may point at objects
   declared "aliased."

   When using class-wide pointer types, type conversion is sometimes used
   for "narrowing." This would not in general be possible if you had left
   out the "all" in the declaration, as in the declaration of A. So, as a
   general rule, access-to-classwide types usually need to be general
   access types. However, there is no real harm in starting out with a
   pool-specific type, and then if you find you need to do a conversion
   or use 'Access, the compiler should notify you that you need to add
   the "all" in the declaration of the type. This way you get the added
   safety of using a pool-specific access type, until you decide
   explicitly that you need the flexibility of general access types.
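
   A hedged sketch of what each kind of type permits (it assumes Object is
   a concrete, library-level tagged type, and that these declarations are
   also at library level):

      Obj : aliased Object;

      PA : A := new Object;   -- allocators are the only source of A values
      PB : B := Obj'Access;   -- a general type may designate aliased objects

      ...

      PB := B (PA);           -- legal: conversion toward the general type
      -- PA := Obj'Access;    -- illegal: 'Access cannot yield a
                              -- pool-specific access value
      -- PA := A (PB);        -- illegal: no conversion to a pool-specific type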

   In some implementations, pool-specific access types might have a
   shorter representation, since they only need to be able to point at
   objects in a single storage pool. As we move toward 64-bit address
   spaces, this might be a significant issue. I could imagine that
   pool-specific access types might remain 32-bits in some
   implementations, while general access types would necessarily be
   64-bits.


