Monday, August 29, 2011

Limitations of E-R model

To quash one very common misconception, we emphasize that the E-R approach is not a relative, a derivative, or a generalization of the relational data model. In fact, it is not a data model at all but a design methodology, which can be applied to the relational model. The term “relationship” refers to one of the two main components of the methodology rather than to the relational data model.

The two main components of the E-R approach are the concepts of entity and relationship.

· Entities model the objects that are involved in an enterprise.

· Relationships model the connections among the entities.

NAVIGATION TRAP:

Starting with a given entity and moving along the triangle formed by three relationships, we might end up with a different entity of the same type. This problem is referred to as a navigation trap.

Consider the example about Client/Broker information.

Figure: Client/Broker E-R diagram

There is one problem with our design. Suppose that we have the following relationships:

§ ⟨Client1, Acct1, Office1⟩ ∈ HasAccount

§ ⟨Acct1, Broker1⟩ ∈ HandledBy

§ ⟨Broker1, Office2⟩ ∈ WorksIn

What is there to ensure that Office1 and Office2 are the same, i.e., that Acct1’s office is the same as the office of the broker who handles Acct1? This problem is known as a navigation trap: starting with a given entity, Office1, and moving along the triangle formed by the three relationships HasAccount, HandledBy, and WorksIn, we might end up with a different entity, Office2, of the same type.
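One way to state the missing constraint explicitly (a sketch in first-order notation; the variable names are ours) is as a condition relating the three relationship sets, which is precisely the kind of dependency that keys and participation constraints cannot capture:

\[ \forall c, a, b, o, o' :\; \langle c, a, o \rangle \in \mathit{HasAccount} \;\wedge\; \langle a, b \rangle \in \mathit{HandledBy} \;\wedge\; \langle b, o' \rangle \in \mathit{WorksIn} \;\Rightarrow\; o = o' \]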

Navigation traps of this kind are particularly difficult to avoid in the E-R model, because doing so requires the use of participation constraints in combination with so-called functional dependencies, and the E-R model supports such constraints only in a very limited way: as keys and participation constraints.

Note that we can avoid the navigation trap by removing the relationship HasAccount completely and reintroducing the Owns relationship between clients and accounts. However, this brings back the earlier problem: the constraint that a client cannot have more than one account in any given office is no longer represented.

CONVERSION FROM ENTITY TO ATTRIBUTE OR RELATIONSHIP TO ENTITY:

There is considerable freedom in deciding whether a particular datum should be an entity, a relationship, or an attribute. The arity of a relationship might change by demoting an entity to an attribute or by collapsing an entity into a relationship.



Figure 1

Entity or attribute? In Figure 1, semesters are represented as entities. However, we could just as well make Transcript into a binary (rather than ternary) relationship and turn Semester into one of its attributes. The obvious question is which representation is best (and in which case).

Ø To some extent, the decision of whether a particular datum should be represented as an entity or an attribute is a matter of taste. Beyond that, the representation might depend on whether the datum has an internal structure of its own.

Ø If the datum has no internal structure, keeping it as a separate entity makes the E-R diagram more complex and, more important, adds an extra relation to your database schema when you convert the diagram into the relational model.

Ø If the datum has attributes of its own, it is possible that these attributes cannot be represented if the datum itself is demoted to the status of an attribute.

For instance, in Figure 1 the entity type Semester does not have attributes of its own, so representing the semester information as an entity appears to be overkill. However, it is entirely possible that the Requirements Document might state that the following additional information must be available for each semester: Start_date, End_date, Holidays, Enrollment. In such a case, the semester information cannot be an attribute of the Transcript relationship.

Entity or relationship? Consider Figure 1, where we treat transcript records as relationships between Student, Course, and Semester entities. An alternative to this design is to represent transcript records as entities and use a new relationship type, Enrolled, to connect them, as shown below.

Figure 2

Here we incorporate some of the attributes for the entity Semester, as discussed earlier. We also add an extra attribute, Credits, to the relationship Enrolled. Clearly, the two diagrams represent the same information, but which one is better?

· For instance, it is a good idea to keep the total number of entities and relationships as small as possible, because it is directly related to the number of relations that will result when the E-R diagram is converted to the relational model.

· Generally, it is not too serious a problem if two relations are lumped together at this stage, because relational design theory is geared toward identifying relation schemas that must be split and provides algorithms for doing so.

· On the other hand, it is much harder to spot the opposite problem: needlessly splitting one relation into two or more.

Coming back to Figure 2, we notice that there is a participation constraint for the entity Transcript in the relationship type Enrolled. Moreover, the arrow leading from Transcript to Enrolled indicates that the Transcript role forms a key of the Enrolled relationship. Therefore, there is a one-to-one correspondence between the relationships of type Enrolled and the entities of type Transcript.

This means that relationships of type Enrolled can be viewed as superfluous, because Transcript entities can be used instead to relate the entities of types Student, Course, and Semester. All that is required (in order not to lose information) is to transfer the proper attributes of Enrolled to Transcript after converting the latter into a relationship. As a result of this discussion, we have the following rule:

*Consider a relationship type, R, that relates the entity types E1, . . . , En, and suppose that E1 is attached to R via a role that (by itself) forms a key of R, and that a participation constraint exists between E1 and R. Then it might be possible to collapse E1 and R into a new relationship type that involves only the entity types E2, . . . , En.

Note that this rule is only an indication that E1 can be collapsed into R, not a guarantee that this is possible. For instance, E1 might be involved in some other relationship, R′. In that case, collapsing E1 into R leaves an edge that connects two relationship types, R and R′, which is not allowed by the construction rules for E-R diagrams.

Information loss:

The arity of a relationship might change by demoting an entity to an attribute or by collapsing an entity into a relationship. In all of these cases, however, the transformations obviously preserve the diagram's information content. There are, nevertheless, some typical situations where seemingly innocuous transformations cause information loss; that is, they lead to diagrams with subtly different information content.

Consider the Parts/Supplier/Project diagram of Figure 3. Some designers do not like ternary relationships, preferring to deal with multiple binary relationships instead. Such a decision might lead to the diagram shown in Figure 4.

Figure 3

Figure 4

Although superficially the new diagram seems equivalent to the original, there are several subtle differences.

*First, the new design introduces a navigation trap. It is possible that a supplier, Acme, sells “Screw” and that Acme has sold something to the project “Screw Driving.” It is even possible that the screw-driving project uses screws of the kind Acme sells.

*From the relationships represented in the diagram it is not possible to conclude that it was Acme who sold these screws to the project. All we can tell is that Acme might have done so.

*The other problem with the new design is that the price attribute is now associated with the relationship Supplies. This implies that a supplier has a fixed price for each item regardless of the project to which that item is sold.

*In contrast, the original design in Figure 3 supports different pricing for different projects. Similarly, the new design allows only one transaction between any supplier and project on any given day, because each transaction is represented as a triple (p, s, d), so there is no way to distinguish among different transactions between the same parties on the same day.
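The gap between the two designs can also be stated algebraically. Writing R for the original ternary relationship over parts (P), suppliers (S), and projects (J), and decomposing it into its three binary projections (a sketch in standard relational-algebra notation; the attribute letters are ours), we only get

\[ R \;\subseteq\; \pi_{PS}(R) \,\bowtie\, \pi_{SJ}(R) \,\bowtie\, \pi_{JP}(R), \]

and the containment can be proper: the extra tuples produced by the join are exactly the "Acme might have done so" combinations that the binary design cannot rule out.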

E-R and object databases:

Some of the difficult issues involved in translating E-R diagrams into schemas become easier for object databases.

* Issues involved in representing entities with set-valued attributes in a relational database: the objects stored in an object database can have set-valued attributes, so the representation of such entities in the schema of the object database is considerably easier.

* Issues involved in representing the IsA relationship in a relational database: object databases allow a direct representation of the IsA relationship within the schema, so, again, representation of such relationships is considerably easier.

It should be apparent that not only is it generally easier to translate E-R diagrams into schemas for object databases than into schemas for relational databases, but for many applications object databases allow a much more intuitive model of the enterprise than do relational databases.

Monday, August 22, 2011

COMPARING EXPRESSIVENESS OF DIFFERENT QUERY LANGUAGES


This paper outlines a series of exercises that relate propositional logic to various straightforward query languages as used by some popular programs. It also provides exercises that naturally motivate the use of Boolean algebra to translate between equivalent formulas (including a practical application of converting a formula to conjunctive normal form). These exercises also illustrate connections between topics students see in logic/discrete math, programming topics, and topics in design.
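As a tiny illustration of the kind of rewriting these exercises involve (the formula is our own example, not taken from the paper), distributing disjunction over conjunction converts a search condition into conjunctive normal form:

\[ (A \wedge B) \vee C \;\equiv\; (A \vee C) \wedge (B \vee C) \]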

Click here to download this paper.

Friday, August 19, 2011

Class Assignment 1: Hospital Administration

Schema:
Nurse(NID, Name, Bdate, WID)
Function(FID, Fname, Description)
Ward(WID, Wname, Location)
Services(WID, FID)
Certified(NID, FID)
Queries:
1. Print the names of nurses not assigned to any ward.
2. Print the name of the ward to which no nurse is assigned.
3. For each ward, print the ward name and the number of services it offers.
4. Print the ward with the maximum number of nurses assigned.
5. Print the names of nurses whose functions are ensured by the ward to which they are assigned.
6. List the wards that offer all services offered by ward w1.
7. Print the name of the most certified nurse.
8. Print pairs of nurses assigned to the same ward.
9. Print the names of wards that ensure each function offered by the hospital.
10. Print the nurse-ids of nurses certified for every function the hospital offers.
11. For each ward, print the ward-id and the nurse-id of the most certified nurse.

A Pedagogical tool for Teaching Advanced Database Systems


Designing and implementing database engines may have become a lost art. Although most standard database textbooks include ample coverage of algorithms for the design and implementation of database engines, many computer science programs seem to provide minimal coverage of the file organizations, theoretical foundations, and algorithms necessary to build a database engine. The systematic removal of “file organizations and information retrieval” as a topic of study, coupled with greater emphasis on the so-called “practical applications” of databases, has eliminated coverage of the theory and implementation of the underlying database engine.


This paper discusses a step-by-step process by which an advanced database course can design and construct a simple, yet fully functional, database engine.

Click here to download this paper.

A Simpler (and Better) SQL Approach to Relational Division


A common type of database query requires finding all tuples of one table that are related to each and every tuple of a second group. In general, such queries can be solved using the relational algebra division operator. This paper argues that phrasing this operator in SQL presents an overwhelming challenge to novice and even experienced database programmers, and it presents an alternative solution that is not only more intuitive and easier to deliver in the classroom but also exhibits better computational performance.
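For reference, the division operator in question has the following standard definition, where r is a relation over attributes X ∪ Y, s is a relation over attributes Y, and t ∪ u denotes the combined tuple:

\[ r \div s \;=\; \{\, t \in \pi_X(r) \;\mid\; \forall u \in s :\; t \cup u \in r \,\} \]

In words, r ÷ s returns those X-values of r that are paired in r with every tuple of s.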

Click here to download this paper.

A Relational Model of Data for Large Shared Data Banks - IBM Research Laboratory

Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed, and even when the external representation is changed. Changes in data representation will often be needed as a result of changes in query and update traffic, among other factors. The paper examines the inadequacies of tree-structured files and general network data models, and it provides operations on relations to address the problems of consistency and redundancy in the user's model.

Click here to download the paper.

Basic Data model proposals


This paper provides a summary of 35 years of data model proposals, grouped into nine different eras. It discusses the proposals of each era and shows that there are only a few basic data modeling ideas, most of which have been around a long time. Later proposals inevitably bear a strong resemblance to certain earlier proposals; hence, it is a worthwhile exercise to study previous proposals. The paper studies data models in nine historical epochs.

Click here to download the paper.



Sunday, August 14, 2011

Oracle Database Download

To practice SQL queries at home, you can download Oracle 10g Express Edition from here and then follow this tutorial. (Note that the sample database supplied with Oracle 10g is the same database we use in college.)

Sunday, August 7, 2011

Case Study - Scenario 1 (ER Model)

You have been engaged by a pizza shop to create a database to enable the business to efficiently keep records. The pizza shop sells pizzas to customers who walk in (takeaway) and also takes phone orders, which may be delivered to the customer's address or held at the shop until the customer comes in to pick them up.

When a customer orders by phone, the shop assistant taking the order asks the customer for their phone number and enters it into the computer system. The assistant also enters his ID number so a record is kept of who took the order. The time when the phone was answered is recorded, as well as the time when the phone call was terminated. (From this we can calculate how many orders each assistant took and the average time for each order.) If the customer has previously ordered by phone, the name (last name only) and address appear on the screen. The customer is then asked for his name and address, and the assistant then takes the order. If the customer has not ordered before, or if the name and address given do not correspond with those in the computer, or if that phone number and address is marked as having been the subject of a hoax previously, then after the order has been taken the assistant dials the number given and confirms the order with the customer. It is required that each order given by a customer be recorded. The price of each item is recorded with the order.

Pizzas are made from ingredients. From the database you must be able to determine the name and amount of each ingredient for a particular type of pizza. For each ingredient, the amount on hand (in the shop) and the date of the last stocktake are kept. (Stocktakes are done weekly.) It should be possible to calculate from the database the total amount of each ingredient used in the making of the pizzas from one stocktake to the next. Any variation from the expected amount is also recorded for that stocktake as a percentage, e.g. -1%. For each ingredient we also keep the name and address of the supplier. Each ingredient can be supplied by many suppliers and each supplier can supply many ingredients. Not all pizzas are the same price, so a record exists in the database for the current price of each type of pizza.

The employees at the shop may be divided into two types: those who work in the shop and those who carry out the deliveries (the drivers). For each employee, their name, address, home phone number, and tax file number are kept. For the drivers, their driver's license number is also kept. Hours of work are not regular, so a record must be kept of the hours worked each time an employee works. Employees inside the shop are paid at an hourly rate. Drivers are paid according to the number of deliveries they do. A record is kept of which orders a driver delivers and of how many deliveries a driver does on each shift.

Monday, July 25, 2011

List of DBMS Assignments

 1.  Demonstration of data independence with ADT
 2.  Simple Application using conventional files
 3.  Demonstration of Hierarchical model
 4.  Demonstration of Network model
 5.  Simple tool to convert E-R diagram to tables
 6. Persistent Java API
 7. Pro C
 8. SQL J
 9. Implementation of editor with shadow copying
10. Demonstration of race conditions in concurrent programs

Database Management Systems

This blog will now serve as the official blog for DBMS 3-1 2011-2012.

Tuesday, July 19, 2011

static library

CREATION OF STATIC LIBRARY
--------------------------------------------

The creation of a static library is illustrated below with the help of the following stages.

step1:
--------

vi file1.c /* The file to be called */

#include <stdio.h>   /* needed for printf */

void call()
{
    printf("hai");
}

--> gcc -c file1.c

In this step, the object file of the code is obtained.



step 2:
--------

/*Header file which has prototype of file1.c*/

vi h1.h

void call();

--> ar rs libstat.a file1.o

The above command places the object file generated in the previous stage into the archive libstat.a.




step3:
-------
/* create "include" and "lib" folders (/home/student) */

note: create these folders in the /home/student folder.

(a): copy h1.h to the include folder

(b): copy libstat.a to the lib folder


step 4:
--------

vi program.c

#include "h1.h"    /* prototype of call() */

int main(void)
{
    call();
    return 0;
}

This is the application program making use of the static library.


step 5:
--------

gcc --static -I /home/student/include -L /home/student/lib -o program program.c -lstat



note: -I adds a directory to the header search path, -L adds a directory to the library search path, and -l names the library to link against (libstat.a is linked with -lstat).

The application code must be compiled using the above command.


step 6:
-------

./program

Executing the application code





Saturday, July 16, 2011


Screen shots of Personal Calendar

-> It looks like this when it is opened
-> It looks like this when the "save" button is clicked
-> It looks like this when the "view" button is clicked

It is done using AWT, Collections, Files, IO streams, and different LayoutManagers. It can store a maximum of three events for a particular date and month. It works only for this year. You can download the .class file here, run it on your own system, and test it. Any queries are accepted and answered.

Thursday, April 7, 2011

CREATION OF STATIC LIBRARIES IN LINUX

An archive is a single file holding a collection of other files in a structure that makes it possible to retrieve the original individual files (called members of the archive). A static library is an archive whose members are object files. A library makes it possible for a program to use common routines without the administrative overhead of maintaining their source code, or the processing overhead of compiling them each time the program is compiled. Conventionally, static libraries end with the ``.a'' suffix. Dynamic linking involves loading the subroutines of a library into an application program at load time or runtime, rather than linking them in at compile time. With static linking, it is enough to include those parts of the library that are directly or indirectly referenced by the target executable (or target library).
Implementation:
The XXX.c file containing the code for the library is converted into an object file. Then the archiver (ar) is invoked to produce a static library (named libXXXX.a) out of the object file XXX.o. It is important to note that the library name must start with the three letters lib and have the suffix .a since it is static; shared libraries have .so as the suffix instead. When a program needs a function that is stored in a static library, it includes the header file that declares the function. The program that uses the library must be linked with it, and linking is done using the gcc -l<name> option. The user program is then executed to obtain the output. The striking feature of static linking is that it avoids dependency problems and increases reusability.

Monday, March 28, 2011

PERSONAL CALENDAR



Using personal calendars we can personalize our significant events. This personal calendar is a Java application.

The interface is for regular users to browse and edit their calendars. The initial display consists of one window with two parts: a command menu and a regular monthly display.

The File menu contains typical commands for manipulating data files. 'File New' opens a new calendar in a new display window. 'File Open' opens an existing calendar from a previously saved file, displaying it in the current display window or a new window as selected by the user. 'File Close' closes the currently active calendar, offering to save if it has been modified since opening. (The currently active calendar is the one on which the user has most recently performed a command.) 'File Save' saves the currently active calendar on the file from which it was opened, or on a new file if it was created from a new display.

The View menu allows the user to browse through a calendar in a variety of ways. 'View Item' displays the scheduling details for a selected scheduled item. 'View Day' displays details of the currently selected calendar day. 'View Week' displays the seven-day week in which the currently selected day appears, with less detail than the daily display. Weeks can be displayed in tabular or list format. 'View Goto Date' displays a dialog for choosing a specific date to become the current date in the active display.

Appointment scheduling is one of the most commonly performed operations with the calendar system. To schedule an appointment, the user selects the 'Appointment' command in the Schedule menu. In response, the system displays a dialog. The 'Title' field is a one-line string that describes the appointment briefly. The 'Date' field contains the date on which the appointment is to occur. Appointment security is one of four levels: 'public', 'title only', 'confidential', and 'private'.


Sunday, March 27, 2011

VIRTUAL TABLE

(Team 2 : 71,49,31,30,29,23,876).


A virtual table, or vtable, is a mechanism used in a programming language to support dynamic dispatch (dynamic polymorphism), where binding is done at run time. The virtual table is a lookup table of functions used to resolve function calls under dynamic binding.

Suppose a program contains several classes in an inheritance hierarchy, i.e., a superclass and two subclasses. When the program calls a virtual function through a superclass pointer or any of the subclass pointers, the run-time environment must be able to determine which implementation to call, depending on the actual type of the object that is pointed to. There are a variety of different ways to implement such dynamic dispatch, but the vtable solution is common among C++ and related languages, because it allows objects to use a different implementation simply by using a different set of method pointers.

The virtual table is actually quite simple, though it's a little complex to describe in words.

Implementation of VTABLE:


Every class that uses virtual functions (or is derived from a class that uses virtual functions) is given its own virtual table. This table is simply a static array that the compiler sets up at compile time. A virtual table contains one entry for each virtual function that can be called by objects of the class. Each entry in this table is simply a function pointer that points to the most-derived function accessible by that class. An object's dispatch table, or vtable, will contain the addresses of the object's dynamically bound methods. Method calls are performed by fetching the method's address from the object's dispatch table. The virtual table is the same for all objects belonging to the same class, and is therefore typically shared between them. Objects belonging to type-compatible classes (for example, siblings in an inheritance hierarchy) will have dispatch tables with the same layout: the address of a given method will appear at the same offset for all type-compatible classes. Thus, fetching the method's address from a given dispatch table offset will get the method corresponding to the object's actual class.

The compiler also adds a hidden pointer to the base class, which we will call the virtual pointer, or vptr. The vptr is set automatically when a class instance is created so that it points to the virtual table for that class. Unlike the *this pointer, which is actually a function parameter used by the compiler to resolve self-references, the vptr is a real pointer. Consequently, it makes each allocated class object bigger by the size of one pointer. It also means that the vptr is inherited by derived classes, which is important. The compiler also generates "hidden" code in the constructor of each class to initialize the vptrs of its objects to the address of the corresponding vtable.
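As a minimal sketch of the behaviour this machinery provides (the Shape/Circle classes are just for illustration), the call through the base-class pointer below is resolved at run time via the object's vptr and vtable, so the derived implementation is selected even though the pointer's static type is the base class:

#include <iostream>

class Shape {                                    // polymorphic base: the compiler builds a vtable for it
public:
    virtual double area() const { return 0.0; } // slot in Shape's vtable
    virtual ~Shape() {}                          // virtual destructor, also a vtable slot
};

class Circle : public Shape {                    // Circle gets its own vtable; its area slot points to Circle::area
public:
    explicit Circle(double r) : r_(r) {}
    double area() const { return 3.14159265 * r_ * r_; }
private:
    double r_;
};

int main() {
    Shape* b = new Circle(2.0);      // static type Shape*, dynamic type Circle
    std::cout << b->area() << '\n';  // dispatched through the vtable: prints Circle's area
    delete b;                        // the virtual destructor ensures Circle is destroyed correctly
    return 0;
}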

Concept of method overloading in generic units:

In the Barton-Nackman technique, a template class derives from an instantiation of another template, passing itself as a template parameter to that other template as follows:

class A : public base<A> { ... };

The important application of this technique is to give a common “base template” for a set of template classes. One can overload operators and other non-member functions for the common base template; the derived classes will match the definitions. Because it avoids run-time dispatching, this technique is often used in the numerical domain [11–13, 42]. The usage of the Barton-Nackman trick is best explained with an example, which we take from the domain of linear algebra. Matrix types (dense matrix, diagonal matrix, and triangular matrix) represent different kinds of matrices. Each of these matrix types has a common interface, which can be exploited to define functions and operators that work for any matrix type. In this example, our interface is just one function: we overload the function call operator, which gives the syntax A(i,j) for accessing the element at row i and column j of matrix A. This interface is defined in the common base template matrix:

template <class Derived>
class matrix {
public:
    double operator()(int i, int j) const {
        return static_cast<const Derived&>(*this)(i, j);
    }
};

The matrix template will always be instantiated in such a way that the template parameter Derived is the type of a derived class. Note the static_cast in the definition of operator(). As long as Derived is the actual type of the object, this is a safe downcast to the derived class. The function call operator thus invokes the function call operator of the derived class without dynamic dispatching. Each specialized matrix type must pass itself as a template argument to the matrix template and define the actual implementation of operator():

class dense_matrix : public matrix<dense_matrix> {
public:
    double operator()(int i, int j) { ... }
    ...
};

class diagonal_matrix : public matrix<diagonal_matrix> {
public:
    double operator()(int i, int j) { ... }
    ...
};

class triangular_matrix : public matrix<triangular_matrix> {
public:
    double operator()(int i, int j) { ... }
    ...
};

With this arrangement, one can overload generic functions for the matrix template, instead of separately for each matrix type. For example, the multiplication operator could be defined as:

template <class A, class B>
typename product_traits<A, B>::type
operator*(const matrix<A>& a, const matrix<B>& b);

The product_traits template is a traits class to determine the result type of multiplying matrices of types A and B. Such traits classes for deducing return types of operations are common in C++ template libraries [15, 40, 42]. We can observe many benefits in this approach. The run-time overhead of virtual function calls is avoided. Several implementation types can be grouped under a common base template, allowing one operator implementation to cover many separate types.

We can also identify several problems with the approach. One needs to be careful not to slice objects: taking the matrix arguments by copy in the above operator* function would not copy the whole object, but only the base class matrix. Thus some data would be lost, leading to undefined behavior at run time when the object is cast to the actual matrix type in the function call operator of the matrix class. It is also difficult to get the desired overloading behavior if some parameters of a function are instances of “Barton-Nackman powered” types but others are not. For example, one might want to define multiplication between two matrices, and additionally between a matrix and a scalar with the scalar as either the left or right argument. The straightforward solution is to define three function templates (we leave the return types unspecified):

template <class A, class B>
... operator*(const matrix<A>& a, const B& b);

template <class A, class B>
... operator*(const A& a, const matrix<B>& b);

template <class A, class B>
... operator*(const matrix<A>& m1, const matrix<B>& m2);

The following code demonstrates why this approach does not work:

diagonal_matrix A; dense_matrix B;

A * B; // error, ambiguous call

The third function is not the best match; instead, the call is ambiguous. The first two definitions are both better than the third definition, but neither is better than the other. Thus, a compiler error occurs. Even though the type matrix<A> is more specialized than A, the actual argument type is not matrix<A> but diagonal_matrix, which derives from matrix<A> where A is diagonal_matrix. Therefore, matching diagonal_matrix with matrix<A> requires a cast, whereas matching it with A does not, making the latter a better match. Thus, the second definition provides a better match for the first argument, and the first definition provides a better match for the second argument; this makes the call ambiguous. There are two immediate workarounds: providing explicit overloads for all common scalar types, or overloading for the element types of the matrix arguments. The first solution is tedious and not generic, because one cannot know all the possible (user-defined) types that might be used as scalars. The second solution is limiting, as it prevents sensible combinations of argument types, such as multiplying a matrix with elements of type double with a scalar of type int.

The curiously recurring template pattern is quite a mind-teaser; it significantly adds to the complexity of the code. Type classes and the enable_if template solve the above problems. We rewrite our matrix example using type classes. First, all matrix types must be instances of the following type class:

template <class T, class Enable = void>
struct matrix_traits { static const bool conforms = false; };

template <class T>
struct matrix {
    BOOST_STATIC_ASSERT(matrix_traits<T>::conforms);
    static double index(const T& M, int i, int j) {
        return matrix_traits<T>::index(M, i, j);
    }
};

A second example is the generic swap function: the usual approach is to provide a default function that uses assignment, and to provide the specialized implementations as overloads. Without enable_if and type class emulation, overloading is limited: for example, it is not possible to define a swap function for all types that derive from a particular class; the generic swap is a better match if the overloaded function requires a conversion from a derived class to a base class (see section 4). Type classes allow us to express the exact overloading rules. First, two type classes are defined. Assignable represents types with assignment and copy construction (as in the Standard Library), and UserSwappable is for types that overload the generic swap function:

template <class T, class Enable = void>
struct assignable_traits { static const bool conforms = false; };

template <class T>
struct assignable {
    BOOST_STATIC_ASSERT((assignable_traits<T>::conforms));
};

template <class T, class Enable = void>
struct user_swappable_traits { static const bool conforms = false; };

template <class T>
struct user_swappable {
    BOOST_STATIC_ASSERT((user_swappable_traits<T>::conforms));
    static void swap(T& a, T& b) {
        user_swappable_traits<T>::swap(a, b);
    }
};

The Assignable requirements are assignment and copy construction, which are not subject to overload resolution and thus need not be routed via the type class mechanism. Second, two overloaded definitions of generic swap are provided. The first is for types that are instances of UserSwappable. The function forwards calls to the swap function defined in the UserSwappable type class:

template <class T>
typename enable_if<user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) {
    user_swappable<T>::swap(a, b);
}

The second overload is used for types which are instances of Assignable but not instances of UserSwappable. The exclusion is needed to direct types that are both Assignable and UserSwappable to the customized swap:

template <class T>
typename enable_if<
    assignable_traits<T>::conforms && !user_swappable_traits<T>::conforms,
    void>::type
generic_swap(T& a, T& b) {
    T temp(a); a = b; b = temp;
}

static const bool conforms = false;

};

template <class T>

struct matrix {

BOOST STATIC ASSERT(matrix traits<T>::conforms);

static double index(const T& M, int i, int j) {

return matrix traits<T>::index(M, i, j);

}

};

Thus, the generic swap function can be defined in the most efficient way possible for each type. There is no fear of overload resolution accidentally picking a function other than the one the programmer intended.
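To make the mechanism concrete, here is a small self-contained sketch of the same idea (the names and the Buffer type are ours, a hand-rolled enable_if stands in for Boost's, and the traits take a single template parameter for brevity). A type opts in to the UserSwappable type class by specializing the traits template, and overload resolution then picks the customized swap without ambiguity:

#include <iostream>
#include <string>

// Stand-in for boost::enable_if_c: the nested 'type' exists only when the condition is true.
template <bool B, class T = void> struct enable_if {};
template <class T> struct enable_if<true, T> { typedef T type; };

// Type-class predicates. By default, assume assignment works and no custom swap exists.
template <class T> struct assignable_traits     { static const bool conforms = true;  };
template <class T> struct user_swappable_traits { static const bool conforms = false; };

// A user-defined type that registers a cheap member swap with the UserSwappable type class.
struct Buffer {
    std::string data;
    static void swap(Buffer& a, Buffer& b) { a.data.swap(b.data); } // O(1), no copying
};
template <> struct user_swappable_traits<Buffer> {
    static const bool conforms = true;
    static void swap(Buffer& a, Buffer& b) { Buffer::swap(a, b); }
};

// Overload 1: enabled only for instances of UserSwappable.
template <class T>
typename enable_if<user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) { user_swappable_traits<T>::swap(a, b); }

// Overload 2: enabled for Assignable types that are NOT UserSwappable.
template <class T>
typename enable_if<assignable_traits<T>::conforms
                   && !user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) { T temp(a); a = b; b = temp; }

int main() {
    int x = 1, y = 2;
    generic_swap(x, y);                        // falls back to the assignment-based overload
    Buffer p, q; p.data = "p"; q.data = "q";
    generic_swap(p, q);                        // resolves to Buffer's customized swap, unambiguously
    std::cout << x << ' ' << y << ' ' << p.data << ' ' << q.data << '\n'; // prints: 2 1 q p
    return 0;
}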