Welcome





Sunday, March 27, 2011

Concept of method overloading in generic units:

In the Barton-Nackman technique, a template class derives from an instantiation of another template, passing itself as a template parameter to that other template as follows:

class A : public base<A> { ... };

The important application of this technique is to give a common "base template" for a set of template classes. One can overload operators and other non-member functions for the common base template, and the derived classes will match the definitions. Because it avoids run-time dispatching, this technique is often used in the numerical domain [11–13, 42]. The usage of the Barton-Nackman trick is best explained with an example, which we take from the domain of linear algebra. Matrix types (dense matrix, diagonal matrix, and triangular matrix) represent different kinds of matrices. Each of these matrix types has a common interface which can be exploited to define functions and operators that work for any matrix type. In this example, our interface is just one function. We overload the function call operator, which gives the syntax A(i,j) for accessing the element at row i and column j of matrix A. This interface is defined in the common base template matrix:

template <class Derived>
class matrix {
public:
  double operator()(int i, int j) const {
    return static_cast<const Derived&>(*this)(i, j);
  }
};

The matrix template will always be instantiated in such a way that the template parameter Derived is the type of a derived class. Note the static_cast in the definition of operator(). As long as Derived is the actual type of the object, this is a safe downcast to the derived class. The function call operator thus invokes the function call operator of the derived class without dynamic dispatching. Each specialized matrix type must pass itself as a template argument to the matrix template and define the actual implementation of operator():

class dense_matrix : public matrix<dense_matrix> {
public:
  double operator()(int i, int j) const { ... }
  ...
};

class diagonal_matrix : public matrix<diagonal_matrix> {
public:
  double operator()(int i, int j) const { ... }
  ...
};

class triangular_matrix : public matrix<triangular_matrix> {
public:
  double operator()(int i, int j) const { ... }
  ...
};
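The pieces above can be put together into a compilable sketch. The 2×2 storage, the concrete data members, and the generic trace function are illustrative assumptions, not part of the original example; they only show the CRTP dispatch working:

```cpp
#include <cassert>

// Common base template: forwards element access to the derived class
// without virtual dispatch (the Barton-Nackman / CRTP technique).
template <class Derived>
class matrix {
public:
    double operator()(int i, int j) const {
        return static_cast<const Derived&>(*this)(i, j);
    }
};

// A toy 2x2 dense matrix (the storage layout is an illustrative assumption).
class dense_matrix : public matrix<dense_matrix> {
public:
    double data[2][2];
    double operator()(int i, int j) const { return data[i][j]; }
};

// A toy diagonal matrix: off-diagonal elements are implicitly zero.
class diagonal_matrix : public matrix<diagonal_matrix> {
public:
    double diag[2];
    double operator()(int i, int j) const { return i == j ? diag[i] : 0.0; }
};

// One generic function covers every type derived from matrix<...>;
// each call is resolved statically, with no virtual dispatch.
template <class A>
double trace(const matrix<A>& m) {
    return m(0, 0) + m(1, 1);
}
```

Any matrix type can now be passed to trace through its matrix<...> base, and the base's operator() forwards to the derived implementation at compile time.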

With this arrangement, one can overload generic functions for the matrix template, instead of separately for each matrix type. For example, the multiplication operator could be defined as:

template<class A, class B>
typename product_traits<A, B>::type
operator*(const matrix<A>& a, const matrix<B>& b);

The product_traits template is a traits class to determine the result type of multiplying matrices of types A and B. Such traits classes for deducing return types of operations are common in C++ template libraries [15, 40, 42]. We can observe many benefits in this approach. The run-time overhead of virtual function calls is avoided. Several implementation types can be grouped under a common base template, allowing one operator implementation to cover many separate types. We can also identify several problems with the approach. One needs to be careful not to slice objects: taking the matrix arguments by copy in the above operator function would not copy the whole object, but only the base class matrix. Some data would thus be lost, leading to undefined behavior at run time when the object was cast to the actual matrix type in the function call operator of the matrix class. It is also difficult to get the desired overloading behavior if some parameters of a function are instances of "Barton-Nackman powered" types, but others are not. For example, one might want to define multiplication between two matrices, and additionally between a matrix and a scalar with the scalar as either the left or right argument. The straightforward solution is to define three function templates (we leave the return types unspecified):

template<class A, class B>
... operator*(const matrix<A>& a, const B& b);

template<class A, class B>
... operator*(const A& a, const matrix<B>& b);

template<class A, class B>
... operator*(const matrix<A>& m1, const matrix<B>& m2);

The following code demonstrates why this approach does not work:

diagonal_matrix A; dense_matrix B;

A * B; // error, ambiguous call

The third function is not the best match; instead, the call is ambiguous. The first two definitions are both better than the third definition, but neither is better than the other, so a compiler error occurs. Even though the type matrix<A> is more specialized than A, the actual argument type is not matrix<A> but diagonal_matrix, which derives from matrix<A> where A is diagonal_matrix. Therefore, matching diagonal_matrix against matrix<A> requires a derived-to-base conversion, whereas matching it against A does not, making the latter a better match. Thus, the second definition provides a better match for the first argument, and the first definition provides a better match for the second argument; this makes the call ambiguous. There are two immediate workarounds: providing explicit overloads for all common scalar types, or overloading for the element types of the matrix arguments. The first solution is tedious and not generic, because one cannot know all the possible (user-defined) types that might be used as scalars. The second solution is limiting, as it prevents sensible combinations of argument types, such as multiplying a matrix with elements of type double by a scalar of type int.

The curiously recurring template pattern is quite a mind-teaser; it significantly adds to the complexity of the code. Type classes and the enable_if template solve the above problems. We rewrite our matrix example using type classes. First, all matrix types must be instances of the following type class:

template <class T, class Enable = void>
struct matrix_traits { static const bool conforms = false; };

We return to this matrix type class below. First, consider a simpler example, the generic swap function: the idiomatic approach is to define a default swap function that uses assignment, and to provide the specialized implementations as overloads.

Without enable_if and type class emulation, overloading is limited. For example, it is not possible to define a swap function for all types that derive from a particular class; the generic swap is a better match if the overloaded function requires a conversion from a derived class to a base class (see Section 4). Type classes allow us to express the exact overloading rules. First, two type classes are defined. Assignable represents types with assignment and copy construction (as in the Standard Library), and UserSwappable is for types that overload the generic swap function:

template <class T, class Enable = void>
struct assignable_traits { static const bool conforms = false; };

template <class T>
struct assignable {
  BOOST_STATIC_ASSERT((assignable_traits<T>::conforms));
};

template <class T, class Enable = void>
struct user_swappable_traits { static const bool conforms = false; };

template <class T>
struct user_swappable {
  BOOST_STATIC_ASSERT((user_swappable_traits<T>::conforms));
  static void swap(T& a, T& b) {
    user_swappable_traits<T>::swap(a, b);
  }
};

The Assignable requirements are assignment and copy construction, which are not subject to overload resolution, and thus need not be routed via the type class mechanism. Second, two overloaded definitions of generic_swap are provided. The first is for types that are instances of UserSwappable. The function forwards calls to the swap function defined in the UserSwappable type class:

template <class T>
typename enable_if<user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) {
  user_swappable<T>::swap(a, b);
}

The second overload is used for types which are instances of Assignable but not instances of UserSwappable. The exclusion is needed to direct types that are both Assignable and UserSwappable to the customized swap:

template <class T>
typename enable_if<
  assignable_traits<T>::conforms && !user_swappable_traits<T>::conforms,
  void>::type
generic_swap(T& a, T& b) {
  T temp(a); a = b; b = temp;
}

Returning to the matrix example, the matrix type class asserts conformance and forwards element access through matrix_traits:

template <class T>
struct matrix {
  BOOST_STATIC_ASSERT((matrix_traits<T>::conforms));
  static double index(const T& M, int i, int j) {
    return matrix_traits<T>::index(M, i, j);
  }
};

Thus, the generic_swap function can be defined in the most efficient way possible for each type. There is no fear of overload resolution accidentally picking a function other than the one the programmer intended.
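The whole dispatch can be condensed into a self-contained sketch. A hand-rolled enable_if stands in for the Boost facility, the Assignable type class defaults to conforming (a simplification of the opt-in traits above), and counting_swappable is an invented test type:

```cpp
#include <cassert>

// Minimal enable_if, equivalent in spirit to boost::enable_if_c.
template <bool B, class T = void> struct enable_if {};
template <class T> struct enable_if<true, T> { typedef T type; };

// Type class: by default, no type is UserSwappable.
template <class T, class Enable = void>
struct user_swappable_traits { static const bool conforms = false; };

// Type class: here every type is assumed Assignable (a simplification;
// the text's version defaults to false and opts types in).
template <class T, class Enable = void>
struct assignable_traits { static const bool conforms = true; };

// An invented example type whose customized swap counts its invocations.
struct counting_swappable {
    int value;
    static int swaps;  // how many times the custom swap ran
};
int counting_swappable::swaps = 0;

// Opt the type into the UserSwappable type class.
template <>
struct user_swappable_traits<counting_swappable> {
    static const bool conforms = true;
    static void swap(counting_swappable& a, counting_swappable& b) {
        int t = a.value; a.value = b.value; b.value = t;
        ++counting_swappable::swaps;
    }
};

// Overload 1: only for UserSwappable types; forwards to the customized swap.
template <class T>
typename enable_if<user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) {
    user_swappable_traits<T>::swap(a, b);
}

// Overload 2: for Assignable types that are NOT UserSwappable;
// the exclusion keeps the two overloads from ever both being viable.
template <class T>
typename enable_if<assignable_traits<T>::conforms
                   && !user_swappable_traits<T>::conforms, void>::type
generic_swap(T& a, T& b) {
    T temp(a); a = b; b = temp;
}
```

Calling generic_swap on two ints silently selects the assignment-based version, while calling it on counting_swappable objects selects the customized one; in each case the other overload is removed from the overload set by enable_if, so no ambiguity can arise.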

SECURE CODING IN C


(By team 5: 733007,13,14,40,45,62,63)

ABSTRACT:

C is a high-level language developed in the early 1970s. It is an imperative system implementation language and one of the most popular languages of all time. C offers effective constructs such as arrays, structures, and pointers, and it has formed the basis for many other languages, including C++, Java, and C#. Even though C is widely used for developing system software, its lack of built-in safety makes it risky for real-time and web applications.

Why do we need secure coding?

Writing secure code is a big deal given the number of exploits in the wild. The simplest solution is to use a safer language like Java, which runs in a protected environment. But when higher performance demands coding in C, we need to know how to write unexploitable code.

Major obstacles contributing to insecure coding are:

1) Buffer overflow (stack smashing),

2) Double-free attack.

A buffer overflow occurs when a program writes more data into a buffer than the buffer was sized to hold, thereby allowing arbitrary modifications to adjacent memory. Due to the way the stack is set up, an attacker can use such an overflow to get arbitrary code executed.

When a function is called, both the memory for the variables declared in the function and the memory for its arguments are pushed onto the stack as part of a "stack frame", alongside the saved return address. Writing past the end of a buffer in that frame can therefore overwrite the return address; this is the stack-smashing form of buffer overflow.
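A sketch of the contrast between an unchecked copy into a stack buffer and a bounds-checked one; the function names are illustrative:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// UNSAFE: strcpy copies until the NUL terminator with no bounds check,
// so a source longer than dst silently overwrites adjacent stack memory,
// potentially including the saved return address.
void unsafe_copy(char* dst, const char* src) {
    std::strcpy(dst, src);  // no idea how big dst is
}

// SAFER: pass the destination size and truncate. snprintf never writes
// more than dst_size bytes and always NUL-terminates when dst_size > 0.
void safe_copy(char* dst, std::size_t dst_size, const char* src) {
    std::snprintf(dst, dst_size, "%s", src);
}
```

With an 8-byte buffer, safe_copy stores at most 7 characters plus the terminator, no matter how long the attacker-controlled input is.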

Another, more sophisticated attack is the double-free attack, which affects some implementations of malloc.

The attack can happen when you call free on a pointer that has already been freed, before you have reinitialized the pointer with a new memory address.
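The defense most often recommended is to null the pointer immediately after freeing it, since free(NULL) is defined to do nothing; a sketch with an illustrative helper name:

```cpp
#include <cassert>
#include <cstdlib>

// Dangerous pattern: freeing the same pointer twice corrupts the
// allocator's bookkeeping in some malloc implementations, which an
// attacker can exploit to write to arbitrary memory:
//
//   char* p = (char*)std::malloc(16);
//   std::free(p);
//   std::free(p);   // undefined behavior: double free
//
// Defense: free, then set the pointer to NULL. Since free(NULL) is
// guaranteed to be a no-op, an accidental second free becomes harmless.
template <class T>
void safe_free(T*& p) {
    std::free(p);
    p = NULL;
}
```

Taking the pointer by reference lets the helper null out the caller's own variable, not a copy of it.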

Other shortcomings encountered are:

1) Structure initialization in C,

2) Random number between two integers,

3) Large arrays in C,

4) Debugging code before check-in,

5) Security issues in strings.

In this assignment, we discuss the problems mentioned above and how to tackle them securely.
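As a taste of one listed pitfall, a random number between two integers is often computed as rand() % n, which is biased whenever n does not evenly divide RAND_MAX + 1; a sketch of a rejection-sampling fix (the helper name is illustrative):

```cpp
#include <cassert>
#include <cstdlib>

// Return a uniformly distributed integer in [lo, hi] (requires lo <= hi).
// Plain rand() % range is biased: when range does not evenly divide
// RAND_MAX + 1, small values come up slightly more often. Rejection
// sampling discards the uneven tail of rand()'s output to restore
// uniformity.
int random_between(int lo, int hi) {
    unsigned range = (unsigned)(hi - lo) + 1u;
    // Largest multiple of range that fits in [0, RAND_MAX]; values at or
    // above this limit would over-represent the low residues, so retry.
    unsigned limit =
        ((unsigned)RAND_MAX + 1u) - ((unsigned)RAND_MAX + 1u) % range;
    unsigned r;
    do {
        r = (unsigned)std::rand();
    } while (r >= limit);
    return lo + (int)(r % range);
}
```

The loop almost always terminates on the first draw, since at most range - 1 of the RAND_MAX + 1 possible outputs are rejected.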


Tuesday, February 8, 2011

COPY CONSTRUCTOR IN JAVA

Click this link to view the presentation regarding Copy Constructor Concept in Java.

Wednesday, February 2, 2011

ROUND OFF ERRORS:

      A round-off error, also called rounding error, is the difference between the calculated approximation of a number and its exact mathematical value.
When dealing with floating-point numbers, computers cannot store exact values. They may store floating-point values to many decimal places, and calculate results to many decimal places of precision, but round-off error will always be present. To ensure that results of floating-point routines are meaningful, the need exists to quantify the round-off error of such routines.

Machine epsilon, epsmch, is defined as the smallest positive number such that 1.0 + epsmch is not equal to 1.0.

If you are familiar with the C or C++ programming languages, epsmch is supplied by the C library as the constant DBL_EPSILON. It is accessed by including the header file float.h in a C or C++ program.
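The definition can be checked directly; a small sketch that computes epsmch for double by repeated halving and compares it against DBL_EPSILON (volatile is used to defeat extended-precision intermediates):

```cpp
#include <cassert>
#include <cfloat>

// Halve eps until 1.0 + eps is no longer distinguishable from 1.0 in
// double precision; the last eps that made a difference is epsmch.
double compute_epsmch() {
    volatile double eps = 1.0;
    volatile double sum = 2.0;      // 1.0 + initial eps
    while (sum > 1.0) {
        eps = eps / 2.0;
        sum = 1.0 + eps;
    }
    return eps * 2.0;               // loop overshot by one halving
}
```

On an IEEE 754 platform this yields 2^-52, the same value float.h provides as DBL_EPSILON.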



Precision vs. Accuracy
    Precision = tightness of specification. Accuracy = correctness. Do not confuse precision with accuracy. 3.133333333 is an estimate of the mathematical constant π which is specified with 10 decimal digits of precision, but it has only two decimal digits of accuracy. As John von Neumann once said, "There's no sense in being precise when you don't even know what you're talking about." Java typically prints out floating-point numbers with 16 or 17 decimal digits of precision, but do not blindly believe that this means there are that many digits of accuracy! Calculators typically display 10 digits, but compute with 13 digits of precision.


Kahan's example: the mirror for the Hubble space telescope was ground with great precision, but to the wrong specification. Hence, it was initially a great failure, since it couldn't produce the expected high-resolution images. However, its precision enabled astronauts to install a corrective lens to counterbalance the error.



ROUNDING MODES:

- Round to nearest, where ties round to the nearest even digit in the required position (the default and by far the most common mode).
- Round to nearest, where ties round away from zero (optional for binary floating-point and commonly used in decimal).
- Round up (toward +∞; negative results thus round toward zero).
- Round down (toward −∞; negative results thus round away from zero).
- Round toward zero (truncation; similar to the common behavior of float-to-integer conversions, which convert −3.9 to −3).
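Four of these modes can be observed with the C/C++ <cfenv> facilities; a sketch, noting that fesetround support and its interaction with aggressive optimization are platform-dependent:

```cpp
#include <cassert>
#include <cfenv>
#include <cmath>

// Round x to an integer under the given rounding mode, restoring the
// previous mode afterwards. nearbyint() honours the current mode.
double round_in_mode(double x, int mode) {
    int old = std::fegetround();
    std::fesetround(mode);
    double r = std::nearbyint(x);
    std::fesetround(old);
    return r;
}
```

For example, 2.5 rounds to 2 under the default ties-to-even mode but to 3 under FE_UPWARD, while −3.9 truncates to −3 under FE_TOWARDZERO.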

 

Floating-point rounding:

 

        In floating-point arithmetic, rounding aims to turn a given value x into a value z with a specified number of significant digits. In other words, z should be a multiple of a number m that depends on the magnitude of z. The number m is a power of the base (usually 2 or 10) of the floating-point representation.
       Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the Scaled rounding section above, but with a constant scaling factor s=1, and an integer base b>1.
When the rounded result would overflow, the result for a directed rounding is either the appropriate signed infinity, or the highest representable positive finite number (or the lowest representable negative finite number if x is negative), depending on the direction of rounding. The result of an overflow for the usual case of round to even is always the appropriate infinity.
In addition, if the rounded result would underflow, i.e. if the exponent would fall below the lowest representable value, the effective result may be zero (possibly signed, if the representation can maintain a distinction of signs for zeroes), the smallest representable positive finite number (or the largest representable negative finite number if x is negative), or a denormal positive or negative number, depending on the direction of rounding. (A denormal result is possible because in base b = 2 the most significant digit of the mantissa is always 1, so it can be kept while the leading stored digits are set to zero.) The result of an underflow for the usual case of round to even is always the appropriate zero.
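The overflow and underflow outcomes for the default round-to-nearest mode can be checked directly:

```cpp
#include <cfloat>
#include <cmath>

// Under the default round-to-nearest mode, a product too large for a
// double rounds to +infinity rather than saturating at DBL_MAX.
inline bool overflow_gives_infinity() {
    return std::isinf(DBL_MAX * 2.0);
}

// A product far below even the smallest subnormal rounds to zero
// (after passing through the denormal range).
inline bool underflow_gives_zero() {
    return DBL_MIN * DBL_MIN == 0.0;
}
```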
It is important to understand that the number of significant digits in a value provides only a rough indication of its precision, and that information is lost when rounding occurs.

THE strictfp CONSTRUCT:

       strictfp is a keyword in the Java programming language that restricts floating-point calculations to ensure portability. It was introduced into Java with the Java virtual machine (JVM) version 1.2.

The IEEE standard specifies a standard method for both floating-point calculations and storage of floating-point values in either single or double precision. Prior to JVM 1.2, floating-point calculations were strict; that is, all intermediate floating-point results were represented in IEEE single or double precision. As a consequence, errors of calculation, overflows, and underflows could occur. Whether or not an error had occurred, the calculation would always return a valid number; if an overflow or underflow had occurred, that number would be incorrect, so whether an error had occurred was typically not obvious. Since JVM 1.2, intermediate computations are not limited to the standard 32- and 64-bit precisions: on platforms that can handle other representations, those representations can be used, preventing some overflows and underflows and increasing precision. For some applications, a programmer might need every platform to have precisely the same floating-point behavior, even on platforms that could handle greater precision. The strictfp modifier accomplishes this by truncating all intermediate values to IEEE single and double precision, as occurred in earlier versions of the JVM [1].

Usage:
Programmers can use the modifier strictfp to ensure that calculations are performed as in the earlier versions; that is, only with IEEE single- and double-precision types. Using strictfp guarantees that the results of floating-point calculations are identical on all platforms, which can be extremely useful when comparing floating-point numbers.
         It can be used on classes, interfaces, and non-abstract methods. When applied to a method, it causes all calculations inside the method to use strict floating-point math. When applied to a class, all calculations inside the class use strict floating-point math. Compile-time constant expressions must always use strict floating-point behavior.

Java's library class java.lang.StrictMath (unlike the default Math) is specified to give bit-for-bit identical results on all platforms; its methods behave as if declared strictfp. Illustrative signatures:
 public static strictfp double abs(double);
 public static strictfp int max(int, int);
 public static strictfp long max(long, long);
 public static strictfp float max(float, float);
 public static strictfp double max(double, double);



How to use strictfp?

Syntax for classes:

public strictfp class MyClass
{
  //...
}

Syntax for methods:

public strictfp void method()
{
  //...
}