Contents

MATRICES
BASIC PROPERTIES - I
LU DECOMPOSITION
SPECIAL MATRICES
Diagonal Matrix
ORTHOGONAL MATRIX
SIMILAR MATRIX
UNITARY MATRIX
HERMITIAN MATRIX
Skew-Hermitian Matrix
IDEMPOTENT MATRIX
INVOLUTORY MATRIX
PERMUTATION MATRIX
DETERMINANTS
Properties of Determinants
Signed Product
INVERSE
Methods of finding Inverse of a matrix
RANK
Methods to find the rank of a matrix
Properties of Rank
MINOR & COFACTOR
DIAGONALISATION
SYMMETRIC MATRIX
NORMAL FORM
Procedure to find the Normal form of a matrix
EIGEN VALUE & VECTOR
DIAGONALIZATION
Properties of Eigen Values
CAYLEY-HAMILTON
Cayley-Hamilton Theorem
SYSTEM OF EQUATIONS
Methods of solving
CALCULUS
Continuity
Common Limits
Rolle's Theorem
Differentiation
Taylor's Theorem
Maclaurin's Theorem
Total Derivative
Derivative of Implicit Function
Euler's Theorem (Homogenous functions)
Functional Dependence
Approximation
Differentiate using Leibnitz's Rule
Integration
Trigonometric Functions
Common Trigonometric Functions
Common Transformations
Application of Integration
ORDINARY DIFFERENTIAL EQUATIONS
ORDER & DEGREE
Understanding Linearity of ODE
Variable Separable Types
Exact Differential Equation
Non Exact Differential Equation
Linear / Non-Linear First Order DE
Linear Second/Higher Order DE (LODE)
Homogenous LSODE (CC)
Homogenous LSODE (VC)
Non Homogenous LSODE (CC)
Non Homogenous LHODE (VC) reducible to LHODE (CC)
General LHODE (VC)
One Dimensional Heat Equation
COMPLEX FUNCTION THEORY
Analytic Function
Complex Integration Theorems
Power Series
PROBABILITY
Probability Theorem
Principle of sum
Principle of product
Permutation
Combination
Probability
Conditional Probability
Theorem of Total Probability (Rule of Elimination)
Bayes' Theorem
RANDOM VARIABLE
Properties of Random variable
Chebyshev's Theorem
Distribution
NUMERICAL METHODS
Roots of Non-linear (Transcendental) Equations
Methods of solving non-linear equations / Finding roots of non-linear equations
Gauss-Seidel Method
Finite Differences
Forward Difference
Backward Difference
Central Difference Table
Divided Difference
Interpolation
Newton's Interpolation Formula
Newton's Divided Difference Formula
Lagrange's Interpolation Formula
Spline Interpolation
Numerical Differentiation
Numerical Integration
Numerical Solutions of First Order ODE
Single-step methods
Multi-step methods
Taylor's power series method
Euler's Method
Modified Euler's Method
Runge-Kutta 4th order method
Milne's Predictor-Corrector method
Picard's Successive Approximation
Adams-Bashforth-Moulton Method (ABM)
Numerical Solutions of PDE
VECTOR CALCULUS
Line & Surface Integrals
Some common curves in parametric form
Vector equation of a curve
Vector equation of a surface
Green's Theorem
Stokes' Theorem
Conservative Field
Divergence Theorem
Line Integral of normal functions
Line Integral of Vector Field
Normal to a curve
Surface Integral of vector functions/fields
Surface Integral of normal functions
Surface Integral of normal functions (SURFACE PARAMETERIZED)
Direct Evaluation of Surface Integrals
Evaluation of Surface Integrals
MATHEMATICA
Basics
Conditional Statements
Matrix form
List generating Functions
Algebraic Equation
Solving Algebraic Equation
How to use a solution directly in some expression
Rational Expression
Rectangular Wave
REFERENCE

 

MATRICES

BASIC PROPERTIES -I

 

1)       If A is a square matrix then the reduced row-echelon form of the matrix will either contain at least one row of all zeroes or it will be the identity matrix.

 

2)       If A and B are matrices of the same size then we say that A = B provided corresponding entries from each matrix are equal. In other words, A = B provided a_ij = b_ij for all i and j.

 

Matrices of different sizes cannot be compared.

3)       If A and B are matrices of the same size then A ± B is a new matrix of that size that is found by adding/subtracting corresponding entries from each matrix.

Matrices of different sizes cannot be added or subtracted

4)       If A is any matrix and c is any number then the product (or scalar multiple) cA is a new matrix of the same size as A and its entries are found by multiplying the original entries of A by c.

 In other words, (cA)_ij = c · a_ij for all i and j.

 

5)       Assuming that A and B are appropriately sized so that AB is defined then,

                     1. The i-th row of AB is given by the matrix product: [i-th row of A]B

                     2. The j-th column of AB is given by the matrix product: A[j-th column of B]

 

6)       If A is an n×m matrix then the transpose of A, denoted by A^T, is the m×n matrix that is obtained by interchanging the rows and columns of A.

 

7)       If A is a square matrix of size n×n then the trace of A, denoted by tr(A), is the sum of the entries on the main diagonal:

                                            tr(A) = a_11 + a_22 + … + a_nn

                       If A is not square then the trace is not defined.

8)       AB = 0 does not always imply A = 0 or B = 0:

                         AB can be the zero matrix with A ≠ 0 and B ≠ 0, unlike real number properties.

                         Real number properties may or may not apply to matrices.

 

9)       If A is a square matrix then

                                      A^0 = I ;  A^n · A^m = A^(n+m) ;  (A^n)^m = A^(nm)

 

10)   If A and B are matrices whose sizes are such that the given operations are defined and c is a scalar then

                                      (A^T)^T = A ;  (A + B)^T = A^T + B^T ;

 (cA)^T = c · A^T ;  (AB)^T = B^T A^T

 

11)   If A is a square matrix and we can find another matrix of the same size, say B, such that

                                                                         AB = BA = I

                       then we call A invertible and we say that B is an inverse of the matrix A.

a.        If we can't find such a matrix B, we call A a singular matrix.
b.        The inverse of a matrix is unique.
c.        (A^-1)^-1 = A
d.        (cA)^-1 = (1/c) A^-1
e.        (AB)^-1 = B^-1 A^-1

 

12)   A square matrix is called an elementary matrix if it can be obtained by applying a single elementary row operation to the identity matrix of the same size.

E.g. interchanging the two rows of the 2×2 identity matrix gives the elementary matrix with rows (0 1) and (1 0).

 

13)   Suppose A is an n×m matrix and by performing one row operation R it becomes another matrix, say B. Now if the same row operation R is performed on an identity matrix and it becomes a matrix, say E, then E is called the elementary matrix corresponding to the row operation R, and multiplying E and A gives B, i.e. EA = B.

 

14)   Suppose E is the elementary matrix associated with a particular row operation and E′ is the elementary matrix associated with the inverse operation. Then E is invertible, i.e. E^-1 = E′.

 

Suppose that we’ve got two matrices of the same size A and B. If we can reach B by applying a finite number of row operations to A then we call the two matrices row equivalent.

Note that this also means that we can reach A from B by applying the inverse operations in the reverse order.
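Items 13) and 14) above can be sketched numerically. This is a minimal pure-Python example (the particular matrices are illustrative, not from the notes): E encodes the row operation R2 → R2 + 2·R1 applied to the 2×2 identity, and E·A performs the same operation on A; the elementary matrix of the inverse operation undoes it.

```python
# Illustrative sketch: a row operation as left-multiplication by an elementary matrix.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I2 = [[1, 0], [0, 1]]
E = [[1, 0], [2, 1]]            # elementary matrix for R2 -> R2 + 2*R1
E_inv = [[1, 0], [-2, 1]]       # elementary matrix of the inverse operation
A = [[1, 3], [4, 5]]
B = matmul(E, A)                # same as applying the row operation to A directly
```

Multiplying `E_inv` by `E` recovers the identity, which is statement 14) in miniature.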

 

LU DECOMPOSITION

 

SPECIAL MATRICES

Diagonal Matrix

A diagonal matrix has non-zero entries only on the main diagonal. If any diagonal element is zero then the matrix is singular.

 

Upper-Triangular Matrix: all entries below the main diagonal are zero.

 

 

Lower-Triangular Matrix: all entries above the main diagonal are zero.

 

ORTHOGONAL MATRIX

 

 

Matrix A is orthogonal if

A · A^T = A^T · A = I

Equivalently, A is orthogonal if A^T = A^-1.

If A and B are orthogonal then the product AB is orthogonal.
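A quick numeric check of the orthogonality condition A·A^T = I, using the 2×2 rotation matrix as the standard example (the check itself and the tolerance are my additions, not from the notes):

```python
# Sketch: verify A A^T = I for a rotation matrix (entries are floats, so compare
# within a tolerance).
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def is_orthogonal(A, tol=1e-12):
    """A is orthogonal iff A A^T equals the identity."""
    P = matmul(A, transpose(A))
    n = len(A)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

t = 0.7
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
```

`is_orthogonal(R)` holds for any angle t, since the rows of R are orthonormal.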

SIMILAR MATRIX

 

Matrix B is said to be similar to matrix A if B can be expressed as

B = P^-1 A P

where P is some non-singular matrix.

 

UNITARY MATRIX

 

Matrix A is unitary if

A · (Ā)^T = I

where Ā is the complex conjugate of A (so (Ā)^T is the conjugate transpose of A).

HERMITIAN MATRIX

 

A Hermitian matrix (also called a self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose, i.e. the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all i and j.

·         a_ij = conj(a_ji), i.e. A = (Ā)^T

·         Example: the matrix with rows (2, 1−i) and (1+i, 3)

·         Entries on the main diagonal are always real

·         A symmetric matrix with all real entries is Hermitian

 

Skew-Hermitian Matrix

 

A Hermitian matrix with complex entries which is equal to negative of  its own conjugate transpose - i.e the element in the   row and  column is equal to the negative of  complex conjugate  of the element in the  row and  column, for all  and

 

·        

·         E.g.:-

·         Entries on main diagonal are always purely imaginary

·         If  is skew-Hermitian then  raised to odd power is skew-Hermitian

·         If  is skew-Hermitian then  raised to even power is skew-Hermitian

 

Symmetric: A^T = A

Skew-Symmetric: A^T = −A

Hermitian (Conjugate-Symmetric): A = (Ā)^T

Skew-Hermitian (Conjugate-Anti-Symmetric, same as Skew-Hermitian): A = −(Ā)^T
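The Hermitian and skew-Hermitian conditions can be tested directly with Python's built-in complex numbers; the two example matrices below are illustrative choices of mine:

```python
# Sketch: check A = conj(A)^T (Hermitian) and A = -conj(A)^T (skew-Hermitian).

def conj_transpose(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def is_hermitian(A):
    return A == conj_transpose(A)

def is_skew_hermitian(A):
    ct = conj_transpose(A)
    return all(A[i][j] == -ct[i][j] for i in range(len(A)) for j in range(len(A)))

H = [[2+0j, 1-1j], [1+1j, 3+0j]]    # diagonal entries real  -> Hermitian
S = [[1j, 2+1j], [-2+1j, 0j]]       # diagonal purely imaginary -> skew-Hermitian
```

Note the diagonal-entry properties from the bullets above show up in the examples: H has real diagonal, S has purely imaginary (or zero) diagonal.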

 

IDEMPOTENT MATRIX

 

A matrix A is idempotent if A · A = A

 

·         An idempotent matrix is diagonalizable

·         Eigen values are 0 or 1

·         Rank of an idempotent matrix = sum of its diagonal elements

 

INVOLUTORY MATRIX

 

A matrix A is an involutory matrix if A · A = I

 

·         If A is involutory then A^-1 = A

·         A  Matrix of the form   is always involutory.

 

PERMUTATION MATRIX

 

A matrix that has exactly one entry of 1 in each row and column and zero elsewhere is called a permutation matrix.

·         It is a representation of permutation of numbers

·         Example: the 4×4 matrix with 1s in positions (1,1), (2,3), (3,2), (4,4) is the permutation matrix of the permutation (1,3,2,4)

DETERMINANTS

 

Determinant functions: The determinant function is a function that will associate a real number with a square matrix.

 

Permutation of integers: An arrangement of integers without repetition and omission.

 

Theorem 1: If A is a square matrix then the determinant function is denoted by 'det', and det(A) is defined to be the sum of all the signed elementary products of A.

 

Properties of Determinants:

 

1.       det(AB) = det(A) · det(B)

2.       det(A + B) ≠ det(A) + det(B)   {it may be equal in some cases but not so in general}

3.       det(A^T) = det(A)

4.       det(A^-1) = 1/det(A)

5.       det(cA) = c^n · det(A) for an n×n matrix A

6.       det(A) = product of the main-diagonal entries if A is a triangular matrix

7.       If B is the matrix that results from multiplying a row or column of A by a scalar c, then det(B) = c · det(A)

If B is the matrix that results from interchanging two rows or two columns of A then det(B) = −det(A)

If B is the matrix that results from adding a multiple of one row of A onto another row of A, or adding a multiple of one column of A onto another column of A, then det(B) = det(A)
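The three row-operation rules can be verified on a small example; this sketch uses 2×2 matrices of my own choosing:

```python
# Sketch: effect of the three elementary row operations on a 2x2 determinant.

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[3, 1], [4, 2]]                    # det = 2
scaled = [[9, 3], [4, 2]]               # row 1 multiplied by 3
swapped = [[4, 2], [3, 1]]              # rows interchanged
added = [[3, 1], [4 + 2*3, 2 + 2*1]]    # R2 -> R2 + 2*R1
```

Scaling a row scales the determinant, swapping rows flips its sign, and adding a multiple of one row to another leaves it unchanged.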

Signed Product

 

 

 

 

 

 

INVERSE

Methods of finding Inverse of a matrix:

1)       Using Elementary Row-Transformation: reduce the augmented block [A | I] to [I | A^-1].

2)       Using Cofactor: A^-1 = adj(A) / det(A).

 

RANK

 

Rank is the highest order of a non-zero minor of the matrix.

 

Methods to find the rank of a matrix

 

Method-1

·         Apply elementary row transformation.

·         Number of non-zero rows = rank of the matrix.

 

Method-2

·         Find all the minors of the matrix.

·         Find the order of non-zero minors.

·         The highest order is the rank.

 

Method-3

·         From the normal form of the matrix.

·         If the normal form contains the identity block I_r, then rank = r.
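Method-1 above (row-reduce and count non-zero rows) can be sketched in pure Python; the partial pivoting and tolerance are my implementation choices, not from the notes:

```python
# Sketch: rank via Gaussian elimination, counting the pivot rows found.

def rank(A, tol=1e-9):
    M = [row[:] for row in A]           # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # pick the largest remaining entry in this column as the pivot
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
        if r == rows:
            break
    return r
```

For instance the rows of [[1,2,3],[4,5,6],[7,8,9]] satisfy R3 = 2·R2 − R1, so its rank is 2.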

 

 

 

Properties of Rank

·  

·  

·  

·  

·  

rank(A) = rank(A*)  [A* is the conjugate transpose of A; the conjugate transpose is also called the adjoint of a matrix]

 

 

MINOR & COFACTOR

 

DIAGONALISATION

 

Matrix A is said to be diagonalizable if there exists a non-singular matrix P such that

                                                               D = P^-1 A P   is a diagonal matrix

Diagonalization of A is possible if and only if the eigenvectors of A are linearly independent.

 

 

Some application of Diagonalization:

 

1.       Evaluation of powers of A:  A^n = P D^n P^-1

2.       Evaluation of a function f over A:  If D = P^-1 A P then f(A) = P f(D) P^-1

3.       Not all matrices are diagonalizable, but real symmetric matrices (RSM) are always diagonalizable.

4.       Eigenvectors of an RSM corresponding to distinct eigenvalues are orthogonal.

 

 

SYMMETRIC MATRIX

A matrix is symmetric if A^T = A.

Skew-Symmetric: A^T = −A.

The diagonal elements of a skew-symmetric matrix are always zero.

Eigenvalues of a real skew-symmetric matrix are either zero or purely imaginary.

 

 

NORMAL FORM

 

The normal form of a matrix A is the block matrix N = [[I_r, 0], [0, 0]], where r is the rank of the matrix A.

Procedure to find the Normal form of a matrix

 

Step 1:

Write A = I_m A I_n.

Step 2:

Apply elementary row transformations on A and the left identity till A becomes a row-echelon matrix. This operation converts the left identity into P.

Step 3:

Apply elementary column transformations on A and the right identity till A becomes a matrix in normal form. This operation converts the right identity into Q.

Step 4:

The resulting form is PAQ = N.

 So given a matrix A of rank r, two other non-singular matrices P and Q can be found such that

                                                               PAQ = [[I_r, 0], [0, 0]]

 

 

EIGEN VALUE & VECTOR

 

 

A X = λ X,  where

A = square matrix (n×n)
X = eigenvector (non-zero)
λ = eigenvalue (scalar)

 

 

·         Eigenvalues and eigenvectors will always occur in pairs.

 

·        

 

·         The set of all solutions to (A − λI)X = 0 is called the eigenspace of A corresponding to λ.

 

·         Suppose that λ is an eigenvalue of the matrix A with corresponding eigenvector X.

Then if k is a positive integer, λ^k is an eigenvalue of the matrix A^k with the same corresponding eigenvector X.

 

·        

 

·          =

 

·         If X1, …, Xk are eigenvectors of A corresponding to k distinct eigenvalues, then they form a linearly independent set of vectors.

 

 

DIAGONALIZATION:

 

Suppose that A is a square matrix. If there exists an invertible matrix P (of the same size as A) such that P^-1 A P is a diagonal matrix, then we call A diagonalizable and say that P diagonalizes A. (Columns of P are the eigenvectors of A.)

                             

 

THEOREM 1:              

Suppose that A is an n×n matrix; then the following are equivalent.

                                

(a) A is diagonalizable.

                               

(b) A has n linearly independent eigenvectors, which form the columns of the matrix that diagonalizes A.

 

THEOREM 2:             

Suppose that A is an n×n matrix and that A has n distinct eigenvalues; then A is diagonalizable.

 

Properties of Eigen Values

 

λ1, λ2, …, λn are the Eigen values of a square matrix A. Then:

·         Sum of the Eigen values = trace of A

·         Product of the Eigen values = det(A)

·         Eigen values of A^T are the same as that of A, but the Eigen vectors are different.

·         Eigen values of kA are kλi

·         Eigen values of A^m are λi^m

·         Eigen values of A^-1 are 1/λi (A non-singular)

·         Eigen values of a triangular matrix are its diagonal entries

·         Eigen vectors corresponding to distinct Eigen values are linearly independent.

·         For Diagonalization of a matrix its Eigen vectors should be linearly independent.

 

 

CAYLEY-HAMILTON

Cayley-Hamilton Theorem:

 

Every square matrix  satisfies its characteristic equation.

 

·         It is used in calculating powers of matrices (in place of direct matrix multiplication) and in finding the inverse of a matrix.
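For a 2×2 matrix the characteristic equation is λ² − tr(A)λ + det(A) = 0, so the theorem says A² − tr(A)·A + det(A)·I = 0; rearranging gives the inverse without direct inversion. A minimal check on an example matrix of my choosing:

```python
# Sketch: Cayley-Hamilton for a 2x2 matrix, and the inverse it yields.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 3]]
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
A2 = matmul(A, A)

# A^2 - tr(A)*A + det(A)*I should be the zero matrix
residue = [[A2[i][j] - tr*A[i][j] + det*(1 if i == j else 0) for j in range(2)]
           for i in range(2)]

# Rearranged: A^-1 = (tr(A)*I - A) / det(A)
A_inv = [[(tr*(1 if i == j else 0) - A[i][j]) / det for j in range(2)]
         for i in range(2)]
```

`residue` comes out as the zero matrix and `matmul(A, A_inv)` as the identity (up to floating-point error).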

 

 

SYSTEM OF EQUATIONS

 

 

The above system of equations is A X = B.

If B = 0 the system is Homogenous.

If B ≠ 0 the system is Non-Homogenous.

 

 

Methods of solving

 

1.       Cramer's Method

2.       Augmented Matrix Method

3.       LU-Decomposition Method
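Cramer's method (method 1 above) expresses each unknown as a ratio of determinants, with the corresponding column of A replaced by B. A sketch for a 2×2 system (the example system is hypothetical):

```python
# Sketch: Cramer's rule for a 2x2 system A X = B.

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def cramer2(A, B):
    D = det2(A)
    if D == 0:
        raise ValueError("system is not uniquely solvable")
    Dx = det2([[B[0], A[0][1]], [B[1], A[1][1]]])   # column 1 replaced by B
    Dy = det2([[A[0][0], B[0]], [A[1][0], B[1]]])   # column 2 replaced by B
    return Dx / D, Dy / D
```

For example the system 2x + y = 5, x + 3y = 10 has solution x = 1, y = 3.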

 

CALCULUS

 

Continuity

 

Limit

A function f(x) is said to have a limit at a point x = a if

1.       lim f(x) as x → a exists and is the same along any path

Continuity

A function f(x) is said to be continuous at a point x = a if

1.       lim f(x) as x → a exists and is the same along any path

2.       lim f(x) as x → a equals f(a)

 

 

Differentiability

A function f(x) is said to be differentiable at a point x = a if

1.       f is continuous at x = a

2.       the limit of [f(a + h) − f(a)] / h as h → 0 exists

 

 

 

Rolle's Theorem

If

1.       f is continuous in [a, b]

2.       f is differentiable in (a, b)

3.       f(a) = f(b)

then there exists a c in the interval (a, b) such that f′(c) = 0.

Mean Value Theorem

If

1.       f is continuous in [a, b]

2.       f is differentiable in (a, b)

then there exists a c in the interval (a, b) such that f′(c) = (f(b) − f(a)) / (b − a).
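The point c promised by the Mean Value Theorem can be located numerically when f′ is monotone: bisect on g(c) = f′(c) − slope. The function and interval below are illustrative choices of mine:

```python
# Sketch: locate the MVT point c for f(x) = x**2 on [1, 3] by bisection on f'.

def mvt_point(f, df, a, b, tol=1e-10):
    slope = (f(b) - f(a)) / (b - a)     # the secant slope the theorem matches
    lo, hi = a, b
    while hi - lo > tol:                # works here because df is increasing
        mid = (lo + hi) / 2
        if df(mid) < slope:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = mvt_point(lambda x: x**2, lambda x: 2*x, 1.0, 3.0)
```

Here the secant slope is (9 − 1)/2 = 4 and 2c = 4 gives c = 2, which the bisection recovers.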

 

 

 

 

 

 

 

Common Limits

 

Trigonometric

Exponential

Logarithmic

Differentiation

Series

L'Hospital's Rule

Indeterminate forms 0/0 and ∞/∞: apply the rule directly (no conversion required).

Other indeterminate forms (0·∞, ∞ − ∞, 0^0, 1^∞, ∞^0): first convert to the form 0/0 or ∞/∞.

 

 

Rolle's Theorem

 

If f(x) is continuous in [a, b], derivable in (a, b) and f(a) = f(b), then there exists at least one c in (a, b) such that f′(c) = 0.

 

 

 

 

[Figure: Rolle's theorem, a curve with f(a) = f(b) and a horizontal tangent at some c between a and b.]

 

Differentiation

 

Taylor's Theorem

If f is a function such that f, f′, …, f^(n−1) are continuous in the interval [a, a + h] and f^(n) exists in the interval (a, a + h), then Taylor's theorem says that there exists a number θ in (0, 1) and a positive integer n such that

f(a + h) = f(a) + h f′(a) + (h²/2!) f″(a) + … + (h^(n−1)/(n−1)!) f^(n−1)(a) + R_n

 

 

 

 

 

 

 

Cauchy Series

Lagranges Series

Maclaurin's Series

 and

 

Convenient form of Taylor's Theorem 

 

 

Maclaurin's Theorem

f(x) = f(0) + x f′(0) + (x²/2!) f″(0) + (x³/3!) f‴(0) + …

Taylor series expansion of f(x) about the point x = a (or in powers of (x − a)):

f(x) = f(a) + (x − a) f′(a) + ((x − a)²/2!) f″(a) + …

Total Derivative

 

Total Derivative

If u = f(x, y) is a function of independent variables x and y:

du = (∂u/∂x) dx + (∂u/∂y) dy

Total Derivative

If u = f(x, y) is a function of x and y, and x, y are functions of t (t is the independent variable):

du/dt = (∂u/∂x)(dx/dt) + (∂u/∂y)(dy/dt)

 

Derivative of Implicit Function                                                                                                     

 

First order derivative of an implicit function f(x, y) = 0:   dy/dx = −f_x / f_y

Second order partial derivative of an implicit function

 

 

 

 

Euler's Theorem (Homogenous functions)

 

Homogenous Function

A homogenous function f(x, y) of degree n satisfies the equation f(tx, ty) = t^n f(x, y).

Euler's Theorem for a homogenous function u(x, y) of degree n:

x (∂u/∂x) + y (∂u/∂y) = n u

 

 

 

 

Functional Dependence

 

u and v are two functions of x and y. If there is another function F such that F(u, v) = 0 then u and v are said to be functionally dependent.

Test for Functional Dependence

The Jacobian vanishes:  ∂(u, v)/∂(x, y) = 0

Approximation

 

If z = f(x, y) is a function of x and y then, using the definition of the total derivative, a small change δz can be approximated as

δz ≈ (∂z/∂x) δx + (∂z/∂y) δy

Differentiate using Leibnitz's Rule

 

Leibnitz's Rule: it is a rule that gives the n-th order derivative of a product of two functions:

(uv)^(n) = Σ (k = 0 to n) C(n, k) u^(n−k) v^(k)

From integral calculus, the general Leibnitz rule for differentiating under the integral sign:

d/dx ∫[a(x) to b(x)] f(x, t) dt = f(x, b(x)) b′(x) − f(x, a(x)) a′(x) + ∫[a(x) to b(x)] ∂f/∂x dt

 

Integration

 

 

While choosing u (the function to differentiate) give preference to functions appearing earlier in the ILATE table (Inverse trig, Logarithmic, Algebraic, Trigonometric, Exponential); while choosing dv (the function to integrate) give preference to functions appearing later in the table.

 

Trigonometric Functions

 

Common Trigonometric Functions

 

 

 

 

Common Transformations

 

 

 

 

 

 

 

 

 

 

Application of Integration

 

Length  of an arc/curve

[Figure: curve y = f(x) from x = a to x = b.]
Surface Area of an arc/ curve (obtained by rotating it about x (or y) axis)

[Figure: surface generated by rotating y = f(x), a ≤ x ≤ b, about the x-axis.]

 

 

 

 

 

Single Integral

1.       It gives the length of the arc y = f(x) from x = a to x = b.

2.       It also gives the area between the arc y = f(x) and the x-axis from x = a to x = b.

Double Integral

 

Double Integral (Polar Coordinates)

 

 

1.       It gives the area of the region D.

 

2.       It also gives the volume of the region D

Triple Integral

It gives the volume of the region enclosed by D

 

ORDINARY DIFFERENTIAL EQUATIONS

 

ODE means that there is only one independent variable in the equation

 

ORDER & DEGREE

   

ORDER: order of ODE is the order of the highest derivative appearing in it.

 

DEGREE: power of the highest-order derivative, after the equation has been cleared of radicals and fractions in the derivatives.

 

 

Order = 3

Degree=6

 

Understanding Linearity of ODE

 

A differential equation with x the independent variable and y the dependent variable is said to be LINEAR if

1.       y and all its derivatives are of degree one.

2.       No product terms of y and/or any of its derivatives are present.

3.       No transcendental functions of y and/or any of its derivatives are present.

Note: Linearity of a DE depends only on the way the dependent variable appears in the equation and is independent of the way the independent variable appears in it.

 

Linear ODE

Non-Linear ODE

 

Variable Separable Types

 

Type

Functional Form

Examples

Variable Separable Type I

 

 

 

1.      

2.      

3.      

4.      

 

Variable Separable Type II

(requires substitution )

 

1.       Substitute

2.       Solve the equations on the lines of Variable Separable Type I

 

 

1.      

2.      

3.      

4.      

Variable Separable Type III

(Homogenous ODE reducible to Variable Separable Type I)

 

1.      

2.       Substitute

3.       Solve the equations on the lines of Variable Separable Type I

 

Note:  should be homogenous of same degree

 

 

1.      

2.      

3.      

4.      

 

Variable Separable Type IV (Non-homogenous ODE reducible to homogenous type reducible to Variable Separable Type I/II)

 

1.        is non-homogenous

 

2.        ,substitute  and

else

 

3.       Solve
         

 

4.        is homogenous

 

5.       Substitute

 

6.       Solve the equations on the lines of Variable Separable Type III and  replace p by  and finally  and

 

Note:  should be homogenous of same degree

1.      

 

2.      

 

3.      

 

4.      

 

 

Exact Differential Equation

 

Exact Differential

If M dx + N dy satisfies ∂M/∂y = ∂N/∂x, then

                   M dx + N dy is called an exact differential
                   M dx + N dy = 0 is called an exact differential equation

Solution to Exact Differential Equation

∫ M dx (treating y as constant) + ∫ (terms of N not containing x) dy = c

 

Exact DE

Solution

 

Non Exact Differential Equation

 

Steps to Solve Non-Exact Differential Equation

 

 

where

Step 1: Reduce Non-Exact DE to Exact DE using Integrating Factor (I.F)

 

Step 2: Solve the Exact DE. Solution is of the form

 

                        or

 

   Type                                                                    Integrating Factor

M and N are homogenous of the same degree:                                 I.F. = 1/(Mx + Ny), provided Mx + Ny ≠ 0

Equation of the form y f(xy) dx + x g(xy) dy = 0:                          I.F. = 1/(Mx − Ny), provided Mx − Ny ≠ 0

(∂M/∂y − ∂N/∂x)/N is a function f(x) of x alone:                           I.F. = e^∫f(x) dx

(∂N/∂x − ∂M/∂y)/M is a function g(y) of y alone:                           I.F. = e^∫g(y) dy

 

Rearranging  such that some special group of terms are formed

 

Group of Terms

IF

Exact Differential

1

Group of Terms

IF

Exact Differential

 

Linear / Non-Linear First Order DE

 

A linear first order first degree DE of the form

dy/dx + P(x) y = Q(x)

This DE is known as the Leibnitz linear equation.

1.       Here P and Q are functions of x alone.

2.       The equation is in general not exact as written.

3.       So I.F. = e^∫P dx

4.       Solution is  y · (I.F.) = ∫ Q · (I.F.) dx + c

 

A non-linear first order first degree DE of the form

dy/dx + P(x) y = Q(x) y^n,  where n ≠ 0, 1

This DE is known as Bernoulli's equation.

1.       Substitute v = y^(1−n)

2.       The equation reduces to the linear DE

dv/dx + (1 − n) P(x) v = (1 − n) Q(x)

3.       Solve using the method described for the linear first order DE

A non-linear  first order higher degree DE of the form

 

 

 

Solvable for

Solvable for

Solvable for

 

 

 

A non-linear  first order higher degree DE of the form

 

 

This DE is known as Clairaut's Equation

 

A non-linear  first order higher degree DE of the form

 

 

This DE is known as Lagrange's Equation

 

 

 

Linear Second/Higher Order DE (LODE)

 

General Linear Higher Order   DE

 

Homogenous LODE

Non-Homogenous ODE

Constant coefficients LODE

or (LODE (CC))

 

 

Homogenous LSODE (CC)

 

Solution of Homogenous LSODE (Constant Coefficients) of the form

a y″ + b y′ + c y = 0

or, using the notation D = d/dx,  (a D² + b D + c) y = 0

1.       Characteristic equation is a m² + b m + c = 0

2.       If m1 and m2 are the roots of the equation:

Types of the roots                          Form of the complementary function

Real and distinct (m1 ≠ m2):                y = c1 e^(m1 x) + c2 e^(m2 x)

Real and same (m1 = m2 = m):                y = (c1 + c2 x) e^(m x)

Complex conjugate (m = α ± iβ):             y = e^(α x)(c1 cos βx + c2 sin βx)

Homogenous LSODE (VC)

 

Solution of Homogenous LSODE (Variable Coefficients) of the form

 

 

provided the complimentary functions are available

 

This method is called the method of variation of parameters.

 If the complementary solution is y_c = c1 y1 + c2 y2 then

1.       u1 = −∫ (y2 R / W) dx

2.       u2 = ∫ (y1 R / W) dx,  where R is the right-hand side and W = y1 y2′ − y2 y1′ is the Wronskian

The particular solution is then given as

3.       y_p = u1 y1 + u2 y2

 

Note: This method is suitable only for second order DE with variable coefficients. There is no general method for solving higher order DE with variable coefficients.

Non Homogenous LSODE (CC)

 

Solution of Non-Homogenous LSODE (CC) of the form

 

 

1.       Characteristic Equation is

2.       If  and  are the roots of the equation  

 

Types of the roots                          Form of the complementary function

Real and distinct (m1 ≠ m2):                c1 e^(m1 x) + c2 e^(m2 x)

Real and same (m1 = m2 = m):                (c1 + c2 x) e^(m x)

Complex conjugate (m = α ± iβ):             e^(α x)(c1 cos βx + c2 sin βx)

Two equal complex conjugate roots (α ± iβ, twice):   e^(α x)[(c1 + c2 x) cos βx + (c3 + c4 x) sin βx]

 

3.       Evaluation of  Particular Integral (P.I)  

Type of

P.I

 

when

 

 

 

 

If the denominator  is odd power of , apply conjugation to convert it to even powers of

   is a general function

 

4.       Solution is

Non Homogenous LHODE (VC) reducible to LHODE (CC)

Solution of Homogenous LHODE (VC) of the form

 

 

or

 

 

 

1.       Substitute

 

2.       Substitute

 

where

 

3.       Now the equation is of Homogenous LSODE(CC) form with characteristic equation of the form

 

4.       Solve the new Homogenous LHODE(CC) by finding C.F and PI and replace  in the final solution

Solution of Non-Homogenous LHODE (VC) of the form

 

 

or

 

 

This equation is also known as Cauchy-Euler's Equation

1.       Substitute

 

2.       Substitute

 

where

 

3.       Now the equation is of Non-Homogenous LSODE(CC) form with characteristic equation of the form

 

4.       Solve the new Homogenous LHODE(CC) by finding C.F and PI and replace  in the final solution

Solution of Non-Homogenous LHODE (VC) of the form

 

 

or

 

 

This equation is also known as Legendre's Equation

1.       Substitute

 

2.       Substitute


 

where

 

3.       Now the equation is of Non-Homogenous LSODE(CC) form with characteristic equation of the form

 

4.       Solve the new Homogenous LHODE(CC) by finding C.F and PI and replace  in the final solution

General LHODE (VC)

 

There is no general procedure for finding solutions to linear higher order DE with variable coefficients.

Some Special Forms

How to approach for the solution

Solve by repeated integration

Multiply both sides by  which makes the equation exact.

 

 

The solution is given by

 

Equations not explicitly containing x

 

Equations not explicitly containing y

 

Change of independent variable

 

 

One Dimensional Heat Equation

 

One Dimensional Heat Equation

Boundary Condition

Solution

 

[Figure: rod from x = 0 to x = L with f(0, t) = 0 and f(L, t) = 0; initial and final conditions indicated.]

 

Homogenous Boundary Condition

 

 

 

 

 

 

 

 

 

[Figure: rod from x = 0 to x = L with non-homogenous boundary values f(0, t) = A0 and f(L, t) = B0.]

 

Non-Homogenous Boundary Condition

 

 

 

 


 

 

 

 

 

 

[Figure: rod from x = 0 to x = L with both ends insulated, f′(0, t) = 0 and f′(L, t) = 0.]

 

Homogenous with both ends insulated


 

 

 

 

 

 

COMPLEX FUNCTION THEORY

 

Analytic Function

 

A function f(z) is said to be analytic at z0 if f is differentiable at z0 and in its neighbourhood.

f is analytic ⇒ f is differentiable ⇒ f is continuous, but the converse is not always true.

 

Properties of analytic function

 

If f(z) = u + iv is analytic it satisfies the Cauchy-Riemann equations:

∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x

If f(z) is analytic then f′(z) is also analytic.

If f(z) = u + iv is analytic then u and v are harmonic.

Milne-Thomson Method:

If f(z) = u + iv is analytic then f(z) can be evaluated by integrating f′(z) after substituting x = z and y = 0, i.e.

f(z) = ∫ [u_x(z, 0) − i u_y(z, 0)] dz + c

 

 

 

Complex Integration Theorems

 

Cauchy Integral Theorem

If f(z) is analytic in the simply connected domain D and C is a closed curve in D then

∮_C f(z) dz = 0

[Figure: f(z) analytic inside C, and f(z)/(z − z0) analytic inside C except at z0.]
 

Cauchy Integral Formula

If f(z) is analytic in the simply connected domain D and C is a closed curve in D enclosing the point z0, then

∮_C f(z)/(z − z0) dz = 2πi f(z0)
 

Residue Theorem

 

If f(z) is analytic in the simply connected domain D except at singularities z1, …, zn enclosed by a closed curve C, then

∮_C f(z) dz = 2πi Σ Res(f, zk)

Residue at a simple pole z = zk:   Res(f, zk) = lim (z → zk) (z − zk) f(z)

Taylor Series Theorem

 

If f(z) is analytic in the region |z − z0| < R with center z0, then it can be uniquely represented by a convergent power series

f(z) = Σ (n = 0 to ∞) a_n (z − z0)^n,   a_n = f^(n)(z0)/n!

Laurent Series Theorem

 

If f(z) is analytic in the annulus region r < |z − z0| < R with center z0, then it can be uniquely represented by a convergent power series known as the Laurent series

f(z) = Σ (n = −∞ to ∞) a_n (z − z0)^n

 

 

Power Series

 

1.       A sequence of complex numbers is an assignment of each positive integer to a complex number.

2.       A series of complex numbers is the sum of the terms in the sequence, e.g. z1 + z2 + z3 + …

 

 

3.       Power Series: a power series in (z − z0) is defined as Σ a_n (z − z0)^n.

4.       Power series represent analytic functions. Conversely, every analytic function can be represented as a power series known as the Taylor series. Moreover, a function f(z) can be expanded about a singular point z0 as a Laurent series containing both positive and negative integer powers of (z − z0).

 

PROBABILITY

 

Probability Theorem

 

Principle of sum

 

If an event can be done in m ways and another event can be done in n ways, then one or the other of the events can be done in (m + n) ways, provided they cannot be done simultaneously.

 

Principle of product

 

If an event can happen in m ways and, following it, another event can happen in n ways, then the two events in that order can happen in m × n ways.

 

 

Permutation

·         A permutation of a set of n distinct objects is an ordered arrangement of these n objects.

·         The number of permutations of a set of n distinct objects taken r at a time is nPr = n!/(n − r)!

·         The number of permutations of r objects from a set of n objects with repetition is n^r

 

Combination

·         A combination of a set of n distinct objects is an unordered arrangement of these n objects.

·         The number of combinations of a set of n distinct objects taken r at a time is nCr = n!/(r!(n − r)!)

·         The number of combinations of r objects from a set of n objects with repetition is C(n + r − 1, r)

 

 

Probability

·         If A and B are mutually exclusive then P(A ∪ B) = P(A) + P(B)

·         If A and B are not mutually exclusive then P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

 

Conditional Probability

 

·         P(B|A) = P(A ∩ B) / P(A)

·         P(A ∩ B) = P(A) P(B|A)

·         P(A ∩ B) = P(A) P(B)  if A and B are independent events.

 

 

Theorem of Total Probability (Rule of Elimination)

 

 

 

Let    the theorem of total probability gives

                                                        

 

Bayes' Theorem

P(Bj|A) = P(Bj) P(A|Bj) / Σ P(Bi) P(A|Bi)
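A worked example of total probability and Bayes' theorem (the two-partition setup and all the numbers are hypothetical, chosen for the illustration):

```python
# Sketch: partitions B1, B2 with P(B1)=0.6, P(B2)=0.4,
# and P(A|B1)=0.5, P(A|B2)=0.25.

p_b = [0.6, 0.4]            # prior probabilities of the partitions
p_a_given_b = [0.5, 0.25]   # conditional probabilities of event A

# Rule of elimination: P(A) = sum of P(Bi) * P(A|Bi)
p_a = sum(pb * pa for pb, pa in zip(p_b, p_a_given_b))

# Bayes: P(B1|A) = P(B1) * P(A|B1) / P(A)
p_b1_given_a = p_b[0] * p_a_given_b[0] / p_a
```

Here P(A) = 0.3 + 0.1 = 0.4, so P(B1|A) = 0.3/0.4 = 0.75.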

RANDOM VARIABLE

 

X is a random variable (in fact it is a single-valued function) which maps an outcome s from the sample space S to a real number x.

 

·          are different ways of writing the probability of

 

·         f(x) is called the density function.

·         f(x) ≥ 0

·         Σ f(x) = 1 (discrete case), or ∫ f(x) dx = 1 (continuous case)

·         P(a < X < b) = ∫[a to b] f(x) dx (continuous case)

·         F(x) is called the distribution function.

·         F(x) = P(X ≤ x)

 

Properties of Random variable

 

·         Mean: μ = E(X) = Σ x f(x)

·         Variance: σ² = E[(X − μ)²]

·         Variance: σ² = E(X²) − μ²

·         Standard deviation: σ = √(variance)

 

Chebyshev's Theorem

 

·         Probability that X will assume a value within k standard deviations of the mean μ is at least 1 − 1/k²:

·         P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k²

 

Distribution

 

·         Binomial Distribution:  b(x; n, p) = C(n, x) p^x q^(n−x),  q = 1 − p

·         Mean = np

·         Variance = npq

 

 

·         Hyper-geometric Distribution

 

·         Mean = nk/N  (k successes in a population of N, sample of size n)

·         Variance = [(N − n)/(N − 1)] · n · (k/N)(1 − k/N)

 

 

·         Poisson's Distribution

·         p(x; λ) = e^(−λ) λ^x / x!

·         Mean = λ

·         Variance = λ

 

·         Normal Distribution

 

·         f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²))
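The binomial mean and variance formulas above can be checked numerically by summing over the probability mass function; n = 10, p = 0.3 is an arbitrary illustrative choice:

```python
# Sketch: binomial pmf, and a numeric check that mean = np and variance = npq.
import math

def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
mean = sum(x * f for x, f in zip(range(n + 1), pmf))
var = sum((x - mean)**2 * f for x, f in zip(range(n + 1), pmf))
```

The probabilities sum to 1, the mean comes out as np = 3 and the variance as npq = 2.1.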

 

 

 

NUMERICAL METHODS

 

Roots of Non-linear (Transcendental) Equations

 

Methods of solving non-linear equations / Finding roots of non-linear equations

 

Bisection Method

Start with an interval [a, b] such that f(a) · f(b) < 0. Take the midpoint x1 = (a + b)/2, retain the half-interval in which f changes sign, and repeat:

x2, x3, … until the root is bracketed to the desired accuracy.
 

Regula-Falsi

(or Method of chords)

x1 = a − f(a)(b − a) / (f(b) − f(a)); retain the sub-interval in which f changes sign and repeat.
 

Newton-Raphson Method

(or Method of tangents)

x_(n+1) = x_n − f(x_n) / f′(x_n)
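Two of the iterations above in runnable form, applied to f(x) = x² − 2 (the test equation and the iteration counts are my choices):

```python
# Sketch: bisection and Newton-Raphson root finding for f(x) = x**2 - 2.

def bisection(f, a, b, tol=1e-12):
    """Halve [a, b], keeping the half where f changes sign.
    Requires f(a) and f(b) of opposite signs."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

def newton_raphson(f, df, x0, steps=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f = lambda x: x*x - 2
root_b = bisection(f, 1.0, 2.0)
root_n = newton_raphson(f, lambda x: 2*x, 1.0)
```

Both converge to √2; Newton-Raphson roughly doubles the number of correct digits per step, while bisection gains one binary digit per step.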

Gauss-Seidel Method

 

A diagonal system is one in which, in each equation, the absolute value of the coefficient of a different unknown is greater than the sum of the absolute values of the coefficients of the other unknowns. For example, the system of equations given below is diagonal.


 

Gauss-Seidel method converges quickly if the system is diagonal.

 

Steps in Gauss-Seidel method

 

1.       Rewrite each equation to express one unknown in terms of the others: x = …, y = …, z = …

2.       Assume an initial solution (0, 0, 0).

3.       Evaluate x using y = 0, z = 0. Now use the new x and the old z to evaluate y.

4.       Now use the new x and the new y to evaluate z.

5.       Iterate these steps using the newly calculated values till the desired approximation is reached.

 

6.       If the system is not diagonal, Gauss-Seidel method  may or may not converge.
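The steps above can be sketched directly; the 3×3 diagonally dominant system below is a hypothetical example with exact solution (1, 2, −1):

```python
# Sketch: Gauss-Seidel iteration, reusing each newly computed value immediately.

def gauss_seidel(A, b, iterations=50):
    n = len(b)
    x = [0.0] * n                      # initial solution (0, 0, ..., 0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system, constructed so the solution is (1, 2, -1)
A = [[10, 1, 1], [1, 10, 1], [1, 1, 10]]
b = [11, 20, -7]
x = gauss_seidel(A, b)
```

Because the system is diagonal (diagonally dominant), the iteration converges quickly; on a non-diagonal system the same loop may diverge.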

 

Finite Differences

 

Forward Difference

 

Forward difference is defined as

Δy_i = y_(i+1) − y_i

·         Δ²y_i = Δy_(i+1) − Δy_i

·         Higher differences are defined recursively: Δ^n y_i = Δ^(n−1) y_(i+1) − Δ^(n−1) y_i


Backward Difference

 

Backward difference is defined as

∇y_i = y_i − y_(i−1)


Central Difference Table

 

x          y          δy              δ²y

x_0        y_0
                      y_1 - y_0
x_1        y_1                        y_2 - 2y_1 + y_0
                      y_2 - y_1
x_2        y_2

Each difference is written between the two entries from which it is computed, so the table is centred about the middle of the data.

 
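A difference table like the ones above can be generated by repeated differencing; a minimal Python sketch (names are my own):

```python
# Forward difference table sketch: table[k] holds the k-th differences.
def difference_table(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x^2 at x = 0, 1, 2, 3: the second differences are constant
t = difference_table([0, 1, 4, 9])
```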

Divided Difference

 

Divided difference is defined as

f[x_0, x_1] = (f(x_1) - f(x_0)) / (x_1 - x_0)

For example

a.       f[x_1, x_2] = (f(x_2) - f(x_1)) / (x_2 - x_1)

b.      f[x_0, x_1, x_2] = (f[x_1, x_2] - f[x_0, x_1]) / (x_2 - x_0)

 

 

 

·         The divided difference formula is used for interpolating data sets where the independent data points are unequally spaced, i.e. the spacing x_{i+1} - x_i is not constant.

·         But when the points are equally spaced, i.e. x_{i+1} - x_i = h for all i, it reduces to the forward/backward difference formula.


Interpolation

 

Newton's Interpolation Formula

 

Newton's Forward Interpolation Formula

y(x) = y_0 + q Δy_0 + [q(q - 1)/2!] Δ²y_0 + [q(q - 1)(q - 2)/3!] Δ³y_0 + …

where q = (x - x_0)/h

h = x_1 - x_0 is the uniform distance between the data points.

q = a variable introduced to simplify expressions.

 

 

 

 
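Newton's forward interpolation can be sketched in Python, building the leading differences on the fly (names and sample data are my own):

```python
# Newton's forward interpolation sketch for equally spaced points.
def newton_forward(xs, ys, x):
    h = xs[1] - xs[0]
    q = (x - xs[0]) / h
    diffs = list(ys)               # current column of differences
    result, coeff, fact = diffs[0], 1.0, 1
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coeff *= (q - (k - 1))     # q(q-1)...(q-k+1)
        fact *= k                  # k!
        result += coeff * diffs[0] / fact
    return result

# y = x^2 sampled at 0, 1, 2, 3 -> interpolated exactly
val = newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5)   # 2.25
```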

Newton's Backward Interpolation Formula

y(x) = y_n + q ∇y_n + [q(q + 1)/2!] ∇²y_n + [q(q + 1)(q + 2)/3!] ∇³y_n + …

where q = (x - x_n)/h

h = x_{i+1} - x_i is the uniform distance between the data points.

q = a variable introduced to simplify expressions.

 

 



Newton's Divided Difference Formula

 

Newton's divided difference interpolation formula

f(x) = f(x_0) + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2] + … + (x - x_0)(x - x_1)…(x - x_{n-1}) f[x_0, x_1, …, x_n]

 
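Newton's divided difference formula can be sketched in Python for unequally spaced points (names and sample data are my own):

```python
# Newton's divided-difference interpolation sketch.
def divided_diff_coeffs(xs, ys):
    # coef[k] ends up as f[x_0, ..., x_k]
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_divided(xs, ys, x):
    coef = divided_diff_coeffs(xs, ys)
    result, prod = coef[0], 1.0
    for k in range(1, len(xs)):
        prod *= (x - xs[k - 1])    # (x - x_0)...(x - x_{k-1})
        result += coef[k] * prod
    return result

# y = x^2 at unequally spaced points 0, 1, 4
v = newton_divided([0, 1, 4], [0, 1, 16], 2)   # 4.0
```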

Gaussian Interpolation Formula

The Gaussian formulas work from a central difference table taken about a middle point x_0, with columns x, y and the successive central differences.

Gaussian Interpolation Formula I

 

Gaussian Interpolation Formula II

 

Gaussian Interpolation Formula III

 

Stirling Interpolation Formula (Arithmetic mean of Gaussian Interpolation Formulas I and II)

 

 

 

Bessel's Interpolation Formula (Arithmetic mean of two Gaussian formulas, taken about x_0 and x_1)

 

Lagrange's Interpolation Formula

Given data points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), not necessarily equally spaced,

Lagrange's interpolation formula is given by

y(x) = Σ_{i=0}^{n} y_i L_i(x)   where   L_i(x) = Π_{j≠i} (x - x_j)/(x_i - x_j)

For the given set of data points Lagrange's polynomial can be written as

y(x) = [(x - x_1)(x - x_2)…(x - x_n)] / [(x_0 - x_1)(x_0 - x_2)…(x_0 - x_n)] · y_0 + [(x - x_0)(x - x_2)…(x - x_n)] / [(x_1 - x_0)(x_1 - x_2)…(x_1 - x_n)] · y_1 + …

Lagrange's inverse interpolation formula interchanges the roles of x and y:

x(y) = Σ_{i=0}^{n} x_i Π_{j≠i} (y - y_j)/(y_i - y_j)

 
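Lagrange's formula translates directly into Python (names and sample data are my own):

```python
# Lagrange interpolation sketch: P(x) = sum_i y_i * L_i(x)
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += yi * li
    return total

# quadratic through (0,0), (1,1), (3,9) is y = x^2
v = lagrange([0, 1, 3], [0, 1, 9], 2)
```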

Spline Interpolation

In spline interpolation a polynomial called a spline S(x) is assigned to each sub-interval [x_i, x_{i+1}], and the spline passes through every data point:

S_i(x_i) = y_i,  S_i(x_{i+1}) = y_{i+1}

Linear Spline: data points are connected with straight lines; discontinuous first derivative at the inner knots.

Quadratic Spline: continuous first derivative at the inner knots.

Cubic Spline: continuous first and second derivatives at the inner knots; data points are connected with smooth curves.

So a set of n + 1 data points with n sub-intervals has a different spline polynomial, say S_i(x), for each sub-interval.

 

Numerical Differentiation

 

Differentiating the polynomial formed using forward interpolation gives, at the tabulated point x = x_0 (q = 0),

dy/dx ≈ (1/h)[Δy_0 - (1/2)Δ²y_0 + (1/3)Δ³y_0 - (1/4)Δ⁴y_0 + …]

With backward interpolation, at x = x_n,

dy/dx ≈ (1/h)[∇y_n + (1/2)∇²y_n + (1/3)∇³y_n + …]

The error between interpolated and actual values is larger for numerical differentiation than for interpolation of the function itself.

 

Numerical Integration

 

∫ y dx from x_0 to x_n is evaluated from the tabulated points (x_0, y_0), (x_1, y_1), …, (x_n, y_n) with uniform spacing h.

Trapezoidal Rule (n=1):  ∫ y dx ≈ (h/2)[(y_0 + y_n) + 2(y_1 + y_2 + … + y_{n-1})]

Simpson's 1/3 Rule (n=2):  ∫ y dx ≈ (h/3)[(y_0 + y_n) + 4(y_1 + y_3 + …) + 2(y_2 + y_4 + …)]

Simpson's 3/8 Rule (n=3)

Weddle's Rule (n=6)

Boole's Rule

In all the above methods the general rule is to replace the tabulated points with a polynomial function and then integrate it. In other words, the polynomial is an interpolation function evaluated using Newton's forward (or backward) interpolation formula. If there are many data points, this single polynomial becomes oscillatory.

 

Piecewise interpolation finds a separate polynomial for each sub-interval rather than one polynomial for the whole set of data points. These piecewise functions are called splines.
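The trapezoidal and Simpson's 1/3 rules can be sketched in Python (the integrand and limits are illustrative choices):

```python
# Trapezoidal rule on n sub-intervals of uniform width h.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Simpson's 1/3 rule: n must be even.
def simpson13(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd ordinates
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even ordinates
    return h * s / 3

# integral of x^2 on [0, 3] is 9; Simpson is exact for polynomials up to cubic
t = trapezoid(lambda x: x**2, 0, 3, 300)
s = simpson13(lambda x: x**2, 0, 3, 6)
```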

 

 

 

Numerical Solutions of First Order ODE

 

Single-step methods

1.       Taylor's power series method

2.       Euler's method

3.       Modified Euler's method

4.       Runge-Kutta 4th order method

 

 

Multi-step methods

1.       Milne's predictor corrector method

2.       Picard's successive approximation method

3.       Adams-Bashforth-Moulton method

 

Taylor's power series method

 

Taylor series about a point x_0 is given by

y(x) = y(x_0) + (x - x_0) y′(x_0) + [(x - x_0)²/2!] y″(x_0) + …

Solve  Given

 

Sol : From the given equation we have

 

Using Taylor's series about  we have

 

Substituting we get the solution 

 

Euler's Method

 

y_{n+1} = y_n + h f(x_n, y_n)

·         Euler's method does not give an analytical expression for y in terms of x; it gives approximate values of y at discrete points x.

 

 

 

Solve   . Given  and use step size of  and find

 

Sol:

 

i     x_i      y_i        f(x_i, y_i)    y_{i+1} = y_i + h·f(x_i, y_i)

0     0        100        -1             75

1     25       75         -0.75          56.25

2     50       56.25      -0.5625        42.1875

3     75       42.18      -0.4218        31.63

4     100      31.63

 

 

 

 
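Euler's method can be sketched in Python. The ODE used below, dy/dx = -y/100 with y(0) = 100 and h = 25, is an assumption chosen because it reproduces the worked table above (the original equation did not survive conversion):

```python
# Euler's method sketch: y_{n+1} = y_n + h * f(x_n, y_n)
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # advance y along the tangent
        x += h
    return y

y = euler(lambda x, y: -y / 100, 0, 100, 25, 4)   # y at x = 100
```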

Modified Euler's Method

 

Predictor Formula : y*_{n+1} = y_n + h f(x_n, y_n)

Corrector Formula : y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y*_{n+1})]

Solve   . Given , use step size of  Find

 

Sol:

 

i     x_i      y_i        f(x_i, y_i)    y* (predictor)    f(x_{i+1}, y*)    y_{i+1} (corrector)

0     0        100        -1             75                -0.75             78.125

1     25       78.125     -0.78125       58.59             -0.5859           61.034

2     50       61.034     -0.6103        45.77             -0.4577           47.68

3     75       47.68      -0.4768        35.76             -0.3576           37.25

4     100      37.25

 

 

 

 

 

Runge-Kutta 4th order method

k_1 = h f(x_n, y_n)

k_2 = h f(x_n + h/2, y_n + k_1/2)

k_3 = h f(x_n + h/2, y_n + k_2/2)

k_4 = h f(x_n + h, y_n + k_3)

y_{n+1} = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6
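A single classical RK4 step can be sketched in Python (the ODE and step size are illustrative choices):

```python
# Classical Runge-Kutta 4th order: one step of size h.
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dx = y, y(0) = 1: one step of h = 0.1 approximates e^0.1
y1 = rk4_step(lambda x, y: y, 0, 1.0, 0.1)
```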

Milne's Predictor Corrector method

Predictor Formula :  y_{n+1} = y_{n-3} + (4h/3)(2f_{n-2} - f_{n-1} + 2f_n)

Corrector Formula :  y_{n+1} = y_{n-1} + (h/3)(f_{n-1} + 4f_n + f_{n+1})

Picard's Successive Approximation

y⁽¹⁾(x) = y_0 + ∫ f(t, y_0) dt  (from x_0 to x)

y⁽²⁾(x) = y_0 + ∫ f(t, y⁽¹⁾(t)) dt

and so on till the desired approximation is achieved

 

Adams-Bashforth-Moulton Method (ABM)

Adams-Bashforth predictor formula :  y_{n+1} = y_n + (h/24)(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3})

Adams-Moulton corrector formula :  y_{n+1} = y_n + (h/24)(9f_{n+1} + 19f_n - 5f_{n-1} + f_{n-2})

It is a non-self-starting four-step method which uses 4 initial points (y_{n-3}, y_{n-2}, y_{n-1}, y_n) to calculate y_{n+1}. It requires only two function evaluations of f per step.
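The predictor-corrector pair can be sketched in Python; generating the four starting values with RK4 is my assumption (the notes do not specify a starting method):

```python
# Adams-Bashforth-Moulton (PECE) sketch with an RK4 self-start.
def abm_solve(f, x0, y0, h, steps):
    def rk4(x, y):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    xs, ys = [x0], [y0]
    for _ in range(3):                      # three RK4 steps to start
        ys.append(rk4(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for n in range(3, steps):
        # predictor (Adams-Bashforth)
        yp = ys[n] + h / 24 * (55 * fs[n] - 59 * fs[n-1] + 37 * fs[n-2] - 9 * fs[n-3])
        xn1 = xs[n] + h
        # corrector (Adams-Moulton): 2 evaluations of f per step
        yc = ys[n] + h / 24 * (9 * f(xn1, yp) + 19 * fs[n] - 5 * fs[n-1] + fs[n-2])
        xs.append(xn1)
        ys.append(yc)
        fs.append(f(xn1, yc))
    return xs, ys

xs, ys = abm_solve(lambda x, y: y, 0, 1.0, 0.1, 10)   # dy/dx = y on [0, 1]
```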

 

Numerical Solutions of PDE

 

VECTOR CALCULUS

 

Line & Surface Integrals

 


 

Some common curves in parametric form

 

Circle :  x = r cos t,  y = r sin t,  0 ≤ t ≤ 2π

Ellipse :  x = a cos t,  y = b sin t,  0 ≤ t ≤ 2π

Segment of a line from point A to B :  r(t) = (1 - t)A + tB,  0 ≤ t ≤ 1

 

 

 

Vector equation of a curve

r(t) = x(t) i + y(t) j + z(t) k

Vector equation of a surface

r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k

Green's Theorem

∮_C (P dx + Q dy) = ∬_R (∂Q/∂x - ∂P/∂y) dx dy

·         Line integral converted to double integral

 

Stokes' Theorem

∮_C F · dr = ∬_S (∇ × F) · n dS

·         Line integral converted to surface integral and vice-versa

 

Conservative Field

 

A vector field F is called conservative if a scalar function φ can be found such that F = ∇φ. The scalar function φ is called the potential function of F.

Divergence Theorem

∬_S F · n dS = ∭_V (∇ · F) dV

·         Surface integral converted to volume integral

Line Integral of normal functions

 

Line integral w.r.t. arc length ds does not change sign if the curve is traversed in the opposite direction.

Line integrals w.r.t. the variables x or y change sign if the curve is traversed in the opposite direction.

Line Integral of Vector Field

 

Given F = F_1 i + F_2 j + F_3 k and r = x i + y j + z k,

the line integral of the vector field F is defined as

∫_C F · dr = ∫_C (F_1 dx + F_2 dy + F_3 dz)

Normal to a curve

 

A vector normal to the curve φ(x, y) = c (or the surface φ(x, y, z) = c) is given by ∇φ

·         n = ∇φ/|∇φ| if the surface S is given by φ(x, y, z) = 0, where φ is a scalar function

Surface Integral of vector functions/fields

 

 

Surface Integral of normal functions

∬_S g dS = ∬_D g(x, y, f(x, y)) √(1 + f_x² + f_y²) dA, where the surface S is z = f(x, y) and D is the region for double integration in the xy plane.

 

Surface Integral of normal functions (SURFACE PARAMETERIZED)

 

 

Direct Evaluation of Surface Integrals

 

Evaluation of Surface Integrals

 

1.       Evaluate a unit normal vector n to the elemental surface dS; then dS = n dS and the integral becomes ∬ F · n dS.

 

 

 


 

2.       Projection* of dS onto the xy plane

 


 

3.       Substitution for

 


 

4.       After all the above substitution we have

 

 

5.        Evaluate the multiple integral

 

 

 

Evaluate  where  and  is plane  located in the first quadrant.

 

1.       A vector normal to  is

 

2.       Projection of  on  axis


 

3.       After substitution

 

  

4.       Substitute

 

5.       Evaluate the double integral

 


 

 

 

 

* The concept of projection of one vector over another is used here.

MATHEMATICA

 

Basics

 

1.        

Abort execution of a command

Alt+.

2.        

Quit the kernel

Evaluation -> Quit Kernel -> Local

3.        

Include a comment

(* Insert comments between the asterisk delimiters *)

4.        

TraditionalForm[]

Displays an expression in traditional mathematical notation

5.        

Symbol=.

Removes the value of the symbol

6.        

Expression//Command

It is the same as Command[Expression]

7.        

N[Expression,n]

Attempts to give the answer to n digits of precision

8.        

?Command

Gives help about the command

9.        

??Command

Gives help on the attributes and options of the command

10.     

Ctrl+K

Completes the command name being typed, e.g. shows all commands starting with ‘Arc’

11.     

?Command*

Shows all commands beginning with, say, ‘Arc’

12.     

?`*

Lists all global variables

13.     

/.

Replacement or Substitution

14.     

/;

Conditional

15.     

Together[]

Combines the difference or sum of fractions

16.     

Function argument on the left hand side is always suffixed with an underscore in user defined functions

17.

:>

Rule Delayed

18

# &

Pure function

19.

/@

Used to map a function over a list: Map[f, list] is the same as f /@ list

 

Conditional Statements

Using := is a must. The argument of the function is suffixed with an underscore.

It is better to use the Piecewise[] function to create such functions, because limits and continuity can then be checked.

 

 Matrix form

 

Expression//MatrixForm

Expression//TableForm

Expression//TraditionalForm

 

List generating Functions

 

NestList[]

 

Range[]

 

Array[]

 

Table[]

 

 

Algebraic Equation

Solving Algebraic Equation

 

1.        

Solve [equation, variables]

 

2.        

Reduce[equation, variables]

 

3.        

Solve[equation, variables]//ComplexExpand

Represents complex number in traditional form rather than as rational power

4.        

Eliminate [equation, variables]

Removes variable from a set of simultaneous equation

5.        

NSolve [ equation, variables]

Gives numerical solution

6.        

FindRoot [equation, startingvalue]

Gives a numerical solution to a transcendental equation, searching from startingvalue

7.

LinearSolve [a,b]

Produces a vector x such that a.x = b