Algorithms

Complexity | Description | Examples |
---|---|---|
1 | Constant: the algorithm does not depend on the input size. Executes each instruction a fixed number of times. | Arithmetic operations (+, -, *, /, %); comparison operators (<, >, ==, !=); variable declaration; assignment statement; invoking a method or function |
log N | Logarithmic: gets slightly slower as N grows. Whenever N doubles, the running time increases by a constant. | Bits in binary representation of N; binary search; insert/delete into a heap or BST |
N | Linear: optimal if you need to process N inputs. Whenever N doubles, so does the running time. | Iterate over N elements; allocate an array of size N; concatenate two strings of length N |
N log N | Linearithmic: scales to huge problems. Whenever N doubles, the running time more (but not much more) than doubles. | Quicksort; Mergesort; FFT |
N^2 | Quadratic: practical only on relatively small problems. Whenever N doubles, the running time increases fourfold. | All pairs of N elements; allocate an N-by-N array |
N^3 | Cubic: practical only on small problems. Whenever N doubles, the running time increases eightfold. | All triples of N elements; N-by-N matrix multiplication |
2^N | Exponential: not usually appropriate for practical use. Whenever N doubles, the running time squares! | Number of N-bit integers; all subsets of N elements; discs moved in the Towers of Hanoi |
N! | Factorial: worse than exponential. Whenever N increases by 1, the running time increases by a factor of N. | All permutations of N elements |
1. Binary search program in C with recursion
5. Selection sort in C with recursion
8. Euclid's method for GCD program in C
9. Lucas numbers sequence program in C
- Linear — O(n)
- Quadratic — O(n^2)
- Cubic — O(n^3)
- Logarithmic — O(log n)
- Exponential — O(2^n)
- Square root — O(sqrt n)
- O(1) constant
- O(log(n)) logarithmic
- O((log n)^c) polylogarithmic
- O(n) linear
- O(n^2) quadratic
- O(n^c) polynomial
- O(c^n) exponential
http://science.slc.edu/~jmarshall/courses/2002/spring/cs50/BigO/
In-place algorithm (with example)
Build the heap (max-heap, one element at a time): http://en.wikipedia.org/wiki/Heapsort
In-place sorting algorithm
A sorting algorithm is said to be in-place if it requires very little additional space besides the initial array holding the elements that are to be sorted. Normally "very little" is taken to mean that sorting n elements requires O(log n) extra space. This is reasonable because, in a purely mathematical analysis of the algorithms, any sorting algorithm that operates on a contiguous array requires O(log n) extra space, since that is the number of bits required to represent an index into the array. Normally one ignores this, since most analyses treat integers as a data type requiring constant space and constant time for basic operations. An exception is the number-theoretic algorithms often encountered in cryptography.
Of course, the efficiency of any algorithm depends on how the data is stored; elementary discussions of sorting algorithms usually focus on elements stored in a contiguous array (which has constant-time access and swap operations but slow shifts). In this context, Heapsort is an in-place sorting algorithm, since it requires only constant additional space. Quicksort is also considered in-place, although it requires an amount of extra space logarithmic in the size of the array: most implementations are recursive, and each recursive call stores some local variables on the stack; the depth of the recursion is usually O(log n) but can be O(n) in degenerate cases. If you convert Quicksort to a non-recursive algorithm, you will find it necessary to store some intermediate indexes on a stack, which usually grows to size O(log n) but may grow to size O(n). Mergesort runs in O(n log n) time but is not in-place when dealing with arrays.
On the other hand, if one is sorting linked lists, looking up an arbitrary element by index requires O(n) steps, so Quicksort and Heapsort need drastic modification to run even in O(n^2) time; this is not a natural setting for these algorithms. Mergesort, on the other hand, retains its O(n log n) time complexity and requires only O(log n) extra space.
In-place sorting is often useful when dealing with truly enormous data sets, where O(n) extra space is genuinely difficult to afford. In-place sorting algorithms may or may not have better locality of reference than other sorting algorithms. Enormous data sets are usually stored on media where random access is very expensive but extra storage space is relatively inexpensive (such as disks); in these cases, whether an algorithm is in-place is less relevant. In fact, even for data stored in arrays, Mergesort is much more efficient for disk-based sorting than the other algorithms because of its better locality of reference.
It should be pointed out that any analysis of the standard sorting algorithms rests on an assumption that is almost never true: that the only operation possible on keys is comparison. Sorting methods that take advantage of the structure of keys (say, when they are strings) can be much faster both asymptotically and in practice; they are generally not in-place, but the extra space is often worth the time savings.
Dynamic programming Fibonacci in C: http://www.geeksforgeeks.org/program-for-nth-fibonacci-number/