
Tries (Overview)

January 27, 2012

What is it? Trie? Did you mean “Tree”? No! I meant “Trie”. It is a data structure (also known as a “prefix tree”) that is used in the form of a dictionary. It is basically a way of representing data. I don’t think I can explain this without an example, so here we see the way to represent two English words, “Shrey” and “Shayne”, using a trie.

As you can see, a trie groups together words with the same prefix, in this example “SH”. So that is how you represent words. You can do the same for numbers (in any format – binary, decimal, etc.). Also, note that the two words have different “Y” nodes, because a trie represents words on the basis of prefixes. If we associate a count with each node, we can also tell the number of words starting with a particular prefix.

Please note that insertion into a trie takes O(length) time, where length is that of the word being inserted. We can create a trie for the English language using a structure with some information and 26 pointers for the corresponding letters. Easy, eh?
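To make the 26-pointer idea concrete, here is a minimal sketch in C (the node layout and names are my own choice, and it assumes lowercase words only):

#include <stdlib.h>

typedef struct TrieNode {
    int count;                   /* number of words passing through this node */
    int is_word;                 /* 1 if a word ends at this node */
    struct TrieNode *child[26];  /* one pointer per letter */
} TrieNode;

TrieNode *new_node(void) {
    return calloc(1, sizeof(TrieNode));  /* counts and pointers start at 0/NULL */
}

/* Insertion takes O(length) time: one step per character. */
void trie_insert(TrieNode *root, const char *word) {
    TrieNode *cur = root;
    for (size_t i = 0; word[i] != '\0'; i++) {
        int c = word[i] - 'a';
        if (cur->child[c] == NULL)
            cur->child[c] = new_node();
        cur = cur->child[c];
        cur->count++;            /* how many words share this prefix */
    }
    cur->is_word = 1;
}

With the count field, answering “how many words start with this prefix?” is just a walk down the prefix followed by reading count at the node you land on.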

So a trie can be put to many uses. In the next post, we’ll see an example of using a trie. 🙂

PS : For more information on Tries, visit http://en.wikipedia.org/wiki/Trie

PPS : Sorry for the long hiatus in posting!


Derangement

You must have heard of arrangement, but today we’ll talk about Derangement, its exact opposite. A derangement is an arrangement of a set of given items such that no item remains in its original place. For example, a question might say,

Suppose you are given 4 cards with A, B, C, D written on them, placed in that specific order. Count the number of ways these cards can be rearranged so that no card is in its original place.

The answer is 9. And the different arrangements are as follows:

BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, DCBA

So how do you go about solving such problems? Well, of course you use Derangement. So let us discuss how we count derangements. I’ll just pick up a short explanation from Wikipedia since I am too lazy to explain it myself 😛

Suppose that there are n persons numbered 1, 2, …, n, and n hats also numbered 1, 2, …, n. We have to find the number of ways in which no one gets the hat having the same number as his/her own. Let us assume that the first person takes hat i. There are n − 1 ways for the first person to choose the number i. Now there are 2 options:

  1. Person i does not take hat 1. This case is equivalent to solving the problem with n − 1 persons and n − 1 hats: each of the remaining n − 1 people has precisely 1 forbidden choice from among the remaining n − 1 hats (person i’s forbidden choice is hat 1).
  2. Person i takes hat 1. Now the problem reduces to n − 2 persons and n − 2 hats.
Now that you understand the concept, let us talk about counting the number of derangements. The number of derangements of an n-element set is called the subfactorial of n (denoted !n). The explanation above gives us a simple recurrence relation,
!n = (n-1) * (!(n-1) + !(n-2))
and from it one can also derive,
!n = n * !(n-1) + (-1)^n
The closed formula (the one whose image I sourced from AoPSWiki, once again laziness comes into play) is
!n = n! * (1 - 1/1! + 1/2! - 1/3! + … + (-1)^n/n!)
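If you want to actually compute !n, here is a small C sketch of the recurrence (base cases !0 = 1 and !1 = 0; a long long overflows somewhere past n = 20, so treat it as a toy):

#include <stdio.h>

long long subfactorial(int n) {
    if (n == 0) return 1;
    if (n == 1) return 0;
    long long a = 1, b = 0;                          /* !0 and !1 */
    for (int i = 2; i <= n; i++) {
        long long d = (long long)(i - 1) * (a + b);  /* !i = (i-1)(!(i-1) + !(i-2)) */
        a = b;
        b = d;
    }
    return b;
}

int main(void) {
    printf("%lld\n", subfactorial(4));   /* prints 9, matching the card example */
    return 0;
}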
Now I hope you all understand Derangement and its many applications! A very important concept for programmers and mathematicians alike. 🙂

Dynamic Programming (Overview)

You all might have heard a lot about Dynamic Programming and its awesomeness. Today, I will explain to you why exactly it is as awesome as we say it is. So let me start with the basic definition of Dynamic Programming:

It is a method to solve complex problems by breaking them down into simpler sub-problems which, once solved, do not need to be solved again in the future. This can be done with problems where an optimal substructure can be found (along with overlapping sub-problems). The solutions to these sub-problems are then merged to form the solution to the larger problem. There are two kinds of Dynamic Programming. As Wikipedia defines it –

Top-down dynamic programming simply means storing the results of certain calculations, which are later used again since the completed calculation is a sub-problem of a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a recursive series of simpler calculations.
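To make the two flavours concrete, here is a toy sketch in C using Fibonacci numbers (my own example, not from the Wikipedia article):

#include <stdio.h>

/* Top-down: recurse as usual, but cache (memoize) each result the first time. */
long long memo[91];                      /* zero-initialized; 0 means "not computed yet" */
long long fib_top_down(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];    /* reuse the stored sub-problem */
    return memo[n] = fib_top_down(n - 1) + fib_top_down(n - 2);
}

/* Bottom-up: fill a table from the smallest sub-problem upward. */
long long fib_bottom_up(int n) {
    long long table[91];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void) {
    printf("%lld %lld\n", fib_top_down(50), fib_bottom_up(50));  /* same answer twice */
    return 0;
}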

For example, if you have an array and the problem given to you is one of dynamic programming, you will most probably be calculating a value at each index, depending on the values you calculated at previous indices. Please note, doing this requires an optimal substructure to be found. (Some of you might find this task a pretty daunting one.)

Some common examples of problems where Dynamic Programming can be useful are finding the longest increasing subsequence (not necessarily contiguous) in an array, finding the contiguous set of elements in an array with the maximum sum, calculating the longest common subsequence of two given sequences, machine scheduling, etc.
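As a small taste, here is a sketch in C of the second problem, the maximum-sum contiguous segment (commonly known as Kadane's algorithm); the array contents are just sample data:

#include <stdio.h>

/* At each index we compute the best sum ending exactly there,
   which depends only on the value computed at the previous index. */
int max_subarray(const int *a, int n) {
    int best_ending_here = a[0];
    int best = a[0];
    for (int i = 1; i < n; i++) {
        if (best_ending_here > 0)
            best_ending_here += a[i];    /* extend the previous segment */
        else
            best_ending_here = a[i];     /* start afresh at index i */
        if (best_ending_here > best)
            best = best_ending_here;
    }
    return best;
}

int main(void) {
    int a[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
    printf("%d\n", max_subarray(a, 9));  /* prints 6, for the segment [4, -1, 2, 1] */
    return 0;
}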

I will discuss such problems in the upcoming posts. 🙂

PS: A problem discussed earlier, relating to 2 finite sequences and their intersections, is somewhat an example of Dynamic Programming as well.

Bit Manipulation (Overview)

October 12, 2011

“What? That title isn’t a question like you promised.” – Yep, it isn’t. I just wanted to give a short introduction to Bit Manipulation so that it might be easier for you to understand the term when I use it in future posts. An application of bit manipulation was already shown in the previous post (find the only element in an array which does not have a duplicate).

So if you don’t know it yet, all bit manipulation is about is playing around with bits to get the desired result. The most used operators for such manipulations are | (bitwise OR), & (bitwise AND), ^ (XOR), >> (right shift), << (left shift).

Hopefully, everyone will be familiar with the operation of the first 3 operators, so I’ll just explain the way the shift operators function.

2<<3 would give you 16. Why? In binary, 2 = 10; shifting it left by 3 bits, we get 10000, which equals 16. Simple as that. So a left shift multiplies your number by 2 raised to the number of bits you shift.

On the other hand, 16>>3 gives 2. The logic is exactly the opposite of that for the left shift. Thus, a right shift ends up dividing a number by 2 raised to the number of bits you shift. Easy-peasy!
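Here is a quick sketch in C demonstrating all five operators on small, arbitrary example values:

#include <stdio.h>

int main(void) {
    printf("%d\n", 6 | 3);    /* OR:  110 | 011 = 111 -> 7 */
    printf("%d\n", 6 & 3);    /* AND: 110 & 011 = 010 -> 2 */
    printf("%d\n", 6 ^ 3);    /* XOR: 110 ^ 011 = 101 -> 5 */
    printf("%d\n", 2 << 3);   /* left shift:  2 * 2^3 = 16 */
    printf("%d\n", 16 >> 3);  /* right shift: 16 / 2^3 = 2 */
    return 0;
}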

You will be amazed to see the applications of such bit manipulations to problems. Some problems are rendered so easy with the use of these operators that it would almost make you laugh. Other problems (like finding ‘a*7’ for a given ‘a’ without using *, /, +) necessarily require the use of bit manipulation. We will soon discuss more applications where bit manipulations make your life easier. 🙂

PS: The solution to the a*7 question above is –

int b = a << 3; printf("%d\n", b - a); // in essence, multiplying the number by 8 and then subtracting it once from the result of the shift 😉

More cool stuff follows soon.

 

Sorting Algorithms (Overview)

October 11, 2011

“Bah! This is what you give to us as your first useful post?” might be what is running through your mind right now. Well, the answer is yes. Sorting algorithms are among the most used classes of algorithms in every sort of application that you use. While these might be taught well at the graduate level, their applications are not so well explained, and students don’t realize the different kinds of problems that can be solved using such algorithms.

So let me first name the algorithms which are used for sorting and give a short overview of each. In subsequent posts, I’ll go on to show the applications (the usual suspects, and some new ones) of the sorting algorithms. But you just have to know the following methods for any interview/development process.

1. Bubble Sort – With a time complexity of O(n^2), this is one of the worst algorithms you can use to sort your data. If you plan to use this ever, you aren’t fit to be on this website. The noobs can look up the algorithm and understand it on the wiki link here.

2. Exchange/Selection Sort – The complexity is O(n^2) once again, and what was said for bubble sort is true for this as well. Once again, for noobs, the wiki link is here. Both this and bubble sort can be fine-tuned a bit, but not to much effect.

3. Insertion Sort – This algorithm falls in the same category as the previous two if implemented in the way explained at the start of the wiki link here. However, things can get interesting if this kind of sort is performed on a linked list while creating it. But again, such a method will prove efficient only for small data sets.

4. Merge Sort – Hmm. O(n lg n) time complexity. Now we come to interesting things. Here, what we do is separate the data set into smaller fragments till we are left with segments of size one. Once this has been done, we start merging the fragments in such a way that each merged fragment is sorted. This is known as the Divide-And-Conquer method. Since I expect readers to know how to implement all of these sorting algorithms, those not knowing the exact working of the algorithm can again refer to the wiki link here.

5. Heap Sort – Oh, isn’t this just cool! It has a time complexity of O(n lg n) as well. So what we do in heap sort is form a heap of all the elements (a max or min heap, depending on what you want), then repeatedly remove the root from the heap and re-heap what remains. So what you might be wondering is, what exactly is a heap?! Quite simply put, it is a binary tree with the property that every parent in the tree is either greater (in a max heap) or lesser (in a min heap) than both its children. Also, each new element is added in a balanced manner before the tree is re-heaped. You know what, I’ll give you a great powerpoint slide from my professor which explains Heap Sort very well. Here you go: Heap Sort. (The great part is when you realize a heap can be easily implemented using an array itself. Remember doing that with binary trees?! 🙂 A toy sketch follows after this list.)

6. Quick Sort – With a time complexity of O(n lg n), this type of sort is another example of the Divide-And-Conquer method that we talked about earlier. The reader might want to note that such a method (DAC) always uses recursion as its modus operandi. In this particular method, an element is selected as the pivot (generally the last element, but variations exist) and this pivot is put in its right place. This ensures that the data-set is now divided into 2 parts (except in the worst case, i.e., a data-set already sorted in some order). Now the same technique is applied to the two parts, and so on. Again, here is an awesome powerpoint slide to make you understand: Quicksort. (A toy sketch also follows after this list.)

7. Counting Sort – This is a sort based on the concept of counting how many times each number occurs. It can only be applied when you know the range of elements that can be present in your data-set. With a time complexity of O(n), it adds a space complexity of the order of the number of possible elements in the data-set. The sort is easily understandable, and the concept of counting is better used in other places, as will be explained soon in an applied post. 🙂 Noobs – refer to the wiki link here for details, and see the sketch below.
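As promised in the list above, here are toy sketches in C of heap sort, quick sort and counting sort. They are my own minimal versions, meant only to make the descriptions concrete; refer to the slides and wiki links for the full treatment.

First, heap sort on the array representation (children of index i live at 2i+1 and 2i+2, which is the “great part” mentioned above):

/* Restore the max-heap property for the subtree rooted at i. */
static void sift_down(int *a, int n, int i) {
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;
    }
}

void heap_sort(int *a, int n) {
    for (int i = n / 2 - 1; i >= 0; i--)   /* build the max heap */
        sift_down(a, n, i);
    for (int i = n - 1; i > 0; i--) {      /* remove the root, then re-heap */
        int t = a[0]; a[0] = a[i]; a[i] = t;
        sift_down(a, i, 0);
    }
}

Next, quick sort with the last element as the pivot (the Lomuto partition scheme; other pivot choices exist, as noted above):

/* Put the pivot in its final, correct place and return that position. */
static int partition(int *a, int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

void quick_sort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);   /* data-set is now divided into 2 parts */
    quick_sort(a, lo, p - 1);
    quick_sort(a, p + 1, hi);
}

And finally, counting sort, assuming every element is known to lie in a small range (0 to 9 here):

#include <string.h>

#define MAXV 9

void counting_sort(int *a, int n) {
    int count[MAXV + 1];
    memset(count, 0, sizeof(count));
    for (int i = 0; i < n; i++)       /* count how many times each value occurs */
        count[a[i]]++;
    int k = 0;
    for (int v = 0; v <= MAXV; v++)   /* write each value back count[v] times */
        while (count[v]-- > 0)
            a[k++] = v;
}

A tiny usage example tying the three together:

#include <stdio.h>

int main(void) {
    int a[] = {4, 10, 3, 5, 1}, b[] = {9, 4, 7, 3, 8, 1}, c[] = {3, 1, 4, 1, 5, 9, 2, 6};
    heap_sort(a, 5);
    quick_sort(b, 0, 5);
    counting_sort(c, 8);
    printf("%d %d %d\n", a[0], b[0], c[0]);   /* smallest element of each: 1 1 1 */
    return 0;
}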

Hah! So we come to the end of our overview of the sorting algorithms. As mentioned in the introductory post, I will dedicate more time on this blog to tackling application problems rather than explaining such simple algorithms in painful detail; that, you can find anywhere on the internet. So hopefully this will be recap enough for you to understand the application questions on sorting algorithms in the next few posts.

PS: I skipped 2 ubercool algorithms: Bucket and Radix sort. I feel these will be better explained with an application, and I will follow up on them in later posts. 🙂

 
