
Starting with Queues

In this post we move on from stacks, and it's all about queues. The first thing that comes to mind when we talk about queues is a movie queue: the earlier you arrive, the better your chance of getting a good seat. This data structure works the same way. It serves elements on a First In, First Out (FIFO) basis: elements are added at the rear and must be removed from the front.
That's right, with stacks we only had to keep track of the top element, but here we need to keep track of two positions: the rear and the front.
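Before building a queue from scratch, the FIFO idea can be seen with Java's built-in `java.util.ArrayDeque`. This is just a quick sketch to show the behaviour; the class and names here are not part of the implementation below.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FifoDemo {
    public static void main(String[] args) {
        Queue<String> movieQueue = new ArrayDeque<>();
        movieQueue.add("Alice");  // arrives first, joins at the rear
        movieQueue.add("Bob");
        movieQueue.add("Carol");
        // poll() removes from the front: first in, first out
        System.out.println(movieQueue.poll()); // prints "Alice"
        System.out.println(movieQueue.poll()); // prints "Bob"
    }
}
```

Notice that `poll()` always serves whoever has been waiting longest, exactly like the ticket line.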




public class QueueOne {
    private int maxSize;    // capacity of the array
    private int[] arr;      // the queue storage
    private int rear;       // index of the back element
    private int front;      // index of the front element
    private int elementNum; // current number of elements

    public QueueOne(int x) { // initialising the queue in the constructor
        maxSize = x;
        arr = new int[maxSize];
        front = 0;           // no front element yet
        rear = -1;           // no back element yet
        elementNum = 0;      // initially 0 elements
    }
    /* ============== functions ============= */
    public boolean isFull() {
        return elementNum == maxSize;
    }
    public boolean isEmpty() {
        return elementNum == 0;
    }
    public void insert(int i) {
        if (!isFull()) {                   // check whether the queue is full
            rear = (rear + 1) % maxSize;   // wrap around so freed slots at the start are reused
            arr[rear] = i;
            elementNum++;
        } else {
            System.out.println("This queue is full");
        }
    }
    public int remove() {
        if (!isEmpty()) {
            int value = arr[front];
            front = (front + 1) % maxSize; // wrap around
            elementNum--;                  // reduce the size by one
            return value;
        } else {
            System.out.println("This queue is empty");
            return 0;
        }
    }
    public int peekFront() {
        if (!isEmpty()) {
            return arr[front];
        } else {
            return 0; // queue is empty
        }
    }
    public int peekRear() {
        if (!isEmpty()) {
            return arr[rear];
        } else {
            return 0; // queue is empty
        }
    }
    public int size() {
        return elementNum; // return the queue size
    }
    public static void main(String[] args) {
        QueueOne q = new QueueOne(6);
        q.insert(20);
        q.insert(25);
        q.insert(70);
        q.insert(125);
        System.out.println("Front " + q.peekFront());
        System.out.println("rear " + q.peekRear());
        System.out.println("Size " + q.size());
        q.remove();
        System.out.println("Front " + q.peekFront());
        System.out.println("rear " + q.peekRear());
        System.out.println("Size " + q.size());
    }
}
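The array-backed queue above always has a fixed capacity. As an alternative, here is a minimal sketch of an unbounded queue backed by a singly linked list, where inserting links a new node behind the rear and removing advances the front pointer. The class and field names here (`LinkedQueue`, `Node`) are illustrative, not from the implementation above.

```java
// A minimal sketch of an unbounded FIFO queue using a singly linked list.
public class LinkedQueue {
    private static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }
    private Node front; // elements are removed from here
    private Node rear;  // elements are inserted here
    private int size;

    public boolean isEmpty() { return size == 0; }
    public int size() { return size; }

    public void insert(int i) {
        Node node = new Node(i);
        if (rear == null) {
            front = node;       // empty queue: node is both front and rear
        } else {
            rear.next = node;   // link behind the current rear
        }
        rear = node;
        size++;
    }

    public int remove() {
        if (isEmpty()) {
            System.out.println("This queue is empty");
            return 0;
        }
        int value = front.value;
        front = front.next;     // advance the front pointer
        if (front == null) {
            rear = null;        // the queue became empty
        }
        size--;
        return value;
    }

    public static void main(String[] args) {
        LinkedQueue q = new LinkedQueue();
        q.insert(20);
        q.insert(25);
        q.insert(70);
        System.out.println(q.remove()); // prints 20: first in, first out
        System.out.println(q.remove()); // prints 25
        System.out.println("Size " + q.size());
    }
}
```

The trade-off is that each element pays the cost of one extra node object, but the queue never reports that it is full.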

The results would look like this (the first three lines are from before removing an element):

Front 20
rear 125
Size 4
Front 25
rear 125
Size 3
