Sunday, March 30, 2014

week11: sorting and efficiency



Time complexity is just one part of measuring how good a program is.

Why is time complexity useful? From my point of view, it is because we usually use computers to deal with amounts of data that would be too large to handle manually, and when n becomes large enough, n * n is always larger than c * n no matter what constant c is.

It is exactly this application of computers to large, rigid amounts of data that makes time complexity a useful measure.
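
A tiny made-up illustration of that claim: even with a big constant like c = 1000, c * n eventually loses to n * n once n grows past c.

c = 1000
for n in [10, 100, 1000, 10000]:
    print(n, c * n, n * n, n * n > c * n)
# 10     10000      100        False
# 100    100000     10000      False
# 1000   1000000    1000000    False  (they tie exactly when n == c)
# 10000  10000000   100000000  True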

(1) Selection sort

It has O(n^2) time complexity.


def selection_sort(L):
    """Sort the items in L in non-decreasing order."""

    i = 0
    while i != len(L) - 1:

        # Find the index of the smallest remaining item and move it to index i.
        smallest = min(range(i, len(L)), key=lambda j: L[j])
        L[smallest], L[i] = L[i], L[smallest]
        i += 1

The worst case is when we always have to go through all of the remaining items to find the minimum (e.g. a reverse-sorted list; in fact selection sort does this full scan no matter how the input is ordered),
so the count is n + (n-1) + ... + 3 + 2 + 1 = (n*n + n)/2,
which is O(n^2).

(2) Insertion sort
Its worst-case time complexity is O(n^2).
On every iteration this sort makes sure that the first k items (for the kth iteration) are sorted.
So the worst case is when you have to shift the whole sorted part on every iteration (i.e. a reverse-sorted list).

def insert(L, i):
    """Move L[i] to where it belongs in L[:i]"""

    v = L[i]
    while i > 0 and L[i - 1] > v:
        L[i] = L[i - 1]
        i -= 1
    L[i] = v

def insertion_sort_1(L):
    """Sort the items in L in non-decreasing order."""

    i = 1

    # Insert each item i where it belongs in L[0:i + 1]
    while i != len(L):
        insert(L, i)
        i += 1

(3) Bubble sort
Bubble sort has worst-case and average complexity both O(n^2).
Just as its name suggests, each pass checks all the items by comparing adjacent pairs; if a pair is out of order, the two items are swapped, so the larger one "bubbles up".
From my point of view, bubble sort is worse than selection sort, although they are both O(n^2) in the worst case.

def bubblesort_1(L):
    """Sort the items in L in non-decreasing order."""

    j = len(L) - 1
    while j != 0:

        # Swap every item that is out of order.
        for i in range(j):
            if L[i] > L[i + 1]:
                L[i], L[i + 1] = L[i + 1], L[i]
        j -= 1

The reason some may think it is better is that it does something useful on every comparison. But that is an illusion.
Taking the stairs two steps at a time is faster than taking them one step at a time.
You could swap the ith and jth items directly, but instead of doing that, bubble sort swaps the ith and (i+1)th, then the (i+1)th and (i+2)th, ..., and finally the (j-1)th and jth.
It looks like every step has some effect, but in fact you turn every input into the worst case by moving items along one position at a time.
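
A little homemade experiment (not lecture code) that backs this up: count the swaps each sort does on the same reverse-sorted list. Selection sort swaps each item into place at most once, while bubble sort drags items along one position at a time.

def count_selection_swaps(L):
    """Count swaps made by selection sort (same idea as selection_sort above)."""
    L, swaps, i = L[:], 0, 0
    while i < len(L) - 1:
        smallest = min(range(i, len(L)), key=lambda j: L[j])
        if smallest != i:
            L[smallest], L[i] = L[i], L[smallest]
            swaps += 1
        i += 1
    return swaps

def count_bubble_swaps(L):
    """Count swaps made by bubble sort (same idea as bubblesort_1 above)."""
    L, swaps, j = L[:], 0, len(L) - 1
    while j > 0:
        for i in range(j):
            if L[i] > L[i + 1]:
                L[i], L[i + 1] = L[i + 1], L[i]
                swaps += 1
        j -= 1
    return swaps

data = list(range(1000, 0, -1))       # a reverse-sorted list of 1000 items
print(count_selection_swaps(data))    # at most 999 swaps
print(count_bubble_swaps(data))       # 499500 swaps, about n * (n - 1) / 2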

(4) Mergesort
(4) Mergesort
Its worst-case time complexity is O(n log n), and its worst-case space complexity is O(n).
Mergesort is similar to insertion sort: instead of making sure that the first k items are sorted, mergesort makes sure that all of the sub-parts are sorted.
This is a good use of recursion - it saves a lot of typing.

def _merge_1(left, right):
    """Merge sorted lists left and right into a new list and return that new
    list."""

    result = []

    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result += left[i:]
    result += right[j:]
    return result

def _mergesort_1(L):
    """Return a new list that is a sorted version of list L."""

    if len(L) < 2:
        return L[:]
    else:
        middle = len(L) // 2
        left = _mergesort_1(L[:middle])
        right = _mergesort_1(L[middle:])
    return _merge_1(left, right)

There are many versions of mergesort. They share the same character: divide all the items into smaller parts (small enough), sort them, and merge them back together.

Because of the different ways of dividing, I think their worst cases are different, although they have the same time complexity.
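
For example, here is a sketch of a bottom-up version (my own variant, reusing _merge_1 from above, so the wiring is only an assumption): instead of splitting in half recursively, it starts from runs of one item each and merges neighbouring runs until only one run is left.

def _mergesort_bottom_up(L):
    """Return a new sorted list by repeatedly merging neighbouring runs."""
    runs = [[x] for x in L]              # every single item is already a sorted run
    while len(runs) > 1:
        merged = []
        for k in range(0, len(runs) - 1, 2):
            merged.append(_merge_1(runs[k], runs[k + 1]))
        if len(runs) % 2 == 1:           # an odd run left over: carry it forward
            merged.append(runs[-1])
        runs = merged
    return runs[0] if runs else []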



(5) Quicksort
It has O(n log n) expected time complexity.

Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists.
The worst case for quicksort (with this code) is a list that is already sorted, since in that case each partition produces two sub-lists, one of which is empty. (A common workaround is sketched after the code below.)

However, even in the worst case, it can still keep up with the other sorting programs (shown by the data in our lab 10).

def _partition_1(L):
    """Rearrange L so that items < L[0] are at the beginning and items >= L[0]
    are at the end, and return the new index for L[0]."""

    v = L[0]
    i = 1
    j = len(L) - 1
    while i <= j:
        if L[i] < v:
            i += 1
        else:
            L[i], L[j] = L[j], L[i]
            j -= 1

    L[0], L[j] = L[j], L[0]
    return j

def _quicksort_1(L):
    """Sort L by partitioning it around the first item, then recursing."""

    if len(L) <= 1:
        return L
    else:
        pivot = _partition_1(L)
        left = L[:pivot]
        right = L[pivot + 1:]
        return _quicksort_1(left) + [L[pivot]] + _quicksort_1(right)
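
Since the already-sorted worst case comes from always using the first item as the pivot, a common workaround (just a variant I am sketching here, not the lab's code) is to swap a randomly chosen item to the front before partitioning:

import random

def _quicksort_random(L):
    """Like _quicksort_1, but pick the pivot at random, so an already-sorted
    list is no longer a guaranteed worst case."""
    if len(L) <= 1:
        return L
    else:
        k = random.randrange(len(L))
        L[0], L[k] = L[k], L[0]          # move a random item into the pivot spot
        pivot = _partition_1(L)
        return (_quicksort_random(L[:pivot]) + [L[pivot]] +
                _quicksort_random(L[pivot + 1:]))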




One important part of time complexity is how we define "the time".
For instance, consider the binary search tree.

Given a full binary search tree, what is the time complexity of a function that finds a specific number in the tree? It is O(log n).
The worst case is that we need to find an item at the very bottom; however, since the tree is "sorted", each comparison excludes at least half of the remaining data,
so it takes about log2(n) comparisons => O(log n).
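
To make "each comparison is one step" concrete, here is a small homemade sketch (my own node class and names, not lecture code) that searches a BST and counts how many nodes it had to look at:

class BTNode:
    """A minimal binary search tree node."""
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def bst_search(node, value):
    """Return whether value is in the BST rooted at node, and how many nodes
    were examined; in a full tree this is at most about log2(n) + 1."""
    examined = 0
    while node is not None:
        examined += 1
        if value == node.item:
            return True, examined
        elif value < node.item:
            node = node.left         # everything in the right half is excluded
        else:
            node = node.right        # everything in the left half is excluded
    return False, examined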

Here we count each comparison as one step, but in the sorting programs above we counted each swap or change as one step.

When the definition of a "step" is not fixed to be the same, comparing the formulas does not really make sense.

For example, given a full binary search tree, what is the time complexity of a function that produces a sorted list of all the elements in the tree?

We could say it is O(0) (which makes some sense, since the data is somehow already "sorted" and needs no swaps) or O(n), depending on which of the two definitions of a step above we use.
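
And for the sorted-list question, here is a sketch of the usual in-order traversal (using the same made-up BTNode from the sketch above): it does no swaps at all, yet it still visits every node once, so it is O(n) by the "visit each node" way of counting.

def inorder(node):
    """Return a sorted list of all the items in the BST rooted at node."""
    if node is None:
        return []
    # left subtree first, then this item, then the right subtree
    return inorder(node.left) + [node.item] + inorder(node.right)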

week9

During the lecture we learned a new concept named the Binary Search Tree. In the end the tree holds the same data a plain list would, just arranged differently.

I think it is useful: for example, if we want a sorted output of all the elements, the best thing to do is to keep them sorted as they are inserted.

However, I think it is also a typical case of trading space for time...

Usually, with a list of n elements, about n + 1 addresses are produced to hold the values.
But when we use a BST, about 3n + 1 are needed, since each node holds an item plus links to its two children, which is roughly three references per element, plus one for the root.
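
A rough sketch (with made-up names) of where the 3n + 1 comes from:

class BSTNode:
    """One node of a BST: about three references per stored item."""
    def __init__(self, item, left=None, right=None):
        self.item = item      # 1 reference for the value
        self.left = left      # 1 reference for the left subtree
        self.right = right    # 1 reference for the right subtree

# three nodes -> about 3 * 3 references, plus the single 'root' name
root = BSTNode(5, BSTNode(3), BSTNode(8))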

week8

This week we did more on linked lists. We had done a lab for linked lists in week 7, but we did a continuation of it in this week's lab. My partner and I had a lot of trouble figuring out the different methods for linked lists. Even though we drew pictures, we still had difficulties. Clearly this is an important concept that I need to figure out.

It is the end of the structure that matters:
whether the attribute exists at all, and whether it is None or not.
These two things look similar but need to be treated carefully and separately.

From my point of view, the .empty attribute is unnecessary; it is sufficient to decide whether the list is empty by looking at the link to the next node.
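
For example, a minimal sketch (made-up class and attribute names, not the lab's exact code) of deciding emptiness from the link alone:

class LinkedList:
    """A bare-bones linked list front end."""
    def __init__(self):
        self.front = None            # no first node yet

    def is_empty(self):
        # The front link already tells the whole story; no .empty flag is needed.
        return self.front is None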

Sunday, March 9, 2014

week7

Recursion:

Honestly speaking, recursion is the best thing I have ever met in Computer Science.

Its applications are so useful that I really doubt I can write them down properly.

(1)
From my point of view, recursion is a method that sacrifices space in order to save time.

Whenever a recursive call happens, the calling level of the recursion is saved. This means that until the whole recursion reaches its end, all of that space stays occupied. The bad part of this fact is that we never know for sure whether it will end: if the recursion gets into an endless cycle, the program will be crashed by the burden of space.

When I say saving time, I do not mean the computer running quicker (although, in fact, recursion can make a program run more effectively).

It saves typing time!
It saves a lot of typing time!

Recursion provides a way of thinking that fits the human mind!
The logic of recursion is not something mechanical like 1 + 1 == 2.

(2) Recursion application
The applications of recursion are very wide. Any structured data can be handled by recursion.
e.g. a(i) = a(i-1) + a(i-2), where a(1) = 1 and a(2) = 1

Binary trees, linked lists, power sets: these things are defined from the bottom up, in terms of themselves.

Recursion usually turns a big, complex problem into many similar small problems. Only a few limited sentences describe an infinite object.





Recursion also has some outstanding benefits on small problems.

Suppose there is a staircase with n steps; you can go up one step at a time, or you can take two steps at once. How many different ways are there to climb it in total?

def f(x):
    if x == 1:
        return 1
    elif x == 2:
        return 2
    else:
        return f(x - 1) + f(x - 2)
Using recursion here is a good idea.
If you are asked not to use recursion, you might need to type a lot:
def f:
    if there is only 1 two-step
    if there are 2 two-steps
    if there are 3 two-steps
    .......
or maybe you can use

     i = 0
     while i < n:
          if there are i two-steps
          ...
          i = i + 1

But obviously, using recursion is nicer and more elegant.

When doing a problem with recursion, I always have a feeling that what I am really doing is induction:
start from the base case, and then somehow make every induction step support the next one.

(3) Lab: linked lists
The lab about linked lists made me feel really bad.
It is a really bad idea to implement a Queue with a linked list.
The end of the linked list is so unclear that you really need to draw it on paper to see it.
And whenever you change a value, you have to care about its .empty attribute, or whether its .children attribute exists.

It is a challenge, but it also shows that some kinds of data may not be as easy to handle as we thought.
Recursion needs a clear structure, and the confusing end of a linked list (okay, I admit that if you draw it on paper you will not mix it up) makes that hard. When I was doing that lab, I spent most of my time typing code just to find out whether the .children attribute existed or not.....

Maybe if the emptiness check were built into each method, rather than kept in an .empty attribute, then things would be better.


week6

Today someone told me a very interesting idea about property.

When we use a property, we want to disable others' ability to modify our class attributes directly.
When we consulted the professor about how to use it, he smiled and said that it is enough to set only the get method of the property.

class Point:
    def set_coord(self: 'Point', coord: [float, ...]) -> None:
        """Set coordinates for this Point."""
        if '_coord' in dir(self):
            raise Exception('Cannot reset coords')
        else:
            self._coord = coord

    def get_coord(self: 'Point') -> [float, ...]:
        """Get coordinates for this Point."""
        return self._coord

    coord = property(get_coord, set_coord, None, None)


This is the class example of how to use property.

It is true that we are unable to change self.coord directly after initialization.
However, someone figured out a way around it.

Notice that it is perfectly fine to change self._coord directly... and even with the property in place, that change takes effect...
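
A quick demo of that loophole, assuming the Point class above:

p = Point()
p.coord = [1.0, 2.0]      # the first assignment goes through set_coord and works
# p.coord = [3.0, 4.0]    # a second assignment raises Exception('Cannot reset coords')
p._coord = [3.0, 4.0]     # ...but writing the underscore name sneaks past the property
print(p.coord)            # [3.0, 4.0]
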
I think that maybe this is why programmers look awesome... making things complicated on purpose.

Saturday, February 22, 2014

week5

There is really nothing to talk about this week....

I found something useful online about unittest:

unittest.skip(reason)
Unconditionally skip the decorated test. reason should describe why the test is being skipped.

unittest.skipIf(condition, reason)
Skip the decorated test if condition is true.
unittest.skipUnless(condition, reason)
Skip the decorated test unless condition is true.
unittest.expectedFailure()
Mark the test as an expected failure. If the test fails when run, the test is not counted as a failure.
 unittest.SkipTest(reason)
This exception is raised to skip a test.




Maybe it is not clear how to use these. I will show you a homemade example.


import random
import unittest

def f(x, y):
    # The function you want to test -- define it yourself.
    # For convenience, we test whether f(x, y) = x / y is right.
    return x / y

a = random.choice(range(10))
b = random.choice(range(1, 10))

class Test(unittest.TestCase):

    @unittest.skipIf(a == 0, "0 can not divide")
    def test_f(self):
        self.assertEqual(f(1, 5), 0.2)
        self.assertEqual(f(b, a), b / a)

    @unittest.expectedFailure
    def test_f_wrong(self):
        self.assertEqual(f(1, 2), 3)

if __name__ == '__main__':
    unittest.main()

Tuesday, February 4, 2014

week4

Since there is no topic assigned this week, maybe I will talk a bit about assignment 1? I am proud to tell you guys that I have finished it.

Thank you to those who are watching my blog. In order to repay your attention, I have decided to write something to make the assignment easier.

If a TA sees this article, do not worry! I will keep it within reasonable limits!

Here we go!

1. Steps (1)(2)(3): the difficulty of these steps is reading GUIController and GUIViewables.

If you are really not confident about tracing the information they contain about TOAHModel, then open Wing and select the parts that may be related to TOAHModel. Wing will automatically highlight the matching occurrences in a different colour for you. I promise it will speed you up.


For GUIController, if you really want to, or are interested in, understanding all of it, reading chapter 16 of your CSC108 book is a good option.

For GUIViewables, I advise you to treat CheeseView simply as the class Cheese in TOAHModel.


2. Step (4): this step is not hard once you finish the steps above.

3. Step (5): if you can handle recursion well, this part should not be a problem for you. But considering that some of you still feel uncomfortable with recursion, I will provide another way of doing it. You should still be able to get the best sequence, as long as you can solve the three_stools type.

Use a "while" loop to try all the possibilities.

Suppose we want to solve an 8-cheese four_stools problem; what are the possible move sequences?

We can move 1 cheese using four_stools, then move 6 cheeses using three_stools, then move 1 cheese using four_stools.

Or we can move 2 cheeses using four_stools, then move 4 cheeses using three_stools, then move 2 cheeses using four_stools.

Or we can move 3 cheeses using four_stools, then move 2 cheeses using three_stools, then move 3 cheeses using four_stools.

Or we can move 4 cheeses using four_stools, then move 0 cheeses using three_stools, then move 4 cheeses using four_stools.

We have 4 options here.

For each option there are another two four_stools sub-problems.

By keeping track of these sub-problems, here is my conclusion:

Every n-cheese four_stools problem is equivalent to solving:
a1, b1, a2, b2, a3, b3, ..., a(n), b(n)
where each a(i) is a 1-cheese four_stools problem,
and each b(i) is a three_stools problem on some number of cheeses (possibly zero).

The total number of cheeses is n.
By exhaustive search, list all the possible ways (notice that the sequence has to be symmetric, and each sub-part is still symmetric!).

The sequence for a 1-cheese four_stools problem is easy, so we are done.

But please notice, this solution is pure brute force! If you have any grasp of recursion, you should not use it. This method is only for those who want the marks but cannot get recursion to work. Last option only!

Here are two functions that might be useful to you, even if you want to use recursion to solve the problem.

def M(n):
    """Smallest number of moves for the four_stools problem with n cheeses."""
    small = (2 ** n) - 1
    if n == 1:
        return small
    else:
        for i in range(1, n):
            if (2 * M(i) + 2 ** (n - i) - 1) < small:
                small = 2 * M(i) + 2 ** (n - i) - 1
    return small


def H(n):
    """Next division number: the i that makes M(n) = M(i) + 2**(n-i) - 1 + M(i)
    as small as possible."""
    for i in range(1, n):
        if M(n) == (2 * M(i) + 2 ** (n - i) - 1):
            return i
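
One caution: M(n) as written recomputes the same sub-answers over and over, so it slows down quickly as n grows. Here is a small sketch of a cached version (my own tweak, not part of the handout):

_cache = {1: 1}   # M(1) = 1

def M_fast(n):
    """Same value as M(n), but remember answers that were already computed."""
    if n not in _cache:
        _cache[n] = min(2 * M_fast(i) + 2 ** (n - i) - 1 for i in range(1, n))
    return _cache[n]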


4. Step (6), animating the program:
    Here is a trick: since you have finished step (5), you can already get the best sequence. It is not necessary to generate the animation while generating the sequence. You can simply generate the sequence first, then use a simple "for" loop to make a model move along the sequence.

In that case, all you need is a "while" loop to check the time.
For instance,

t1 = time.perf_counter()
t2 = time.perf_counter()
while (t2 - t1) < delay_btw_moves:
    t2 = time.perf_counter()

and insert a print of the TOAHModel somewhere inside.

Okay! Thanks for reading.