Final thoughts about the course


I enjoyed this course a lot, mainly because Heap made the lectures very interesting, and I liked how he presented the material in a more philosophical way, even though it is basically a discrete math course. I also had a chance to interact with him one on one; although at first he seemed a little intimidating, I realized that he was a very kind and understanding person.

When it comes to the material, the tricky part is that you think you understand the concepts, up until the point where you have to apply your understanding to solve problems. There are many ways to solve a problem, and just as many ways to solve it incorrectly (or rather, not solve it at all). The trouble is that when you get a problem wrong (a proof, for example), most of the time only someone else can spot the hole in your argument. I believe that we have not learned enough concepts or seen enough proof examples to make any student in this class strong at proving statements that differ from the ones we've seen in class (unless the student had help outside school or has previous experience). So far, the only way we can write proofs is by following the structure given in class, which makes us tend to memorize more than really understand what's going on. I've read on some SLOGs that some people found the material boring at times, which I think is mainly because their knowledge of the material is still very superficial. So is mine, but I am sure that there is more to it, and hopefully I will understand it better when I take upper-year courses (CSC236).


This course also triggered my interest in math in general; I am planning to read a few math books during Christmas break: “How to prove it: A Structured Approach – by Daniel J. Velleman” and “Calculus: An Intuitive and Physical Approach – by Morris Kline” (Can’t wait!).

Also, I realized through this course that studying hard doesn't guarantee you'll do well on the test/assignment/exam; studying smart does. You need to have some idea of what types of questions are going to be on the test and focus your studying on that material specifically, because your knowledge of the material, no matter how strong, won't prevent you from screwing up on a question. Practicing that type of question over and over again will (no student has time to practice every question repeatedly; that would take weeks).

I’ve also been reading a few SLOGs throughout the term, mostly because it’s interesting to see how some people are coping with new material and their point of view on it, sometimes because I know the person and it’s fun to check what they wrote:

http://165choi.blogspot.ca – I read this slog regularly because it's one of the few that is updated regularly, so whenever we learn something new, I know I'll be able to read about it here, re-explained in the author's own words and with some new insight.

http://sams-slog.weebly.com – I like this slog mainly because of how this person explains the difficulties he/she is going through, which usually coincide with mine.

http://pythoncodeshark.wordpress.com – This slog is fun to read because this person sees the material in a totally different way than I do, so it’s always interesting to see what he has to say about it.

Overall, I liked this course a lot and am very proud of all the material I've learned. I hope that CSC236 will be a continuation of this course and help me understand the material in more detail.

Thank you for taking the time to read my slog 🙂

Cheers!

Nanalelfe


Week #11 – The Halting Problem, Computability

Last week, we learned about the halting problem and computability. I found those lectures very interesting. I had never thought of computing from that perspective and believed that we could compute anything.
It turns out there are problems that can't be solved by any algorithm, and the function halt is the proof:

[Screenshot: the halt function]

The problem with this function is that halt must return True only if the input function halts; but if the input function doesn't halt (it enters an infinite loop), then simply running it to find out will never finish, so halt itself would never halt. It follows that there is no general way to check whether a function will enter an infinite loop given input n. Computers simply aren't capable of doing so.
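For what it's worth, Turing's argument doesn't rely on simulating the function at all: it shows that any candidate halt-checker can be handed a program built to contradict it. Here is a rough Python sketch of that idea (the names make_contradiction and candidate_halt are mine, not from lecture):

```python
def make_contradiction(candidate_halt):
    """Given any claimed halt-checker candidate_halt(f, i), build a
    function g such that candidate_halt is wrong about whether g(g) halts."""
    def g(f):
        if candidate_halt(f, f):   # the checker says f(f) halts...
            while True:            # ...so loop forever instead
                pass
        return True                # ...otherwise, halt immediately
    return g

# Whatever candidate_halt predicts about g(g), g does the opposite,
# so no candidate can ever be a correct halt function.
```

For example, a checker that always answers False is immediately refuted: g(g) halts even though the checker claimed it wouldn't.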
I like to think that sometime in the future, someone will find a way to check whether a function enters an infinite loop or not (is the infinite loop the only problem?), but I presume that for this, more intelligent computers would have to be created, and new languages too (sounds like I'm watching too many sci-fi movies… but I don't).
Also, it is exciting that the proof that halt is not computable was discovered by Alan Turing – there is a movie about him coming out soon!


What is a little more confusing is that we can prove that other functions are non-computable by showing that halt reduces to them. For example, this made_up function:

[Screenshot: the made_up function]

This made_up function could work if we restricted it with preconditions, but obviously it cannot work for all functions. Then again, if we didn't consider preconditions for any function, none of the functions I have come across so far would work. For example, the function pow() only works if you give it at least two arguments, but it is computable…
Maybe I'm missing something here…
This is really hard material to grasp completely; I will have to work on it and do more research.

Penny Piles Problem

On the seventh week, Heap handed out a problem-solving worksheet: Penny Piles. The problem went like this:
“”” You are sitting in front of two drawers. The left drawer contains 64 pennies, the right drawer contains nothing. Can you arrange things so that one of the drawers has 48 pennies, using the following two operations:

L
If the left pile has an even number of pennies, transfer half of them to the right pile. If the left pile has an odd number of pennies, operation L is disallowed.
R
If the right pile has an even number of pennies, transfer half of them to the left pile. If the right pile has an odd number of pennies, operation R is disallowed.

What about arranging things so that one of the piles has other numbers in the range [0,64]? Are there any numbers in that range that are impossible to achieve? What about starting with a different number of pennies in the left drawer? “””

The worksheet suggested using Polya's problem-solving techniques. I always thought that each person has their own way of solving problems, and that's the only way for them to get to the solution, so there is no general formula for everyone. What I usually used to do was try to solve a problem first and only understand it after getting stuck, which is sort of backwards :P.
During class, I was a little skeptical about what the problem was asking, because the answer to “Can you arrange things so that one of the drawers has 48 pennies?” was fairly simple. My partner and I just created a diagram using operation L two times:

[Screenshot: table applying operation L twice]

After completing this table, we just sat there and stared at each other, thinking we were done, while the students around us were talking intently, as if they were solving some life-or-death problem. Then it occurred to me that there was probably something more to the problem.
Once class was over, I forgot about this problem for a while, assuming it was the kind of problem beyond my comprehension and abilities.

Some time later, while browsing other SLOGs, I realized that many students were trying to solve this problem, and I liked how they tried to come up with something. After all, it's the intention that counts. To be honest, though, I did not understand most of the results and couldn't see how they related to the problem. I guess there are different ways to interpret the problem and to tackle it in the first place.

First, the question that looked interesting to me was "Are there any numbers in that range that are impossible to achieve?". The first thought that came to my mind was that only even numbers can be achieved. After a few scribbles of left- and right-drawer diagrams and a few combinations, I came up with this:

[Screenshot: table of achievable pile combinations]

Not only did this show that we can reach combinations with odd numbers, but it also reminded me of the question "What about starting with a different number of pennies in the left drawer?". I think you can definitely start with a different number of pennies, as long as the number in at least one of the drawers is even; if both numbers are odd, you cannot use either operation, so you're stuck.
After playing around with diagrams and combinations some more, I came up with this tree of all the possible combinations:

[Diagram: tree of all possible pile combinations]

I stared at this diagram for a while, trying to think of what it could possibly mean for our problem. Well, it means that at every step there are only two possibilities, to execute either operation R or operation L, and that once you reach a combination of two odd numbers, you cannot do anything. After staring at it some more, I got tired and decided to try something else: despite my (very) limited abilities in coding, I decided to write a function that returns True if some number is possible to achieve with operations R and L, given the numbers in the right and left drawers:

[Screenshot: my attempted Python function]

I'm guessing that a correct implementation of this exists, but it's obviously beyond my knowledge at this point. Oh well, this was just for fun; I like coding a lot (no, I don't have to be good at it to like it).
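Here is one way such a function might be implemented (my own sketch, not the "official" solution): a breadth-first search over every (left, right) state reachable via operations L and R.

```python
from collections import deque

def achievable(left, right, target):
    """Return True if some state reachable from (left, right) via
    operations L and R has a pile equal to target."""
    seen = set()
    queue = deque([(left, right)])
    while queue:
        l, r = queue.popleft()
        if (l, r) in seen:
            continue
        seen.add((l, r))
        if l == target or r == target:
            return True
        if l > 0 and l % 2 == 0:                  # operation L allowed
            queue.append((l // 2, r + l // 2))
        if r > 0 and r % 2 == 0:                  # operation R allowed
            queue.append((l + r // 2, r // 2))
    return False
```

Running achievable(64, 0, t) for every t from 0 to 64 returns True, while from a start like (6, 0) the piles dead-end at (3, 3), so 5 is unreachable there.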

After playing around with combinations some more and doing a Google search on the number 64, it occurred to me that it can be written as 2 to the power of 6.
In fact, all of the numbers in the table I came up with earlier can be written as sums of powers of 2 from 2^0 to 2^5 (except 0). So the earlier table:

[Screenshot: the same table as earlier]

Can be rewritten like this:

[Screenshot: the table rewritten as sums of powers of two]

Or in binary:

[Screenshot: the table in binary]

Therefore, since 64 = 2^6, any number in the range [0, 63] can be written as a sum of distinct powers 2^n with 0 <= n <= 5.
And every number can be written in binary:

[Image: numbers written in binary]

I may be wrong, but my intuition is that any number in the range [0, 64] can be achieved in at least one of the drawers, assuming you execute the right combination of operations, since I believe that as long as a number can be written as a sum of powers of two, it can appear in one of the drawers. And that's any number. I'm not sure about my conclusion, but I feel like I'm on to something.
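The sum-of-powers-of-two idea is easy to play with in code. This helper (my own, purely for illustration) splits a number into the distinct powers 2^0 through 2^5 that make it up, i.e. the bits of its 6-bit binary form:

```python
def powers_of_two(m):
    """Decompose 0 <= m < 64 into the distinct powers of two summing to it."""
    return [2**i for i in range(6) if (m >> i) & 1]

# e.g. 48 = 16 + 32, exactly its binary digits 110000
```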

Weeks #9 and #10 – Big-Oh of polynomials, Big-Omega, Big-Theta and limits

Week #9 is the week Assignment #2 was due. Once again, two epsilon-delta questions showed up in it, as if the people who wrote the assignment desperately wanted me to fail. Well, at least that's what I thought at first. As I went through infinitely many ideas on how to prove (or disprove) epsilon-delta statements, I was mastering my ability to write epsilon-delta proofs! And completing the assignment as a whole made me more confident in my ability to write proofs. A day after the assignment was due came Term Test #2, which was almost the same as Assignment #2, although slightly easier. At the same time, in lectures, my understanding of Big-Oh was strengthening. I enjoy proving Big-Oh statements 🙂 (maybe because they're easier?). Basically, when a function f is in Big-Oh of another function g, it means that beyond some breakpoint B, f(n) is at most g(n) multiplied by a constant c, meaning that f grows no faster than g, or equivalently that (a constant multiple of) g is an upper bound of f.

(Assume g(n) = n^2 for this image)

[Screenshot: graph of f bounded above by c*g beyond B]

In order to prove that a function f is upper-bounded by another function g, i.e. f ∈ O(g), we need to prove the statement:
(Once again, assume g(n) = n^2 for this image)

[Screenshot: the Big-Oh statement]

Here is another example of proving that a function is in the Big-Oh of another function:
[Screenshot: example Big-Oh proof]

For this example, assume that f(n) = 7n^6 – 5n^4 + 2n^3 and g(n) = 6n^8 – 4n^5 – n^2. To prove that f ∈ O(g), you only get to pick a constant c and a breakpoint B, such that beyond B, c*g(n) is an upper bound of f(n). This is why choosing a c and B that "work" is very important; otherwise the proof doesn't go through. A general tip for these proofs is to overestimate f(n) and underestimate g(n), then choose a c large enough to make c*g(n) an upper bound of f(n). In this example, we come up with an overestimate of f(n) and a c that lets us continue the chain by introducing an underestimate of g(n).
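To make this concrete, here is a quick numeric sanity check with constants of my own choosing (c = 9, B = 1; the values used in lecture may well have been different):

```python
def f(n):
    return 7 * n**6 - 5 * n**4 + 2 * n**3

def g(n):
    return 6 * n**8 - 4 * n**5 - n**2

# My own choice of constants: overestimating f(n) by 9n^6 and
# underestimating g(n) by n^8 gives f(n) <= 9 * g(n) for all n >= 1.
c, B = 9, 1
assert all(f(n) <= c * g(n) for n in range(B, 1000))
```

Of course a finite check is not a proof; the chain of over- and underestimates is what actually proves it for all n beyond B.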

Disproving statements about Big-Oh turned out to be similar; however, in order to do that, we have to negate the statement f ∈ O(g) so that it becomes f ∉ O(g):
[Screenshot: the negated Big-Oh statement]

In this case, we have to choose the right n so that we can prove everything that follows in the statement, since it's the only value we are allowed to pick. Also, n is introduced after c and B, so n can depend on c and/or B, as long as it meets the "n ∈ Ν" requirement.

So far we had only seen Big-Oh of polynomials, where it is pretty easy to figure out whether one function is in Big-Oh of another simply by looking at the highest degrees of the two functions. But for non-polynomials, it turns out we have to use limits…
[Screenshot: graphs of 2^n and n^2]

In this example, let's assume f(n) = 2^n and g(n) = n^2. The graph clearly shows that f(n) grows much faster than g(n); therefore 2^n/n^2 approaches infinity as n approaches infinity. On the other hand, if for two other functions f(n)/g(n) approached 0 as n approaches infinity, then we would know that f(n) grows slower than g(n). To prove that, say, f(n)/g(n) approaches infinity, we can use L'Hopital's Rule: we differentiate f(n) and g(n) repeatedly until it's obvious that their ratio approaches infinity. Once this is shown, we can use the following statement to help with the general proof:
[Screenshot: the limit written in "statement" form]
This is the definition of the limit in "statement" form, applicable only if f(n)/g(n) approaches infinity (if it approaches 0, the last part of the statement should be f(n)/g(n) < c, since for any number c there is a breakpoint n' beyond which the ratio is smaller than c).
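A quick numeric check (reassurance, not a proof) confirms that 2^n/n^2 keeps growing and eventually stays above any constant c you pick:

```python
# Sanity check for f(n) = 2^n and g(n) = n^2
ratios = [2**n / n**2 for n in range(1, 60)]

# the ratio is strictly increasing from n = 3 onward
assert all(later > earlier for earlier, later in zip(ratios[2:], ratios[3:]))

# and for any fixed c there is a breakpoint n' beyond which it exceeds c
c = 1_000_000
assert any(2**n / n**2 > c for n in range(1, 60))
```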

[Screenshot: proof that 2^n ∉ O(n^2)]

Since we are trying to prove the statement 2^n ∉ O(n^2) (that is, disprove 2^n ∈ O(n^2)), we only get to choose n, so c and B are introduced at the beginning. (If we were proving a Big-Oh statement instead, we would want to introduce B after proving the limit, making B depend on n' to make the statement easier to prove.)
I was really confused about L’Hopital’s Rule at first because I am not taking Calculus at the moment, but thankfully there are websites such as Khan Academy for lost students like me. I also had to review derivatives in order to use L’Hopital’s Rule correctly. However, I am glad that this course doesn’t ask for any advanced math skills.

On the tenth week, we moved on to proving statements with Big-Omega, which are pretty similar to Big-Oh statements. In fact, they're "opposites" in a way: f ∈ O(g) can also be written g ∈ Ω(f), and a function g being in Big-Omega of another function f means that f is a lower bound of g, i.e. g grows at least as fast as f. Big-Omega in "statement" form is almost the same as Big-Oh, except for the last inequality:
[Screenshot: the Big-Omega statement]
Also, to prove this statement, we have to underestimate g(n) and come up with a c that lets us introduce an overestimated version of f(n) to move on with the proof (the opposite of proving Big-Oh).

We also learned about proving general statements that combine Big-Oh, Big-Omega and Big-Theta. I found those general proofs as interesting as the specific ones, and the general concept is the same.
For example, this statement:
[Screenshot: the transitivity statement]
We need to really understand what the statement means before figuring out whether it is true or false.
[Screenshot: the statement unpacked]
The statement means that if a function f is upper-bounded by a function g, and that g is in turn upper-bounded by a function h, then f is also upper-bounded by h. Intuitively the statement looks true, because this is just transitivity: if f(n) <= c1*g(n) beyond B1 and g(n) <= c2*h(n) beyond B2, then f(n) <= (c1*c2)*h(n) beyond max(B1, B2). In order to prove this statement, all we have to do is conclude this statement:
[Screenshot: the conclusion of the proof]

I am pretty excited about this material because I feel as if this is the only material I understand fully.

Weeks #7 and #8 – Sorting algorithms, complexity, Big-Oh, counting steps…


On week 7, we finished with proofs and started learning about algorithms. We first reviewed logical notation to make sure we negate statements correctly in order to disprove them correctly. Then Heap asked all of us to take a sheet of paper, prove a statement, and then disprove a similar one. Since I hadn't had enough practice with disproving statements, the disproving part was pretty challenging. However, after disproving it, I enjoyed the feeling of satisfaction I got from hearing that my proof was about right. After this, Heap presented a summary of allowed inferences, which help us make conclusions in certain situations while proving statements. It reminded me of the first chapters we learned at the beginning of the semester. Here is a list of the ones that seem helpful and not too obvious (maybe put them on my exam cheat sheet?):

– Disjunction elimination: If you know A ∨ B, the additional information ¬A allows you to conclude B.
– Implication elimination: If you know A ⇒ B, the additional information A allows you to conclude B. On the other hand, the additional information ¬B allows you to conclude ¬A.
– Conjunction introduction: If you know A and you know B, then you can conclude A ∧ B.
– Disjunction introduction: If you know A, you can conclude A ∨ B.
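Since these rules are just facts about booleans, a brute-force truth-table sweep in Python can double-check them (my own sanity check, not course code):

```python
from itertools import product

def rules_hold():
    """Check each inference rule above on every truth-value assignment."""
    for A, B in product([True, False], repeat=2):
        implies = (not A) or B             # encode A => B as a boolean
        if (A or B) and not A and not B:
            return False                   # disjunction elimination failed
        if implies and not B and A:
            return False                   # implication elimination failed
        if A and B and not (A and B):
            return False                   # conjunction introduction failed
        if A and not (A or B):
            return False                   # disjunction introduction failed
    return True

# rules_hold() returns True: every rule checks out on all four assignments.
```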

Then, after the summary of allowed inferences, Heap presented sorting strategies, which was interesting. I liked how he connected "sorting" with playing cards, which gave me a better idea of what sorting was in the first place. The claim that all quadratic functions are the "same" startled me at first, because it was given out of the blue, without context. I understood it a little later, when Heap moved on to examples with Big-Oh (they are the "same" because, basically, we are overestimating and constant factors don't matter). At the end of the week, we worked on the Penny Piles problem, which I will get back to later.

On the eighth week, we continued learning about counting costs: a comparison of algorithms' "speed" that ignores hardware and other incidental factors. The way to do this is to count how many steps an algorithm takes to execute in the worst case. This makes sense, because counting steps in the best case doesn't really say anything about the algorithm's performance, since the algorithm might slow down as the input grows. Average case makes sense as well: what if a function performs well in the best and worst cases but not so well on medium-sized input, i.e. its performance graph looks like an upside-down parabola (if those even exist?). After a quick introduction, we analyzed the time complexity of a function LS. Then Heap showed us the code of this function and we counted how many steps it would take in the worst-case scenario. I understood how to count steps in general; however, I feel like I would not be able to count the steps of a more complicated function. I wish we could go through many more examples in class, including hard ones, so that we don't get confused by hard examples on the exam. I also wish CSC108 and CSC165 collaborated and presented the same material at the same time, so that students get exposed to it in both a programming-oriented and a logic-oriented way. That way, a student who understands one presentation can use it to understand the other. I was a little confused about the material before it was introduced in CSC108, and unfortunately that was after the tutorial in which we were tested on it.
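Assuming LS was a linear search (my best guess; the lecture code may have differed), the step-counting exercise looks roughly like this reconstruction:

```python
def linear_search_steps(A, x):
    """Search A for x, returning (index or -1, loop iterations used)."""
    steps = 0
    for i in range(len(A)):
        steps += 1              # one step per loop iteration
        if A[i] == x:
            return i, steps
    return -1, steps

# Worst case: x is not in A at all, so the loop runs len(A) times,
# which is why the worst-case cost grows linearly with the input size.
```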

Week #6 – Cases, multiple quantifiers, limits…and epsilon/delta :(


This week, we continued to learn about proofs, more specifically proofs by cases, proofs with multiple quantifiers, and proofs of limits using epsilon and delta. Right from the start, "non-boolean functions" popped up, and it took me a while to figure out what that was (Wikipedia was helpful!). Then we moved on to proofs by cases, which were pretty straightforward. The material that scares me the most is epsilon-delta proofs. They were the reason I dropped out of MAT137, and they came back to haunt me in this course! Well, now I feel more confident about them, but at the time, thinking with epsilon and delta frightened me. I couldn't think of them as simple letters that represent numbers in the set of positive real numbers. Also, most of my mistakes in this kind of proof were from ignoring the order of the quantified variables, so I used to make some variables depend on something that hadn't been introduced yet (thankfully Heap mentioned this a few weeks later). Hence, epsilon-delta proofs were the biggest challenge that week (actually, of this course as a whole). I did not feel very confident about the material at the time, because it was the week after the test, so I hadn't read the course notes. Fortunately, I realized this material can be mastered with a lot of practice, which I did later on, and I consider it an achievement, because writing proofs seems to require at least some level of mathematical creativity, which I had always believed I lacked.

Weeks #4 and #5 – Mixed quantifiers, Proofs


On week 4, I was behind in all my other courses because I was focused on Assignment 1. I underestimated it big time. Right from the first question, the word 'unless' popped up. The statement was "All acronyms are catheterized unless they are bifurcated", and we had to negate it. I went over my lecture notes, which said "unless = if not". So here is the English translation I came up with: "All acronyms are catheterized if they are not bifurcated", and from that statement the symbolic form ∀a ∈ A, C(a) ⇒ ¬B(a) (which is obviously wrong). Happy that I had come up with a solution, I continued on to the next question. I feel bad for my partner: I convinced him that my answer was right and his was wrong, when in fact his was right. I need to accept that I can't always be right and take other answers into consideration, no matter how strongly I believe in mine.

The fifth week was the week we had our first term test. I studied for it the weekend before, right after submitting the assignment. As I went over tests from previous years, I stumbled upon tests made by another professor, and I found them much less challenging than ours. I don't mind being challenged; however, the more you get challenged, the lower your grades are (unless you're good at challenges, but then if you do well, is it still a challenge…?). Anyway, I worked on the tutorial exercises, read over the lecture notes, and did a few past tests. I did so many of them over the weekend that I just couldn't stand doing them again the day before the test. We were allowed to bring cheat sheets, but I couldn't figure out what I could possibly write on one; it all depends on the question. I presume it's one of Heap's jokes on us: it just showed that there is no way to actually cheat if you don't know the material. After the test: deep depression. I was certain I had failed it.