Well, we're going to have to revisit one of our old SLOG posts and reflect on it, so we might as well dive in.
One of the most common concepts we encountered throughout CSC148 was recursion - that is, when a function calls itself to reduce its input toward the smallest base case, and works from there. Initially, I felt that recursion was overemphasized, and less significant than the traditional iterative style of programming. The factorial question illustrates this; we could simply iterate from 1 to n, multiplying each value into the total to get our factorial. This was effective, correct and fast. The recursive version, by contrast, was a bit confusing, and its advantage wasn't obvious. So why did we even bother with recursion?
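To illustrate what I mean, here's a quick sketch of both versions of factorial (my own illustration, not code from the course):

```python
def factorial_iterative(n):
    # Multiply 1 * 2 * ... * n, accumulating the total.
    total = 1
    for i in range(1, n + 1):
        total *= i
    return total


def factorial_recursive(n):
    # Base case: 0! is 1. Recursive case: n! = n * (n - 1)!.
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)
```

Both give the same answer, and the iterative one is arguably easier to follow at first glance.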
It turns out that recursion is a useful tool in the wide world of data structures, making operations natural that iteration makes awkward. For a challenge, try implementing binary search tree operations iteratively; it's frustrating. Recursion offers a style of problem solving that shrinks the problem with each step. On a binary search tree, for example, searching for a value means traversing down one subtree at a time until we hit the simplest case possible - either the value is not found, or the value is reached. Ultimately, the point of recursion is simple: break a problem down into simpler versions of itself.
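Here's a sketch of that search idea (the BSTNode class here is my own hypothetical, not the one from class):

```python
class BSTNode:
    """A binary search tree node with a value and two optional children."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right


def contains(node, value):
    # Base cases: we ran off the tree (not found) or we found the value.
    if node is None:
        return False
    if node.value == value:
        return True
    # Recursive case: only search the one subtree that could hold the value.
    if value < node.value:
        return contains(node.left, value)
    return contains(node.right, value)
```

Each call hands off a strictly smaller subtree, which is exactly the "reduce the problem" idea above.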
Furthermore, I would also add that recursive functions are, in a sense, 'cleaner' and a tad more 'readable' than iterative functions. Proof of this can be seen in 'for' loops. In a for loop, we iterate through n values and do things with them; when we pile multiple statements inside, the loop gets messy and difficult to read. I know because I have had some troublesome experiences reading other people's 'for' loops, especially when there were many statements packed into them. Recursion, on the other hand, is simpler: just a base case and a recursive case, and that's all there is to it. Granted, recursion can be a bother to trace on some occasions, but compared to iterative functions, it is at least a little nicer to read when implemented correctly.
In conclusion, recursion > iterative loops, because recursive solutions are cleaner, support applications that iterative functions handle awkwardly, and are much easier to read through than large iterative blocks.
CSC148 SLOG
Wednesday, 25 March 2015
Tuesday, 17 March 2015
A Hectic Week (Week 9)
Whew! Week 8 was one heck of a week! Between two assignments (one for CSC165 and one for CSC148) and two term tests for those same courses, I can't believe I managed to survive this week.
To begin with, after weeks of procrastination (ouch!) and a rather frustrating date with the minimax program, my partner and I took it upon ourselves to visit Danny Heap's office hours to figure out this minimax problem. We got one thing out of it for sure: our code wasn't doing what the assignment asked, and Danny showed us an interesting way to think about minimax. Unfortunately, after all that, my minimax solution did not work out as expected (it worked for Subtract Square but was a complete dork at Tippy), but it was an eye-opener for me to the world of AI design. This was a rather interesting assignment, as it was my first real encounter with implementing a "smart" AI for a game. I felt that this kind of AI implementation would be better suited to a later, upper-year course (in my opinion), but on the other hand, it was an interesting first look at artificial intelligence. After all, as my peers, parents and the world out there have pointed out, AI is going to be a big thing in the years ahead, so you could say this was a painful but rewarding experience for my later years as a CS student at UofT.
Next came the test. This test was quite a worrying factor for me. I didn't do well on the first test - to be honest, it was easy, but I choked Toronto Maple Leafs-style on one of the questions and lost major marks - so this time I was determined to do well. I went over the labs we did in class for study purposes and taught myself how to implement linked lists in code - which, for me, was difficult due to the ongoing strike conditions.
One of the things that was difficult for me to comprehend about linked lists was how to represent them. In class, we were shown pictorial representations of linked lists, which described what linked lists were meant to imitate. However, when I went to work through some problems on my own, I found the diagrams difficult to use. It wasn't until I went to Danny Heap's office hours (with a friend to help) that I realized it was easier for me to represent linked lists in their code form. So rather than this:
25 -> 69 -> 10 -> X
I found that this was easier for me to work with, and trace through:
LLNode(1, LLNode(2, LLNode(3)))
The latter was a more convenient and organized way for me to think about linked lists.
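For the curious, a minimal class that supports that nested notation might look like this (this is my own sketch; the actual LLNode from the course may differ):

```python
class LLNode:
    """A linked list node holding a value and a reference to the next node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

    def __repr__(self):
        # Print the list in the nested LLNode(...) form shown above.
        if self.nxt is None:
            return 'LLNode({})'.format(self.value)
        return 'LLNode({}, {})'.format(self.value, repr(self.nxt))
```

Writing the list this way makes every link explicit, which is exactly what made tracing easier for me.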
Luckily, when test day came, I found the test easier than expected. Past tests had been difficult to comprehend, and trying to work out the answers on my own was a disaster. I did get some reassurance from Danny Heap, who said the test would be based on lab questions - which it was - and that was a great relief. Ultimately, thank you Danny Heap for your help and your determination to help us succeed!
Here's to a good mark on my test!
+ Slava Gospodu! Praise the Lord! +
Saturday, 7 March 2015
Data Structures (Week 8)
During week 8 we spent a lot of time trying to finish assignment 2, where we had to implement another game state for Tippy - a tic-tac-toe variation where the object of the game is to form a sequence of X's or O's in the shape of the Z or S tetromino from Tetris. While the game state itself was not difficult to implement, the minimax strategy - designed so that the AI would always pick the best possible move - was a pain to build. I will admit, though, that even if the explanation of the design approach was fairly simple, writing it in Python was as difficult as it gets. I spent many hours working on this, including time at Danny's office hours - in the end, my minimax was strong at Subtract Square but a complete dork at Tippy. Hopefully my effort won't go to waste.
In previous weeks, we were introduced to several data structures - trees, binary trees and binary search trees - which are basically variations of each other. A tree is a data structure in which data is represented as nodes - individual objects - each of which points to other objects of the same kind, known as its children. Nodes without children are called leaves, and a node with children is the root of the subtrees formed by those children. Ultimately, the design of a tree lends itself to recursion for travelling down through it. Binary trees follow the same idea as the generic tree structure, but instead of allowing a node to branch to n other nodes, each node can have at most two children. Binary search trees derive from binary trees, but add an ordering property: everything in a node's left subtree holds values smaller than the node's own value, and everything in its right subtree holds values larger!
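That ordering property is easiest to see in an insert function. Here's a sketch (my own illustration with a hypothetical BTNode class, not the course's implementation):

```python
class BTNode:
    """A binary tree node: a value plus left and right children."""
    def __init__(self, value):
        self.value = value
        self.left = None   # subtree of smaller values
        self.right = None  # subtree of larger (or equal) values


def insert(node, value):
    # Base case: an empty spot becomes a new leaf.
    if node is None:
        return BTNode(value)
    # Recursive case: smaller values go left, larger go right.
    if value < node.value:
        node.left = insert(node.left, value)
    else:
        node.right = insert(node.right, value)
    return node
```

Every insertion preserves the smaller-left / larger-right rule, which is what makes searching a BST fast.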
Then we moved on to linked lists. One could consider a linked list a variation of a list, but like the tree, each value is represented as its own object. A node in a linked list contains two pieces of information: the value it holds, and the next node it points to. Intuitively, this was not difficult to visualize. Being bogged down with assignment 2 set me back a step in understanding linked lists, so I spent an entire Thursday focusing on implementing linked lists and their functions.
We are going to have a midterm on data structures soon, and I am almost at my breaking point. On top of a CSC148 test on Wednesday, I have a CSC165 test on Tuesday (which mainly consists of practicing proofs and whatnot), and a religion essay due on March 12, which I haven't started revising. Yes, this week is going to be a bummer.
Thursday, 26 February 2015
Recursion (Week 7) *NOTE: This is to be marked
Basically, recursion is a programming strategy in which a particular problem is broken down into smaller sub-problems until the simplest possible case is reached. There are three necessary components for a recursive solution to work:
1) A base case. This is the simplest case possible to encounter in a recursive problem.
2) A recursive case. This is where the function calls itself.
3) Progress toward the base case. Each recursive call must reduce the size of the problem, getting closer to the base case each time.
In programming, recursion can be achieved by having a function repeatedly call itself on an input - provided that each call reduces the size of the input itself. For example, say one wanted to calculate the product of two numbers, x=2 and y=3, without a multiplication sign. To approach the problem, we recall that xy is the same as adding the value x to itself y times. In other words, 2 x 3 is the same as 2 + 2 + 2.
Now, we write the recursive code (using the logic from the first paragraph) by analyzing the three points of recursion. In this case, the base case is when y = 1; in other words, we return x. When y is greater than 1, we have the recursive case: we add x to the result of multiplying x by y - 1. The final code snippet is:
def multiply(x, y):
    if y == 1:
        # Base case: x times 1 is just x.
        return x
    else:
        # Recursive case: x * y == x + x * (y - 1).
        return x + multiply(x, y - 1)
The result is that, as each call goes deeper, x is added to the running result until y reaches 1; by that point we have added x together y times, and that is the final answer. (Note that this version assumes y is a positive integer, since y = 1 is the only base case.)
Recursion is an important tool, sometimes even easier to apply than the traditional iterative method, especially when we look at data structures such as binary trees, where the most natural way to traverse (travel down) a tree is by recursive calls on each node.
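As a taste of that, here's an in-order traversal of a binary tree in a few lines (my own sketch, with a hypothetical Node class):

```python
class Node:
    """A binary tree node with a value and two optional children."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right


def inorder(node):
    # Base case: an empty tree contributes nothing.
    if node is None:
        return []
    # Recursive case: left subtree, then this node, then right subtree.
    return inorder(node.left) + [node.value] + inorder(node.right)
```

On a binary search tree, this visits the values in sorted order - a result that would take noticeably more bookkeeping to achieve with an explicit loop and stack.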
Ah, I have a lot of memories about recursion. In high school, recursion was the one topic that scared the heck out of my entire computer science class (save for a few people, Kashish and Ipsita) because they simply could not understand the process behind it. When we were first introduced to it, my teacher threw us straight into formulating a recursive problem - which was not helpful, as he, despite being a great teacher, didn't really explain why the recursive process worked. By the time the recursion assignment rolled in, we were stumped (including me), and some people took the desperate measure of plagiarizing their work wholesale. Recursion was a problem.
Enter university, and the world of CSC148, where I was told that half of my code would have to be recursive. I had some fear of repeating the same frustration I had in high school - until I met Danny Heap. After a lecture on tracing recursion, it dawned on me that the key to understanding recursion was code tracing: to understand how recursion works in code, we need to work through the code ourselves as if we were the computer running it. It makes sense; if we step into a recursive problem without understanding how a computer handles recursion, we end up in a deeper, more frustrating hole than we started in.
During reading week, in preparation for assignment 2, I took it upon myself to do a couple of traditional recursion problems on my own, such as factorial, palindrome, and other math problems. For each problem, I would ask myself: "What is the base case?" "How does recursion act on each element of the problem?" "How do we reach the base case?" Before evaluating a recursive call, I would trace the problem by hand to see whether the desired result would be achieved. Learning to trace recursion problems was the gold in the mine that helped me comprehend recursive solutions in class, in labs, and in my personal coding endeavors.
Alas, thank you, Danny Heap, for making me a witness to the glories of writing recursive code.
Saturday, 21 February 2015
Object-Oriented Programming (Week 5)
Object-oriented programming is, in my opinion, one of the most fundamental concepts to grasp in the field of computer programming. As beginners, our only encounter with designing programs was taking input, processing the data and producing output - all in one 'main' file. If we were to continue down this path, large programs would become unnecessarily difficult and rather disorganized as the logic grew more complicated. The question is: how does object-oriented programming help simplify large projects?
First, we should take a look at object-oriented programming itself. At its core is the design of classes - blueprints for objects. When we design a class, we should take into account what we are trying to emulate, what data we can associate with it, and what such an implementation can be used for. When we create an object in code, we are merely creating an instance of a class. For example, we can have a class named Human that contains data about age, weight, name and height, and methods to emulate the person's speaking, walking, and interaction with other humans. When we create an instance of Human, a Human object springs to life with its own set of data and behaviors.
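A bare-bones version of that Human class could look like this (the attribute and method names are just my illustration):

```python
class Human:
    """A blueprint for a person: some data plus some behaviour."""
    def __init__(self, name, age, height, weight):
        # Attributes: the data associated with each individual Human.
        self.name = name
        self.age = age
        self.height = height
        self.weight = weight

    def speak(self, words):
        # A method: behaviour that every Human instance can perform.
        return '{} says: {}'.format(self.name, words)
```

Every instance created from this class carries its own name, age, height and weight, which is the "springs to life with its own set of data" part.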
Another tool in object-oriented programming is inheritance. Say you have an Animal class that has a method speak(). But what if you wanted an Animal that contains data specific to one kind of animal? Sure, you could copy the Animal code and rename it, but that would be tedious, no? This is where inheritance comes in: rather than doing the above, you can have another class take in the data and methods of an existing class, then modify or add features to make them specific to one group only.
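Sticking with the Animal example, a sketch of inheritance might look like this (Dog and its behaviour are my own hypothetical additions):

```python
class Animal:
    """A generic animal with a name and a default way of speaking."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        return '{} makes a sound'.format(self.name)


class Dog(Animal):
    # Dog inherits __init__ from Animal and only overrides speak.
    def speak(self):
        return '{} says: Woof!'.format(self.name)
```

Dog gets all of Animal's data and methods for free, and changes only what is specific to dogs - no copy-pasting required.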
Ultimately, how does object-oriented programming play out in designing large, complex programs? This is how I saw it. When I was a beginner, I struggled with object-oriented programming and told myself, "If I want to design a large project, I can just use the main interface and nothing else." While some early projects were successful (I made a Deal or No Deal game without the use of classes), the end result was confusing and all over the place.
However, I later encountered a beginner's Java book which helped me realize that object-oriented programming is much like designing the necessary components of a particular something (be it a car, slot machine or animal) and personalizing it with actions and values that pertain to it. It was rather helpful, it solidified my overall understanding of OOP, and I never had much difficulty with the concepts again. Once I got comfortable with object-oriented programming, I decided to improve my Deal or No Deal game by representing each aspect of the game as its own object and putting those together into a functional whole. The end result: the code was more organized, more readable and more professional than the cluttered version I had before.
Thus, to answer my own question, object-oriented programming is useful for designing large projects because it allows you to create objects for each individual aspect of the program and combine them into an organized, functional solution. Without object-oriented programming, sure, we can make large projects in one shot, but the result would be tedious, cluttered and difficult to fix. Can you imagine trying to make a banking system without any objects? I wouldn't; in the end I wouldn't even be able to understand my own code, let alone anyone else's, because it would be all over the place.
Monday, 16 February 2015
Tracing Recursion (Week 4)
In high school, I remember the way we were introduced to recursion was through a comedic dictionary definition of recursion: "Recursion: See Recursion." XD
But now, in university, Danny Heap introduced us to recursion by starting with the idea behind it - solving a problem by breaking it into smaller pieces and evaluating from there. It seemed a bit confusing at first, which is why he showed us how recursion worked by tracing through the steps involved.
Tracing recursion was not hard. In fact, the way it was shown to us was ten times easier than how it was done in high school. I remember that in high school, none of my friends could understand the logic behind recursion, let alone 90% of my other classmates, because, despite my compsci teacher being great, we were just thrown into implementing recursion without any background on how or why it worked. The way Danny Heap introduced recursive methods, however, was clear, concise and, on top of all that, possible to grasp on your own. I had thought of recursion as a burden in the past, but after Danny Heap's examples and ways of walking through it, I had never felt so excited about learning recursion.
Granted, the recursion examples were simplistic - we mostly used recursion for simple tasks such as getting the length of a list (nested lists included), or the sum of all elements in a list - yet the way they were explained was genuinely effective. From what I understand, as we iterate through a series of elements, every time the recursive case is met, the function calls itself on that element; this repeats until the base case is reached, after which the calls resolve one by one, producing our final result. It was amazing, and I can confidently say that understanding the basics of recursion was one of the highlights of my week.
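The nested-list sum is a nice example of that pattern. A sketch of how it might go (my own version, not the lab's):

```python
def nested_sum(items):
    # Walk the list; numbers are the base case, nested lists the recursive one.
    total = 0
    for item in items:
        if isinstance(item, list):
            # Recursive case: a sublist contributes the sum of its contents.
            total += nested_sum(item)
        else:
            # Base case: a plain number contributes itself.
            total += item
    return total
```

However deeply the lists are nested, each recursive call handles one level, and the results settle back up into the final total.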
During our labs, we pretty much did the same thing again. The exercises were not that difficult, and neither was the quiz; compared to the rest of the class I was able to finish my work early (after all, the work was not much of a challenge). Maybe, after all this, recursion won't be so bad after all, right? Only time will tell.
Thoughts On The First Few Weeks (Week 3)
The first few weeks of CSC148 went by in a blur, and quite frankly they weren't all that difficult to understand. Our main topic was object-oriented programming - mainly the basic idea behind OOP, how to implement it, and its applications.
Object-oriented programming was, as mentioned, not too difficult to pick up and play with - probably because I had previous experience with the topic. My first 'real' encounter with object-oriented programming was in Grade 10, when I taught myself Java, which is, in all respects, an object-oriented language. After a few lessons, the concept was no big deal and I could immediately understand the purpose of it all (whereas initially, using Python in Grade 9, I could not grasp the concept).
We started off by understanding what an object is in computer programming: simply put, a virtual instance of a real-world object. We covered ways to design one - such as reading a description and manually picking out the 'verbs' (which translate to methods) and the 'data' (which translate to the object's attributes) found in the description. For example, if we were given the description of a car below:
"A car can keep track of the amount of gas remaining, its total mileage, and number of passengers; furthermore we can use a car to drive n miles, add/remove passengers, or get a new paint job."
We could immediately say that the Car object would include data values for amount of gas remaining, total mileage, and passengers; its methods would be to drive, add/remove people and change color.
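Translating that description into code might look something like this (a sketch; the attribute names and the one-unit-of-gas-per-mile assumption are mine):

```python
class Car:
    """A car built from the 'nouns' and 'verbs' in the description above."""
    def __init__(self, gas, mileage, colour):
        self.gas = gas            # amount of gas remaining
        self.mileage = mileage    # total mileage
        self.colour = colour
        self.passengers = []      # current passengers

    def drive(self, miles):
        # Driving n miles adds to the mileage and uses up gas
        # (assuming, for simplicity, one unit of gas per mile).
        self.mileage += miles
        self.gas -= miles

    def add_passenger(self, name):
        self.passengers.append(name)

    def remove_passenger(self, name):
        self.passengers.remove(name)

    def paint(self, colour):
        # The 'new paint job' from the description.
        self.colour = colour
```

Each noun in the description became an attribute, and each verb became a method - which is exactly the reading-the-description technique covered in class.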
When it came to implementing the actual Python code, it was not too difficult; for the most part the code consisted of writing methods, variables and logical statements - things we should already be familiar with from previous programming experience or CSC108. All in all, the first few weeks of CSC148 were simple enough to comprehend on one's own, or with the lecturers/TAs in class.