Week 9 & 10
In weeks 9 and 10 we learned about Big-Oh, Big-Omega, and Big-Theta.
Let's say we have two functions, f(n) and g(n). The Big-3 essentially describe the relationship between the growth rates of these functions as n approaches infinity.
In layman's terms:
1) f(n) is in Big-Oh of g(n) means that past some breakpoint, f(n) is always less than or equal to some constant times g(n)
2) f(n) is in Big-Omega of g(n) means that past some breakpoint, f(n) is always greater than or equal to some constant times g(n)
3) f(n) is in Big-Theta of g(n) means that past some breakpoint, f(n) is always sandwiched between some constant times g(n) and some other (larger) constant times g(n); in other words, f(n) is in both Big-Oh and Big-Omega of g(n)
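As a quick numerical sanity check (not a proof), here is a small Python sketch using functions and constants of my own choosing: f(n) = 3n² + 10n, g(n) = n², constant c = 4, and breakpoint B = 10. It simply verifies that f(n) stays at or below c·g(n) for a range of values past the breakpoint.

```python
# Illustrative sanity check of the Big-Oh definition, not a proof.
# f, g, c, and B are my own example choices.

def f(n):
    return 3 * n ** 2 + 10 * n   # example "cost" function

def g(n):
    return n ** 2                # candidate upper-bound function

c = 4    # chosen constant multiplier
B = 10   # chosen breakpoint

# For every sampled n >= B, f(n) should be at most c * g(n).
print(all(f(n) <= c * g(n) for n in range(B, 10_000)))  # True
```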
There are many important practical applications of these definitions. For example, when you want to know how the running time of one algorithm compares to another's, you can formalize the comparison by stating their relationship in terms of statements 1 through 3. This creates a common basis for communicating algorithm run-time comparisons between computer scientists and removes unnecessary ambiguity.
Wednesday, 3 December 2014
Week 7 & 8
During weeks 7 and 8 we learned about sorting algorithm complexity.
The particular kind of complexity we are interested in is time complexity. More specifically, time in terms of the number of steps required to execute an algorithm given a certain input, and NOT the amount of time in seconds, minutes, hours, etcetera. This definition of time complexity allows computer scientists to think of the running time of an algorithm independent of the hardware on which it is being run.
In order to count the steps, we have to differentiate between the portions of the code that run an input-dependent number of times and those that run a fixed number of times. For example, the statements inside a for-loop that iterates over a list of size n will each be executed n times, once per element. A variable initialization outside of the for-loop, on the other hand, will be executed a single time, making it input-independent. Adding up the step counts of these two kinds of code gives the overall running time, in steps, of a given algorithm.
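As a toy illustration of my own (not an example from lecture), here is a small linear search with each statement annotated by how many times it runs on a list of size n:

```python
# Toy example: counting steps for a linear search over a list of size n.
# Counts are per-statement executions, not wall-clock time.

def linear_search(lst, target):
    found = False                 # runs 1 time (input-independent)
    for item in lst:              # loop body runs n times, once per element
        if item == target:        # runs n times
            found = True          # runs at most n times
    return found                  # runs 1 time

# Total: about n + 2 steps when the target is absent, i.e. linear in n.
print(linear_search([5, 3, 8, 1], 8))  # True
```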
Tuesday, 11 November 2014
Week 4
This week we learned about mixed quantifiers.
To recap, the two fundamental logical quantifiers are existential and universal (as far as I'm aware there aren't any more). All the existential quantifier claims is that there is at least one element of some set of interest for which a statement holds. For example:
"There exists some natural number..." In order to be usable in a precise logical statement, it could be broken down to mean that there exists some number that we can give a name, such as X, that happens to belong to the set of all natural numbers.
Universal quantification goes a giant leap further and makes a claim about all of the elements inside of some set. Extending the previous example:
"Every single natural number that exists..." More precisely, this means that a variable X can be assigned in sequence to every single number in the set of natural numbers, and that whatever statement follows will be true for each of these instantiations of X.
Where it gets interesting is when you combine these two quantifiers into a single statement. In particular, order matters.
1) "Some X exists such that for all Y..."
!=
2) "For all X there exists a Y...".
Statement 1 says that there is a single value that X can take on which, when coupled with any value that Y can take on, yields a true statement. Statement 2 says that for every single X that you can pick, you can also pick some value of Y, possibly a different one for each X, that makes the statement true.
To make this distinction more concrete, imagine the following equation that takes in integer values for X and Y:
3) X = Y
Statement 1 in combination with 3 says that there is some single X you can pick for which Statement 3 will be true no matter what Y is, which is clearly false. Statement 2 in combination with 3, on the other hand, says that for any value X takes on, you can pick a Y such that Statement 3 is true. Simply set Y to be X each time and the statement holds.
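To see the difference mechanically, here is a small hypothetical Python check in which the finite set {-3, ..., 3} stands in for the integers; it evaluates both quantifier orderings against the equation X = Y:

```python
# Hypothetical check of the two quantifier orderings over a small finite domain.
# The domain {-3, ..., 3} stands in for the integers for illustration only.
domain = range(-3, 4)

# Statement 1 with 3: there exists an X such that for all Y, X == Y.
exists_forall = any(all(x == y for y in domain) for x in domain)

# Statement 2 with 3: for all X there exists a Y such that X == Y.
forall_exists = all(any(x == y for y in domain) for x in domain)

print(exists_forall)  # False: no single X equals every Y
print(forall_exists)  # True: for each X, choose Y = X
```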
This difference in quantifier ordering can make a profound difference in the meaning and hence interpretation of all logical statements.
Week 3
This week we learned about conjunction, disjunction, negation, and implication.
I found vacuous truth to be the most interesting concept of this week. At first it seemed somewhat odd to conclude that if the antecedent of a statement is false, the statement as a whole is always true. An example of such a vacuous truth could be the following: if all mice are the size of an elephant, then every person on Earth is bankrupt. To a person untrained in the laws and conventions of logic, a statement like this seems preposterous. How can you conclude that mice being the size of elephants implies that everyone is bankrupt, when clearly not everyone is bankrupt, no mouse is the size of an elephant, and such a scenario will almost certainly never occur? It is easy to get caught up in the absurdity of the antecedent and consequent and simply dismiss the statement as a whole based on their improbability.
The easiest way to resolve this confusion is as follows: consider implication statements to be contracts. Whenever the antecedent is true (or occurs), and only in that case, should we expect the consequent to occur. In our case, only if all mice were the size of elephants, and everyone turned out not to be bankrupt, then and only then could we argue that the contract has been breached and that the statement as a whole is false. No other combination of truth values for the antecedent and consequent needs to be considered, since we've established the only possible way for the contract to be breached. By complement, every other scenario yields a true statement.
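As a quick illustration of my own (not something from the course notes), the snippet below prints the truth table for implication using the standard equivalence that "P implies Q" is the same as "(not P) or Q", so the only falsifying row is the breached contract:

```python
# Truth table for implication: P -> Q is equivalent to (not P) or Q.
# The only falsifying row is P = True, Q = False (the breached contract).
for p in (True, False):
    for q in (True, False):
        implies = (not p) or q
        print(f"P={p!s:5}  Q={q!s:5}  P->Q={implies}")
```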
Ultimately, I think that treating an implication as true when its antecedent is false is simply a logical convention, adopted so that larger, more complex logical statements (which contain implications) evaluate to more intuitive and meaningful truth values.
Friday, 19 September 2014
Week 2
Last week I learned about an interesting technique for problem-solving developed by George Polya. It consists of 4 distinct steps whose primary purpose is to allow for a systematic deconstruction of any generic problem into more manageable components. These steps are, broadly: understanding the problem, devising a plan, carrying out the plan, and finally reflecting on the entire problem-solving process. Of particular interest to me is the final step: reflection. It seems as though this step would be the most useful in improving one's general problem-solving ability, for one very simple reason: the more problems you solve, in any context, the bigger your intellectual toolbox will be in future bouts of problem-solving. Each unique problem that is solved will almost always contain some nuggets of knowledge (whether that be the specific solution to a problem or a general methodology) that can be extracted and potentially applied to any number of other problems.
A standard (haphazard) problem-solving approach, while also potentially effective at producing a solution, seems inherently worse than Polya's approach (involving reflection), as there is no residual carry-over of knowledge from problem to problem. After all, I don't know who could possibly find it fun to solve multiple problems that are fundamentally similar in nature from scratch each and every time. Not only is it not fun, it's a complete waste of time.
Whether or not this technique turns out to be as useful as I make it out to be remains to be seen. I will try to apply it and share my experiences in a couple of weeks... until next time!