
[Thinking Cap] on learning.. (May be the last or penultimate chance to don that cap) [4th qn added]



Qn 4 added below.


Qn 0. [George Costanza qn] Consider two learners that are trying to solve the same classification problem with two classes (+ and -). L1 seems to be averaging about 50% accuracy on the test cases, while L2 seems to be averaging 25% accuracy. Which learner is better?  Why is this called the George Costanza question? ;-)
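If you want to poke at the numbers before answering, here is a minimal simulation sketch (the synthetic labels and the two stand-in learners below are my own assumptions, not part of the question):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    y = rng.integers(0, 2, size=n)       # true labels: 0 = '-', 1 = '+'

    # L1: a coin-flip learner, ~50% accuracy
    pred_l1 = rng.integers(0, 2, size=n)

    # L2: a learner that agrees with the truth only ~25% of the time
    pred_l2 = np.where(rng.random(n) < 0.25, y, 1 - y)

    accuracy = lambda pred: (pred == y).mean()
    print(f"L1: {accuracy(pred_l1):.1%}")    # ~50%
    print(f"L2: {accuracy(pred_l2):.1%}")    # ~25%
    # Something to try before answering: what does accuracy(1 - pred_l2) give?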
 
 
Qn 1. Consider a scenario where the training set examples have been labelled by a slightly drunk teacher--and thus they sometimes have wrong labels (e.g., positive instances wrongly labelled as negative).  Of course, for the learning to be doable, the percentage of these mislabelled instances should be quite small.  We have two learners, L1 and L2. L1 seems to be 100% correct on the *training* examples, while L2 seems to be 90% correct on the training examples. Which learner is likely to do well on test cases?
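Here is a sketch you can use to experiment with the scenario (the synthetic data, the 10% label-noise rate, and the choice of 1-nearest-neighbour vs. a depth-1 tree as stand-ins for L1 and L2 are all my assumptions):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    def make_data(n, noise):
        # Hidden rule: the label is the sign of the first feature.
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] > 0).astype(int)
        flip = rng.random(n) < noise     # the "drunk teacher" flips some labels
        return X, np.where(flip, 1 - y, y)

    X_train, y_train = make_data(500, noise=0.10)
    X_test, y_test = make_data(5000, noise=0.0)   # score against the true rule

    # L1 memorizes the training set (100% training accuracy, noise and all);
    # L2 fits a much simpler hypothesis (misses the ~10% mislabelled points).
    for name, m in [("L1 (1-NN)", KNeighborsClassifier(n_neighbors=1)),
                    ("L2 (depth-1 tree)", DecisionTreeClassifier(max_depth=1))]:
        m.fit(X_train, y_train)
        print(name, f"train {m.score(X_train, y_train):.0%}",
              f"test {m.score(X_test, y_test):.0%}")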
 
Qn 2.  Compression involves using the pattern in the data to reduce the storage requirements of the data.  One way of doing this would be to find the rule underlying the data, and keep the rule and throw the data out. Viewed this way, compression and learning seem one and the same. After all, learning too seems to take the training examples, find a hypothesis ("pattern"/"rule") consistent with the examples, and use that hypothesis instead of the training examples.  What, if any, differences do you see between Compression and Learning?
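To make the analogy concrete, here is a toy instance of "find the rule and throw the data out" (the linear rule and the use of np.polyfit are my illustrative choices, not part of the question):

    import numpy as np

    # The "data": 10,000 (x, y) pairs that secretly follow y = 3x + 2
    x = np.arange(10_000, dtype=float)
    y = 3 * x + 2

    # Find the rule, keep it, and throw the data out
    slope, intercept = np.polyfit(x, y, deg=1)

    # Here the reconstruction from the rule alone is (numerically) lossless
    assert np.allclose(slope * x + intercept, y)
    print(f"kept 2 numbers ({slope:.0f}, {intercept:.0f}) instead of {y.size}")

Whether you would still want the reconstruction to be exact once the data gets messier is part of what the question is probing.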
 
Qn 3. We said that most human learning happens in the context of prior knowledge. Can we view prior knowledge as a form of bias?
In particular, can you say that our prior knowledge helps us focus on certain hypotheses as against others in explaining the data?
 
 
Qn 4.  We test the effectiveness of a learner on the test cases. Obviously, the test can be made "unfair" in that the test examples are somehow unconnected with the training examples.
 
One way of ensuring fairness, that the learner would love, is to pick a subset of the training examples and give them back as test examples. (This is like your good old grade-school days, where the teacher would give a "review" sheet and the exam would ask a subset of questions from the review sheet.) Obviously, this makes the test a little too dull (from the sadistic teacher's point of view).
 
A way of ensuring fairness that the teacher would love is to actually give test cases that have not been seen in the training cases (e.g. set exams that don't just repeat homework questions). However, it is too easy to get into unfair tests this way.
 
***What is the most general restriction you can put on the test so that it is considered "FAIR"? Can you argue that your definition works well with your gut feeling about "fair" vs. "unfair" exams/tests?
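Here is a toy setup for stress-testing whatever restriction you come up with (the hidden rule, the uniform training distribution, and logistic regression as the learner are all my assumptions):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    true_label = lambda x: ((x > 0) & (x < 2)).astype(int)   # hidden rule

    # Training examples drawn uniformly from [-2, 2]
    X_train = rng.uniform(-2, 2, size=(1000, 1))
    learner = LogisticRegression().fit(X_train, true_label(X_train[:, 0]))

    # One test drawn like the training data; one from a region never seen
    for name, lo, hi in [("test A (same range)", -2, 2),
                         ("test B (unseen range)", 2, 6)]:
        X = rng.uniform(lo, hi, size=(5000, 1))
        print(f"{name}: accuracy {learner.score(X, true_label(X[:, 0])):.0%}")

Which of the two tests your definition blesses as "fair" is a good sanity check on the definition.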
 
(Anecdote on "unfair tests":  When my wife took a class on algorithms and complexity in her UofMD days, the teacher--whom I won't name--spent one class at the end of the semester on NP-completeness proofs, and then set the entire final exam on NP-completeness proofs. She never realized how riled up she was about this until she ran into another student from the same class--now a faculty member at Duke--at a DARPA meeting 15 years later, and the two of them found themselves talking about the sheer unfairness of that test within the first couple of seconds. Apparently we remember unfair tests quite well ;-).)
 
that is all for now.
Rao