The Boosting Margin, or Why Boosting Doesn’t Overfit
There’s a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph. Let me explain: the vertical axis represents the error of a hypothesis. The horizontal...
Zero Knowledge Proofs — A Primer
In this post we’ll get a strong taste for zero knowledge proofs by exploring the graph isomorphism problem in detail. In the next post, we’ll see how this relates to cryptography and the bigger...
The Blum-Blum-Shub Pseudorandom Generator
Problem: Design a random number generator that is computationally indistinguishable from a truly random number generator. Solution (in Python): note this solution uses the Miller-Rabin primality...
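The generator named in the title can be sketched briefly. Below is a minimal, illustrative version with small hard-coded Blum primes (primes congruent to 3 mod 4); a serious implementation would instead draw large random primes and verify them with a primality test such as Miller-Rabin, as the teaser notes. The function name and parameter choices here are mine, not taken from the post.

```python
def blum_blum_shub(seed, p=499, q=547):
    """Yield pseudorandom bits via repeated squaring modulo M = p*q.

    p and q are (small, illustrative) primes congruent to 3 mod 4,
    and the seed should be coprime to M.
    """
    assert p % 4 == 3 and q % 4 == 3
    M = p * q
    x = seed * seed % M  # initial state
    while True:
        x = x * x % M    # square modulo M
        yield x % 2      # output the least significant bit

gen = blum_blum_shub(seed=12345)
bits = [next(gen) for _ in range(16)]  # a stream of 0s and 1s
```

With such tiny moduli the output is of course not secure; the sketch only shows the x → x² mod M iteration and bit extraction that define the scheme.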
Boolean Logic in Polynomials
Problem: Express a boolean logic formula using polynomials. I.e., if an input variable is set to 0, that is interpreted as false, while 1 is interpreted as true. The output of the polynomial should be 0...
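Under the 0/1 convention stated above, the standard connectives have simple polynomial forms. A small sketch (the helper names are mine, not from the post):

```python
def NOT(x):
    return 1 - x          # flips 0 <-> 1

def AND(x, y):
    return x * y          # 1 only when both inputs are 1

def OR(x, y):
    return x + y - x * y  # inclusion-exclusion keeps the output in {0, 1}

# Truth table for the formula (x OR y) AND (NOT x), over x, y in {0, 1}:
table = [AND(OR(x, y), NOT(x)) for x in (0, 1) for y in (0, 1)]
# table == [0, 1, 0, 0]
```

Composing these polynomials turns any boolean formula into a single polynomial whose value on 0/1 inputs matches the formula's truth value.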
NP-hard does not mean hard
When NP-hardness pops up on the internet, say because some silly blogger wants to write about video games, it’s often tempting to conclude that the problem being proved NP-hard is actually very hard!...