It's been my experience that many important "complex" theorems are complicated in only one or two parts. Oftentimes, if you're willing to accept a lemma or two without proof, you can understand the entire thing. So this is a post explaining what I think is a fairly complicated topic, but there is a piece I don't try to prove. I hope the result is something that's understandable to anyone with a little programming experience.
One of the things that fascinates me most is "incompleteness" theorems. I understand how we can look at some individual candidate and determine that it doesn't solve a problem, but the proof that no solution exists at all seems a lot harder.
A foundational result in theoretical computer science is the undecidability of the halting problem. This states that, given some program P, there is no general way to tell if P will run forever or if it will eventually stop. For example, the following program:
for each integer x:
    if x + 1 = x, then halt
    else go on to the next integer
will run forever, because there's no integer that equals itself plus one. This program:
for each integer x:
    if x + 0 = x, then halt
    else go on to the next integer
will halt immediately, since x + 0 = x is already true for the very first integer.
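If you want to see these as real code, here's roughly what the two programs look like in Python; the language doesn't matter, it's just a concrete stand-in for the pseudocode.

from itertools import count

# The first program: halts only if some integer equals itself plus one.
# No integer does, so the loop runs forever.
def first_program():
    for x in count():          # 0, 1, 2, 3, ...
        if x + 1 == x:
            return             # halt (never reached)

# The second program: halts as soon as some integer equals itself plus zero.
# That's already true for x = 0, so it halts immediately.
def second_program():
    for x in count():
        if x + 0 == x:
            return             # halt (reached on the very first x)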
So how can we prove that there is no general method for telling whether a program will halt? It's a proof by contradiction: suppose we have some method, call it M, that tells us whether any given program will halt. Consider the following program:
if M says *this program* will halt, then run forever
else if M says this program will run forever, then halt
Whatever M predicts about this program, the program does the opposite: if M says it halts, it loops forever, and if M says it loops forever, it halts. Either way, M's prediction is wrong. So for any proposed method M, there is at least one program whose behavior it fails to predict, which proves our theorem.
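To make the self-reference a bit more concrete, here's a rough Python sketch of that troublemaking program. The function halts below plays the role of M and is purely hypothetical (the whole point of the theorem is that nobody can implement it correctly for every program), and reading the program's own source file is just a shortcut for the numbering trick described below.

# Hypothetical stand-in for M: returns True if the program in `source`
# eventually halts, False if it runs forever. The theorem says no such
# function can work for every program.
def halts(source):
    ...

def contrarian():
    # Step 1: get our own source code (the self-reference step).
    with open(__file__) as f:
        my_source = f.read()
    # Step 2: do the opposite of whatever M predicts.
    if halts(my_source):
        while True:          # M said we halt, so run forever
            pass
    else:
        return               # M said we run forever, so halt

contrarian()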
When I first learned this, I felt (and to a certain extent still feel) that it was kind of a cheap trick. In retrospect, I think this was due to the program's ability to refer to itself. That's the piece I'm not proving in this post, but the basic idea isn't that hard: we can assign a natural number to every program (i.e. there's a "first" program, a second program, and so on). Once we've done that, a program "referring to itself" can be accomplished via these numbers, e.g. program 12345 referring to program 12345.
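To illustrate the numbering idea: a program's source text is just a finite string of characters, and finite strings can be listed in order (all strings of length 0, then length 1, and so on). Here's a deliberately naive sketch of that enumeration; it's hopelessly slow, but it shows that every program really does get a number.

from itertools import count, product

ALPHABET = [chr(c) for c in range(32, 127)] + ["\n"]   # printable characters

def all_strings():
    # List every finite string over the alphabet, shortest first.
    # Most of them aren't valid programs, but every program appears somewhere.
    for length in count():
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def program_number(source):
    # A program's "number" is its position in the enumeration above.
    for n, candidate in enumerate(all_strings()):
        if candidate == source:
            return n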
One simple yet important extension of the halting problem is Rice's Theorem, which says that no general method can decide any non-trivial question about what a program does. Want a tool that can check whether an arbitrary program will crash? Tough. Someone asks for a procedure that verifies any algorithm always gives the right answer? No can do.
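To get a feel for why, here's the usual reduction idea in sketch form, again with a hypothetical checker: if you had a function crashes that could tell whether an arbitrary program ever crashes, you could use it to solve the halting problem, which we just showed is impossible.

# Hypothetical: returns True if running `source` ever raises an uncaught
# error, False otherwise. The reduction below shows it can't exist.
def crashes(source):
    ...

def halts(source):
    # Wrap the program so it deliberately crashes the moment it finishes.
    # The wrapper crashes if and only if the original program halts, so a
    # working crash-checker would also decide halting, which is impossible.
    wrapper = "exec(" + repr(source) + ")\nraise RuntimeError('done on purpose')"
    return crashes(wrapper)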
If you're interested, you can see a list of undecidable problems, the most famous of which is Hilbert's tenth problem.