P vs. NP - An Introduction
P versus NP is one of the most famous open problems in computer science. Its fame is partly due to its difficulty, but the problem also cuts to the very nature of problem-solving, and it connects to practical fields such as cryptography. P versus NP typifies the kind of math problem that is easy to understand but hard to solve.
In computer science, we like to categorize problems based on how hard they are to solve. Some problems we know are easy, such as adding two numbers together; we call this class of problems P. Other problems may or may not be hard to solve, but a proposed solution is easy to check; we call this class of problems NP.
NP problems seem more challenging, but note that if a problem is in P, it must also be in NP: anything we can solve quickly, we can certainly check quickly.
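To make "easy to check" concrete, here is a minimal sketch in Python (my own illustration, not part of the original discussion) that verifies a completed Sudoku grid, a puzzle that comes up again later in this section. Checking a filled-in grid is one quick pass over the rows, columns, and boxes; finding a valid fill in the first place is the part that may require a long search.

```python
def is_valid_sudoku(grid: list[list[int]]) -> bool:
    """Check that a completed 9x9 grid uses 1-9 exactly once per row, column, and 3x3 box."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r + dr][c + dc] for dr in range(3) for dc in range(3)}
        for r in range(0, 9, 3)
        for c in range(0, 9, 3)
    ]
    # Every row, column, and box must contain exactly the digits 1 through 9.
    return all(group == digits for group in rows + cols + boxes)
```

Each of the 27 checks scans only nine cells, so verification stays quick, and it remains polynomial even for the larger generalized boards that complexity theory actually studies.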
We've been thinking of P problems as easy and NP problems as possibly hard, but what does that mean? Easy isn't a very mathematical term, so we'll lean on our knowledge of Big O: by an easy problem we mean one with a solution that runs in polynomial time. A jigsaw puzzle is then an example of an NP problem, since checking a completed puzzle runs in polynomial time. If there were also a polynomial-time algorithm to solve jigsaw puzzles, the problem would be in P as well, but as of right now we don't know whether such an algorithm exists. It might not be immediately clear how we would even prove whether P and NP are the same.
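For a rough sense of why polynomial time is treated as the dividing line, the toy comparison below (illustrative numbers of my choosing, nothing more) shows how a polynomial cost such as n^2 stays manageable while an exponential cost such as 2^n explodes as the input grows.

```python
# Illustrative only: polynomial costs grow far more slowly than exponential
# ones, which is why complexity theory draws the "easy" line at polynomial time.
for n in (10, 20, 40, 80):
    print(f"n = {n:>2}: n^2 = {n**2:>5,}   2^n = {2**n:,}")
```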
The Cook-Levin theorem presents an incredible idea: it shows that a problem called SAT (Boolean satisfiability, which asks whether a formula's true/false variables can be set so that the whole formula comes out true) is the hardest problem in NP. By hardest, we mean that a solver for SAT could be used to solve any other NP problem. SAT encapsulates all the problems inside NP, and any NP problem can be transformed into a version of SAT. If we could prove that solving SAT is easy, it would mean that all NP problems are easy, since solving SAT allows us to solve all other NP problems, and so P would equal NP. On the other hand, if SAT is fundamentally hard to solve, then P does not equal NP, since SAT would be an NP problem that is not in P. This gives us a potential line of attack on P versus NP: we can now focus on whether or not SAT is easy.
SAT is like the king of NP, and because it is able to represent every other problem in NP, we call it NP-complete. But SAT was only the first domino in this field of study, as it allowed us to find many more NP-complete problems, including the jigsaw puzzles from before, Sudoku, Mario, and hundreds of others. The more mathematical examples include Hamiltonian path, which asks whether a route exists on an arbitrary map that visits every city exactly once, and subset-sum, which asks whether we can select a group of numbers from a list of integers that adds up to zero. SAT has to share the throne with all of these problems as the hardest problems in NP.
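Since everything in NP funnels through SAT, a minimal sketch of what a SAT solver faces may help. The brute-force approach below (my own illustration, nothing like the clever engineering inside real SAT solvers) tries every true/false assignment of a small formula. Checking any single assignment is fast; it is the number of assignments, which doubles with every extra variable, that makes the search expensive.

```python
from itertools import product

# A formula here is an AND of clauses; each clause is an OR of literals.
# A literal is (variable_name, wanted_value), so ("x", False) means "not x".
# This example formula: (x or y) and (not x or z) and (not y or not z).
FORMULA = [
    [("x", True), ("y", True)],
    [("x", False), ("z", True)],
    [("y", False), ("z", False)],
]

def evaluate(formula, assignment):
    """Checking one assignment is quick: every clause needs one satisfied literal."""
    return all(
        any(assignment[var] == wanted for var, wanted in clause)
        for clause in formula
    )

def sat_brute_force(formula):
    """Try all 2^n assignments -- the step that becomes infeasible as n grows."""
    variables = sorted({var for clause in formula for var, _ in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if evaluate(formula, assignment):
            return assignment
    return None

print(sat_brute_force(FORMULA))  # e.g. {'x': False, 'y': True, 'z': False}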
All NP-complete problems are fundamentally the same in the sense that each can be used to solve the others. If we could show that any one of these problems can be solved efficiently, or that it is impossible to solve them efficiently, we would have settled P versus NP.
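To make "used to solve each other" concrete, here is a sketch (again my own illustration, using a toy four-city map) that transforms a Hamiltonian-path question into a SAT formula and hands it to the same kind of brute-force check as above. A real reduction would feed the clauses to an industrial SAT solver, but the shape is the same: the variables say "position i of the route is city v", and the clauses rule out everything that is not a valid route.

```python
from itertools import product

def hamiltonian_path_to_sat(n, edges):
    """Encode "does this n-city map have a route visiting every city exactly once?" as clauses.

    A variable (i, v) means "position i of the route is city v".
    A literal is ((i, v), wanted_value); a clause is a list of literals,
    at least one of which must hold.
    """
    roads = {frozenset(e) for e in edges}
    clauses = []
    for i in range(n):
        # Each position on the route holds at least one city...
        clauses.append([((i, v), True) for v in range(n)])
        # ...and at most one city.
        for v in range(n):
            for w in range(v + 1, n):
                clauses.append([((i, v), False), ((i, w), False)])
    for v in range(n):
        # No city is visited at two different positions.
        for i in range(n):
            for j in range(i + 1, n):
                clauses.append([((i, v), False), ((j, v), False)])
    for i in range(n - 1):
        # Consecutive positions must be joined by a road.
        for v in range(n):
            for w in range(n):
                if v != w and frozenset((v, w)) not in roads:
                    clauses.append([((i, v), False), ((i + 1, w), False)])
    return clauses

def sat_brute_force(clauses):
    """Same idea as before: try every assignment, check each one quickly."""
    variables = sorted({var for clause in clauses for var, _ in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[var] == want for var, want in clause) for clause in clauses):
            return assignment
    return None

# A toy map: four cities with roads 0-1, 1-2, 2-3, and 0-2.
clauses = hamiltonian_path_to_sat(4, [(0, 1), (1, 2), (2, 3), (0, 2)])
answer = sat_brute_force(clauses)      # 2^16 assignments: fine here, hopeless at scale
if answer is not None:
    route = [v for i in range(4) for v in range(4) if answer[(i, v)]]
    print("Hamiltonian path:", route)  # prints one valid route; several exist for this map
else:
    print("No Hamiltonian path exists")
```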
Beyond P and NP, there are many more groups of problems in an entire field known as complexity theory. Other classes look at how efficiently problems can be solved using limited space, at the complements of problems, at probabilistic algorithms, and at quantum computers. Going even further, some problems are so hard that they are impossible to solve at all, such as the halting problem.
So what would happen if P did equal NP? Huge swathes of problems we once thought were hard would suddenly be easy. The common example is that it would invalidate most of our cryptography, which depends on the hope that problems like prime factorization are hard.
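As a rough illustration of the asymmetry that much of cryptography leans on (with toy numbers; real systems use primes hundreds of digits long), multiplying two primes takes a single step, while recovering them from the product by the obvious method means searching through candidate divisors.

```python
def smallest_factor(n: int) -> int:
    """Find the smallest prime factor of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

p, q = 10007, 10009        # two small primes, toy-sized for this sketch
n = p * q                  # the easy direction: one multiplication gives 100160063
print(smallest_factor(n))  # the hard direction: tries about ten thousand divisors to find 10007
```

If P equaled NP, a shortcut past this kind of search would exist for every problem in NP, factoring included, which is why so much of cryptography would be in trouble.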
We also see NP-complete problems crop up in other fields, such as protein structure prediction in biology. They show up in surprising places, like placing as few guards as possible to watch an entire art gallery, or matching medical students with residency programs. Perhaps most importantly, if P equals NP, we could find mathematical proofs quickly, since verifying a formal proof is easy for computers.
It is important to note that the vast majority of computer scientists believe P does not equal NP. One reason is that we continue to find more and more NP-complete problems, yet we are nowhere closer to finding an efficient solution to any of them. And even if P does equal NP, that doesn't mean we could easily start solving NP-complete problems.
In practice, our definition of efficient, a polynomial-time algorithm, is extremely generous and can still be very slow. On the flip side, just because we discover that a problem is NP-complete doesn't mean it's hopeless. Our Big O analysis of algorithms is pessimistic, worst-case analysis, and the instances that come up in real life may be pretty simple.
Analyzing problems this way allows us to connect the theoretical with the practical, helping us know what is easy, what is hard, and how we can get around hard problems. But the appeal of P versus NP cuts deeper than that. This type of analysis allows us to see the underlying nature of problems. It strips them down to their core, revealing that two problems which seemed very different on the surface are two sides of the same coin.