Philip Maymin, *Markets are Efficient if and Only if P = NP*.

I prove that if markets are efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can “program” the market to solve NP-complete problems. Since P probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way of partitioning the excess returns to momentum strategies based on data availability confirms this prediction.

But if P = NP then that’s it for most of modern cryptography, especially public/private key encryption. We’ll have to send giant one-time pads to each other before we can have secure communications.

So it turns out (if this paper is correct) that the choice is not (national) security or privacy. It’s market efficiency or (data) security and privacy.

Then again, it’s hardly news that markets fail. Look outside your window.

*Related*

I disagree that encryption is an NP problem. It is not. It’s more akin to the book search problem in the article. It may take time, but it is not unsolvable. In fact, you may even get lucky and solve it quickly. The problem with encryption is that it is nearly always breakable. In the modern world, we deal with that fundamental problem by making the breaking infeasible for practical use. (Even if you set up enough computers to break a 128-bit key in 5 years, would the intel be of use to you at that point? Would you be able to do it twice?) The only unbreakable code uses a one-time pad – everything else is just a time problem.
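
For concreteness, here is a minimal sketch (in Python; the code is an illustration, not from the discussion) of why the one-time pad is the exception: XOR with a truly random pad that is as long as the message, and never reused, yields a ciphertext that carries no information about the plaintext.

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding pad byte."""
    assert len(pad) >= len(plaintext), "pad must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # truly random, used exactly once

ciphertext = otp_encrypt(message, pad)
# XOR is its own inverse, so "encrypting" the ciphertext decrypts it.
assert otp_encrypt(ciphertext, pad) == message
```

Every possible plaintext of the same length is equally consistent with the ciphertext under some pad, which is exactly Shannon's perfect-secrecy condition.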

Quantum computers are just a theoretical pipe dream at the moment. One might as well talk about magic computers, or ones administered by unicorns. (And in reality, technical descriptors aside, quantum computers offer incredible speed, not a new way of computing – so all it will really mean is that the 128-bit key will become a 1024-bit key, or whatever.)

A side point to your post, but whatever.

Whether encryption is P or NP is one of the great unsolved problems in math. This is, ultimately, a question of fact, just one we don’t know at present. Thus it is not something one can agree or disagree about.

I may simply be misunderstanding you, but if you are suggesting that we don’t know that all encryption is breakable, I think you are incorrect (excepting the one-time pad, which, if made and used properly, IS unbreakable). Lots and lots of theoretical work has been done on this. You might be interested in some of the work done by Claude Shannon. So long as the coded message has to contain information (and why wouldn’t it?), it is breakable – the chief weakness is the information itself. The only question is the time involved.

As I stated, we get around this fundamental flaw in various ways such as long keys, short messages, etc., but the flaw doesn’t go away.

Maybe this isn’t what you are talking about though.

As regards your specific and inaccurate assertion about all “codes” being “breakable”: if P is not equal to NP, then with a long-enough key (and no errors in implementation) it is possible to create a cipher (note that codes and ciphers are different things) that cannot be broken by brute-force attack before the heat death of the universe. If (but only if) P is not equal to NP, it takes vastly less computing power to encrypt than to brute-force decrypt, so the theoretical advantage will always lie with the encryption. Such a cipher may be theoretically “breakable” (unlike the one-time pad), but not in any conceivable practical way.

I suggest you read Bruce Schneier’s *Applied Cryptography* if you would like a (fairly) readable introduction to how this all works. I’ve also written a few papers on legal aspects of cryptography that discuss a number of related issues (although not the P/NP issue) that you can find via the “My Publications” link just below the header to this blog.

Which is what I said.

Distilled to essence: excepting a one-time pad, every cipher can be broken. The way around that fact is either to use a one-time pad (often impractical) or to use a key long enough that breaking takes impractically long. Breaking gets faster, so the key gets longer to compensate.
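
The “key gets longer” arithmetic can be made concrete with a back-of-the-envelope sketch; the guessing rate below is a hypothetical figure chosen for illustration, not from the discussion.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
GUESSES_PER_SECOND = 1e12  # hypothetical: a trillion key trials per second

def years_to_brute_force(key_bits: int) -> float:
    """Expected years to search half the keyspace at the assumed rate."""
    return (2 ** (key_bits - 1)) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_brute_force(bits):.2e} years on average")
```

At that rate a 56-bit key (DES-sized) falls in well under a year, while a 128-bit key takes on the order of 10^18 years – far longer than the age of the universe – which is why adding a few dozen bits, rather than faster hardware, decides the race.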

In practice, many ciphers can be broken before their theoretical breakdown time because of misuse (very often) and the fact that information is not random. The human element.

That’s all I’ve been saying. Is there some disagreement that I’m unaware of?

Even specific uses of one-time pads have been deciphered due to human error (pad reuse). See Venona.

That’s not what we security folks mean when we say “broken”. We mean learning the encryption key in fewer than the designed or specified number of operations. That’s the reason “heat death of the universe” is a limiting factor: it is a very hard amount of time to exceed.

It seems to me that Michael is correct. Vic is engaged in semantic quibbling about a field where he has neither expertise nor experience. (I’d say “lawyerly” quibbling, but that might be an insult to actual lawyers reading this discourse.)

Michael,

Have you read Scott Aaronson’s recent paper *Why Philosophers Should Care About Computational Complexity*? It’s full of good insights, as you would expect, and also has a short section on economics – he talks about bounded rationality and the computational complexity of finding Nash equilibria.

Vic: Look at it this way. If we have a cryptographic protocol which provably takes exponential time to crack, then (a) yes, I admit that cracking it is _computable_, but (b) we can always increase the size of the key so that the solution time is greater than the age of the universe on the best available hardware (or even on the best physically possible hardware). If the code can be cracked in polynomial time, then this isn’t possible. That is why P/NP is important.
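
The asymmetry can be illustrated with toy cost functions (the polynomial encryption cost below is an assumed stand-in, not a measured figure): each added key bit roughly doubles the attacker's brute-force work while increasing the defender's work only polynomially, so the defender can always reopen the gap by lengthening the key.

```python
def defender_cost(bits: int) -> int:
    # Assumption: honest encryption cost grows polynomially in key size.
    return bits ** 2

def attacker_cost(bits: int) -> int:
    # Brute force must search the whole keyspace: exponential in key size.
    return 2 ** bits

for bits in (64, 128, 256):
    ratio = attacker_cost(bits) / defender_cost(bits)
    print(f"{bits} bits: attacker/defender cost ratio ≈ {ratio:.1e}")
```

If cracking were polynomial (P = NP in spirit), both curves would be polynomials and no key length could make the ratio grow without bound – which is the point of the comment above.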

That looks interesting, thank you!