Note to self: the difficulty of an NP-complete problem for different inputs forms a similar fractal, does it not?

youtu.be/-RdOwhmqP5s?t=850

Like, a 3-SAT instance where almost all assignments are false is easier than a random 3-SAT instance and may be solvable in polynomial time, and likewise an instance where nearly all assignments are true might also be, and it's the boundary between these two regions that forms a fractal.
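
A minimal sketch of the kind of experiment I mean (a toy DPLL I wrote myself in Python, not any real solver; the sizes and ratios are purely illustrative): count branching calls on random 3-SAT instances as the clause-to-variable ratio sweeps through the hard region around ~4.3, and the easy-hard-easy pattern shows up at the boundary.

import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause is 3 distinct variables with random signs."""
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def simplify(clauses, lit):
    """Set lit to True: drop satisfied clauses, shrink the rest; None signals a conflict."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses, stats):
    """Tiny DPLL: unit propagation plus naive branching; call count is a crude hardness proxy."""
    stats["calls"] += 1
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = simplify(clauses, unit)
        if clauses is None:
            return False
    if not clauses:
        return True
    lit = clauses[0][0]
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None and dpll(reduced, stats):
            return True
    return False

rng = random.Random(0)
n = 30
for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    calls = 0
    for _ in range(20):
        stats = {"calls": 0}
        dpll(random_3sat(n, int(ratio * n), rng), stats)
        calls += stats["calls"]
    print(f"m/n = {ratio}: avg branching calls = {calls / 20:.1f}")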

Thinking out loud, this implies that if you can solve the fractal, i.e. compute the color of the fractal at some arbitrary position x in polynomial time, then you can solve 3-SAT in polynomial time and you get P = NP.
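
Sketched out, the argument is just the composition of two polynomial-time steps. Both names below are entirely hypothetical: encode stands in for some poly-time map from an instance to a point in the fractal's domain, and fractal_color for the imagined poly-time query for which side of the boundary that point sits on.

def decide_3sat(clauses, encode, fractal_color):
    """Hypothetical: if encode() and fractal_color() both ran in polynomial time,
    this would decide 3-SAT in polynomial time, and 3-SAT is NP-complete, so P = NP."""
    x = encode(clauses)       # instance -> point in the fractal's domain
    return fractal_color(x)   # which side of the boundary, i.e. satisfiable or not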

It may be the cannabis talking, but I also can't shake the feeling that this is lurking near the edge of Decidability and the Halting Problem. I wonder if there are any useful connections between P vs NP and computability.

And this is definitely the cannabis talking, but isn't it odd how the problems that matter most to us tend to be the ones contained in P, with most of the rest in BQP or EXP, and not in the truly outlandish wilderness of things like O(n!)?

Like… what made that inevitable? Why, of all the possible ways to map computational time onto physical time, did almost all the important problems end up in P, when PH \ P is so much larger a universe in terms of sheer number of problems?

Additional note to self: A* is quite similar to Newton's Method when you really get down to it, isn't it?

davidkoloski.me/blog/intellige
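
Roughly what I mean, as a toy side-by-side (my own framing, not from the linked post): both methods use a cheap local estimate of how far the goal is to decide where to step next, A* through a heuristic h(n), Newton through f(x)/f'(x).

import heapq

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method: step by the local estimate f(x)/f'(x) of the distance to the root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)   # the "heuristic" step toward the root
    return x

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (wall) cells, Manhattan heuristic."""
    def h(p):  # admissible estimate of remaining distance, playing the role of f(x)/f'(x)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~1.41421356
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))                          # 6 steps around the wall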
