
LW - When Are Results from Computational Complexity Not Too Coarse? by Dalcy

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When Are Results from Computational Complexity Not Too Coarse?, published by Dalcy on July 4, 2024 on LessWrong.

Tl;dr: While an algorithm's computational complexity may be exponential in general (worst case), it is often possible to stratify its inputs via some dimension k that makes the algorithm polynomial for fixed k and exponential only in k. Conceptually, this quantity captures the core aspect of a problem's structure that makes specific instances 'harder' than others, often with an intuitive interpretation.

Example: Bayesian Inference and Treewidth

One can easily prove that exact inference (the decision problem "is P(X) > 0?") is NP-hard by encoding 3-SAT as a Bayes Net. Showing that it is in NP is easy too. Therefore inference is NP-complete, implying that algorithms are worst-case exponential. But this can't be the typical case! Let's examine a Bayes Net whose structure is a chain A → B → C → D, and say you want to compute the marginal P(D).

The naive algorithm for marginalization would be to literally multiply all the conditional probability distribution (CPD) tables of the Bayes Net's nodes and sum over every variable other than the query variable, here D. If we assume each variable takes at most v values, the computational complexity is exponential in the number of variables n:

P(D) = Σ_A Σ_B Σ_C P(A, B, C, D), which is O(v^n).

But because the chain structure gives the factorization P(A, B, C, D) = P(A) P(B|A) P(C|B) P(D|C), we can push the sums inward:

P(D) = Σ_C P(D|C) Σ_B P(C|B) Σ_A P(A) P(B|A),

and now the computation can be done in O(n v^2). Why? Notice that Σ_A P(A) P(B|A) is just P(B), and computing P(B = b) takes v multiplications and v − 1 additions, i.e. O(v). This needs to be done for every value b, so O(v^2) in total. Now we have cached P(B), and we move on to Σ_B P(C|B) P(B), where the same analysis applies. This is basically dynamic programming. So, at least for chains, inference can be done in time linear in the input size.
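To make the O(v^n) versus O(n v^2) contrast concrete, here is a minimal sketch (my own illustration, not code from the post) that computes P(D) for the chain both by naive enumeration of the full joint and by the pushed-in sums. The CPD names, the NumPy array layout, and the helper random_cpd are assumptions made for the example.

```python
# Minimal sketch: marginalizing P(D) in a chain Bayes net A -> B -> C -> D,
# comparing naive enumeration with the sum-reordering described above.
import numpy as np

rng = np.random.default_rng(0)
v = 4  # each variable takes v values

def random_cpd(*shape):
    """Random conditional table, normalized over the last axis."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

P_A = random_cpd(v)             # P(A)
P_B_given_A = random_cpd(v, v)  # P(B|A), rows indexed by A
P_C_given_B = random_cpd(v, v)  # P(C|B)
P_D_given_C = random_cpd(v, v)  # P(D|C)

def naive_marginal_D():
    """Build the full joint (v**n entries) and sum out A, B, C: O(v**n)."""
    joint = np.zeros((v, v, v, v))
    for a in range(v):
        for b in range(v):
            for c in range(v):
                for d in range(v):
                    joint[a, b, c, d] = (P_A[a] * P_B_given_A[a, b]
                                         * P_C_given_B[b, c] * P_D_given_C[c, d])
    return joint.sum(axis=(0, 1, 2))

def eliminate_marginal_D():
    """P(D) = sum_C P(D|C) sum_B P(C|B) sum_A P(A) P(B|A): O(n * v**2)."""
    P_B = P_A @ P_B_given_A   # sum_A P(A) P(B|A)
    P_C = P_B @ P_C_given_B   # sum_B P(C|B) P(B)
    return P_C @ P_D_given_C  # sum_C P(D|C) P(C)

assert np.allclose(naive_marginal_D(), eliminate_marginal_D())
print(eliminate_marginal_D())
```

Each pushed-in sum is a v × v matrix-vector product, which is exactly the cached-P(B) dynamic-programming step described above; for a chain of n variables this amounts to n − 1 such products, i.e. O(n v^2).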
The earlier NP-completeness result, remember, is a worst-case analysis that applies to all possible Bayes Nets, ignoring the structure of each instance that may make some easier to solve than others. Let's attempt a finer-grained complexity analysis that takes into account the structure of the given Bayes Net, building on the chain example.

Intuitively, the relevant structure of the Bayes Net that determines the difficulty of marginalization is the 'degree of interaction' among the variables, since the complexity is exponential in the "maximum number of factors ever seen within a sum," which was 2 in the case of a chain.

How do we generalize this quantity to graphs other than chains? Since we could have shuffled the order of the sums and products differently (which would still yield O(n v^2) for chains, though for general graphs the exponent may change significantly), for a given graph we want to find the sum-shuffling order that minimizes the number of factors ever seen within a sum, and call that number k, an invariant of the graph that captures the difficulty of inference: O(m v^k).[1] This graphical quantity of your graph is called the treewidth.[2][3]

So, to sum up:
- We've parameterized the possible input Bayes Nets by some quantity k, where k stratifies the inference problem by its inherent difficulty: computational complexity is exponential in k, but linear for fixed or bounded k.
- We see that k is actually a graphical quantity known as treewidth, which intuitively corresponds to the notion of 'degree of interaction' among the variables.

General Lesson

While I was studying basic computational complexity theory, I found myself skeptical of the value of the various complexity classes, especially because the classes seemed too coarse and did not exploit the structure specific to a problem instance: the motif of proving NP-hardness by finding a clever way to encode 3-SA...
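Returning to the quantity k from the example above: finding the best sum-shuffling order amounts to finding an elimination order of small induced width, and computing the exact treewidth is itself NP-hard, so greedy heuristics are used in practice to get upper bounds. The sketch below (my own illustration, not from the post) runs the common min-degree heuristic on an undirected interaction graph; function and variable names are assumptions, and the width it reports follows the usual treewidth convention, which is one less than the informal "factors inside a sum" count above (a chain has treewidth 1).

```python
# Sketch: upper-bound the treewidth by greedily eliminating a minimum-degree
# vertex and recording how many neighbours it has when it is summed out.
def min_degree_width(adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbours."""
    adj = {u: set(nbrs) for u, nbrs in adjacency.items()}
    width = 0
    while adj:
        u = min(adj, key=lambda x: len(adj[x]))  # cheapest vertex to sum out next
        nbrs = adj[u]
        width = max(width, len(nbrs))
        # Summing out u creates a new factor over all of u's neighbours,
        # so they become pairwise connected in the remaining graph.
        for a in nbrs:
            adj[a] |= nbrs - {a}
            adj[a].discard(u)
        del adj[u]
    return width

# Chain A - B - C - D (its undirected interaction graph is the same chain).
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(min_degree_width(chain))  # 1

# 3x3 grid graph: a denser interaction structure.
grid = {(i, j): {(i + di, j + dj)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 3 and 0 <= j + dj < 3}
        for i in range(3) for j in range(3)}
print(min_degree_width(grid))  # 3 for this elimination order
```

The chain reports a width of 1, matching the O(n v^2) cost derived earlier, while the 3×3 grid reports 3, reflecting its higher 'degree of interaction' and a correspondingly larger exponent in the elimination cost.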