The total time taken by this algorithm is O(e(e + m)). This running time is polynomial in the "size" of the input, which we might informally take to be the sum of e and m.

When we translate the above ideas to Turing machines, we face several issues:

• When we study algorithms, we encounter problems that ask for outputs in a variety of forms, such as the list of edges in a MWST. When we deal with Turing machines, we may only think of problems as languages, and the only output is "yes" or "no," i.e., accept or reject. For instance, the MWST problem could be couched as: "given this graph G and limit W, does G have a spanning tree of weight W or less?" That problem may seem easier to answer than the MWST problem with which we are familiar, since we don't even learn what the spanning tree is. However, in the theory of intractability, we generally want to argue that a problem is hard, not easy, and the fact that a yes-no version of a problem is hard implies that the more standard version, where a full answer must be computed, is also hard.

• While we might think informally of the "size" of a graph as the number of its nodes or edges, the input to a TM is a string over a finite alphabet. Thus, problem elements such as nodes and edges must be encoded suitably. The effect of this requirement is that inputs to Turing machines are generally slightly longer than the intuitive size of the input. However, there are two reasons why the difference is not significant:

1. The difference between the size as a TM input string and as an informal problem input is never more than a small factor, usually the logarithm of the input size. Thus, what can be done in polynomial time using one measure can be done in polynomial time using the other.

2. The length of a string representing the input is actually a more accurate measure of the number of bytes a real computer has to read to get its input. For instance, if a node is represented by an integer, then the number of bytes needed to represent that integer is proportional to the logarithm of the integer's size; it is not one byte per node, as we might imagine in an informal accounting of input size.

Example 10.2: Let us consider a possible code for the graphs and weight limits that could be the input to the MWST problem. The code has five symbols: 0, 1, the left and right parentheses, and the comma.

1. Assign integers 1 through m to the nodes.

2. Begin the code with the value of m in binary and the weight limit W in binary, separated by a comma.

3. If there is an edge between nodes i and j with weight w, place (i, j, w) in the code. The integers i, j, and w are coded in binary. The order of i and j within an edge, and the order of the edges within the code, are immaterial.

Thus, one of the possible codes for the graph of Fig. 10.1 with limit W = 40 is

100,101000(1,10,1111)(1,11,1010)(10,11,1100)(10,100,10100)(11,100,10010) □

If we represent inputs to the MWST problem as in Example 10.2, then an input of length n can represent at most O(n/log n) edges. It is possible that m, the number of nodes, could be exponential in n, if there are very few edges. However, unless the number of edges, e, is at least m − 1, the graph cannot be connected and therefore will have no MWST, regardless of its edges. Consequently, if the number of nodes is not at most some small multiple of n/log n, there is no need to run Kruskal's algorithm at all; we simply say "no, there is no spanning tree of that weight."
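To make the encoding concrete, here is a small Python sketch (the function name and the explicit edge list are our own illustration; the edges are simply read off the code string displayed above) that produces a code in exactly this five-symbol alphabet from a node count, a weight limit, and a list of weighted edges.

def encode_mwst_instance(m, W, edges):
    """Encode a graph and weight limit over the five-symbol alphabet
    {0, 1, (, ), ,} in the manner of Example 10.2."""
    # The code begins with m and W in binary, separated by a comma ...
    parts = [format(m, "b"), ",", format(W, "b")]
    # ... followed by one parenthesized (i,j,w) triple per edge, also in binary.
    for i, j, w in edges:
        parts.append("({},{},{})".format(format(i, "b"), format(j, "b"), format(w, "b")))
    return "".join(parts)

# Nodes 1..4, limit W = 40, and the edge list implied by the code string
# shown above for the graph of Fig. 10.1:
fig_10_1_edges = [(1, 2, 15), (1, 3, 10), (2, 3, 12), (2, 4, 20), (3, 4, 18)]
assert encode_mwst_instance(4, 40, fig_10_1_edges) == \
    "100,101000(1,10,1111)(1,11,1010)(10,11,1100)(10,100,10100)(11,100,10010)"

Each node number and weight occupies roughly the logarithm of its value in symbols, which is where the bound of O(n/log n) edges for a code of length n comes from.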
Thus, if we have an upper bound on the running time of Kruskal's algorithm as a function of m and e, such as the upper bound O(e(m + e)) developed above, we can conservatively replace both m and e by n and say that the running time, as a function of the input length n, is O(n(n + n)), or O(n²). In fact, a better implementation of Kruskal's algorithm takes time O(n log n), but we need not concern ourselves with that improvement here.

Of course, we are using a Turing machine as our model of computation, while the algorithm we described was intended to be implemented in a programming language with useful data structures such as arrays and pointers. However, we claim that in O(n²) steps we can implement the version of Kruskal's algorithm described above on a multitape TM. The extra tapes are used for several jobs:

1. One tape can be used to store the nodes and their current component numbers. The length of this table is O(n).

2. A tape can be used, as we scan the edges on the input tape, to hold the currently least edge-weight found among those edges that have not been marked "used." We could use a second track of the input tape to mark those edges that were selected as the edge of least remaining weight in some previous round of the algorithm. Scanning for the lowest-weight, unmarked edge takes O(n) time, since each edge is considered only once, and comparisons of weight can be done by a linear, right-to-left scan of the binary numbers.

3. When an edge is selected in a round, place its two nodes on a tape. Search the table of nodes and components to find the components of these two nodes. This task takes O(n) time.

4. A tape can be used to hold the two components, i and j, being merged when an edge is found to connect two previously unconnected components. We then scan the table of nodes and components, and each node found to be in component i has its component number changed to j. This scan also takes O(n) time.

You should thus be able to complete the argument that says one round can be executed in O(n) time on a multitape TM. Since the number of rounds, e, is at most n, we conclude that O(n²) time suffices on a multitape TM. Now, remember Theorem 8.10, which says that whatever a multitape TM can do in s steps, a single-tape TM can do in O(s²) steps. Thus, if the multitape TM takes O(n²) steps, then we can construct a single-tape TM to do the same thing in O((n²)²) = O(n⁴) steps. Our conclusion is that the yes-no version of the MWST problem, "does graph G have a MWST of total weight W or less?", is in P.
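As a companion to the argument above, the following Python sketch (the function and variable names are ours; a dictionary stands in for the node-and-component tape, and a list of flags for the "used" track) runs the round-based version of Kruskal's algorithm just described and answers the yes-no question directly. Each of the at most e rounds scans the edges and the component table once, matching the O(e(e + m)) bound quoted at the start of the section.

def mwst_weight_at_most(m, W, edges):
    """Yes-no MWST: do nodes 1..m with the given (i, j, w) edges have a
    spanning tree of total weight at most W?  Round-based Kruskal."""
    if m <= 1:
        return True                               # a single node needs no edges
    component = {v: v for v in range(1, m + 1)}   # node -> component table
    used = [False] * len(edges)                   # marks edges already chosen
    total, picked = 0, 0
    for _ in range(len(edges)):                   # at most e rounds
        # Scan for the lowest-weight edge not yet marked used.
        best = None
        for k, (i, j, w) in enumerate(edges):
            if not used[k] and (best is None or w < edges[best][2]):
                best = k
        i, j, w = edges[best]
        used[best] = True
        ci, cj = component[i], component[j]
        if ci != cj:                              # edge joins two components
            total += w
            picked += 1
            for v in component:                   # merge: relabel ci as cj
                if component[v] == ci:
                    component[v] = cj
            if picked == m - 1:                   # spanning tree is complete
                return total <= W
    return False                                  # the graph is not connected

With the edge list read off the code string in Example 10.2 and W = 40, this returns True: the selected tree uses the edges of weight 10, 12, and 18, for a total of exactly 40.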
10.1.3 Nondeterministic Polynomial Time

A fundamental class of problems in the study of intractability is those that can be solved by a nondeterministic TM that runs in polynomial time. Formally, we say a language L is in the class NP (nondeterministic polynomial) if there is a nondeterministic TM M and a polynomial time complexity T(n) such that L = L(M), and when M is given an input of length n, there are no sequences of more than T(n) moves of M.

Our first observation is that, since every deterministic TM is a nondeterministic TM that happens never to have a choice of moves, P ⊆ NP. However, it appears that NP contains many problems not in P. The intuitive reason is that an NTM running in polynomial time has the ability to guess an exponential number of possible solutions to a problem and check each one in polynomial time, "in parallel." However:

• It is one of the deepest open questions of mathematics whether P = NP, i.e., whether everything that can be done in polynomial time by an NTM can in fact be done by a DTM in polynomial time, perhaps with a higher-degree polynomial.

10.1.4 An NP Example: The Traveling Salesman Problem

To get a feel for the power of NP, we shall consider an example of a problem that appears to be in NP but not in P: the Traveling Salesman Problem (TSP). The input to TSP is the same as to MWST: a graph with integer weights on the edges, such as that of Fig. 10.1, and a weight limit W. The question asked is whether the graph has a "Hamilton circuit" of total weight at most W. A Hamilton circuit is a set of edges that connect the nodes into a single cycle, with each node appearing exactly once.
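To see the guess-and-check character of NP in ordinary code, here is a deterministic Python sketch of the yes-no TSP question (all names are ours). It simply tries every cyclic ordering of the nodes, which is why it can take time exponential in m; a polynomial-time NTM would instead guess a single ordering and perform only the check in the inner loop.

from itertools import permutations

def tsp_weight_at_most(m, W, edges):
    """Yes-no TSP: do nodes 1..m with the given (i, j, w) edges have a
    Hamilton circuit of total weight at most W?"""
    weight = {}
    for i, j, w in edges:
        weight[(i, j)] = weight[(j, i)] = w       # the graph is undirected
    for order in permutations(range(2, m + 1)):   # fix node 1 as the start
        tour = (1,) + order
        total, ok = 0, True
        # Checking one candidate tour takes only polynomial time.
        for a, b in zip(tour, tour[1:] + (1,)):   # consecutive pairs, wrapping
            if (a, b) not in weight:
                ok = False                        # missing edge: not a circuit
                break
            total += weight[(a, b)]
        if ok and total <= W:
            return True
    return False

The outer loop ranges over (m − 1)! orderings, so this search is not polynomial in the input length; an NTM accepting the same language would use its nondeterministic choices to write down one tour and then run only the polynomial-time verification.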