
11.4.6 The Class ZPP

Our second class of languages involving randomization is called zero-error, probabilistic, polynomial, or ZPP. The class is based on a randomized TM that always halts, and has an expected time to halt that is some polynomial in the length of the input. This TM accepts its input if it enters an accepting state (and therefore halts at that time), and it rejects its input if it halts without accepting. Thus, the definition of class ZPP is almost the same as the definition of P, except that ZPP allows the behavior of the TM to involve randomness, and the expected running time, rather than the worst-case running time, is measured.

A TM that always gives the correct answer, but whose running time varies depending on the values of some random bits, is sometimes called a Las-Vegas Turing machine or Las-Vegas algorithm. We may thus think of ZPP as the languages accepted by Las-Vegas Turing machines with a polynomial expected running time.
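To make the idea of a Las-Vegas algorithm concrete, here is a minimal sketch in Python, added for illustration and not taken from the text; the names find_one_index and bits are hypothetical. The routine hunts for a 1 in a bit array by probing random positions: the answer it returns is always correct, and only the number of probes is random.

import random

def find_one_index(bits):
    # Las-Vegas search: probe random positions of bits (assumed to
    # contain at least one 1) until a 1 is found.  The index returned
    # is always a correct answer; only the number of probes is random.
    # If at least half the entries are 1, each probe succeeds with
    # probability at least 1/2, so the expected number of probes is
    # at most 2.
    n = len(bits)
    while True:
        i = random.randrange(n)          # one "random bit" choice
        if bits[i] == 1:
            return i                     # halt and accept

bits = [0, 1] * 8                        # half the positions hold a 1
print(find_one_index(bits))              # always prints an index of a 1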

Suppose L is in RP, and let M be a polynomial-time Monte-Carlo TM accepting L; we are given an input w and want to decide whether w is in L. If we run M on w, using coin-flips or some other random-number-generating device to simulate the creation of random bits, then we know:

1. If w is not in L, then our run will surely not lead to acceptance of w.

2. If w is in L, there is at least a 50% chance that w will be accepted.

However, if we simply take the outcome of this run to be definitive, we shall sometimes reject w when we should have accepted (a false negative result), although we shall never accept when we should not (a false positive result). Thus, we must distinguish between the randomized TM itself and the algorithm that we use to decide whether or not w is in L. We can never avoid false negatives altogether, although by repeating the test many times, we can reduce the probability of a false negative to be as small as we like.

For instance, if we want a probability of false negative of one in a billion, we may run the test thirty times. If w is in L, then the chance that all thirty tests will fail to lead to acceptance is no greater than 2^-30, which is less than 10^-9, or one in a billion. In general, if we want a probability of false negatives less than c > 0, we must run the test log2(1/c) times. Since this number is a constant if c is, and since one run of the randomized TM M takes polynomial time because L is assumed to be in RP, we know that the repeated test also takes a polynomial amount of time. The implication of these considerations is stated as a theorem, below.

Theorem 11.16: If L is in RP, then for any constant c > 0, no matter how small, there is a polynomial-time randomized algorithm that renders a decision whether its given input w is in L, makes no false-positive errors, and makes false-negative errors with probability no greater than c. □
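As an illustration of Theorem 11.16, the following Python sketch (added here under assumed names, not an algorithm from the text) repeats a one-sided Monte-Carlo test ceil(log2(1/c)) times; amplify, monte_carlo_test, toy_test, and TOY_L are hypothetical. For c = 10^-9 this gives exactly the thirty runs mentioned above.

import math
import random

def amplify(monte_carlo_test, w, c):
    # Repeat a one-sided Monte-Carlo test enough times that the
    # probability of a false negative drops below c.  Each call to
    # monte_carlo_test(w) is assumed never to accept a w outside L,
    # and to accept a w in L with probability at least 1/2.  Then all
    # ceil(log2(1/c)) independent runs fail on a w in L with
    # probability at most (1/2)**ceil(log2(1/c)) <= c.
    runs = math.ceil(math.log2(1.0 / c))
    return any(monte_carlo_test(w) for _ in range(runs))

# Toy stand-in for a Monte-Carlo TM: accepts members of a toy language
# with probability exactly 1/2 and never accepts non-members.
TOY_L = {"0", "00", "000"}
def toy_test(w):
    return w in TOY_L and random.random() < 0.5

print(amplify(toy_test, "00", 1e-9))     # in L: accepted, except with prob < 1e-9
print(amplify(toy_test, "01", 1e-9))     # not in L: always False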



Is the Fraction 1/2 Special in the Definition of RP?

While we defined RP to require that the probability of accepting a string w in L should be at least 1/2, we could have defined RP with any constant that lies properly between 0 and 1 in place of 1/2. Theorem 11.16 says that we could, by repeating the experiment made by M the appropriate number of times, make the probability of acceptance as high as we like, up to but not including 1. Further, the same technique for decreasing the probability of nonacceptance for a string in L that we used in Section 11.4.5 will allow us to take a randomized TM with any probability greater than 0 of accepting w in L and boost that probability to 1/2 by repeating the experiment some constant number of times.

We shall continue to require 1/2 as the probability of acceptance in the definition of RP, but we should be aware that any nonzero constant probability is sufficient to use in the definition of the class RP. On the other hand, changing the constant from 1/2 will change the language defined by a particular randomized TM. For instance, we observed in Example 11.14 how lowering the required probability to 1/16 would cause string 001 to be in the language of the randomized TM discussed there.
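As a quick check on the boosting claim in the box above (a calculation added here for illustration; it is not part of the text): if a randomized TM accepts each w in L with some constant probability p > 0, then k independent runs accept with probability 1 - (1 - p)^k, and

1 - (1 - p)^k ≥ 1/2   exactly when   k ≥ (ln 2)/(-ln(1 - p)).

For example, with p = 1/16, as in the discussion of Example 11.14, k = 11 repetitions suffice, since (15/16)^11 is about 0.49, which is less than 1/2; that is a constant number of repetitions, as claimed.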

11.4.7 Relationship Between RP and ZPP

There is a simple relationship between the two randomized classes we have defined. To state this theorem, we first need to look at the complements of the classes. It should be clear that if L is in ZPP, then so is its complement L̄. The reason is that, if L is accepted by a polynomial-expected-time Las-Vegas TM M, then L̄ is accepted by a modification of M in which we turn acceptance by M into halting without acceptance, and if M halts without accepting, we instead go to an accepting state and halt.

However, it is not obvious that RP is closed under complementation, because the definition of Monte-Carlo Turing machines treats acceptance and rejection asymmetrically. Thus, let us define the class co-RP to be the set of languages L such that L̄ is in RP; i.e., co-RP consists of the complements of the languages in RP.

Theorem 11.17: ZPP = RP ∩ co-RP.

PROOF: We first show RP ∩ co-RP ⊆ ZPP. Suppose L is in RP ∩ co-RP. That is, both L and L̄ have Monte-Carlo TMs, each with a polynomial running time. Assume that p(n) is a polynomial large enough to bound the running times of both machines. We design a Las-Vegas TM M for L as follows.

1. Run the Monte-Carlo TM for L; if it accepts, then M accepts and halts.



2. If not, run the Monte-Carlo TM for L̄. If that TM accepts, then M halts without accepting. Otherwise, M returns to step (1). (A sketch of this loop, in code, appears below.)
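Here is the promised sketch of M's loop in Python, added for illustration under assumed names; TOY_L, test_L, test_L_complement, and las_vegas_M are hypothetical stand-ins for L, for one run of each Monte-Carlo TM, and for M itself.

import random

TOY_L = {"0", "00", "000"}               # a toy language standing in for L

def test_L(w):                           # one run of the Monte-Carlo TM for L
    return w in TOY_L and random.random() < 0.5

def test_L_complement(w):                # one run of the Monte-Carlo TM for the complement of L
    return w not in TOY_L and random.random() < 0.5

def las_vegas_M(w):
    # The loop of steps (1) and (2): each test errs only in one
    # direction, so any answer the loop produces is correct; each
    # round resolves the question with probability at least 1/2, so
    # the expected number of rounds is at most 2.
    while True:
        if test_L(w):                    # step (1)
            return True                  # M accepts and halts
        if test_L_complement(w):         # step (2)
            return False                 # M halts without accepting
        # neither test resolved the question; go back to step (1)

print(las_vegas_M("00"))                 # always True
print(las_vegas_M("01"))                 # always False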

Clearly, M only accepts an input w if w is in L, and only rejects w if w is not in L. The expected running time of one round (an execution of steps 1 and 2) is 2p(n). Moreover, the probability that any one round will resolve the issue is at least 1/2. If w is in L, then step (1) has a 50% chance of leading to acceptance by M, and if w is not in L, then step (2) has a 50% chance of leading to rejection by M. Thus, the expected running time of M is no more than

2p(n) + (1/2)2p(n) + (1/4)2p(n) + (1/8)2p(n) + ··· = 4p(n)

Now, let us consider the converse: assume L is in ZPP and show L is in both RP and co-RP. We know L is accepted by a Las-Vegas TM M1 with an expected running time that is some polynomial p(n). We construct a Monte-Carlo TM M2 for L as follows. M2 simulates M1 for 2p(n) steps. If M1 accepts during this time, so does M2; otherwise M2 rejects.
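The construction of M2 can be sketched as follows; this is an illustration added here with assumed names (toy_m1_run plays the role of one Las-Vegas computation of M1, reported one step at a time, and TOY_L is a toy language), not code from the text.

import itertools
import random

TOY_L = {"0", "00", "000"}               # a toy language standing in for L

def toy_m1_run(w):
    # One Las-Vegas computation of M1 on w: yields None for each step
    # taken, then True (accept) or False (reject).  It is always
    # correct; on members of TOY_L the number of simulated steps is
    # random, with expected value 2.
    if w not in TOY_L:
        yield False                      # M1 never accepts a w outside L
        return
    while random.random() < 0.5:
        yield None                       # still running
    yield True

def M2(m1_run, budget):
    # The machine M2: watch M1's run for at most budget = 2*p(n) steps
    # and accept exactly when M1 accepts within that budget.  Cutting
    # the run short can cause false negatives but never false positives.
    for outcome in itertools.islice(m1_run, budget):
        if outcome is not None:
            return outcome               # M1 halted in time; copy its answer
    return False                         # out of time: M2 rejects

print(M2(toy_m1_run("00"), budget=10))   # True, except with probability 2^-10
print(M2(toy_m1_run("01"), budget=10))   # always False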

Suppose that input w of length n is not in L. Then M1 will surely not accept w, and therefore neither will M2. Now, suppose w is in L. M1 will surely accept w eventually, but it might or might not accept within 2p(n) steps.

However, we claim that the probability M1 accepts w within 2p(n) steps is at least 1/2. Suppose the probability of acceptance of w by M1 within time 2p(n) were some constant c < 1/2. Then the expected running time of M1 on input w is at least (1 - c)2p(n), since 1 - c is the probability that M1 will take more than 2p(n) time. However, if c < 1/2, then 2(1 - c) > 1, and the expected running time of M1 on w is greater than p(n). We have contradicted the assumption that M1 has expected running time at most p(n) and conclude therefore that the probability M2 accepts is at least 1/2. Thus, M2 is a polynomial-time-bounded Monte-Carlo TM, proving that L is in RP.
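In other words (an observation added here, not in the text), the argument just given is an instance of Markov's inequality: if T is the running time of M1 on w, then Pr[T ≥ 2p(n)] ≤ E[T]/(2p(n)) ≤ p(n)/(2p(n)) = 1/2, so with probability at least 1/2 the machine M1 halts, and for w in L accepts, within 2p(n) steps.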

For the proof that L is also in co-RP, we use essentially the same construction, but we complement the outcome of M2. That is, to accept L̄, we have M2 accept when M1 rejects within time 2p(n), while M2 rejects otherwise. Now, M2 is a polynomial-time-bounded Monte-Carlo TM for L̄. □

11.4.8 Relationships to the Classes P and NP

Theorem 11.17 tells us that ZPP ⊆ RP. We can place these classes between P and NP by the following simple theorems.

Theorem 11.18: P ⊆ ZPP.

PROOF: Any deterministic, polynomial-time bounded TM is also a Las-Vegas, polynomial-time bounded TM that happens not to use its ability to make random choices. □

Theorem 11.19: RP ⊆ NP.


