The Not P Machine
Part of the ‘Power of Choice’ book blog.
As I read the arguments in New Scientist, I observed that the two chief protagonists were focussing on the issue of predictability, though from slightly different angles.
The hypothetical question was essentially this: could it ever be possible to construct a complete prediction of a subject’s future brain state such that the subject, whether or not he actually knew what the prediction was, would have to acknowledge it as true if he did?
Donald M. Mackay, Professor of Communications at Keele University, took the view that this was logically impossible, since the very act of believing or disbelieving would alter the state of the subject’s brain. But John Taylor, Professor of Mathematics at King’s College, London, countered that this was simply an example of a ‘non-linear fixed point problem’ and concluded that ‘self-consistent predictions of A’s future brain state, given that he is told the prediction before it occurs, are always possible.’
Both men implicitly accepted that if the subject is not aware of the prediction it should be theoretically possible to come up with a correct prediction (more on that later). But it is rather harder to see the wood for the trees when the subject does know of the prediction. We are all too easily confused here by our personal experience of just how difficult it is to control our own thoughts. Just you try for the next 20 seconds not to think about pink elephants … See what I mean?
So let’s simplify the issue with a very simple, totally deterministic, example: the ‘Not P’ machine. This is a simple computer that will consistently invalidate any attempt to predict its future state, provided that it is told the prediction in advance. The computer has a component, ‘P’, which has only two states, + or -. It works like this:
- Read the prediction of the state of P.
- Calculate the time when the prediction is due to be fulfilled.
- At the last possible moment before this, send a signal to P, setting it to the opposite state.
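The steps above are easy to make concrete. Here is a minimal sketch in Python (the function name and the '+'/'-' string encoding of P's states are illustrative choices, not from the original):

```python
def not_p(prediction: str) -> str:
    """The 'Not P' machine: told a prediction of P's state in
    advance, it sets P to the opposite state at the last moment,
    so the prediction is always refuted."""
    if prediction not in ('+', '-'):
        raise ValueError("prediction must be '+' or '-'")
    # Set P to the opposite of whatever was predicted.
    return '-' if prediction == '+' else '+'

# Whichever prediction the machine is told, it comes out false:
for guess in ('+', '-'):
    actual = not_p(guess)
    print(f"predicted {guess}, actual {actual}, prediction refuted: {guess != actual}")
```

Note the asymmetry the article goes on to draw out: the machine is perfectly deterministic, yet no prediction communicated to it can come true.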
Clearly, if a machine can refute any prediction so easily, then so too can its human programmer, despite the problems we have controlling our thoughts. Does the prediction say I am standing or sitting, for example…?
Notice that this does not mean that the ‘Not P’ machine cannot be predicted. If it is not told the prediction, we can be 100% correct; and if it is told, we know we will be 100% wrong. But is this really a primitive example of deterministic freewill, or is our concept of freewill defective?
Continue reading… What is free will?
Go to: Why am I here? / Liegeman Life home page.
Page created by Kevin King.