Live from the Grove of Cognitive Philosophy Academe: Horsing around this morning—reading the very wise and clever Scott Aaronson's The Ghost in the Quantum Turing Machine instead of doing work—has left me thunderstruck by an excellent insight of his: Newcomb's Problem is, in its essentials, the same as the problem of self-locating belief, and Adam Elga's solution in "Defeating Dr. Evil with Self-Locating Belief" makes as much sense as a solution to the one as to the other. (I think it makes enormous sense as a solution to both, but YMMV.) As Scott puts it:

In Newcomb’s paradox, a superintelligent “Predictor” presents you with two closed boxes, and offers you a choice between opening the first box only or opening both boxes. Either way, you get to keep whatever you find in the box or boxes that you open. The contents of the first box can vary—sometimes it contains $1,000,000, sometimes nothing—but the second box always contains $1,000.... Whatever you would get by opening the first box only, you can get $1,000 more by opening the second box as well. But here’s the catch: using a detailed brain model, the Predictor has already foreseen your choice. If it predicted that you would open both boxes, then the Predictor left the first box empty; while if it predicted that you would open the first box only, then the Predictor put $1,000,000 in the first box. Furthermore, the Predictor has played this game hundreds of times before, both with you and with other people, and its predictions have been right every time. Everyone who opened the first box ended up with $1,000,000, while everyone who opened both boxes ended up with only $1,000. Knowing all of this, what do you do?...
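The tension in the paradox is an expected-value one: dominance reasoning says two-boxing is always $1,000 better, yet the Predictor's track record says one-boxers walk away richer. A minimal sketch of that arithmetic (my illustration, not part of Aaronson's quote; treating the Predictor's reliability as a probability `p` is an assumption for the example):

```python
# Expected payoffs in Newcomb's problem, modeling the Predictor as being
# correct with probability p. This is an illustrative sketch only.

def expected_payoff(strategy: str, p: float) -> float:
    """Expected dollars for 'one-box' or 'two-box' given Predictor accuracy p."""
    if strategy == "one-box":
        # With prob p the Predictor foresaw one-boxing, so box 1 holds $1,000,000;
        # otherwise it foresaw two-boxing and left box 1 empty.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # With prob p the Predictor foresaw two-boxing, so box 1 is empty and
        # you get only the $1,000 in box 2; otherwise you get both prizes.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.999):
    one = expected_payoff("one-box", p)
    two = expected_payoff("two-box", p)
    print(f"p={p}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Even modest Predictor accuracy (anything above about 0.5005) makes one-boxing the better bet in expectation, which is why the "its predictions have been right every time" stipulation bites so hard.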

I consider myself a one-boxer, [and] the only justification for one-boxing that makes sense to me goes as follows.... If the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you.... But in that case, to... think about Newcomb’s paradox in terms of a freely-willed decision... we need to imagine two entities... the “flesh-and-blood you,” and the simulated version being run by the Predictor.... If we think this way, then we can easily explain why one-boxing can be rational.... Who’s to say that you’re not “actually” the simulation? If you are, then of course your decision can affect what the Predictor does in an ordinary, causal way...