In the diagram above, we see Parfit's case graphically. For causal theorists like Nagel and myself, finding the crucial difference between the cases is essential to the question of survival. In Case 2, causal continuity is disrupted because for a period we exist solely as information, either in a computer or in the surgeon's mind. As information, we are a fictional specification. Other specifications similar to ours could be generated with entirely imaginary personalities. In Case 1, we never have a fictional existence. Parfit's point is that causal discontinuities do not necessarily affect the result, viz. an A->C transition with a numerically identical product in the two cases.

In Case 2, there could have been two products: an A->C transition and an A->D transition occurring at the same time, producing two individuals who are qualitatively identical but not numerically identical. Why should we have survived in A->C but not in A->D in Case 2? The production methods are the same. The only consistent conclusion for the causal theorist is that Case 2 is not legitimate, even though its product is numerically identical to the product of Case 1.

At this point, Parfit has to ask us, as Locke would, to justify how sleep can count as legitimate causal storage whereas a complete specification of {a(i)} stored in a computer does not. Both are causal links of different types, and our subjectivity is potential in both; what is the crucial difference? Armstrong has pointed out that Parfit goes too far in claiming that any cause will do. If an unrelated event somewhere else on the planet resulted in the accidental creation of the {a(i)} composing C, then this would not have any causal continuity with the {a(i)} composing A. Armstrong's example shows that the mere numerically identical existence of the product does not by itself guarantee survival; a causal link is necessary.

One difference between sleep and computer storage lies in the level of emulation. If the storage were just a simulation, then its dependence on an external set of interpretative rules, relative to an observer, would render the computer storage incomplete. If it were stored in the computer as a realization (see Pattee in the Proceedings of the 1st Artificial Life Conference), then the interpretative dependence would be gone, but the storage of {a(i)} would still be abstracted an extra level back from the hardware, viz. sitting on top of the storage of our brain specification. So the best response the causal theorist can give is to ask what a realization of brain storage in a computer would be, and to point out that almost every example of information storage in a computer is interpreted, and not intrinsically complete.
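A minimal sketch, purely illustrative, of the interpretation-dependence worry (the data values, names, and rule sets below are invented for the example, not drawn from any actual brain specification): the same stored data yields incompatible readings under different external rule sets, so the stored bits alone are not intrinsically complete.

```python
# Hypothetical illustration: a stored "specification" is inert data whose
# meaning depends on an external set of interpretative rules.

# A toy specification {a(i)}: just opaque numbers once stored.
stored_spec = [0.12, 0.87, 0.05, 0.41]

# The interpretative rules live outside the stored data, with the observer:
# a mapping from positions in the list to what each value is taken to mean.
# Different rules yield a different reading of the very same bits.
interpretation_a = {0: "synapse_weight_1", 1: "synapse_weight_2",
                    2: "neurotransmitter_level", 3: "firing_threshold"}
interpretation_b = {0: "pixel_brightness_1", 1: "pixel_brightness_2",
                    2: "pixel_brightness_3", 3: "pixel_brightness_4"}

def read_as(spec, rules):
    """Reconstruct a description only relative to a chosen rule set."""
    return {rules[i]: value for i, value in enumerate(spec)}

# The same stored bits support incompatible readings, so the storage alone
# is not intrinsically complete; it needs the observer's interpretative rules.
print(read_as(stored_spec, interpretation_a))
print(read_as(stored_spec, interpretation_b))
```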