As opposed to, e.g., the agent's external behavior being sufficient to settle the question.
I think this one is kinda subtle/complex, and I'm pretty unsure both of my position and of what exactly the question is.
Consider a toaster that behaves just like any other toaster, but whose power supply secretly feeds an advanced nanotech chip which simulates a human on an island. If the toaster is unused or destroyed, that person stops existing. This seems to me to be a question about the toaster's internals, since the simulation has no real impact on the toaster's behavior, yet does seem relevant to the toaster's moral worth.
But perhaps I don't understand your question?
@MaxHarms You understand the question the same way I do, I think. But in my opinion there must be some causal effect from the simulated human on the rest of the system; otherwise it would just be a simulation that happened to be spatially located inside the toaster.
But this is such a low bar to pass that it's hardly worth making the distinction. It's like how books can be stood up as rows of dominoes in a Rube Goldberg machine and thereby become causally necessary to the system's functioning, even though the books are never opened or read.