@SaiVazquez That is only if you assume the model of quantum disassembly and reassembly in another place.
What if practical human teleportation is based on a wormhole model? From the person's perspective, their integrity is never in question; they simply move through a short "artificial" space bridge.
@SaiVazquez "Teleportation = Death" Does it matter that you die if you instantly come back to life?
@levifinkelstein Why do you keep bidding this up to 18%? I hope you're not planning to resolve it based on some sort of bizarre interpretation of the title question.
@levifinkelstein Even IF we have a properly aligned AGI that is 1,000x smarter than an ensemble of the smartest humans that ever lived, it’s not certain that it could solve this.
Its answer might be “sorry fam, what you’re asking for is impossible,” or “here’s how to do this, but it’s going to take 50% of Humanity’s annual energy expenditure to complete a single trip.”
@nottelling2ccc "50% of Humanity’s annual energy expenditure to complete a single trip" not an issue with a few Dyson spheres
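For scale, here is a rough back-of-envelope sketch of the energy figures being thrown around. It assumes a 70 kg person, takes E = mc² as an extreme proxy for "transmitting" a person as pure energy, uses ~580 EJ as a ballpark for humanity's annual primary energy consumption, and uses the Sun's full luminosity as the output of a complete Dyson sphere; all numbers are orders-of-magnitude estimates, not claims about what teleportation would actually cost:

```python
# Back-of-envelope energy scales for "teleporting" a person.
# All figures are rough public estimates, used only for orders of magnitude.

C = 3.0e8            # speed of light, m/s
PERSON_MASS = 70.0   # kg, assumed average adult

# Rest mass-energy of a person (E = m c^2) -- an extreme proxy for
# converting a person entirely to energy and back.
e_person = PERSON_MASS * C**2              # ~6.3e18 J

# Humanity's annual primary energy consumption, ~580 EJ (rough 2020s figure).
e_humanity_per_year = 5.8e20               # J

# A full Dyson sphere would intercept the Sun's entire output.
SOLAR_LUMINOSITY = 3.8e26                  # W
SECONDS_PER_YEAR = 3.15e7
e_dyson_per_year = SOLAR_LUMINOSITY * SECONDS_PER_YEAR   # ~1.2e34 J

print(f"person mc^2:            {e_person:.2e} J")
print(f"share of annual energy: {e_person / e_humanity_per_year:.1%}")
print(f"share of Dyson output:  {e_person / e_dyson_per_year:.2e}")
```

On these assumptions, one person's rest mass-energy is only about 1% of humanity's annual energy use, and a vanishing fraction of a Dyson sphere's annual output, which is the point of the reply above: at Dyson-sphere scales, even absurd per-trip costs stop being the binding constraint.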
@levifinkelstein I would predict significantly greater than 50% odds that a "maximally intelligent" supercomputer AI would not be able to make a significant dent in a Dyson sphere by 2030. Protein folding, let alone strong nano-tech, probably requires a lot of further empirical research beyond just reasoning very cleverly over the currently available public research. And even with strong nano-tech and self-replication, building up astronomically relevant quantities of launch systems, navigation systems, payload delivery systems, solar panels, etc. would still take a long time. This is due to constraints like not making Earth uninhabitable as a side effect, other stellar bodies being hard to reach, and the significant tech advancements and empirical research needed to design both the scaffolding system and the actual Dyson sphere components.
General Intelligence is a superpower compared to random chance, gradual cumulative selection pressure, and anything else relevant that has preceded it. And humans are indeed probably not close to the theoretical maximum intelligence of a limited system. It's not literally a superpower though. It is still constrained by information theoretic limits, the laws of physics, current industry/tech base, land, natural resources, currently available empirical information, currently available computing power, human society, etc.
@nottelling2ccc Plus, its answer is even more likely to be something like: "not enough information; here's a plausible experimental program, tech development trajectory, (series of) Manhattan project(s), etc., that would likely let me derive the answer." So you'd probably be looking at later than 2030 to even know whether teleportation is possible or energy-feasible, even if AGI is developed in the next year or two.