![](/_next/image?url=https%3A%2F%2Ffirebasestorage.googleapis.com%2Fv0%2Fb%2Fmantic-markets.appspot.com%2Fo%2Fdream%252F6Dh26kwwNS.png%3Falt%3Dmedia%26token%3D68493061-e3c7-41f5-b73c-d351b2399c93&w=3840&q=75)
Inspired by this LessWrong post: Connectomics seems great from an AI x-risk perspective.
I'm studying this myself, and I think we're closer to a human brain connectome than the linked article suggests: a connectome for an average human already exists in pieces, scattered across many sources.
Quote from the article:
"It seems that, in the course of trying to do WBE, we would necessarily wind up understanding brain learning algorithms well enough to build non-WBE brain-like AGI, and then presumably somebody would do so before WBE was ready to go."
I feel quite confident that this is the case. Even so, I think an org with strong infosec, a safety mindset, and a firm commitment to not deploying non-WBE brain-like AGI could manage to develop WBE. It's a high-risk, high-reward approach, but I favor it as a direction of research.