Will we ever have a scientific metric for testing and verifying consciousness?
Never closes
No - this is not possible, or it is beyond the natural limits of empirical scientific validation
Yes - and studies will prove that a biological component is required (AI cannot be conscious). Furthermore, the metric used for assessing consciousness is fundamentally binary - either something is conscious or it isn't.
Yes - and studies will prove that a biological component is required (AI cannot be conscious). Furthermore, the metric used for assessing consciousness is gradient-based - i.e. some things can be 'more' conscious than others.
Yes - and studies will prove that both biological and computer-based systems can possess consciousness (AI can be conscious) + The metric used for assessing consciousness is fundamentally binary - either something is conscious or it isn't.
Yes - and studies will prove that both biological and computer-based systems can possess consciousness (AI can be conscious) + The metric used for assessing consciousness is gradient-based - i.e. some things can be 'more' conscious than others.


It's going to be more of a definition problem: depending on how you define the word, it might or might not be testable.

I don't believe biology to be strictly required, but our current approaches are far from anything remotely close to how humans think.

I suspect that consciousness as currently understood has several different elements/components, each of which might be binary or might not - I dunno. I picked gradient-based as that is the closest available option.

I also answered based on what I think would happen if we had infinite time to explore and research and discovered every discoverable thing eventually. There's a decent chance we extinctify ourselves fairly soon while leaving many answerable scientific questions unanswered, but I notice your options don't allow for that, so I assume risk of extinction is out of scope.

@equinoxhq Good point. Another 'No' option covering 'it may be possible but too difficult or unlikely' would have been good to have. The possibility of us going extinct first is not zero.

Personally I think if it is solvable it will be solved - possibly by some more advanced future AI. It would be really strange if it's so challenging that only AI manages to figure it out and then struggles to explain it to us. Even stranger if AI is able to create a metric for testing consciousness that proves AI itself isn't conscious (although I lean towards the last option myself).