We Shouldn't Try to Make Conscious Software--Until We Should

Wed, 18 May 2022 03:45:00 GMT
Scientific American - Technology

Eventually, the most ethical option might be to divert all resources toward building very happy...

With a good theory of consciousness, we could create a measure that determines whether something that cannot speak for itself is conscious, based on how it works and what it is made of.

The three most popular theories of consciousness, including global workspace theory, fundamentally disagree about whether, or under what conditions, a computer could be conscious.

Depending on which theory turns out to be correct, there are three possibilities: computers will never be conscious, they might be conscious someday, or some already are.

Very few people are deliberately trying to make conscious machines or software.

Most scholars, whatever ethical theory they endorse, believe that the capacity to experience pleasant or unpleasant conscious states is a key feature that makes an entity worthy of moral consideration.

If we make computers that can experience positive and negative conscious states, what ethical obligations would we then have to them? A computer or piece of software that could experience joy or suffering would deserve moral consideration in its own right.

Making a conscious machine do work it is miserable doing would be ethically problematic.

If someone has a computer running a conscious software instance, would we then be ethically obligated to keep it running forever?

What if we were ethically obligated to keep every instance of conscious software running, even during development? This might be unavoidable: computer modeling is a valuable way to explore and test theories in psychology.

Dabbling in conscious software could quickly become a large computational and energy burden with no clear end.