Robot ethics hits a dilemma: how should robots make ethical choices?
Can we teach robots ethics?
Introduction: In the future there may be occasions when a driverless car has to make a choice - which way to swerve, who to harm, or who to risk harming? What kind of ethics should we programme into the car?
We are not used to the idea of machines making ethical decisions, but the day when they will routinely do this - by themselves - is fast approaching. So how, asks the BBC’s David Edmonds, will we teach them to do the right thing?
The car arrives at your home bang on schedule at 8am to take you to work. You climb into the back seat and remove your electronic reading device from your briefcase to scan the news. There has never been trouble on the journey before: there’s usually little congestion. But today something unusual and terrible occurs: two children, wrestling playfully on a grassy bank, roll on to the road in front of you. There’s no time to brake. But if the car skidded to the left it would hit an oncoming motorbike.
Neither outcome is good, but which is least bad?
The year is 2027, and there’s something else you should know. The car has no driver.
In the future there may be a few occasions when the driverless car does have to make a choice - which way to swerve, who to harm, or who to risk harming? What kind of ethics should we programme into the car?
Then there’s the thorny matter of who’s going to make these ethical decisions. Will the government decide how cars make choices? Or the manufacturer? Or will it be you, the consumer? Will you be able to walk into a showroom and select the car’s ethics as you would its colour?
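The idea of selecting a car's ethics in the showroom, as you would its colour, amounts to making the decision rule a swappable configuration. A minimal sketch of what that could look like - all names, policies, and harm scores here are hypothetical illustrations, not any real manufacturer's system:

```python
# Hypothetical sketch: a driverless car's ethics as a consumer-selectable
# policy. Harm scores and policy names are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    harm_score: float  # higher = worse, on an invented 0-1 scale

def utilitarian(outcomes):
    """Pick the option with the least total expected harm."""
    return min(outcomes, key=lambda o: o.harm_score)

def self_protective(outcomes):
    """Prefer options that do not endanger the passenger; break ties by harm."""
    safe = [o for o in outcomes if "passenger" not in o.description]
    return utilitarian(safe or outcomes)

# The buyer "selects the car's ethics" by choosing a policy, like a colour.
POLICIES = {"utilitarian": utilitarian, "self_protective": self_protective}

def decide(policy_name, outcomes):
    return POLICIES[policy_name](outcomes)

# The scenario from the article, with made-up numbers:
options = [
    Outcome("swerve left into the oncoming motorbike", harm_score=0.8),
    Outcome("continue toward the two children", harm_score=0.9),
]
print(decide("utilitarian", options).description)
```

The sketch makes the article's governance question concrete: whoever fills in the `POLICIES` table and the harm scores - government, manufacturer, or consumer - is the one actually making the ethical decision.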
One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won’t make bad choices because it is angry. The autonomous car won’t get drunk, or tired, it won’t shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.
Crash into two kids, or veer in front of an oncoming motorbike?
Dr Amy Rimmer said: "I don’t have to answer that question to pass a driving test, and I’m allowed to drive. So why would we dictate that the car has to have an answer to these unlikely scenarios?"
Ultimately, though, we’d better hope that our machines can be ethically programmed - because, like it or not, in the future more and more decisions that are currently taken by humans will be delegated to robots.
There are certainly reasons to worry. We may not fully understand why a robot has made a particular decision. And we need to ensure that the robot does not absorb and compound our prejudices. But there’s also a potential upside. The robot may turn out to be better at some ethical decisions than we are. It may even make us better people.
Last updated: 2017-10-20 11:39:22