
Robot ethics runs into a dilemma: how should robots make ethical choices?

Can we teach robots ethics?

Introduction: In the future there may be occasions when a driverless car has to make a choice - which way to swerve, who to harm, or who to risk harming. What kind of ethics should we programme into the car?

We are not used to the idea of machines making ethical decisions, but the day when they will routinely do this - by themselves - is fast approaching. So how, asks the BBC’s David Edmonds, will we teach them to do the right thing?

The car arrives at your home bang on schedule at 8am to take you to work. You climb into the back seat and remove your electronic reading device from your briefcase to scan the news. There has never been trouble on the journey before: there’s usually little congestion. But today something unusual and terrible occurs: two children, wrestling playfully on a grassy bank, roll on to the road in front of you. There’s no time to brake. But if the car skidded to the left it would hit an oncoming motorbike.

Neither outcome is good, but which is least bad?

The year is 2027, and there’s something else you should know. The car has no driver.

In the future there may be a few occasions when the driverless car does have to make a choice - which way to swerve, who to harm, or who to risk harming? What kind of ethics should we programme into the car?
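
To make the question concrete, here is a minimal sketch in Python of one naive form an on-board "ethics policy" could take. Everything in it - the manoeuvre names, the harm estimates and the minimise-expected-harm rule itself - is an invented illustration, not a description of how any real driverless car is programmed.

# Hypothetical sketch only: names, numbers and the "minimise expected
# harm" rule are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str                   # what the car would do
    people_at_risk: int         # how many people could be hurt
    probability_of_harm: float  # rough estimate between 0.0 and 1.0

def least_bad(options):
    # One possible (and contested) policy: choose the manoeuvre with the
    # lowest expected harm - people at risk times probability of harm.
    return min(options, key=lambda o: o.people_at_risk * o.probability_of_harm)

choice = least_bad([
    Manoeuvre("brake and hit the two children", 2, 0.9),
    Manoeuvre("swerve into the oncoming motorbike", 1, 0.7),
])
print("Selected manoeuvre:", choice.name)

Even this toy example shows why the question is hard: under this weighting the car swerves into the motorbike, while a different weighting, or a rule that refuses to swerve at all, would pick a different victim.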

Then there’s the thorny matter of who’s going to make these ethical decisions. Will the government decide how cars make choices? Or the manufacturer? Or will it be you, the consumer? Will you be able to walk into a showroom and select the car’s ethics as you would its colour?
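
If the consumer really could choose, the setting might look as mundane as any other option on the order form. The sketch below is purely hypothetical - the profile names, and the idea that a buyer sets them at all, are assumptions drawn from the article's question, not features of any real product.

# Hypothetical sketch: a consumer-selectable "ethics setting" chosen
# alongside the colour. All profile names are invented for illustration.
from enum import Enum

class EthicsProfile(Enum):
    MINIMISE_TOTAL_HARM = "minimise harm to everyone, occupants included"
    PROTECT_OCCUPANTS = "prefer outcomes that protect the people in the car"
    FOLLOW_REGULATION = "apply whatever policy the regulator mandates"

class CarOrder:
    def __init__(self, colour, ethics):
        self.colour = colour    # chosen in the showroom today
        self.ethics = ethics    # chosen in the showroom tomorrow?

my_car = CarOrder(colour="midnight blue", ethics=EthicsProfile.PROTECT_OCCUPANTS)
print(my_car.colour, "/", my_car.ethics.value)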

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won’t make bad choices because it is angry. The autonomous car won’t get drunk, or tired, it won’t shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

Crash into two kids, or veer in front of an oncoming motorbike?

Dr Amy Rimmer said: "I don’t have to answer that question to pass a driving test, and I’m allowed to drive. So why would we dictate that the car has to have an answer to these unlikely scenarios?"

Ultimately, though, we’d better hope that our machines can be ethically programmed - because, like it or not, in the future more and more decisions that are currently taken by humans will be delegated to robots.

There are certainly reasons to worry. We may not fully understand why a robot has made a particular decision. And we need to ensure that the robot does not absorb and compound our prejudices. But there’s also a potential upside. The robot may turn out to be better at some ethical decisions than we are. It may even make us better people.

Last updated: 2017-10-20 11:39:22
