The idea of killer robots currently remains in the realm of science fiction, but it's alarming to realize that artificial intelligence experts are treating it as a genuine possibility in the near future. After years of petitioning the United Nations to take action against weaponized AI, the Future of Life Institute (FLI) has now taken matters into its own hands, with thousands of researchers, engineers and companies in the AI industry pledging not to develop or support the development of autonomous killing machines in any way.

In 2015, the FLI presented an open letter to the UN, urging the organization to impose a ban on the development of lethal autonomous weapons systems (LAWS). The letter was signed by over 1,000 robotics researchers and prominent scientists, including Elon Musk and Stephen Hawking. Two years later, the FLI and many of the same signatories sent a follow-up letter after talks had repeatedly stalled.
After another year of inaction, these industry leaders have taken a more direct approach that doesn't require the UN's input. Thousands of people have now signed a pledge declaring that "we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons."

The signatories this time around include 160 AI-related companies and organizations, and over 2,400 individuals. Among the ranks are Google DeepMind, ClearPath Robotics, the European Association for AI, the XPRIZE Foundation, Silicon Valley Robotics, University College London, and people like Elon Musk, Google Research's Jeffrey Dean and Member of UK Parliament Alex Sobel.
The specific technology the group opposes is weaponized AI systems that can identify, target and kill people entirely autonomously. That wouldn't include things like military drones, which human pilots use to identify and kill targets remotely. While that might sound like a strange distinction to make, the group argues that the latter case still keeps a human "in the loop" as a moral and ethical filter for the act.
"We the undersigned agree that the decision to take a human life should never be delegated to a machine," reads the official statement. "There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable."

When these killing machines are linked to data and surveillance platforms, the statement continues, LAWS could become powerful instruments of violence and oppression, essentially making the act of taking human lives too easy, risk-free and unaccountable. Especially problematic is the potential for those devices to fall into the wrong hands through the black market.
The aim of the pledge, it seems, is to "shame" companies and people into signing. As more of the big players jump on board, those that don't will likely come under growing scrutiny from their peers and customers until they also sign up.
Whether or not it plays out that way, it's at least encouraging to see baby steps being taken towards a future free of killer robots. The UN is set to hold its next meeting on LAWS in August.

Source: Future of Life Institute