The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
-
The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.
Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.
The use of so-called "killer robots" would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.
What could go wrong? 🥶
-
If true, that's absolutely terrifying.
-
New cinema plot? Yes… ahem, no; reality:
Operator: I launched the missile.
AI: Detecting crowd.
AI: Let me, the All-Knowing Intelligence, decide that people with dark skin or beards are the most dangerous in the world. Fire at will! Kawoooom…
Operator: WTF. It killed everyone, not only the wanted Hummuz terrorist.
Software programmer team: Ooopsie, we used the wrong learning model for targets.
-
@RiveDroite said in The Pentagon is moving toward letting AI weapons autonomously decide to kill humans:
absolutely terrifying
Letting an AI decide automatically is terrifying in civilian life, too.
-
@Catweazle Good for the military. They will all say: it was not our decision to kill this or that person, it was a software error.
"The software" or a "cyber attack" is always the excuse these days.
-
@DoctorG of course… It's always terrifying.
-
@DoctorG said in The Pentagon is moving toward letting AI weapons autonomously decide to kill humans:
... Ooopsie
The last word of humanity
-
@Catweazle Yes.
-
Child: Look, Ma, a Robodog!
Mother: Get down, hide.
AI: Black person who will kidnap a child. Attention… Shoot to save the child.
Mother: Arrgh.
Child: Ma!? Mom? Maaaaaaaa!
AI: Please stand up, little boy, you are safe.
-
@DoctorG, … and so they cannot say that they have not been warned, even by the AI itself.
-
When dumb politicians, scientists, programmers, media, and the internet let AI learn the wrong way, we will get many dead people because of wrong decisions.
And you cannot make such an AI unlearn bad behaviour and bias. And no, you cannot control learning models; to believe you can would be naive.
And no, laws which "control" the learning of such AI will not have any influence; politicians and lawmakers are like little children, keeping their eyes shut whenever it suits them.
-
This is the underlying problem: AI can be a good and powerful tool, but it requires responsibility and human intelligence from those who use it. Precisely there it fails; putting it in the hands of those who understand it least, and who are oriented only towards stock-market percentages and short-term profits, is like handing out machine guns to a herd of orangutans.
The development of learning models cannot be stopped by law, but their use can be restricted for certain topics and applications.
-
When weapons think and decide, we no longer need to teach sociopaths to command, or to send young people to die for their country and the commercial world order and become "heroes".
Good? No, there are enough politicians with an appetite for killing, money, and war.
-
@DoctorG, to prevent young people from losing their lives for childish purposes, instead of using AI we should return to the good old custom that the leaders and lobbyists who start a war also fight in it on the front line. The world would surely be much more peaceful.
In general, it would not be at all wrong for politicians to be the first to suffer the consequences of their actions and decisions.