Israel is deploying remote-controlled robotic guns in the West Bank, while the U.S. Department of Defense and military contractors are also focusing on integrating artificial intelligence into their technologies.
The single greatest concern lies in the incorporation of AI into weapon systems, enabling them to operate autonomously and deliver lethal force without human intervention, a Public Citizen report warned last week.

The Pentagon’s policies fall short of barring the deployment of autonomous weapons, commonly known as killer robots, programmed to make their own decisions.
Autonomous weapons inherently dehumanize the people targeted and make it easier to tolerate widespread killing, which is in violation of international human rights law, the report points out.
Yet American military contractors are developing autonomous weapons, and the introduction of AI into the Pentagon’s battlefield decision-making and weapons systems poses several risks.
Hebron's Wolf Pack system: draconian surveillance technology tested in the West Bank
As for Israel itself, a silent sentinel watches over every corner of the bustling streets of Hebron, the largest city in the West Bank, where the ancient echoes of history collide with the modern hum of daily life.
This sentinel is not a person but a network of surveillance technology ominously dubbed the Hebron Smart Apartheid City, officially known as the Wolf Pack system.

Designed by Israeli authorities, this system blankets the city in a web of cameras, sensors and even automated weapons, tracking every movement of its Palestinian residents.
Palestinians in Hebron are the most surveilled people on the planet, explains journalist and activist Mnar Adley, highlighting the omnipresence of cameras and face-scanning technology.
Adley says that the area, also known as al-Khalil to Palestinians, has become a testing ground for Israel’s surveillance apparatus, with advanced technologies like the “Wolf Pack” surveillance system in operation.

The use of AI-driven weapons in Gaza also raises questions about who bears accountability, pointed out Jessica Wolfendale, a professor of philosophy at Case Western Reserve University who studies the ethics of political violence with a focus on torture, terrorism, war, and punishment.
When autonomous weapons can make decisions or select targets without direct human input, there is a significant risk of mistaken target selection, Wolfendale said.
In such a scenario, if an autonomous weapon mistakenly kills a civilian under the belief that they were a legitimate military target, the question of accountability arises. Depending on the nature of that mistake, it could be a war crime.

Once you have some decision-making capacity located in the machine itself, it becomes much harder to say that it ought to be the humans at the top of the decision-making tree who are solely responsible, Wolfendale said.
So an accountability gap could arise that lends itself to a situation where nobody is effectively held accountable.
The Pentagon recognizes the risks and issued a DOD Directive in January 2023 explaining its policy on the development and use of autonomous and semi-autonomous functions in weapon systems.

It mentions that the use of AI capabilities in autonomous or semi-autonomous weapons systems will be consistent with the DOD AI Ethical Principles.
The directive says that individuals who authorize or direct the use of, or operate autonomous and semi-autonomous weapon systems will do so with appropriate care and under the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.
It also states that the DOD will take deliberate steps to minimize unintended bias in AI capabilities.

According to a review by Human Rights Watch and the Harvard Law School International Human Rights Clinic, the policy has several shortcomings, including that the required senior review of autonomous weapon development and deployment can be waived in cases of urgent military need.
The directive constitutes an inadequate response to the serious ethical, legal, accountability, and security concerns and risks raised by autonomous weapons systems, the review says.
It highlights that the DOD directive allows for international sales and transfers of autonomous weapons.
The directive also solely applies to the DOD and does not include other U.S. government agencies such as the Central Intelligence Agency or U.S. Customs and Border Protection, which may also utilize autonomous weapons.
There isn’t a lot of guidance in the current legal framework that specifically addresses the issues related to autonomous weapons, Wolfendale said. But sometimes, the exhilarating aspects of technology can blind us or mask the severity of the ethical issues surrounding it.

There’s a human tendency to attribute moral values to technology that obviously just don’t exist.
The focus on the ethics of deploying these systems distracts from the fact that humans remain in control of the politics of dehumanization that legitimates war and killing, and of the decision to wage war itself, said Jeremy Moses, an associate professor in the Department of Political Science and International Relations at the University of Canterbury, whose research focuses on the ethics of war and intervention.

Autonomous weapons are no more dehumanizing or contrary to human dignity than any other weapons of war. Dehumanization of the enemy will have taken place well before the deployment of any weapons in war.
Whether they are precision-guided missiles, remote-controlled drone strikes, hand grenades, bayonets, or a robotic quadruped with a gun mounted on it, the justifications to use these things to kill others will already be in place.
If political and military decision-makers are concerned about mass killing by AI systems, they can choose not to deploy them.

Regardless of whether the use is killing in war, mass surveillance, profiling, policing, or crowd control, the AI systems don’t do the work of dehumanization and they are not responsible for mass killing.
[This] is something that is always done by the humans that deploy them and it is with the decision-makers that responsibility always lies. We shouldn’t allow the technologies to distract us from that.
The Public Citizen report suggests that the United States pledge not to deploy autonomous weapons and support international efforts to negotiate a global treaty to that effect.

However, these weapons are already being developed around the world and progressing rapidly.
In the United States, the race for autonomous weapons will be driven by geopolitical rivalries and further accelerated by the military-industrial complex and its corporate contractors.
Some of these military contractors, including General Dynamics, Vigor Industrial, and Anduril Industries, are already developing unmanned tanks, submarines, and drones.
Back in Hebron, the introduction of AI technology such as the Smart Shooter remote-controlled gun has only heightened tensions in the city. Residents walk through their own neighborhoods with a sense of unease, knowing that they are always under watchful eyes.
Just as Gaza has become a laboratory and showroom for Israel’s battle-tested weapons, the success of the Hebron Smart City’s facial recognition technology and the Wolf Pack database in tracking Palestinians will allow Israel to continue profiting from its illegal military occupation of Palestine and its surveillance of Palestinian civilians.
This Automated Apartheid only further entrenches the segregation of Palestinians and expands Israel’s apartheid system and ethnic cleansing of Palestinians.
Yahoo / ABC Flash Point News 2024.