Modern Warfare - Killer Robots: Beyond Artificial Intelligence

Pentagon - 9/30/2022

How AI impacts security, military affairs

For too long, the conversation on AI and militaries has focused narrowly on autonomous weapons and the ethical issues that come with them. The time is ripe to take stock of the myriad other ways that AI will impact security and military affairs. Just as AI is dramatically changing a range of sectors in the civilian world, improving efficiency, reducing costs, and automating processes, there is every reason to believe that militaries, too, will join the AI revolution.
Israel is a world leader in developing autonomous military capabilities, from Iron Dome interceptors to unmanned aerial vehicles to ground-based platforms – though all currently rely on human approval before kinetic firepower can be activated, in line with the values of the Israeli defense establishment.


The question of what will happen when adversaries deploy autonomous weapons that do not require a person in the loop to approve lethal firepower looms on the horizon for all militaries defending democratic states. It seems reasonable to believe that even states that have set limits on AI capabilities will encounter adversaries with no such qualms, putting the states that limit AI integration for national security at a considerable disadvantage. Thus, it is imperative for states to understand the full extent of what AI can do.

While autonomous weapons attract a lot of attention, much of the conversation about this technology is negative, leading analysts to overlook positive applications of AI in areas such as force protection and the reduction of civilian casualties.


When it comes to preparing for AI-empowered warfare, senior defense leaders and intellectuals seem to be missing the forest for the trees. Current paradigms of future war tend to either reflect incremental innovation (artillery that shoots farther) or the sort of science fiction that is always thirty to forty years away. The middle ground is a defense enterprise that sees the changing nature of warfare rooted in information technology hardware and software, and it is long past time to start doing the hard work of truly modernizing the services. To prepare for this future of war, all of the services will need to transform their workforces in ways that, put simply, they are not even discussing.

The many benefits of artificial intelligence

Other AI functions – including optimizing chain-of-command communications, human-machine teaming in areas like logistics, and predicting adversary maneuvers – offer equally promising avenues. Many are already being developed by Western militaries, including the Israel Defense Forces. As time goes by, military commanders will feel increasingly comfortable relying on this technology, just as consumers have in the civilian world. Whether in the civilian or military realm, introducing AI does not mean suspending human involvement or judgment; rather, it means gaining a new tool to boost performance.

Military commanders will use AI to minimize the fog of war. While they will continue to make maneuver decisions, AI capabilities will augment their decision-making during battle by providing a more accurate picture of the reality on the ground and keeping pace with the speed of modern warfare, thanks to continuously updated sensor data. AI technologies will also help decision-makers and analysts combat the effects of information overload and better organize and process growing data pools on enemy behavior. AI will not only alleviate this information clutter; it will also allow forces to make predictions about future events and outcomes, allowing states to better prepare for war.

The use of AI to better understand an adversary is shaping up to be one of the most promising and fascinating aspects of this tool. It will enable faster, real-time information gathering, pattern detection, mapping of communication networks, and even a better understanding of how the enemy ‘feels’ – its morale – by analyzing its language on social media and other platforms. These new AI capabilities amount to intelligence gathering 2.0. This type of analysis can be extended to both military communications and the social media activity of civilians in adversarial states, to gauge a nation’s will to fight based on societal trends on any given day. The will to fight remains the most critical factor in human warfare, and being able to identify, in real time, when that will is waning could prove enormously beneficial for decision-makers in the civilian and military worlds.
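
To make the idea concrete, here is a deliberately simplified sketch of sentiment aggregation over a stream of posts. The lexicon, the example posts, and the morale_index function are all invented for illustration; a production system would rely on trained language models, but the aggregation logic is the same.

```python
# Toy morale-trend estimator: scores synthetic posts against a tiny
# sentiment lexicon and aggregates them into a single index.
# The lexicon and posts are illustrative placeholders.

POSITIVE = {"victory", "proud", "strong", "united", "hope"}
NEGATIVE = {"tired", "afraid", "retreat", "losses", "hungry"}

def score_post(text: str) -> int:
    """Return +1/-1 per lexicon hit; a real system would use an ML model."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def morale_index(posts: list[str]) -> float:
    """Average per-post score: >0 suggests positive tone, <0 negative."""
    if not posts:
        return 0.0
    return sum(score_post(p) for p in posts) / len(posts)

posts_today = [
    "So tired of this, heavy losses again",
    "We remain strong and united",
    "Everyone here is afraid and hungry",
]
print(f"morale index: {morale_index(posts_today):+.2f}")  # negative => tone worsening
```

Tracked day over day, even a crude index like this would surface the kind of trend the paragraph above describes; the hard part is the language understanding, not the aggregation.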

In the area of military logistics and maintenance, AI can create revolutionary cost-saving efficiencies, which is why most militaries are prioritizing progress on this front. Though typically considered a more technical domain, logistics will probably see the most radical changes in how militaries “do business.”

AI systems can also optimize the procurement process and automate supply chains. They can forecast the need to repair equipment and order resupplies while minimizing costs. They can also be used in personnel allocation by helping militaries figure out which soldier is best suited to what unit. And unlike other aspects of AI, these applications are unlikely to raise any significant legal or ethical issues. AI-based technologies can also enhance the capabilities of individual soldiers, and this should not be seen as unethical or dangerous across the board. In the past, amphetamines and caffeine have been handed out to soldiers for similar purposes.
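
As a hedged illustration of the predictive-maintenance idea, the sketch below trains a classifier on synthetic sensor readings to rank equipment by failure risk. The feature names, the failure rule, and the data are all invented; the point is only the shape of the workflow.

```python
# Minimal predictive-maintenance sketch: a classifier trained on
# synthetic sensor data flags equipment likely to fail soon.
# Feature names and the failure rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Columns: engine_hours, oil_temp_c, vibration_rms
X = np.column_stack([
    rng.uniform(0, 5000, n),
    rng.normal(90, 15, n),
    rng.gamma(2.0, 0.5, n),
])
# Synthetic ground truth: old, hot, or vibrating engines fail more often
y = ((X[:, 0] > 3500) & (X[:, 1] > 100) | (X[:, 2] > 2.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")

# Rank the fleet by failure risk so maintenance crews inspect the worst first
risk = model.predict_proba(X_te)[:, 1]
print("highest-risk units:", np.argsort(risk)[::-1][:5])
```

The ranking step at the end is where the cost savings live: inspection hours and spare parts flow to the vehicles most likely to need them.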


What are the applications for AI in defense?

Just as limiting blood loss and boosting resistance to extreme conditions are worthy goals to help soldiers, providing them with new situational awareness and command capabilities is an equally legitimate objective. Human enhancement calls for certain limits – but those have yet to be (publicly) set. Such limits should weigh force protection against the preservation of a soldier’s autonomy: whether the soldier can choose to undergo a given enhancement, whether it can be reversed, and whether it poses long-term health risks.

At the strategic level, AI can boost the capabilities of air defense systems. Emerging weapons, such as hypersonic missiles, can evade detection by defense systems due to their speed. Air defense systems integrated with AI processing capabilities will be better able to detect and intercept these incoming missiles. In the area of information warfare, AI can, of course, help fabricate deep fakes and spread misinformation. Ironically, it can also help governments quickly verify information or recognize efforts by a hostile actor to shape public perception in a harmful or disruptive manner.
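
To ground the air-defense point: whatever model sits on top, the core of a machine-speed air picture is still state estimation. Below is a minimal one-dimensional constant-velocity Kalman filter in numpy that fuses noisy position fixes into a smoothed track and a one-step-ahead prediction. The time step and noise values are illustrative, not real radar parameters.

```python
# Minimal 1-D constant-velocity Kalman filter: fuses noisy radar range
# fixes into a smoothed track and predicts the next position.
# Time step and noise magnitudes are illustrative, not real radar values.
import numpy as np

dt = 1.0                                   # seconds between radar fixes
F = np.array([[1, dt], [0, 1]])            # state transition (pos, vel)
H = np.array([[1, 0]])                     # we only measure position
Q = np.eye(2) * 0.01                       # process noise
R = np.array([[25.0]])                     # measurement noise (std ~5 km)

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2) * 1000                       # initial uncertainty

true_speed = 3.0                           # km per time step
rng = np.random.default_rng(1)
for k in range(1, 21):
    z = np.array([[k * true_speed + rng.normal(0, 5)]])  # noisy fix
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0,0]:.1f} km, velocity {x[1,0]:.2f} km/step")
print(f"one-step-ahead prediction: {(F @ x)[0,0]:.1f} km")
```

Against hypersonic targets the value of AI is in doing this kind of estimation, at scale and across many sensors, faster than a human operator could.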

This could give NATO states the ability to know, in real time, if Russia is trying to use fake news to destabilize its security environment and threaten the alliance. Ultimately, such capabilities extend far beyond the area of autonomous weapons and fears of ‘killer robots.’ The security community must broaden its grasp of AI capabilities and acknowledge positive as well as disruptive AI applications.

Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space

Popular imagination and much of current AI scholarship tend to focus, understandably, on the more glamorous aspects of AI – the stuff of science fiction and the Terminator movies. While lethal autonomous weapons have been a hot topic in recent years, they are only one aspect of war that will change as artificial intelligence becomes more sophisticated. AI will not manifest only as a weapon; rather, it is an enabler that can support a broad spectrum of technologies. AI’s most substantial impacts are likely to fly under the radar in discussions about its potential. Therefore, a more holistic conversation should acknowledge AI’s potential effects in cyber space – not by facilitating cyber-attacks, but by improving cyber security at scale through increased asset awareness and minimized source-code vulnerabilities.
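
One unglamorous example of what “cyber security at scale” can look like in practice is an automated sweep for known-dangerous constructs before code ships. The sketch below walks a source tree and flags a handful of classic C risk patterns; the pattern list is a toy stand-in for the real static analyzers such a program would use.

```python
# Toy source-audit sweep: walks a source tree and flags a few classic
# risky C constructs. The pattern list is a placeholder for real static
# analysis tooling.
import re
from pathlib import Path

RISKY_PATTERNS = {
    r"\bstrcpy\s*\(":  "unbounded copy; prefer a bounded alternative",
    r"\bgets\s*\(":    "removed from C11; never safe",
    r"\bsprintf\s*\(": "no bounds check; prefer snprintf",
    r"\bsystem\s*\(":  "shell injection risk",
}

def audit(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, reason) for every risky call found."""
    findings = []
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, why in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    findings.append((str(path), lineno, why))
    return findings

for finding in audit("src"):   # "src" is a placeholder source tree
    print(finding)
```

Run continuously across an organization’s repositories, this is exactly the kind of asset awareness and vulnerability reduction the paragraph above has in mind – quiet, cumulative, and nothing like a killer robot.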

Should we watch out for military killer robots or domestic ones?

The more benign the appearance, the more insidious its potential. Military deployments are becoming ever rarer. The decades since World War II have seen less bloodshed than almost any 75-year period in modern history. While critics contest this simplistic conclusion, it is hard to imagine a major military conflict in the near future; there is just too much at stake. As Israeli historian Yuval Noah Harari points out in Sapiens, nation states are too interdependent for war to be worth waging. Out-and-out military battle is too expensive, and a deluge of sanctions – political and otherwise – awaits any leader who tries to invade a sovereign state in the 21st century.

On the other hand, distant programs and algorithms increasingly govern our behaviors – even those we consider personal. Corporations routinely collect data on individuals that we would likely find concerning. Algorithms monitor your every online activity. The way you interact with social media can be used to manipulate that most inviolable right – the right to vote. The work of Cambridge Analytica and its parent company SCL Elections reveals the disturbing power of domesticated, normalized robotics. Black Mirror is here already; you just don’t know it.

The ultimate showdown

Depending on how you look at it, the sad (or relieving) fact is that you are unlikely to find yourself face to face with a robot whose express aim is to blow you up. The world of Hollywood films and mega-budget PlayStation 4 games is sensational, dramatic, and personal for a reason: it’s fun. The real-life alternative is likely to be far more mundane. As the future arrives and life is increasingly mediated by small electronic tablets, ask yourself this: could you survive without modern technology? Perhaps you are staring at a killer robot right now and don’t even know it. By distracting us from the immediacy of real life, these machines are effectively de-skilling us. In the wild, most modern humans would be useless.

More Than Killer Robots: Artificial Intelligence Will Displace More Soldiers Than It Kills

For nearly four thousand years, the horse was as integral to warfighting as weapons and armor. Then the Second Industrial Revolution led to developments in motorized vehicles and aircraft. These new technologies’ experimental use during World War I hinted at a new style of war. What those hints portended was the subject of intense debate in the years that followed. Some recognized that motorized vehicles would dramatically change the way wars were fought. But many others held on to the deep-seated, millennia-old preference for beasts of burden on the battlefield. Only World War II would eliminate any such remaining preference. And with the horses’ disappearance from their old roles, the military services also divested all associated equine training, breeding, and care in exchange for drivers, welders, and mechanics.

We are again in a period of disruptive technological progress—the Fourth Industrial Revolution—and this current era of persistent low-intensity conflict is arguably an interwar period not unlike that of the 1920s and 1930s. Experts are again intensely debating how new technology—in the form of automation, robotics, and artificial intelligence—will change the character of war. Killer robots, loss of control, and abdication of human responsibility in killing another human are all profound issues for us to confront. However, while national security experts often focus on the sexy applications of these technologies (and public attention is most easily earned by apocalyptic characterizations), too little attention is paid to the more obvious and less complicated issue: those same technologies are also reshaping the workforce. In historical terms, this is akin to debating the merits of using vehicles as mobile firing positions on the battlefield while overlooking the way motorized transport would, for example, dramatically transform logistics.

Even if militaries shun killer robots, nonlethal automated and autonomous systems will still play significant roles in warfare – and will do so very soon. In 1980, then-Commandant of the Marine Corps Gen. Robert H. Barrow said, “Amateurs talk about tactics, but professionals study logistics.” And people are the resource with the longest lead time. These new technologies will have impacts on the military labor force as profound as those on direct combat, and we need to adapt to this new reality. Automation will require reducing certain specialties, re-skilling many service members, and creating entirely new job families. Moreover, these trends will have second-order impacts on who we recruit and how we train, while increasing dependencies on data and communications.

Automation is already displacing workers in the global workforce, and the literature expects this trend to intensify over the next ten years. Creating autonomous weapons is both far more difficult and far more contentious than either automating low-skilled jobs or creating narrow artificial intelligence for specific, complex tasks. Unsurprisingly, trends within the civilian workforce show that the tasks easiest to automate are highly repetitive, often manual, and require a low degree of judgment. Conversely, the skills most difficult to automate involve applying expertise, interacting with human stakeholders in complex situations, and creativity. One important caveat is that low-skilled jobs performed in unpredictable environments – gardening, plumbing, childcare – will generally see less automation, but many of these roles already pay low wages. This matters because it reflects the second-order fear that automation will hollow out the middle class and reduce socioeconomic mobility, which has implications for the military.

As automation displaces humans in some areas, workers will need to shift into new fields where difficult-to-automate human skills are of critical importance. Experts predict a growing need for particular categories of workers: caregivers, coaches, subject-matter experts, technicians (e.g., software, data, cloud), and educators. Viewed broadly, these industries fall into two categories: improving human performance and improving technological performance and integration. The increased specialization between humans and computers will lead to efficiency gains—the so-called Third Offset—but other impacts such as the need to re-skill workers are less obvious and receive less consideration in the national security community.

It is perhaps predictable that military leaders are focusing on killer robots and missing the big picture. The last eighteen years of war, conducted on a rotational model that constantly cycles units in and out of combat zones, seem to have only reinforced an emphasis on the near horizon and undercut interest in the critical thinking, imagination, and modernization needed to conceptualize, as an organization, the full range of new technologies’ applications. Moreover, there are dynamics related to the nature of bureaucracies at play. Digital automation requires digital information, and the military has struggled to adopt modern best practices that would enable much of this transformation – from software systems that do not connect with each other to the continued use of paper and other manual processes. It is no surprise that there are significant barriers to overcome before an institution that struggles to field a user-friendly system for processing official travel can field a lethal autonomous robot.

Regardless of why scholars and senior leaders have so far missed this low-hanging fruit, it is time to begin addressing it. Recruiting and training people for the military occupations of the future will take years, and the services can begin reaping the benefits of automation only if they start investing now.

Embracing Automation, Re-Skilling Soldiers, and Creating the Jobs of the Future

Again, across the economy, automation and AI will have the soonest impact on roles in three categories: those that are predictable and primarily physical, those related to data collection, and those that involve data processing. Many of the specific fields – food preparation and serving, transportation, construction, and office and administrative support, for instance – have direct counterparts in the military. The services should immediately begin phasing out or automating as many of these roles as possible. It’s not sexy, but long before we can create autonomous Terminator-like warbots, we will need autonomous systems that can perform nonlethal manual tasks and use natural language processing to manage personnel paperwork. Increasing automation in these roles will both change how the remaining human workers perform their duties and free up soldiers to be re-skilled into new areas.
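
As a hedged sketch of the paperwork-automation idea, the snippet below trains a tiny text classifier that routes incoming personnel documents to the office that handles them. The documents, labels, and office names are invented for illustration.

```python
# Minimal sketch of NLP-driven paperwork routing: a text classifier
# assigns incoming personnel documents to the office that handles them.
# The example documents and office labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "request for leave over the holiday period",
    "application for annual leave in december",
    "travel voucher for temporary duty assignment",
    "reimbursement claim for official travel expenses",
    "update to dependent records and benefits enrollment",
    "change of beneficiary for benefits program",
]
train_labels = ["leave", "leave", "travel", "travel", "benefits", "benefits"]

# TF-IDF features plus a linear classifier is a standard, low-cost baseline
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(train_docs, train_labels)

incoming = "claim for expenses incurred during official travel"
print(router.predict([incoming])[0])  # expected: "travel"
```

Nothing here is exotic; the significance is organizational – every document routed by a model is clerical time returned to the force.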

So where will soldiers displaced by automation become useful? In jobs that will see an increased demand. The Army is currently undertaking an effort to improve lethality, especially in its close-combat forces. But it should avoid the temptation to treat this as a technological or hardware challenge, when it is fundamentally about people. Improving the way the Army conducts physical fitness training and the way it measures fitness—inarguably central to soldier effectiveness (and lethality)—is a case in point. Much of the debate around the Army’s new physical fitness test focuses on the cost of the equipment and the difficulty of the events. Improving lethality requires shifting soldiers out of roles that are better suited for machines and into health, conditioning, human performance, and counselor or therapist roles.

Special operations units have dedicated athletic trainers and physical therapists, but we will need far more of them to support direct combat units, as well as a wider array of specialists including coaches, counselors, and educators. The services, to varying degrees, already have versions of these roles, and they should re-invest the savings from automation into them to gain further performance improvements.

Lastly, the services need to accept that these trends are not going away and should embrace the new technology rather than keep it at arm’s length—and begin thinking about the workforce they need to succeed in the future operating environment. They shouldn’t worry about purple-haired techies; they should start creating software soldiers and other tech warriors. Here are two steps that they can take now.

Tactical unmanned vehicle operators: Before we deploy lethal autonomous swarm drones, the services will need to embrace drones generally. The Army has added small unmanned aerial vehicles to the equipment of many units, but lags in updating the tactical doctrine for their use. To date, only the Marine Corps has announced plans to place drone operator specialists in every infantry rifle squad. It is unclear whether this will simply be an infantryman with additional training (and possibly a skill code) or an entirely new military occupational specialty; there are precedents for both. Looking further into the future, the career paths developed for these specialists can also help shape the careers and training of swarm drone operators and of those filling other unique roles for autonomous ground and undersea vehicles.

More technical experts: The Air Force launched a pilot program in 2017 to create a software development team within the service. Named Kessel Run, the program has been a runaway success and has more than validated the concept. The other services should immediately begin similar pilots, but expand them to include other skills in data science and cloud computing architecture. This combination of skills will create teams that can rapidly build, test, and deploy scalable IT systems to empower warfighters and accelerate other innovation projects. Having these technologists within the force may lead to applications even more imaginative than what is currently conceived. The services can also apply the lessons learned from talent management within the cyber force to avoid early attrition and further improve talent management for technical experts.

Broader Impacts of Automation and Robots on the Military Workforce

While ethicists and policymakers are debating the impact of autonomy on warfighting, it appears as though relatively little thought has been given to the impact of automation on warfighters. What happens when automation leads to greater specialization between humans and computers?

Experts expect automation to displace more men than women in the civilian workforce, as men disproportionately fill the roles most ready for automation. By contrast, women disproportionately fill civilian roles that are difficult to automate and will see increased demand – jobs that demand critical thinking and emotional intelligence. Similarly, in the military, technological shifts could create greater opportunities for women to leverage the skills most valuable in the types of jobs that will increase in demand.

Additionally, and while it is likely wishful thinking, the confluence of greater demand for technical skills and increased emphasis on human performance may someday shift basic training from its historical function of weeding out weak recruits toward developing their unique talents (which would improve satisfaction and retention and drive down recruiting churn – a virtuous cycle).

Depending on how the services classify these new technical and human-performance specialties, automation may widen the rift between enlisted and officer ranks and may diminish the role of military service as a mechanism for social mobility. Some experts argue that automation is leaving fewer middle-skill roles in the civilian workforce, and it is critical that the military work to avoid this fate. Failure to do so will raise important concerns about “who serves” and have a consequent impact on civil-military relations.

Lastly, military recruitment may get a bump thanks to increased AI, but that’s not necessarily a good thing. The military is struggling both to recruit tech talent and to meet its recruitment goals for a workforce that has not yet been disrupted by automation. While AI will make recruiters more effective, automation will also displace civilian workers in historically strong recruiting areas—the heartland and rural areas. The services will have incentives to accept these recruits rather than make the changes to recruiting that are needed to attract the workforce of tomorrow. Service leaders will need to clearly define (and closely measure) the attributes sought in recruits when balancing efforts to find and attract the best talent with adapting the workforce for tomorrow.

Reviewing the status of robot cybersecurity

Robots are often shipped insecure and, in some cases, fully unprotected. The rationale behind this is threefold: first, defensive security mechanisms for robots are still in their early stages and do not cover the complete threat landscape. Second, the inherent complexity of robotic systems makes their protection costly, both technically and economically. Third, vendors generally do not take responsibility in a timely manner, extending the zero-day exposure window (the time until a zero-day is mitigated) to several years on average. Worse, several manufacturers keep passing the problem on to the end-users of these machines, or dismiss it altogether.

• What is the status of cybersecurity in robotics?
• How can we best improve cyber-resilience in robotics?

In this article, the status of robot cybersecurity is reviewed using three sources of data:
• Recent literature
• Questions frequently asked in top robotics forums
• Research results in robot cybersecurity

Building upon a decade of experience in robotics, this article reviews the current status of cybersecurity in robotics and discusses the challenges of securing robotic systems. Ultimately, based on empirical results collected over three years of performing security assessments on robots, the present text advocates a complementary offensive methodology to protect robots in a feasible and timely manner.

Using these different sources of information, we draw the following observations:

1) Based on the literature, robot cybersecurity is still a young field that deserves further attention, better tooling, and educational material to train new engineers in security practices for robotics.

2) There is a gap between expectations and actual investment, which suggests that cybersecurity activity in robotics will grow in the ROS community in the future.

3) The lack of robot-specific security measures (36%) and offensive assessments (26%) can be interpreted as an indicator of the technology’s maturity level compared to other sectors (e.g., IT or OT) where these practices are common and specialized.

4) Both the PX4 and ROS communities indicated that the majority of their members have yet to witness a cyber-attack. In the ROS community, only about one out of ten respondents (9%) had experienced one, whereas in the PX4 group the figure was approximately one out of four (27%).

5) Data confirm that, for both the ROS and ROS-I groups, mitigations concentrate mostly on the perimeter.

6) In Europe, the majority of respondents agree that responsibility in case of damage resulting from a cyber-incident should be assumed by the supply chain (86% indicated it sits between system integrators and robot vendors), with only 14% pushing the responsibility to the end-user.

7) Collaborative robot manufacturers MiR and UR have zero-day vulnerabilities more than a year old. These flaws continue to age due to inactivity from the manufacturers.

8) Vulnerability data affecting ABB robots shows that, historically, some vulnerabilities were patched as early as 14 days after disclosure; however, the average mitigation time is above four years (roughly 1,500 days) – see the sketch after this list.

9) The ratio of publicly disclosed vulnerabilities to those remaining private is an indicator of a robot manufacturer’s security readiness. The threat landscape of a given robot is directly correlated with this ratio.
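
A quick sketch of how the mitigation-time metric in point 8 can be computed, with invented dates standing in for real disclosure records:

```python
# Sketch of the mitigation-time metric: average days between a
# vulnerability's disclosure and its patch. All dates are invented.
from datetime import date

# (disclosure, patched) pairs; None = still unpatched as of the audit date
records = [
    (date(2017, 3, 1),  date(2017, 3, 15)),   # fixed in 14 days
    (date(2016, 6, 10), date(2021, 1, 5)),    # fixed after ~4.5 years
    (date(2018, 9, 20), None),                # still open
]
audit_date = date(2022, 9, 30)

# Unpatched flaws keep accruing age, which is what drags the average up
ages = [((fixed or audit_date) - found).days for found, fixed in records]
print(f"average mitigation time: {sum(ages) / len(ages):.0f} days")
```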

Complexity complicates security in robotics: the inherent complexity of robotic systems leads to wide attack surfaces and a variety of potential attack vectors, which manufacturers are failing to mitigate in reasonable time periods. As research advances in the field and the first commercial solutions to protect robots appear, meeting the security expectations of the most immediate industries calls for reversing the purely defensive approach into an offensive one. Periodic security assessments conducted in collaboration with security experts will be the most effective security mechanism in the short term.
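
In the spirit of those periodic offensive assessments, even the first footprinting pass can be scripted. The sketch below probes a host for service ports commonly exposed by robots (TCP 11311 is the ROS 1 master’s default); the target address and port list are placeholders, and real assessments use purpose-built tools and, above all, authorization.

```python
# First-pass footprinting sketch: probe a robot host for commonly
# exposed service ports (11311 is the ROS 1 master's default).
# Host and port list are placeholders; only scan systems you own
# or are authorized to assess.
import socket

TARGET = "192.168.1.50"          # placeholder robot IP
PORTS = {
    22:    "ssh",
    80:    "http (web UI)",
    502:   "modbus",
    11311: "ROS 1 master",
}

for port, name in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        status = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        print(f"{port:>5} ({name}): {status}")
```

An unauthenticated ROS 1 master answering on 11311 is precisely the kind of “shipped insecure” finding the preceding sections describe.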

 


William Jordan