Research Challenges

Following up on the work of the NATO IST-152 Research Task Group, we have identified 13 major research challenges that describe the advances needed to create, test, and deploy effective AICA agents.

These include one non-technical challenge that belongs more to the social sciences and bears on Law, Ethics, Doctrines and Society.

These challenges are grouped into four classes that can initially help organise the IWG’s four technical subcommittees:

  • Infrastructure, Architecture & Engineering challenges;
  • Individual & Collective Decision Making challenges;
  • Stealth & Resilience challenges;
  • Societal challenges.

Figure: AICA – the 13 research challenges

INFRASTRUCTURE, ARCHITECTURE & ENGINEERING CHALLENGES

1.     Agents’ architecture of reference

Originally inspired by Russell & Norvig, the AICA Reference Architecture (AICARA) elaborated by the NATO IST-152 RTG in 2016-2019 is today at a preliminary stage of definition; it must be further detailed, precisely specified and justified on the basis of contexts of operation and use cases. This stream of research must elaborate the set of requirements needed to standardise the engineering of AICA agents and Multi Agent Systems, to the benefit of research and industry. It will also serve as a basis for other streams of research and technology, such as agents’ engineering and certification.
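As a purely illustrative aid, the sketch below shows what the sense/assess/plan/act decomposition of such a reference architecture might look like as code; the class and method names are our own placeholders, not part of AICARA.

```python
# Minimal sketch of an AICARA-style agent skeleton (illustrative only;
# class and method names are assumptions, not the standardised architecture).
from dataclasses import dataclass, field


@dataclass
class Percept:
    """A raw observation collected from the host environment."""
    source: str
    payload: dict


@dataclass
class AICAAgent:
    """Skeleton of the sense/understand/decide/act pipeline."""
    knowledge: dict = field(default_factory=dict)  # formal mental models would live here

    def sense(self, environment: dict) -> list[Percept]:
        # Data acquisition from sensors embedded in the host system.
        return [Percept(source=s, payload=environment[s]) for s in environment]

    def assess(self, percepts: list[Percept]) -> dict:
        # Situation awareness: fuse percepts into a world-state estimate.
        return {"suspicious": [p.source for p in percepts if p.payload.get("anomaly")]}

    def plan(self, situation: dict) -> list[str]:
        # Action planning: propose candidate responses.
        return ["isolate_host"] if situation["suspicious"] else ["keep_watching"]

    def act(self, actions: list[str]) -> None:
        # Action activation: execute the selected plan.
        for action in actions:
            print(f"executing: {action}")


agent = AICAAgent()
env = {"netflow": {"anomaly": True}, "syslog": {"anomaly": False}}
agent.act(agent.plan(agent.assess(agent.sense(env))))
```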

2.     Agents’ engineering & certification

Engineering AICA agents or AICA Multi Agent Systems requires 1) inscribing the work within the engineering process of the complex or autonomous systems in which the agents are to be embedded; 2) following a specific methodology that takes account of the agents’ nature, purpose, architecture of reference and context of operation; and 3) building to recognised standards that assure interoperability, quality and security. Besides standardisation (we could think of working with IEEE FIPA), agents’ qualification, and perhaps certification, are a priori extremely complex. This stream of research should build the required engineering platform and elaborate ad hoc guidelines. It will serve as a basis for, and a receptacle of, results from other streams of research such as Testability & At-scale Simulation.

3.     Testability & At-scale Simulation

AICA agents’ possibilities and limitations must be carefully and thoroughly tested and evaluated. Test data sets and protocols need to be created to guarantee that this technology is reliable, safe and efficient, including in dubious and problematic conditions of operation. Besides, given the types of host environments in which agents will operate, at-scale simulation will become a necessity. At-scale means that simulated host environments should scale up to 10⁶ Things, as in the Internet of Battle Things for instance. This sort of size implies developing cyber range platforms to new technological standards, both in terms of high-power computing and in terms of the tooling required for these platforms’ usability. How can new networks of Things at the 10⁶ scale be created quickly enough? How can data and process flows reflecting the real life of such massive systems be generated quickly? How can experiments be controlled within such large simulated topologies? How can cyber-attacks with a wide variety of strategies and technologies be injected? How can tests be reproduced in order to verify their protocols and the scientific validity of the results obtained? How can the phenomena that occur within be visualised and the results analysed? And so on.
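To give a sense of scale, the following sketch procedurally generates a roughly scale-free topology of 10⁶ simulated Things; all parameters are illustrative assumptions, and a production cyber range would also attach device profiles, traffic generators and attack injectors to each node.

```python
# Illustrative sketch: procedurally generating a large "Internet of Battle
# Things" topology for at-scale simulation (all parameters are assumptions).
import random


def generate_topology(n_things: int, avg_degree: int = 4, seed: int = 0) -> list[list[int]]:
    """Return a sparse adjacency list for n_things simulated devices.

    Only the graph structure is built here, to show that 10**6 nodes is
    tractable; a real platform would need compiled code and far richer
    per-node state.
    """
    rng = random.Random(seed)
    adjacency: list[list[int]] = [[] for _ in range(n_things)]
    for node in range(1, n_things):
        # Attach each new node to a few existing ones (roughly scale-free).
        for _ in range(min(avg_degree, node)):
            peer = rng.randrange(node)
            adjacency[node].append(peer)
            adjacency[peer].append(node)
    return adjacency


topology = generate_topology(10**6)  # ~4 million links; seconds on a laptop
print(len(topology), "things;", sum(map(len, topology)) // 2, "links")
```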

4.     Implementation and compatibility technologies

There are several aspects to this particular challenge. The first is agents’ “specialisation”. As anticipated by the IST-152 report, AICA agents might be implemented in a variety of ways. One is a single, full agent that hosts all the functions and data required to detect and beat malware. Another is a swarm of full agents that develops capacities superior to those of a single agent. Yet another might be a community of specialised agents working together in swarms, such as (but not necessarily) a detection agent, another interpreting data, a third making decisions about countermeasures, and so on. Finally, some agents might be hardware and some software; some might be elements of the host environment itself, while their “colleagues” are full or specialised agents. The second aspect is AICA agents’ compatibility with host systems. The latter, at the present moment, are certainly not designed to accept our friendly agents, especially if those are to patrol networks and systems, constantly moving from one spot to another. It is likely that the engineering of host systems itself will be impacted by the concept of AICA agents. The third aspect is agents’ compatibility with cybersecurity systems and devices. As of today, agents would be stopped by the first firewall or flagged as adversarial by an IDS. AICA agents must be able to function despite cybersecurity devices, software and procedures, or they will not operate at all. Like host systems, cybersecurity systems may have to evolve in order to allow AICA agents to function.
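The “community of specialised agents” option could be pictured as a pipeline such as the hypothetical sketch below; the three roles mirror the example given above, and every name in it is ours.

```python
# Hypothetical sketch of a community of specialised agents cooperating in a
# pipeline (detector -> interpreter -> decider); all names are placeholders.
class DetectionAgent:
    def run(self, telemetry: dict) -> list[str]:
        # Flag services matching a (toy) indicator of compromise.
        return [svc for svc, info in telemetry.items() if info.get("beaconing")]


class InterpretationAgent:
    def run(self, alerts: list[str]) -> dict:
        # Turn raw alerts into an assessed incident.
        return {"incident": bool(alerts), "affected": alerts}


class CountermeasureAgent:
    def run(self, assessment: dict) -> list[str]:
        # Decide on countermeasures for each affected service.
        return [f"quarantine:{svc}" for svc in assessment["affected"]]


telemetry = {"svc_a": {"beaconing": True}, "svc_b": {"beaconing": False}}
data = telemetry
for specialist in [DetectionAgent(), InterpretationAgent(), CountermeasureAgent()]:
    data = specialist.run(data)
print(data)  # ['quarantine:svc_a']
```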

5.     Autonomous self-engineering and self-assurance

AICA agents will be embedded into complex networks and autonomous systems, as well as into very simple devices such as sensors dispatched on the battleground. This may hold true for very long periods of time, during which they might have no communication between themselves, with a central cyber C2, or with human operators. Without any possibility of updating their algorithms and databases, their efficiency might then decline quickly and drastically. Under such an assumption, it is vital that agents be equipped with the capability to develop their own functions in order to adapt to new conditions of operation. They also need a capacity for self-assurance, in order to guarantee their quality, reliability, integrity, etc. Such capacities would also apply in the context of autonomous learning, which would feed agents’ databases with new elements and possibly create the need to develop new algorithms.
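One elementary building block of self-assurance could be an integrity self-check, sketched below under the assumption that the agent can hash its own artefacts against digests recorded at deployment time; the file names are placeholders.

```python
# Minimal sketch of a self-assurance check: the agent re-verifies the
# integrity of its own code and knowledge base against digests recorded
# at deployment time (file layout and names are assumptions).
import hashlib
from pathlib import Path


def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def self_check(baseline: dict[str, str]) -> list[str]:
    """Return the agent artefacts whose digest no longer matches."""
    return [name for name, expected in baseline.items()
            if not Path(name).exists() or digest(Path(name)) != expected]


# At deployment: record trusted digests of the agent's own artefacts.
baseline = {p: digest(Path(p)) for p in ["agent.py", "knowledge.db"] if Path(p).exists()}

# In the field, with no link to a C2: re-verify before each mission cycle.
tampered = self_check(baseline)
if tampered:
    print("integrity violation, entering safe mode:", tampered)
```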

INDIVIDUAL & COLLECTIVE DECISION MAKING CHALLENGES

6.     Cyber battle modelling and formal graphs

It is anticipated that, besides “classic”, more or less basic malware detection and remediation endeavours, our AICA agents will have to fight Autonomous Intelligent Malware (AIM), created on similar conceptual and technical grounds, in tactical cyber battles. In both contexts, but especially in tactical cyber battles, AICA agents will fight AIM successfully only if they have adequate “mental models”, i.e. formal representations of malware/AIM circumstances, locations, tactics, techniques, procedures, technologies and data. Similarly, AICA agents will need the same kind of mental models of their own circumstances and behaviours. Mental models should also formally depict the relationship between the former and the latter. Formal mental models can be imagined as system and propagation maps, and as process and/or data models. Current research on formal attack graphs should be extended to provide the agents’ “brain” with the required knowledge, skills and routines (following Rasmussen and others’ studies of the cognitive skills experts use to resolve issues).
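A minimal sketch of one such formal mental model is given below: attacker states as nodes, techniques as edges, and reachability computed by graph search. The graph content is invented purely for illustration.

```python
# Sketch of a formal attack graph as a "mental model": nodes are attacker
# states, edges are techniques; the agent computes which assets the
# adversary can still reach (graph content is invented for illustration).
from collections import deque

# state -> list of (technique, next_state)
attack_graph = {
    "foothold":     [("lateral_movement", "file_server")],
    "file_server":  [("credential_theft", "domain_admin")],
    "domain_admin": [("exfiltration", "crown_jewels")],
}


def reachable(graph: dict, start: str) -> set[str]:
    """Breadth-first search over the attack graph from the attacker's state."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for _technique, nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


print(reachable(attack_graph, "foothold"))
# A defensive action can be modelled as deleting an edge and re-running
# the search to measure how much of the graph it cuts off.
```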

7.     Agents’ individual decision making

AICA agents’ decision making will be a key to their trustworthiness. AICA agents’ decision-making process, as described in the IST-152 report for instance, is a sequence of functions: sensing/data acquisition, situation awareness, action planning, action selection, action activation. In order to address a wide variety of tactical and technical situations, and in order to avoid generating uncontrollable, disastrous consequences as far as feasible, AICA agents need individual decision-making mechanisms that mimic human decision making. To that end, they will require not just ML algorithms that suggest decisions or associate a reaction to a stimulus, but a “deep decision-making” process resembling human cognition in action, plastic enough to constantly adjust to and take account of complex circumstances. This process will rely upon knowledge databases storing formal mental models of cyber battle tactics and techniques. The functions of the deep decision-making process will be implemented via AI and non-AI techniques such as ML, game theory and formal attack graphs; these functions will work sequentially or in cooperation/combination. Deep decision-making processes, and their functions, are a new, complex focus of the cognitive sciences. Moreover, both the decisions made and the decision-making process should be explainable to human decision makers.
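As one hedged illustration of the “action selection” step alone, the toy sketch below scores candidate plans by success-weighted benefit minus collateral risk, and keeps the scores so that a human can audit the choice; the plans and numbers are invented.

```python
# Toy sketch of the action-selection step: pick the candidate plan with the
# best expected utility (all probabilities and utilities are invented).

def select_action(candidates: dict[str, dict[str, float]]) -> str:
    """Pick the plan maximising success-weighted benefit minus collateral risk."""
    def expected_utility(plan: str) -> float:
        c = candidates[plan]
        return c["p_success"] * c["benefit"] - c["collateral_risk"]
    return max(candidates, key=expected_utility)


candidate_plans = {
    "kill_process":   {"p_success": 0.9, "benefit": 5.0, "collateral_risk": 2.0},
    "isolate_subnet": {"p_success": 0.7, "benefit": 8.0, "collateral_risk": 3.0},
    "observe_only":   {"p_success": 1.0, "benefit": 1.0, "collateral_risk": 0.0},
}
chosen = select_action(candidate_plans)

# Explainability: retain the scored alternatives so a human can audit why.
scores = {p: round(c["p_success"] * c["benefit"] - c["collateral_risk"], 2)
          for p, c in candidate_plans.items()}
print(chosen, scores)
```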

8.     Collective intelligence & decision making

For swarms or communities of AICA agents to bring superiority over single full AICA agents, they need to make collective decisions that are more effective than those of single agents. To that end, agents will need to share and/or exchange data. They will need to help one another. They will also probably need to spread the load of their calculations, data and actions. Besides, in relation to the trust challenge (challenge 12), trust will need to be built into collective decision making and intelligence.
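One possible collective-decision scheme, among many, is trust-weighted majority voting, sketched below; the weighting rule is an assumption of ours, not a prescription from the text.

```python
# Minimal sketch of one collective-decision scheme: trust-weighted majority
# voting among swarm members (the weighting rule is an assumption).
from collections import defaultdict

votes = [  # (agent_id, proposed_action, trust_weight)
    ("agent_1", "isolate_subnet", 0.9),
    ("agent_2", "isolate_subnet", 0.8),
    ("agent_3", "kill_process",   0.6),
]

tally: dict[str, float] = defaultdict(float)
for _agent, action, weight in votes:
    tally[action] += weight  # less-trusted agents count for less

decision = max(tally, key=tally.get)
print(decision)  # 'isolate_subnet'
```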

9.     Learning, loading, sizing

AICA agents will learn, and help learning, about malware, cyber battles and their own behaviours and actions. There are two ways to learn: online and off-line. Online learning will require large computing and storage capacities, while off-line learning can be done “at home”, on large computers, with the result uploaded into agents before they go to war with malware. Both possibilities have to be made available and will influence the sizing of agents. Conversely, sizing will constrain the choice of learning process, online or off-line. When agents need to be small, off-line learning might be preferred. Off-line learning and uploading before going to war may limit agents’ decision-making to simple decisions. And if agents need to be embedded into host systems for long periods of time, and their efficiency may thus decline without regular updates or learning, their lifespan might become an issue.
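The off-line path could look like the sketch below: learn “at home”, serialise the result, and check it against the agent’s size budget before upload. The model and the budget are placeholders.

```python
# Sketch of the off-line learning path: train "at home", serialise, and
# check the artefact against the agent's size budget before upload
# (the toy model and the budget are placeholders).
import pickle

# Off-line: learn a toy signature table from labelled traffic samples.
samples = [("beacon", "malicious"), ("dns_burst", "malicious"), ("http_get", "benign")]
model: dict[str, str] = {}
for feature, label in samples:
    model[feature] = label  # toy "model": feature -> label lookup

artefact = pickle.dumps(model)
SIZE_BUDGET_BYTES = 64 * 1024  # assumed budget for a small embedded agent
if len(artefact) <= SIZE_BUDGET_BYTES:
    print(f"upload ok ({len(artefact)} bytes)")  # push to agents pre-deployment
else:
    print("model too large; prune it or fall back to simpler decisions")
```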

10.  Agents’ cooperation with other entities

AICA agents may have to cooperate with other agents, a cyber C2, or human operators. Besides the issue of trust, the issues of cooperation conditions, needs, functionalities, protocols, data flows and information saturation, ergonomics, security, and continuity in case of disturbances are at the heart of this strand of research.
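As a hypothetical illustration of such a protocol, a status report from an agent to a cyber C2 might carry at least the fields below, including an explicit human-handoff flag to help manage information saturation; none of this is a defined standard.

```python
# Hypothetical agent -> cyber C2 status report; the fields are our guesses
# at what such a cooperation protocol would need, not a defined standard.
import json
import time


def make_report(agent_id: str, severity: int, summary: str, needs_human: bool) -> str:
    """Serialise a status report, flagging when a human decision is required."""
    return json.dumps({
        "agent_id": agent_id,
        "timestamp": time.time(),
        "severity": severity,       # 0-5, for triage and saturation control
        "summary": summary[:280],   # truncated to limit operator overload
        "handoff_to_human": needs_human,
    })


print(make_report("aica-07", 4, "AIM-class implant suspected on relay node", True))
```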

STEALTH & RESILIENCE CHALLENGES

11.  Agents’ stealth and resilience

AICA agents will become a primary target for enemy malware, especially if it is of the AIM class. AICA agents need to be protected and defended against attacks from the opponent. To that end, cyber threat and risk analyses will be important and could perhaps be performed by the agents themselves, when needed or on a regular basis. One can imagine that agents could have a capacity to deter enemy malware from attacking them. Including protective mechanisms, such as making agents stealthy or embedding in them the capacity to be robust to (not degraded by) attacks, is imperative. Besides, AICA agents need to be resistant to attacks when they occur. They must be able to monitor their perimeter and to detect signs of imminent attacks, individually and collectively. They must be able to fight back, individually and collectively, against attacks that target individual agents or groups of agents. They need to learn from attacks in order to reinforce their individual and collective capacity to prevent and defeat them. Threat & risk analysis, deterrence, protection, monitoring & detection, attack response, and learning & improving should be the six mechanisms of agents’ resilience. This should also cover their communications.
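The six mechanisms could be pictured as a supervisory loop, as in the placeholder sketch below; in a real agent, each step would be a full subsystem rather than a log entry.

```python
# Placeholder sketch mapping the six resilience mechanisms named above onto
# a simple supervisory loop (ordering and function bodies are assumptions).
RESILIENCE_CYCLE = [
    "threat_and_risk_analysis",
    "deterrence",
    "protection",
    "monitoring_and_detection",
    "attack_response",
    "learning_and_improving",
]


def run_cycle(agent_state: dict) -> dict:
    for mechanism in RESILIENCE_CYCLE:
        # In a real agent each mechanism is a full subsystem; here we just
        # record that it ran, to show the control flow.
        agent_state.setdefault("log", []).append(mechanism)
        if mechanism == "monitoring_and_detection" and agent_state.get("under_attack"):
            agent_state["log"].append("escalate:attack_response_priority")
    return agent_state


print(run_cycle({"under_attack": True})["log"])
```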

12.  Friend or foe? Ping, trust and social dynamics

When two agents “meet”, or when an unknown agent knocks on the door, our AICA agents must be able to discriminate good agents from bad ones. Swarms of AICA agents could thus defend themselves against intruders. They could reconfigure their cooperation liaisons when accepting a new member into the swarm. Besides, friendly agents must be able to trust one another. Untrustworthy agents must be banned or destroyed, and the associated strategies must be aligned with agents’ missions, priority interests and rules of operation. Untrustworthy agents must also be signalled to other agents so that those, in turn, avoid cooperating with them.
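One conceivable friend-or-foe mechanism is a pre-shared-key challenge-response combined with a trust score, sketched below; key distribution and the trust-update rule are open design questions and pure assumptions here.

```python
# Sketch of a friend-or-foe handshake using a pre-shared key (HMAC
# challenge-response) plus a simple trust score; key distribution and
# the trust-update rule are assumptions.
import hashlib
import hmac
import os

SHARED_KEY = b"pre-provisioned-swarm-key"  # placeholder secret


def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """What a friendly agent answers when pinged with a challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify_peer(peer_respond) -> bool:
    """Challenge an unknown agent; only holders of the key answer correctly."""
    challenge = os.urandom(32)
    return hmac.compare_digest(peer_respond(challenge), respond(challenge))


trust = {"peer_a": 0.5}
if verify_peer(respond):  # here the peer is friendly: it shares the key
    trust["peer_a"] = min(1.0, trust["peer_a"] + 0.1)
else:
    trust["peer_a"] = 0.0  # ban, and signal the intruder to the swarm
print(trust)
```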

SOCIETAL CHALLENGES

13.  Law, ethics, doctrines and society

The development of the AICA defensive technology, along with the development of similar but offensive technologies, will have societal, philosophical and legal implications, just as the UN’s committee on Lethal Autonomous Weapon Systems (LAWS) reflects today on the societal, legal, moral and philosophical aspects of that technology. The AICA technology must be trustworthy, or it will contribute to creating more chaos within our society.