When artificial intelligence is the target and not the tool of the attack…

Cyber security | Csaba Krasznay | October 26, 2023

Background

Without artificial intelligence, the near future is inconceivable, as is probably clear to everyone by now. As is the fact that what we depend on is under attack. But how?


As we discussed in a previous blog post, there are three cybersecurity questions about AI:

  • how do we use it for cyber defence,
  • how can bad guys use it to launch cyber attacks,
  • and how can we defend the technology itself against attacks?

We will now focus on the latter, which gives us the opportunity to present The MITRE Corporation’s MITRE ATLAS™ Matrix, which summarises the threats to machine learning solutions. Why is this important? Because machine learning is the most widely used of all artificial intelligence approaches: it underpins AI-based cyber defence and almost every AI service visible to ordinary users.

As summarised in the European Union Agency for Cybersecurity’s (ENISA) publication A multilayer framework for good cybersecurity practices for AI, an attack on an AI system can take place in two phases.

The first, known as poisoning, occurs during the design and development of the data models on which the algorithm is based. The second, involving evasion and/or data theft, takes place during the deployment and monitoring phases. In a cybersecurity solution using AI, for example, the first phase would make the model learn something incorrectly, such as accepting certain malicious activities as normal. The second phase would bypass the defence controls built on AI and steal the data models behind them, allowing the attacker to learn how the defence works and to tailor the attack to the organisation.
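To make the poisoning phase concrete, here is a toy sketch of our own (not taken from the ENISA framework): a simple statistical anomaly detector learns what “normal” traffic looks like, and attacker-injected training samples teach it to accept a malicious burst as normal. The detector, the threshold and the traffic figures are all hypothetical.

```python
import statistics

def train_detector(samples):
    """Learn a simple mean/stddev profile of 'normal' activity."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, model, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, stdev = model
    return abs(value - mean) > k * stdev

# Clean training data: typical requests-per-minute from normal users.
clean = [48, 52, 50, 47, 53, 49, 51, 50]
clean_model = train_detector(clean)
print(is_anomalous(500, clean_model))     # True: a burst of 500 stands out

# Poisoning: the attacker slips high-volume samples into the training set,
# inflating the learned mean and spread so bursts count as "normal".
poisoned = clean + [400, 450, 500, 480]
poisoned_model = train_detector(poisoned)
print(is_anomalous(500, poisoned_model))  # False: the attack now blends in
```

A few malicious training points are enough to widen the model’s notion of normality until the very activity the detector was built to catch slips through.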

Of course, this remains a theoretical, rather abstract risk until we see machine learning algorithms targeted in real-world attacks. MITRE Corporation has therefore created the ATLAS™ matrix, modelled on its ATT&CK® framework, to help cybersecurity professionals protect artificial intelligence, a field probably unknown to most of them.

The matrix maps attack techniques to each step of the attack chain in the way familiar from MITRE, but the ML Model Access step that follows the Initial Access phase is a new addition and one of the specialties of this matrix. The catalogue, which currently includes 44 attack techniques, covers general attacks such as the misuse of Valid Accounts, but also lists machine-learning-specific examples such as Discover ML Model Ontology and ML Artifact Collection.

Perhaps more interesting for those on the defence side, however, is the Mitigations list of proposed protection measures. We have to admit that this list is rather short and general: in total, MITRE’s experts recommend 19 protection measures, including user education and vulnerability testing. However, ML-specific proposals such as Sanitize Training Data or Restrict Number of ML Model Queries can offer a genuinely new perspective on protecting models. One thing is certain: this list will grow quickly in the coming years, so it is worth checking the ATLAS website regularly.
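As a sketch of what Restrict Number of ML Model Queries could look like in practice (the class, the cap and the toy model below are our own illustrative assumptions, not part of ATLAS), a per-client query budget placed in front of a model makes the high-volume probing behind model extraction and evasion noticeably harder:

```python
from collections import defaultdict

class QueryLimiter:
    """Cap the number of predictions each client may request, slowing
    down extraction and evasion attacks that rely on probing the model
    at high volume."""

    def __init__(self, max_queries=100):
        self.max_queries = max_queries
        self.counts = defaultdict(int)  # queries issued so far, per client

    def query(self, client_id, features, model):
        self.counts[client_id] += 1
        if self.counts[client_id] > self.max_queries:
            raise PermissionError(f"query budget exhausted for {client_id}")
        return model(features)

# Hypothetical stand-in for a real classifier.
def toy_model(features):
    return "malicious" if sum(features) > 10 else "benign"

limiter = QueryLimiter(max_queries=3)
for _ in range(3):
    print(limiter.query("probe-bot", [1, 2], toy_model))  # "benign" x3
# A fourth query from the same client raises PermissionError.
```

In a real deployment the budget would typically be enforced at the API gateway and combined with authentication, so that an attacker cannot simply rotate client identities.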

As Sun Tzu wrote in The Art of War, 2500 years ago,

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

Cybersecurity professionals working in the age of artificial intelligence currently find themselves in the situation described in the last sentence. We do not know the enemy, that is, what attacks will be launched against ML, and we do not know ourselves either, as AI is a completely new field that most of us are not yet trained in. MITRE’s matrix can provide some help in this regard, and we think it should be required reading for every expert.

In any case, until this knowledge finds its way into everyday practice, it is worth working with managed security service providers who already deal with the opportunities and threats of AI on a daily basis. And time may, perhaps, solve this problem.

Written by: Csaba Krasznay

