Artificial intelligence and cybersecurity – threat or opportunity?

Csaba Krasznay · June 6, 2023

Since ChatGPT became available to all Internet users, major vendors have been announcing a series of AI-enabled products and services, including in the field of cybersecurity. We are undoubtedly entering a period of widespread AI use, which will soon have a noticeable impact on the security of cyberspace. Many are asking: is this an opportunity or a threat? It is an opportunity, because AI can ease the cybersecurity manpower shortage in the short term, help warn of potential incidents faster and more accurately, and support end-users against simple but massive cyber threats. But it is also a danger, because attackers can use it as an easy tool to improve and automate current attack scenarios, or even to develop new types of attack strategies.

The European Union Agency for Cybersecurity (ENISA) has addressed the cybersecurity aspects of AI in several publications. In its study on the cybersecurity of AI and standardisation (https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation), it identified three security dimensions:

  • cybersecurity of AI: lack of robustness and the vulnerabilities of AI models and algorithms,
  • AI to support cybersecurity: AI used as a tool/means to create advanced cybersecurity (e.g., by developing more effective security controls) and to facilitate the efforts of law enforcement and other public authorities to better respond to cybercrime,
  • malicious use of AI: malicious/adversarial use of AI to create more sophisticated types of attacks.

In its Foresight Cybersecurity Threats for 2030 report (https://www.enisa.europa.eu/publications/enisa-foresight-cybersecurity-threats-for-2030), the agency describes in detail the threats it expects in relation to this last point:

“Threat actors will try to leverage the power of AI applications to shape the decision-making outcomes and to gather information on potential victims. For example, they may tamper with training data sets to create dysfunctional and harmful AI applications – this may include crowd sourced data projects. AI can be used to sift through the mass amounts of data about individuals to correlate data points about them – the presence of this capability may lead to an increase in stalkerware. Further, attackers may use AI for offensive or criminal purposes – such as analysing user behaviour to create highly developed spear phishing or hybrid campaigns.”
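
To make the data-poisoning scenario concrete, here is a minimal sketch (in Python, with scikit-learn) of how even crude label flipping degrades a trained model. The data set, model and poisoning fractions are illustrative assumptions, not details taken from the ENISA report.

```python
# A minimal sketch of training-data poisoning via label flipping, using
# scikit-learn on synthetic data. All names and parameters here are
# illustrative assumptions, not taken from the ENISA report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a crowdsourced data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, rng):
    """Simulate an attacker flipping a fraction of the training labels."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

The point is not the specific numbers but the trend: the more of the training set the attacker controls, the worse the resulting model behaves, which is exactly why crowdsourced data projects are singled out as a target.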

The malicious use of available AI solutions can already be felt in everyday life. More targeted phishing emails, deepfake videos and high-quality chatbots are available to cybercriminals, while on the defence side organisations are still far from widespread AI adoption. This is understandable, of course, as it takes considerable time to build the right technological and human environment in an enterprise setting. A report on generative AI by the analyst firm Forrester (https://reprints2.forrester.com/#/assets/2/108/RES178876/report) also concludes that now is the time for preparation, but that we are still a long way from full deployment. The study says: “As you begin to investigate how generative AI can improve and enhance your products and services, keep in mind that you:

  • Can investigate and implement without hyperscalers or deep technical expertise.
  • Prepare for generative AI to be embedded in your first- and third-party apps.
  • Can use today’s generative AI capabilities to enhance workflows in low-risk scenarios.”

Cybersecurity is not a low-risk process, so AI is unlikely to become the primary enabler of enterprise information security processes in the next few years. This means the attacker side will hold a technological advantage over the defence side. There are, of course, exceptions where AI can be introduced into defence immediately and with minimal risk. Widely used security software, such as endpoint protection solutions, has for some time produced high-quality data so that security anomalies can be detected as early as possible. This data can be processed efficiently by artificial intelligence algorithms running in the cloud and, in the absence of internal resources, managed by an external managed security service provider.
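
As a rough illustration of this kind of cloud-side processing, the sketch below fits an unsupervised anomaly detector to numeric endpoint telemetry and flags outlying intervals. The feature set and values are invented for the example; real endpoint products expose far richer telemetry.

```python
# A minimal sketch of anomaly detection over endpoint telemetry, assuming
# each reporting interval has been reduced to numeric features
# (process count, bytes sent, distinct destination IPs). The features and
# thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline telemetry: mostly normal activity.
normal = rng.normal(loc=[50, 1e6, 10], scale=[5, 1e5, 2], size=(1000, 3))
# A few suspicious intervals: process spikes and heavy outbound traffic.
suspicious = rng.normal(loc=[200, 5e7, 300], scale=[10, 1e6, 20], size=(5, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns -1 for an anomaly and 1 for an inlier.
for row, verdict in zip(suspicious, detector.predict(suspicious)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(f"{label}: processes={row[0]:.0f} "
          f"bytes_out={row[1]:.0f} dst_ips={row[2]:.0f}")
```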

This service is called managed detection and response (MDR). According to Gartner’s definition (https://www.gartner.com/reviews/market/managed-detection-and-response-services), MDR services “provide customers with remotely delivered security operations center (SOC) functions. These functions allow organizations to rapidly detect, analyze, investigate and actively respond through threat disruption and containment. They offer a turnkey experience, using a predefined technology stack that commonly covers endpoint, network, logs and cloud. Telemetry is analyzed within the provider’s platform using a range of techniques. This process allows for investigation by experts skilled in threat hunting and incident management, who deliver outcomes that businesses can act upon.” So, if you want to counter the technological advantage of attackers as quickly and efficiently as possible, it’s worth exploring the potential of MDR!
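
To give a feel for what such a service automates, here is an illustrative sketch of a single detect-and-contain step over endpoint telemetry. The event schema, detection rule and containment call are hypothetical assumptions, not any vendor’s actual API.

```python
# An illustrative sketch of the detect -> investigate -> contain loop an MDR
# provider might run over customer telemetry. The event schema, rule, and
# containment action are hypothetical, not a real product's interface.
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    host: str
    process: str
    parent: str
    command_line: str

def looks_malicious(event: EndpointEvent) -> bool:
    """Toy detection rule: an Office app spawning a shell with an encoded command."""
    return (event.parent.lower() in {"winword.exe", "excel.exe"}
            and event.process.lower() == "powershell.exe"
            and "-enc" in event.command_line.lower())

def isolate_host(host: str) -> None:
    """Placeholder for the provider's containment action (e.g. network isolation)."""
    print(f"[response] isolating host {host} and opening an incident")

events = [
    EndpointEvent("WS-042", "powershell.exe", "winword.exe",
                  "powershell -enc SQBF..."),
    EndpointEvent("WS-007", "chrome.exe", "explorer.exe", "chrome.exe"),
]

for event in events:
    if looks_malicious(event):
        isolate_host(event.host)
```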

Written by: Csaba Krasznay
