The cyber security landscape continues to evolve as cybercriminals become more sophisticated and digital security tools race to keep pace with the risks. 2020 gave hackers even more opportunities to attack, for example through phishing scams that deceive unsuspecting victims. More recently, we have seen these “fishermen” exploit the supposed availability of Covid-19 vaccines to trick people into paying for fake doses.

Artificial Intelligence and Machine Learning have been promoted as innovative technologies that help thwart evolving threats and as an essential part of any cyber security arsenal. But AI is not necessarily the right tool for every job. Humans still perform complex decision-making far better than machines, especially when it comes to judging which data is safe to send out of the organization. Relying on Artificial Intelligence for this decision can therefore cause problems, or worse, lead to data leakage if the AI is not mature enough to fully understand what is sensitive and what is not.

So where can AI play an effective role in a cyber defense strategy, and where does it get in the user's way?

Identifying similarities

One of the primary challenges for Artificial Intelligence in mitigating the risk of accidental data leakage is identifying similarities between documents, or knowing whether it is okay to send a particular document to a specific person. A company's templated documents, such as invoices, look almost identical every time they are sent, differing only in small details that Machine Learning and Artificial Intelligence usually fail to register. The technology will classify the document as routine, even though the few numbers or words that do differ are exactly what matters, and it will usually allow the user to send the attachment. A person, by contrast, knows exactly which invoice or offer should go to which customer.
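To see why templated documents defeat a purely statistical check, consider this minimal sketch (the invoices, similarity threshold, and use of Python's standard difflib are illustrative assumptions, not any vendor's implementation): two invoices built from the same template score as near-duplicates even though the differences are precisely what determines who may receive them.

```python
import difflib

# Two hypothetical invoices from the same template: only the invoice
# number, customer, amount, and date differ.
invoice_a = (
    "Invoice #1042\n"
    "Bill to: Acme Corp\n"
    "Amount due: $1,200.00\n"
    "Due date: 2021-03-01"
)
invoice_b = (
    "Invoice #1043\n"
    "Bill to: Beta Ltd\n"
    "Amount due: $8,450.00\n"
    "Due date: 2021-03-15"
)

# SequenceMatcher.ratio() returns a similarity score between 0 and 1.
similarity = difflib.SequenceMatcher(None, invoice_a, invoice_b).ratio()
print(f"similarity = {similarity:.2f}")

THRESHOLD = 0.75  # hypothetical cut-off
if similarity > THRESHOLD:
    # The small differences (recipient, amount) are exactly what a
    # human would check before sending - but they barely move the score.
    print("Looks like a routine template - attachment allowed")
```

The score lands well above the threshold, so a similarity-based filter waves the document through; only a person knows that invoice #1043 must not go to Acme Corp.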

Deploying Artificial Intelligence for this purpose in a large company would probably block only a small percentage of outgoing emails. And even when the AI does detect a genuine issue, it notifies the administration team rather than the user. This is by design: if the AI believes an email should not be sent, it does not want the user to bypass the block and send it anyway.

Data storage

AI can also introduce significant overhead when used for this defense strategy, because in this setting every email must be sent to an external system, outside the organization, for analysis. Especially for industries that handle highly sensitive information, the fact that their data goes elsewhere for scanning is worrying. In addition, with Machine Learning the technology must retain some of this sensitive information in order to learn rules and apply them again and again, so that it makes the right decision next time. And given that the learning phase typically lasts several months, Machine Learning cannot immediately provide the right security checks.

It is understandable that many companies, especially at the enterprise level, do not feel comfortable sending their sensitive data elsewhere. The last thing they want is to store this information off-site, even if it is for analysis only. AI therefore adds an unnecessary and unwanted element of risk around sensitive material.

The role of artificial intelligence in cyber security

On the other hand, artificial intelligence can play a critical role in many aspects of a business's cyber defense strategy. Antivirus technology, for example, implements a strict “yes or no” policy on whether a file is potentially malicious. The decision is not subjective: measured against a strict set of parameters, the file is either classified as a threat or it is not. The AI can then quickly decide whether to shut down or lock the device, take the network offline, or delete the malicious file.
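The binary verdict described above can be sketched as a simple hash lookup; the blocklist, sample payload, and actions below are invented for illustration and stand in for a real antivirus engine's far richer detection logic.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fingerprint a file's contents."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of known-malicious file hashes. It is seeded
# with a sample payload here so the demo is self-contained.
malicious_sample = b"pretend-this-is-malware"
KNOWN_BAD_SHA256 = {sha256_hex(malicious_sample)}

def handle_file(contents: bytes) -> str:
    # Strict yes/no verdict: the file either matches a known threat or
    # it does not, and the verdict drives an immediate action.
    if sha256_hex(contents) in KNOWN_BAD_SHA256:
        return "quarantine"
    return "allow"

print(handle_file(malicious_sample))    # quarantine
print(handle_file(b"harmless report"))  # allow
```

Because there is no grey area in the verdict, the response can be automated without the judgment calls that make outbound-email filtering so hard.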

Thus, while AI may not be the ideal method for preventing accidental data leakage via email, it has an important role to play in specific areas such as virus detection, sandbox testing, and threat analysis.


With so much dependence on email tools in business practices, accidental data leakage is an inevitable risk. The impact of a breach on a business's reputation, and the associated financial loss, can be devastating. A cyber security culture, with continuous training and the right technology, is therefore essential.

Technology that alerts users to potential errors – whether sending an email to the wrong person or sharing sensitive data about the company, its customers or its staff – not only minimizes mistakes but helps create a better communication culture. Mistakes are easily made in a fast-paced, stressful work environment, especially with the rise in remote working, which removes the immediate second opinion of colleagues or managers. This type of technology, combined with trained human insight, can allow users to make better decisions about the nature and legitimacy of an email before sending it.