The size and complexity of modern applications make it practically inevitable that security vulnerabilities will appear. A single device or program carries many dependencies, and even code executed in a so-called sandbox retains some means of interacting with the external world. So, what can be done to protect oneself from a potential attack?
Achilles and the Tortoise stand at the starting line of a race. Achilles can run twice as fast, so he lets the tortoise cover half the track before he sets off. By the time the hero reaches that point, the tortoise will have moved another quarter of the distance ahead; by the time he covers that, another eighth. The tortoise remains ahead of Achilles, even though the gap becomes infinitely small.
A similar situation can be observed in the field of cybersecurity. When a security vulnerability is detected, the process of creating patches begins, but by the time they are developed, cybercriminals are already searching for new vulnerabilities. One could think that black hats (hackers who carry out illegal attacks on computer systems) are always one step ahead of the security teams.
Modern computer systems are designed to detect artificially generated traffic and to distinguish machines and botnets from humans. One example of such a safeguard is CAPTCHA, a test that involves reading a sequence of characters displayed in an image. The problem is that modern AI algorithms, such as the one discussed on our blog in the post ChatGPT in IT sector, can manipulate humans quite skillfully. The documentation describing GPT-4 recounts a situation in which the language model, during a conversation on the TaskRabbit platform, pretended to be a person with a visual impairment; the person on the other side believed it and, when asked, solved the CAPTCHA on its behalf.
Information about this event was made public in March 2023. Seven months on, the situation has become even more interesting. In early October, the GPT-based Bing Chat service allegedly demonstrated that it can easily solve CAPTCHAs, as long as they are placed in the right context. Specifically, the model refused to provide the result of the image-based test when it recognized what it was dealing with. However, when the same image was embedded in a photo of a locket and the model was asked to read a message from a supposedly deceased grandmother, it provided the solution.
CAPTCHA stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart. In that light, what I wrote about it earlier no longer holds: this language model is now capable of passing such a Turing test.
This is just the tip of the iceberg. Artificial intelligence can threaten cybersecurity in many ways. Its immense computational capabilities allow it not only to bypass security measures that stopped earlier botnets but also to analyze both successful and unsuccessful attack attempts and draw conclusions that go beyond human analytical capabilities. This significantly increases the effectiveness of cybercriminals.
The use of AI in cybersecurity
We already know that artificial intelligence can be used to bypass various security measures or plan attacks. But what if we turn the tables?
Among the things at which AI far outperforms humans, one cannot fail to mention its ability to process large volumes of data. This is of immense significance in the context of cybersecurity. The amount of network traffic, both local within a company’s servers and global, is often too vast for a finite number of humans to trace manually in search of suspicious activities. Conventional tools based on simple algorithms are too inflexible to keep up with rapidly changing attack tactics. Machine learning comes to the rescue here, allowing systems not only to respond to known cybercriminal playbooks (such as email-based ransomware attacks) but also to recognize threats that have not yet become widespread among individual computer users. Moreover, AI can help mitigate a common problem in security systems: overzealousness, which manifests as trusted programs and people being blocked from data they legitimately need.
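To make the idea concrete, here is a minimal sketch of flagging traffic windows whose request rate deviates sharply from the observed baseline. It is a deliberately simple statistical stand-in, not any particular product: a real machine-learning detector would train on many traffic features, and the function name and threshold below are assumptions made for the example.

```python
import statistics

def detect_anomalies(request_rates, threshold=3.0):
    """Flag indices of time windows whose request rate deviates strongly
    from the baseline (z-score above `threshold`). A toy stand-in for
    ML-based traffic analysis, which would use many learned features."""
    mean = statistics.mean(request_rates)
    stdev = statistics.pstdev(request_rates) or 1.0  # avoid division by zero
    return [i for i, rate in enumerate(request_rates)
            if abs(rate - mean) / stdev > threshold]

# Twenty quiet windows (~100 requests) followed by a sudden spike:
# only the spike at index 20 is flagged.
suspicious = detect_anomalies([100] * 20 + [10000])
```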
Artificial intelligence can contribute to security not necessarily as a direct guardian. A popular approach is to delegate AI algorithms to perform repetitive, often tedious and time-consuming tasks. This is because machines do not lose their attention under the deluge of similar data. Unlike humans, their level of vigilance does not decrease. Hence, they are perfectly suited for continuous monitoring of network traffic and automatic detection and blocking of straightforward attacks that are so obvious that the human mind might ignore them.
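A toy example of the kind of repetitive monitoring task that is easy to hand off to a machine: counting failed login attempts per source address and blocking the obvious offenders. The rule, the limit, and the function name are illustrative assumptions; real systems combine many such signals and would not rely on a single counter.

```python
from collections import Counter

def ips_to_block(failed_logins, limit=5):
    """Return source IPs with more than `limit` failed login attempts
    in the observed window -- the sort of obvious, high-volume pattern
    that is tedious for a human but trivial to automate."""
    counts = Counter(failed_logins)
    return sorted(ip for ip, n in counts.items() if n > limit)

# One address hammering the login form stands out immediately.
attempts = ["1.2.3.4"] * 10 + ["5.6.7.8"] * 2
blocked = ips_to_block(attempts)  # ["1.2.3.4"]
```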
Can you write an article about AI in the cybersecurity industry without mentioning something as obvious as the facial and fingerprint recognition services most of us use? Nowadays, every reputable operating system has biometric support built in, which can be used not only for unlocking devices but also, for example, for authorizing card transactions or accessing password managers. It might seem that artificial intelligence is not needed at all to capture a fingerprint or facial scan and then compare it with the stored image every time access is attempted. Nothing could be further from the truth.
Modern biometric algorithms can significantly speed up verification. Today, you no longer need to place your fingertip on the sensor in exactly the same spot and at the same angle as in the past: scanning the entire fingerprint allows distinctive features to be recognized regardless of how the finger lands on the reader. Additionally, with the help of AI, the software verifying the match can work much faster without compromising security, as it detects key characteristic points and searches for those instead of checking the entire image pixel by pixel.
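The point-based approach described above can be sketched as follows. This is a toy illustration under strong assumptions: characteristic points (minutiae) are given as (x, y, ridge angle) tuples already aligned between scans, and the tolerances and function name are invented for the example. Real matchers additionally handle rotation, translation, and skin distortion.

```python
import math

def match_minutiae(stored, candidate, dist_tol=5.0, angle_tol=0.2):
    """Score a fingerprint by counting stored minutiae (x, y, ridge angle)
    that find a close counterpart in the candidate scan, instead of
    comparing the two images pixel by pixel."""
    matched = 0
    for (x, y, a) in stored:
        for (cx, cy, ca) in candidate:
            if math.hypot(x - cx, y - cy) <= dist_tol and abs(a - ca) <= angle_tol:
                matched += 1
                break
    return matched / len(stored)  # fraction of minutiae matched

# A slightly shifted rescan of the same finger still scores a full match.
stored = [(10.0, 10.0, 0.5), (40.0, 22.0, 1.1), (70.0, 55.0, 2.0)]
rescan = [(11.0, 9.0, 0.55), (39.0, 23.0, 1.05), (71.0, 54.0, 1.95)]
score = match_minutiae(stored, rescan)  # 1.0
```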
The facial scanning mechanism works in a similar way. By creating a high-resolution 3D map, it is possible not only to detect unique features but also to mitigate vulnerabilities to attacks using photographs shown in front of the camera. However, this is not the most crucial reason for using AI in biometric identification.
The integration of scanners with photoplethysmographs and eye-tracking software makes it practically impossible to read the fingerprint or face of a sleeping person – the device requires the individual to focus their gaze on it, which is impossible with closed eyes. Furthermore, it is impossible to trick the scanner using a fake fingertip cover, as it would obstruct pulse reading. Machine learning allows you to avoid the need to re-scan your face every time you change your hairstyle, put on glasses, or develop a new blemish.
I hope that this text has presented in an accessible way several possible applications of AI in the context of cybersecurity. Regardless of which hat you wear, I believe it’s valuable to know what artificial intelligence is capable of. Especially since most of us have entrusted the security of our data to it when we decided that placing a finger on the lock button is more convenient than entering even a four-digit PIN.