The growing sophistication of cyberattacks in recent years has rendered conventional security tools such as antivirus software and firewalls ineffective.
Humans write most software, so it tends to be flawed. Attackers can exploit the slightest security flaw in a system to gain access to data or even to take over the system completely. These days, the discovery of zero-day vulnerabilities and security flaws has become increasingly common. Even if a system seems bulletproof for now, it is only a matter of time before someone finds a loophole to breach its security.
Protecting against these attacks requires constant monitoring and has become a serious burden for system administrators. To address this, a team of researchers at Purdue University has developed a new security system that relies entirely on artificial intelligence (AI). In a new paper, the researchers describe a computer model for “cyber-physical” systems that are self-aware and can self-heal.
The system sends one-time signals to every connected component and turns them into active monitoring systems, on the lookout for potential intrusion. The algorithm is so “self-aware” that even if the attacker uses a perfect copy of the model itself, it can detect the falsified data and prevent the attack.
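The paper's own signal-embedding scheme is not described here, but one way to picture why a perfect model copy is not enough is message authentication: each reading carries a tag derived from a secret the attacker's copy does not have. The sketch below is a standard HMAC construction used purely as an analogy, not the Purdue algorithm; all names are illustrative.

```python
# Illustrative analogy only: spoofed data can be detected when genuine
# components share a secret that an attacker's model copy lacks.
import hmac
import hashlib
import os

SECRET = os.urandom(32)  # known only to legitimate components (assumed setup)

def sign(reading: float, nonce: bytes) -> bytes:
    """Tag a reading with a one-time nonce and the shared secret."""
    msg = repr(reading).encode() + nonce
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def verify(reading: float, nonce: bytes, tag: bytes) -> bool:
    """Accept a reading only if its tag matches; forged data fails."""
    return hmac.compare_digest(sign(reading, nonce), tag)
```

An attacker who replays or fabricates readings cannot produce a valid tag without the secret, so falsified data is flagged even if the attacker's simulation of the system's behavior is otherwise perfect.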
“”We call it covert cognizance,”” said Abdel-Khalik, lead author of the paper, and a research professor at Purdue University’sUniversity’s Center for Education and Research in Information Assurance and Security. “”Imagine having a bunch of bees hovering around you. Once you move a little bit, the whole network of bees responds, so it has that butterfly effect. Here, if someone sticks their finger in the data, the whole system will know that there was an intrusion, and it will be able to correct the modified data.””
According to the researchers, any defence system is only as strong as the secrecy of its model: if attackers know the defence model well enough, they can theoretically breach it.
“When you have components that are loosely coupled with each other, the system really isn’t aware of the other components or even of itself,” said Arvind Sundaram, a graduate student in nuclear engineering at Purdue. “It just responds to its inputs. When you’re making it self-aware, you build an anomaly detection model within itself. If something is wrong, it needs to not just detect that, but also operate in a way that doesn’t respect the malicious input that’s come in.”
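The idea Sundaram describes, a component that both detects an anomalous input and refuses to act on it, can be sketched in a few lines. This is a minimal illustration assuming a simple z-score check over a sliding window; the class name, window size, and threshold are all assumptions for the example, not details from the Purdue model.

```python
# Minimal sketch of a self-aware component: it learns a baseline from its
# own recent inputs, flags readings that deviate too far, and substitutes
# the expected value instead of acting on the malicious input.
from statistics import mean, stdev

class SelfAwareComponent:
    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.history: list[float] = []
        self.window = window          # readings kept as the baseline (assumed)
        self.threshold = threshold    # z-score cutoff for "anomalous" (assumed)

    def accept(self, reading: float) -> tuple[float, bool]:
        """Return (value_to_use, was_flagged)."""
        if len(self.history) >= self.window:
            mu = mean(self.history)
            sigma = stdev(self.history) or 1e-9
            if abs(reading - mu) / sigma > self.threshold:
                # Anomaly detected: don't "respect the malicious input";
                # operate on the expected value instead.
                return mu, True
        self.history.append(reading)
        self.history = self.history[-self.window:]
        return reading, False
```

Feeding the component steady readings around 10.0 establishes its baseline; a sudden spoofed reading of 100.0 is then flagged and replaced with the expected value rather than passed through.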