Description
Microsoft Corp., continuing a wave of AI software releases, is introducing new chat tools that can help cybersecurity teams ward off attacks and clean up after a breach. The latest of Microsoft's AI assistant tools — the software giant likes to call them Copilots — uses OpenAI's new GPT-4 language system and data specific to the security field, the company announced Tuesday. The idea is to help security workers more quickly see the connections between various parts of an attack, such as a suspicious email, a malicious file, or the parts of the system that were compromised. Microsoft and other security software vendors have used machine learning techniques to root out suspicious behavior and detect vulnerabilities for several years. But the newest AI technologies enable faster analysis and add the ability to ask questions in plain English, making the tools easier to use for employees who may not be experts in security or AI.
That matters because there is a shortage of workers with those skills, said Vasu Jakkal, Microsoft's vice president of security, compliance, identity and privacy. Meanwhile, hackers have only gotten faster.
“Just since the pandemic, we've seen an incredible proliferation,” she said. For example, “it takes one hour and 12 minutes on average for an attacker to gain full access to your inbox once a user has clicked on a phishing link. It used to take months or weeks for someone to get that access.”
The software lets users ask questions such as “How can I contain devices that are already compromised by an attack?” Or they can ask the Copilot to list anyone who sent or received an email containing a dangerous link in the weeks before and after the breach. The tool can also make it easier to produce incident reports, response reports and summaries.
Microsoft will start by giving a small number of customers access to the tool and will add more later. Jakkal declined to say when it will be widely available or who the first customers are. The Security Copilot draws on data from government agencies and from Microsoft researchers who track nation-states and cybercriminal groups. To take action, the assistant works with Microsoft's security products, and integrations with other companies' programs will be added in the future.
As with its other AI releases this year, Microsoft is working to make sure users are aware that the new systems make mistakes. In a demo of the security product, the chatbot warned about a flaw in Windows 9, a product that doesn't exist.
But the system is also capable of learning from users. It lets customers choose privacy settings and decide how widely they want to share the information it gathers. If they choose, customers can let Microsoft use the data to help other customers, Jakkal said.
"It's going to be a learning system," she said. "It's also a paradigm shift: now humans become the verifiers, and the AI is giving us the data."