The Cybersecurity and Infrastructure Security Agency (CISA) estimates that cyberattacks cost the United States $242 billion every year. As technology evolves, hackers' methods are also becoming increasingly sophisticated.
Fortunately, engineers have made critical gains in threat detection technology. Today, the best AI cybersecurity systems use machine learning as part of their threat detection capabilities.
These systems get results. But what are machine learning and AI, and how do they improve cybersecurity? Many people use the two terms interchangeably, but there are notable differences in their underpinning technologies and specific use cases.
In this article, we’ll explore both concepts in depth and clarify where and when each should be applied to deliver relevant, timely and actionable business insights.
What is machine learning?
Machine learning is a subset of artificial intelligence (AI). It lets machines take new information into account when making decisions, enabling automatic reasoning. Machine learning lets computers work through vast volumes of data; when they recognize patterns in that data, they can use those patterns to make accurate predictions. This is critical for threat detection. More recently, machine learning has enabled some machines to teach other machines, which spreads knowledge and solves problems faster.
AI vs. machine learning (AI cybersecurity)
Artificial intelligence is how computers and ‘smart’ devices think. When a machine makes a decision automatically, it's using some form of AI. Machine learning is a specific facet of AI. It lets a machine recognize patterns in data, then decide how to use those patterns as it learns.
Artificial intelligence is an umbrella term for computers that mimic human thought. Engineers design AIs to perform human-like tasks. AIs can reason, learn from experience and make critical decisions. This decision-making capacity sets artificial intelligence apart from other mechanisms. Think about the difference between the Roomba® robot vacuum and a dishwasher. A dishwasher is a machine that cleans dishes. It can use different water pressures and streams to clean different dishes. But it can't choose which streams to use. Instead, a human must select the ‘light’ or ‘heavy’ cycle.
In contrast, a robot vacuum makes choices. It continually senses as it moves, then alters its pathway and suction strength based on new information. With accurate information, a robot vacuum makes ‘maps’ of the region it cleans that are unique to that space. This map-making capacity is a function of the vacuum's AI solving the problem.
In contrast, a dishwasher cannot create a new pattern of water pressure to better clean the dishes. It will not create a new solution to the problem of your dirty dishes, even if it gets new information. So, the dishwasher is a regular machine: it's incapable of learning. The robot vacuum, by contrast, is a ‘smart’ machine.
Non-learning AI (Rule-based decisions)
It's worth noting that not all AI is programmed to learn; machine learning is only one subset of artificial intelligence. A chatbot is a good example. A customer service support chatbot has access to a large amount of data and can answer customer questions by following ‘if/then’ rules.
So, if a customer mentions ‘billing’, it refers them to the article about billing. Unlike the dishwasher, the chatbot decides which post to refer people to automatically. It doesn't need human input each time. An if/then decision tree can be complex. But a chatbot cannot add new information to its database itself. Nor can it learn from its environment to improve its quality of customer service.
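To make the distinction concrete, here is a minimal sketch of that kind of rule-based routing. The keywords and responses are invented for illustration; the point is that every decision is a fixed if/then rule, and nothing in the program changes as it runs.

```python
# Hypothetical keyword-to-article rules; a real chatbot's rule set would
# be far larger, but it is still fixed in advance.
RULES = {
    "billing": "See our billing FAQ.",
    "password": "See our password-reset guide.",
    "refund": "See our refund policy.",
}

def route(message: str) -> str:
    """Apply fixed if/then rules; no learning takes place."""
    text = message.lower()
    for keyword, article in RULES.items():
        if keyword in text:
            return article
    return "Let me connect you with a human agent."
```

However many rules are added, the program can only ever match what its authors anticipated; it cannot add new keywords to `RULES` by itself.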
Cybersecurity programs need to alter their decisions based on new information as it comes in. So, AI security systems use machine learning. It's important to note that keeping up with new threats and threat vectors by manually writing rules is no longer a feasible or secure approach. By employing ML and AI, it's possible to react much faster to changes in the threat landscape. This capability is critical going forward, as the speed at which new threats emerge is constantly increasing.
Machine learning is a subset of AI. Engineers develop machine learning systems when they want to solve problems too complex for if/then trees. Machine learning enables a machine to recognize patterns. Rather than pre-program an if/then tree, engineers give a machine vast swathes of data. Then, it can learn to do things like recognize faces by applying statistical formulas and algorithms to the data.
Algorithms help machines sort out the signal from the noise. This way, a machine won't incorrectly recognize a pattern that's not really there – that is, a pattern that's just a coincidence. Instead, it can home in on statistically significant patterns humans might miss.
Smart machines: not all equal
The sophistication of this pattern recognition varies. For example, a recent robot vacuum, the Roomba s9, takes in data from sensors and cameras. It also processes information from users. The Roomba s9 evaluates the success of its work. It remembers whether a cleaning mission was aborted or completed.
In terms of recognizing patterns, it's motivated to complete missions. If certain regions of the space often result in it failing the mission, it learns that those regions are ‘keep out zones’. But it doesn't generate a new way to approach those regions. Its data is limited, and so is its potential to use the information. In contrast, smart cybersecurity machines use more complex algorithms. They also parse a much higher volume of data than a household appliance.
In the next section, we'll break down how machine learning algorithms work. Then, you'll learn how different algorithms power different cybersecurity processes effectively.
How machine learning algorithms work
Machine learning algorithms are complex mathematical models. They use statistics and probability to empower machines to recognize patterns and come to logical conclusions.
When engineers want machines to learn, they'll use one of three approaches:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Each learning mode works to teach different types of reasoning.
Supervised learning enables machines to recognize patterns within pre-determined input-output options. This helps machines make sense of data in the context of a specific question.
Unsupervised learning removes that context. The machine learns to group, label and classify data according to its own logic. This lets it resolve more complex data sets.
Reinforcement learning uses trial and error. This system directs the machine's behavior with rewards and punishments.
This speeds up the learning process, as machines are motivated to pursue long-term rewards, and it discourages learning the ‘wrong’ lessons. These three learning modes shape how a machine learning algorithm is trained; the result is a model unique to a given machine.
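As a concrete illustration of the supervised mode, here is a toy 1-nearest-neighbour classifier. The labelled examples, feature (file size) and labels are invented for illustration: the machine is given input-output pairs up front and predicts the output for new inputs by finding the most similar known example.

```python
def predict(training_data, x):
    """Supervised learning sketch: return the label of the labelled
    training example whose input value is closest to x."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labelled (input, output) pairs: (file_size_mb, label) -- illustrative only.
training_data = [
    (0.1, "benign"),
    (0.3, "benign"),
    (9.0, "malware"),
    (12.0, "malware"),
]
```

The pre-determined labels are what make this supervised; an unsupervised learner would have to group the file sizes into clusters without ever being told which cluster means what.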
What is big data?
Big data refers to large, dynamic data sets. The data often changes, and it moves at a high speed within its context. All data and each data set have meaning within their context. To clarify, big data isn't simply the data itself. It's data within an analytic framework. This could be a predictive framework or a transformational framework.
Machine learning uses the data within the framework. The big data framework enables machines to make sense of information – and use it to make effective decisions. Algorithms cultivate this framework.
What is machine-to-machine learning?
Machine-to-machine learning (M2M) empowers networked devices to exchange information. This lets machines learn faster. It also automates learning. M2M enables machines to learn from one another without human input. It typically transmits information via public networks, cellular networks or Ethernet. Machine learning algorithms structure the teaching process. The right algorithm enables machines to transmit good, useful information.
What is deep learning?
Deep learning is a subset of machine learning. It mimics the structure and functions of the human brain. Specifically, deep learning systems use artificial neural networks that react like neurons in the brain. Instead of neurons, a deep learning system is composed of nodes.
Deep learning is a form of unsupervised (self-taught) machine learning. Engineers design deep learning systems to understand data at a massive scale. In essence, deep learning is a set of coordinated algorithms. Together, they extract increasingly useful, granular information from dynamic data.
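To show what a ‘node’ is at the smallest scale, here is a hedged sketch of one: a weighted sum of inputs passed through a non-linear activation. The weights and layout are arbitrary illustrative values, not a trained model; a real deep network stacks many layers of such nodes and learns the weights from data.

```python
import math

def node(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs, then a sigmoid
    activation squashing the result into the range (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    """Two hidden nodes feeding one output node -- the smallest possible
    'deep' structure, with made-up weights."""
    h1 = node(inputs, [0.5, -0.2], 0.1)
    h2 = node(inputs, [-0.3, 0.8], 0.0)
    return node([h1, h2], [1.0, 1.0], -1.0)
```

Training consists of nudging those weight numbers until the network's outputs match known examples; the architecture itself stays the same.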
Cybersecurity: using AI and machine learning for threat detection
Machine learning works. It uses algorithms to process vast amounts of ever-changing data. In cybersecurity, this means we have increasingly sophisticated tools to recognize patterns, predict threats and use up-to-the-second information. Consider these three use cases.
Malware prediction modelling
Supervised machine learning can train a machine to recognize malware. It learns the parameters of harmful files. Then, it creates an accurate model of what those files look like. This lets it pre-emptively block malware files. It can do this even though it's impossible to account for all possible malware variants.
A cybersecurity program with access to updated data can revise its model as needed. A machine learning-driven program will constantly learn about harmful files with different parameters. It may learn from other machines, from human input, or via its own query and input features. Reinforcement learning can prevent it from developing new, incorrect models as it receives more data.
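One simple way such a model could work (a hedged sketch, not NTT's actual method) is to learn the average feature profile of known-harmful and known-benign files, then classify a new file by whichever profile it sits closer to. As new labelled files arrive, the profiles are recomputed, which is the ‘revising the model’ step described above. The features and numbers here are invented for illustration.

```python
def centroid(rows):
    """Mean of each feature column across a set of example files."""
    return [sum(col) / len(col) for col in zip(*rows)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(benign, malware):
    """Learn one average profile per class from labelled examples."""
    return {"benign": centroid(benign), "malware": centroid(malware)}

def classify(model, features):
    """Assign the class whose learned profile is nearest."""
    return min(model, key=lambda label: distance(model[label], features))

# Illustrative features per file: (entropy, size_mb).
model = train(
    benign=[(3.1, 0.5), (2.8, 1.2)],
    malware=[(7.9, 0.2), (7.5, 0.3)],
)
```

Because the model captures what harmful files *look like* rather than listing known hashes, it can block a variant it has never seen, as long as the variant resembles the learned profile.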
Inconsistencies trigger threat-hunting
Machine learning is about pattern recognition. A cybersecurity AI can note inconsistencies in patterns of transmitted data. The AI might not recognize the inconsistency as a known threat. But the inconsistency itself can trigger threat-hunting. Threat-hunting processes let the AI examine network traffic and anomalies more closely.
With more granular information, it can take action. It can update its threat model to accommodate the anomalous information. It can also slam the door on the pattern-breaking data. Prior reinforcement will drive the AI to the right choice. In some cases, the AI's parameters defer the choice to a human user.
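A minimal sketch of that trigger (with invented numbers, and a far cruder statistic than a production system would use) is to flag any traffic sample that deviates from the recent baseline by more than a few standard deviations. The flagged samples are what would kick off the closer threat-hunting examination.

```python
import statistics

def find_anomalies(baseline, samples, threshold=3.0):
    """Return samples whose z-score against the baseline exceeds the
    threshold -- candidates for closer threat-hunting inspection."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [s for s in samples if abs(s - mean) / stdev > threshold]

# Illustrative baseline: requests per minute observed recently.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
```

Note that an anomaly is not yet a verdict: the sketch only says ‘this breaks the pattern’, mirroring the article's point that the inconsistency triggers investigation rather than an immediate block.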
Cut down on false positives
Machine-learning-driven cybersecurity software rarely interrupts the normal flow of traffic. Rules-based software, by contrast, may flag many innocuous files simply because they fall outside its narrow parameters, and that interference can slow down necessary network use.
Machine learning programs don't rely on narrow rulesets. Instead, they can make smart decisions. This lets them block dangerous threats without interrupting benign files.
Unlock technological insights with NTT
AI is changing our world. Sometimes, that change is machine learning reshaping AI cybersecurity with improved threat detection. Sometimes it’s artificial neural networks accelerating medical research. At NTT, we keep up with how the world changes. Take a look at how we’re leveraging AI as part of our Cyber Threat Sensor – AI security service.
We’ve made significant investments over the last 20 years in AI and machine learning as part of the Managed Security Services we offer. If you’d like to learn more about our services and how we’re leveraging AI and machine learning as part of them, contact us today.