
Nine ways to use machine learning to launch an attack

Machine learning and artificial intelligence (AI) are becoming core technologies in some threat detection and response tools. Their ability to learn on the fly and adapt automatically to changing cyberthreats gives security teams an edge.

However, some malicious hackers also use machine learning and AI to scale up their cyberattacks, sidestep security controls, and find new vulnerabilities faster than ever, with devastating consequences. Here are nine common ways attackers put these technologies to work.

1. Spam
Fernando Montenegro, an analyst at Omdia, points out that defenders have been using machine learning to detect spam for decades. “Spam prevention is the most successful initial use case for machine learning,” he said.

If the spam filter in use reveals why an email failed to get through, or returns a score, the attacker can adjust their behavior accordingly. In effect, they turn a legitimate tool into an aid for making their attacks more successful. “If you submit enough samples, you can reconstruct what the model is, and then you can tune your attack to bypass it.”

It’s not just spam filters that are vulnerable. Any security vendor that exposes a score or some other visible output can be abused the same way. “Not everyone has this problem, but if you’re not careful, someone will maliciously exploit that output.”
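To make that concrete, here is a minimal sketch of this kind of model extraction, written with scikit-learn and an entirely invented stand-in for a vendor’s scoring filter; the names (`target_filter`, `query_filter`), the feature space, and the data are illustrative assumptions, not any real product’s API. The attacker only ever sees per-message scores, yet a locally trained surrogate ends up agreeing with the hidden model on most inputs:

```python
# Toy sketch of model extraction: probe a black-box spam filter that
# returns a score, then train a local surrogate that mimics it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Pretend this is the vendor's hidden model (the attacker can't see it,
# only query it). 10 bag-of-words-style features per message, all made up.
X_hidden = rng.random((500, 10))
y_hidden = (X_hidden[:, 0] + X_hidden[:, 1] > 1.0).astype(int)
target_filter = LogisticRegression().fit(X_hidden, y_hidden)

def query_filter(x):
    """The only thing the attacker sees: a spam score for one message."""
    return target_filter.predict_proba(x.reshape(1, -1))[0, 1]

# The attacker submits many probe messages and records the verdicts...
probes = rng.random((2000, 10))
labels = np.array([query_filter(p) > 0.5 for p in probes], dtype=int)

# ...then fits a local surrogate that approximates the hidden model.
surrogate = DecisionTreeClassifier(max_depth=5).fit(probes, labels)

# Agreement on fresh inputs shows how much of the model leaked through
# its own scores.
test = rng.random((1000, 10))
agreement = (surrogate.predict(test) ==
             (target_filter.predict_proba(test)[:, 1] > 0.5)).mean()
print(f"surrogate agrees with hidden filter on {agreement:.0%} of inputs")
```

With a surrogate in hand, evasion candidates can be tested offline, without tripping the vendor’s rate limits or alerting anyone.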

2. More sophisticated phishing emails
Attackers aren’t just using machine learning security tools to test whether their emails can get past spam filters. They also use machine learning to compose those emails in the first place. “They advertise these services on criminal forums. They use these techniques to generate more sophisticated phishing emails and create fake personas to facilitate scams,” said Adam Malone, a technology consulting partner at Ernst & Young.

These services are advertised specifically on their use of machine learning, and that may be more than marketing rhetoric. “Just try it,” Malone said. “It works really well.”

Attackers can use machine learning to creatively tailor phishing emails so they aren’t marked as spam, giving targeted users the chance to click through. And they customize far more than the email text: attackers use AI to generate realistic-looking photos, social media profiles, and other supporting material to make the communication appear as authentic as possible.

3. More efficient password guessing
Cybercriminals also employ machine learning to guess passwords. “We have evidence that they’re using password-guessing engines more often and with better success rates.” Criminals are building better dictionaries for cracking stolen hashes.

They also use machine learning to identify security controls so that passwords can be guessed in fewer attempts, raising the probability of successfully breaking into a system.
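As a minimal illustration of why statistical models need fewer attempts, here is a sketch, assuming a tiny made-up “leak” as training data, of a character-bigram model that ranks candidate guesses so the most human-like strings are tried first; real engines use far larger corpora and richer models:

```python
# Toy sketch: a character-bigram model learned from leaked passwords
# scores candidate guesses, so a guessing engine can try the most
# probable strings first. The "leak" below is invented for illustration.
from collections import defaultdict
import math

leak = ["password", "dragon123", "letmein", "sunshine", "password1"]

# Count character-bigram frequencies, with start (^) and end ($) markers.
counts = defaultdict(lambda: defaultdict(int))
for word in leak:
    chars = ["^"] + list(word) + ["$"]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def log_prob(word):
    """Log-probability under the bigram model (add-one smoothing over
    an assumed ~96-symbol printable alphabet)."""
    chars = ["^"] + list(word) + ["$"]
    total = 0.0
    for a, b in zip(chars, chars[1:]):
        row = counts[a]
        total += math.log((row[b] + 1) / (sum(row.values()) + 96))
    return total

# A real engine would enumerate candidates in descending probability;
# here we just rank a fixed list. Human-like strings score far higher.
candidates = ["password2", "zqxjkvbw", "sunshine1", "09a!Qz%t"]
for c in sorted(candidates, key=log_prob, reverse=True):
    print(f"{log_prob(c):8.2f}  {c}")
```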

4. Deepfakes
The most frightening abuse of AI is deepfakes: tools that generate video or audio that can pass for the real thing. “Being able to imitate someone else’s voice or appearance is very effective,” Montenegro said. “If someone fakes my voice, you’ll probably fall for it.”

In fact, a series of major cases disclosed over the past few years shows that fake audio can cost companies hundreds of thousands or even millions of dollars. “People get calls from their boss that turn out to be fake,” said Murat Kantarcioglu, a professor of computer science at the University of Texas.

More commonly, scammers use AI to generate realistic-looking photos, user profiles, and phishing emails to make their messages more believable. This is big business: according to an FBI report, business email compromise has caused more than $43 billion in losses since 2016. Last fall, media reported that a bank in Hong Kong had been tricked into transferring $35 million to a criminal gang because an employee received a call from a director of a company he knew. He recognized the director’s voice and authorized the transfer without question.

5. Defeating off-the-shelf security tools
Many security tools in common use today have some form of artificial intelligence or machine learning built in. Antivirus software, for example, increasingly looks beyond basic signatures for suspicious behavior. “Anything that’s available online, especially open source, can be exploited by the bad guys.”

Attackers can use these tools not to ward off attacks, but to tune their own malware until it evades detection. “AI models have a lot of blind spots,” Kantarcioglu said. “You can adjust by changing the characteristics of your attack, such as the number of packets you send, the resources you attack, and so on.”
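A toy sketch of that tuning loop is below; the detector, the synthetic “telemetry” features, and the thresholds are all invented stand-ins, not any real product. The point is the shape of the attack: hill-climb against a local copy of the model, keeping each small feature change that lowers the malicious score:

```python
# Toy sketch of "tuning until it slips past detection": greedily nudge a
# flagged sample's features and keep any change that lowers the score
# from a locally run copy of the detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "network telemetry": 6 numeric features, label 1 = malicious.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)
detector = LogisticRegression().fit(X, y)

def malicious_score(x):
    return detector.predict_proba(x.reshape(1, -1))[0, 1]

# Start from a sample the detector confidently flags, then hill-climb.
sample = np.array([2.0, 0.0, 0.0, 2.0, 0.0, 0.0])
print(f"before tuning: {malicious_score(sample):.2f}")
for _ in range(500):
    trial = sample.copy()
    trial[rng.integers(6)] += rng.normal(scale=0.4)  # tweak one feature
    if malicious_score(trial) < malicious_score(sample):
        sample = trial                               # keep "cleaner" versions
    if malicious_score(sample) < 0.5:
        break                                        # detector no longer flags it
print(f"after tuning:  {malicious_score(sample):.2f}")
```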

And attackers are exploiting more than just AI-enabled security tools; AI is only one item in a whole toolbox of useful technologies. For example, users can often learn to identify phishing emails by looking for grammatical errors, and AI-powered grammar checkers such as Grammarly help attackers polish their writing.

6. Reconnaissance
Machine learning can be used for reconnaissance, letting attackers analyze a target’s traffic patterns, defenses, and potential vulnerabilities. Reconnaissance is not an easy task, and it is beyond the reach of ordinary cybercriminals. “To use AI for reconnaissance, you need certain skills. So I believe only advanced nation-state hackers are using these techniques.”

But once the approach is commercialized to some extent and the technology is offered as a service through the underground market, it becomes available to many more people. “This could also happen if a nation-state hacking team developed a toolkit that uses machine learning and released it to the criminal community,” said Allie Mellen, an analyst at Forrester. “But cybercriminals would still need to understand what a machine learning application does and how to use it effectively, which is the barrier to entry.”

7. Autonomous Agents
If a business discovers that it is under attack and cuts the affected systems off from the internet, malware may not be able to connect back to its command-and-control (C2) servers for further instructions. “Attackers may want to come up with an intelligent model that persists for long periods of time, even outside direct control,” Kantarcioglu said. “But for ordinary cybercrime, I don’t think that’s particularly important.”

8. AI Poisoning
Attackers can trick machine learning models by feeding them new information. “Adversaries can manipulate training data sets. For example, they deliberately bias the model and make the machine learn the wrong way,” said Alexey Rubtsov, senior associate research fellow at the Global Risk Institute.

For example, a hacker can use a hijacked account to log in to a system at 2 a.m. every day and do harmless work, leading the system to conclude that 2 a.m. activity is nothing suspicious and to relax the security checks that user has to pass.

Microsoft’s Tay chatbot was taught to be racist in 2016 in much the same way. The same approach can be used to train a system to believe that a particular type of malware is safe, or that certain bot and crawler behaviors are completely normal.
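The 2 a.m. example above can be reduced to a few lines. In the sketch below, the data and the frequency rule are invented stand-ins for a real anomaly model, but the effect is the same: once enough benign-looking 2 a.m. logins flow into the training window, the system stops flagging that hour:

```python
# Toy sketch of training-data poisoning against a login-hour baseline.
import numpy as np

rng = np.random.default_rng(2)

def is_anomalous(hour, history):
    """Flag an hour seen in less than 1% of historical logins (toy rule)."""
    freq = np.mean(np.asarray(history) == hour)
    return freq < 0.01

# Clean history: a workforce logging in between 8:00 and 18:00.
clean = rng.integers(8, 19, size=2000).tolist()
print("2 a.m. flagged on clean data:   ", is_anomalous(2, clean))

# Poisoning: the hijacked account quietly logs in at 2 a.m. every night,
# and those harmless-looking events flow into the next training window.
poisoned = clean + [2] * 40   # 40 nights of benign 2 a.m. activity
print("2 a.m. flagged after poisoning: ", is_anomalous(2, poisoned))
```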

9. AI Fuzzing
Legitimate software developers and penetration testers use fuzzing software to generate random sample inputs in an attempt to crash an application or find a vulnerability. Souped-up versions of such software use machine learning to generate inputs in a more targeted, organized way, for example prioritizing the text strings most likely to cause problems. That makes the tools more productive for enterprises running tests, but also far more dangerous in the hands of attackers.
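The sketch below shows the feedback loop such tools are built on, using a made-up toy parser as the target. Instead of firing purely random inputs, the fuzzer keeps a corpus and preferentially mutates the seed that got deepest into the parser; an ML-guided fuzzer replaces this simple depth score with a learned model of which inputs look promising:

```python
# Toy sketch of score-guided fuzzing against an invented buggy parser.
import random

random.seed(3)

MAGIC = "HDR;!"  # made-up input sequence the toy parser mishandles

def target_parser(data: str) -> int:
    """Toy target: reports how far parsing got; crashes on the magic prefix."""
    if data.startswith(MAGIC):
        raise RuntimeError("crash: unhandled control sequence")
    depth = 0
    for got, want in zip(data, MAGIC):
        if got != want:
            break
        depth += 1
    return depth

ALPHABET = "HDR;!aZ"

def mutate(s: str) -> str:
    ops = [
        lambda x: x + random.choice(ALPHABET),                                # append
        lambda x: x[:-1],                                                     # truncate
        lambda x: x[: len(x) // 2] + random.choice(ALPHABET) + x[len(x) // 2:],  # insert
    ]
    return random.choice(ops)(s)

# Keep (input, score) pairs and always mutate the current best scorer, so
# effort concentrates on inputs that already reach deep into the parser.
corpus = [("", 0)]
best = 0
for i in range(20000):
    seed = max(corpus, key=lambda t: t[1])[0]
    candidate = mutate(seed)
    try:
        score = target_parser(candidate)
    except RuntimeError as exc:
        print(f"crash after {i} tries with input {candidate!r}: {exc}")
        break
    if score > best:
        best = score
        corpus.append((candidate, score))   # keep inputs that get deeper
```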

These techniques are among the reasons that cybersecurity measures such as security patching, anti-phishing education, and micro-segmentation remain critical. “Why is defense in depth so important? This is one of the reasons,” said Forrester’s Mellen. “You have to put up multiple roadblocks, not just the one thing attackers end up turning against you.”

Lack of expertise prevents malicious hackers from leveraging machine learning and AI
Investing in machine learning requires deep expertise, and machine learning skills are currently scarce. And with so many vulnerabilities left unpatched, attackers have plenty of easier ways to breach corporate defenses.

“There’s so much low-hanging fruit and so many other ways to make money that attackers don’t need machine learning and AI,” Mellen said. “In my experience, in the vast majority of cases attackers don’t take advantage of these technologies.” However, as corporate defenses improve, and as cybercriminals and nation-state hacking teams continue to invest in attack development, the balance may soon begin to shift.
