
Kaspersky Sounds the Alarm on AI-Powered Cyber Threats in Asia Pacific


We often hear about how artificial intelligence is changing the world for the better. But here’s the uncomfortable truth: AI is also making cybercrime smarter, faster, and harder to detect. And it’s hitting Asia Pacific harder than ever.


At a recent event in Seoul, cybersecurity experts from Kaspersky dropped some eye-opening updates. Their forum, Cyber Insights 2025, focused on how cybercriminals are now using AI to launch attacks that are more convincing and more dangerous than anything we’ve seen before.


One of the biggest takeaways? AI isn’t just helping the good guys. It’s also giving hackers new tools to fool people and infiltrate systems. Think phishing emails that sound eerily real, deepfake videos used for scams, and even AI-generated malware designed to sneak past security.


Kaspersky didn’t just talk theory; they brought data. In 2024, their systems flagged more than 3 billion malware attacks worldwide, with nearly half a million new threats detected every day. Trojan malware is on the rise, and Windows systems remain a top target. The cybercrime world is also seeing a big increase in scams targeting mobile financial apps and cryptocurrency users.


Even more worrying is how fast passwords are becoming obsolete. Kaspersky’s research shows that nearly half of the passwords they studied can be cracked in under 60 seconds. It’s not just about the tech anymore; it’s about how smart the tech is getting.
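To get a feel for why that 60-second figure is plausible, here’s a minimal back-of-the-envelope sketch in Python. The guess rate and the password patterns below are illustrative assumptions, not Kaspersky’s actual test setup.

```python
# Rough worst-case estimate of how long a brute-force attack needs to exhaust
# a password keyspace. The guess rate is an assumed figure for illustration.

GUESSES_PER_SECOND = 10_000_000_000  # assumed GPU-assisted offline attack rate

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every combination of the given length."""
    return (charset_size ** length) / GUESSES_PER_SECOND

examples = {
    "8 digits (PIN-style)": (10, 8),
    "8 lowercase letters": (26, 8),
    "10 mixed letters + digits": (62, 10),
}

for label, (charset, length) in examples.items():
    secs = seconds_to_exhaust(charset, length)
    print(f"{label}: ~{secs:,.2f} seconds to exhaust the keyspace")
```

Under these assumptions, the first two patterns fall in well under a minute, while the longer mixed password holds out for years, which is the basic math behind the “under 60 seconds” warning.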


Another growing problem is what they call “shadow AI”: employees using AI tools like ChatGPT or other assistants without approval from their IT department. These tools can accidentally leak private data or give attackers a way in. Some malicious AI models have even been found uploaded to public repositories, just waiting to be used by mistake.
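For illustration, here’s one minimal way an IT team might start surfacing shadow AI usage: scanning outbound proxy logs for traffic to well-known AI service domains. The log format, file name, and domain list below are assumptions made for this sketch, not a reference to any specific product.

```python
# Minimal sketch: count requests per user to known AI endpoints in a CSV
# proxy log with columns "user" and "destination_host". Illustrative only.

import csv
from collections import Counter

AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(log_path: str) -> Counter:
    """Return a per-user count of requests that hit known AI service domains."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_host"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to AI services")
```

A real deployment would rely on proper monitoring tooling rather than a script like this, but the idea is the same: you can’t set a policy on AI use until you can see where it’s already happening.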


So what can be done about it?


According to Kaspersky, businesses need to step up and rethink how they handle security. One of their biggest recommendations is investing in an AI-ready Security Operations Center, or SOC. This kind of setup doesn’t just monitor threats; it uses automation, machine learning, and human analysts to spot unusual behavior before it becomes a crisis.
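As a toy illustration of that “spot unusual behavior” idea (and not Kaspersky’s SOC technology), the sketch below flags hours whose login-failure counts jump far above the baseline, the kind of simple statistical check that automated SOC pipelines build on before a human analyst takes a look.

```python
# Toy anomaly check: flag hours whose login-failure count sits far above the
# day's baseline, measured in standard deviations. The data is made up.

from statistics import mean, stdev

def flag_anomalies(hourly_failures: list[int], threshold: float = 3.0) -> list[int]:
    """Return the indexes of hours more than `threshold` std devs above the mean."""
    mu, sigma = mean(hourly_failures), stdev(hourly_failures)
    return [i for i, count in enumerate(hourly_failures)
            if sigma > 0 and (count - mu) / sigma > threshold]

# Example: a quiet day with one suspicious spike at hour 13 (values are invented).
failures = [4, 3, 5, 2, 4, 3, 6, 5, 4, 3, 5, 4, 3, 180, 4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
print("Suspicious hours:", flag_anomalies(failures))
```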


They also suggested using smarter security tools that are trained to detect AI-generated threats, educating employees about the risks of unauthorized AI use, and keeping up with real-time threat intelligence.


Kaspersky has also built its own tools to help with this shift and is offering consulting services for companies that want to build or upgrade their own SOCs.


Bottom line? The cybersecurity battlefield is changing. AI can be a powerful ally, but only if we use it wisely and protect ourselves against those who use it for harm.
