Facebook Google Plus Twitter LinkedIn YouTube RSS Menu Search Resource - BlogResource - WebinarResource - ReportResource - Eventicons_066 icons_067icons_068icons_069icons_070

Tenable Blog

Subscribe

Cybersecurity Snapshot: Curb Your Enthusiasm Over ChatGPT-type Tools at Work, Says U.K.’s NCSC 

Curb Your Enthusiasm Over ChatGPT-type Tools at Work Says U.K.’s NCSC

As OpenAI released ChatGPT Enterprise, the U.K.’s cyber agency warned about the risks of workplace use of AI chatbots. Plus, the QakBot botnet got torn down, but the malware threat remains – what CISA suggests you do. Moreover, new quantum-resistant algorithms are due next year. And much more!

Dive into six things that are top of mind for the week ending September 1st.

1 – NCSC: Be careful when deploying AI chatbots at work 

When adopting AI chatbots powered by large language models (LLMs), like ChatGPT, organizations should go slow and make sure they understand these tools’ cybersecurity risks. That’s the advice dispensed this week in a pair of blogs by the U.K. National Cyber Security Centre.

Specifically, the NCSC warned about prompt injection attacks, in which attackers abuse AI chatbots by entering prompts into their query fields that make the tools act in unintended ways – such as disclosing confidential information or generating offensive responses.

“As LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow,” the NCSC states in the blog “Thinking about the security of AI systems.

“Consider your system architecture carefully and take care before introducing an LLM into a high-risk system,” the NCSC adds.

Also, organizations should be aware of data poisoning attacks, in which attackers manipulate AI chatbots for nefarious purposes by tampering with their training data sets. The NCSC also pointed out that this is a new, rapidly evolving field, so products that organizations adopt today could fundamentally change in the near future or even disappear on a moment’s notice.

Tread carefully when deploying AI chatbots in the workplace

“If you’re an organisation building services that use LLM APIs, you need to account for the fact that models might change behind the API you’re using – breaking existing prompts – or that a key part of your integrations might cease to exist,” reads the NCSC blog “Exercise caution when building off LLMs.

In addition, much is still unknown about LLM-powered AI chatbots. “Amongst the understandable excitement around LLMs, the global tech community still doesn‘t yet fully understand LLMs’ capabilities, weaknesses, and crucially vulnerabilities,” the NCSC wrote.

Here are some risk-mitigation recommendations from the NCSC:

  • Apply standard supply chain security practices when downloading pretrained AI models from the internet, as they could contain vulnerabilities and other security gaps.
  • Stay on top of vulnerability disclosures impacting these tools, and upgrade and patch them promptly.
  • Understand that this technology is generally at a “beta” stage, so take that into account when deciding what business operations to integrate it with at this point.

For more about the NCSC guidance: 

To get more details about generative AI cybersecurity issues, check out these Tenable blogs:

VIDEO

Tenable CEO Amit Yoran discusses AI and preventive security on CNBC (CNBC)

2 – OpenAI says ChatGPT is now ready for the office

And speaking of the risks of using generative AI chatbots at work, OpenAI unveiled this week a version of its ultra-famous ChatGPT that the company says has been designed to provide “enterprise-grade security and privacy.”

It’s appropriately called ChatGPT Enterprise, and OpenAI said it comes in response to broad business adoption of the consumer-grade version of ChatGPT, which the company says is used in 80% of the Fortune 500.

ChatGPT goes to the office

There have been notable examples of businesses that have forbidden their employees from using ChatGPT and similar AI chatbots at work, due to a variety of security, privacy and compliance concerns – including Amazon, Apple, Northrop Grumman, Wells Fargo and Samsung.

For more information check out OpenAI’s announcement, along with coverage from The Verge, ZDNet, The Register and TechCrunch.

3 – FBI: QakBot botnet is down, but malware threat remains

News broke this week that the QakBot botnet, used for years to unleash ransomware attacks and other cyber crimes, got dismantled – but that doesn’t mean this malware’s threat has been wiped out completely.

In a joint advisory, CISA and the FBI detailed the FBI-led international operation to take down the botnet’s infrastructure, while offering guidance for cybersecurity teams about QakBot prevention, detection and remediation measures.

“The disruption of QakBot infrastructure does not mitigate other previously installed malware or ransomware on victim computers. If potential compromise is detected, administrators should apply the incident response recommendations included,” the advisory reads.

QakBot botnet is down, but malware threat remains

The FBI penetrated the QakBot infrastructure and unlinked 700,000-plus computers globally that had been stealthily hijacked and incorporated into the botnet. Created in 2008, the QakBot malware has been used in attacks resulting in hundreds of millions of dollars in losses globally.

To get more details, check out the CISA announcement, the joint advisory, the FBI announcement, the U.S. Department of Justice announcement, as well as coverage from PC World, Bleeping Computer and Krebs On Security.

VIDEO

FBI Director Christopher Wray Announces Major Operation Targeting the Qakbot Botnet (FBI)

4 – NIST: Quantum-resistant algos will be ready in 2024

Three encryption algorithms that can protect data from quantum computer attacks will be ready next year, which would be a major step in the efforts to prevent a global data-theft disaster.

The U.S. National Institute of Standards and Technology (NIST), whose efforts in this area go back to 2016, announced it has released the draft standards for these quantum-resistant algorithms:

Quantum computers, expected to be widely available around 2030, will be able to decrypt data protected with existing public-key cryptographic algorithms.

“We’re getting close to the light at the end of the tunnel, where people will have standards they can use in practice,” Dustin Moody, a NIST mathematician and leader of the project, said in a statement.

Quantum-resistant algos will be ready in 2024

NIST will field feedback from the world’s cryptographic community about the new algorithms until Nov. 22, 2023. You can send comments to [email protected], [email protected] and [email protected].

For more information about quantum computing’s cybersecurity issues:

5 – Report: Beware Russia-backed Infamous Chisel mobile malware

Cyber agencies from the U.S., the U.K., Canada, New Zealand and Australia are warning about a new mobile malware called Infamous Chisel that targets Android devices.

The Russian military’s Sandworm hacker team created Infamous Chisel to breach Android devices from the Ukraine army, the agencies said in a joint report published by CISA. Infamous Chisel opens up access to infected devices, scans files, monitors traffic and steals information.

CISA warns about Infamous Chisel malware

“Infamous Chisel is a collection of components which enable persistent access to an infected Android device over the Tor network, and which periodically collates and exfiltrates victim information from compromised devices,” the report reads.

Cybersecurity teams will find a detailed description of Infamous Chisel in the report, along with indicators of compromise, detection rules and signatures.

To get more details, check out CISA’s announcement, the NCSC’s announcement, the joint report and CISA’s page about Russian cyber threats, along with coverage from Bleeping Computer, The Record and ComputerWeekly.

6 – U.S. urges space industry to tighten up its cybersecurity

Companies involved in the U.S. commercial space industry need to be increasingly vigilant about attackers backed by foreign governments that want to disrupt their operations and steal their intellectual property.

So said the FBI, the National Counterintelligence and Security Center and the Air Force Office of Special Investigations in their advisory “Safeguarding the U.S. Space Industry.

“Space is fundamental to every aspect of our society, including emergency services, energy, financial services, telecommunications, transportation, and food and agriculture. All rely on space services to operate,” reads the advisory.

US urges space industry to tighten up its cybersecurity

The document lists indicators that could signal that “foreign intelligence entities” are trying to target space industry companies, and offers recommended mitigations. It also encourages targeted businesses to report concerns and potential attacks.

For more information about cybersecurity challenges in the space industry:

Related Articles

Cybersecurity News You Can Use

Enter your email and never miss timely alerts and security guidance from the experts at Tenable.