Today's news focuses on the AI debate, from security risks and legal grey areas to basic questions about how these systems should work in practice. We also feature data breach updates.
Let's take a closer look.
US Software Stocks Hit by $1 Trillion Selloff as AI Fears Take Hold
US software stocks are falling as investors reassess what generative AI actually means for the industry. Companies face questions about whether some software products are necessary once AI tools can perform similar tasks on their own.
That shift has pushed the S&P 500 software and services index down for seven straight sessions, wiping out close to $1 trillion in sector value since late January.
Losses have hit large and small firms alike. Salesforce and Microsoft both posted sharp one-day declines, with some shares dropping by as much as 7%.
Pressure intensified after new AI plug-ins from developers such as Anthropic demonstrated they could handle work like legal research, sales preparation, and data analysis: tasks many businesses currently pay software vendors to manage.
Thomson Reuters was among the hardest hit. Its shares slid sharply as investors questioned how defensible its legal information business remains if AI systems can search, summarise, and interpret similar material at speed.
Software stocks are now trading about 21% below their 200-day average, a level last seen in mid-2022. While some investors see prices beginning to reflect the risk, most remain cautious about which companies can still protect their core products as AI tools grow more capable.
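The "21% below their 200-day average" figure is a standard trend gauge. As a minimal sketch (the price series below is invented, not real index data), the deviation is simply the latest price compared against the mean of the trailing window:

```python
# Sketch: measuring how far a price sits below its trailing moving average.
# The series below is made-up illustrative data, not real index values.

def pct_vs_moving_average(prices, window=200):
    """Return the latest price's percentage deviation from its trailing average."""
    if len(prices) < window:
        raise ValueError("need at least `window` observations")
    avg = sum(prices[-window:]) / window
    return (prices[-1] - avg) / avg * 100

# Synthetic example: a flat series at 100.0 that drops to 80.0 on the last day.
prices = [100.0] * 199 + [80.0]
print(round(pct_vs_moving_average(prices), 1))  # roughly -19.9
```

A reading around -20% means the index sits a fifth below its own 200-day trend, which is why commentators flag it as unusually stretched.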
Substack Confirms 700,000 User Records Exposed in 2025 Breach
Substack has acknowledged a data breach that exposed personal information belonging to around 700,000 users. The company disclosed that email addresses, phone numbers, and internal account metadata were accessed after a vulnerability was exploited in October 2025.
The issue was identified and patched on February 3, 2026, but the four-month window between the breach and its discovery remains a point of concern for users.
Substack stated that passwords and payment details were not affected, noting that financial data remained encrypted. However, exposed contact information increases the risk of targeted phishing, particularly for paying writers and subscribers.
Users are being urged to treat unexpected emails or payment requests claiming to originate from Substack or Stripe with caution.
The incident comes amid strong platform growth, with Substack reporting more than five million paid subscriptions by early 2025.
Stanley Malware Uses Browser Extensions to Steal Credentials in Plain Sight
Varonis researchers have identified a malware-as-a-service toolkit known as Stanley that enables attackers to take over web browsers without changing the visible URL.
The toolkit is currently being promoted on Russian-language forums, with prices reported between $2,000 and $6,000. It allows attackers to distribute malicious Chrome extensions that pose as legitimate tools and can slip through official store reviews.
After installation, the extension injects phishing screens directly into real websites using iframe overlays. Because the browser continues to show the correct domain, users may not notice the attack. This defeats visual trust checks: the habit of glancing at the URL bar for the padlock or the expected domain.
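Overlay attacks of this kind depend on the extension holding injection rights across arbitrary sites. As a hedged illustration (not Varonis's actual detection logic, and the manifest shown is invented), a defender reviewing an unpacked extension might start with a crude static check for content scripts with broad host matches:

```python
# Heuristic sketch: flag unpacked Chrome extensions whose manifests combine
# content scripts with broad host access -- the rights an overlay-injection
# extension needs. Illustration only; real review requires deeper analysis.
import json

RISKY_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def overlay_risk(manifest: dict) -> bool:
    """Return True if the extension can inject scripts into arbitrary sites."""
    scripts = manifest.get("content_scripts", [])
    matches = {m for s in scripts for m in s.get("matches", [])}
    perms = set(manifest.get("permissions", [])) | set(
        manifest.get("host_permissions", [])
    )
    return bool(RISKY_HOSTS & (matches | perms))

manifest = json.loads("""{
  "manifest_version": 3,
  "name": "Helpful Tool",
  "content_scripts": [{"matches": ["<all_urls>"], "js": ["inject.js"]}]
}""")
print(overlay_risk(manifest))  # broad injection rights -> worth a closer look
```

Broad host access is common in legitimate extensions too, which is exactly why Stanley-style tooling can pass store review; a flag like this narrows the pile rather than proving malice.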
Stanley includes a control panel that lets operators push fake browser alerts, manage compromised systems, and trigger attacks in real time. Rather than breaking into websites, the toolkit steals login details through trusted browser extensions, making the browser itself the target.
Sygnia Reveals AI-Driven Network of More Than 150 Cloned Law Firm Websites
Sygnia has uncovered a large fraud scheme involving more than 150 fake websites posing as legitimate law firms. The group behind it used AI and automation to copy real legal websites quickly and at scale.
The scheme came to light after one law firm discovered several copies of its own site, prompting investigators to dig deeper and uncover a much wider network.
The fake websites are deliberately simple. Most consist of only a homepage and a contact form, just enough to appear credible. To keep them online, the operators regularly rotate domain names, IP addresses, and hosting providers. They also sit behind Cloudflare, which makes tracing and takedowns more difficult. Each site uses its own SSL certificate, so no single takedown can knock out the whole network.
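Because the cloned sites reuse near-identical page content even as domains rotate, investigators can group them by content fingerprint. A minimal sketch of that idea (the domains and HTML below are invented; a real pipeline would fetch and normalise live homepages):

```python
# Sketch: grouping suspected clone sites by near-identical homepage content.
# Domains and HTML here are invented for illustration.
import hashlib
import re
from collections import defaultdict

def fingerprint(html: str) -> str:
    """Hash the page text with tags stripped and whitespace/case normalised."""
    text = re.sub(r"<[^>]+>", " ", html)           # drop HTML tags
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

pages = {
    "firm-recovery-legal.example": "<h1>Smith Law</h1><p>No upfront fees.</p>",
    "smithlaw-claims.example":     "<h1>Smith  Law</h1> <p>No upfront fees.</p>",
    "unrelated.example":           "<h1>Bakery</h1><p>Fresh bread daily.</p>",
}

clusters = defaultdict(list)
for domain, html in pages.items():
    clusters[fingerprint(html)].append(domain)

clones = [group for group in clusters.values() if len(group) > 1]
print(clones)  # the two lookalike domains land in one cluster
```

Exact-hash matching breaks as soon as the operators vary wording, so real investigations lean on fuzzier similarity measures; the clustering idea is the same.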
The main targets are people who have already been scammed. The fake firms claim they can help recover lost funds and stress that there are no upfront fees, a tactic designed to lower suspicion.
Once contact is made, victims are pressured to share personal or financial details. The scheme shows how AI has made this kind of fraud faster to launch, easier to scale, and harder to disrupt.
ShinyHunters Publishes Panera Bread Customer Data After SSO Breach
The ShinyHunters group has released customer data stolen from Panera Bread after ransom negotiations reportedly collapsed. Early claims pointed to as many as 14 million affected records, but current assessments confirm exposure of at least 5.1 million customer records.
The leaked information includes customer names, email addresses, and physical mailing details.
Investigators believe access was gained by compromising a Microsoft Entra single sign-on workflow, likely through voice phishing. Attackers allegedly posed as internal IT staff to trick employees into sharing login credentials or multi-factor authentication codes. This gave them access to cloud systems without exploiting any traditional software flaws.
Panera has acknowledged the incident and stated it is cooperating with authorities.
The breach shows a wider pattern of attacks targeting identity systems and SSO platforms. As more organisations centralise access controls, a single compromise can open the door to multiple systems at once.
Cape Town Halts AI Traffic Camera Expansion Pending Legal Review
Cape Town has paused plans to expand its use of AI-powered traffic cameras while it waits for legal clarity on whether evidence generated by the systems can be used in court. The technology, tested on Philip Kgosana Drive, successfully identified drivers using mobile phones or failing to wear seatbelts.
The city is seeking guidance from the National Director of Public Prosecutions on whether fines issued by automated systems can stand without human review. The issue is further complicated by South Africa’s Protection of Personal Information Act, which restricts how facial images and biometric data may be collected and used.
During the trial, every flagged offence was reviewed by a traffic officer before a fine was issued. While this helped ensure accuracy and fairness, city officials acknowledged it undercuts the efficiency the technology is meant to deliver. The outcome is expected to affect how AI-based traffic enforcement is rolled out nationwide.
For more insights, download our 2026 Cyber Threat Outlook to stay ahead of the latest risks.
You can also join our newsletter to get security updates delivered straight to your inbox.
