Meta appears to have decided Britain's Online Safety Act would be much easier to swallow if Ofcom stopped counting all the money the social media giant makes everywhere else.

The Facebook and Instagram owner has launched a legal challenge against the UK comms regulator, arguing that the way Ofcom calculates fees and potential penalties under the Online Safety Act is fundamentally wrong because it relies on global turnover rather than UK-specific revenue.

The law allows Ofcom to fine companies up to 10 percent of their qualifying worldwide revenue, or £18 million, whichever is higher. For Meta, which brought in about $201 billion last year, that means the numbers stop sounding like regulatory penalties and start sounding like national infrastructure projects.

Meta is now seeking a judicial review in the High Court over how Ofcom defines "qualifying worldwide revenue." The dispute boils down to three complaints. First, Meta argues that Ofcom should only consider UK revenue tied to regulated services, not the company's global income. Second, it objects to rules that treat multiple services under the same corporate umbrella as jointly liable, potentially exposing the wider organization to larger penalties. Third, it is challenging how Ofcom aggregates revenue across services rather than assessing them individually.

An Ofcom spokesperson told The Register: "Meta have initiated a judicial review in relation to online safety fees and penalties. Under the Online Safety Act, these are to be set with reference to a provider's 'Qualifying Worldwide Revenue', which we have defined based on a plain reading of the law.

"Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis. We will robustly defend our reasoning and decisions."

A Meta spokesperson told The Register: "We are committed to cooperating constructively with Ofcom as it enforces the Online Safety Act.
However, we and others in the tech industry believe its decisions on the methodology to calculate fees and potential fines are disproportionate. We believe fees and penalties should be based on the services being regulated in the countries they're being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history."

The case marks the latest flare-up between Silicon Valley and Britain over the Online Safety Act, which has already triggered complaints from US politicians, free speech campaigners, and tech firms unhappy about the scale of Ofcom's new powers.

The regulator has not been shy about flexing them either. It has already threatened action against Elon Musk's X over sexually explicit AI-generated images linked to Grok and, in March, issued its first fine under the regime against 4chan. Meta appears to have looked at where that enforcement road leads and decided now was the time to argue about the math. ®
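For a sense of scale, the penalty ceiling described above works out as a simple maximum. Here is a minimal sketch; the dollar-to-pound rate is an illustrative assumption, not part of the Act or Ofcom's methodology:

```python
# The Online Safety Act cap described above: the greater of
# 10 percent of qualifying worldwide revenue or £18 million.
def max_penalty_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000)

# Illustrative only: Meta's ~$201bn at an assumed $1.25 to the pound
meta_revenue_gbp = 201e9 / 1.25
print(f"£{max_penalty_gbp(meta_revenue_gbp):,.0f}")  # → £16,080,000,000
```

On those assumptions the theoretical ceiling is roughly £16 billion, while a small provider would face the £18 million floor instead, which is the asymmetry Meta's complaint turns on.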
Mozilla fixed 423 Firefox security bugs in April, a repair rate more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year.

The browser maker previously said Anthropic's ballyhooed Mythos Preview model found 271 of these in Firefox 150. Now, a trio of technical types has come forward to provide a bit more detail about what Mythos (and its less storied sibling Opus 4.6) actually found. But they also highlight something that may matter more than the model: the agentic harness – the middleware mediating between the AI and the end user.

Brian Grinstead, Firefox distinguished engineer, Christian Holler, Firefox tech lead, and Frederik Braun, head of the Firefox security team, observe that over the past few months, AI-generated security reports have gone from slop to something rather more tasty. They attribute the transformation to better models and to better ways of harnessing those models – steering them in a way that increases the ratio of signal to noise.

But they also appear to be aware that there's some skepticism in the security community about Mythos. So they've decided to publicize selected wins in an effort to encourage others to jump aboard the AI bug remediation train.

"Ordinarily we keep detailed bug reports private for several months after shipping fixes and issuing security advisories, largely as a precaution to protect any users who, for whatever reason, were slow to update to the latest version of Firefox," they said. "Given the extraordinary level of interest in this topic and the urgency of action needed throughout the software ecosystem, we've made the calculated decision to unhide a small sample of the reports behind the fixes we recently shipped."

The post links to a dozen Firefox bugs with varying degrees of severity.
The list includes, for example, a 20-year-old heap use-after-free bug (high severity) that a web page could trigger using the XSLTProcessor DOM API without any user interaction.

Many of these bugs are sandbox escapes, they note, which are difficult to find using techniques like fuzzing. AI analysis, they say, helps provide broader security coverage. And they add that it has helped validate prior browser hardening work designed to prevent prototype pollution attacks – audit logs showed AI models making unsuccessful exploitation attempts using this technique.

Following Anthropic's announcement of Project Glasswing – a program for companies to gain early access to Mythos because it's touted as too dangerous for public release – security experts expressed skepticism.

For example, Davi Ottenheimer, president of security consultancy flyingpenguin, wrote in an April 13 blog post, "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results. The Glasswing consortium is regulatory capture dressed up poorly as restraint."

He subsequently ran a test in which he strapped Anthropic's lesser models Sonnet 4.6 and Haiku 4.5 into a harness called Wirken with an auditing skill called Lyrik. The result was eight findings in two minutes at a cost of about $0.75, Ottenheimer claims, noting that two of the eight matched bugs Mythos had identified.

Other security folk have also reported that bug hunting and exploit development can be quite productive with off-the-shelf models like Opus 4.6, which among other virtues costs about a fifth as much as Mythos.

In an email to The Register, Ottenheimer said, "There's a fundamental philosophical failure in the Mozilla post. A reading and a measurement are not the same thing. I don't see a measurement, but they seem to want us to believe we're looking at one.

"When they give us the 'behind the scenes math' it's circular, a trick.
'Mythos found 271 bugs' is what Mythos found, not what other tools could not find against the same code. Why leave it as an assumption if it can be proven?"

Ottenheimer said Mozilla advocates that every project adopt a similar approach without proving the merits of that approach. "It's like saying if you don't drink Coca-Cola, you can't run a mile under six minutes, because that's what a guy sponsored by Coca-Cola just did," he said. "The bar moves on rhetoric, marketing, not proper evidence. That is the capture crew again."

He notes that the merits of Mythos might be more convincing if Mozilla had reported they couldn't do this work without Mythos. And since they're not saying that, he suggests, it's worth asking why there's no transparent comparison of Mythos to other models. He points to Mozilla's admission that Opus 4.6 was already identifying "an impressive amount of previously unknown vulnerabilities."

"Mozilla never quantifies what Opus 4.6 [did] before saying what Mythos added," he said. "So 271 attributed to Mythos doesn't fit the analysis. And there's a deeper reveal when they say 'we dramatically improved our techniques for harnessing these models.' The improvement may be entirely in the harness, not as much in the model. This maps to my own experience. A nail gun has advantages over the hammer, yet without being in the right hands the outputs are as bad or worse." ®
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot?

That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI.

The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON.

The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings.

"The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges — no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system.

The PoC is demonstrated in a video. It worked on Claude Code CLI v2.1.114, as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published.

"It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently."

Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model.
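To make the mechanics concrete, here is a hedged sketch of what such a booby-trapped repository might carry. The server name and command below are hypothetical, not taken from Adversa's PoC; only the setting names come from the report above. A cloned repo's .mcp.json could declare an attacker-controlled server:

```json
{
  "mcpServers": {
    "build-helper": {
      "command": "node",
      "args": ["tools/server.js"]
    }
  }
}
```

A bundled .claude/settings.json containing something like `{"enableAllProjectMcpServers": true}` would then mean that accepting the generic "Yes, I trust this folder" prompt silently approves that server, spawning tools/server.js with the developer's full privileges.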
CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision.

Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI.

"The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant."

Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via the SDK rather than the interactive CLI, so there's no terminal prompt at all.

Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project. The idea is that a malicious repository should not be able to approve its own servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers.

Anthropic did not respond to a request for comment. ®
It's World Password Day, and there's really no better way to celebrate than with news that a majority of supposedly secure password hashes can be cracked with a single GPU in less than an hour, some in less than a minute.

Using a dataset of more than 231 million unique passwords sourced from dark web leaks - including 38 million added since its previous study - and hashing them with MD5, researchers at security firm Kaspersky found that, using a single Nvidia RTX 5090 graphics card, 60 percent of passwords could be cracked in less than an hour, and a full 48 percent in under 60 seconds.

Sure, that's not exactly your run-of-the-mill desktop graphics processor given its price, but it highlights an important point: It takes surprisingly little to crack the average password hash. Aspiring cybercriminals don't even really need their own 5090, Kaspersky notes, as they can easily rent one from a cloud provider and crack hashes for a few bucks.

The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach. "One hour is all an attacker needs to crack three out of every five passwords they've found in a leak," Kaspersky noted.

Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.

In case you're wondering whether there's a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you - only a few percent - but it's still a move in the wrong direction.
"Attackers owe this boost in speed to graphics processors, which grow more powerful every year," Kaspersky explained. "Unfortunately, passwords remain as weak as ever."

How about a World Let's-Stop-Relying-On-Passwords Day?

News of the death of the password has, unfortunately, been greatly exaggerated over the past couple of decades, yet most of us still rely on them multiple times a day. It likely won't surprise El Reg readers to learn that us vultures are inundated with pitches for events like World Password Day, and most of those received this year had the same takeaway: We really need to get a move on with ditching passwords, or, at the very least, rethinking our security paradigms.

Chris Gunner, a CISO-for-hire at managed service provider giant Thrive, told us in emailed comments that there's no reason to ditch passwords entirely, but they need to be just one part of a broader identity-based security strategy. "Even a strong password can be undermined if the wider identity and access environment is not properly managed," Gunner said.

Passwords should be paired with a second factor, preferably biometric, said Gunner, because it's the most difficult for hackers to bypass. "MFA controls should then be joined by identity governance and endpoint protection so gaps between systems are reduced," Gunner added, recommending that a broader zero trust model be established as well, restricting lateral movement possibilities via a compromised account.

Senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell said that World Password Day messaging shouldn't stop at telling people to improve their personal security posture either. Passwords aren't going anywhere for a long while, Furnell explained in an email, and inconsistent adoption of new security technologies will mean users are left at risk as certain providers fail to adapt.
"Many sites and services still don't offer passkey support, so users will find themselves with a mixed login experience," Furnell explained. "While some might argue that it's the user's responsibility to protect themselves properly, they need to know how to do it."

The professor noted that, in many cases, users aren't told how to create a good modern password, and in other cases, sites simply don't enforce adequate password requirements to make passwords secure, to the degree that they can be made so.

"This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so," Furnell told us.

You heard the man - time to upgrade that user security stack. No matter how safe you think those passwords might be, with their complex requirements and properly hashed storage, it probably won't take too long for someone to break in, making it an organizational responsibility to ensure there's yet another locked door behind the first one. ®
PWNED

Welcome back to PWNED, the weekly column where we turn a white-hot spotlight onto the cracks and crevices in company security and write about those who have let their guard down, often in the name of convenience, incompetence, or just plain laziness. Today's tale of woe concerns the need to secure a network and the dangers of an insecure password.

Our story comes courtesy of Roger Grimes, CISO advisor at security firm KnowBe4. He recounts a time when he had to get into a client's network but didn't have the credentials.

Grimes was installing accounting software for a client and, as a result, needed to take the network down for a day. To make sure that he didn't disturb any work, he decided to log into the system on a Saturday. Unfortunately, he was missing the admin password he needed to uninstall old software and add the new app. Since it was the weekend, no one was answering their work phones to give him the information he needed, and there was a good chance he would have to delay the upgrade until the following weekend.

Grimes could have given up right there, but he had an idea. Why not try to figure out what the password was? The situation reminded him of a movie. "You know, the scene where the hacker is sitting at the terminal trying to log on, but the victim refuses to give up credentials. So the hacker starts typing random passwords out of thin air," he said. "And wouldn't you know it? They correctly guess the password at the last possible moment."

After trying numerous passwords, the advisor thought about a famous movie he had just watched: Citizen Kane. He decided to try "rosebud," and voilà. (This vulture can identify with the Orson Welles focus, having just watched The Third Man this week.)

It's a good thing that it was Grimes, a legit contractor, guessing passwords instead of some miscreant. Picking a password from a movie plotline is a bad idea and, in this case, made even worse by the lack of numbers, capital letters, or symbols in the password.
If you're picking out a password, you might be better off generating a strong password that's a string of random numbers and letters and then storing it in a password manager. Then, for the password manager itself, consider a passphrase that contains capital letters, symbols, and numbers, such as "Shoe-Please6-Wrapped-Carbon-Wear," so you have a fighting chance of remembering it. You might also use a passphrase for your admin password – you can generate a random one using Keeper's Passphrase Generator.

Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity available upon request. ®
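Generating such a passphrase takes only a few lines of code. Here is a minimal sketch using Python's secrets module; the eight-word list is a stand-in for illustration, whereas a real Diceware-style list (such as the EFF's) has thousands of entries:

```python
import secrets

# Stand-in wordlist; use a large published list in practice.
WORDS = ["shoe", "please", "wrapped", "carbon", "wear", "lantern", "orbit", "maple"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    # secrets draws from the OS CSPRNG, unlike the random module
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    # tack a digit on the end to satisfy common complexity rules
    return sep.join(words) + str(secrets.randbelow(10))

print(passphrase())  # e.g. Maple-Shoe-Orbit-Wear-Lantern7
```

The security comes from the size of the wordlist and the number of words drawn, so with a serious list the same few lines produce passphrases that are both memorable and hard to brute-force.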
Cybersecurity vendor Arctic Wolf has laid off 250 workers in a restructuring that it says is designed to position the company to invest more in AI through its superintelligence platform and agentic Security Operations Center (SOC), a company spokesperson told The Register.

"We recently made an organizational restructuring to better align the company's structure and investments with our long-term strategy," the spokesperson said. "While these decisions are difficult, they position Arctic Wolf to operate more efficiently, continue investing in our Superintelligence platform and Agentic SOC, and deliver strong value to customers. We remain confident in our direction and momentum."

The layoffs appear to represent less than 10 percent of the total workforce. Arctic Wolf is a privately held company and does not publish a current headcount, but in December 2024 it said it employed more than 2,600 workers, according to a press release issued at the time. According to the website PitchBook, Arctic Wolf has 3,323 employees.

The job cuts appeared to fall across several categories, including sales, product development, and marketing. Some of those let go had been with the company for four years or more in revenue-generating roles such as sales engineer. One senior systems engineer with experience in datacenter infrastructure and cyber threat detection said on LinkedIn he was let go after more than a year with the company.

"Wow! I was not expecting to have such a swing in posts this week from super positive to negative. Today I was laid off by Arctic Wolf due to restructuring," wrote one sales engineer, a day after he had posted about the success his team experienced last year.

Alongside its five global SOCs, Arctic Wolf has offices in Waterloo, Ontario; San Antonio, Texas; Eden Prairie, Minnesota; Bengaluru, India; and other locations worldwide.
Arctic Wolf operates in the crowded endpoint detection and response (EDR) and managed detection and response (MDR) markets alongside CrowdStrike, Rapid7, and SentinelOne. It also competes for channel partners and customers with the likes of Huntress and Blackpoint Cyber.

The company has bet on its Aurora Superintelligence Platform, which combines security data, "Swarm of Experts" AI agents, and humans in the loop to protect customers' systems. ®
You can't trust anyone these days!

Get together with seven of your colleagues, and there's a decent chance one of the eight will say they've either sold company login details in the past year or know someone who has, says UK fraud prevention outfit Cifas.

That figure – 13 percent – is shocking. Just as strikingly, Cifas found a similar 13 percent of employees overall believed selling access to company systems was justifiable, though the org's Workplace Fraud Trends report did not spell out those justifications. Regardless, Cifas says it suggests that there's a worrying shift happening in attitudes toward insider-enabled fraud that should trouble leadership.

Then again, leadership might not be too worried based on the data. Cifas doesn't give a precise number for the share of rank-and-file employees who feel selling credentials is justified, but it does call attention to how leadership feels, and the more power they have, the more they seem to think it's okay to sell their access. Thirty-two percent of managers, 36 percent of directors, and 43 percent of C-suite executives said it was justifiable to sell their login details. Even more shockingly, a full 81 percent of business owners felt the exact same way.

As for why, that's not entirely clear, though Cifas told us it's heard various excuses in the past. Financial challenges, the belief it would be a harmless one-off, confidence they wouldn't get caught, and disgruntlement were among the reasons cited for selling credentials.

If you're wondering who to keep an eye on, Cifas suggests looking at IT and telecoms professionals, who showed the highest tolerance for fraud-related behavior across multiple scenarios covered in the study. Those scenarios included the aforementioned selling of login details, as well as secretly moonlighting for a competitor, using fraudulent references on job applications, expense fraud, and the like.
Selling access to company systems was one of the less common types of fraud covered in the survey, but the 13 percent figure reflects respondents who said they had done it or knew someone who had - meaning that, in a company of 1,000 people, around 130 might report direct or indirect exposure to the behavior. The fact that leadership respondents and IT and telecoms professionals showed higher tolerance for such activity makes the findings more concerning, even if the survey focused specifically on selling login details, in some cases to a former colleague.

This data is specific to the UK, mind you, but there's no reason to assume a relaxed attitude toward such a critical cybersecurity weakness is confined to the Isles - that's about as likely as the person buying those credentials keeping them to themselves.

When asked whether it had data from prior years for comparison, the organization described its findings as revealing "a worrying shift in attitudes toward insider-enabled fraud." However, Cifas said this is the first year it has compiled the report, so it has no comparable data.

Nonetheless, Cifas Director of Learning Rachael Tiffen said in a press release that the point is that organizations need to be aware of how many employees might be willing to sell access to company systems. "These findings show how vital it is for organisations to build fraud-aware cultures, where employees at all levels understand their responsibilities and the consequences of their actions," Tiffen said.

Be sure to pay them well, too. ®
Researchers at Rapid7 say they have spotted what they believe was an Iranian intelligence cyber unit masquerading as the Chaos ransomware gang to hide a state-sponsored espionage operation.

The intrusion was spotted earlier this year, and investigators say breadcrumbs left behind give them "medium confidence" that it was the work of MuddyWater, which has been linked to intrusions affecting Western government and banking networks in recent months.

Attackers began with a Microsoft Teams phishing campaign, which is not uncommon. They also encouraged targets to share their screens. Again, nothing too out of the ordinary. What must have required some expert persuasion, however, was convincing these individuals to enter their credentials into local text files, and even to modify MFA settings so that attacker-controlled devices could complete authentication.

Rapid7 researchers Alexandra Blia and Ivan Feigl wrote: "While connected, the [threat actor (TA)] executed basic discovery commands, accessed files related to the victim's VPN configuration, and instructed users to enter their credentials into locally-created text files.

"In at least one instance, the TA also deployed a remote management tool (AnyDesk) to further facilitate access."

From there, browser artifacts suggested that the attackers lifted credentials through phishing pages. At least one mimicked a Microsoft Quick Assist page.

Armed with valid credentials, the attackers then executed various commands via RDP, which downloaded payloads using curl. These payloads included a backdoor malware dubbed Darkcomp, a malicious Microsoft WebView2 loader to disguise traffic, and an encrypted configuration file that sent instructions to Darkcomp. Then it was a case of performing lateral movement using additional compromised accounts and scooping up sensitive data along the way.
The attackers used the same accounts to send emails internally notifying organization leaders about the intrusion and data theft, and included an onion link leading to Chaos ransomware's data leak site (DLS), where a corresponding entry appeared with all data redacted and hidden behind a countdown timer.

Follow-up emails aimed to build the illusion of a genuine ransomware attack, although the illusion was short-lived. The attackers instructed recipients to look for a file containing "access credentials" they could use to begin ransom negotiations. Unlike the plaintext credential files the attackers had socially engineered the original targets into creating, this file did not actually exist. There was no way to contact the attackers, whereas in a typical scenario the intruders would be looking for a payout. There was also no file encryption, which is inconsistent with Chaos affiliates' typical way of working.

"Despite these inconsistencies in the initial proof-of-compromise, the TA later published the stolen data on its DLS in line with modern extortion tactics," Blia and Feigl wrote. "The leaked data was assessed to be legitimate."

If not for financial gain, then what? MuddyWater – if that is indeed the group behind this – did not extort the organizations in question, nor did it deploy a ransomware payload, but it did pose as an established ransomware group. Rapid7 believes the group did this as an extension of its false-flag operations, either to provide a plausible front for cyberespionage activity or as pre-positioning work to underpin potential destructive cyberattacks.

It wouldn't be the first time MuddyWater or Iranian intelligence (MOIS) was found LARPing as a ransomware crew. Both have previously been linked to an attack on an Israeli hospital, allegedly carried out by a Qilin affiliate.
"Following the subsequent public attribution of that incident to the MOIS, it is plausible that the group adopted alternative ransomware branding, in this case Chaos, in an effort to reduce attribution risk and maintain a degree of plausible deniability," said the researchers.

The unique benefits of masquerading as ransomware crooks include muddying attribution for attacks by leaving behind ransomware breadcrumbs, as well as redirecting defensive efforts toward locating signs of ransomware deployment instead of the backdoors that underpin espionage activity. ®
Privacy groups, VPN providers, and civil liberties outfits have lined up to warn the UK government that its latest plan to slap age gates across swathes of the internet risks breaking the web while doing little to keep kids safe.

In a joint statement, signatories including the Electronic Frontier Foundation, Mozilla, the Open Rights Group, Proton, and the Tor Project took aim at proposals now moving forward after the Children's Wellbeing and Schools Bill cleared Parliament, with access to some platforms, services, and specific features potentially restricted by age checks.

"The open internet is a global public resource that has long since become foundational to the flourishing of individuals, businesses, and societies," the letter states, warning that "this openness and the opportunities it affords are coming under threat in the UK."

Ministers are now consulting on measures that could include curfews for younger users and restrictions across services ranging from games and VPNs to static websites. The signatories say that will quickly turn into a system where everyone, not just children, has to prove their age to get full access. "Implementing such access restrictions hinges on all users having to verify their ages, not just young people," the letter warns, adding that the approach "focuses on restricting young people's access, rather than ensuring services are designed to uphold their rights and interests by default."

Early results are not exactly inspiring. It's been months since tougher checks under the Online Safety Act began rolling out, and some systems have already been fooled by little more than a drawn-on mustache, raising questions about how effective the tech really is at keeping minors out. This hasn't gone unnoticed.
"Existing age assurance technologies are either insufficiently accurate, undermine privacy and data security, or are not widely available across populations," the letter says, warning that rolling them out broadly "creates serious new security threats."

It is not just a privacy headache either: the groups argue the policy could tilt the market further toward Big Tech. Mandating checks across more services risks "cementing the dominance of gatekeeper app stores, operating systems, and platforms' walled gardens," while turning the web into "a patchwork of age-gated jurisdictions."

Instead of doubling down on access controls, the groups argue policymakers are targeting the wrong problem. "These risks are real and require thoughtful policy interventions that address the root of the issue, not just simplistic policies like access bans," the letter says, pointing to business models built on "massive collection of user data" as a bigger driver of harm.

The closing line does not leave much room for interpretation: "Now is the time to hold tech to account, not undermine the open internet." ®
India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree. The Board is India’s equivalent of the US Securities and Exchange Commission, or the UK’s Financial Conduct Authority. On Tuesday, the Indian regulator issued an advisory outlining the dangers. In response to those threats, the Board has established a taskforce that will examine the risks posed by models like Mythos, share threat intelligence, report incidents, and initiate a review of cybersecurity at third-party software vendors that supply the regulator and the entities it oversees. The advisory then offers some basic infosec advice: ensure patches are up to date, conduct audits of potential vulnerabilities, inventory APIs and secure them, run a serious SOC and take its advice, and harden systems by adopting principles such as zero-trust networking and running only essential services. The regulator also told participants in India’s equities markets to have their IT committees issue guidance on how to mitigate risks created by AI-led vulnerability detection models, then develop a plan to use AI as part of their infosec armoury. “Also, undertake other measures including recalibration of risks for AI accelerated threats, AI-augmented SOC transformation, and continuous vulnerability management using AI tools,” the advisory states. The Board directed the above advice at 19 different classes of company, ranging from venture capitalists to merchant bankers, mutual funds, stock exchanges, and even niche suppliers such as agencies that store know-your-customer information. Other regulators around the world have also acknowledged the risks Mythos poses. US Treasury Secretary Scott Bessent convened an emergency meeting with the nation’s banks a few weeks back. 
Singaporean regulators did likewise yesterday. Australian regulators sent local banks a strongly worded reminder that they must develop AI strategies that consider risks the technology creates. Hong Kong’s Monetary Authority is working on new infosec guidance for the age of Mythos. India’s approach stands out for effectively putting the entities it regulates on alert to an imminent threat and ordering them to take action to prevent problems. ®
Securities regulator urges market players to develop new strategies and nail cyber-basics before AI models fuel mass attacks
India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree.…
ServiceNow announced an expansion of its AI Control Tower, transforming what began last year as a governance dashboard into what the company now describes as a command center for managing AI assets across an entire enterprise, including those running outside ServiceNow's own platform. The updated AI Control Tower, shipping as part of ServiceNow's Australia platform release, now operates across five areas: discovery, observation, governance, security, and measurement. The company said that this is its answer to AI agent sprawl, as enterprises have deployed more AI than they can account for and the tools to govern it have not kept pace. “What we launched last year gave customers a governance layer, but what we're shipping this year goes significantly deeper, evolving from visibility and management into a full enterprise AI command center,” Nenshad Bardoliwalla, group vice president of AI products at ServiceNow, told reporters during a media briefing ahead of the company’s annual product show, Knowledge 26. “Our AI control tower ensures every AI system asset and identity is compliant, secure, and aligned with your strategy.” The AI Control Tower now reaches beyond ServiceNow's own platform with 30 new enterprise connectors that span all three major hyperscalers (Amazon Web Services, Google Cloud, and Microsoft Azure), along with enterprise applications such as SAP, Oracle, and Workday. The system can now discover AI assets (models, agents, prompts, and datasets) running across an organization's full technology estate, not just those deployed on ServiceNow. “With our Veza integration, we're bringing patented access graph technology into the AI control tower, extending identity access governance to hyperscaler AI environments and every connected device, every agent, every model, every action has scope permissions, least privilege enforcement and auditable identity chains,” Bardoliwalla said. 
Bardoliwalla walked through a demo in which the AI Control Tower detected a prompt injection attack on a pricing agent. The system identified malicious instructions hidden inside order payloads, mapped the blast radius of affected systems using access graph technology from Veza, and presented a kill switch to disable the compromised agent, without human intervention. "You need a system that senses, decides and acts on its own, that can scale with your AI portfolio, not your head count," said Bardoliwalla. Two recent acquisitions underpin the security architecture. ServiceNow announced in December it would acquire Veza, which contributes an access graph that maps every identity and access path across systems whether it belongs to humans, machines, or AI agents. It also knows which entities have create, read, update, and delete-level permissions. ServiceNow said the access graph currently maps over 30 billion fine-grained permissions. When a vendor pushes a new version of a model or agent, the platform detects permission changes and automatically triggers a re-scoping workflow. Traceloop, which ServiceNow acquired in March, provides deep AI observability inside the Control Tower by tracking every LLM call running in the system. The integration delivers continuous runtime monitoring with live alerts, replacing what ServiceNow described as the periodic manual audits most enterprises still rely on. Teams can watch how agents reason, where they make decisions, and when to course-correct. ServiceNow also addressed the cost side of the AI equation. Control Tower now includes cost tracking and ROI dashboards to give finance teams visibility into model spend. The measurements track token consumption across providers such as OpenAI, Anthropic, and Google so customers can predict costs and tie spending to business outcomes. 
ServiceNow said it uses the AI Control Tower internally to manage over 1,600 AI assets and tracked half a billion dollars in cumulative AI value from internal use cases in 2025. "The number one question every CFO is asking is, where's the value?" said Bardoliwalla during the briefing. He added that runaway model spend ranks among the biggest pain points enterprises currently face as they scale AI deployments. Alongside the Control Tower expansion, ServiceNow announced Action Fabric, a mechanism that opens the company's full workflow engine to external AI agents. Through a generally available MCP server, agents built on Claude, Copilot, or custom platforms can now trigger governed enterprise actions — not just read and write data, but execute the flows, playbooks, approval chains, and catalog requests that ServiceNow customers have built over years. Anthropic is the first design partner for Action Fabric. The integration connects Claude directly to ServiceNow's governed system of action. "The gap between knowing what needs to happen and making it happen is where productivity dies," said Boris Cherny, head of Claude Code at Anthropic, in a statement. "Connecting Claude Cowork to ServiceNow's system of action closes that gap with enterprise execution, directly in the flow of work." Every action routed through Action Fabric runs through the AI Control Tower, so it carries identity verification, permission scoping, and a full audit trail. The MCP server is included in every Now Assist and AI Native SKU, with additional features planned for the second half of 2026.
CISA is warning that a newly disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit. Tracked as CVE-2026-31431, the bug sits in the Linux kernel and gives low-privileged users a way to take full control of a system by modifying data they should only be able to read, effectively turning limited access into full root privileges on unpatched machines. The issue was disclosed by cybersecurity consultancy Theori, which said the flaw was discovered by its AI-powered penetration testing platform, Xint, and reported to the Linux kernel security team on March 23. Major Linux distributions pushed out patches ahead of public disclosure, which Theori published alongside a proof-of-concept exploit. The Python-based code works against Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16, but the researchers warned that every mainstream Linux kernel built since 2017 is potentially exploitable. "Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution," Theori says. That level of reliability has not gone unnoticed. CISA, the US government's cybersecurity agency, has added the bug to its Known Exploited Vulnerabilities catalog and ordered Federal Civilian Executive Branch agencies to patch within two weeks, setting a May 15 deadline. Microsoft backed CISA's findings and said it is already seeing signs of activity following the PoC's release. "Given the availability of a fully working exploit proof-of-concept (PoC) and the race to patch systems, Microsoft Defender is seeing preliminary testing activity that might result most likely in increased threat actor exploitation over the next few days," the company warned. The mechanics help explain the urgency. The attack is local, requiring only low-level access and no user interaction, so anyone who already has a foothold on a vulnerable box can try their luck. 
It is the kind of bug that turns a small break-in into full control pretty quickly. As The Register reported last week, the flaw stems from how the kernel handles certain cryptographic operations, opening a path to tamper with cached data in ways that were never meant to be user-controlled. With a reliable exploit now in the wild, that design quirk has effectively turned into a universal privilege-escalation trick. ®
Researchers dropped a reliable root exploit and it didn’t sit idle for long
CISA is warning that a newly disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit.…
Real estate giant Cushman & Wakefield has confirmed a data breach after two cybercrime groups, ShinyHunters and Qilin, separately claimed responsibility for attacks on the company. A spokesperson told The Register the attack was "limited" in scope and stemmed from vishing (voice phishing), suggesting an employee was socially engineered. The representative said: "Cushman & Wakefield recently became aware of a limited data security incident due to vishing. We have activated our response protocols, including taking steps to contain the unauthorized activity and engaging third-party expert advisors to support a comprehensive response. "Our systems and operations continue to run normally, and we are working diligently to investigate the incident. We recognize the trust placed in us to protect sensitive data and we take this responsibility very seriously." Cushman & Wakefield (C&W) did not address the apparent dual targeting by both ShinyHunters, which operates a pay-or-leak model, and Qilin, currently viewed as the world's most prolific ransomware group. There is no previously established coalition between ShinyHunters and Qilin, which suggests the two alleged attacks are separate but coincidentally timed. In a message sent to The Register, ShinyHunters claimed they attacked the company on May 1, while Qilin listed C&W on its data leak site on May 4. Qilin's website listing did not detail how it allegedly attacked C&W, although ShinyHunters claimed it stole "over 500,000 Salesforce records containing PII and other internal corporate data." ShinyHunters set a May 6 deadline for C&W to make contact to prevent the data from being leaked, but the cybercriminals claimed this had yet to happen. ShinyHunters has been on something of a tear recently. Known for its large-scale, high-impact attacks, the group's latest wave of activity began in March when it laid claim to an expansive supply chain attack after breaching Salesforce customers via the CRM giant itself. 
At the time, it said it had stolen data belonging to Salesforce and more than 100 of its high-profile customers. Since then, big-name brands like ADT, Carnival Cruise Line, Rockstar Games, Vimeo, and others have all confirmed ShinyHunters-linked cyberattacks, although not all were explicitly linked to its earlier Salesforce compromise. ®
Cushman & Wakefield activated incident response protocols after serial extortionists issued separate threats
Real estate giant Cushman & Wakefield has confirmed a data breach after two cybercrime groups, ShinyHunters and Qilin, separately claimed responsibility for attacks on the company.…
More than 119,000 Vimeo users' email addresses were extracted in a breach traced to a third-party analytics vendor, according to Have I Been Pwned. The incident first surfaced in April when the ShinyHunters crew added Vimeo to its growing "pay or leak" hit list, claiming it had pulled hundreds of gigabytes of data and threatening to dump the lot unless a deal was struck. That dump has since landed, and breach notification service Have I Been Pwned now puts a number on at least part of the fallout: 119,000 unique email addresses, in some cases paired with names. Vimeo last week confirmed that data was taken, but stopped short of saying how many people were affected. The company pinned the incident on Anodot, a third-party analytics provider used across its systems, and said the attacker gained access via that integration rather than breaking into Vimeo directly. Anodot has not said anything publicly, but its status page shows the incident kicked off on April 4. According to Vimeo, the stolen databases were heavy on technical data, video titles, metadata, and some customer email addresses. The company has been keen to stress what was not included: no actual video content, no valid login credentials, and no payment card information. That does not make the data harmless. Email lists like this get reused, resold, and recycled into phishing runs for years, especially when they come with enough context to make a message look convincing. The attackers, for their part, claim the breach went deeper. In a post seen by The Register, ShinyHunters alleged that "Snowflake and BigQuery instances data was compromised thanks to Anodot.com," adding that the company "failed to reach an agreement" despite multiple attempts to negotiate. Vimeo says it has cut off the problem at the source, disabling Anodot credentials, ripping out the integration, and bringing in outside security help while notifying law enforcement. 
The investigation is ongoing, and the company says it will update customers as it learns more. For now, the numbers from Have I Been Pwned seem to fill in the gap left by Vimeo's initial disclosure, and underline a familiar problem: you can lock down your own systems, but your vendors only have to slip once. ®
Vimeo points finger at analytics supplier Anodot, says no logins or card data were touched
More than 119,000 Vimeo users' email addresses were extracted in a breach traced to a third-party analytics vendor, according to Have I Been Pwned.…
Romance fraudsters scammed Britons out of £102 million ($138 million) last year, according to the latest police figures. That works out to roughly £280,000 ($379,000) a day, the City of London Police said Tuesday. The average victim loses around £9,500 ($12,866) per scam, though individual cases have reached £1 million ($1.35 million). The figures come from Report Fraud, a City of London Police service that logged 10,784 romance scam reports in 2025, a 29 percent year-on-year bump. "Romance fraud is particularly harmful because it targets trust and emotional connection," said Detective Superintendent Oliver Little at the City of London Police. "Offenders will often spend significant time building what appears to be a genuine relationship before attempting to exploit their victim financially," he added. "While the monetary losses can be substantial, the emotional impact is often just as damaging. This crime can affect anyone, and by reporting it, victims help us build intelligence, disrupt offenders, and protect others from harm." The scams disproportionately hit older victims, with almost half of 2025's total losses coming from those aged 55-74. Men submitted the highest number of reports, but women incurred the greatest financial losses. The playbook is well-established: criminals build fake profiles on social media, cultivate rapport with targets – often expressing strong feelings early – then request money for various reasons, including travel, medical expenses, and other invented needs. City of London Police has urged the public to look out for common tactics used by fraudsters: unsolicited affection from strangers online, excuses to avoid video calls or in-person meetings, and sudden investment pitches. A second opinion from a friend or family member can help. Confidence/romance scams are an even bigger problem in the US, where they rank as the fifth most costly form of cybercrime. 
An annual report from the FBI's Internet Crime Complaint Center (IC3) estimated total losses in 2025 at $929.4 million, ahead of data breaches, phishing, extortion, and ransomware. In the UK, romance fraud sits at the lower end of the cybercrime spectrum. Advance fee fraud, banking fraud, investment fraud, and online shopping scams all generate far more reports. Total fraud losses in the UK reached £3.4 billion ($4.6 billion) in 2025 across 388,895 reports, according to the same police data, a figure that puts romance fraud's toll in stark perspective. Underreporting is also thought to be widespread, with many victims staying silent out of shame. ®
Victims losing £280K a day to fake profiles and sob stories
Romance fraudsters scammed Britons out of £102 million ($138 million) last year, according to the latest police figures.…