
Articles from www.theregister.com
1 hour 18 min ago
Cybersecurity vendor Arctic Wolf has laid off 250 workers in a restructuring it says is designed to position the company to invest more in AI through its Superintelligence platform and agentic Security Operations Center (SOC), a company spokesperson told The Register.

“We recently made an organizational restructuring to better align the company’s structure and investments with our long‑term strategy,” the spokesperson said. “While these decisions are difficult, they position Arctic Wolf to operate more efficiently, continue investing in our Superintelligence platform and Agentic SOC, and deliver strong value to customers. We remain confident in our direction and momentum.”

The layoffs appear to represent less than 10 percent of the total workforce. Arctic Wolf is privately held and does not publish a current headcount, but in December 2024 it said in a press release that it employed more than 2,600 workers. According to the website PitchBook, Arctic Wolf has 3,323 employees.

The job cuts appear to fall across several categories, including sales, product development, and marketing. Some of those affected had been with the company for four years or more in revenue-generating roles such as sales engineer. One senior systems engineer with experience in datacenter infrastructure and cyber threat detection said on LinkedIn he was let go after more than a year with the company. “Wow! I was not expecting to have such a swing in posts this week from super positive to negative. Today I was laid off by Arctic Wolf due to restructuring,” wrote one sales engineer the day after publishing a post about the success he had experienced last year.

Alongside its five global SOCs, Arctic Wolf has offices in Waterloo, Ontario; San Antonio, Texas; Eden Prairie, Minnesota; Bengaluru, India; and other locations worldwide.
Arctic Wolf operates in the crowded endpoint detection and response (EDR) and managed detection and response (MDR) markets alongside CrowdStrike, Rapid7, and SentinelOne. It also competes for channel partners and customers with the likes of Huntress and Blackpoint Cyber. The company has bet on its Aurora Superintelligence Platform, which combines security data, a “Swarm of Experts” of AI agents, and humans in the loop to protect customers' systems. ®
1 hour 40 min ago
You can't trust anyone these days! Get together with seven of your colleagues, and there’s a decent chance one of the eight will say they’ve either sold company login details in the past year or know someone who has, says UK fraud prevention outfit Cifas.

That 13 percent figure is shocking. Just as strikingly, Cifas found a similar 13 percent of employees overall believed selling access to company systems was justifiable, though the org’s Workplace Fraud Trends report did not spell out those justifications. Regardless, Cifas says the numbers suggest a worrying shift in attitudes toward insider-enabled fraud that should trouble leadership.

Then again, leadership might not be too worried, based on the data. Cifas doesn’t give a precise number for the share of rank-and-file employees who feel selling credentials is justified, but it does call attention to how leadership feels, and the more power respondents have, the more they seem to think it’s okay to sell their access. Thirty-two percent of managers, 36 percent of directors, and 43 percent of C-suite executives said it was justifiable to sell their login details. Even more shockingly, a full 81 percent of business owners felt the same way.

As for why, that’s not entirely clear, though Cifas told us it has heard various excuses in the past. Financial challenges, the belief it would be a harmless one-off, confidence they wouldn’t get caught, and disgruntlement were among the reasons cited for selling credentials.

If you’re wondering who to keep an eye on, Cifas suggests looking at IT and telecoms professionals, who showed the highest tolerance for fraud-related behavior across multiple scenarios covered in the study. Those scenarios included the aforementioned selling of login details, as well as secretly moonlighting for a competitor, using fraudulent references on job applications, expense fraud, and the like.
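The "one of the eight" framing is just arithmetic on the survey's 13 percent figure, and it is easy to sanity-check. A quick back-of-the-envelope sketch, not Cifas's methodology:

```python
# Back-of-the-envelope check on the Cifas framing (illustrative only):
# if 13 percent of employees report direct or indirect exposure, a group
# of eight people expects to contain about one such person.
rate = 0.13

expected_in_group = 8 * rate               # expected count in a group of eight
chance_at_least_one = 1 - (1 - rate) ** 8  # probability the group has one or more
per_thousand = round(1000 * rate)          # scaled to a 1,000-person company

print(round(expected_in_group, 2))    # 1.04: about one person per eight
print(round(chance_at_least_one, 2))  # 0.67: a "decent chance" indeed
print(per_thousand)                   # 130
```

The second number assumes respondents are independent draws, which real colleagues at one company are not; it is a rough plausibility check, nothing more.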
Selling access to company systems was one of the less common types of fraud covered in the survey, but the 13 percent figure reflects respondents who said they had done it or knew someone who had - meaning that, in a company of 1,000 people, around 130 might report direct or indirect exposure to the behavior. The fact that leadership respondents and IT and telecoms professionals showed higher tolerance for such activity makes the findings more concerning, even if the survey focused specifically on selling login details, in some cases to a former colleague.

This data is specific to the UK, mind you, but there’s no reason to assume a relaxed attitude toward such a critical cybersecurity weakness is confined to the Isles - that assumption would be about as safe as trusting the person buying those credentials to keep them to themselves.

When asked if Cifas had comparable data from prior years, the organization described its findings as revealing “a worrying shift in attitudes toward insider-enabled fraud.” However, it said this is the first year it has compiled the report, so it doesn’t have comparable data. Nonetheless, Cifas Director of Learning Rachael Tiffen said in a press release that the point is that organizations need to be aware of how many employees might be willing to sell access to company systems. “These findings show how vital it is for organisations to build fraud‑aware cultures, where employees at all levels understand their responsibilities and the consequences of their actions,” Tiffen said.

Be sure to pay them well, too. ®
3 hours 34 min ago
Researchers at Rapid7 say they have spotted what they believe was an Iranian intelligence cyber unit masquerading as the Chaos ransomware gang to hide a state-sponsored espionage operation. The intrusion was spotted earlier this year, and investigators say breadcrumbs left behind give them "medium confidence" in saying it was the work of MuddyWater, which has been linked to intrusions affecting Western government and banking networks in recent months.

Attackers began with a Microsoft Teams phishing campaign, which is not uncommon. They also encouraged targets to share their screens. Again, nothing too out of the ordinary. However, what must have required some expert persuasion work was convincing these individuals to enter their credentials into local text files, and even modify MFA settings to allow attacker-controlled devices to complete authentication.

Rapid7 researchers Alexandra Blia and Ivan Feigl wrote: "While connected, the [threat actor (TA)] executed basic discovery commands, accessed files related to the victim's VPN configuration, and instructed users to enter their credentials into locally-created text files.

"In at least one instance, the TA also deployed a remote management tool (AnyDesk) to further facilitate access."

From there, browser artifacts suggested that attackers lifted credentials through phishing pages. At least one mimicked a Microsoft Quick Assist page. Armed with valid credentials, the attackers then executed various commands via RDP, which downloaded payloads using curl. These payloads included a backdoor malware dubbed Darkcomp, a malicious Microsoft WebView2 loader to disguise traffic, and an encrypted configuration file that sent instructions to Darkcomp.

Then it was a case of performing lateral movement using additional compromised accounts and scooping up sensitive data along the way.
The attackers used the same accounts to send emails internally notifying organization leaders about the intrusion and data theft, and included an onion link leading to Chaos ransomware’s data leak site (DLS), where a corresponding entry appeared with all data redacted and hidden behind a countdown timer.

Follow-up emails aimed to build the illusion of a genuine ransomware attack, although the illusion was short-lived. The attackers instructed recipients to look for a file containing "access credentials" they could use to begin ransom negotiations. Unlike the plaintext credential files the attackers had socially engineered the original targets into creating, this file did not actually exist. There was no way to contact the attackers, whereas in a typical scenario the intruders would be looking for a payout. There was also no file encryption, which is inconsistent with Chaos affiliates' typical way of working.

"Despite these inconsistencies in the initial proof-of-compromise, the TA later published the stolen data on its DLS in line with modern extortion tactics," Blia and Feigl wrote. "The leaked data was assessed to be legitimate."

If not for financial gain, then what? MuddyWater – if that is indeed the group behind this – did not extort the organizations in question, nor did they deploy a ransomware payload, but they did pose as an established ransomware group. Rapid7 believes the group did this as an extension of its false-flag operations: to provide a plausible front for cyberespionage activity, or as prepositioning work to underpin potential destructive cyberattacks.

It wouldn't be the first time MuddyWater or Iranian intelligence (MOIS) was found LARPing as a ransomware crew. Both have previously been linked to an attack on an Israeli hospital, allegedly carried out by a Qilin affiliate.
"Following the subsequent public attribution of that incident to the MOIS, it is plausible that the group adopted alternative ransomware branding, in this case Chaos, in an effort to reduce attribution risk and maintain a degree of plausible deniability," said the researchers. The unique benefits of masquerading as ransomware crooks include muddying attribution for attacks by leaving behind ransomware breadcrumbs, as well as redirecting defensive efforts toward locating signs of ransomware deployment instead of the backdoors that underpin espionage activity. ®
6 hours 34 min ago
Privacy groups, VPN providers, and civil liberties outfits have lined up to warn the UK government that its latest plan to slap age gates across swathes of the internet risks breaking the web while doing little to keep kids safe.

In a joint statement, signatories including the Electronic Frontier Foundation, Mozilla, the Open Rights Group, Proton, and the Tor Project took aim at proposals now moving forward after the Children's Wellbeing and Schools Bill cleared Parliament, with access to some platforms, services, and specific features potentially restricted by age checks. "The open internet is a global public resource that has long since become foundational to the flourishing of individuals, businesses, and societies," the letter states, warning that "this openness and the opportunities it affords are coming under threat in the UK."

Ministers are now consulting on measures that could include curfews for younger users and restrictions across services ranging from games and VPNs to static websites. The signatories say that will quickly turn into a system where everyone, not just children, has to prove their age to get full access. "Implementing such access restrictions hinges on all users having to verify their ages, not just young people," the letter warns, adding that the approach "focuses on restricting young people's access, rather than ensuring services are designed to uphold their rights and interests by default."

Early results are not exactly inspiring. It's been months since tougher checks under the Online Safety Act began rolling out, and some systems have already been fooled by little more than a drawn-on mustache, raising questions about how effective the tech really is at keeping minors out. This hasn't gone unnoticed.
"Existing age assurance technologies are either insufficiently accurate, undermine privacy and data security, or are not widely available across populations," the letter says, warning that rolling them out broadly "creates serious new security threats." It is not just a privacy headache either: the groups argue the policy could tilt the market further toward Big Tech. Mandating checks across more services risks "cementing the dominance of gatekeeper app stores, operating systems, and platforms' walled gardens," while turning the web into "a patchwork of age-gated jurisdictions." Instead of doubling down on access controls, the groups argue policymakers are targeting the wrong problem. "These risks are real and require thoughtful policy interventions that address the root of the issue, not just simplistic policies like access bans," the letter says, pointing to business models built on "massive collection of user data" as a bigger driver of harm. The closing line does not leave much room for interpretation: "Now is the time to hold tech to account, not undermine the open internet." ®
17 hours 5 min ago
India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree. The Board is India’s equivalent of the USA’s Securities and Exchange Commission, or the UK’s Financial Conduct Authority. On Tuesday, the Indian regulator issued an advisory that opens with the following observation:

In response to those threats, the Board has established a taskforce that will examine the risks posed by models like Mythos, share threat intelligence, report incidents, and initiate a review of cybersecurity at third-party software vendors who supply the regulator and the entities it oversees.

The advisory then offers some basic infosec advice: ensure patches are up to date, conduct audits of potential vulnerabilities, conduct inventories of APIs and secure them, run a serious SOC and take its advice, and harden systems by adopting principles such as zero-trust networking and running only essential services.

The regulator also told participants in India’s equities markets to have their IT committees issue guidance on how to mitigate risks created by AI-led vulnerability detection models, then develop a plan to use AI as part of their infosec armoury. “Also, undertake other measures including recalibration of risks for AI accelerated threats, AI-augmented SOC transformation, and continuous vulnerability management using AI tools,” the advisory states.

The Board directed the above advice at 19 different classes of company, ranging from venture capitalists to merchant bankers, mutual funds, stock exchanges, and even niche suppliers such as agencies that store know-your-customer information.

Other regulators around the world have also acknowledged the risks Mythos poses. US Treasury Secretary Scott Bessent convened an emergency meeting with the nation’s banks a few weeks back.
Singaporean regulators did likewise, yesterday. Australian regulators sent local banks a strongly worded reminder that they must develop AI strategies that consider risks the technology creates. Hong Kong’s Monetary Authority is working on new infosec guidance for the age of Mythos. India’s approach stands out for effectively putting entities it regulates on alert to an imminent threat and ordering them to take action to prevent problems. ®
Securities regulator urges market players to develop new strategies and nail cyber-basics before AI models fuel mass attacks
Tue, 05/05/2026 - 18:00
ServiceNow announced an expansion of its AI Control Tower, transforming what began last year as a governance dashboard into what the company now describes as a command center for managing AI assets across an entire enterprise, including those running outside ServiceNow's own platform.

The updated AI Control Tower, shipping as part of ServiceNow's Australia platform release, now operates across five areas: discovery, observation, governance, security, and measurement. The company said this is its answer to AI agent sprawl, as enterprises have deployed more AI than they can account for and the tools to govern it have not kept pace.

“What we launched last year gave customers a governance layer, but what we're shipping this year goes significantly deeper, evolving from visibility and management into a full enterprise AI command center,” Nenshad Bardoliwalla, group vice president of AI products at ServiceNow, told reporters during a media briefing ahead of the company’s annual product show, Knowledge 26. “Our AI control tower ensures every AI system asset and identity is compliant, secure, and aligned with your strategy.”

The AI Control Tower now reaches beyond ServiceNow's own platform with 30 new enterprise connectors that span all three major hyperscalers - Amazon Web Services, Google Cloud, and Microsoft Azure - along with enterprise applications such as SAP, Oracle, and Workday. The system can now discover AI assets, models, agents, prompts, and datasets running across an organization's full technology estate, not just those deployed on ServiceNow.

“With our Veza integration, we're bringing patented access graph technology into the AI control tower, extending identity access governance to hyperscaler AI environments and every connected device, every agent, every model, every action has scope permissions, least privilege enforcement and auditable identity chains,” Bardoliwalla said.
Bardoliwalla walked through a demo in which the AI Control Tower detected a prompt injection attack on a pricing agent. The system identified malicious instructions hidden inside order payloads, mapped the blast radius of affected systems using access graph technology from Veza, and presented a kill switch to disable the compromised agent, without human intervention. "You need a system that senses, decides and acts on its own, that can scale with your AI portfolio, not your head count," said Bardoliwalla.

Two recent acquisitions underpin the security architecture. ServiceNow announced in December it would acquire Veza, which contributes an access graph that maps every identity and access path across systems, whether it belongs to humans, machines, or AI agents. It also knows which entities have create, read, update, and delete-level permissions. ServiceNow said the access graph currently maps over 30 billion fine-grained permissions. When a vendor pushes a new version of a model or agent, the platform detects permission changes and automatically triggers a re-scoping workflow.

Traceloop, which ServiceNow acquired in March, provides deep AI observability inside the Control Tower by tracking every LLM call running in the system. The integration delivers continuous runtime monitoring with live alerts, replacing what ServiceNow described as the periodic manual audits most enterprises still rely on. Teams can watch how agents reason, where they make decisions, and when to course-correct.

ServiceNow also addressed the cost side of the AI equation. Control Tower now includes cost tracking and ROI dashboards to give finance teams visibility into model spend. The measurements track token consumption across providers such as OpenAI, Anthropic, and Google so customers can predict costs and tie spending to business outcomes.
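ServiceNow hasn't published the mechanics of that re-scoping workflow, but the underlying idea - diff an agent's permission set between versions and flag anything newly granted for review - can be sketched in a few lines. Everything below (the set-based data shape, the function names) is hypothetical and illustrative, not ServiceNow's or Veza's actual API:

```python
# Hypothetical sketch of a permission-diff check in the spirit of the
# re-scoping workflow described above. Data shapes and names are
# illustrative; they are not ServiceNow's or Veza's real interfaces.

def diff_permissions(old: set[str], new: set[str]) -> dict[str, set[str]]:
    """Compare an agent's permission sets across two versions."""
    return {
        "granted": new - old,   # newly acquired permissions -> need review
        "revoked": old - new,   # dropped permissions -> informational
    }

def rescope_needed(old: set[str], new: set[str]) -> bool:
    """Trigger a re-scoping review only when access has expanded."""
    return bool(diff_permissions(old, new)["granted"])

# An agent upgrade that quietly adds delete rights on invoices:
v1 = {"invoices:read", "invoices:update"}
v2 = {"invoices:read", "invoices:update", "invoices:delete"}

print(rescope_needed(v1, v1))  # False: same scope, nothing to do
print(rescope_needed(v1, v2))  # True: new permission granted, re-scope
```

The asymmetry is deliberate: expanded access triggers review, while revoked access is merely logged - a common least-privilege convention, though whether ServiceNow draws the line the same way is not stated.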
ServiceNow said it uses the AI Control Tower internally to manage over 1,600 AI assets, and tracked half a billion dollars in cumulative AI value from internal use cases in 2025. "The number one question every CFO is asking is, where's the value?" said Bardoliwalla during the briefing. He added that runaway model spend ranks among the biggest pain points enterprises currently face as they scale AI deployments.

Alongside the Control Tower expansion, ServiceNow announced Action Fabric, a mechanism that opens the company's full workflow engine to external AI agents. Through a generally available MCP server, agents built on Claude, Copilot, or custom platforms can now trigger governed enterprise actions - not just read and write data, but execute the flows, playbooks, approval chains, and catalog requests that ServiceNow customers have built over years.

Anthropic is the first design partner for Action Fabric. The integration connects Claude directly to ServiceNow's governed system of action. "The gap between knowing what needs to happen and making it happen is where productivity dies," Boris Cherny, head of Claude Code at Anthropic, said in a statement. "Connecting Claude Cowork to ServiceNow's system of action closes that gap with enterprise execution, directly in the flow of work."

Every action routed through Action Fabric runs through the AI Control Tower, so it carries identity verification, permission scoping, and a full audit trail. The MCP server is included in every Now Assist and AI Native SKU, with additional features planned for the second half of 2026.
Tue, 05/05/2026 - 16:01
CISA is warning that a newly disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit. Tracked as CVE-2026-31431, the bug sits in the Linux kernel and gives low-privileged users a way to take full control of a system by modifying data they should only be able to read, effectively turning limited access into full root privileges on unpatched machines.

The issue was disclosed by cybersecurity consultancy Theori, which said the flaw was discovered by its AI-powered penetration testing platform, Xint, and reported to the Linux kernel security team on March 23. Major Linux distributions pushed out patches ahead of public disclosure, which Theori published alongside a proof-of-concept exploit. The Python-based code works against Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16, but the researchers warned that every mainstream Linux kernel built since 2017 is in scope of potential exploitation. "Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution," Theori says.

That level of reliability has not gone unnoticed. CISA, the US government's cybersecurity agency, has added the bug to its Known Exploited Vulnerabilities catalog and ordered Federal Civilian Executive Branch agencies to patch within two weeks, setting a May 15 deadline. Microsoft backed CISA's findings and said it is already seeing signs of activity following the PoC's release. "Given the availability of a fully working exploit proof-of-concept (PoC) and the race to patch systems, Microsoft Defender is seeing preliminary testing activity that might result most likely in increased threat actor exploitation over the next few days," the company warned.

The mechanics help explain the urgency. The attack is local, requires only limited privileges, and needs no user interaction, so anyone who already has a foothold on a vulnerable box can try their luck.
It is the kind of bug that turns a small break-in into full control pretty quickly. As The Register reported last week, the flaw stems from how the kernel handles certain cryptographic operations, opening a path to tamper with cached data in ways that were never meant to be user-controlled. With a reliable exploit now in the wild, that design quirk has effectively turned into a universal privilege-escalation trick. ®
Researchers dropped a reliable root exploit and it didn’t sit idle for long
Tue, 05/05/2026 - 14:34
Real estate giant Cushman & Wakefield has confirmed a data breach after two cybercrime groups, ShinyHunters and Qilin, separately claimed responsibility for attacks on the company. A spokesperson told The Register the attack was "limited" in scope and stemmed from vishing (voice phishing), suggesting an employee was socially engineered.

The representative said: "Cushman & Wakefield recently became aware of a limited data security incident due to vishing. We have activated our response protocols, including taking steps to contain the unauthorized activity and engaging third-party expert advisors to support a comprehensive response.

"Our systems and operations continue to run normally, and we are working diligently to investigate the incident. We recognize the trust placed in us to protect sensitive data and we take this responsibility very seriously."

Cushman & Wakefield (C&W) did not address the apparent dual targeting by both ShinyHunters, which operates a pay-or-leak model, and Qilin, currently viewed as the world's most prolific ransomware group. There is no previously established coalition between ShinyHunters and Qilin, which suggests the two alleged attacks are separate but coincidentally timed. In a message sent to The Register, ShinyHunters claimed they attacked the company on May 1, while Qilin listed C&W on its data leak site on May 4.

Qilin's website listing did not detail how it allegedly attacked C&W, although ShinyHunters claimed it stole "over 500,000 Salesforce records containing PII and other internal corporate data." ShinyHunters set a May 6 deadline for C&W to make contact to prevent the data from being leaked, but the cybercriminals claimed this had yet to happen.

ShinyHunters has been on something of a tear recently. Known for its large-scale, high-impact attacks, the group's latest wave of activity began in March when it laid claim to an expansive supply chain attack after breaching Salesforce customers via the CRM giant itself.
At the time, it said it had stolen data belonging to Salesforce and more than 100 of its high-profile customers. Since then, big-name brands like ADT, Carnival Cruise Line, Rockstar Games, Vimeo, and others have all confirmed ShinyHunters-linked cyberattacks, although not all were explicitly linked to its earlier Salesforce compromise. ®
Cushman & Wakefield activated incident response protocols after serial extortionists issued separate threats
Tue, 05/05/2026 - 13:15
More than 119,000 Vimeo users' email addresses were extracted in a breach traced to a third-party analytics vendor, according to Have I Been Pwned.

The incident first surfaced in April when the ShinyHunters crew added Vimeo to its growing "pay or leak" hit list, claiming it had pulled hundreds of gigabytes of data and threatening to dump the lot unless a deal was struck. That dump has since landed, and breach notification service Have I Been Pwned now puts a number on at least part of the fallout: 119,000 unique email addresses, in some cases paired with names.

Vimeo last week confirmed that data was taken, but stopped short of saying how many people were affected. The company pinned the incident on Anodot, a third-party analytics provider used across its systems, and said the attacker gained access via that integration rather than breaking into Vimeo directly. Anodot has not said anything publicly, but its status page shows the incident kicked off on April 4.

According to Vimeo, the stolen databases were heavy on technical data, video titles, metadata, and some customer email addresses. The company has been keen to stress what was not included: no actual video content, no valid login credentials, and no payment card information. That does not make the data harmless. Email lists like this get reused, resold, and recycled into phishing runs for years, especially when they come with enough context to make a message look convincing.

The attackers, for their part, claim the breach went deeper. In a post seen by The Register, ShinyHunters alleged that "Snowflake and BigQuery instances data was compromised thanks to Anodot.com," adding that the company "failed to reach an agreement" despite multiple attempts to negotiate.

Vimeo says it has cut off the problem at the source, disabling Anodot credentials, ripping out the integration, and bringing in outside security help while notifying law enforcement.
The investigation is ongoing, and the company says it will update customers as it learns more. For now, the numbers from Have I Been Pwned seem to fill in the gap left by Vimeo's initial disclosure, and underline a familiar problem: you can lock down your own systems, but your vendors only have to slip once. ®
Vimeo points finger at analytics supplier Anodot, says no logins or card data were touched
Tue, 05/05/2026 - 12:43
Romance fraudsters scammed Britons out of £102 million ($138 million) last year, according to the latest police figures. That works out to roughly £280,000 ($379,000) a day, the City of London Police said Tuesday. The average victim loses around £9,500 ($12,866) per scam, though individual cases have reached £1 million ($1.35 million). The figures come from Report Fraud, a City of London Police service that logged 10,784 romance scam reports in 2025, a 29 percent year-on-year bump.

"Romance fraud is particularly harmful because it targets trust and emotional connection," said Detective Superintendent Oliver Little at the City of London Police. "Offenders will often spend significant time building what appears to be a genuine relationship before attempting to exploit their victim financially," he added. "While the monetary losses can be substantial, the emotional impact is often just as damaging. This crime can affect anyone, and by reporting it, victims help us build intelligence, disrupt offenders, and protect others from harm."

The scams disproportionately hit older victims, with almost half of 2025's total losses coming from those aged 55-74. Men submitted the highest number of reports, but women incurred the greatest financial losses.

The playbook is well-established: criminals build fake profiles on social media, cultivate rapport with targets – often expressing strong feelings early – then request money for various reasons, including travel, medical expenses, and other invented needs. City of London Police has urged the public to look out for common tactics used by fraudsters: unsolicited affection from strangers online, excuses to avoid video calls or in-person meetings, and sudden investment pitches. A second opinion from a friend or family member can help.

Confidence/romance scams are an even bigger problem in the US, where they rank as the fifth most costly form of cybercrime.
An annual report from the FBI's Internet Crime Complaint Center (IC3) estimated total losses to such scams in 2025 at $929.4 million, ahead of data breaches, phishing, extortion, and ransomware.

In the UK, romance fraud sits at the lower end of the cybercrime spectrum. Advance fee fraud, banking fraud, investment fraud, and online shopping scams all generate far more reports. Total fraud losses in the UK reached £3.4 billion ($4.6 billion) in 2025 across 388,895 reports, according to the same police data - a figure that puts romance fraud's toll in stark perspective. Underreporting is also thought to be widespread, with many victims staying silent out of shame. ®
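The UK headline figures quoted above hang together arithmetically. A quick check, illustrative only, using the article's own numbers:

```python
# Sanity-checking the police figures: £102M over a year is roughly
# £280K a day, and spread across 10,784 reports it averages about
# £9,500 per victim - matching the quoted numbers.
total_gbp = 102_000_000
reports = 10_784

per_day = total_gbp / 365
per_victim = total_gbp / reports

print(round(per_day / 1000))  # 279 -> "roughly £280,000 a day"
print(round(per_victim, -2))  # 9500.0 -> "around £9,500 per scam"
```

Note the per-victim figure is really per *report*; one victim can file several reports, so the true average loss per person may differ.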
Victims losing £280K a day to fake profiles and sob stories
Tue, 05/05/2026 - 10:15
Healthcare giant's maintainers handed May deadline to enact the change
The UK's National Health Service (NHS) is ordering all of its technology leaders to temporarily wall off the organization's open source projects over concerns relating to advanced AI and Anthropic's Mythos.…
Tue, 05/05/2026 - 09:30
If you can't be bothered to keep GitHub running, why should we bother with you?
Opinion It's been another shabby week for Microsoft, and a shabbier one for its users. We learnt that Windows 11's epic habit of trying to corral customers into paid-for Microsoft services just got worse with a low-rent trick. Remote Desktop got a bit more secure, which is good, but in a way that suggests not too much user testing took place. As for GitHub… GitHub got two helpings of Chef Redmondo's Special Sauce.…
Tue, 05/05/2026 - 03:12
Academics from Singapore and China have found a way to make AI useful for cyber-defenders, by creating a technique that translates rules from diverse Security Information and Event Management (SIEM) systems so they’re easier to consume across multiple platforms.

SIEMs collect log files from many sources and let users set rules that trigger alerts, which a security operations center (SOC) then reviews in case they represent security incidents. Testing for an “impossible travel” scenario – in which the same user logs on from New York and London within an hour, suggesting credential theft or other skulduggery – is a common SIEM rule.

Many organizations end up with multiple SIEMs, which means complexity for SOCs.

Enter researchers from the National University of Singapore and China’s Fudan University, who recently presented a paper [PDF] titled “ARuleCon: Agentic Security Rule Conversion” in which they explain a technique they developed to translate rules so they’re consumable by multiple SIEMs.

Lead author Ming Xu told The Register she and her colleagues developed ARuleCon because SIEMs use specific schemas for rules, so a rule created with one SIEM won’t work with another. While some vendors provide translation tools, they don’t offer support for many SIEMs: the authors say Microsoft’s tool shifts Splunk rules into Redmond’s Sentinel SIEM but can’t handle others.

“Rule conversion can be performed manually by security experts, which are slow and imposes a heavy workload,” the paper observes.

Tools like the Sigma framework aim to help manage and share rules across multiple platforms, but Ming and her co-authors think Sigma and other existing translation tools don’t do well with complex or interlinked rules.

It’s 2026 so it seems natural to try using an LLM to convert SIEM rules into different formats.
The authors say that approach “typically yield a poor accuracy and lacks vendor-specific correctness” because the training data used to build LLMs doesn’t include enough information about SIEM rule schemas.

“These shortcomings call for a scalable, vendor-neutral, and reliable SIEM-rule conversion framework that retains existing rule value and eases SOC workloads,” the paper states, before explaining how ARuleCon gets the job done with an "agentic RAG [retrieval augmented generation] pipeline that retrieves authoritative official vendor documentation to address the convention/schema mismatches, and Python-based consistency check that running both source and target rules in controlled test environments to mitigate subtle semantic drifts."

Long story short, the researchers developed agentic tech capable of translating SIEM rules created using Splunk, Microsoft Sentinel, IBM QRadar, Google Chronicle, and RSA NetWitness. Not all the conversions are brilliant, but ARuleCon can translate the proprietary rule format each SIEM vendor uses to multiple rival platforms – and does it more accurately than a generic LLM. ARuleCon therefore makes it possible to export rules from one SIEM and use them in another.

Ming told The Register she hopes the tool helps organizations to consider and plan SIEM consolidations or migrations, and emerge with SOCs that can more easily detect the signals of security threats and stop worrying about noise from multiple alerts. ®
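For the curious, the "impossible travel" rule mentioned earlier, plus the run-both-rules consistency check the paper describes, can be sketched in a few lines of Python. This is a minimal illustration only: the event fields, thresholds, and rule representation below are invented for the example and are not ARuleCon's actual rule formats or code.

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

# Toy login events: (user, ISO timestamp, latitude, longitude).
# The field layout is invented for this sketch, not a real SIEM schema.
LOGINS = [
    ("alice", "2026-05-05T09:00:00", 40.71, -74.01),  # New York
    ("alice", "2026-05-05T09:45:00", 51.51, -0.13),   # London, 45 minutes later
    ("bob",   "2026-05-05T09:00:00", 40.71, -74.01),  # New York
    ("bob",   "2026-05-05T18:00:00", 51.51, -0.13),   # London, nine hours later
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(events, max_kmh=1000):
    """Flag users whose consecutive logins imply travel faster than max_kmh."""
    alerts, last_seen = set(), {}
    for user, ts, lat, lon in sorted(events):  # ISO timestamps sort chronologically
        t = datetime.fromisoformat(ts)
        if user in last_seen:
            prev_t, prev_lat, prev_lon = last_seen[user]
            hours = (t - prev_t).total_seconds() / 3600
            if hours > 0 and haversine_km(prev_lat, prev_lon, lat, lon) / hours > max_kmh:
                alerts.add(user)
        last_seen[user] = (t, lat, lon)
    return alerts

def impossible_travel_mph(events, max_mph=621):
    """The 'same' rule as if translated for a vendor whose schema uses mph."""
    return impossible_travel(events, max_kmh=max_mph * 1.609)

def consistent(rule_a, rule_b, events):
    """ARuleCon-style consistency check: run source and target rules over the
    same test events and confirm they raise identical alerts."""
    return rule_a(events) == rule_b(events)
```

Here the kilometre and mile versions agree that only Alice's 45-minute transatlantic hop is impossible, so the "translation" passes the check; a mistranslated threshold or field mapping would show up as mismatched alert sets on the shared test events.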
Tue, 05/05/2026 - 03:12
Vendors all use different formats. This tech translates them all so you can smooth your SOC
Academics from Singapore and China have found a way to make AI useful for cyber-defenders, by creating a technique that translates rules from diverse Security Information and Event Management (SIEM) systems so they’re easier to consume across multiple platforms.…
Mon, 04/05/2026 - 21:50
It’s been months since the UK government began requiring stronger age checks under the Online Safety Act, and recent research suggests those measures are falling short of keeping kids away from harmful content. In some cases, even drawing on a mustache has been reported as enough to fool age detection software.

Like keeping booze away from teenagers or nudie mags out of the hands of young lads, slapping a big “restricted, 18+” label on parts of the internet hasn't stopped kids testing the limits.

Those limits, according to UK online safety group Internet Matters, are easy to sidestep. The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool.

The methods kids use to fool age gates vary, but most are pretty simple: there's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously.

While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters.

Dude, want some TikTok? My mom will hook us up

Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency.
More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.

"When speaking to parents and children about these situations, they described scenarios in which parents felt they understood the risks involved and, based on their knowledge of their child, were confident the activity was safe," Internet Matters said of parents who let their kids engage in risky behavior as long as they did it where they could be supervised.

What this means for a major part of the OSA – namely, keeping kids from accessing harmful content online – is that it’s falling short. Internet Matters has data to that end, too. Half of children (49 percent) who responded to the group's survey said that they've encountered harmful content online recently, suggesting that even those who don't circumvent age gates are still finding it in their feeds.

So, what can be done to make kids' online safety measures more effective? Parents told Internet Matters that lawmakers need to do more, and Internet Matters CEO Rachel Huggins agreed that they need help.

"Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm," Huggins said in the report.

The Internet Matters chief pointed to the prime minister’s recent talks with social media firms about tackling online harms, describing the moment as “a timely opportunity for positive change.” ®