
FEDERAL AGENTS from the Department of Homeland Security and the Justice Department used “a sophisticated cell phone cloning attack—the details of which remain classified—to intercept protesters’ phone communications” in Portland this summer, Ken Klippenstein reported this week in The Nation. Put aside for the moment that, if the report is true, federal agents conducted sophisticated electronic surveillance against American protesters, an alarming breach of constitutional rights. Do ordinary people have any hope of defending their privacy and freedom of assembly against threats like this? Without more details, it’s hard to be sure exactly what type of surveillance was used, but The Nation’s mention of “cell phone cloning” suggests a SIM cloning attack. This involves duplicating a small chip that virtually every cellphone uses to link itself to its owner’s phone number and account: the subscriber identity module, more commonly known as a SIM. SIM cards contain a secret encryption key that is used to encrypt data between the phone and cellphone towers. They’re designed so that this key can be used (as when you receive a text or call someone) but so the key itself can’t be extracted. Still, it is possible to extract the key from a SIM card by cracking it. Older SIM cards used a weaker encryption algorithm and could be cracked quickly and easily, but newer SIM cards use stronger encryption and might take days or significantly longer to crack. It’s possible that this is why the details of the type of surveillance used in Portland “remain classified.” Do federal agencies know of a way to quickly extract encryption keys from SIM cards? (On the other hand, it’s also possible that “cell phone cloning” doesn’t describe SIM cloning at all but something else instead, like extracting files from the phone itself rather than data from the SIM card.)
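To make concrete why extracting the SIM's key matters, here is a minimal Python sketch of the GSM-style challenge-response just described: the network sends a random challenge, and the SIM uses its secret key (Ki) to derive both an authentication response and a session key used to encrypt traffic. Real cards use algorithms such as COMP128 or Milenage; the HMAC below is purely an illustrative stand-in, and all values are invented.

```python
import hmac
import hashlib
import os

def a3_a8(ki: bytes, rand: bytes) -> tuple[bytes, bytes]:
    """Stand-in for the SIM's A3/A8 algorithms (real cards use COMP128
    or Milenage; HMAC-SHA256 here is purely illustrative)."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    sres, kc = digest[:4], digest[4:12]  # 32-bit response, 64-bit session key
    return sres, kc

# Under normal use, the secret Ki never leaves the card.
ki = os.urandom(16)

# The network authenticates the phone by sending a random challenge (RAND),
# which is also visible over the air to anyone monitoring the radio link.
rand = os.urandom(16)
sres_card, kc_card = a3_a8(ki, rand)

# An attacker who cracked the SIM and extracted Ki can observe the same
# RAND and derive the identical session key, enabling decryption of the
# traffic that follows.
sres_attacker, kc_attacker = a3_a8(ki, rand)
assert kc_attacker == kc_card
```

This is the crux of the attack: once Ki is out of the card, the attacker derives every session key the legitimate SIM derives, with no further access to the phone needed.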
Assuming the feds were able to extract the encryption key from their target’s SIM card, they could give the phone back to their target and then spy on all their target’s SMS text messages and voice calls going forward. To do this, they would have to be physically close to their target, monitoring the radio waves for traffic between their target’s phone and a cell tower. When they see it, they can decrypt this traffic using the key they stole from the SIM card. This would also fit with what the anonymous former intelligence officials told The Nation; they said the surveillance was part of a “Low-Level Voice Intercept” operation, a military term describing audio surveillance by monitoring radio waves. Even if law enforcement agencies don’t clone a target’s SIM card, they could gather quite a bit of information after temporarily confiscating the target’s phone. They could power off the phone, pop out the SIM card, put it in a separate phone, and then power that phone on. If someone sends the target an SMS message (or texts a group that the target is in), the feds’ phone would receive that message instead of the target’s phone. And if someone called the target’s phone number, the feds’ phone would ring instead. They could also hack their target’s online accounts, so long as those accounts support resetting the password using a phone number. But, in order to remain stealthy, they would need to power off their phone, put the SIM card back in their target’s phone, and power that phone on again before returning it, which would restore the original phone’s access to the target’s phone number, and the feds would lose access. Read this entire posting on OUR FORUM.

Researchers have uncovered a threat group launching surveillance campaigns that target victims’ personal device data, browser credentials, and Telegram messaging application files. One notable tool in the group’s arsenal is an Android malware that collects all two-factor authentication (2FA) security codes sent to devices, sniffs out Telegram credentials, and launches Google account phishing attacks. Researchers found that the threat group, dubbed Rampant Kitten, has targeted Iranian entities with surveillance campaigns for at least six years. It specifically targets Iranian minorities and anti-regime organizations, including the Association of Families of Camp Ashraf and Liberty Residents (AFALR) and the Azerbaijan National Resistance Organization. The threat group has relied on a wide array of tools to carry out its attacks, including four Windows info-stealer variants used for pilfering Telegram and KeePass account information; phishing pages that impersonate Telegram to steal passwords; and the aforementioned Android backdoor, which extracts 2FA codes from SMS messages and records audio of the phone’s surroundings. “Following the tracks of this attack revealed a large-scale operation that has largely managed to remain under the radar for at least six years,” said researchers with Check Point Research in a Friday analysis. “According to the evidence we gathered, the threat actors, who appear to be operating from Iran, take advantage of multiple attack vectors to spy on their victims, attacking victims’ personal computers and mobile devices.” Researchers first discovered Rampant Kitten’s campaign through a document whose title translates to “The Regime Fears the Spread of the Revolutionary Cannons.docx.” It’s unclear how this document is spread (via spear-phishing or otherwise), but it purports to describe the ongoing struggle between the Iranian regime and the Revolutionary Cannons, an anti-regime Mujahedin-e Khalq movement.
The document, when opened, loads a document template from a remote server (afalr-sharepoint[.]com) that impersonates the website of a non-profit which aids Iranian dissidents. It then downloads malicious macro code, which executes a batch script to download and run a next-stage payload. This payload checks whether the popular Telegram messenger service is installed on the victim’s system. If so, it extracts three executables from its resources. These include an information stealer that lifts Telegram files from the victim’s computer, steals information from the KeePass password-management application, uploads any file it can find that ends with one of a set of pre-defined extensions, logs clipboard data, and takes desktop screenshots. Researchers were able to track multiple variants of this payload dating back to 2014. These include the TelB (used in June and July 2020) and TelAndExt (May 2019 to February 2020) variants, which focus on Telegram; a Python info stealer (February 2018 to January 2020) focused on stealing data from Telegram, Chrome, Firefox, and Edge; and a HookInjEx variant (December 2014 to May 2020), an info stealer that targets browsers, device audio, keylogging, and clipboard data. During their investigation, researchers also uncovered a malicious Android application tied to the same threat actors. The application purported to be a service to help Persian speakers in Sweden get their driver’s licenses. Instead, once victims download the application, the backdoor steals their SMS messages and bypasses 2FA by forwarding all SMS messages containing 2FA codes to an attacker-controlled phone number. “One of the unique functionalities in this malicious application is forwarding any SMS starting with the prefix G- (The prefix of Google two-factor authentication codes) to a phone number that it receives from the C2 server,” said researchers.
“Furthermore, all incoming SMS messages from Telegram, and other social network apps, are also automatically sent to the attackers’ phone number.” Of note, the application also launches a phishing attack targeting victims’ Google account (Gmail) credentials. The user is presented with a legitimate Google login page inside Android’s WebView. In reality, attackers use Android’s JavascriptInterface to steal the typed-in credentials, with a timer that periodically retrieves the contents of the username and password input fields. We have more of this posted on OUR FORUM.
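The forwarding rule the researchers describe amounts to a simple filter over incoming messages. The real backdoor is an Android app; this Python sketch is only a hypothetical re-creation of the logic, and the sender list is invented for illustration:

```python
# Hypothetical re-creation of the forwarding rule described by Check Point:
# relay any SMS whose body starts with "G-" (Google 2FA codes), or that
# comes from a messaging-app sender, to an attacker-controlled number.
FORWARDED_SENDERS = {"Telegram", "WhatsApp"}  # illustrative list

def should_forward(sender: str, body: str) -> bool:
    """Return True if the message would be relayed to the attackers."""
    return body.startswith("G-") or sender in FORWARDED_SENDERS

# A Google verification code is caught by the prefix rule...
assert should_forward("Google", "G-123456 is your Google verification code")
# ...a Telegram login code by the sender rule...
assert should_forward("Telegram", "Login code: 54321")
# ...while unrelated personal messages pass through untouched.
assert not should_forward("Mom", "Dinner at 7?")
```

The simplicity is the point: because Google 2FA texts share a fixed, predictable prefix, a two-line filter is enough to siphon off exactly the codes an attacker needs to complete account takeovers.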

A newly discovered technique by a researcher shows how Google's App Engine domains can be abused to deliver phishing and malware while remaining undetected by leading enterprise security products. Google App Engine is a cloud-based service platform for developing and hosting web apps on Google's servers. While reports of phishing campaigns leveraging enterprise cloud domains are nothing new, what makes Google App Engine infrastructure risky is how its subdomains are generated and how paths are routed. Typically, scammers use cloud services to create a malicious app that gets assigned a subdomain. They then host phishing pages there, or use the app as a command-and-control (C2) server to deliver a malware payload. But the URL structures are usually generated in a manner that makes them easy to monitor and block using enterprise security products, should there be a need. On a platform like Microsoft Azure, for example, a cybersecurity professional could block traffic to and from a particular malicious app by simply blocking requests to and from its subdomain, without preventing communication with the rest of the platform's apps, which use other subdomains. It gets more complicated, however, in the case of Google App Engine. Security researcher Marcel Afrahim demonstrated an intended design of Google App Engine's subdomain generator that can be abused to use the app infrastructure for malicious purposes, all while remaining undetected. A subdomain, in this case, does not represent only an app; it encodes the app's version, service name, project ID, and region ID. Most importantly, if any of those fields are incorrect, Google App Engine won't show a 404 Not Found page but will instead serve the app's "default" page (a concept referred to as soft routing). "Requests are received by any version that is configured for traffic in the targeted service.
If the service that you are targeting does not exist, the request gets Soft Routed," states Afrahim, adding: "If a request matches the PROJECT_ID.REGION_ID.r.appspot.com portion of the hostname, but includes a service, version, or instance name that does not exist, then the request is routed to the default service, which is essentially your default hostname of the app." Essentially, this means there are many permutations of subdomains that reach the attacker's malicious app. As long as a subdomain has a valid "project_ID" field, invalid variations of the other fields can be used at the attacker's discretion to generate a long list of subdomains, all of which lead to the same app. The fact that a single malicious app is now represented by multiple permutations of its subdomains makes it hard for sysadmins and security professionals to block malicious activity. Worse, to a non-technical user, all of these subdomains would appear to be a "secure site": the appspot.com domain and all its subdomains come with the seal of "Google Trust Services" in their SSL certificates. Furthermore, most enterprise security solutions, such as the Symantec WebPulse web filter, automatically allow traffic to trusted-category sites, and Google's appspot.com domain, due to its reputation and legitimate corporate use cases, earns an "Office/Business Applications" tag, skipping the scrutiny of web proxies. This complete article is posted on OUR FORUM with much more information.
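The permutation problem Afrahim describes can be illustrated with a short Python sketch that generates hostnames in App Engine's documented VERSION-dot-SERVICE-dot-PROJECT_ID.REGION_ID.r.appspot.com form. The project and region IDs below are made up, and because of soft routing, none of the version or service tokens need to exist for every one of these URLs to land on the app's default service:

```python
from itertools import product

# Assumed values for illustration only; any real project would differ.
project_id = "example-project"
region_id = "uc"

# Arbitrary tokens: under soft routing, nonexistent versions and services
# still route to the app's default service.
versions = ["v1", "x7k", "foo"]
services = ["api", "zzz", "random"]

hostnames = [
    f"{version}-dot-{service}-dot-{project_id}.{region_id}.r.appspot.com"
    for version, service in product(versions, services)
]

# 3 versions x 3 services -> 9 distinct hostnames, all reaching one app.
print(len(hostnames))
print(hostnames[0])
```

With only three tokens per field this already yields nine distinct hostnames; since the tokens are attacker-chosen free text, the space of working aliases for a single phishing app is effectively unbounded, which is exactly what defeats subdomain-based blocklists.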