
Some of the most successful and lucrative online scams employ a “low-and-slow” approach, avoiding detection or interference from researchers and law enforcement by stealing small amounts of cash from many people over an extended period. Here’s the story of a cybercrime group that compromises up to 100,000 email inboxes daily, and apparently does little else with that access except siphon gift card and customer loyalty program data that can be sold online.

The data in this story come from a trusted source in the security industry with visibility into a network of hacked machines that fraudsters in just about every corner of the Internet are using to anonymize their malicious Web traffic. For the past three years, the source — we’ll call him “Bill” to preserve his requested anonymity — has been watching one group of threat actors that is mass-testing millions of usernames and passwords against the world’s major email providers every day.

Bill said he’s not sure where the passwords are coming from, but he assumes they are tied to databases of compromised websites that get posted to password cracking and hacking forums on a regular basis. He said this criminal group averages between five and ten million email authentication attempts daily and comes away with anywhere from 50,000 to 100,000 working inbox credentials. In about half the cases the credentials are checked via IMAP, an email standard used by software clients like Mozilla’s Thunderbird and Microsoft Outlook. With his visibility into the proxy network, Bill can see whether an authentication attempt succeeds based on the network response from the email provider (e.g., the mail server responds “OK” = successful access).

You might think that whoever is behind such a sprawling crime machine would use their access to blast out spam, or to conduct targeted phishing attacks against each victim’s contacts.
But based on interactions that Bill has had with several large email providers so far, this crime gang merely uses custom, automated scripts that periodically log in and search each inbox for digital items of value that can easily be resold. And they seem particularly focused on stealing gift card data.

“Sometimes they’ll log in as much as two to three times a week for months at a time,” Bill said. “These guys are looking for low-hanging fruit — basically cash in your inbox. Whether it’s related to hotel or airline rewards or just Amazon gift cards, after they successfully log in to the account their scripts start pilfering inboxes looking for things that could be of value.”

How do the compromised email credentials break down in terms of ISPs and email providers? There are victims on nearly all major email networks, but Bill said several large Internet service providers (ISPs) in Germany and France are heavily represented in the compromised email account data.

“With some of these international email providers we’re seeing something like 25,000 to 50,000 email accounts a day get hacked,” Bill said. “I don’t know why they’re getting popped so heavily.”

That may sound like a lot of hacked inboxes, but Bill said some of the bigger ISPs represented in his data have tens or hundreds of millions of customers. Measuring which ISPs and email providers have the biggest numbers of compromised customers is not so simple in many cases, nor is identifying companies whose employees’ email accounts have been hacked. This kind of mapping is often more difficult than it used to be because so many organizations have outsourced their email to cloud services like Gmail and Microsoft Office 365, where users can access their email, files, and chat records all in one place.
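The “pilfering” Bill describes doesn’t require anything sophisticated: once a script is logged in, finding “cash in your inbox” can be as simple as keyword matching over subject lines. A minimal sketch in Python — the term list and function name are illustrative, not taken from the gang’s actual tooling:

```python
# Illustrative keywords; the criminals' real search terms are not public.
GIFT_CARD_TERMS = ("gift card", "rewards", "loyalty", "voucher")

def flag_valuable_subjects(subjects):
    """Return the indices of subject lines that mention a resellable item."""
    return [i for i, subject in enumerate(subjects)
            if any(term in subject.lower() for term in GIFT_CARD_TERMS)]
```

In practice such a script would more likely issue a server-side IMAP SEARCH rather than download every message, which also keeps its network footprint small.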
In a December 2020 blog post about its move away from passwords to more robust authentication approaches, Microsoft said an average of one in every 250 corporate accounts is compromised each month. As of last year, Microsoft had nearly 240 million active users, according to this analysis; at that rate, that works out to roughly 960,000 compromised accounts every month.

“To me, this is an important story because for years people have been like, yeah, we know email isn’t very secure, but this generic statement doesn’t have any teeth to it,” Bill said. “I don’t feel like anyone has been able to call attention to the numbers that show why email is so insecure.”

Bill says that in general companies have far more tools available for securing and analyzing employee email traffic when that access is funneled through a Web page or VPN than when it happens via IMAP.

“It’s just more difficult to get through the Web interface because on a website you have a plethora of advanced authentication controls at your fingertips, including things like device fingerprinting, scanning for HTTP header anomalies, and so on,” Bill said. “But what are the detection signatures you have available for detecting malicious logins via IMAP?”

Microsoft declined to comment specifically on Bill’s research, but said customers can block the overwhelming majority of account takeover efforts by enabling multi-factor authentication. Read the detailed report on OUR FORUM.
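Bill’s visibility into success and failure, incidentally, is grounded in the protocol itself: under RFC 3501, every IMAP command — including LOGIN — ends in a tagged OK, NO, or BAD completion reply, so anyone who can observe the session can score authentication attempts. A minimal sketch of that classification (the function name is mine, not from any real tooling):

```python
def classify_imap_reply(line: str) -> str:
    """Classify a tagged IMAP server reply (RFC 3501) as OK, NO, or BAD.

    e.g. 'a001 OK LOGIN completed' means the credentials were accepted,
         'a002 NO LOGIN failed'    means they were rejected.
    """
    parts = line.split()
    # Untagged replies (starting with '*') are server chatter, not command results.
    if len(parts) >= 2 and parts[0] != "*" and parts[1] in ("OK", "NO", "BAD"):
        return parts[1]
    return "UNKNOWN"
```

This is also why IMAP gives defenders so little to work with: a bare text reply carries none of the device or browser signals a Web login flow can collect.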

A shocking new tracking admission from Google, one that hasn’t yet made headlines, should be a serious warning to Chrome’s 2.6 billion users. If you’re one of them, this nasty new surprise should be a genuine reason to quit.

Behind the slick marketing and feature updates, the reality is that Chrome is in a mess when it comes to privacy and security. It has fallen behind rivals in protecting users from tracking and data harvesting, its plan to ditch nasty third-party cookies has been awkwardly postponed, and the replacement technology it said would prevent users from being profiled and tracked turns out to have just made everything worse.

“Ubiquitous surveillance... harms individuals and society,” Firefox developer Mozilla warns, and “Chrome is the only major browser that does not offer meaningful protection against cross-site tracking... and will continue to leave users unprotected.” Google readily (and ironically) admits that such ubiquitous web tracking is out of hand and has resulted in “an erosion of trust... [where] 72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or others, and 81% say the potential risks from data collection outweigh the benefits.”

So, how can Google continue to openly admit that this tracking undermines user privacy, and yet enable such tracking by default on its flagship browser? The answer is simple—follow the money. Restricting tracking will materially reduce ad revenue from targeting users with sales pitches, political messages, and opinions. And right now, Google doesn’t have a Plan B—its grand idea for anonymized tracking is in disarray.
“Research has shown that up to 52 companies can theoretically observe up to 91% of the average user’s web browsing history,” a senior Chrome engineer told a recent Internet Engineering Task Force call, “and 600 companies can observe at least 50%.” Google’s Privacy Sandbox is supposed to fix this, to serve the needs of advertisers seeking to target users in a more “privacy-preserving” way. But the issue is that even Google’s staggering level of control over the internet advertising ecosystem is not absolute. There is already a complex spider’s web of trackers and data brokers in place, and any new technology simply adds to that complexity; it cannot exist in isolation.

It’s this unhappy situation that’s behind the failure of FLoC, Google’s self-heralded attempt to deploy anonymized tracking across the web. It turns out that building a wall around only half a chicken coop is not especially effective—especially when some of the foxes are already hanging around inside.

Rather than target you as an individual, FLoC assigns you to a cohort of people with similar interests and behaviors, defined by the websites you all visit. So, you’re not 55-year-old Jane Doe, sales assistant, residing at 101 Acacia Avenue. Instead, you’re presented as a member of Cohort X, from which advertisers can infer what you’ll likely do and buy from the websites the group members have in common. Google would inevitably control the entire process, and advertisers would inevitably pay to play.

FLoC came under immediate fire. The privacy lobby called out the risk that data brokers would simply add cohort IDs to other data collected on users—IP addresses, browser identities, or any first-party web identifiers—giving them even more knowledge of individuals. There was also the risk that cohort IDs might betray sensitive information: politics, sexuality, health, finances, and more.
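Mechanically, a FLoC cohort ID is just a small integer the browser computes from your recent browsing history, which is exactly why it combines so easily with the identifiers data brokers already hold. The toy sketch below uses a plain SHA-256 where the real system used SimHash (so that *similar* histories land in the same cohort); the cohort count and names here are illustrative only:

```python
import hashlib

NUM_COHORTS = 10_000  # illustrative; not the trial's actual cohort count

def cohort_id(visited_domains: set[str]) -> int:
    """Map a browsing history to one of NUM_COHORTS buckets.

    Unlike FLoC's SimHash, a cryptographic hash gives similar histories
    unrelated IDs; this only shows the input/output shape of the scheme:
    a stable small integer derived from where you've been.
    """
    digest = hashlib.sha256("|".join(sorted(visited_domains)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_COHORTS
```

The fingerprinting complaint falls out directly from that shape: a stable ID drawn from ~10,000 cohorts hands any tracker that already holds an IP address or first-party identifier another log2(10,000) ≈ 13 bits with which to narrow a user down.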
No, Google assured as it launched its controversial FLoC trial, telling me in April that “we strongly believe that FLoC is better for user privacy compared to the individual cross-site tracking that is prevalent today.” Not so, Google has now admitted, telling the IETF that “today’s fingerprinting surface, even without FLoC, is easily enough to uniquely identify users,” but that “FLoC adds new fingerprinting surfaces.” Let me translate that: just as the privacy lobby had warned, FLoC makes things worse, not better. Follow this thread on OUR FORUM.

Coinciding with Tim Cook hitting the 10-year mark as Apple’s CEO, the iPhone maker has found itself in a strange place. The consumer electronics giant that’s spent years positioning itself as the pro-privacy alternative to tech giants like Google and Facebook has inadvertently landed smack in the middle of a huge controversy, one that has normally pliant journalists treating Apple with rare skepticism and that threatens to undermine Apple’s privacy-focused core philosophy under Cook. The culprit: one of the many new iOS 15 features included in the next big software update this fall.

By now, if you follow Apple news to any degree, you’re probably familiar with the particulars. Starting with iOS 15, Apple will hash photos destined to be uploaded to iCloud and compare them against a CSAM (child sexual abuse material) database maintained in the US by the National Center for Missing and Exploited Children, or NCMEC. The new iOS system kicks into action if the following conditions are met. First, you possess specific CSAM material — material that is already marked or hashed, and able to be matched against what’s in the NCMEC database. Second, you use iCloud to store your photos, which the vast majority of iPhone owners do. Once you hit a threshold of successful comparisons — meaning material in your possession matches what’s in the database a certain number of times — Apple notifies law enforcement.

Meanwhile, ironically, there’s actually a pretty easy way to avoid all this new scrutiny from Apple in the first place. All you’ve got to do is disable the sharing of photos to iCloud: open the Settings app on your iPhone or iPad, navigate to “Photos,” and disable the “iCloud Photos” option. After that, choose “Download Photos & Videos” when the popup appears, to pull everything in your iCloud Photos library down to your device.
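Mechanically, what Apple describes is hash-and-threshold matching. The sketch below substitutes exact byte comparison for Apple’s NeuralHash perceptual hashing and omits the cryptographic threshold secret-sharing layer entirely; the threshold value of 30 reflects the figure Apple has cited publicly, but every name here is illustrative, not Apple’s code:

```python
THRESHOLD = 30  # roughly the match count Apple said triggers human review

def should_flag(upload_hashes: list[bytes], known_csam_hashes: set[bytes],
                threshold: int = THRESHOLD) -> bool:
    """Flag an account only once database matches reach the threshold.

    Real NeuralHash matching is perceptual (tolerant of resizing and
    recompression); exact equality here just demonstrates the counting logic.
    """
    matches = sum(1 for h in upload_hashes if h in known_csam_hashes)
    return matches >= threshold
```

Note that the check runs only on photos queued for iCloud upload: images that never enter the upload pipeline are never hashed at all, which is precisely the bypass critics seized on.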
If you then want to migrate away from Apple? Maybe, say, because you feel that the iPhone maker is invading your privacy via these new iOS 15 features? Well … all we can say is good luck with that transition. Almost every provider of cloud backup services already does this same kind of scanning. The key difference, and it’s a huge one, is that they do it all in the cloud, on their end. Apple, however, performs cloud scanning as well as some of the image matching on your device itself. And therein lies the reason for the outcry from privacy advocates: Apple is going to be looking for a specific kind of contraband on your personal device going forward, like it or not. Unless, that is, you disable the setting we noted above.

Speaking of which, NSA whistleblower Edward Snowden angrily blasted the fact that you can so easily do so in a new post he published to his Substack on Wednesday evening. “If you’re an enterprising pedophile with a basement full of CSAM-tainted iPhones,” he writes, “Apple welcomes you to entirely exempt yourself from these scans by simply flipping the ‘Disable iCloud Photos’ switch.” It’s “a bypass which reveals that this system was never designed to protect children, as they would have you believe, but rather to protect their brand.” In other words, he continues, this is about keeping that material off Apple’s servers, and thus keeping Apple out of negative headlines.

Do Snowden (and, for that matter, privacy advocates like him) seem overly concerned about some dark hypothetical future because of these new iOS 15 features? “So what happens when, in a few years at the latest … in order to protect the children, bills are passed in the legislature to prohibit this (Disable iCloud) bypass, effectively compelling Apple to scan photos that aren’t backed up to iCloud?” Snowden continues in his new post. Or what if a party in India starts demanding that Apple scan for memes associated with a separatist movement?
“How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering ‘extremist’ political material, or about your presence at a ‘civil disturbance’?” For more in-depth reading visit OUR FORUM.