
When threats get weird, security solutions get weirder

The world of security is getting super weird. And the solutions may be even weirder than the threats.

Some of the biggest companies in technology have been caught deliberately introducing potential vulnerabilities into mobile operating systems and making no effort to inform users.

One of those was introduced into Android by Google. In that case, Android was found to be collecting and transmitting location data derived from nearby cell towers, without using the phone’s GPS and even without an installed SIM card. Google claimed that it never stored or used the data, and it later ended the practice.

Tracking is a real problem for mobile apps, and this problem is underappreciated in considerations around BYOD policies.

Yale Law School’s Privacy Lab and the France-based nonprofit Exodus Privacy have documented that more than 75% of the 300-plus Android apps they examined contained trackers of one kind or another, mostly for advertising, behavioral analytics or location tracking.

Most of that location tracking relies on accessing GPS information, which requires user opt-in. But now, researchers at Princeton University have demonstrated a potential privacy breach by creating an app called PinMe, which harvests location data from a smartphone without using GPS at all, instead mining the phone’s other sensors.

In general, the belief that turning off a phone’s location feature protects us from location snoops has been invalidated.

In fact, many of our assumptions around security are being challenged by new facts. Take two-factor authentication, for example.

A report last month by Javelin Strategy & Research claimed that current applications of multi-factor authentication are “being undermined.” Two- or multi-factor authentication is also underutilized by enterprises, with just over one-third using “two or more factors to secure access to their data and systems.”

So we can’t trust two-factor authentication like we used to, and even if we could, it’s wildly underutilized.

But surely we can trust Apple devices, right? Apple has a sterling reputation for strong security. Or, I should say, “had” such a reputation.

Apple apologized and issued a patch this week for a major security flaw that let anyone with physical access to a Mac running macOS High Sierra gain full administrative access without a password, simply by entering “root” as the username and leaving the password field blank.

Apple fixed the flaw. But the fact that it existed at all is new and weird and challenges our beliefs about Apple’s security cred.

Apple’s new Face ID authentication has been defeated by researchers, and some security experts refuse to use it. The methods for overcoming Face ID range from simply finding someone who looks similar to creating a realistic mask to fool it. Cybercriminals are going to be building and wearing masks, apparently.

And some authentication systems sound worse than the risks they’re supposed to protect us from.

Facebook is reportedly testing an authentication scheme that requires users to take a selfie at the point of logging in. Many smartphone photos carry embedded time and location metadata, so a photo taken at every login could reveal exactly when and where you are.
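That metadata is trivially easy to extract. Here’s a minimal sketch, using recent versions of the Python Pillow library, that pulls the capture time and raw GPS tags out of an ordinary photo file; the “selfie.jpg” file name is just a placeholder.

from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_metadata(path):
    """Return the capture time and raw GPS tags embedded in a photo's EXIF data."""
    exif = Image.open(path).getexif()
    # Translate numeric tag IDs into readable names like "DateTime".
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS data lives in its own EXIF sub-directory (IFD 0x8825).
    gps_ifd = exif.get_ifd(0x8825)
    gps = {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}
    return named.get("DateTime"), gps

if __name__ == "__main__":
    taken_at, gps = photo_metadata("selfie.jpg")  # hypothetical file name
    print("Taken:", taken_at)
    print("GPS:", gps)  # typically latitude, longitude, altitude and a timestamp

A few lines of code, no special permissions, and the photo tells you when and roughly where it was taken.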

In the past month or two, our assumptions around security have been upended. Things we used to believe were secure are not.

And it’s going to get worse before it gets better.


