The Apple App Store and Google Play do a great job of guarding mobile users against downloading and installing malicious apps.
By centralizing app distribution, mobile platform owners can stop hackers from uploading malicious apps and potentially infecting millions of users. This is a lesson we learned from the PC days, when a decentralized distribution system and an open platform made it easy for malware to spread. Can we claim victory over the hackers? Not quite yet.
The fact that the App Store and Google Play manage to control the distribution of malicious apps does not mean there are no vulnerable apps out there. Hackers are clever; they have found ways around stringent app store controls by exploiting existing, non-malicious apps that are vulnerable. This can be done via a different app, by inspecting data in transit, or even via the web while you browse from your mobile browser.
How can an app be vulnerable?
There are three main ways that an app can be vulnerable to hackers.
Data transmission
Almost all mobile apps transmit and receive data between our devices and remote servers. This allows apps to update, send statistics, check licenses, monitor analytics and so on. There are two ways this leaves apps vulnerable:
- No encryption – if data leaving your device is unencrypted, hackers can ‘look inside’ it and grab your passwords, credit card numbers or any other personal details you may not want to share. This is most common on public Wi-Fi hotspots like those found in airports, malls and coffee shops.
- No certificate validation – when an app sends data to a remote server, it’s important to verify that the server is the correct one and not one owned by a hacker. Digital certificates on the server let the app validate the server’s identity. Without proper certificate validation, data can be at risk.
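To make the certificate-validation point concrete, here is a minimal Python sketch using the standard `ssl` module. The safe context mirrors what a well-written app does by default; the second context shows the dangerous pattern sometimes shipped in apps to "make the connection work". This is an illustration of the concept, not a claim about any particular app's code.

```python
import ssl

# A properly configured client context: ssl.create_default_context()
# enables both certificate-chain validation and hostname checking.
safe_ctx = ssl.create_default_context()
print(safe_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(safe_ctx.check_hostname)                    # True

# The dangerous pattern found in vulnerable apps: validation switched
# off entirely, which lets an attacker on the same Wi-Fi hotspot
# present any certificate at all and read the traffic.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False   # must be disabled before CERT_NONE
unsafe_ctx.verify_mode = ssl.CERT_NONE
print(unsafe_ctx.verify_mode == ssl.CERT_NONE)    # True
```

A connection made with `unsafe_ctx` will happily accept a hacker-controlled server; the fix is simply to keep the library defaults.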
Data storage
As we use mobile apps, most of them store data locally on our devices. This often takes the form of log files, which record our activities within an app, the strings we typed into it, cached data/reports and more. There are two ways these files can leave apps vulnerable:
- No encryption – storing data on the device can greatly improve app performance and user experience. However, leaving private data unencrypted on the device is dangerous. A separate app installed on the device can potentially have permission to access such a file, ‘look inside’ and retrieve personal data.
- Files left after uninstall – when we uninstall apps from our devices, many of us expect that all related files (with our private data in them) are also removed. However, this is not always the case. Apps often have permission to create files in various locations on our devices, and these can be left behind when the apps are removed. Such fragments can later be accessed by other apps to retrieve data.
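One way to avoid leaving recoverable secrets in local files is to store only a salted key-derivation hash when the app merely needs to verify a value later (a PIN, a password). The sketch below uses Python's standard library to show the idea; the function names are illustrative, not from any real SDK.

```python
import hashlib
import hmac
import os

def store_credential(secret: str) -> bytes:
    """Return salt + PBKDF2 hash suitable for writing to local storage.

    The raw secret never touches the file system, so another app that
    gains access to the stored file cannot recover the original value.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt + digest

def verify_credential(secret: str, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare safely."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

record = store_credential("hunter2")
print(verify_credential("hunter2", record))  # True
print(verify_credential("wrong", record))    # False
```

When the app must store data it needs to read back (not just verify), the equivalent advice is to encrypt it with a platform keystore-backed key rather than write it in the clear.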
3rd party components
It’s quite common for app developers to rush their products to market. As time is short, developers reuse components (SDKs) from 3rd parties to provide the functionality they need. Examples of popular development tools and components can be found here: http://www.appbrain.com/stats/libraries/dev. The issue with these toolkits is that they are not always secure. Here are a few examples:
- Android WebView – many mobile apps display web content. To download and render such content on a mobile device, most Android developers use the WebView component. However, this component was found to be vulnerable to remote attacks (CVE-2012-6636).
- Dropbox Android SDK – when mobile apps want to integrate with cloud storage (photo apps, wallets, vaults etc.), they embed SDKs from cloud storage providers. The Dropbox Android SDK was found to be vulnerable (CVE-2014-8889). This vulnerability may enable theft of sensitive information from apps that use the vulnerable SDK, both locally by malware and remotely via drive-by exploitation techniques.
- Configuration and development errors – as long as humans continue to write software, vulnerabilities will exist. The increasing complexity of operating systems, databases, app logic and platforms, compounded by short development windows, makes it very difficult for developers to catch every error in their code. Unfortunately, this leaves large volumes of untested, potentially vulnerable code.
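Keeping bundled SDKs patched is partly a bookkeeping problem: knowing which dependency versions predate a fix. A tiny checker along these lines can flag them; the SDK names and version numbers below are entirely made up for illustration, not real advisories.

```python
# Hypothetical advisory table: SDK name -> first version containing the fix.
# In practice this data would come from a vulnerability feed, not be
# hard-coded; the entries here are illustrative only.
FIXED_IN = {
    "example-cloud-sdk": (1, 6, 2),
    "example-webview-shim": (2, 0, 0),
}

def parse_version(text: str) -> tuple:
    """Turn '1.5.0' into (1, 5, 0) for tuple comparison."""
    return tuple(int(part) for part in text.split("."))

def vulnerable_deps(deps: dict) -> list:
    """Return the dependencies pinned below their first fixed version."""
    flagged = []
    for name, version in deps.items():
        fixed = FIXED_IN.get(name)
        if fixed is not None and parse_version(version) < fixed:
            flagged.append(name)
    return flagged

app_deps = {"example-cloud-sdk": "1.5.0", "example-webview-shim": "2.1.0"}
print(vulnerable_deps(app_deps))  # ['example-cloud-sdk']
```

Running a check like this as part of every build is far cheaper than discovering a shipped app embeds a toolkit with a published CVE.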
Why do apps have these vulnerabilities?
Now that we have identified the main types of vulnerability found within mobile apps, it’s important to understand the root causes behind them. It’s not simply a question of bad coding.
Lack of security awareness
Just as with any problem, if you are unaware of a risk you won’t pay attention to it. Most developers are trained to deliver functionality, not security.
Small development teams
Unlike PC products, most mobile apps are built by relatively small development teams. With ever-increasing functionality required and short time to market, the time available for finding vulnerabilities is getting shorter and shorter.
Abandoned apps
Developers have abandoned thousands of apps due to low monetization. These abandoned apps are no longer supported, and any vulnerabilities they contain remain indefinitely.
Rush to market
The mobile world is moving faster than ever. Developers need to code and release their apps in almost ‘no time’. While the business demands functionality, that leaves almost no time for security scanning and audits.
What can developers do to secure their apps?
It’s not all bad news though; there are several things app developers can do to improve the security of their apps:
- Learn about secure coding and vulnerable SDKs to avoid common mistakes and deliver a secure app to your users.
- Embed security testing in the general quality assurance procedures; from unit testing to continuous integration.
- Use automated tools to statically and dynamically scan and test for vulnerabilities.
- Remove unneeded functionality from your code or stop the distribution of an app that is no longer supported.
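As a sketch of what "embed security testing in QA" can mean in its simplest form, here is a toy static check that flags a few insecure patterns in source text. The pattern list is a hypothetical deny-list for illustration; real lint rules and SAST scanners are far more thorough.

```python
import re

# Hypothetical deny-list of patterns worth flagging in a pre-release scan.
INSECURE_PATTERNS = {
    "cleartext URL": re.compile(r"http://"),
    "disabled TLS validation": re.compile(r"CERT_NONE|TrustAllCerts"),
    "hard-coded secret": re.compile(r"password\s*=\s*['\"]"),
}

def scan_source(source: str) -> list:
    """Return (line number, finding) pairs for human review."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in INSECURE_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'api = "http://example.com/v1"\npassword = "hunter2"\n'
print(scan_source(sample))  # [(1, 'cleartext URL'), (2, 'hard-coded secret')]
```

Wiring even a simple check like this into continuous integration means insecure patterns fail the build instead of shipping to users.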
What can the App Store and Google Play do?
Still, developers are not entirely responsible for eradicating vulnerable apps. Official mobile stores employ automatic security scanners to identify malicious apps; these can be very difficult to detect, and doing so takes considerable resources and attention. However, much more can be done to help prevent the distribution of vulnerable apps. I believe the most progress can be made in improving communication between the app stores and developers when issues arise:
- Developers should receive a notice once their app is found to be vulnerable.
- Developers of apps that include popular development tools found to be vulnerable should be notified and asked to update the tool/SDK to a safe version.
- Developers should have sufficient time to release a fix; otherwise their app should be unlisted.
Reference: A list of the top 10 mobile risks was published by the OWASP group in 2014: https://www.owasp.org/index.php/OWASP_Mobile_Security_Project#tab=Top_10_Mobile_Risks