Top 3 Security Vulnerabilities I Find in iOS and Android Projects

About the author

Chris Griffith has been a game developer for over 19 years and a mobile developer since 2010. He’s produced dozens of apps and games for brands like Match.com, LEGO, Microsoft, Kraft, Anheuser-Busch and PepsiCo.

Chris has been a member of the PullRequest network since April 2018.

Given the speed at which software is developed these days, it shouldn’t be too surprising that security can get overlooked. There’s a tendency to think, “Well, we’re using HTTPS and hashing passwords properly on the server - we’re good!” However, there are a number of non-obvious ways developers can put their projects at risk. In this article I’ll cover the three most common security issues I find when reviewing iOS and Android projects.

1. Secrets in Code

It’s happened to all of us: you’ve been asked at the 11th hour of a project to integrate yet another third-party SDK. It’s for analytics or something - you can’t remember - but you’re up against the deadline and need to get it in. You’re told it’s super simple: just initialize it with a couple of lines of code and an API key. So you slot the key into a constant in a file that, at this point, might as well be named keychain.swift for succinctness:

let someSDKKey = "aa3c7e329824aa378d25f3a461cf684e"

On the surface, there may seem to be little wrong with this. The value is in code, which is compiled and digitally signed before going to the app marketplaces. It’s not like it’s sitting around in plain text. What’s the harm?

Something that is often forgotten is how long software can live. I’ve worked on mobile projects started in 2012 that were still using chunks of original code well into 2019. Over the course of the life of a larger software project (the kind people are going to be more interested in hacking), you might have many dozens of engineers who pull the code onto their machines. How long do those developers stay employed by the company that owns the code? How many of them have been the victims of hacking or stolen equipment? How many of them might be disgruntled enough by the circumstances of their departure to post code somewhere (or just be careless with it)? How many data breaches occur on an annual basis in medium-to-large companies?

This may seem like paranoia, but for larger projects - particularly ones dealing with things like financials - this is not far-fetched. For your run-of-the-mill analytics package, the risk is probably not that high. Why would a hacker want to use your key to pollute analytics data they can’t even see? What harm could it do? For starters, this could lead to important product and business decisions being made based on bad data. There are a number of other reasons, but the bottom line is that no one wants to be responsible for having caused a leak in the first place.

So, what if you use the same API key for a real-time chat feature hosted by the same company?

The Solution: Injection

More and more projects are using continuous integration systems, where the majority of builds are produced by a machine in the cloud (or sometimes on a local network). I see this as a good thing - making builds of larger mobile projects can eat up hours of an engineer’s time over the course of a week. While not a trivial amount of work, setting up CircleCI, Jenkins, or TeamCity is not a herculean task, and it provides value almost immediately. Some solo developers think it’s not worth the effort when there are no other team members to benefit - I’d argue this is when it’s most helpful, because managing builds is a job best suited for robots, not humans.

A common part of these CI systems (and Fastlane, which is usually what’s under the hood making the magic happen for mobile apps) is the ability to define environment variables. The purpose of these values is to solve the problem discussed above: keeping sensitive data away from prying eyes. Some of these values might be the username/password combination for the App Store Connect account used to upload builds to TestFlight, or the Crashlytics API key for uploading debug symbols. But you can also use them to hide any sensitive values your code needs: by committing template files with placeholder values, you can substitute the real secrets at build time.
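As a concrete illustration, here’s a minimal sketch of the template approach in Swift - the file name, placeholder token, and environment variable are all hypothetical:

// Secrets.swift.template - committed to source control with placeholders only.
// A CI build step (a Run Script phase or a Fastlane lane, for example) copies
// this file to Secrets.swift, replacing the __SOME_SDK_KEY__ token with the
// value of the SOME_SDK_KEY environment variable. The generated Secrets.swift
// is listed in .gitignore so real keys never land in the repository.
enum Secrets {
    static let someSDKKey = "__SOME_SDK_KEY__"
}

The substitution step can be as simple as a one-liner in the build script, e.g. sed "s/__SOME_SDK_KEY__/$SOME_SDK_KEY/" Secrets.swift.template > Secrets.swift.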

There are numerous articles outlining different approaches (see here, here, and here). The bottom line is that if you’re going to store any secret values in your app at runtime, you should at least consider the possible risks and think about a mitigation strategy. The same solution may not work for every project, but if you’re keeping secret values out of your source code, you’re at least on the right track.

2. Outdated Cryptography

It’s less common these days for mobile apps to do much of their own cryptography. With TLS protecting HTTP requests, and heavy lifters like Amazon, Google, and Microsoft all providing SDKs that abstract those worries away, there’s not as much need. That said, there are still times when you need to generate a hash of some sensitive information, or two-way encrypt locally stored data so a nosy user can’t extract the file from their device and tamper with it (games come to mind).

There are numerous libraries and methods for common cryptographic algorithms. Many of us will never know the guts of how they work - they’re industry standards and they “just work.” However, for the intrepid developer who wants to learn more on that front, I highly recommend this channel. What we all should know is which algorithms are safe to use and which have been compromised after years of attacks exposed their weaknesses.

For example, take hashing functions. They accept a data input and produce a fixed-length result, often represented as a hexadecimal string. Remember the key from above? It was generated with the MD5 algorithm. A common use case for MD5 is as a checksum for file transfers: regardless of the size of the file, it produces a 32-character string that can be compared before and after the file traverses a network to verify it arrived intact. As a hashing algorithm for secure data, however, MD5 has been considered broken for many years. I won’t get into the details here, as there’s been a ton written about this already, but suffice it to say that this tends to be the way all security algorithms go over time. As computer hardware continues to get faster (and cloud computing can be purchased for pennies), the cryptographic algorithms we use must keep growing more complex.
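Apple’s CryptoKit framework makes the point hard to miss: MD5 is only exposed under an Insecure namespace. A minimal sketch of hashing with it (fine for a legacy checksum, unsafe for anything security-sensitive):

import CryptoKit
import Foundation

// CryptoKit deliberately tucks MD5 (and SHA-1) under the "Insecure" namespace.
let data = Data("hello world".utf8)
let digest = Insecure.MD5.hash(data: data)
let hex = digest.map { String(format: "%02x", $0) }.joined()
print(hex)  // 32 hex characters: "5eb63bbbe01eeed093cb22bb8f5acdc3"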

The Solution: Stay Informed on Best Practices

This is all well and good, but it presents a problem for software developers not directly engaged in building new security protocols and algorithms. How do we know what’s both safe to use and performant for the environment we’re working in? After all, you can still find MD5 and other antiquated options in most common crypto packages. There are many helpful resources for keeping up to date on the latest standards, such as here and here.

If that’s TL;DR, I’ll go ahead and make two recommendations based on current computing standards in 2020 (a brief sketch follows the list). If you can, use:

  • SHA-2 or SHA-3 for hashing.
  • AES for two-way encryption.
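Here’s a minimal sketch of both using Apple’s CryptoKit (iOS 13+); note that in a real app the encryption key would be generated once and stored in the Keychain, not created inline:

import CryptoKit
import Foundation

let message = Data("sensitive payload".utf8)

// SHA-256 (a member of the SHA-2 family) for hashing.
let digest = SHA256.hash(data: message)
print(digest.map { String(format: "%02x", $0) }.joined())

// AES-GCM for authenticated two-way encryption.
do {
    let key = SymmetricKey(size: .bits256)  // throwaway key for illustration
    let sealedBox = try AES.GCM.seal(message, using: key)
    let decrypted = try AES.GCM.open(sealedBox, using: key)
    assert(decrypted == message)
} catch {
    print("Encryption failed: \(error)")
}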

There are performance implications to consider for any cryptography, so ultimately you have to pick the solution that provides the best safety without compromising the user experience.

3. Insecure Runtime Storage

We’ve covered secrets at the source code level and the cryptography commonly used to protect data in transit. The final aspect I want to dig into is how data is stored by the app at runtime - specifically, data like access tokens or other sensitive app data.

When an app authenticates with a remote server, it usually gets back some type of short-lived access token that it uses to talk to APIs. For apps that don’t need to force the user to log back in between sessions (unlike, say, banking apps), a good user experience typically involves storing this token and attempting to reuse it on the next launch if it hasn’t expired yet. This lets users get in and out of their apps quickly and without hassle.

How do you store this token, though? Something I see all too often in iOS projects is the use of UserDefaults, the local storage option for app preferences. It’s a tempting option because it’s super easy to store a string and read it back out. However, UserDefaults is stored as a plist file (an Apple-specific form of XML) inside the app’s container, entirely human-readable - the equivalent of storing a password in plain text. A similar situation can arise on Android with the SharedPreferences class, though it’s admittedly more work to get at that data.
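For illustration, here’s the tempting-but-insecure pattern (the key name and token value are hypothetical):

import Foundation

let accessToken = "abc123-example-token"

// Don't do this: UserDefaults persists to an unencrypted plist on disk,
// so anyone who can extract the app container can read the token.
UserDefaults.standard.set(accessToken, forKey: "accessToken")
let restored = UserDefaults.standard.string(forKey: "accessToken")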

Along the same lines are apps that store local data, such as configuration files, in JSON or plist formats without any form of encryption. These files are also susceptible to prying eyes and can be opened as easily as a text file.

The Solution: Secure Storage Options

Google offers a more secure alternative, EncryptedSharedPreferences, which can be read about here. Apple offers its Keychain API, which you can find more about here. If you’re like me and have found the Keychain API rather clunky to use, I recommend adopting one of the many battle-tested, open-source community wrappers such as KeychainAccess. They make accessing secure data basically as simple as UserDefaults, but without the aforementioned vulnerabilities. For local data storage, there are options like NSSecureCoding on iOS or EncryptedFile on Android. These make it safer to save data to the file system without worrying that the contents will be easily accessed.
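Here’s a minimal sketch using the KeychainAccess wrapper mentioned above - the service identifier, key name, and token value are hypothetical:

import KeychainAccess

// Keychain items are encrypted at rest by the operating system.
let keychain = Keychain(service: "com.example.myapp")

// Store the token after a successful login.
keychain["accessToken"] = "abc123-example-token"

// Read it back on the next launch; nil means the user must sign in again.
if let token = keychain["accessToken"] {
    print("Resuming session with token: \(token)")
}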

Conclusion

Applying one or more of the techniques described in this article will help make your apps and projects more secure. Security may not be as glamorous to work on as animation, or as exciting to the business as features that drive viral growth or in-app purchases, but it should be the backbone of any app that handles user data. It only takes one breach to spark a costly catastrophe, and that breach is often avoidable by following a set of basic best practices.

About PullRequest

PullRequest is a platform for code review, built for teams of all sizes. We have a network of expert engineers, enhanced by AI, to help you ship secure code faster.

Learn more about PullRequest

by Chris Griffith

May 6, 2020