Privacy perspectives: dos, don’ts, and to-dos

Every time you sign up for a new website, share your latest run with your friends, or scan your loyalty card at a supermarket, you leave a record of your activity that is permanent, attached to your identity, and increasingly linked with other information to build a more complete picture of who you are and the life you lead. OU privacy and computing experts explain…

New technologies are constantly coming to market which promise new ways to help us understand ourselves and improve our lives, such as fitness trackers and smartwatches. These might motivate you to be more physically active, help reduce your insurance premiums, or tell you precisely when to leave the house to get to your next appointment on time. This drive towards self-improvement through the constant collection and interpretation of personal data realises the ‘quantified self’. These devices are increasingly embedded in our homes and offices, and worn on our bodies, sharing data and co-ordinating with one another as part of the so-called Internet of Things.

Control over how our information is stored

Researchers at The Open University are working to make sure that future products and services are designed to respect our privacy, use this ever-increasing wealth of data responsibly, and give us the right sorts of controls over how our information is used.

When you use devices such as fitness trackers, can you say with confidence how they use the data they collect, or where the data is stored? Who else do they share the data with? Our research shows that many believe that their data is stored in their fitness tracker, such as a Fitbit, or in their phone when they are using an app. In reality, such data is often stored in the cloud, by companies specialising in data storage and computation, such as Google and Amazon.

Is it possible to make an informed choice?

This knowledge might be buried in long and complex terms of service, but we argue it’s not reasonable to expect people to read through these documents just to understand how a service handles their data. Much of our research therefore seeks to find better ways to make this information readily accessible and easily understood, giving people real choice around their privacy. For example, when we studied a number of self-tracking apps, including MyFitnessPal and Sleep Cycle, we found that many fail to sufficiently explain how your personal data will be used before they begin to collect and store it. This makes it impossible to make an informed choice about how to manage your privacy.

It’s also easy to assume that all similar apps function the same way, but again this is not necessarily the case. Some apps may be invisibly collecting and sharing different types of data for very different reasons, without you knowing about it. Your activity from multiple services might be combined to show you more relevant adverts, but you might not want advertisers to have this sort of information about you, particularly when you are not the one who benefits. Similarly, when such data is used to decide your insurance premiums, or how much you pay for a plane ticket, it’s important to understand what is being shared and why.

Maintaining privacy: a collective effort

The Monetize Me project is working to improve how such devices and services maintain your privacy, so that they meet your expectations, and to make privacy-preserving devices and apps easier to identify in the marketplace. One approach is for software designers to make the trade-offs between privacy and benefits clearer, so you can actively decide what you are willing to share to enable specific features, without your expectations being violated. Using this principle, we have shown how social networking sites such as Facebook could recommend groups of people to share individual posts with, maintaining the social benefits while minimising privacy risks. Rather than trying to address privacy challenges as they emerge once products are in the market, we suggest that developers adopt a ‘privacy by design’ approach. To support this, we have proposed a set of guidelines for embedding privacy into the Internet of Things.

It is worth emphasising that it is not individuals’ responsibility alone to protect themselves from privacy intrusions. Technology companies and developers can adopt privacy by design principles to mitigate some of these risks, and such practice should become commonplace if we are to avoid becoming disillusioned with emerging technologies. We therefore argue that maintaining privacy is ultimately a collective effort, shared between researchers, developers, and those who ultimately use the devices and services they produce.

For researchers, it is important to investigate new ways of thinking about privacy, such as how it can be shared between groups of people. The Open University’s Privacy Dynamics project is investigating how knowledge of social dynamics can be used to make ways of managing privacy more appropriate, for example on social networking sites, and to develop novel interfaces for privacy awareness and control.

For companies and developers building these data collection technologies, we believe there is a need to explore ways of putting privacy at the centre of their systems. Indeed, another facet of our Monetize Me project is to explore novel business models that make privacy a core value proposition.

What can you do to protect your privacy?

Finally, for individuals using these services, it’s worth thinking about what you can do to protect your privacy with the apps, websites, and devices you regularly use. Are they trustworthy? Check what permissions the apps are using, and decide whether they seem appropriate considering what you use them for. Do they let you control who your data is shared with? Look at the controls you’re offered to make sure you’re sharing your data with the right people, including controls over secondary purposes such as marketing.

Ultimately, we believe that through a greater understanding of people’s needs and expectations, and by allowing people to actively participate in how their data is used, these risks can be mitigated, and we can have more confidence in the technologies we use.

This article was written by Research Associates Luke Hutton and Tally Hatzakis, Senior Lecturers in Computing Arosha K. Bandara and Blaine Price, and Professor of Computing Bashar Nuseibeh.

