Opinion | What We’ve Learned From Our Privacy Project (So Far)
July 16, 2019
The Privacy Project has been underway for four months, and in that time we’ve learned quite a bit about what we do — and don’t — know about privacy.
Privacy is a complex, nebulous and constantly evolving issue, but amid the chaos and complexity, we have discerned four main themes: the ubiquity of surveillance and the ready availability of surveillance tools; our considerable ignorance of where personal data goes and how companies and governments use that data; the tangible harm of privacy violations; and the possibility that sacrificing privacy for other values (say, convenience or security) can be a worthwhile trade-off.
Surveillance Tools Are Readily Available
It’s unnervingly easy to violate the privacy of others — purposefully or inadvertently — using surveillance tools accessible to nearly everyone.
To show just how easy it is, Stuart A. Thompson, the graphics director for The New York Times Opinion Section, bought a set of targeted advertisements and then designed them to explicitly reveal the (usually hidden) information on which they operate. Targeted ads are often praised as a way for companies to help you find and buy products specific to your needs and interests — which seems harmless enough. But targeted ads can do more than that: They can influence your beliefs and manipulate your behavior.
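To picture what that usually hidden information looks like, consider a hypothetical targeting filter of the sort an ad buyer can specify. The field names below are invented for illustration and are not any platform’s actual API, but the categories (location, age, inferred interests) mirror what such systems expose.

```python
# A hypothetical targeting filter; field names are illustrative, not any real
# ad platform's API, but the categories mirror what such systems let buyers use.
targeting_spec = {
    "geo": {"city": "New York", "radius_miles": 10},
    "age_range": (25, 34),
    "inferred_interests": ["fitness", "personal finance"],
}

def matches(profile: dict, spec: dict) -> bool:
    """Check whether a platform-inferred user profile fits the buyer's filter."""
    low, high = spec["age_range"]
    return (
        profile.get("city") == spec["geo"]["city"]
        and low <= profile.get("age", 0) <= high
        and any(i in profile.get("interests", ()) for i in spec["inferred_interests"])
    )

# The profile itself is the hidden part: the platform assembles it from browsing
# history, location and purchases, and the user rarely sees it spelled out.
print(matches({"city": "New York", "age": 29, "interests": ["fitness"]}, targeting_spec))
```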
In another vivid demonstration of the availability of surveillance tools, Sahil Chinoy, a graphics editor for the Opinion Section, built a facial recognition system for less than $100. There are many ways this technology can be abused. San Francisco has banned the use of facial recognition technology by its police and other agencies, but other cities, including New York, haven’t taken any steps to impede its use.
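It is worth seeing how few moving parts such a system needs. The sketch below is a minimal illustration, not a description of the Times project’s actual tools (which are reported only by their cost); it assumes the open-source face_recognition Python library, and the file names are placeholders.

```python
# A minimal sketch of face matching with the open-source face_recognition
# library (an illustrative assumption, not the Times project's actual stack).
import face_recognition

# Load a reference photo of one person and a photo of a crowd (placeholder names).
reference = face_recognition.load_image_file("reference_photo.jpg")
crowd = face_recognition.load_image_file("crowd_photo.jpg")

# Encode each detected face as a 128-dimensional vector.
reference_encodings = face_recognition.face_encodings(reference)
crowd_encodings = face_recognition.face_encodings(crowd)

if reference_encodings:
    for i, candidate in enumerate(crowd_encodings):
        # compare_faces treats two encodings closer than a default tolerance
        # as the same person, and returns one boolean per known face.
        match = face_recognition.compare_faces([reference_encodings[0]], candidate)[0]
        if match:
            print(f"Face {i} in the crowd photo appears to match the reference.")
```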
We Don’t Know Enough About What Happens to Our Data
Most people who take advantage of the services of companies like Google and Facebook are aware that the companies store and use their personal data. It might seem as if the trade-offs are clear and worthwhile — I give up my location data in exchange for access to Google Maps, for example — but the reality is darker and murkier.
The United States is becoming a surveillance state, perhaps on par with China. But unlike in China, where mass surveillance is a government endeavor, the monitoring in the United States is done in large part by private corporations — and we don’t know enough about those practices. Facebook and Google, for example, have made a killing monetizing the personal data of their users. (Google keeps a record of nearly everything users buy online.) But it’s not clear where most personal data goes and what such companies decide to do with it. Is it being used merely to improve a company’s services? Or is it being used by foreign governments or political consultants to interfere with democratic elections?
The uncertainty doesn’t end there. Marketing companies obtain our location data using the Bluetooth technology on our phones; cars now generate large amounts of data about the behavior of their drivers; and data-inference technology allows companies to glean information about us that we have never disclosed. In none of these cases do we fully understand how that data is used.
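To make the first of these concrete, here is a minimal sketch of passive Bluetooth collection. It assumes the Python bleak library for illustration; the marketing SDKs embedded in phone apps rely on each platform’s native Bluetooth APIs, but the principle, listening for the identifiers that devices and retail beacons broadcast, is the same.

```python
# A minimal sketch of passive Bluetooth collection, using the bleak library
# (an illustrative assumption; in-app marketing SDKs use native platform APIs).
import asyncio
from bleak import BleakScanner

async def scan_once() -> None:
    # Listen for Bluetooth Low Energy advertisements for a few seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        # Each advertising device exposes a hardware address and often a name.
        # Retail beacons broadcast stable identifiers this way, which is how an
        # app carrying a marketing SDK can tell which store a phone is inside.
        print(device.address, device.name)

asyncio.run(scan_once())
```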
Privacy Violations Affect Us in Tangible Ways
Privacy is essential to our well-being and moral development. It isn’t an abstract notion. Privacy affects our ability to get life insurance. Most of us are monitored in retail stores, our location minutely tracked as we shop. We are monitored in airports, sporting arenas and in so-called smart cities. We are even monitored in our workplaces. Our children are monitored in their schools.
Privacy violations affect everyone, but they often disproportionately affect immigrants, people of color, women, people who live in poverty, L.G.B.T.Q. people and children. Domestic abusers use surveillance tools to spy on their victims. The Department of Homeland Security uses social media history to make immigration decisions. Children in schools are subjected to extensive and intrusive monitoring of their behavior. Many of these technologies are prone to error, and some of those errors can have lethal consequences.
Sacrificing Your Privacy Might Sometimes Be Worthwhile
When people give their personal data to corporations or governments, it is not always a bad decision; sometimes they may get something of greater value in return. In China, some citizens say that facial recognition cameras make them safer. The police commissioner in New York City made a similar argument for the benefits of facial recognition technology.
Google’s chief executive, Sundar Pichai, argues that Google not only provides valuable services in exchange for users’ data, but it also protects that data. Others have argued that the big tech companies that use personal data also create jobs and promote innovation; that more aggressive, European-style privacy regulations hamper innovation and free speech; and that we might benefit from giving up even more of our privacy.
What Can Be Done?
We have also learned a fair amount about what we can do — both individually and collectively — to better protect our privacy.
Individuals have many options: they can stop sharing the most important moments of their lives (and the lives of their children) on social media, and they can vote with their wallets, using their buying power strategically.
But that might not be enough. Facebook, for example, doesn’t believe that people have a right to privacy. People can’t opt out of the surveillance economy, and they can’t always say no to unreasonable searches or remain anonymous, no matter how hard they try.
If tech companies are to be held accountable, they must face real consequences for their privacy violations, and federal law must set out clear and understandable privacy rules. It won’t be easy, and there’s a chance regulations will be ineffective. Companies like Facebook have more power than we realize.
The more we learn about privacy, the more there is to understand; every answer raises further questions. But we must keep investigating, and showing that we care, because if we act as if we don’t have a right to privacy, we run the risk of losing it.
Susan Fowler is an editor in the Op-Ed section of The New York Times and the author of the forthcoming memoir “Whistleblower.”
Like other media companies, The Times collects data on its visitors when they read stories like this one. For details, please see our privacy policy and our publisher’s description of The Times’s practices and continued steps to increase transparency and protections.
Follow @privacyproject on Twitter and The New York Times Opinion Section on Facebook and Instagram.