In 1982, when I began my career as a technology investor, privacy was not a concern. The denizens of Silicon Valley shared a goal: to improve the lives of the people who used technology. An idealised form of capitalism reigned supreme. IBM had just shipped its PC, and the personal computer was about to take off. Optimism pervaded the nascent industry. Steve Jobs spoke of computers as “bicycles for the mind”, expanding human capabilities with little or no downside.
Over the course of a three-decade-plus career, I have advised countless companies and entrepreneurs, from those early days of personal computers to the current generation of social networks. I was an adviser to Mark Zuckerberg at Facebook from 2006 to 2009.
Privacy did not become a problem until the widespread deployment of networks in the 1990s, and even then the issues were comparatively small. Up until around 2000, the technology industry never had enough processing power, memory, storage or network bandwidth to build products that could be deeply integrated with our lives. Every product required compromises, every design depended on the experience and artistry of its creators.
The bursting of the internet bubble in March 2000 set in motion forces that would alter the culture and priorities of Silicon Valley. The venture capital industry retreated, and in its place arose a new group of investors, known as angels, who typically invest their own personal fortunes in start-ups. Foremost among them was the so-called PayPal Mafia, who transformed Silicon Valley with two brilliant insights.
First, they saw that the internet would evolve from a web of pages to a web of people, called Web 2.0, which led them to start or bankroll companies like Plaxo, LinkedIn and Facebook.
Second, they anticipated that the limits imposed by technology would soon evaporate, enabling the first global tech platforms.
These leaders of Web 2.0 were young entrepreneurs with a different value system, who left behind the hippie libertarianism of Jobs for an aggressive version more in line with Ayn Rand. The new danger, however, was pioneered by the last great Web 1.0 company, Google, which realised that much more data would lead to much better behavioural predictions. Google embraced surveillance and invented a market for those predictions.
They created Gmail, an email product that connected identity to purchase intent, but also shredded traditional notions of privacy. Machine reading of Gmail messages enabled Google to gather valuable insights about users’ current and future behaviour.
Google Maps gathered user location and movements. Soon thereafter, Google sent out a fleet of cars to photograph every building on every street, a product called Street View, and took pictures from satellites for a product they called Google Earth. Other products enabled Google to track users as they made their way around the web. They converted human experience into data and claimed ownership of it.
Data from third parties like banks, credit card processors, cellular carriers and online apps — along with data from web tracking, and data from surveillance products like Google Assistant, Street View and Sidewalk Labs — became part of user profiles. Our digital avatars are used to predict our behaviour, a valuable commodity that is then sold to the highest bidder.
Having started Facebook with relatively strong privacy controls, Zuckerberg adopted Google’s monetisation strategy, which required systematic privacy invasions. In 2010, Zuckerberg declared that Facebook users no longer had an expectation of privacy.
Because of the nature of Facebook’s platform, it was able to capture emotional signals of users that were not available to Google, and in 2014 it followed Google’s lead by incorporating data from users’ browsing history and other sources to make its behavioural predictions more accurate and valuable.
Platforms are under no obligation to protect user privacy. They are free to directly monetise the information they gather by selling it to the highest bidder. For example, platforms that track user mouse movements over time could be the first to notice symptoms of a neurological disorder like Parkinson’s disease — and this information could be sold to an insurance company. (And that company might then raise rates or deny coverage to a customer before he is even aware of his symptoms.)
For consumers, the time has come to say “no more”. We need to reclaim our privacy, our freedom to make choices without fear. Our data is out there, but we have the political power to prevent inappropriate uses.
Why is it legal for service providers to comb our messages and documents for economically valuable data? Why is it legal for third parties to trade in our most private information, including credit card transactions, location and health data, and browsing history? Why is it legal to gather any data at all about minors? Why is it legal to trade predictions of our behaviour?
Corporate claims to our data are not legitimate and we must fight back.
To my friends in the tech industry: Please explain why we should allow the status quo to continue, given the increasing evidence of harm.
To my friends in government: The time has come to ban third-party exploitation of consumer data and to use antitrust law to promote competing business models. This is not a matter of Right or Left; it is a matter of right and wrong.
Roger McNamee is an American businessman, investor, venture capitalist and musician.