Imagine being talked about behind your back. Now picture that conversation taking place covertly in your own sitting room, with you unable to hear it.

That is the modus operandi of SilverPush, an Indian start-up that embeds inaudible sounds in television advertisements. As the advert plays, a high-frequency signal is emitted that can be picked up by a mobile phone or other device running an app that contains SilverPush software. This “pairing” — currently targeted at Indian consumers — also identifies users’ other nearby devices and allows the company to monitor what they do across all of them. All without consumers hearing a thing.

This “cross-device tracking technology”, being explored by other companies including Adobe, is an emblem of a new era with which all of us — governments, companies, charities and consumers — will have to contend.

Last month, the Royal Statistical Society hosted a conference at Windsor Castle to ponder the challenges of Big Data — an overused, underexplained term for both the flood of information churned out by our devices and the potential for this flood to be organised into revelatory and predictive rivers of knowledge.

The setting was apt: the ethics and governance surrounding the growing use of data are a right royal mess. Public discussion about how these vast quantities of information should be collected, stored, cross-referenced and exploited is urgently needed. There is excitement about how big data might revolutionise healthcare — during outbreaks of disease, for example, search data can be mined for the greater good. Today, however, public engagement largely amounts to public outcry when things go wrong.

The extent to which tech shapes our lives — the average British adult spends more than 20 hours a week online, according to a report by UK media regulator Ofcom — means our behaviour, habits, desires and aspirations can be revealed by our swipes and keystrokes.

This has made analysis of online behaviour a new Klondike. Personal data are like gold dust, and we surrender them every time we casually click “OK” to a website’s terms and conditions.

And here is our first problem: most of us click unthinkingly (the terms are usually impenetrable legalese, anyhow). It is thus questionable whether we have given informed consent to all the ways in which our personal data are subsequently used. To demonstrate this, a security company set up a public Wi-Fi hotspot in the City of London and inserted a “Herod clause” committing users to hand over their firstborn for eternity. Within a short period, several people unwittingly bartered away their offspring in return for a free connection.

Legal challenges aside, there is rarely independent scrutiny of what is a fair and reasonable relationship between an online company and its consumers. Facebook fell foul of this when it manipulated the news feeds of nearly 700,000 users for a psychology experiment. Users claimed they had been duped by the study, which found that those exposed to fewer positive news stories were more likely to write negative posts. The company retorted that consent had already been given. Approval last week of EU data protection rules permitting hefty fines for privacy breaches may prevent a repetition; consent will no longer be the elastic commodity it was.

A second challenge arises from the so-called internet of things, when devices bypass humans and talk directly to one another. So my depleted smart fridge could automatically email the supermarket requesting replenishment. But it could also mean my gossiping gadgets become a network of electronic spies that can paint a richly detailed picture of my prandial and other proclivities, raising privacy concerns. Indeed, at a robotics conference last month, technologists identified the ability of robots to collect data, especially in private homes, as the single biggest ethical issue in that field.

Alongside the new EU rules on data protection, we need something softer: a body of experts and laypeople that can bring knowledge, wisdom and judgment to this fast-moving field. There is already a Council for Big Data, Ethics and Society in the US, comprising lawyers, philosophers and anthropologists.

Europe should follow this example — because, as a stream of anecdotes at the Windsor conference revealed, companies and academics appear to be navigating this new data-rich world without a moral compass. In 2012 a Russian company created Girls Around Me, an app that pooled publicly available information to show the real-time locations and pictures of nearby women, without their consent; the app, a stalker’s dream, was withdrawn. High-tech rubbish bins in London’s Square Mile, which captured information from smartphones to track unwitting owners’ movements in order to target them with advertising, were ditched on grounds of creepiness.

Meanwhile, a scientist has created software that combs Twitter connections to infer a tweeter’s ethnicity and even religion, raising the question of whether public posts can legitimately be used to deduce private information. Do we, as one lawyer suggested, need laws against misuse of our online personae?

We already have wearable devices that, like Santa, see you when you’re sleeping and know when you’re awake. It is perfectly possible that a company will find a way of deducing — through sentiment analysis of social media postings, visits to charity websites, checks on your bank balance and fitness tracking — whether you’ve been bad or good.

This goes to show: just because big data makes something technically possible does not mean we should do it.

— Financial Times

Anjana Ahuja is a science commentator.