On December 2, 2015, 14 people were killed and 22 injured in a terrorist attack at the Inland Regional Center in San Bernardino, California. When the FBI Evidence Response Team later searched the house of the couple behind the attack, they found three phones. Two were smashed and damaged, with no data retrievable. One was untouched — an iPhone 5c. Over the next few months, the FBI tried to access all the data it could get from the phone. While the manufacturer — Apple Inc. — provided all the information it had from the account's backups on its servers, the FBI was not able to unlock the phone itself. The phone required a four-digit passcode, and after 10 wrong tries it would erase itself.
The FBI then asked Apple to design a system that would allow it to bypass the 10-try limit. After all, a four-digit passcode has only 10,000 possible combinations. If the limit were removed, the FBI could easily force its way into the phone by trying every option in turn.
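To see why the retry limit is the only real barrier, here is a minimal sketch of such a brute-force search. The `check_passcode` function stands in for the phone's lock screen; it and the secret code are hypothetical, purely for illustration:

```python
from itertools import product

def brute_force(check_passcode):
    """Try all 10**4 = 10,000 four-digit codes until one is accepted."""
    for digits in product("0123456789", repeat=4):
        candidate = "".join(digits)
        if check_passcode(candidate):
            return candidate
    return None  # no code accepted (should not happen for a 4-digit lock)

# Hypothetical stand-in for the lock screen, accepting one secret code.
secret = "7351"
found = brute_force(lambda code: code == secret)
print(found)  # prints "7351"
```

With at most 10,000 attempts, such a search completes almost instantly; it is the erase-after-10-failures rule that makes the passcode effective.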
Apple, however, declined to create such a program, understandably angering the US government but winning support from the UN, some US senators, and tech companies including Amazon, Google, AT&T, Twitter, LinkedIn and Microsoft.
In a detailed statement published on its website on February 16, the company explained why it would not comply with the FBI's request: “The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.”
The FBI went on to sue Apple. The case was later dismissed after the US Department of Justice and the FBI found an alternative way into the phone, but it reignited the debate over which is the greater priority — online privacy or public security.