Introduction to Data Privacy and Security on the Internet
Most people share data without thinking about it. They provide information to companies while purchasing merchandise, signing up for email lists, downloading apps and more, and they expect those enterprises to safeguard the details. Yet intrusions into government and private-sector systems have repeatedly exposed sensitive mission, business and personal information, and every day it seems that more systems are breached and more personal information is made available on the web or, worse, the dark web. Against this backdrop, it is easy to get lost in the details of cybersecurity and privacy, in the seemingly endless discussions about cyber attacks, system breaches, frameworks, requirements, controls, assessments, continuous monitoring and risk management, and to forget why security and personal privacy matter in an increasingly digital world. We are witnessing, and taking part in, the greatest information technology revolution in history as our society transitions from a largely paper-based world to a fully digital one.
We have built an incredibly complex information technology infrastructure consisting of billions of lines of code, hardware platforms built on integrated circuits, and millions of applications on every type of computing platform from smart watches to mainframes. Right in the middle of all that complexity, personal information is routinely processed, stored and transmitted through global networks of connected systems. From a security and privacy perspective, we are concerned not only about the confidentiality, integrity and availability of the data in systems embedded deep in the nation’s critical infrastructure, but also about our personal information.
Pressures in the personal data industry
We see three distinct pressures currently driving change in the personal data industry. All three are quickly becoming widespread and intertwined, causing seismic ripples across the sector.
The idea of “surveillance capitalism,” coined by Shoshana Zuboff, who describes it as “an economic system built on the secret extraction and manipulation of human data,” has become common coinage, capturing consumers’ increasing awareness that their data is bought, sold, and used without their consent, and their growing reluctance to put up with it. People are starting to vote with their thumbs: in the core North American market, both Facebook and Twitter are facing declines in their daily active users.
Federal lawmakers are moving to curtail the power of big tech. Meanwhile, in 2021 state legislatures proposed or passed at least 27 online privacy bills, regulating data markets and protecting personal digital rights. Lawmakers from California to China are implementing legislation that mirrors Europe’s GDPR, while the EU itself has turned its attention to regulating the use of AI. Where once companies were always ahead of regulators, now they struggle to keep up with compliance requirements across multiple jurisdictions.
At the same time, many governments continue to be tempted to make an exception for themselves and collect their citizens’ personal data, and in many cases their actions have the side effect of undermining security. Some examples: in July 2022, the Belgian government adopted a law to ban anonymous, encrypted communications (interpreted by some commentators as a measure against the messaging app Signal); in March 2023, the French government issued a law allowing automated identification and tracking of people in public spaces during the 2024 Olympics; the French government has also proposed allowing government agencies to hack their citizens’ connected devices; and some European politicians want to allow targeted advertising for political campaigns.
Even where laws exist, authorities have not been quick to apply them. It may take years before a complaint under the GDPR is handled: Meta (Facebook) was ultimately fined €1.2 billion under the GDPR, but it took the Irish DPC no less than ten years to impose the fine.
Last year, Apple’s upgrade to its iPhone operating system allowed users to shut down data harvesters’ ability to track them across their many apps. It was a refreshing change, providing customers with power and agency over their data. It also bit hard into companies that rely on cross-app tracking: the change cost the major social media sites an estimated $10 billion in lost revenue in the second half of 2021, and Facebook’s parent company, Meta, expects it to cost another $10 billion in 2022 alone. Apple has made privacy protection a market differentiator: device manufacturers and app developers now use privacy features to draw new users.
This is a remarkable confluence of forces, and they are converging towards a clear endpoint where individuals will soon exercise full control over their personal data. While consumers still seek the conveniences and benefits that flow from their data, they will be the ones to set the terms over what data they share and who they share it with. People want that protection, governments have their backs, and technology firms are already falling in line, with competition over data privacy now impacting financial bottom lines.
More information in the links below:
Designing for Transparency and Trust (hbr.org)
The future of data security and privacy
Our new rules of the data economy are fairly straightforward, all of them derived from the basic principle that personal data is an asset held by the people who generate it. But each rule entails the breaking of entrenched habits, routines and networks.
Rule 1: Trust over transactions.
This first rule is all about consent. Until now, companies have been gathering as much data as possible on their current and prospective customers’ preferences, habits, and identities, transaction by transaction — often without customers understanding what is happening. But with the shift towards customer control, data collected with meaningful consent will soon be the most valuable data of all, because that’s the only data companies will be permitted to act upon.
Firms need to consistently cultivate trust with customers, explaining in common-sense terms how their data is being used and what’s in it for them. Firms can follow the lead of recently-created data cooperatives, which provide users with different options for data sharing and secure each user’s consent for the option they are most comfortable with. The more robust and thorough your consent practices are, the more valuable your customer database becomes.
Rule 2: Insight over identity.
Firms need to re-think not only how they acquire data from their customers but from each other as well. Currently, companies routinely transfer large amounts of personal identifiable information (PII) through a complex web of data agreements, compromising both privacy and security. But today’s technology — particularly federated learning and trust networks — makes it possible to acquire insight from data without acquiring or transferring the data itself. The co-design of algorithms and data can facilitate the process of insight extraction by structuring each to better meet the needs of the other. As a result, rather than moving data around, the algorithms exchange non-identifying statistics instead.
For instance, many of Google’s apps, such as the Swipe typing facility, improve phone performance by analyzing customer data directly on users’ mobile phones to extract performance statistics, and then use those statistics to return performance updates to the phone while safely leaving the PII on the customer’s phone. Another firm, DSpark, uses a similar solution for extracting insights from highly-valued but deeply-sensitive personal mobility data. DSpark cleans, aggregates and anonymizes over one billion mobility data points every day. It then turns that data into insights on everything from demographics to shopping, which it markets to other companies — all while never selling or transferring the data itself.
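The pattern these examples share can be sketched in a few lines. The following is a hypothetical illustration of federated statistics, not Google’s or DSpark’s actual implementation: each device reduces its raw data to non-identifying summary statistics, and only those summaries (never the raw data) are transmitted and combined.

```python
def local_summary(timings_ms):
    """Runs on-device: reduce raw interaction data to non-identifying statistics."""
    return {"n": len(timings_ms), "total": sum(timings_ms)}

def aggregate(summaries):
    """Runs server-side: combine summaries; the raw timings never leave the devices."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return {"devices": len(summaries), "samples": n, "mean_ms": total / n}

# Raw data stays on each device; only the tiny summaries are transmitted.
device_a = [110.0, 95.0, 120.0]
device_b = [100.0, 105.0]
report = aggregate([local_summary(device_a), local_summary(device_b)])
```

Real federated learning systems add noise (differential privacy) and secure aggregation on top of this basic shape, precisely so that even the summaries cannot be traced back to one user.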
Rule 3: Flows over silos.
This last rule flows from the first two, and doubles as a new organizing principle for internal data teams. Once all your customer data has meaningful consent and you are acquiring insight without transferring data, CIOs and CDOs no longer need to work in silos, with one trying to keep data locked up while the other is trying to break it out. Instead, CIOs and CDOs can work together to facilitate the flow of insights, with a common objective of acquiring maximum insight from consented data for the customer’s benefit.
For instance, a bank’s mortgage unit can secure a customer’s consent to help the customer move into their new house by sharing the new address with service providers such as moving companies, utilities, and internet providers. The bank can then act as a middleman to secure personalized offers and services for customers, while also notifying providers of address changes and move-in dates. The end result is a data ecosystem that is trustworthy, secure, and under customer control. It adds value for customers by relieving them of a burdensome checklist of moving chores, and by delivering a customer experience that’s less about mortgage rates and more about welcoming them into their new home.
More information in the links below:
The General Data Protection Regulation (GDPR) is the toughest privacy and security law in the world. Though it was drafted and passed by the European Union (EU), it imposes obligations onto organizations anywhere, so long as they target or collect data related to people in the EU. The regulation was put into effect on May 25, 2018. The GDPR will levy harsh fines against those who violate its privacy and security standards, with penalties reaching into the tens of millions of euros.
With the GDPR, Europe is signaling its firm stance on data privacy and security at a time when more people are entrusting their personal data with cloud services and breaches are a daily occurrence. The regulation itself is large, far-reaching, and fairly light on specifics, making GDPR compliance a daunting prospect, particularly for small and medium-sized enterprises (SMEs).
Data protection principles
If you process data, you have to do so according to seven protection and accountability principles outlined in Article 5.1-2:
- Lawfulness, fairness and transparency — Processing must be lawful, fair, and transparent to the data subject.
- Purpose limitation — You must process data for the legitimate purposes specified explicitly to the data subject when you collected it.
- Data minimization — You should collect and process only as much data as absolutely necessary for the purposes specified.
- Accuracy — You must keep personal data accurate and up to date.
- Storage limitation — You may only store personally identifying data for as long as necessary for the specified purpose.
- Integrity and confidentiality — Processing must be done in such a way as to ensure appropriate security, integrity, and confidentiality (e.g. by using encryption).
- Accountability — The data controller is responsible for being able to demonstrate GDPR compliance with all of these principles.
The GDPR says data controllers have to be able to demonstrate they are GDPR compliant. And this isn’t something you can do after the fact: If you think you are compliant with the GDPR but can’t show how, then you’re not GDPR compliant. Among the ways you can do this:
- Designate data protection responsibilities to your team.
- Maintain detailed documentation of the data you’re collecting, how it’s used, where it’s stored, which employee is responsible for it, etc.
- Train your staff and implement technical and organizational security measures.
- Have Data Processing Agreement contracts in place with third parties you contract to process data for you.
- Appoint a Data Protection Officer (though not all organizations need one — more on that in this article).
You’re required to handle data securely by implementing “appropriate technical and organizational measures.”
Technical measures mean anything from requiring your employees to use two-factor authentication on accounts where personal data are stored to contracting with cloud providers that use end-to-end encryption.
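As a concrete illustration of one such technical measure, here is a minimal sketch of a time-based one-time password (TOTP) generator of the kind used for two-factor authentication, following RFC 6238. A production deployment would use a vetted library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    if t is None:
        t = time.time()
    counter = struct.pack(">Q", int(t) // step)          # time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

When checking a submitted code, compare it with `hmac.compare_digest` to avoid timing side channels. The RFC 6238 test vector (secret `12345678901234567890`, time 59) produces `94287082` with eight digits.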
If you have a data breach, you have 72 hours to notify the supervisory authority, and you must inform affected data subjects without undue delay if the breach poses a high risk to their rights, or face penalties. (The subject-notification requirement may be waived if you use technological safeguards, such as encryption, to render the data useless to an attacker.)
Data protection by design and by default
From now on, everything you do in your organization must, “by design and by default,” consider data protection. Practically speaking, this means you must consider the data protection principles in the design of any new product or activity. The GDPR covers this principle in Article 25.
Suppose, for example, you’re launching a new app for your company. You have to think about what personal data the app could possibly collect from users, then consider ways to minimize the amount of data and how you will secure it with the latest technology.
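A minimal sketch of this data-minimization step, with hypothetical field names: the app keeps an explicit allowlist of the fields the feature actually needs and drops everything else before anything is stored.

```python
# Only what the feature genuinely requires (hypothetical example fields).
ALLOWED_FIELDS = {"email", "locale"}

def minimize(raw_signup: dict) -> dict:
    """Drop every field not on the allowlist before it is ever persisted."""
    return {k: v for k, v in raw_signup.items() if k in ALLOWED_FIELDS}

form = {
    "email": "a@example.com",
    "locale": "de",
    "birthdate": "1990-01-01",   # not needed for the feature: never stored
    "device_id": "abc123",       # not needed for the feature: never stored
}
stored = minimize(form)
```

An allowlist is safer than a blocklist here: a new field added to the signup form is excluded by default rather than collected by accident, which is the "by default" half of Article 25.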
When you’re allowed to process data
Article 6 lists the instances in which it’s legal to process personal data. Don’t even think about touching somebody’s personal data — don’t collect it, don’t store it, don’t sell it to advertisers — unless you can justify it with one of the following:
- The data subject gave you specific, unambiguous consent to process the data. (e.g. They’ve opted in to your marketing email list.)
- Processing is necessary to execute or to prepare to enter into a contract to which the data subject is a party. (e.g. You need to do a background check before leasing property to a prospective tenant.)
- You need to process it to comply with a legal obligation of yours. (e.g. You receive an order from the court in your jurisdiction.)
- You need to process the data to save somebody’s life. (e.g. Well, you’ll probably know when this one applies.)
- Processing is necessary to perform a task in the public interest or to carry out some official function. (e.g. You’re a private garbage collection company.)
- You have a legitimate interest to process someone’s personal data. This is the most flexible lawful basis, though the “fundamental rights and freedoms of the data subject” always override your interests, especially if it’s a child’s data. (It’s difficult to give an example here because there are a variety of factors you’ll need to consider for your case. The UK Information Commissioner’s Office provides helpful guidance here.)
Once you’ve determined the lawful basis for your data processing, you need to document this basis and notify the data subject (transparency!). And if you decide later to change your justification, you need to have a good reason, document this reason, and notify the data subject.
There are strict new rules about what constitutes consent from a data subject to process their information.
- Consent must be “freely given, specific, informed and unambiguous.”
- Requests for consent must be “clearly distinguishable from the other matters” and presented in “clear and plain language.”
- Data subjects can withdraw previously given consent whenever they want, and you have to honor their decision. You can’t simply change the legal basis of the processing to one of the other justifications.
- Children below the age of consent (16 by default under the GDPR, though member states may lower it to as low as 13) can only give consent with permission from a parent or guardian.
- You need to keep documentary evidence of consent.
Data Protection Officers
Contrary to popular belief, not every data controller or processor needs to appoint a Data Protection Officer (DPO). There are three conditions under which you are required to appoint a DPO:
- You are a public authority other than a court acting in a judicial capacity.
- Your core activities require you to monitor people systematically and regularly on a large scale. (e.g. You’re Google.)
- Your core activities are large-scale processing of special categories of data listed under Article 9 of the GDPR or data relating to criminal convictions and offenses mentioned in Article 10. (e.g. You’re a medical office.)
You could also choose to designate a DPO even if you aren’t required to. There are benefits to having someone in this role. Their basic tasks involve understanding the GDPR and how it applies to the organization, advising people in the organization about their responsibilities, conducting data protection trainings, conducting audits and monitoring GDPR compliance, and serving as a liaison with regulators.
We go in depth about the DPO role in another article.
People’s privacy rights
You are a data controller and/or a data processor. But as a person who uses the Internet, you’re also a data subject. The GDPR recognizes a litany of new privacy rights for data subjects, which aim to give individuals more control over the data they loan to organizations. As an organization, it’s important to understand these rights to ensure you are GDPR compliant.
Below is a rundown of data subjects’ privacy rights:
- The right to be informed
- The right of access
- The right to rectification
- The right to erasure
- The right to restrict processing
- The right to data portability
- The right to object
- Rights in relation to automated decision making and profiling.
The European Commission put forward its EU Data Protection Reform in January 2012 to make Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU – and regardless of where their data is processed.
The regulation is an essential step to strengthen citizens’ fundamental rights in the digital age and facilitate business by simplifying rules for companies in the digital single market. A single law will also do away with the current fragmentation and costly administrative burdens, leading to savings for businesses of around €2.3 billion a year.
The directive for the police and criminal justice sector protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.
On 15 December 2015, the European Parliament, the Council and the Commission reached agreement on the new data protection rules, establishing a modern and harmonised data protection framework across the EU. The European Parliament’s Civil Liberties committee and the Permanent Representatives Committee (Coreper) of the Council then approved the agreements with very large majorities. The agreements were also welcomed by the European Council of 17-18 December as a major step forward in the implementation of the Digital single market strategy.
On 8 April 2016 the Council adopted the regulation and the directive. On 14 April 2016 they were adopted by the European Parliament. On 4 May 2016, the official texts were published in the EU Official Journal in all the official languages. The regulation came into force on 24 May 2016 and will apply from 25 May 2018. The directive entered into force on 5 May 2016 and EU countries have to transpose it into their national law by 6 May 2018.
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.
Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data
Data protection in the EU
The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.
Know your rights
Everyone has the right to
- the protection of personal data concerning him or her
- access to data which has been collected concerning him or her, and the right to have it rectified
This right is enshrined in article 8 of the Charter of Fundamental Rights.
What to do if your rights have been breached
The authorities of EU countries are bound to comply with the Charter of Fundamental Rights only when implementing EU law; otherwise, fundamental rights are protected by your country’s constitution.
If your rights have been breached, you can address a complaint to the relevant national authority, government, national courts or a specialised human rights body.
Data privacy in Telecom sector
In this digital age, telecommunication has become an integral part of our lives, and people worry about their data privacy every time they use an app or smartphone. Cyberattacks can happen to anyone using technology, including telecommunication companies, and as technology advances, cybercriminals develop new and sophisticated methods to breach data security and steal sensitive information. This is why telecommunication data security is crucial: it protects personal data, confidential business data, and critical infrastructure from cyberattacks. But how can we prevent future cyberattacks? Proper implementation of data privacy measures is the key. Let’s look at the importance of telecommunication data security, then figure out how data privacy can be used to eliminate and prevent future cyberattacks.
Data Security in Telecoms
Telecommunication data security is critical because it protects sensitive information such as personal data, financial information, and confidential business data. This information is susceptible to cyberattacks, which can have detrimental effects on individuals, businesses, and governments. For instance, cybercriminals can steal personal data, such as names, addresses, and credit card information, and use it for identity theft or financial fraud. They can also steal confidential business data, such as intellectual property, financial records, and trade secrets, which can harm a company’s reputation and lead to financial losses. In addition, cyberattacks can disrupt telecommunication services, causing widespread outages that affect critical infrastructure such as emergency services and transportation systems.
The Need for Data Privacy
Cyberattacks are evolving and becoming more sophisticated, and they can be launched from anywhere in the world. As technology advances, so do the methods used by cybercriminals to breach data security. For instance, they can use social engineering techniques such as phishing and baiting to trick people into revealing sensitive information or installing malware on their devices. They can also exploit vulnerabilities in software and hardware systems to gain access to networks and steal data. In addition, they can use ransomware to encrypt data and demand payment in exchange for its release.
It is essential to implement data privacy measures in telecommunication networks to prevent future cyberattacks. Data privacy refers to the protection of personal data and the right of individuals to control how their data is collected, used, and shared. To protect their networks and data, telecommunication companies can implement data privacy measures such as encryption, multi-factor authentication, firewalls, and intrusion detection systems. In addition, they can use data masking and tokenization to hide sensitive information and ensure that it is only accessible to authorized users.
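To illustrate the tokenization technique mentioned above (a hypothetical sketch, not any particular vendor’s product): sensitive values are swapped for random tokens before data flows through downstream systems, and the token-to-value mapping lives only in an access-controlled vault.

```python
import secrets

class TokenVault:
    """Replace PII with random tokens; the mapping stays in a guarded store."""

    def __init__(self):
        self._vault = {}  # token -> original value (access-controlled in practice)

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # random, carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]               # only authorized callers reach this

vault = TokenVault()
record = {
    "name": vault.tokenize("Alice Example"),
    "msisdn": vault.tokenize("+32470000000"),
}
# `record` can now move through billing and analytics systems without exposing PII.
```

Unlike encryption, a random token cannot be reversed without the vault, so a leak of the tokenized dataset alone reveals nothing; the trade-off is that the vault itself becomes the crown jewel to protect.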
Telecommunication companies can also provide regular training to their employees to prevent social engineering attacks and ensure that they follow best practices for data privacy. In addition, telecom companies should also be transparent about their data security practices and policies. They should provide users with clear information on how their data is collected, used, and protected. In the event of a data breach, telecom companies should have a clear plan to mitigate the damage and promptly notify affected users.
Governments in Telecoms Data Security
Governments also play a crucial role in telecommunication data security. They can enact laws and regulations that require telecommunication companies to implement data privacy measures and to comply with best practices for data security. For instance, the European Union’s General Data Protection Regulation (GDPR) requires companies to obtain explicit consent from individuals before collecting their data and to implement measures such as encryption and data anonymization to protect it.
In addition, the United States has several laws, such as the California Consumer Privacy Act (CCPA) and the Health Insurance Portability and Accountability Act (HIPAA), that require companies to protect personal data and health information.
Moreover, governments can work with telecommunication companies to develop cybersecurity strategies that address emerging threats and protect critical infrastructure. For instance, the United States Cybersecurity and Infrastructure Security Agency (CISA) collaborates with telecommunication companies and other stakeholders to prevent cyberattacks on critical infrastructure such as energy, transportation, and emergency services.
Geolocation tracking and privacy
Location data tracking is ubiquitous. The tension between privacy and innovation in this space is exacerbated by rapid developments in tracking technologies and data analytics methodologies, as well as the sheer volume of available consumer data. This article focuses on the privacy risks associated with these developments. To the extent that current and proposed privacy law protects location data, such protection is limited to location data that is identified (or in some cases identifiable) to an individual. Requirements generally apply only to the initial data collector; however, recent media accounts and enforcement actions describe a robust secondary market in which (1) identified location data is regularly acquired and used by third parties with whom the individual has no direct relationship, and (2) de-identified or anonymized location data is regularly combined with identified personal data and used by third parties with whom the individual has no direct relationship to compile comprehensive profiles of the individual. These secondary-market practices are not currently addressed by U.S. law. This article proposes that the risks posed by location tracking and profiling are sufficient to warrant consideration of regulatory intervention at the following points: collection from the individual; use by the original data collector; transfer to and among secondary-market participants; identification of anonymized data to a specific individual; profiling of the individual; and decision-making based on profiling.
I. LOCATION DATA TRACKING GENERALLY
Consumer location is tracked regularly by multiple systems and devices. Many mobile applications (apps) continuously track user location; Facebook, Google, Apple, Amazon, Microsoft, and Twitter all track and use location data.
Individuals often opt into location tracking through personal devices and their apps, such as fitness monitors, smartphones, and GPS trackers, for the purposes of allowing the app to provide them with the underlying service, such as measuring the distance run, providing the local weather forecast, and locating and obtaining directions to nearby restaurants.
Business use cases for identified individual location data include providing consumer goods or services (such as roadside assistance) and marketing and targeted advertising. Aggregated location data (i.e., data that is identifiable by distinct data location points but not by individual) can help urban planners alleviate traffic problems, health officials identify patterns of epidemics, and governmental agencies monitor air quality. Commercial uses of aggregated location data include inventory and fleet control, retail location planning, and geofencing. Specified data points may be aggregated over a defined time period and then presented as an overlay to a geographic map. For example, a trucking company can view in real time the locations of its trucks and the demand for trucking services to more efficiently assign routes. Alternatively, the trucking company can geofence its trucks, which means that if a truck goes out of a designated geographical zone, the company will be alerted in real time. Location data is critical to certain types of commercial and public data analytics.
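A geofence of the kind described can be implemented as a simple distance check against the zone’s centre. The sketch below uses the haversine great-circle distance; the coordinates and the 5 km radius are hypothetical.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def outside_geofence(position, centre, radius_km):
    """Alert condition: the tracked vehicle has left its circular zone."""
    return haversine_km(position[0], position[1], centre[0], centre[1]) > radius_km

depot = (50.85, 4.35)      # hypothetical zone centre
nearby = (50.86, 4.36)     # roughly 1.3 km away: inside a 5 km fence
far_away = (51.22, 4.40)   # roughly 41 km away: triggers the real-time alert
```

Production fleet systems typically use polygonal fences and map-projected coordinates rather than a plain radius, but the alerting logic is the same comparison.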
Recent journalistic investigations have revealed that location data is tracked by a wider variety of parties for a greater number of purposes in ways that exceed our understanding or control. The sheer volume of location data tracked, disclosed, and repurposed is tremendous. The widespread availability of location tracking technologies compounds this issue. Furthermore, the use of multiple systems to track location, and the use of data analytics to combine location data with other personal data, enables both the identification of anonymized data and the compilation of comprehensive and precise profiles of tracked individuals.
Are we at a point yet where place itself acts as a consumer identifier? Unique location tracking patterns can be used to identify an individual and to develop a profile of that individual. A person’s lifestyle, priorities, professional and personal endeavors, and crimes and peccadilloes can all be inferred from continuous location tracking.
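To see why, consider a toy sketch of how anchor points can be inferred from a supposedly anonymous trace: the grid cell a device occupies most often at night is probably “home,” and the modal working-hours cell is probably “work.” The data, hour thresholds, and cell names below are invented for illustration.

```python
from collections import Counter

def infer_anchor_points(pings):
    """Toy profiling of an anonymized trace.

    `pings` is a list of (hour_of_day, grid_cell) tuples. The modal night-time
    cell is a likely home location; the modal working-hours cell a likely
    workplace. Home + work alone often suffice to re-identify a person.
    """
    night = Counter(cell for hour, cell in pings if hour < 6 or hour >= 22)
    day = Counter(cell for hour, cell in pings if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

trace = [(23, "cell_A"), (2, "cell_A"), (5, "cell_A"),
         (10, "cell_B"), (14, "cell_B"), (16, "cell_B"), (12, "cell_C")]
home, work = infer_anchor_points(trace)
```

This is why removing names from a location dataset does little: the pattern itself is the identifier.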
The power of place: A person cannot be in more than one place at the same time.
A. Justification for the Initial Collection of Location Data
Location data is regularly collected by devices, apps, and other online services.
Generally, the basic app model is as follows. An individual downloads a map app in order to get directions. As part of the map app download, the individual agrees that his or her location will be tracked in order to provide personalized directions via the app. The app must know where the individual’s starting point is in order to give accurate directions to the individual’s destination. The individual’s smart phone hardware and the app software use GPS and other tracking technologies to determine the individual’s geographical location: the more accurate and recent the location data, the more accurate the app service.
The wireless carrier transmits this real-time location data to a third-party company (the aggregator), subject to a nondisclosure agreement. The aggregator transmits the location data to the app so that the app can generate the directions to provide to the individual. The location data is tracked and disclosed in order to provide the requested transaction (i.e., directions) to the individual. The sharing of information with third parties is limited to these purposes, and the parties are bound by written nondisclosure agreements not to otherwise use or disclose the individual’s location.
This can be referred to as the initial transaction between the individual and the data collector. The justification for this sharing is that (1) it is necessary to (a) honor the customer’s request for app services and (b) ensure consistency of app usage quality across carriers and devices, and (2) the customer has consented to location tracking as part of his or her enrollment in the app service.
II. THE PANDORA’S BOX OF LOCATION DATA: THE SECONDARY LOCATION-DATA MARKET
A. Monetization of Location Data in Secondary Market
The purpose of the initial collection of location data is to enable the data collector to provide a service to the individual; the secondary market purpose is to use that same location data to make conclusions and predictions about the tracked individual. The secondary location data market is used to monetize location data for unrelated purposes, such as enabling a subsequent buyer to compile a profile of the individual and sell access to the individual (whether the individual is identified by name or as part of a data category, like “engaged female retail shopper”). Location data analytics drive a variety of business strategies:
Business data usually contains geographical or location data which mostly goes unused. This data can be as broad as city and country or as specific as GPS location. When this data is placed within the context of big data dashboards and data science models, it allows companies to discover new trends and insights.
The secondary consumer-data market is huge. IBM claims that 90 percent of all consumer data currently in circulation was created in the last two years. This industry is expected to generate $350 million annually by 2020. Location data is a big part of that business. The New York Times reported that:
At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States—about half those in use last year. The database reviewed by The Times—a sample of information gathered in 2017 and held by one company—reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.
These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year.
Location tracking data analytics support targeted advertising and marketing for retail and other business purposes. This profiling is intended to individualize the customer experience as much as possible to encourage purchases and loyalty:
[T]he scale of data collected by early adopters of [location tracking] technology is staggering. Location analytics firm RetailNext currently tracks more than 500 million shoppers per year by collecting data from more than 65,000 sensors installed in thousands of retail stores. A single customer visit alone can result in over 10,000 unique data points, not including the data gathered at the point of sale.
In addition, the potential combinations and re-use of location data is tremendous:
[B]y combining location data with existing customer data such as preferences, past purchases, and online behavioral data, companies gain a more complete understanding of customer needs, wants and behaviors than is achievable with online data only.
In 2014, Shoshana Zuboff coined the term “surveillance capitalism” to describe how consumer data has become a business unto itself. More recently, Zuboff explained how location data fits in this model:
[There] has been a learning curve for surveillance capitalists, driven by competition over prediction products. First they learned that the more surplus the better the prediction, which led to economies of scale in supply efforts. Then they learned that the more varied the surplus the higher its predictive value. This new drive toward economies of scope sent them from the desktop to mobile, out into the world: your drive, run, shopping, search for a parking space, your blood and face, and always … location, location, location.
Data is generally sold on the secondary market as identified data (which is directly associated with a distinct individual) or as de-identified or anonymous data (which is aggregated and not associated with a distinct individual).
B. Disclosure of Identified Location Data
1. Disclosures by Aggregators
Under the app model, the aggregators receive the individual’s location in order to send it to the app owner for purposes of furnishing the app service. In practice, however, distribution of this data is much more widespread. Journalistic investigations reveal that aggregators routinely sell location data to parties that are not intermediaries to the initial data transaction, disseminating location data beyond its intended purpose and giving unrelated third parties access to the individual’s location data.
One such aggregator, LocationSmart, regularly sold continuous cell tower location tracking to Securus Technologies, a prison contractor that provides and monitors calls to inmates. As an ancillary service, Securus “offers [a] location-finding service as an additional feature for law enforcement and corrections officials, [as] part of an effort to entice customers in a lucrative but competitive industry.” This service was used by a variety of law enforcement officials for a wide variety of purposes, including search-and-rescue operations, thwarting prison escapes and smuggling rings, and closing cases.
The relationship between Securus and LocationSmart impacted almost all U.S. cell phone users, was unknown to them, and could not be opted out of:
So how was Securus getting all that data on the locations of mobile-phone users across the country? We learned more last week, when ZDNet confirmed that one key intermediary was a firm called LocationSmart. The big U.S. wireless carriers—AT&T, Verizon, Sprint, and T-Mobile—were all working with LocationSmart, sending their users’ location data to the firm so that it could triangulate their whereabouts more precisely using multiple providers’ cell towers. It seems no one can opt out of this form of tracking, because the carriers rely on it to provide their service.
Another Motherboard investigation showed that wireless carriers also routinely sell assisted or augmented global positioning system (aGPS) location data. aGPS data is more precise location data that is collected for use with enhanced 9-1-1 services to allow first responders to pinpoint an individual’s location with greater accuracy. For example, a cellular call made to the 9-1-1 emergency service that relies solely on GPS satellites might indicate the caller’s location only within a given area, such as a building, and it might take several minutes to determine that location. aGPS relies on other external data and systems to provide a faster, more precise location, like a floor within a building.
Federal law expressly prohibits the sale of aGPS data. The Federal Communications Commission issued an order in 2017 providing that data included in the National Emergency Address Database, which is collected using Wi-Fi and Bluetooth to locate 9-1-1 callers within a building, may not be used for any other purpose. In addition, the Federal Trade Commission could enforce section 5 of the Federal Trade Commission Act, which prohibits deceptive and unfair trade practices, against carriers whose privacy policies were inconsistent with this practice.
2. Privacy Leaks and Security Breaches
In addition to intentional disclosures, LocationSmart exposed this real-time location data through a bug in its website, which enabled users to track anyone without credentials or authorization using a free demo and a single cell phone number:
Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.
“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao [a security researcher] said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”
Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving and found he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.
Further, the Securus database was the subject of a data hack that separately exposed personal data. A Motherboard reporter obtained data that had been hacked from Securus’s database:
“Location aggregators are—from the point of view of adversarial intelligence agencies—one of the juiciest hacking targets imaginable,” Thomas Rid, a professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat.
The data hack, which was attributed to a weak password reset feature, revealed personal data of thousands of law enforcement users and inmates.
This means that Securus, acting as an unregulated entity and outside of the scope of its nondisclosure agreements with the wireless carriers, was responsible for innumerable disclosures of identified location data.
Other privacy failures involving identified location data can result in exposure to threats of physical danger. A recent privacy failure by a family tracking app (React Apps’ “Family Locator”) exposed children’s identified location data for weeks; the very app that parents obtained to protect their children arguably put them at great risk:
Family tracking apps can be very helpful if you’re worried about your kids or spouse, but they can be nightmarish if that data falls into the wrong hands. Security researcher Sanyam Jain has revealed to TechCrunch that React Apps’ Family Locator left real-time location data (plus other sensitive personal info) for over 238,000 people exposed for weeks in an insecure database. It showed positions within a few feet, and even showed the names for the geofenced areas used to provide alerts. You could tell if parents left home or a child arrived at school, for instance.
3. Access by Unauthorized Third Parties
The same Motherboard reporter was able to identify the exact location of a smartphone using only the phone number and a $300 payment to a bounty hunter in an attenuated process that apparently happens regularly and in violation of the apps’ posted privacy policies and the parties’ written nondisclosure agreements. In the Motherboard scenario, a wireless carrier sold an individual’s location data to an aggregator, that sold it to a skip-tracing firm, that sold it to a bail-bond company, that sold it to an independent bounty hunter. The bounty hunter had no written agreement with anyone and no relationship with the wireless carrier or the individual customer, and neither did its source.
The article’s aftermath included revelations that all of the major wireless carriers sold location data to aggregators that ultimately sold the data to hundreds of bounty hunters. Multiple lawmakers sent the major carriers and aggregators letters requesting an explanation of these location data sharing practices.
The ensuing furor prompted the wireless carriers to commit to stop selling location data to aggregators. The Wall Street Journal reported that Verizon, Sprint, T-Mobile, and AT&T all committed to end agreements with downstream location aggregators, and Zumigo (the initial aggregator in the bounty hunter scandal) cut off access by the intermediary aggregator to whom it sold the location data.
4. Privacy and Security Risks
These investigations indicate that real-time location data that is identified to a particular individual is regularly monetized and sold to third parties in a manner that is arguably inconsistent with the individual’s consent, the apps’ stated privacy policies, the data collector’s third-party nondisclosure agreements, and applicable law.
In other words, location data identified to a specified individual is routinely collected and sold by a variety of parties for a variety of purposes unrelated to the original transaction that justified the initial location data collection. This results in a myriad of privacy and security risks to the individual. Consider a stalker who tracks his or her victim’s location either by signing up for a free Securus or similar trial or by paying a bounty hunter. The victim may be taking strict precautions to elude location tracking and would not even be aware of this risk. In addition, the more entities that possess the victim’s location data, the greater the likelihood of a privacy exposure or data breach.
C. Sales of De-identified or Anonymous Location Data
1. Sales by App Owners
Separately, apps that receive individual user location data from aggregators frequently sell location data to third-party buyers for their own commercial purposes. The data is provided in large sets that do not identify the specific individuals who are tracked. The purpose of the data set is to enable the buyer to identify patterns in location data. Such business use cases may involve allowing buyers to spot trends for investment or marketing purposes.
In this context, the justification for the sale and reuse is that the individual’s personally identifiable information (like phone number or name) is deleted from the data and replaced instead with a unique identifier.
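The replacement step can be sketched briefly. This is a hypothetical illustration of pseudonymization in general, not any particular app's implementation; the salt value and field names are invented:

```python
# Sketch of the de-identification step described above: the direct identifier
# (here, a phone number) is dropped and replaced with a unique identifier.

import hashlib

SALT = b"seller-secret-salt"  # hypothetical; held by the app/seller, not shared with buyers

def pseudonymize(phone_number: str) -> str:
    # A salted hash yields a stable unique ID without exposing the number itself.
    return hashlib.sha256(SALT + phone_number.encode()).hexdigest()[:16]

record = {"phone": "+1-555-0100", "lat": 40.7128, "lon": -74.0060, "ts": 1_700_000_000}
anonymized = {"uid": pseudonymize(record["phone"]),
              "lat": record["lat"], "lon": record["lon"], "ts": record["ts"]}
```

Note that because the identifier is stable, the same device maps to the same `uid` in every data set the seller releases, which is precisely what makes longitudinal tracking of an "anonymous" individual possible.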
The model is basically as follows. A map app organizes location data for a specified commercial neighborhood over a defined time period to show the number of people who walk through the neighborhood during the time period. This foot traffic may show times of day when foot traffic is greatest and areas in the neighborhood that may attract more or less foot traffic. This data may be sold to a retailer for purposes of deciding whether the neighborhood, or any particular part of it, would be suitable for establishing a brick-and-mortar location. The retailer purchases the data for research and investment purposes. Its interest is in the number and patterns of individuals who walk through the neighborhood.
For these purposes, the identity of the individual is not relevant to the data buyer and is not included in the data set. It is the traffic patterns or trends and not the individual’s identity that gives this data set value.
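The aggregate foot-traffic product described above amounts to simple bucketing of anonymized fixes by time. A minimal sketch, with invented data and a UTC hourly bucket as an assumed granularity:

```python
# Sketch of the foot-traffic analysis described above: anonymized fixes for
# one neighborhood are bucketed by hour to show when traffic peaks.

from collections import Counter
from datetime import datetime, timezone

# (uid, unix_timestamp) fixes observed inside the neighborhood geofence
fixes = [
    ("u1", 1_700_000_000), ("u2", 1_700_000_500),
    ("u3", 1_700_003_600), ("u1", 1_700_003_700),
]

def hourly_foot_traffic(fixes):
    counts = Counter()
    for uid, ts in fixes:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        counts[hour] += 1
    return dict(counts)

print(hourly_foot_traffic(fixes))  # → {22: 2, 23: 2}
```

A retailer buying this output sees only counts per hour, not individuals, which is the basis for treating the sale as involving anonymous data.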
2. Re-identification by Unknown Third Parties
Data sets may be used to identify the individual through other means, however.
To verify the authenticity of the data points that comprise the data set, and to enable the app/seller to track the unique location data of an individual, each individual is assigned a unique identifier, and that identifier can remain the same across data sets. Presumably, then, buyers could use the unique identifier to track the individual’s data points over time and combine them with other data to identify the individual subject.
Separately, using data analytics, location data can be combined with nonlocation data to ascertain an individual’s identity. For example, the retailer that buys the anonymous data set could note that a single data point or individual goes back and forth from a nearby residential address throughout the day. Matching the individual to his address enables identification of the individual.
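The residential-address heuristic just described can be stated as a few lines of analysis. This is a hypothetical sketch with invented coordinates; the "round to ~100 m cells" and "overnight hours" choices are assumptions, not a documented industry method:

```python
# Sketch of the reidentification heuristic above: infer a device's likely home
# by finding where it spends overnight hours, then match that cell against a
# known residential address.

from collections import Counter
from datetime import datetime, timezone

def likely_home(fixes, night_hours=range(0, 6)):
    # fixes: list of (lat, lon, unix_ts); coordinates coarsened to ~100 m cells
    cells = Counter()
    for lat, lon, ts in fixes:
        if datetime.fromtimestamp(ts, tz=timezone.utc).hour in night_hours:
            cells[(round(lat, 3), round(lon, 3))] += 1
    return cells.most_common(1)[0][0] if cells else None

night = int(datetime(2023, 11, 15, 2, 0, tzinfo=timezone.utc).timestamp())
day = int(datetime(2023, 11, 15, 14, 0, tzinfo=timezone.utc).timestamp())
fixes = [(40.7300, -73.9900, night), (40.7300, -73.9901, night + 600),
         (40.7500, -73.9800, day)]  # the daytime fix elsewhere is ignored
print(likely_home(fixes))  # → (40.73, -73.99)
```

Once the overnight cell is matched to a street address via public records, the "anonymous" unique identifier resolves to a named individual.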
A more sensational example of this is the use by law enforcement of DNA information combined with location data to identify suspects in cold cases.
The New York Times, with permission from a school teacher, was able to accurately associate anonymous location data with the individual teacher solely by reviewing four months of location data drawn from more than a million phones and combining it with knowledge of where she worked and lived. The report posits that:
[t]hose with access to the raw [anonymized] data—including employees or clients—could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.
In fact, location data alone may be used to identify consumers in large anonymized data sets.
In 2013, MIT and Belgian researchers “analyzed data on 1.5 million cellphone users in a small European country over a span of 15 months and found that just four points of reference, with fairly low spatial and temporal resolution, was enough to uniquely identify 95 percent of them.”
As technology has evolved and the use and dissemination of location data has proliferated, reidentification of individuals included in anonymized data sets has been greatly facilitated:
With an increasing number of service providers nowadays routinely collecting location traces of their users on unprecedented scales, there is a pronounced interest in the possibility of matching records and datasets based on spatial trajectories. Extending previous work on reidentifiability of spatial data and trajectory matching, we present the first large-scale analysis of user matchability in real mobility datasets on realistic scales, i.e. among two datasets that consist of several million people’s mobility traces, coming from a mobile network operator and transportation smart card usage. . . .We show that for individuals with typical activity in the transportation system (those making 3-4 trips per day on average), a matching algorithm based on the co-occurrence of their activities is expected to achieve a 16.8% success only after a one-week long observation of their mobility traces, and over 55% after four weeks. We show that the main determinant of matchability is the expected number of co-occurring records in the two datasets. Finally, we discuss different scenarios in terms of data collection frequency and give estimates of matchability over time. We show that with higher frequency data collection becoming more common, we can expect much higher success rates in even shorter intervals.
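The co-occurrence matching idea in the study quoted above can be sketched simply: a user in one dataset is matched to the candidate in the other dataset who shares the most (cell, time-bin) records. The data and names below are invented, and this is a toy version of the approach, not the researchers' actual algorithm:

```python
# Sketch of trajectory matching by co-occurrence: a mobile-network trajectory
# is matched against smart-card trajectories by counting shared records.

def cooccurrences(traj_a, traj_b):
    # trajectories are sets of (cell_id, time_bin) records
    return len(traj_a & traj_b)

def best_match(target_traj, candidates):
    scores = {uid: cooccurrences(target_traj, traj) for uid, traj in candidates.items()}
    return max(scores, key=scores.get)

mobile_user = {("cell-7", 8), ("cell-3", 9), ("cell-7", 18), ("cell-5", 12)}
smartcard_users = {
    "card-1": {("cell-7", 8), ("cell-3", 9), ("cell-7", 18)},  # 3 co-occurrences
    "card-2": {("cell-5", 12), ("cell-9", 20)},                # 1 co-occurrence
}
print(best_match(mobile_user, smartcard_users))  # → card-1
```

As the quoted study notes, the longer the observation window and the more frequent the data collection, the more co-occurring records accumulate and the higher the matching success rate.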
3. Privacy and Security Risks
As tracking technologies become further developed and more widely accessible and data analytics become more sophisticated, anonymous data points (particularly when tracked over time) can be used to facilitate identification of the individual.
Consider the private investigation of various retail robberies. If the retailer did not have a suspect’s name, its private investigator could identify possible suspects by:
purchasing from an aggregator anonymized cell phone location data for all individuals near each robbed location during the time of each robbery;
pinpointing unique IDs or data points for all phones present at some or all of the robberies;
requesting extended cell phone location data for the unique IDs or data points from the wireless carriers;
purchasing larger pools of anonymized data from an aggregator and reidentifying data points within a given area and timeframe; or
hiring a bounty hunter to track the numbers and locations of the phones tied to the unique IDs or data points.
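The second step in the list above is a plain set intersection. A minimal sketch with invented device IDs:

```python
# Sketch of the investigator's shortlisting step: intersect the device IDs
# seen near each robbery to find phones present at all of the scenes.

devices_per_robbery = [
    {"id-a", "id-b", "id-c"},         # devices near robbery 1
    {"id-a", "id-c", "id-d"},         # devices near robbery 2
    {"id-a", "id-c", "id-e", "id-f"}, # devices near robbery 3
]

present_at_all = set.intersection(*devices_per_robbery)
print(sorted(present_at_all))  # → ['id-a', 'id-c']
```

With the candidate pool narrowed to a handful of unique IDs, the remaining steps in the list resolve those IDs to named individuals.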
The City of Los Angeles passed rules requiring scooter companies to provide the per-trip location data of each scooter to city officials within 24 hours of the end of the trip. Although the rider’s identity is not disclosed to the city and the location data will be treated as confidential by the city, it will be accessible in aggregated form to various city agencies and accessible in per-trip form to law enforcement, subject to a warrant, and to third parties, in response to a subpoena. Given the sensitivity of location data and the ability to use location data itself to identify individuals, consumer advocates have framed this not as a matter between the scooter companies and the city but as a matter of governmental surveillance to be debated between individual citizens and the city:
“This data is incredibly, incredibly sensitive,” said Jeremy Gillula, the technology projects director for the Electronic Frontier Foundation, a San Francisco-based digital rights group.
The vast trove of information could reveal many personal details of regular riders — such as whom they’re dating and where they worship — and could be misused if it fell into the wrong hands, the nonprofit Center for Democracy and Technology told the city in a letter.
De-identified, real-time location data is regularly monetized and sold to third parties for a variety of purposes unrelated to the original transaction that justified the initial location data collection. Location tracking use cases include the following scenarios:
- location data point identified to a specific individual;
- location data point identifiable to a specific individual;
- location data point not identified to the individual;
- continuous location tracking identified to a specific individual;
- continuous location tracking identifiable to a specific individual;
- continuous location tracking not identified to the individual;
- development of a profile based on location tracking identified to a specific individual;
- development of a profile based on location tracking that is identifiable to a specific individual; and
- location data used to compile a profile of an unidentified individual.
As described above, the distinctions among these categories become less relevant in practice, and the risks posed by transfers of anonymized location data may be as great as those posed by sales of identified location data.
III. LOCATION TRACKING: PROFILING THE INDIVIDUAL
Precise tracking of an individual’s location over time can be used to discover information about the individual that may not be otherwise available (consider repeat trips to a bar, the home of a person not the individual’s spouse, or to an oncologist), which when combined with other data, can be used to develop a fairly comprehensive profile of the individual. Even anonymized data profiles can pose these risks to the individual due to the relative ease of reidentifying an individual, as described above.
A. Data Profiling and Decision-Making
Profiling is done for a variety of purposes; targeted advertising and marketing is the most well-known effort. For example, if an Apple customer is in geographical proximity to an Apple Store, his or her phone could provide ads for Apple TV. These ads may be more successful if the individual were located in a TV store near an Apple Store, or better yet, if the individual were located for several minutes in an Apple Store near the Apple TV demo.
Individual data profiling has become sophisticated and comprehensive, and location data is an integral part of profiling:
A profile is a combination of metrics, key performance indicators, scores, business rules, and analytic insights that combine to make up the tendencies, behaviors, and propensities of an individual entity (customer, device, partner, machine). The profile could include:
Key demographic data such as age, gender, education level, home location, marital status, income level, wealth level, make and model of car, age of car, age of children, gender of children, and other data. For a machine, it might include model type, physical location, manufacturer, manufacturer location, purchase date, last maintenance date, technician who performed the last maintenance, etc.
Key transactional metrics such as number of purchases, purchase amounts, returns, frequency of visits, recency of visits, payments, claims, calls, social posts, etc. For a machine, that might include miles and/or hours of usage, most recent usage time and date, type of usage, usage load, who operated the product, route of product usage (for something like a truck, car, airplane, or train)
Scores (combinations of multiple metrics) that measure customer satisfaction level, financial risk tolerance, retirement readiness, FICO, advocacy grade, likelihood to recommend (LTR), and other data. For a machine, that might include performance scores, reliability scores, availability scores, capacity utilization scores, and optimal performance ranges, among other things
Business rules inferred using association analysis; for example, if CUST_101 visits a certain Starbucks and a certain Walgreens, we can predict (with 90% confidence level) that there is an 85% likelihood that this customer will visit a certain Chipotle within 60 minutes
Group or network relationships (number, strength, direction, sequencing, and clustering of relationships) that capture interests, passions, associations and affiliations gained from using graphic analysis
Coefficients that predict certain outcomes or responses based upon certain independent variables found through regression analysis; for example, a machine’s likelihood to break down given a number of interrelated variables such as usage loads since last maintenance, the technician who performed the maintenance, the machine manufacturer, temperatures, humidity, elevation, traffic, idle time, etc.
Behavioral groupings of like or similar machines or people based upon usage transactions (purchases, returns, payments, web clicks, call detail records, credit card payments, claims, etc.) using clustering, K-nearest neighbor (KNN), and segmentation analysis
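The profile components quoted above can be summarized as a single data container. The field names below are illustrative only, not any vendor's actual schema:

```python
# Sketch of the profile structure described above, reduced to a data container.

from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    entity_id: str
    demographics: dict = field(default_factory=dict)         # age, home location, ...
    transaction_metrics: dict = field(default_factory=dict)  # purchases, visit recency, ...
    scores: dict = field(default_factory=dict)               # FICO, LTR, churn risk, ...
    business_rules: list = field(default_factory=list)       # inferred if/then rules
    relationships: list = field(default_factory=list)        # graph-derived affiliations
    segments: list = field(default_factory=list)             # clustering assignments

p = CustomerProfile("CUST_101")
p.scores["likelihood_to_recommend"] = 0.85
p.business_rules.append("visits Starbucks + Walgreens -> 85% visits Chipotle within 60 min")
```

Location data feeds nearly every field: home location in demographics, visit recency in transaction metrics, the inferred movement rules, and the behavioral segments.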
Location data analytics are used to make a variety of decisions that may impact the individual. One use case for data profiling is credit-risk analysis. Such data profiles may arguably be considered “consumer reports” governed by the federal Fair Credit Reporting Act (FCRA). As the lines have blurred between online decision making and targeted advertising, and prescreening and marketing (the former are protected by FCRA and the latter are not), it certainly appears as if credit availability depends, in part, on secondary market data that the consumer reporting agencies do not treat as “consumer reports” under FCRA.
Payment-card fraud management can also be enhanced by developing profiles of each cardholder. By combining device location data with transaction histories, fraud detection is more precise:
New technologies . . . merg[e] a broader range of financial data, mobile-phone data, and even social-networking data to better establish the likelihood it’s actually you behind the transactions racking up on your cards or mobile device. Nguyen says that Feedzai’s system can improve fraud detection rates from 47 percent to almost 80 percent. Chirag Bakshi, founder and CEO of Zumigo, a company in San Jose, California, that provides location-based mobile services, says his company’s data algorithms reduce fraud losses by at least 50 percent.
“When fraudsters steal your identity, what they can’t do is steal your behavior,” Nguyen says. That, in fact, has long been the principle behind credit card fraud alerts. But a conventional credit card company is relying on information from your past to guess whether each attempted transaction is genuine. Today’s new technologies tap into your mobile phone and its more up-to-date information to see if your current behavior matches your purchase.
“[We can use] a SIM card as a proxy for a person,” says Rodger Desai, CEO of Payfone, which works with banks, mobile operators, and fraud detection companies to assess the legitimacy of a given payment. Payfone builds a profile of a user and tracks more than 400 types of data to create what it calls a persistent identity. Change phone company? Noted. Someone steal your phone or clone it? The company will catch that, too. Even if you’ve canceled your cellular data plan, it has ways of flagging the activity of someone who then tries to use the phone’s Wi-Fi connection.
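The location-consistency idea underlying these fraud systems can be sketched in a few lines: flag a card transaction whose merchant location is far from the cardholder's current device location. The 50 km threshold and the coordinates below are illustrative assumptions, not any vendor's actual rule:

```python
# Sketch of location-based fraud screening: compare the device's last known
# position with the merchant's position and flag large discrepancies.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points, in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def suspicious(device, merchant, max_km=50):
    return haversine_km(*device, *merchant) > max_km

device = (40.7128, -74.0060)    # phone last pinged in New York
local = (40.7580, -73.9855)     # midtown merchant -> consistent
remote = (34.0522, -118.2437)   # Los Angeles merchant -> flag for review
print(suspicious(device, local), suspicious(device, remote))  # → False True
```

Real systems layer many more signals on top of this distance check, but the sketch shows why the device's real-time location is so valuable to fraud models.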
Data analytics for decreasing fraud are likely welcome to the individual. Once a “persistent identity” is created by profiling the individual’s location and related data, however, there are few limits on how that profile may be used or sold:
Mobile location data firms interviewed for this story stressed their dedication to encrypting data to prevent direct connections to individuals, yet there are no industry-wide accepted practices or U.S. government regulations preventing the use of such data in ways that weren’t originally intended. For instance, data reflecting drinking or drug use arguably could find its way into data models for targeting ads for health insurance plans, or even find its way into formulas used to calculate health or auto insurance rates or job eligibility.
B. Behavioral Influencing
Use of predictive modelling has been extended to influence behavior:
It works like this: Ads press teenagers on Friday nights to buy pimple cream, triggered by predictive analyses that show their social anxieties peaking as the weekend approaches. “Pokémon Go” players are herded to nearby bars, fast-food joints and shops that pay to play in its prediction markets, where “footfall” is the real-life equivalent of online clicks.
The intrusiveness of such profiles cannot be overstated. Facebook has shown advertisers:
how it has the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”, according to a leaked document based on research quietly conducted by the social network[, which] states that the company can monitor posts and photos in real time to determine when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless” and a “failure.”
Location data is key to this type of influencing:
The next step—”from flat, to fast, to deep, is psychic,” Friedman believes. “I now know your whole psychographic from your phone. I will just push you your groceries, push you the supplies you need, push you the information you need.”
The use of profiles for behavioral targeting is likely as limitless as the use of profiles for behavioral prediction:
Imagine a not-so-distant future where you’re just driving on the highway. Your car is sending real-time data about your performance behind the wheel to your insurance company. And in return, the insurance company is sending real-time driver behavior modification punishments back—real-time rate hikes, curfews, even engine lockdowns. Or, if you behave in the way they like, you get an instant rate discount.
In other words, the insurance company is shaping your behavior right then and there. Would you like that? What does it mean for our entire understanding of free will?
Once the individual’s “persistent identity” is created, however, its uses are not limited. Consider another Facebook scandal: Cambridge Analytica. Cambridge Analytica combined personal user data obtained from a Facebook app developer (in violation of its nondisclosure agreement) with data from other sources, including location data, to compile profiles of voters around the world for the purpose of influencing elections using propaganda and direct marketing. Up to 87 million Facebook users worldwide were profiled with the intent of waging “psychological warfare” against and targeting “influence operations” to these users. Cambridge Analytica’s parent company’s reach exceeded “100 election campaigns in over 30 countries spanning five continents.” Cambridge Analytica was a secondary market user of the location data collected from Facebook profiles and from external sources. The Facebook users had no idea that the voter profiles were being compiled or that their location data was being used to identify them for specific political campaigns for the purposes of influencing their votes.
C. Privacy and Security Risks
Use of location tracking data to create individual profiles is not addressed under current law and poses unique risks. The eventual data buyer that compiles the data profile or identifies an individual in relation to a profile may not be in privity with either the individual or the original data collector. Further, once a profile is created for a specific purpose, there are few limits on using the profile for other purposes:
The fact is that location data is flowing around the digital ecosystem with little control. Many of the firms that have built businesses on using mobile location data for ad targeting gather the data from ad calls made by programmatic ad systems. And audience segments like “frequent quick serve restaurant visitors” could be accessed for ad targeting as easily as they could be excluded from targeting parameters for health insurance ads, for instance. “Even though data is used just for marketing, there’s no reason to think it will only be used just for that purpose,” said Dixon. “Those formulas—they are data hungry,” she said of data models used by insurance firms or other corporations.
At this point, the uses and distribution of individual profiles based on location data appear limitless, even though the individual has no control over or knowledge of them and may not opt out of data profiling or access or correct data profiles. Moreover, use of such profiles has become increasingly intrusive as secondary market participants seek to monetize their value.
Data security and privacy in the financial sector
When a bank first onboards a customer, it is responsible for gathering all the documents legally required for carrying out the necessary know-your-customer (KYC) checks. At present, however, banks do not normally share their KYC documents, or the results of their KYC document verifications and checks on customer data. This is because each bank is legally responsible for any data it holds, and because banks are marketplace rivals. The result is a huge duplication of document acquisition and verification work across banks.
From a client perspective, the lack of data sharing means that using the services of another bank requires the same numerous documents to be collected and presented to each bank in turn. This is especially onerous for more complex entities such as corporates, for which the amount of documentation required is often large and needs more regular updates.
Banks therefore need to exchange data in order to reduce the cost of compliance, ease the burden on customers of repeatedly presenting documentation, and ensure that they fully comply with KYC and GDPR legislation while sharing data. The client’s need is to reduce the time spent providing documents. For banks and their clients to adopt a data sharing system, the client must be confident that all data will be secure and shared only at their request, which requires strong data access control and data privacy methods.
A system of this kind, designed for banks in the EU, is described in “A Security and Privacy Focused KYC Data Sharing Platform” (researchgate.net). EU banks are legally mandated to comply with KYC and anti-money laundering (AML) legislation as well as the General Data Protection Regulation (GDPR), so they now face more compliance requirements than ever before. The platform focuses on two major issues surrounding KYC: the time and expense incurred by the KYC processes needed to comply with this legislation, which it addresses by letting banks share customer data with one another, reducing their costs and saving time for their customers; and the data security and privacy controls needed to make customers confident that their data is shared only at their request.
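The core requirement above (data released only at the customer's explicit request) can be sketched as a simple consent-gated access check. Everything in this sketch (class names, the in-memory registry) is hypothetical illustration, not the platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class KycRecord:
    """Verified KYC documents held by the onboarding bank."""
    customer_id: str
    holder_bank: str
    documents: dict  # e.g. {"passport": "verified"}

@dataclass
class ConsentRegistry:
    """Records which banks a customer has authorised to receive their data."""
    grants: set = field(default_factory=set)  # {(customer_id, requesting_bank)}

    def grant(self, customer_id: str, bank: str) -> None:
        self.grants.add((customer_id, bank))

    def revoke(self, customer_id: str, bank: str) -> None:
        self.grants.discard((customer_id, bank))

def share_kyc(record: KycRecord, requesting_bank: str,
              consents: ConsentRegistry) -> dict:
    """Release KYC results only if the customer has explicitly consented."""
    if (record.customer_id, requesting_bank) not in consents.grants:
        raise PermissionError("customer has not authorised this bank")
    return record.documents

# A customer onboarded at BankA authorises BankB, so BankB need not
# re-collect and re-verify the documents:
registry = ConsentRegistry()
registry.grant("cust-42", "BankB")
record = KycRecord("cust-42", "BankA", {"passport": "verified"})
print(share_kyc(record, "BankB", registry))
```

A real platform would add authentication, auditing and encrypted transport, but the principle is the same: no consent entry, no data release.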
Authoritative governmental data exchange covers credentials such as educational certificates, job experience, etc.
<Information to be collected>
Linked Data, the W3C DPVCG, and sticky policies in personal data handling
‘Linked Data’ is the name given to data that is expressed using well-defined vocabularies (preferably standardized) and that uses URIs as links between different pieces of data. Often, Linked Data is expressed in one of the syntaxes of the Resource Description Framework (RDF), such as JSON-LD. RDF and JSON-LD are W3C standards.
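As a minimal illustration, a JSON-LD document marks up ordinary key-value data with an @context that maps each key to a URI in a well-defined vocabulary (schema.org terms here) and uses a URI to identify the thing being described; the example subject URI is made up:

```python
import json

# A small JSON-LD document: the @context binds each key to a vocabulary
# URI, and @id gives the subject a URI that other data can link to.
doc = {
    "@context": {
        "name": "https://schema.org/name",
        "homepage": {"@id": "https://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.org/people/alice",  # URI identifying the subject
    "name": "Alice",
    "homepage": "https://example.org/alice/",
}

print(json.dumps(doc, indent=2))
```

Because the keys resolve to shared URIs, any consumer that understands schema.org can interpret the data without prior coordination.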
A subset of Linked Data is ‘Linked Open Data’: linked data that is publicly available. According to Tim Berners-Lee, its inventor, Linked Open Data comes in five types, in increasing levels of quality: (1) data that is available on the Web with an open license; (2) like (1), but in a machine-readable, structured form; (3) like (2), but in a non-proprietary format; (4) like (3), but using open W3C standards (RDF and SPARQL) to identify things; and (5) like (4), but with data that is linked to other data on the web.
Linked Open Data is often government data. Increasingly, governments at all levels (national, regional and local) are making data available for re-use. In France, for example, an association of local and regional governments called ‘OpenDataFrance’ helps its members and other local governments publish data online in standard formats and under appropriate licenses. The national government also operates a site with open data, data.gouv.fr. For a typical town, the available data covers a wide variety of topics, such as population statistics, GPS coordinates of artworks in public spaces, or air quality and pollution. Most often the data is in CSV format, i.e., at level three (‘3 stars’) in Tim Berners-Lee’s five-level classification.
Linked Data is meant to be automatically processed and portable. It is used in many domains, e.g., to monitor and control industrial processes, to encode the data of ‘smart cities’ and other aspects of the Web of Things, and also to describe personal data.
One set of vocabularies has been defined by the Data Privacy Vocabularies and Controls Community Group (DPVCG). This group was created by the SPECIAL project, the predecessor of the TRAPEZE project. It operates within W3C but is open to any participant, and it continues to improve the vocabulary. The DPV vocabulary and its related vocabularies make it possible to reason about data and to automate the verification of data usage against a policy; the TRAPEZE project uses it in exactly this way.
Linked Data is also used for ‘digital twins’, a concept for process control in industry, or the Web of Things in general. A digital twin is a virtual copy of a real object and it communicates with other digital twins. It can be linked to its physical twin in real time and the real object can then be controlled via its digital twin, or the digital twin can be used for simulations.
Among the different ways to store and transmit Linked Data, the most popular format is JSON-LD, which uses the JSON syntax but with well-defined vocabularies and URIs as links. JSON-LD is used, e.g., in ActivityPub and Solid, technologies that are transforming social media by basing it on distributed data and open protocols. The most popular and fastest-growing new social medium based on ActivityPub is Mastodon, but there are several others (Pixelfed, PeerTube, etc.). Thanks to the shared protocol, these different networks can communicate with each other, and their distributed nature lets people more easily control their own data. The press recently reported that Meta (Facebook) is investigating basing a social media network on ActivityPub.
ActivityPub has been a W3C standard since 2018, and there are calls within W3C for a new version. Solid is currently the subject of a W3C Community Group, and preparations are under way to set up a Working Group to standardize the technology.
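To make the ActivityPub/JSON-LD connection concrete: every activity on the network is a JSON-LD object whose @context is the standard ActivityStreams vocabulary. The sketch below assembles a minimal "Create a Note" activity; the actor URL is invented for illustration:

```python
import json

# A minimal ActivityPub "Create" activity wrapping a "Note", as JSON-LD.
# The @context URI is the ActivityStreams 2.0 vocabulary; the actor URL
# is hypothetical.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}
print(json.dumps(activity, indent=2))
```

Any server that speaks the shared vocabulary, whether it runs Mastodon, Pixelfed or PeerTube, can interpret this object; that is what makes the networks interoperable.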
Another area where Linked Data is important is geospatial data. The OGC (Open Geospatial Consortium) and W3C together defined a pair of vocabularies, SOSA and SSN, to implement ISO 19156 (‘Observations and Measurements’). An update of both vocabularies is expected this year.
A recent concept around personal data is ‘data spaces’. These are ‘markets’ of data, including personal data. The International Data Spaces Association (IDSA) is working on defining rules for data spaces that respect the rights of data owners. A workshop recently brought together people from IDSA, W3C and research to discuss interoperability of data, and metadata, in data spaces.
A ‘sticky policy’ is a piece of linked data that is associated with some other data and that describes the rules under which the latter data can be used. The idea is that the sticky policy remains attached to the data and is copied wherever the data is copied. And, because it is machine-readable, it allows automatic verification that the data is used in a way that is allowed by the policy. Thus, the owner of some personal data can restrict its use to a given purpose or time.
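The mechanism just described can be sketched as metadata that travels with the data and is checked before every use. The vocabulary terms below are simplified stand-ins for DPV-style purposes, not the actual DPV URIs:

```python
import copy
from datetime import date

# Data with an attached ("sticky") policy: allowed purposes and an expiry.
parcel = {
    "data": {"email": "alice@example.org"},
    "policy": {  # stays attached wherever the data is copied
        "allowed_purposes": {"service-provision"},  # simplified DPV-style term
        "expires": date(2025, 12, 31),
    },
}

def use_allowed(parcel: dict, purpose: str, on: date) -> bool:
    """Automatic check that a proposed use complies with the sticky policy."""
    pol = parcel["policy"]
    return purpose in pol["allowed_purposes"] and on <= pol["expires"]

def copy_parcel(parcel: dict) -> dict:
    """Copying the data copies the policy with it — that makes it 'sticky'."""
    return copy.deepcopy(parcel)

print(use_allowed(parcel, "service-provision", date(2025, 1, 1)))  # allowed
print(use_allowed(parcel, "marketing", date(2025, 1, 1)))          # denied
```

Because the check is mechanical, it can be enforced automatically at every processor that receives a copy, which is exactly the property the TRAPEZE-style policy verification relies on.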
Sticky policies have been the subject of research since the term was invented (probably in a paper in 2002), including by various EU research projects. Most recently the research has been looking at combining sticky policies with blockchain technologies.
Practical tips for data protection and security: vendors (e.g. KSP), security providers, EU authorities, research institutions, standards
Private browsers generally incorporate web technologies that have been around for years:
- They rely on something called private mode, also known as incognito mode, which is a browsing session that does not record a history of the websites you have visited. This is useful if you don’t want people with physical access to your device to snoop on you.
- Private browsers also use tracker blockers, which can often be downloaded as an add-on for a browser. The blockers depend on a list of known trackers that grab information about your identity. Whenever you load a website, the software then detects those trackers and restricts them from following you from site to site. The big downside of this approach is that blocking them can sometimes break parts of websites, like shopping carts and videos.
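The mechanism tracker blockers use, matching each outgoing request against a list of known tracker domains, can be sketched in a few lines; the blocklist entries here are invented examples, whereas real blockers ship large curated lists:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known tracker domains.
BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(url: str) -> bool:
    """Block a request if its host is a listed tracker or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://tracker.example/pixel.gif"))  # blocked
print(is_blocked("https://news.example/article"))       # allowed
```

The website-breakage problem mentioned above follows directly from this design: if a shopping cart or video player happens to load from a listed domain, the same rule that stops the tracker stops the feature.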
Privacy-focused browsers typically turn private mode on by default, or automatically purge browsing history when you quit the browser. The browsers also have tracking prevention baked in, which lets them aggressively block trackers using approaches that minimize website breakage.
But private browsers do not prevent your internet provider from seeing what websites you visit. So if you are on vacation and using a hotel’s Wi-Fi connection, a private browser will not keep your browsing information private from the hotel’s internet provider. For that type of protection, you still need to connect to a virtual private network, a technology that creates a virtual tunnel that shields your browsing information.
Firefox Focus, DuckDuckGo and Brave are all similar, but with some important differences.
Firefox Focus, available only for mobile devices like iPhones and Android smartphones, is bare-bones. You punch in a web address and, when done browsing, hit the trash icon to erase the session. Quitting the app automatically purges the history. When you load a website, the browser relies on a database of trackers to determine which to block.
The DuckDuckGo browser, also available only for mobile devices, is more like a traditional browser. That means you can bookmark your favorite sites and open multiple browser tabs.
When you use the search bar, the browser returns results from the DuckDuckGo search engine, which the company says is more focused on privacy because its ads do not track people’s online behavior. DuckDuckGo also prevents ad trackers from loading. When done browsing, you can hit the flame icon at the bottom to erase the session.
Brave is also more like a traditional web browser, with anti-tracking technology and features like bookmarks and tabs. It includes a private mode that must be turned on if you don’t want people scrutinizing your web history.
Brave is also so aggressive about blocking trackers that in the process, it almost always blocks ads entirely. The other private browsers blocked ads less frequently.
For most people, not seeing ads is a benefit. But for those who want to give back to a publisher whose ads are blocked, Brave hosts its own ad network that you can opt into. In exchange for viewing ads that do not track your behavior, you earn a cut of revenue in the form of a token. You can then choose to give tokens to websites that you like. (Only web publishers that have a partnership with Brave can receive tokens.)
A further alternative browser that deserves a mention is Vivaldi. Vivaldi’s developers concentrate primarily on privacy: the browser has a built-in ad and web-tracker blocker, and it lets you set different search engines for normal and private windows, making it possible to switch quickly between Bing, Google, DuckDuckGo and more.
Vivaldi’s creators openly state that they don’t engage in any kind of user data collection, profiling or tracking.
What is important to keep in mind is that Vivaldi is based on Google’s Chromium engine and could be affected by some of its security and privacy vulnerabilities.
Updated list of browsers for privacy: The best browsers for privacy in 2023 | ZDNET
More tips regarding data security and privacy
Even if citizens cannot control leaks of and hacks into the infrastructures that hold their data, each of us can make sure that malicious data controllers collect as little data about us as possible. It is also very important to know the techniques of the attackers, to minimize the risk of personal data theft or identity theft. To this end, the tips below can be used separately or in combination.
|Block web trackers||The Firefox browser allows to enable and fine-tune Enhanced Tracking Protection. Specialized privacy plugins are available in the Chrome, Firefox and Safari catalogs of officially recommended extensions. Citizens can find these by entering privacy or tracking protection in the search bar||To prevent web beacons|
|Protect the internet connection||Secure DNS can be enabled in the operating system or router settings, and specify a DNS server that blocks trackers. A VPN connection can sometimes provide tracking protection too, but it is important to make sure that the chosen VPN provider actually offers a tracker blocking service.||To prevent web beacons|
|Perform a factory reset or format the storage media when buying a secondhand device||We have to assume a secondhand device is dirty. Better safe than sorry||To remove pre-installed malware|
|Install and activate a reliable security solution immediately||Perform a scan before using the device for the first time.||To offset the risk of encountering malware already present on a device|
|Use browsers (e.g. Safari) giving to website scripts collecting the browser fingerprint incomplete or incorrect information||Providing only basic, impersonal information protects users from tracking through fingerprinting||To prevent website scripts from taking the browser fingerprint|
|Use browser extensions (e.g. Privacy Badger) to block website scripts collecting the browser fingerprint||Blocking website scripts protects users from tracking through fingerprinting||To prevent website scripts from taking the browser fingerprint|
|Telegram users should configure their profile privacy appropriately||To do so, go through Telegram’s Privacy settings, changing the set values — all options and data are available to everyone by default. We recommend the following: Phone Number → Who can see my phone number — Nobody. Phone Number → Who can find me by my number — My Contacts. Last Seen & Online → Who can see my timestamp — Nobody. Profile photo → Who can see my profile photo — My Contacts. Calls → Who can call me — My Contacts (or Nobody, if you prefer). Calls→ Peer-to-peer — My contacts (or Nobody, if you prefer not to share your IP address with chat partners). Forwarded Messages → Who can add a link to my account when forwarding my messages — My Contacts. Groups & Channels → Who can add me — My Contacts.||In order to not to share unnecessary details with all 500 million–plus Telegram users|
|Configure Facebook privacy settings||Follow the details explained in this web page https://privacy.kaspersky.com/articles/facebook-windows-medium/||In order to raise the level of Facebook privacy and security settings|
|Configure Facebook settings to get Facebook to stop adding data from a particular site to its dossier on you||Go to Settings → Your Facebook Information → Off-Facebook Activity → Manage Your Off-Facebook Activity; Click or tap the name of the site in the list, and in the window that opens, select Turn off future activity from [website name]; Click or tap the Turn Off button. Within two days, Facebook will stop serving you ads based on information from that resource.||In order to not to link information about your activity on third-party services to your profile or use it to display personalized ads|
|Ban all sending of data by sites and apps on Facebook||On the Manage Your Off-Facebook Activity page with a list of services that have shared information about you with Facebook, click or tap Manage Future Activity. In the Web version, this item is on the right of the screen; in the mobile app, it is available in the three-dot menu; Click or tap Manage Future Activity; Disable Future Off-Facebook Activity; Click or tap Turn Off.||In order to stop Facebook from targeting ads based on what sites and apps you use|
|Protect against stalkerware||Set a complex alphanumeric password of at least eight characters on your phone. Do not give it to anyone! Change your password regularly — for example, every few months. Be careful about who has physical access to your phone. Leave it unattended as little as possible. Download apps only from official stores. Always pay attention to the comments, ratings, and functions of the application. Install trustworthy security software on your device. Make sure that the security solution you choose can detect stalkerware.||To lower the risk of getting stalkerware onto your device|
|Detect signs that you have stalkerware on your device||The device battery and mobile data are running out too fast: Stalker applications actively use up your device’s resources because they need to constantly maintain a connection with the servers controlling them. Unfamiliar applications installed on the device have dangerous permissions (e.g. the possibility to root or jailbreak a mobile device).||To reduce the consequences of getting stalkerware onto the device|
|What to do if stalkerware is already on your device||Contacting a local support group. A list of them is present on the Coalition Against Stalkerware website; Not attempting to remove the stalkerware. The person who installed it might switch from digital abuse to physical violence: it is not recommended to remove a tracking app if discovered one on the citizen’s phone. The stalker will sooner or later find out, which can often lead to further problems. To help protect victims from stalkerware, Kaspesrky has developed TinyCheck — a tool which allows to discreetly check the device for spyware. There is no need to install TinyCheck on the affected phone, but rather on a separate external device: a Raspberry Pi microcomputer. This device functions as an intermediary between the Wi-Fi router and the affected phone. After installation, TinyCheck analyses the device’s internet traffic in real time. Based on that, it is possible to understand if there is stalkerware on the analyzed phone: if it is sending a lot of data to known spyware servers, TinyCheck will detect it.||To reduce the consequences of getting stalkerware onto the device|
|Block automatic loading of images in e-mail||When you set up e-mail on your phone, computer, or in a web-based client, make sure you enable the setting that blocks automatic image display. Most e-mail makes sense even without the pictures in it. Most e-mail clients add a “show images” button right above the e-mail body, so loading the pictures if you really need to takes just one click||To protect from tracker pixels|
|Be aware of how smartphone makers track users||A recent study shows that even “clean” Android smartphones collect a lot of information about their owners. More details are available at this link: https://www.kaspersky.com/blog/android-built-in-tracking/42654/||To avoid sharing private information with smartphone makers|
|Protect yourself from nosy maintenance technicians||Be sure to make a complete backup of all data contained on the device to an external storage device or to the cloud (if possible, of course). It’s standard practice for service centers to make no guarantees as to the safety of customer data, so you may well lose valuable files in the course of a repair. Ideally, your device should be completely cleared of all data and reset to factory settings before taking it in for repair. For example, this is exactly what Apple recommends doing. If clearing and preparing the device for service isn’t possible (for example, your smartphone’s display is broken), then try to find a service that will do everything quickly and directly in front of you. Smaller centers are usually more flexible in this regard. As for laptops, it may be sufficient to hide all confidential information in a crypto container (for instance, using a security solution), or at least in a password-protected archive. Owners of Android smartphones should use the app locking feature in Kaspersky Premium for Android. It allows to lock all your apps using a separate pin code that’s in no way related to the one used to unlock your smartphone.||To avoid technicians accessing your private information while your device is being repaired|
|Secure your smart home||Smart home should be configured correctly and secured adequately. Securing the Wi-Fi router;Checking the network against unauthorized devices regularly;Considering the vendor reputation when purchasing a gadget. More details are available at this link: https://www.kaspersky.com/blog/how-to-secure-smart-home/47472/||To minimize the sharing of data with the vendors; to prevent attackers from device hijacking; to avoid data leaks|
|Keep password hygiene in mind||Do not reuse the same password for several accounts; make your passwords long and strong; store them securely; change them immediately upon first hearing news about a data breach at the service or website that this password is used to protect. Enable two-factor authentication wherever possible. It provides an additional layer of security and will prevent hackers accessing your account — even if someone manages to obtain your login and password. Set up your social networks for better privacy. This will make it more difficult to find information about you, and therefore complicate use of a brute-force dictionary for attacking your accounts. Stop oversharing personal information, even if it’s visible only to friends. Today’s friend may become tomorrow’s enemy.||Protect your accounts from unwanted access|
|Properly handle phishing and spam emails||Open emails and links only when you are sure of the sender. When you know the sender but the content of the message seems suspicious, it is worth checking with the sender through an alternative communication channel. Check the spelling of a website URL if you think you are dealing with a phishing page. If so, the URL may contain errors that are difficult to spot at first glance, such as a 1 instead of an I or a 0 instead of an O When surfing the web it is always better to use a reliable security solution that thanks to access to international sources of threat intelligence, are able to identify and block spam and phishing campaigns.||To not to end up a victim of spam or phishing scams|
| Recommendation | Description | Purpose |
|---|---|---|
| Try to recover data encrypted by ransomware | Ransomware is malware (a Trojan or another type of virus) that locks a device or encrypts files and then demands a ransom to restore access. Paying is expensive, and there is no guarantee of success. Victims of ransomware can instead try the free decryption tools available on noransom.kaspersky.com to get their digital life back. | To recover data encrypted by known ransomware. |
| Avoid data leaks in a document | Carefully check document contents before sharing. Use dedicated graphics programs to edit images: use 100% opaque elements to block information and save images in formats that don't support layering, such as JPG or PNG. Be particularly careful with cloud documents, which keep a record of each file's entire change history, potentially enabling other people to restore deleted or changed information. Do not give co-authors access to cloud documents that ever contained secret data; instead, create a new file and copy across only non-sensitive information. Check Word documents with Document Inspector; download cloud documents in DOCX format and check them as well. Be attentive and don't rush. | To keep private information in shared documents private from colleagues, co-editors, and the general public. |
| Keep your software updated | Regularly update your operating system, applications, and antivirus software. Software updates often include security patches that address known issues. | To protect against vulnerabilities. |
| Use a reliable security solution | Install reputable antivirus software and keep it up to date. Regularly scan your devices for malware and remove any detected threats. | To detect and remove cyber-threats. |
| Secure the Wi-Fi network | Change the default username and password of the Wi-Fi router and use strong encryption (WPA2 or WPA3) to protect the wireless network. Avoid using public Wi-Fi networks for sensitive activities such as online banking. | To prevent attackers from stealing personal information through unsecured Wi-Fi networks. |
| Avoid malicious links | Be wary of suspicious emails, messages, or links that ask for personal information. Avoid clicking on unknown links or providing sensitive information unless you are certain of the source's legitimacy. | To minimize the risk of phishing attacks. |
| Back up your data | Regularly back up your important files and documents to an external hard drive, cloud storage, or another backup solution. | To recover data in case of hardware failure, loss, or ransomware attacks. |
| Exercise caution when downloading files | Only download files or software from trusted sources. Avoid downloading attachments or clicking on links in unknown or suspicious emails. | To avoid unintentionally installing malware on a device. |
| Educate yourself | Stay informed about the latest security threats, scams, and best practices for online safety. Regularly refresh your knowledge of current security trends and act on it. | To minimize the risk of cyber-attacks. |
| Review and adjust privacy settings | Regularly review the privacy settings on your devices, applications, and social media platforms. Customize them to limit the amount of personal information that is collected and shared. | To minimize the risk of data leaks and identity theft, and to avoid providing data to malicious data controllers. |
| Protect your personal information | Be cautious about sharing sensitive personal information online, such as your full name, address, phone number, and financial details. Only provide such information on secure websites and platforms. | To minimize the risk of data leaks and identity theft, and to avoid providing data to malicious data controllers. |
| Use secure communication channels | When communicating online, use secure channels such as encrypted messaging apps or platforms that offer end-to-end encryption. | To ensure that your conversations remain private and secure. |
| Be mindful of social media sharing | Consider the potential privacy implications before sharing personal details, location information, or photos on social media. Restrict the visibility of your posts to trusted friends and family, and regularly review and update your privacy settings. | To minimize the risk of privacy violations. |
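The backup recommendation above is easy to automate. The sketch below is a minimal illustration using only the Python standard library; the dated zip-archive naming scheme and the folder layout are our own assumptions, not a prescribed tool.

```python
import shutil
from datetime import date
from pathlib import Path

def backup_folder(source: str, dest_dir: str) -> str:
    """Create a dated zip archive of `source` inside `dest_dir`
    and return the path of the archive that was written."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    # shutil.make_archive appends the ".zip" extension itself
    archive_base = str(Path(dest_dir) / f"backup-{date.today().isoformat()}")
    return shutil.make_archive(archive_base, "zip", source)

# Example (paths are illustrative):
# backup_folder("/home/alice/Documents", "/media/external/backups")
```

Scheduling such a script (e.g. via cron or Task Scheduler) turns an occasional chore into a regular habit, which is what makes backups useful against ransomware.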
Users registered on the TRAPEZE Portal can, through the training tools integrated into the portal, measure their level of expertise on privacy topics using the Kaspersky Gamified Assessment Tool, and can deepen these and other security and privacy topics using the Kaspersky Automated Security Awareness Platform.
Data security and privacy standards:
- Information security management system (ISMS) (ISO/IEC 27000 family): A set of guidelines for maintaining infrastructure, mainly a company’s data centers, in line with legal, technical and physical policies that ensure the confidentiality, integrity, and availability of the data residing there. The family comprises ISO/IEC 27001, ISO/IEC 27002, ISO/IEC 27003, ISO/IEC 27004, ISO/IEC 27005, ISO/IEC 27006, and ISO/IEC 27007.
- Common Criteria (ISO/IEC 15408): This standard mainly deals with the certification of IT products. It provides for the evaluation of IT products against a set of agreed criteria widely followed by industry and governments. ISO/IEC 15408 consists of three parts: Part 1 (Introduction and general model), Part 2 (Security functional requirements), and Part 3 (Security assurance requirements). The Common Evaluation Methodology (CEM) is a companion document used by security auditors when evaluating IT products.
- ISO/IEC 18043: This standard helps organizations with the selection, deployment, and operation of intrusion detection systems within their IT infrastructure.
- Center for Internet Security, CIS (https://www.cisecurity.org/): CIS publishes security benchmarks for mobile devices, network devices, server operating systems, virtualization platforms and cloud, desktops, and web browsers. These benchmarks are freely available security configuration guides that governments and industry widely accept. Most security auditing organizations use these benchmarks to evaluate the configuration of IT infrastructure.
- ISO 22301:2012: This standard contains requirements for Business continuity management systems.
- National Institute of Standards and Technology (NIST) standards: NIST is a US-based agency that publishes cybersecurity-related standards. Most cryptography-related standards come from NIST, and countries across the globe widely follow them. NIST SP 800-115 (Technical Guide to Information Security Testing and Assessment) is an important standard for assessing IT systems.
- SANS Security Policy Resource: This resource contains templates related to network devices, servers, and application security.
- ISO 28000: This ISO standard contains the specification for security management systems for the supply chain.
- OWASP Foundation: A non-profit organization that regularly publishes Top 10 lists of security issues for web applications, mobile applications, web services, and more. Most security auditing organizations use these Top 10 lists to categorize security vulnerabilities.
- ISO/IEC 27037: This ISO standard contains guidelines for the identification, collection, acquisition, and preservation of digital evidence.
- Payment Card Industry Data Security Standard (PCI DSS): This compliance standard sets out the requirements that financial organizations and merchants must meet to process credit card payments securely.
- Cloud Security Alliance (CSA): CSA is a non-profit organization that regularly publishes the best security practices related to cloud security.
- ISO/SAE 21434: This standard covers automotive cybersecurity. It includes a list of requirements related to cybersecurity risk management and defines a cybersecurity process framework that gives OEMs a common basis for communicating security risks.
- ISO/IEC 20243-1: This information technology standard specifies the Open Trusted Technology Provider™ Standard (O-TTPS). It helps mitigate maliciously tainted and counterfeit products.
- ISO/IEC 27400:2022: This standard provides guidelines for Internet of Things (IoT) solutions, listing risks, principles, and controls for IoT security and privacy.
- ISO/IEC 27017: Based on ISO/IEC 27001 and ISO/IEC 27002, this standard specifically covers the cloud controls applicable to cloud service providers.
- ISO/IEC 29147: Related to vulnerability disclosure in IT products and services, it provides guidance and recommendations to vendors on technical vulnerability management.
- ISO/IEC 30111: Related to vulnerability handling processes, it provides requirements and recommendations for managing and remediating vulnerabilities in IT products.
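Several of the standards above (notably ISO/IEC 27037 on preserving digital evidence) rely on a common low-level building block: computing cryptographic hashes of acquired files so that their integrity can be verified later. A minimal sketch of that practice follows; the function names and the idea of an acquisition inventory are our own illustration, not text from the standard.

```python
import hashlib
from typing import Dict, Iterable

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks so
    that large evidence files don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def inventory(paths: Iterable[str]) -> Dict[str, str]:
    """Map each file path to its digest, e.g. for an acquisition log.
    Recomputing the digests later detects any tampering or corruption."""
    return {p: sha256_of_file(p) for p in paths}
```

Recording such digests at acquisition time, and re-checking them before analysis, is what allows an auditor to demonstrate that the evidence was not altered in between.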
In today’s digital world, authentication is essential to ensuring secure access to online resources and is usually the foundation of all security. With the rise of cloud computing, the need for effective identity management solutions has become more apparent. This has led to the development of several open-source identity management solutions offering features robust enough to satisfy the needs of nearly all software systems.
- OpenIAM (https://www.openiam.com)
This is one of the most well-known open-source identity management tools. It features Single Sign-On, user and group management, flexible authentication, and automated provisioning, a major component of identity governance and administration. Moreover, OpenIAM aims to reduce enterprise operational costs and improve identity audits via a centralized control console. The community version doesn’t impose a time limit on subscriptions and benefits from community forum support.
- Shibboleth Consortium (https://www.shibboleth.net)
This is another popular open-source identity management tool. Shibboleth Consortium offers web Single Sign-On, authentication, and user data aggregation. Additionally, it can enforce identity management policies on user authentication requests and implement fine-grained controls.
- Apache Syncope (https://syncope.apache.org)
The Apache Syncope platform describes itself as an open-source system managing digital identities in enterprise environments. It focuses on providing identity lifecycle management, identity storage, provisioning engines, and access management capabilities. Furthermore, it offers some monitoring and security capabilities for third-party applications.
- MidPoint (https://evolveum.com/midpoint)
Midpoint is an open-source IAM tool which combines identity management and identity governance. It aims to provide scalability, allowing enterprises to grow to accommodate millions of users and to offer diverse customization. Additionally, Midpoint offers an auditing feature, as well as compliance fulfilment. It works for enterprises of all sizes but has features designed for the financial, governmental, and educational industries.
- WSO2 (https://wso2.com)
The WSO2 Identity Service stands as one of the few open-source identity management tools providing customer identity and access management capabilities. WSO2 promotes lower-friction access for customers, data gathering for business intelligence, and streamlined preference management. In addition, the WSO2 Identity Service offers API and microservices security, access control, account management, identity provisioning, identity bridging, and analytics.
- Soffid (https://www.soffid.com)
Like many other open-source identity management tools, Soffid offers Single Sign-On and identity management at the enterprise level. Further, it aims to reduce the IAM support costs and assist with auditing and legal compliance. Soffid also offers identity provisioning, workflow features, reporting, and a unified directory. It also provides enterprise-wide role management through predefined risk levels.
- Keycloak (https://www.keycloak.org)
Uniquely among the many open-source identity management tools, Keycloak positions itself as designed primarily for applications and services. The emphasis on third-party application identity security enables enterprises to monitor and secure third-party programs with little effort. Keycloak also provides out-of-the-box user authentication and federation. Furthermore, it provides standard protocols, centralized management, password policies, and even social login for customer identity and access management needs.
Keycloak is a solid product with a highly active community and many contributors who improve the project continuously, which is why the TRAPEZE platform authentication solution is based on Keycloak, as described in deliverable D1.8 – Platform specification and design.
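As an illustration of the standard protocols Keycloak exposes, the sketch below builds an OpenID Connect token request against Keycloak’s token endpoint (`/realms/<realm>/protocol/openid-connect/token`). The base URL, realm, client, and credentials are placeholder assumptions; the Resource Owner Password grant is used here only because it is simple to show, while browser-based flows (authorization code) are preferable in production.

```python
import urllib.request
from urllib.parse import urlencode

def token_request(base_url: str, realm: str, client_id: str,
                  username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) an OIDC password-grant token request
    for Keycloak's standard token endpoint."""
    endpoint = f"{base_url}/realms/{realm}/protocol/openid-connect/token"
    body = urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    }).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Sending it requires a running Keycloak instance, e.g.:
# with urllib.request.urlopen(token_request(
#         "https://id.example.org", "trapeze", "portal",
#         "alice", "s3cr3t")) as resp:
#     raw = resp.read()  # JSON containing access_token, refresh_token, ...
```

The response is a standard OAuth 2.0 token payload, so any OIDC-compliant client library can be used in place of this hand-rolled request.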
Designing for Transparency and Trust (hbr.org)
Law on the collection and retention of identification data and metadata in the electronic communications sector and their provision to the authorities [Wet betreffende het verzamelen en het bewaren van de identificatiegegevens en van metagegevens in de sector van de elektronische communicatie en de verstrekking ervan aan de autoriteiten], 20 July 2020
Article 3 of the Projet de loi d’orientation et de programmation du ministère de la justice (bill on the orientation and programming of the Ministry of Justice), March 2023
Euronews, 5 May 2023
5 years of the GDPR: National authorities let down European legislator, EDRi, 31 May 2023
Meta considers a new social network, as decentralized model gains steam, Washington Post, 11 March 2023
Karjoth, Günter; Schunter, Matthias; Waidner, Michael (2002). Platform for Enterprise Privacy Practices: Privacy-Enabled Management of Customer Data. Lecture Notes in Computer Science, vol. 2482. doi:10.1007/3-540-36467-6_6