Do we really need regulators to come and tell us that each person’s data is, well, private? A few years before the GDPR came into effect in Europe, Mexico’s Law for Protection of Personal Data Held by Private Parties (LFPDPPP) stated essentially the same principles that many companies are now struggling to comply with:
- Individuals have the right to know what personal data about them is stored by any company
- Individuals have the right to request that such information be deleted or withheld from being shared with any third party
The enactment of these regulations has made individuals and companies alike aware of some basic facts: too much information about ourselves has been voluntarily but unknowingly disclosed; some common-sense boundaries have been breached; and much of that information is simply not needed to provide the digital services we sign up for. We could, therefore, block access to prevent further dissemination and commercialization of our habits, browsing history, location, network of family, friends and colleagues, and so on.
So, if the rules could be rewritten from scratch … if you actually read the license or service agreement of each online service you really want to keep using, what terms would you consider reasonable, given the information about yourself that you are willing to disclose in order to receive those digital services? Of course, we are discarding the possibility that you are happy with clauses like “by using this app, you understand that we can obtain every piece of your personal data, contacts, location and browsing history, and sell it and share it with whomever we can get to pay the most for it, with no obligation to you or your descendants.”
So, trying to solve this puzzle, allow me to propose the following Taxonomy of Private Identity and briefly explain the different components.
In today’s model, we have accepted that we authenticate ourselves in the online universe through one of two widely adopted credentials: an email address and/or a Facebook account. Yes, your Facebook account was originally authenticated through an email account, but it now qualifies as equally valid. Both, however, can be faked. Yet we are comfortable with an authentication mechanism that is not certain and can easily be stolen.
In the proposed taxonomy, different data is protected behind purpose-specific gates. Each gate can be opened only with its respective private key, plus one key linked to you as an individual.
A detailed description of the proposed encryption mechanism and the data structures of the underlying blockchains will be the subject of an upcoming article. For now, let’s say that the key that grants access to the other gateways should be generated from biometric data. Fingerprints and facial recognition are easily implemented today, but a widespread model would require richer data, potentially even DNA data, that would bind that personal key to its owner.
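Since the full encryption mechanism is deferred to that upcoming article, the following is only a minimal sketch of the idea as described so far: a personal key derived from biometric data, combined with a gate-specific private key, unlocks one purpose-specific gate at a time. All function names are hypothetical, and a real system would need a fuzzy extractor to turn noisy biometric readings into a stable key, which is glossed over here.

```python
import hashlib
import hmac

def derive_personal_key(biometric_template: bytes) -> bytes:
    """Hypothetical: derive a stable personal key from an already
    error-corrected biometric template. Real deployments would use a
    fuzzy extractor, since raw biometric scans vary between readings."""
    return hashlib.sha256(biometric_template).digest()

def open_gate(gate_private_key: bytes, personal_key: bytes, purpose: str) -> bytes:
    """Combine the gate-specific private key with the owner's personal
    key to produce the unlocking key for one purpose-specific gate.
    Other gates, keyed to other purposes, remain sealed."""
    return hmac.new(personal_key,
                    gate_private_key + purpose.encode(),
                    hashlib.sha256).digest()

# Each purpose yields an independent unlocking key:
personal = derive_personal_key(b"error-corrected fingerprint template")
health_key = open_gate(b"health-gate-secret", personal, "health-records")
finance_key = open_gate(b"finance-gate-secret", personal, "finance-records")
assert health_key != finance_key
```

The design point illustrated is that compromising one gate’s key reveals nothing about the others, while the biometric-derived personal key remains necessary for all of them.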
Implementing such a taxonomy would allow participants to segregate the information they open to different actors or services, for specific purposes. For example, your LinkedIn profile could add a tag to each of your education or professional milestones, indicating that the item has been “verified,” without a copy of it ever appearing on the open network. As long as LinkedIn is a participant in the authentication protocol, it can confirm that participating universities or employers have validated your information, without providing any unnecessary data to those requesting confirmation. Similarly, personal legal papers (say, shares deposited into a trust fund) could become public legal papers when linked to a document such as a will. You, and only you, as owner of your private data, would decide which of these personal legal documents are tagged for public consultation, if that is needed or required by law.
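The article does not specify how a “verified” tag could work without exposing the underlying credential; one common technique that fits the description is a salted hash commitment, sketched below. The claim text and function names are illustrative assumptions, not part of any actual LinkedIn protocol.

```python
import hashlib
import os

def commit(claim: str) -> tuple[str, bytes]:
    """Publish only a salted hash of a credential (the 'verified' tag);
    the claim itself never appears on the open network."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + claim.encode()).hexdigest()
    return digest, salt

def verify(claim: str, salt: bytes, digest: str) -> bool:
    """An issuer (a university or employer) that holds the claim and
    salt can confirm the public tag matches, revealing nothing else."""
    return hashlib.sha256(salt + claim.encode()).hexdigest() == digest

# Hypothetical milestone committed to, then checked by the issuer:
tag, salt = commit("MSc Computer Science, 2001, Example University")
assert verify("MSc Computer Science, 2001, Example University", salt, tag)
assert not verify("PhD Physics, 1999, Example University", salt, tag)
```

Anyone seeing only the tag learns nothing about the milestone; only a party that already holds the claim and the salt can confirm the match, which mirrors the selective-disclosure behavior described above.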
So, the key point is that we start asking whether we can identify all these important pieces of private data, know where they are stored, and whether we have granted a huge technology company unnecessary access to link them to marketing algorithms … or worse, whether rogue actors have easy access to the digital representation of our lives and assets.
Author’s note: Jose Angel Arias has started and led several technology and business consulting companies over his 30-year career. In addition to having been an angel investor himself, as head of Grupo Consult, he participated in TechBA’s business acceleration programs in Austin and Madrid. He transitioned his career to lead the Global Innovation Group in Softtek for four years. He is currently Technology Audit Director with a global financial services company. He has been a member of ISACA and a Certified Information Systems Auditor (CISA) since 2003.