TRACE Whitepaper 1.0

Published March 26, 2024

Abstract: For online creators and influencers, the communities and platforms they use to showcase their work and build their identity are the very places where they are most likely to be fraudulently impersonated by other users, have their content stolen, or watch someone else profit from their original work under false pretenses. Advances in generative AI that make it trivially easy to create credible facsimiles of another’s image, likeness, and work product will only aggravate these issues. We propose a set of criteria by which to judge any verification-based solution to this problem and show how a novel approach to identity creates the most resilient, trustworthy, and universally applicable solution for establishing identity and content ownership for creators.

Introduction

Establishing and safeguarding identity in online forums has long faced challenges: so-called “verified” accounts impersonating prominent figures, the creation of “lurker/burner accounts,” and the proliferation of bots, to name a few. For creators, influencers, and other prominent people, this dynamic undermines their ability to ensure that representations of them are accurate. Audiences, in turn, struggle to assess the validity of information they encounter online.

At the time of this writing, many platforms have put solutions in place to help especially prominent figures verify themselves and credibly assert that they are who they claim to be. For most, this involves some combination of the following: (a) using government credentials to assert one’s identity; (b) private verification that these credentials belong to the purported owner, conducted directly by an employee and/or by AI-powered analysis of a live video feed; and (c) granting a verified credential that is specific to the one platform providing the verification.

While the existing paradigm could be improved upon, it faces heightened challenges in the face of generative AI, where the cost of creating increasingly convincing deepfakes of someone’s image, voice, writing style, and so on is falling steadily. As costs fall and quality rises, the calculus changes for would-be impersonators: the pool of vulnerable individuals expands beyond presidents, popes, and A-list celebrities to include niche authors, indie artists, microinfluencers, and many others who are currently protected only because targeting them is not yet worth the cost.

Further, as the number of platforms where content lives continues to proliferate, fragmented credentials and credentialing processes will become increasingly untenable. There is obvious user friction in having to certify separately to multiple platforms, financial cost in maintaining multiple certifications on platforms that charge for them, difficulty in tying the different credentials into one unified online identity, and obvious audience confusion about how to evaluate a credential granted under one credentialing regime that might not have been granted on another platform with a more stringent certification flow.

Success Criteria

[Figure: Social Connections]
  • The solution should be both highly resistant to infiltration and duly reactive to new information that sheds light on an actor’s credibility.

  • To enhance the value of trust earned by any creator who passes verification, the audience should be able to easily assess and understand the relevant data used to infer a creator’s authentic identity.

  • The solution should credibly tie creators to their content: it should naturally encompass a creator’s full digital footprint across all platforms and make it easy for creators to assert ownership of all types of assets.

Resilience

Existing verification protocols on major online platforms today typically leverage only a narrow set of mechanisms to validate a user’s identity. To be certified, a user may have to validate an email address, send in government documentation, be endorsed by another member of the network, or simply pay a monthly subscription. Rarely are multiple mechanisms of establishing identity combined, and as a result each of the major platforms has had fraudsters infiltrate its verification system and use the improperly obtained certification to more effectively propagate the misinformation or scam they are peddling.

A resilient system would operate on the theory that while any one mechanism of establishing identity might be vulnerable, constructing a portfolio of identity mechanisms can drastically reduce the rate at which fraudulent actors slip through the certification process.

Such a system would also react intelligently to the discovery of infiltration, adjusting not only the perceived trustworthiness of the identified bad actor but also that of all connected actors who can now reasonably be viewed with greater suspicion of misrepresentation.

Legibility

While a number of verification processes exist across different platforms today, they are almost always conducted behind closed doors: an individual may be verified by government identification, for example, but other users are (appropriately) never shown the actual verification materials. This is an understandable choice given the nature of the validation process, but it limits other users’ ability to fully trust the resulting credential. The credential was granted privately, using a workflow that is often specific to the platform, and few platforms have shown themselves to be fully unbiased about who is allowed to participate and what type of engagement is permissible.

Such limitations raise the question of “who checks the checkers?” Ideally, a skeptically minded person could independently trace the trail of evidence that led to an assertion of identity and draw their own conclusions. Further, this should not require direct access to documents that verified individuals would rather keep private.

Universality

As there is a large and growing number of online platforms for digital engagement, it is common for a creator to produce work that lives across many media forums. An industry thought leader, for instance, might have opinion columns in major news outlets, a Substack newsletter, interviews on a number of podcasts, a debate appearance on YouTube, and so on.

As creators commonly have a sprawling digital footprint composed of many types of files (ranging from raw text to photos, videos, audio files, PDFs, GitHub repositories, etc.) featured across many platforms, an ideal identity solution would let a creator effectively claim that full digital footprint across all platforms.

This stands in direct contrast to the current state of affairs, in which creators face a balkanized landscape of narrow, platform-specific certifications built on inconsistent standards, while many platforms offer no certification at all.

Robust Identity

With these success criteria in mind, we outline a solution that is robust and resilient, legible to end audiences, and naturally connects creators to their full digital footprint.

To achieve the goals of robust and resilient identity, we start with the simple insight that leveraging multiple sources of verification enhances the confidence with which we can infer a user’s identity because while any one method might be vulnerable to infiltration, the combination of methods presents a substantially higher barrier to impersonation.
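
As a rough illustration of this portfolio effect, consider the sketch below. The per-mechanism numbers are hypothetical and the independence assumption is a simplification, but the multiplicative structure shows why stacking even imperfect checks raises the barrier to impersonation sharply.

    # Illustrative sketch only: hypothetical per-mechanism bypass probabilities,
    # assuming (simplistically) that the mechanisms fail independently.
    bypass_probability = {
        "claimed_profiles": 0.10,   # attacker compromises or fakes the profile links
        "kyc_liveness":     0.05,   # attacker defeats the liveness / document check
        "endorsement":      0.20,   # attacker obtains a fraudulent endorsement
        "content_claims":   0.15,   # attacker pre-claims content that is not theirs
    }

    # Probability that an impersonator slips past every mechanism at once.
    p_all = 1.0
    for p in bypass_probability.values():
        p_all *= p

    print(f"Chance of bypassing the weakest single check: {max(bypass_probability.values()):.0%}")
    print(f"Chance of bypassing the full portfolio: {p_all:.4%}")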

Mechanism 1: Claiming Online Platform Profiles

The first mechanism of claiming one’s online digital footprint is to assert ownership of each online profile. To do this, a user creates a bidirectional assertion of authenticity between each social media identity and a “parent identity,” which serves as the hub connecting the spokes of one’s various online identities.

Each act of certifying a profile rests on the premise that the user must have access to both the parent certifying account and the profile in question. As such, each claimed profile credibly evidences that the holder of the parent identity and the owner of the online profile are the same person. By transitivity, each new account connected to the parent account creates an implicit connection among all of the individually claimed accounts, such that, when finished, a user can credibly claim that every account in the portfolio shares the same author.

This has the added benefit that if a prominent online user creates a new account on a new platform, where their identity could only be weakly inferred from the new account’s activity, connecting that in-doubt account to the verified portfolio instantly extends the full credibility of the existing portfolio to the newly connected account.
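
The check below is a minimal sketch of this bidirectional logic, not a prescribed data format: the record classes, field names, and the “trace:alice” identifier are illustrative stand-ins. A claim is accepted only when the parent identity lists the profile and the profile publicly points back at the parent.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class ParentIdentity:
        """Hub identity; hypothetical structure for illustration."""
        name: str
        claimed_profiles: set[str] = field(default_factory=set)   # e.g. "twitter:@alice"

    @dataclass
    class PlatformProfile:
        """A spoke profile that publicly posts a pointer back to its parent."""
        handle: str
        posted_parent_claim: str | None = None   # e.g. "trace:alice"

    def claim_is_bidirectional(parent: ParentIdentity, profile: PlatformProfile,
                               parent_id: str) -> bool:
        # Direction 1: the parent identity asserts ownership of the profile.
        parent_asserts = profile.handle in parent.claimed_profiles
        # Direction 2: the profile publicly points back at the parent identity,
        # which requires posting access to that profile.
        profile_asserts = profile.posted_parent_claim == parent_id
        return parent_asserts and profile_asserts

    alice = ParentIdentity("Alice", {"twitter:@alice", "substack:alice"})
    twitter = PlatformProfile("twitter:@alice", posted_parent_claim="trace:alice")
    print(claim_is_bidirectional(alice, twitter, "trace:alice"))  # True only if both sides agree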

[Figure: Authenticity Assertions]

Mechanism 2: Traditional KYC

Many online platforms treat KYC as the gold standard for inferring identity. This often involves a liveness check in which a live feed of a user’s face is compared against the face in a submitted government document. While the method has shown itself vulnerable to generated video feeds of a target, we believe it provides a valuable signal about identity when used in conjunction with other lines of evidence as part of a portfolio strategy of identification. Many governments are also in the process of creating stronger digital identification methods that could be leveraged in the future.

Mechanism 3: Endorsement by an Existing, Certified Member of the Network

While claiming existing social identity platforms and passing a digital KYC process create a useful base for concluding that an author is authentic, it is plausible that a highly valued target of impersonation might have an impersonator create and connect a portfolio of accounts pretending to be the target and pass KYC using fabricated data.

In analog encounters with strangers, we often leverage known common connections to establish the trustworthiness and background of the stranger. The nature and strength of the conclusion we draw about the new individual hinges on our assessment of the common connection’s judgment and honesty.

In digital environments, we can perform a similar act by leveraging an endorsement from a verified node on the network. A new candidate for verification sends a request to an existing member, asking them to attest that the candidate is who they claim to be. Requiring that every new arrival be endorsed by exactly one endorser creates a traceable linear ancestry of authenticity assertions.
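
A minimal sketch of the resulting lineage follows; the record layout is illustrative rather than a specified storage format. Because every member records exactly one endorser, walking the chain from any member terminates at a root and yields the full ancestry of attestations.

    # Hypothetical endorsement records: each verified member stores exactly one endorser.
    endorser_of = {
        "dana":  "carol",
        "carol": "bob",
        "bob":   "alice",
        "alice": None,      # a root member, verified at network genesis
    }

    def endorsement_lineage(member: str) -> list[str]:
        """Walk the single-endorser chain from a member back to a root."""
        chain = [member]
        while endorser_of.get(chain[-1]) is not None:
            chain.append(endorser_of[chain[-1]])
        return chain

    print(endorsement_lineage("dana"))  # ['dana', 'carol', 'bob', 'alice']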

[Figure: Generations / Quality Assertions]

Mechanism 4: Laying Claim to One’s Digital Work

A final method of verification comes in the form of laying claim to one’s created content. This type of verification performs the simultaneous act of verifying the creator and connecting the creator to their creations.

Conceptually, the method we employ mirrors what was once known as a “poor man’s copyright,” in which an author would mail themselves a copy of a manuscript and use the unopened envelope, bearing a timestamp predating any competing claim, as proof that they possessed the work before anyone else.

Mechanically, our method has creators hash a digital asset in any format, sign the hash with their private key, and publish the signed hash to the blockchain. If done prior to hosting the asset on any online forum, the record serves as immutable evidence that the creator had access to the asset before its public debut. Any party in doubt can recreate the same hash value by passing the same asset through the same hashing algorithm. Publishing a hash instead of the asset itself additionally allows creators to make public claims to the content without actually releasing any information about that content to their audience at large.
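
A minimal sketch of the claim step is shown below, using SHA-256 and an Ed25519 key pair from the widely available cryptography package; the specific primitives and the "manuscript.pdf" path are illustrative choices rather than requirements of the protocol.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def claim_asset(path: str, private_key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
        """Hash a digital asset and sign the digest with the creator's private key."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).digest()
        signature = private_key.sign(digest)
        # In the full system, the digest, signature, and public key would be
        # published on-chain with a timestamp; the asset itself is never revealed.
        return digest, signature

    creator_key = Ed25519PrivateKey.generate()
    digest, signature = claim_asset("manuscript.pdf", creator_key)  # illustrative path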

Independent of the other sources of identity, this mechanism, when properly used, creates a credible tie between the holder of the signing key in question and the content the creator claims to own.

Legible Identity

While the outlined process publishes the relevant data to a publicly visible blockchain, searchable by any party interested in auditing it directly, we believe a core component of legibility is easy access for users with average technical competence. To that end, our solution features an interactive UI layer that lets audience members see and explore all of the pieces of evidence used to certify an individual. For instance, in the diagram below, a user could quickly toggle between seeing which social media profiles are associated with a verified user and what KYC checks the verified user has completed (along with metadata about that KYC process, such as document type, date, and KYC provider), and could ascend or descend a user’s endorsement lineage.

Claimed content is aggregated into a profile that connects the user directly to their most important work. For any piece of featured work, those interested in verifying authenticity can determine when the piece was claimed on the blockchain and, if in possession of the underlying asset, recreate the same hash and check the signature. (It is worth noting that in some instances, e.g., a book requiring purchase, a casual audience member may not have access to the original asset.)
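
For an audience member who does hold a copy of the asset, the audit is mechanical, as in the sketch below; published_digest, published_signature, and creator_public_key are hypothetical names standing in for values read from the public record, and the primitives continue the illustrative choices above.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def audit_claim(asset_path: str, published_digest: bytes, published_signature: bytes,
                    creator_public_key: Ed25519PublicKey) -> bool:
        """Recompute the asset's hash locally and check it against the published claim."""
        with open(asset_path, "rb") as f:
            local_digest = hashlib.sha256(f.read()).digest()
        if local_digest != published_digest:
            return False                  # not the asset that was claimed
        try:
            creator_public_key.verify(published_signature, published_digest)
        except InvalidSignature:
            return False                  # claim was not signed by this creator's key
        return True                       # asset, digest, and signature all line up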

Resilient Identity

Though the system outlined here is considerably more resistant to penetration than the vast majority of verification paradigms currently in use in online forums, no system is fully immune to nefarious actors who might seek to impersonate another user, lay claim to content that is not theirs, and so on.

While there are different mechanisms for discovering malicious behavior, a key consideration for any would-be verifier of identity is how to adjust confidence in response to an offending event. The connected network of verified individuals described above presents a unique opportunity to leverage new information about authentic representation, or misrepresentation, when assessing confidence in the various nodes of the network.

For example, if a recently endorsed user is discovered to be fraudulently impersonating another’s identity, this has implications not only for the trustworthiness of the offending user but also for the person who endorsed the fraud, the person who endorsed that endorser, and so on. The descendants of the uncovered fraud are also likely to be compromised in some way themselves, so confidence in every node connected to the fraud is downweighted, with the magnitude of the adjustment inversely related to the distance from the fraud. We implement the update Trust_{i,t+1} = Trust_{i,t} - λ·δ^d, where λ is the penalty associated with the offending action, δ is the decay factor applied to those directly or indirectly connected to the offender, and d is the distance from the offending actor (d = 0 for the offender themselves).
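
The sketch below shows one way such a penalty could be propagated over the endorsement graph; the graph layout, trust values, and the particular choices of λ and δ are illustrative only.

    from collections import deque

    def propagate_penalty(trust: dict[str, float], neighbors: dict[str, list[str]],
                          offender: str, penalty: float, decay: float) -> dict[str, float]:
        """Apply Trust[i] -= penalty * decay**d, where d is each node's distance
        from the offender along endorsement edges (d = 0 for the offender)."""
        distance = {offender: 0}
        queue = deque([offender])
        while queue:                                   # breadth-first search for distances
            node = queue.popleft()
            for nxt in neighbors.get(node, []):
                if nxt not in distance:
                    distance[nxt] = distance[node] + 1
                    queue.append(nxt)
        updated = dict(trust)
        for node, d in distance.items():
            updated[node] = trust.get(node, 0.0) - penalty * decay ** d
        return updated

    # Illustrative chain: alice endorsed bob, and bob endorsed the offender "mallory".
    neighbors = {"mallory": ["bob"], "bob": ["alice", "mallory"], "alice": ["bob"]}
    trust = {"mallory": 0.9, "bob": 0.8, "alice": 0.95}
    updated = propagate_penalty(trust, neighbors, "mallory", penalty=0.6, decay=0.5)
    print({node: round(value, 2) for node, value in updated.items()})
    # -> {'mallory': 0.3, 'bob': 0.5, 'alice': 0.8}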

[Figure: Endorser Gen]

Beyond harnessing information from any breach to sharpen estimates about directly and indirectly adjacent nodes of the network, this approach creates favorable incentives: members want to endorse only high-quality additions to the network and to seek endorsement from trustworthy sources whose actions will not detract from their own credibility.

Conclusion

The authors identify a core set of identity problems disproportionately impacting online creators (impersonation, theft of or inappropriate claims to creative work, and so on) and highlight key ways in which existing verification schemes fall short: they are narrowly scoped, inconsistently enforced, largely illegible to end users, and insufficiently robust.

In light of the increasing identity threats faced by creators as a result of advances in generative AI, the authors posit a set of success criteria against which identity solutions should be measured. 

These criteria include:

  1. Resilience: Resistant to infiltration and responsive to any known breach

  2. Legibility: Ability for the end audience to assess the same evidence used by the platform in confidently inferring a user’s identity

  3. Universality: The ability for one’s identification to work across all digital platforms and with all types of digital assets

In light of these aims, the authors illustrate a candidate solution that uses multiple sources of evidence to establish identity, records these evidences on the blockchain, provides a visible interface audience members can use to easily assess authenticity, adapts to any suspected incidence of misrepresentation, and provides a certification of authenticity that naturally embeds verification into one’s full digital portfolio of work.