August 29, 2022

A Community Based Roadmap for Sybil Detection Across Web 3


The Fraud Detection & Defense (FDD) workstream of GitcoinDAO is responsible for detecting and deterring sybil attacks against Gitcoin Grants. This roadmap is designed to continue our process of paying down centralization debt by building a community-centric sybil-detection process.

This began with the launch of the GTC governance token. The FDD workstream budget was approved and began to decentralize the process by having the community take over responsibility for approving grant eligibility, paying contributors to evaluate grants and accounts flagged as sybil by the ML algorithm, and even paying for contracts with the subject matter experts (SMEs) guiding the machine learning efforts.

The future of sybil detection includes a “community data lake” with anonymized data allowing data scientists anywhere to compete to find the best models using an open-source feature engineering pipeline. Sybil accounts will be detected across Ethereum addresses, DIDs, and Gitcoin accounts. Eventually, meta-models will give even greater insights.

The insights from this information will be exported to the community in a decentralized version of proofofpersonhoodpassport.com. Any developer will be able to access Gitcoin’s sybil-resistance for their own dapps by simply adding a few lines of code. Quadratic voting and staking will be safe to use. This will create a Cambrian explosion in the options available to mechanism designers finding new ways to allocate resources using web 3.

The best part is it will be ethically owned by the community.

Apply to Join GitcoinDAO here

Purpose of this post:

  1. To identify and define the problem of sybil attacks
  2. To align all stakeholders on the past, present and future state of the project
  3. To have a document that excites potential DAO contributors to participate
  4. To share with other organizations to gain context before collaboration

What is a Sybil attack?

Balancing power in collective decision making

A sybil attack can happen in any network attempting to provide equal opportunity to all participants. In the use case of Gitcoin Grants, the network of participants is allocating a pool of matching funds by making donations to the projects they support.

Historically, the mechanisms used to disperse funds in this manner would be one of the following:

1 person, 1 vote (1p1v)

Each (human) individual gets 1 vote on how to allocate funds. This can cause problems of “crowd-madness”, popularity contests, and the best allocators of capital not having enough control. To be trite, the worry here is that we become Idiocracy.

1 dollar (or share), 1 vote (1d1v)

Each dollar donated is counted as a vote regardless of who donated it. The problem here is that the wealthy end up not only capturing the funds, but the governance itself. Plutocracy is the end state of a system based on 1d1v.

Relationship between consensus & sybil-resistance

In a network attempting to provide legitimate outcomes, power must reside in the consent of the governed. In many crypto networks, the consent to follow the rules (including the ones you didn’t agree with) is based on your continuing to hold the network asset.

The community (those who hold a stake in the network) sees legitimacy in blockchain-based networks because the consent of the governed is determined by open-source (verifiable) code without room for subjectivity.

(This article will not discuss the contiguous layers of social coordination which DO provide room for subjectivity outside of the code, i.e., code is NOT law, but it would be irresponsible not to mention them here!)

Proof-of-work (PoW) and proof-of-stake (PoS) are not consensus mechanisms.

They are sybil-resistance protocols designed to increase the legitimacy of the outcomes a network consensus protocol (BFT, aBFT, HoneyBadger BFT, Snowball) provides. These provide probabilistic guarantees that the network worked as intended.

The combination of these standards in consensus and sybil-resistance results in a mechanism design where the optimal strategy for any individual participating in the network is aligned with the intended outcomes of the community.

The Need for Sybil-Resistance in Gitcoin Grants

Fake ballots are a good analogy to a sybil attack. An individual whose voice should count as one individual is deliberately falsifying the way their voice is counted.

In Gitcoin Grants, a sybil attack means that a user spreads their funds across multiple wallets and donates to the same project. Instead of one person donating $16, which would count as 4 credits toward how the matching pool is allocated (the square root of 16), they make 16 separate $1 donations. The square root of 1 is 1, thus giving the “attacker” 16 credits instead of 4.

This means an attack is only profitable if it gains more in illegitimate matching allocation than the attacker’s cost of gas. Additionally, it is very easy to spot this behavior when an attacker only donates to their own grant. Many attacks we see involve the attacker also making donations to legitimate grants to obfuscate the pattern.

This makes the attack equation: the cost of gas plus the cost of valid donations (based on the trust bonus) must be greater than or equal to the amount gained in illegitimate allocation from the matching pool for the attack to be deterred.
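
A minimal sketch of the arithmetic above, using this article’s simplified credits-as-square-roots framing rather than the full quadratic funding formula:

```python
import math

def matching_credits(donations: list[float]) -> float:
    """Simplified framing from this article: each donation contributes
    the square root of its amount as matching credits."""
    return sum(math.sqrt(d) for d in donations)

print(matching_credits([16]))      # one honest donor: sqrt(16) = 4 credits
print(matching_credits([1] * 16))  # $16 split across 16 wallets: 16 credits

def attack_deterred(gas_cost: float, valid_donation_cost: float,
                    illegitimate_gain: float) -> bool:
    """The defense condition described above: the attack is unprofitable
    when its costs meet or exceed the illegitimate matching gained."""
    return gas_cost + valid_donation_cost >= illegitimate_gain
```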

The problem, which began in GR7, is that the zkSync layer 2 integration brought the cost of gas way down. Our solution must now involve raising the price of identity.

Defend Quadratic Funding Against Sybil Attacks

How Gitcoin Defends the Grants Mechanism Today

If we could be sure that each individual human was only donating from one account (or address), we could solve this problem today. Unfortunately, this is an unsolved research problem which is being worked on by Proof of Humanity, BrightID, Idena, and others. Gitcoin integrates with these solutions to allow funders the ability to verify their identity.

A “Trust Bonus” is given to the funder to allow their contributions to be assigned a larger weight in the matching calculations based on how many of the identity integrations they have verified.

This trust bonus helps, but it isn’t the end solution. A comprehensive solution will take all variables into account.
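
The exact weights are not specified in this post, so the following is a hypothetical illustration of how a trust bonus could scale a donation’s matching credit; the multiplier value is invented:

```python
import math

def weighted_credit(donation: float, trust_bonus: float = 1.0) -> float:
    """Hypothetical: scale a donation's matching credit by the funder's
    trust bonus. Gitcoin's actual weighting may differ."""
    return trust_bonus * math.sqrt(donation)

# A funder with several verified identity integrations (say, trust_bonus = 1.5)
# would have a $16 donation weighted as 1.5 * 4 = 6 credits.
print(weighted_credit(16, trust_bonus=1.5))
```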

During Grants Round 7, we began working with BlockScience to develop a model for flagging donations from users who displayed behavior which presents as adversarial. Over the next few rounds, a random forest algorithm was trained on a data set built around some of the most important behavioral indicators, and we began to flag potentially sybil accounts.

https://medium.com/block-science/deterring-adversarial-behavior-at-scale-in-gitcoin-grants-a8a5cd7899ff

The algorithm also needed human input: first, to train it as to what really does present as adversarial; second, to have human evaluators supervising the actions of the algorithm so sanctions would not be entirely algorithmic, but overseen by our community.

By Grants Round 9 we had the sybil detection pipeline functioning with human evaluators. There were enough potentially sybil accounts flagged that we couldn’t supply enough human evaluators to individually review every flag. Therefore, we decided it best not to take punitive actions.

We instead chose to pay the “Fraud Tax”. This meant no grant would receive less than they were expecting. We also began to manually turn off the matching eligibility for users deemed sybil by both the algorithm and a human reviewer, or by a human reviewer alone. We did not sanction the ones which were only flagged by the algorithm.

Each round the algorithm was tweaked. New user behaviors were introduced and the team adjusted. During this time we were highly practical about the sensitivity vs specificity tradeoff of these types of algorithms. We decided to minimize false positives even if it meant letting some sybil accounts slip through as false negatives.

GitcoinDAO & FDD Effects on GR11

https://medium.com/block-science/gitcoin-grants-round-11-anti-fraud-evaluation-results-50f4b0f15125

The Fraud Tax was dropping round by round as we detected, evaluated, and manually sanctioned the users found to be sybil. We continued to pay the Fraud Tax until GR11, when the stewards voted that we should no longer pay it, thus stripping the incentive away from these attackers.

We were able to take this step because of significant advancements the DAO made possible.

With the FDD Q3 budget being passed on August 23rd, the stream had two weeks to organize before the start of GR11 on September 9th. Solving sybil-resistance for Gitcoin grants is a primary initiative of the FDD. We focused on three key areas for GR11 which we knew we could execute with our first budget.

Reduce the number of “illegitimate grants”

The policy stream maintains the participation policy for which grants are approved for each round. Then, the Grant Approval squad reviews the grants one by one and makes a recommendation. The last step is the activation of the approved grants by someone with proper access. This is a double check on the work of the reviewers as well.

This entire process is made possible by providing transparency. During GR10 we set up an integration with Notion providing a Public Oversight page where grant creators can check the status of a grant and see the reasons why it was denied or approved.

The Grant Approval squad also allows the FDD workstream to use incentives to put decentralizing pressure on the process. Without the dRewards system, the job would most likely require one part-time employee to make these decisions. dRewards allows us to “tune in” to the right amount of incentive needed to get multiple community reviewers to participate in the decision making process.

Verify the results of the machine learning algorithm

Another big step forward during GR11 emerged via the ML Human Evaluations squad. Using the same dReward mechanism as described above, the number of human reviewers and evaluations completed grew.

The contributors in this squad look at the accounts flagged by the ML algorithm and provide feedback as to whether the algorithm is correctly identifying sybil accounts. Many behaviors, including simply being a new account, can make a legitimate user look like a sybil attacker. Adding human oversight to the process trains the algorithm to slowly get better and adapt to new adversarial behaviors we hadn’t seen before.

A tangential benefit of these reviews is that they provide a basis for statistical analysis to validate that the ML algorithm is finding approximately the correct number of sybil accounts.

Increase the transparency and community participation in decision making

Many of the decisions made by FDD are ones that need an oversight and appeals process. Eventually, some decisions might go all the way to the stewards. Until the stewards ratify a policy, social norms are encouraged as guidelines.

There are times when FDD has to make a decision for the round to continue. Let’s say a user flags a grant for having had a token sale.

In the past, it would have been someone on the core team who made the decision and moved on. Now, the decision is recommended by community reviewers. Then the final decision is made by either Disruption Joe or the Gitcoin Holdings team.

FDD has worked to create public oversight pages where a grant owner can see why their grant is approved or denied. There is also one for disputes. In our token sale example, the dispute would be listed publicly. The results would show not only the core team reasoning, but also what community reviewers thought about the situation.

This connects to the sybil account issue because the sybil attack only works if there is a grant accepting the mis-allocated matching funds. The components of the FDD anti-sybil effort cannot be reduced to analytic reasoning. The problem is complex and requires complex solutions.

Gitcoin Grants Machine Learning Use Case

Model Training Considerations

https://medium.com/block-science/towards-computer-aided-governance-of-gitcoin-grants-730de7bcdbef

Given the volume of transactions happening within a funding round, some kind of automated fraud detection is necessary. Automated decision-making requires machine learning, but fully automated machine learning systems are not, in general, effective in ambiguous situations. For this reason, as well as legal and ethical concerns, it was necessary to develop a semi-supervised, human-in-the-loop solution for sybil detection.

There are several flavors of machine learning: supervised, unsupervised, and reinforcement learning. Supervised learning requires an explicit ‘target’ variable; that is, the algorithm must know when it has made a correct or incorrect prediction. For our purposes, this would mean identifying a particular account as either ‘sybil’ or ‘not-sybil.’

Unsupervised learning doesn’t have that ground truth built in. These algorithms group observations based on their characteristics. So, given the data we have about Gitcoin funders, we might use an unsupervised algorithm to group or cluster the accounts, and then try to discern human-intelligible characteristics from those groups.

The third flavor, reinforcement learning, is primarily used in situations where an agent has to learn from and adapt to its environment; self-driving cars are the canonical example of reinforcement learning.

It would seem that a supervised learning approach would work best for detecting sybil attacks. But the ambiguity of sybil attacks makes true supervised learning unsuitable. That ambiguity is why the model being used is semi-supervised: both a machine learning process and a human review process are involved in the decision to flag and sanction a user. The inclusion of a human review process also means that we are able to increase the amount of labeled data that we provide for future iterations of the system.

Modularity and Workflow Design

In keeping with the spirit of a DAO like Gitcoin, the sybil detection system has been designed from the beginning to be modular. It has been built to allow and encourage contributions from anyone who wants to contribute, regardless of their skills. The system itself is composed of a series of microservices that collect and process data, feed data to human evaluators to label, train a random forest machine learning algorithm into a model, and then produce labels for the data. The human evaluators take a look at a random subset of users and try to determine whether or not a particular user is part of a sybil attack.

The microservices that are set up follow a standard machine learning workflow: data ingestion, feature engineering, model training, and model validation. By using human evaluators, we can introduce an element of model supervision into the process. Let’s take a moment to talk about what each of these pieces means in the context of our sybil detection workflow.

Once the data has been ingested and prepared, it goes into model training. Training a model is the process of putting an algorithm to work learning from the data it is given, the first modeling step once the data are actually clean enough to work with. Training here means that the algorithm we’re using – a random forest – looks at a subset of the data and learns patterns that are associated with particular outcomes. Once an algorithm has seen enough data, we can say that we have a model: a set of instructions for generating a decision based on consistent data.

Semi-Supervised Machine Learning Model Validation

A trained model is great, but the machine learning process is not yet over. The model now has to be validated against what is called a hold-out set or a validation set: part of the data not used in training. This is intended to simulate how the model will perform on ‘wild’ data – data it has not seen before. We certainly hope that the model performs well on this new data, but we must be prepared to make changes and revisions to our algorithm, our feature set, or even to go back to the drawing board if it does not.
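
A minimal sketch of that train-then-validate workflow, using scikit-learn and synthetic stand-in data; the real pipeline’s features and labels are not shown here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data: in the real pipeline, X would be engineered account features
# and y the sybil / not-sybil labels from prior human evaluations.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9], random_state=0)

# Hold out 20% of the labeled data to simulate "wild" data the model hasn't seen.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Validation: if these metrics are poor, revisit the features or the algorithm.
print(classification_report(y_holdout, model.predict(X_holdout)))
```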

In ‘true’ supervised learning, we generally have the advantage of knowing for sure what the ground truth of a particular situation is. We, as machine learning engineers, know the actual price of many, many houses in Ames, Iowa. We know the actual species of iris that we have measurements for. We do not know with absolute certainty whether a user is actually part of a sybil attack, or just looks like it is. Therefore, the algorithm learns its behavior from incomplete or ambiguous data. The addition of human evaluators, who look at a random subset of the user data and try to identify potential sybil attackers, helps not only to label new data for future iterations, but also to add information to the model’s decision-making process.
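
A hypothetical sketch of that labeling feedback loop follows; the table and column names are invented for illustration:

```python
import pandas as pd

def merge_human_labels(training: pd.DataFrame, reviews: pd.DataFrame) -> pd.DataFrame:
    """training: columns ['account_id', ..., 'label'] with machine labels.
    reviews: columns ['account_id', 'human_label'] from evaluator decisions."""
    merged = training.merge(reviews, on="account_id", how="left")
    # Where a human reviewed the account, their verdict overrides the machine label.
    merged["label"] = merged["human_label"].fillna(merged["label"])
    return merged.drop(columns=["human_label"])
```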

Ethical Considerations

The technical considerations of building an automated, or automatable, decision process also carry ethical considerations. We do have to keep in mind very human concerns like reviewer bias and compliance with existing legal frameworks, as well as issues with the algorithms themselves. We’ll treat the algorithmic bias concerns first, and conclude with discussion of the human ethical and legal considerations.

Sensitivity vs Specificity Trade-Off

In any classification algorithm – an algorithm that tries to sort observations into two or more groups – there is a trade-off between sensitivity and specificity. In making the algorithm more sensitive, we increase the likelihood that a given observation will be flagged as sybil. But as we do that, we reduce its specificity, its ability to differentiate true sybil attacks from things that just look like sybil attacks. In other words, a more sensitive algorithm is going to send up a lot of false positives. This is a trade-off built into the machine learning process, and it requires human consideration to resolve.
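
The trade-off can be made concrete by sweeping the decision threshold of a classifier. A minimal sketch, assuming the model outputs a sybil probability per account:

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_prob: np.ndarray,
                            threshold: float) -> tuple[float, float]:
    """Sensitivity (true positive rate) and specificity (true negative rate)
    at a given decision threshold over predicted sybil probabilities."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

# A lower threshold flags more accounts (higher sensitivity) but misclassifies
# more honest users (lower specificity); the right balance is a human decision.
```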

Bias Potential

For all of our attempts at rationality and objectivity, humans remain flawed observers of human action. Two reviewers can look at the same user account data and come to completely different conclusions about whether or not a particular user is a sybil attacker. This is baked into the human condition. During the most recent round, for example, a reviewer who is fluent in Chinese looked at a user whose GitHub profile is in Chinese and saw that it was gibberish. A reviewer who was not fluent in Chinese would not see that so clearly, and might even give that profile the benefit of the doubt because it was in a foreign language.

This is just one example of human bias entering into the fraud detection efforts. In this case, it’s benign: knowledge of Chinese helped spot a bogus account. It’s obviously possible for bias to be malign. Should a sybil attacker join the ranks of the evaluators, they’d have inside knowledge to help them get around the system and its safeguards. But this potential for bias is not reducible; it is a trade-off, much like anything with machine learning. We use human evaluators to help obviate algorithmic bias, but they will by default introduce some bias of their own.

Regulatory Concerns

Another reason for the use of a human-in-the-loop, semi-supervised process is legal. The General Data Protection Regulation (EU GDPR) Article 22 requires that any person have the right to not be subject to a decision that affects them based solely on automatic processing. A fully automated system – one that does not have the human-in-the-loop semi-supervised approach we have built – would absolutely fall foul of the GDPR, creating serious legal ramifications for the DAO.

Roadmap for FDD Anti-sybil Detection

Deliverables for Q4 2021

Integrate sybil results into the live estimates shown on site

The current setup shows the estimates without any sanctions being taken. Now that the stewards have approved NOT paying the Fraud Tax, we know that we can adjust these amounts on the estimates during the round rather than waiting until the end.

This will benefit grant owners by eliminating some of the horse-race effect of trying to beat the sybil grants’ momentum out of the gate.

Run the primary sybil detection pipeline

This pipeline is currently run by Blockscience. They send results to a Gitcoin endpoint every 12 hours as a CSV. The Gitcoin Holdings team then has control over when they want to sanction users who are flagged. An account which is sanctioned no longer has its donations counted toward the matching pool allocation.

A heuristic is used to identify the accounts flagged as sybil by the algorithm so they can be approved again if, as it acquires more information, the algorithm learns that they are not likely to be a problem. If a human evaluator identified an account as sybil, the script will not reactivate its eligibility.

Gitcoin Holdings runs the script for this heuristic. Before GR12, it was only run once at the end of the round.
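
A hypothetical sketch of the reactivation rule described above; the function name and threshold value are invented for illustration, not taken from the actual script:

```python
def should_reactivate(flagged_by_human: bool, latest_model_score: float,
                      sybil_threshold: float = 0.5) -> bool:
    """Sketch of the heuristic: only algorithm-flagged accounts are eligible
    for reactivation; human verdicts are never overridden by the script."""
    if flagged_by_human:
        return False  # a human 'sybil' verdict is final for this script
    # Accounts flagged only by the algorithm can be re-approved once the
    # model, with more information, no longer scores them as likely sybil.
    return latest_model_score < sybil_threshold
```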

Run Bsci algo in parallel with Data & DevOps team to prove out operational processes (Data & DevOps | Bsci)

This pipeline will not be “plugged in” to the active sanctioning of users. It will only be used to verify that our DAO contributors get the same results as the Blockscience team, proving their ability to run the process end to end.

Blockscience’s scope of work for GR12 includes a lot of process documentation that will empower the community to continue the work Blockscience started. This will allow Blockscience to keep working as subject matter experts rather than technical support.

Our intention with FDD is to end our contract with Blockscience as soon as we are capable of running the algorithms they built end to end with validated results. That is their intention too.

This will allow us to allocate budget to a wider group of participants and allow Blockscience to stay focused on cutting edge problems in the capacity they prefer.

Run community algo designed by Omnianalytics

The “community algo” will be at a very early stage during GR12 (and maybe GR13) and not used for sanctions. At this point, we are building the pipeline and adding contributors to the mix to begin pulling in more data points. Processes are being created to quickly get new contributors up to speed and able to deliver high quality work.

This part of the ML process can be thought of as the playground. It is where real world observations will be transformed into insights by data scientists collaborating to solve a problem. The best insights will be given to the Data & DevOps team to include in the primary ML pipeline that will be used for sanctions on the platform.

Open-source feature engineering is the primary focus of this stream for GR12. Any Gitcoin user will eventually be able to report insights or work on the process of changing insights to machine readable code which works best for getting accurate results.

Identify at least one feature using on-chain analytics

On-chain data provides a ton of insight. One of our first projects to come into our open source feature engineering pipeline is to find a way to incorporate at least one on-chain data point into our sybil detection analysis.

Create plan to identify sybil accounts based on eth address rather than Gitcoin User ID (Data & DevOps)

We are working with all stakeholders to design a way for sybil protection to be exported to any application or dapp. This will most likely be done using dPoPP. The primary purpose of this will be to provide sybil protection to dGrants. A secondary reason would be to support other apps needing to verify individual humans.

Q4 Stretch Goals and Other Work Started

Anonymized Data

Anonymized data allows more contributors to access data quickly and in a safe way. It provides a way for anyone to be given access to the data set without needing background checks or making trust assumptions. This will be crucial for building an open source feature engineering pipeline.
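
As one illustration of what safe sharing can involve, here is a minimal pseudonymization sketch; it is an assumption about approach, not a description of FDD’s actual anonymization pipeline:

```python
import hashlib
import hmac
import os

# Replace direct identifiers with keyed hashes before data enters the shared
# lake. Real anonymization requires more than this (removing quasi-identifiers,
# k-anonymity checks, etc.); this shows only the pseudonymization step.
SECRET_PEPPER = os.environ.get("DATA_LAKE_PEPPER", "dev-only-pepper").encode()

def pseudonymize(user_id: str) -> str:
    """Deterministic, keyed pseudonym: the same user maps to the same token,
    but the mapping cannot be reversed without the secret key."""
    return hmac.new(SECRET_PEPPER, user_id.encode(), hashlib.sha256).hexdigest()
```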

Data Access Policy

Setting up compliant access policies that work for the end users and contributors who need to access data to develop insights. Thinking through how data access works in a reputation-based system.

Community Data Lake

Applying to other DAOs to share their data in a safe way that allows a greater number of variables to be used in detecting sybil accounts. This also includes warehousing of data from cGrants, dGrants, GitcoinDAO, GitHub, on-chain data, and other sources in analytic DB formats. https://github.com/activeloopai/Hub

Set Requirements for dPoPP

The decentralized Proof of Personhood Passport (dPoPP) will allow a user to take their sybil score with them across the metaverse & web 3. A developer building a dapp can simply plug in a few lines of code (see the sketch at the end of this section) and choose to work with a binary or probabilistic sybil score. This could also offer developers the opportunity to choose the score they wish to use (e.g., use Proof of Humanity) or use the Gitcoin meta-model score.

This product will open up the opportunity for DAOs to easily incorporate quadratic mechanism design for voting and staking.
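
Since dPoPP had not shipped at the time of writing, the “few lines of code” might look something like the following; the endpoint, parameters, and response shape are entirely hypothetical:

```python
import requests

def get_sybil_score(address: str, mode: str = "probabilistic") -> float:
    """Hypothetical client for a dPoPP-style scoring service. The URL,
    query parameters, and JSON shape below are invented for illustration."""
    resp = requests.get(
        "https://passport.example.org/score",        # placeholder URL
        params={"address": address, "mode": mode},   # "binary" or "probabilistic"
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["score"])

# A dapp might gate quadratic voting on a minimum score, e.g.:
# if get_sybil_score(voter_address) >= 0.8: count_vote(...)
```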

Future High Leverage Opportunities

Sybil Detection Data Standardization

Creating interoperability between sybil resistance protocols will allow data scientists to focus on the science while administrative support handles the ETL & integration processes.

Unsupervised Learning Models

https://towardsdatascience.com/supervised-vs-unsupervised-learning-14f68e32ea8d

The greatest innovation happens where the desired end state is known, but there is no “leading the witness” in finding the best path to a solution. Once legitimate training data is available to lessen the ambiguity of sybil accounts the model currently faces, unsupervised learning models will be able to create new pathways to identify sybil accounts.

Creating Meta Models

https://medium.com/datathings/meta-modelling-meta-learning-34734cd7451b

Models that use the anonymized dataset will all have their own output. These outputs can be inputs into a meta-model which will find new solutions amongst the set of solutions. These can be meta-models linking scores connected to DIDs, ETH addresses, and GitHub accounts.
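
A minimal sketch of one way such a meta-model could be assembled, using scikit-learn’s stacking as a stand-in; the base estimators and identity surfaces are placeholders, not the actual pipeline models:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# The meta-model learns from the outputs of several base models, e.g., one per
# identity surface (ETH addresses, DIDs, GitHub accounts).
meta_model = StackingClassifier(
    estimators=[
        ("eth_address_model", RandomForestClassifier(random_state=0)),
        ("github_model", RandomForestClassifier(random_state=1)),
    ],
    final_estimator=LogisticRegression(),
)
# meta_model.fit(X, y) would train the base models and then fit the
# logistic regression on their out-of-fold predictions.
```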

Staking on Models (Numerai Signals)

https://docs.numer.ai/numerai-signals/signals-overview

Creating a mechanism that aligns the conviction of the builder with the mechanism design for finding the most accurate models has been tested by Numerai’s Signals protocol. Their Signals data science competition outperformed top hedge funds like Renaissance & Two Sigma on a long/short no-risk strategy trading the Russell 5000 during the 2020 bear market.

Machine Learning Models as NFTs – Agents & Economics

https://www.alteredstatemachine.xyz/how-asm-works/

Not only can the feature engineering be open sourced, but the model creation can be as well. The creators of models can use a protocol like Altered State Machine, which turns the models into NFTs, effectively letting a model trainer earn based on the economic significance of their model.

Non-ML Sybil Detection

Some of the data used for sybil detection in the ML pipelines can be used in other methods of identifying sybil accounts. Finding how to use this data to develop insights will give us a better view of how to detect and deter sybil accounts.

Optimality Gap

https://gitcoin.co/blog/how-to-attack-and-defend-quadratic-funding/

Analysis of the optimality gap is a good way to identify the grants which exist to game the system. While we have known about this measurement since GR10, we have not had the operational capacity to prioritize a strategy around this metric.

Tuning the trust bonus

https://gitcoin.co/blog/defending-quadratic-funding-in-grants-round-10-and-beyond/

The current trust bonus system is somewhat frivolously set with values that seem intuitively close, but lack any formal validation. The next step is finding mathematical reasoning for the values chosen.

In-round Data Visualization

Using visual models to make sybil behavior easier to see. For example, plotting grants with an x-axis of the number of contributions and a y-axis of the average donation amount allows us to see which grants are likely using sybil accounts to gain an advantage.
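
A minimal sketch of the plot described above, using matplotlib:

```python
import matplotlib.pyplot as plt

def plot_grants(num_contributions: list[int], avg_donation: list[float]) -> None:
    """Each point is a grant; clusters with many contributions but unusually
    small average donations are candidates for sybil review."""
    plt.scatter(num_contributions, avg_donation, alpha=0.6)
    plt.xlabel("Number of contributions")
    plt.ylabel("Average donation (USD)")
    plt.title("Grants: contribution count vs. average donation size")
    plt.show()
```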


Collusion Detection

https://medium.com/block-science/colluding-communities-or-new-markets-f64194a1b754

Behavior is not standardized, especially across cultures and continents. Therefore, we have not addressed collusion in this article because of the inability to clearly define it in the context of Gitcoin Grants. As the community begins to actively own their role in governing the mechanism, they will be able to establish norms which serve as guidelines that apply in specific situations.

A Sybil Detection DAO

Proposals to Other DAOs needing Sybil Detection

There are multiple ways we can propose collaboration between DAOs.

The first would be to offer to fund a squad of data scientists to collect data from a partner like BrightID. This data would be engineered into features, anonymized, and then made available to the community in our community data lake.

Another model would be to apply for funding from a DAO like ENS or even multiple other DAOs. Their support could partly fund our decentralization of the human oversight process in the ML pipelines. A portion could be held back for a Gitcoin contributor, or the DAO itself, to act as a steward to the ENS DAO.

Launching a Sybil Detection DAO

The long-term goal of this stream is to provide expert data science services which benefit public goods. As the stream gains a history of contributions, it also gains a bottom-up source of legitimacy with aligned values.

Fractal layers of curation are the best opportunity for building verifiable unique human identities in web 3. A token model allows those with capital to purchase the asset of the DAO, enabling those with time and passion to catalyze the shared goals into action.

This DAO would be governed by a non-transferable governance token guiding its decision making with funding provided via GitcoinDAO through FDD.

There are public goods questions which machine learning models can help us better understand. These types of questions are best answered via publicly owned and governed infrastructure which Gitcoin is providing as a public good.

This article was written collectively, with contributors including Disruption Joe, Blockscience, Omnianalytics, Gitcoin Holdings, and the Fraud Detection & Defense workstream of GitcoinDAO.

