
DAO Index: Piloting the V0.9 Questionnaire

Reflecting on piloting the V0.9 questionnaire and the results we obtained

Published on Apr 11, 2024

Introduction

As part of our work on the DGSF project, we are developing an assessment framework to benchmark and compare DAOs.

In pursuit of that goal, we have developed the DAO Index, a set of principles that we believe can provide a reference point for working towards the Ideal DAO (ID), operationalized as a questionnaire to assess adherence to the principles by evaluating real-world DAO practices.

In this article, we describe our work piloting the DAO Index principles and questionnaire, as of Version 0.9 (V0.9).

The main goal of the pilot was to determine the general applicability of the principles to DAOs, receive feedback on the questionnaire’s design and our selected principles, and gather lessons to guide future development of the DAO Index.

We invite feedback on our work. Please leave your comments here on PubPub or Hypothesis (public channel), or send an email to [email protected].

Terms

Table 1

| Term | Definition |
| --- | --- |
| ID | Ideal DAO |
| V0.9 | Version 0.9 |
| DAO | Decentralized Autonomous Organization |

Background

Before assessing DAOs, we needed to determine an appropriate basis for benchmarking and comparing DAOs.

We settled on principles (as of V0.9), “[a] basic idea or rule that guides behavior, or explains or controls how something happens or works” [1], because principles:

  1. influence and constrain the governance and technical choices an organization can make [2][3],

  2. provide a normative logic for social governance [4],

  3. provide a meeting place for academia, industry, and society to agree on how DAOs should be defined and organized [5], and

  4. provide a blueprint for how to organize [6].

We believe that all DAOs should be characterized by the DAO Index principles.

Our choice of principles is influenced by our view (as of V0.9) of DAOs as organizations that are 1) ideologically-driven, 2) self-governed through social and algorithmic governance mechanisms, and 3) self-infrastructured through a combination of blockchain technologies and other decentralized technologies [7][4][8][9].

We believe the principles should help address issues with part 1, and provide appropriate constraints for parts 2 and 3 [10][11][12].

The principles also help us conceptualize how an ID, a reference point for how DAOs should be governed and operated (i.e., a standard for DAO practices), could be described, which real-world DAOs can use as a guide for their own development.1 We believe the ID can also serve as a base for describing IDs in specific contexts (e.g., Decentralized Finance (DeFi) and Decentralized Science (DeSci)).

Additionally, most of the issues associated with DAOs, such as concerns with plutocracy, generally relate to parts 1 and 2 rather than to self-infrastructuring (it is also easier to find signals for these parts by simply focusing on on-chain activity) [13].

Methods

Literature Review

We reviewed seventy-five (75) articles from DAO-related academic and grey literature to develop items (or criteria) relevant to assessing DAOs (primarily, identifying signals of good and bad practices), and to identify principles that could orient DAO practices towards the ID [3][2][14][15][4].

We searched for literature from academic search engines such as Semantic Scholar for insights from academic researchers, and popular online platforms for Web3 discourse such as Twitter, Mirror and Substack, to find knowledge and insights from real-world practitioners and industry researchers in the grey literature.

We believe this will allow for the principles to bridge understanding between academia, industry, and society.

You can find our literature collection in the table below.

[Embedded table: literature collection]

Principles

The principles, as of V0.9, and the description and rationale for each principle, are described in the table below.

[Embedded table: DAO Index principles, descriptions, and rationales]

Instrument Development

Initially, the questionnaire was developed by generating a set of items (used interchangeably with questions).

Generally, questions were generated by drawing insights from our literature collection, which informed us of good and bad practices for DAOs. The same approach applied to the principles.

After developing our set of questions, we grouped them under the principles.

The items are currently constructed as pass/fail items, with desired outcomes leading to pass (yes, partial), and undesired outcomes leading to fail (no, does not answer).

The questionnaire can be considered an audit tool for ensuring adherence to the DAO Index principles [6]. Through our audits (referred to as assessments here), we hope to guide 1) values-informed decision-making by DAO operators, and 2) tool development by DAO infrastructure or tooling developers, to make working towards an ID technically feasible [6].

Additionally, by operationalizing the principles through a questionnaire, we are pushed towards “more concretely defin[ing] [our] values and principles in terms of measurable actions, so [our principles] can be readily assessed and audited” [6].

The questionnaire comprises forty-five (45) items in eight (8) dimensions (the dimensions here being the principles), with the following item count per principle.

Table 2

| Dimension | Items |
| --- | --- |
| BSP | 13 |
| PDC | 11 |
| CPB | 2 |
| D2D | 2 |
| IDT | 7 |
| CBC | 3 |
| OT | 4 |
| HCAG | 3 |

The model for how the principles are operationalized as a questionnaire is described in the graphic below.

[Image: DAO Index principles operationalized as a questionnaire]

You can find the questionnaire, and the rationale for each item, in the table below.

[Embedded table: questionnaire items and rationales]

The data dictionary for the questionnaire data fields can be found in the table below.

Table 3

| Field | Description | Example |
| --- | --- | --- |
| Principle | An organizing principle for DAOs, specific to the particular version of the DAO Index in use | |
| Indicator | An area where a DAO does, or can, turn the principle into practice | |
| Question-ID | The identification number of the question, per principle | |
| Question | A yes/no question, written in text, to assess whether a DAO’s practices adhere to or advance a principle | |
| Plain-English | A response to the Question in English (or natural language) | Yes, No, Partial, Does not answer the question, N/A |
| Points | The numeric score received for the question, corresponding to the Plain-English response | |
| Explanation | A brief explanation of why the DAO received a certain Plain-English and Points response | |
| Sources | The sources referred to in drafting a response | |
| Author | The rater(s) or respondent(s) to the Question | |
| Notes | Respondents’ notes regarding a question | |
| Search Difficulty | The difficulty of finding documents to reference in responding to a Question | |
| Documents | The documents cited in a Response | |
| Snippets | The snippets cited in a Response | |
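
To make the schema concrete, the sketch below models one questionnaire response record using the fields above. It is our own illustration with hypothetical names and types, not the actual Airtable schema used by the Scorecard Toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    """One questionnaire response, mirroring the data dictionary above.

    Hypothetical field names and types for illustration only; not the
    actual schema used by the DAO Index Scorecard Toolkit.
    """
    principle: str        # e.g., "BSP"
    indicator: str        # area where the principle is (or can be) practiced
    question_id: str      # e.g., "BSP-02"
    question: str         # the yes/no question text
    plain_english: str    # "Yes" | "No" | "Partial" | "Does not answer" | "N/A"
    points: int           # 100, 50, or 0
    explanation: str = ""
    sources: list[str] = field(default_factory=list)
    author: str = ""
    notes: str = ""
    search_difficulty: str = ""
    documents: list[str] = field(default_factory=list)
    snippets: list[str] = field(default_factory=list)
```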

A graphical representation of how the fields in the questionnaire table relate to each other is described below.

[Image: relationship between documents, responses, questions, snippets, and principles]

Case Studies

Through exploratory case studies, we sought to determine whether the principles could be applied to real-world DAOs and serve as a basis for benchmarking and comparing DAO practices.

We applied the questionnaire to eleven (11) DAOs, described in the table below.

[Embedded table: the eleven DAOs assessed]

For more details on the DAOs’ characteristics, please refer to each DAO’s DeepDAO profile page.

Scoring Method

As of Version 0.9, responses are scored using the following method.

The scoring breakdown (Plain English response = corresponding numerical score) is described below.

Table 4

| Plain-English Response | Numerical Score |
| --- | --- |
| Yes | 100 |
| Partial | 50 |
| No | 0 |
| N/A | Points redistributed to other items in the principle |
| Does not answer | 0 |

If a DAO positively answers a question, then a Yes is appropriate [16].

If a DAO negatively answers a question, then a No is appropriate [16].

Partial is appropriate when a DAO positively answers the question, but the practices do not fully answer the question [16].

N/A is appropriate when the question does not apply to that particular DAO [16].

Does not answer is appropriate when the question is applicable to the particular DAO, but the DAO does not provide enough information to positively or negatively answer the question [16].

The questionnaire penalizes DAOs if there is not enough information to answer a question (refer to Does not answer response).

The score is calculated by summing the points accrued across questions and dividing by the total points possible for the applicable questions (i.e., 100 multiplied by the number of applicable questions).

Every question has a maximum of 100 points.

The overall score is produced from totaling the points received for every question.

Currently, there are no weights applied to scores per principle and scores per item, nor are scores required to have the same total.

We used the overall score to generate a rating for DAOs.
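
As a minimal sketch of this method (our own illustration, not the toolkit’s code), the normalized score could be computed as follows; excluding N/A responses from the denominator is what redistributes their points to the remaining items. The rating thresholds (e.g., for a D+ or an F) are not modeled here.

```python
SCORES = {"Yes": 100, "Partial": 50, "No": 0, "Does not answer": 0}

def normalized_score(responses: list[str]) -> float:
    """Score as a percentage of points possible for applicable questions.

    N/A responses are dropped from both numerator and denominator, which
    effectively redistributes their points to the remaining items.
    """
    applicable = [r for r in responses if r != "N/A"]
    if not applicable:
        return 0.0
    earned = sum(SCORES[r] for r in applicable)
    return 100 * earned / (100 * len(applicable))

# Example: three applicable questions out of four.
print(normalized_score(["Yes", "Partial", "No", "N/A"]))  # 50.0
```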

Data Collection

Data Sources

You can find the data sources for the materials (used interchangeably with evidence) cited in our assessments, i.e., our evaluations of DAOs with the questionnaire, in the table below.

[Embedded table: data sources]

Assessments

You can find our draft assessments in the table below.

[Embedded table: draft assessments]

Data Analysis

We tested the internal consistency of the Questionnaire V0.9 [17].

At this early stage, we only focused on:

  1. Cronbach’s Alpha to determine how well the questions per principle worked together,

  2. Cronbach’s Alpha If Deleted to see whether the Cronbach’s Alpha coefficient would be improved by removing an item, and

  3. average inter-item correlation to determine if any items were redundant or were not measuring the construct [17].
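
As a sketch of these three checks (our own illustration, assuming an assessments × items matrix of 0/50/100 scores per principle):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's Alpha for an (n_assessments, n_items) score matrix."""
    k = scores.shape[1]  # needs k >= 2
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(scores: np.ndarray) -> list[float]:
    """Alpha recomputed with each item removed in turn (needs k >= 3)."""
    return [cronbach_alpha(np.delete(scores, i, axis=1))
            for i in range(scores.shape[1])]

def avg_interitem_correlation(scores: np.ndarray) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = np.corrcoef(scores, rowvar=False)
    mask = ~np.eye(corr.shape[0], dtype=bool)
    return corr[mask].mean()
```

Note that an item on which every DAO received the same score has zero variance, making its correlations undefined; this is one plausible reason average inter-item correlations could not be determined for some items (see Results).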

We determined the overall response distribution, how DAOs performed per principle, and overall scores and ratings.

We could not conduct a confirmatory factor analysis (CFA) to test our conceptual framework (the DAO Index principles) at this time because we could not determine if the dataset was appropriate for factor analysis.

Tools

Scorecard Toolkit

To improve the ease of using the questionnaire, we developed the DAO Index Scorecard Toolkit, an Airtable base for completing the questionnaire, with tables to assist users (e.g., a glossary) and to manage the data (e.g., evidence) associated with responses.

You can find the Airtable base here.

[Image: DAO Index Scorecard Toolkit Airtable base]

Prototype Dashboard

We made our results publicly accessible through a web user interface, available at https://joan816.softr.app/.

You can find an embed of the dashboard below.

[Embedded dashboard]

Analysis Toolkit

As part of our work, we developed a Jupyter notebook on Google Colab to analyze the DAO Index assessments and other data, available here:

[Image: analysis notebook on Google Colab]

Data Collection & Archival Toolkit

As part of our work, we developed a Jupyter notebook on Google Colab to archive materials we cited as evidence in our assessments.

[Image: data collection and archival notebook on Google Colab]
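
Link rot in cited sources (see Assumptions and Limitations) can be mitigated by snapshotting each URL at assessment time. Below is a minimal sketch assuming the Internet Archive’s public Save Page Now endpoint; it is not necessarily what our notebook does, and the authenticated SPN2 API is more reliable for bulk use.

```python
import requests

def archive(url: str) -> str:
    """Request a Wayback Machine snapshot of a cited source.

    Sketch only: uses the public Save Page Now endpoint; the snapshot
    path is typically returned in the Content-Location header.
    """
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    return "https://web.archive.org" + resp.headers.get("Content-Location", "")
```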

Results

Cronbach’s Alpha

[Embedded table: Cronbach’s Alpha per principle]

An acceptable Cronbach’s Alpha score at this preliminary stage is a value between 0.60 and 0.80 [17][18].

Our Cronbach’s Alpha coefficients ranged from -0.279 to 0.816.

The PDC, CPB, IDT, and OT principles had unacceptable Cronbach’s Alpha values, suggesting a need to re-evaluate how their questions are grouped to measure those principles.

Cronbach’s Alpha if Deleted

[Embedded table: Cronbach’s Alpha if item deleted]

An item is a candidate for removal if the Cronbach’s Alpha coefficient improves when the item is deleted [17].

We excluded D2D and CPB from this analysis because there were not enough items to perform it (at least two (2) items must remain after an item’s removal).

Most items are likely to be kept because the Cronbach’s Alpha coefficient did not increase significantly or reach an acceptable value.

Items that we may delete in a future version because of this analysis:

  1. HCAG-03,

  2. CBC-03,

  3. BSP-07,

  4. BSP-11,

  5. BSP-02,

  6. PDC-10,

  7. PDC-04,

  8. PDC-09,

  9. PDC-02,

  10. CBC-01,

  11. OT-01, and

  12. OT-03.

Average Inter-item Correlation

[Embedded table: average inter-item correlations]

Acceptable values for average inter-item correlations are between 0.15 and 0.50 [17].

The average inter-item correlation ranges are described in the table below.

Table 5

| Principle | Range |
| --- | --- |
| PDC | -0.417 to 0.299 |
| CBC | 0.403 to 0.538 |
| BSP | -0.058 to 0.293 |
| IDT | 0.058 to 0.219 |
| D2D | 0.690 to 0.690 |
| HCAG | 0.340 to 0.503 |
| OT | 0.168 to 0.168 |

Unfortunately, we could not determine the average inter-item correlation for the following items:

  1. BSP-02,

  2. CPB-01,

  3. CPB-02,

  4. IDT-02,

  5. OT-04, and

  6. OT-02.

Only HCAG and OT had acceptable average inter-item correlation ranges [17], suggesting that some items may not measure their principle or may be redundant.

Response Distribution

[Image: response distribution]

The most common response was a Yes. The least common response was Not applicable.

Surprisingly, Does not answer was the second most common response.

The high frequency of Does not answer responses further confirmed our preconception that DAOs publish too little documentation (i.e., transparency poverty) to provide a holistic understanding of their on- and off-chain activities [19][20].

Additionally, the number of items for D2D and CPB is likely too small to support a proper analysis.

In the next version, we will need to increase each dimension to at least ten (10) items.

Ratings

[Embedded table: DAO ratings]

You can find the ratings for the assessed DAOs in the table above.

MakerDAO had the highest rating with a D+, while PrimeDAO had the lowest rating with an F.

Overall Scores

[Image: overall score per DAO]

The chart above shows the overall score for each DAO assessed with the questionnaire V0.9.

MakerDAO received the highest score, with an overall score of 3050.

PrimeDAO received the lowest score, with an overall score of 1950.

DAO performance per principle

The bar charts below show how each DAO performed per principle.

HCAG

[Image: HCAG scores per DAO]

BSP

[Image: BSP scores per DAO]

CPB

[Image: CPB scores per DAO]

D2D

[Image: D2D scores per DAO]

PDC

[Image: PDC scores per DAO]

IDT

[Image: IDT scores per DAO]

CBC

[Image: CBC scores per DAO]

OT

[Image: OT scores per DAO]

Comparative Performance per Principle

[Image: overlay radar chart of DAO performance per principle]

The overlay radar chart above shows how DAOs comparatively performed, given the principles.

This chart summarizes the results shown in the previous sections.

Cumulative Points Per Item

[Image: cumulative points per item]

The chart above shows the cumulative distribution of points per question.

DAOs performed best on CPB-01 and -02, D2D-01 and -02, CBC-01 and -02, and OT-02 and -04.

Cumulative Points per Principle

[Image: cumulative points per principle]

The chart above shows the cumulative distribution of points per principle.

The eleven DAOs evaluated performed best on CPB, OT, D2D, and BSP.

In order of performance:

  1. CPB,

  2. OT,

  3. D2D,

  4. BSP,

  5. PDC,

  6. IDT,

  7. HCAG, and

  8. CBC.

Inapplicable Questions

The items were generally applicable to every DAO assessed. Only one question was determined to be inapplicable to a particular DAO: BSP-01 for dOrg.

[Embedded table: count of inapplicable questions per DAO]

Unanswered Questions

[Embedded table: count of Does not answer responses per DAO]

Generally, we could find some information to answer a question during our assessments.

PrimeDAO had the most Does not answer responses with twenty-four (24), and MakerDAO had the lowest at seven (7).

We felt that too many DAOs were not transparent enough, with documentation generally too sparse to provide a good understanding of off-chain activities. However, this could also be due to the lack of information reporting standards in the DAO ecosystem. Hopefully, the DAO Index can help shed light on the need to publish more off-chain information.

An additional issue was that some items’ content could not be evaluated because we could not find any information to do so. Without adequate information to rely on, it was harder to determine whether the content of our questions needs to be revised.

This lack of information also makes it difficult to determine whether the set of questions works well together.

Discussion

Thoughts from Piloting the Questionnaire

From piloting or testing out the questionnaire for DAO Index V0.9, we gained valuable feedback and learned many lessons.

We found it difficult to respond to the HCAG items, primarily because of how we constructed the questions. In other words, the questions lacked enough clarity to meaningfully respond, given the evidence we found. Thus, we realized we need to improve the clarity of the questions.

Additionally, we realized that we may need more questions under HCAG to truly understand how this principle can guide the development and design of DAOs.

We received an interesting note from a MakerDAO member on the use of the Banzhaf Power Index for BSP-02. The member commented that BSP-02 should take into account quorum settings for different voting scenarios. We did not consider this situation originally when we created the question, as we assumed that DAOs would use a single voting setting for their decision-making.
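
To illustrate the member’s point, the brute-force sketch below (our own illustration, assuming a simple weighted voting game with hypothetical token weights) shows how Banzhaf power shifts when the quota changes:

```python
from itertools import combinations

def banzhaf(weights: dict[str, float], quota: float) -> dict[str, float]:
    """Normalized Banzhaf power index for a weighted voting game.

    Brute force over all coalitions, so only suitable for small voter
    sets (or token holdings bucketed into a few large blocs).
    """
    voters = list(weights)
    swings = dict.fromkeys(voters, 0)
    for r in range(1, len(voters) + 1):
        for coalition in combinations(voters, r):
            total = sum(weights[v] for v in coalition)
            if total >= quota:
                for v in coalition:
                    # v is critical if the coalition loses without them.
                    if total - weights[v] < quota:
                        swings[v] += 1
    total_swings = sum(swings.values())
    return {v: s / total_swings if total_swings else 0.0
            for v, s in swings.items()}

holdings = {"a": 50, "b": 30, "c": 20}  # hypothetical token weights
print(banzhaf(holdings, quota=51))  # {'a': 0.6, 'b': 0.2, 'c': 0.2}
print(banzhaf(holdings, quota=76))  # {'a': 0.5, 'b': 0.5, 'c': 0.0}
```

Under the lower quota, voter c retains some power; under the higher quota, c becomes a dummy voter, illustrating why BSP-02 should account for the quorum settings used in different voting scenarios.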

We found that the scoring method for responses was too harsh. In particular, DAOs that answered No to a question received zero points, the same as if they had not answered at all, even though finding enough information to answer a question also promotes our goal of improving transparency in DAO activities. Thus, we realized that we need to update our scoring method for V1.0.

We found that certain questions, such as BSP-09, were likely compound questions (i.e., the question sought multiple answers). Thus, we realized that we need to divide such questions into more specific questions in future versions.

Assumptions and Limitations

Our current work on the DAO Index V0.9 is subject to the following assumptions and limitations:

  1. Our DAO definition inherently excludes organizations that may be considered DAOs by others [15];

  2. The assessment takes an outside-in approach to assessing DAOs. Thus, we do not have complete information about the internal workings of the DAO, but only the publicly available information we can find provided by the DAO directly, or indirectly through third parties;

  3. The principles we selected may not be representative of ideals for how a DAO should be operated and governed (or for a vision of an ID). In other words, our principles may not reflect the views of members of DAOs, society, or academia;

  4. Our assessments were limited by the lack of standardized documentation, such as the Securities and Exchange Commission (SEC)'s standard for Form 10-K [21]. The lack of standardized documentation limited our efforts to identify potential sources for responses;

  5. Our dataset of eleven assessments is a small dataset;

  6. Our methodology suffered from the lack of a systematic research approach to developing the principles and questionnaire, which may have compromised our results or our ability to interpret them and determine reasonable outcomes;

  7. Our scoring method may be inadequate for benchmarking and comparing DAOs;

  8. Some of the questions may be organized under the wrong principle;

  9. As assessors, our own working knowledge may have hindered us from correctly interpreting a question, or certain materials, when formulating a response to it;

  10. Some of our older assessments from 2022 suffered from link rot, making it hard to re-check or review our responses because our sources were no longer accessible;

  11. Some of the questions (e.g., BSP-02), will make more sense if measured periodically (or monitored constantly), rather than solely when we conduct an assessment; and

  12. The inability to secure experts or DAO operators to assess content validity likely led to us including items that had issues with item construction, such as being ambiguous or overly complex.

Future Directions

Possible future directions we are considering include:

  1. Testing the reliability of the questionnaire;

  2. Developing a more robust research methodology for developing our conceptual framework, operationalizing our conceptual framework, and clarifying the DAO attributes we seek to measure;

  3. Developing a more robust scoring methodology for assessments;

  4. Comparing the DAO Index with other DAO assessment frameworks to determine concurrent validity and mappings between frameworks;

  5. Conducting more assessments so that we can perform an exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) on the instrument;

  6. Addressing issues with item construction by sharing items with more experts and DAO operators to assess content validity;

  7. Developing new tools to speed up the assessment process, such as a Banzhaf Power Index calculator for DAOs;

  8. Improving our assessment process to speed up assessments while improving the accuracy and clarity of responses;

  9. Improving our existing tools;

  10. Increasing the number of items per dimension to at least ten items; and

  11. Adding additional criteria for testing internal consistency.

Appendix

We also have charts for the response distribution per principle and per DAO. If you are interested in these charts (and any other chart), please leave a comment or send an email to [email protected].

Glossary

[Embedded table: glossary]