Meeting - 19th July 2022
Meeting 3 - Fund 8 - dRep White Paper Working Group Meeting
Aharon Porath
Consenz
Kenric Nelson
Philip Lazos
Steph Macurdy
Stephen Whitenstall
Thorsten Pottebaum
George Ramayya
Frank Albanese
Documentation funded by Fund 7 - QA-DAO Transcription Service
(Used for Project Management tracking and reporting)
Brief Updates/Notices from each participant - All
Stephen Whitenstall - 00:02
Kenric Nelson - 00:41 - And Budget Update
Thorsten Pottebaum - 01:52
Aharon Porath - 02:37
Steph Macurdy - 04:03
Philip Lazos - 04:43
Any Budget updates or issues?
Using allocation to distribute co-allocations
Money available for community reviewers
Kenric Nelson - 00:41 - And Budget Update
Any updates on Project Plan timetable and deliverables
There are no real project plan updates; we are continuing meetings and things are proceeding smoothly. I will do a monthly report on the 24th July 2022 and mention that we're holding these meetings regularly.
Please prepare Detailed Outline of your section for the next meeting - All
The goal for July 2022 is to have a solid draft of each section of the White Paper. The target for each section is 3 to 5 pages.
We intend to be ready in August 2022 to invite the Catalyst community to provide reviews, and then in September 2022 to incorporate that feedback and complete our work.
George and Frank are likely out of the picture for the present.
Discussions with Thorsten & Steve concluded that we should focus on a good introduction for the White Paper. As the lead editor, I should lead the writing of that [introductory] section.
Once others have made progress [...] I can write two to three sentences introducing your section.
I have been reading the [available] papers on the pros and cons of liquid democracy and vote delegation.
There are a couple of models that people have used to study vote delegation [that each have] a few different objectives.
And most of the papers have pessimistic results.
At a first reading of the papers' abstracts this seems like a massive revelation: that they have uncovered the dirty secrets of liquid democracy.
But when you check the models they are using, it becomes pretty clear that under [those] benchmarks liquid democracy will always look bad in the worst case.
In response to this, there are a few papers that have come out recently that [form] a critique of the critique of liquid democracy. They explain that the previous modelling may be inaccurate and that the worst-case analysis, which is pretty standard when studying algorithms, is too pessimistic to be used here.
For instance, take the first significant paper on vote delegation [and] liquid democracy. Essentially, two properties are defined. One of them is called positive gain, which is how much better the outcome can be. The other is do no harm.
[Using] a model that can also be criticized (I'll explain it in a bit), they say that there is no mechanism that always has positive gains and never makes the result worse.
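[Editor's note: as described here, "positive gain" means the delegation mechanism strictly improves the expected outcome in at least some instances, and "do no harm" means it never makes the expected outcome worse than direct voting; the cited result is that no delegation mechanism satisfies both in the worst case.]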
In isolation this sounds interesting. But actually, what this means is that you could have a mechanism that in most cases actually gives very good results.
But there could be some cases where it [just] makes a really good outcome slightly less good.
Stated as properties, it seems that this mechanism has the potential to produce worse results as well as better, because they don't really go into how much better the good results are compared to no delegation, or how much worse the bad ones are.
Papers started looking into it and saying: hang on, these negative results are there, but all this means is that a vote that was going to go perfectly without delegation is going to go almost perfectly using delegation.
Technically it became a bit worse. But the potential benefit [that outweighs this] is that a vote that would have produced a bad result without delegation [can now] produce a better one with delegation.
Essentially, the way the first paper [was] presented was not particularly fair.
Another criticism of the original paper concerns its assumptions: [for] every voter there are two outcomes, one correct and one wrong; every voter independently has some chance of voting [for] the correct outcome; and that chance is greater than 0.5 for every voter.
This is an assumption of that paper and most of the subsequent ones. The problem [with this] model is that it quickly becomes clear that the more voters you have (if all of them have a greater than half chance of picking the correct outcome) the higher the probability that you select the good outcome just by having more voters.
[But] if you do delegation, even if the voters delegate to others who are more informed than they are, the pool of actual voters goes down, so you lose this large-numbers effect that guarantees that the more voters you add into the mix, the more likely it is that you get the optimal outcome.
Specifically, they cite various constructions where essentially thousands of voters each have a one half plus epsilon [an arbitrarily small positive quantity] chance of being right. But because there are thousands of them, on average it's way more likely that the good outcome will have more votes than the bad outcome.
In these constructions, all of them decide to delegate to an expert who has a 0.99 chance of being right. But because so many of them delegate, in the end that 1% chance of the expert being wrong makes the outcome worse than having a million people who are just barely informed vote.
And this is not a particularly realistic outcome.
I don't think it's fair to assume that every voter has an above-half chance of being right, because mathematically that is almost guaranteed to give you a bad result for delegation: the more you delegate, the more you lose these concentration bounds on the probability that the voters will, on average, vote in favor of the good outcome.
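[Editor's note: a minimal sketch of the large-numbers effect described above. The numbers are purely illustrative and are not taken from any of the cited papers.]

```python
import math

def majority_correct_prob(n, p):
    """Probability that a strict majority of n independent voters,
    each voting for the correct outcome with probability p, gets it right."""
    log_p, log_q = math.log(p), math.log(1.0 - p)
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + k * log_p + (n - k) * log_q)
        total += math.exp(log_term)
    return total

# Illustrative comparison: a large, barely informed crowd versus one expert.
print(majority_correct_prob(10001, 0.52))  # crowd of 10,001 at p = 0.52 -> roughly 0.99997
print(0.99)                                # everyone delegates to a 0.99-accurate expert
```

Under these illustrative assumptions the barely informed crowd outperforms the single expert, which is exactly the effect the pessimistic delegation results lean on.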
So there are a couple of papers dealing with this model and saying that liquid democracy is bad for this and that. And then there are a few other papers criticizing the first ones, explaining that the first results are too pessimistic and showing how, under more sane circumstances, vote delegation can actually improve the outcome even with this benchmark.
And specifically, there is also one very recent paper that does delegation for liquid democracy with one step rather than having re-delegation.
I'm going to include these three lines of work: the original negative results, the critique of them, and the final paper covering one-step delegation. Then I'll say a few things about what Catalyst is doing.
That sounds great. That's a really nice contribution because it gives a kind of high-level view of what people have investigated, what people are documenting as potential pros and cons, and the challenges of doing this kind of research and understanding exactly [how] making forecasts might happen.
It is very practical in terms of examining what assumptions people are making and how we need to examine those assumptions, particularly where there is an extreme example, and what you can reasonably expect of the average voter.
[Firstly] I wonder how you measure the good outcome of voting.
[Secondly] those papers assumed a large participation, [which] also requires motivation.
If you focus on direct democracy, direct voting, then you may lose some engagement, because not everybody has the same motivation and free time.
[But] if you focus on some kind of delegation then you can bring in those that are motivated to participate in the voting. And maybe this can level out the pros and cons of delegation and liquid democracy in comparison to direct voting.
I have an answer to both questions.
So the first about how they measure the quality of the outcome.
Almost every paper has the following model. There's two outcomes, let's say the good and the bad outcome. And the only thing we care about is the chance that the voters take the good outcome.
And the way they model the vote is that every voter has a probability of selecting a vote in favor of the good outcome.
So let's say if there's just one voter whose probability of voting for the good outcome is 0.7.
Then the score of the selection, let's say, is 0.7. If we have a million voters, each with a probability of 0.7 of voting for the good outcome, then it's almost certain that the good outcome will win.
So this will have a score of one: because we're taking such a huge sample, it's very unlikely that the bad event will happen, even though everybody only has a 70% chance of voting for the good thing.
So that's the benchmark they are using: just a chance for every voter to do the right thing. What's the chance that more than half of the voters end up voting in favor of the right outcome?
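[Editor's note: in symbols, the benchmark described above is: with n independent voters who each vote for the good outcome with probability p, the score of the election is Pr[Bin(n, p) > n/2]. For a single voter with p = 0.7 the score is 0.7; for a million voters at the same p it is effectively 1, which is the large-sample effect mentioned earlier.]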
[Secondly] In terms of the engagement effort, there is only one paper that seriously [addresses this].
And they do [document] a relationship between how much effort somebody puts in and how close to 1 the probability of voting for the right outcome becomes.
Essentially, from the voters' perspective [...] the effort they put in to increase their chance of voting for the good outcome is subtracted from their personal utility.
In that model there is zero cost of delegating [1] and a non-zero cost to voting directly, going from zero to one the more engaged the voter is.
Unfortunately this paper, other than setting up the model and explaining what the rules of the game are (they call this the delegation game), doesn't have any serious result. [All] they show is one sort of bad Nash equilibrium of how that vote could go. Everything else, like better delegation mechanisms or other equilibria, they leave as future work. So other than the conceptual contribution of the model it's not very definitive.
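[Editor's note: a toy sketch of the cost structure described above, for one voter considered in isolation. The functional forms are illustrative assumptions, not taken from the paper, and the strategic interaction between voters that the actual delegation game models is ignored here.]

```python
from dataclasses import dataclass

@dataclass
class DelegationGameVoter:
    """Toy model of a single voter's trade-off between voting directly
    (costly effort, better accuracy) and delegating (zero cost)."""
    base_accuracy: float        # chance of voting correctly with zero effort
    value_good: float = 1.0     # voter's value for the good outcome
    value_bad: float = 0.0      # voter's value for the bad outcome

    def accuracy(self, effort: float) -> float:
        # Assumption: effort in [0, 1] moves accuracy linearly towards 1.
        return self.base_accuracy + effort * (1.0 - self.base_accuracy)

    def expected_utility_direct(self, effort: float) -> float:
        # Voting directly: the effort spent is subtracted from utility.
        p = self.accuracy(effort)
        return p * self.value_good + (1 - p) * self.value_bad - effort

    def expected_utility_delegate(self, delegate_accuracy: float) -> float:
        # Delegating: zero personal cost; the outcome rides on the delegate.
        return (delegate_accuracy * self.value_good
                + (1 - delegate_accuracy) * self.value_bad)

# Example: a 0.6-accuracy voter compares modest effort against delegating
# to a 0.9-accuracy delegate.
v = DelegationGameVoter(base_accuracy=0.6)
print(v.expected_utility_direct(effort=0.2))               # 0.68 - 0.2 = 0.48
print(v.expected_utility_delegate(delegate_accuracy=0.9))  # 0.90
```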
One downside is that this is a pretty new area. All these papers are from the last five to 10 years. [...] And I can say for a fact that [no one] has actually run any election [...] related to that research.
I think part of keeping the scope realistic has been this idea that we're trying to lay out what should be measured, and it's for a later project to actually do those measurements as the dReps within Catalyst evolve.
There's a great opportunity here in using these models in real world use cases [...] from a pragmatic perspective.
It's a good connection, because this is what I hoped Consenz would bring: the opportunity to examine these assumptions in real use cases, [working] in parallel with the work of the IOG team and the Catalyst process.
[Presently] there are really practical [use cases] like editing guidelines for Proposal Assessors, the Code of Conduct for Catalyst Circle and many others. We can start experimenting with different mechanisms and use cases.
And then we can compare [this] to the results of those papers and the theoretical assumptions mentioned.
We focus on other aspects of the voting [...] on the impact of the voting, because Consenz is a system for editing documents and agreements.
We are trying to find the right formula for determining [whether] a suggestion is part of the document, still in discussion or even being denied.
These are [additional] building blocks for the future parts of delegation.
The same process applies for direct voting, for representative voting and for liquid democracy voting. And the same [for] on-chain voting and in-person voting.
[We are] still building. So it will take a few more months. [But], I hope by the end of the year we can start really experimenting with it.
Right now the theoretical background is still theoretical. [...] I hope that very soon, before or after the introduction of dReps, we can start using those experiences [to] start experimenting on document editing and bring more flexibility and freedom to experiment.
I'm not sure that the [current] timetable will completely fit this kind of process. It will depend on the pace of our development team's work.
I would emphasize that while your development schedule has its own pace, there's no need or expectation that the development work be completed for this white paper. It's fine for the white paper to be more of a design plan, and also to document what the challenges are, both in terms of development challenges [and] particularly on measuring the quality of the inputs and outputs of the [dRep] process.
We can suggest some kind of framework for experimenting with different voting mechanisms and comparing them. We are also working on a parameter for optimization of the voting, which we can bring to the white paper as well.
A parameter for optimization that can later add a layer of AI to this process, alongside the framework for experimenting with different voting mechanisms using the Consenz platform.
32:10
For framing different aspects of this specific white paper: Philip and Kenric will bring the more academic background and the references that set the ground for this research, and I will suggest [the] more practical aspects of how we can continue, optimize and work in more practical ways in the near future.
I added some notes to the White Paper on how calculating a Banzhaf power index is easy in simplified settings, and how tailoring it to fit the [more] nuanced Project Catalyst setting [is a] more interesting and complex problem.
We're limited by the available data. Some is available by request, and some is not available, because we are not privy to which wallet is voting for which proposal.
From my side of things, Kenric and I are troubleshooting the working definition we have in the Wolfram language.
The author Seth Chandler, who does not work for Wolfram [but] is a kind of friend of the company, has a slightly different definition of the Banzhaf power index. [Consequently] we were not expecting some of the results that we were getting.
In terms of writing out the details of some of the difficulties [we have encountered] and establishing the narrative (why we want to do it and what some of the obstacles are), I think that is totally doable.
Is it like an exposition of what the Banzhaf power index is and then running it through relevant data?
I think so. We can do the calculation for a simple example and make it specific to Catalyst. [But] I think we need to wait until the technical definition that I have in code is agreed upon. At least then we know how to work it, and can start to layer in [...] what we do know about wallets [...], filtering some of that into the calculation.
Do you think it would be feasible to define requirements against voting data [in order] to have more meaningful processing according to the demands of the index?
Because Charles [Hoskinson] is talking about moving all of this onto the main chain [....] which may take some of the obstacles [to data access] away.
I'm just wondering whether that, in itself, would satisfy all requirements to be able to process [the Banzhaf power index], or if there are further ones which would help and support doing an analysis?
[So we can arrive at] the ideal data set, and [compare] where we are right now [with] what we need additionally to make it more efficient.
I think that's possible. At least we will know [the sizes of] every wallet that voted [...] we don't need to know the identity.
I could envision, if we make further progress, [that we could] develop for every round of Catalyst a definition of which wallets are voting, their sizes and whether they delegated or not.
And that's all that's needed, theoretically, to calculate the Banzhaf power index for each wallet.
However, given the number of participants, which is in the tens of thousands, there is some computational cost to doing that calculation.
[One benefit to the community] could be that part of the feedback to every wallet holder is the percentage of influence you had on the outcome.
[...] the mapping between which voter voted on which project [...] right now that data is possibly available. It is very hard to organize, [but] at least it's available for some rounds.
There was some activity last year to add some cryptographic layers to the voting to keep that [voting choices] secret. And there are trade offs on the privacy issues.
I want to respond about the combinatorial [implications].
Let's say there are 50,000 voting wallets. For a Banzhaf index, one of those wallets could vote by itself, then with wallet two, then with wallets two and three, and so on up to voting with wallets two, three and 50,000. And then there could be wallet two with wallet three, two with three up to 50,000, and then all of those combinations.
That's going to require some computational power to determine that.
But does it matter if half of those wallets don't vote for a specific proposal, so that they're omitted from that calculation? Or am I imagining that all 50,000 vote yes or no on every single proposal?
There are tricks to it. [...] I'm kind of reaching out to what's possible.
But there are intermediate steps where you can simplify this dramatically and still get useful results.
We can work on what exactly the feedback would be, how we do the simplification and what assumptions are made.
But maybe the broader point is [to achieve] one nice outcome: the ability to give back to each voter some measure of the amount of influence they have in the process.
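[Editor's note: a minimal sketch of the kind of simplification discussed above, estimating Banzhaf power by sampling coalitions instead of enumerating all of them. The wallet weights below are hypothetical and are not Catalyst data.]

```python
import random

def banzhaf_monte_carlo(weights, quota, samples=100_000, seed=0):
    """Estimate each voter's raw Banzhaf power (probability of being a
    swing voter) in a weighted majority game by sampling random coalitions."""
    rng = random.Random(seed)
    n = len(weights)
    swings = [0] * n
    for _ in range(samples):
        members = [rng.random() < 0.5 for _ in range(n)]
        total = sum(w for w, m in zip(weights, members) if m)
        for i, w in enumerate(weights):
            if members[i]:
                # i swings if removing it turns a winning coalition into a losing one
                if total >= quota and total - w < quota:
                    swings[i] += 1
            else:
                # ...or if adding it turns a losing coalition into a winning one
                if total < quota and total + w >= quota:
                    swings[i] += 1
    return [s / samples for s in swings]

# Hypothetical wallet weights (not Catalyst data), simple majority quota.
weights = [40, 30, 20, 10]
print(banzhaf_monte_carlo(weights, quota=51))
```

A per-wallet score of this kind is one way to provide the influence feedback mentioned above; exact enumeration over tens of thousands of wallets is infeasible, but sampling or the simplifications mentioned above could keep it tractable.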
I'd like to talk a little bit about the editorial process.
Thorsten, I'd like to get an image that represents Catalyst, voting, and liquid democracy, or some combination of those things. So if you could try to find a good image for us to use, or if someone comes across a good image in their research, please recommend it.
Thorsten and I are going to hold an After Town Hall discussion to go over our progress on the paper and to recruit reviewers.
I think we'll do that either next week, or at the latest the following week. So we'll be trying to get it together for next week.
That'll give better context for giving the community an update on our progress and recruiting the reviewers to provide comments on the work.
I want to use the same group that we were using from the last fund.
Thorsten to meet with Philip to refine drafts
Recommend Thorsten be co-author