Audit Circle external survey analysis

Survey details

  • Total responses: 47

  • Date of survey: June 2022

  • Where publicised: Discord; Telegram; Ideascale DM; Town Hall Slides; Twitter.

  • Authors: Vanessa Cardui, Phil Khoo, Ron Hill

  • 87.2% of respondents rated the survey execution as 4 or 5 stars (out of 5).

General comments/About the survey

1) 47 responses is quite high for a non-incentivised Catalyst survey - but it’s not a big enough sample to draw statistical conclusions. So, like most Catalyst surveys, it’s essentially a thinking prompt, and another route for people to voice opinions. Our aims were exploratory rather than definitive - the survey cannot tell us “The community thinks this!”, but it may give us a starting point for further discussion.

2) Catalyst could benefit from developing some shared language and terminology around audit, to facilitate deeper conversations about it. It’s clear in the survey that people have different expectations and understandings of terms like “audit”, “monitoring”, “output”, and “outcome” - and if we don’t all agree on what we mean by these things, it’s harder to talk about them.

Broadly, audit is something external, where someone else assesses your project, often involving a value-judgement; and monitoring is the data, both quantitative and qualitative, that you collect internally, often as you go along, which makes audit possible. There are several ways to define the difference between output and outcome - but in simple terms, outputs are the things you produce (both tangible and intangible), and outcomes are what happens as a result of producing them. (For example, a series of workshops delivered is an output; the skills or connections participants gain from attending is an outcome.) In some fields, the line can be blurry.

3) What proposers are measuring doesn’t always correlate with what their proposal says it will achieve; so they’re not really evidencing their work. This is true whether they are intending to evidence process, outputs, or outcomes - in many cases, they’re simply not collecting data that demonstrates what they say they want to show. Some training, support, or wider community thinking on how to design a metric - and indeed, on what a metric is in the context of different types of proposal - might help here.

4) Leading on from this - we already know there isn’t consensus in Catalyst on the perennial question of whether proposers should measure process, or outputs, or outcomes, or all three. But what emerges from the survey, and from Audit Circle’s overall work, is that the answer is probably different for different kinds of projects. So we may not need to look for a “one-size-fits-all” answer - the project itself should probably lead the monitoring process used, rather than the monitoring process ending up shaping - and possibly deforming - the project.

5) Finally, there is always discussion in Catalyst about the real-world impact of our projects. Much is said about how Catalyst impacts the world, and creates projects that “change the world”. This may be true, or at least potentially true - but unless at least some projects begin to think beyond evidencing process and outputs, and look at how to measure outcomes and impact, it could come across to the wider world as mere rhetoric. In this survey, we see evidence that people don’t fully understand how impact measurement is done - several comments show that people think it can only be very long-term, extremely subjective, and very difficult to do, but this is not actually the case. If Catalyst wants to explore evidencing impact in some cases, then learning more about it as a community would probably be useful. Here, Catalyst could learn from the publicly-funded arts and the third sector, where evidencing impact is a given, and where there is a lot of experience and knowledge on how to go about it.

1) Survey respondents’ project budgets

Just over half of the survey respondents were in the $5-20k range. If we include the next tier up as well, $20-50k, this accounts for almost 75% of the respondents. The remaining 25% includes both “below $5k” and “$50k+” - this translates to only 9 people in total, so we have little data on either the upper or the lower budget ranges.

This spread probably reflects the most common budget range for Catalyst projects as a whole at the time the survey was done. (This may change in F9, with the availability of an $8m challenge making higher-budget proposals possible.)

2) Proposal aims

Around 60% of respondents had at least one social, educational, real-world-impact, or community aim; 40% were focused on products and integrations. This doesn’t reflect Catalyst proposals as a whole - although we don’t have figures for the exact proportions, we know that the majority of Catalyst proposals are about products and integrations. The different balance we see here could be coincidence, or it could indicate that “social and community” proposers are more likely to respond to a survey.

There was a noticeable trend: the respondents with social aims all had budgets under $20k, while the higher-budget respondents (above $50k) were mainly working on products or integrations.

3) and 4) Did you choose the right metrics to measure?

Overall, 57.4% said yes. A higher project budget correlated with greater certainty that the right things were being measured - none of the proposals with budgets above $50k answered “no” or “don’t know” here. It could be that higher-budget proposals are more likely to give detailed thought in advance to their metrics; but on the other hand, since this question is a self-assessment (“do YOU think you are measuring the right things?”) it could be that high-budget proposals are less likely to feel comfortable admitting to having taken the wrong tack on what they measure.

Overall, 27.7% of respondents said ‘partly’, 8.5% said ‘no’, and 6.4% said ‘don’t know’ – so even in this small sample, there is a significant group who are conscious that they might need to measure different things from what they originally planned. Perhaps this should be a more recognised phenomenon in Catalyst – we may need more awareness that as work progresses, a proposal might need to rethink its metrics and measure different or additional things from what it expected, and that this is OK. Proposers should perhaps be encouraged to evaluate, at appropriate points during their project, whether they are measuring the right things, and to reassess their metrics, and even their milestones.

The optional “please say more” part of this question let us delve further into why respondents felt as they did: 4 people left no comment, but most did, and raised interesting points.

Here’s a sample:

  • I thought at the time the demographics were important but now l am not sure.

  • Auditability is difficult from an outside perspective if a team is focused on working on the "hard" code and their isnt much to show front end wise... With a software project is quite difficult to "show" something working, until at minimum an alpha product is working.

  • It is hard for people to audit intangible items.

  • I tried to identity specific pain points (Note: “pain points” is an interesting and potentially useful idea on how to select audit metrics)

  • I provided KPIs based on use, at this stage of our project it would have been better to provide KPIs based on milestones and deliverables.

  • I would have included more items to help myself to monitor my project.

  • Although some aspects of the proposal are easily auditable, some aren’t. We are not at a stage where we can say one way or the other interms of increasing wallet count but the plan is to pivot based on data we acquire in the following month or so. (Note: this was the only responder that mentioned assessing their monitoring data as they go along, and using it to direct the future progress of their work. We didn’t specifically ask about this, so of course others may be doing it too - but it is a good practice which proposers could benefit from adopting.)

  • Working directly in communities means the project develops out of the work and out of participants' engagement; so the things we measure have shifted a bit in response.

  • Its hard to know what you put before you get started on the project. You arent too sure what direction it might take at that very early stage.

  • Our unexpected Success is not even mentioned in previously submitted KPIs.

  • Another interesting data to track would be something similar to Bhutan’s Happiness Index. This would be useful in our opinion to track in all Catalyst projects, as many succesfull project leaders have been reporting mental and even physical health issues.

There is plenty to explore here – but the idea already noted, that a project’s metrics might need to change as the work goes on, emerges clearly in the comments. Another clear idea is that it’s not “all about the numbers” - interactions between people, and even happiness for the actual team, are significant metrics too. Finally, this question again demonstrates the need to develop a shared vocabulary on these issues - several respondents talk about “audit” when they clearly mean “monitoring”, “data collection”, or “evaluation”.

5) What do you monitor and how?

For us as survey designers, this was a core question. We wanted to look at respondents’ understanding of how to monitor - was there a match between what they said they monitor, and how they said they monitor it? For example, if they said “We monitor pageviews by using Google Analytics”, that makes sense, because Google Analytics does indeed tell you about pageviews. But if they said “We monitor our team’s happiness, by counting how many retweets we get”, this is clearly more questionable. We wanted to see if we could observe any patterns, or any widespread fallacies in deciding what to monitor.

Going a little deeper, we also looked for correlations (or lack of them) between what proposers are measuring, and their proposal’s stated aims. In doing this, we wanted to get a sense of whether proposers are choosing metrics that help us track their progress towards their aims, or not. Note that we were agnostic on whether or not it was good to do so; at this stage, we just wanted to know whether it was happening. However, the project close-out process as currently defined by IOG does ask projects to state what their aims were, and outline how they addressed them - so given that this is something proposals need to do in order to close out, it would be useful if the metrics they selected enabled them to evidence it.

It’s important to note that this is not about the debate in Catalyst about “audit of outcomes”, because a proposal’s “aims” might not be about outcomes at all, but simply about processes and outputs; and many of the aims described by participants in this survey were exactly that. So without touching on the “Should we audit outcomes?” debate, we were interested to see if proposers’ metrics relate to the aims they stated in Question 2, and whether we could see any barriers to measuring achievement of aims if a proposer wanted to do so.

(Note on the debate in Catalyst about “audit of outcomes”: there is a strand of thought in the community that we should not attempt to assess or audit outcomes, and that it’s not relevant whether the outcomes a proposal said it would achieve are actually achieved or not - what’s important is that we funded something that set out to achieve them.)

This was, then, a complex question to analyse; and we looked at the answers respondent-by-respondent as well as overall.

Some respondents didn’t fully answer the question. We had tried to be very clear about the information we wanted (“please say what you monitor, and how you monitor it”), but some respondents were not able to break it down like this. (For example, a response like “The Amount and diversity of participation” doesn’t explain how “diversity” is being defined or how it is being measured; and “I monitor comments of the auditors” doesn’t tell us whether it’s simply the number of comments that is being monitored, or something more, such as monitoring whether the comments require action, noting when it’s done, and whether it helps.) These particular examples might simply be because respondents did not have time to elaborate; but we did see what looked like some clear mismatches in the answers to this question, so it might be worth investigating further how confident proposers feel about choosing logical and appropriate metrics, and even running peer-led workshops to develop some ideas on how to do so.

With the more complete answers we got, we saw a range of approaches to what to measure, and often proposers are using more than one approach. The most common approach is “counting things” (how many pageviews, how many participants, how many Github commits, etc). Another common approach is working from a list of tasks that they had said they would do, and ticking them off when complete. (No-one reported assessing or evaluating task-completion, however - for example, whether it felt like a useful task by the time it was completed, or any learnings derived from doing it.) There is also some collection of qualitative data happening - for example, by using feedback forms or group discussion. As mentioned already, several proposers found that their approach changed from what they had originally intended; for example, one respondent said “I provided KPIs based on use, at this stage of our project it would have been better to provide KPIs based on milestones and deliverables.” In retrospect, an additional question that we perhaps should have asked is how easy proposers have found it to change their metrics, and what barriers there are to doing so - perhaps this would be material for a future survey or for focus-group discussion.

One issue we noticed with the quantitative (“counting things”) approach was that proposers may sometimes be conflating how many people are merely seeing their project or platform with how far this is leading to the effect they intended it to have; or occasionally, conflating time spent working with progress towards aims. Several responses reported measuring things like numbers of pageviews, signups, session attendees, or hours spent, even though these metrics often do not tell us anything very useful, and certainly nothing about whether the expected aims of the proposal are being achieved. A huge number of signups or transactions doesn’t mean that people find the service useful in the way the project intended, and may not mean that people are even using the platform at all after initially signing up; and hours of research mightn’t lead you to build what you said you would. Naturally, it’s different if the aims of the proposal are simply to achieve pageviews, signups, attendees, or work hours for their own sake, but for the majority of Catalyst proposals this isn’t the case - most projects want these things for a reason, in order to secure a specific result. There is a difference between monitoring progress towards milestones (“what we’ve done”) and monitoring progress towards aims (“what effect it had”). So this raises the questions: if we focus on counting things, what do these things tell us; and if they don’t tell us much, what is the monitoring for?
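To make this concrete, here is a purely illustrative sketch - not drawn from any survey response, with entirely hypothetical data, names, and a hypothetical 14-day threshold - of the difference between the easy-to-count metric (total signups) and a metric that sits closer to a typical aim, such as how many of those people are still using a platform a couple of weeks after signing up.

```python
# Illustrative sketch only: hypothetical data showing why "total signups"
# and "people still using the platform" can tell very different stories.
from datetime import date, timedelta

# Hypothetical activity log: (user_id, date of activity)
events = [
    ("alice", date(2022, 6, 1)), ("alice", date(2022, 6, 20)),
    ("bob",   date(2022, 6, 2)),                      # signed up, never came back
    ("carol", date(2022, 6, 3)), ("carol", date(2022, 7, 1)),
]

# Treat each user's earliest activity as their signup date.
signup_date = {}
for user, day in events:
    signup_date[user] = min(signup_date.get(user, day), day)

total_signups = len(signup_date)  # the easy-to-count metric

# A metric closer to a stated aim: users still active 14+ days after signing up.
retained = {
    user
    for user, day in events
    if day - signup_date[user] >= timedelta(days=14)
}

print(f"Total signups: {total_signups}")               # 3
print(f"Still active after 14 days: {len(retained)}")  # 2
```

The point is not this particular calculation, but the habit of asking what a count is supposed to demonstrate before adopting it as a metric.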

It seems that as a community, we may be falling too easily into monitoring things that are easy to count, regardless of what they actually demonstrate. This is not a matter of trying to prove “impact”, or “return on intention”, or “outcomes” - we may decide as a community that we only want to measure processes or outputs, and not consider outcomes at all - but we still need to give thought to which processes or outputs we are measuring and why, and be clear about what is demonstrated by measuring them. We should not fall into counting things because they are countable.

We would also like to note here that if Catalyst did eventually adopt monitoring processes that assess progress towards aims rather than milestones, and if this should reveal that a particular proposer’s work is not leading to achieving their aims, then this should not be used as a stick to beat the proposer with. Instead, it potentially represents valuable learning for Catalyst at large, and for future proposals with similar aims. This may mean we need some kind of reflective practice built into Catalyst - a process for assessing why aims were not achieved, and some clear pathways to share that information.

Our overall impression from this survey question is, of course, that there is a range. Some proposers clearly have a good grasp of what to measure, and are comfortable with refining this in response to the development of their project, whereas others are less sure. Some of the issues here could, of course, be linguistic – this was quite a difficult question to answer succinctly, especially in a language you might not be confident in. Nevertheless, it does seem that some proposers, although they may say that they are confident that they know what to measure, might still be measuring things that do not actually demonstrate what they say they want to show.

Two interesting comments from respondents:

  • “I think that once the project is finished that the monitoring really begins. Also, since Catalyst doesnt require that I monitor anything then I dont really feel the need to at the moment.”

  • “I'm happy when people either follow me or my repositories on Github, there is where the work resides, yet is not a measure of my progress. I'm accountable to myself and to get the things to work that I want to work. If the tool works for me, I'm the first line of feedback and I build a tool I'm happy to use.”

6) What obstacles have you faced in collecting data?

The majority (59.6%) reported no obstacles.

The most common obstacles cited are “I don’t know what to collect” (12.8%) and “I don’t have time” (10.6%). (Note that some of the “Other” answers would also have fitted into one of these two categories.)

The “Other” comments mentioned the difficulties of collecting qualitative data in a system apparently geared towards the quantitative; issues around access to data; privacy issues; and one made the heartfelt but cryptic complaint, “People are selfish and greedy”!

On this question, respondents could tick all the answers that applied - and it’s worth noting that the majority of respondents reported none, one, or two obstacles, rather than many.

7) Where do you store your data?

The most common answer (48.9%) is Google Docs, followed by a Git repo (36.2%). But a large proportion said “on paper” (18.2%) or “in my head” (25%), so there are quite a few people who are not (or not yet) storing data in discoverable and savable ways. One person explicitly says “It’s a time issue. Some of it is still in my head because I haven’t had time”, but if we also examine Question 9, on barriers to making data public, the issue may be more than just time.

There is a range of other one-vote storage places, including YouTube, WhatsApp, Markdown notes, and Discord; Miro was also mentioned several times in the “other” answers.

In retrospect, one thing that perhaps we should have asked is how many different places people keep data, and whether they bring it all together anywhere or not. Most of the respondents use several unconnected places to keep their data; and although this approach can have advantages, it can also make the data less manageable for the proposer, and less discoverable for others. It would be interesting to investigate this issue further.

8) Do you make your project data public?

The most common response was “Yes” (46.8%), but obviously, this is less than half. There was a substantial vote for “Some of it” (36.2%), and some respondents said “No” (17%), so making data public is not as universal as we had expected it to be. We wondered whether this would still be the case if the survey had attracted responses from different types of project.

9) Have you faced any issues with making your data public?

55.3% had no issues; and as always, there were a few one-person “Other” answers that actually fitted into one of the pre-defined categories given. A couple of respondents noted that because their projects were new, the issue had not arisen for them yet.

The biggest reason given for a problem was confidentiality or privacy issues (19.1%). The answer “I don’t know how” was the lowest (just one respondent, 2.1%), so most respondents are confident in their skills in this area; although one “Other” answer said “do not know the appropriate place to share”. This is something that Audit Circle has heard in our problem-sensing too, perhaps indicating a need for more clarity on where proposers are supposed to share their data. Several respondents made the point that while data might be publicly available, it might not be widely publicised or readily accessible, so that people who wanted to view it would need to seek it out.

We noted that while Question 8 showed that 53.2% do not make all of their data public, only 44.7% face some kind of problem with doing so. This suggests that some proposers have deliberately chosen not to share all of their project data, and do not see this as a problem. In Catalyst, we often tend to assume that complete transparency is a universal good; but perhaps this is not the case? There are evidently several reasons why data might not be shared, or shareable; and as we mature as an ecosystem, perhaps our blanket enthusiasm for total transparency is being tempered.

10) Do you mainly monitor your process or your outcomes?

“Outcomes” was the most common answer, at 44.7%; “roughly equal” was next at 29.8%. Only 21.3% primarily monitor the process and the way they work; and 4.3% (2 people) don’t know.

This question, along with Q5 and Q13, is part of the deeper “What should we monitor?” issue. Process vs outputs vs outcomes, and whether what is monitored should relate to the project aims, is a question that has been debated in Catalyst, and which Audit Circle’s ongoing problem-sensing process did not draw strong conclusions on. As we stated in our introductory comments, this is clearly an issue that needs broader and more specific discussion, and also, one for which we need not expect a single, unified answer.

Note that at this time (Summer 2022), in another part of the Catalyst ecosystem, the Funded Proposers’ Subcircle was seeking community input on whether the “Auditability” question on the proposal submission form should be split into “audit of process” and “audit of outcome”. In that discussion (captured here), there were some strong views that proposers should be monitoring their process only, and that audit should focus on that; so that opinion also has its adherents in the community, even though they were not particularly strongly represented in this survey.

11) What is mandatory / useful for audit?

On each of the non-mandatory metrics that we specified, there were at least a couple of people who thought it was mandatory. Conversely, there were 3 people who did not think it is mandatory to complete a monthly report for their proposal, and a handful who did not know they have to do a final report and video. If this scales up across Catalyst as a whole, it suggests that information on what a proposer’s obligations actually are might need to be shared better, or differently, since it might not be reaching proposers well.

It’s worth noting that in Audit Circle’s problem-sensing, we sometimes heard that proposers are experiencing their reporting obligations as difficult, off-putting and onerous - a burden that doesn’t contribute to their needs. For example, in our Meeting 3 in April 2022, during a discussion of our Town Hall slides, team member Matthias Sieber said, “I was in a Twitter space where someone mentioned that they increased the budget a lot more because of the dread of reporting. And wished they did not get funded.” This isn’t particularly reflected in this survey; but arguably, proposers who feel this way would be unlikely to respond to a survey on monitoring and audit. So from the survey we don’t get much of a sense of how big a problem this dread of reporting is, although we do know it exists. Further investigation would be useful.

Respondents identified a range of things as “useful for audit” but the most popular answer was “collecting and storing data” (36 people). And a surprisingly large proportion (given that challenge teams often say they find it difficult to get responses from proposers) said that meeting with challenge teams is useful for audit - unless, of course, its very “usefulness for audit” is the reason why many proposers avoid it?

The “least useful for audit” according to this survey (albeit not by a very large margin, with 22 people recognising it as useful) was the (currently obligatory) monthly reports. In our problem-sensing and our meetings, Audit Circle noted that monthly reports, as they were framed at this point (Summer 2022), do not actually constitute “audit” at all, and that it’s unclear whether they are intended as audit. If Audit Circle had had the capacity to run our planned internal survey, with questions to IOG, we could have determined whether, in IOG’s view, monthly reporting is supposed to represent audit as such. This may be a question that the community could examine in future. It’s also worth noting that since the survey was done, IOG have moved towards using monthly reporting as something perhaps more recognisable as “audit”, with more evidence demanded, more scrutiny of the reports, and the data more publicly accessible.

12) What do you think your audit obligations SHOULD be?

A solid majority (68.1%) think their current audit obligations are about right, and only 1 respondent thought they should have to give less information than at present. That respondent also said they should have to give a different kind of information; as did 9 other respondents.

“Other” responses noted how easy it might be to fake the information demanded, and raised the question of whether it is enough to collect the data, or whether proposers should also have to make it accessible. There was also the interesting point that while minimal audit obligations might be fine in a community of trust where people are known to have integrity, more stringent audit might be needed if people have less integrity or less motivation. Of course, this raises the question of how you assess whether or not proposers have “integrity”, before it is too late.

13) “Auditing the process” vs “auditing the outcome”: which do you think is more important for your proposal, and why?

Here, we offered space for respondents to address this complex question directly in their own words. There was some interesting input, and a fairly wide range of views; there was also some range in how people defined “process” and “outcome”, which further supports our earlier point about the usefulness of an agreed vocabulary.

There is a limit to what can be deduced from a question of this nature, of course; but we did see a slight bias in favour of auditing the outcome, which is interesting in itself, and perhaps surprising in view of the commonly-expressed view in Catalyst that proposers need only focus on the process. Possibly, the difference came about because our sample included a higher proportion of social and community proposals than Catalyst in general; these tend to have a strong sense of purpose, and may thus see more value in demonstrating outcomes. But we also noted that respondents made interesting arguments on both sides; so maybe it depends on the specifics of an individual project, and thus we feel it would be a mistake for monitoring and audit in Catalyst to come down definitively on one side or the other.

On “auditing the outcome”, the issue was raised that it is hard to measure outcomes until long after a project ends. This is not always the case, however - short-term outcomes certainly exist, although it is a commonly-held misconception that all outcomes are long-term. Another point raised is that measuring outcomes gives a better sense of return on intention - this is a common counter-argument in Catalyst to the idea that proposers should not measure outcomes. Some respondents were very concerned that auditing outcomes is “subjective” – but this seems to reflect a cultural issue, an unease and discomfort with subjectivity. In fact, some outcomes are objective; and even those that are subjective can still be monitored and assessed if they are framed in a way that makes this possible. Overall, this kind of answer might be evidence of a lack of knowledge about qualitative audit, perhaps demonstrating a need for education and resources about it, so that proposers feel more confident to use qualitative methods and subjective material where appropriate.

A sample of interesting comments:

  • “Audit Transparency. Audit future impact to Cardano and ADA value.” Auditing transparency is an interesting idea - perhaps via developing some sort of transparency index for proposals? Auditing a project’s impact on the value of ADA is very interesting too. It is not immediately clear how this could be done - given that there are so many variables, it would be little more than guesswork - but the fact that it was suggested is interesting, and raises the questions of a) is this what we, as a community, believe that proposals can achieve? and b) what kinds of things do we think would have the potential to raise (or lower) the value of ADA?

  • “I dont think we currently audit anything at all. Between those two choices, I think auditing the outcome is more important as that is what you end up producing. But there is a ton of stuff I think we should track and audit throughout the process and if people dont do that I think they shouldnt continue receiving funding.” This raises the question of how far funding should depend on proposers providing monitoring information. Since this survey was conducted, IOG have moved more towards this approach, by introducing milestone-based funding.

  • One respondent mentions input metrics as something they think should be measured. It’s probably true that many projects don’t do this, or don’t do it in any consistent way; it would probably be worth investigating further.

  • “auditing the process for me personally would be helpful cos I dunno what I'm doing and I'd appreciate feedback.” Clearly, this respondent sees “being audited” as something quite positive and collaborative – someone audits you, and then gives feedback to help you improve, operating more like a “critical friend” than an auditor. Audit Circle’s problem-sensing suggested that this positive view of audit is not especially widespread in Catalyst. Would it become more common if audit was done as this respondent suggests - as a “critical friend”, with a focus on the proposal’s process rather than its outputs? Or does it indicate that the two roles (friend/supporter/mentor, and auditor) need to be separated?

  • “For us it’s about the process. But in a sense, the process is the outcome. If a project is doing things then eventually the goal is met. Compare it to working out. Either you measure how much you can lift or you measure how many times you go to the gym. If you go every day for 2 hours, then the real goal (lifting more) will be realised sooner or later.” This raised some eyebrows amongst us. We noted that, contrary to what this comment suggests, it is perfectly possible to keep on and on doing things, and still get no closer to achieving what you wanted to achieve, if you are doing entirely the wrong things. However, if the aim is simply to “do things”, regardless of what those “things” are, then this comment is true. This comment raises some fundamental questions about what the point of Project Catalyst is - but in terms of audit, it also raises an interesting point about whether there is a reason to avoid measuring the (as the respondent phrases it) “real goal” in favour of measuring process, especially when we have not yet established that process and goals are even linked.

14) Contact details

As survey designers, we debated whether to make the survey anonymous, and eventually opted to make contact details optional. In the end, roughly two-thirds did give their details, and one-third didn’t – so it seems we were right to make it optional, as a significant chunk of the respondents might not have done the survey at all if contact details had been compulsory.

Of those who did give their details, we can see that only 2 respondents answered for more than one project. Of course we don’t know about those who chose to stay anonymous.

Summary of recommendations

Most of our recommendations from our interpretation of this survey are not about implementing external audit, but are about creating the preconditions for a set of audit processes that are led by the community - things such as peer education for proposers on how to monitor their work effectively and in a way that respects differences; community discussion on what we should evaluate; and creating community-owned, responsive, and participatory approaches rather than top-down and authoritarian auditing.

Our key suggestions are:

  • Develop a shared core vocabulary in Catalyst on audit issues, agreeing what we mean by terms such as “audit”, “monitoring”, “outputs”, “outcomes”, “impact”, and “process”, to enable us to talk about them.

  • Look further into the question of whether proposers do feel the “sense of burden and dread about reporting” that some have reported, and if so, why.

  • Create a clearer, easy-to-find guide to what a proposal’s monitoring obligations are, to be given to all proposers on being funded.

  • Remember it’s not “all about the numbers” - qualitative measures matter too.

  • Raise awareness in Catalyst that it is normal, and even desirable, for a proposal team to adjust its metrics and its monitoring during the course of a project, in response to what its work reveals.

  • Hold some peer-led community workshops on audit issues. We would suggest topics such as:

    • How to create an effective monitoring metric which actually measures what the proposer is trying to show. Include different approaches for different types of proposals; and qualitative metrics as well as quantitative ones.

    • How to monitor whether you are reaching your aims.

    • Practical approaches to evidencing impact and outcomes (perhaps drawing on knowledge from the arts and the third sector, where doing this is common and widespread), and how Catalyst proposals might do this if they choose.

  • Once we all know more about how to monitor a wide range of things, have a broad community conversation on what proposers should be monitoring - their outcomes, their impact, their outputs, their processes and praxis, or all of these? Bear in mind that this might vary for different types of proposal.

  • Explore how proposers are storing project data, and the relative merits of storage in one place or several.

  • Develop better ideas for how proposers can share their data, and develop guidelines on what data they should be expected to share (particularly considering privacy, data protection, and IP).

  • Discuss what “reflective practice” might look like for different types of proposals, and what routes there should be for recording and sharing the insights it produces.

  • Be aware, in all of this, that one size doesn’t fit all, and that different types of proposals need to do things differently.

Final note

As we said at the start, we were not expecting statistically significant information from this survey, since we have no way to tell how far these responses scale up across Catalyst as a whole - but we hope that it gives some starting points for further investigation and discussion of audit in Catalyst.

We hope the wider audit community will follow up; and we’d like to thank all those who took part.

Audit Circle survey team (Phil Khoo, Ron Hill, Vanessa Cardui); Jan 2023
