Indigenous evaluation: Bridging perspectives event report
Executive summary
Building on recent efforts to improve its offerings to Canadians, Public Safety Canada (PS) brought together representatives from Indigenous communities and public policy experts in April 2024 to examine how various PS programs could be strengthened to better meet the needs of the people they serve. The engagement exercise focused primarily on the approaches PS uses to gauge the effectiveness of funded projects and how it can provide supports for methods of evaluation which are more inclusive and culturally centered.
Participants identified that establishing a decolonial mindset as a guiding principle for the engagement exercise was important, and worked collaboratively with PS to explore the following four areas for discussion.
In examining current evaluation systems, participants felt that these systems encourage competition instead of collaboration, that metrics fail to capture the longer-term impacts of programs and initiatives, and that onerous reporting requirements often discourage uptake from communities. Participants also noted that government is not always transparent in how it uses the feedback it collects, nor are there systems in place to ensure program continuity or build trust with evaluators. Participants recommended more direct contact between project administrators, evaluators, and communities as a means of addressing some of these systemic issues.
Participants also discussed how the positive impacts of programming are not always captured by existing evaluation methods, especially when benefits occur over longer timeframes or result from a broader suite of programs and initiatives. Similarly, participants pointed to the rigid structure of evaluation methods that do not always allow for the reporting of unforeseen benefits or the collection of qualitative feedback. Participants suggested that evaluation methodologies should be more flexible, should be administered by members of the community, and should be designed to meet the needs of the people they serve rather than those of government.
The third area of discussion focused on how communities know when programs have been successful. Participants felt that many programs had indeed been impactful but that this was difficult to gauge at a macro level since the effects were felt individually and were varied across communities. As a result, it was noted that evaluation approaches should be designed and rooted in specific community experiences, should be conducted in-person, should be properly resourced, and should be grounded in Indigenous ways of knowing.
The final area of discussion centered on the actions needed to make existing evaluation systems more flexible and culturally relevant. In bringing together elements of the three previous discussions, participants identified some of the following solutions: more adequate resourcing, better support and training for program staff and evaluators, increased flexibility and tailoring in how evaluations are designed and delivered at the community level, as well as improved communication and collaboration. Participants highlighted that the question is not how to identify the right metrics to evaluate a program, but rather, how to create the best space for the right stories to shine through.
While the findings in this report reflect only the perspectives of those who participated in the session, they nonetheless provide important insights into how PS programs and evaluation methods can better reflect the values, ethics, and priorities of Indigenous communities.
Land acknowledgement
We respectfully acknowledge that the engagement organized in the context of this report occurred within Treaty Six Territory and within the Métis homelands and the Métis Nation of Alberta Region 4. We acknowledge that Edmonton is on the traditional territories of many First Nations including the Nehiyaw (Cree), Denesuliné (Dene), Nakota Sioux (Stoney), Anishinaabe (Saulteaux), and Niitsitapi (Blackfoot).
The planning for this engagement occurred on Tyendinaga Mohawk territory; the unceded territory of the Algonquin Anishinaabe; and the territories of the Lək̓ʷəŋən and W̱SÁNEĆ peoples, including Songhees and Esquimalt First Nations.
The drafting of this report occurred on Lək̓ʷəŋən and W̱SÁNEĆ territories, as well as on the unceded territory of the Algonquin Anishinaabe peoples.
We are grateful to the individuals who took the time to speak with the engagement team to share their perspectives that are included in this report. We are also grateful to Elder Arnold Alexis of Alexis First Nation for his support and guidance throughout the event.
Overview of engagement
Background
Public Safety Canada (PS) is responsible for exercising leadership at the national level relating to public safety and emergency preparedness, including through the implementation of programs, policies, and structures such as the Aboriginal Community Safety Development Contribution Program (ACSDCP), which delivers funds through the Aboriginal Community Safety Planning Initiative (ACSPI) and the Indigenous Community Corrections Initiative (ICCI). These initiatives provide funding for the implementation of community-specific and community-led safety plans and community-based corrections programs in Indigenous communities across the country. Additionally, PS is responsible for implementing the National Crime Prevention Strategy (NCPS); PS’s Research Division is primarily responsible for the research and evaluation that support the NCPS.
In 2023, PS coordinated an online engagement with past funding recipients of the NCPS and their third-party evaluators, including Indigenous and non-Indigenous participants, to better understand their experience with PS evaluation processes. This engagement outlined some components of the process that work as well as areas for improvement, with the intention of incorporating insights to strengthen project evaluation practices at PS. PS heard from respondents that the current evaluation processes do not fully resonate with and reflect the diversity of Indigenous communities across the country, and that they can perpetuate inequalities and unethical practices in research involving Indigenous peoples.
Building on the work done through that engagement, PS convened representatives of Indigenous communities, project managers, professional evaluators, and experts in Indigenous ethics and evaluation methodologies to further discuss challenges and opportunities, and to explore recommendations to strengthen PS evaluations. This engagement process, centred around a two-day event held in Edmonton, Alberta on April 24th and 25th, 2024, provided a meaningful opportunity to deepen understanding of project experiences with PS and to explore what a culturally centred evaluation approach rooted in Indigenous ethics could entail.
Engagement objectives
In bringing together a broad group of individuals representing a diversity of Indigenous communities, various levels of experience in project management and evaluation, and experts in Indigenous ethics and evaluation approaches, PS hoped to build a space that would be conducive to productive and forward-looking conversations. PS hoped to outline key insights and recommendations that program administrators could draw from when considering how to strengthen evaluation processes so that they better consider and address the impacts programs are having in communities, and to create evaluations that are useful to both PS and the communities themselves.
PS hoped to explore several key areas of discussion with participants including:
- Identifying priority areas in Indigenous ethics and evaluation, especially as they pertain to community safety and crime prevention programming
- Exploring the tensions between Western and Indigenous approaches to evaluation, and some of the concerns related to harmonizing Western and Indigenous approaches to ethics and evaluation
- Exploring best practices for authentically capturing and applying Indigenous approaches to Public Safety Canada evaluation approaches, and how to mitigate the risks of trying to represent program impacts in a diversity of communities through Government of Canada evaluations
- Exploring what a culturally appropriate and ethically sensitive research and evaluation relationship between PS and Indigenous communities could look like in practice; and
- Exploring past participant experiences with evaluations for PS-funded programs and what an ideal program evaluation could look like in the future
Engagement approach
PS sought to build on the input collected from the 2023 virtual engagement by bringing together representatives from Indigenous communities, program managers and program evaluators, including Indigenous evaluation consultants, and experts in Indigenous ethics and evaluation for a two-day in-person engagement session to further explore the topics identified through previous engagements. This session was held on April 24th and 25th, 2024 at the Delta Edmonton South Conference Centre in Edmonton, Alberta.
PS invited respondents from the 2023 engagement program and others with relevant experiences to attend the two-day session in Edmonton. PS worked with external partners, Hill & Knowlton (H&K) and Redstone, to help design and implement an inclusive engagement strategy, rooted in trust and relationship building, to create a space for productive dialogue between engagement participants and PS staff at the event. H&K helped design and facilitate the engagement program, while Redstone worked closely with H&K to arrange conference facilities, accommodations, and transportation for participants.
PS recognized that it was important to create a safe and welcoming space for participants that allowed time for trust and relationship building to occur, while still addressing the key discussion areas that participants were gathered to discuss. H&K proposed a two-day engagement program, with the first day focusing on getting to know those in the room and the topic areas better, and the second day focusing on the key discussion areas that PS hoped to explore.
The first day of the program was designed to help participants get to know one another and the PS team, to better understand the topics that PS hoped to explore, and to create space for participants to express initial reactions, concerns, and any additional discussion questions that should be explored. Participants were invited to introduce themselves, along with the PS team and the H&K facilitation team, before moving into an exercise to outline the shared values in which participants wanted to root the discussions going forward. Participants then had an opportunity to define their role in these discussions and the priorities around crime prevention and community safety projects that exist in their communities. After lunch, PS provided a context-setting presentation outlining the previous engagement and the scope of the evaluations to be explored during the engagement, and answered questions from participants. Before closing, participants went through the agenda for the next day in detail to better understand the discussion areas, to provide initial reactions to the questions that would be asked, and to suggest other questions that should be explored during the discussions.
The second day of the program focused on the key discussion areas that PS had brought participants together to explore. The day was broken into four key discussion areas: Existing Program Evaluation Systems, Evaluations and Program Effectiveness, Impactful Community-Based Programs and Evaluations, and Moving Towards Better Evaluation Systems. These discussion areas were designed to guide participants from their experiences with existing evaluation systems, through the challenges of communicating the impacts of programs in their communities using existing evaluations, to how communities know the impacts of their programs and how those impacts could be measured, and finally, to how PS can work to evolve evaluations in a way that better reflects Indigenous ethics and evaluation approaches – and better reflects how communities understand the impacts of their programs. The discussions were captured by notetakers at each table, by participants themselves on a large notetaking template placed at each table for each discussion, and in individual booklets created for participants, which they were encouraged to leave behind.
The intention behind this engagement approach was to create a space that was safe and welcoming – encouraging honest and productive conversations, leaving enough room for conversations to take the direction that participants felt they needed to, and allowing conversations to be captured in a variety of formats. Participants were encouraged to self-reflect on poster-style questions placed around the room, as well as in an individual participant booklet created to capture their reflections throughout the event. These booklets included individual activities, space for participants to take notes during discussions, and an engagement evaluation for participants to complete at the end of the event. The booklets were left behind for the engagement team to include in the analysis. Participants were also given notetaking templates to take notes during the discussions and to help them report back their discussions to the other tables; these templates were collected at the end of each discussion. In addition to the self-reflections and participant-driven notetaking, the engagement team was strategically positioned throughout the room to take detailed notes and to ask further probing questions should a conversation require redirection. Otherwise, participants were encouraged to explore the discussion areas in ways that felt right to them – which allowed for fruitful and productive conversations, highlighted key insights and areas for further reflection, and provided direction for PS in strengthening evaluations to better reflect the cultural diversity of Indigenous communities in Canada.
What we heard
As the program was titled, participants explored the idea of “Bridging Perspectives”, discussing the challenges that come from evaluating programming in Indigenous communities through the lens of a Government department, and what opportunities might arise from efforts on the part of PS to re-evaluate how evaluations are done to better reflect the values, ethics, and priorities of Indigenous communities. However, participants stressed that these changes cannot and will not occur because of a single engagement event held in one place, that there are other voices that should be brought to the table, and that there are risks in trying to explore “Indigenous” approaches to ethics and evaluation – given the diversity of Indigenous communities in Canada, no one approach will reflect the needs or priorities of all First Nation, Métis, or Inuit communities. Participants expressed concern, and the event team acknowledges, that attendees represented only those who had responded to previous engagement efforts and been extended an invitation – and that a majority of participants represented projects funded in First Nation communities, with a strong contingent from the Prairies and the Atlantic regions. There was only one Métis community represented among participants, and no Inuit representation at the event. In addition, several participants were themselves non-Indigenous, despite playing a role in overseeing an NCPS program or the evaluation of a program in an Indigenous community. The event team also acknowledges their position as non-Indigenous Canadians, the role they played in designing and facilitating the event, and that there are limitations to conversations that are not led by Indigenous peoples in Indigenous spaces.
As such, the results presented in this report are presented with these important reflections acknowledged and contextualized – these are the reflections of participants captured during the “Bridging Perspectives: Indigenous Ethics and Evaluation” event and cannot be interpreted as representing a definitive perspective of Indigenous program evaluators, Indigenous ethics and evaluation experts, or Indigenous communities broadly.
Guiding principles
In order to create a space that would encourage trust and relationship building, and be conducive to positive and constructive dialogue, the program was designed to start participants with a round of introductions, followed by an activity meant to outline shared group principles in which to root the conversations going forward. Participants were asked to select a number of principles from a list in their booklets, before sharing their list with others at their table. Each table was then asked to come up with a list of shared principles among those highlighted by each person, and then each table shared their list with the others in the room.
Participants highlighted several important principles in which to root the conversations. These included centering the relationship with the land, leadership, love, wisdom, courage, justice, respect, truth, humility, collaboration, safety, honesty, and responsibility.
Decolonization
The principle that was reflected the most between tables was decolonization. Participants noted that discussions focused on strengthening Government of Canada programs, including evaluations, should be rooted in a decolonial mindset – which includes centering Indigenous perspectives, encouraging Indigenous ownership of programs and evaluations, and fundamentally rethinking how systems, beyond evaluations, operate. It was added that decolonization cannot occur within systems that Indigenous peoples did not create but are required to operate within, and that decolonizing evaluation systems might not be possible in a system in which they continue to be required. It was also noted that it might not be possible to root these conversations in decolonization, as they were designed and facilitated by a non-Indigenous engagement team and held in a non-Indigenous space – and that a decolonized discussion on program impacts would look very different. While participants identified the difficulties and nuance associated with decolonizing conversations in a government setting, it was agreed that the principle remained an important component in which to root the discussions going forward.
Collaboration
The other principle reflected across several tables was collaboration. Participants highlighted the importance of including communities throughout evaluation processes and the engagements around these evaluations, and of avoiding one-way dialogue. Participants stressed that one-way dialogue, in which government takes experiences and insight from Indigenous communities but fails to communicate back what was heard and what is being done about it, is not collaboration. It creates a situation in which individuals and communities are unable to see how the information and stories of positive impact that they shared are used by government after they are submitted. This breeds distrust and frustration and discourages participation in further discussions. Participants encouraged PS to be transparent about what was heard, and more importantly, about how what was heard might impact evaluations or other program requirements. Ultimately, they emphasized that conversations between PS and Indigenous communities should continue to be open and collaborative throughout this process.
Participants discussed the importance of language, and the impact that language can have on how issues are discussed and framed. There were several examples of how language can be chosen in a way that uplifts communities, like talking about "resources" over "capacity", "repowering" over "empowering", and ensuring distinctions-based language over pan-Indigenous language that has been used by federal government departments in the past. This report aims to address this by using language used by participants throughout the engagement event.
"You can't empower people when they're already empowered. You have to repower them."
Initial participant discussions and context setting
A participant likened the evaluation component of a program to a pair of glasses that require a prescription to function effectively. They emphasized that involving the community is the necessary prescription for these “glasses.”
PS set out to build on the results of the 2023 engagement, which demonstrated that there continue to be important challenges to program evaluations, and continued gaps in understanding on the part of government administrators and evaluators of the impacts and successes of programs implemented in Indigenous communities. As one participant noted during the session, in conversations on government evaluations and their impacts on community programs, Indigenous peoples should be the ones sharing concerns and what should be fixed with the government, rather than the government presenting information and asking questions of Indigenous peoples.
As such, this event was designed to create space for participants to express initial reactions, concerns, and questions to PS, ensuring participants felt their most pressing points had been captured, before getting into the discussion areas proposed by PS. Participants then had an opportunity to hear from PS around the proposed discussion areas, and to provide comments and suggest additional questions to be included.
Participants shared several key insights around program evaluation and the review of evaluation processes that PS is undertaking. Participants expressed that evaluations were originally introduced for fiscal reasons, with the initial goal of identifying "unsuccessful" programs to save government funding. This means that evaluations, by their nature, serve as a tool to cut programs deemed unsuccessful – solely through Western lenses of success – meaning that many impactful programs in Indigenous communities have been cut as a result of their evaluations. Participants do not see evaluations as a tool to track impacts or successes, but rather as a tool used by government that puts their programs at risk.
While this may not be the case for program evaluations today, participants noted that they are also often perceived as a box-checking exercise that helps governments administer programs but have little use for communities themselves.
"We're done being check boxes."
This is particularly challenging for programs that operate with limited resources but are nonetheless required to complete evaluations – work that participants perceived as communities doing what should be done by the government itself. While the evaluations may be important for the administration of the program from the government's perspective, the time and effort the community must invest to complete the evaluation is unnecessary from the perspective of the program itself. There are often concerns about the resources available within communities to complete burdensome evaluations. Participants highlighted that language around capacity building assumes that Indigenous people are not capable of doing the work – which is not the case. It is often government that does not have the capacity to adequately support and measure all programs, leaving communities to provide the resources to complete the evaluations. One participant suggested using "enhance capacity", recognizing that communities have the expertise to evaluate their own programs – but are encumbered by the government's evaluation requirements.
Participants expressed that current program and evaluation structures create accountability and ownership challenges – it is unclear who owns the program, to whom the program is accountable, and who defined the “success” that evaluations are trying to measure. While programs are often intended and designed to reflect the needs and priorities of individual communities, evaluations are designed to reflect the program in aggregate and are accountable not to the individuals served by the programs but to other government agencies. Participants expressed that if the intention is truly for programs to serve Indigenous people and to be owned by Indigenous communities, then creating the space for Indigenous-owned evaluation processes rooted in Indigenous worldviews would be automatic. It would not be communities trying to communicate successes through a rigid evaluation system, but a more flexible evaluation system that reflects the individual needs and priorities of each community, and that is able to communicate the success of the overarching program through the weight of the stories – and not just through quantitative results.
Participants encouraged evaluations and evaluators to be more flexible, to work more collaboratively alongside communities, and to recognize the impacts of programs in the ways they are happening. Participants suggested that evaluations could and should be able to take a variety of formats – including photo and video evaluations, in-person evaluations, artistic evaluations or other formats that require flexibility on the part of government to interpret. It should not be communities trying to fit the impacts of their program into an evaluation, but rather government working to draw insights on a program's impact through whatever comes from the program. In addition, participants noted that program evaluations are often siloed by the department that funded the program, despite programs having impacts across several priority areas – an "education" program may have positive impacts on community safety, or vice versa, but it remains impossible to communicate these successes across various programs or priority areas.
This level of flexibility, according to participants, requires more trust and stronger relationships between government, evaluators, program managers, and communities themselves. Participants encouraged government administrators and program evaluators to travel to and attend programming implemented through their programs as a part of their evaluations – to spend time in and with the communities that they are meant to evaluate – and to better understand how programs are having an impact on-the-ground. Participants noted that "bridging perspectives", especially when rooting that in a decolonial lens, requires more work on the part of government than it does on the part of communities, and that there will need to be significant reflection on the part of government administrators on the intention and desired outcomes of evaluations. Evaluations designed to cut programs for fiscal efficiency will never reflect the needs and priorities of Indigenous communities, and evaluations that do reflect the needs and priorities of Indigenous communities need to be flexible and able to communicate from the community's perspective how it has had an impact on the ground.
Finally, participants asked PS what the scope of this exercise was – whether there was an intention to truly rethink how evaluations are done within the Government of Canada, or whether PS would continue to be required to evaluate and report results according to systems beyond its control. PS responded that the scope of this engagement is to explore how to strengthen evaluations for programs within its purview, and that PS would continue to be required to report on program evaluations to other government agencies in the same manner; nonetheless, there is flexibility and interest within PS to do what is possible to strengthen evaluations and make the process smoother for communities, while respecting requirements from other departments beyond the scope of PS. Participants added that this engagement, and the work that comes as a result, cannot stand as a simple "check-box" exercise. Unless tangible impacts come from the conversations that were held, several participants expressed a disinterest in participating in future conversations.
Key discussion areas
The key topics that PS hoped to address during the engagement session were broken down into four discussion areas to help guide the conversation.
- The first discussion area had participants discuss their experiences with the current evaluation systems, some of the challenges that arise, and suggestions pertaining to specific experiences related to existing evaluations
- The second discussion area explored some of the challenges and nuance that comes from trying to communicate the success or impacts of a program in Indigenous communities using program evaluations as they currently exist, and some of the key information that is being missed by existing evaluation systems
- The third discussion area focused on the impacts and successes of community safety and crime prevention programs, as communities understand them. This topic sought to explore Indigenous approaches to programs, evaluations, and ethics, and what metrics are most important to gauge the impact of a program
- The fourth, and final, discussion area sought to bring together all the various points previously discussed to explore how PS can adjust their evaluations to move towards a system that is better reflective of Indigenous communities, is not a significant burden to communities, and is ultimately helpful to both PS and the communities themselves
Existing evaluation systems
The first discussion area focused on existing evaluation systems, seeking to better understand participants' experiences completing evaluations, and what challenges or barriers might be experienced by communities or by evaluators while completing a program evaluation.
The question presented on the screen during the event to help guide participant discussions was: What has your experience with crime prevention or community safety evaluations been like?
To help further guide discussions, table facilitators were equipped with additional probing questions that included:
- What evaluations have you been a part of, and what has that experience been like?
- What has been your experience reporting program evaluations to someone outside of the community, like the Government of Canada?
- What information gaps exist? What barriers exist? What challenges exist?
- What resources are needed to complete evaluations? Do your teams or communities have the capacity required?
- What impact does this system have on your work and on your community?
- Is there anything that is working well, or an example of an evaluation that worked well?
Participants discussed systemic challenges related to the onerous nature of reporting processes, and challenges related to program funding and evaluation cycles. They also discussed challenges related to transparency and accountability – not knowing how information is being used – and challenges related to their relationships with program administrators or external evaluators. Finally, participants discussed the realities of using existing evaluations to measure the impacts of programs, and challenges related to gaps in knowledge and experience between administrators at the government level and those running the programs in communities.
System challenges
Participants recognized that many of the challenges related to evaluation are rooted in the systems themselves – the systems of providing funding, running programming, and evaluating programs. It was highlighted that evaluation systems were put in place by governments to cut programs deemed unsuccessful through lenses employed by governments – and that this created a system in which evaluations are overly complex, designed to “weed out” the weakest programs. Some participants were wary that evaluation results are used to cut programs that the government deems unsuccessful. In turn, this creates a sense of competition between communities – those that have the resources to deliver strong evaluations in line with government standards will thrive and continue to receive funding, while those that lack such resources (human or financial) will lose their program funding entirely. Participants added that there could be opportunities for collaboration between various programs or between communities that are not happening because the programs are competing for the same funding.
Participants stressed that evaluations should not be used as a tool to cut programs, but rather could be employed to help programs share best practices and further develop approaches in a way that is flexible and responsive to the needs of individual communities. Participants added that programs, especially crime reduction and prevention programs, often require longer-term funding arrangements to demonstrate their efficacy. Several programs find themselves needing to evaluate one round of funding before being able to apply for another, and are often required to start the application process from scratch. Crime prevention and community safety programs need the space to grow alongside communities, which is difficult when funding is not secure.
"Government evaluations are set up for well-oiled machines and not for ground-level programs that are Indigenous. They want information on statistics. We don't have that. They can't measure our needs. Then, we lose our funding because we couldn't get the evaluation results they wanted."
Participants also highlighted that, despite the benefits of accessing funding to implement crime prevention and community safety programming, reporting requirements often make it not worthwhile to apply to many programs. Reporting can be a significant burden on financial teams, where they exist in communities; where they do not, it is often program staff who must learn how to do the reporting on top of their regular work. Participants added that the information required in evaluations is often not reflective of the program being implemented, creating a sense that the program will not be deemed successful, despite whatever positive impacts may have occurred. Because the required information is often not seen as important, it is sometimes not collected during the program, which makes the evaluation much more complicated for the staff involved.
In addition, participants discussed that evaluation metrics and indicators can be so specific that they fail to recognize the broader impacts a program may have. In these cases, there is no mechanism through which adjacent positive impacts can be measured or communicated. This may lead a program that has had significant positive impacts in a community to receive a poor evaluation.
It was also highlighted that the reporting forms themselves are confusing and difficult to navigate, and that elements of the evaluation process are not well communicated. For example, important deadlines for evaluating programs and submitting program evaluations are not always well known, which can sometimes lead to programs not submitting evaluations on time. Several participants noted that evaluations often feel like a separate piece of the process, outside of program delivery, and can be overlooked until it is too late.
Transparency
Participants discussed that many of the challenges with existing evaluations are rooted in the relationship between communities and the program administrators or external evaluators that are overseeing their project.
It was shared that communities are required to give information to government, but little information is shared back with communities in return. This includes information about how data is being collected, stored, compiled, and used by governments. Participants highlighted the OCAP principles of ownership, control, access, and possession, and that information collected by and about Indigenous people should stay within communities. Current evaluations are designed by PS, and therefore PS controls how that information is used. It was expressed that, in a decolonized approach to program evaluation, information would stay with communities, and communities would have the power to determine how that information is housed and used. Participants added that, in a truly decolonized evaluation approach, communities would gather the information they deem necessary, by the means they deem necessary, and would have the ownership and control to use that information as they deem necessary.
Participants also discussed at length some of the challenges related to their relationship with program administrators. Several participants noted that, because of employee turnover, there are occasions when an administrator overseeing their project has left their role and been replaced. In many cases, new administrators are not provided with enough information during onboarding and are unaware of individual projects or communities that they might be responsible for. This leads to situations where information is lost or where agreements or understandings between communities and administrators are forgotten. There must be emphasis placed on continuity within PS, to ensure that irrespective of potential departmental changes, the department can continue to be a strong partner.
In addition, participants discussed challenges related to the role of the evaluator, especially when the evaluator is external to the community. It was expressed that some evaluators do not seem to understand their role and come to the table with varying expectations: some wanting to be more closely involved in the program, others wanting to evaluate from a distance. Participants noted that this can be a challenge, as it is important to have everyone working together in order to keep everyone accountable.
It was hoped that challenges related to transparency and accountability between Indigenous communities, PS administrators, and external program evaluators can be solved by bringing partners together to collaborate. Evaluators who meet regularly with program managers have proven to be helpful. Participants added that there is a need to create a shared understanding of program impacts and evaluation between communities, evaluators, and PS as partners, and that this will not look the same in each community. This requires active effort on the part of all partners involved, and will often require that PS administrators and external evaluators visit communities to build stronger relationships. It was added that a reduction in government travel budgets for program administrators has had a significant negative impact on the ability of communities to build relationships with program administrators, and has made implementing programs in communities more challenging. Personal connections between communities, evaluators, and PS administrators make it far easier for everyone to understand the program and, ultimately, to create stronger evaluations.
Measuring impact
Participants discussed in depth that the current system for evaluating programs often fails to accurately capture the impacts that programs are having on community members, on crime reduction and prevention, and on increasing community safety. This is largely because although evaluations are intended to capture the efficacy of the program, they are often perceived to evaluate fiscal performance, and are not always reflective of the needs or priorities of Indigenous communities.
Several participants expressed that evaluations are not set up to capture interim success. The long-term indicators of success may not be seen for generations, and there is a need for evaluations to capture the incremental changes that are occurring. Participants added that, ultimately, the success of a program is determined by what the community deems successful. Participants also added that evaluation metrics and indicators may not be the same for every community, and that there is a risk of missing important information by using a single evaluation template.
Participants talked about the fact that, in many communities, positive impacts might look like taking two steps forward and one step backwards, which sometimes leads to situations where an evaluation is only looking at that one step backwards. It was added that governments often talk about "narrowing the gap" between Indigenous and non-Indigenous communities, but this requires measuring Indigenous communities against non-Indigenous communities – when the baselines and trajectories are not going to be the same.
In addition to capturing incremental wins, it was highlighted that existing evaluation systems are not able to capture the impact of programs on other priority areas. There is often a perspective within government departments that a certain program will affect a certain priority area, with education programs improving education or community safety programs increasing community safety, but there are cases where programs have ripple effects on other priority areas that are not immediately obvious. It is important to understand the interdependency of programs and how their collective effects shape community wellbeing. As one participant put it, it's important to measure the "little footprints on other outcomes".
"It's about who else was there and the story that was told."
It is important to understand the context of programs, of communities, and of the individuals involved to truly understand the impact of a program, or to understand how to evaluate that impact. Indigenous communities know what works for their communities, but after centuries of colonialism, current programs expect results within months, and expect those results to match targets outlined by governments. This is a system designed to fail, with the blame often falling on communities and resulting in the loss of funding for community-based programming. Understanding the context of each community is essential, especially where the community is, how it came to be, and what other work is being done alongside a single program. Participants reiterated that current evaluation systems assume that there is a “standard Indigenous community” and that all programs can be measured using the same indicators. Participants stressed that this is simply not the case, and that until government makes changes that reflect the context of each community, programs and the individuals they serve will continue to fall through the cracks.
Evaluation and program effectiveness
The second discussion area focused on the impacts that a program might have on a community and the challenges related to communicating those impacts through existing evaluation systems. Participants were asked to explore some of the challenges and nuance related to trying to communicate the success or impacts of their programs using evaluations that were not designed to capture those impacts.
The question presented on the screen during the event to help guide participant discussions was: What impacts and outcomes are you seeing in crime prevention and community safety programs that are not being communicated through current evaluation practices?
To help further guide discussions, table facilitators were equipped with additional probing questions that included:
- Do you think evaluation tools used for crime prevention and community safety programs are effective in tracking these programs' impact? Why or why not?
- What are some of the challenges that come from trying to evaluate a crime prevention or community safety program through the evaluation criteria outlined by the Government of Canada?
- Are there aspects or examples of existing evaluation systems that work well, or that are helpful for communities to track?
- Are there evaluation metrics that are required by the government that are difficult to measure, or that don't feel relevant to the implementation of crime prevention or community safety programs?
- What concerns arise from program/community staff spending time on the evaluation?
- If your program/community is working with external consultants on program evaluation, do you feel they accurately reflect the impacts of a program in the community?
- What kinds of impacts and outcomes are you seeing in your communities, especially around crime prevention and community safety programs, that are being left out of current reporting practices?
- Are there tensions between Western and Indigenous ethical approaches that will be difficult to reconcile? If so, which elements and why?
Participants highlighted that there remain challenges related to the structure of programs and funding cycles, and how these relate to the long-term impacts of a program. They discussed evaluation structures themselves, and how the language and materials used tend not to reflect the realities experienced by communities, which can make it challenging to communicate the impacts of those programs. Finally, participants talked about what evaluations are meant to do, namely helping to identify and mobilize experience and best practices to better equip other communities, and how current evaluations are not designed in a way that encourages this to happen.
Program structure
Participants expressed that there are often programs, including those funded by PS, that do have profound impacts on communities. The impacts of these programs may not always be immediate and can often occur after a program ends. Sometimes, it may take time for the program to establish roots within the community and for its impacts to ripple throughout the community.
"The trauma that has been caused to our communities is so deep and so intense that the improvements aren't going to show themselves because they are so deep and so intense."
In many cases, positive outcomes that are seen in communities come not from one single program, but from a network of programs funded by various partners, that create a space in which positive change can occur. However, program application, funding, evaluation, and reporting structures are not flexible enough to track these longer-term impacts, and often overlook some of the positive outcomes that do eventually occur.
Participants noted that program evaluations often occur right as funding arrangements for programs are set to expire, which does not allow for an assessment of the long-term impacts of a program on a community. In some cases, the metrics required to complete an evaluation are not yet available, requiring evaluators to speculate about impacts that may occur. Participants added that the metrics required in existing evaluations do not reflect what "success" might look like for a particular program in a community. For some, the definition of success may be as simple as a young person reconnecting to language or being open to participating in a land-based program with Elders. These are major impacts for communities, but they are often not the metrics evaluations seek to measure, nor metrics that are easy to capture in an evaluation format.
It was also discussed that the current funding, evaluation, and reporting structure often means that programs are only funded in the short-term (e.g., one-off blocks) which can make it challenging for communities to implement longer-term programs or measure their impacts over the longer term. There are opportunities for genuinely creative and potentially impactful crime prevention and community safety work that can be facilitated through PS programs, but that requires significant, sustainable, and predictable funding. This is especially true for programs catered to young people, as they can become weary of programming that does not proceed over the long term. It takes time to build trust within a community, but that time is not built into program application, funding, and evaluation structures.
Participants added that programs that involve a collaboration component, especially those centered on community safety and crime prevention, have a higher likelihood of generating positive impacts within communities. However, program cycles – especially programs funded from different government agencies or departments – are sometimes misaligned, which can make ongoing collaboration and the long-term planning for projects very challenging. Participants added that crime prevention and community safety programs need time to take root in communities, build networks of programming with other services and organizations in the community, and work to address the root causes of crime and other safety concerns. Participants also noted that sustainable program funding, and funding not directly tied to evaluations at the end of program cycles, creates a sense of consistency that is needed for communities to plan their efforts and resources in the long term.
Evaluation structure
One participant shared the story of a PS-funded program that has been successful in generating employment opportunities for at-risk youth in their community. However, they were not able to share this success in the evaluation report because generating employment opportunities was not the objective of the program. Evaluations often focus on whether the objectives of a specific program are met and fail to consider the ripple effects or byproducts of those programs. Other participants shared further examples of positive impacts that were lost because of how evaluations themselves are structured. Qualitative nuance is lost when evaluations focus too heavily on collecting quantitative data (e.g., the number of participants in a particular program or event). While this may seem like a tangible metric to track, it often fails to capture the nuance and the stories of those who did attend. For example, program attendance may fall while the continued participation of a core group of individuals has a demonstrable impact that is not reflected in attendance numbers.
Participants also highlighted that evaluation terminology can be highly technical and challenging for communities to navigate. One evaluator in attendance shared that they needed to rewrite their evaluation reports in a plain language format to be able to share back the results of the evaluation with those involved. It was noted that this is not common practice, and that evaluations are often presented back to communities in a format not always useful for community members.
A participant shared that there is no "one-size-fits-all" approach. Every community is different, with specific factors, specific contexts, and specific populations. PS needs to allow for flexibility to allow evaluations to reflect communities as they are.
Participants explored an evaluation process in which communities could present program impacts and "evaluations" in a format that makes sense for the program, participants, and community members. One participant shared that they saw the impacts of their program rippling out from the center in four directions – like a medicine wheel – but added that there is currently no way to demonstrate and submit that visual representation as part of the evaluation. Other participants added that visual, photo, video, or other recorded submissions could have a significant impact on how programs are evaluated. It was noted that writing "the child was happy" in a report is much different than seeing the sparkle in their eye through a video of the program. It was also noted that the written reporting format is not accessible for many people, and that there must be better ways to communicate the impact of a program (e.g., through stories) that do not require authoring multi-page reports.
Finally, participants discussed challenges that arise from engaging non-Indigenous evaluators to complete evaluations of programs in Indigenous communities. Several participants shared experiences where a non-Indigenous evaluator skewed the results of an evaluation, because they didn't have the first-hand experience in the community and were not able to communicate the nuance of how the program impacted the community. In many cases, it was explained that external evaluators will not take the time to visit the community, participate or observe the programming, and will rely solely on program activity reports from the community to complete their evaluations. It was expressed that these evaluations cannot be reflective of the actual impacts that a program has, because it is not possible to capture all the salient indicators without witnessing the program for oneself.
When it is possible to have an Indigenous evaluator, especially someone with a relationship to the community, participants expressed that the experience is very different. Indigenous evaluators tend to complete the evaluations with the intention of sharing the results with community members, and not just PS. Evaluations are often communicated back to the community and evaluators often engage with leadership, youth, Elders, and program staff. Participants expressed that Indigenous evaluators are able to do the work while humanizing the projects and the people they serve. They added that Indigenous communities have long been overstudied by non-Indigenous researchers, and that it is a welcome change to have an evaluation that doesn't make a person feel like a "rat in a cage".
Knowledge mobilization
Several tables explored the idea of evaluation, including who evaluations are meant to serve, and how the information is meant to be used. Participants expressed that the current evaluation system is designed to meet the needs of government bureaucracy – responding to questions around program uptake through lenses defined by priorities outside Indigenous communities. In many cases, evaluations are not designed to help communities collect information that is relevant to them, share best practices and lessons learned between communities, and serve as a tool to help strengthen or grow programs to better meet the needs of communities.
However, participants also recognized that evaluations could be restructured to become a tool to help capture and share knowledge in a way that is reflective of and useful to communities. Participants added that there is an opportunity to provide post-program funding to facilitate knowledge mobilization: capturing lessons learned and best practices, and creating tools to communicate these back to the community involved and to other communities. Participants noted that this is only possible when there are more people at the table and more tools to collaborate. Evaluations should be built to include everyone involved – including youth, Elders, and other community members – to increase the scope of the evaluation and to develop evaluations that are reflective of their experiences. Evaluations should be used as an opportunity to bring people together, mobilize the collective knowledge and experience of those involved in a program, and outline steps to strengthen the program going forward.
Participants also noted that context is extremely important, noting that what might work in one community may not necessarily be successful in another. It is important to capture best practices and lessons learned, but also not to assume that these experiences can be seamlessly transferred from one community to another.
Participants suggested an innovative model where PS, or another government department, brings together project managers from various communities at the time when programs are being funded to discuss the overarching objectives of the funding program and how each community is planning to implement their programs. After programs are complete, or at a time when an evaluation is deemed necessary, those same project managers are brought together once again to discuss the impacts of their program, share best practices, and discuss how programs – at the federal or individual community levels – can be adjusted to better meet the needs of communities. They noted that this could replace traditional evaluations and could actually serve as an opportunity to strengthen relationships between program staff in communities and administrators at the government level.
"If the community is in control of their information, the feedback loop is closed from the beginning of the process. The community is empowered, has ownership of their info and evidence. There is no need to 'present' or 'disseminate' the results back to community, because the community was involved from the get-go and has power over that knowledge."
Participants emphasized that taking information from Indigenous communities requires trust, transparency about where the information is going and how it’s going to be used, and time to cultivate strong relationships in order to conduct the process ethically and thoughtfully. Participants highlighted that trust, in particular, is one of the most important pieces of evaluation. People need to be able to trust that information shared with PS is used in a transparent way, to strengthen programs or better support communities, and that communities will have the flexibility to capture what’s relevant and essential. It also means trusting in the benefits that conducting an evaluation through an Indigenous lens could bring, and trusting that the successes communities are reporting demonstrate program impact.
Impactful community-based programs and evaluations
The third discussion area focused on the impacts that crime prevention and community safety programs have in Indigenous communities, how communities know when programs have had an impact, and identified some of the indicators for successful programs. Participants were encouraged to think outside of PS evaluation processes and to reflect on the real impacts that programs have and how communities can measure this.
The question presented on the screen during the event to help guide participant discussions was: What do communities currently do to evaluate community programs and initiatives?
To help further guide discussions, table facilitators were equipped with additional probing questions that included:
- How do communities know when a program or a plan, particularly one related to crime prevention and/or community safety, is implemented effectively?
- What does the successful implementation of a program look like?
- What evaluation processes exist in communities, and how are communities involved in overseeing the implementation of a program or a plan?
- How do you think the way crime prevention or community safety programs are implemented impacts their effectiveness – in what ways do programs force an outcome that is, or is not, desired by the community?
- Based on that, what would the successful implementation of these programs look like?
- How do communities determine that a plan has been adopted and implemented, and what metrics are in place to measure the impacts?
Participants highlighted that many crime prevention and community safety projects funded through PS have had positive impacts on communities, sometimes on individuals within a community, and sometimes impacts that take longer to measure. Participants talked about how knowing when a program has had a positive impact on a community might differ from one community to another, and that evaluations should be flexible enough to track impacts in a way that makes sense to communities. Finally, participants talked about gaps that exist between the resources available to communities and the resources needed to complete evaluations in a reflective and ethical manner.
Positive impacts in communities
During the discussions, participants shared stories of successful community-based programming, such as at-risk youth who reconnected with their culture and community. They also noted an increased sense of connectivity and safety within communities, and even cited examples of projects that continued to have positive impacts long after the program's funding had ceased.
A participant likened the balance between programs in a community to a tornado. When programs are all funded and work together, balanced, it is like the calm at the center of the storm, you can look up to the Creator and see light. When even one program is lost, the balance is lost, and you're thrown into darkness and chaos.
Participants added that these impacts are often hard to track, difficult to translate into written reports, and can take shape in many ways. In many cases, impacts are difficult to track at a macro level, when in fact the most positive and long-lasting impacts happen at the individual level. However, evaluations are not structured in a way that allows these impacts to be communicated.
Participants highlighted that the impacts seen within a community may be the result of several programs coming together, or of how an individual chooses to use a network of programs and services on their own. For example, a crime prevention program may have provided resources to better inform young people about crime and crime prevention, but when paired with accessible childcare funded through an education program, and accessible mental health care provided through another program, those young people have more capacity to use and apply the resources shared through the crime prevention program. While these impacts might be profound and easily noticeable among community members, evaluations may target other indicators, meaning the impacts that occurred are lost in the evaluation.
It was stressed that there needs to be a shift in how "success" is discussed – recognizing the ripple effects that programs can have on other areas of people's lives, and centering programs that foster a sense of pride and passion in culture, language, and Indigeneity. The challenge is not with identifying the right metrics and indicators for success, or with identifying programs that have a greater impact on communities; rather, it is with creating the space for communities to define their needs and expectations and to evaluate and report on program impacts in a way that is reflective of those needs and expectations. Participants added that this is not something that can come from government, but rather something that must be defined by Indigenous communities themselves.
"I like to think we're making an impact, especially when I see the small successes."
Flexibility in evaluation reporting
Participants emphasized the importance of communities retaining ownership over their information, evidence, and data. One of the key questions identified across several table discussions was around who evaluations were meant to serve and who benefits. Participants emphasized that community-driven evaluation processes should strive to have evaluation planning built into funding, have evaluators engaged throughout the program, have frequent communication and transparency, and seek to communicate the truth of how programs are impacting communities.
One example shared of a collaborative and flexible evaluation saw program participants and Elders take photos throughout the community, and then come together to discuss how they felt about the photos. These were included as part of the evaluation.
Participants added that evaluations should be reflective of the community experience and that space must be made for communities to define success, implement programs in ways that will have the most benefit, and come up with their own solutions to create impactful change. It is critical that the questions set out in the evaluation track data that is important to those in the community. If there is data that PS requires in order to report to other government departments, PS should determine how to present community data in a formalized reporting process.
It was also noted that there should be more emphasis on in-person engagement as a part of evaluation processes. There is a need for evaluators to go out and talk to community members to have a better understanding of what is happening, what is working, and what additional supports could be provided. Participants added that this engagement requires a sufficient travel budget, as these costs can have a significant impact on program delivery if not already incorporated into the planning process. It was also suggested that PS should find ways to be involved in conversations in communities to hear first-hand stories from people experiencing those programs. This in-person presence will help build trust in evaluators, government administrators, and the evaluation process. However, participants stressed that this must be done from a place of openness and trust building – and cannot be done in a way that makes community members feel as though their work is being monopolized or stolen.
Ultimately, participants expressed that, regardless of the format it takes, evidence and data should be grounded in tradition and in Indigenous ways of knowing. This will take shape in a variety of ways, reflecting the needs, priorities, culture, and language of each community – but it is important to have enough flexibility built into evaluations to allow for these types of data to be reportable.
Questions participants thought should be included as a part of future evaluations:
- Has the community been engaged in the process of Indigenizing and localizing the program?
- How was the community involved in developing/adapting this program?
- How was the program adapted to be relevant to and informed by the community?
- If the program was considered a ‘failure’, what did you learn from it? What can you take from it?
- How has the progress of the program been shared back with the community? How has that influenced further directions or modifications to improve the program's effectiveness?
- What are some lessons learned from the perspective of the community, successful or not? What worked and what did not work?
Resourcing and capacity
Participants discussed the varying levels of resourcing and capacity that can be dedicated to evaluations across different communities. For example, some communities are well-equipped with data centres and have their own ethics review boards in place, while others have not yet reached the evaluation stage in their programs. This means that in some cases, communities have the resources required to respond to evaluation reporting requirements with ease, while in others these become more significant undertakings for staff who might not have the experience required to complete the evaluations. Several participants added that this creates a sense of competition between communities and a fear that a program that is otherwise successful might no longer receive funding.
Participants added that funding plays a significant role in determining a community's capacity for evaluation, with some communities having access to more resources than others. In several cases, programs are already underfunded and can find it challenging to meet all of the program's objectives within the allocated funding envelope, making evaluation an afterthought that is sometimes completed by staff funded outside of the program.
It was also expressed that the availability of resources varies between communities. Some communities can lean on external evaluation expertise, while others rely on local resources to support their evaluation efforts. Throughout the conversations, participants highlighted that a more flexible approach to evaluation would not only allow communities to provide feedback that is reflective of the program and its impacts, but would also allow them to evaluate programs in a way that is respectful of the resources and capacity available to them and their staff.
Moving toward better evaluation systems
The fourth, and final, discussion area focused on the steps that are required to move existing evaluation systems towards a more flexible, culturally relevant, and reflective approach that is owned and guided by Indigenous communities themselves. Participants were asked to think about some of the changes that are needed to increase flexibility in the system, as well as the supports that are needed to implement these changes.
The question presented on the screen during the event to help guide participant discussions was: What kinds of supports are needed for communities to implement effective crime prevention and community safety evaluations?
To help further guide discussions, table facilitators were equipped with additional probing questions that included:
- What would an evaluation system for crime prevention and community safety programs under Public Safety that is reflective of the values and principles we've discussed look like?
- What kind of reporting would happen?
- How often would there be evaluations and reporting, and what kind of information is being tracked by whom?
- What kinds of support should be in place to ensure capacity at the community level to ensure effective evaluation?
- What kinds of support should be in place at the Public Safety level to ensure effective evaluation?
- How should crime prevention and community safety programs be evaluated?
- What research methodologies might be most culturally relevant and ethical for Public Safety to use when engaging with Indigenous communities?
- What would a culturally appropriate and ethically sensitive research relationship look like between PS and Indigenous communities in practice?
- Can a Western-informed ethical approach be adapted so that it is culturally appropriate and ethically sensitive to Indigenous contexts, and if so, how?
Participants highlighted that the work to strengthen evaluation processes and to incorporate Indigenous worldviews is not something that needs to be done by Indigenous communities, but rather is something that PS will need to undertake to be able to make change. Participants discussed the need for increased funding and support, including support for evaluation processes and enhanced evaluation resources at the community level, so that communities are better equipped for evaluations. Participants also discussed the critical importance of centering community-driven and community-owned evaluation processes, emphasizing the need to incorporate more voices into the evaluation process and to find ways to be more flexible in how evaluations are completed. Finally, participants expressed a need to see more transparency and accountability on the part of PS, including work to build stronger relationships between program administrators and the communities that they serve.
Providing support
Throughout the conversations, participants emphasized the importance of providing adequate, flexible funding for program evaluations. This includes adequate funding for the human resources required for evaluations, but also support for community-driven evaluations, including – but not limited to – funding for equipment to capture events and program feedback, travel funding for evaluators and government staff to participate in programming, funding for larger gatherings for the purposes of evaluating programs, and funding to ensure that evaluation results are shared back with the community. Several participants reiterated the importance of in-person gatherings to evaluate and redirect core community programming, and added the importance of simple things – like food – at these events as a part of bringing people together and fostering open communication. It is crucial to have adequate funding for feasts and other community events that bring people together to foster a sense of community, culture, and knowledge sharing.
A participant shared that good evaluations should be like fasting: you remove all of the distractions and create the time and space for meditation, reflection, quiet, and judgement. Evaluations that are reflective of Indigenous communities will have enough flexible funding to support these types of evaluations.
Participants explained that it is critical to ensure that resources and capacity are made available for analysis based on community needs and priorities, based on the program itself, and rooted in the important contexts of that community. Evaluations should be viewed as a learning opportunity to inform future work, rather than a check-mark exercise tied to funding opportunities. It is important that these evaluations do not leave communities in a worse position than they were in before – for example, by spreading community resources too thin, requiring that evaluations take place during significant cultural or community events, or seeing a program cut. Participants want evaluations to prove useful tools for the communities themselves, and for communities and their programs to shine brighter after an evaluation has been completed.
Participants also noted the importance of providing adequate time for reflection after a program has ended. Allowing time for introspection allows for a more thoughtful and deliberate evaluation process. Participants suggested that building time for this reflection into the evaluation process allows for the evaluation to occur in a more natural way and for the evaluation to capture some of the longer-term impacts, without the pressure of stringent reporting requirements and deadlines. One participant shared an analogy that things are often calm in the eye of the storm – where there is balance – but if that balance is lost, chaos ensues.
In addition to adequate support for evaluation processes, participants discussed that investments in education and training are crucial to achieve meaningful progress towards reconciliation with Indigenous peoples. In several conversations, participants highlighted key areas where investment is needed, including training and education for evaluators and training for PS staff in anti-racism, decolonial approaches, and understanding the basis for evaluation. Participants emphasized that there is a need to invest in training for PS staff, particularly in better understanding the differences and nuances between First Nation, Métis, and Inuit communities. It is critical to recognize, and incorporate into programming and evaluation processes, that there is no "one" Indigenous community, and that each group has their own needs, priorities, and ways of knowing.
"We need to unleash existing capacity – everyone is an evaluator, we just do it in different ways, and need the resources to do it."
Participants also highlighted the need for resources to help communities better understand evaluation processes. Examples included programs where communities could meet with one another to share best practices and lessons learned, shared training materials, a list of evaluators with experience evaluating Indigenous programs, and basic training guides for evaluations. Communities are mandated to do an evaluation but may not have the necessary knowledge, resources, or tools to carry it out effectively. Guidance and knowledge-sharing, especially before the evaluation starts, can help communities better understand what is required and how to carry out an effective evaluation.
Community-driven and community-owned
Participants emphasized that community-driven program evaluation processes are crucial for ensuring that evaluations are owned and run by communities. It was added that evaluation processes should look different depending on the community, as each community should be able to define the approach, inputs, and deliverables on their own in a way that best reflects the programs and the impacts that occurred.
There is an ethical responsibility towards stories, and the people that carry those stories. There is an obligation to remain true to the story, and to carry the story forward with care and respect.
One key takeaway across several conversations is that it is essential to acknowledge the responsibility of accurately, correctly, and respectfully sharing community stories during these evaluation processes. Ethically, there is a responsibility to remain true to the stories, and to remain accountable to the people that shared them. Evaluators should be able to hear the stories, record them for reporting, and present them back to community members for approval before they travel up the ladder and into other reporting structures.
Another key takeaway from participants is that Indigenous ways of knowing must be prioritized in evaluations, and that Indigenous communities should determine what is considered a success. Participants stressed that self-determination should be centered at the core of all evaluation processes. While this may result in communities reporting successes differently, which could require additional effort from PS to consolidate impacts across multiple communities, it is crucial to establish an evaluation system that genuinely reflects Indigenous communities and the needs and priorities of their members.
"Our success has to become your success."
It was also noted that building evaluation processes rooted in community, and creating a forum for communities to connect with one another, are important. This creates a space where communities can discover what others are doing, learn from each other, and share resources, knowledge, and expertise around evaluations. Participants highlighted that creating the space for communities to come together to reflect on priorities and to share successes can go a long way toward building trust, strengthening relationships, and promoting positive change. Several tables suggested organizing events that bring together project managers from various communities to talk about their objectives and approaches and, finally, to evaluate their impacts together. This not only creates a dedicated space and time for evaluations, with dedicated resources, but encourages PS, evaluators, and program managers to come together, strengthening relationships and building a sense of trust and accountability.
Finally, it was highlighted that it is important to ensure that third-party evaluators have the cultural knowledge and experience to engage with communities effectively. This includes providing adequate training to evaluators, but also includes ensuring enough resources for external evaluators to spend time with communities and enhancing capacities within Indigenous communities so that communities do not need to rely on external resources to complete evaluations. When external evaluators are involved, it is critical that the evaluators are there to serve the needs of the community first and foremost, and that time is taken to develop a relationship between the evaluator and the community they are serving. Participants added that developing evaluation models that are reflective of specific Indigenous communities can be challenging, but that building trust with communities will always remain essential.
Partnership and accountability
Across several tables, participants talked about the need for PS to strengthen its position as a partner to communities, and not simply as an administrator of government funding. Participants reiterated that this is not an exercise in empowering communities, but rather in repowering communities to lead evaluations in the way they always have. The need for trust building and the strengthening of relationships between PS and other government agencies and Indigenous communities was identified as being of paramount importance across several tables and across several key discussion areas.
Participants expressed that being a better partner means prioritizing accountability and transparency. For example, when a story is shared with PS in the context of an evaluation, that report should be shared back with the community to ensure that the story is being represented fairly and accurately. This time needs to be built into evaluation processes to ensure that community ownership of the process, and accountability to the community throughout that process, is prioritized.
"Indigenous communities know what they need and what they want. Allow them the resources and space to do it."
Participants also highlighted the need for greater continuity and communication from PS. One participant explained that they had been in contact with up to seven advisors at PS during a four-year program cycle, which led to significant delays and other challenges when specific items were not followed-up on after staff turnover.
Finally, participants reiterated the importance of PS staff and advisors taking the time to visit communities, getting to know the project managers they work with and the people impacted by their programs, and experiencing first-hand the impacts of those programs in Indigenous communities.
"Just come visit."
Conclusion and recommendations
The “Bridging Perspectives: Indigenous Ethics and Evaluation” event brought together representatives of Indigenous communities, project managers, evaluators, and Indigenous ethics experts to explore important challenges, key opportunities, and tangible steps to strengthen PS reporting processes, and to work to create evaluations that are reflective of and helpful to Indigenous communities. Over two days, participants explored several key considerations, shared stories, offered important nuance, and identified key recommendations for Public Safety to consider when rethinking evaluation processes.
Based on what was heard throughout the engagement process, several longer-term recommendations emerged to help enhance evaluation processes and make them more ethically and culturally appropriate.
Overarching evaluation structure
Participants explored the challenges related to evaluations as they currently exist, the onerous reporting requirements often involved, and the difficulties communities face trying to communicate their successes through these existing models. They suggested that existing evaluation processes should be less rigid, as they currently overlook positive impacts that are being felt by communities. Communities know the positive impacts that are being felt in their area, but often are unable to capture this crucial information in current evaluation processes. Overall, there should be a greater willingness to revamp the evaluation system in a way that reflects the priorities and lived experiences of the Indigenous communities it serves.
Participants reiterated the need for a distinctions-based approach from the outset that recognizes the various needs, interests, and priorities of individual First Nation, Métis, or Inuit communities. There is not one answer to how to strengthen Indigenous evaluations and reporting, but it is possible to strengthen evaluations at the community level.
The role of Public Safety Canada
Participants highlighted that it is the responsibility of PS to do the work of restructuring how evaluations work and to build flexibility into the model to encourage communities to evaluate programs in ways that best reflect their needs and priorities. Indigenous communities need to be able to define a program’s success and how best to communicate that success in a way that makes sense to them. It should be the responsibility of PS to take that information as it comes and translate it into formal reporting structures that may be required.
Participants encouraged PS to spend more time on the ground in communities, getting to know the people they work with and the communities they serve, and to witness the impacts of the programs they are supporting for themselves. If PS is to position itself as a partner, and not simply as an arbiter of funding, it must be willing to spend the time necessary to build stronger relationships with partners in Indigenous communities.
Participants expressed a need for more practical supports from PS to support evaluation processes. This includes providing funding for large gatherings or community feasts where evaluations might occur; funding for evaluators, program staff, and PS administrators to meet in-person and to build stronger relationships; and funding to bring communities together to learn and share experiences with one another.
Community-led and owned processes
Participants made clear that evaluations must be led, owned, and driven by Indigenous communities themselves. There is a need to trust communities to implement and to evaluate programs in ways that best reflect their needs and priorities, and allowing communities to communicate back program impacts in the community’s own ways is an important step. Indigenous communities must be in control of their data and PS must be accountable to the communities – and, more importantly, the individuals – who have shared their stories as a part of these evaluations.
Participants expressed that there is a need for more training for program staff to be involved in evaluation work, creating the space for evaluations to be community-led. This requires increased training and resources at the community level to strengthen evaluation skills and to have evaluations led by those who were leading the programs themselves.
It was also noted that evaluations should be shared back with community members, so communities can see how the information is being used by government once those evaluations have been submitted. There were concerns that once evaluation results are shared with government departments, they are no longer in the control of a community and there is no sense of how that information is being used.
Flexible reporting approaches
Participants suggested that there is a need to allow for more flexible reporting approaches, including creating space for media such as photos, videos, or other recordings to serve as part of evaluations. Participants expressed a critical need to invest in more in-person gatherings as a part of a program’s funding cycle and, more importantly, as a part of the evaluation process. This creates an opportunity for program managers, evaluators, and PS to evaluate programs together, reflect on changes that are needed, and build stronger relationships as partners.
Greater conversations with Indigenous communities
Participants suggested that there is a need for more conversations, in more places, and with more people to better understand the implications of evaluations, Indigenous ethical approaches to evaluation, and how best to position PS evaluations to support Indigenous communities. Participants highlighted that not all participants in this session were Indigenous, and that there was little Métis and no Inuit representation at the event. Participants encouraged PS to have similar conversations in communities across the country to make them more accessible to a broader community of individuals – not only those who were invited to this event, but also those who were not, including community leadership, youth, and Elders.
Creating tangible change
Finally, participants stressed that these conversations cannot and should not continue unless PS is willing to invest the time, effort, and resources to work towards making change. Several participants expressed that they are not interested in being consulted on these issues time and time again, and that, without real change, distrust in PS's genuine willingness to change will grow.
"Bridging Perspectives" is not an easy thing to do and is not something that can come from a single event over two days – or a report written afterwards. Rather, it will come from building trust, building relationships, and creating the space in which stories of success can be shared in a way that makes sense. Impacts in a community are just the compilation of countless individual stories that come together to make tangible change across an entire community. Perhaps the question is not how best to evaluate programs, ensure success, and track the right metrics, but rather how to build a space for the right stories to shine through.