The Digitization of National Security: Technology, Transparency & Trust

Members of the National Security Transparency Advisory Group (NS-TAG) at the time of this report's publication:

John Ariyo, Executive Director, Scarborough Charter Secretariat at University of Toronto
Chantal Bernier, Counsel in the Privacy and Security practice at Dentons Canada
Amira Elghawaby, Canada's Special Representative on Combatting Islamophobia
Mary Francoli (co-chair), Associate Dean and Director at the Arthur Kroeger College of Public Affairs
Daniel Jean, Former National Security and Intelligence Advisor to the Prime Minister of Canada
Stéphane Leman-Langlois, Professor of Criminology at Laval University and Member of the Centre for International Security at the Laval University Graduate School of International Studies
Rizwan Mohammad, Researcher, Civic Engagement Specialist and Advocacy Officer with the National Council of Canadian Muslims
Jeffrey Roy, Professor at the School of Public Administration at Dalhousie University's Faculty of Management
Patrick Boucher (government co-chair), Senior Assistant Deputy Minister of the National and Cyber Security Branch at Public Safety Canada
Jillian Stirk, Former Ambassador and Assistant Deputy Minister at Global Affairs Canada
Lorelei Williams, Indigenous Activist and Founder of Butterflies in Spirit

Executive Summary

The National Security Transparency Advisory Group (NS-TAG) was created in 2019 as an independent and external body. Our role is to advise the Deputy Minister of Public Safety Canada, and the rest of the national security and intelligence community, on steps to infuse transparency into Canada's national security policies, programs, and activities in a manner that will increase democratic accountability and public awareness.

Our consultations over the past four years have repeatedly returned to the changing technological landscape. Not only has the quantity of information and data available to both the national security community and to Canadians exploded, but expanded digital capabilities have brought new opportunities and challenges that must be navigated very carefully, with transparency in mind, so as not to further erode the fragile trust Canadians have in national security agencies.

While digitization was certainly prevalent in the NS-TAG's first three reports, this fourth report explores it in greater depth. That said, it must be recognized that digitization is a huge and complex topic, and it is difficult to capture such complexity in the context of a report. As such, the report does not purport to be exhaustive. Instead, it offers a sampling of the risks, challenges, and opportunities related to digitization that were raised by multiple stakeholders over time.

This report is divided into three sections. The first is an introductory section which aims to contextualize digitization as it relates to transparency, trust, and openness. The second touches on some of the main risks, challenges and opportunities stemming from national security's expanding digital landscape. Finally, the third section looks back at the Government of Canada's National Security Transparency Commitment and provides some specific recommendations for national security agencies to consider for adoption.


In our first report, published in 2020, we offered a survey of the state of transparency in Canada's national security community, and highlighted areas for future improvement.

In our second report, published in 2021, we laid out principles related to the definition, measurement, and institutionalization of transparency in the national security and intelligence community.

In our third report, published in 2022, we explored how national security and intelligence institutions engage with racialized communities and offered various recommendations to help bridge the trust gap that exists between Canada's national security institutions and Canadians.

1. Introduction

As digital technologies become more pervasive across society, governments are challenged more than ever to respond to new threats and heightened risks on the one hand, while seeking to leverage new opportunities afforded by digitization on the other. Essentially, the digitization of national security responds both to the exponential increase in technological capacity that can enhance the protection of national security and to the necessity of updating protections against current and future cyber threats that may come with emerging technologies such as artificial intelligence, internet scraping, facial recognition, and predictive analytics, to name a few. Within this rapidly evolving and dynamic context, the overarching purpose of this report is to examine how openness and transparency are essential enablers of innovation, accountability, and public trust as national security and digitization become ever more intertwined. In doing so, we seek to better hold the Government of Canada to account for respecting and applying its own principles embedded in the National Security Transparency Commitment.

At the heart of national security's still nascent digital transformation is a massive expansion of information and data both within the public sector and across an interconnected and online society. With mobile devices and digital platforms driving this expansion, many aspects of traditional government are under strain, notably the ethos of information control and secrecy. As we first observed in our inaugural NS-TAG report:

In the national security community, there is a dominant reflex to keep information as secret as possible; the default position is usually to protect information. While this is sometimes necessary, efforts to improve transparency need to be accompanied by changes to this culture of secrecy…. This lack of transparency can have unintended consequences, as the information gap is more likely to be filled with misinformation or raise suspicions.Footnote 1

Since the publication of our first report, a report of findings from an investigation by the Office of the Privacy Commissioner of Canada (OPC) brought to light the RCMP's previously undisclosed use of Clearview AI's facial recognition technology.Footnote 2 This confirms our concerns with respect to transparency in the expansion of the technological capacity of national security agencies.

Online misinformation and hate, electoral interference, and the growing number of privacy and cyber-breaches across all sectors are indicative of this perilous environment in which data and information are weaponized in ways both large and small. Indeed, in late August 2023, the Canadian Centre for Cyber Security (CCCS) sounded the alarm, warning that criminal hackers are especially targeting education, energy, utility, and health-care facilities.Footnote 3 CCCS further warned that nebulous online actors will 'very likely pose a threat to Canada's national security and economic prosperity over the next two years.'Footnote 4

In response, the Government of Canada has sought to bolster its own cyber defences while seeking to develop new offensive capabilities. As we have highlighted in prior reports, most security agencies and entities are significantly expanding their digital and data-driven systems. Artificial intelligence (AI) has likewise become a growing priority for governments around the world, and the Government of Canada is no exception. Here too, our prior report examined the nexus between national security and AI, and the importance of transparency safeguards as such systems are more widely deployed by governments and industry. A lack of such safeguards will further erode public trust; digitization must be open by design.

1.1 Public trust and government response

According to the 2023 Edelman Trust Barometer, average trust in NGOs, business, government, and media declined from 54% in 2022 to 52% in 2023.Footnote 5 With respect to the police and the federal Parliament, Statistics Canada's 2023 survey shows that Canadians have a high level of confidence in the police (67%), though racialized communities report significantly less confidence, and a generally low level of confidence in Parliament (36%).Footnote 6 On emerging technologies such as artificial intelligence, one 2023 global study of trust levels conducted by KPMG and the University of Queensland found most citizens divided as to whether the prospective benefits outweigh the likely risks, while just 40% of respondents across all countries believed that current regulations, laws, and safeguards are sufficient to make AI use safe.Footnote 7 A 2022 Ipsos survey of familiarity with and trust in AI found Canadians more skeptical about AI than the global average. Only a third (34%) of respondents agreed that they trust companies that use AI as much as they trust other companies.Footnote 8

At the very least, it is encouraging that many governments recognize the need for a more open and outward mindset in terms of internal cyber-readiness and digital resilience. As a case in point, on October 30, 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.Footnote 9 It includes measures for "Ensuring Responsible and Effective Government Use of AI." Canada is mentioned as one of the countries the U.S. consulted on AI governance frameworks. Addressing the specific need for transparency in Government use of technology, the May 12, 2021 Executive Order on Improving the Nation's Cybersecurity observes "In the end, the trust we place in our digital infrastructure should be proportional to how trustworthy and transparent that infrastructure is, and the consequences we will incur if that trust is misplaced."Footnote 10

The Government of Canada has adopted its own internal tools governing its development and application of artificial intelligence. They include the Treasury Board Secretariat's Directive on Automated Decision-MakingFootnote 11 and a suite of policies under the guidance on Responsible use of AI.Footnote 12 The latter includes the Algorithmic Impact Assessment tool, which addresses, among other concerns, the need for "planned transparency measures to communicate information about the initiative to clients and the public".Footnote 13 As fair and transparent government use of artificial intelligence cannot be dissociated from its ethical development in the private sector, in June 2022 the Government of Canada introduced Bill C-27, proposing, among other legislative amendments, the Artificial Intelligence and Data Act, still before the House of Commons.

Despite such laudable sentiments and initiatives, the Canadian Government has also acknowledged limited progress and systemic challenges, and that much more must be done. As governments therefore seek to expand their digital capabilities (as many entities, such as CSE, CBSA, and the RCMP, have explicitly committed to doing), such efforts give rise to important questions of means and consequences (intended or otherwise) that determine effectiveness and accountability. And it is here that openness and transparency are critically important.

In addition to profound consequences for individual privacy (a key theme of our prior reports and this one as well), there are many ways in which digitization and government's increasingly data-driven processes challenge transparency. As just a few important examples:

1.2 Openness on digitization

In addressing such challenges, while also seeking to recognize the critically important opportunities afforded by digital and data-driven innovation, transparency matters - and it must be viewed as central to the pursuit of digital governance and national security reforms. The new internal and outward partnerships called for by Governments themselves necessitate a commitment to both systemic openness and public engagement in order to reverse the erosion of trust that can greatly impede government's abilities to innovate and adapt, particularly given the accelerating pace of technological change.

Openness and engagement are also essential to mitigating potential biases inherent in digital systems that carry heightened risks for the most vulnerable and stigmatized communities (a key theme of our prior report). Within the public sector as well, a diverse and inclusive workforce is an essential enabler of digital innovation, as Canada's national security agencies have themselves recognized, yet struggle to fully address.

Building on this first introductory section, this report is organized in the following manner:

Our recommendations are formulated with the Government of Canada's own transparency principles in mind - as these principles were devised to underpin the National Security Transparency Commitment.Footnote 14 Accordingly, it is our hope and intent that this report's discussion and guidance can help strengthen the Government's national security readiness and digital prowess by better leveraging transparency as a basis for continuous innovation, systemic accountability, and public trust.

2. National security and the expanding digital landscape

Transparency can mean different things to different people. As we discussed in our second report, released in November 2021, transparency can be interpreted in a narrow and passive way, referring to a decision by the government to release, or withhold, certain information or data. In that report, we strongly encouraged national security and intelligence institutions to reject such an approach. Instead, we recommended a broader and more proactive approach, based on recognizing the importance of exchanges between the government and Canadians. According to such a view, national security and intelligence institutions have both the responsibility to be, and an interest in being, open. Under such an approach, openness extends beyond the one-way flow of information and data to include engagement where dialogue with Canadians is continuous (and not only at times of crises), dynamic (and not unidirectional), and comprehensive.

This section explores the intersection between national security and the expanding, dynamic digital landscape. As was noted in the previous section, this discussion is not exhaustive. Rather, it constitutes a scoping and sampling of contemporary risks, challenges, and opportunities facing the national security community as a result of digital technology. The fast pace of change, coupled with long and complex histories, renders a more detailed overview challenging. Similarly, practical limitations, such as the time available to the NS-TAG for data collection for each report, further complicate a broader and deeper study. That said, the NS-TAG did hear from a number of community members representing media, the national security community, academia, civil society, and the private sector (see annex). From these engagements, a number of themes emerged. These are reflected in the following subsections:

Each theme is considered in turn and forms the foundation for the recommendations offered in Section Three.

2.1 Cyber-security and privacy

As the world grows more reliant on digital infrastructure, cyber-security is a growing concern across all sectors. Cyber-attacks become national security concerns when they are directed at our critical infrastructure. For the Government of Canada, the twofold challenge is to ensure cyber-defences are in place without intruding upon privacy in cyber-space. Risks to privacy are heightened as offensive cyber-capabilities expand and become more pronounced as tools of national security and intelligence operations, operations that often join Canadian authorities and their allies in facing shared external threats.

With respect to the governance of cyber-security, and the importance of transparency in addressing some of the key challenges and opportunities facing the public sector, three specific sub-themes will be examined here: i) digital infrastructure and data capacities; ii) external collaboration and engagement in forging collective resilience across sectors; and iii) electoral interference and democratic stability.

A) Complexity of digital infrastructure and transparency

The Government of Canada's National Security Transparency Commitment seeks to bring more openness to the workings and consequences of a national security apparatus that is increasingly dependent upon digital technology and ever-expanding capacities for data gathering and analysis. Within an organizational and policy community that has traditionally functioned in an insular and secretive manner, the complexity of digital infrastructure, which adds to the opacity of national security operations, can reinforce such tendencies and further stymie oversight, accountability, and learning and innovation.

By contrast, systemic transparency can be an important enabler of the trust that is essential to both the public servants relying on such infrastructure and the stakeholders and citizens impacted by its functioning and usage.

The complex question then becomes: how best to ensure that digital infrastructure is both trustworthy and transparent? One key challenge is determining an appropriate balance between proprietary solutions and tools (digital infrastructure where code is carefully guarded) on the one hand, and open-source principles and solutions on the other. As governments seek to develop ever-more specialized cyber-defences internally, an excessive reliance on external vendors with protected methodologies and intellectual property risks limiting collective understanding and adaptability within government, while also further shielding mechanisms for oversight and review. As one response to such risks, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has developed an 'Open Source Software Security Roadmap' meant to articulate how the agency will enable the secure usage of open-source software within the federal government and support a healthy, secure, and sustainable global open-source software ecosystem.Footnote 15 From a transparency perspective, this is a significant development.

While open-source software can help to improve transparency, accountability still needs to be carefully considered. There is always a risk that the software industry will leverage the migration to cloud computing as a means of shedding accountability through sophisticated contractual disclaimers.

Another important, growing, and closely related challenge of the digitization of national security is privacy. As national security authorities seek to gather, analyze, and potentially share personal data drawn from a myriad of internal and online sources, the opacity of such processes must be a concern. Publishing protocols and guidelines for data usage and privacy protection is a crucial aspect of openness and accountability. The Canada Border Services Agency's (CBSA) current efforts to facilitate more seamless and digitized border crossing and protection mechanisms offer an example. The NS-TAG has been pleased to engage with the CBSA in its efforts to proactively engage stakeholders on matters of data usage and protection.

Canada is not alone in digitizing border control; Singapore has recently announced plans to deploy biometrics in place of passports as a basis of more automated passport-free immigration clearance. As one CNN article explains: "Biometrics will be used to create a 'single token of authentication' that will be employed at various automated touch points – from bag drops to immigration clearance and boarding – eliminating the need for physical travel documents like boarding passes and passports."Footnote 16

These technological advances underscore the urgency of modernizing the public sector's obligations with respect to the protection of privacy. We join the Office of the Privacy Commissioner of Canada (OPC) in supporting the Department of Justice's proposal to modernize the Privacy Act, including the introduction of mandatory breach record-keeping, reporting to the OPC, and notification of individuals of any breach of personal information that creates a real risk of significant harm, as already exists for the private sector.

As such service innovations grow, the challenges of transparency and trust with respect to the cyber-underpinnings of digital infrastructure become more complex and consequential. Policy objectives, governance mechanisms, and digital readiness must all be aligned and subject to continual review and adaptation.

B) Transparency collaboration with the private sector

As governments struggle to balance openness, complexity and secrecy within their own organizations, there is also a public interest in determining the appropriate level of transparency that should apply to the workings of the private sector. Whether financial, retail, travel, entertainment, or medical services, cyber-security underpins more and more of everyday life, giving rise to escalating risks from insufficient readiness and defences as well as more nefarious threats. CCCS has underscored how such challenges are not only of concern for individual organizations, but also represent wider national security threats given the interdependencies across systems and sectors, and the resulting instability that can ensue.

Building on the 2018 National Cyber Security Strategy, the Government of Canada recently introduced An Act Respecting Cybersecurity, amending the Telecommunications Act and making consequential amendments to other acts (Bill C-26), which seeks to address such concerns by establishing a framework "to better shield systems vital to national security and give authorities new tools to respond to emerging dangers in cyberspace."Footnote 17 Accordingly, the new legislation would allow government to both regulate and audit private companies (especially those in key sectors, notably banking and telecommunications) over their cyber-security practices and responses, in some cases even assigning criminal penalties for non-compliance.

The proposed Bill has generated important debate with respect to transparency, oversight, and review. One report by the Citizen Lab at the University of Toronto has been especially critical of the legislation's limitations in these respects, calling for significant reforms before the Bill is adopted.Footnote 18 In devising a balance between national security and democratic principles, the report urges the Government, among other recommendations, to "make clear what roles the federal privacy commissioner, the National Security and Intelligence Committee of Parliamentarians and the National Security and Intelligence Review Agency would have at different stages of the order- or regulation-making process."Footnote 19

While recognizing the complexity of the issues at hand, it is our view that more engrained measures and mechanisms for openness and oversight are indispensable. We note as well that such an approach is in keeping with the Government's own efforts to be more transparent in terms of its development and deployment of Artificial Intelligence (AI) systems, a sub-theme returned to later in this report.

We also note that widened and enhanced forms of collaboration with the private sector are a critically important dimension of cyber-readiness for the country as a whole, as recognized in the Government's own 2018 National Cyber Security Strategy and by CCCS in its recent 2023 report on widening cyber-threats. An internal review of the strategy by Public Safety Canada found that such relationships have been slow to evolve, and that more collaboration is called for going forward. Accordingly, and in parallel to any specific Bill C-26 measures that may eventually be implemented, ensuring a meaningful degree of transparency with respect to the purpose and functioning of these collaborative relationships is critically important for both learning and accountability.

C) Electoral interference and democratic integrity

With widening concerns about electoral interference from foreign entities, the pending public inquiry on such matters will be an important occasion to examine this issue in detail. As both CCCS and former Governor General David Johnston, in his review of electoral interference, have made clear, there are important cyber-dimensions to such threats and concerns that must be fully appreciated and examined. Their advice is supported by the sophisticated Russian cyber-attack on the World Anti-Doping Agency (WADA) headquarters in Montreal and its Canadian partner, the Canadian Centre for Ethics in Sport; by the same Russian GRU elements meddling in the US election using similar tactics; and by recent news of a "Spamouflage" disinformation campaign against Canadian lawmakers, allegedly from China.

The ETHI Committee Report on Foreign Interference and the Threats to the Integrity of Democratic Institutions, Intellectual Property and the Canadian StateFootnote 20 includes several recommendations directly aimed at improving transparency in national security operations. Perhaps most relevant to the mandate of the NS-TAG is Recommendation 3, that the Government of Canada direct increased and regular sharing of relevant information with the public by the Canadian Security Intelligence Service (CSIS) in order to increase national security literacy.Footnote 21 We have observed that both CSIS and CSE were active in alerting Canadians to the foreign interference threat, including the risks to democratic institutions, well before the recent attention to foreign interference, although the naming of specific countries such as China is recent (2020, in the unredacted version of the National Security and Intelligence Committee of Parliamentarians 2019 Annual Report). If the intent of Recommendation 3 is to invite the sharing of more specificity on the nature of the threat, we note that many of the expert witnesses who appeared on the subject have described the limits of the current CSIS legislation.

Moreover, we strongly recommend that the outcome of engagement activities be actively and consistently integrated into policy making and operational processes. A common perception among racialized communities is that engagement is largely viewed as a "box to tick" exercise inside national security institutions, and that reports based on the conversations in those activities are shelved afterwards. The onus here is on national security institutions to be transparent and to convince stakeholders that this is not the case.

One concern expressed to the NS-TAG by some stakeholders, with regard to past efforts aimed at ensuring democratic readiness during elections, has been the relative secrecy of the internal mechanism for reviewing such threats during electoral periods. The approach deployed to date has featured a small group of four or five senior public servants tasked with monitoring developments and determining any appropriate response (including any public issuances). Their efforts are guided by the Critical Election Incident Public Protocol; following the 2021 election, an independent review by a former Deputy Minister (Morris Rosenberg) was prepared, with its findings made public.Footnote 22 The protocol is one element of an integrated plan to strengthen Canada's electoral system against cyber and other threats. And, while the mechanism was developed to ensure that the independent committee would intervene only when the nature of the foreign interference was such that it would be in the public's interest to be made aware, questions have been raised regarding the threshold to share and whether it is currently set too high.

As David Johnston's own review of claims of Chinese electoral interference made clear, there are important distinctions between analog and digital forms of threats at play that must be appreciated. Yet digital threats are on the rise, both in Canada and in democracies around the world. The Australian Strategic Policy Institute's (ASPI) International Cyber Policy Centre identified 41 elections and seven referendums between January 2010 and October 2020 that were subject to cyber-enabled foreign interference in the form of cyber operations, online information operations, or a combination of the two. ASPI further describes this widening set of threats and the centrality of social media as an enabler of electoral interference:

The proliferation of actors involved in elections and the digitisation of election functions has dramatically widened the attack surface available to foreign state actors. This has in large part been facilitated by the pervasive and persistent growth of social media and networking platforms, which has made targeted populations more accessible than ever to foreign state actors.Footnote 23

Looking ahead, strengthened transparency must be viewed as an essential opportunity to foster greater public awareness and stakeholder engagement as democratic institutions are continually refurbished to adapt to new digital settings. The Rosenberg report makes many useful recommendations aimed at expanding openness and stakeholder engagement in this spirit. As one important example, Recommendation 14 centres on improving the political awareness and digital literacy of elected officials through outreach efforts by national security agencies: "The national security agencies should develop a program of unclassified briefings to increase the awareness of Members of Parliament and Senators on foreign interference and on election interference and on measures they can take to safeguard themselves and their online information."Footnote 24

As the report makes clear, cyber-security and electoral integrity are also inter-related elements of wider concerns about democratic conduct in an increasingly online world, concerns that shape democratic discourse and participation between elections as well. Disinformation and online hate stemming from both domestic and foreign sources are becoming more pervasive and impactful. Consequently, we now turn our focus to such concerns and our own observations in this realm.

2.2 Online safety/hate and ideologically motivated violent extremism

While social media and other online platforms offer myriad opportunities, breaking the barriers of scale, space, and time for people to connect, to acquire and share knowledge, and to cooperate on common causes, these opportunities can be overshadowed by disinformation, by efforts to radicalize and recruit vulnerable people into extreme ideological or religious worldviews, and even by the incitement of violence within groups or in lone actors.

Not all these undesirable behaviours are addressed by existing legal frameworks. As such, the concept of "online safety" is often used to identify issues at the margins of, or entirely outside of, traditional cybersecurity. It is also meant to convey that many of the areas where issues of safety or vulnerability arise lie outside the reach of most types of explicit regulation. At a minimum, safety is simply a matter of personal awareness and circumspection while browsing and interacting online.

A) Transparency in the Government of Canada's response

Even clearly identified offences such as hate speech, threats, and harassment have proven difficult to either prevent or punish. Many of the behaviours referred to in discussions of online safety do not meet the standards for criminal prosecution. Yet in recent decades, many citizens, experts, and community groups have called for increased control and regulation of online behaviour. Consequently, the Government of Canada has made multiple attempts to tackle such issues. An Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech) (Bill C-36), which died on the Order Paper when Parliament was dissolved in 2021, focused on online harm.Footnote 25 It purported to tackle issues typically associated with national security, including terrorism, propaganda, and interference with democratic institutions, as well as mobilizing CSIS and the RCMP. CSIS was to be given the power to request subscriber information from service and platform providers, and a special "Digital Safety Commissioner" would have had the power to compel the removal of contravening content and users from the Internet. However, the Bill was criticized for failing to clearly define the boundary between harm and inconvenience, and between hate and free speech, and for deputizing private entities to police their platforms. Bill C-36 quickly exposed the complexity of online safety at the intersection of conflicting rights and ideologies, where the introduction of strict legal requirements, such as those included in the Bill, generates many more problems than solutions.

Equally important and notable for the present report, and the work of the NS-TAG, is that the government initially kept all public consultation documents related to Bill C-36 secret, in a complete absence of transparency. The documents were only released in response to an Access to Information request. A useful summary of the views expressed in the consultations was subsequently provided. It showed that while it is relatively easy to reach consensus on adopting further measures to protect children, it is far more challenging to reach consensus on the measures needed to address other concerns.

In a later consultation with experts, designed to revive the bill, it became obvious that a sizable portion of the conduct deemed harmful by some is protected by the Charter of Rights and Freedoms and that government action may not be warranted. In short, the government does not seem to have found a clear path to policing criminal behaviour online. Its ongoing efforts in this regard must benefit from public scrutiny and engagement, and transparency will be required at all stages of any future attempt at legislative change.

B) Transparency of engagement

If hate speech and extremist or terrorist propaganda are actual threats to national security, and therefore part of the mandate of national security organizations, the issue of transparency takes on a slightly different meaning: monitoring, identifying, and acting upon such propaganda implies some form of surveillance of the Internet. Transparency is therefore critical to protecting human rights in the context of these national security operations. One important preliminary consideration is that extremist groups do not necessarily publish extremist speech (they often deliberately "sanitize" their public face) and that, conversely, extreme speech is not necessarily an indicator that those uttering it are bona fide extremists: cases of misguided, excited, or misinformed utterances abound. In other words, the "online harm" approach might more easily capture the less dangerous of the two, which would not be an effective way to protect national security.

Both CSE and CSIS have policed speech acts in the past, essentially casting their nets over so-called "jihadi" or "Islamist extremist" terrorists, partly in the belief that such acts are precursors to illicit activities in the physical world. CSIS's annual reports since 2002 have consistently listed international Islamist networks as the main threat to Canada, to the point, in fact, where many Canadian Muslim and civil liberties associations have been vocal about the exaggerated attention and its consequences. Today, the newer, wider, and vaguer categories of Ideologically Motivated Violent Extremism (IMVE), Racially Motivated Violent Extremism (RMVE) and Politically Motivated Violent Extremism (PMVE) may raise similar questions, on a much larger scale. Canadians will want to know:

We encourage national security agencies to continue and expand their efforts to better describe online threats, and their active partnerships with independent organizations better placed to counter some of those threats. As always, such descriptions need to contain clear and complete information that helps citizens understand the relative importance of various forms of threats and vulnerabilities. Simple lists of possible or potential threats, with no means of determining their importance, likelihood, or frequency, do not fulfill that all-important objective. Furthermore, because of the fluid, dynamic nature of this environment, such information must be updated in a timely manner. The government also needs to do a better job of describing to Canadians the nature of the threat in various online environments, including the dark web, in order to develop the rationale and support for measured instruments and tools that may reduce the threat without compromising our democratic values.

2.3 Artificial intelligence

In its 2022 report, NS-TAG adopted a wide definition of the term "artificial intelligence," based on OECD guidelines, which includes any machine-based process used to produce or help produce organizational or individual decisions. Such processes are currently used in the production of intelligence in multiple agencies related to national security. This potentially includes surveillance in massive datasets (e.g. individual surveillance, pattern recognition, network analysis, geographic analysis), identification and authentication of individuals and objects (e.g. biometric and behavioural recognition, parcel and luggage scanners), space-time prediction based on multiple data sources (e.g. predictive policing) and data de-anonymization and database reconciliation (open source data analysis).

Since then, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (Bill C-27) was tabled, introducing the Artificial Intelligence and Data Act to regulate the use of artificial intelligence (AI) in international and interprovincial trade and commerce. The Bill defines an AI system as "a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions."Footnote 26

As explained in the NS-TAG's 2022 third report (Section four on AI), three types of harm can potentially stem from the design and deployment of AI systems: intended, unintended, and systemic. Intended harm can stem from militaristic endeavours, cyber-intrusions, or intelligence-gathering exercises with disruptive or offensive objectives. Unintended harm often reflects algorithmic design flaws (often not foreseen in advance) that can have perverse impacts on individuals or groups; one frequently cited example is a financial institution that deploys an AI program that inadvertently excludes more individuals from a racialized minority. Systemic harm is closely related to unintended harm, but its scale is often wider and the problems more deep-rooted and difficult to rectify. As the national security community seeks to expand its reliance on AI systems across a range of programs and services (with both security and service objectives in mind), recognizing all three types of harm is important. AI thus brings a new and distinct challenge to the transparency and accountability of national security operations.

Within the context of this report's wider and more direct treatment of digitization, our aim here is to better understand the emerging contours between national security and AI, as well as some key tensions between transparency and secrecy shaping this evolution. We then examine and situate these tensions with respect to specific Government of Canada responses – including existing national security entities and proposed policies and mechanisms currently under development.

A) National security and artificial intelligence

With respect to government's capacities to respond to national security challenges, there are two fundamental drivers of AI usage in this space: human skills shortages on the one hand, and AI performance enhancement through process automation, pattern analysis, and data-driven learning and innovation, on the other. These latter elements are arguably necessary in a world where national security threats have moved from clear, avowed hostile States to dispersed, hidden, individual actors.

In general, the use of AI solutions by public safety and national security agencies is ramping up to analyse an unprecedented volume of unstructured data, whether texts, images, or engagement data, and to enhance pattern and image recognition as well as interpreting trends and behaviours. For example, closed circuit television cameras (CCTVs) incorporating AI can go beyond capturing images to identifying anomalous activities and suspicious behaviour.

With respect to cybersecurity, according to the federal government's Canadian Centre for Cyber-Security (CCCS), "AI is useful in detecting new threats to organizations through automation. By using sophisticated algorithms, AI is able to automate threat detection such as malware, run pattern recognition to find relationships between different attack vectors and provides superior predictive intelligence."Footnote 27 In a 2023 media interview, the Head of CCCS confirmed that AI is being used "in phishing emails, or crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation."Footnote 28

There are also added cybersecurity risks from how others develop and deploy AI systems, adding to the sorts of escalating, malicious threats from hostile state and non-state actors alike (a reality underlined by CCCS in its August 2023 report). According to one American expert: "Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails…AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."Footnote 29

However, the actual extent of AI use in various national security settings is not known. This is not necessarily a typical form of lack of transparency: oftentimes the organizations themselves are not entirely aware of the AI tools used by their members, especially given the flexibility of the category. For instance, we do not know whether government employees use tools such as ChatGPT or other generative AI (including image generation). The federal government has recently introduced guidelines for the use of generative AI in its operations. It will be interesting to learn to what extent they are applied in the coming months; however, such information may be difficult to collect, since the guidelines are voluntary.

As these sorts of challenges invariably grow, tensions between transparency and secrecy will likely intensify due to a combination of technological complexity (including both open source and proprietary elements), legal and policy frameworks governing national security efforts (which often shield against transparency), and newer and more proactive forms of openness and engagement being sought by governments to varying degrees. Accordingly, we consider the transparency challenges specific to AI and, further below, examine some of the evolving contours of Government of Canada responses – before providing specific recommendations for enhanced openness in Section three of this report.

B) Challenges

The reasons why close control and surveillance of automated decision tools are essential became obvious when we consulted AI experts. First, as we noted in our 2022 report, AI applications may be biased because they were trained on, or fed, biased data. Knowing that police data, for instance, is the product of organizations grappling with systemic discrimination, it is in fact likely that automated decisional outcomes will be equally biased. Moreover, since such outcomes then become part of the analytical data, the biases will compound over time at an accelerating rate if not held in check. Second, AI decisional processes are almost always opaque, in part because they contain the trade secrets of the enterprises that market them. But opacity is in fact an unavoidable property of all advanced AI processes, especially those based on deep learning. As our experts pointed out, their complexity and analytical power are proportional to their opacity, to the point where even their creators can no longer understand how the analytical outputs are related to the input data.
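The compounding dynamic described above can be illustrated with a deliberately simplified simulation (the two-area model, starting counts, and allocation rule are all hypothetical, chosen only to expose the mechanism): both areas have an identical underlying incident rate, yet because resources follow past records, the area that starts out over-represented in the data becomes ever more over-represented.

```python
TRUE_RATE = 0.05                     # identical underlying rate in both areas
recorded = {"A": 120.0, "B": 100.0}  # area A starts slightly over-represented

def run_round(recorded, total_patrols=1000):
    # Rank-based allocation: the area with more past records gets 80% of
    # patrols -- a caricature of "send resources where the data points".
    top = max(recorded, key=recorded.get)
    for area in recorded:
        patrols = total_patrols * (0.8 if area == top else 0.2)
        # Expected new records: patrols observe the *same* true rate everywhere.
        recorded[area] += patrols * TRUE_RATE

def share_a(r):
    return r["A"] / (r["A"] + r["B"])

before = share_a(recorded)
for _ in range(20):
    run_round(recorded)
after = share_a(recorded)

print(f"Area A's share of recorded incidents: {before:.0%} -> {after:.0%}")
```

Even though nothing distinguishes the two areas in reality, area A's share of the records grows round after round: the tool's own outputs become its future inputs, which is precisely why external checks on the data pipeline are needed.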

i) Transparency as a central challenge with AI

Transparency comes into play with respect to the use of AI by national security agencies in two main ways: informing Canadians about the use of AI for the purposes of national security, and ensuring algorithmic transparency to maintain the "explainability" of the inner workings of AI. While the principle of transparent government, as a matter of accountability, is long established, it is confronted with an unprecedented challenge in relation to AI: the opacity of algorithms and machine learning models, now known as the "black box." National security uses of AI and their outputs could become unexplainable because the national security agencies themselves could lose visibility into the processes of their own AI applications.

The Interim Report of the UK Parliament Science, Innovation and Technology Committee (UK Committee) identifies the black box effect as one of the "twelve challenges of AI Governance." It quotes Burges Salmon LLP stating that "The challenge is further complicated by the fact that the better the AI tool performs, the less explainable it is likely to be."Footnote 30 Essentially, to produce insights, the machine learning model makes associations beyond human capacity. The more the machine learns, the more these associations become inscrutable to humans.

The "black box" effect has in fact given rise to a new discipline of "explainability research." Unfortunately, it severely lags behind the development of AI, in part because in order to explain, the product has to exist, but also because the budgets involved are microscopic in comparison. Furthermore, it is highly likely that this type of research will provide process accounts that are only intelligible by computer scientists and geared towards a set of goals that is different from the ones generally associated with governance and oversight. For instance, computer scientists will be interested in how specific algorithms are optimized for effective processing, while civilian oversight will want to know which data variable was used, why, and with what weighting.

ii) The compounding impact of lack of transparency

Transparency challenges compound three other risks identified by the UK Committee: bias, because AI associates data compiled by humans, with their biases, thus potentially compounding discrimination; intrusion upon privacy, through an unprecedented capacity for personal data collection and generation without the knowledge or consent of the individual; and misrepresentation, the creation of fakes, whether malicious, through manipulation by individuals, or accidental, through mistaken data entries or "AI hallucinations," in which AI systems are derailed by false associations.

Two more aspects must be considered. First, current AI applications are known to routinely "leak" input data, which is clearly a national security problem. Second, we will soon need to consider the possibility that entirely malicious AI might be introduced into various systems. In a national security context, malicious AI may be understood as any automated process that produces diverted (i.e., beneficial to a third party), disruptive, or otherwise damaging outcomes against the will of, or unbeknownst to, its immediate user. As with other applications, it may hide on systems, or it can present itself as a legitimate tool or as part of one. The former case falls within the cybersecurity paradigm, but the latter is clearly a matter of organizational management and surveillance of AI uses. And given the "black box" problem above, that promises to be a challenge.

iii) Government of Canada responses and proposed initiatives

As acknowledged in our 2022 report, the Government of Canada has sought to make transparency a central tenet of its own development, deployment, and governance of AI. Five core principles have been adopted with such openness in mind:

Acting on these principles, the Government has developed tools and introduced legislation, through Bill C-27 mentioned earlier, and individual agencies have adopted initiatives.

The Government of Canada Algorithmic Impact Assessment framework has further provided some important tools for internal assessment and external scrutiny with regard to AI usage and impacts. Our 2022 report further endorsed two key recommendations from a recent report issued jointly by the Information and Privacy Commissioners of BC and Yukon examining the implications of AI for government use and services:

Within a context of AI-driven innovation racing ahead in the private sector – and competition for scarce and specialized AI skills across industry and government – it is our view that this latter recommendation is particularly important: as a matter of transparency, it emphasizes building government capacity rather than excessively outsourcing such expertise to private companies. The outreach and engagement aspects of government's role in fostering greater collective awareness and understanding, and heightened political literacy, are also especially important.

As mentioned above, legislation has been introduced to devise new policy and regulatory mechanisms for greater oversight of industry, as proposed in Bill C-27's creation of the Artificial Intelligence and Data Act. One laudable aim of the Act is to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests. The scope of application of the proposed Act, however, is not yet settled: it is limited to "high impact systems,"Footnote 33 but the definition of a "high impact system" is left to regulations.Footnote 34 The proposed legislation would also create an Artificial Intelligence and Data Commissioner housed within the Department of Industry (also known as Innovation, Science and Economic Development Canada, or ISED), if no other department is designated, who would assist with enforcement (with powers delegated by the Minister to request records, require organizations to conduct audits, take action to address issues, and order the cessation of certain high-impact systems where there is a "serious risk of imminent harm").Footnote 35 In so doing, government's own internal capacities for understanding and regulating AI across society must be notably deepened in ways that necessitate openness and scrutiny. As with the closely related proposed cyber-security legislation (Bill C-26), matters of transparency are essential to understanding and overseeing the scope and execution of these expanded powers.

With respect to the conduct and usage of AI systems by national security actors, we observe that the Government of Canada's core principles include language that seeks to 'protect' national security and defence, a potential constraint on openness. Indeed, our recommendation in our 2022 report was that Canada's security and defence entities adhere to NATO's 2021 framework of six guiding principles for AI usage (closely related to the Government of Canada principles but specifically applied to the realm of security and defence).

In its own public response to our 2022 report, CSIS says the following with regards to AI:

CSIS fully recognizes the enormous power of AI and the threats and opportunities that it presents. For these reasons, we have carefully reviewed this section of the report and will be looking to actively engage with and draw on the perspectives and expertise of Canadians on this matter.Footnote 36

This commitment closely aligns with the emphasis on public engagement and outreach conveyed by the Information and Privacy Commissioners of BC and the Yukon (presented above), and we both encourage and welcome the efforts of CSIS and others in this important realm.

The RCMP's own past usage of facial recognition software has been a controversial subject and provides further testament to the importance of proactive openness and engagement, as we observed in our 2022 report, notably with respect to Clearview AI tools and concerns raised by the Privacy Commissioner that led to the suspension of such programs.

Accordingly, the RCMP has reportedly developed a 'National Technology Onboarding Program' (said to be a direct response to the Clearview case) that is meant to 'bring more transparency to the processes that govern how the RCMP…approves the usage of new and emerging technologies.'Footnote 37 While it is encouraging to learn of a more thoughtful and proactive approach being devised, it will be important that the RCMP provide transparent information to the public on the specifics of the new Onboarding Program reportedly created in 2021 (beyond a single-line mention in its 2022-23 Departmental Plan).Footnote 38 If all activities carried out under the Onboarding Program are secret, transparency will continue to suffer.

In sum, as AI usage invariably expands across the national security community, it is essential that the public know more about the objectives and undertakings of various public entities (notably CSE, CCCS, CBSA, CSIS, and the RCMP). Appropriate mechanisms must be designed and implemented to strengthen systemic and proactive openness within government, while better enabling external oversight and review. As the Government aims to foster greater transparency around AI usage in the private sector, it must also do so internally. As both sectors collaborate in shared national security objectives, openness and engagement are crucial enablers of innovation, adaptation, and public trust.

By contrast, secrecy breeds suspicion, while inward bureaucratic inertia heightens the risks of dysfunction, corruption, unintended consequences, and systemic harm stemming from AI deployments. The National Security Transparency Commitment is one mechanism for strengthening openness and engagement, and Section three of this report offers specific recommendations linking AI development to its principles.

2.4 Surveillance and encryption

AI-based and other constantly evolving techniques for data analysis logically call for a closer look at several related practices. One that has been in the public eye is the collection of increasingly numerous and increasingly broad data sources, which is both encouraged and made easier by the massive digitization of daily life over the last decade. Another is indefinite data retention. Having met with CBSA, CSE and CSIS over the last few months, NS-TAG has found it difficult to obtain information on either of these practices, especially the latter. Greater transparency regarding data retention schedules is required.

The Government of Canada has seen fit to broaden the power of security institutions to collect and retain data through An Act Respecting National Security Matters, in force since 2019, which in turn increases the space in need of transparency. That said, the 2019 legislative changes did impose some guardrails on CSIS in particular. This, coupled with the introduction of NSICOP (through Bill C-22), the creation of the National Security and Intelligence Review Agency, and the establishment of the Intelligence Commissioner, has demonstrated some commitment to transparency and accountability.

A) Transparency about data management

NS-TAG is convinced that Canadians would benefit greatly from detailed information about how CSIS's "dataset" management fits within both the mandate of the Service and Canada's national security policy. Beyond the strict requirements for reporting within government channels or to oversight bodies, there is still a need for direct public policy and executive transparency. Information transparency also requires better linkages between the collection, retention, and analysis of data and the Service's actual operational environment. As it is, CSIS's yearly reports contain definitions and categorizations of threats, but give no clear idea of how threats are evaluated, compared, or related to strategic decisions. The reports also contain little information on the Service's activities. We understand that revealing too much about the tasking, methods, or results of the Service's activities would be injurious to national security, an objection that is in fact often heard. However, policy and executive transparency require more than abstract considerations about threats or specially selected illustrative cases.

B) Shedding light on metadata

Another key aspect of data surveillance is the notion of "metadata." One oft-used analogy to explain the difference between data and metadata is that the former is akin to the contents of an envelope, while the latter is the routing information written on its outside. That analogy is inexact in two key respects. First, as has been repeatedly pointed out, the frequency, quantity, and nature of metadata are nothing like the simple street addresses listed on a piece of mail. A single day's worth of phone calls, emails, and texts alone generates a vastly greater quantity of usable (meta)data. Metadata also contains more diverse information, such as location, time of day, duration or length of message, presence and type of attachments, language, subject, etc. Second, this rich metadata allows multiple forms of analysis and inference about those sending, receiving, and transferring various communications. Research has shown that predictions and inferences made through efficient analysis of metadata are not improved when the actual contents of the communications are added to the model.
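To make the envelope analogy concrete, the following toy sketch (the records, field names, and contacts are invented purely for illustration) shows the kinds of sensitive inferences that trivial aggregation of bare metadata permits, without ever reading a single message:

```python
from collections import Counter

# Hypothetical call-metadata records: no content at all, only the
# "outside of the envelope" -- recipient, time of day, and duration.
calls = [
    {"to": "clinic",   "hour": 9,  "minutes": 12, "day": "Mon"},
    {"to": "clinic",   "hour": 9,  "minutes": 8,  "day": "Wed"},
    {"to": "lawyer",   "hour": 22, "minutes": 45, "day": "Tue"},
    {"to": "lawyer",   "hour": 23, "minutes": 50, "day": "Thu"},
    {"to": "pizzeria", "hour": 18, "minutes": 3,  "day": "Fri"},
]

# Simple aggregations over metadata already support sensitive inferences:
# repeated morning calls to a clinic, long late-night calls to a lawyer.
call_counts = Counter(c["to"] for c in calls)
late_night  = {c["to"] for c in calls if c["hour"] >= 22}
long_calls  = {c["to"] for c in calls if c["minutes"] >= 30}

print("Call frequency:", dict(call_counts))
print("Late-night contacts:", late_night)
print("Long conversations with:", long_calls)
```

Even this handful of records suggests a possible health issue and an ongoing legal matter; at the scale of a population's daily communications, such inferences become far richer, which is why the envelope analogy understates what metadata reveals.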

With that in mind, NS-TAG queried CSE about its use of metadata and whether it could be made clearer to Canadians how, and to what extent, it captures, retains, and analyzes metadata. At present, it appears that neither can be made public in any way. NS-TAG finds these answers incompatible with the information and executive transparency guidelines of the transparency commitment.

C) Transparency about surveillance

Other forms of surveillance have become public concerns in the last few years. Many First Nations, minority, and environmental groups have expressed concerns that they may be targets of observation, collection and, presumably, analysis. Continuous, intense efforts to recruit confidential informers in various targeted populations have been identified. At the same time, the Report of the Public Inquiry into the 2022 Public Order Emergency (Rouleau Commission), following the Ottawa occupation of 2022, amply revealed that Canada's intelligence community underperformed in analyzing, predicting, and communicating threats and vulnerabilities: "there is a gap in the federal government's authority and ability to monitor the digital information environment, and [...] this gap hampered its ability to anticipate the convoy and understand and gauge the situation as events unfolded."Footnote 39 This paradoxical state of concurrent, apparent over- and under-surveillance needs to be better accounted for if Canadians are to maintain their trust in government. In terms of transparency, there seems at the very least to be a gap in political transparency, where the strategic priorities of the intelligence community are difficult to grasp, to the point, in fact, where some may begin to doubt that they exist or, if they do, that they are being followed in practice.

Some areas where Canadians need better transparency in governmental surveillance:

D) Information about encryption

One further issue regarding the deployment of government surveillance is data encryption. In 2022, the Minister of Public Safety signed the International Statement on End-To-End Encryption And Public Safety, assuring the Canadian public of the government's support for public encryption. "End-to-end" encryption refers specifically to the protection of communications data between sender and receiver, along the entire path and through the network nodes that data must traverse between devices. If sufficiently strong, it entirely negates any form of interception, whether by organized fraud networks, hackers, or foreign data collectors, as well as by legitimate law enforcement. Encryption is also used to protect stored data and devices. Assuming users select adequate passwords, strong encryption defeats, or at least greatly hinders, any unauthorized effort to access data, again including by law enforcement or intelligence agencies.
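The core property at stake can be shown with a classroom sketch of a one-time pad (emphatically not a production cipher; real systems use vetted algorithms such as AES): an interceptor who lacks the shared key sees only unusable bytes, while the key holder recovers the message exactly.

```python
import secrets

# Toy one-time-pad illustration: the message is XOR-ed with a random key
# of equal length. This is only a sketch of the symmetric-encryption
# property discussed above, not a real communications protocol.
message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # shared secret, same length as message

# What an interceptor sees on the wire: bytes indistinguishable from noise.
ciphertext = bytes(m ^ k for m, k in zip(message, key))

# What the intended recipient, holding the key, recovers.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
print("intercepted bytes:", ciphertext.hex())
```

The same XOR step that makes the ciphertext meaningless to everyone else makes it perfectly recoverable with the key, which is precisely why weakening the mechanism for one party weakens it for all.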

In both cases, of course, encryption helps protect all forms of criminal activity, online and off, including child pornography, illicit markets, illicit financial transfers, espionage, and criminal and terrorist plotting. This is part of what some policing agencies have referred to as "going dark," where traditionally accessible forms of data are now irremediably scrambled and therefore unusable, either as intelligence or as evidence.

There are a number of ways in which law enforcement could try to circumvent or defeat encryption:

Two separate issues must be considered. First, the personal security of Canadians against fraud, identity theft, and the like is heavily dependent on encryption, in multiple forms of interaction online and off (for instance, when simply using a credit card). Encryption is also crucial to the protection of fundamental rights in Canada and abroad, as well as to the protection of whistle-blowers, journalistic sources and, it must be noted, police and security intelligence sources domestically and abroad. Consequently, any weakening of encryption inevitably increases risks to Canadians, individually and as a nation.

The second issue is a corollary of the first: if national security did require that encryption be weakened, whether by making it "breakable" or through back doors, a number of safeguards would have to be prepared, including the rapid, if not automatic, disclosure of breaches to the public and close oversight and reporting of law enforcement use. Proper justification would have to be provided to the public, and complete transparency should be required about who is officially allowed to decrypt citizens' data, how, to what extent, and for what purpose.

NS-TAG believes that Canadians are prepared to allow law enforcement more leeway in combating illicit activities, but not at any price, without conditions or, more importantly still, without proper, proportionally satisfactory information. Concerns about possible exceptions to encryption to allow authorities access must be addressed. The NS-TAG recognizes that statements such as the International Statement on End-to-End Encryption and Public Safety may be attempts to present a nuanced position on complicated issues. However, the impact of the message can sometimes be lost behind the language. Given the importance of this issue, clear and direct language is critical.Footnote 42

In terms of encryption, we believe that:

2.5 Equity, diversity, and inclusion

In our previous three reports, we have stressed the importance for national security organizations of embracing equity, diversity and inclusion (EDI) in their culture and embedding it in an ongoing and transparent way in their activities to improve trust, particularly with underrepresented groups such as women and Black, racialized, marginalized, and other minority communities.

For example, in our 2020 report, the NS-TAG stated that "National security agencies must acknowledge that systemic racism and unconscious biases exist within them, and that these biases are manifested in their interactions toward certain members of the public. If national security agencies do not recognize and vigorously address these existing biases, they risk individually or collectively failing to detect, and act upon, actual dangers posed to society."Footnote 43

In our 2021 report, we recommended that in their pursuit of more transparency, national security departments and agencies should develop specific indicators to measure progress toward achieving greater transparency. These should be relevant to the mandate of each specific agency, though some indicators could also be common across the community. The regular publication of disaggregated data on diversity, including on race was one of the suggested indicators.Footnote 44

A) EDI, innovation, and AI

The business and innovation literature speaks volumes about the importance of diversity in fostering innovation and the responsible use of technology. This is particularly true for the critical transition from inventing new technologies, such as AI, to designing innovative applications and ensuring their successful adoption by users.Footnote 45 A March 2022 article in Policy Options magazine stresses that while there is indeed a raging battle for deep AI technology specialists, the "more pressing need is for people who are able to support digital adoption and transformation – the "hybrids" who understand the potential of the technology and also how to match it to organizational needs."Footnote 46 A recent OECD AI Policy Observatory article describes how women are leveraging AI technologies to tackle some of humanity's most significant challenges (e.g. food security, health care, combating modern slavery).Footnote 47 Yet the same article and other recent publications describe how women remain underrepresented in the AI innovation workforce and among entrepreneurs. A 2020 World Economic Forum report found that women make up only 26% of data and AI positions in the workforce, and the OECD reminds us that "women are 13 times less likely to file for a technology patent than men," with "estimates that women obtain just 7% of ICT patents in G-20 countries and founded only 10% of technology start-up companies seeking venture capital."Footnote 48 According to an AI Now study, despite the presence of diversity and inclusion programs, the composition of the workforce leading digital corporations like Google and Facebook (Meta) is not reassuring, "with women making up only 15% of AI researchers at Facebook (Meta) and 10% at Google," and with Black workers making up less than 5% of the U.S. staff of Google, Facebook (Meta) and Microsoft, less than half of their labour market availability (12%).Footnote 49

In Canada, despite years of advocacy, women's enrollment in computer engineering has not increased and is now declining.Footnote 50 In the public service, despite major progress in their participation in senior ranks, women represent just 27 percent of all positions in computer science, and this percentage is also declining.Footnote 51 The participation rate of Indigenous people and persons with disabilities in this field remains very low in both the private and public sectors. "While immigrants and some racialized people are over-represented in the ICT sector, more than 40% of internationally educated engineers are under-employed,"Footnote 52 with many working in jobs that do not require the engineering degree they hold. This points to barriers that undermine the use of their talent and productivity. "Black Canadians and Indigenous peoples are significantly under-represented. Indigenous Peoples represent slightly more than one per cent of ICT workers."Footnote 53

The inability to increase the participation of women, racialized people, Indigenous people and persons with disabilities, and to use the full potential of the workforce in the ICT sector, undermines the ability of Canada and its public service to leverage a diverse and inclusive talent pool as a key contributor to innovation. In parallel, inclusion and diversity among the researchers and engineers who design and develop AI are even more crucial to avoid the large-scale propagation of bias. For these reasons, transparency in the realization of EDI in the national security community is essential to ensure accountability in our efforts toward innovation that leads to greater equality.

In our 2022 report, we described how "predictive analytics and decision support systems (one of seven potential AI use cases identified by the OECD) have become critical elements of national security and intelligence capacities."Footnote 54 As we have heard from officials, these new capabilities offer significant productivity and service improvements in high-volume activities like immigration (IRCC) and border inspection (CBSA). They can also assist intelligence organizations in better identifying threats. However, the design and use of these AI applications must be carefully assessed to avoid unintended harms that may create, replicate and/or perpetuate biases, systemic or otherwise, that have disadvantaged individuals and groups in society such as women, Indigenous people, persons with disabilities, Black and racialized people, and other minority communities, including members of 2SLGBTQI+ communities. The protection of privacy must also remain central to this growing capacity to analyze and process data.

Our 2022 report echoed many experts in noting that the best way to prevent unintended harms or outcomes is to ensure the active participation of diverse talent, in all its forms, in the research, design and implementation of AI initiatives, and to engage openly and transparently with communities on the use of the technology. Two of our recommendations were specific to the advantages of a more diverse and knowledgeable workforce that is sensitized to systemic biases, namely:

Through our 2021 and 2022 reports, we also encouraged organizations in the national security community to be more open and transparent with regards to both 1) their EDI data and strategies and 2) their training material, to promote greater awareness of the risks of systemic bias.

B) EDI within National Security organizations

Ensuring that the various National Security organizations can count on a workforce that better reflects the diversity of Canada, and that can contribute to responsible AI that mitigates systemic bias, requires two key elements. The first is an overall equitable and diverse workforce that embraces the benefits of diversity, both in seizing the opportunities of new technologies like AI and in protecting against systemic biases. The second is an equitable and diverse workforce in the key areas of research, design and implementation of new digital technologies like AI.

In our 2022 report, we noted that "despite recent improvements, racialized Canadians are still underrepresented in national security and intelligence institutions, especially at senior levels," which led us to recommend that:

National security organizations not only continue and intensify their efforts to diversify their human resources, but also that they be more transparent on issues such as data about employment and existing barriers for racialized individuals.Footnote 55

Our observation relied primarily on the findings on diversity and inclusion in the security and intelligence community in the 2019 NSICOP Annual Report. Most data sets used by NSICOP at the time were for 2017-18. We have tried to update our assessment with more recent data. We have been able to do so with a consolidated update of the data up to 2020-21 for most key national security organizations, provided by Public Safety (see Annex A). We have also been able to access 2021-22 data for many of the organizations through either the government-wide diversity and inclusion statistics published by the Treasury Board (TB) or the departments' and agencies' own EDI publications, particularly for those that are not part of the TB publication because they are separate employers or a distinct organization like the Canadian Armed Forces.Footnote 56 This review has provided us with an opportunity to note the improvement made by most national security organizations in publishing not only general EDI data but also their objectives and strategies to make their workforce more representative of Canada. We welcome this progress but, as we describe below, transparent access to more disaggregated data is required if we want to assess EDI in the employment groups particularly involved in the research, design and deployment of new technology applications such as AI.

This review of data up to 2021-22 has allowed us to assess progress during that period (2017-18 to 2021-22). With the exception of the designated group of "persons with disabilities," most organizations have continued to make progress in the three other EDI designated categories, and their workforce, overall, is becoming more representative of Canada. This is encouraging, knowing that not all designated employees choose to self-identify, so the actual figures are probably higher. With regards to the "persons with disabilities" category, we note that the Workforce Availability (WFA) figure has doubled for most organizations during that period. Some of the reports that we have consulted indicate a change of methodology in the WFA definition that has resulted in higher WFA figures, but with data collected by departments that does not yet reflect this change. Here are some of the highlights of our analysis:

We did not have access to updated information for the executive category (EX), although we understand anecdotally that many organizations are making progress in making their senior ranks more representative of Canada. Having senior ranks that are conscious of and predisposed to challenge potential bias is an important component of leveraging EDI to ensure responsible use of AI.

As we have described above, while the national security organizations have made significant progress in reporting EDI data and their plans to close the gaps in a more transparent way, most of these reports remain at the aggregated workforce level. The near absence of disaggregated EDI data that would allow us to assess the diversity of the workforce in the most relevant AI functions (researchers, designers, business analysts) limits our ability to comment. As described earlier, the current literature suggests that the workforce dedicated to these key AI tasks is particularly underrepresented when it comes to women (below 20-25% in both the private and public sectors) and significantly underrepresented when it comes to Indigenous and Black people within the "members of visible minorities" group. While our sample is very small, the CSIS Diversity, Equity and Inclusion Strategy offers a small degree of disaggregated data by workforce specialty that seems consistent with these overall public service figures, with women representing 16.1%, Indigenous people 1.3% and persons with disabilities 3.4% of the science and technology workers at CSIS.Footnote 62

3. Conclusions and recommendations

The protection of national security must be the top priority for any national government. A conversation with Canadians about national security challenges and the trade-offs between security and transparency is overdue. Some might argue that Canadians have become complacent about the nature of threats to national security, and it is clear there is no consensus around what constitutes a threat. As the nature of threats and risks become increasingly complex, in part due to the development and proliferation of new technologies, it may be timely to undertake a conversation with Canadians about national security in a way which is meaningful and engages non-experts. While Canadians broadly support efforts to protect national security, and indeed are concerned when lapses emerge, they also expect government to be accountable both for how threats are addressed and when they are not. As part of the dialogue with Canadians, Government should be transparent about the measures in place to ensure accountability and what kinds of appeals mechanisms are available in the event of complaints. Complaints and appeals mechanisms should be well understood, accessible and operate in a timely and transparent manner.

Our recommendations seek to build a path to meaningful engagement of Canadians around national security challenges and strategies. However, as others, including Jody Thomas, have noted, engagement alone is not enough to secure improved trust in national security agencies. It must be coupled with changes in policy and practice. The introduction of a declassification policy and working with an open-by-default mindset are good examples here.

As digital and data-driven dimensions of national security governance become increasingly pronounced and consequential, we have sought to pursue two broad objectives in this report: first, to better understand the evolving and deepening inter-relationships between national security and digitization; and second, to provide guidance in terms of how the Government of Canada can better leverage transparency to strengthen public awareness and understanding, and ultimately collective trust, as new threats are confronted and new responses are devised.

In our view, this latter objective is also closely aligned with the genesis of our three prior reports, and the deeply embedded tensions between openness and secrecy that underpin the formation of the Government of Canada's own Transparency Commitment (and the very existence of the NS-TAG). As such, building on our own review and analysis in the prior section of this report focused on some of the most centrally important digital realms of national security, we have sought here to align our own recommendations with the six principles of the Transparency Commitment and their three thematic pairings: Information Transparency, Executive Transparency, and Policy Transparency.

Information transparency

Information Transparency, under the National Security Transparency Commitment, includes two principles aimed at clearly communicating what Government of Canada departments and agencies are doing to protect national security. Principle one states, "Departments and agencies will release information that explains the main elements of their national security activities and the scale of those efforts." Principle two states "Departments and agencies will enable and support Canadians in accessing national security-related information to the maximum extent possible without compromising the national interest, the effectiveness of operations, or the safety or security of an individual."

Recommendation 1: Implementation of recommendations from mid-term evaluation of the National Cyber Security Strategy

In light of the rising prominence of cyber-security threats and challenges, and given the Government of Canada's own 2022 midterm evaluation of its National Cyber-Security Strategy (NCSS), more transparency is required in terms of how the four recommendations made to the Senior Assistant Deputy Minister, National and Cyber Security Branch are being acted upon, namely:

Recommendation 2: Transparency of plans for use of AI

We recommend that all national security entities publish detailed plans outlining their current and intended uses of AI systems and software applications, as well as safeguard and review mechanisms to mitigate risks such as ethnic, gender and other biases, algorithmic opacity, and other unintended consequences.

Recommendation 3: Transparency of Technology Onboarding Program

We also recommend the RCMP be much more forthcoming with regards to the specific contours, usage, and findings of its Technology Onboarding Program. This includes providing specific and detailed guidance explaining how its own Technology Onboarding Program (created in large part to address AI concerns) aligns with Canadian values, including those expressed by the Charter.

Recommendation 4: Transparency on diversity and inclusion in staffing

We recommend that Public Safety Canada provide more detailed information and analysis of its workforce development challenges (including skills shortages, recruitment efforts, and DEI milestones), while also addressing one of the recommendations of the 2022 NCSS Midterm Review which calls on the Department to 'increase awareness of how aspects of diversity and inclusion inherent to GBA Plus apply to cybersecurity.'Footnote 64

Recommendation 5: Transparency of threat assessments

Canadians understand that not all information related to national security can be shared. However, recent events, for example in the area of foreign interference, raise questions about what policies and practices are in place for addressing threats and informing Canadians. The Government could increase proactive sharing of policies governing threat assessment in the digital world and beyond, including the legal and regulatory frameworks governing the tools or means being used both to identify threats and to address them.

Executive transparency

Executive transparency includes communication which seeks to explain the legal structure for protecting national security, and how choices are made within that structure. Again, here there are two related principles. Principle three of the Transparency Commitment states "Departments and agencies will explain how their national security activities are authorized in law and how they interpret and implement their authorities in line with Canadian values, including those expressed by the Charter." Principle four states "Departments and agencies will explain what guides their national security-related decision making in line with Canadian values, including those expressed by the Charter."

Recommendation 6: Transparency of the Strategic Coordination Centre on Information Sharing

With both perceived and real tensions between data-driven national security efforts and individual privacy, we recommend that Public Safety Canada be more open and transparent with regards to the objectives, conduct and impact of its Strategic Coordination Centre on Information Sharing.

Recommendation 7: Consultation on AI privacy impact assessments

In terms of the privacy impacts of AI usage and applications, we recommend that Public Safety Canada work with the Privacy Commissioner and other stakeholders to establish mechanisms for open and verifiable standards of algorithmic design and deployment, as well as independent review.

Recommendation 8: Openness on encryption and lawful access

We recommend greater openness, outreach, and engagement as Public Safety Canada undertakes internal policy development focused on digital encryption and seeks to meet the pledges it made in adopting the 'International Statement: End-to-End Encryption and Public Safety' (notably to 'engage in consultation with governments and other stakeholders to facilitate legal access in a way that is substantive and genuinely influences design decisions' and to 'work with industry to develop reasonable proposals that will allow technology companies and governments to protect the public and their privacy, defend cyber security and human rights and support technological innovation'Footnote 65).

Recommendation 9: Engagement and transparency to mitigate systemic racism

We recommend that Public Safety Canada mitigate against systemic racism, which can lead to the over securitization of specific ethnic communities and biased use of digital tools, through ongoing engagement and transparency with impacted communities.

Recommendation 10: Engagement to combat online misinformation and hate

We recommend that Public Safety Canada realize the commitment it made in the 2022-2024 Open Government Action Plan,Footnote 66 and deepen public outreach and engagement efforts, particularly with vulnerable communities, in devising innovative and meaningful ways to combat online misinformation and hate. In doing so, and as one area of particular concern and urgency, we urge all national security entities to review and respond to the findings of the recent Senate report on Islamophobia and its focus on rising levels of online hate and abuse,Footnote 67 as well as to monitor and respond to growing trends of antisemitic, anti-feminist, and anti-2SLGBTQI+ hate.

Recommendation 11: Oversight of Government use of AI

The Artificial Intelligence and Data Act (AIDA) proposed in Bill C-27 excludes government institutions from its scope, and the proposed AI Commissioner would not have jurisdiction over government institutions. We recommend that the Government of Canada examine the possibility of extending AIDA, including its transparency obligations, to government institutions.

Policy transparency

Policy transparency is the third aspect of transparency outlined under the National Security Transparency Commitment. It includes engaging Canadians in a dialogue about the strategic issues impacting national security. As with Information and Executive Transparency, Policy Transparency includes two principles. Principle five under the Commitment states "The Government will inform Canadians of the strategic issues impacting national security and its current efforts and future plans for addressing those issues." Principle six states "to the extent possible, the Government will consult stakeholders and Canadians during the development of substantive policy proposals and build transparency into the design of national security programs and activities."

Recommendation 12: Transparency of cyber-security threats

Building on the efforts of the Canadian Centre for Cyber Security (CCCS), and its August 2023 reporting in particular, we recommend that the CCCS develop a regular monitoring and public reporting mechanism to both catalogue and categorize cyber-security threats, and to better explain and continually update the Government of Canada's own efforts to improve cyber-resilience for the public sector and for the country as a whole.

Recommendation 13: Engagement around critical infrastructure

In keeping with the mandate of the Canadian Centre for Cyber Security, the findings and recommendations of Public Safety Canada's own 2022 Midterm Review of the National Cyber Security Strategy, and our own prior report, we further recommend more openness and engagement with Canada's critical infrastructure owners as well as all levels of government, academia, and private industry to increase readiness and resiliency to combat cyber threats that may affect technologies, networks, assets, and services essential to the health, safety, security, and economic well-being of Canadians.

Recommendation 14: Promoting national security transparency internationally

In collaboration with other democracies, and building upon NATO's 2021 Artificial Intelligence Strategy (including its Principles of Responsible Use), we recommend that Public Safety Canada seek to devise anticipatory capacities for better understanding the growing inter-linkages between AI and national security governance, and to enhance transparency of strategies and safeguards on an international scale.

Recommendation 15: Review and reporting on the National Security Transparency Commitment

With the exception of our own NS-TAG reports and CSIS's public response to our third report, we note that, more than five years into Public Safety Canada's 2017 National Security Transparency Commitment, there has been no public reporting on progress, achievements, assessment, or evaluation tied to this Commitment. Accordingly, we recommend that in 2024 a formal review of the Transparency Commitment be undertaken, with public reporting on initiatives undertaken, impacts to date, and activities to come.

Recommendation 16: Publication of bi-annual plan for reporting on the National Security Transparency Commitment

We further recommend that Public Safety Canada commit to the formulation of a published, bi-annual plan for the Transparency Commitment, modeled on the Open Government Action Plans of recent years. Where appropriate, alignment and cross-pollination of these action plans should be pursued (building on the inclusion of two national security initiatives tied to misinformation in the current Open Government Action Plan). Our next report in 2024 will complement this review effort by taking stock of NS-TAG's own activities and impacts, and of the national security entities' willingness to respond to our past and current recommendations over the course of the past five years.

Annex A: Charts on Diversity and Inclusion in the National Security Community

Public plans and strategies

Public Safety Canada (PS)

Internal: "Strategic Framework on Diversity and Inclusion for Public Safety" (2020)

External: Bias, Sensitivity, Diversity and Identity in National Security Annual Symposia (2020-present)

Royal Canadian Mounted Police (RCMP)

Internal: "Equity, Diversity and Inclusion Strategy" (2021)

External: Established the RCMP-Indigenous Collaboration, Co-Development & Accountability (RICCA) Office to build collaborative relationships with Indigenous Communities (2021)

Canada Border Services Agency (CBSA)

Internal: "2021 to 2024 Employment Equity (EE), Diversity and Inclusion Action Plan" (2021)

External: Targeted EX-01 and EX-02 staffing processes from members of Employment Equity groups (2021)

Canadian Security Intelligence Service (CSIS)

Internal: New CSIS Diversity, Equity, and Inclusion Strategy with action plan released in 2022 based on employee experiences and recommendations; interim recruitment and staffing plan (2022)

External: Ongoing stakeholder engagement that reflects intersectional considerations and equity sensitivities (2022)

Communications Security Establishment (CSE)

Internal: Working toward the establishment of a Sponsorship Program to focus on creating opportunities for progression into more senior ranks (2021)

Canadian Armed Forces (CAF)

Internal: "CAF Employment Equity Plan 2021-2026" (2021)

External: Targeted recruitment and prioritizing applications from members of EE groups (2021)

Department of National Defence (DND)

Internal: "D&I Lens Tool" to complement GBA+ with a prescriptive process to analyze policies, processes, communications products, research materials and other initiatives (2021)

External: Sessions with external stakeholders to support the work of the Anti-Racism Secretariat (2021)

Global Affairs Canada (GAC)

Internal: "Action Plan on Reconciliation with Indigenous Peoples" (2021)

External: Hired a consultant to conduct an environmental scan of the current context and challenges related to equity, inclusion and systemic racism (2021)

Privy Council Office (PCO)

Internal: Employment Equity, Diversity and Inclusion (EED&I) plan for 2020-2023 (2020)

External: Sought external recruitment services via Black, Indigenous and People of Colour (BIPOC) Executive Search and Odgers Berndtson Executive Search to maintain an inventory of qualified members of EE groups (2021)

Transport Canada

Internal: Diversity and Inclusion Action Plan for 2020-2023 (2020)

External: Engagement with Indigenous, Black and 2SLGBTQIA+ groups

D&I in the NS Workforce 2019-2021

Women

| Organization | 2019 WFA* | 2019 | 2020-2021 WFA** | 2020-2021 |
| --- | --- | --- | --- | --- |
| PS | 55.3% | 61.1% | 61.6% | 59.1% |
| RCMP | 48% | 39.5% | n/a | n/a |
| CBSA | 44.4% | 47.5% | 43.5% | 47.4% |
| CSIS | 47.3% | 48.5% | n/a | 49% |
| CSE | 36.7% | 37.3% | n/a | 36.8% |
| CAF | 14.5% | 15% | n/a | 16.1% |
| DND | 39.5% | 40% | 44.3% | 41.3% |
| GAC | 57.6% | 55.3% | 55.9% | 56.3% |
| PCO | 52.2% | 57% | 53.1% | 53.6% |
| ITAC | 47.3% | 68% | n/a | 53.6% |
| TC | 42% | 46.7% | n/a | 45.7% |

Indigenous Peoples

| Organization | 2019 WFA | 2019 | 2020-2021 WFA | 2020-2021 |
| --- | --- | --- | --- | --- |
| PS | 3.1% | 4.2% | 4% | 4.8% |
| RCMP | 4% | 6.8% | n/a | n/a |
| CBSA | 4.1% | 3.3% | 3.8% | 3.8% |
| CSIS | 2.6% | 2.3% | n/a | 2% |
| CSE | 1.8% | 2% | n/a | 1.9% |
| CAF | 3.4% | 2.8% | n/a | 3.4% |
| DND | 2.6% | 3.1% | 4.1% | 3.5% |
| GAC | 3.1% | 4.6% | 3.5% | 7% |
| PCO | 1.8% | 2.9% | 3.3% | 3.6% |
| ITAC | 2.6% | 5% | n/a | 1.9% |
| TC | 3% | 3.5% | n/a | 3.2% |

*WFA – Workforce Availability

**The 2019 data is extracted from the 2019 NSICOP Report (except for TC's data). For 2020-21 data (and TC's 2019 data), sources include information available on individual websites, annual reports, internal HR data, as well as the report on Employment Equity in the Public Service of Canada for Fiscal Year 2020 to 2021. Please note that the RCMP's overall percentage data and 2020-21 WFA are unavailable. The 2020-21 WFA is based on the most recent departmental workforce availability estimates (Census 2016 and 2017 Survey on Disability). Please also note that TC's 2020-21 WFA is also unavailable.

D&I in the NS Workforce 2019-2021

Members of Visible Minorities

| Organization | 2019 WFA* | 2019 | 2020-2021 WFA | 2020-2021 |
| --- | --- | --- | --- | --- |
| PS | 15.1% | 11% | 16.7% | 18.3% |
| RCMP | 18% | 12% | n/a | n/a |
| CBSA | 11.9% | 14.7% | 15.6% | 18.7% |
| CSIS | 18.5% | 16.5% | n/a | 19% |
| CSE | 21.5% | 11.4% | n/a | 12.9% |
| CAF | 6% | 7.2% | n/a | 9.6% |
| DND | 8.7% | 7.8% | 11.8% | 10.5% |
| GAC | 13.9% | 20.3% | 16.1% | 25.7% |
| PCO | 12.7% | 13% | 16.8% | 19.9% |
| ITAC | 18.5% | 13% | n/a | 21.1% |
| TC | 17.5% | 15% | n/a | 15.6% |

Persons with Disabilities

| Organization | 2019 WFA | 2019 | 2020-2021 WFA** | 2020-2021 |
| --- | --- | --- | --- | --- |
| PS | 3.9% | 5.9% | 8.8% | 7.3% |
| RCMP | 5% | 2.4% | n/a | n/a |
| CBSA | 4.4% | 3.4% | 9.4% | 4.2% |
| CSIS | 4.6% | 4.2% | n/a | 5% |
| CSE | 4.2% | 3.7% | n/a | 3.9% |
| CAF | n/a | n/a | n/a | 5.5% |
| DND | 4.6% | 5.4% | 9% | 5.6% |
| GAC | 3.9% | 3.6% | 9.1% | 3.8% |
| PCO | 4% | 3.4% | 8.7% | 3.8% |
| ITAC | 4.6% | n/a | n/a | 4.6% |
| TC | 8.2% | 3.8% | n/a | 3.5% |

*WFA – Workforce Availability

**The 2019 data is extracted from the 2019 NSICOP Report (except for TC's data). For 2020-21 data (and TC's 2019 data), sources include information available on individual websites, annual reports, internal HR data, as well as the report on Employment Equity in the Public Service of Canada for Fiscal Year 2020 to 2021. Please note that the RCMP's overall percentage data and 2020-21 WFA are unavailable. The 2020-21 WFA is based on the most recent departmental workforce availability estimates (Census 2016 and 2017 Survey on Disability). Please also note that TC's 2020-21 WFA is also unavailable.

2021 Public Opinion Research

Public Service Commission Audit of Employment Equity Representation in Recruitment: Employment Equity Group Representation Rates Following Each Stage of the Recruitment Process (2021)

Image description

The bar chart shows the rates of representation at each stage of the recruitment process in the public service:

| Stage | Women | Members of visible minorities | Indigenous peoples | Persons with disabilities |
| --- | --- | --- | --- | --- |
| Job application | 52.8% | 30.4% | 3.5% | 4.4% |
| Automated screening | 53.2% | 30.3% | 3.6% | 4.7% |
| Organizational screening | 59.7% | 27.4% | 4% | 4.5% |
| Assessment | 59.8% | 24.6% | 2.9% | 3.6% |
| Appointment | 58.2% | 24.7% | 2.9% | 2.4% |

Annex B: CBSA's response to the NS-TAG's second report, "How National Security and Intelligence Institutions Engage with Racialized Communities"

Canada Border Services Agency
President
Ottawa, Canada
K1A 0L8

National Security Transparency Advisory Group Public Safety Canada

August 18, 2023

Dear National Security Transparency Advisory Group members:

On behalf of the Canada Border Services Agency (CBSA), I would like to thank you for your feedback on the Agency's Traveller Modernization initiative. As we move forward, the CBSA is cognizant of the need to strike a fine balance. Our mandate to keep Canadians safe is paramount, and we cannot divulge information that would give threat actors an advantage in circumventing Canada's border-security measures. The Agency's move towards a more efficient, data-driven approach to border management will be most effective if Canadians have confidence that the CBSA is a secure, responsible, and sensitive steward of their personal information.

In the interest of striking the right balance, there are several ways that the Agency is leveraging the work of the National Security Transparency Advisory Group (NS-TAG) to modernize our efforts on the Traveller Modernization initiative:

  1. The CBSA has considered the recommendations from your report How National Security and Intelligence Institutions Engage with Racialized Communities. With these recommendations in mind, the CBSA is working to strengthen relationships with Indigenous and racialized communities that are regularly impacted by our border-management programs. The Agency is also considering additional ways to bring these voices directly into our governance.
  2. Immediately following our engagement with you in January 2023, we set out to integrate your advice into our planning for public engagement, transparency, and assurance efforts around the Traveller Modernization initiative.
  3. The Agency has carefully reviewed the additional points raised in your written feedback of March 2023, and is integrating this advice into the next phases of Traveller Modernization, which will include stakeholder engagement, public outreach, and education.

Enclosed you will find a table that indicates the areas of your feedback where we are already taking action. We've made good progress, but there are still areas of your feedback that we will need to consider through subsequent development of the Traveller Modernization initiative. As we continue this work, we may seek to engage NS-TAG members on specific areas of effort that would benefit from an independent view.

I would like to reiterate our thanks for your ongoing support for our transparency and engagement efforts at the CBSA and, more broadly, for your valuable guidance on these efforts across our national security community.

Yours sincerely,
Erin O'Gorman

Enclosure

Ensuring transparency, protecting privacy, and mitigating bias in the Traveller Modernization initiative: progress as of August 2023

Transparency:

In addition to planned external engagement activities, including the publication of three consultation papers, NS-TAG members believe the CBSA will only truly meet its transparency objectives if it proactively reaches out to travellers by, for example:

Explaining Motivation for TM Initiative:

Recommended messaging to Canadians: Through voluntary initiatives where travellers provide access to personal data, the CBSA has been able to introduce alternative forms of port-of-entry examination that have been critical to meeting growing traveller volumes, reducing waiting times for travellers and protecting border integrity. The Traveller Modernization initiative is a new voluntary initiative in which your consent to the access and storage of some of your private data will allow the CBSA to proceed with your examination even before you arrive at the port of entry. We are committed to safeguarding the storage and use of your private data. You can also opt for a traditional face-to-face examination, with the understanding that you may experience longer delays.

CBSA Action

The Agency will incorporate this messaging into the existing Traveller Modernization communications and stakeholder material. This will include amending key messaging, web content and both internal and external stakeholder engagement plans.

Pushing information out to travellers:

Since the TM Initiative will be implemented in a phased approach over the next few years, the CBSA could provide information at Ports of Entry in advance of full implementation. Flyers, signs at ports of entry and ads on airport internet portals, for example, could announce the initiative, providing key information (i.e., the fact that the use of the TM Initiative is optional, that it includes facial recognition and that the purpose is to improve both the travellers' experience at the border and the efficiency of the CBSA) as well as the internet links to the consultation papers.

CBSA Action

The Agency is working with key industry stakeholders to push information to travellers via various channels (the Air Canada mobile application, Advanced Declaration signage, etc.).

Making full information on the TM Initiative easily accessible:

With respect to the personal information used in the TM Initiative, the consultation papers must "make readily available, in plain language, information that explains the [CBSA's] policies and practices", to use the exact wording the Canadian government proposes in relation to transparency obligations for private sector organizations in Bill C-27, amending the federal private sector privacy law.

CBSA Action

The Agency has begun to incorporate plain language into the Traveller Modernization web page, affording the public access to both summary information as well as more detailed information for those who seek additional clarity on CBSA policies and procedures.

Being transparent about the CBSA's legal obligations relevant to the TM Initiative:

Public consultations must address the legal test imposed upon the Government of Canada in the legitimate restriction of any rights and freedoms. Public consultations must contain the following explanations:

CBSA Action

The Agency will seek to provide plain language information to Canadians on collection and use of their information throughout the traveller continuum. As part of these efforts, the CBSA is developing a comprehensive privacy framework, which includes overarching privacy impact assessments that consolidate existing functions in the Traveller Program and will have the capacity to incorporate new initiatives. The privacy management framework will support the protection of traveller information and leverage protocols that will ensure collection, storage, use, and retention of personal information in a privacy-conscious manner that is consistent with legislative requirements. Finally, engagement with the public and stakeholders will outline the effectiveness of digital tools offered through the Traveller Modernization initiative and why it is in the public interest to modernize Canada's border.

Seeking feedback:

Canadians should be given an opportunity to provide feedback and ask questions through a dedicated contact. For example, information on the TM should always contain a "Tell us what you think" option with a direct link to an email address for the CBSA to receive comments and answer questions.

This "Tell us what you think" option, to ensure greater transparency, should be accompanied by a commitment from the CBSA to use and report on this feedback in an open and meaningful manner.

CBSA Action

The Agency has built a help function into the current digital tool that allows people to directly request support from the CBSA. The Agency will include a "What We Heard" report as part of our public engagement efforts to summarize the feedback in an open and meaningful manner and ensure greater transparency.

In seeking comments, allowing the option to self-identify according to relevant interest groups:

The TM Initiative will not be experienced in the same way by all demographic groups. For example, religious considerations may not be accommodated in an initiative based on full facial recognition. Race has also been proven to affect the accuracy of facial recognition. Transparency about the differential impact the TM Initiative may impose upon diverse groups, and how it is mitigated, will be critical to fully meeting transparency objectives.

CBSA Action

The Agency has used biometrics in primary inspection kiosks in airports since 2017 and, based on this experience, we continue to evaluate the impact of demographic attributes such as gender, age, and country of birth on the overall performance of our biometric systems to ensure the accuracy of facial recognition for diverse groups.

To enable the delivery of Traveller Modernization biometric capabilities, we are also actively monitoring research and development in the biometric domain to ensure that new technologies do not introduce systemic barriers or inequalities, and are ensuring the use of facial recognition at Canada's borders will remain voluntary.

Additionally, the Agency is considering how biometric technologies could present an opportunity to remove existing barriers and promote equality for all individuals who arrive at Canada's borders.

Privacy:

Ensuring transparency is intrinsically linked to the objective of respecting privacy through the TM Initiative. Transparency of the TM Initiative is critical to the validity of consent to provide personal information in order to participate in it, which is a privacy issue, and transparency of privacy protection is critical to accountability in that regard. The CBSA can meet these goals by:

Approaching privacy related issues in a context of diversity:

As mentioned above, the use of personal information envisaged in the TM Initiative does not affect all notions of privacy in the same way. Where accommodations should or can be made, they must be addressed. The "Tell us what you think" function should serve as an additional source of diverse opinions and needs relating to the implementation of the TM Initiative.

CBSA Action

The Agency is preparing to engage diverse communities, not only on the issue of privacy but on Traveller Modernization as a whole. This will take the form of public user research and facilitated group discussions.

Allowing withdrawal of consent:

Where the use of personal information is optional, its legality rests upon the validity of consent. Under "Transparency", we mention above the transparency requirements for valid consent to provide personal information. Under the right to privacy, where the collection of personal information is subject to consent, such consent must be as easy to withdraw as it is to give. Further, individuals, upon giving consent, must be clearly informed of their right to withdraw consent at any time. Personal information used with consent prior to withdrawal must be destroyed unless it remains demonstrably necessary. In line with the "Tell us what you think" function, the withdrawal-of-consent process should provide a link giving those who wish to do so the opportunity to provide reasons or feedback.

CBSA Action

Some personal information must be provided under border legislation to allow the verification of traveller identity at the border. Beyond this, the Agency is building consent into the Traveller Modernization initiative as a foundational element, including monitoring feedback, withdrawal of consent, and how these factors would impact the retention and use of personal data.

Creating internal mechanisms for privacy oversight:

The CBSA already has a complaint mechanism. It should be clearly extended to privacy complaints under the TM Initiative.

CBSA Action

The Agency's complaint mechanism will capture complaints related to the Traveller Modernization Initiative and the Agency will use this information to identify problems and adjust its service delivery, as it does in other program areas.

The Agency also works closely with the Office of the Privacy Commissioner on traveller complaints related to privacy, and this would extend to tools used as part of the Traveller Modernization initiative.

There is also legislation before Parliament to create a new external review body that would address complaints against the CBSA. This new body would open up an additional avenue for independent review of complaints against the CBSA, and an additional layer of public assurance.

Informing individuals of their privacy rights upfront:

As they provide consent, individuals should receive information on their right to privacy, including: the right to request access to their personal information (in addition to, or through, the direct access mechanism proposed above); the right to obtain correction if it is not accurate; the right to withdraw consent and to have their personal information deleted unless it is still necessary; and the right to complain to the CBSA about a perceived violation of privacy and to lodge a complaint with the Office of the Privacy Commissioner of Canada if they are not satisfied with the CBSA's response. A function to ask questions should be added as a link.

CBSA Action

The Agency agrees that individuals should be informed of their privacy rights up front, and any digital tool developed under Traveller Modernization will include a privacy notice statement and a disclosure of terms and conditions of collection, use and retention of data to inform individuals of their privacy rights.

Travellers have the ability to request and access data the CBSA maintains on them via a personal information request under the Privacy Act. In addition, the Privacy Act allows individuals to correct the information the CBSA has on file about them via a Record Correction Request.

Mitigating Bias

In addressing algorithmic bias, the following are seen as best practices:

Ensuring algorithmic transparency:

Algorithmic transparency faces two main challenges: technological complexity (the application of the algorithm by the machine often escapes even its creators) and the secrecy necessary for effectiveness. Best practices include:

CBSA Action

The Agency is incorporating safeguards and guardrails for the implementation, operationalization, monitoring and reporting of tools that incorporate artificial intelligence.

The CBSA is focusing on algorithmic transparency through multiple channels, including but not limited to:

CBSA is actively working to identify and mitigate data and analytic bias. The Agency has created a Data Ethics and Bias team to conduct research and analysis into best practices, develop tools to test and mitigate bias, and serve as an internal challenge function to ensure ethical considerations are central to analytics.

Further, the CBSA has created an Office of Biometrics and Identity Management to guide the Agency's application of biometrics in Traveller Modernization. The Office of Biometrics and Identity Management will play a key role in evaluating technical capabilities and guiding the design, implementation and operation of Agency biometrics initiatives in a way that enhances transparency and strengthens public trust.

Reporting on the impact of the algorithms:

This refers to year-end reports on the impact of the algorithms to verify the absence of bias.

CBSA Action

CBSA is ensuring compliance and alignment with the Treasury Board's Directive on Automated Decision Making for all aspects of Traveller Modernization. The CBSA complies with all existing policies and directives and strives to be a leader in ethical data use. As part of this effort the CBSA complies with privacy impact report requirements and open government requirements (when there are no security considerations, or it is possible to mitigate any security risks).

Specific to the CBSA's efforts to align with the TB Directive on Automated Decision Making, we are engaged in the following specific work:

Under the Office of Biometrics and Identity Management, a research agenda has been developed to understand the impacts of demographic attributes such as gender, age, and country of birth on overall biometric performance.

Annex C: Overview of NS-TAG meetings from October 2022 to May 2023

In-Person Meeting – October 23 to 24, 2022

Theme/Topic: Briefings and Forward Planning

Representatives from various national security departments and agencies met with NS-TAG members to discuss their respective mandates and responsibilities in safeguarding the security and interests of Canada and Canadians, existing oversight mechanisms, and their various transparency initiatives.

Virtual Meeting – January 19, 2023

Theme/Topic: Discussion with CBSA on their Traveller Modernization Initiative

The NS-TAG welcomed the Canada Border Services Agency (CBSA) to discuss its Traveller Modernization Initiative, which aims to equip border officers with the tools to search for high-risk travellers while providing a more efficient border experience for travellers.

Virtual Meeting – March 10, 2023

Theme/Topic: The Use of Digital Tools and Emerging Technologies in the Protection of National Security – Part 1

The NS-TAG welcomed guests from the Diversity Institute and OpenAI to explore measures that can help national security departments and agencies increase their knowledge of, and ability to use, digital tools during times of rapid technological change.

Virtual Meeting – April 4, 2023

Theme/Topic: The Use of Digital Tools and Emerging Technologies in the Protection of National Security – Part 2

Guest speakers from the Canadian Civil Liberties Association met with NS-TAG members to explore the importance of privacy in national security and the intersection between privacy, policy, and law.

In-Person Meeting – May 26 to 27, 2023

Theme/Topic: The Use of Digital Tools and Emerging Technologies in the Protection of National Security – Part 3

NS-TAG members met with members of civil society, academia, and select national security departments and agencies to discuss this year's theme, "Emerging Technologies and Digital Tools in the Protection of National Security". Over the course of two days, NS-TAG members engaged with guests in discussions on the ways technology is being used in the national security space and the potential problems that could arise from its use. They also explored how to integrate transparency into Canada's national security framework.

List of Guest Speakers Who Participated in the NS-TAG Meetings:

Annex D: Transparency Frameworks

This annex provides a comprehensive overview of the transparency initiatives governing the national security and intelligence community in Canada. It also compares the transparency frameworks of the Five Eyes countries. It endeavours to serve as a resource for Canadians to learn about transparency as it relates to the National Security Transparency Advisory Group's theme, The Use of Digital Tools in the Protection of National Security.

Frameworks governing the National Security sphere

Bill C-59: An Act Respecting National Security Matters Footnote 68

Bill C-59 made amendments to the Security of Canada Information Sharing Act. Amendments included imposing an obligation on record keeping and obliging all disclosing institutions to provide records to the National Security and Intelligence Review Agency (NSIRA) on an annual basis. The NSIRA is now required to report annually on activities that are made by disclosing institutions under the Security of Canada Information Disclosure Act. These reports are published and tabled in Parliament by the Minister of Public Safety.

Bill C-58: An Act to amend the Access to Information Act and the Privacy Act Footnote 69

This Act enhances the accountability and transparency of federal institutions to promote an open and democratic society and enables public debate on the conduct of those institutions. The enactment amended the Access to Information Act, giving the head of a government institution new authority – with the approval of the Information Commissioner – to decline to act on a request for access to a record for various reasons. It permits the Information Commissioner to make orders for the disclosure of records and publish reports, and gives parties the right to apply to the Federal Court for review of the matter.

Bill C-27: An Act to enact the Consumer Privacy Protection Act, and the Artificial Intelligence and Data Act Footnote 70

The Consumer Privacy Protection Act would repeal parts of the Personal Information Protection and Electronic Documents Act and replace them with a new legislative regime governing the collection, use, and disclosure of personal information. It maintains, modernizes, and extends existing rules while imposing new rules on private sector organizations for the protection of personal information.

The Artificial Intelligence and Data Act would regulate international and interprovincial trade and commerce in artificial intelligence systems. It establishes common requirements for the design, development, and use of artificial intelligence systems, including measures to mitigate biased output.

Directive on Automated Decision-Making Footnote 71

The objective of this directive is to ensure that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions and Canadian society. Automated decision systems are computer programs or algorithms that make decisions or predictions based on predefined rules and data, often without direct human oversight. It requires decisions made by federal institutions to be data-driven and comply with procedural fairness and due process requirements. Where appropriate, data and information on the use of automated decision systems in federal institutions are made available to the public and impacts of algorithms on administrative decisions are assessed to reduce negative outcomes to the greatest extent possible.

Privacy impact assessments and audits

Babel X Footnote 72

Babel X is an American-made software platform that allows users to locate and isolate publicly available information from online channels. The RCMP uses Babel X to analyze and organize data sources in support of national enforcement priorities. This privacy assessment was initiated by the RCMP to ensure that its use adheres to the legal requirements of the Privacy Act.

Findings revealed that the privacy impacts of the RCMP's use of Babel X are moderate. The assessment recommended that the RCMP continue to ensure that its use of Babel X remains compliant with the Privacy Act.

In-Car Digital Video System Footnote 73

The In-Car Digital Video System (ICDVS) is used by law enforcement agencies during investigations to capture audio and video recordings of incidents and interactions with the public. A privacy assessment was initiated by the RCMP to assess how personal information was being collected and disclosed to federal agencies.

The privacy assessment findings revealed that information collected by the ICDVS is compliant with legislative requirements.

Clearview Footnote 74

Clearview AI is a facial recognition system that can match images of individuals against an extensive database of publicly available photos. The company's technology allows law enforcement agencies, as well as private organizations, to identify and track individuals by comparing images to its database. The audit was initiated by the RCMP due to growing concern over Clearview's policies that do not require consent in the collection and sharing of individuals' images and personal data.

The audit concluded that Clearview's collection and disclosure of personal information through its facial recognition application was for a purpose that a "reasonable person would find to be inappropriate." As such, it was recommended that the RCMP immediately terminate its use of Clearview, and that Clearview no longer provide services to any client in Canada. Additionally, it was recommended that Clearview delete all images and biometric facial arrays of Canadians in its database.

Mobile Border Footnote 75

The Mobile Border application provides a more convenient and accessible reporting tool for travellers who do not meet all the required criteria for entering Canada. The initiative simplifies processing times and improves identity assurance levels. Individuals wishing to use this technology are required to apply for it by creating a traveller profile and entering their personal information.

A privacy impact assessment was conducted by the CBSA to assess the risk of privacy breaches. The assessment revealed that the technology collects personal data in a way that is compliant with the Customs Act and Privacy Act.

Primary Inspection Kiosk Footnote 76

The Primary Inspection Kiosk technology allows travellers to voluntarily submit their immigration and customs declarations to the CBSA before their arrival in Canada. The information collected is placed in a CBSA Protected B cloud database, where it is encrypted and later retrieved when the traveller has entered the country. Upon arrival, the traveller's declarations are certified, aiding in the inspection of their travel documents. To avoid potential privacy breaches, a privacy impact assessment was conducted and found the Primary Inspection Kiosk to be compliant with the Privacy Act.

Transparency frameworks of the Five Eyes countries

United Kingdom

The UK National Data Strategy aims to promote openness and accountability in the use of data.Footnote 77 It seeks to ensure that data practices and decision-making processes are transparent by providing greater visibility into data collection, usage, and sharing.

The UK government seeks to address evolving issues on impact measurement and effective governance by reviewing open data publication and decision-making processes to ensure their consistency.

The Algorithmic Transparency Recording Standard is a fundamental part of the National Data Strategy.Footnote 78 The strategy has a commitment to explore an effective way to deliver greater transparency on algorithm-assisted decision-making in the public sector through several key approaches:

By integrating these approaches, the UK National Data Strategy aims to create an ecosystem where AI technologies are developed, deployed, and used in a transparent manner.

Australia

Australia is part of the Open Government Partnership, an international initiative in which national and local governments work together with civil society organizations to promote accountable, responsible, and inclusive governance.Footnote 79 To date, Australia has released two Open Government National Action Plans and is currently working on its third.

Its second Open Government National Action Plan (2018-2020) contains eight commitments to enhance access to information, civic participation, public accountability, and technology and innovation for openness.Footnote 80

Notably, this includes the Access to Government Data Commitment which combines a variety of initiatives to increase the accessibility and use of open government data.Footnote 81 This includes: improving the functionality of data.gov.au (which is the Australian government repository of open data); establishing GrantsConnect, a web-based portal to provide a central repository of information on government grants; and, further developing the use of digital information management practices.

Australia has also implemented the Notifiable Data Breaches scheme as part of its Privacy Act.Footnote 82 The scheme requires notifying affected individuals and the Office of the Australian Information Commissioner when an entity subject to the Privacy Act experiences a breach of personal information that is likely to result in serious harm.

New Zealand

The Algorithm Charter for Aotearoa New Zealand is built on the commitment that the government of New Zealand will maintain transparency by explaining how decisions are informed by algorithms.Footnote 83 This includes plain English documentation of algorithms and publishing information about how data is collected, secured, and stored.

It also involves adherence to treaty commitments by embedding Te Ao Māori principles in the development and use of algorithms, consistent with the principles of the Treaty of Waitangi.Footnote 84 The Charter commits to ensuring data is fit for purpose by understanding its limitations, identifying and managing bias, and ensuring that privacy, ethics, and human rights are safeguarded through regularly conducted peer review. Finally, the Charter maintains consistent human oversight by nominating a point of contact for public inquiries about algorithms and by explaining the role of humans in decisions informed by algorithms.

United States

The Freedom of Information Act (FOIA) gives the American public the right to access US federal government records.Footnote 85 The FOIA has, however, been amended and altered numerous times. The 1976 Government in the Sunshine Act required government agencies to open meetings to the public.Footnote 86 In 1982, a Reagan administration executive order created new classification rules that made it much easier to withhold potentially sensitive government information in response to FOIA requests. In 1996, the Electronic Freedom of Information Act Amendments required agencies to make documents available in electronic formats to be digitally distributed upon request.Footnote 87 In 2007, the Bush administration signed the OPEN Government Act, which established the Office of Government Information Services.Footnote 88 In 2016, the Obama administration signed the FOIA Improvement Act, which required federal agencies to create a central online portal for anyone to file a request, and put a 25-year limit on federal agencies' ability to withhold classified documents.Footnote 89

When it comes to AI transparency, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in October 2022.Footnote 90

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems to ensure accountability and responsible AI use by the US federal government and civil society. The blueprint spells out four main principles that act as a guide to help prevent AI systems from limiting the rights of U.S. residents:Footnote 91
