April 16th 2024
By Marleen Mavrow, CISM, CRISC, PMP, GRC Practice Lead & Privacy Officer, Charter
With the public release of ChatGPT in November 2022, Generative Artificial Intelligence (Generative AI) burst into the public consciousness and quickly dominated headlines. (1) Within two months, it gathered over 100 million users, making it the fastest-growing consumer application in history. (2) Since then, several tech giants have launched competing Generative AI applications and services, eager to seize market position in this fast-growing area.
Generative AI promises significant benefits to individuals, corporations, and society if it can be harnessed effectively. New insights and trends can be gleaned from dormant data, helping to foster creativity and stimulate product design. Enhanced automation will drive efficiency and cost reduction, as well as faster detection of threats and vulnerabilities.
With these enticing new opportunities, organizations are seeking ways to leverage the power of Generative AI. Yet there are growing concerns around privacy, accuracy, and ethical use. Even if organizations are not ready to implement it, its prevalence within industry and its public availability mean it is likely already influencing staff, customers, partners, and other stakeholders. Executive leaders are seeking to put governance strategies in place while simultaneously harnessing this technology to ensure safe, secure, and predictable business outcomes.
This paper will explore approaches for safe adoption of Generative AI technology that can be leveraged across all industries, as well as organizations of any size.
In full disclosure, this paper was produced entirely with human intelligence, with no source data from any Generative AI tools.
Perhaps surprisingly, Artificial Intelligence has been in use for a long time. Alan Turing saw its potential as early as 1950 in his paper “Computing Machinery and Intelligence”, but it took the computational power and storage capabilities of modern computers to enable large-scale progress. (3) By 1997, IBM’s famous ‘Deep Blue’ computer beat world chess champion Garry Kasparov, (4) and a year later Cynthia Breazeal introduced Kismet, an emotionally intelligent robot. (5) In 2011, Apple integrated Siri into the iPhone 4S, (6) and Amazon introduced Alexa in 2014. (7)
Starting around 2010, Artificial Intelligence went mainstream, allowing organizations to create business value, particularly in the banking, marketing, and entertainment sectors. (8) Popular uses included data processing, predictive behavior (for sales, market trends, or financial performance), as well as data validation and protection, to sniff out anomalies from vast amounts of information. While these tools remained important for corporations, they were corporate-owned, robotic, and static in nature.
While related to those prior works, Generative AI is a quantum leap forward: it can easily synthesize and “generate” net-new data, at scale, almost instantaneously, which is how it earned its name. Its power to quickly process, learn, create, and provide content in multiple formats is transformative across every industry sector. Disruption is already occurring in traditional business fields. In a recent Dell survey, 76% of IT decision makers estimated that Generative AI will have a considerable impact on their organization. (9)
Significantly, this power can be harnessed by individuals, eliminating the barrier that separates domain experts from users. Its positive impact is evidenced in Generative AI-created music, art, education, and film.
However, cybercriminals have the same ability to harness Generative AI, creating a notable cyber risk to individuals, businesses, and national security. Given the rate at which a malicious Generative AI system could operate, many cybersecurity experts see Generative AI as the only practical response to those same threats. (10)
Additionally, Generative AI guardrails have vulnerabilities, such as manipulative prompting, through which they can be bypassed or exploited. If exploited, these can lead to unauthorized data access or technology manipulation, such as embedding harmful instructions or generating inappropriate content.
While this technology presents tremendous opportunity, it also raises many questions.
Canada’s financial institutions regulator, the Office of the Superintendent of Financial Institutions (OSFI), has described the ethics of generative AI as a complex endeavor, where “One person’s view of fairness may differ from another’s.” (11)
As organizations rush to embrace this innovation accelerator, some pitfalls have already occurred. Deepfakes of individuals have led to inappropriate payments (12) and chatbots have provided false recommendations. (13) If Generative AI creates a new song, who ultimately owns the song? And if it infringes upon another artist’s copyright, who is at fault?
Generative AI is lurking behind the scenes, shifting the way businesses operate and changing the mechanics of operations. Traditional understandings, processes, and technologies are being upended and this presents new layers of risk and uncertainty.
Many organizations have a strong desire to leverage Generative AI, and experimentation with Generative AI tools is relatively common across all seniority levels, both within and outside of work. (14) Given this widespread use, the question is no longer “if” but “how.” It is, therefore, necessary for organizations to build guidance on the appropriate use of Generative AI, followed by a business case for the adoption of Generative AI technology.
As a first step, organizations should put in place an Acceptable Use Policy (AUP) around the use of Generative AI tools. Such guidance would provide clear expectations around the use of these tools, particularly noting activity that is acceptable or prohibited. Without a clear policy in place, organizations risk the loss or exposure of data, as well as potential security compromises. There are a growing number of common policies organizations can easily adopt and customize to meet their desired culture and sensitivities.
As organizations look to deploy Generative AI, a business case should be established outlining the key factors involved.
As a new, complex technology, Generative AI can have far-reaching ethical and legal repercussions that can quickly, and significantly, harm businesses if not appropriately addressed up front. To reduce risk, the following areas should be addressed in a business case for Generative AI adoption.
Regardless of people, process, or technology, organizations will always be legally responsible for complying with current laws and regulations, particularly provincial and federal privacy legislation. Using a new technology is not a valid excuse to regulators or the courts for bypassing the law. For example, Air Canada recently argued to the BC courts that it was not responsible for outputs from its Generative AI-powered chatbot, and was unsuccessful. (13) Given this, organizations need to consider the ethical influence and ramifications of the Generative AI outcomes their organization produces.
Most current laws and regulations are silent on Generative AI, as they were adopted prior to the Generative AI revolution. Still, Canada’s federal privacy law, PIPEDA, is consent-based: organizations must obtain consumer consent before using their information, regardless of the tools they utilize. (20)
In response to fraud and the rapid rise of ransomware, Canada is moving forward to modernize its legislation through Bills C-26 and C-27. (21) (22)
In addition, federal and provincial regulators are regularly issuing guidance on Generative AI, including provincial health authorities as well as OSFI, Canada’s financial institutions regulator. OSFI, together with the Global Risk Institute, formed the Financial Industry Forum on Artificial Intelligence, which has put forward recommended safeguards and risk management strategies grouped into four principles (the EDGE principles): Explainability, Data, Governance, and Ethics. (23) Its recommendations are applicable to any industry and should be incorporated into Generative AI business cases and designs.
It is not in an organization’s best interest to wait for regulators to issue guidance or for laws to come into effect; organizations should take steps to get ahead of the curve. Practical guidance is already available from respected standards and frameworks, including the NIST AI Risk Management Framework (24) and ISO/IEC standards on AI risk management, AI management systems, and machine learning frameworks. (25) (26) (27)
Generative AI will likely be the most transformative technology of our time and represents exciting opportunities for efficiency, growth, and protection. As this new technology unfolds, it must be guided as a tool that enhances, rather than undermines, potential and creativity. Organizations can move forward with confidence and manage risk through deliberate departmental planning, with leaders taking proactive governance steps.
While Generative AI does not necessarily change the entire business model of an organization, it can be a catalyst for growth. Leaders who take these steps will enhance organizational resilience, retain talent, accelerate digital transformation, and position their companies to compete well in their marketplaces.
Marleen Mavrow is Charter’s GRC Practice Lead and Privacy Officer and brings 20+ years of experience in strategic planning and operational success. Focused on success through teamwork, collaboration, and stakeholder management, Marleen has led many business transformation engagements with small, medium, and large-scale clients across various Canadian sectors, including the Financial, Transportation, Technology, and Public sectors, leading GRC and Digital Transformation efforts. A proven leader with strong analytical and communications abilities, Marleen aims to align IT practices with business goals.
Marleen is a member of ISACA, DAMA and PMI and is an active member of the ISACA Vancouver Chapter Board, championing the advancement of digital trust by empowering IT professionals through growth in knowledge and skills.
Charter is an award-winning IT solution and managed services provider established in 1997 in Victoria, BC, Canada. Our mission is to align people, process, and technologies to help build better organizations, enhance communication, boost operational performance, and modernize businesses.
Our team of experts leverages a business architecture methodology and a human-centered design approach to drive successful digital transformation for our clients, unlocking new opportunities, generating new value, and driving growth. Charter offers a comprehensive range of services, including advisory and consulting services, project services, and managed services, allowing us to provide end-to-end solutions, from planning and design to implementation and ongoing support. We offer knowledge and support that extends beyond our clients’ businesses, empowering them to focus on their core operations and go to market faster and more effectively. Let Charter drive business outcomes Forward, Together.
For more information on Charter, please contact:
Dawn van Galen
Marketing Manager
250-412-2517
(1) OpenAI. (2022, November 30). Introducing ChatGPT. https://openai.com/blog/chatgpt
(2) Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
(3) Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/lix.236.433
(4) IBM. (n.d.). Deep Blue. IBM. https://www.ibm.com/history/deep-blue
(5) Graham-Rowe, D. (n.d.). Meet Kismet ... New Scientist. https://www.newscientist.com/article/mg15921480-800-meet-kismet/
(6) Apple Launches iPhone 4S, iOS 5 & iCloud. (n.d.). Apple Newsroom. https://www.apple.com/newsroom/2011/10/04Apple-Launches-iPhone-4S-iOS-5-iCloud/
(7) Stone, B. (2021, May 11). The Secret Origins of Amazon’s Alexa. Wired. https://www.wired.com/story/how-amazon-made-alexa-smarter/
(8) Before generative AI there was... just AI. (n.d.). CIO. Retrieved April 24, 2024, from https://www.cio.com/article/656697/before-generative-ai-there-was-just-ai.html
(9) Generative AI Pulse Survey. (n.d.). Retrieved April 24, 2024, from https://www.delltechnologies.com/asset/en-us/solutions/infrastructure-solutions/templates-forms/dell-technologies-genai-pulse-survey.pdf
(10) What Generative AI Means for Cybersecurity in 2024. (2024, February 8). Trend Micro. https://www.trendmicro.com/en_ca/research/24/b/generative-ai-cybersecurity-2024.html
(11) Office of the Superintendent of Financial Institutions. (2023, April 30). How ethical subjectivity complicates AI regulation. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai/ethical-subjectivity-complicates-ai-regulation
(12) Kohli, A. (2023, April 29). AI Voice Cloning Is on the Rise. Here’s What to Know. Time. https://time.com/6275794/ai-voice-cloning-scams-music/
(13) Proctor, J. (2024, February 16). Air Canada found liable for chatbot’s bad advice on plane tickets. CBC. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
(14) McKinsey & Company. (2023, August 1). The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
(15) Dastin, J. (2018, October 11). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/
(16) Proctor, J. (2024, February 27). B.C. lawyer reprimanded for citing fake cases invented by ChatGPT. CBC. https://www.cbc.ca/news/canada/british-columbia/lawyer-chatgpt-fake-precedent-1.7126393
(17) McKendrick, J., & Thurai, A. (2022, September 15). AI Isn’t Ready to Make Unsupervised Decisions. Harvard Business Review. https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions
(18) Boehm, J., Grennan, L., Singla, A., & Smaje, K. (2022, September 12). Digital trust: Why it matters for businesses. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/why-digital-trust-truly-matters
(19) Ray, S. (n.d.). Samsung Bans ChatGPT Among Employees After Sensitive Code Leak. Forbes. Retrieved April 24, 2024, from https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak
(20) Office of the Privacy Commissioner of Canada. (2015, June 23). PIPEDA legislation and related regulations. https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/r_o_p/
(21) C-26 (44-1) - LEGISinfo - Parliament of Canada. (n.d.). https://www.parl.ca/legisinfo/en/bill/44-1/c-26
(22) C-27 (44-1) - LEGISinfo - Parliament of Canada. (n.d.). https://www.parl.ca/LegisInfo/en/bill/44-1/C-27
(23) Institutions, O. of the S. of F. (2023, July 17). Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI. Www.osfi-Bsif.gc.ca. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai
(24) Tabassi, E. (2023). AI Risk Management Framework. Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://doi.org/10.6028/nist.ai.100-1
(25) ISO. (2023). ISO/IEC 23894:2023. https://www.iso.org/standard/77304.html
(26) ISO. (n.d.). ISO/IEC DIS 42001. https://www.iso.org/standard/81230.html
(27) ISO. (2022). ISO/IEC 23053:2022. https://www.iso.org/standard/74438.html