Leveraging AI for your enterprise securely

  • Posted on June 19, 2023
  • Estimated reading time 5 minutes
Leveraging secure AI

AI has arrived! It is driving significant value for enterprises and their people (approximately $126 billion per annum by 2025), with more opportunity and growth forecast ($1.81 trillion by 2030). However, realising value from AI securely remains a concern and a challenge that enterprises are wrestling with. Approximately 70% of executives believe that incorporating generative AI into business operations could expose company data to new security threats.

We have identified three common challenges on the AI journey – from trialling, through implementing, to operating and maintaining AI – and how to navigate them securely while still realising the benefits.

1. Trialling AI with a Pilot – Great first step, but what about the data?
Many organisations want to pilot AI with real data to get a true reflection of its capabilities and business value. During these trials, new data repositories are spun up to train AI models on real data. In the speed and excitement of a trial, the rigorous treatment, classification, and security considerations that would normally apply when implementing or adopting new software often get overlooked.

Here is what you can do to decrease risk whilst realising benefit at speed:

  • Trial on a secure ecosystem (for example, an Azure-hosted tool such as Azure AI) – If you trial on an unsecured or publicly available AI service (e.g., ChatGPT), you cannot use sensitive or commercial data to train the model.

  • Consider data sensitivity – When establishing trial repositories, only utilise data with the lowest sensitivity rating needed to gain real insight into your use cases.

  • Work closely with your information security team – Ensure that your information security team is informed and assisting with the pilot program, as they will have a better understanding of the necessary policies and procedures (approval processes, PoC security templates, data treatment policies, etc.) in line with the privacy and security landscape.

  • Pilot working group – Establishing a working group with stakeholders from the relevant teams (information security, development, data, etc.) will allow for regular check-ins and rapid problem-solving when necessary.
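The "lowest sensitivity rating needed" advice above can be made concrete with a small gate in the pipeline that loads data into a trial repository. The sketch below is illustrative only: the classification labels and record layout are hypothetical, so substitute your organisation's own data classification scheme.

```python
# Illustrative sketch: admit only records at or below an allowed sensitivity
# level into a pilot/trial data repository. The labels ("public", "internal",
# "confidential") and the record shape are hypothetical placeholders.

SENSITIVITY_ORDER = {"public": 0, "internal": 1, "confidential": 2}

def select_for_pilot(records, max_level="internal"):
    """Return only the records at or below the allowed sensitivity level."""
    ceiling = SENSITIVITY_ORDER[max_level]
    return [r for r in records if SENSITIVITY_ORDER[r["sensitivity"]] <= ceiling]

records = [
    {"id": 1, "sensitivity": "public"},
    {"id": 2, "sensitivity": "confidential"},
    {"id": 3, "sensitivity": "internal"},
]

pilot_data = select_for_pilot(records, max_level="internal")
# pilot_data keeps records 1 and 3; the confidential record is excluded
```

A gate like this is easy to review with the information security team and makes the trial's data-handling decision explicit rather than implicit.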

2. Implementing AI whilst maintaining confidence in data security
With a growing number of people affected by data breaches (over 400 million in 2022) and organisations paying over four million dollars on average for each breach, it is no surprise that data security is on everyone's mind. When looking at the AI market, the concerns most frequently raised focus on the security of chatbot systems, a lack of transparency around data collection, storage, and access, and third-party usage. This begs the question: what can organisations do before and during the implementation of AI systems to maintain stakeholder confidence in the security of their data?

Here is what you can do to increase confidence:

  • Privacy impact assessment (PIA) – A PIA is an effective tool to help organisations understand and evaluate a project's compliance with internal policies as well as regulatory requirements. The PIA should be a living document, regularly updated to reflect the scope, risks, purpose, context, and nature of the new AI system.

  • Responsible AI impact assessment – The concept of Responsible AI involves the creation, evaluation, and implementation of AI systems in a manner that prioritises safety, reliability, and ethics. Microsoft developed a template to assist with this assessment.

  • Establish an AI strategy – An AI strategy will aid in developing a roadmap to success and securing stakeholder buy-in. Developing a strategy will assist with building a business case for the relevant decision makers while breaking down the plan for implementation, maintenance, and associated costs.

  • Leveraging existing platform (Azure platform with Azure AI) – Leveraging your existing Azure Platform with Azure AI allows users to:
    • Restrict access to resources and operations by user account or group (AAD)
    • Restrict incoming and outgoing network communications
    • Encrypt data in transit and at rest
    • Scan for vulnerabilities
    • Apply and audit configuration policies

  • Utilising tools available in the market – Scanning the market for other AI privacy and security tools, such as SmartNoise and Counterfit, and implementing them could increase the organisation's data security maturity and demonstrate cyber-readiness to stakeholders.

  • Data needed for training – When implementing an AI system, close attention must be paid to privacy and data security, since AI systems rely on access to large and representative data sets to make accurate and informed predictions and decisions. Concentrating data in this way carries a higher privacy risk in the event of a breach. To minimise the risk, utilise data sets with the lowest sensitivity rating available and restrict access to these databases.

  • Communication with end user – Communicating policies around data collection, storage, access, and usage will increase transparency with the end users and thereby boost confidence and excitement for the new technology.
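One practical way to reduce the training-data risk described above is to pseudonymise direct identifiers before records ever enter the training repository, so a breach of that repository does not directly expose identities. The sketch below is an assumption-laden illustration, not the article's prescribed method: the field names are hypothetical, and a keyed hash (Python's standard-library `hmac`) stands in for whatever pseudonymisation your data team approves.

```python
# Illustrative sketch: replace direct identifiers with keyed hashes before a
# record enters an AI training data set. Field names are hypothetical; the
# secret key must be stored and rotated outside the training environment.
import hashlib
import hmac

PII_FIELDS = {"name", "email"}

def pseudonymise(record, secret_key: bytes):
    """Replace PII fields with a keyed hash; leave other fields unchanged."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(secret_key, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable pseudonym, not raw PII
        else:
            out[field] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "spend": 120.5}
safe = pseudonymise(record, secret_key=b"rotate-me-regularly")
# safe["spend"] is untouched; name and email are now keyed hashes
```

Because the hash is keyed and deterministic, the same customer maps to the same pseudonym across records (preserving analytical value for training) while the raw identifier stays out of the repository.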

3. Maintaining and operating AI in line with security framework
Any software system is vulnerable to attack; however, AI poses an even greater risk because of how models are trained. To gain real value from AI, a large data set is needed to train the model, and these repositories are gold mines for malicious actors. An attacker may make mathematically guided queries to extract information about your model, its behaviour, or the data used to train it. This poses a significant security risk for organisations. What can organisations do to reduce this risk and provide assurance to consumers and regulatory bodies?

Here is how you can maintain AI in line with your security framework:

  • Always encrypt data – Encrypting your data and code will assist with keeping your applications more secure from bad actors.

  • Test your incident response plan – Developing and documenting an incident response plan is a step in the right direction; however, the plan needs to be tested regularly to identify any weaknesses in your defences.

  • Maintaining your existing platform (Azure platform with Azure AI) – Maintaining your existing Azure platform with Azure AI lets you use the platform to its full capability and thereby uplift your security profile.

  • Utilising tools available in the market – Onboarding privacy and security tools specific to AI can be done at any point – it does not have to happen during the original implementation – and can help monitor your AI solution for security risks and threats.

  • Regular internal security assessment/audit – An internal security assessment is a good tool to evaluate your current risk profile and mitigating strategies. This can assist with identifying any weaknesses and opportunities for improvement before a hacker can exploit them.
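The "mathematically guided queries" risk described above is commonly blunted by limiting how many queries a client may make and perturbing the model's numeric outputs, in the spirit of the differential-privacy tooling (e.g., SmartNoise) mentioned earlier. The sketch below is a minimal illustration under assumed parameters – the budget size and noise scale are hypothetical tuning knobs, not recommended values.

```python
# Illustrative sketch: a thin guard in front of a model endpoint that
# enforces a per-client query budget and adds Gaussian noise to confidence
# scores, making model-extraction queries less informative.
# The budget and noise_scale values are hypothetical tuning knobs.
import random
from collections import defaultdict

class QueryGuard:
    def __init__(self, budget=1000, noise_scale=0.05, seed=None):
        self.budget = budget
        self.noise_scale = noise_scale
        self.counts = defaultdict(int)   # queries seen per client
        self.rng = random.Random(seed)

    def answer(self, client_id, score):
        """Return a noised score, or None once the client's budget is spent."""
        self.counts[client_id] += 1
        if self.counts[client_id] > self.budget:
            return None  # budget exhausted: refuse further queries
        noised = score + self.rng.gauss(0.0, self.noise_scale)
        return min(1.0, max(0.0, noised))  # clamp the score to [0, 1]

guard = QueryGuard(budget=2, noise_scale=0.01, seed=42)
first = guard.answer("client-a", 0.9)    # a noised score near 0.9
second = guard.answer("client-a", 0.9)   # a noised score near 0.9
third = guard.answer("client-a", 0.9)    # None: budget exhausted
```

Neither measure alone stops a determined attacker, which is why the list above pairs them with encryption, incident response testing, and regular audits.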

The future of AI has only just begun; however, bad actors are here to stay. If you are concerned about any of these challenges or would like to know how Avanade can assist your secure AI journey, get in touch with us today.
