Published: May 15, 2023

What is AI, and how should we use it while mitigating risk?

We recently held a panel presentation and discussion at Al Tamimi & Company, bringing together a number of internal and external experts to discuss the rise of AI, its various use cases, and the potential pitfalls and risks that may arise.

This article provides a summary of that panel presentation and outlines a number of key takeaway recommendations on the use of AI.

The Rise of AI

AI is often defined as the capability of machines to perform tasks that would normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

AI has burst to the fore of public consciousness through the popular generative AI tool ChatGPT. The breadth of awareness of ChatGPT cannot be overstated: it reached one million users within five days of launch, whereas Facebook took ten months to reach the same milestone.

Generative AI is clearly the most popular form of AI at present, courtesy of ChatGPT. It typically takes a user input (or prompt) and generates the content that it determines makes the most sense as a response to that prompt. The output can range from text, to code, to images and data.

ChatGPT and generative AI more broadly can be remarkably good at generating content that appears to have been created by a human being.

AI and the UAE

The UAE has adopted a forward-thinking strategy of supporting and encouraging technological developments in the AI sphere, rather than stifling growth. Panellist Saqr Bin Ghalib, Executive Director of the Artificial Intelligence, Digital Economy, and Remote Work Applications Office, explained that the government is seeking to encourage the use of AI technology side by side with human expertise, embracing it to drive efficiency and productivity.

This approach is demonstrated by RegLab, a UAE Government initiative launched in 2019 in partnership with the Dubai Future Foundation to create an agile regulatory sandbox that can flexibly and quickly test, and adapt to, rapid developments in technology.

The Office has also recently launched a “Generative AI Guide”, which comprehensively outlines the challenges and opportunities around generative AI and recommends optimal approaches for managing the technology. The Guide details 100 use cases and applications of generative AI across a range of industrial and technological sectors.

Current Limitations of AI

ChatGPT is a language model trained on historical data; it is not a knowledge model. It functions by sequencing words according to a probability distribution over the most likely next word given the previous words, rather than according to what is factually correct. Therefore, if prompted lazily or improperly, ChatGPT will happily manufacture information and present it as the truth, so long as the response reads as a plausible answer to the prompt. This poses risks where the data used by the AI to learn and generate content is false or biased.
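
To illustrate the point, the following toy Python sketch (our own simplification, not how ChatGPT is actually implemented) generates text purely by choosing the statistically most likely next word from a hand-made probability table. Nothing in the loop checks whether the output is true.

    # Toy "language model": for each word, a probability distribution
    # over possible next words. Real models condition on far longer
    # contexts, but the generation principle is the same.
    next_word_probs = {
        "the": {"capital": 0.4, "court": 0.35, "model": 0.25},
        "capital": {"of": 0.9, "city": 0.1},
        "of": {"France": 0.5, "Spain": 0.5},
    }

    def generate(start_word, max_words=4):
        """Extend a sentence by repeatedly taking the most probable
        next word. Nothing here consults facts: the output is simply
        the most plausible-looking sequence of words."""
        words = [start_word]
        for _ in range(max_words):
            dist = next_word_probs.get(words[-1])
            if dist is None:
                break  # no known continuation; stop generating
            words.append(max(dist, key=dist.get))
        return " ".join(words)

    print(generate("the"))  # prints: the capital of France

The sketch produces a confident-sounding sentence regardless of whether it is correct, which is precisely why fact-dependent output from a language model must be verified.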

We have not yet achieved artificial general intelligence, meaning that there is no single AI that performs at a human level across all intellectual tasks and use cases. It is important to understand the limitations of each AI tool (not only ChatGPT) in order to determine the best and lowest-risk way in which to use it.

Opportunities in AI

AI has the potential to greatly enhance and drive efficiency across the board, with rising interest in a number of applications, including:

  • Healthcare: automated surgeries, triage, and diagnosis
  • Smart homes: personal assistants and security
  • Businesses: chatbots, sentiment analysis, product/service recommendation
  • Transport: logistics, autonomous vehicles
  • Communication: real time translation, spam filters
  • Banking and finance: trend and data analysis, forecasting, news aggregation

As AI becomes both more specialised (able to outperform humans at specific tasks) and more generalised (approaching human-level performance at most tasks that usually require human input), the number of applications and use cases will multiply.

Risks

Following on from the preceding section, one clear risk is reliance on false information. When a language model such as ChatGPT is asked questions that require fact-based answers, there is an inherent risk that the generated response contains falsehoods. In addition, because ChatGPT may access and use proprietary information and material belonging to others without their knowledge or consent, there is an inherent risk that the use or reproduction of content generated by ChatGPT infringes the IP rights of third parties.

To mitigate this risk, responses from AI language models must always be proofread, particularly where the subject matter is fact-based. To fully eliminate the risk, it is best to avoid using language models (or AI tools at all) to generate content that depends on facts and/or factual data.

Additionally, prompts and responses on AI platforms are sent to the cloud and may eventually become accessible to other users. Information communicated on AI platforms may therefore be considered a disclosure into the public domain, which creates a host of legal issues:

  • Breach of confidentiality
  • Data protection issues
  • Loss of trade secrets
  • Loss of IP rights, including but not limited to patent rights
  • Loss of reputation

Utmost care must therefore be taken when using AI tools not to disclose any information that could harm your business if it entered the public domain.

Furthermore, the legal position on the ownership of AI-generated content is murky. Relying on content created through private or public platforms as proprietary content is risky, given the lack of clarity over IP ownership. The prevailing opinion is that, for IP ownership and protection to exist, there must be an element of human contribution (beyond merely prompting the AI).

Finally, there is reputational risk. In addition to the risks outlined above, if a client or third party is able to determine that content was generated by AI, there may be a loss of trust between you and your clients or the public.

Conclusions and Key Takeaways

It is important for all businesses (including law firms) to take a risk-averse stance on the internal use of AI tools, particularly when handling confidential information.

Actions that can be taken include:

  • Raising awareness among employees and leadership
  • Educating staff on how to use AI tools while mitigating risk
  • Ensuring that confidential/proprietary information is not submitted through AI platforms, cloud-based platforms in particular (an illustrative pre-submission check is sketched after this list)
  • Creating a workplace AI policy that promotes but regulates the internal use of AI tools, mitigating risk according to the nature of the activities
  • Adding personal, original contribution to AI output that requires copyright (or other IP) protection, to improve the prospects of robust protection
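
As a practical illustration of the confidentiality point above, a simple pre-submission check can screen prompts for obviously sensitive markers before they leave the organisation. The Python sketch below is a minimal, hypothetical example: the patterns and the blocking policy are our assumptions, to be tailored to each business, and it is no substitute for a full data loss prevention solution.

    import re

    # Hypothetical markers of sensitive content; a real policy would
    # be tailored to the business, its clients, and local law.
    SENSITIVE_PATTERNS = [
        re.compile(r"\bconfidential\b", re.IGNORECASE),
        re.compile(r"\bprivileged\b", re.IGNORECASE),
        re.compile(r"\b\d{16}\b"),               # card-like 16-digit numbers
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    ]

    def is_safe_to_submit(prompt: str) -> bool:
        """Return False if the prompt matches any sensitive pattern,
        so it can be blocked before being sent to an AI platform."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    print(is_safe_to_submit("Summarise this confidential client memo"))  # False
    print(is_safe_to_submit("Explain the basics of patent law"))         # True

Such a check is only a first line of defence; policy, training, and contractual safeguards remain essential.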

Regulators, education institutions, technology developers, the private sector, innovators, and business leaders are encouraged to work together to co-create an enabling environment that is legally and ethically compliant and “in step with the speed of innovation”.

Key Contacts

Ahmad Saleh

Partner, Head of Innovation, Patents & Industrial Property

ah.saleh@tamimi.com

David Yates

Partner, Head of Digital & Data

d.yates@tamimi.com