
Government guidance on proper use of AI in Hong Kong

2023-09-27

Background

The Office of the Government Chief Information Officer has recently published the Ethical Artificial Intelligence Framework (the “Framework”). Originally developed for internal adoption within the Hong Kong Government in relation to applications of artificial intelligence (“AI”) and big data analytics, the Framework is now made available as a reference for organisations which develop AI applications.

What is the Ethical Artificial Intelligence Framework?

The Framework was originally developed to assist government bureaux and departments in planning, designing and implementing AI and big data analytics in their IT projects and services. It sets out the guiding principles, leading practices and AI assessments that should be adopted in AI-powered IT projects. Although the Framework is designed for the use of government bureaux and departments, other organisations are encouraged to refer to it when adopting AI and big data analytics in their own IT projects.

Content of the Framework

The Framework consists of the following components and sub-components:

1.       The Tailored AI Framework

a.       The Ethical AI Principles

b.       The AI Governance Structure

c.       The AI Lifecycle

d.       The AI Practice Guide

 

2.       The AI Application Impact Assessment

 

Each of these components is explained below.

The ethical AI principles

The Framework sets out the following 12 ethical AI principles, which shall be observed for all AI projects:

1.       Transparency and Interpretability – Organisations should be able to explain the decision-making processes of their AI applications to humans in a clear and comprehensible manner. To achieve interpretability, i.e. creating human-understandable explanations of an AI model’s features and parameters, organisations should ensure that the AI model can generate human-readable explanations, and adopt clear and honest communication channels between the organisation and its end-users and regulators.

 

2.       Reliability, Robustness and Security – Organisations should ensure that their AI applications can operate reliably over long periods of time, produce consistent results and remain secure against cyber-attacks.

 

3.       Fairness – Organisations should ensure that the results generated by their AI applications treat individuals within similar groups fairly, without favouring or discriminating against particular groups, or causing harm. AI applications shall maintain respect for the individuals behind the datasets and avoid using datasets that contain discriminatory biases.

 

4.       Diversity and Inclusion – AI application developers shall promote the diversity of the user base of the application such that it will not behave differently towards certain groups of individuals. To do so, organisations shall involve the largest possible number of AI users representing the broadest variety of cultures, lifestyles, interests and disciplines.

 

5.       Human Oversight – While developing AI applications, organisations should allow human intervention into the AI applications’ operations so as to prevent ethical issues. For example, human intervention or auto-shutdown should be allowed when system failure occurs, especially when such failure will have an impact on human safety.

 

6.       Lawfulness and Compliance – While developing AI applications, organisations should comply with the principles contained in international treaties and regulations, national legislation and industry standards. Organisations should also keep track of regulatory changes to ensure continued compliance.

 

7.       Data Privacy – Organisations shall comply with the Personal Data (Privacy) Ordinance (Cap. 486) (“PDPO”) when handling personal data collected from users of AI applications. In particular, amongst other obligations set out in PDPO, organisations should inform users of the purpose of collection of data, take all practicable steps to protect the data collected against unauthorised or accidental access, processing, erasure, loss or use, and provide information on its policies and practices in relation to personal data collected.

 

8.       Safety – Organisations should implement measures to minimise unintended risks of harm to physical, emotional and environmental safety which may be caused in the course of the operation of the AI models.

 

9.       Accountability – There should be a clearly identifiable accountable party held responsible for the moral implications and misuse of AI applications. To achieve accountability, organisations should implement policies, procedures and oversight to manage the risk of the AI systems.

 

10.    Beneficial AI – Organisations should ensure that the development of AI applications promotes the common good and wellbeing, and that the AI applications would not cause harm to humanity.

 

11.    Cooperation and Openness – Cooperation and openness entail collaboration and communication between organisations, end-users and other impacted groups on risks and risk management plans. Organisations should also proactively collaborate with diverse stakeholders to foster a culture of multi-stakeholder cooperation in the AI ecosystem.

 

12.    Sustainability and Just Transition – Organisations should implement mitigation strategies to manage any potential societal and environmental impacts which may be caused by AI applications. Checks and balances should be developed over the use of AI applications to ensure their sustainability.

 

Out of the above 12 principles, “Transparency and Interpretability” and “Reliability, Robustness and Security” are categorised as “Performance Principles” – fundamental principles that must be achieved for the execution of the remaining 10 “General Principles”.

The AI governance structure

Based on the aforesaid principles, the Framework suggests that organisations should adopt an AI governance structure when developing and maintaining AI applications. The governance structure refers to the practices and direction by which AI projects and applications are managed and controlled. The recommended practices are:

1.       Establish a governance structure to oversee the implementation of AI projects and AI Assessment;

 

2.       Define roles and responsibilities that affect the use and maintenance of the Framework;

 

3.       Specify a set of practices to guide and support the planning, development, deployment and monitoring of AI applications; and

 

4.       Assess the adoption of such practices in terms of application impact.

The AI lifecycle and the AI practice guide

The Framework also includes the AI Lifecycle and the AI Practice Guide to help organisations understand different stages of the projects and the requirements involved.

The AI Application Impact Assessment

The Framework also advocates the adoption of an AI Application Impact Assessment (the “Assessment”), which sets out a systematic thinking process for organisations to assess the benefits and risks associated with AI applications and to identify follow-up actions so that the measures and controls required for implementing ethical AI are in place. The Assessment should be conducted regularly throughout the different stages of an AI project.

10 tips for users of AI chatbots

Aside from AI application developers, users of AI applications should also be aware of the potential risks.

The Privacy Commissioner’s Office has recently issued the following 10 tips for users of AI chatbots:

(a)  Before registration or use

1.       Read the privacy policy, the terms of use and other relevant data handling policies.

2.       Beware of fake apps and phishing websites posing as known AI chatbots.

3.       Adjust the settings to opt-out of sharing chat history (if applicable).

(b)  When interacting with AI chatbots

4.       Refrain from sharing your own personal data and others’ personal data.

5.       If necessary, submit a correction or removal request.

6.       Guard against cybersecurity threats.

7.       Delete outdated conversations from chat history.

(c)   Safe and responsible use of AI chatbots

8.       Be cautious about using the information provided by AI chatbots.

9.       Refrain from sharing confidential information and files.

10.    Teachers / parents should provide guidance to students when they are interacting with AI chatbots.

 

Conclusion

As AI applications become more prevalent, concerns over their potential detriments to different stakeholders and society as a whole increase. To develop ethical AI applications, developers are encouraged to follow the Framework to ensure that their AI applications align with the Ethical AI Principles and to minimise the detriments that they may bring.

On the other hand, users of AI applications, in particular AI chatbots, should be mindful of their personal data privacy when utilising such tools.

 


For enquiries, please feel free to contact us at:

E: technology@onc.hk        T: (852) 2810 1212
W: www.onc.hk               F: (852) 2804 6311

19th Floor, Three Exchange Square, 8 Connaught Place, Central, Hong Kong

Important: The law and procedure on this subject are very specialised and complicated. This article is just a very general outline for reference and cannot be relied upon as legal advice in any individual case. If any advice or assistance is needed, please contact our solicitors.

Published by ONC Lawyers © 2023

Our People

Dominic Wai
Partner