9 principles for the research and development of responsible AI systems
The Ministry of Science and Technology has set out principles to promote the research and development of safe, responsible artificial intelligence (AI) systems, limit negative effects, and control risks.
For the first time, a set of general principles for the responsible research and development of artificial intelligence (AI) systems has been issued for agencies, science and technology organizations, businesses, and individuals that design, develop, and provide AI systems. The principles are set out in Decision No. 1290 of the Ministry of Science and Technology, issued on June 11. The nine principles of responsible AI system research and development are as follows:
A spirit of cooperation and promotion of innovation: Developers need to pay attention to the connectivity and interoperability of AI systems, so that the benefits of AI systems are enhanced by interconnecting systems and coordination to control risks is strengthened. To do so, developers should cooperate and share relevant information to ensure the connectivity and interoperability of their systems. Priority should be given to developing AI systems that conform to technical regulations, national standards, or international standards, as well as to standardizing data formats and keeping interfaces and protocols open, including application programming interfaces (APIs). Sharing and licensing intellectual property such as patents also contributes to connectivity and interoperability where intellectual assets are involved.
Transparency: Developers need to pay attention to the ability to control AI system inputs and outputs and to explain the related analyses, based on the characteristics of the technology applied and how it is used.
Controllability: One method of risk assessment is to conduct testing in a controlled space, such as a laboratory or a test environment with security and safety measures in place, before the system is put into practical use. Developers should also pay attention to system monitoring (tools to evaluate, monitor, adjust, or update the system based on user feedback) and to response measures (such as shutting down the system or disconnecting it from the network).
Safety: Developers should assess, identify, and mitigate risks related to the safety of AI systems.
Security: Developers need to pay attention to security, in particular the reliability of artificial intelligence systems and their ability to withstand physical attacks or accidents. At the same time, they must ensure the confidentiality, integrity, and availability of information relevant to the system's information security.
Privacy: Ensure that the artificial intelligence system does not violate the privacy of users or third parties. Privacy under this principle covers personal space (peace in personal life), personal information (personal data), and the confidentiality of communications. Developers can take measures appropriate to the characteristics of the technology applied throughout the development process (starting from the design stage) so that privacy is not violated once the system is in use.
Respect for human rights and dignity: When developing AI systems that involve humans, developers must take special care to respect human rights and dignity, and take precautions to ensure that human values and social ethics are not violated.
User support: Support users and give them opportunities to make choices, for example by providing interfaces that deliver timely information and by taking measures that help the elderly and people with disabilities use the systems easily.
Accountability: Finally, developers need to be accountable for the AI systems they develop in order to maintain user trust.
The Ministry of Science and Technology said that these guidelines are intended to provide guidance and orientation, thereby increasing the benefits of AI systems, controlling and minimizing risks in the development and use of artificial intelligence, and balancing economic, ethical, and legal factors.
Previously, Deputy Minister of Science and Technology Bui The Duy said that AI ethics is a complex, global-scale issue that many countries and international organizations, including UNESCO, are working to address. AI ethics affects many aspects of life, such as society, law, politics, and commercial competition.
Therefore, Vietnam's set of principles for the research and development of AI systems pursues goals such as moving toward a human-centered society, ensuring a reasonable balance between the benefits and risks of AI systems, promoting the benefits of artificial intelligence through research, development, and innovation, and minimizing the risk of rights infringements. In addition, the principles aim to ensure technological neutrality, so that developers are not adversely affected by the rapid development of AI-related technologies in the future.