University of Illinois System Generative AI Principles

Overview

Given the uncertainty surrounding the evolution of generative artificial intelligence (AI) in terms of technological advances, societal acceptance, and regulation, it is important for the University of Illinois System to adopt principles for generative AI governance.

The principles commit the University of Illinois System to the appropriate, responsible, and ethical development, adoption, and use of generative AI in alignment with the System’s guiding principles and mission: to transform lives and serve society by educating, creating knowledge, and putting knowledge to work on a large scale and with excellence.

When applying these principles and evaluating options, always consider the ethical implications of possible outcomes and act with the highest ethical standards. This includes complying with applicable University of Illinois System policies, industry standards, and laws and regulations.

The University of Illinois System principles governing generative AI are:

Accountability

Take responsibility for generative AI outcomes and establish clear lines of accountability within the System to address any issues or unintended consequences that may arise from the use of generative AI. Establish procedures for remediation, recourse, or redress in cases of unintended consequences, discrimination, or privacy breaches. Maintain ongoing monitoring and evaluation of generative AI systems' performance, impact, and compliance with ethical standards to help identify and address emerging issues or areas for improvement.

Inclusiveness

Consider all human races and experiences and use inclusive design practices when developing generative AI systems and applications to identify and address potential barriers and biases that could unintentionally exclude people from generative AI outcomes. Seek input from diverse parties during the development and deployment of generative AI systems. Engage with relevant stakeholders, such as the public, experts, and affected communities, who can provide diverse perspectives and help address potential concerns or unintended consequences.

Reliability and Safety

Embed reliability and safety into generative AI development and use to foster trust in generative AI outcomes. Generative AI systems and applications must be resilient enough to resist intended or unintended manipulation and sufficiently flexible to address new situations safely and reliably.

Fairness

Build fairness into the development and use of generative AI through checks and balances that prevent unlawful discrimination against individuals or groups. Identify and mitigate biases that may be present in the data used to train generative AI systems.

Transparency

Foster transparency in the development and deployment of generative AI systems. Clearly communicate the capabilities and limitations of the technology to users, stakeholders, and the public to support generative AI-based decisions and conclusions. Disclose any biases or potential ethical concerns associated with the generated content. Provide transparency into the underlying algorithms and their decision-making processes to enhance trust and accountability.

Privacy and Security

Protect data and ensure generative AI systems and applications incorporate privacy and security by design. Comply with applicable privacy laws and regulations, as well as best practices, when using personal data. Implement strong security measures to prevent unauthorized access to generative AI systems and the data they process.