- The EU's General Data Protection Regulation (GDPR) - sets out rules for the collection, processing, and storage of personal data, including provisions that apply specifically to AI systems.
- The OECD's AI Principles - provide a framework for the responsible development and deployment of AI systems, covering issues such as transparency, accountability, and human-centered values.
- The IEEE's Ethically Aligned Design - a framework that provides guidance on ethical considerations in the design and development of AI systems.
- The Asilomar AI Principles - a set of 23 principles developed by leading AI researchers and industry experts, covering issues such as transparency, privacy, and safety.
- The Montreal Declaration for Responsible AI - a set of principles developed by AI researchers, policymakers, and civil society organizations, calling for the development and deployment of AI systems that are transparent, accountable, and fair.
- The AI Now Institute's Recommendations for AI Accountability - a set of principles developed by a leading AI research institute, covering issues such as bias, transparency, and accountability.
- The California Consumer Privacy Act (CCPA) - a law that gives California residents more control over their personal data, including data collected by AI systems.
- The European Commission's Ethics Guidelines for Trustworthy AI - a comprehensive framework of principles and guidance for ensuring that AI systems are trustworthy, respect fundamental rights, and reflect ethical considerations.
- The World Economic Forum's Global AI Action Alliance - an initiative aimed at creating a global coalition of stakeholders committed to promoting responsible and ethical AI development and deployment.
- The UK Government's AI Code of Conduct - a set of guidelines that provide a framework for AI development and deployment in the public sector, emphasizing transparency, accountability, and human oversight.
- The Partnership on AI - a multi-stakeholder organization that brings together leading companies, civil society organizations, and academic institutions to collaborate on developing and promoting responsible AI practices.
- The UNESCO Recommendation on the Ethics of Artificial Intelligence - a set of principles and guidelines developed by UNESCO that emphasize the importance of protecting human rights, ensuring transparency and accountability, and promoting the social and environmental benefits of AI.
- The AI4People Charter - a set of recommendations developed by a group of European experts and stakeholders, calling for the development and deployment of AI systems that are transparent, accountable, and socially beneficial.
- The Japanese Society for Artificial Intelligence's Ethical Guidelines for AI - a set of guidelines developed by a leading AI research organization in Japan, covering issues such as transparency, fairness, and privacy in AI development and deployment.
- The Montreal AI Ethics Institute's AI Ethics Guidelines - a set of guidelines developed by a non-profit organization focused on promoting ethical and socially responsible AI development and deployment.
- The Singapore Model AI Governance Framework - a framework developed by the government of Singapore that provides guidance on the responsible and ethical use of AI in various sectors.
- The Global Partnership on Artificial Intelligence (GPAI) - a multi-stakeholder initiative that brings together governments, industry, and civil society organizations to promote responsible and human-centric AI development and deployment.
- The Microsoft AI Principles - a set of principles that guide Microsoft's approach to developing and deploying AI systems, emphasizing transparency, fairness, and accountability.
- The Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems - a comprehensive framework that provides guidance on ethical considerations in the development and deployment of autonomous and intelligent systems.
- The Canadian AI Ethics Framework - a set of principles developed by the Canadian government that emphasize transparency, accountability, and respect for human rights in AI development and deployment.
- The UN Guiding Principles on Business and Human Rights - while not specific to AI, these principles provide a framework for companies to respect human rights in their operations, which includes the development and deployment of AI systems.
- The IEEE Standards Association's P7000 series of standards - a set of standards that provide guidance for the development of ethical and transparent AI systems in various domains, including health care, autonomous systems, and algorithmic bias.
- The European Union's proposed Artificial Intelligence Act - a regulatory proposal that aims to set rules for AI development and deployment in the EU, including provisions related to transparency, human oversight, and data privacy.
- The UNESCO Recommendation on the Ethics of AI in Education - a set of guidelines that emphasize the importance of using AI in education in a way that respects human rights, ensures transparency and accountability, and promotes social and environmental benefits.
- The Center for Democracy and Technology's AI Principles - a set of principles that aim to promote the development and deployment of AI systems that are transparent, accountable, and respect human rights.
- The US Federal Trade Commission's guidance on AI and algorithms - guidance that outlines best practices for businesses using AI and algorithms, including transparency, fairness, and accountability.
- The Algorithmic Impact Assessment Toolkit - a toolkit developed by the AI Now Institute that provides a framework for assessing the potential impacts of algorithms and AI systems on individuals and society, including issues related to bias, discrimination, and privacy.
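The core idea of an algorithmic impact assessment can be illustrated with a toy scorecard. The dimensions, scoring scale, and review threshold below are assumptions invented for this sketch; they are not the AI Now Institute's actual toolkit categories.

```python
# Minimal sketch of an algorithmic impact "scorecard".
# All dimension names and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class ImpactAssessment:
    system_name: str
    # Each dimension is scored 1 (low risk) to 5 (high risk).
    bias_risk: int
    privacy_risk: int
    transparency_gap: int

    def overall_risk(self) -> float:
        """Average the dimension scores into a single risk figure."""
        return (self.bias_risk + self.privacy_risk + self.transparency_gap) / 3

    def needs_review(self, threshold: float = 3.0) -> bool:
        """Flag systems whose average risk meets or exceeds the threshold."""
        return self.overall_risk() >= threshold


assessment = ImpactAssessment(
    "resume-screening-model", bias_risk=4, privacy_risk=3, transparency_gap=2
)
print(assessment.overall_risk())  # 3.0
print(assessment.needs_review())  # True
```

In practice such assessments are qualitative documents rather than code, but a structured scorecard like this is one way organizations operationalize "assess before deploy."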
- The European Commission's White Paper on Artificial Intelligence - a document that outlines the European Union's vision for AI development and deployment, including a focus on human-centric and trustworthy AI.
- The UK Centre for Data Ethics and Innovation's AI Ethics Guidelines - a set of guidelines that provide a framework for ethical and responsible AI development and deployment in the UK, covering issues such as fairness, transparency, and accountability.
- The AI Governance Framework by the Australian Government - a framework that provides guidance on the ethical and responsible use of AI in various sectors in Australia, including principles related to fairness, accountability, and transparency.
- The AI Transparency Obligations by the California Department of Fair Employment and Housing - a set of guidelines that require companies using AI in hiring, promotion, or termination decisions to provide transparency and explanation about how AI was used in the decision-making process.
- The European Commission's Guidelines on AI and Data Protection - a set of guidelines that aim to ensure that the use of AI systems in the EU is compatible with the General Data Protection Regulation (GDPR), including principles related to data protection, transparency, and fairness.
- The IEEE Standards Association's P7006 Standard for Personal Data Artificial Intelligence (AI) Agent - a standard that provides guidelines for the development and deployment of AI systems that handle personal data, including principles related to privacy, security, and transparency.