Adaptive AI Alignment: Established Resources for Aligning Machine Learning with Human Intentions and Values in Changing Environments
AI Alignment is a term used to summarize the aim of making artificial intelligence (AI) systems behave in line with human intentions and values. Previous AI Alignment studies have given little consideration to the need for AI Alignment to be adaptive in order to contribute to the survival of...
| Main Author: | Stephen Fox |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-11-01 |
| Series: | Machine Learning and Knowledge Extraction |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2504-4990/6/4/124 |
Similar Items
- Non-Alignment to Strategic Autonomy
  by: Jakub Zajączkowski, et al.
  Published: (2025-06-01)
- Upholding human dignity in AI: Advocating moral reasoning over consensus ethics for value alignment
  by: Octavian-Mihai Machidon
  Published: (2024-12-01)
- Reversing the logic of generative AI alignment: a pragmatic approach for public interest
  by: Gleb Papyshev
  Published: (2025-01-01)
- Cytotoxicity of Printed Aligners: A Systematic Review
  by: Mauro Lorusso, et al.
  Published: (2025-06-01)
- Coronal native limb alignment: establishing reporting standards and aligning measurements of key angles
  by: The Scientific Committee from the Personalized Arthroplasty Society (PAS)
  Published: (2025-07-01)