OpenAI’s Large Grant To Study Human Moral Decision-Making: What Are The Chances Of Success?

Artificial intelligence (AI) technology has been advancing rapidly in recent years, and accepting AI suggestions has become increasingly common in important fields such as education, health, business, and law. With this rapid progress and popularity, it has even been suggested that AI could one day replace the human brain.
However, moral judgment remains a key point of difference between humans and AI, and there is a widespread belief that AI cannot adequately capture a concept as complex as morality. Against this backdrop, OpenAI recently awarded a grant worth Rs 130 million (roughly US$1 million) to study and predict human moral decisions.
OpenAI Inc., the nonprofit arm of OpenAI, awarded the grant to researchers at Duke University. The research began in 2023 and is expected to conclude in 2025. According to a statement from Duke University, the study aims to develop algorithms that predict human decisions in ethical conflicts in fields such as medicine, law, and business. The study, ‘Research AI Morality,’ is led by principal investigator Walter Sinnott-Armstrong and co-investigator Jana Schaich Borg, both of whom have long worked on the development of ethically responsible AI.
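Duke has not published the details of its methodology, so the following is a purely illustrative sketch of what "an algorithm that predicts human moral decisions" could mean in practice: a supervised text classifier that maps a described dilemma to the judgment most survey respondents give. The dilemmas, labels, and the choice of a TF-IDF plus logistic-regression model are all hypothetical assumptions for this example, not the study's actual approach.

```python
# Hypothetical sketch: framing moral-judgment prediction as supervised
# text classification. The dilemmas, labels, and model choice below are
# illustrative assumptions, not the Duke study's methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy dataset: short dilemma descriptions paired with the majority
# human judgment ("acceptable" / "unacceptable") from an imagined survey.
dilemmas = [
    "A doctor lies to a patient to spare them distress.",
    "A lawyer reveals a client's secret to prevent a violent crime.",
    "A manager hides safety data to protect quarterly profits.",
    "A nurse breaks protocol to save a patient's life.",
]
judgments = ["unacceptable", "acceptable", "unacceptable", "acceptable"]

# Bag-of-words features plus a linear classifier: the simplest possible
# baseline for mapping scenario text to a predicted human judgment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, judgments)

# Predict the likely human judgment for an unseen scenario.
print(model.predict(["A judge accepts a gift from a defendant."]))
```

A serious attempt would of course require far larger survey datasets and richer models, but the framing stays the same: a scenario goes in, and a predicted human judgment comes out.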
Because AI models are trained on vast numbers of examples scraped from the web, they tend to struggle with moral evaluation, which draws on both logic and emotion. Given the progress AI has made in a short period, however, the task does not appear impossible.
This effort by OpenAI and the Duke researchers may bring AI closer to human moral decision-making. Its success could mark a milestone in the development of ethically responsible AI.