One of the worries many people have about AI is the ethical and moral code of robots and machines when they are asked to make important decisions.
Would a machine heartlessly switch off a life-saving device for someone about to die if it calculated the device could be used to save someone else? On what basis would that decision be made?
Curtin software and engineering researchers have developed a way to assess and report on the ethical, social and moral logic of AI (artificial intelligence) and robotic systems when they are making decisions. In turn, the researchers believe this could boost the reliability and integrity of those systems.
The breakthrough could also make the use of AI more acceptable and applicable across multiple areas, including the medical, legal and mining sectors.
Dr Masood Mehmood Khan from Curtin’s School of Civil and Mechanical Engineering has not only invented the Accountable eXplainable AI (AXAI) framework but also proposed a software design process to integrate it into AI systems.
“Failing to provide measures of comprehensibility and accountability means most AI and Machine Learning (ML) systems are unable to gain wider public support and approval,” he said.
“At present, these systems cannot explain their decisions, and their levels of intelligence, expertise and capability are also limited because of built-in biases in the training data and algorithms.
Jordan Vice, a PhD student at Curtin University, has incorporated and tested the AXAI framework in a real-time affective state assessment system.
The portable, lightweight system is equipped with a camera and a sound capture device to acquire facial and speech data in real time for assessing a person’s affective state. The system explains its reasoning and logic to users and subjects via a self-explanatory user interface. The multi-level user interface provides explanations in numbers, text and speech formats (see photo below).
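As a rough illustration only (the published system’s internals are not described here, and every class and function name below is hypothetical), a multi-level explanation interface of this kind might present the same inference as numbers, as plain text and as a short sentence for speech output:

```python
from dataclasses import dataclass


@dataclass
class AffectEstimate:
    """Hypothetical output of an affective-state classifier."""
    label: str            # e.g. "calm", "stressed"
    confidence: float     # 0.0 to 1.0
    face_weight: float    # contribution of the facial channel
    voice_weight: float   # contribution of the speech channel


def numeric_explanation(est: AffectEstimate) -> dict:
    # Level 1: raw numbers for expert users.
    return {
        "label": est.label,
        "confidence": round(est.confidence, 2),
        "face_contribution": round(est.face_weight, 2),
        "voice_contribution": round(est.voice_weight, 2),
    }


def text_explanation(est: AffectEstimate) -> str:
    # Level 2: plain-language summary for non-expert users.
    dominant = "facial expression" if est.face_weight >= est.voice_weight else "tone of voice"
    return (f"The system judged you to be {est.label} with "
            f"{est.confidence:.0%} confidence, based mainly on your {dominant}.")


def speech_explanation(est: AffectEstimate) -> str:
    # Level 3: a short sentence intended for a text-to-speech engine.
    return f"You appear to be {est.label}."


if __name__ == "__main__":
    estimate = AffectEstimate(label="stressed", confidence=0.82,
                              face_weight=0.6, voice_weight=0.4)
    print(numeric_explanation(estimate))
    print(text_explanation(estimate))
    print(speech_explanation(estimate))
```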
For example, if a rescue robot reaches an accident scene and decides to help one person while leaving others unattended, questions will be asked about the ethical, social and moral logic of the robot. Failing to explain its decision, the robot will never be trusted by society. Our system solves that problem.
Dr Masood Mehmood Khan
“Even the best Explainable AI (XAI) solutions explain their inferences in terms of accuracy and some textual information. Such explanations usually do not provide a means of measuring system accountability.
“These failings matter across areas such as the medical sector and mining industry. For example, mining activity in an area begins with mineral exploration and prospecting. AI and ML systems are now being used to determine which areas have the potential for further exploration.
“These systems use algorithms designed and trained to help identify rock faces and mineral-centric regions. Prospectors combine multiple data sources to get a comprehensive understanding of the area’s geological make-up and to decide the most plausible locations of high-grade ores. Using our framework, the ML systems would be able to explain their conclusions and results.”
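To make that concrete, here is a purely illustrative sketch, not the authors’ method, of how a prospectivity model could attach a per-feature explanation to each recommendation; the feature names, weights and decision threshold are invented for the example:

```python
# Toy prospectivity score whose per-feature contributions double as
# the explanation attached to each prediction. The weights below are
# invented for illustration, not taken from any published Curtin model.

FEATURE_WEIGHTS = {
    "magnetic_anomaly": 0.45,
    "geochemical_grade": 0.35,
    "distance_to_fault_km": -0.20,  # farther from faults lowers the score
}


def score_site(features: dict) -> dict:
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in features.items()
        if name in FEATURE_WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "prospectivity_score": round(total, 3),
        "recommend_further_exploration": total > 0.5,
        # The explanation: how much each input pushed the score up or down.
        "explanation": {k: round(v, 3) for k, v in contributions.items()},
    }


if __name__ == "__main__":
    site = {"magnetic_anomaly": 0.9,
            "geochemical_grade": 0.8,
            "distance_to_fault_km": 0.5}
    print(score_site(site))
```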
The work of Dr Khan and his team was detailed in two papers recently published in IEEE Access.
~
Curtin University is a sponsor of Startup News.