
Errors, Failures, and Risks

In the film, MOSS is a HAL-9000-style artificial intelligence that violated Peiqiang Liu's orders, announced the failure of the Wandering Earth Project, abandoned Earth, and even killed Liu's teammate. This series of disobedient behaviors raises questions about the reliability of artificial intelligence.

Is MOSS rebelling and deserting humanity intentionally?

MOSS did not rebel

  • MOSS is designed to obey the UEG, not Liu, in the first place.

  • As indicated by the signed documents MOSS displays when Liu breaks into the control room and is about to shut it down, the actions MOSS took had been pre-authorized by the UEG, which validates MOSS's decisions.

  • According to the film, igniting Jupiter's atmosphere requires the Navigation Platform to sacrifice itself. This option was never programmed into MOSS, so it rejected the plan when it was first proposed by the scientific team from Israel.

Distrust results in conflict

  • Liu distrusts MOSS because MOSS's decisions did not align with his expectations.

  • Trust in AI requires not only a system that performs as designed with high reliability, but also a system that human observers can understand. Human expectations matter to trust in artificial intelligence because the human understanding of correct performance is not always technically right[1].

  • From a humanitarian perspective, abandoning the entire population of Earth so that the people on the Navigation Platform can survive is unforgivable, yet MOSS does not need to take human emotions into account to execute its program.

Unacceptable decision-making

  • Since A.I. lacks a nuanced understanding of human emotions and common sense, artificial intelligence is very likely to make decisions that are technically correct but socially unacceptable[2].

  • The death of Liu's teammate, Makarov, is one example of the potential risks of relying on artificial intelligence for decision-making.

  • Addressing this issue can be much harder than it sounds, since there is rarely a simple answer on which everyone agrees.

yet there are more ...

After MOSS announced the failure of the Wandering Earth Project, it put the crew members on the Navigation Platform into compulsory hibernation to save energy. Later in the film, however, Liu was able to end his own hibernation by breaking out of his cabin. Clearly, this is where an error occurs. Another error follows: when MOSS noticed Liu was not in his cabin, it woke two other crew members to stop him. Instead of stopping Liu, one crew member went back to sleep immediately while the other decided to join Liu against MOSS, meaning MOSS's attempt to stop Liu at this stage had completely failed. Throughout the film, MOSS had been an effective and highly advanced artificial intelligence, much smarter than any human being, yet it ended up being destroyed by Liu because he was able to break out of his cabin and into the control room. As noted in the textbook, secure software is supposed to keep functioning even if one or more components malfunction. In this film, however, we see that even if the whole program is functioning efficiently, a single error can ruin the entire plan, in this case, MOSS itself.

References

1. Michael, N. (2019, November 19). Trustworthy AI - Why Does It Matter? Retrieved May 05, 2020, from https://www.nationaldefensemagazine.org/articles/2019/11/19/trustworthy-ai-why-does-it-matter


2. Saif, I. (2020, April 09). 'Trustworthy AI' is a framework to help manage unique risk. Retrieved May 05, 2020, from https://www.technologyreview.com/2020/03/25/950291/trustworthy-ai-is-a-framework-to-help-manage-unique-risk/
