Artificial Intelligence (AI) is being promoted as an important contributor to ensuring the safety of autonomous ships. However, using AI technologies to enhance safety may be problematic. For instance, AI performs well only in situations it has been trained on, or otherwise programmed to handle, which makes quantifying the true performance of such technologies difficult. This raises the question of whether these technologies can be applied on larger ships that require approval and safety certification. The issue becomes more complex when such systems are introduced as elements of remote control centres. This paper presents an overview of the most relevant applications of AI for autonomous ships, as well as their limitations in the context of approval. It is found that approval processes may be eased by restricting the operational envelope of such systems and by leveraging recent developments in explainable and trustworthy AI. If leveraged properly, AI models can be made aware of their own limitations and applied only in low-risk situations, reducing the workload of human operators. In high-risk situations, e.g. under high AI model uncertainty or in complex navigational situations, a timely and effective handover to a human operator should take place. In this manner, AI-based systems need not be capable of handling all possible situations; rather, they must be capable of identifying their limitations and alerting human operators to situations that they cannot handle with an acceptable level of risk.
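The uncertainty-triggered handover described above can be sketched as follows. This is a minimal illustrative example, not the paper's method: the threshold values, the `Assessment` structure, and the risk measures are all assumptions chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    # Both measures are assumed to be normalized to [0, 1].
    model_uncertainty: float      # e.g. predictive entropy or ensemble disagreement
    situation_complexity: float   # e.g. a traffic-density or encounter-geometry score

# Illustrative thresholds; in practice these would be fixed by the
# approved operational envelope of the system.
UNCERTAINTY_LIMIT = 0.3
COMPLEXITY_LIMIT = 0.5

def decide_control_mode(a: Assessment) -> str:
    """Return 'autonomous' in low-risk situations; otherwise request
    a handover to a human operator in the remote control centre."""
    if a.model_uncertainty > UNCERTAINTY_LIMIT or a.situation_complexity > COMPLEXITY_LIMIT:
        return "handover_to_operator"
    return "autonomous"
```

The key design point is that the system never needs to resolve a high-risk situation itself; it only needs a reliable self-assessment to know when to escalate, which is a far easier property to certify.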