The paper analyses the challenges of explainability in artificial intelligence, particularly in chatbots, focusing on the risks of misinformation, polarization, and algorithmic discrimination. The evolution of chatbots is traced from rule-based systems such as A.L.I.C.E.® to advanced language models such as Replika®, ChatGPT®, Bard®, and DeepSeek®. Explainable Artificial Intelligence (XAI) is then proposed as a means of mitigating these risks by making AI systems' decisions transparent. Finally, the application of XAI principles to different chatbots is evaluated, identifying their strengths and weaknesses in terms of explainability, interpretability, and ethics. The paper concludes by highlighting the importance of XAI for the responsible and ethical use of AI.