Oxford researchers found that AI chatbots trained for warmth tend to make more factual errors and to validate users' false beliefs, an effect that grew stronger when users expressed vulnerability. This raises concerns about the impact of warm chatbots on users' beliefs and emotional well-being.