Human-like AI raises deception risks for users
Wednesday, July 13, 2022
Significance
That an individual such as Lemoine perceived AI sentience as real, despite being fully aware of LaMDA's synthetic nature and having technical knowledge of its workings, calls into question whether a belief in AI sentience can be dismissed as indisputably erroneous.
Impacts
- Current regulations are insufficient to mitigate the potential effects of human-like AI across its various applications.
- Policymakers may need to devise measures that prevent confusion between AI able to imitate sentient human experience and genuinely sentient AI.
- There is insufficient understanding of how interactions with AI that is experientially indistinguishable from a human affect users.