Mehmet Caner, co-author of the study and the Thurman-Raytheon Distinguished Professor of Economics at NC State's Poole College of Management, highlighted the challenge faced by current AI systems, which rely heavily on statistical algorithms for forecasting. These algorithms, while powerful, often fail to account for human deceit, inadvertently encouraging misinformation in pursuits like securing loans or reducing insurance costs.
The breakthrough came with the development of a new set of training parameters designed to fine-tune AI's predictive capabilities. These parameters empower AI to identify and adjust for situations where economic incentives might drive a person to deceive. The result is an AI that not only recognizes the likelihood of deceit but also reduces the benefit of lying to the system.
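The study concerns lasso-type estimators, and while the paper's exact procedure is not reproduced here, the core intuition can be sketched: a shrinkage penalty pulls coefficients toward zero, so misreporting an input moves the model's prediction less than it would under plain least squares, reducing the payoff from lying. The penalty value and data below are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch (assumed settings, not the paper's procedure):
# lasso shrinkage reduces how much a misreported input can move a
# prediction, compared with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, 1.0, 0.0, 0.5])  # true coefficients (assumed)
y = X @ beta + rng.normal(scale=0.5, size=n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Lasso via cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
    return b

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_lasso = lasso_cd(X, y, lam=0.5)  # penalty strength chosen for illustration

# An applicant overstates feature 0 by one unit; compare how much
# the predicted score shifts under each estimator.
delta = np.zeros(p)
delta[0] = 1.0
gain_ols = abs(delta @ b_ols)
gain_lasso = abs(delta @ b_lasso)
print(gain_lasso < gain_ols)  # shrinkage lowers the benefit of the lie
```

The larger the penalty, the smaller the shift a lie can produce, at the cost of some predictive accuracy on honest inputs; the study's contribution is choosing such parameters in a principled, incentive-aware way.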
In simulations designed to test the effectiveness of these adjustments, the updated AI models showed a marked improvement in detecting inaccuracies provided by users. Caner pointed out that while this development curtails the advantages of dishonesty, it's not yet foolproof against minor falsehoods, signaling a direction for future research.
This pioneering work has been made accessible to the public, encouraging AI developers to integrate and experiment with these new training parameters. Caner expressed optimism about the potential of these advancements to mitigate, and possibly eliminate, the incentives for dishonesty in interactions with AI systems.
Research Report: Should Humans Lie to Machines? The Incentive Compatibility of Lasso and GLM Structured Sparsity Estimators
Related Links
NC State University