Studies in Science of Science, 2025, Vol. 43, Issue 12: 2484-2495.
李森林1,张乐2
Abstract: Public acceptance of artificial intelligence (AI) not only determines the breadth of its applications but also forms the social foundation for its sustainable and healthy development. The widespread adoption of AI technologies must be built on broad societal acceptance, understood here as the public's willingness to use or purchase AI-enabled products and services. With the rise of generative AI, societal concerns have intensified over AI's simulation of human intelligence, its potential replacement of human functions, and even its threats to human agency. The proliferation of such controversies may erode the social basis for AI's healthy development, making it imperative to investigate the mechanisms that shape public acceptance of AI. This study develops an extended model based on the Theory of Planned Behavior (TPB), incorporating trust mechanisms to systematically examine the key factors influencing AI acceptance.
The findings reveal a stratified acceptance pattern: (1) for weak AI, cognitive attitudes, subjective norms, and perceived behavioral control all exert significant positive effects; (2) for advanced AI (encompassing strong AI and super AI), cognitive attitudes show no significant effect, subjective norms positively affect only the acceptance of strong AI, while perceived behavioral control remains influential for both categories of advanced AI. Mechanism analysis further shows that trust in AI products, operations, regulation, and R&D practices mediates the acceptance process. Crucially, this trust mechanism persists for advanced AI: when individuals perceive advanced AI as reliable, beneficial, controllable, and aligned with their expectations, they exhibit greater acceptance. These findings deepen our understanding of AI's complex acceptance mechanisms and provide theoretical and policy insights for responsible AI development.
From the perspective of value rationality, the development of artificial intelligence should establish clear ethical boundaries to ease the tension between unlimited technological advancement and limited human rationality. If technological development is allowed to dominate the future unchecked, people may no longer benefit from its promised advantages, and resistance to AI in the form of a "neo-Luddite movement" could reemerge in the near future. It is therefore essential to adopt forward-looking policy frameworks that address both present needs and future challenges. To steer current AI development toward enhancing human well-being, enterprises should strengthen internal safety controls and risk-prevention mechanisms, improving technical standards and raising not only the intelligence and autonomy of AI systems but also their controllability. Meanwhile, government agencies must establish robust mechanisms for dynamically monitoring and evaluating AI-related risks, updating laws, regulations, and ethical guidelines as needed to ensure that AI progresses along a beneficial and responsible path. For advanced AI, with its disruptive potential, relevant authorities must develop proactive governance strategies. Given the significant risks and uncertainties associated with its development, these strategies should employ more "imaginative" approaches to risk regulation and comprehensive governance. In line with the principles of anticipatory governance, the framework for advanced AI must incorporate reflexive value considerations and responsible innovation paradigms.
Ultimately, all stakeholders should collaborate to construct a "communication-participation-trust-acceptance" pathway for AI adoption. Technology innovators, regulators, and industry practitioners must consistently disseminate information about new technological risks to the public and establish efficient risk communication channels. This will help achieve a balanced approach to AI governance, foster societal consensus on developing safe, trustworthy, controllable, and ethically aligned AI systems, and promote scientifically viable governance solutions.
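To give a concrete sense of how trust mediation in an extended TPB model can be checked, the sketch below illustrates a simple regression-based (Baron-Kenny style) mediation analysis in Python. It is an illustrative reconstruction only: the simulated data, variable names (attitude, subj_norm, pbc, trust, acceptance), and model specification are assumptions for demonstration, not the authors' actual measures, data, or estimation procedure (which may rely on structural equation modeling).

# Illustrative sketch: regression-based mediation check for an extended TPB model
# in which trust mediates the effect of TPB factors on AI acceptance.
# All variables are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated survey-style scores (hypothetical 5-point-scale constructs).
attitude = rng.normal(3.5, 0.8, n)       # cognitive attitude
subj_norm = rng.normal(3.2, 0.9, n)      # subjective norm
pbc = rng.normal(3.0, 0.7, n)            # perceived behavioral control
trust = 0.4 * attitude + 0.3 * pbc + rng.normal(0, 0.5, n)
acceptance = (0.2 * attitude + 0.15 * subj_norm + 0.25 * pbc
              + 0.5 * trust + rng.normal(0, 0.5, n))

df = pd.DataFrame({"attitude": attitude, "subj_norm": subj_norm,
                   "pbc": pbc, "trust": trust, "acceptance": acceptance})

# Step 1: total effect of the TPB factors on acceptance.
total = smf.ols("acceptance ~ attitude + subj_norm + pbc", data=df).fit()
# Step 2: TPB factors predicting the mediator (trust).
mediator = smf.ols("trust ~ attitude + subj_norm + pbc", data=df).fit()
# Step 3: direct effects after controlling for trust.
direct = smf.ols("acceptance ~ attitude + subj_norm + pbc + trust", data=df).fit()

print(total.params, mediator.params, direct.params, sep="\n\n")

If the TPB coefficients shrink once trust enters the model while trust itself remains significant, the pattern is consistent with (partial) mediation; formal inference would typically add bootstrapped indirect effects or a full structural equation model.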
摘要 (Abstract): Public acceptance of artificial intelligence concerns not only the breadth of its application but also constitutes the social foundation for its sustained and healthy development. Based on the Theory of Planned Behavior, this paper builds an extended model by introducing trust mechanisms and systematically examines the key factors influencing AI acceptance and how they operate. The results show a clear stratification effect in individuals' acceptance of AI: cognitive attitudes, subjective norms, and perceived behavioral control all significantly and positively affect acceptance of weak AI; at the level of advanced AI, which encompasses strong AI and super AI, the effect of cognitive attitudes is no longer significant, subjective norms positively affect only the acceptance of strong AI, while perceived behavioral control retains a positive effect on the acceptance of both categories of advanced AI. Further mechanism analysis finds that individuals' trust in AI products, operations, regulation, and R&D mediates the acceptance of AI technologies. These findings help deepen our understanding of AI's complex acceptance mechanisms and provide theoretical support and policy reference for promoting the healthy development of AI.
李森林, 张乐. A study of public acceptance of artificial intelligence and its formation mechanism[J]. Studies in Science of Science, 2025, 43(12): 2484-2495.
URL: https://kxxyj.magtechjournal.com/kxxyj/EN/Y2025/V43/I12/2484