Studies in Science of Science ›› 2025, Vol. 43 ›› Issue (10): 2066-2075.
Chen Denghang¹, Wang Shuo², Wang Chen³ (陈登航, 王硕, 汪琛)
Abstract: Trust is regarded as one of the key factors in reducing individual risk perception. For artificial intelligence, however, there has been insufficient discussion of whether the effects of trust differ across its objects and of how trust shapes risk perception. A quota-sampling survey based on a national sample found that: trust in scientists and in the government significantly reduces the Chinese public's risk perception of artificial intelligence, while trust in engineers and in relatives and friends tends to amplify it; scientists have a greater risk-reducing influence on the public than the government; and the Chinese public shows only a generalized sense of trust in the government, whereas the intention credibility, process credibility, and behavior credibility of scientists each negatively predict public risk perception. To address the modern risks of artificial intelligence, it is necessary to establish diversified risk communication channels, build a benign interaction mechanism among scientists, the government, and the public, and adopt differentiated strategies to strengthen the trust characteristics of different groups.
Abstract (Chinese, translated): As intelligent human-machine interaction continues to deepen, trust is one of the key foundational factors in reducing individual risk perception. In the emerging field of AI, existing research remains insufficient on whether the effects of the social trust system differ across its components, and on the mechanisms through which trust influences risk perception. An online survey based on quota sampling found that: trust in scientists and in the government significantly reduces individuals' AI risk perception, while trust in engineers and in relatives and friends tends to amplify it; scientists' influence in reducing public risk perception is markedly greater than the government's; and the public shows only a generalized sense of trust in the government, whereas the intention credibility, process credibility, and behavior credibility exhibited by scientists negatively predict public risk perception. Accordingly, diversified risk communication channels should be established, a benign interaction mechanism among scientists, the government, and the public should be built, and differentiated strategies should be adopted to strengthen the trust characteristics of different groups, so as to anticipate and respond to the trust crises that AI development may bring.
Chen Denghang, Wang Shuo, Wang Chen. How does the social trust system reduce public risk perception of artificial intelligence? An empirical analysis based on the 2023 Science, Technology and Society Barometer Survey (社会信任系统何以降低公众的人工智能风险感知?——基于2023年科技与社会晴雨表调查的实证分析) [J]. Studies in Science of Science (科学学研究), 2025, 43(10): 2066-2075.
URL: https://kxxyj.magtechjournal.com/kxxyj/EN/Y2025/V43/I10/2066