Studies in Science of Science ›› 2025, Vol. 43 ›› Issue (9): 1872-1880.
Zhang Junhao (张珺皓)
Corresponding author: Zhang Junhao
Abstract: Studying algorithmic black boxes from a cognitive dimension represents a third pathway beyond the existing technical and normative approaches: it seeks to understand algorithmic black boxes through the same conceptual frameworks used to understand human cognition. Compared with the "black box" of the human mind, algorithmic black boxes face higher demands for transparency, which has given rise to the "algorithmic black box-transparency-accountability" framework; this focus, however, overshadows other important attributes of black boxes, such as their strangeness and legitimacy. Contrary to cognitive common sense, algorithmic black boxes also possess positive cognitive functions: they unify transitions across organizational levels in cognition and the move from causal relationships to mechanisms. Recognizing this dispels the myth of algorithmic explainability, avoids the illusion of explanatory depth, and points to horizontal or nested explanations for unfolding the black box; a visual "gray box" ladder explanation method, positioned between "white box" and "black box" approaches, can be employed. The analytical framework of the information phase and the control phase prompts a re-evaluation of scenario-based regulation methods. Trust in algorithms can be divided into three types, namely inherent trust, learned trust, and situational trust, corresponding to factors related to humans, algorithms, and the environment, respectively. Reducing "algorithm aversion" depends on the objectivity of the algorithm, the autonomy of human processes, concerns about social judgment, and the degree to which algorithms intrude on human thought.
Zhang Junhao. Research on algorithmic black boxes: a cognitive science perspective[J]. Studies in Science of Science, 2025, 43(9): 1872-1880.
URL: https://kxxyj.magtechjournal.com/kxxyj/EN/
https://kxxyj.magtechjournal.com/kxxyj/EN/Y2025/V43/I9/1872