Human-AI Trust in Medical Contexts Within a Sustainable Innovation Ecosystem: A Theoretical Review and Interdisciplinary Framework

HAN Yong1,2, OUYANG Wumin1,2, QIAO Zebin3, DU Hemin2*, HUANG Jianlong4

Industrial & Engineering Design, 2026, Vol. 8, Issue (1): 1-17. DOI: 10.19798/j.cnki.2096-6946.2026.01.001
Design Theory


Abstract

Artificial intelligence (AI) is reshaping high-risk domains such as healthcare, and trust in human-AI collaboration has become a key constraint on technology adoption and value creation. This work addresses two structural gaps in existing human-AI trust research: the disconnect between explainable AI (XAI) and human factors engineering (HFE), and an overemphasis on individual-level factors at the expense of ecological layers. A critical narrative review approach is adopted to situate human-AI collaborative trust within the macro-level perspective of Sustainable Innovation Ecosystem (SIE) theory, treating the XAI and HFE research communities as knowledge sub-ecosystems and focusing in particular on individual-level alignment mechanisms between XAI and HFE. The resulting SIE-Trust framework for understanding human-AI collaborative trust in healthcare identifies three major categories of influencing factors (user characteristics, AI system attributes, and environmental and social contexts) and introduces the novel concept of a "risk-sensitive trust calibration zone". Based on these insights, the study offers design and governance recommendations to support trustworthy and sustainable medical AI systems.
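The abstract describes the "risk-sensitive trust calibration zone" only conceptually. One possible reading, sketched below purely as an illustration (the function name, thresholds, and linear narrowing rule are all assumptions, not taken from the paper), is a tolerance band around an AI system's actual trustworthiness that narrows as clinical risk rises, so that a user's perceived trust must track trustworthiness more tightly in high-stakes settings:

```python
# Illustrative sketch only: a risk-dependent tolerance band around actual
# trustworthiness. All names and numeric choices here are assumptions.

def calibration_state(perceived_trust: float,
                      trustworthiness: float,
                      risk: float,
                      base_band: float = 0.3) -> str:
    """Classify trust as calibrated, over-trust, or under-trust.

    All three inputs are assumed to lie in [0, 1]. The tolerance band
    shrinks linearly with risk: at risk = 1 it is one third of its base
    width (an arbitrary modelling choice for this sketch).
    """
    band = base_band * (1.0 - (2.0 / 3.0) * risk)
    gap = perceived_trust - trustworthiness
    if abs(gap) <= band:
        return "calibrated"
    return "over-trust" if gap > 0 else "under-trust"

# The same trust gap that is tolerable for a low-risk triage aid can
# count as over-trust for a high-risk diagnostic decision aid.
print(calibration_state(0.8, 0.6, risk=0.1))  # wide band at low risk
print(calibration_state(0.8, 0.6, risk=0.9))  # narrow band at high risk
```

The design choice being illustrated is that calibration is not a fixed threshold but a function of context risk, which is how a single framework can cover both routine and high-stakes medical use.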

Key words

medical AI / human-AI collaborative trust / sustainable innovation ecosystem (SIE) / trust calibration / explainable AI (XAI) / XAI-HFE interdisciplinary alignment

Cite this article

HAN Yong, OUYANG Wumin, QIAO Zebin, DU Hemin, HUANG Jianlong. Human-AI Trust in Medical Contexts Within a Sustainable Innovation Ecosystem: A Theoretical Review and Interdisciplinary Framework[J]. Industrial & Engineering Design, 2026, 8(1): 1-17. https://doi.org/10.19798/j.cnki.2096-6946.2026.01.001
CLC Numbers: TB47; TB21; J524


Funding

Humanities and Social Sciences Research Project of the Ministry of Education
