Seminar No.85:
Time: December 12 (Tuesday), 10:00-11:30
Host: Distinguished Research Fellow Liu Yukun
Venue: Conference Room A723
Title: Assessing Personality through a Chatbot-based Structured Interview: Examining the Role of Ground Truth Source and Level of Scoring
Abstract:
Building on our lab’s earlier study (Fan et al., 2023), we continue to investigate the feasibility of measuring personality indirectly through an AI-based chatbot. We developed a set of interview questions specifically targeting 17 personality facets and deployed them in an AI-based chatbot. We examined how (a) the ground truth source (self-reported vs. interviewer-rated personality scores) and (b) the level of scoring (interview-level text vs. domain-level text) affect the psychometric properties of machine-inferred personality scores. We trained predictive models on a sample of more than 1,000 full-time employees (recruited through Prolific) and tested them in an independent managerial sample of more than 100 online MBA students. Preliminary results indicate that (a) when scored from interview-level text, the predictive models showed good reliability and convergent validity but very poor discriminant validity (this held for both the self-report and interviewer-report models); (b) when scored from domain-level text, the predictive models yielded lower reliability, mixed convergent validity (slightly lower for the self-report models but slightly higher for the interviewer-report models), and substantially improved discriminant validity; and (c) the effects on criterion-related validity remained somewhat unclear. Theoretical and practical implications, as well as future research directions, will be discussed.
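For readers unfamiliar with the two levels of scoring, the sketch below illustrates the contrast in a generic form. It is not the authors’ pipeline: it assumes hypothetical TF-IDF features and ridge regression (scikit-learn), hypothetical variable names, and simple correlations as rough proxies for convergent and discriminant validity.

```python
# Minimal sketch (hypothetical, not the speaker's actual pipeline):
# contrasting interview-level vs. domain-level scoring of chatbot interview text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

FACETS = [f"facet_{i}" for i in range(17)]  # the 17 targeted personality facets

def train_models(texts_per_facet, y):
    """Train one text-regression model per facet.

    texts_per_facet[facet] is a list of documents (one per participant):
      - interview-level scoring: the same full transcript for every facet;
      - domain-level scoring: only the answers to that facet's questions.
    y is an (n_participants, 17) array of ground-truth facet scores
    (self-reported or interviewer-rated, depending on the ground truth source).
    """
    models = {}
    for j, facet in enumerate(FACETS):
        model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
        model.fit(texts_per_facet[facet], y[:, j])
        models[facet] = model
    return models

def predict_scores(models, texts_per_facet):
    """Return an (n_participants, 17) array of machine-inferred facet scores."""
    return np.column_stack(
        [models[f].predict(texts_per_facet[f]) for f in FACETS]
    )

def convergent_discriminant(pred, truth):
    """Rough proxies: convergent validity = correlation of each predicted facet
    with its own ground-truth facet (diagonal of r); discriminant validity is
    judged from the off-diagonal correlations (lower is better)."""
    k = len(FACETS)
    r = np.corrcoef(pred.T, truth.T)[:k, k:]  # predicted-by-ground-truth block
    return np.diag(r), r
```

In this sketch, switching from interview-level to domain-level scoring only changes which text is supplied for each facet; the modeling step itself is unchanged.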
Seminar No.86:
Time: December 12 (Tuesday), 14:00-15:30
Speaker: Prof. Jinyan Fan, Auburn University (USA)
Host: Distinguished Research Fellow Liu Yukun
Venue: Conference Room A723
Title: How to Communicate Effectively with Action Editors during the Submission Process? Sharing a Real Case
Abstract: A key task for junior scholars and doctoral students in organizational behavior and organizational psychology is to write up their research and submit it to academic journals. A common headache during the submission process is how to communicate effectively with a journal’s action editor about important issues: some authors do not dare to reach out at all, while others communicate in an inappropriate way and offend the editor, and as a result they may miss opportunities to publish in good journals. In this talk, I will discuss several important principles for such communication and share a real case from my own experience, which I hope will be enlightening and helpful.
Speaker Bio:
Jinyan Fan is a professor of psychology at Auburn University. He received his bachelor’s (1994) and master’s (1997) degrees in psychology from East China Normal University and earned a PhD in industrial/organizational psychology from Ohio State University in 2004. Prior to joining Auburn’s faculty, he taught for six years at Hofstra University on Long Island, New York. His research interests are in the domains of artificial intelligence, personnel selection, newcomer orientation and socialization, and cross-cultural adjustment and training. His work has appeared in premier outlets such as Journal of Applied Psychology, Journal of Management, Journal of Organizational Behavior, and Psychological Assessment. Dr. Fan has received several awards and research grants from SIOP and the Academy of Management (AoM), and he served as an Associate Editor of Journal of Vocational Behavior from 2019 to 2021. In addition, Dr. Fan has developed several talent assessment tools, models, and methods, and has been actively engaged in HR consulting with various organizations in the U.S. and Mainland China.
All faculty and students are warmly welcome to attend!