
Performance-Aware Trust Modeling within a Human–Multi-Robot Collaboration Setting

Published: 28 June 2024

Abstract

In this study, a novel time-driven mathematical model for trust is developed considering human–multi-robot performance for a Human–Robot Collaboration (HRC) framework. For this purpose, a model is developed to quantify human performance considering the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload, workload due to the robots’ mistakes, and task complexity. The performance of the multi-robot system in the HRC setting is modeled based upon the rate of task assignment and completion as well as the mistake probabilities of the individual robots. Human trust in an HRC setting with single and multiple robots is modeled over different operation regions, namely the unpredictable region, predictable region, dependable region, and faithful region. The relative performance difference between the human operator and the robots is used to analyze its effect on the human operator’s trust in the robots’ operation. The developed model is simulated for a manufacturing workspace scenario considering different task complexities and involving multiple robots to complete shared tasks. The simulation results indicate that for a constant multi-robot performance, the human operator’s trust in the robots’ operation improves whenever the comparative performance of the robots improves with respect to the human operator performance. The impact of hypothetical robot learning capabilities on human trust in the same HRC setting is also analyzed. The results confirm that a hypothetical learning capability allows robots to reduce the human workload, which improves human performance. The simulation result analysis confirms that the human operator’s trust in the multi-robot operation increases faster with the improvement of the multi-robot performance when the robots have a hypothetical learning capability. An empirical study was conducted involving a human operator and two collaborator robots with two different performance levels in a software-based HRC setting. The experimental results closely followed the pattern of the developed mathematical models in capturing human trust and performance within the human–multi-robot collaboration.

1 Introduction

Rapid developments in robotic technologies have enabled their close collaboration with human operators and laid the groundwork for effective, safe, and reliable applications of trustworthy autonomy in Human–Robot Interaction (HRI) [17, 24]. Human–Robot Collaboration (HRC) can enhance the joint performance of human operators and robots in completing the assigned work [31, 32]. A key factor that facilitates the collaboration of human operators and robots is the trust that they develop in each other [2, 7, 11, 44]. Trust development and analysis have been investigated in the literature for different HRC settings such as decentralized control of multiple unmanned vehicles for complex missions [9, 49], performing surgical tasks [39], supervised control of robotic swarms [29], and the approaching behavior of human operators and robots when they collaborate in close proximity [10]. Hence, trust development is important for effective collaboration among humans and robots, enabling them to comprehend each other’s states/intentions for more efficient joint task handling [28, 42].
In an HRC, the physical and cognitive tasks are divided between the collaborators [5], i.e., the human and the robots, and hence, their performances should be taken into account when modeling an HRC. Different human performance models, the employed modeling tools, and the validation metrics for HRC are discussed in [14]. Several techniques for modeling human cognition and behavior in HRCs, including computational, algorithmic, and implementational models, are reviewed and evaluated for their benefits and drawbacks in [16]. An effective human–robot interaction model is developed in [47] using Petri nets to analyze the sequence of actions of human operators in a search-and-rescue mission using an Unmanned Aerial Vehicle (UAV). A human performance model is discussed in [35], in which a linear relation between a human’s and a robot’s performance is developed for accomplishing shared physical tasks. Performance models of the human operator and the robot in an HRC setting help in understanding trust development, which is influenced by the performance of all collaborators.
The existing literature discusses human trust in robots using different approaches such as heuristic approaches [30, 40, 41], progressive interaction approaches [3, 6, 13, 22, 43, 48], theories of user psychology [8, 21, 23], and computational/mathematical models [19, 26]. Motivated by the need for a quantitative explanation of how trust develops over time, this research focuses on mathematical models of human trust in robot operation. In this respect, Muir’s three-dimensional theory (predictability, dependability, and faith) is one of the first to model human trust in automation [26]. Another significant work in this area is the Online Probabilistic Trust Inference Model [46], which represents trust as a hidden variable in a dynamic model [20]. The authors in [45] discussed trust as a dynamic measure, captured using a Markov Decision Process (MDP), to model how the robot’s performance affects the human. A Partially Observable MDP is used in [7] to infer the trust of a human through interactions with a robot that maximizes the overall team performance. A trust architecture named Human-Swarm-Teaming Transparency and Trust (HST3) is presented in [15] that uses explainable Artificial Intelligence (AI) for human swarm teaming; the authors concluded that reinforced transparency is a key contributor to situation awareness and trust development in the HST3 architecture. A unified bi-directional trust model for HRC is presented in [1] for both humans and robots, in which the human and robotic collaborators develop trust based on approximate knowledge of the teammate’s capabilities in handling the task.
From this brief review, we observe that (1) most existing trust models are case-specific and difficult to extend/generalize to different scenarios; (2) many of these models do not directly take the human performance into account and instead use other factors such as human emotion, the teammate’s capabilities, fault occurrences in the robot, or only the robot performance; (3) most of these existing trust models are suitable for static environments only and are incapable of describing the evolution of trust; and (4) most of these models do not quantitatively include the effects of task complexity and the rate of human cognitive utilization on human performance within HRC settings. To address these challenges, this article develops a mathematical time-driven1 model for human trust in multi-robot operation. The trust model is built upon performance models of a human and multiple robots. The proposed human performance model is composed of the physical and cognitive performances of the human operator, which are impacted by the workload due to the tasks assigned to the human operator, the workload added due to the mistakes of the robots, and the task complexity. The multi-robot performance, in turn, is modeled in terms of the rate of appearance of tasks in the workspace, the rate of task completion, the task assignment probabilities, and the mistake probabilities of the individual robots. Having the performance models of the human operator and the robots, we then develop a model for the human trust in the robots’ operation. The robot operation regions commonly used in the HRC literature, based on robot performance, are the predictable, dependable, and faithful regions, where each region represents a particular level of trust in the robot [18]. We define an additional unpredictable region of operation in which the robots’ performance is below a certain minimum threshold. The human operator’s trust in the robots’ operation is then modeled over these four operation regions. We provide simulation results to analyze the human and multi-robot performances in a manufacturing workspace scenario and demonstrate trust development. The simulation results show that the human trust in the robots’ operation improves whenever the performance of the robots relative to the human operator’s performance increases. Moreover, as the robots’ performance improves through the guidance provided by the human operator, the human trust in the robots’ operation improves.
In summary, the contributions of this article include the following:
Development of a time-driven human performance model that considers the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload, workload added due to robots’ mistakes, and task complexity.
Development of a multi-robot performance model taking into account the rate of task assignment and completion as well as the mistake probabilities of the individual robots.
Development of a model for the human operator’s trust in HRC settings with single or multiple robots over different operation regions (depending upon the comparative performance of the human operator and the robots), namely, the unpredictable region, predictable region, dependable region, and faithful region.
Development of a software-based HRI experimental setup and validation of the developed mathematical model through empirical studies.
Further, an analysis of the effect of changes in the robots’ performance (e.g., through a learning mechanism) on the human performance and the human trust in the robots within the proposed multi-robot HRC setting is also included in this research work.
The preliminary results of this work were presented in [33] for human cognitive performance modeling and [34] for trust in robot operation in an HRC with a single robot, where the robot has no learning capability. The developed performance and trust models in this article extend the results in [33, 34] to an HRC setting with multiple robots when the robots have a hypothetical learning capability. It is also worth mentioning that the proposed model considers both the human operator’s physical and cognitive performances for trust modeling, whereas other existing models consider only the human physical performance within an HRC setting [37] or take into account only the cognitive workload in modeling trust in teaming of manned aerial vehicles and UAVs [36]. Further, the developed trust model in this article considers time-driven models of human (physical and cognitive) and robot performances, allowing for capturing the trust evolution unlike the static trust models discussed in [18, 26, 27].
The rest of this research work is organized as follows. The proposed human–multi-robot collaboration framework is presented in Section 2. The developed multi-robot performance model is discussed in Section 3, and the developed human operator’s performance model is described in Section 4 for the multi-robot setting. Section 5 discusses the proposed trust model. The simulation setup and the simulation results for the analysis of the proposed models with fixed multi-robot performance are presented in Sections 6 and 7, respectively. The effect of the robots’ hypothetical learning on human performance and the evolution of trust is analyzed in Section 8. An empirical study, conducted by developing a software-based HRC setting to validate the capability of the developed trust model in terms of human–multi-robot performance, is presented in Section 9. Finally, a conclusion is provided in Section 10.

2 Proposed Multi-Robot HRC Framework

In this article, the HRC framework shown in Figure 1 is considered, which involves multiple robots collaboratively working with a human operator to perform shared tasks in a workspace. We assume that the human operator provides instructions to directly guide the robots (in the case of no hypothetical learning capability) or to improve the robots’ performance (in the case of a hypothetical learning capability) in handling the tasks. The human operator continuously observes the workspace to ensure that the tasks are being handled properly, e.g., that objects on the workspace are being transferred to the correct destinations/conveyors by the robots. In this way, the human operator handles the decision making and provides instructions to control the robots to complete the shared tasks, while the robot(s) is/are equipped with the basic capability to pick an incoming object and place it at a designated destination, thus physically transferring the object out of the workspace. Further, the human operator in this HRC is involved in some physical activities, such as picking and inspecting an object and physically adjusting the orientation of the incoming objects to facilitate the object pick-up by the robots from the workspace. In this setting, if robots without a hypothetical learning capability frequently make mistakes, the human operator needs to frequently provide instructions to correct the robots, which rapidly degrades the human operator’s performance. If the human cognitive performance decays below a certain threshold, the collaboration stops. Whenever the collaboration between the human operator and the robots in the workspace is stopped, the robots that are incapable of handling the workspace tasks are considered to be malfunctioning and need to be replaced with robots that can handle the tasks with better performance (a lower rate of mistakes). On the contrary, if the robots have a hypothetical learning capability, even though they may make mistakes, they can improve their performance by receiving instructions from the human operator. In this case, while the performance of the human operator decays during the robots’ hypothetical learning process, with a proper hypothetical learning mechanism, the robots’ performance gradually improves, demanding less human utilization (cognitive/physical workload), and hence the collaboration between the human and the robots can continue for a longer period.
Fig. 1. The symbolic representation of evaluating the human operator performance and developing trust in a multi-robot HRC setting.

3 Proposed Multi-Robot Performance Model

The performance of the robots has a significant impact on the human performance in an HRC framework. The human operator’s trust in the multi-robot performance varies and can be enhanced if the robots’ performance improves over time. Indeed, how well a robot executes the human instruction(s) to complete an action/task determines the human trust in that robot. Hence, the human trust in the robot’s operation builds up with each correct operation performed by the robot; otherwise, the human workload increases and the trust in the robot’s operation/performance decreases. In this section, a dynamic mathematical model of the multi-robot performance is presented. We use a dynamic model to be able to represent the changes in the robots’ performance, which may happen due to changes in the workspace, an embedded learning capability of the robots, or other similar situations. This model will later be used to investigate the effect of the robots’ performance on the human performance and trust in a multi-robot HRC setting. For “Robot \(i\),” \(i=1,\ldots,n\), in the multi-robot HRC setting, the performance \(R_{P_{i}}(t)\) can be described as
\begin{align}R_{P_{i}}(t)=R_{{P,max}_{i}}-\frac{S_{R}(t-1)-(1-P_{mR_{i}}(t)){P_{o_{i}}}D_{R }(t-1)}{S_{R}(t-1)},\end{align}
(1)
where \(R_{P,max_{i}}\) is the maximum level of performance that “Robot \(i\)” is able to achieve (ideally 1 if the robot performs all of its assigned tasks perfectly), \(S_{R}(t-1)\) is the rate at which assigned tasks appear on the workspace to be handled by the robots (e.g., the rate of incoming objects on the source conveyor), \(D_{R}(t-1)\) is the completion rate of the assigned tasks in the workspace (e.g., the number of objects correctly moved by the specific robot from the source to the robot’s conveyor), \(P_{o_{i}}\) is the probability that an incoming task appearing in the workspace is assigned to “Robot \(i\),” where \(\sum_{i=1}^{n}P_{o_{i}}=1\), and \(P_{mR_{i}}\) is the mistake probability of “Robot \(i\),” where \(0\leq P_{mR_{i}}(t)\leq 1\).
To collectively capture the overall performance of multiple robots in a shared workspace, we introduce \(R_{P}(t)\), which is represented in terms of the rate at which the (homogeneous) robots can correctly handle the allocated tasks (e.g., recognizing the target set of objects, picking the correct one, and placing the object(s) on the destination conveyor) and the sum of the individual robots’ mistakes on the workspace. Therefore, the overall performance of the robots on the workspace can be modeled as
\begin{align} R_{P}(t)=\sum_{i=1}^{n}R_{{P,max}_{i}}-\frac{S_{R}(t-1)-\sum_{i=1}^{n}(1-P_{mR _{i}}(t)){P_{o_{i}}}D_{R}(t-1)}{S_{R}(t-1)}.\end{align}
(2)
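To make the model concrete, the following is a minimal sketch of Equations (1) and (2) in Python. The function and variable names are illustrative assumptions and are not taken from the authors’ released code; the rates \(S_{R}\) and \(D_{R}\) are supplied by the caller.

```python
def robot_performance(R_P_max_i, S_R_prev, D_R_prev, P_o_i, P_mR_i):
    """Equation (1): performance of "Robot i" at time t, given the previous
    task-appearance rate S_R(t-1) and task-completion rate D_R(t-1)."""
    return R_P_max_i - (S_R_prev - (1.0 - P_mR_i) * P_o_i * D_R_prev) / S_R_prev

def multi_robot_performance(R_P_max, S_R_prev, D_R_prev, P_o, P_mR):
    """Equation (2): overall performance of the n robots sharing the workspace.
    P_o must sum to 1 and each entry of P_mR must lie in [0, 1]."""
    completed = sum((1.0 - pm) * po * D_R_prev for pm, po in zip(P_mR, P_o))
    return sum(R_P_max) - (S_R_prev - completed) / S_R_prev
```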

4 Developed Human Performance Model

In this section, the human performance is modeled for an HRC. In particular, a model for human physical and cognitive performances is developed, which indicates the ability of a human operator to perform physical and mental/cognitive tasks. The human operator performance is related to the fatigue level, workload, and robots’ performance.

4.1 Human Physical Performance and Workload Model

A human operator’s physical performance is tied to muscle contraction and expansion, and hence to the tiredness/fatigue and recovery of muscles. The fatigue and recovery behavior of muscles affects human physical performance, which can be modeled as [33, 38]
\begin{align} P_{P}(t)=\frac{F_{max,iso}(t)-F_{th}}{\mathrm{MVC}-F_{th}},\end{align}
(3)
where \(P_{P}(t)\) stands for the human physical performance at time instant \(t\), and \(F_{th}\) is the associated threshold force at the equilibrium point (the point where muscle fatigue and muscle recovery balance each other out). \(F_{iso}(t)\) is the isometric force that is generated without any change in the muscle lengths [38], and \(F_{max,iso}(t)\) indicates the maximum possible isometric force. The Maximum Voluntary Contraction (MVC) is the highest value of isometric force that one can generate at rest [38]. Clearly, \(F_{max,iso}(t)\) decreases over time because of muscle fatigue. Building on [12, 25, 38] and modifying the model of the generated maximum isometric force, the first-order Euler approximation of the time-driven maximum isometric force is calculated as follows:
\begin{align} F_{max,iso}(t)=F_{max,iso}(t-1)-nC_{f}F_{max,iso}(t-1)\frac{F(t- 1)}{\mathrm{MVC}}+C_{r}(\mathrm{MVC}-F_{max,iso}(t-1)),\end{align}
(4)
where \(C_{f}\) and \(C_{r}\) are constants that represent fatigue/tiredness and recovery, respectively; \(F(t)\) stands for the dynamic force that is applied to perform a task, which decreases over time as fatigue increases; and \(n\) is the number of robots. The fatigue and recovery processes jointly capture the dynamic nature of the maximum isometric force. Equation (4) reflects the fact that the fatigue level increases while the muscles continuously apply force. Conversely, at times when no force is applied or the applied force is of relatively low magnitude, the muscles undergo the recovery process, i.e., the recovery term \(C_{r}(\mathrm{MVC}-F_{max,iso}(t-1))\) dominates during these times. Also, \(F_{max,iso}(t)\) is at its highest value when the human starts an action/task, i.e., \(F_{max,iso}(t=0)=\mathrm{MVC}\). Hence, based on Equation (3), \(P_{P}(t=0)=1\), and the physical performance reduces to zero once \(F_{max,iso}\) drops to \(F_{th}\) [38]. Further, combining Equations (3) and (4), it can be verified that in a multi-robot scenario, an increase in fatigue level decreases the maximum isometric force, which in turn reduces the human physical performance.
The physical workload of the human operator can be indirectly calculated from human physical performance as follows:
\begin{align} H_{W}(t)=\gamma_{W}(P_{P,max}-P_{P}(t)),\end{align}
(5)
where \(P_{P,max}\) is the maximum human physical performance and \(\gamma_{W}\) is a real nonzero scaling factor.
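As an illustration, the physical-performance model of Equations (3)–(5) can be simulated with a short script such as the sketch below. MVC and \(F_{th}\) follow Table 1, while the fatigue/recovery constants \(C_{f}\) and \(C_{r}\) and the applied force \(F(t)\) are assumed placeholder values, since the article does not report them.

```python
MVC, F_th = 200.0, 151.9      # maximum voluntary contraction and threshold force (Table 1)
C_f, C_r = 0.01, 0.005        # assumed fatigue and recovery constants
gamma_W, P_P_max = 1.0, 1.0   # workload scaling (Table 1) and maximum physical performance
n = 3                         # number of robots

F_max_iso = MVC               # at t = 0 the muscles are fully rested, so P_P(0) = 1
for t in range(1, 17):        # 8 hrs sampled every 0.5 hrs
    F_applied = 100.0         # assumed dynamic force F(t-1) applied to perform the task
    # Equation (4): the fatigue term shrinks F_max_iso; the recovery term pulls it toward MVC
    F_max_iso += (-n * C_f * F_max_iso * F_applied / MVC
                  + C_r * (MVC - F_max_iso))
    P_P = (F_max_iso - F_th) / (MVC - F_th)   # Equation (3): physical performance
    H_W = gamma_W * (P_P_max - P_P)           # Equation (5): physical workload
```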

4.2 Human Cognitive Performance and Workload Model

Human cognitive performance is a function of the cognitive and physical workloads as well as the additional workload imposed on the human operator by the robots’ mistakes [37]. In particular, the human operator’s cognitive performance suffers the most when the cognitive workload is too high or the tasks are too complex. The longer the period of low or no cognitive workload for the human operator, the greater the increase in cognitive performance, and vice versa. Over an extended period of low or no cognitive workload, the human cognitive performance level can improve up to an Optimum Level of Arousal point [38]. In this article, we model the human cognitive performance, \(C_{P}(t)\), as follows:
\begin{align} C_{P}(t)=C_{P,max}-\alpha C_{W}(t)-\beta H_{W}(t)-\gamma H_{R}(t),\end{align}
(6)
where \(C_{P,max}\) is the highest possible value of human cognitive performance; \(C_{W}(t)\) is the cognitive workload; \(H_{W}(t)\) is the physical workload of the human operator, given in Equation (5); \(H_{R}(t)=\gamma_{R}(R_{P,max}-R_{P}(t))\) is the workload added due to the robots’ wrong operations/mistakes; and \(\alpha\), \(\beta\), \(\gamma\), and \(\gamma_{R}\) are real nonzero numbers with \(\alpha+\beta+\gamma=1\). The human cognitive workload, \(C_{W}\), can be quantified in terms of the utilization factor and the complexity of the tasks performed by the human operator as [33]
\begin{align} C_{W}(t)=(C_{W,max}-C_{W,min})\left(\frac{u(t)}{1-c(t)}\right)^{1-c(t)}\left(\frac{1-u(t)}{c(t)}\right)^{c(t)}+C_{W,min},\end{align}
(7)
where \(C_{W,min}\) and \(C_{W,max}\) stand for the minimum and maximum cognitive workloads, respectively, and both of these values vary depending upon the individual’s abilities to handle the workspace tasks; \(0 \lt c(t) \lt 1\) represents the associated complexity of the task that is undertaken at time \(t\) (larger values of \(c\) represent tasks of higher complexity), and \(0\leq u(t)\leq 1\) is the human utilization factor, which can be calculated as
\begin{align} u(t)=u(t-1)+\frac{\sum_{i=1}^{n}\lambda P_{o_{i}}P_{mR_{i}}(t)+\eta(t-1)-2u(t- 1)}{2\tau},\end{align}
(8)
where \(0\leq\eta\leq 1\) is the cognitive workload that is directly assigned to the human operator; \(\tau\geq 1\) is a time constant; and \(\lambda\) is a constant that determines the proportionate impact of the robots’ mistakes on the human operator’s utilization. In this way, human utilization is quantified by a dynamic model in terms of the changes in the cognitive workload directly assigned to the human operator and the cognitive workload added due to the robots’ mistakes.
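The cognitive side of the model, Equations (6)–(8), can be sketched in the same style. The values of \(\eta\) and \(\lambda\) follow Table 1; \(\alpha\), \(\beta\), \(\gamma\), \(\tau\), the workload bounds, and the mistake probabilities are assumptions made only for illustration.

```python
C_P_max, C_W_min, C_W_max = 1.0, 0.1, 1.0     # assumed cognitive performance/workload bounds
alpha, beta, gamma = 0.4, 0.3, 0.3            # assumed weights; alpha + beta + gamma = 1
lam, eta, tau = 0.95, 0.2, 2.0                # lambda and eta from Table 1; tau assumed
P_o  = [0.5, 0.3, 0.2]                        # task assignment probabilities (Section 7)
P_mR = [0.3, 0.5, 0.7]                        # assumed mistake probabilities
c = 0.3                                       # task complexity, 0 < c < 1

u = 0.0                                       # utilization factor, 0 <= u <= 1
for t in range(1, 17):
    # Equation (8): utilization driven by the direct workload and the robots' mistakes
    u += (sum(lam * po * pm for po, pm in zip(P_o, P_mR)) + eta - 2.0 * u) / (2.0 * tau)
    # Equation (7): cognitive workload from the utilization factor and task complexity
    C_W = ((C_W_max - C_W_min)
           * (u / (1.0 - c)) ** (1.0 - c)
           * ((1.0 - u) / c) ** c
           + C_W_min)
    H_W, H_R = 0.2, 0.3                       # placeholder physical/robot-induced workloads
    C_P = C_P_max - alpha * C_W - beta * H_W - gamma * H_R   # Equation (6)
```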

4.3 Human Operator’s Performance Model

Considering the human physical performance in Equation (3) and the cognitive performance in Equation (6), the overall human performance can be calculated as follows:
\begin{align} H_{P}(t)=aP_{P}(t)+bC_{P}(t),\end{align}
(9)
where \(a\) and \(b\) are positive real coefficients that respectively determine the proportions of the human physical and cognitive performances in the overall performance, with \(a+b\leq 1\). The values of these two parameters depend upon the nature of the work and lie in \([0,1]\).

5 Proposed Human Trust Modeling

Human trust in the robots within an HRC setting relies upon his/her perception of the workspace. This perception is based on his/her beliefs, the nature and complexity of the tasks to be performed, the comparative performance of the human and the robots in handling the tasks, and the required collaboration level for performing those tasks. If the performance of the robots is low with respect to the human performance, the associated trust in the robots’ operation is deemed to be low, and the human instructs the robots to properly achieve the appointed tasks and improve their performance. Following the commonly used terminology in the literature [18], we have classified the robots’ operation, over normalized values between zero and one, into four distinct regions, namely, the unpredictable region, predictable region, dependable region, and faithful region. These regions are discussed in the following definitions:
Definition 1
The unpredictable region, \(s_{p^{\prime}}\), is associated with the region in the trust buildup process in which the performance of the robots is much lower than a minimum expected level, and hence, the value of trust is deemed to be zero. In this region, the human operator guides the robots through each action that is required to handle the tasks. The unpredictable region can be mathematically represented as
\begin{align} s_{p^{\prime}}=\{R_{P}(t) {}| {}R_{P}(t) < f_{P}\},\end{align}
(10)
where \(f_{P}\) is a threshold, below which human trust in the operation of the robots is zero. The threshold \(f_{P}\) is later defined as a function of human performance.
Definition 2
The predictable region, \(s_{p}\), is associated with the region in the trust buildup process in which the initial trust of the human toward the robots is at a constant level. In this region, the performance of the robots is observed to be comparatively better than in the unpredictable region, but it is still lower than the desired performance. Here, the human operator develops a basic level of trust in the operation of the robots, i.e., the value of trust is fixed at \(\epsilon\) for the predictable region. In this region, the human operator provides fewer instructions to the robots for performing actions to handle the tasks compared to the unpredictable region. The predictable region can be mathematically represented as
\begin{align} s_{p}=\{R_{P}(t) {}| {}f_{P}\leq R_{P}(t) < f_{D}\},\end{align}
(11)
where \(f_{D}\) is a threshold, below which the human trust in the robots’ operation remains at a small constant value. The threshold \(f_{D}\) is later defined as a function of human performance.
Definition 3
The dependable region, \(s_{d}\) , is associated with the region in the process of trust buildup in which human trust in the robots’ operation improves with the relative increase in the multi-robot performance. In this region, the performance of the robots is observed to be very close to the corresponding human performance, and hence, trust in the robots gradually increases. In the dependable region, the human operator provides fewer instructions to the robots for performing actions to handle the tasks compared to the predictable and unpredictable regions. The dependable region can be mathematically represented as
\begin{align} s_{d}=\{R_{P}(t) {}| {}f_{D}\leq {}R_{P}(t) < f_{F}\},\end{align}
(12)
where \(f_{F}\) is a value of threshold above which maximum trust in the robots’ operation is observed.
Definition 4
The faithful region, \(s_{f}\), is where the human trust in the robots is at the highest level, and the performance of the robots in this region is observed to be higher than a satisfactory level. Here, the human operator has maximum trust in the operation of the robots, i.e., the level of trust in the faithful region is one. In this region, the human operator provides rare/almost no instructions to the robots for performing actions to handle the workspace tasks. The faithful region can be mathematically represented as
\begin{align} s_{f}=\{R_{P}(t) {}| {}R_{P}(t)\geq f_{F}\}.\end{align}
(13)
Incorporating these definitions, the trust in an individual robot’s performance can be defined as follows:
\begin{align}T_{i}(t)=\begin{cases}0 &: R_{P_{i}}(t) < f_{P}\\ \epsilon &: f_{P}\leq R_{P_{i}}(t) < f_{D}\\ \min(1,\epsilon_{adj}+\tanh(c\Delta P)) &: f_{D}\leq R_{P_{i}}(t) < f_{F}\\ 1 &: R_{P_{i}}(t)\geq f_{F}\end{cases},\end{align}
(14)
where \(\Delta P=R_{P}(t)-f_{D}\) (with \(R_{P}\) replaced by \(R_{P_{i}}\) for the individual trust in Equation (14)), and \(\epsilon\), \(\epsilon_{adj}\), and \(c\) are constants that depend on the human operator. Extending this concept to multiple robots, the overall trust in a system of multiple robots can be modeled as
\begin{align}T(t)=\begin{cases}0 &: R_{P}(t) < f_{P}\\ \epsilon &: f_{P}\leq R_{P}(t) < f_{D}\\ \min(1,\epsilon_{adj}+\tanh(c\Delta P)) &: f_{D}\leq R_{P}(t) < f_{F}\\ 1 &: R_{P}(t)\geq f_{F}\end{cases}.\end{align}
(15)
Figure 2 shows the trust of the human operator in the robots’ operation in terms of the robots’ performance \(R_{P}(t)\). In the developed trust model in Equation (15), for low robots’ performance values, given by \(R_{P}(t) \lt f_{P}=\sigma H_{P}(t)\), with \(\sigma\) a constant value, the robots operate in the unpredictable region, \(s_{p^{\prime}}\), and the human trust in the robots’ operation is set to zero. When the robots’ performance is above or equal to \(f_{P}\) but below \(f_{D}=\rho H_{P}(t)\), with \(\rho\) a constant value and \(\rho \gt \sigma\), the robots operate in the predictable region, \(s_{p}\). While the robots operate in the predictable region, the human trust in the robots’ operation stays at a fixed level \(\epsilon\), which is a small constant value. Once the robots’ performance equals or exceeds \(f_{D}\), the robots’ operation enters the dependable region, \(s_{d}\), and the trust increases. Once the maximum trust value is attained, the robots’ operation transitions to the faithful region, \(s_{f}\), in which the human operator’s trust in the robots’ operation is at the highest level. In the trust model developed in Equation (15), trust is defined as a nonlinear function of \(R_{P}\), \(f_{P}\), and \(f_{D}\), where \(f_{P}\) and \(f_{D}\) are expressed in terms of the human operator’s performance, which dynamically changes with the human physical and cognitive performance values following Equation (9).
Fig. 2. The development of trust in a human–multi-robot collaboration scenario, where \(s_{p^{\prime}}\) is the zero-trust or unpredictable region, \(s_{p}\) is the constant-trust or predictable region, \(s_{d}\) is the trust-buildup or dependable region, and \(s_{f}\) is the mature or faithful trust region.
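A compact implementation of the piecewise trust model in Equations (14) and (15) is sketched below. The constants \(\epsilon\), \(\epsilon_{adj}\), \(c\), \(\sigma\), and \(\rho\) follow Table 1; since the article does not express \(f_{F}\) in closed form, it is treated here as an assumed fixed threshold.

```python
import math

eps, eps_adj, c_trust = 0.2, 0.2, 3.0   # epsilon, epsilon_adj, and c from Table 1
sigma, rho = 0.2, 0.7                   # threshold scaling factors from Table 1

def trust(R_P, H_P, f_F=0.95):
    """Equations (14)/(15): trust as a function of robot and human performance."""
    f_P, f_D = sigma * H_P, rho * H_P   # thresholds scale with human performance
    if R_P < f_P:                       # unpredictable region: no trust
        return 0.0
    if R_P < f_D:                       # predictable region: small constant trust
        return eps
    if R_P < f_F:                       # dependable region: trust grows with Delta P
        return min(1.0, eps_adj + math.tanh(c_trust * (R_P - f_D)))
    return 1.0                          # faithful region: maximum trust
```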

6 Simulation Scenario Setup

Consider the workspace scenario in Figure 3 for a manufacturing environment, in which a source conveyor brings three different types of objects into the workspace. The source conveyor is shared among all the collaborators (both the human operator and the multiple robots). Each robot should pick its assigned object from the source conveyor and deliver it to the correct destination conveyor. More specifically, “Robot 1,” “Robot 2,” and “Robot 3” should identify and pick Green, Red, and Blue objects from the “Source” conveyor and then transfer them to “Conveyor G,” “Conveyor R,” and “Conveyor B,” respectively. Each robot transfers only the specific type of object assigned to it from the source to the designated area, leaving all other object types on the source conveyor. The human operator provides corrective instructions (electroencephalogram, vocal, or push-button) to guide the robot(s) and correct the robots’ mistakes to ensure proper handling of the objects. Moreover, the human operator picks and inspects the objects and then places them back on the source conveyor. The human operator also physically adjusts the orientation of objects to help the robots pick and place the objects on the right destination conveyor.
Fig. 3. A multi-robot HRC scenario in which the human operator provides instructions to the individual robots to separate a specific (target) type of object from the “Source” conveyor and place it onto the corresponding robot’s conveyor.
For this HRC setting, Equations (1)–(15) capture the trust model for this manufacturing scenario. The parameters associated with the simulation are shown in Table 1. These parameters are calculated following the existing literature [33, 34, 38] and the trust model discussed in Sections 3–5.
Table 1. Performance-Aware Trust Parameters

Symbol of the Parameter | Value
\(\epsilon\) | \(0.2\)
\(\epsilon_{adj}\) | \(0.2\)
\(\gamma_{W}\) | \(1.0\)
\(\gamma_{R}\) | \(1.0\)
\(c\) | \(3\)
\(\sigma\) | \(0.2\)
\(\rho\) | \(0.7\)
\(\eta\) | \(0.2\)
\(\lambda\) | \(0.95\)
\(F_{th}\) | \(151.9\)
\(\mathrm{MVC}\) | \(200\)
The coefficients \(a\) and \(b\) from Equation (9), for the physical and cognitive performances of the human operator, respectively, vary across the operation regions as listed in Table 2. At lower trust levels, more physical engagement of the human operator is needed to compensate for the lower performance levels of the robots, and hence, a higher value of \(a\) and a lower value of \(b\) are used. As the trust in the robots increases, the physical engagement of the human operator reduces and the human operator’s cognitive supervision becomes sufficient to perform the tasks, and hence, a higher value of \(b\) and a lower value of \(a\) are used. Ideally, these numbers should be obtained through calibration of the model, which depends on the human operator and the specific settings of the workspace.
Table 2. Values of Coefficients for Human and Multi-Robot Performances in Equation (9) in Different Operation Regions

Operation Region | \(R_{P}(t)\) | \(a\) | \(b\)
Unpredictable region, \(s_{p^{\prime}}\) | \(\lt f_{P}\) | \(0.6\) | \(0.4\)
Predictable region, \(s_{p}\) | \([f_{P},f_{D})\) | \(0.4\) | \(0.6\)
Dependable region, \(s_{d}\) | \([f_{D},f_{F})\) | \(0.2\) | \(0.8\)
Faithful region, \(s_{f}\) | \(\geq f_{F}\) | \(0.1\) | \(0.9\)
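In code, Table 2 amounts to a small lookup from the current operation region to the mixing coefficients of Equation (9); a sketch of this mapping follows, with the caveat from the text that the values should ideally come from per-operator calibration.

```python
# Region-dependent coefficients (a, b) of Equation (9), taken from Table 2
REGION_COEFFS = {
    "unpredictable": (0.6, 0.4),
    "predictable":   (0.4, 0.6),
    "dependable":    (0.2, 0.8),
    "faithful":      (0.1, 0.9),
}

def human_performance(P_P, C_P, region):
    """Equation (9): overall human performance as a region-weighted mix of
    physical performance P_P and cognitive performance C_P."""
    a, b = REGION_COEFFS[region]
    return a * P_P + b * C_P
```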
The simulation operating period is 8 hrs, with samples taken every 0.5 hrs. The condition to continue the collaboration is that \(C_{P}(t)\) remains at least \(0.2\) at every instant; otherwise, the human operator cannot provide cognitive instructions and the collaboration stops. In addition, the complexity of the task is assumed to be fixed over time.

7 Simulation Results

To assess the developed trust model, we provide simulation results for two cases with different levels of task complexity for three robots that are operating at different performance levels. The two cases shown in Figure 4 demonstrate the impact of a low task complexity value \(c=0.3\) (left sub-figures) and a high task complexity value \(c=0.7\) (right sub-figures). For both cases, the performances of “Robot 1,” “Robot 2,” and “Robot 3” are assumed to be \(0.7\), \(0.5\), and \(0.3\), and their task assignment probabilities, \(P_{o_{i}}\), are \(0.5\), \(0.3\), and \(0.2\), respectively. The rationale behind having different performance values for different robots is that in a manufacturing setting, depending on the mechanical and maintenance conditions of the robots and the reliability of their software and perception for handling the respective designated objects, they may operate at different performance levels. These numbers also represent low-, medium-, and high-performance levels to accommodate a diverse group of robots. With these parameters, the overall robot performance for the three robots, based on the task arrival rate given by Equation (2), is \(0.56\). As the individual robot performances and the associated task assignment probabilities are fixed, the multi-robot performance is fixed throughout the operation period. On the other hand, the human performance changes as both the human physical and/or cognitive performance vary over time. The contribution of human physical activity stays the same for handling the same task, and a gradual decay in human physical performance is observed due to muscle fatigue over time. If the multi-robot performance value is lower than the human performance, the human operator makes a larger cognitive contribution to improving the multi-robot performance, which degrades the human operator’s cognitive performance. All these changes in human performance, along with the contributions of physical and cognitive performance, are captured using Equation (9) with the coefficients in Table 2. It can be observed in Figure 4 that for both cases, as the relative performance difference between the human operator and the robots decreases, the human trust in the robots’ operation improves.
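As a quick arithmetic check, the reported overall performance of \(0.56\) coincides with the assignment-probability-weighted combination of the fixed individual robot performances:
\begin{align}R_{P}=\sum_{i=1}^{3}P_{o_{i}}R_{P_{i}}=0.5(0.7)+0.3(0.5)+0.2(0.3)=0.56.\end{align}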
Fig. 4. Simulation results for an HRC with fixed performance of robots: (a and b) human workload; (c and d) human performance; and (e and f) trust in “Robot 1,” “Robot 2,” “Robot 3,” and the multi-robot team. The task complexity is set to \(c=0.3\) for the left sub-figures and \(c=0.7\) for the right sub-figures.
When the value of task complexity is \(0.3\), as shown in Figure 4(a), the human operator’s workload \(C_{W}(t)\) increases over time due to the robots’ mistakes (the green, red, and blue lines represent the workloads that are added due to mistakes by “Robot 1,” “Robot 2,” and “Robot 3,” respectively, and the black line represents the overall workload). The increase in human workload, in turn, reduces the overall human performance, \(H_{P}(t)\), over time, as shown in Figure 4(c).
Figure 4(e) shows the human trust in the multi-robot operation (the black line) as well as the trust in the individual robots’ performances (the green, red, and blue lines for the trust in the performance of “Robot 1,” “Robot 2,” and “Robot 3,” respectively). In Figure 4(e), the human trust buildup in the multi-robot operation starts when the multi-robot is operating in the predictable region, and hence, the value of human trust is \(0.2\) at \(t=0\) hrs. After this, due to the further degradation in human performance, at \(t=0.5\) hrs, when \(H_{P}(t)=0.52\), we have \(f_{D}=\rho H_{P}\leq R_{P}\), and the HRC enters the dependable region. Then, over the time interval \([0.5,8]\) hrs, the trust increases as a function of \(\Delta P\) (the margin of the multi-robot performance above the threshold \(f_{D}\)) while the human operator’s performance keeps degrading. At \(t=8\) hrs, the trust level reaches \(T=0.86\). A similar pattern can be seen for the trust in an individual robot’s operation, with slight differences depending on the comparative performance of the human operator and the robot.
A similar trend for the human workload and performance and the developed trust can be seen in the second case (right sub-figures of Figure 4), where the complexity of the task is increased to \(0.7\). The impact of task complexity on the workload and performance of the human operator can be examined by comparing the simulation results in Figure 4(a) and (c) with Figure 4(b) and (d), which have similar values for the robots’ performances. The results show that for a constant multi-robot performance, the human performance, \(H_{P}(t)\), degrades further while the workload increases at the corresponding time instants for a higher task complexity (when the task complexity increases from \(c=0.3\) to \(c=0.7\)). More specifically, due to the involved fatigue and the higher human utilization factor associated with handling tasks of higher complexity, the human performance degrades relatively faster in Figure 4(d) than in Figure 4(c). This results in the overall robot performance exceeding the thresholds \(f_{D}=\rho H_{P}\) and \(f_{F}\) earlier, which in turn transitions the trust in the robots’ operation to the dependable and faithful regions faster, as shown in Figure 4(f). At \(t=2\) hrs, the trust level in the multi-robot operation reaches \(T=1\), and hence, the HRC enters the faithful region. At this point, the human operator has the highest trust in the robots’ operation. An almost similar pattern of change in the cognitive workload and trust can be seen for the operation of the individual robots, with slight differences depending on the comparative performance of the human operator and the robot. Note that, as can be observed from the simulation results for \(c=0.7\) (right sub-figures of Figure 4), the simulation was terminated at \(t=5.5\) hrs. This is because the human cognitive performance degraded below the threshold value of \(0.2\), below which the human is assumed to be exhausted and unable to continue the collaboration.

8 Analyzing the Impact of Robots’ Learning on Human Trust in Robots’ Operation

A hypothetical learning process can help the robots learn from their past mistakes and improve their performance over time. Consider a scenario in which initially there is no utilization of the human operator (i.e., the utilization factor and the workload of the human operator are zero). This means that the human performance is at its highest level at the initial stage of the operation. During the operation, human utilization increases due to the assigned tasks as well as the workload added by the robots’ mistakes. However, throughout the operation, the robots can learn from their mistakes, and hence, as the robots make progress in hypothetical learning, lower human utilization is required over time. Therefore, the human workload increases at the beginning of the operation and later starts reducing once the robots’ performances have improved enough to demand lower human operator utilization. This is due to the fact that as the robots’ performances improve, the workload added due to the robots’ mistakes, \(H_{R}\), the human physical workload, \(H_{W}\), and the cognitive workload, \(C_{W}\), decrease, and hence, the human operator’s physical and cognitive performances improve. While the human performance decays initially, the human trust in the robots’ operation improves because the robots’ performances are improving. However, the increase in the robots’ performance reduces the human workload, which in turn decreases the relative performance difference between the human and the robots, and consequently, the human trust in the robots’ operation can decrease.
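The article does not commit to a particular learning law, so as one hedged illustration, the hypothetical learning capability can be emulated by letting each robot’s mistake probability decay exponentially over time, which in turn raises its performance through Equation (1):

```python
import math

def mistake_probability(P_mR0, k, t):
    """Assumed learning law (for illustration only): the mistake probability of a
    robot decays exponentially from P_mR0 at a robot-specific learning rate k."""
    return P_mR0 * math.exp(-k * t)

# e.g., three robots starting with the same mistake rate but learning at
# different (assumed) speeds, sampled every 0.5 hrs over 8 hrs
for step in range(0, 17):
    P_mR = [mistake_probability(0.9, k, step) for k in (0.5, 0.3, 0.2)]
```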
To assess the impact of the hypothetical learning capability on human performance and human trust in the robots’ operation in an HRC setting with multiple robots, we used a simulation setup similar to the one described in Section 6. Here, it is assumed that the multi-robot performance is initially zero but increases over time via an embedded hypothetical learning mechanism, as shown in Figure 5. Therefore, the multi-robot operation is initiated in the unpredictable region. As the hypothetical learning process aids in improving the robots’ performances, the robots quickly improve and successively transition their operation through the predictable, dependable, and faithful regions, resulting in higher trust in their operation. Figure 6 shows the impact of the hypothetical learning process on the human operator’s performance and trust development for two different task complexity values, i.e., \(c=0.3\) and \(c=0.7\).
Fig. 5. Performance of “Robot 1,” “Robot 2,” “Robot 3,” and the multi-robot team when they are equipped with hypothetical learning capabilities.
Fig. 6. Simulation results for an HRC when the robots are equipped with hypothetical learning capabilities: (a and b) human workload; (c and d) human performance; and (e and f) trust in “Robot 1,” “Robot 2,” “Robot 3,” and the multi-robot team. The task complexity is set to \(c=0.3\) for the left sub-figures and \(c=0.7\) for the right sub-figures.
From Figure 6(a) and (b), it is observed that the human operator’s workload initially increases, but once the robots have attained a significant improvement in their performance via the embedded hypothetical learning mechanism, the human operator’s workload starts reducing. In a similar pattern, as can be seen in Figure 6(c) and (d), the overall human performance, \(H_{P}\), degrades initially as the human workload increases. In the meantime, the robots improve their performances, which reduces the human operator’s utilization. This eventually leads to a point where the human performance starts improving, at \(t=1.5\) hrs and \(t=3\) hrs as shown in Figure 6(c) and (d), respectively. The jumps in the human performance improvement are due to the changes in the robots’ operation region (unpredictable, predictable, dependable, and faithful), with the different values of the coefficients \(a\) and \(b\) in Equation (9) that are listed in Table 2.
Figure 6(e) and (f) show the evolution of trust in the robots’ operation. In both figures, the multi-robot operation is initiated in the unpredictable region. For the case when the task complexity is set to \(c=0.3\), as shown in Figure 6(e), when \(H_{P}(t)=0.53\), we have \(f_{P}=\sigma H_{P}\leq R_{P}\), and the multi-robot operation transitions to the predictable region. Then, over the time interval \((0.5,2.5)\) hrs, the trust improves as a function of \(\Delta P\), as the human performance keeps degrading while the robots’ performance is improving. At \(t=2.5\) hrs, the trust level reaches \(T=1\), and hence, the HRC enters the faithful region. From this point onward, the human operator has maximum trust in the robots’ operation. A similar pattern can be seen for the trust in the operation of an individual robot, with slight differences depending on the comparative performance of the human operator and the robot. For example, the human trust in the operation of “Robot 1” initially increases and later decreases. The reason is that the hypothetical learning capability of “Robot 1” is higher compared to the other robots, and so its contribution to the total task completion is initially high. As a result, the human trust in “Robot 1” increases faster and reaches a higher trust level earlier. However, when the other robots improve their performances, their contributions to the total task completion improve significantly compared to “Robot 1,” although the tasks are independent. Moreover, the increase in the robots’ performances decreases the human workload, which in turn improves the human performance and decreases \(\Delta P\). Thus, the human trust in “Robot 1” decreases, while the human trust in the multi-robot performance increases or stays at the highest trust level. The initial period of high human trust in “Robot 1” indicates that the overall team performance initially depended heavily on “Robot 1”; the later decay of trust in the operation of “Robot 1” alongside the increase in trust in the multi-robot operation indicates the strong teaming nature of this HRC setting. The same argument also applies to the trust in the robots when the task complexity increases to \(c=0.7\), as shown in Figure 6(f).
Comparing the trust development patterns for task complexities \(c=0.3\) and \(c=0.7\) in Figure 6, it can be observed that for the same hypothetical learning rate, a higher task complexity speeds up the trust buildup process. The reason is that the human workload increases faster, and hence the relative difference between the human performance and the robots’ performance decreases more quickly, resulting in faster development of trust in the robots’ operation.
Comparing the right sub-figures of Figures 4 and 6, it is noted from Figure 6 that the simulation continued up to \(t=8\) hrs. This is because the decay rate of the human performance is now smaller due to the increase in the multi-robot performance, which keeps the collaboration going throughout the period.
The developed code for this simulation is available at https://github.com/ACCESSLab/Performance-aware-Trust-Modeling-for-HRCs.git. From these results, it can be observed that the developed model is capable of explaining the changes in trust over time due to changes in different factors such as the performances of the collaborators, the complexity of tasks, and the workload distribution, as well as the impact of learning capabilities, particularly in an HRC setting with multiple robots. The developed model, though, needs to be calibrated before being deployed in a practical scenario of interest.

9 Experimental Results

9.1 Experimental Setup

A software-based collaborative workspace was developed to conduct the HRC experiment. In the developed workspace, two robots were considered, each implemented as a trained object classifier (e.g., an AI model) with a certain performance level. The performance of each robot is measured through object detection followed by correct classification. The performance levels of the two robots, i.e., “Robot 1” and “Robot 2,” were \(0.5\) and \(0.6\), respectively. The collaborative goal of the human operator and the robots was to classify the objects correctly. The robots’ task was to take the initial turn in identifying the objects and then share the classified images with the human collaborator. The human collaborator, in turn, was tasked with checking and correcting the robots’ operation by marking the objects that were incorrectly classified in each image. At the same time, we collected data on the human trust level regarding the robot’s performance after each interaction using a form. The two robots operate independently; at any given time, the participant interacts with only one robot, with the robots taking turns interacting with the operator. The process starts with “Robot 1” and then moves to “Robot 2,” and the experiment continues in this alternating fashion until the total number of images is reached. The experiment involved handling 200 images, i.e., 100 images per robot. In this experiment, the human collaborator could check the robots’ performance and provide feedback within a limited time for each image, depending on the number of objects in the image (\(0.8\) seconds per object). The allocated limited timeframe makes the task challenging for the human operator while ensuring that it remains feasible to provide feedback and corrections on the robots’ tasks. The performance of both the human operator and the robots was measured in terms of the correct classification of the objects in the images. Due to limitations in available public datasets, and to ensure that the participants possessed the necessary knowledge for correcting the robots’ tasks, we utilized a general-purpose dataset. However, this does not restrict the applicability of the developed tool and concept to a specific context. A snapshot of the software-based collaborative workspace is illustrated in Figure 7.
Fig. 7. The developed software-based workspace for HRC, where each robot is an object classifier and shares the images of classified objects with the human collaborator, and the human collaborator helps the robots by marking the incorrectly identified objects. Also, the human collaborator provides their trust in the robot’s performance after each interaction with the robot.

9.2 Participants’ Details

A total of 10 participants (8 males and 2 females) took part in the experiment as human collaborators. The participant pool in this study consisted of undergraduate and graduate students (3 participants), academic faculty (3 participants), and industry professionals (4 participants) in the US. The subjects were in the age range of 25–65 (25–35: 3 participants; 36–45: 3 participants; 46–55: 2 participants; and 56–65: 2 participants).

9.3 Procedure

Upon arrival, each participant was greeted, briefed on the purpose of the study, and provided an opportunity to ask questions. After the participants signed the informed consent, they were given a training session to familiarize themselves with the study settings. The subjects joined the experiment remotely through a virtual collaboration platform (Zoom) and used a user-friendly graphical user interface. The experiment started with a sample trial (not counted toward the experimental results) to ensure that each subject fully understood the experiment and the expectations. The subjects were instructed to participate in the Zoom session with a functional camera, and their engagement with the assigned tasks was closely observed throughout the experiment. The subjects were advised to complete any other pending tasks, and to eat or use the restroom if needed, before starting the experiment. All this advice was provided to ensure that the subjects executed the experiment with thorough concentration. Each subject was given 0.8 seconds per object in the image to respond and make corrections, after which the display automatically transitioned to the next image, provided by the other robot. This time enforcement was another way to keep the participants engaged and push them to their performance limit. Each participant was asked to provide their trust level in the robots on a scale of \([0,1]\) using a form; they were advised to start with a low (zero) level but were informed that they could adjust their trust level during the experiment as they observed the robot performance. The entire experiment lasted about 60 minutes per subject. Upon completion of the experiment, the participant was debriefed and thanked for their participation.

9.4 Experimental Results

The data, including the robot and human performance as well as the time progress of the tasks, were automatically recorded by the developed software. The average performance of the human subjects while interacting with “Robot 1” and “Robot 2,” as well as the overall performance of the human operator in the multi-robot HRC setting, are shown in Figure 8. The corresponding average trust levels provided by the human subjects in the performances of “Robot 1” and “Robot 2,” as well as the overall trust of the human operator in the multi-robot setup, are shown in Figure 9. Since the performance of “Robot 2” (\(0.6\)) was slightly higher than that of “Robot 1” (\(0.5\)) for handling similar tasks, the average performance of the human collaborator decreased more slowly when collaborating with “Robot 2” than with “Robot 1.” Additionally, Figure 9 illustrates that the human trust in “Robot 2” exceeds that in “Robot 1.” These findings exhibit patterns similar to those observed in the simulation results depicted in Figure 4, where fixed robot performance values were considered.
Fig. 8. The average change of human performance when collaborating with “Robot 1,” “Robot 2,” and the multi-robot team within the developed software-based HRC setting.
Fig. 9. The average change of human trust in the performance of “Robot 1,” “Robot 2,” and the multi-robot team within the developed software-based HRC setting.

9.5 Model Calibration

As observed in the previous section, the pattern of the experimental results is similar to that of the simulation results. To closely reproduce the experimental results, the model needs to be calibrated for this experimental setup. With the model parameters set as provided in Tables 2 and 3, the simulation results closely follow the experimental results, as shown in Figures 10 and 11.
Table 3. Calibrated Performance-Aware Trust Parameters

Symbol of the Parameter | Value
\(\epsilon\) | \(0\)
\(\epsilon_{adj}\) | \(-0.2\)
\(\gamma_{R}\) | \(1.0\)
\(c\) | \(5\times 10^{-4}\)
\(\sigma\) | \(0.2\)
\(\rho\) | \(0.7\)
\(\eta\) | \(0.5\)
\(\lambda\) | \(0.95\)
Fig. 10. The human performance comparison between the simulation results and the experimental results.
Fig. 11. The human trust comparison between the simulation results and the experimental results.

10 Conclusion

In this research work, a novel time-driven trust model was developed that considers the collaborators’ performance within a human–multi-robot collaboration framework. The developed model combined a human model, in terms of physical and cognitive performance, with a multi-robot performance model to quantify trust in the robots’ operation. The human performance model was developed in terms of the physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload, cognitive utilization factor, workload due to the robots’ mistakes, and task complexity. The proposed multi-robot performance model was developed in terms of the rates of task assignment and completion, the task assignment probabilities, and the mistake probabilities of the individual robots. The trust model was investigated in a manufacturing workspace scenario to show the trust development over four different robot operation regions, namely, the unpredictable region, predictable region, dependable region, and faithful region. The results calculated from the mathematical models showed that the human trust in the robots’ operation improved as the comparative performance of the robots with respect to the human performance increased. The proposed model was also evaluated for the impacts of the hypothetical learning capabilities of the robots on the human trust in the robots’ operation. The results showed that the robots’ hypothetical learning capabilities reduced the human operator’s workload and thereby improved the human operator’s performance, which in turn enhanced the trust in the robots’ operation; this was more evident for tasks with higher complexities. In addition, robots with hypothetical learning capabilities perform better than those without, leading to less human workload and improved human performance values, resulting in faster development of trust in the multi-robot operation. Furthermore, an empirical study was conducted by developing a software-based HRC setting to validate the results of the proposed mathematical model in terms of capturing the human operator’s trust in robot performance. The experimental results validated the model and followed the same patterns as the output of the developed model. Further, the developed model can capture the collaboration of a human operator with single or multiple robots. A future direction of this research is to extend the developed model to the case of multiple human operators jointly collaborating with multiple robots, which may require incorporating both human-to-human and human-to-robot collaboration factors into the model. The developed model can also be further expanded to include the impact of other factors that potentially contribute to shaping the human operator’s trust in robots, such as transparency, predictability, user experience, and so forth.

Acknowledgments

The authors would like to acknowledge the support of Dr. Tesfamichael Agidie Getahun of the ACCESS Laboratory at North Carolina A&T State University in developing the software-based HRI platform.

Footnote

1. The time-driven nature of the model in this context refers to the evolution of the model with respect to time, as opposed to event-driven models, which evolve upon occurrences of sequences of events (see [4] for more details).
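For readers unfamiliar with the distinction, the following minimal sketch contrasts a time-driven update loop with an event-driven one. Both loop structures and the callbacks are illustrative assumptions rather than the model's implementation.

```python
# Time-driven: the state is advanced at every fixed time step dt.
def simulate_time_driven(state, update, dt=0.1, t_end=10.0):
    t = 0.0
    while t < t_end:
        state = update(state, dt)  # state evolves as a function of time
        t += dt
    return state

# Event-driven: the state changes only when a discrete event occurs.
def simulate_event_driven(state, events, handle):
    for event in events:  # e.g., "task_assigned", "robot_mistake"
        state = handle(state, event)
    return state

# Usage: exponential decay advanced in time vs. a simple event counter.
final = simulate_time_driven(1.0, lambda s, dt: s * (1 - 0.5 * dt))
count = simulate_event_driven(0, ["task_assigned", "robot_mistake"],
                              lambda s, e: s + 1)
```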

References

[1]
Hebert Azevedo-Sa, X. Jessie Yang, Lionel Robert, and Dawn Tilbury. 2021. A unified bi-directional model for natural and artificial trust in human-robot collaboration. IEEE Robotics and Automation Letters 6, 3 (2021), 5913–5920.
[2]
Andrea Bauer, Dirk Wollherr, and Martin Buss. 2008. Human–robot collaboration: A survey. International Journal of Humanoid Robotics 5, 01 (2008), 47–66.
[3]
Tom Bridgwater, Manuel Giuliani, Anouk van Maris, Greg Baker, Alan Winfield, and Tony Pipe. 2020. Examining profiles for robotic risk assessment: Does a robot’s approach to risk affect user trust? In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20). ACM, 23–31.
[4]
Christos G. Cassandras. 2005. Discrete-event systems. In Handbook of Networked and Embedded Control Systems. Dimitrios Hristu-Varsakelis and William S. Levine (Eds.), Springer, 71–89.
[5]
Jessie Y. C. Chen and Michael J. Barnes. 2014. Human–agent teaming for multirobot control: A review of human factors issues. IEEE Transactions on Human–Machine Systems 44, 1 (2014), 13–29.
[6]
Jessie Y. C. Chen and Michael J. Barnes. 2015. Agent transparency for human-agent teaming effectiveness. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC '15). IEEE, 1381–1385.
[7]
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2020. Trust-aware decision making for human–robot collaboration: Model learning and planning. ACM Transactions on Human–robot Interaction (THRI ’20) 9, 2 (2020), 1–23.
[8]
Houston Claure, Yifang Chen, Jignesh Modi, Malte Jung, and Stefanos Nikolaidis. 2019. Reinforcement learning with fairness constraints for resource distribution in human-robot teams. arXiv:1907.00313. Retrieved from http://arxiv.org/abs/1907.00313
[9]
Mary L. Cummings, Jonathan P. How, Andrew Whitten, and Olivier Toupet. 2011. The impact of human–automation collaboration in decentralized multiple unmanned vehicle control. Proceedings of the IEEE 100, 3 (2011), 660–671.
[10]
Christopher Deligianis, Christopher John Stanton, Craig McGarty, and Catherine J. Stevens. 2017. The impact of intergroup bias on trust and approach behaviour towards a humanoid robot. Journal of Human–Robot Interaction 6, 3 (2017), 4–20.
[11]
Yassen Dobrev, Tatiana Pavlenko, Johanna Geiß, Melanie Lipka, Peter Gulden, and Martin Vossiek. 2019. A 24-GHz wireless locating system for human–robot interaction. IEEE Transactions on Microwave Theory and Techniques 67, 5 (2019), 2036–2044.
[12]
Seyed A. Fayazi, Nianfeng Wan, Stephen Lucich, Ardalan Vahidi, and Gregory Mocko. 2013. Optimal pacing in a cycling time-trial considering cyclist’s fatigue dynamics. In Proceedings of the American Control Conference (ACC ’13). IEEE, 6442–6447.
[13]
Adriana Hamacher, Nadia Bianchi-Berthouze, Anthony G. Pipe, and Kerstin Eder. 2016. Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical human-robot interaction. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN ’16). IEEE, 493–500.
[14]
Caroline E. Harriott and Julie A. Adams. 2013. Modeling human performance for human–robot systems. Reviews of Human Factors and Ergonomics 9, 1 (2013), 94–130.
[15]
Adam J. Hepworth, Daniel P. Baxter, Aya Hussein, Kate J. Yaxley, Essam Debie, and Hussein A. Abbass. 2020. Human-swarm-teaming transparency and trust architecture. IEEE/CAA Journal of Automatica Sinica 8, 7 (2020), 1281–1295.
[16]
Laura M. Hiatt, Cody Narber, Esube Bekele, Sangeet S. Khemlani, and J. Gregory Trafton. 2017. Human modeling for human–robot collaboration. The International Journal of Robotics Research 36, 5–7 (2017), 580–596.
[17]
Makoto Itoh and Marie-Pierre Pacaux-Lemoine. 2018. Trust view from the human–machine cooperation framework. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC ’18). IEEE, 3213–3218.
[18]
Makoto Itoh and Kenji Tanaka. 2000. Mathematical modeling of trust in automation: Trust, distrust, and mistrust. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 44. SAGE Publications, Los Angeles, CA, 9–12.
[19]
Bing C. Kok and Harold Soh. 2020. Trust in robots: Challenges and opportunities. Current Robotics Reports 1 (2020), 1–13.
[20]
Daphne Koller and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.
[21]
Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan P. Losey, and Dorsa Sadigh. 2020. When humans aren't optimal: Robots that collaborate with risk-aware humans. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '20). ACM, 43–52.
[22]
Minae Kwon, Sandy H. Huang, and Anca D. Dragan. 2018. Expressing robot incapability. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18). ACM, 87–95.
[23]
Joshua Lee, Jeffrey Fong, Bing C. Kok, and Harold Soh. 2020. Getting to know one another: Calibrating intent, capabilities and trust for human-robot collaboration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’20). IEEE, 6296–6303.
[24]
Hongyi Liu and Lihui Wang. 2020. Remote human–robot collaboration: A cyber–physical system application for hazard manufacturing environment. Journal of Manufacturing Systems 54 (2020), 24–34.
[25]
Liang Ma, Damien Chablat, Fouad Bennis, and Wei Zhang. 2009. A new simple dynamic muscle fatigue model and its validation. International Journal of Industrial Ergonomics 39, 1 (2009), 211–220.
[26]
Bonnie M. Muir. 1994. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37, 11 (1994), 1905–1922.
[27]
Bonnie M. Muir and Neville Moray. 1996. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39, 3 (1996), 429–460.
[28]
Saeid Nahavandi. 2017. Trusted autonomy between humans and robots: Toward human-on-the-loop in robotics and autonomous systems. IEEE Systems, Man, and Cybernetics Magazine 3, 1 (2017), 10–17.
[29]
Changjoo Nam, Huao Li, Shen Li, Michael Lewis, and Katia Sycara. 2018. Trust of humans in supervisory control of swarm robots with varied levels of autonomy. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC ’18). IEEE, 825–830.
[30]
Kazuo Okamura and Seiji Yamada. 2020. Adaptive trust calibration for human-AI collaboration. PLoS One 15, 2 (2020), e0229132.
[31]
Harley Oliff, Ying Liu, Maneesh Kumar, Michael Williams, and Michael Ryan. 2020. Reinforcement learning for facilitating human–robot-interaction in manufacturing. Journal of Manufacturing Systems 56 (2020), 326–340.
[32]
Md K. M. Rabby, Ali Karimoddini, Mubbashar A. Khan, and Steven Jiang. 2022. A learning-based adjustable autonomy framework for human–robot collaboration. IEEE Transactions on Industrial Informatics 18, 9 (2022), 6171–6180.
[33]
Md K. M. Rabby, Mubbashar Khan, Ali Karimoddini, and Steven X. Jiang. 2019. An effective model for human cognitive performance within a human–robot collaboration framework. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC ’19). IEEE, 3872–3877.
[34]
Md K. M. Rabby, Mubbashar A. Khan, Ali Karimoddini, and Steven X. Jiang. 2020. Modeling of trust within a human–robot collaboration framework. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC ’20). IEEE, 4267–4272.
[35]
S. M. Mizanoor Rahman, Behzad Sadrfaridpour, and Yue Wang. 2015. Trust-based optimal subtask allocation and model predictive control for human-robot collaborative assembly in manufacturing. In Dynamic Systems and Control Conference, Vol. 57250. American Society of Mechanical Engineers, V002T32A004.
[36]
Gunar Roth, Axel Schulte, Fabian Schmitt, and Yannick Brand. 2019. Transparency for a workload-adaptive cognitive agent in a manned-unmanned teaming application. IEEE Transactions on Human–Machine Systems 50, 3 (2019), 225–233.
[37]
Behzad Sadrfaridpour. 2018. Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing. Doctoral Dissertation, Clemson University.
[38]
Behzad Sadrfaridpour, Jenny Burke, and Yue Wang. 2014. Human and robot collaborative assembly manufacturing: Trust dynamics and control. In RSS 2014 Workshop on Human–Robot Collaboration for Industrial Manufacturing. Springer, 1–6.
[39]
Hamed Saeidi, Justin D. Opfermann, Michael Kam, Sudarshan Raghunathan, Simon Léonard, and Axel Krieger. 2018. A confidence-based shared control strategy for the smart tissue autonomous robot (STAR). In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’18). IEEE, 1268–1275.
[40]
Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’15). IEEE, 1–8.
[41]
Sarah Sebo, Brett Stoll, Brian Scassellati, and Malte F. Jung. 2020. Robots in groups and teams: A literature review. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–36.
[42]
Karolina Thorsén and Anna Lindström. 2024. Trust in human-computer relationships: Do cross country skiers have trust towards a physical intelligent tutoring system as an accurate feedback on performance? Bachelor's Thesis. Umeå Universitet.
[43]
Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’16). IEEE, 109–116.
[44]
Travis J. Wiltshire and Stephen M. Fiore. 2014. Social cognitive and affective neuroscience in human–machine systems: A roadmap for improving training, human–robot interaction, and team performance. IEEE Transactions on Human–Machine Systems 44, 6 (2014), 779–787.
[45]
Bo Wu, Bin Hu, and Hai Lin. 2017. Toward efficient manufacturing systems: A trust based human robot collaboration. In Proceedings of the American Control Conference (ACC '17). IEEE, 1536–1541.
[46]
Anqi Xu and Gregory Dudek. 2015. OPTIMO: Online probabilistic trust inference model for asymmetric human-robot collaborations. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’15). IEEE, 221–228.
[47]
Rosemarie E. Yagoda and Michael D. Coovert. 2012. How to work and play with robots: An approach to modeling human–robot interaction. Computers in Human Behavior 28, 1 (2012), 60–68.
[48]
Xi J. Yang, Vaibhav V. Unhelkar, Kevin Li, and Julie A. Shah. 2017. Evaluating effects of user experience and system transparency on trust in automation. In Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). IEEE, 408–416.
[49]
Sierra N. Young and Joshua M. Peschel. 2020. Review of human–machine interfaces for small unmanned systems with robotic manipulators. IEEE Transactions on Human–Machine Systems 50, 2 (2020), 131–143.
