Human Understanding of Robot Motion: The Role of Velocity and Orientation

A general problem in human–robot interaction is how to evaluate the quality of a single robot behavior in order to develop robust and human-acceptable skills. The most typical approach is user tests with subjective measures (questionnaires). We propose a new experimental paradigm that combines subjective measures with an objective behavioral measure, namely viewing times of images presented as a self-paced slide show. We applied this paradigm to human-aware robot navigation. In three experiments, we studied the influence of two aspects of robot motion: velocity profiles and the robot’s orientation. A decreasing velocity profile influenced the predictability of the observed motion, and robot orientations diverting from the robot’s motion vector caused reduced perceived autonomy ratings. We conclude that the viewing time paradigm is a promising tool for studying human-aware robot behavior and that the design of human-aware robot navigation needs to consider both the velocity and the orientation of robots.

whereas naturalness is also needed in situations where a robot can be observed by a person, even without direct contact.
In this paper, we study the naturalness and perceived autonomy of robot motion. Both contribute to an overall understanding by humans. We introduce a new experimental paradigm for measuring naturalness and apply it to two aspects of robot motion, namely velocity and orientation. We study isolated robot movement, i.e. without direct interaction with a person. In such a context, a common approach is the use of video material [28] and to measure naturalness with questionnaires [17,28], such as the Godspeed V questionnaire [3].
As an alternative to using questionnaires, Lu and Smart [18] offered an interesting implicit measure of naturalness: the efficiency of humans moving in the same environment as the robot. The idea is that if the robot moves naturally, the person will feel comfortable and move normally, whereas unnatural robot movement will make people more cautious and slow them down. But this approach requires a fully implemented navigation system with perception on a real robot, involves the effort of organising participants, and raises the difficulty of guaranteeing their safety. Additionally, the robot must work so robustly that it shows exactly the same behavior for every participant. Differences in lighting or small variations in timing can lead to very different user perceptions of the same robot.
If more general properties underlying human understanding of robot motion could be identified, a generalization to a wide range of robots and human-aware robot navigation scenarios might be possible. Saerbeck and Bartneck [25] examined the influence of navigation behavior on perceived affect. Kim and Pineau [12] tested their adaptive path planning algorithm with objective metrics such as "closest distance to the pedestrian, avoidance distance to the pedestrian, and average time to reach the goal." But the first two only apply to scenarios with human presence, whereas the third is a standard measure for navigation, but without human-awareness. Kirsch [13] evaluated the naturalness of robot motion by considering the direction of the robot relative to the movement direction, thus measuring the portion of time the robot moves sideways or backwards. But the only justification for this measurement is the informal observation that people usually move in a forward direction. It is possible that sideways or backwards movements are acceptable from a robot, even though people would not move like this themselves.
The application of general properties of robot motion to human-aware navigation requires the systematic study of human understanding of motion properties instead of the definition of such motion properties based on intuition only. However, studying human understanding of such properties using subjective measures based on questionnaires alone is not enough, because questionnaires are prone to biased responses. Thus, we introduce the viewing time paradigm and its application to the study of human-aware behavior in the following section. This paper makes two contributions: 1. The introduction of a new experimental paradigm for measuring naturalness of human-aware robot behavior. 2. The presentation of experimental results on observers' understanding considering two features of robot motion: velocity profiles and robot orientation relative to the movement direction.
The paper is organized as follows: Sect. 2 introduces the viewing time paradigm as an objective measure of naturalness of robot motion, providing a blueprint for an experimental setup. Section 3 uses this setup, combined with a subjective measure of perceived autonomy, to study the effect of two properties of robot motion on human understanding of robot motion: Experiment 1 examines the effect of different velocity profiles, and Experiment 2 examines the role of robot orientation relative to the movement direction. Experiment 3 studies both properties together. We conclude with a general discussion of the results, the limitations of the current work and future extensions of it.

Viewing Time Paradigm for Human-Aware Robot Behavior
Objective measurements of human understanding of robot motion are necessary in order to fully evaluate both human-aware robot behavior in general and the naturalness of robot motion in particular. We propose a new method of measuring the naturalness of robot motion without requiring direct human-robot interaction. Our approach is based on psychological research on human event perception. This research studies the cognitive processes involved when viewing goal-directed actions. According to research on event perception, human observers watching goal-directed actions comprehend these actions based on the construction of event models in working memory [29,30]. Event models are descriptions of the perceived scene and allow for predictions about upcoming events [29]. Thus, they guide perception. Whenever the deviation between observers' predictions based on the event model and the present sensory input exceeds an error detection threshold, event models in working memory are updated based on the current sensory information. Event model updating is resource intensive and requires time. Thus, in analogy to reading times used to measure event model updating during discourse comprehension [7,11,31], viewing times serve as a measure of event model updating during the comprehension of visual narratives and picture stories [6,8,19,20]. Recent research has applied viewing times to dynamic scenes [8,23]. That is, movies were split into multiple images and participants viewed those images as self-paced slide shows. This allows for the measurement of viewing times per image. Images associated with event boundaries [8] or goal changes [23] cause reliable increases in viewing times. Thus, viewing times are a promising measure for investigating the dynamic process of event model updating during dynamic scenes, including the process of observers' comprehension of perceived actions.
We introduce the viewing time paradigm to the study of human-aware robot behavior by applying it to the study of the naturalness of robot motion. Natural motion is highly predictable based on observers' prior knowledge. Thus, the more natural robot motion is, the better observers' predictions derived from their event models in working memory should fit their observations and the less event model updating should occur. Deviations from natural motion should require event model updating resulting in increased viewing times. This methodology does not require fully implemented exemplars and direct interactions with humans but can be applied to video material. This allows for the study of broad and general factors influencing the naturalness of robot motion.
The following provides a blueprint of an experimental setup using the viewing time paradigm.

Apparatus and Stimuli
The robot behavior to be measured must be recorded and prepared as a sequence of images. One option is a video recording of a scene and an extraction of equidistant frames, the other option (which we used in this work) is a script that produces the situations in a simulator and records screenshots of those situations. The first option is best applied to real-life recordings of human-aware robot behavior. The second option is preferable in controlled experiments interested in perfect positioning of robots, such as when robot motion should exactly follow a pre-defined velocity profile (cf. Experiment 1).
No special equipment is required for the presentation of the images during the experiment. It is important to ensure, however, that the used screen and input device provide accurate timings in order to reduce the variance in the recorded viewing times and to ensure that viewing times are a valid measure of the time participants used for processing the respective images.
Procedure Participants are instructed that they will see several image sequences as a self-paced slide show. They can proceed through each slide show with a predefined button, such as the space bar. Between successive images, a black screen is shown for 200 ms. Because viewing times for the first image of a scene are usually increased [6], it can be advisable to include a preview of the scene. This is especially true if one is also interested in the viewing times during the first images of a sequence. In our experiments, for example, we added a semi-transparent layer showing a countdown from 3 to 1 on the first image and prepended this countdown as additional images to the image sequence. Following all frames of an image sequence, a final image showing the word "End" is shown. Following each image sequence, subjective ratings can be recorded. We used only one subjective rating (perceived autonomy), but more elaborate questionnaires are possible. Before each trial, a fixation cross is shown and participants are instructed that they are only allowed to take breaks while a fixation cross is shown.

Dependent Measures
Viewing time is defined as the time between image onset and button press for each image in the sequence. Thus, this paradigm provides viewing times as a continuous measure of processing effort for each individual image. Depending on the analysis of interest, viewing times for individual images may be compared across experimental conditions or viewing times can be aggregated across images before comparing them across conditions. In addition to the viewing time measure, the paradigm can be used to collect subjective ratings following each image sequence.
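The viewing-time measure and the outlier handling used in the experiments below can be sketched as follows; the data layout and function names are our own illustrative assumptions, not the authors' analysis code:

```python
# Sketch of the viewing-time measure: one record per image with onset and
# button-press timestamps (seconds); responses above a cutoff are outliers.
from statistics import mean

def viewing_time(onset, press):
    """Viewing time = time between image onset and button press."""
    return press - onset

def aggregate_viewing_times(records, cutoff=2.0):
    """Mean viewing time per condition, dropping responses longer than
    `cutoff` seconds (the 2 s criterion used in the experiments below)."""
    by_condition = {}
    for rec in records:
        vt = viewing_time(rec["onset"], rec["press"])
        if vt <= cutoff:  # responses > 2 s are treated as outliers
            by_condition.setdefault(rec["condition"], []).append(vt)
    return {cond: mean(vts) for cond, vts in by_condition.items()}

records = [
    {"condition": "constant", "onset": 0.0, "press": 0.45},
    {"condition": "constant", "onset": 0.0, "press": 0.55},
    {"condition": "decreasing", "onset": 0.0, "press": 0.80},
    {"condition": "decreasing", "onset": 0.0, "press": 2.50},  # outlier, removed
]
print({k: round(v, 2) for k, v in aggregate_viewing_times(records).items()})
# {'constant': 0.5, 'decreasing': 0.8}
```

Viewing times can then be aggregated either across all frames of a sequence or analyzed for single frames, as done in the experiments.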

Experiments
We applied the viewing time paradigm to the study of two features of robot motion: velocity changes (Experiments 1 and 3) and robot orientation (Experiments 2 and 3). We chose those two parameters because they are easy to define and measure on any robot. The motion path is another parameter potentially influencing understandability, but it depends very much on the given navigation goal and would have to be defined by a set of parameters in order to generalize. Therefore we kept the path constant in our experiments.
In addition to viewing times as objective measure of the naturalness of robot motion, we also measured perceived autonomy as a subjective measure. We first explain the general method used in all experiments.

Apparatus and Stimuli
We presented image sequences showing a robot performing the same goal-driven action in each trial: moving from a starting position to a marked goal position that was 5 m away within the virtual simulation space (see Fig. 1). The stimuli for the image sequences were created with the 3D simulator Morse [16] with a simulated PR2 robot. A script positioned the robot at the locations shown in Fig. 2 and took screenshots. The images were presented with a size of approximately 24 × 10.5 degrees of visual angle on the screen and participants had an unrestricted viewing distance of 60 cm to the screen.
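The reported stimulus size in degrees of visual angle translates into on-screen extent via the standard visual-angle formula s = 2 · d · tan(θ/2); a quick worked check for the values above:

```python
import math

def visual_angle_to_size(angle_deg, distance_cm):
    """On-screen extent (cm) subtending `angle_deg` degrees of visual
    angle at the given viewing distance: s = 2 * d * tan(angle / 2)."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# Stimulus size reported above: ~24 x 10.5 degrees at 60 cm viewing distance.
width_cm = visual_angle_to_size(24.0, 60.0)
height_cm = visual_angle_to_size(10.5, 60.0)
print(round(width_cm, 1), round(height_cm, 1))  # 25.5 11.0
```

So the images covered roughly 25.5 × 11 cm on the screen at the 60 cm viewing distance.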
The perception of movement may depend on the camera position it is presented from [4,5,14]. We always used a direct sideways view (see Fig. 1), following cinematographic conventions [24,27] at a virtual camera distance of 6 m, but varied the height of the camera (see Fig. 3 and "Appendix A"):
- At the height of the robot's head (1.33 m above the ground). This perspective most closely resembles a natural perspective.
- 10° rotated upwards, giving the spectator a view from slightly above. This camera perspective is usually used to make a character look inferior or helpless. It also simulates the view of a very tall person.
- 10° rotated downwards. This perspective magnifies the character, making it look more potent or even dangerous. This perspective also simulates the view of a very small person, like a child.
We used the different camera positions to test whether the view towards the robot influences our measurements. The variation also served to present the different situations in varying form, letting participants evaluate the same situation several times without tiring.
We cropped all pictures to cinemascope format of 21:9 (a standard used for cinema movies) so that the robot's head was at 2/3 the picture height measured from the bottom, placing it at the "optical center" [9].
Additionally, we presented image sequences both in a standard and horizontally mirrored version. This allowed us to evaluate whether our measures are affected by the robot either moving in a left-to-right or right-to-left direction.
Procedure We followed the procedure described in Sect. 2. Participants performed multiple experimental trials and each trial depicted ten frames (images) of a robot moving along a linear trajectory. Following each motion sequence, participants rated the robot's autonomy on a scale from "pilot-operated" (1) to "self-propelled" (7).

Experiment 1

Method
Participants Thirty students (27 female, age M = 20.07 years, SD = 1.66 years) from the University of Tübingen participated in this experiment in exchange for course credit. We obtained informed consent from the participants.

Stimuli and Design
In this experiment, we varied the velocity profiles of the robot while leaving robot orientation constant (the robot faced in the direction of motion in all conditions). Importantly, robot navigation was equally efficient in all conditions because the robot always needed 10 s to reach the target, which was 5 m away within the virtual simulation environment. Figure 4 shows the four velocity profiles. It is important to note that we constructed the velocity profiles in Experiment 1 in a way that the robot had the same mean velocity between frames 4 and 5. Thus, the robot moved the same distance between frames 4 and 5 in all conditions. Any changes in viewing times in frame 5 can thus be attributed to our manipulation and cannot be the result of varying apparent motion across conditions.
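The construction of such profiles can be sketched as follows. The exact analytic velocity functions are not specified in the text, so the linear and sinusoidal shapes below are illustrative assumptions; each profile is normalized so that the robot covers the 5 m in 10 s:

```python
# Simplified reconstruction of four velocity profiles, each covering
# D = 5 m in T = 10 s; positions are sampled at 10 frame times.
import math

T, D, FRAMES = 10.0, 5.0, 10

def position(profile, t):
    """Distance travelled at time t (analytic integrals of v(t))."""
    if profile == "constant":      # v(t) = 0.5 m/s
        return 0.5 * t
    if profile == "increasing":    # v(t) = 0.1 * t, ramping 0 -> 1 m/s
        return 0.05 * t ** 2
    if profile == "decreasing":    # v(t) = 1 - 0.1 * t, ramping 1 -> 0 m/s
        return t - 0.05 * t ** 2
    if profile == "sinusoidal":    # v(t) = 0.5 + 0.25 * sin(2*pi*t/T), assumed shape
        return 0.5 * t + 0.25 * T / (2 * math.pi) * (1 - math.cos(2 * math.pi * t / T))
    raise ValueError(profile)

frame_times = [T * (i + 1) / FRAMES for i in range(FRAMES)]  # t = 1..10 s
for profile in ("constant", "increasing", "decreasing", "sinusoidal"):
    frames = [position(profile, t) for t in frame_times]
    # every profile reaches the 5 m goal at t = 10 s (equal efficiency)
    assert abs(frames[-1] - D) < 1e-9
```

A screenshot script would then place the simulated robot at `position(profile, t)` for each frame time and capture one image per frame.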
The design of this experiment was a 4 (velocity: constant, increasing, decreasing, sinusoidal) × 3 (viewing angle: below, head, above) × 2 (motion direction: left-to-right, right-to-left) within-subjects design with four repetitions per condition. Participants performed four successive blocks containing 24 trials each. Within each block, each condition occurred once and trials were presented in a randomized order. Thus, participants performed a total of 96 trials.
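The resulting block and trial structure can be sketched as follows (function and condition names are ours, not from the original materials):

```python
# Sketch of the 4 x 3 x 2 within-subjects design: each of the 24 conditions
# occurs once per block, four blocks, randomized order within each block.
import itertools
import random

velocities = ["constant", "increasing", "decreasing", "sinusoidal"]
angles = ["below", "head", "above"]
directions = ["left-to-right", "right-to-left"]

def build_trials(n_blocks=4, seed=0):
    rng = random.Random(seed)
    conditions = list(itertools.product(velocities, angles, directions))
    trials = []
    for _ in range(n_blocks):
        block = conditions[:]      # each condition exactly once per block
        rng.shuffle(block)         # randomized order within the block
        trials.extend(block)
    return trials

trials = build_trials()
print(len(trials))  # 96 trials in total (4 blocks x 24 conditions)
```

The same scheme applies to Experiment 2 with orientation substituted for velocity as the first factor.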

Results
For all ANOVA effects with violations of the sphericity assumption as indicated by a significant Mauchly's test, we applied the Greenhouse-Geisser correction.
Viewing Times We treated all responses with viewing times larger than 2 s as outliers and removed those responses from the data set. This resulted in the removal of 203 responses (0.7% of data) across all participants.
In a first step, we analyzed mean viewing times aggregated across all frames using a repeated measures ANOVA with the factors velocity, viewing angle, and motion direction. This revealed a significant main effect of velocity, F(3, 87) = 38.59, p < .001, η²p = .57 (see Fig. 5, top). Further evaluating this velocity effect with paired t tests (p values holm-corrected for multiple comparisons) revealed that robots with decreasing velocity were viewed longer than robots moving according to the other three velocity profiles, all ps < .001. Furthermore, sinusoidal velocity profiles were viewed longer than constant velocity, p = .008. There was no significant difference in viewing times between sinusoidal velocity and increasing velocity, p = .112, or between constant velocity and increasing velocity, p = .793. The other main effects and interactions of the ANOVA were not significant, all Fs ≤ 2.22, ps ≥ .089, η²ps ≤ .07.
In a second step, we analyzed viewing times of frame 5 only. Frame 5 was constructed such that the travelled distance between frames 4 and 5 was constant across all velocity conditions. Thus, any effects on viewing times observed in frame 5 cannot be attributed to varying motion occurring between successive frames. We analyzed viewing times using a repeated measures ANOVA with the factors velocity, viewing angle, and motion direction. This replicated the main findings of our previous analysis. We observed a significant main effect of velocity, F(3, 87) = 8.49, p < .001, η²p = .23 (see Fig. 5, bottom).

Fig. 5 Robots moving with decreasing velocity caused longer viewing times than the other three velocity conditions. The sinusoidal velocity profile caused slightly increased viewing times only when aggregated across all frames. Error bars indicate 95% within-subject confidence intervals [1]
Further evaluating this velocity effect with paired t tests (p values holm-corrected for multiple comparisons) revealed that robots with decreasing velocity were viewed longer than robots moving according to the other three velocity profiles, all ps ≤ .004. The other three velocity conditions did not differ significantly from one another, all ps = 1.000. The other main effects and interactions of the ANOVA were not significant, all Fs ≤ 1.63, ps ≥ .143, η²ps ≤ .05. Whereas sinusoidal velocity caused increased viewing times as compared with constant velocity in our first analysis (viewing times aggregated across all frames), this was not the case in our second analysis (viewing times at frame 5). Because the sinusoidal velocity profile consists of both sections of increasing velocity and sections of decreasing velocity, these contradictory findings could be resolved if only the sections with decreasing velocity caused increased viewing times. We analyzed the difference in viewing times between the sinusoidal velocity condition and the constant velocity condition across all frames (see Fig. 6). For each frame, we conducted a t test comparing the viewing time difference score against zero. This revealed that viewing times in the sinusoidal condition were significantly increased as compared with the constant condition in frames 2, 3, 7, 8, and 9 only, all ps ≤ .009. In all other frames, the difference in viewing times was not significantly different from zero, all ps ≥ .153. That is, only the sections with decreasing velocity of the sinusoidal condition (cf. Fig. 4) caused increased viewing times as compared with the constant velocity condition, but not the sections with increasing velocity. This finding mimics our overall finding that robots moving at a constantly decreasing velocity, but not robots moving at a constantly increasing velocity, caused increased viewing times as compared with the constant velocity condition.
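The Holm step-down correction used for these multiple comparisons can be sketched generically as follows (this is a standard textbook implementation, not the authors' analysis code):

```python
def holm_correct(pvals):
    """Holm step-down correction for m tests: sort p values ascending,
    multiply the i-th smallest by (m - i), enforce monotonicity of the
    adjusted values, and cap them at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # adjusted p values may not decrease
        adjusted[i] = running_max
    return adjusted

# Three hypothetical raw p values from pairwise comparisons:
print([round(p, 3) for p in holm_correct([0.01, 0.04, 0.03])])
# [0.03, 0.06, 0.06]
```

Note how the smallest raw p value gets the largest multiplier, making Holm uniformly more powerful than a plain Bonferroni correction while still controlling the family-wise error rate.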
Because velocity was increasing in frame 5, no effects of sinusoidal velocity occurred there. Further, the sinusoidal profile contains only some sections of decreasing velocity. Therefore, there was a slight increase in viewing times for the sinusoidal condition in our first analysis that was smaller than in the decreasing velocity condition, which consists of decreasing velocity throughout.
Perceived Autonomy We analyzed the perceived autonomy rating using a repeated measures ANOVA with the factors velocity, viewing angle, and motion direction. Participants' rating of the perceived autonomy of the robot did not differ significantly across conditions. That is, there was no significant main effect of velocity.

Discussion
With Experiment 1, we studied whether velocity changes influence observers' understanding of robot motion. We observed a reliable increase in viewing times caused by decreasing velocity. Because increased viewing times indicate situation model updating during the perception of goal-directed actions [6,23], this finding indicates that observers applied more processing effort in order to comprehend robot motion with decreasing velocity. Whereas sinusoidal motion also caused a slight increase in viewing times, this was only true for those frames of the sinusoidal motion pattern that showed decreasing velocity. Thus, we conclude that the comprehension of decreasing velocity is associated with increased processing effort. In contrast to viewing times, the perceived autonomy ratings did not differ across conditions. That is, even though participants had a harder time comprehending robot motion with decreasing velocity, the perceived autonomy of the robot was not affected. Thus, naturalness as measured by viewing times and perceived autonomy are indeed two distinct aspects of understandability.

Fig. 7 Perceived autonomy ratings did not differ significantly across velocity conditions. Error bars indicate 95% within-subject confidence intervals [1]
Neither the viewing angle of the camera nor the direction of robot motion had any effect in our experiment. Although the deviations in camera angle were small, this provides first tentative evidence that decreasing velocity might be a general factor increasing observers' processing effort.

Experiment 2

Method
Participants Thirty new students (24 female, age M = 20.87 years, SD = 2.45 years) from the University of Tübingen participated in this experiment in exchange for course credit. We obtained informed consent from the participants.

Stimuli and Design
In this experiment, we varied the orientation of the robot while leaving velocity constant (the robot moved with a constant velocity profile in all conditions). The robot moved along a linear trajectory and faced one of the four cardinal directions: forward, left, right, and backward (orientation relative to the motion vector). In each orientation condition, the robot started with the respective orientation and kept the orientation throughout the whole trial and motion. The condition with forward orientation is identical to the condition with constant velocity in Experiment 1.
The design of this experiment was a 4 (orientation: forward, left, right, backward) × 3 (viewing angle: below, head, above) × 2 (motion direction: left-to-right, right-to-left) within-subjects design with four repetitions per condition. Participants performed four successive blocks containing 24 trials each. Within each block, each condition occurred once and trials were presented in a randomized order. Thus, participants performed a total of 96 trials.

Results
For all ANOVA effects with violations of the sphericity assumption as indicated by a significant Mauchly's test, we applied the Greenhouse-Geisser correction.
Viewing Times As in Experiment 1, we treated all responses with viewing times larger than 2 s as outliers and removed those responses from the data set. This resulted in the removal of 189 responses (0.7% of data) across all participants.
We performed two repeated measures ANOVAs with the factors orientation, viewing angle, and motion direction. The dependent measure of the first ANOVA was viewing time aggregated across all frames (see Fig. 8, top) and the dependent measure of the second ANOVA was viewing time at frame 5 (see Fig. 8, bottom). Robot orientation did not significantly affect viewing times in either analysis.
Perceived Autonomy We analyzed the perceived autonomy ratings using a repeated measures ANOVA with the factors orientation, viewing angle, and motion direction. This revealed a significant main effect of orientation (see Fig. 9). Further evaluating this orientation effect with paired t tests (p values holm-corrected for multiple comparisons) revealed that participants rated the perceived autonomy of robots facing in motion direction (forward condition) higher than robots facing the other three orientations, all ps < .001. The perceived autonomy ratings for the backward, left, and right orientation conditions did not differ significantly from one another, all ps ≥ .663. The other main effects and interactions of the ANOVA were not significant, all Fs ≤ 2.02, ps ≥ .065, η²ps ≤ .07.

Discussion
With Experiment 2, we investigated the effect of robot orientation on observers' understanding of robot motion. In contrast to the direct manipulation of motion parameters in Experiment 1, our robot orientation manipulation in Experiment 2 left motion parameters unaffected. Instead, we manipulated the congruence between the robot's motion direction and its orientation. Our viewing times analysis showed that the comprehension of robot motion was not affected by the orientation of the robot relative to its motion direction. It is important to note, however, that robot orientation was fixed throughout each motion sequence in our stimuli, thus not creating any additional motion signal. Research on visual attention showed that conflicting motion information of an object relative to its motion direction impairs attention [21,26]. Therefore, future research should investigate whether changes in robot orientation creating a separate motion signal, such as the independent rotation of the torso during robot motion, might affect observers' understanding of robot motion.
Robot orientation affected observers' perceived autonomy rating. Robots moving in the direction of their heading were rated as being more autonomous than robots being oriented either orthogonal to their movement direction or facing backward. This result shows that even if comprehension of the underlying motion pattern is unaffected by the orientation manipulation, the subjective assessment of the robot motion is not. Because the aim of human-aware robot navigation is not only the match between observers' predicted motion patterns and actual robot motion but also the correct assessment of robot motion, such as being autonomous in our case, robot orientation should also be considered when designing human-aware robots.
As in Experiment 1, we neither found any influences of the viewing angle of the camera nor of the motion direction of the robot.

Experiment 3
The results of Experiments 1 and 2 suggest that velocity profiles and robot orientation have distinct effects on human understanding of robot motion. Whereas velocity profiles affected viewing times but not perceived autonomy ratings in Experiment 1, robot orientation affected perceived autonomy ratings but not viewing times in Experiment 2. A potential argument against this conclusion is that we did not manipulate velocity and orientation together within one experiment, thus preventing us from testing for interactions between the two factors. Thus, it remains possible that velocity effects might be different, for example, when the robot is oriented backward instead of forward. Therefore, we conducted Experiment 3, in which we manipulated both velocity profiles (constant, increasing, decreasing) and robot orientation (forward, backward) within one experiment.

Method
Participants Twenty-two new students (16 female, age M = 22.45 years, SD = 4.10 years) from the University of Tübingen participated in this experiment in exchange for course credit or monetary compensation. We obtained informed consent from the participants.

Stimuli and Design
Stimuli and procedure were the same as in Experiments 1 and 2 with the exception that we manipulated both the velocity profile (constant, increasing, decreasing) and robot orientation (forward, backward) in this experiment. Participants performed 144 trials.

Results
For all ANOVA effects with violations of the sphericity assumption as indicated by a significant Mauchly's test, we applied the Greenhouse-Geisser correction. Due to a technical problem, we lost the perceived autonomy rating of one participant in one trial. Therefore, we removed this participant from the data set prior to the analysis.
Viewing Times As in Experiments 1 and 2, we treated all responses with viewing times larger than 2 s as outliers and removed those responses from the data set. This resulted in the removal of 120 responses (0.4% of data) across all participants.
We performed two repeated measures ANOVAs with the factors velocity and orientation. The dependent measure of the first ANOVA was viewing time aggregated across all frames (see Fig. 10, top) and the dependent measure of the second ANOVA was viewing time at frame 5 (see Fig. 10, bottom). These analyses replicated the results from our Experiments 1 and 2. We observed a significant main effect of velocity on viewing times in both ANOVAs, F(1.42, 28.32) = 48.46, p < .001, η²p = .71 and F(1.42, 28.50) = 20.66, p < .001, η²p = .51, respectively. Importantly, these effects did not change with robot orientation as there was no significant interaction between velocity and orientation in either ANOVA, both Fs < 1. Furthermore, the main effect of orientation on viewing times was not significant in either ANOVA, both Fs < 1. Further evaluating the significant velocity effect with paired t tests (p values holm-corrected for multiple comparisons) revealed that viewing times in the decreasing velocity condition were longer than in the other two velocity conditions for both analyses, all ps < .001. In contrast to Experiment 1, viewing times for the increasing velocity condition were slightly lower than for the constant velocity condition when aggregating across all frames, p = .017. However, this finding was qualified by the analysis of frame 5, in which viewing times did not differ significantly between the increasing velocity condition and the constant velocity condition, p = .099.
Perceived Autonomy We analyzed the perceived autonomy ratings using a repeated measures ANOVA with the factors velocity and orientation (see Fig. 11). This analysis replicated the results from our Experiments 1 and 2. While there was a significant main effect of orientation, F(1, 20) = 9.68, p = .006, η²p = .33, velocity did not significantly affect the perceived autonomy ratings, F(2, 40) = 2.18, p = .127, η²p = .10. Importantly, the interaction of orientation and velocity was also not significant, F(2, 40) = 1.13, p = .335, η²p = .05, indicating that the effect of orientation on perceived autonomy ratings was not influenced by the velocity profile underlying the respective robot motion. Replicating our results of Experiment 2, autonomy ratings were higher for robots oriented in the direction of their motion than for robots oriented backward.

Fig. 10 Decreasing velocity caused increased viewing times irrespective of robot orientation. Error bars indicate 95% within-subject confidence intervals [1]

Fig. 11 Experiment 3: perceived autonomy ratings for robots oriented forward were higher than for robots oriented backward irrespective of the velocity profile underlying robot motion. Error bars indicate 95% within-subject confidence intervals [1]

Discussion
In this experiment, we manipulated velocity profiles and robot orientation within one experiment. This experiment replicated the results of our Experiments 1 and 2. That is, decreasing velocity caused increased viewing times but left perceived autonomy unaffected, whereas forward robot orientation caused higher perceived autonomy ratings but left viewing times unaffected. Critically, there were no interactions between velocity and orientation in all analyses. This provides further evidence that velocity profiles and robot orientation are two properties of robot motion with distinct effects on human understanding of human-aware robot navigation.

General Discussion
The successful design of human-aware robots requires that human observers can easily understand the robots' behavior. Based on psychological research on goal-directed actions, we introduced the viewing time paradigm to the study of human-aware robot behavior and applied this paradigm to human-aware robot navigation. This involves the partitioning of movies into multiple static images. Observers view those images as a self-paced slide show and viewing times for each image are measured. Violations of observers' predictions about upcoming events, and thus difficulties in the comprehension of observed robot behavior, cause increased viewing times of the respective images. Furthermore, subjective measures of perceived robot properties can be collected following each slide show. We applied this method to the study of two properties of robot motion: velocity and robot orientation.
Decreasing velocity caused pronounced increases in viewing times in Experiments 1 and 3. Thus, participants applied increased processing effort when comprehending robot motion with decreasing velocity. Sinusoidal velocity also caused a slight increase in viewing times as compared with constant velocity. This is in line with previous research showing that observers are worse at tracking objects moving with sinusoidal velocity than objects moving with constant velocity [22]. However, our analysis revealed that only those sections of the sinusoidal velocity profile that contained decreasing velocity, but not increasing velocity, caused increased viewing times. Thus, we conclude that decreasing velocity reduces the naturalness of robot motion.
Whereas robot orientation relative to the robot's motion vector did not influence viewing times in Experiments 2 and 3, robot orientation affected observers' ratings of the robot's perceived autonomy. Observers rated robots aligned with their motion direction as more autonomous than robots oriented orthogonal to their motion direction or facing backward. Thus, whereas observers' comprehension of robot motion was not affected by robot orientation in our experiments, subjective ratings were.

Implications for Human-Aware Robot Navigation
Based on our results, the design of human-aware robot navigation should consider both velocity profiles and orientation. Observers' understanding of robot motion is best when robots face their direction of motion and when their velocity profile contains minimal portions of decreasing velocity. Robots will always have to decrease their velocity to reach a goal point, but navigation algorithms often produce jerky motions. For example, when robots pass through a door, they often alternately accelerate and decelerate while searching for a way through.
To quantify the quality of a motion trajectory with respect to the velocity profile, we suggest the following evaluation function: with the starting time t0 and end time t1, calculate

γ = ∫_{t0}^{t1} neg(acc(t)) · acc(t) dt

where acc(t) is the acceleration at time t, and the function neg(x) = 1 for x < 0 and neg(x) = 0 for x ≥ 0. For the trajectories we used in our experiments, we get γ_constant = γ_increasing = 0, γ_decreasing = −0.9, γ_sinusoidal = −0.635.
The function γ punishes trajectories with decreasing velocities. But comparing the values with Fig. 5 shows that the evaluation of the sinusoidal velocity profile is closer to that of the decreasing profile, whereas our data indicate that it should be closer to that of the constant and increasing profiles. A simple quadratic function can better model our observations:

η = ∫_{t0}^{t1} neg(acc(t)) · acc(t) · |acc(t)| dt

For the trajectories of our experiments we get η_constant = η_increasing = 0, η_decreasing = −0.72, η_sinusoidal = −0.165. Further studies will have to confirm or detail the form and parameters of this function, as well as necessary scalings, possibly with respect to the trajectory length.
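Both evaluation functions can be computed directly from a sampled velocity profile. The sketch below assumes γ = ∫ neg(acc(t)) · acc(t) dt and, for the quadratic variant, η = ∫ neg(acc(t)) · acc(t) · |acc(t)| dt (a reconstruction consistent with the reported values); the sample trajectory, a linear deceleration from 0.9 m/s to 0 over 1.125 s, is an illustrative assumption, not the exact trajectory from the experiments.

```python
import numpy as np

def neg(x):
    # Indicator for deceleration: neg(x) = 1 for x < 0, else 0.
    return (x < 0).astype(float)

def gamma(t, v):
    # Discrete gamma: total velocity lost during decelerating sections.
    acc = np.diff(v) / np.diff(t)
    return float(np.sum(neg(acc) * acc * np.diff(t)))

def eta(t, v):
    # Discrete eta: decelerations penalized quadratically by their magnitude.
    acc = np.diff(v) / np.diff(t)
    return float(np.sum(neg(acc) * acc * np.abs(acc) * np.diff(t)))

t = np.linspace(0.0, 1.125, 1000)      # assumed trajectory duration in seconds
v = 0.9 * (1.0 - t / t[-1])            # linear deceleration from 0.9 m/s to 0
print(round(gamma(t, v), 3))           # -0.9
print(round(eta(t, v), 3))             # -0.72
```

For constant or increasing velocity profiles, `neg(acc)` is zero everywhere and both functions evaluate to 0, matching the reported γ and η values for those conditions.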
For quantifying side- and backwards movement, Kirsch [13] suggested counting the number of timesteps in which the robot moved sidewards (|v_y| > |v_x|) or backwards (v_x < 0). This agrees with the observations of Experiment 2, but it is not the only possible interpretation. It is unclear what exactly should be considered a sidewards movement: the 90-degree angle we used in Experiment 2 is rather extreme, and smaller deviations from the movement direction could have the same effect on perceived autonomy. Our experiment suggests that the amount of deviation from the movement direction is unimportant, but this should be verified in further research.
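Kirsch's counting criterion fits in a few lines. In this sketch, velocities are assumed to be given in the robot's body frame (v_x forward, v_y lateral):

```python
def count_unnatural_steps(velocities):
    # Count timesteps with sidewards (|vy| > |vx|) or backwards (vx < 0)
    # motion, following the criterion suggested by Kirsch [13].
    return sum(1 for vx, vy in velocities if abs(vy) > abs(vx) or vx < 0)

# forward, sidewards, backwards, diagonal but mostly sidewards
steps = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (1.0, 2.0)]
print(count_unnatural_steps(steps))  # 3
```

Note that this count is insensitive to how far the orientation deviates once the threshold is crossed, which is exactly the open question raised above.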
We introduced viewing times as a continuous measure of processing effort. It is important to note that there are at least two interpretations of the source and consequences of such an effort. In the present manuscript, we interpreted increased viewing times as an indicator of lower naturalness and predictability of robot motion, based on event cognition research. Our argument follows the rationale that natural robot motion is highly predictable and thus easy to integrate into event models held in working memory. If robot motion deviates from expectations, further processing is required to integrate the perceived motion, and viewing times increase. Thus, lower viewing times are associated with easier comprehension of robot motion. This holds for our present stimulus material in particular, because the amount of presented information (a robot moving a given distance) was comparable across conditions. As a second interpretation, increased viewing times may also be considered as reflecting increased attentional allocation towards the stimulus. That is, the increased viewing times for unexpected information might be associated with increased attention and memory performance [2,10] for the novel and unexpected information. As a consequence, increased viewing times may also indicate processes that are adaptive and helpful to the task at hand, such as when trying to grab users' attention or when trying to induce a more thorough processing of the visual information. Future research should combine viewing times with additional measures, such as memory probes, to gain converging evidence on the processes underlying the human understanding of robot motion.

Limitations and Future Work
Whereas the robot moved with the same velocity profile throughout each trial in our experiments, real-world robot navigation consists of varying velocities across time. Therefore, future research should examine more complex velocity profiles. In particular, this applies to two aspects of velocity changes. First, the context might affect the impact of decreasing velocity on human understanding of robot motion. For example, it might be easier to comprehend decreasing velocity profiles if the robot reduces its velocity only once it has almost reached its goal. Second, changes in velocity by themselves might affect human understanding of observed robot motion. For example, abrupt changes between increasing and decreasing velocity (or vice versa) might cause increased comprehension effort, because perceptual predictions based on one velocity profile do not match the robot motion observed under the new velocity profile, thus requiring event model updating [29].
We used a straight navigation path in our experiments. This allowed for the systematic and controlled study of how robot velocity profiles affect human understanding of robot motion. A side effect of using a single straight navigation path is that the direction towards the goal of the navigation task was identical to the motion direction of the robot. By dissociating the orientation of robot motion and the location of the navigation goal (i.e., by introducing an obstacle into the environment), future research should investigate whether the effect of robot orientation on perceived autonomy found in our experiments is the result of the deviation between robot orientation and motion vector, as suggested by our experiments. Alternatively, it could be that robots looking in the direction of their navigational goal, rather than robots looking in the direction of motion, are perceived as autonomous. However, when navigational goal and motion direction are dissociated, robots oriented toward their navigation goal will constantly change their orientation during motion, thus creating two motion signals (motion direction and torso rotation). Such conflicting motion signals by themselves might impair observers' ability to track [21,26], and thus understand, the robot motion. Future research trying to disentangle this question must take this into account.
Robot motion in our experiments perfectly followed the designed velocity profiles. That is, we placed the robot at its respective locations instead of using a realistic robot motion controller. This served as a good starting point for establishing the viewing time paradigm in research on robot motion. It is important to note, however, that the viewing time paradigm is not restricted to such artificial motion patterns. Quite the opposite: it has been applied to realistic and real-world movie content in psychological research [8]. Importantly, viewing times are a sensitive measure that makes it possible to determine the temporal locations of event model updating [8,23]. That is, viewing times increase in those frames where processing associated with the comprehension of the visual scene occurs. This was also the case in our Experiment 1, in which we observed increased viewing times in the sinusoidal velocity profile as compared with the constant control condition only for frames that showed decreasing velocity. Therefore, it seems promising to apply the viewing time paradigm also to instances of robot navigation with a realistic robot motion controller in order to differentiate between instances where observers have an easy or a hard time comprehending the observed robot motion.

Conclusion
We introduced the viewing time paradigm to the study of human-aware robot behavior. With three experiments, we demonstrated its usefulness in investigating two properties of robot motion, namely velocity and robot orientation. Based on our results, we conclude that robot navigation should avoid phases of decreasing velocity, such as during jerky movements, and that the orientation of robots should be aligned with their motion direction in order to maximize the ease with which humans understand the underlying robot motion.

Alexandra Kirsch is an independent scientist, using methods of artificial intelligence to improve user experience. She received her doctoral degree in computer science from Technische Universität München in 2008 and led an independent junior research group in the cluster of excellence Cognition for Technical Systems from 2008 to 2012. Between 2012 and 2018 she held an assistant professorship in Human-Computer Interaction and Artificial Intelligence at the University of Tübingen.