Uncanny Valley Theory

From IS Theory
Latest revision as of 06:53, 20 November 2020

==Acronym==

UVT

==Alternate name(s)==

N/A

==Main dependent construct(s)/factor(s)==

Affinity

==Main independent construct(s)/factor(s)==

Human-Likeness

==Concise description of theory==

In an essay published in 1970, the robotics professor Masahiro Mori predicted how humans’ feelings towards human-like technological artefacts shift as those artefacts become more lifelike (Mori, MacDorman, & Kageki, 2012). The essay initially drew little attention, but as robots and other artificial agents have spread into nearly every field, the hypothesised effect has become more relevant than ever, leading many human-computer interaction researchers to examine the uncanny valley effect (Mori et al., 2012).

The translated version of Mori’s original essay develops a non-linear relationship between a human’s affinity towards a robot and the robot’s human-likeness. The graph of affinity against human-likeness contains a dip, termed the “uncanny valley”, which captures the eeriness humans experience when confronted with an almost human-like robot (Mori et al., 2012). That is, as the robot’s human-likeness increases, affinity rises at first, but beyond a certain point it drops sharply, indicating negative feelings towards the imperfectly human robot. These negative reactions can arise for several reasons, including disappointment that the robot is not quite human, or the perception that it threatens human distinctiveness (Ciechanowski, Przegalinska, Magnuski, & Gloor, 2019).

An obvious application of the theory is as a guide to the effective design of interactive artificial agents. Over the years, the theory has been tested repeatedly in AI- and robot-related studies, yet despite this extensive analysis, the empirical evidence for the uncanny valley remains inconclusive (Betriana, Osaka, Matsumoto, Tanioka, & Locsin, 2020; Burleigh, Schoenherr, & Lacroix, 2013; Mathur & Reichling, 2016). Researchers nevertheless continue to employ the theory to account for the wide array of human reactions towards non-human agents and to improve understanding of human–non-human interaction.
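The hypothesised curve can be illustrated with a toy model: affinity grows roughly with human-likeness, but a sharp dip is superimposed just short of full human-likeness. The function below is purely illustrative — Mori drew the curve qualitatively and proposed no equation, so the linear baseline, the dip’s location (0.85), and its width are arbitrary assumptions chosen only to reproduce the valley’s shape.

```python
import numpy as np

def affinity(h):
    """Illustrative affinity-vs-human-likeness curve (hypothetical).

    A linear baseline (affinity grows with human-likeness) minus a
    narrow Gaussian dip just short of full human-likeness -- the
    'uncanny valley'. All parameters are modelling assumptions.
    """
    baseline = h                                         # affinity rises with human-likeness
    valley = 1.6 * np.exp(-((h - 0.85) ** 2) / 0.004)    # sharp dip centred at h = 0.85
    return baseline - valley

# Sample human-likeness from 0 (clearly artificial) to 1 (fully human).
h = np.linspace(0.0, 1.0, 1001)
a = affinity(h)
valley_at = float(h[np.argmin(a)])  # human-likeness at the bottom of the valley
print(f"affinity bottoms out near human-likeness = {valley_at:.2f}")
```

Running the sketch locates the minimum just below full human-likeness, and affinity recovers as the agent becomes indistinguishable from a human — mirroring the shape of the graph in Mori’s essay.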

==Diagram/schematic of theory==

https://drive.google.com/file/d/1J3bevgaD7bj3PXKTgEKA9U91XqzRZhQj/view?usp=sharing

==Originating author(s)==

Masahiro Mori

==Seminal articles==

Groom, V., Nass, C., Chen, T., Nielsen, A., Scarborough, J. K., & Robles, E. (2009). Evaluating the effects of behavioral realism in embodied agents. International Journal of Human-Computer Studies, 67(10), 842–849. https://doi.org/10.1016/j.ijhcs.2009.07.001

Ho, C. C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Computers in Human Behavior, 26(6), 1508–1518. https://doi.org/10.1016/j.chb.2010.05.015

Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley. IEEE Robotics and Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811

Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), 759–771. https://doi.org/10.1016/j.chb.2012.11.021

==Level of analysis==

Individual

==External Links==

https://en.wikipedia.org/wiki/Uncanny_valley

Betriana, F., Osaka, K., Matsumoto, K., Tanioka, T., & Locsin, R. C. (2020). Relating Mori’s Uncanny Valley in generating conversations with artificial affective communication and natural language processing. Nursing Philosophy, (June), 1–8. https://doi.org/10.1111/nup.12322

Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32. https://doi.org/10.1016/j.cognition.2015.09.008

==Links from this theory to other theories==

Theory of Planned Behaviour, Social Presence Theory, Realism Inconsistency Theory, Realism Maximization Theory, Consistency Theory

==IS articles that use the theory==

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055

Skjuve, M., & Haugstveit, I. M. (2019). Help! Is my chatbot falling into the uncanny valley? An empirical study of user experience in human–chatbot interaction. 15(February), 30–54.


de Kleijn, R., van Es, L., Kachergis, G., & Hommel, B. (2019). Anthropomorphization of artificial agents leads to fair and strategic, but not altruistic behavior. International Journal of Human-Computer Studies, 122(September 2018), 168–173. https://doi.org/10.1016/j.ijhcs.2018.09.008

==Contributor(s)==

Diksha Singh, Doctoral Student at Indian Institute of Management, Kozhikode, India

==Date last updated==

20/11/2020
