Research on self-agency and AI

I find the question of whether GenAI has the power to influence self-agency cognitions, beliefs and behaviours fascinating from a psychological perspective.

So I’ll ask first: what are the main influences on self-agency?

Out of personal interest, a search of the historical literature shows that ancient beliefs about agency were tied to moral and spiritual foundations, with a gradual movement away from religious dogma toward self-agentic autonomy. This led to the understanding of a ‘fluid self-concept’, one which Michelangelo understood through the lens of personal achievement and intellect. Freud argued that self-agency was shaped by societal expectations and unconscious desires; Foucault later framed it in terms of societal structures, cultural norms and power dynamics. It could be argued that these are the foundations on which most modern research builds its view of the ‘self’: thoroughly embedded in contextual experience, and indeed fluid and adaptive.

So the next question is: how can algorithms interrupt self-agency?

Let’s look at Turing’s ‘thinking machine’. Although Turing did not directly address human self-agency, he alluded to the connection between machine intelligence and the human understanding of self-awareness. Until recently, the prospect of ‘thinking machines’ producing an output comparable to human thought seemed impossible. The implications of GenAI applications for that discourse, however, now seem profound.

So what shapes our behaviour and cognitions online, and does this have anything at all to do with AI?

Cyberpsychologists, who explore human interactions with technology, identify many influences on self-agentic cognitions. Social media, for example, has been blamed for online aggression (Dr J. Dean): people feel protected by anonymity, and so feel more able to express those emotions through open and honest communication. There’s a problem here, though: technology itself cannot be blamed for negative human psychosocial responses. GenAI tools are similar to those coded platforms, albeit founded on LLM technology. In the end, it’s just data and algorithmic probability.
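To make that last point concrete, here is a deliberately minimal Python sketch of my own. It illustrates the sampling principle only, not any vendor’s actual implementation, and the token table and names (NEXT_TOKEN_PROBS, generate) and their probabilities are invented for the example. At its core, an LLM produces text by repeatedly sampling a plausible next token from a learned probability distribution, with no concept of truth attached.

```python
import random

# Toy sketch of "data and algorithmic probability". A real LLM learns its
# distribution from vast text corpora; the values below are invented
# purely for illustration.
NEXT_TOKEN_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    "sat": {"down.": 0.8, "quietly.": 0.2},
    "ran": {"away.": 0.9, "home.": 0.1},
    "flew": {"away.": 1.0},
}

def generate(prompt: str) -> str:
    """Generate text by repeatedly sampling a plausible next token."""
    output = [prompt]
    current = prompt
    while current in NEXT_TOKEN_PROBS:
        options = NEXT_TOKEN_PROBS[current]
        # Pick the next token at random, weighted by its probability.
        current = random.choices(list(options), weights=list(options.values()))[0]
        output.append(current)
    return " ".join(output)

print(generate("the cat"))  # e.g. "the cat flew away."
```

A run may well print ‘the cat flew away.’ The sentence reads fluently because it is statistically plausible, not because anything in the loop has checked it against reality.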
There is one further issue, though. Coded platforms where people interact with other people involve human exchanges, whereas GenAI applications do not (at least not directly). The human checks and balances of online communities, which go some way towards moderating those interactions, are absent when using GenAI chatbots. There’s no one there to tell the end user that the result of their prompt is fantasy. The human-computer conversation is truly asymmetric, requiring the human to exercise critical thinking.

Does this suggest that a new understanding is needed of how self-agency develops in relation to GenAI applications?

I think so. Tech developers are beginning to address this problem: we are seeing GenAI applications include tools for cross-referencing, meaning people are able to enact their self-agentic curiosities. We also know that other psychological factors contribute to self-agency, such as communities and context. There are current concerns about how reliant we may become on GenAI ‘companions’, a word which suggests a reciprocal relationship involving trust and support. Neither can be offered by these applications, because they are just tools, unable to detect maladaptive human behaviours produced by incompatible self-evaluations of self-agency.
This leaves further research very much open: there is a clear gap in our understanding of how far GenAI applications impact human cognitions and behaviours relating to self-agency.
Here’s some research …


Chaturvedi, R., Verma, S., Das, R., & Dwivedi, Y. K. (2023). Social companionship with artificial intelligence: Recent trends and future avenues. Technological Forecasting and Social Change, 193, 122634. https://doi.org/10.1016/j.techfore.2023.122634

Merrill, K., Kim, J., & Collins, C. (2022). AI companions for lonely individuals and the role of social presence. Communication Research Reports, 39(2), 93–103. https://doi.org/10.1080/08824096.2022.2045929

Moore, J. W. (2016). What is the sense of agency and why does it matter? Frontiers in Psychology, 7, 1272. https://doi.org/10.3389/fpsyg.2016.01272

Europol. (n.d.). The Internet Organised Crime Threat Assessment (iOCTA) 2014. https://www.europol.europa.eu/iocta/2014/appendix-3.html

