
How I fell in love with people in love with AI

In the Datafication & Digital Literacy master’s programme at the Rijksuniversiteit Groningen, everyone is free to choose their own topics of interest and pursue them for the rest of the year, including in their thesis. For me, that interest was not always as clear as it is now. By first focusing on digital literacy and then looking specifically at AI literacy, I came to understand that what I truly want to research is how people and society influence AI, and vice versa. For my thesis research, I am looking specifically into different aspects of human-AI relationships.


Relationships, in the context of this study, do not only mean romantic partnerships. They also include friendships, therapist-patient relationships, familial bonds, and other meaningful ways of engaging with AI, as long as participants view AI models as more than just tools. Unfortunately, such relationships are still widely frowned upon, and many people who are in them do not wish to speak about them. To address this, my research focuses not only on these individual cases, but also on communities, critics and sceptics, and the companies behind the AI models themselves.


By understanding these different perspectives, I aim not only to contribute to the academic corpus on AI and society, but also to bring more nuance to the discussion of human-AI relationships. The media tends to portray such connections as harmful or delusional, while in reality, relationships with AI can be meaningful or even beneficial for some individuals. Of course, there are risks involved: AI companies are driven by profit and cannot always fully support or protect more vulnerable users. My research is situated between these discussions and aims to foster a better understanding among the different stakeholders.

This topic cannot be separated from broader conversations about AI and gender. There are persistent issues such as gendered discrimination, bias, and the feminization of AI systems (around 75% of which are designed to present as female). At the same time, the AI field itself remains heavily male-dominated. These two dynamics reinforce each other, shaping not only how AI is built but also how it is perceived and used. Moreover, cultural assumptions about relationships being something “women care about more than men” obscure the reality that people of all genders (including non-binary individuals) engage in and benefit from these interactions.


For me, this intersection of gender, literacy, and technology is central. Bringing more women and underrepresented groups into the AI field is not just about representation; it is about expanding the range of perspectives that shape how AI is designed, discussed, and experienced. Understanding human-AI relationships through a gender-aware lens helps reveal how power, care, and agency operate within these systems.

In the end, what initially sparked my interest in AI—digital and AI literacies—still lies at the core of my work today. By investigating the different ways people engage with and talk about human-AI relationships, I hope not only to shed light on this emerging area but also to encourage more inclusive and critical discussions about how we build and interact with AI.
