
Choosing to Step Back: Rethinking AI in Everyday Life

Hannah Harvey Student Contributor, University of Nottingham
This article is written by a student writer from the Her Campus at Nottingham chapter and does not reflect the views of Her Campus.

Artificial intelligence has now become so embedded in daily life that it’s almost invisible. It
edits our photos, completes our sentences, suggests what we should watch next, and
predicts what we might want before we’ve even realised it. In many ways, AI is a genuinely
helpful everyday tool; it saves time, increases accessibility, and can support learning,
creativity, and communication in forms unimaginable a decade ago. However, as AI quietly
expands into almost every app and platform we use, it’s worth asking a more uncomfortable
question: when does convenience turn into intrusion? This isn't an argument for rejecting
technology or fearing innovation, but rather an invitation to be more intentional
about how much of our digital lives we offer up to automated systems, and what we might
gain by occasionally choosing not to.


Most apps now come with AI features switched on by default, including Instagram, Gmail
and Safari. Smart replies, personalised feeds, facial recognition, productivity tracking, and
algorithmic recommendations are presented as neutral improvements, yet they shape
how we communicate, what we see, and how we present ourselves. For students, this
constant mediation can feel subtly overwhelming. Emails start to sound the same when
suggested responses are accepted without thought, social media feeds restrict our
interests, and search engines answer questions before we've had time to research them
ourselves. AI does not just respond to behaviour; it trains it. In a dystopian twist, AI
assistants can now reply to your friends' messages over text. Over time, this encourages faster
decisions, quicker consumption, and less reflection. Turning off certain features in app
settings can interrupt that cycle, creating space for slower, more deliberate interaction with
the online world.


It’s also important to acknowledge that the AI experience is riddled with
inequalities. Because these systems are trained on existing data, they often reproduce cultural
biases, especially around gender, appearance, and productivity. For women, this trend is
apparent in visual tools: beauty filters apply automatically to selfies, prioritising
smooth skin and specific facial proportions, while algorithms reward particular body types and
aesthetics. Together, such features reinforce narrow ideals of appearance. AI-driven wellness and
productivity tools can also reinforce expectations of constant self-optimisation: track your
sleep, your steps, your focus levels, your cycle, your mood. Whilst potentially useful, this
information can simultaneously create pressure to treat the body as a system that should
always be improving, rather than something that fluctuates, rests, and sometimes resists
measurement. Choosing to disable or limit these tools is not about rejecting self-care; it’s
about redefining it.


In academic spaces, AI is increasingly positioned as a shortcut: summarising readings,
generating outlines, even drafting essays. Used carefully, it can support understanding.
Used uncritically, it can undermine the very skills education is meant to develop. Struggling
through a first draft and taking time to articulate ideas are not inefficiencies; they are part of learning. When AI smooths over those moments too quickly, it risks reducing education to
output rather than process. The same applies to creative work. Writing, art, music, and
design all benefit from experimentation and imperfection. Relying less on automated
suggestions can feel harder at first, but it often leads to work that feels more personal and
more meaningful. There is value in effort that doesn’t immediately produce a polished result.

One of AI’s most powerful functions is its ability to hold attention. Recommendation systems
are designed to keep users engaged for as long as possible, which, over time, can make our
digital environments feel strangely repetitive. By turning off personalised recommendations
or limiting data sharing, users can regain a degree of choice. Discovery becomes less
predictable. Silence and boredom return in small ways. While neither is particularly
comfortable, both are often necessary for reflection, curiosity, and independent thought. In a culture that values constant engagement, stepping back can be a quiet form of resistance.

In practice, avoiding AI can mean reviewing app permissions, disabling features that feel unnecessary,
choosing manual over automated options, and being mindful about when AI is genuinely
useful versus when it is simply habitual. These choices are personal and situational. The
goal is not purity, but awareness.


AI will continue to shape the future, and it will undoubtedly bring benefits alongside
challenges. The question is not whether it should exist, but how much control we want it to
have over our attention, creativity, and self-perception. For a generation navigating academic
pressure, digital saturation, and social expectation all at once, learning when to step back
matters. Turning off certain AI features won’t simplify life overnight, but it can create
moments of clarity, spaces where thoughts feel self-directed rather than suggested. In a
world increasingly designed to anticipate us, choosing when not to be anticipated may be
one of the most meaningful decisions we can make.

Hannah Harvey

Nottingham '26

Hannah is a first year English student at the University of Nottingham, and an aspiring journalist. Her favourite topics to write about range from advice and wellness to sociopolitics. In her spare time, she enjoys sewing, reading and club nights with friends!