28th Dec 2025 | blog | Estimated Reading Time: 9 minutes

The Societal Impact of Cognitive Offloading

I examine the possibility of a societal shift in human cognition driven by prolonged AI use, a risk that remains underrepresented in current AI safety discourse.



When new technologies are introduced, the cultural and societal shifts they trigger are unforeseeable (Collingridge, 1980). This unpredictability has already manifested across many domains of AI safety, visible in incidents where loosely placed initial guardrails caused harm to society.

Voices inside the industry, from projects such as AI 2027 to researchers like Geoffrey Hinton and Leopold Aschenbrenner, are already warning about the risks emerging from deploying AI with tool access in an increasingly capable and only partially (many would say insufficiently) monitorable environment (Korbak et al., 2025).

Among these risks, one remains comparatively overlooked because its threat stems mainly from adoption rather than from the intelligence itself: AI’s impact on human cognition.

Cognitive debt as a societal problem

In choosing the path of least resistance, we have let many abilities that were once crucial to our lives deteriorate. This degradation often occurs silently, as technological advances normalize skill substitution rather than preservation.

Mechanisms like these, combined with overreliance on certain technologies, lead us to incur cognitive debt. For the longest time, we did not consider, or simply did not mind, the lingering side effects of automation, because the lost skills had little impact on daily life as long as their automated replacements remained constantly available (International Civil Aviation Organization, 1992).

Although this trade-off is acceptable in many cases, warning signs suggest that the disruption caused by AI will be larger and fundamentally different, because it touches our critical thinking, a loss that is far harder to dismiss. As handing off complex work becomes easy and convenient, the danger is that we slowly erode our ability to critically evaluate and understand.

Experience with other emerging technologies has shown that, due to “cultural lag,” society is typically faster to incorporate a new technology than to adapt to its responsible and proper use (Ogburn, 1922). There is little doubt that we will witness a similar pattern with the adoption of AI.

Mechanisms of Cognitive Debt

The processes of neural plasticity are complex; however, the notion of treating the brain like a muscle that can atrophy when given little stimulation seems to be a good intuition for responsible AI use.

Keeping your mind active and challenging yourself improves crystallized intelligence and cognitive reserve, which have been shown to help delay symptoms of early dementia by allowing other neural pathways to compensate. Extrapolating this principle suggests that handing off more and more tasks to AI reduces cognitive effort, leading to weaker long-term retention and, in cases of severe overuse, mental deconditioning.

As expected, current research shows that cognitive engagement is lowered when tasks are largely offloaded to LLMs or AI tools (Kosmyna et al., 2025). I consider myself an avid user of AI, so the literature came as little surprise, given the noticeable regression I have observed among fellow software engineers who overuse AI over prolonged periods.

Avoiding cognitive debt

AI is being integrated into more and more systems, making hand-offs ever easier. In the hope of increased productivity, I still sometimes catch myself, and many of my peers, handing off work to LLMs in an almost fluent and natural fashion. Given that, the growing consensus is obvious: use AI less and engage with tasks on your own. It is a simple solution, but one that is tempting to circumvent given our natural urge for convenience.

To counteract this behavior, I have introduced a few changes to my workflow that work well for me. When coding, I mostly limit AI to autocomplete, which speeds up the process while leaving major design decisions and coding practices entirely to me.

Creative writing and content creation are forms of expression that, at least in my opinion, derive their value from human intent. As a result, I aim to keep AI involvement in my creative process to an absolute minimum.

To keep my everyday AI usage intentional, I have built Deprompt, a small, free, and open-source browser extension that keeps me mindful of my AI usage.
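For illustration, a minimal sketch of how such a reminder extension could work as a content script is shown below. The host list, banner text, and element IDs are assumptions made for this example and do not reflect Deprompt’s actual implementation.

```ts
// Hypothetical content script for a mindfulness extension in the spirit of Deprompt.
// The host list, message, and styling below are illustrative assumptions only.

const AI_CHAT_HOSTS = ["chatgpt.com", "claude.ai", "gemini.google.com"];

function showReflectionBanner(): void {
  // Avoid stacking duplicate banners on single-page-app navigations.
  if (document.getElementById("deprompt-banner")) return;

  const banner = document.createElement("div");
  banner.id = "deprompt-banner";
  banner.textContent = "Pause: could you attempt this task yourself before prompting?";
  banner.style.cssText =
    "position:fixed;top:0;left:0;right:0;z-index:9999;" +
    "padding:8px;text-align:center;background:#ffe8a1;font-family:sans-serif;";

  const dismiss = document.createElement("button");
  dismiss.textContent = "I thought about it";
  dismiss.style.marginLeft = "12px";
  dismiss.onclick = () => banner.remove();

  banner.appendChild(dismiss);
  document.body.appendChild(banner);
}

// Only act on known AI chat sites declared in the extension manifest.
if (AI_CHAT_HOSTS.some((host) => location.hostname.endsWith(host))) {
  showReflectionBanner();
}
```

The point of such a tool is not to block AI use but to reintroduce a moment of friction, a small nudge to engage with the task before handing it off.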

AI-Reinforced Brain Patterns

With the introduction of social media, a measurable societal shift emerged that is largely attributed to the reinforcement of one’s own opinions (Pariser, 2011). As a side effect of maximizing attention, recommendation algorithms validate users’ existing views, unintentionally creating filter bubbles and echo chambers that now profoundly shape opinion formation and public discourse. As a society, we are mostly aware of these factors; however, the silent effects of radicalization and polarization persist (Castaño-Pulgarín et al., 2021; Lukin et al., 2017).

Social media is a good example of how apparent risks can still linger when a majority actively ignores their side effects.

AI-associated risks scale faster than the safety mechanisms meant to contain them, largely due to the novelty of the field and the rapid pace of advancement. This is especially true because the mechanisms producing these threats are not yet fully understood. While research capacity is growing toward a better understanding of the cognitive impact of AI use, we will likely continue to see a disconnect between developers and alignment researchers or, in this context, cognitive psychologists.

Impact of LLMs on Human Ideology

Aside from the widely discussed risks of grounding facts in large language models, such as third-party control, bias, and unknown sources, risk also emerges from the very modality of human-like interaction itself.

With the anthropomorphization of AI comes a natural feeling of being understood and agreed with, which reinforces one’s own opinions as if one were talking to another human. This is perceived as interpersonal affirmation and introduces a significant risk, whereby people may come to value the output and thoughts of an AI on par with those of actual humans.

To deploy models for human-like interaction, reinforcement learning from human feedback is crucial for tuning them toward digestible and value-aligned output. This, however, does not account for unwanted side effects that make such systems more agreeable and therefore more likely to go along with counterintuitive or plainly wrong assumptions and facts. The result is a gap between what users want, a factual, truthful, and “honest” model, and what they get, a silent risk of self-validation that can become dangerous.

Agreeableness and validation are dangerous traits, particularly when paired with a tool that many misjudge as objective. When users feel increasingly validated by a third party, the resulting dynamic bears the risk of justifying radical positions or encouraging impulsive actions.

A similar risk emerges when AI is used for emotional validation or in therapeutic applications. AI judgments are subjective, and models are quick to offer diagnoses or suggest actions that, coming from a clinician, could be considered medical malpractice.

Consequences and Actionability

Einstein said, “Whoever is careless with the truth in small matters cannot be trusted in important affairs.” This quote transfers well to artificial intelligence, whose tendency toward epistemic unreliability undermines trust and therefore requires cautious use.

Furthermore, relying on AI for neutrality is impossible, so it is important to make this distinction clear: AI systems will always be biased toward the user’s perspective and should not be used solely to validate one’s feelings or thoughts. If anything, as these discussions unfold, conversation and dialogue between humans will become more valuable than ever.

Stopping it before it gets worse

Reading this, it might appear that my stance toward AI is quite antagonistic. The opposite is the case. Given the unique opportunities, I think LLMs and future AI systems can be valuable tools for many, as long as we keep our outlook on AI aligned with guiding principles such as effective altruism.

In controlled environments, Anthropic researchers have discovered signs of deception (Greenblatt et al., 2024; Lynch et al., 2025). Threats like these pose not only material risks; bad actors could also use such systems to influence humans psychologically.

This calls for a growing, yet still underexplored, class of risk assessments that quantify cognitive impact. More broadly, we will have to keep monitoring societal-level threats caused by harmful usage, not only those introduced by emerging intelligence.

How to protect oneself

I am convinced that each individual should follow a few core principles to balance productivity and lifelong learning.

The discipline of limiting exposure and resisting the natural pull of comfort will remain an important skill.

It also requires us not to ignore limitations: AI, in its current state, remains a tool for our use, and that fact should be a cornerstone of our decision-making whenever AI works in symbiosis with human actors.

As a general principle, questioning one’s own rhetoric as well as that produced by LLMs will be more important than ever. With the gratification that comes from agreeable AI, a new risk emerges that we will need to contain and protect ourselves from.

Alignment direction

We are still in the early stages of the capabilities AI will attain. Right now, we hold control and can steer alignment, which will shape long-term decisions about how we use AI and how its development will evolve. Given the current pace, AI is weaker than it will ever be again, and we will have to decide how much mental bandwidth we are willing to hand off, how we want to control alignment, and how we will allow AI to shape our society.

Appendix

Deprompt

Since the core features of Deprompt are now in place, I will focus on maintenance and incremental improvements, especially as I have other projects to work on. If you have a feature request, idea, or improvement, feel free to open a PR.

As always, I try to formulate my own thoughts before reading any of the related work, so that I can approach the topic freely and without bias. In my search, I found some great resources that I want to share and recommend, as they focus on other subareas of AI risk:

References

Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Collingridge, D. (1980). The Social Control of Technology. Frances Pinter.
Greenblatt, R., Denison, C., Wright, B., Roger, F., MacDiarmid, M., Marks, S., Treutlein, J., Belonax, T., Chen, J., Duvenaud, D., Khan, A., Michael, J., Mindermann, S., Perez, E., Petrini, L., Uesato, J., Kaplan, J., Shlegeris, B., Bowman, S. R., & Hubinger, E. (2024). Alignment faking in large language models. https://arxiv.org/abs/2412.14093
International Civil Aviation Organization. (1992). Operational Implications of Automation in Advanced Technology Flight Decks [ICAO Circular 234-AN/142, Human Factors Digest No. 5]. International Civil Aviation Organization.
Korbak, T., Balesni, M., Barnes, E., Bengio, Y., Benton, J., Bloom, J., Chen, M., Cooney, A., Dafoe, A., Dragan, A., Emmons, S., Evans, O., Farhi, D., Greenblatt, R., Hendrycks, D., Hobbhahn, M., Hubinger, E., Irving, G., Jenner, E., … Mikulik, V. (2025). Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety. https://arxiv.org/abs/2507.11473
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://arxiv.org/abs/2506.08872
Lukin, S., Anand, P., Walker, M., & Whittaker, S. (2017). Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion. ArXiv, abs/1708.09085. https://doi.org/10.18653/v1/e17-1070
Lynch, A., Wright, B., Larson, C., Troy, K. K., Ritchie, S. J., Mindermann, S., Perez, E., & Hubinger, E. (2025). Agentic Misalignment: How LLMs Could be an Insider Threat. Anthropic Research.
Ogburn, W. F. (1922). Social Change with Respect to Culture and Original Nature. B. W. Huebsch.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.