
Why AI struggles with "no" and what that teaches us about ourselves

Mariana Caldas   2025-05-23


Over the last few months, I’ve been building some pretty layered automation using Zapier, Ghost, PDFMonkey, and Cloudinary, guided step by step by ChatGPT, and it’s been... eye-opening.

I think AI is the best assistant ever when exploring possibilities to solve a problem, but it occasionally fails in ways that feel surprisingly human. As a former teacher, I’ve come to appreciate two patterns—two things large language models consistently struggle with—that have deep roots in how people think and learn.

Let’s break them down.


1. Negation is hard for both humans and machines

One of the most common mistakes I’ve seen ChatGPT make is with “negative commands.”

For example, I once said:

“Don’t overwrite existing tags unless the user doesn’t have any.”

The result? The tag was overwritten, even when it shouldn’t have been.

Why? Because large language models don’t “understand” logic like a programmer. They predict the most likely sequence of words based on examples from their training data. When phrasing is complex or wrapped in multiple negations, models often pick up on the structure of the sentence without truly grasping the logic.

This is where it gets human.

Children, especially infants and toddlers, also struggle with negative commands. Telling a 2-year-old, “Don’t touch that,” often leads to… them touching it.

Why? Because understanding “don’t” requires holding two ideas in mind:

  • What the action is (“touch that”)
  • That it should not be done

Developmental psychologists have studied this for decades: even adults process negative statements more slowly and with more errors than positive ones.

This parallels how LLMs “misread” complex negation—they’re great with surface forms, but logic wrapped in linguistic twists? That’s still a blind spot.
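
As a concrete illustration, here is the rule from my earlier prompt restated as positive, explicit logic in TypeScript. The names are purely illustrative (they aren’t from Ghost, Zapier, or any specific API); the point is that phrasing the rule positively removes the double negation entirely:

```typescript
// Illustrative types only; not from Ghost, Zapier, or any specific API.
interface Member {
  email: string;
  tags: string[];
}

// The rule, phrased positively: keep any existing tags,
// and only apply the default tags when the member has none.
function applyDefaultTags(member: Member, defaultTags: string[]): Member {
  if (member.tags.length > 0) {
    return member; // existing tags stay untouched
  }
  return { ...member, tags: defaultTags };
}

// The first member keeps "newsletter"; the second gets the defaults.
console.log(applyDefaultTags({ email: "a@example.com", tags: ["newsletter"] }, ["blog"]));
console.log(applyDefaultTags({ email: "b@example.com", tags: [] }, ["blog"]));
```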


2. Memory gets fuzzy when things get long

Here’s another pattern: long conversations with ChatGPT often lead to inconsistent behavior.

You’ll say something, set up a rule, get it working, and 20 minutes later, the AI starts forgetting the rule or contradicting something it already confirmed.

Why?

Large language models have a limited “context window.” GPT-4 can take in a lot of text at once (up to roughly 128k tokens in its larger variants), but the longer the conversation, the more compressed and imprecise that earlier information becomes. It’s like trying to summarize 40 pages of notes and then recall just one detail from page 4: you might miss it.

OpenAI describes this in its technical report on GPT-4: memory is not long-term; it's a temporary window.
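
One rough way to picture that window is as a rolling token budget: once the conversation outgrows it, the earliest turns get dropped or compressed. Here’s a toy TypeScript sketch of the idea, using a crude four-characters-per-token estimate rather than a real tokenizer:

```typescript
interface Turn {
  role: "user" | "assistant";
  text: string;
}

// Very rough estimate (~4 characters per token); a real tokenizer
// such as tiktoken would be far more accurate.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep only the most recent turns that fit inside the budget,
// roughly what happens once a chat outgrows the context window.
function trimToContextWindow(turns: Turn[], maxTokens: number): Turn[] {
  const kept: Turn[] = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = estimateTokens(turns[i].text);
    if (used + cost > maxTokens) break;
    kept.unshift(turns[i]);
    used += cost;
  }
  return kept; // the earliest turns fall off the end
}
```

The rule you carefully set up twenty minutes ago is exactly the kind of early turn that falls off the end.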

Again, this echoes human learning.

When we overload working memory, especially without structured reinforcement, information decays. Educational research shows that our brains retain new concepts best through spaced repetition, simplified input, and direct reinforcement.

Long chats with no clear breaks? That’s like reading 12 chapters of a textbook in one night and hoping it all sticks.


3. “Structured memory” sounds great — but here’s the real story

One thing I wanted to figure out was how to help ChatGPT remember key info across different workflows. At first, I assumed the new Projects feature meant each workspace had its own memory. Not quite.

Here’s how it really works:

ChatGPT’s memory is global, not project-specific. If memory is on, it might remember something you told it (like your name or that you're working on a Ghost theme), but it doesn’t organize those memories by project.

  • The Projects feature is amazing for keeping chats and uploads organized, but memory isn’t isolated to one project versus another.
  • If you go to Settings → Personalization → Manage Memory, you can see what it remembers and delete specific entries—but it’s still one big pool of memory.

So, how do you carry memory across projects?

There’s no native way to export memory and plug it into a new project. But here's what I’ve been doing that actually works:

  1. Export your chat history: Go to Settings → Data Controls → Export Data and you’ll get a ZIP with all your chats.
  2. Save useful logic and notes from past chats into a document.
  3. When starting a new project, upload that document or paste in key info. ChatGPT will use it during that conversation—even if it doesn’t "remember" it forever.

It’s not real memory, but it’s a repeatable way to simulate it.
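
If you want to automate step 2, the export ZIP contains a conversations.json file, and a short script can flatten it into a single notes document to paste or upload into a new project. The shapes below are simplified assumptions about that export format (which can change at any time), so treat this as a sketch rather than a drop-in tool:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Simplified, assumed shapes for conversations.json from the export ZIP;
// the real file nests messages in a "mapping" object and may change shape.
interface ExportedMessage {
  author: { role: string };
  content: { parts?: string[] };
}
interface ExportedConversation {
  title: string;
  mapping: Record<string, { message?: ExportedMessage }>;
}

const conversations: ExportedConversation[] = JSON.parse(
  readFileSync("conversations.json", "utf8"),
);

// Flatten every chat into one markdown document that can be pasted
// or uploaded into a new project as "simulated memory".
const notes = conversations
  .map((conversation) => {
    const lines: string[] = [];
    for (const node of Object.values(conversation.mapping)) {
      const parts = node.message?.content.parts;
      if (node.message && parts && parts.length > 0) {
        lines.push(`**${node.message.author.role}:** ${parts.join("\n")}`);
      }
    }
    return `## ${conversation.title}\n\n${lines.join("\n\n")}`;
  })
  .join("\n\n---\n\n");

writeFileSync("project-notes.md", notes);
```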


What it teaches us

This is what I love most: these quirks in AI aren’t just bugs—they’re mirrors.

  • When LLMs trip over negation, they reveal how language is more than structure—it’s logic in disguise, and logic is never as simple as it looks.
  • When their memory fades, they remind us that attention, reinforcement, and structure matter—not just for machines but also for our own learning.
  • When we build systems that help them remember better, we’re also uncovering what we need to organize complexity in our own minds.

What I’ve learned

Here’s what’s helped me when working with AI (and with people, frankly, haha 🫠):

  • Say what you want to happen—avoid phrasing in terms of what not to do
  • Keep logic simple and sequential
  • Break long workflows into smaller steps or shorter conversations
  • Use memory intentionally—don’t expect it to hold your entire logic stack indefinitely
  • Lean on project-based organization to simulate long-term context

And maybe the biggest lesson?

AI and humans remember things in very different ways.

For us, memory is layered—it’s shaped by emotion, context, repetition, and meaning. We don’t just recall facts; we hold on to stories, mistakes, and feelings. We forget when we’re overwhelmed, but we remember what touches us deeply.

ChatGPT’s memory, by contrast, is more like a list: detached, structured, and factual. It forgets by default unless told otherwise, and it doesn’t “feel” the past.

And yet, watching how AI handles memory—where it helps and where it fails—has taught me a lot about how my own memory works, too:

  • We need clarity and repetition.
  • We remember better when things are meaningful.
  • And just like AI, we benefit from structure, but we add our own human layers on top.

In the end, these tools don’t just assist us, they reflect us.

If this topic resonates with you, I highly recommend watching this short talk:

As the video highlights, we’re not just using AI; we’re working with it, learning alongside it, and shaping what it becomes while reflecting on who we are. If that’s not creativity, I don’t know what is.


Want to go deeper?

I’m thinking about writing a short series on how large language models really work and what that means for everyday people using AI in their projects.

Would that interest you? Leave a comment and let me know. If there’s interest, I’ll dive in 🥽. Talk soon, take care.

Author Bio


Digital Product Manager & Front-End Developer here! <> Hello :) </> I work with Product Management, JavaScript, React.js, Next.js, CMS platforms (WordPress), and Agile methodologies.

