After reading yet another article about how bad AI is and how it (or some flavors of it) can lead to sycophancy, we decided to tell a different story. We see the sycophancy too, but we have a different way of looking at it. Hear us out.
This idea fits naturally with threads we’ve discussed already in Travelmarx: AI as private companion vs. shared artifact, entropy of chats, and the importance of curation. We believe there is a social layer of AI that isn’t talked about enough in the doom-and-gloom articles. Most commentary about AI assistants assumes a private relationship between a person and a machine. But some of the most interesting uses appear when the assistant is dragged into the open, into kitchens, living rooms, and conversations between people.
A Third Voice at the Table
The first time I asked a question out loud to an assistant while sitting next to my partner, it felt… off. Not wrong exactly, but exposed. As if I had broken some small, unspoken rule about how conversations are supposed to work. Questions go to people. Devices stay in the background.
I could ask anything quietly, of course. That felt normal. But saying it out loud, so that someone else could hear the question and hear the answer, was something else entirely.
Phase 1: The Private Whisper
At first, the assistant lived in a kind of side channel.
A question here, a clarification there. A quick check on a fact or a translation. It was efficient, almost invisible. The conversation with my partner continued on one track, and the conversation with the assistant happened on another.
Two parallel threads. No overlap.
In hindsight, this is also where some of the unease around AI probably comes from. The interaction is private, unexamined, and easy to get lost in. You can ask, refine, and ask again. No friction. No second opinion unless you go looking for one.
Phase 2: Bringing It Into the Room
The shift happened almost accidentally.
Instead of quietly checking something, I asked out loud. My partner heard the question, then the answer. There was a noticeable pause. He was slightly hurt and surprised that I was talking to the phone instead of to him. That was followed by a look that said, "What a stupid question; ask it better, you are leading too much!"
Once we got over those initial hurdles, our reactions to the assistant moved to "That doesn't sound right!" or "Are you sure? Check again." or "What about...?"
And just like that, the assistant had entered the conversation.
Initially as an intrusion, then as something to ignite a discussion and even push against. We questioned the answer, rephrased the prompt, asked again. The device was still there, but it had become less of a private tool and more of a shared reference point.
Something had changed, and it wasn’t the technology.
Phase 3: The Assistant as a Third Participant
Over time, this became more natural.
We’d both ask questions. Sometimes to my assistant, sometimes to his. Sometimes we’d compare answers. Occasionally, we’d ignore them entirely and keep talking.
What emerged wasn’t a dependence on the assistant, but a different rhythm of conversation:
- We have a question or thought.
- We ask the assistant and it offers something.
- One or both of us disagrees.
- The real conversation starts.
The answer is rarely the end point. It’s a starting point.
In that sense, the assistant behaves less like a search engine and more like a third voice at the table. Not a person, not an authority, but a participant that can be questioned, corrected, or simply set aside.
Why It Felt Strange
Looking back, the initial hesitation makes more sense.
Asking a question out loud to an assistant reveals your thinking process in real time. It’s a bit like showing your notes before you’ve decided what you think. There’s also something faintly performative about it. You’re not just asking; you’re asking in front of someone.
And maybe there’s a deeper instinct at play. Conversations, at least the ones we value, are supposed to be human. Bringing in a device feels like introducing an outsider.
But that assumes the device replaces the conversation. In practice, it can do the opposite.
The Conversation After the Answer
The most interesting part isn’t what the assistant says. It’s what happens after it stops talking.
We’ll put the phone down and keep going. The answer lingers, gets reshaped, sometimes discarded entirely. What remains is ours: the shared interpretation, the disagreement, and the insight that comes from talking it through.
It reminds me a bit of how we use our Scrapbook assistant. The value isn’t just in retrieving an answer, but in what that answer connects and sparks next.
Here, the “retrieval” is happening in real time, in conversation, between two people.
A Different Way to Use the Tool
There’s a lot of discussion about assistants creating echo chambers, encouraging passivity, or leading people into private conversational loops. Those risks are real enough.
But they seem to depend on one key condition: that the interaction stays private.
When the assistant is brought out into the open, something changes. The answers are no longer final. They are provisional, subject to immediate scrutiny. They become part of a shared space rather than a closed loop.
It doesn’t eliminate the risks, but it shifts them.
You might compare this to earlier technologies:
- calculators in classrooms
- Wikipedia during dinner conversations
- smartphones during debates
At first they felt disruptive; eventually they became shared cognitive tools.
Closing Thought
The moment we started asking questions out loud, the assistant stopped being the point. It became a way to keep talking to each other.
And when the device is set down and the conversation continues, you realize that the assistant was never really the conversation at all, just the thing that helped move it along.