Stephen waited. Ten seconds. Maybe twelve. The Teams call was silent. Nobody moved to fill it.
And then Will spoke, and the session became something neither of us had planned.
I should back up. A few weeks into this peer mentoring program, I told Stephen he was going too fast. Not with the content—with the pauses. He’d ask the room a question and then, before anyone had a real chance to think, he’d answer it himself or move on. It’s a hard habit to break, especially in virtual sessions where silence feels like a technical problem. But I’ve seen what happens when a facilitator learns to wait: people say things they wouldn’t have said if the space had been filled for them.
Stephen took the feedback seriously. He started counting to ten before moving on. By this past Monday, when he turned a question over to the room and the silence stretched well past ten seconds, he messaged me a quiet “ah well”—and held it anyway.
What rushed in was worth waiting for.
We’d had a full first half. Kristen shared a practical discovery about Claude’s Projects feature—storing files in a Project rather than uploading them fresh each session keeps you from burning through your context limit so quickly, something she’d understood intellectually but hadn’t felt until she tried it. Patty ran into 403 errors trying to push updates to Alma, which turned out to be my doing: I’d given everyone read-only API access while we’re still learning. The sandboxes are next.
Then Stephen walked through RAG—retrieval-augmented generation—with a metaphor I liked: the judge who knows the law but asks a clerk to find the relevant precedents. Vector databases, agentic retrieval, why vendor implementations of these tools so often disappoint. And then, as a kind of live demonstration of everything he’d just explained, he showed us a reference agent he built for the Graduate Center. It draws on LibGuides for campus-specific information and the Primo Search API for discovery, and in his own testing it outperforms the library’s existing chat service. (The code is on GitHub.) He hasn’t deployed it. He’s been having careful conversations about what that would mean—whether an AI agent should be a patron’s first impression of the library, whether the profession is ready.
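The judge-and-clerk metaphor maps cleanly onto code. Below is a minimal sketch of the retrieve-then-generate pattern Stephen described: a toy bag-of-words vectorizer stands in for a real embedding model, a handful of invented snippets stand in for LibGuides content, and the "generation" step is just prompt assembly. None of this is Stephen's actual agent; every document, function name, and detail here is illustrative only.

```python
import math
import re
from collections import Counter

# Invented snippets standing in for LibGuides content (illustrative, not real data).
DOCS = [
    "Interlibrary loan requests are placed through your library account.",
    "The special collections reading room requires an appointment.",
    "Course reserves can be searched by instructor name or course number.",
]

def vectorize(text):
    # Bag-of-words term counts; a real pipeline would use an embedding model.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    # The "clerk": rank documents by similarity to the question, return the top k.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Hand the "judge" (the LLM) only the relevant precedents, then the question.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I request an interlibrary loan?", DOCS)
print(prompt)
```

In a real deployment the retrieval step would query a vector database (or, as in Stephen's agent, live APIs), and the assembled prompt would be sent to a language model rather than printed, but the shape of the pipeline is the same.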
At least three people in our group are building something similar. I noted that we're a self-selected group, probably a minority within the profession. "Also very true," Stephen said. He called it both desirable and problematic.
Then he asked: has any of this changed your mind? And he waited.
Will is a library school student, still early in his studies, the youngest person in a program designed for working librarians. He said he’d barely touched these tools before joining, that curiosity was most of what brought him here. But the more he learned, the more a question nagged at him. He could see that AI handles reference questions efficiently and well. What he kept turning over was what that means for the patron on the other end—whether you can build the curiosity and relationship that makes learning stick if the interaction is frictionless by design. “What actually is the point?” he asked. And then, almost in the same breath: “It’s also really cool to learn how this works.”
I’ve been thinking about that pairing ever since. Not the question and the answer, but the question and the delight sitting right next to each other. That’s not confusion. That’s intellectual honesty. And it opened something in the room.
Robin said that humanity spent a long time trying to make humans into computers—the assembly line, regimented productivity, structural consistency—and now we’ve built machines that do those things better than us. Her worry was for people who are still developing judgment, who now have access to powerful tools that let them appear to arrive at answers without doing the work of getting there. “AI is really great,” she said, “if you’re already wise.”
Jason pushed back gently: understanding what an AI system can and can’t do, what it was trained on, what it retrieves and why, is fundamentally an information literacy question. That’s not a threat to librarianship—it’s an argument for it. Anthony, who teaches a foundational reference course at Queens College’s GSLIS, was more blunt: students are already receiving AI-generated citations they can’t evaluate. Getting out in front of this isn’t optional. It’s a professional obligation.
Ashley brought the political dimension that the rest of us had been circling. The previous Friday, Defense Secretary Pete Hegseth had blacklisted Anthropic after the company refused to remove safeguards against using Claude for mass surveillance or autonomous weapons. The company held its line and accepted the consequences. Ashley’s question was simple and a little destabilizing: what conversation would we be having right now if they hadn’t? Would we still be using these tools? For a room full of people with professional commitments to intellectual freedom, it wasn’t a rhetorical question.
Shamiana said her worry isn’t job loss. It’s that access will eventually become unequal—that these tools will be available to some and not others, compounding existing advantages rather than distributing new ones.
Stephen closed by saying we’d need to keep having this conversation. He wasn’t trying to resolve it, and he didn’t. His homework for the week: try building a RAG pipeline, whatever makes sense for your project.
Here is what I keep coming back to. We spent the first part of the session talking about how AI systems retrieve and contextualize information. We spent the second part trying to articulate something those systems can’t do: sit with a hard question, resist the pull toward a tidy answer, and trust that the silence is doing something.
Stephen learned to count to ten. Will asked something real. The room held it.
That’s what peer learning looks like when it’s working.
We’re six weeks into a 16-week cohort. Post 1 is here. More to come.