Thirteen librarians and library staff from across CUNY’s 25 campuses are spending their Mondays this semester building AI tools together, peer to peer, one session at a time. By now we know what these sessions look like: someone shares a breakthrough, someone hits a wall, Stephen teaches us something new. This one had all of that. And then something else happened.
Kristen shared a tip about Claude’s Projects feature that turned out to be more useful than she expected. Patty ran into 403 Forbidden errors trying to push updates to Alma. Debugging, troubleshooting, the usual texture of a working group that is actually working. Then Stephen walked through RAG (retrieval-augmented generation): his analogy of the judge who knows the law but asks a clerk to pull the relevant precedents, the difference between a vector database and agentic retrieval, and why vendor implementations of these tools are so often disappointing. Good, grounded, practical.
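To make the judge-and-clerk analogy concrete, here is a minimal sketch of the retrieval half of a RAG pipeline: embed a few documents, embed the question, and hand the closest matches to the model as context. The embedding library, model name, and documents are illustrative choices of mine, not anything shown in the session.

```python
# A minimal RAG retrieval step: the "clerk" finds relevant passages so the
# "judge" (the LLM) can reason over them instead of relying on memory alone.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "The library is open Monday through Friday, 9am to 9pm.",
    "Interlibrary loan requests usually arrive within five business days.",
    "Course reserves can be checked out for two hours at a time.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
doc_vectors = model.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity, since vectors are unit length
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Whatever comes back becomes the context the LLM answers from.
context = "\n".join(retrieve("When does the library close?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: When does the library close?"
print(prompt)
```

A vector database does this same similarity search at scale; the point of the analogy is that the model never has to memorize the documents, only reason over what the clerk brings back.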
The RAG discussion led naturally to a demo of something Stephen has been sitting on for months: a reference agent he built for the Graduate Center. It draws on two sources: LibGuides for campus-specific information (hours, policies, research guides) and the Primo Search API for results from the library’s discovery layer. In his own testing, it outperforms the library’s existing chat service at answering reference questions. (The code is on GitHub.) He hasn’t deployed it yet. He’s been having individual conversations with colleagues about what deployment would mean: whether an AI agent should be a patron’s first impression of the library, whether the profession is ready for that. At least three people in our group are building something similar. I noted that we’re a self-selected group, probably a minority within the profession. “Also very true,” Stephen said. He called the situation both desirable and problematic.
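For the curious, here is roughly what that two-source design looks like in code: a minimal sketch in the spirit of Stephen’s agent, not copied from it. The endpoint shape follows the Ex Libris Primo Search API, but the gateway region, view ID, tab, scope, and API key are placeholders you would replace with your institution’s values, and the LibGuides side is reduced to a hard-coded dictionary standing in for harvested guide content.

```python
# A toy two-source reference agent: campus-info questions are answered from
# a local LibGuides-style store, everything else goes to the Primo Search API.
# All credentials and instance values below are placeholders.
import requests

PRIMO_API = "https://api-na.hosted.exlibrisgroup.com/primo/v1/search"
API_KEY = "YOUR_EXLIBRIS_API_KEY"   # placeholder
VIEW_ID = "YOUR_INST:YOUR_VIEW"     # placeholder Primo view ID

LIBGUIDES_FACTS = {  # stand-in for content harvested from LibGuides
    "hours": "Open Monday through Friday, 9am to 9pm (illustrative only).",
    "print": "Printing costs ten cents per page (illustrative only).",
}

def search_primo(query: str, limit: int = 3) -> list[str]:
    """Ask the discovery layer for matching records and return their titles."""
    resp = requests.get(PRIMO_API, params={
        "q": f"any,contains,{query}",
        "vid": VIEW_ID,
        "tab": "Everything",        # tab and scope names vary by institution
        "scope": "MyInstitution",
        "limit": limit,
        "apikey": API_KEY,
    }, timeout=10)
    resp.raise_for_status()
    docs = resp.json().get("docs", [])
    return [d.get("pnx", {}).get("display", {}).get("title", ["(untitled)"])[0]
            for d in docs]

def answer(question: str) -> str:
    """Crude router: keyword match sends campus questions to the local store,
    everything else to discovery."""
    q = question.lower()
    for topic, fact in LIBGUIDES_FACTS.items():
        if topic in q:
            return fact
    return "Possible matches: " + "; ".join(search_primo(question))
```

In a real agent the model itself would decide which source to consult and draft an answer from what comes back; the homework sketch at the end of this post shows that tool-use pattern.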
Then he turned it over to the room. And waited.[1]
And then Will spoke.
Will is a library school student, still early in his studies, participating in a program designed for working librarians. What he said was this: he’s going into the field because of the relational, humanistic pull of it, the thing that happens when a person who needs help encounters another person who can provide it. And he’s nervous about what happens to a field when every job becomes more technical and less about being with people. “What actually is the point?” he asked.
Robin offered a frame I keep returning to: humanity, she observed, spent a long time trying to make humans into computers. The assembly line, regimented productivity, structural consistency. And now we’ve actually built the machines that do those things better than us. Her concern was about the younger generation specifically: people who are still becoming wise and may now have powerful tools that let them appear to arrive at answers without doing the work of getting there. “AI is really great,” she said, “if you’re already wise.”
Jason pushed back gently: AI literacy is fundamentally an information literacy problem, and that’s where librarians have something irreplaceable to offer. Not despite the disruption, but because of it. Anthony, who teaches a foundational reference course at Queens College’s GSLIS, was more direct: getting out in front of this isn’t a choice. It’s a professional obligation.
Ashley brought the political dimension that the rest of us had been circling without quite landing on. She noted that Anthropic had recently refused to comply with government demands that would have opened the technology to mass surveillance and military use. She asked the room: what conversation would we be having right now if they hadn’t? If Anthropic had handled that moment differently, would we have kept using it? That, she said, is also our ethical obligation as librarians. To know these systems well enough to answer that question when it matters.
Shamiana said her main worry isn’t that AI is coming for her job. It’s that access will eventually become unequal. That these tools will be available to some and not others, consolidating advantage rather than distributing it.
Stephen closed the meeting by saying he thought we’d need to keep having this conversation, that AI is going to impact our labor in ways that will vary a great deal and that we can’t fully anticipate. He wasn’t trying to wrap it up neatly, and he didn’t. His homework suggestion for the week: try building a RAG pipeline, something that does agentic retrieval or vector embedding, whatever makes sense for your project.
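For anyone picking up that homework, here is a skeletal sketch of the agentic-retrieval variant: instead of always retrieving up front, the model is given a search tool and decides for itself when to call it. The retriever is a stub you would replace with a real Primo or LibGuides lookup, the client usage follows Anthropic’s tool-use API, and the model name may need updating to whatever is current.

```python
# Agentic retrieval in miniature: the model, not the pipeline, chooses
# when to search. The search_catalog function is a stub.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def search_catalog(query: str) -> str:
    """Stub retriever; swap in a real discovery-layer search."""
    return f"(pretend catalog results for: {query})"

tools = [{
    "name": "search_catalog",
    "description": "Search the library catalog for books and articles.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

messages = [{"role": "user", "content": "Find recent books on information literacy."}]
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute a current model
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

# If the model chose to retrieve, run the tool and send the result back.
while response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    result = search_catalog(**tool_use.input)
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{"type": "tool_result", "tool_use_id": tool_use.id,
                     "content": result}],
    })
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # substitute a current model
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )

print(response.content[0].text)
```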
It was a fitting note to end on. We’d spent part of the session talking about how AI systems retrieve and contextualize information, and the rest of it trying to articulate what humans bring that those systems can’t retrieve or contextualize: judgment, relationship, the particular kind of knowing that comes from being uncertain and staying in the room anyway.
And part of the answer, I think, was the question itself, asked by someone new enough to the profession to still be asking why, not just how. That instinct, Robin would say, is exactly what we need to protect.
We’re six weeks into a 16-week cohort. Post 1 is here. More to come.
[1] Stephen had been working on this. Early in the program, we talked about the facilitation instinct to fill silence: to jump in with a follow-up before anyone has had a real chance to think. He started counting to ten before moving on. Today, when the silence stretched past ten seconds, he messaged me a resigned “ah well” and held it anyway.