Six months after Binghamton University released its Baxter AI chatbot, students have expressed mixed opinions about the initiative’s design and utility.
Launched in October 2024, the chatbot was created by the Office of the Dean of Students and EdSights, an educational technology company, as a tool for new students to receive support and give feedback on their college experience. According to Amanda Finch, the dean of students, the chatbot has “fielded over 15k text messages and sent out 17 targeted campaign texts” to first-year students. She said the initiative has a 95 percent opt-in and use rate, and that more than half of those students use the chatbot regularly.
“At the end of last semester, Baxter did a temperature check with students regarding their feelings about returning to Binghamton in the spring,” Finch wrote. “Sixty-seven students said they didn’t intend to return or were unsure about returning. After direct outreach to those students by the CARE Team to help navigate issues, 61 of those students (91%) returned for the spring semester.”
Pipe Dream interviewed three students about their experience using the chatbot. All three said the AI asked general questions on topics like their experience at BU, workload and mental health, then usually prompted them to select one of three responses.
Liam Rupprecht, a freshman majoring in geography, shared some of the messages sent by the chatbot with Pipe Dream. One text, dated Feb. 11, asked whether he was enjoying his classes that semester and offered three options: “[1] Yes, I am,” followed by a smiling-face emoji 😁; “[2] Neutral,” with a neutral-face emoji 😐; and “[3] Not at all,” with an unamused-face emoji 😒.
“I guess I answer the questions seriously, but the chatbot’s name in my phone is ‘Big Bax,’ and I usually laugh every time I get a message,” Rupprecht wrote. “I assume it’s a similar case for most students, where this doesn’t really feel like a legitimate or serious resource.”
In a September 2024 interview, Finch said that the chatbot would help the University “hear the student voice directly.” As of April, University services had directly reached out to students following around 17 percent of the targeted messages, according to Finch.
Students can also ask the chatbot questions about campus resources. While Rupprecht and Hatim Husainy, a freshman majoring in political science, have not used this feature, Kris Patel, a freshman majoring in computer science, asked the chatbot trivia questions about the University and for directions to campus locations.
“I asked where the Watson College Advising Office was located, but it gave me this link,” Patel said. “I think this link did help, but on other things, for example, I asked, ‘Where’s the Engineering Commons,’ and it just gave me a map of Binghamton University, which did not help me because it doesn’t show where it is in the Engineering Building.”
Husainy said his UNIV101 class had already connected him with campus resources, but that the chatbot “seems like a good resource” for those “looking for a person to talk to,” while allowing students to interact with “new technologies.”
In its first message, the chatbot informed users that their “texts aren’t anonymous” and that “if I’m ever not the best resource, someone from BU may reach out to help.” The students, however, said they did not realize that University personnel could view their messages.
Husainy said he did not see the disclaimer on his Samsung phone because he had to tap a “View All” button to read the entire message. After learning his messages were not private, he said, “I’m happy I didn’t chat with it then. That kind of freaks me out.”
“Creating deeper support networks and actually incorporating these discussions into all parts of campus would all be helpful even if it’s a minimum, but instead my huge tuition goes to making AI slop,” Rupprecht wrote. “In addition to this, it should be up to the student to decide where to talk about their mental health and who to take it to.”
“The university not explicitly stating who is reading these messages feels like a violation of privacy and security,” he continued.
Finch said in September that students must opt in to keep receiving AI messages, but Rupprecht, Patel and Husainy said there was no explicit opt-in process and that they could only choose to stop receiving messages.
“As we move toward the end of the semester, we continue to evaluate our progress and the impact of the initiative,” Finch wrote. “So far, we are very optimistic.”