Binghamton University Professor Yu Chen is in the early stages of developing a program that flags images generated by artificial intelligence in hopes of helping the public regain trust in online media.

Chen, a professor of electrical and computer engineering, and his team originally conducted research on network security, specifically focusing on countering deepfake attacks. As AI began to define the digital landscape, he applied his research to the rising prevalence of AI in social media.

“If the information itself is fake, the decision-making is going to be misled,” Chen said in an article by BingUNews. “That’s the main motivation and turning point I had to switch from the physical network infrastructure to digital ecosystem infrastructure.”

Chen aims to create a publicly available app that helps users detect AI deepfakes. In recent years, the use of AI in digital content has fostered greater distrust in the media, with research indicating that only 41 percent of Americans believe that what they read online is accurate and made by humans, according to SQ Magazine, a digital news and analysis publication.

He plans to complement existing AI-detection software, which typically relies on machine learning to analyze media at the pixel level for irregularities in lighting, shadows and composition. By contrast, Chen’s software uses background environmental data to verify where the content originated.

“Our work is based on a different philosophy: We do not look into the content of the image, video or audio,” Chen told BingUNews. “We look at the invisible features embedded into that multimedia, which are generated by the device used to shoot the video or audio, like a camera, camcorder or microphone. Those things, when they make such a record, naturally embed some of what we call an ‘environmental fingerprint.’”

SQ Magazine notes that an estimated 71 percent of social media images are now AI generated, with nine of the 100 fastest-growing YouTube channels in July 2025 using AI to generate their content. A similar increase in AI-generated content has been seen across social media platforms including Instagram, LinkedIn, Reddit and Facebook.

To address concerns over this technology, over 30 states, including New York, Texas, California and Illinois, have introduced or are considering legislation to regulate AI.

In December, New York Gov. Kathy Hochul signed the Responsible AI Safety and Education Act into law, requiring major AI developers to create basic safety and security protocols to prevent AI from assisting illegal activities like creating bioweapons or facilitating automated crime. The RAISE Act also requires AI developers to report any critical safety incidents to state officials within 72 hours.

In 2024, fraudsters used AI technology to pose as senior officials at Arup, a British engineering company. Fake videos and images were used to trick an employee into handing over about $25.6 million.

“They can even just hijack some of the communication channels and they inject some fake information in those things,” Chen said in an interview with Pipe Dream. “For example, you are talking to someone, but actually, during the call, it actually has been switched — you think you’re talking to a friend and at the beginning, it was — but in the middle, it has been cut off and is being hijacked.”

Chen hopes that his research will help the public avoid falling victim to deepfakes by implementing the app directly on phones, as many people are still harmed by scam calls. He believes that this initiative can also benefit the economic sector by improving security.

An independent artificial intelligence research center is coming to the University, the first of its kind at a U.S. public university. The center will be part of New York’s Empire AI project, an initiative to boost AI innovation in the state through partnerships among several New York-based institutions and universities, including Columbia University, Cornell University, New York University, Rensselaer Polytechnic Institute, the State University of New York, the Flatiron Institute and the City University of New York.

Chen also recently received a $50,000 grant from the SUNY Technology Accelerator Fund, part of SUNY’s overarching goal to turn academic research into real-world, commercial uses.

“We hope our technology does not stop there,” Chen said. “We really want that benefit to society [so it] can help people in their daily life. So we want to build that — based on principle, we have validated into some real tool. That is what the state grant is for, so we are now delivering that grant to build some prototype.”