Baltimore County State’s Attorney Scott Shellenberger had difficulty finding a similar case when his office looked at charging a man on allegations that he used artificial intelligence to impersonate Pikesville High School Principal Eric Eiswert, leading the public to believe the principal had made racist and antisemitic remarks.

“As we dug into this a little bit more,” Shellenberger told reporters on Thursday, “it seems very clear to me that we may need to make our way down to Annapolis in the legislature next year to make some adaptions to bring the law up to date with the technology that was being used.”

Pikesville High School, he said, was “turned upside down.” Calls from parents and students overwhelmed the front desk, law enforcement increased its presence to address safety concerns, and additional counselors came in to help people process the events, Baltimore County Police reported.

Shellenberger found a charge that was on point but not specific to AI: disrupting school activities, which carries a fine of up to $2,500 and a maximum sentence of six months in jail. Dazhon Darien, 31, of Greektown, is also charged with theft, retaliating against a witness and stalking.

Darien, police allege, faked the voice of the principal and circulated the audio on social media. Law enforcement and AI experts believe that Eiswert’s voice was simulated to make disparaging comments about Black students and the Jewish community.

Law enforcement and elected officials across the United States are grappling with how to address the use, and potential abuse, of AI.

“AI and its easy availability is relatively new,” Shellenberger said in an interview. “And it usually doesn’t take very long for the bad guys to come up with ideas.”

The National Conference of State Legislatures reported that at least 40 states, as well as Washington, D.C., Puerto Rico and the U.S. Virgin Islands, have introduced bills related to AI in 2024. Meanwhile, the Federal Communications Commission on Feb. 8 adopted a ruling making robocalls with AI-generated voices illegal.

In Maryland, legislators passed a bill in the 2019 legislative session that expanded the law related to possession of child sexual abuse material to cover AI-generated images that are “indistinguishable from an actual and identifiable child” under 16.

State lawmakers considered at least two bills in the 2024 legislative session that would have made it a crime to knowingly distribute AI-generated images that are “indistinguishable from another actual and identifiable person” with “intimate parts exposed or while engaged in an act of sexual activity” to harm, harass, intimidate, threaten or coerce and without consent. Neither bill received final passage.

In written testimony, Debbie Feinstein, chief of the Special Victims Division in the Montgomery County State’s Attorney’s Office, wrote that prosecutors must prove under the current law that the whole person in an image is identified or known.

“My office has been unable to prosecute cases where the victim’s head is connected to an unidentifiable naked body that is often in sexually explicit positions,” Feinstein said. “The loophole that the current law provides for abusers must be closed.”

But Stephen Saltzburg, a professor of law at the George Washington University Law School, said there is no reason to believe that existing law is inadequate to deal with crimes involving AI.

If a person had used a tape recorder and spliced words together, he said, the same issues would be present in the case.

“Eventually, legislatures may pass laws specifically focused on AI,” said Saltzburg, who’s a former deputy assistant attorney general in the U.S. Department of Justice Criminal Division. “In the short run, they don’t really need to do it.”

“The reality is this is pretty standard stuff,” he added. “It’s just using a new form of fraud to commit old crimes.”

U.S. Deputy Attorney General Lisa Monaco spoke about AI at the University of Oxford on Feb. 14 and stated that the technology has the potential to be an “indispensable tool to help identify, disrupt, and deter criminals, terrorists and hostile nation-states from doing us harm.”

At the same time, Monaco said, AI is “accelerating risks to our collective security.”

The Department of Justice is applying current legal tools, she said, and looking at where new ones might be needed.

“As it did with cyber, the law governing AI will develop,” Monaco said. “But our existing laws offer a firm foundation.”