ChatGPT Sent Users to a Website for a Feature It Didn't Have—So the Founder Built It

by Margarita Armstrong

What do you do when your website is bombarded with uploads it can't process? That's the problem software developer and musician Adrian Holovaty found himself facing when he noticed a strange surge in failed uploads to his company's sheet music scanner.

What he didn't expect was that the culprit was allegedly ChatGPT.

In a recent blog post, the Soundslice co-founder explained that he was reviewing error logs when he discovered that ChatGPT was instructing users to upload ASCII "tabs," a simple musical format used by guitarists and others in lieu of standard notation, into Soundslice to hear audio playback. The problem was, the feature didn't exist. So Holovaty decided to build it.

"To my knowledge, this is the first case of a company developing a feature because ChatGPT is incorrectly telling people it exists," Holovaty wrote.

Launched in 2012, Soundslice is an interactive music learning and sharing platform that digitizes sheet music from pictures or PDFs.

"Our scanning system wasn't intended to support this style of notation," Holovaty wrote. "Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks, until I messed around with ChatGPT myself."

"We've never supported ASCII tab; ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service."

The phenomenon of AI hallucinations is well documented. Since the public launch of ChatGPT in 2022, numerous cases of chatbots, including ChatGPT, Google Gemini, and Anthropic's Claude AI, have presented false or misleading information as fact.

While OpenAI did not address Holovaty's claims directly, the company acknowledged that hallucinations are still a problem.

"Addressing hallucinations is an ongoing area of research," an OpenAI spokesperson told Decrypt. "In addition to clearly informing users that ChatGPT can make mistakes, we're continuously working to improve the accuracy and reliability of our models through a variety of methods."

OpenAI advises users to treat ChatGPT responses as first drafts and verify any critical information through reliable sources. It publishes model evaluation data in system cards and a safety evaluations hub.

"Hallucinations aren't going away," Northwest AI Consulting co-founder and CEO Wyatt Mayham told Decrypt. "In some cases, like creative writing or brainstorming, hallucinations can actually be helpful."

And that's precisely the approach Holovaty embraced.

"We ended up deciding: What the heck? We might as well meet the market demand," he said. "So we put together a bespoke ASCII tab importer, which was close to the bottom of my 'Software I expected to write in 2025' list, and we changed the UI copy in our scanning system to tell people about that feature."

Holovaty did not respond to Decrypt's request for comment.