While most generative artificial intelligence bots are accessed through a chat interface, chat isn't the only user interface AI can have. Viable is one example of a company that uses AI without resorting to a chatbot interface.
The startup aggregates and analyzes customer feedback for companies. That data can come from online reviews, survey responses, social media and customer service platforms such as Zendesk or Intercom — basically anywhere customers are talking to a company about its service. Viable then applies AI analysis to that customer feedback to produce written reports rather than chat responses, said founder and CEO Daniel Erickson.
“The data comes in, it processes that data, it digs in to find topics and identifies themes within that data set, then analyzes those themes for you, and puts out about 10 paragraphs per theme that we analyze, and it reads like a report,” Erickson, who is also a software engineer, told The New Stack. “When you’re actually looking at the Viable app, what you’re doing is you’re actually reading reports, and they read like a human analyst would have written them.”
It was among the first companies to leverage OpenAI's GPT API, he added.
AI without the Chat
Honeycomb is another example of an AI deployment that doesn’t leverage chat, Erickson noted. Honeycomb uses a natural language interface that allows users to create queries in plain language. The AI then outputs a more technical, SQL-like query, he said. He also foresees other uses for natural language models beyond chatbots.
“I think people are going to do a lot less toggling filters and drop downs and more just typing out what they want to find and they’re going to get that stuff back,” he said. “The other thing that I’ve seen is people often struggle with interacting with these AIs, because there’s a little bit of a learning curve to understand how — I’m not going to claim they think necessarily — but how they ‘think.’”
That’s why it’s really important to provide feedback to customers about the things they’re asking AI to do, he added. To that end, Viable created a prompt coach to aid customers with their queries.
“We built basically this sort of coach thing that goes in and looks at that prompt and says, ‘Here’s how you can improve that prompt to make it easier for the AI to understand and get better output for it,’” he said.
Why Next.js and Node.js
Viable uses the Next.js framework hosted on Vercel to create its user interface and APIs. Next.js makes it easy to spin up new API endpoints and new pages in the UI, Erickson said. That's because in Next.js, creating a new route requires only adding a file to a folder, which is much easier than other open source options like Express, he added.
“It basically just does it,” he said. “So many other frameworks out there, you have to go in and say like, ‘I want my API route to look like this, only accept these things, and really just go in and do the nitty gritty there. Next.js, all I have to do is create a new file, drop the pages [in] /API directory, and all of a sudden I have a new API route.”
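The file-based routing Erickson describes can be sketched like this (the route name and payload are hypothetical, not Viable's code). In the Next.js pages router, any file placed under `pages/api/` automatically becomes an API endpoint at the matching URL, with no route configuration:

```javascript
// pages/api/summary.js — hypothetical file; placing it here makes
// Next.js serve it at /api/summary with no extra configuration.
export default function handler(req, res) {
  if (req.method !== "POST") {
    // Reject anything other than POST requests.
    res.status(405).json({ error: "Method not allowed" });
    return;
  }
  // Echo back a trivial payload; a real handler would do work here.
  res.status(200).json({ received: req.body });
}
```

Compare this with Express, where each route must be registered explicitly on the app object; here the file's location in the project tree is the registration.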
Another benefit of Next.js for Erickson is the ecosystem, which he noted is larger than any other framework out there except maybe React itself. And Next.js uses React under the hood anyway, he added.
“Basically if it’s compatible with React […] and then there’s a bunch of extra libraries that are open source that build around authentication, around different data sources, around different components — like UI components — and libraries,” he said. “There’s just a ton there that the ecosystem is really easy to plug into and it has a lot of tools for me that I don’t have to build myself.”
One of the challenges Viable faced was that its data ingestion pipeline needs to be able to support everything from a stream of data to a monsoon, since customer feedback can be “spiky,” he explained.
“You don’t know if that’s going to be five messages a day or if it’s going to be 500,000 messages a day. It all depends on what your company is doing and what people are talking about,” he said. “Vercel’s serverless architecture and edge functions really help us scale to meet those demands.”
He opted for JavaScript because, as a JavaScript engineer, he'd worked with the Node.js runtime environment since 2009, so it was part of his go-to toolbox for writing code. It's also very good at dealing with asynchronous data processing, he added. What makes it good is its asynchronous execution model: an event loop that, rather than blocking while waiting on I/O, schedules work to resume when data is ready.
“It can pause the execution of a process,” he said. “It’s pulling in more data, meaning it can actually multitask a lot better than a lot of other programming languages out there. You have to think about multitasking less with Node than you do when you deal with other things.”
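The "multitasking" Erickson describes can be sketched as follows (a minimal illustration of Node's event loop, not Viable's code). Three simulated I/O calls run concurrently: the loop pauses each awaiting task and resumes it when its data arrives, so the total time is roughly the longest single delay, not the sum of all three:

```javascript
// Simulate an I/O-bound call (e.g. an API request) that resolves
// after `ms` milliseconds. Hypothetical stand-in for real I/O.
const fakeFetch = (id, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(`result-${id}`), ms));

async function processAll() {
  const started = Date.now();
  // All three "requests" are in flight at once; the event loop
  // resumes each one as its timer fires.
  const results = await Promise.all([
    fakeFetch("a", 100),
    fakeFetch("b", 150),
    fakeFetch("c", 50),
  ]);
  return { results, elapsed: Date.now() - started };
}
```

Run sequentially, the three calls would take about 300 ms; concurrently, the whole batch completes in roughly 150 ms, the duration of the slowest call.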
Caveats When Developing with AI
One thing developers should be aware of before they dive into developing with AI is that most AI applications require support for real-time streaming, Erickson said.
“If you’ve chatted with ChatGPT or anything when you do that, you can actually see the text streaming in,” he said. “It doesn’t like having a little loading indicator and then typing all the text in at once. You need to see the text coming in as if the computer was typing into you, and that is because of the latency.”
The models take “forever” to generate text, so it’s important to get that first bit of text to the user as fast as possible, he added. Next.js and Vercel’s AI tools have made that easier than manually coding streaming support, he added.
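The manual approach the tools abstract away looks roughly like this (a sketch using the standard Web Streams API, not Viable's implementation): read the response body chunk by chunk and render each piece as it arrives, instead of waiting for the full completion:

```javascript
// Consume a streamed text response incrementally, invoking onChunk
// for each piece as it arrives so the UI can render it immediately.
async function consumeStream(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // e.g. append the new text to the page
  }
  return full;
}

// Hypothetical usage with fetch (endpoint name is illustrative):
//   const res = await fetch("/api/generate", { method: "POST" });
//   await consumeStream(res.body, (t) => appendToPage(t));
```

Libraries such as Vercel's AI SDK wrap this loop (plus error handling and React integration) so developers don't hand-roll it for every endpoint.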
Loraine Lawson is a veteran technology reporter who has covered technology issues from data integration to security for 25 years. Before joining The New Stack, she served as the editor of the banking technology site Bank Automation News.