Hatch wants to bring back Flash — not the actual player, mind you, but the fun animations and visual effects it enabled. One way the low-code company is accomplishing that is by integrating generative AI into its offering without using a chatbot.
Admittedly, Hatch’s primary audience isn’t frontend developers, although the tooling could save them time, pointed out co-founder Darrin Massena. Massena founded Hatch with Mike Harrington; the two previously started the photo editing service Picnik, which Google acquired in 2010.
“Frontend developers are always looking for a way to save time, something that’s more convenient. So they don’t necessarily want to code everything, if it’s possible,” Massena said. “There’s certainly opportunity there, but that’s not our primary target at this point.”
Bringing Back the Flash
Hatch provides animated templates and low-code tools to create websites and mobile apps that are interactive and … well, fun. Users tend to be creatives such as artists and musicians who want something imaginative and colorful; in other words, not the standard all-business website or mobile app.
“We were pretty heavily influenced by Flash,” Harrington said. “When that went away, it was this big blow, because here was this cross-platform tool that lets you do all this cool stuff, like color transforms, and all this stuff… Everybody kind of hand waved and said HTML5 is going to cover and solve all your problems at the time. Yeah, it did solve some problems, but not in an accessible way.”
Massena and Harrington want to bring back the creativity Flash enabled for non-technical users.
The company recently launched a new generative AI playground that allows users to perform visual scripting. Within the playground, users can select an object, say a star, and choose what they’d like to have happen when an event triggers. For instance, the star might rotate, grow larger or change colors. The user decides, and the AI generates the code to make it so.
[Screenshot: the Hatch AI playground]
Editing is a little like a flow chart crossed with a drag-and-drop visual interface.
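Hatch hasn’t shown exactly what code the playground generates, but conceptually the output is a small event handler attached to an object on the page. Here is a hypothetical sketch in TypeScript using the browser’s Web Animations API; the element id, event and animation values are invented for illustration:

```typescript
// Hypothetical sketch of what a "when clicked, spin and grow the star"
// instruction might compile down to. Nothing here is Hatch's actual
// output; the id and values are invented.
const star = document.getElementById("star");

if (star) {
  star.addEventListener("click", () => {
    // Rotate a full turn and grow over half a second,
    // keeping the final state (fill: "forwards").
    star.animate(
      [
        { transform: "rotate(0deg) scale(1)" },
        { transform: "rotate(360deg) scale(1.5)" },
      ],
      { duration: 500, easing: "ease-in-out", fill: "forwards" }
    );
  });
}
```

The point of the playground is that a user picks the object, trigger and effect from the UI rather than writing a handler like this by hand.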
An AI Accelerant
AI is proving to be a big accelerant for the company because of its ability to generate code, Massena said. Hatch uses OpenAI’s GPT-4 API. They worked with the model in two phases to explore what it could do. The first phase was a proof of concept that lasted about a week.
“I think a good portion happened on a plane flight,” he said. “It is amazing how far you can go with larger models; they have a very big context window.”
Once they were convinced the model could do what they wanted with training, they took on the more serious work of integrating their backend with the OpenAI GPT-4 API, working through questions like how to make the integration robust and the different ways it could fail. The team took roughly a month and a half to work out the details. One big challenge was figuring out how to move beyond chat in building the user interface, which Massena described as “still in the first phase.” Instead of a chatbot, the AI is accessed via dropdown lists and other options.
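Hatch hasn’t published its integration code, but a minimal sketch of that kind of backend call, written in TypeScript against OpenAI’s Node SDK, might look like the following. The function shape, prompt wording and retry policy are all assumptions:

```typescript
// Minimal sketch of asking GPT-4 to generate an event handler from a
// structured (non-chat) UI selection, with basic failure handling.
// The prompts and retry policy are assumptions, not Hatch's code.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateHandler(object: string, event: string, effect: string): Promise<string> {
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const res = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          { role: "system", content: "Return only a JavaScript event handler, no prose." },
          { role: "user", content: `Object: ${object}. When ${event} fires, ${effect}.` },
        ],
      });
      const code = res.choices[0]?.message?.content;
      if (code) return code;
    } catch (err) {
      // Transient API failure: back off briefly and retry.
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
  throw new Error("Code generation failed after retries");
}
```

A production system would also have to validate or sandbox whatever code comes back before running it in a user’s project, which is part of the robustness work Massena describes.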
The AI playground, currently in beta, allows the user to see what happens as the change is created.
“If you’re doing React programming, you’ve got your really powerful IDE, it doesn’t really know about the running state of your program,” Massena said. “We have all the information, we can really accelerate how do you reference existing objects, and go back and forth between the visuals and the logic really fluidly.”
They did explore other GPTs, including open source options, but found that at this point, GPT-4 is still “quite a bit better,” specifically for code generation, Massena said.
That said, even though the solution leverages OpenAI’s GPT, they don’t consider themselves “locked in” because it took so little time to prep the AI, and, fundamentally, GPTs follow a similar pattern, Massena added.
Future Plans for AI
One advantage Hatch enjoys is that the team can see what people are trying to do with the AI tool and then add training to the model for the specific uses people want. They hope to improve the UI to help users understand more about what the AI can and cannot do, Massena added.
“You can actually see as it’s building, how the AI is interpreting what you’ve asked for, and then you can edit it,” Massena explained. “Even if it’s not exactly what you wanted, it gives you a starting point. It becomes an accelerator for really anyone who wants to have something to start with so they’re not starting from scratch.”
Currently, the AI is focused on helping users build interactions on the page, but the Hatch team believes it can do more. For instance, the AI might become a power tool for changing color schemes, so that designers or developers wouldn’t have to manually change a design color from, say, green to orange.
“We imagine that it could take on a higher level more creative role, like take a look at my PDF and make me a page, or you will be able to describe at [a] high level what you want, and it would give you a better starting point,” Massena said. “There’s a lot it can do along those lines, to help with [the] productivity of creating things.”