Dev News: SvelteKit 2.0, State of Rust Survey and AI on Apple

SvelteKit 2.0 was released in late December. The framework for building apps with Svelte now supports Vite 5 and paves the way for Svelte 5, which is expected to be released in 2024.

It also adds support for one “much-requested feature,” the team noted in the SvelteKit 2.0 release blog post. Svelte is calling it shallow routing, and it allows developers “to associate state with a history entry without causing navigation,” the team wrote. It can be used to create modal dialogs that can be dismissed by swiping back, or to pop up views of routes when developers don’t want a full navigation.
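
For a sense of how shallow routing looks in practice, here is a minimal sketch based on the pushState API described in the release notes; the Modal component and the showModal state name are hypothetical placeholders:

<script>
  import { pushState } from '$app/navigation';
  import { page } from '$app/stores';

  function openModal() {
    // Create a history entry with attached state, without navigating
    pushState('', { showModal: true });
  }
</script>

<button on:click={openModal}>Open details</button>

{#if $page.state.showModal}
  <!-- Swiping back (or history.back()) pops the entry and closes the modal -->
  <Modal on:close={() => history.back()} />
{/if}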

The release marked SvelteKit’s one-year launch anniversary.

“In the past year, we’ve seen a number of open source projects like Storybook, Tailwind and Playwright officially support SvelteKit as well as a number of commercial entities like Prismic, Sentry and InLang,” the Svelte team wrote.

The Svelte team recommends updating to the most recent 1.x release first, along with Svelte 4, in order to address any deprecation warnings. Then, upgrade to SvelteKit 2 by running the automated migration tool:

npx svelte-migrate sveltekit-2

State of Rust Survey Open

The Rust Project opened its annual State of Rust survey on Monday; it will remain open until Jan. 15. The survey is designed to gather information about the Rust community and how the Rust Project is performing, and it’s open to anyone interested in or using Rust. Responses are anonymous, though it’s worth noting that the survey takes 10 to 25 minutes to complete.

AI on Apple?

Last week, I shared how Google is making Gemini AI accessible for Android developers. It looks like iOS developers will soon have similar news. MacRumors is reporting that Apple AI researchers say they’ve made a breakthrough in deploying large language models on Apple devices.

“Apple researchers have developed a novel technique that uses flash memory – the same memory where your apps and photos live – to store the AI model’s data,” the site reported Thursday. The article links to a research paper by Apple engineers that explains how flash memory can be used to run AI.

The article summarizes the research, noting that flash storage “is more abundant in mobile devices than the RAM traditionally used for running LLMs.” The method outlined by the researchers combines two techniques to minimize data transfers while maximizing flash memory throughput: windowing, which reuses recently processed data so fewer weights need to be fetched, and row-column bundling, which groups data so it can be read from flash in larger, more contiguous chunks.

“The combination of these methods allows AI models to run up to twice the size of the iPhone’s available memory,” the article noted.
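
As a rough illustration of the windowing idea (a hypothetical sketch, not Apple’s code), the pattern amounts to caching the weights of recently active neurons in RAM so each new token only reads the cache misses from flash:

// Hypothetical sketch of "windowing": weights for recently active neurons
// stay cached in RAM, so each new token only reads cache misses from flash.
const ramCache = new Map();

// Stand-in for a slow flash read of one neuron's weight row.
function readRowFromFlash(neuronId) {
  return new Float32Array(4096); // placeholder weights
}

function loadActiveRows(activeNeuronIds) {
  return activeNeuronIds.map((id) => {
    let row = ramCache.get(id);
    if (row === undefined) {
      row = readRowFromFlash(id); // only misses touch the slow storage
      ramCache.set(id, row);
    }
    return row;
  });
}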

LangChain Top Option for Deploying AI Apps

To find out how developers are building their AI applications, LangChain evaluated metadata from LangSmith, a cloud platform for building and testing large language model applications.

Eighty-five percent of AI apps are built using LangChain. Retrieval is used by 42% of developers in LLM apps; it’s the dominant way to combine LLMs with custom data when handling complex queries, according to the study. It also found that 17% of complex queries involve an agent. Agents let the LLM decide what steps to take, which allows the system to better handle complex queries or edge cases, although the post noted that agents are not especially reliable or performant. That may explain the low adoption of agents, it added.
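
For context, a retrieval setup in LangChain’s JavaScript library looks roughly like the following; this is a sketch assuming the OpenAI integrations, and import paths may differ between versions:

import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RetrievalQAChain } from "langchain/chains";

// Index a few example documents in an in-memory vector store
const store = await MemoryVectorStore.fromTexts(
  ["SvelteKit 2.0 shipped in December.", "The State of Rust survey is open."],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// The chain retrieves relevant documents, then passes them to the LLM
const chain = RetrievalQAChain.fromLLM(new OpenAI(), store.asRetriever());
const res = await chain.call({ query: "When did SvelteKit 2.0 ship?" });
console.log(res.text);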

Also, use of the LangChain Expression Language (LCEL) grew rapidly over the course of the year. Introduced in July, usage rose quickly to 57% by December. LangChain Expression Language “is an easy way to compose components together, making it perfect for creating complex, customized chains,” the post noted.
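
In Python, LCEL chains are composed with the | operator; in the JavaScript library the equivalent is .pipe(). A minimal sketch (again, import paths may vary by version):

import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChatPromptTemplate } from "langchain/prompts";
import { StringOutputParser } from "langchain/schema/output_parser";

// Compose prompt -> model -> output parser into one runnable chain
const chain = ChatPromptTemplate.fromTemplate("Summarize in one line: {input}")
  .pipe(new ChatOpenAI())
  .pipe(new StringOutputParser());

const summary = await chain.invoke({ input: "SvelteKit 2.0 adds shallow routing." });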

Not surprisingly, the study found OpenAI ranked as the top LLM provider, followed by Azure OpenAI. Anthropic ranked third and open source provider Hugging Face placed fourth.
