Deno 2.0, Angular Updates, Anthropic for Devs, and More

The Deno team introduced a new Deno for Enterprise program Wednesday, alongside the release of Deno 2.0.

Deno for Enterprise includes priority support, direct access to their engineers, guaranteed response times, and priority for subscribers’ feature requests.

Deno 2.0, also released Wednesday, focuses on enabling Deno to be deployed at scale.

“This means seamless interoperability with legacy JavaScript infrastructure and support for a wider range of projects and development teams,” the team wrote in a blog post announcing the release. “All without sacrificing the simplicity, security, and ‘batteries included’ nature that Deno users love.”

Deno 2 supports Next.js, Astro, Remix, Angular, SvelteKit, QwikCity and other frameworks, the team added. It incorporates native support for package.json and node_modules, as well as a stabilized standard library and monorepo support, the team wrote.

It is also backward compatible with Node and npm.

On top of that, there’s a slew of improvements to existing features.
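The package.json support means Deno 2 can sit directly in an existing Node project. A minimal configuration sketch, based on the Deno 2 documentation (check the current config reference for the exact accepted values):

```jsonc
// deno.json — tell Deno to create and use a Node-style node_modules directory,
// populated from the project's existing package.json via `deno install`
{
  "nodeModulesDir": "auto"
}
```

With that in place, imports that previously resolved only under Node are meant to keep working unchanged.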

The blog post also outlines what developers can expect from the Deno 2.1 release, although it doesn’t specify when that release will be available.

Angular To Modify effect() API

Angular identified improvements to its effect() API, thanks to developer feedback during its preview phase. Angular team members Alex Rickabaugh and Mark Thompson shared two planned changes:

  1. Removing allowSignalWrites in version 19. “To encourage good patterns, our initial design for the effect() API prohibited setting signals unless the allowSignalWrites flag was explicitly set,” they wrote. “Through developer feedback and observing real-world usage, we’ve concluded that the flag is not providing enough value. We’ve found that it wasn’t effective at encouraging good patterns, and ended up discouraging usage of effect() in cases where it would be reasonable to update signals.”
  2. Significant changes to the timing of when effects run. Previously, effects were queued and scheduled independently as microtasks, but now effects run as part of the component hierarchy during change detection, they wrote. There are a few use cases where this may impact your projects, they warn, such as effects against view query results and toObservable() of input signals, which now emit earlier than before, affecting the timing of Observable chains.

“When testing this change at Google, we fixed around 100 cases where the timing change meaningfully impacted code,” Rickabaugh and Thompson wrote. “Around half of these were test-only changes, and in a few cases the timing difference led to more correct application behavior.”
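The kind of write the flag used to gate is easier to see with a tiny, framework-free sketch of signal/effect semantics. This is an illustration only, not Angular’s implementation: an effect derives one signal from another, which previously required allowSignalWrites.

```typescript
type Effect = () => void;
let activeEffect: Effect | null = null;

// A readable, writable reactive value that tracks which effects read it.
interface Signal<T> {
  (): T;
  set(next: T): void;
}

function signal<T>(initial: T): Signal<T> {
  let value = initial;
  const subscribers = new Set<Effect>();
  const read = (() => {
    if (activeEffect) subscribers.add(activeEffect); // record the reader
    return value;
  }) as Signal<T>;
  read.set = (next: T) => {
    if (Object.is(next, value)) return;
    value = next;
    for (const fn of [...subscribers]) fn(); // re-run dependent effects
  };
  return read;
}

function effect(fn: Effect): void {
  const runner: Effect = () => {
    const prev = activeEffect;
    activeEffect = runner; // reads inside fn subscribe this runner
    try {
      fn();
    } finally {
      activeEffect = prev;
    }
  };
  runner();
}

const count = signal(1);
const doubled = signal(2);
effect(() => doubled.set(count() * 2)); // a signal write inside an effect
count.set(5);
console.log(doubled()); // → 10
```

In Angular’s preview API this pattern threw unless the flag was set; the planned change simply allows it.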

Anthropic Message Batches API

Anthropic released a public beta of a Message Batches API late last week that allows developers to submit batches of up to 10,000 queries, with each batch processed in less than 24 hours, the company said.

The API costs 50% less than standard API calls, the Anthropic team wrote in a blog post about the new Message Batches API.

“This makes processing non-time-sensitive tasks more efficient and cost-effective,” the team wrote.

The ability to process so many queries at once makes the API better at tasks that require analyzing large amounts of information, such as customer feedback or translating large documents.

Also, instead of building systems to manage many requests, developers can send a batch of queries to Claude for handling, a company spokesman said. He added that the batch processing plus the lower costs makes it possible to do things that previously were too expensive, such as analyzing an entire document archive.

The API currently supports Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku. Support for batch processing for Claude on Google Cloud’s Vertex AI will be available soon as well, and customers using Claude in Amazon Bedrock can use batch inference, the team added.
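As a sketch of what a batch submission looks like, the snippet below builds a request body of the shape Anthropic documented at launch for the beta endpoint. The field names, model ID, and helper function are assumptions for illustration; check the current API reference before relying on them.

```typescript
// One entry per query in the batch; custom_id lets you match results
// back to requests when the batch finishes.
interface BatchEntry {
  custom_id: string;
  params: {
    model: string;
    max_tokens: number;
    messages: { role: "user" | "assistant"; content: string }[];
  };
}

// Hypothetical helper: wrap a list of prompts into a batch request body.
function buildBatch(
  prompts: string[],
  model = "claude-3-haiku-20240307",
): { requests: BatchEntry[] } {
  if (prompts.length > 10_000) {
    throw new Error("A batch holds at most 10,000 queries");
  }
  return {
    requests: prompts.map((content, i) => ({
      custom_id: `request-${i}`,
      params: { model, max_tokens: 1024, messages: [{ role: "user", content }] },
    })),
  };
}

const batch = buildBatch([
  "Summarize this support ticket.",
  "Translate this paragraph to French.",
]);
// POST this body to the Message Batches endpoint with your API key;
// results are available for download within 24 hours.
```

Because the whole batch is handed off in one call, the developer-side queueing and retry machinery the spokesman mentioned largely disappears.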

Price varies slightly by model. Claude 3 Opus is the most expensive, at $7.50/MTok for batch input and $37.50/MTok for batch output, while Claude 3 Haiku is the least expensive, at $0.125/MTok for batch input and $0.625/MTok for batch output, according to the blog post.
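At those rates, a back-of-the-envelope cost estimate is straightforward; the token counts below are illustrative, and the prices are the per-MTok figures quoted above.

```typescript
// Per-MTok batch prices quoted in the blog post (USD).
const PRICES_PER_MTOK = {
  "claude-3-opus": { input: 7.5, output: 37.5 },
  "claude-3-haiku": { input: 0.125, output: 0.625 },
} as const;

function batchCostUSD(
  model: keyof typeof PRICES_PER_MTOK,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICES_PER_MTOK[model];
  return (inputTokens / 1e6) * p.input + (outputTokens / 1e6) * p.output;
}

// A full 10,000-query batch averaging 2,000 input and 500 output tokens
// per query on Haiku is 20M input and 5M output tokens:
console.log(batchCostUSD("claude-3-haiku", 20_000_000, 5_000_000)); // → 5.625
```

Running the same 10,000 queries through the standard API would cost twice as much, per the 50% discount described above.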

Sonar’s New Tools Clean Up AI-Generated Code

Sonar, which makes tools for ensuring code quality and security, announced two new AI-driven solutions last week: Sonar AI Code Assurance and Sonar AI CodeFix.

Sonar AI Code Assurance is designed to improve the quality of code produced by generative AI. It analyzes the codebase for potential issues to ensure the code meets standards of quality and security.

Sonar offered the new tool as a way to address a pain point it saw in AI-driven development.

“AI is transforming the way developers work, streamlining processes, and reducing the toil associated with writing code,” Sonar CEO Tariq Shaukat said in a prepared statement. “As the adoption of AI coding assistants grows, however, we are seeing a new issue emerge: code accountability. AI-generated code needs review by developers, but accountability for doing this is increasingly diluted. As a result, we’re seeing the review step frequently being shortchanged.”

Sonar AI CodeFix enhances Sonar’s offering with AI to deliver a better developer experience. It allows developers to resolve issues detected by Sonar’s code analysis engine with a single click, directly within their workflow, the company stated.

The features are currently available for both SonarQube and SonarCloud.
