-
Today, we are thrilled to announce Media Transformations, a new service that brings the magic of Image Transformations to short-form video files, wherever they are stored!
For customers with a huge volume of short video — generative AI output, e-commerce product videos, social media clips, or short marketing content — uploading those assets to Stream is not always practical. Often, the biggest friction to getting started is the prospect of migrating all of that content. Customers want a simpler solution that keeps their current storage strategy and still delivers small, optimized MP4 files. Now you can do that with Media Transformations.
To transform a video or image, enable transformations for your zone, then make a simple request with a specially formatted URL. The result is an MP4 that can be used in an HTML video element without a player library. If your zone already has Image Transformations enabled, then it is ready to optimize videos with Media Transformations, too.
URL format:

```
https://example.com/cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>
```

For example, we have a short video of the mobile in Austin's office. The original is nearly 30 megabytes and wider than necessary for this layout. Consider a simple width adjustment:
Example URL:

```
https://example.com/cdn-cgi/media/width=640/<SOURCE-VIDEO>
```

```
https://developers.cloudflare.com/cdn-cgi/media/width=640/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4
```

The result is less than 3 megabytes, properly sized, and delivered dynamically, so customers do not have to manage the creation and storage of these transformed assets.
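Because the output is a plain MP4, it can be dropped straight into a standard HTML `<video>` element, with no player library. As a minimal sketch, a Worker could serve a page that embeds the transformed video (the zone hostname and source path below are placeholders):

```ts
// A sketch of embedding a transformed video in a plain <video> element.
// The zone hostname and the source video path are placeholders.
export default {
  async fetch(): Promise<Response> {
    const html = `<!doctype html>
<video controls width="640"
  src="https://example.com/cdn-cgi/media/width=640/videos/demo.mp4">
</video>`;
    return new Response(html, {
      headers: { "Content-Type": "text/html;charset=utf-8" },
    });
  },
};
```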
For more information, learn about Transforming Videos.
-
We’ve streamlined the Logpush setup process by integrating R2 bucket creation directly into the Logpush workflow!
Now, you no longer need to navigate multiple pages to manually create an R2 bucket or copy credentials. With this update, you can seamlessly configure a Logpush job to R2 in just one click, reducing friction and making setup faster and easier.
This enhancement makes it easier for customers to adopt Logpush and R2.
For more details, refer to our Logs documentation.
-
You can now use bucket locks to set retention policies on your R2 buckets (or specific prefixes within your buckets) for a specified period — or indefinitely. This can help ensure compliance by protecting important data from accidental or malicious deletion.
Locks give you a few ways to ensure your objects are retained (not deleted or overwritten). You can:
- Lock objects for a specific duration, for example 90 days.
- Lock objects until a certain date, for example January 1, 2030.
- Lock objects indefinitely, until the lock is explicitly removed.
Buckets can have up to 1,000 bucket lock rules. Each rule specifies which objects it covers (via prefix) and how long those objects must remain retained.
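To illustrate what a lock means in practice, here is a sketch of a Worker that attempts to delete an object covered by an active retention rule; R2 rejects the delete until the retention period ends. The binding name and the way the error is surfaced here are assumptions for the example, not from the documentation:

```ts
// Illustrative sketch: a delete against an object covered by an active
// bucket lock rule fails. The MY_BUCKET binding name and the error
// handling below are assumptions for this example.
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    try {
      // Assume a rule retains everything under the logs/ prefix
      await env.MY_BUCKET.delete("logs/2025-02-27.log");
      return new Response("deleted");
    } catch (err) {
      // While the object is retained, the delete is rejected
      return new Response(`delete blocked: ${String(err)}`, { status: 409 });
    }
  },
};
```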
Here are a couple of examples showing how you can configure bucket lock rules using Wrangler:
```sh
npx wrangler r2 bucket lock add <bucket> --name 180-days-all --retention-days 180
```

```sh
npx wrangler r2 bucket lock add <bucket> --name indefinite-logs --prefix logs/ --retention-indefinite
```

For more information on bucket locks and how to set retention policies for objects in your R2 buckets, refer to our documentation.
-
We're excited to announce that new logging capabilities for Remote Browser Isolation (RBI) through Logpush are available in Beta starting today!
With these enhanced logs, administrators can gain visibility into end user behavior in the remote browser and track blocked data extraction attempts, along with the websites that triggered them, in an isolated session.
{"AccountID": "$ACCOUNT_ID","Decision": "block","DomainName": "www.example.com","Timestamp": "2025-02-27T23:15:06Z","Type": "copy","UserID": "$USER_ID"}User Actions available:
- Copy & Paste
- Downloads & Uploads
- Printing
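For anyone consuming these events downstream, here is a typed sketch of the event shape, derived only from the sample above; consult the Logpush documentation for the authoritative schema:

```ts
// Shape of the sample RBI user-action event above. Field meanings are
// inferred from the example; check the Logpush docs for the full schema.
interface RBIUserActionLog {
  AccountID: string;
  Decision: string; // "block" in the sample above
  DomainName: string; // the website that triggered the action
  Timestamp: string; // ISO 8601, e.g. "2025-02-27T23:15:06Z"
  Type: string; // the user action, e.g. "copy"
  UserID: string;
}
```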
Learn more about how to get started with Logpush in our documentation.
-
Access for SaaS applications now includes more configuration options to support a wider array of SaaS applications.
OIDC apps now include:
- Group Filtering via RegEx
- OIDC Claim mapping from an IdP
- OIDC token lifetime control
- Advanced OIDC auth flows including hybrid and implicit flows
SAML apps now include improved SAML attribute mapping from an IdP.
SAML identities sent to Access applications can be fully customized using JSONata expressions. This allows admins to configure the precise SAML statement sent to a SaaS application.
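Access evaluates these expressions server-side when building the SAML statement, but you can prototype an expression locally with the open-source `jsonata` package. A minimal sketch, where the identity shape and the expression are hypothetical:

```ts
// Prototype a JSONata identity mapping locally. The identity object and the
// expression are hypothetical; the real expression is configured on the
// Access application.
import jsonata from "jsonata";

const identity = {
  email: "User@Example.com",
  groups: ["engineering", "engineering-admins", "sales"],
};

// Lower-case the email and forward only the engineering groups
const expr = jsonata(`{
  "email": $lowercase(email),
  "groups": [groups[$contains($, "engineering")]]
}`);

console.log(await expr.evaluate(identity));
// { email: "user@example.com", groups: ["engineering", "engineering-admins"] }
```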
-
We've released a release candidate of the next major version of Wrangler, the CLI for Cloudflare Workers: `wrangler@4.0.0-rc.0`. You can run one of the following commands to install it and be one of the first to try it out:
```sh
npm i wrangler@v4-rc
```

```sh
pnpm add wrangler@v4-rc
```

```sh
yarn add wrangler@v4-rc
```

Unlike previous major versions of Wrangler, which were foundational rewrites ↗ and rearchitectures ↗, Version 4 of Wrangler includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change. Before we release Wrangler v4 and advance past the release candidate stage, we'll share a detailed migration guide in the Workers developer docs. But for the vast majority of cases, you won't need to do anything to migrate — things will just work as they do today. We are sharing this release candidate in advance of the official release of v4, so that you can try it out early and share feedback.
Version 4 of Wrangler updates the version of esbuild ↗ that Wrangler uses internally, allowing you to use modern JavaScript language features, including:
The `using` keyword from the Explicit Resource Management standard makes it easier to work with the JavaScript-native RPC system built into Workers. This means that when you obtain a stub, you can ensure that it is automatically disposed when you exit the scope it was created in:

```js
async function sendEmail(id, message) {
  using user = await env.USER_SERVICE.findUser(id);
  await user.sendEmail(message);
  // user[Symbol.dispose]() is implicitly called at the end of the scope.
}
```

Import attributes ↗ allow you to denote the type or other attributes of the module that your code imports. For example, you can import a JSON module, using the following syntax:
import data from "./data.json" with { type: "json" };All commands that access resources (for example,
wrangler kv
,wrangler r2
,wrangler d1
) now access local datastores by default, ensuring consistent behavior.Moving forward, the active, maintenance, and current versions of Node.js ↗ will be officially supported by Wrangler. This means the minimum officially supported version of Node.js you must have installed for Wrangler v4 will be Node.js v18 or later. This policy mirrors how many other packages and CLIs support older versions of Node.js, and ensures that as long as you are using a version of Node.js that the Node.js project itself supports, this will be supported by Wrangler as well.
All previously deprecated features in Wrangler v2 ↗ and in Wrangler v3 ↗ have now been removed. Additionally, the following features that were deprecated during the Wrangler v3 release have been removed:
- Legacy Assets (using `wrangler dev/deploy --legacy-assets` or the `legacy_assets` config file property). Instead, we recommend you migrate to Workers assets ↗.
- Legacy Node.js compatibility (using `wrangler dev/deploy --node-compat` or the `node_compat` config file property). Instead, use the `nodejs_compat` compatibility flag ↗. This includes the functionality from legacy `node_compat` polyfills and natively implemented Node.js APIs.
- `wrangler version`. Instead, use `wrangler --version` to check the current version of Wrangler.
- `getBindingsProxy()` (via `import { getBindingsProxy } from "wrangler"`). Instead, use the `getPlatformProxy()` API ↗, which takes exactly the same arguments (see the sketch after this list).
- `usage_model`. This no longer has any effect, after the rollout of Workers Standard Pricing ↗.
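As noted above, the `getBindingsProxy()` removal is a mechanical migration. A minimal sketch (the KV binding name is hypothetical):

```ts
// Before (removed in v4): import { getBindingsProxy } from "wrangler";
// After: getPlatformProxy(), which takes exactly the same arguments.
import { getPlatformProxy } from "wrangler";

interface Env {
  MY_KV: KVNamespace; // hypothetical binding from your Wrangler config
}

const { env, dispose } = await getPlatformProxy<Env>();
const value = await env.MY_KV.get("some-key");
console.log(value);

// Clean up the local proxy when finished
await dispose();
```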
We'd love your feedback! If you find a bug or hit a roadblock when upgrading to Wrangler v4, open an issue on the `cloudflare/workers-sdk` repository on GitHub ↗.
-
Radar has expanded its DNS insights, providing visibility into aggregated traffic and usage trends observed by our 1.1.1.1 DNS resolver. In addition to global, location, and ASN traffic trends, we are also providing perspectives on protocol usage, query/response characteristics, and DNSSEC usage.
Previously limited to the `top` locations and ASes endpoints, we have now introduced the following endpoints:

- `timeseries`: Retrieves DNS query volume over time.
- `summary`: Retrieves summaries of DNS query distribution across ten different dimensions.
- `timeseries_groups`: Retrieves timeseries data for DNS query distribution across ten different dimensions.
For the `summary` and `timeseries_groups` endpoints, the following dimensions are available, displaying the distribution of DNS queries based on (a request sketch follows the list):

- `cache_hit`: Cache status (hit vs. miss).
- `dnssec`: DNSSEC support status (secure, insecure, invalid, or other).
- `dnssec_aware`: DNSSEC client awareness (aware vs. not aware).
- `dnssec_e2e`: End-to-end security (secure vs. insecure).
- `ip_version`: IP version (IPv4 vs. IPv6).
- `matching_answer`: Matching answer status (match vs. no match).
- `protocol`: Transport protocol (UDP, TLS, HTTPS, or TCP).
- `query_type`: Query type (`A`, `AAAA`, `PTR`, etc.).
- `response_code`: Response code (`NOERROR`, `NXDOMAIN`, `REFUSED`, etc.).
- `response_ttl`: Response TTL.
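As a request sketch, the dimension becomes part of the endpoint path. The path and response shape below are assumptions to be checked against the Radar API reference, and the token is a placeholder:

```ts
// A minimal sketch, assuming the summary endpoint takes the dimension as a
// path segment (e.g. /radar/dns/summary/protocol). Check the Radar API
// reference for the authoritative path and response schema.
const resp = await fetch(
  "https://api.cloudflare.com/client/v4/radar/dns/summary/protocol",
  { headers: { Authorization: "Bearer <API_TOKEN>" } },
);
const body = await resp.json();
// Expected: the share of DNS queries per transport protocol (UDP, TLS, HTTPS, TCP)
console.log(body.result);
```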
Learn more about the new Radar DNS insights in our blog post ↗, and check out the new Radar page ↗.
-
We've released a new REST API for Browser Rendering in open beta, making interacting with browsers easier than ever. This new API provides endpoints for common browser actions, with more to be added in the future.
With the REST API you can:
- Capture screenshots – Use `/screenshot` to take a screenshot of a webpage from a provided URL or HTML.
- Generate PDFs – Use `/pdf` to convert web pages into PDFs.
- Extract HTML content – Use `/content` to retrieve the full HTML from a page.
- Snapshot (HTML + Screenshot) – Use `/snapshot` to capture both the page's HTML and a screenshot in one request.
- Scrape Web Elements – Use `/scrape` to extract specific elements from a page.
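Each endpoint accepts a JSON body over an authenticated POST. As a sketch, here is `/pdf` called with `fetch` from TypeScript; the account ID and token are placeholders, and a `url` is passed instead of inline HTML. The `/screenshot` example just below shows the same pattern with `curl`:

```ts
// A sketch of the /pdf endpoint from the list above. The body mirrors the
// screenshot example: pass "url" or "html". Account ID and token are
// placeholders.
const resp = await fetch(
  "https://api.cloudflare.com/client/v4/accounts/<accountId>/browser-rendering/pdf",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer <apiToken>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: "https://example.com/" }),
  },
);
// The response body is the rendered PDF
const pdf = await resp.arrayBuffer();
console.log(`received ${pdf.byteLength} bytes`);
```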
For example, to capture a screenshot:
```sh
curl -X POST 'https://api.cloudflare.com/client/v4/accounts/<accountId>/browser-rendering/screenshot' \
  -H 'Authorization: Bearer <apiToken>' \
  -H 'Content-Type: application/json' \
  -d '{
    "html": "Hello World!",
    "screenshotOptions": {
      "type": "webp",
      "omitBackground": true
    }
  }' \
  --output "screenshot.webp"
```

Learn more in our documentation.
-
AI Gateway now includes Guardrails to help you monitor your AI apps for harmful or inappropriate content and deploy safely.
Within the AI Gateway settings, you can configure:
- Guardrails: Enable or disable content moderation as needed.
- Evaluation scope: Select whether to moderate user prompts, model responses, or both.
- Hazard categories: Specify which categories to monitor and determine whether detected inappropriate content should be blocked or flagged.
Learn more in the blog ↗ or our documentation.
-
Workers AI now supports structured JSON outputs with JSON mode, which allows you to request a structured output response when interacting with AI models.
This makes it much easier to retrieve structured data from your AI models, and avoids the (error-prone!) need to parse large unstructured text responses to extract your data.
JSON mode in Workers AI is compatible with the OpenAI SDK's structured outputs ↗ `response_format` API, which can be used directly in a Worker:

```ts
import { OpenAI } from "openai";

interface Env {
  OPENAI_API_KEY: string;
}

// Define your JSON schema for a calendar event
const CalendarEventSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    date: { type: "string" },
    participants: { type: "array", items: { type: "string" } },
  },
  required: ["name", "date", "participants"],
};

export default {
  async fetch(request: Request, env: Env) {
    const client = new OpenAI({
      apiKey: env.OPENAI_API_KEY,
      // Optional: use AI Gateway to bring logs, evals & caching to your AI requests
      // https://developers.cloudflare.com/ai-gateway/providers/openai/
      // baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai"
    });

    const response = await client.chat.completions.create({
      model: "gpt-4o-2024-08-06",
      messages: [
        { role: "system", content: "Extract the event information." },
        {
          role: "user",
          content: "Alice and Bob are going to a science fair on Friday.",
        },
      ],
      // Use the `response_format` option to request a structured JSON output
      response_format: {
        // Set json_schema and provide a schema, or json_object and parse it yourself
        type: "json_schema",
        json_schema: {
          name: "calendar_event",
          schema: CalendarEventSchema,
        },
      },
    });

    // The message content is a JSON string conforming to CalendarEventSchema
    const event = JSON.parse(response.choices[0].message.content ?? "{}");

    return Response.json({
      calendar_event: event,
    });
  },
};
```

To learn more about JSON mode and structured outputs, visit the Workers AI documentation.