On August 16 2025 I did a keynote with this title at the FrOSCon conference in Bonn, Germany. The room held a few hundred seats and every single one was occupied, with people also filling up the stairs and standing along the walls. Awesome!
Seven years ago I wrote about how a hundred million cars were running curl and as I brought up that blog post in a discussion recently, I came to reflect on how the world might have changed since. Is curl perhaps used in more cars now?
Yes it is.
With the help of friendly people on Mastodon, and a little bit of Googling, the current set of car brands known to have cars running curl contains 47 names. Most of the world’s top brands:
I think it is safe to claim that curl now runs in several hundred million cars.
How do we know?
The curl license seen in a Volvo
This is based on curl or curl’s copyright being listed in documentation and/or shown on screen on the car’s infotainment system.
The manufacturers need to provide that information per the curl license. Even if some of course still don’t.
Some brands are missing
For brands missing in the list, we don’t know their status. There are many more car brands that we suspect probably also run and use curl, but for which we have not found enough evidence. If you find some, please let me know!
What curl are they running?
These are all using libcurl, not the command line tool. It is not uncommon for them to run fairly old versions.
What are they using curl for?
I can’t tell for sure as they don’t tell me. Presumably though, a modern car does a lot of Internet transfers for all sorts of purposes and curl is a reliable library for doing that. Download firmware images, music, maps or media. Upload statistics, messages, high-scores etc. Modern cars are full-blown computers plus mobile phones combined, so of course they transfer data.
Brands, not companies
The list contains 47 brands right now. They are however manufactured by a smaller number of companies, as most car companies sell cars under multiple different brands. So maybe 15 car companies?
Additionally, many of these companies buy their software from a provider who bundles it up for them. Several of these companies probably get their software from the same suppliers. So maybe there are only 7 different ones?
I have still chosen to list and talk about the brands because those are the consumer-facing names used in everyday conversations, and they are the names we mere mortals are most likely to recognize.
Not a single sponsor or customer
Ironically enough, while curl runs in practically every new modern car that comes out from the factories, not a single one of the companies producing the cars or the software they run is a sponsor of curl or a customer of curl support. Not one.
An Open Source sustainability story in two slides
47 car brands using curl
0 car brands sponsoring or paying for curl support
Yes they are allowed to
We give away curl for free for everyone to use at no cost and there is no obligation for anyone to pay anyone for this. These companies are perfectly in their rights to act like this.
You could possibly argue that companies should think about their own future and make sure that the dependencies they rely on and would like to keep using also survive, so that they can keep depending on those components going forward as well. But obviously that is not how this works.
I want curl to remain Open Source and I really like providing it in a way, under a liberal license, that makes it possible for it to get used everywhere. I mean, if we measure by how widely used a piece of software is, I think we can agree that curl is a top candidate.
I would like the economics and financials around the curl project to work out anyway, but maybe that is a utopia we can never reach. Maybe we eventually will have to change the license or something to entice or force a different behavior.
The curl command line option --write-out, or just -w for short, is a powerful and flexible way to extract information from transfers done with the tool. It was introduced already back in version 6.5 in early 2000.
This option takes an argument in which you can add “variables” that hold all sorts of different information, from time information, to speed, sizes, header content and more.
Some users have outright started to use the -w output for logging the performed transfer, and when you do that there has been a little detail missing: the ability to output the time the transfer completed. After all, most log lines feature the time in one way or another.
Starting in curl 8.16.0, curl -w knows the time and now allows the user to specify exactly how to output that time in the output. Suddenly this output is a whole notch better for logging purposes.
%time{format}
Since log files also tend to use different time formats, I decided I did not want to pick a fixed format and risk that a huge portion of users would consider it the wrong one, so I went straight for strftime formatting: the user controls the time format using the standard %-flags, with different ones for year, month, day, hour, minute, second etc.
Some details to note:
The time is provided using UTC, not local.
It also supports %f for microseconds, which is a POSIX extension already used by Python and possibly others
%z and %Z (for time zone offset and name) had to be fixed to become portable and identical across systems and platforms
Example
Here’s a sample command line outputting the time the transfer completed:
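curl -w 'completed at %time{%Y-%m-%d %H:%M:%S.%f} UTC\n' -o saved https://curl.se/

(The particular strftime flags and the URL here are just one choice of mine; pick whatever combination matches your log format.)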
In the early days of curl development we (I suppose it was me personally but let’s stick with we so that I can pretend the blame is not all on me) made the possibly slightly unwise decision to make the -X option change the HTTP method for all requests in a curl transfer, even when -L is used – and independently of what HTTP responses the server returns.
In curl 8.16.0, we introduce a different take on the problem, or better yet, a solution really: a new command line option that offers a modified behavior. Possibly the behavior people thought curl had all along.
Just learn to use --follow going forward (in curl 8.16.0 and later).
This option works fine together with -X and will adjust the method in the possible subsequent requests according to the HTTP response code.
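As a sketch of what that can look like (the method, request body and URL are just examples I picked): with this command, a redirect response makes curl choose the method for the follow-up request based on the response code instead of stubbornly repeating PATCH.

curl --follow -X PATCH -d '{"name":"demo"}' https://example.com/api/resource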
A long time ago I wrote separately about the different HTTP response codes and what they mean in terms of changing (or not) the method.
--location remains the same
Since we cannot break existing users and scripts, we had to leave the existing --location option working exactly like it always has. This option is thus mutually exclusive with --follow, so only pick one.
QUERY friendly
Part of the reason for this new option is to make sure curl can follow redirects correctly for other HTTP methods than the good old fashioned GET, POST and PUT. We already see PATCH used to some extent, but perhaps more important is the work on the spec for the new QUERY method. It is a flavor of POST, but with a few minor yet important differences. Possibly enough for me to write a separate blog post about, but right now we can stick to it being “like POST”, in particular from an HTTP client’s perspective.
We want curl to be able to do a “post” but with a QUERY method and still follow redirects correctly. The -L and -X combination does not support this.
curl can be made to issue a proper QUERY request and follow redirects correctly like this:
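curl --follow -X QUERY -d 'search terms' https://example.com/find

(The URL and the request body are of course just placeholders I made up.)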
From March 20, 1998 when the first curl release was published, to this day August 5, 2025 is exactly 10,000 days. We call it the curl-10000-day. Or just c10kday. c ten K day.
We want to celebrate this occasion by collecting and sharing stories. Your stories about curl. Your favorite memories. When you used curl for the first time. When curl saved your situation. When curl rescued your lost puppy. What curl has meant or perhaps still means to you, your work, your business, or your life. We want to favor and prioritize the good, the fun, the nostalgic and the emotional stories but it is of course up to your discretion.
We have created this thread in curl’s GitHub Discussion section for this purpose, so please go there and submit your story or read what others have shared.
Back in 2012, the Happy Eyeballs RFC 6555 was published. It details how a sensible Internet client should proceed when connecting to a server. It basically goes like this:
Give the IPv6 attempt priority, then with a delay start a separate IPv4 connection in parallel with the IPv6 one; then use the connection that succeeds first.
We also tend to call this connection racing, since it is like a competition where multiple attempts compete trying to “win”.
In a normal name resolve, a client may get a list of several IPv4 and IPv6 addresses to try. curl would then pick the first, try that and if that fails, move on to the next etc. If a whole address family fails, it would start the other immediately.
v2
The updated Happy Eyeballs v2 RFC 8305 was published in 2017. It focused a lot on having the client start its connections earlier in the process, preferably while DNS responses are still arriving instead of waiting for the whole hostname resolve phase to end before starting.
This is complicated for lots of clients because there is no established (POSIX) API for doing such name resolves, so for a portable network library like libcurl we could not follow most of the new advice in this spec.
QUIC added a dimension
In 2012 we did not have QUIC on the map, and not really in 2017 either, so those eyeballing specs did not include such details.
Even later, HTTP/3 was documented to require an alt-svc response header before a client would know if the server speaks HTTP/3 and only then could it attempt QUIC with it and expect it to work.
While curl supports the alt-svc response approach, that is information arriving far too late for many users – and it is especially damning for a command line tool as opposed to a browser, since lots of users just do single shot transfers and then never get to use HTTP/3 at all.
To combat that drawback, we decided that adding QUIC to the mix should add a separate connection competition. To allow faster and earlier use of QUIC.
Start the QUIC-IPv6 connect attempt first, then in order the QUIC-IPv4, TCP-IPv6 and finally the TCP-IPv4.
To users, this typically makes for a very smooth operation where the client just automatically connects to the “best” alternative without having to make any particular choices or decisions. It gracefully and transparently adapts to situations where IPv6 or UDP have problems etc.
v3 and HTTPS-RR
With the introduction of HTTPS-RR, there are also more ways introduced to get IP addresses for hosts and there is now ongoing work within the IETF on making a v3 of the Happy Eyeballs specification detailing how exactly everything should be put together.
We are of course following that development to monitor and learn how we should adapt and polish curl connects further.
Parallel more
While waiting on the happy eyeballs v3 work to land in a document, Stefan Eissing took it upon himself to further tweak how curl behaves in an attempt to find the best connection even faster. Using more parallelism.
Starting in curl 8.16.0, curl will start the first IPv6 and the first IPv4 connection attempts exactly like before, but then, if none of them have connected after 200 milliseconds curl continues to the next address in the list and starts another attempt in parallel.
An illustration
Let’s take a look at an example of curl connecting to a server; let’s call the server curl.se. The orange numbers show the order of things after the DNS response has been received.
curl connection racing
The first connect attempt starts using the first IPv6 address from the DNS response. If it has not succeeded within 200 milliseconds…
The second attempt starts in parallel, using the first IPv4 address. Now two connect attempts are running and if neither have succeeded in yet another 200 milliseconds…
A second IPv6 connect attempt is started in parallel, using the second IPv6 address from the list. Now three connect attempts are racing. If none of them succeeds in another 200 milliseconds…
A second IPv4 race starts, using the second IPv4 address from the list.
… and this can continue, if this is a really slow or problematic server with many IP addresses.
Of course, each failed attempt makes curl immediately move to the next address in the list until all alternatives have been tested.
Add QUIC to that
The illustration above can be seen as “per transport”. If only TCP is wanted, there is a single such race going on. With potentially quite a few parallel attempts in the worst cases.
If instead HTTP/3 or a lower HTTP version is wanted, curl first starts a QUIC connection race as illustrated and then after 200 milliseconds it starts a similar TCP race in parallel to the QUIC one! Both run at the same time, the first one to connect wins.
A little table to illustrate when the different connect attempts start when either QUIC or TCP is okay:
Time (ms)  QUIC                    TCP
0          Start IPv6 connect      –
200        Start IPv4 connect      Start IPv6 connect
400        Start 2nd IPv6 connect  Start IPv4 connect
600        Start 2nd IPv4 connect  Start 2nd IPv6 connect
800        Start 3rd IPv6 connect  Start 2nd IPv4 connect
So when trying to connect to a server that does not respond and that has more than two IPv6 and two IPv4 addresses, there could be nine connection attempts running after 801 milliseconds.
200 ms can be changed
The 200 milliseconds delay mentioned above is just the default time. It can easily be changed both using the library or the command line tool.
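Presumably via the knobs that already exist for this timeout: --happy-eyeballs-timeout-ms on the command line and CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS in libcurl. For example, to wait only 100 milliseconds before starting the next parallel attempt:

curl --happy-eyeballs-timeout-ms 100 https://curl.se/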
I’m convinced a lot of people have not yet figured out that curl has supported parallel downloads for six years already.
Provided a practically unlimited number of URLs, curl can be asked to get them in a parallel fashion. It then makes sure to keep N transfers alive for as long as there are N or more transfers left to complete, where N is a custom number, 50 by default.
Concurrently transferring data from potentially a large number of different hosts can drastically shorten transfer times and who doesn’t prefer to complete their download job sooner rather than later?
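As a quick illustration (the URL range is made up), this downloads a hundred files with up to 50 transfers in flight at any given moment:

curl --parallel -O "https://example.com/files/[1-100].dat"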
Limit connections per host
At times however, you may want to do a lot of transfers, and you want to do them in parallel for speed, but maybe you prefer to limit how many connections curl should use per each hostname among all the URLs?
This per-host limit is a feature libcurl has offered applications for a long time and now the time has come for curl tool users to also enjoy its powers.
Per host should perhaps be called per origin if we spoke web lingo, because it rather limits the number of connections to the same protocol + hostname + port number. We call that host here for simplicity.
To set a cap on how many connections curl is allowed to use for each specific server use --parallel-max-host [number].
For example, if you want to download ten million images from this site, but never use more than six connections:
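curl --parallel --parallel-max-host 6 -O "https://curl.se/img/[1-10000000].jpg"

(The URL pattern is of course just an invented placeholder for whatever set of URLs you actually want.)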
Pay special attention to the exact term: this limits the number of connections used to each host. If the transfers are done using HTTP/2 or HTTP/3, they can be done using many streams over just one or a few connections so doing 50 or 200 transfers in parallel should still be perfectly doable even with a limited number of connections. Not so much with HTTP/1.
Ships in 8.16.0
This command line option will become available in the pending curl version 8.16.0 release.
We have always had a custom command line option parser in curl. It is fast and uncomplicated and gives us the perfect mix of flexibility and function. It also saves us from importing or using code with another license.
In one aspect it has behaved slightly differently from many other command line parsers: the way it accepts arguments to long options.
Long options are the options provided using a name that starts with two dashes and are often not single-letters. Example:
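curl --user-agent "curl/2000" https://example.com/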
The example above tells curl to use the user agent curl/2000 in the transfer. The argument provided to the --user-agent option is provided separated with a space.
When instead using the short version of the same option, the argument can be specified with a space in between or not:
curl -A curl/2000 https://example.com/
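curl -Acurl/2000 https://example.com/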
A common paradigm and syntax style for accepting long options in command line tools is the “equals sign” approach. When you provide an argument to a long option you do this by appending an equals sign followed by the argument to the option; with no space. Like this:
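curl --user-agent="curl/2000" https://example.com/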
This example uses double quotes but they are of course not necessary if there is no space or similar in the argument.
Bridging the gap
To make life easier for future users, curl now also supports this latter style – starting in curl 8.16.0. With this syntax supported, curl accepts a more commonly used style and should therefore cause fewer surprises for users, making it easier to write curl command lines.
I emphasize that this change is an improvement for future users, because I really don’t think it is a good idea for most users to switch to this syntax immediately. This is of course because all the older curl versions that are still widely used around the world do not support it.
I think it is better if we wait a year or two until we start using this option style in curl documentation and example command lines. To give time for users to upgrade to a version that has support for it.
Downloading data from a remote URL is probably the single most common operation people do with curl.
Often, users then add various additional options to the command line to extract information from that transfer but may also decide that the actually fetched data is not interesting. Sometimes they don’t get the accurate meta-data if the full download is not made, sometimes they run performance measurements where the actual content is not important, and so on. Users sometimes have reasons for not saving their downloads.
They do downloads where the actual downloaded content is tossed away. On GitHub alone, we can find almost one million command lines doing such curl invocations.
curl of course offers multiple ways to discard the downloaded data, but the maybe most straight-forward way is to write the contents to a null device such as /dev/null on *nix systems or NUL: on Windows. Like this:
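curl -o /dev/null https://example.com/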
The command line above is perfectly valid, works fine and has been doing so for decades. It does however have two drawbacks:
Lack of portability. curl runs on most operating systems and most options and operations work identically, to the degree that you can often copy command lines back and forth between machines without thinking much about it. Outputting data to /dev/null is however not terribly portable and trying that operation on Windows for example will cause the command line to fail.
Performance. It may not look like much, but completely avoiding writing the data instead of writing it to /dev/null makes benchmarks show a measurable improvement. So if you don’t want the data, why not do the operation faster rather than slower?
The shell redirect approach has the same drawbacks.
Usage
The new option is used as follows, where you need one --out-null occurrence per URL whose output you want to discard:
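curl --out-null https://example.com/one --out-null https://example.com/two

(The URLs are just placeholders; the point is that each URL gets its own --out-null.)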
I hope that by now most readers of my blog have understood that curl, and libcurl specifically, is an architecture with a transfer core with a set of different backends plugged in. Backends powered by different third party libraries.
The exact set of backends used in a particular build is decided by the person that builds curl.
Which backends curl supports varies over time (and platform). We like adding support for more backends and letting users decide which ones to use, as this allows us to approach it with a survival-of-the-fittest attitude. What does not work in the long run, or what isn’t actually used, we can deprecate and remove again. Ideally this helps us select the better ones for the future.
HTTP/3
For the last few years curl has supported the HTTP/3 protocol powered by one out of four different backends:
nghttp3 + ngtcp2
quiche
nghttp3 + OpenSSL-QUIC
msh3 + msquic
(We still label all of these except the first listed combination as experimental.)
Dropping msh3
In this quartet, there is one option that stands out a little: the last one. The msh3 powered backend was brought in and merged into the curl source tree a few years ago with the hope that it would end up being a good choice for people on Windows, since it is the only option in the list that can be built to use the native Windows TLS solution SChannel.
Unfortunately, this work was never finalized. It never worked correctly in curl and the API and architecture of msh3 make it quirky and cumbersome to integrate – and quite frankly we cannot seem to drum up any interest from people to test or work on improving this backend.
As we have three other working backends, all of which also can build and run on Windows, we see no benefit in dragging msh3 along. In fact, there is a cost in maintenance, in keeping the build working and the tests running etc, that we would rather avoid. In particular as we seem to be doing all that for virtually no gain.
I want to stress that I don’t think there is anything wrong with msh3 nor its underlying msquic library. They simply have not been made to work properly in curl.
Updated backend map
The msh3 backend has now been removed from git in the current master branch and this is how the HTTP/3 offering will look in the coming curl 8.16.0 release.