#Curl

"Unfortunately in the bank where I work it is not easy to change the version, it requires lots of resources to retest," says a reporter who uses a ten-year-old #curl version with a suspicious bug... The problem is not open source.

Don’t trust, verify

Software and digital security should rely on _verification_, rather than trust. I want to strongly encourage more users and consumers of software to verify curl. And ideally require that you could do at least this level of verification of other software components in your dependency chains.

## Attacks are omnipresent

With every source code commit and every release of software, there are risks. Also entirely independent of those. Some of the things a widely used project can become the victim of include…

* _Jia Tan_ is a skilled and friendly member of the project team but is _deliberately_ merging malicious content disguised as something else.
* An established committer might have been breached unknowingly and now their commits or releases contain tainted bits.
* A rando convinced us to merge what looks like a bugfix but is a small step in a long chain of tiny pieces building up a planted vulnerability or even backdoor.
* Someone blackmails or extorts an existing curl team member into performing changes not otherwise accepted in the project.
* A change by an established and well-meaning project member that adds a feature or fixes a bug mistakenly creates a security vulnerability.
* The website on which tarballs are normally distributed gets hacked and now evil alternative versions of the latest release are provided, spreading malware.
* Credentials of a known curl project member are breached and misinformation gets distributed appearing to be from a _known_ and _trusted source_. Via email, social media or websites. Could even be this blog!
* Something in this list is backed up by an online deep-fake video where a known project member seemingly repeats something incorrect to aid a malicious actor.
* A tool used in CI, hosted by a cloud provider, is hacked and runs something malicious.
* While the primary curl git repository has a downtime, someone online (impersonating a curl team member?) offers a temporary “curl mirror” that contains tainted code.
In the event any of these would happen, they could of course also happen in combinations and in a rapid sequence.

## You can verify

curl, mostly in the shape of libcurl, runs in tens of billions of devices. Clearly one of the most widely used software components in the world. People ask me how I sleep at night given the vast amount of nasty things that _could_ occur virtually at any point. There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.

If even just a few users verify that they got a curl release signed by the curl release manager, and they verify that the release contents are untainted and only contain bits that originate from the git repository, then we are in a pretty good state. We need enough _independent_ outside users to do this, so that one of them can blow the whistle if anything at any point would look wrong.

I can’t tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification.

## We must verify

The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is _the real thing_. The secure and safe curl. We must do _a lot_ to make sure that whatever we land in git is okay. Here’s a list of activities we do.

1. we have a consistent code style (invalid style causes errors). This reduces the risk for mistakes and makes it easier to debug existing code.
2. we ban and avoid a number of “sensitive” and “hard-to-use” C functions (use of such functions causes errors)
3. we have a ceiling for complexity in functions to keep them easy to follow, read and understand (failing to do so causes errors)
4. we review all pull requests before merging, both with humans and with bots. We link back commits to their origin pull requests in commit messages.
5. we ban use of “binary blobs” in git to not provide means for malicious actors to bundle encrypted payloads (trying to include a blob causes errors)
6. we actively avoid base64 encoded chunks as they too could function as ways to obfuscate malicious contents
7. we ban most uses of Unicode in code and documentation to avoid easily mixed up characters that look like other characters (adding Unicode characters causes errors)
8. we document everything to make it clear how things are supposed to work. No surprises. Lots of documentation is tested and verified in addition to spellchecks and consistent wording.
9. we have thousands of tests and we add test cases for (ideally) every functionality. Finding “white spots” and adding coverage is a top priority. curl runs on countless operating systems and CPU architectures, and you can build curl in billions of different configuration setups: not every combination is practically possible to test.
10. we build curl and run tests in over two hundred CI jobs that are run for every commit and every PR. We do not merge commits that have unexplained test failures.
11. we build curl in CI with the most picky compiler options enabled and we never allow compiler warnings to linger. We always use `-Werror`, which converts warnings to errors and fails the builds.
12. we run all tests using valgrind and several combinations of sanitizers to find and reduce the risk for memory problems, undefined behavior and similar
13. we run all tests as “torture tests”, where each test case is rerun to have every invoked fallible function call fail once each, to make sure curl never leaks memory or crashes due to this.
14. we run fuzzing on curl: non-stop as part of Google’s OSS-Fuzz project, but also briefly as part of the CI setup for every commit and PR
15. we make sure that the CI jobs we have for curl never “write back” to curl. They access the source repository read-only and even if they would be breached, they cannot infect or taint source code.
16. we run `zizmor` and other code analyzer tools on the CI job config scripts to reduce the risk of us running or using insecure CI jobs.
17. we are committed to always fixing reported vulnerabilities in the following release. Security problems never linger around once they have been reported.
18. we document everything and every detail about all curl vulnerabilities ever reported
19. our commitment to never breaking ABI or API allows all users to easily upgrade to new releases. This enables users to run recent security-fixed versions instead of legacy insecure versions.
20. our code has been audited several times by external security experts, and the few issues that have been detected in those were immediately addressed
21. we mandate two-factor authentication on GitHub for all committers

All this is done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow this. Require this for all your dependencies.

## Not paranoia

We plan for the event when someone actually wants and tries to hurt us and our users really badly. Or when that happens by mistake. A successful attack on curl can in theory reach _widely_. This is not paranoia. This setup allows us to sleep well at night. _This is why users still rely on curl after thirty years in the making._

## Documented

I recently added a verify page to the curl website explaining some of what I write about in this post.
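The release-verification step the post asks outside users to perform can be sketched in shell. For a real release you would fetch the tarball, its detached signature and the release manager's public key from curl.se; the file names below are placeholders, and a local stand-in file replaces the tarball so the checksum mechanics can be shown self-contained:

```shell
# Real signature check (placeholders, artifacts come from curl.se):
#   gpg --verify curl-X.Y.Z.tar.gz.asc curl-X.Y.Z.tar.gz
# Self-contained demonstration of the checksum step:
set -eu
printf 'pretend tarball contents\n' > release.tar.gz  # stand-in artifact
sha256sum release.tar.gz > CHECKSUMS                  # what the publisher ships
sha256sum -c CHECKSUMS                                # what the verifier runs
```

Comparing the verified tarball contents against a checkout of the release tag in git is then what closes the loop back to the repository.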

Don’t trust, verify

daniel.haxx.se/blog/2026/03/26/dont-tru...

#curl


PlaneAlert ICAO: 5083C0 Tail: URCQV Flt: VKA809
Owner: VulkanAir
Aircraft: Antonov An-26 B
2026-03-26 09:07:55 CET
AN26 GotItWhereItCounts Budmo Curl
adsb planefence planealert by kx1t - link

curl - Verify

This new page on the curl website explains how you can (should?) verify #curl, and a little of what we do to verify what we do.

We can never be 100% safe, but we can try.

https://curl.se/docs/verify.html

curl distro discussion 2026 A command line tool and library for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, MQTTS, POP3, POP3S, RTMP, RTMP...

This is the day of the #curl distro meeting 2026.

github.com/curl/curl/wiki/curl-dist...


The Wild Calls

#blackandwhite #nature #plant #leaf #curl #beckon #garden #bwnature #blackandwhitephotography #blackandwhitenature

Screenshot of German-speaking OSGeo4W install window

In the latest release of OSGeo4W they list the #curl license as "non-open-source"... 🤔


Best Explanation of #Gradient, #Divergence and #Curl #Physics #Mathematics
youtube.com/watch
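For reference, these are the three operators the linked video covers, in standard Cartesian vector-calculus notation:

```latex
\nabla f = \left(\frac{\partial f}{\partial x},\,
                 \frac{\partial f}{\partial y},\,
                 \frac{\partial f}{\partial z}\right),
\qquad
\nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x}
                        + \frac{\partial F_y}{\partial y}
                        + \frac{\partial F_z}{\partial z},
\qquad
\nabla \times \mathbf{F} =
\left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z},\;
      \frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x},\;
      \frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)
```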


Ever wanted to flash a Linux distro directly from the internet without saving the image locally? This guide shows you how with `curl`, `wget`, and `dd` – but beware of the live system trap!

thepixelspulse.com/posts/how-to-flash-os-wi...

#curl #wget #dd
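The core of the guide's approach is streaming the download straight into `dd`. The device path and URL below are placeholders (double-check the target device before running anything like this for real); the runnable part writes to a scratch file instead of a disk so nothing is at risk:

```shell
# Real form of the pipeline (DANGEROUS - /dev/sdX and the URL are placeholders):
#   curl -L https://example.org/distro.iso | sudo dd of=/dev/sdX bs=4M status=progress
# Harmless demonstration of the same pipe shape, with a local file standing
# in for the image and a regular file standing in for the device:
set -eu
printf 'fake ISO payload\n' > image.iso
cat image.iso | dd of=target.img bs=1M 2>/dev/null
cmp -s image.iso target.img && echo "write verified"
```

Verifying the written bytes against the source, as the last line does, is the same sanity check you would want after flashing a real device.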

docs/VERIFY.md: verify curl by bagder · Pull Request #21085 · curl/curl Don't trust, verify! Explains how you can verify curl.

Don't trust, verify!

Explains how you can verify #curl.

https://github.com/curl/curl/pull/21085


RE: https://hachyderm.io/@kees/116282745861595200

Fun to see the Linux kernel follow in #curl's footsteps! 😎 (we removed the last `strncpy` from curl in late 2025)

curl: remove NTLM, SMB and local crypto implementations

Later this year, we are set to remove NTLM, SMB and local crypto implementations from #curl

https://curl.se/mail/lib-2026-03/0026.html

PHP Solution for Service Platform Interaction PHP & Web Development Projects for $250-750 USD. I’m looking for a robust PHP-based solution that can interact with existing user accounts on a service platform, navigate



#API #Backend #Development #cURL #JavaScript #MySQL #PHP #Software #Architecture #Web #Development

curl goes git

Just a few days ago the curl project turned twelve years old, and I decided that it was time for us to ditch our trusty old CVS setup and switch over to use git instead for source code control.

## Why Switch at All

I’ve been very content with CVS over the years and in our small project we don’t really have any particularly weird or high demands on the version control software. Lately (like in recent years) I’ve dipped my toes into various projects that have been using git, and more and more over time I’ve learned to appreciate the little goodies that git does that CVS simply cannot. I’m then not even speaking about branches or merges etc that git does a whole lot better and easier than CVS; I’m in fact even more in love with git’s way to ease handling of diffs sent by email and its great way of keeping track of authors separately from the committer etc. `git am` and `git commit --author` are simply two very handy tools missing in CVS.

## Why Git

So if we want to switch from CVS to another tool, what would we choose? That wasn’t really the question in my case so I didn’t answer it. In my case, it was rather that I’ve been using git in several projects and it is used in some of the biggest projects I work with, so it was some of git’s features I wanted. I didn’t consider any of the other distributed version tools as quite frankly: they wouldn’t be much better for me than what CVS already is. I want to reduce the number of different tools I need, and I’m quite sure anyway that git is one of the top contenders even if I would do an actual comparison. So the choice to go git was quite selfish and done by me, but I felt that quite a few guys in the curl community supported this decision and very few actually believed remaining with CVS was a better idea. The fact that git itself uses libcurl for its HTTP access of course also proves its good taste! 🙂

## How did the conversion go

Very easily and swiftly. First, as I mentioned above we never used branches much so we basically had a linear development with a set of tags. I did an rsync of the full repo to get a local copy to work with, then I ran `git cvsimport` on that to create a new repo. I did run it a couple of times to make sure I had done a correct mapping of all CVS user names to their git equivalents. Converting >10 years of CVS commits took roughly 10 minutes on my desktop machine so it wasn’t even that tedious. Once I had a local repo created with all authors looking good, I simply followed the instructions on github.com on how to add a remote origin to a local branch, and when I pushed to that, git sent off all commits ever made to curl to the remote repo, now exposed to the world from github.com. When that part was done, I did a quick read of the `git help daemon` docs and 30 seconds later I had a local repo set up that is a mirror of the github one, so that users can still opt to get the code from haxx.se.

## Unchanged work flow

Git allows different ways of working with the code, but I’ve decided that at least as a start we won’t change the way we work. I’ll offer all committers push rights to the master branch on the repository and we will simply all push to that, as our head development branch. We will prefer patches made with `git format-patch` sent to the mailing list, but as before you can still produce patches by diffing source code using extracted tarballs or whatever approach you prefer. All details on how to get the code for curl using git are available online.

16 years ago #curl officially switched over to using #git

https://daniel.haxx.se/blog/2010/03/23/curl-goes-git/

Anatomy of a ghost CVE

> **“The Lord giveth and the Lord taketh away.”**
>
> Job 1:21

On March 16 2022, the curl security team received an email in which the reporter highlighted an Apple web page. _What can you tell us about this?_ I hadn’t seen it before. On this page with the title “About the security content of macOS Monterey 12.3”, said to have been published just two days prior, Apple mentions recent package upgrades and the page lists a bunch of products and what security fixes were done for them in this update. Among the many products listed, curl is mentioned. This is what the curl section of the page looked like:

Screenshot from March 17, 2022

In the curl project we always make all CVEs public with as much detail as we can possibly extract and provide about them. We take great pride in being the **best in class in security flaw information** and transparency. Apple listed four CVEs fixed. The first three IDs we immediately recognized from the curl security page. The last one, however, was a surprise. What was that?

## CVE-2022-22623

This is **not** a CVE published by the curl project. The curl project has in fact not shipped any CVE at all in 2022 (yet) so that’s easy to spot. When we looked at the MITRE registration for the ID, it also didn’t disclose any clues really. Not that it was expected to. It did show it was created on January 5 though, so it wasn’t completely new. Was it a typo? I compared this number to other recent CVE numbers announced from curl and I laid eyes on CVE-2021-22923, which had just two digits changed. Did they perhaps mean that CVE? The only “problem” with that CVE is that it was in regards to Metalink, and I don’t think Apple ever shipped their curl package with metalink support, so they wouldn’t have fixed a Metalink problem. So probably not a typo for that number at least! I reached out to a friend at Apple, as well as with an email to Apple Product Security.

## Security is our number one priority

In the curl project, we take security seriously. The news that there might be a security problem in curl that we hadn’t been told about, and that looked like it was about to get public sooner or later, was of course somewhat alarming and something we just needed to get to the bottom of. It was also slightly disappointing that a large vendor and packager of curl _since over 20 years_ would go about it this way and jab this into our back.

## No source code

Apple has not made the source code for their macOS 12.3 version and the packages they use in there public, so there was no way for us to run diffs or anything to check for the exact modifications that this claimed fix would’ve resulted in.

## Apple said so

Several “security websites” (the quotes are there to indicate that clearly these sites are more security in the name than in reality) immediately posted details about this “vulnerability”. Some of them with CVSS scores and CWE numbers, explaining how this problem can hurt users. Obviously completely made up, since none of that info was made available by any first party sources anywhere. Not from Apple and not from the curl project. If you did a web search on that CVE number, several of the top search results linked to such sites providing details – obviously made up from thin air. As I think these sites don’t add much value to humanity, I won’t link to them here but instead I will show you a screenshot from such an article to show you what a made up CVE number posted by Apple can make people claim:

Screenshot from exploitone.com

At 23:28 (my time zone) on the 17th, my Apple friend responded saying they had forwarded the issue to “the right team”. The Apple Product Security team I also emailed about this issue answered at 00:23 (still my time) on the 18th saying _“we are looking into this and will provide an update soon when we have more information.”_

## The MITRE page got more details

The MITRE CVE page from March 21st

After the weekend passed with no response, I looked back again at the MITRE page for the CVE in question and it had then been populated with additional curl details; mentioning Apple as CNA and now featuring links back to the Apple page! Now it _really_ started to look like the CVE was something real that Apple (or someone) had registered but not told us about. It included real curl related snippets like this:

> Multiple issues were addressed by updating to curl version 7.79.1. This issue is fixed in macOS Monterey 12.3. Multiple issues in curl.

## Please tell us more details

On Monday the 21st, I continued to get questions about this CVE. Among others, from a member of a major European ISP’s CERT team curious about this CVE, as they couldn’t find any specific information about this issue either and they were concerned they might have this vulnerability in the curl versions they run. They of course (rightfully) assumed that I would know about curl CVEs. It turns out that when a major company randomly mentions a new CVE, it actually has an impact on the world!

## Gone!

At around 20:30 on March 21st, someone on Twitter spotted that the ghost CVE had been removed from Apple’s web page and it only listed three issues (and a mention that the section had been updated). At 21:39 I got an email response from Apple Product Security:

> Thank you for reaching out to us about the error with this CVE on our security advisory. We’ve updated our site and requested that MITRE reject CVE-2022-22623 on their end.
>
> Please let us know if you have any questions.

Screenshot from March 21, 2022

The reject request to MITRE is expected to be slow, so that page will remain showing the outdated data for a while longer.

## Exploit one

When Apple had retracted the wrong CVE, I figured I should maybe try to get exploitone.com to remove their “article”, to maybe at least stop one avenue of further misinformation about this curl “issue”. I tweeted (in perhaps a tad bit inflammatory manner): I get the feeling they didn’t quite understand my point. They replied:

## What happened?

As I _had_ questions about Apple’s mishap, I replied (sent off 22:28 on the 21st, still only early afternoon on the US west coast), asking for details on what exactly had happened here. If it was a typo, then how come it got registered with MITRE? It’s just **so** puzzling and mysterious! When I’m posting this article on my blog (36 hours after I sent the question), I still haven’t gotten any response or explanation. I don’t expect to get any either, but if I do, I will update this post accordingly.

## Update March 26

exploitone.com updated their page at some point after my tweet to remove the mention of the imaginary CVE, but the wording remains very odd:

Today we celebrate four years since Apple pulled the ghost CVE prank on us:

daniel.haxx.se/blog/2022/03/23/anatomy-...

#curl

Number of options for curl_multi_setopt over time

How about a new graph? Number of options for curl_multi_setopt() over time

https://github.com/curl/stats/pull/27

#curl


PlaneAlert ICAO: 5083C0 Tail: URCQV Flt: VKA810
Owner: VulkanAir
Aircraft: Antonov An-26 B
2026-03-22 09:24:47 CDT
AN26 Civ GotItWhereItCounts Budmo Curl Ukraine
- link

NTLM and SMB go opt-in

The NTLM authentication method was always a beast. It is a proprietary protocol designed by Microsoft which was reverse engineered a long time ago. That effort resulted in the online documentation that I based the curl implementation on back in 2003. I then also wrote the NTLM code for wget while at it.

NTLM broke with the HTTP paradigm: it is made to authenticate _the connection_ instead of _the request_, which is what HTTP authentication is supposed to do and what all the other methods do. This might sound like a tiny and insignificant detail, but it has a major impact on all HTTP implementations everywhere. Indirectly it is also the cause of quite a few security related issues in HTTP code, because NTLM needs many special exceptions and extra unique treatments. curl has recorded no less than _seven_ past security vulnerabilities in NTLM related code! While that may not be only NTLM’s fault, it certainly does not help.

The connection-based concept also makes the method _incompatible_ with HTTP/2 and HTTP/3. NTLM requires services to stick to HTTP/1. NTLM (v1) uses super weak cryptographic algorithms (DES and MD5), which makes it a bad choice even when disregarding the other reasons.

We are slowly deprecating NTLM in curl, but we are starting out by making it opt-in. Starting in curl 8.20.0, NTLM is disabled by default in the build unless specifically enabled. Microsoft themselves have deprecated NTLM already. The wget project looks like it is about to make their NTLM support opt-in.

## SMB

curl only supports SMB version 1. This protocol uses NTLM for the authentication and it is equally bad in this protocol. Without NTLM enabled in the build, SMB support will also get disabled. But also: SMBv1 is in itself a weak protocol that is barely used by curl users, so this protocol is also opt-in starting in curl 8.20.0. You need to explicitly enable it in the build to get it added.

## Not removed yet

I want to emphasize that we have not removed support for these ancient protocols, we just strongly discourage using them, and I believe this is a first step down a ladder that will in the future make them get removed completely.

NTLM and SMB go opt-in

daniel.haxx.se/blog/2026/03/22/ntlm-and...

#curl

lib: make SMB support opt-in by bagder · Pull Request #20846 · curl/curl The SMB protocol has weak security and is rarely used these days

SMB support in #curl is now opt-in: https://github.com/curl/curl/pull/20846


The updated map of #curl backends, now that RTMP is going? Happy to oblige:

md5/md4: enable unaligned access fast path on powerpc64 by Scottcjn · Pull Request #20985 · curl/curl Summary Extend the MD5/MD4 unaligned access fast path to include __powerpc64__. Currently the fast path only covers __i386__, __x86_64__, and __vax__. PowerPC64 (both LE and BE) supports efficient ...

Welcome Scott Boudreaux as #curl commit author 1458: https://github.com/curl/curl/pull/20985


It's loud booing. I'm sitting in the lower bowl. It's even louder than in Ottawa! #Curl #Britta

bye bye RTMP

In May 2010 we merged support for the RTMP protocol suite into curl, in our desire to support the world’s internet transfer protocols.

## RTMP

The protocol is an example of the spirit of an earlier web: back when we still thought we would have different transfer protocols for different purposes. Before HTTP(S) truly became the one protocol that rules them all. RTMP was done by Adobe, used by Flash applications etc. Remember those? RTMP is an ugly proprietary protocol that simply was never used much in Open Source. The common Open Source implementation of this protocol is done in the rtmpdump project. In that project they produce a library, _librtmp_, which curl has been using all these years to handle the actual binary bits over the wire. Build curl to use librtmp and it can transfer RTMP:// URLs for you.

## librtmp

In our constant pursuit to improve curl, to find spots that are badly tested and to identify areas that _could_ be weak from a security and functionality stand-point, our support of RTMP was singled out. Here I would like to stress that I’m not suggesting that this is the only area in need of attention or improvement, but it was one of them. As I looked into the RTMP situation I realized that we had _no_ (zero!) tests of our own that actually verify RTMP with curl. It could thus easily break when we refactor things. Something we do quite regularly. I mean refactor (but also breaking things). I then took a look upstream into the librtmp code and associated project to investigate what exactly we are leaning on here. What we implicitly tell our users they can use. I quickly discovered that the librtmp project does not have a single test either. They have not even done releases for many years, which means that most Linux distros have packaged up their code straight from their repositories. (The project insists that there is nothing to release, which seems contradictory.) Are there perhaps any librtmp tests in the pipe? There had not been a single commit done in the project within the last twelve months, and when I asked one of their leading team members about the situation, it was made clear to me that there are no tests in the pipe for the foreseeable future either.

## How about users?

In November 2025 I explicitly asked for RTMP users on the curl-library mailing list, and _one_ person spoke up who uses it for testing. In the 2025 user survey, 2.2% of the respondents said they had used RTMP within the last year. The combination of _few users_ and _untested code_ is a recipe for pending removal from curl unless someone steps up and improves the situation. We therefore announced that we would remove RTMP support six months into the future unless someone cried out and stepped up to improve the RTMP situation. We repeated this _we-are-going-to-drop-RTMP_ message in every release note and release video done since then, to make sure we did our best to reach out to anyone actually still using RTMP and caring about it. If anyone would come out of the shadows _now_ and beg for its return, we can always discuss it – but that will of course require work and adding test cases before it would be considered.

## Compatibility

Can we remove support for a protocol and still claim API and ABI backwards compatibility with a clean conscience? This is the first time in modern days we remove support for a URL scheme, and we do this without bumping the SONAME. We do not consider this an incompatibility primarily because _no one will notice_. It is only a break if it actually breaks something. (RTMP in curl could actually be reached using six separate URL schemes, all of which are no longer supported: `rtmp`, `rtmpe`, `rtmps`, `rtmpt`, `rtmpte`, `rtmpts`.)

The official _number of URL schemes supported by curl_ is now down to 27: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, MQTTS, POP3, POP3S, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.

## When

The commit that actually removed RTMP support has been merged. We had the protocol supported for almost sixteen years. The first curl release without RTMP support will be 8.20.0, planned to ship on April 29, 2026.
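Since the post argues "no one will notice", a quick way to check whether your own curl build is affected is the Protocols line of its version output. This sketch assumes a curl binary is on PATH; the exact list naturally differs per build:

```shell
# Print the URL schemes the local curl build supports and report on rtmp.
protocols=$(curl --version | sed -n 's/^Protocols: //p')
echo "supported: $protocols"
case " $protocols " in
  *" rtmp "*) echo "rtmp: supported" ;;
  *)          echo "rtmp: not supported" ;;
esac
```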

bye bye RTMP

https://daniel.haxx.se/blog/2026/03/21/bye-bye-rtmp/

#curl


curl 8.19.0 will go down as the last #curl release with support for RTMP. It is going away.


During the 9.6 days since the previous #curl release we have merged 89 bugfixes...

Original post on mastodon.online

Started digging into this question. If #macos9 had #curl, one could try to build a full-fledged #mastodon client through its API (without a proxy, the way it was done 2 years ago). But curl does not build on Nine, because it depends on sockets, while the Classic has its own view of TCP/IP and of how to work with it […]

Original post on mastodon.social

March 20 is a popular #curl release date.

We did the first one in 1998 and we have made curl releases on this date no less than four more times: in 2000, 2006 and 2023.

In 2023 we did it twice because I botched the 8.0.0 release and made an 8.0.1 release on the same day.

This means the curl […]


Reminder: next week we do the #curl distro meeting 2026!

https://github.com/curl/curl/discussions/21022
