• Caddy

    This is my first post with my blog running Caddy. In short, it’s a web server with a focus on making HTTPS simple. It accomplishes this by supporting ACME out of the box. ACME is the protocol that Let’s Encrypt uses. Technically, Caddy supports any Certificate Authority that supports ACME. Practically, few besides Let’s Encrypt do, though I am aware of other CAs making an effort to support issuance with ACME.

    Though I’ve seen lots of praise for Caddy and its HTTPS ALL THE THINGS mantra for a while now, I never really dug into it until recently. What actually grabbed me were several of its other features.

    Configuration is simple. That isn’t always a good thing. Simple usually means advanced configuration or features are lost in the trade-off. Fortunately, that doesn’t seem to be the case with Caddy, at least for me; I’m sure it is for others. When I evaluated Caddy, there were a number of things nginx was taking care of for me besides serving static content.

    1. Rewrite to WebP if the user agent accepts WebP.
    2. Serve pre-compressed gzip files if the user agent accepts it.
    3. Serve pre-compressed brotli files if the user agent accepts it.
    4. Take care of some simple redirects.
    5. Flexible TLS configuration around cipher suites, protocols, and key exchanges.

    Caddy does all of those. It also does them better. Points two and three Caddy just does: it will serve the gzip or brotli version of a file if the user agent accepts it and a pre-compressed copy is on disk.

    Rewriting to WebP was easy:

    header /images {
        Vary Accept
    }
    
    rewrite /images {
        ext .png .jpeg .jpg
        if {>Accept} has image/webp
        to {path}.webp {path}
    }
    

    The configuration does two things. First, it adds the Vary: Accept header to all responses under /images. This is important if a proxy or CDN is caching assets. The second part says: if the Accept header contains “image/webp”, rewrite the request to “{path}.webp”, so Caddy will look for “foo.png.webp” when a browser requests “foo.png”. The second {path} means it falls back to the original path if there is no WebP version of the file. nginx, on the other hand, was a bit more complicated.
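
    For comparison, this is a sketch of the usual nginx pattern for the same thing (a map plus try_files), not my actual old configuration:

    map $http_accept $webp_suffix {
        default   "";
        "~*webp"  ".webp";
    }

    server {
        # ...
        location /images {
            add_header Vary Accept;
            try_files $uri$webp_suffix $uri =404;
        }
    }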

    HTTPS / TLS configuration is simple and well documented. As the documentation points out, most people don’t need to do anything other than enable it. It has sensible defaults, and will use Let’s Encrypt to get a certificate.
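
    As an illustration, a complete site definition with automatic HTTPS can be as small as this; the domain, path, and email are placeholders:

    example.com {
        root /var/www/example
        tls webmaster@example.com
    }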

    I’m optimistic about Caddy. I think it’s a very nice web server / reverse proxy. I spent about an hour moving my 400 lines of nginx configuration to 51 lines of Caddy configuration.

    I’d recommend giving it a shot.

  • Azure SignTool

    A while ago, Oren Novotny and I started exploring the feasibility of doing Authenticode signing with Azure Key Vault. Azure Key Vault lets you do some pretty interesting things, including treating it as a pseudo network-attached HSM.

    A problem with Azure Key Vault, though, is that it’s an HTTP endpoint. Integrating it into existing standards like CNG or PKCS#11 hasn’t been done yet, which makes it difficult to use with tools that expect a CSP or CNG provider, like Authenticode signing.

    Our first attempt at getting this working was to see if we could use the existing signtool. A while ago, in my post Custom Authenticode Signing, I wrote about some new options in signtool that let you sign the digest with whatever you want.

    This made it possible, if a little unwieldy, to sign things with Authenticode and use Azure Key Vault as the signing source. As I wrote then, the main problem was that you needed to run signtool twice and also develop your own application to sign the digest with Azure Key Vault. The steps went something like this, as sketched below.

    1. Run signtool with /dg flag to produce a base64-encoded digest to sign.
    2. Produce a signature for that digest using Azure Key Vault with a custom tool.
    3. Run signtool again with /di to ingest the signature.
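
    In command form, the dance looked roughly like this. The digest directory, file names, and the KeyVaultSigner tool are all placeholders; only signtool’s /dg, /fd, /f, and /di options are real flags:

    REM Pass 1: generate the digest to be signed into C:\digests
    signtool sign /dg C:\digests /fd sha256 /f mycert.cer MyApp.exe

    REM Sign the digest with Azure Key Vault using our own (hypothetical) tool
    KeyVaultSigner.exe --in C:\digests\MyApp.exe.dig --out C:\digests\MyApp.exe.dig.signed

    REM Pass 2: ingest the signed digest back into the file
    signtool sign /di C:\digests MyApp.exe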

    This was, in a word, “slow”. The dream was to be able to produce a signing service that could sign files in bulk. While a millisecond or two may not be the metric we care about, this was costing many seconds. It also left us feeling like the solution was held together by shoestrings and bubblegum.

    /dlib

    However, signtool’s documentation mysteriously mentions a flag called /dlib. It says it combines /dg and /di into a single operation. The documentation, in its entirety, is this:

    Specifies the DLL implementing the AuthenticodeDigestSign function to sign the digest with. This option is equivalent to using SignTool separately with the /dg, /ds, and /di switches, except this option invokes all three as one atomic operation.

    This lacked a lot of detail, but it seemed like exactly what we wanted. We can surmise that the value of this flag is a path to a library that exports a function called AuthenticodeDigestSign. That is easy enough to do. However, the documentation fails to mention what is passed to this function, or what we should return from it.

    This is not impossible to figure out with some persistence in WinDbg. To make a long story short, the function looks something like this:

    HRESULT WINAPI AuthenticodeDigestSign(
        CERT_CONTEXT* certContext,
        void* unused,
        ALG_ID algId,
        BYTE* pDigestToSign,
        DWORD cDigestToSign,
        CRYPTOAPI_BLOB* signature
    );
    

    With this, it was indeed possible to make a library exporting this function that signtool would call to sign the digest. Oren put together a C# library that does exactly that, on GitHub under KeyVaultSignToolWrapper. I even made some decent progress on a Rust implementation.
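
    A minimal sketch of such a library in C, based purely on the signature above, might look like the following. The actual Key Vault call is elided; SignDigestWithKeyVault is a hypothetical helper, not a real API:

    // dlib.c - sketch of a library for signtool's /dlib option.
    #include <windows.h>
    #include <wincrypt.h>

    __declspec(dllexport)
    HRESULT WINAPI AuthenticodeDigestSign(
        CERT_CONTEXT* certContext,
        void* unused,
        ALG_ID algId,
        BYTE* pDigestToSign,
        DWORD cDigestToSign,
        CRYPT_DATA_BLOB* signature)
    {
        BYTE* signedDigest = NULL;
        DWORD signedDigestLen = 0;

        // Hypothetical: send pDigestToSign to Azure Key Vault and get back the
        // signature bytes in a heap allocation.
        // if (!SignDigestWithKeyVault(pDigestToSign, cDigestToSign,
        //                             &signedDigest, &signedDigestLen))
        //     return E_FAIL;

        signature->pbData = signedDigest;
        signature->cbData = signedDigestLen;
        return signedDigest != NULL ? S_OK : E_NOTIMPL;
    }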

    This was a big improvement. Instead of multiple invocations of signtool, we could do it all at once. It still presented some problems, though. The first was that there was no way to pass any configuration to the library through signtool. The best we could come up with was to wrap the invocation of signtool, set environment variables in the signtool process, and have the library read its configuration from those environment variables, such as which vault to authenticate to and how to authenticate. A final caveat was that this still depended on signtool. Signtool is part of the Windows SDK, which technically doesn’t allow us to distribute it in pieces. If we wanted to use signtool, we would need to install parts of the entire Windows SDK.

    SignerSignEx3

    Later, I noticed that Windows 10 includes a new signing API, SignerSignEx3. I happened upon this while stepping through AuthenticodeDigestSign in WinDbg and seeing that its caller was SignerSignEx3, not signtool. I checked the exports of mssign32 and did see it as a new export starting in Windows 10. The natural conclusion was that Windows 10 was shipping a new API capable of using callbacks for signing the digest, and signtool wasn’t doing anything special.

    As you may have guessed, SignerSignEx3 is not documented. It doesn’t exist in Microsoft Docs or in the Windows SDK headers. Fortunately, SignerSignEx2 was documented, so we weren’t starting from scratch. If we figured out SignerSignEx3, then we could skip signtool completely and develop our own tool that does this.

    SignerSignEx3 looks very similar to SignerSignEx2:

    // Not documented
    typedef HRESULT (WINAPI *SignCallback)(
        CERT_CONTEXT* certContext,
        PVOID opaque,
        ALG_ID algId,
        BYTE* pDigestToSign,
        DWORD cDigestToSign,
        CRYPT_DATA_BLOB* signature
    );
    
    // Not documented
    typedef struct _SIGN_CALLBACK_INFO {
        DWORD cbSize;
        SignCallback callback;
        PVOID opaque;
    } SIGN_CALLBACK_INFO;
    
    HRESULT WINAPI SignerSignEx3(
        DWORD                  dwFlags,
        SIGNER_SUBJECT_INFO    *pSubjectInfo,
        SIGNER_CERT            *pSignerCert,
        SIGNER_SIGNATURE_INFO  *pSignatureInfo,
        SIGNER_PROVIDER_INFO   *pProviderInfo,
        DWORD                  dwTimestampFlags,
        PCSTR                  pszTimestampAlgorithmOid,
        PCWSTR                 pwszHttpTimeStamp,
        PCRYPT_ATTRIBUTES      psRequest,
        PVOID                  pSipData,
        SIGNER_CONTEXT         **ppSignerContext,
        PCERT_STRONG_SIGN_PARA pCryptoPolicy,
        SIGN_CALLBACK_INFO     *signCallbackInfo,
        PVOID                  pReserved
    );
    

    Reminder: these APIs are undocumented. I made a best effort at reverse engineering them and, to my knowledge, they function correctly. I make no guarantees, though.

    There’s a little more to it than this. First, in order for the callback parameter to even be used, there’s a new flag that needs to be passed in. The value for this flag is 0x400. If this is not specified, the signCallbackInfo parameter is ignored.

    The usage is about what you would expect. A simple invocation might work like this:

    HRESULT WINAPI myCallback(
        CERT_CONTEXT* certContext,
        void* opaque,
        ALG_ID algId,
        BYTE* pDigestToSign,
        DWORD cDigestToSign,
        CRYPT_DATA_BLOB* signature)
    {
        // Sign pDigestToSign here (for example, with Azure Key Vault) and
        // place the result in a heap allocation assigned to the signature blob.
        return S_OK;
    }
    
    int main()
    {
        SIGN_CALLBACK_INFO callbackInfo = { 0 };
        callbackInfo.cbSize = sizeof(SIGN_CALLBACK_INFO);
        callbackInfo.callback = myCallback;
        HRESULT result = SignerSignEx3(0x400, /*omitted*/ &callbackInfo, NULL);
        return result;
    }
    

    When the callback is invoked, the signature parameter must be filled in with the signature. The buffer must be heap allocated, but it can be freed after the call to SignerSignEx3 completes.
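
    In C, filling in the blob inside the callback looks something like this; sigBytes and sigLen stand in for whatever your signing operation produced:

    // Copy the produced signature into a heap allocation for SignerSignEx3.
    signature->pbData = (BYTE*)HeapAlloc(GetProcessHeap(), 0, sigLen);
    if (signature->pbData == NULL)
    {
        return E_OUTOFMEMORY;
    }
    memcpy(signature->pbData, sigBytes, sigLen);
    signature->cbData = sigLen;
    return S_OK;
    // Free signature->pbData yourself after SignerSignEx3 has returned.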

    APPX

    We’re not quite done yet. The solution above works with EXEs, DLLs, etc., but it does not work with APPX packages. This is because signing an APPX requires some additional work. Specifically, the APPX Subject Interface Package requires some additional data to be supplied in the pSipData parameter.

    Once again, we are fortunate that there is some documentation on how this works with SignerSignEx2; however, the details are incorrect for SignerSignEx3.

    Unfortunately, the shape of the struct it needs is not documented for SignerSignEx3.

    To the best of my understanding, the SIGNER_SIGN_EX3_PARAMS structure should look like this:

    typedef struct _SIGNER_SIGN_EX3_PARAMS {
        DWORD                   dwFlags;
        SIGNER_SUBJECT_INFO     *pSubjectInfo;
        SIGNER_CERT             *pSigningCert;
        SIGNER_SIGNATURE_INFO   *pSignatureInfo;
        SIGNER_PROVIDER_INFO    *pProviderInfo;
        DWORD                   dwTimestampFlags;
        PCSTR                   pszTimestampAlgorithmOid;
        PCWSTR                  pwszTimestampURL;
        CRYPT_ATTRIBUTES        *psRequest;
        SIGN_CALLBACK_INFO      *signCallbackInfo;
        SIGNER_CONTEXT          **ppSignerContext;
        CERT_STRONG_SIGN_PARA   *pCryptoPolicy;
        PVOID                   pReserved;
    } SIGNER_SIGN_EX3_PARAMS;
    

    If you’re curious about the methodology I used to figure this out, I documented the process in the GitHub issue for APPX support. I rarely take the time to write down how I learned something, but for once I managed to think of my future self referring to it. Perhaps that is worthy of another post on another day.

    Quirks

    SignerSignEx3 with a signing callback seems to have one quirk: it cannot be combined with the SIG_APPEND flag, so it cannot be used to append signatures. This seems to be a limitation of SignerSignEx3, as signtool has the same problem when using /dlib with the /as option.

    Conclusion

    It’s a specific API need, I’ll give you that. However, combined with Subject Interface Packages, Authenticode is extremely flexible: not only in what it can sign, but now also in how it signs.

    AzureSignTool’s source is on GitHub, MIT licensed, and has C# bindings.

  • macOS Platform Invoke

    I started foraying a bit into macOS platform invocation with .NET Core and C#. For the most part, it works exactly like it did on Windows. However, there are some important differences between Windows’ native APIs and macOS’s.

    The first is the calling convention. Win32 APIs are typically stdcall on 32-bit, or the AMD64 calling convention on 64-bit. That may not be true for third-party libraries, but it is true for most (though not all) Win32 APIs.

    macOS’s OS-provided libraries are overwhelmingly cdecl, and on AMD64 they use a similar but different calling convention (the System V ABI).

    For the most part, that doesn’t affect platform invoke signatures much. However, if you are getting into debugging with LLDB, it’s something to be aware of.

    It does mean that you need to set the CallingConvention appropriately on the DllImportAttribute. For example:

    [DllImport("libcrypto.41",
        EntryPoint = "TS_REQ_set_version",
        CallingConvention = CallingConvention.Cdecl)
    ]
    

    Another point is that macOS uses the LP64 data model, whereas Windows uses LLP64.

    A common Win32 platform invocation mistake is trying to marshal a native long to a managed long. The native long in Win32 is 32-bit, whereas a managed long in .NET is 64-bit. Mismatching them will do strange things to the stack. In Win32 platform invocation, a native long gets marshalled as an int. Win32 uses long long or int64_t for 64-bit types.
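
    As a small illustration, the Win32 POINT structure is declared as a pair of LONGs, so the managed declaration uses int fields:

    // Win32: typedef struct tagPOINT { LONG x; LONG y; } POINT;
    // LONG is 32-bit, so int is correct; a C# long would corrupt the layout.
    [StructLayout(LayoutKind.Sequential)]
    public struct POINT
    {
        public int X;
        public int Y;
    }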

    macOS is different. Its long type is platform dependent: on 32-bit systems the long type is 32-bit, and on 64-bit systems it is 64-bit. In that regard, the long type is most accurately marshalled as an IntPtr. The alternative is to provide two different platform invoke signatures and structs and use the appropriate one depending on the platform.

    Keep in mind that macOS is effectively 64-bit only these days, but it is still capable of running 32-bit code, so it’s possible your code will one day run as a 32-bit process on a Mac. At the time of writing, even .NET Core itself doesn’t support running 32-bit on a Mac.

    [DllImport("libcrypto.41",
        EntryPoint = "TS_REQ_set_version",
        CallingConvention = CallingConvention.Cdecl)
    ]
    public static extern int TS_REQ_set_version
    (
        [param: In] TsReqSafeHandle a,
        [param: In, MarshalAs(UnmanagedType.SysInt)] IntPtr version
    );
    

    Using IntPtr for the long type is a bit of a pain since, for whatever reason, C# doesn’t really treat it like a numeric type. You cannot create IntPtr literals cleanly; instead you have to write something like (IntPtr)1.

    A final possibility is to make a native shim that coerces the data types to something consistent, like int32_t, and build a shim per architecture.
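
    A sketch of such a shim in C, using the TS_REQ_set_version function from the examples here (the shim name is made up; OpenSSL’s TS_REQ_set_version takes a native long):

    #include <stdint.h>
    #include <openssl/ts.h>

    // Pin the boundary to int32_t so the managed signature is identical on
    // every platform, regardless of the size of the native long.
    int32_t shim_TS_REQ_set_version(TS_REQ* req, int32_t version)
    {
        return (int32_t)TS_REQ_set_version(req, (long)version);
    }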

    Another point of difference is string encoding. Windows vastly prefers UTF-16 (“Unicode”) and ANSI strings (the W and A function variants), whereas macOS libraries frequently use UTF-8. The easiest thing to do here, unfortunately, is to marshal them as pointers.
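
    A sketch of doing that, with a hypothetical helper that copies a managed string to unmanaged memory as NUL-terminated UTF-8:

    static IntPtr StringToUtf8(string value)
    {
        byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value);
        IntPtr buffer = Marshal.AllocHGlobal(bytes.Length + 1);
        Marshal.Copy(bytes, 0, buffer, bytes.Length);
        Marshal.WriteByte(buffer, bytes.Length, 0); // NUL terminator
        return buffer; // caller must Marshal.FreeHGlobal when finished
    }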

    Overall, it’s not too different. Pay attention to the calling convention and be aware of LP64 versus LLP64.

  • Peeking at RubyGems Package Signing

    I last wrote about NuGet package signing. The approach being taken has been a hot topic for some folks. However, package signing was something I didn’t have a whole lot of data on. I didn’t have a good feel for how package communities adopt signing, so I decided to get a little more information.

    I turned to the RubyGems community. Gems support signing, also with X509 certificates like the NuGet proposal. Support has been there for a while, so the community has had plenty of time to adopt it. This is on top of a high-profile hack on RubyGems, giving developers plenty of motivation to consider signing their packages.

    The problem is, there isn’t a whole lot of information about it that I could find, so I decided to create it: I looked at the top 200 gems and checked where they stood on signing.

    The Gems

    The top 200 list is based off of RubyGems’ own statistics. One problem: their list by popularity only goes up to 100 gems. Fortunately, RubyGems doesn’t do such a hot job of validating its query strings. If I change the page=10 URL query string, supposedly the last page, to page=11, it is quite happy to give me gems 101–110. So, first problem solved.

    Many of these gems are supporting gems. That is, they are not gems that people typically include in their projects directly, but rather gems pulled in as a dependency of another gem.

    Getting the latest version of each gem is easy enough with gem fetch. After building our list of gems, we just cache them to disk for inspection later.
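
    A sketch of the fetch step, assuming the gem names are listed one per line in gemlist.txt:

    #!/bin/sh
    # Download the latest version of each gem into the current directory.
    while read -r name; do
        gem fetch "$name"
    done < gemlist.txt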

    Extracting Certificates

    Certificates can be extracted from gems using gem spec <gempath> cert_chain. This will dump the certificate chain as a YAML document. We can use a little bit of Ruby to get the certificates out of the YAML document and write them to disk as files.
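
    Roughly like this, for example; the gem file name is illustrative:

    # Dump each certificate in a gem's chain to its own PEM file.
    require 'yaml'

    chain = YAML.load(`gem spec somegem-1.0.0.gem cert_chain`)
    chain.each_with_index do |pem, index|
      File.write("cert_#{index}.pem", pem)
    end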

    The Results

    I will be the first to admit that 200 gems is not a huge sample. However, they represent the most popular gems and the ones I would typically expect to be signed.

    Of the 200 gems examined, 17 were signed. That’s 8.5% of gems. Initially I didn’t know what to think of that number. Is it good? Is it bad? If you had asked me to guess, I would have thought only three or four of them would have been signed. I don’t think 17 is good, either. It’s just not as bad as I would have expected it to be.

    The next matter is, what is the quality of the signatures? Are they valid? Are they self signed? What digest algorithms and key sizes are used?

    Of the 17 signed gems, two of them weren’t really signed at all. They contained placeholders where the certificate should go. Indeed, performing gem install badgem -P HighSecurity resulted in Gem itself considering the signature invalid. So we are down to 15 signed gems.

    Some other interesting figures:

    • 15/15 of them were self signed.
    • 2/15 of them used SHA2 signature algorithms. The rest used SHA1.
    • 4/15 were expired.
    • 8/15 used RSA-2048; 1/15 used RSA-3072; 6/15 used RSA-4096.

    Data

    I set up a GitHub repository for the scripts used to create this data. It is available at vcsjones/rubygem-signing-research. Everything that you need to extract the certificates from Gems is there.

    The gemlist.txt contains the list of Gems examined. The fetch.sh script will download all of the Gems in this file.

    extract_certs.sh will extract all of the certificates so you can examine them however you see fit.

    Thoughts

    It doesn’t seem like signing has really taken off with RubyGems. Part of the issue is that RubyGems simply doesn’t validate the signature by default. This is due to the default validation option in Gem being NoSecurity at the time of writing. Every single Gem that is signed would fail to install with the MediumSecurity trust policy:

    gem install gemname -P MediumSecurity
    

    This will fail for one reason or another, usually because the certificate doesn’t chain back to a trusted root certificate.

    I’m not sure if this is indicative of how adoption will go for NuGet. I’m curious to see where NuGet is three years from now on signing.

  • NuGet Package Signing

    Recently the NuGet team announced they were going to start supporting package signing.

    The NuGet team announced that their solution would be based on x509, or PKI certificates from a traditional Certificate Authority. They haven’t announced much beyond that, but it’s likely to be just a plain Code Signing certificate with the Code Signing EKU. Certificates and PKI are not a perfect solution. In particular, one of the problems with code signing certificates is their accessibility. Certificate Authorities typically charge for certificates and require identification.

    This presents a problem for a few groups of people. Young people who are just getting into software development may be excited to publish a NuGet package, but getting a code signing certificate may be out of reach for, say, a 15 year old. I’m not clear how a CA would handle processing a certificate for a minor who may not have an ID. The same goes for individuals in less privileged countries. The median monthly income in Belarus, a country very dear to me, is $827. A few hundred dollars for a code signing certificate is nothing to sneeze at. There are many groups of people that will struggle to obtain a certificate.

    Not signing might be okay, with a few exceptions. The first is that the NuGet team has described a visual indicator for signed packages.

    [Image: Visual Studio signed package indicator, from https://blog.nuget.org/20170417/Package-identity-and-trust.html]

    This indicator is part of the NuGet team’s desire to convey a level of trustworthiness. However, for package consumers, the indicator will likely draw preference toward signed packages. This puts packages that are able to sign in a position of preference over unsigned packages. It also hurts the community as a whole; it’s simply better for everyone if as many packages as possible are signed.

    Given that, the natural conclusion may be that x509 and PKI are not the correct solution. There are other options that would work, such as PGP and the Web of Trust (WOT). Some community members are asking the NuGet team to reconsider x509 and PKI. There are other issues with x509 and PKI, but the accessibility of code signing certificates seems to be the central point of the community’s concerns.

    I am sympathetic to these concerns, which I have also expressed myself previously. Despite that, I would like to explain why I think the NuGet team made the right decision, and why the other options are less likely to be workable solutions.

    PKI

    x509 Code Signing certificates use Public Key Infrastructure, or PKI for short. The hardest part of signing anything with a key is not a technical problem; it is “Should I trust this key?”. Anyone in the world can make a certificate with a Common Name of “Kevin Jones” and sign something with it. How would you, the consumer of a NuGet package signed by CN=Kevin Jones, know that the certificate belongs to Kevin Jones?

    The PKI solution is to have the certificate for CN=Kevin Jones signed by someone you already trust, in this case a Certificate Authority. The CA, since it is vouching for your certificate’s validity, will vet the application for the certificate. Even when I applied for a free Code Signing certificate (disclosure: DigiCert gives free certificates to MVPs, which I am grateful for), they still performed their verification procedures, which involved a notarized document for my identification. CAs are motivated to do this correctly every time, because if they prove to be untrustworthy, the CA is no longer trusted. Its own certificate will be removed or blacklisted from the root store, which operating systems usually maintain themselves.

    While this has problems and is not foolproof, it is a system that has worked for quite a long time. x509 certificates are well understood and are the same technology that underpins HTTPS. There is significant buy-in from individuals and businesses alike that are interested in the further advancement of x509. Such advancements range from improved cryptographic primitives, such as SHA256 a few years ago, to new things like ed25519.

    A certificate which is not signed by a CA, but rather signs itself, is said to be a self-signed certificate. These certificates are not trusted unless they are explicitly trusted by the operating system on every computer that will use them.

    A final option is an internal CA, or enterprise CA. This is a Certificate Authority that the operating system does not trust by default, but has been trusted through some kind of enterprise configuration (such as Group Policy or a master image). Enterprises choose to run their own private CA for many reasons.

    Any of these options can be used to sign a NuGet package with x509. I’m not clear whether the Microsoft NuGet repository will accept self-signed or enterprise-signed packages. However, an enterprise will be able to consume a private NuGet feed whose packages are signed by its enterprise CA.

    This model allows for some nice scenarios, such as trusting only packages that are signed by a particular x509 certificate. This might be useful for an organization that wants to prevent installation of NuGet packages that have not yet been vetted by the corporation, or to prevent non-Microsoft packages from being installed.

    Finally, x509 has tooling that is not great, but at least reasonably well understood and decently documented. Let’s face it: NuGet and .NET Core are cross platform, but they likely skew toward the Windows and Microsoft ecosystem at the moment. Windows, macOS, and Linux are all set up at this point to handle x509 certificates, both from a platform perspective and from a tooling perspective.

    PKI is vulnerable to a few problems. One of great concern is the “central-ness” of a handful of Certificate Authorities. The collapse of a CA would be very problematic, and it has happened more than once.

    PGP

    Let’s contrast this with PGP. PGP abandons the idea of a Certificate Authority, and PKI in general, in favor of something called a Web of Trust. When a PGP key is generated with a tool like GPG, it isn’t signed by a known-trustworthy authority like a CA. In that respect, it very much starts off like a self-signed certificate in PKI. It isn’t trusted until the key has been endorsed by one or more people. This is sometimes done at “key signing parties,” where already-trusted members of the web verify the real-world identity of those with new PGP keys. This scheme is flexible in that it doesn’t rely on a handful of corporations.

    Most importantly to many people, it is free. Anyone can participate without monetary requirements or identification. However, getting your PGP key trusted by the Web of Trust can be challenging and, due to its flexibility, may not be immediately actionable.

    It’s likely that if NuGet did opt to go with PGP, the Web of Trust might not be used at all; instead, the PGP public key would be tied to the account on NuGet. GitHub actually does something similar with verified commits.

    [Image: GitHub GPG verification]

    This, however, has an important distinction from an x509 code signing certificate: the key does not validate that Kevin Jones the person performed the signature. It means that whoever is in control of the vcsjones GitHub account performed the signature. I could have just as easily created a GitHub account called “Marky Mark” and created a GPG key with the email markymark@example.com.

    That may be suitable enough for some people and organizations. Microsoft may be able to state, “our public key is ABC123,” and organizations can explicitly trust ABC123. That would work until there is a re-keying event. Re-keying is a natural and encouraged process, and then organizations would need to find the new key to trust.

    This is harder for individuals. Do I put my public key on my website? Does anyone know if vcsjones.com is really operated by someone named Kevin Jones? What if I don’t have HTTPS on my site - would you trust the key that you found there?

    Adopting the “web of trust” tries to work around the problem of key distribution. However, the website evil32.com puts it succinctly:

    Aren’t you supposed to use the Web of Trust to verify the authenticity of keys?

    Absolutely! The web of trust is a great mechanism by which to verify keys but it’s complicated. As a result, it is often not used. There are examples of GPG being used without the Web of Trust all over the web.

    The Web of Trust is also not without its problems. An interesting aspect is that, since it requires other users to vouch for your key, you are disclosing your social relationships: the people who vouch for your key are likely friends or colleagues.

    It also has a very large single point of failure. Anyone that is part of the strong set is essentially a CA, in x509 terms: a single compromised individual in the strong set could arguably be said to compromise the entire web.

    For those reasons, we don’t see the WOT used very often. We don’t see it used in Linux package managers, for example.

    Linux package managers, such as Debian’s Aptitude, use their own sets of known keys. By default, a distribution ships with a set of known and trusted keys, almost like a certificate store. You can add keys yourself using apt-key add, which many software projects ask you to do! This is not unlike trusting a self-signed certificate. You have to be really sure what key you are adding, and that you obtained it from a trustworthy location.

    PGP doesn’t offer much of an advantage over x509 in that respect. You can manually trust an x509 certificate just as easily as you can a PGP key.

    It does, however, mean that the distribution now takes on the responsibilities of a CA: it needs to decide which keys it trusts, and the package source needs to vet all of the included packages for the signature to have any meaning.

    Since PGP has no authority, revoking a key requires access to the existing private key. If you did something silly like put the private key on a laptop and lose the laptop, and you didn’t have the private key backed up anywhere, guess what? You can’t revoke it without the original private key or a revocation certificate. So now you are responsible for two things: your own private key and the revocation certificate that can be used to revoke the key. I have seen very little guidance in the way of creating revocation certificates. This isn’t quite as terrible as it sounds, as many would argue that revocation is broken in x509 as well, for different reasons.

    Tooling

    On a more personal note, I find the tooling around GnuPG to be in rough shape, particularly on Windows. It’s doable on macOS and Linux, and I even have such a setup working with a key in hardware.

    GPG / PGP has historically struggled with migrating the ecosystem to modern cryptography. GPG/PGP is actually quite good at introducing support for new algorithms; for example, the GitHub example above is an ed25519/cv25519 key pair. However, migrating users to such new algorithms has been a slow process. PGP keys have no hard-set maximum validity, so RSA-1024 keys are still quite common. There is little key hygiene going on, and people often pick expiration dates of years or decades (why not, when most people see expiration as a pain to deal with?).

    Enterprise

    We mustn’t forget the enterprise, which is probably the most interested in how to consume signed packages. Frankly, package signing would serve little purpose if no one were interested in the verification step, and we can thank the enterprise for that. Though I lack anything concrete, I am willing to bet that enterprises are better able to handle x509 than PGP.

    Wrap Up

    I don’t want to slam PGP or GnuPG as bad tools - I think they have their time and place. I just don’t think NuGet is the right place. Most people that have interest in PGP have only used it sparingly, or are hard-core fanatics that can often miss the forest for the trees when it comes to usable cryptography.

    We do get some value from PGP if we are willing to accept that signatures are not tied to a human being, but rather a NuGet.org account. That means signing is tied to NuGet.org and couldn’t easily be used with a private NuGet server or alternative non-Microsoft server.

    To state my opinion plainly, I don’t think PGP works unless Microsoft is willing to take on the responsibility of vetting keys, we adopt the web of trust, or we accept that signing does not provide the identity of the signer. None of these options are good, in my opinion.