HTTPS certificate for private HTTP service?

I am thinking of two ways I might arrive at my goal. For the first of the two proposed solutions, I recognize that the browZer HTTP agent is still incubating and is not yet generally available. Assuming it ships in nearly its current form, I believe this approach would work once it does.

  1. Is it possible to self-host the browZer HTTP agent w/ the NF Teams plan? I have an HTTP website served over Ziti w/ my NF Teams plan. Can I deploy the browZer agent as an HTTPS binding for my normal web browser w/ NF in a self-supported manner, or does this entail a particular configuration of the controller that’s managed by you? If that’s a viable path forward, I’m happy to configure my own private router or an additional controller. I would use a LetsEncrypt certificate and bind it to the HTTP(S) agent’s web server. I could use either the DNS TXT resource record challenge or the HTTP well-known document challenge in this case.

  2. Obtain a LetsEncrypt certificate through their DNS challenge instead of the public document challenge. The only downside here is that I need to find a way to “front” my web server with a proxy (the server is built into another app, so I can’t bind the cert natively) and change the Ziti service to point at the new proxy.


Thanks for sharing, as this is a great example.

I was doing something close to this a while back, and got stuck at the same DNS challenge.

I did uncover a possible solution, but never explored it because it carried a cost: purchasing a wildcard certificate from a widely trusted certificate authority. These can be quite expensive, so I put it on the back burner until I really needed it.

That way you can have all sorts of other domain names in the SAN, which I believe would resolve your issue, though I could be wrong, as I don’t know all of the details.

If this is just for private internal use, one workaround I found is to create a new private certificate authority and register it as a trusted certificate authority with the controller. You can then issue your own private server certificate that the controller trusts and use it over a Ziti network.

If you do this, you will still need one more step: add the CA certificate to the host’s trust store so that it recognises the server certificate.

The only downside is that you still need to deal with the client side, but I believe this can be addressed with a bit more work by handing it over to the team that manages the OS, to make sure the private CA certificate is rolled out to each of the user machines.
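To make the private-CA idea above concrete, here is a minimal openssl sketch. Everything in it (the CA name, `server.example.internal`, file names, lifetimes) is a placeholder, and registering the CA with the controller and distributing it to hosts is still up to you.

```shell
# 1. Create the private CA: a key and a self-signed root certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.pem \
  -subj "/CN=Private Test CA"

# 2. Create the server key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=server.example.internal"

# 3. Sign the CSR with the CA, adding a SAN via an extensions file
printf "subjectAltName=DNS:server.example.internal\n" > san.ext
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 365 -extfile san.ext -out server.pem

# 4. Verify the chain, i.e., what a host that trusts ca.pem would check
openssl verify -CAfile ca.pem server.pem
```

The server app then uses `server.key`/`server.pem`, while `ca.pem` is what gets rolled out to the clients’ trust stores.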

I hope that helps.

Scott

PS. There is one more thing I remember: you can bundle the certificate authorities together to facilitate trust. I am not 100% sure how it all works, but it is something to add into the mix of things to work through a solution.
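If memory serves, a CA bundle is just the PEM certificates concatenated into one file. A tiny sketch (the two throwaway CAs generated here just stand in for your private CA cert and whatever public roots you need):

```shell
# Generate two throwaway CA certs purely so there is something to bundle
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout ca1.key -out ca1.pem -subj "/CN=CA One"
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout ca2.key -out ca2.pem -subj "/CN=CA Two"

# A bundle is simply the PEM certs appended to one another
cat ca1.pem ca2.pem > ca-bundle.pem

# A client pointed at the bundle then trusts certs issued by either CA, e.g.:
#   curl --cacert ca-bundle.pem https://private.service.internal/
```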

This is the certbot CLI command that uses the DNS challenge.

sudo certbot certonly --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --preferred-challenges dns --debug-challenges -d "${DOMAIN_NAME}"  # domain may be wildcard

Ref: https://cloudness.net/certbot-dns-challenge/

Prerequisites include:

  1. Follow the instructions in the linked reference to obtain the Python auth-hook script.
  2. Provide the PyPI module requests to root’s Python, or run the certbot certonly CLI with user-writeable config and log dirs (see the certbot docs for specifics).
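For the second prerequisite, this is roughly what a non-root certbot invocation looks like, pointing certbot at user-writable directories. The paths and domain are placeholders, and it still needs the auth-hook script and a live DNS setup, so treat it as a sketch rather than a copy-paste recipe:

```shell
certbot certonly --manual \
  --manual-auth-hook "$HOME/letsencrypt/acme-dns-auth.py" \
  --preferred-challenges dns \
  --config-dir "$HOME/letsencrypt/config" \
  --work-dir   "$HOME/letsencrypt/work" \
  --logs-dir   "$HOME/letsencrypt/logs" \
  -d "${DOMAIN_NAME}"
```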

Then, to compose the PKCS12 keystore from the issued cert and its backing private key, I needed to run this command.

sudo openssl pkcs12 -export -in /etc/letsencrypt/live/${DOMAIN_NAME}/cert.pem -inkey /etc/letsencrypt/live/${DOMAIN_NAME}/privkey.pem -name ${APP_NAME} -out /etc/letsencrypt/live/${DOMAIN_NAME}/${APP_NAME}.p12

Then I needed to configure my server app to use the keystore with the same encryption passphrase I defined when I created the PKCS12 file. Some server apps will use the PEM key and PEM cert directly instead of a PKCS12 keystore, which simply combines those two PEM files into a single file with a standard format.
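A quick way to sanity-check the keystore is to round-trip it with openssl. Here a throwaway key/cert pair stands in for the real LetsEncrypt files under /etc/letsencrypt/live/, and the name and passphrase are placeholders:

```shell
# Throwaway key/cert pair standing in for privkey.pem and cert.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout privkey.pem -out cert.pem -subj "/CN=test.example.net"

# Combine the PEM cert and key into a PKCS12 keystore
openssl pkcs12 -export -in cert.pem -inkey privkey.pem \
  -name myapp -out myapp.p12 -passout pass:changeit

# Extract the cert back out to confirm the keystore is readable
# with the same passphrase the server app will be configured with
openssl pkcs12 -in myapp.p12 -passin pass:changeit \
  -nokeys -clcerts -out roundtrip.pem
```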


My new favorite way to solve this is with Caddy (valid certificates for private HTTPS services).

Caddy consolidates the features of NGINX and certbot (or acme.sh) and works great as a container.

I need three files for this: compose.yml, Dockerfile, and Caddyfile.

The Dockerfile builds the Caddy image with the DNS plugin for whichever provider I’m using, e.g., CloudFlare, DigitalOcean, AWS, etc.

The Compose file ties it all together, mounting my Caddyfile on the Caddy container, and using the Dockerfile to build the Caddy image.

Here’s an example of each.

Dockerfile:

# Use the official Caddy image as a parent image
FROM caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare # add more like "--with github.com/org/repo"

# Use the official Caddy image to create the final image
FROM caddy:2

# Copy the custom Caddy build into the final image
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Caddyfile:

{
        email me@example.com  # use a real email for EFF/LetsEncrypt
        acme_ca https://acme-v02.api.letsencrypt.org/directory
        # you can use the staging service to make sure it's working before burning quota
        #acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

# get a wildcard cert and handle any matching HTTPS requests, this is why you use a REAL domain name that you control for Ziti service addresses!
*.ziti.example.com {
        tls {
                dns cloudflare {env.CF_API_TOKEN}
                resolvers 1.1.1.1
        }

        log {
                output stdout
                format console
                level INFO
                #level DEBUG
        }

        # optionally mount some static files on the container as the doc root
        root * /mnt

        # Caddy has many options, just one example of forwarding the request to a port on the Docker host. You can also forward requests to separate containers in the same isolated Docker network.
        reverse_proxy /* host.docker.internal:8096
}

Finally, the compose.yml:

services:
  caddy:
    build:
      context: .
    restart: unless-stopped
    environment:
      CF_API_TOKEN:
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - /my/web/files/:/mnt/
      - caddy_data:/data
      - caddy_config:/config
    user: "$PUID:$PGID"
    extra_hosts:
      - "host.docker.internal:host-gateway"
  init:
    image: busybox
    command: sh -c 'chown -R "$PUID:$PGID" /data /config && chmod -R ug=rwX,o-rwx /data /config'
    volumes:
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
  caddy_config:

This part is then run with:

docker compose up --detach

Now add some Ziti! The Ziti service addresses must match your Caddyfile. Caddy automatically provides the certificate and auto-renewal as long as the DNS provider token is valid.

An example of a matching Ziti service address would be www.ziti.example.com:443. You can have separate Caddyfile blocks for each service to handle each domain name or use a wildcard, depending on the requirements of the destination app you’re providing.
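For completeness, the Ziti side might look something like the sketch below, assuming a running OpenZiti controller and an already-enrolled identity on the Docker host. The config/service/policy names, role attributes, and the address are examples, not a prescription:

```shell
# Intercept config: tunnelers capture traffic for the Caddy-served address
ziti edge create config caddy-intercept intercept.v1 \
  '{"protocols":["tcp"],"addresses":["www.ziti.example.com"],"portRanges":[{"low":443,"high":443}]}'

# Host config: the hosting identity forwards intercepted traffic to Caddy's listener
ziti edge create config caddy-host host.v1 \
  '{"protocol":"tcp","address":"127.0.0.1","port":443}'

# Tie the configs together as a service and authorize identities with policies
ziti edge create service caddy-https --configs caddy-intercept,caddy-host
ziti edge create service-policy caddy-bind Bind \
  --service-roles '@caddy-https' --identity-roles '#caddy-hosts'
ziti edge create service-policy caddy-dial Dial \
  --service-roles '@caddy-https' --identity-roles '#clients'
```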