NGINX Updates Mitigate the August 2019 HTTP/2 Vulnerabilities

Today we are releasing updates to NGINX Open Source and NGINX Plus in response to the recent discovery of vulnerabilities in many implementations of HTTP/2. We strongly recommend upgrading all systems that have HTTP/2 enabled.

In May 2019, researchers at Netflix discovered a number of security vulnerabilities in several HTTP/2 server implementations. These were responsibly reported to each of the vendors and maintainers concerned. NGINX was vulnerable to three attack vectors, as detailed in the following CVEs:

  • CVE-2019-9511 (HTTP/2 Data Dribble)
  • CVE-2019-9513 (HTTP/2 Resource Loop)
  • CVE-2019-9516 (HTTP/2 0-Length Headers Leak)

We have addressed these vulnerabilities, and added other HTTP/2 security safeguards, in the following NGINX versions:

  • NGINX 1.16.1 (stable)
  • NGINX 1.17.3 (mainline)
  • NGINX Plus R18 P1
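
As a quick way to assess exposure before scheduling the upgrade, you can check which version a host is running and whether HTTP/2 is enabled at all; a minimal sketch (the upgrade command shown is a Debian/Ubuntu example and varies by distribution and by NGINX Open Source vs. NGINX Plus):

$ nginx -v                                   # confirm the running version
$ grep -r "http2" /etc/nginx/                # any listen directives with HTTP/2 enabled?
$ sudo apt-get update && sudo apt-get install nginx   # for NGINX Plus, the package is nginx-plus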

Using the NGINX Plus Ingress Controller for Kubernetes with OpenID Connect Authentication from Azure AD

NGINX Open Source is already the default Ingress resource for Kubernetes, but NGINX Plus provides additional enterprise‑grade capabilities, including JWT validation, session persistence, and a large set of metrics. In this blog we show how to use NGINX Plus to perform OpenID Connect (OIDC) authentication for applications and resources behind the Ingress in a Kubernetes environment, in a setup that simplifies scaled rollouts.

The following graphic depicts the authentication process with this setup:

To create the setup, perform the steps in the sections that follow.

Notes:

  • This blog is for demonstration and testing purposes only, as an illustration of how to use NGINX Plus for authentication in Kubernetes using OIDC credentials. The setup is not necessarily covered by your NGINX Plus support contract, nor is it suitable for production workloads without modifications that address your organization’s security and governance requirements.
  • Several NGINX colleagues collaborated on this blog and I thank them for their contributions. I particularly want to thank the NGINX colleague (he modestly wishes to remain anonymous) who first came up with this use case!

Obtaining Credentials from the OpenID Connect Identity Provider (Azure Active Directory)

The purpose of OpenID Connect (OIDC) is to use established, well‑known user identities without increasing the attack surface of the identity provider (IdP, in OIDC terms). Our application trusts the IdP, so when it calls the IdP to authenticate a user, it is then willing to use the proof of authentication to control authorized access to resources.

In this example, we’re using Azure Active Directory (AD) as the IdP, but you can choose any of the many OIDC IdPs operating today. For example, our earlier blog post Authenticating Users to Existing Applications with OpenID Connect and NGINX Plus uses Google.

To use Azure AD as the IdP, perform the following steps, replacing the sample values with the ones appropriate for your application:

  1. If you don’t already use Azure, create an account.

  2. Navigate to the Azure portal and click Azure Active Directory in the left navigation column.

    In this blog we’re using features that are available in the Premium version of AD and not the standard free version. If you don’t already have the Premium version (as is the case for new accounts), you can start a free trial as prompted on the AD Overview page.

  3. Click App registrations in the left navigation column (we have minimized the global navigation column in the screenshot).

  4. On the App registrations page, click New registration.

  5. On the Register an application page that opens, enter values in the Name and Redirect URI fields, click the appropriate radio button in the Supported account types section, and then click the  Register  button. We’re using the following values:

    • Name – cafe
    • Supported account types – Account in this organizational directory only
    • Redirect URI (optional) – Web: https://cafe.nginx.net/_codexch

  6. Make note of the values in the Application (client) ID and Directory (tenant) ID fields on the cafe confirmation page that opens. We’ll add them to the cafe-ingress.yaml file we create in Setting Up the Sample Application to Use OpenID Connect.

  7. In the Manage section of the left navigation bar, click Certificates & secrets (see the preceding screenshot). On the page that opens, click the New client secret button.

  8. In the Add a client secret pop‑up window, enter the following values and click the  Add  button:

    • Description – client_secret
    • Expires – Never

  9. Copy the value for client_secret that appears, because it will not be recoverable after you close the window. In our example it is kn_3VLh]1I3ods*[DDmMxNmg8xxx.

  10. URL‑encode the client secret. There are a number of ways to do this but for a non‑production example we can use the urlencoder.org website. Paste the secret in the upper gray box, click the  > ENCODE <  button, and the encoded value appears in the lower gray box. Copy the encoded value for use in configuration files. In our example it is kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx.
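
    If you prefer not to paste a secret into a third‑party website, you can URL‑encode it locally instead; a quick sketch using Python from the command line (shown with the sample secret from Step 9):

    $ python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' 'kn_3VLh]1I3ods*[DDmMxNmg8xxx'
    kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx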

Installing and Configuring Kubernetes

There are many ways to install and configure Kubernetes, but for this example we’ll use one of my favorite installers, Kubespray. You can install Kubespray from the GitHub repo.

You can create the Kubernetes cluster on any platform you wish. Here we’re using a MacBook. We’ve previously used VMware Fusion to create four virtual machines (VMs) on the MacBook. We’ve also created a custom network that supports connection to external networks using Network Address Translation (NAT). To enable NAT in Fusion, navigate to Preferences > Network, create a new custom network, and enable NAT by expanding the Advanced section and checking the option for NAT.

The VMs have the following properties:

| Name  | OS         | IP Address     | Alias IP Address | Memory | Disk Size |
|-------|------------|----------------|------------------|--------|-----------|
| node1 | CentOS 7.6 | 172.16.186.101 | 172.16.186.100   | 4 GB   | 20 GB     |
| node2 | CentOS 7.6 | 172.16.186.102 | –                | 2 GB   | 20 GB     |
| node3 | CentOS 7.6 | 172.16.186.103 | –                | 2 GB   | 20 GB     |
| node4 | CentOS 7.6 | 172.16.186.104 | –                | 2 GB   | 20 GB     |

Note that we set a static IP address for each node and created an alias IP address on node1. In addition we satisfied the following requirements for Kubernetes nodes:

  • Disabling swap
  • Allowing IP address forwarding
  • Copying the ssh key from the host running Kubespray (the MacBook) to each of the four VMs, to enable connecting over ssh without a password
  • Modifying the sudoers file on each of the four VMs to allow sudo without a password (use the visudo command and make the following changes):

    ## Allows people in group wheel to run all commands
    # %wheel          ALL=(ALL)    ALL

    ## Same thing without a password
    %wheel  ALL=(ALL)       NOPASSWD: ALL

We disabled firewalld on the VMs but for production you likely want to keep it enabled and define the ports through which the firewall accepts traffic. We have SELinux in enforcing mode.
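
For reference, the swap, IP‑forwarding, and SSH‑key requirements above can be satisfied with standard commands; a minimal sketch, assuming CentOS 7 defaults (the first four commands run on each VM, the last one from the MacBook, repeated for each node):

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab                                  # keep swap off across reboots
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
$ sudo sysctl --system
$ ssh-copy-id 172.16.186.101                                                # then .102, .103, and .104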

On the MacBook we also satisfied all the Kubespray prerequisites, including installation of an Ansible version supported by Kubespray.
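
One straightforward way to satisfy the Ansible prerequisite is to install the Python dependencies that Kubespray itself pins; a sketch, assuming pip3 is available on the MacBook:

$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray
$ sudo pip3 install -r requirements.txt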

Kubespray comes with a number of configuration files. We’re replacing the values in several fields in two of them:

  • group_vars/all/all.yml

    # adding the ability to call upstream DNS
    upstream_dns_servers:
      - 8.8.8.8
      - 8.8.4.4
  • group_vars/k8s-cluster/k8s-cluster.yml

    kube_network_plugin: flannel
    # Make sure the following subnets aren't used by active networks
    kube_service_addresses: 10.233.0.0/18
    kube_pods_subnet: 10.233.64.0/18
    # change the cluster name to whatever you plan to use
    cluster_name: k8s.nginx.net
    # add so we get kubectl and the config files locally
    kubeconfig_localhost: true
    kubectl_localhost: true

We also create a new hosts.yml file with the following contents:

all:
  hosts:
    node1:
      ansible_host: 172.16.186.101
      ip: 172.16.186.101
      access_ip: 172.16.186.101
    node2:
      ansible_host: 172.16.186.102
      ip: 172.16.186.102
      access_ip: 172.16.186.102
    node3:
      ansible_host: 172.16.186.103
      ip: 172.16.186.103
      access_ip: 172.16.186.103
    node4:
      ansible_host: 172.16.186.104
      ip: 172.16.186.104
      access_ip: 172.16.186.104
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
        node4:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Now we run the following command to create a four‑node Kubernetes cluster with node1 as the single master. (Kubernetes recommends three master nodes in a production environment, but one node is sufficient for our example and eliminates any possible issues with synchronization.)

$ ansible-playbook -i inventory/mycluster/hosts.yml -b cluster.yml

Creating a Docker Image for the NGINX Plus Ingress Controller

NGINX publishes a Docker image for the open source NGINX Ingress Controller, but we’re using NGINX Plus and so need to build a private Docker image with the certificate and key associated with our NGINX Plus subscription. We’re following the instructions at the GitHub repo for the NGINX Ingress Controller, but replacing the contents of the Dockerfile provided in that repo, as detailed below.

Note: Be sure to store the image in a private Docker Hub repository, not a standard public repo; otherwise your NGINX Plus credentials are exposed and subject to misuse. A free Docker Hub account entitles you to one private repo.

Replace the contents of the standard Dockerfile provided in the kubernetes-ingress repo with the following text. One important difference is that we include the NGINX JavaScript (njs) module in the Docker image by adding the nginx-plus-module-njs argument to the second apt-get install command.

FROM debian:stretch-slim

LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"

ENV NGINX_PLUS_VERSION 18-1~stretch
ARG IC_VERSION

# Download certificate and key from the customer portal (https://cs.nginx.com)
# and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Make sure the certificate and key have correct permissions
RUN chmod 644 /etc/ssl/nginx/*

# Install NGINX Plus
RUN set -x \
  && apt-get update \
  && apt-get install --no-install-recommends --no-install-suggests -y apt-transport-https ca-certificates gnupg1 \
  && \
  NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
  found=''; \
  for server in \
    ha.pool.sks-keyservers.net \
    hkp://keyserver.ubuntu.com:80 \
    hkp://p80.pool.sks-keyservers.net:80 \
    pgp.mit.edu \
  ; do \
    echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
    apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
  done; \
  test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
  echo "Acquire::https::plus-pkgs.nginx.com::Verify-Peer \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::Verify-Host \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::SslCert     \"/etc/ssl/nginx/nginx-repo.crt\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::SslKey      \"/etc/ssl/nginx/nginx-repo.key\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::User-Agent  \"k8s-ic-$IC_VERSION-apt\";" >> /etc/apt/apt.conf.d/90nginx \
  && printf "deb https://plus-pkgs.nginx.com/debian stretch nginx-plus\n" > /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus=${NGINX_PLUS_VERSION} nginx-plus-module-njs \
  && apt-get remove --purge --auto-remove -y gnupg1 \
  && rm -rf /var/lib/apt/lists/* \
  && rm -rf /etc/ssl/nginx \
  && rm /etc/apt/apt.conf.d/90nginx /etc/apt/sources.list.d/nginx-plus.list


# Forward NGINX access and error logs to stdout and stderr of the Ingress
# controller process
RUN ln -sf /proc/1/fd/1 /var/log/nginx/access.log \
	&& ln -sf /proc/1/fd/1 /var/log/nginx/stream-access.log \
	&& ln -sf /proc/1/fd/1 /var/log/nginx/oidc_auth.log \
	&& ln -sf /proc/1/fd/2 /var/log/nginx/error.log \
	&& ln -sf /proc/1/fd/2 /var/log/nginx/oidc_error.log


EXPOSE 80 443

COPY nginx-ingress internal/configs/version1/nginx-plus.ingress.tmpl internal/configs/version1/nginx-plus.tmpl internal/configs/version2/nginx-plus.virtualserver.tmpl  /

RUN rm /etc/nginx/conf.d/* \
  && mkdir -p /etc/nginx/secrets

# Uncomment the line below to add the default.pem file to the image
# and use it as a certificate and key for the default server
# ADD default.pem /etc/nginx/secrets/default

ENTRYPOINT ["/nginx-ingress"]

We build the image from this Dockerfile, tag it 1.5.0-oidc, and push it to a private repo on Docker Hub under the name nginx-plus:1.5.0-oidc. Our private repo is called magicalyak, but we’ll remind you to substitute the name of your private repo as necessary below.
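
For reference, the build‑and‑push step might look like the following minimal sketch; it assumes a build context that also contains the compiled nginx-ingress binary and the template files referenced by the COPY instruction above, and you should substitute your own private repo name for magicalyak:

$ docker build --build-arg IC_VERSION=1.5.0 -t magicalyak/nginx-plus:1.5.0-oidc .
$ docker login
$ docker push magicalyak/nginx-plus:1.5.0-oidc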

To prepare the Kubernetes nodes for the custom Docker image, we run the following commands on each of them. This enables Kubernetes to place the Ingress resource on the node of its choice. (You can also run the commands on just one node and then direct the Ingress resource to run exclusively on that node.) In the final command, substitute the name of your private repo for magicalyak:

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ docker login # this prompts you to enter your Docker username and password
$ docker pull magicalyak/nginx-plus:1.5.0-oidc

At this point the Kubernetes nodes are running.

In order to use the Kubernetes dashboard, we run the following commands. The first enables kubectl on the local machine (the MacBook in this example). The second returns the URL for the dashboard, and the third returns the token we need to access the dashboard (we’ll paste it into the token field on the dashboard login page).

$ cp inventory/mycluster/artifacts/admin.conf ~/.kube/config
$ kubectl cluster-info # gives us the dashboard URL
$ kubectl -n kube-system describe secrets \
   `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
       | awk '/token:/ {print $2}'

Installing and Customizing the NGINX Plus Ingress Controller

We now install the NGINX Plus Ingress Controller in our Kubernetes cluster and customize the configuration for OIDC by incorporating the IDs and secret generated by Azure AD in Obtaining Credentials from an OpenID Connect Identity Provider.

Cloning the NGINX Plus Ingress Controller Repo

We first clone the kubernetes-ingress GitHub repo and change directory to the deployments subdirectory. Then we run kubectl commands to create the resources needed: the namespace and service account, the default server secret, the custom resource definition, and role‑based access control (RBAC).

$ git clone https://github.com/nginxinc/kubernetes-ingress
$ cd kubernetes-ingress/deployments
$ kubectl create -f common/ns-and-sa.yaml
$ kubectl create -f common/default-server-secret.yaml
$ kubectl create -f common/custom-resource-definitions.yaml
$ kubectl create -f rbac/rbac.yaml

Creating the NGINX ConfigMap

Now we replace the contents of the common/nginx-config.yaml file with the following, a ConfigMap that enables the njs module and includes configuration for OIDC.

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  #external-status-address: 172.16.186.101
  main-snippets: |
    load_module modules/ngx_http_js_module.so;
  ingress-template: |
    # configuration for {{.Ingress.Namespace}}/{{.Ingress.Name}}
    {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
    {{$oidc := index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
    {{- if eq $oidc "True"}}
    {{- $kv_zone_size := index $.Ingress.Annotations "custom.nginx.org/keyval-zone-size"}}
    {{- $refresh_time := index $.Ingress.Annotations "custom.nginx.org/refresh-token-timeout"}}
    {{- $session_time := index $.Ingress.Annotations "custom.nginx.org/session-token-timeout"}}
    {{- if not $kv_zone_size}}{{$kv_zone_size = "1M"}}{{end}}
    {{- if not $refresh_time}}{{$refresh_time = "8h"}}{{end}}
    {{- if not $session_time}}{{$session_time = "1h"}}{{end}}
    keyval_zone zone=opaque_sessions:{{$kv_zone_size}} state=/var/lib/nginx/state/opaque_sessions.json timeout={{$session_time}};
    keyval_zone zone=refresh_tokens:{{$kv_zone_size}} state=/var/lib/nginx/state/refresh_tokens.json timeout={{$refresh_time}};
    keyval $cookie_auth_token $session_jwt zone=opaque_sessions;
    keyval $cookie_auth_token $refresh_token zone=refresh_tokens;
    keyval $request_id $new_session zone=opaque_sessions;
    keyval $request_id $new_refresh zone=refresh_tokens;
    
    proxy_cache_path /var/cache/nginx/jwk levels=1 keys_zone=jwk:64k max_size=1m;
    
    map $refresh_token $no_refresh {
        ""      1;
        "-"     1;
        default 0;
    }
    
    log_format  main_jwt  '$remote_addr $jwt_claim_sub $remote_user [$time_local] "$request" $status '
                          '$body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
    
    js_include conf.d/openid_connect.js;
    js_set $requestid_hash hashRequestId;
    {{end}}{{end -}}
    {{range $upstream := .Upstreams}}
    upstream {{$upstream.Name}} {
        zone {{$upstream.Name}} 256k;
        {{if $upstream.LBMethod }}{{$upstream.LBMethod}};{{end}}
        {{range $server := $upstream.UpstreamServers}}
        server {{$server.Address}}:{{$server.Port}} max_fails={{$server.MaxFails}} fail_timeout={{$server.FailTimeout}}
            {{- if $server.SlowStart}} slow_start={{$server.SlowStart}}{{end}}{{if $server.Resolve}} resolve{{end}};{{end}}
        {{if $upstream.StickyCookie}}
        sticky cookie {{$upstream.StickyCookie}};
        {{end}}
        {{if $.Keepalive}}keepalive {{$.Keepalive}};{{end}}
        {{- if $upstream.UpstreamServers -}}
        {{- if $upstream.Queue}}
        queue {{$upstream.Queue}} timeout={{$upstream.QueueTimeout}}s;
        {{- end -}}
        {{- end}}
    }
    {{- end}}
    
    {{range $server := .Servers}}
    server {
        {{if not $server.GRPCOnly}}
        {{range $port := $server.Ports}}
        listen {{$port}}{{if $server.ProxyProtocol}} proxy_protocol{{end}};
        {{- end}}
        {{end}}
        {{if $server.SSL}}
        {{- range $port := $server.SSLPorts}}
        listen {{$port}} ssl{{if $server.HTTP2}} http2{{end}}{{if $server.ProxyProtocol}} proxy_protocol{{end}};
        {{- end}}
        ssl_certificate {{$server.SSLCertificate}};
        ssl_certificate_key {{$server.SSLCertificateKey}};
        {{if $server.SSLCiphers}}
        ssl_ciphers {{$server.SSLCiphers}};
        {{end}}
        {{end}}
        {{range $setRealIPFrom := $server.SetRealIPFrom}}
        set_real_ip_from {{$setRealIPFrom}};{{end}}
        {{if $server.RealIPHeader}}real_ip_header {{$server.RealIPHeader}};{{end}}
        {{if $server.RealIPRecursive}}real_ip_recursive on;{{end}}
        
        server_tokens "{{$server.ServerTokens}}";
        
        server_name {{$server.Name}};
        
        status_zone {{$server.StatusZone}};
        
        {{if not $server.GRPCOnly}}
        {{range $proxyHideHeader := $server.ProxyHideHeaders}}
        proxy_hide_header {{$proxyHideHeader}};{{end}}
        {{range $proxyPassHeader := $server.ProxyPassHeaders}}
        proxy_pass_header {{$proxyPassHeader}};{{end}}
        {{end}}
        
        {{if $server.SSL}}
        {{if not $server.GRPCOnly}}
        {{- if $server.HSTS}}
        set $hsts_header_val "";
        proxy_hide_header Strict-Transport-Security;
        {{- if $server.HSTSBehindProxy}}
        if ($http_x_forwarded_proto = 'https') {
        {{else}}
        if ($https = on) {
        {{- end}}
            set $hsts_header_val "max-age={{$server.HSTSMaxAge}}; {{if $server.HSTSIncludeSubdomains}}includeSubDomains; {{end}}preload";
        }
        
        add_header Strict-Transport-Security "$hsts_header_val" always;
        {{end}}
        
        {{- if $server.SSLRedirect}}
        if ($scheme = http) {
            return 301 https://$host:{{index $server.SSLPorts 0}}$request_uri;
        }
        {{- end}}
        {{end}}
        {{- end}}
        
        {{- if $server.RedirectToHTTPS}}
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$host$request_uri;
        }
        {{- end}}
        
        {{with $jwt := $server.JWTAuth}}
        auth_jwt_key_file {{$jwt.Key}};
        auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
        
        {{- if $jwt.RedirectLocationName}}
        error_page 401 {{$jwt.RedirectLocationName}};
        {{end}}
        {{end}}
        
        {{- if $server.ServerSnippets}}
        {{range $value := $server.ServerSnippets}}
        {{$value}}{{end}}
        {{- end}}
        
        {{- range $healthCheck := $server.HealthChecks}}
        location @hc-{{$healthCheck.UpstreamName}} {
            {{- range $name, $header := $healthCheck.Headers}}
            proxy_set_header {{$name}} "{{$header}}";
            {{- end }}
            proxy_connect_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_read_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_send_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_pass {{$healthCheck.Scheme}}://{{$healthCheck.UpstreamName}};
            health_check {{if $healthCheck.Mandatory}}mandatory {{end}}uri={{$healthCheck.URI}} interval=
                {{- $healthCheck.Interval}}s fails={{$healthCheck.Fails}} passes={{$healthCheck.Passes}};
        }
        {{end -}}
        
        {{- range $location := $server.JWTRedirectLocations}}
        location {{$location.Name}} {
            internal;
            return 302 {{$location.LoginURL}};
        }
        {{end -}}
        
        {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
        {{- $oidc_resolver := index $.Ingress.Annotations "custom.nginx.org/oidc-resolver-address"}}
        {{- if not $oidc_resolver}}{{$oidc_resolver = "8.8.8.8"}}{{end}}
        resolver {{$oidc_resolver}};
        subrequest_output_buffer_size 32k;
        
        {{- $oidc_jwt_keyfile := index $.Ingress.Annotations "custom.nginx.org/oidc-jwt-keyfile"}}
        {{- $oidc_logout_redirect := index $.Ingress.Annotations "custom.nginx.org/oidc-logout-redirect"}}
        {{- $oidc_authz_endpoint := index $.Ingress.Annotations "custom.nginx.org/oidc-authz-endpoint"}}
        {{- $oidc_token_endpoint := index $.Ingress.Annotations "custom.nginx.org/oidc-token-endpoint"}}
        {{- $oidc_client := index $.Ingress.Annotations "custom.nginx.org/oidc-client"}}
        {{- $oidc_client_secret := index $.Ingress.Annotations "custom.nginx.org/oidc-client-secret"}}
        {{ $oidc_hmac_key := index $.Ingress.Annotations "custom.nginx.org/oidc-hmac-key"}}
        set $oidc_jwt_keyfile "{{$oidc_jwt_keyfile}}";
        set $oidc_logout_redirect "{{$oidc_logout_redirect}}";
        set $oidc_authz_endpoint "{{$oidc_authz_endpoint}}";
        set $oidc_token_endpoint "{{$oidc_token_endpoint}}";
        set $oidc_client "{{$oidc_client}}";
        set $oidc_client_secret "{{$oidc_client_secret}}";
        set $oidc_hmac_key "{{$oidc_hmac_key}}";
        {{end -}}
        
        {{range $location := $server.Locations}}
        location {{$location.Path}} {
            {{with $location.MinionIngress}}
            # location for minion {{$location.MinionIngress.Namespace}}/{{$location.MinionIngress.Name}}
            {{end}}
            {{if $location.GRPC}}
            {{if not $server.GRPCOnly}}
            error_page 400 @grpcerror400;
            error_page 401 @grpcerror401;
            error_page 403 @grpcerror403;
            error_page 404 @grpcerror404;
            error_page 405 @grpcerror405;
            error_page 408 @grpcerror408;
            error_page 414 @grpcerror414;
            error_page 426 @grpcerror426;
            error_page 500 @grpcerror500;
            error_page 501 @grpcerror501;
            error_page 502 @grpcerror502;
            error_page 503 @grpcerror503;
            error_page 504 @grpcerror504;
            {{end}}
            
            {{- if $location.LocationSnippets}}
            {{range $value := $location.LocationSnippets}}
            {{$value}}{{end}}
            {{- end}}
            
            {{with $jwt := $location.JWTAuth}}
            auth_jwt_key_file {{$jwt.Key}};
            auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
            {{end}}
            
            grpc_connect_timeout {{$location.ProxyConnectTimeout}};
            grpc_read_timeout {{$location.ProxyReadTimeout}};
            grpc_set_header Host $host;
            grpc_set_header X-Real-IP $remote_addr;
            grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            grpc_set_header X-Forwarded-Host $host;
            grpc_set_header X-Forwarded-Port $server_port;
            grpc_set_header X-Forwarded-Proto $scheme;
            
            {{- if $location.ProxyBufferSize}}
            grpc_buffer_size {{$location.ProxyBufferSize}};
            {{- end}}
            
            {{if $location.SSL}}
            grpc_pass grpcs://{{$location.Upstream.Name}};
            {{else}}
            grpc_pass grpc://{{$location.Upstream.Name}};
            {{end}}
            {{else}}
            proxy_http_version 1.1;
            {{if $location.Websocket}}
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            {{- else}}
            {{- if $.Keepalive}}proxy_set_header Connection "";{{end}}
            {{- end}}
            
            {{- if $location.LocationSnippets}}
            {{range $value := $location.LocationSnippets}}
            {{$value}}{{end}}
            {{- end}}
            
            {{ with $jwt := $location.JWTAuth }}
            auth_jwt_key_file {{$jwt.Key}};
            auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
            {{if $jwt.RedirectLocationName}}
            error_page 401 {{$jwt.RedirectLocationName}};
            {{end}}
            {{end}}
            
            {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
            auth_jwt "" token=$session_jwt;
            auth_jwt_key_request /_jwks_uri;
            error_page 401 @oidc_auth;
            {{end}}
            
            proxy_connect_timeout {{$location.ProxyConnectTimeout}};
            proxy_read_timeout {{$location.ProxyReadTimeout}};
            client_max_body_size {{$location.ClientMaxBodySize}};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto {{if $server.RedirectToHTTPS}}https{{else}}$scheme{{end}};
            proxy_buffering {{if $location.ProxyBuffering}}on{{else}}off{{end}};
            {{- if $location.ProxyBuffers}}
            proxy_buffers {{$location.ProxyBuffers}};
            {{- end}}
            {{- if $location.ProxyBufferSize}}
            proxy_buffer_size {{$location.ProxyBufferSize}};
            {{- end}}
            {{- if $location.ProxyMaxTempFileSize}}
            proxy_max_temp_file_size {{$location.ProxyMaxTempFileSize}};
            {{- end}}
            {{if $location.SSL}}
            proxy_pass https://{{$location.Upstream.Name}}{{$location.Rewrite}};
            {{else}}
            proxy_pass http://{{$location.Upstream.Name}}{{$location.Rewrite}};
            {{end}}
            {{end}}
        }{{end}}
        {{if $server.GRPCOnly}}
        error_page 400 @grpcerror400;
        error_page 401 @grpcerror401;
        error_page 403 @grpcerror403;
        error_page 404 @grpcerror404;
        error_page 405 @grpcerror405;
        error_page 408 @grpcerror408;
        error_page 414 @grpcerror414;
        error_page 426 @grpcerror426;
        error_page 500 @grpcerror500;
        error_page 501 @grpcerror501;
        error_page 502 @grpcerror502;
        error_page 503 @grpcerror503;
        error_page 504 @grpcerror504;
        {{end}}
        {{if $server.HTTP2}}
        location @grpcerror400 { default_type application/grpc; return 400 "\n"; }
        location @grpcerror401 { default_type application/grpc; return 401 "\n"; }
        location @grpcerror403 { default_type application/grpc; return 403 "\n"; }
        location @grpcerror404 { default_type application/grpc; return 404 "\n"; }
        location @grpcerror405 { default_type application/grpc; return 405 "\n"; }
        location @grpcerror408 { default_type application/grpc; return 408 "\n"; }
        location @grpcerror414 { default_type application/grpc; return 414 "\n"; }
        location @grpcerror426 { default_type application/grpc; return 426 "\n"; }
        location @grpcerror500 { default_type application/grpc; return 500 "\n"; }
        location @grpcerror501 { default_type application/grpc; return 501 "\n"; }
        location @grpcerror502 { default_type application/grpc; return 502 "\n"; }
        location @grpcerror503 { default_type application/grpc; return 503 "\n"; }
        location @grpcerror504 { default_type application/grpc; return 504 "\n"; }
        {{end}}
        {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc" -}}
        include conf.d/openid_connect.server_conf;
        {{- end}}
    }{{end}}

Now we deploy the ConfigMap in Kubernetes, and change directory back up to kubernetes-ingress.

$ kubectl create -f common/nginx-config.yaml
$ cd ..

Incorporating OpenID Connect into the NGINX Plus Ingress Controller

Since we are using OIDC resources, we’re taking advantage of the OIDC reference implementation provided by NGINX on GitHub. After cloning the nginx-openid-connect repo inside our existing kubernetes-ingress repo, we create ConfigMaps from the openid_connect.js and openid_connect.server_conf files.

$ git clone https://github.com/nginxinc/nginx-openid-connect
$ cd nginx-openid-connect
$ kubectl create configmap -n nginx-ingress openid-connect.js --from-file=openid_connect.js
$ kubectl create configmap -n nginx-ingress openid-connect.server-conf --from-file=openid_connect.server_conf

Now we incorporate the two files into our Ingress controller deployment as Kubernetes volumes of type ConfigMap, by adding the following directives to the existing nginx-plus-ingress.yaml file in the deployments/deployment subdirectory of our kubernetes-ingress repo:

    volumes:
    - name: openid-connect-js
      configMap:
        name: openid-connect.js
    - name: openid-connect-server-conf
      configMap:
        name: openid-connect.server-conf

We also add the following directives to nginx-plus-ingress.yaml to make the files accessible in the /etc/nginx/conf.d directory of our deployment:

         volumeMounts:
          - name: openid-connect-js
            mountPath: /etc/nginx/conf.d/openid_connect.js
            subPath: openid_connect.js
          - name: openid-connect-server-conf
            mountPath: /etc/nginx/conf.d/openid_connect.server_conf
            subPath: openid_connect.server_conf

Here’s the complete nginx-plus-ingress.yaml file for our deployment. If using it as the basis for your own deployment, replace magicalyak with the name of your private registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      containers:
      - image: magicalyak/nginx-plus:1.5.0-oidc
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -report-ingress-status
          #- -v=3 # Enables extensive logging. Useful for troubleshooting.
          #- -external-service=nginx-ingress
          #- -enable-leader-election
          #- -enable-prometheus-metrics
          #- -enable-custom-resources
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        volumeMounts:
        - name: openid-connect-js
          mountPath: /etc/nginx/conf.d/openid_connect.js
          subPath: openid_connect.js
        - name: openid-connect-server-conf
          mountPath: /etc/nginx/conf.d/openid_connect.server_conf
          subPath: openid_connect.server_conf
      serviceAccountName: nginx-ingress
      volumes:
      - name: openid-connect-js
        configMap:
          name: openid-connect.js
      - name: openid-connect-server-conf
        configMap:
          name: openid-connect.server-conf

Creating the Kubernetes Service

We also need to define a Kubernetes service by creating a new file called nginx-plus-service.yaml in the deployments/service subdirectory of our kubernetes-ingress repo. We set the ExternalIPs field to the alias IP address (172.16.186.100) we assigned to node1 in Installing and Configuring Kubernetes, but you could use NodePorts or other options instead.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    svc: nginx-ingress
spec:
  type: ClusterIP
  clusterIP:
  externalIPs:
  - 172.16.186.100
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    protocol: TCP
  selector:
    app: nginx-ingress
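
If you would rather not assign an external IP, a NodePort Service is one alternative; a minimal sketch of the spec section (the nodePort values 30080 and 30443 are arbitrary examples, not part of our setup):

spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    nodePort: 30443
    protocol: TCP
  selector:
    app: nginx-ingress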

Deploying the Ingress Controller

With all of the YAML files in place, we run the following commands to deploy the Ingress controller and service resources in Kubernetes:

$ cd ../deployments
$ kubectl create -f deployment/nginx-plus-ingress.yaml
$ kubectl create -f service/nginx-plus-service.yaml
$ cd ..

At this point our Ingress controller is installed and we can focus on creating the sample resource for which we’re using OIDC authentication.
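
Before moving on, a quick sanity check confirms that the Ingress Controller pod and its Service are up:

$ kubectl get pods,services -n nginx-ingress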

Setting Up the Sample Application to Use OpenID Connect

To test our OIDC authentication setup, we’re using a very simple application called cafe, which has tea and coffee service endpoints. It’s included in the examples/complete-example directory of the kubernetes-ingress repo on GitHub, and you can read more about it in NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing on our blog.

We need to make some modifications to the sample app, however – specifically, we need to insert the values we obtained from Azure AD into the YAML file for the application, cafe-ingress.yaml in the examples/complete-example directory.

We’re making two sets of changes, as shown in the full file below:

  1. We’re adding an annotations section. The file below uses the {client_key}, {tenant_key}, and {client_secret} variables to represent the values obtained from an IdP. To make it easier to track which values we’re referring to, in the list we’ve specified the literal values we obtained from Azure AD in the indicated step in Obtaining Credentials from the OpenID Connect Identity Provider. When creating your own deployment, substitute the values you obtain from Azure AD (or other IdP).

    • {client_key} – The value in the Application (client) ID field on the Azure AD confirmation page. For our deployment, it’s a2b20239-2dce-4306-a385-ac9xxx, as reported in Step 6.
    • {tenant_key} – The value in the Directory (tenant) ID field on the Azure AD confirmation page. For our deployment, it’s dd3dfd2f-6a3b-40d1-9be0-bf8xxx, as reported in Step 6.
    • {client_secret} – The URL-encoded version of the value in the Client secrets section in Azure AD. For our deployment, it’s kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx, as generated in Step 10.

    In addition, note that the value in the custom.nginx.org/oidc-hmac-key field is just an example. Substitute your own unique value that ensures nonce values are unpredictable.

  2. We’re changing the value in the hosts and host fields to cafe.nginx.net, and adding an entry for that domain to the /etc/hosts file on each of the four Kubernetes nodes, specifying the IP address from the externalIPs field in nginx-plus-service.yaml. In our deployment, we set this to 172.16.186.100 in Installing and Configuring Kubernetes.
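
For reference, the /etc/hosts entry on each node (and on any client machine used for testing) looks like this in our setup:

172.16.186.100   cafe.nginx.net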

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    custom.nginx.org/enable-oidc:  "True"
    custom.nginx.org/keyval-zone-size: "1m" #(default 1m)
    custom.nginx.org/refresh-token-timeout: "8h" #(default 8h)
    custom.nginx.org/session-token-timeout: "1h" #(default 1h)
    custom.nginx.org/oidc-resolver-address: "8.8.8.8" #(default 8.8.8.8)
    custom.nginx.org/oidc-jwt-keyfile: "https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys"
    custom.nginx.org/oidc-logout-redirect: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout"
    custom.nginx.org/oidc-authz-endpoint: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize"
    custom.nginx.org/oidc-token-endpoint: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    custom.nginx.org/oidc-client:  "{client_key}"
    custom.nginx.org/oidc-client-secret: "{client_secret}"
    custom.nginx.org/oidc-hmac-key:  "vC5FabzvYvFZFBzxtRCYDYX+"
spec:
  tls:
  - hosts:
    - cafe.nginx.net
    secretName: cafe-secret
  rules:
  - host: cafe.nginx.net
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

We now create the cafe resource in Kubernetes:

$ cd examples/complete-example
$ kubectl create -f cafe-secret.yaml
$ kubectl create -f cafe.yaml
$ kubectl create -f cafe-ingress.yaml
$ cd ../..

To verify that the OIDC authentication process is working, we navigate to https://cafe.nginx.net/tea in a browser. It prompts for our login credentials, authenticates us, and displays some basic information generated by the tea service. For an example, see NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing.

#Culture@NGINX

In our Life@NGINX post, we answer the question “what is life like at NGINX?”. In this post, we want to expand on a related topic: company culture. Senior Vice President and General Manager of NGINX at F5, Gus Robertson, recently noted how compatible the NGINX and F5 cultures are. In this post, we want to expand on why we think the NGINX culture within F5 is so special.

Culture Is Key to Sustaining Success

Company culture is a popular topic in today’s headlines, and an important one considering the number of hours each of us spends at work. But what is company culture? More importantly, what is NGINX’s culture?

When we talk about our culture, we consider how our values affect the past, present, and future of our employees, our products, and our customers. We remember that our experiences shape our opinions and outlook – from our CEO to our interns, from our Cork office to our Sydney office and beyond. Regardless of where our teams are in the world, or which function we work in, the five core values we have held since the beginning are Curiosity, Openness, Progress, Excellence, and Mutual Accountability.

Read on to discover more about what we mean by each of our core values at NGINX, and if what you read reminds you of yourself, then take a look at our current vacancies!

NGINX’s Core Values

NGINX Core Values Mountain and Flags

Curiosity

At NGINX, curiosity is more than an eagerness to learn. Curiosity is not being afraid to ask why. It is digging into our customers’ goals and how we can help to exceed expectations. It is looking to the future and challenging the status quo, with the goal of making our products and customer service even better. It is having the autonomy to pursue a new idea, and being encouraged to try. It is #funfactfriday, our tradition of profiling one of our teammates each week, highlighting interesting facts about his or her life history and interests outside work. Each of us looks for ways to improve, to come together, and to make NGINX better.

Openness

Openness has many facets at NGINX: it starts with our open source roots, grows with each person who joins us, and brings us together as friends and as a family. We bring our whole selves to work: our experiences, our knowledge, and our pun jokes. We partner with our community, our customers, and each other. We look at problems and solutions from all angles. We strive for openness about where we’ve been, where we are, and where to go next. We celebrate our achievements together. The most important part of openness, however, is our transparency. We believe in providing everyone with all the information they need to succeed. This permeates every facet of the business, as explained by F5’s CEO, François Locoh-Donou.

Progress

Progress is built into our DNA. NGINX co‑founder and CTO Igor Sysoev’s passion for solving the C10K problem led him to become the author of NGINX, and inspired by that passion, we continue to build products on the cutting edge of technology. Our desire to develop our professional skill set not only helps each of us on our career path within the NGINX business unit, but it also helps our teams to collectively push forward and reach further. Advancing our technology is important, and the NGINX family is the core to our success. We support each other individually through career growth, and as a family through initiatives like team‑building events, diversity discussions, and social gatherings.

Excellence

Excellence is what NGINX’s reputation is built on. It’s putting our best foot forward every day, building the best products for our open source users and enterprise customers alike, and supporting our colleagues, teams, and company. Excellence has been the goal of NGINX since its inception, and it’s the key to our identity within F5. Each of us strives for excellence individually; together, we strive for it as a team.

Mutual Accountability

Mutual accountability is a simple concept popularized by the former CEO of GE, Jeff Immelt. He sums up mutual accountability this way: Do your job, and take care of others along the way. To us, it means that we don’t operate in silos. We work as a team, supporting each other for the sake of our customers and our community. When we face challenges, we put our best foot forward because we know our teammates will do the same for us. Good teamwork has mutual accountability at its core, making it key to NGINX’s award‑winning and consistent success.

Join Us If You Agree

Curiosity, Openness, Progress, Excellence, and Mutual Accountability – separately, each of these values is important. Together, though, they unite and drive our underlying success. As we consider our values and as we work to maintain our culture as a positive environment, we know that curiosity fosters openness, that openness drives progress, that progress enables excellence, and that excellence depends upon mutual accountability. If our values speak to you, check out Life@NGINX on our blog, and apply to join the NGINX family.

#Life@NGINX

One of the questions we hear most often from prospective employees is “what’s life like at NGINX?”.

The answer is simple: we blend a tight-knit group of teams and colleagues who share a start‑up heritage and culture with the global support and benefits that come with being part of F5. In this blog post, we’ll look at life at NGINX in our global offices.

NGINX Around the World

Life at NGINX is diverse. The NGINX business unit at F5 has office locations internationally – in San Francisco, Cork, Moscow, Singapore, Sydney, and Tokyo – not to mention our remote colleagues living and working in various places around the globe. While our team is scattered geographically, our culture is something that keeps us together as a community and makes Life at NGINX, and within the broader F5 family, what it is.

Our culture is driven by our core values: Curiosity, Openness, Progress, Excellence, and Mutual Accountability. As Gus Robertson, Senior Vice President and General Manager of NGINX at F5, explains, “Not everyone wants to climb mountains on a daily basis. Our team does.” And a team is just what we are, fueled by mutual accountability and close collaboration: a winning combination. Our cultural values align perfectly with F5, with the BeF5 mantra preserving all the values we came to know and love as NGINX employees.

Collaboration underpins us and, as a team, we love to come together. This happens in a lot of different ways – weekly office lunches and breakfasts, Happy Hours to celebrate the end of a successful week, team bowling outings, zip‑line adventures, summer barbecues, holiday parties, and sweet treats to celebrate birthdays, anniversaries, and soccer team victories.

Our Office Locations

San Francisco, CA

The original NGINX headquarters in San Francisco remains our largest office. We’re a mere 15-minute walk away from Market Street, a draw for tourists and shoppers alike. When you arrive at the office, you may well be greeted by some of our canine colleagues, such as Flower – a regular visitor! A short walk from the office brings you to the Yerba Buena Gardens, San Francisco Museum of Modern Art, and the bustling Union Square shopping district. And what about the gorgeous views in the Bay Area, including the famous Golden Gate Bridge and world‑renowned former prison, Alcatraz? With lots to do – cycling, sailing, hiking, eating, and drinking among them – San Francisco has it all.

Cork, Ireland

Cork is a bustling European city on the south coast of Ireland, and remains the NGINX business unit’s largest office in EMEA. It’s located in the heart of the city (or “town” as the locals call it) overlooking the River Lee on the South Mall. Our office is 15 minutes from Cork Airport and a stone’s throw from busy bars, cafés, and restaurants. Thanks to the huge choice of eateries, Lonely Planet regards Ireland’s second city as “arguably the best foodie scene in the country”. Did we mention all the activities that make Cork great? Music, dancing, rugby, soccer, Gaelic games, sailing, surfing, hiking, and more are all on our doorstep! Embrace the Wild Atlantic Way and come visit us in beautiful Cork, where céad míle fáilte (a hundred thousand welcomes) await you.

Moscow, Russia

Moscow is the capital of Russia and importantly for us, the home of Igor Sysoev, original author of NGINX Open Source and co‑founder of NGINX, Inc. Home to many members of our engineering team, the Moscow office is just 3 minutes’ walk from the Sportivnaya Metro station and 15 minutes’ walk from Luzhniki Stadium, the main venue for the 1980 Olympic Games. It’s also a short walk from the office to the banks of the Moskva River and 40 minutes to historic Gorky Park.

Singapore

Singapore’s strategic location at the southern tip of the Malaysian peninsula makes it an ideal location for NGINX in the Asia‑Pacific region. Our office is in Suntec City, a retail and office complex in the Central Business District that’s home to many global tech companies, and just a stone’s throw away from landmarks like the Marina Bay Financial Centre, the 250‑acre Gardens By The Bay park, and the iconic 5‑star Marina Bay Sands hotel.

Sydney, Australia

Our APJC regional office in Australia is in Pyrmont, Sydney, across the popular Darling Harbour from the city center. Situated in a once‑industrial part of the city, the NGINX office on Harris Street is surrounded by hip restaurants, bars, and cafes, all there to cater to both the professionals and young crowd that flock to this old‑school district.

Tokyo, Japan

Last but not least, our APJC regional office in Japan is in Tokyo Square Garden, at the heart of the capital’s central business district. It’s just a block away from the famous Ginza district, and not far from the Imperial Palace and Gardens which features the Edo Castle, dating back to 1457. Don’t forget to head to nearby Chidori-ga-fuchi Moat during the beautiful cherry blossom season!

Join Us and Help Shape Life at NGINX

Life at NGINX is only as good as the people who work with us and share our values. To find out more about what drives us as part of the F5 family, check out #LifeatNGINX on Twitter and read about #Culture@NGINX on our blog. If our values speak to you, apply to join the team at NGINX, or browse through the current open roles available throughout F5 and Aspen Mesh.

Catching Up with the NGINX Application Platform: What’s New in 2019

More than ever before, enterprises are recognizing that digital transformation is critical to their survival. In fact, the Wall Street Journal reports that executives currently see legacy operations and infrastructure as the #1 risk factor jeopardizing their ability to compete with companies that are “born digital”.

Cloud, DevOps, and microservices are key technologies that accelerate digital transformation initiatives. And they’re paying off at companies that leverage them – according to a study from Freeform Dynamics, commissioned by CA Technologies, organizations that have adopted DevOps practices have achieved 60% higher growth in revenue and profits than their peers, and are 2x more likely to be growing at more than 20% annually. Enterprises are also modernizing their app architectures – 86% of respondents in a survey commissioned by LightStep expect microservices to be their default architecture in 5 years.

We unveiled the NGINX Application Platform in late 2017 to enable enterprises undergoing digital transformation to modernize legacy, monolithic applications as well as deliver new, microservices‑based applications and APIs at scale across a multi‑cloud environment. Enterprises deploy the NGINX Application Platform to improve agility, accelerate performance, and reduce capital and operational costs. Since the launch, we have been introducing enterprise‑grade capabilities at a regular pace to all of the component solutions, including NGINX Controller, NGINX Plus, and NGINX Unit. This blog outlines key updates to the NGINX Application Platform and the NGINX Ingress Controller for Kubernetes since the beginning of 2019.

The following table summarizes the new features and benefits introduced to each component since the beginning of 2019. For details, see the linked sections that follow.

| Component | Feature | Benefits |
|---|---|---|
| NGINX Controller Load Balancing Module | Policy‑based approach to configuration management using configuration templates | Prevent misconfigurations and ensure consistency; save time; easily scale application of configurations across multiple NGINX Plus instances |
| | ServiceNow integration | Streamline troubleshooting workflows |
| NGINX Controller API Management Module | Filtering and searching; environment‑specific API definition visualizations | Improved usability: more flexible API definition; easy to filter and search by hostname and APIs |
| NGINX Plus | Dynamic certificate loading; shared configuration across cluster members | Simplified configuration workflows |
| | Support for port ranges in server listen configuration | NGINX Plus can be deployed as a proxy for an FTP server in passive mode |
| | Certificates and keys can be stored in the in‑memory key‑value store; support for opaque session tokens | Enhanced security: secrets cannot be obtained from deployment images or filesystem backups; no personally identifiable information is stored on the client |
| | TCP connection can be closed immediately when the server goes offline | Improved reliability: the client reconnects to a healthy server right away, eliminating delays due to timeout |
| NGINX Unit | Experimental (beta‑level) support for Java servlet containers | Support for the most popular enterprise programming language brings the number of supported languages to seven |
| | Internal routing | Multiple applications can be hosted on the same IP address and port; granular control of the target application |
| NGINX Ingress Controller for Kubernetes | NGINX custom resources | Using a native Kubernetes‑style API simplifies configuration |
| | Additional Prometheus metrics | Quick detection of performance and availability issues with the Ingress Controller itself |
| | Load balancing traffic to external resources | Easier migration to Kubernetes environments |
| | Dedicated Helm chart repository | Easy and effortless deployment of NGINX in Kubernetes environments |

Updates in NGINX Controller 2.0–2.4

We have adopted a SaaS‑like upgrade cadence for NGINX Controller – we release a new version consisting of new features (sometimes minor, sometimes major) and bug fixes on a monthly basis.

Load Balancing Module in NGINX Controller 2.0–2.4

The Load Balancing Module in NGINX Controller enables you to configure, validate, and monitor all your NGINX Plus load balancers at scale across a multi‑cloud environment.

There are two primary enhancements to the Load Balancing Module:

  • Policy‑based approach to configuration management – You can create configuration templates for your NGINX Plus load balancers, including environment‑specific templates – for example, one for production environments and another for test environments. These templates save time, help you achieve scale, and eliminate issues due to misconfiguration. They can be version‑controlled, and you can revert to a ‘golden image’ in case there are any problems.
  • Integration with ServiceNow – You can streamline troubleshooting workflows by forwarding alerts from NGINX Controller to ServiceNow.

For more details about the changes to the Load Balancing Module, see our blog.

API Management Module in NGINX Controller 2.0–2.4

The API Management Module empowers Infrastructure & Operations and DevOps teams to achieve full API lifecycle management including defining, publishing, securing, managing traffic, and monitoring APIs, without compromising performance. Built on an innovative architecture, and using NGINX as the data‑plane component, it is well‑suited to the needs of both traditional applications and modern distributed applications based on microservices.

The API Management Module became generally available in January of 2019. Since then, we’ve been hard at work on usability improvements to the API Definitions interface:

  • Entry point hostnames are color‑coded to indicate the state of the NGINX Plus API gateway configuration:
    • Grey – Config not pushed to the entry point
    • Green – Config pushed and all associated instances are online
    • Yellow – Config pushed but some instances remain offline
    • Red – Config pushed but all instances are offline
  • New card layout for API definitions to easily visualize and access different environments
  • Ability to filter by API name and hostname
  • Warnings when parts of the API definition are not routed to backend services
  • Error responses for unknown API endpoints (404 errors) can be customized

For details on defining APIs with the API Management Module, see our blog.

NGINX Plus R18

NGINX Plus’ flexibility, portability, and seamless integration with CI/CD automation tools help accelerate enterprise adoption of DevOps. NGINX Plus R18 advances this objective by simplifying configuration workflows and enhancing the security and reliability of your applications. Key enhancements in NGINX Plus R18 include:

  • Simplified configuration workflows

    • Dynamic certificate loading – TLS certificates are loaded into memory only when a request is made for a matching hostname. You can save time and effort by automating the upload of certificates and private keys into the key‑value store using the NGINX Plus API. This is particularly useful for deployments with large numbers of certificates or when configuration reloads are very frequent.
    • Support for port ranges for server configurations – You can specify port ranges for a virtual server to listen on, rather than just individual ports. This also allows NGINX Plus to act as a proxy for an FTP server in passive mode (see the sketch after this list).
    • Simplified cluster management – NGINX Plus R15 introduced synchronization of runtime state across a cluster of NGINX Plus instances. This release enhances clustering by enabling the same clustering configuration to be used on all members of the cluster. This is particularly helpful in dynamic environments such as AWS Auto Scaling groups or containerized clusters.
  • Enhanced security

    • Minimizing exposure of certificates – With this release, NGINX Plus can load certificates and the associated private keys directly from the in‑memory key‑value store. Not storing secrets on disk means attackers can no longer obtain copies of them from deployment images or backups of the filesystem.
    • Support for opaque session tokens – NGINX Plus supports OpenID Connect authentication and single sign‑on for backend applications. NGINX Plus R18 adds support for opaque session tokens issued by OpenID Connect. Opaque tokens contain no personally identifiable information about the user so that no sensitive information is stored at the client.
  • Improved reliability

    • Enabling clients to reconnect upon failed health checks – NGINX Plus active health checks continually probe the health of upstream servers to ensure traffic does not get forwarded to servers that are offline. With this release, client connections can also be terminated immediately when a server goes offline for any of several reasons. As client applications then reconnect, they are proxied to a healthy backend server, thereby improving the reliability of your applications.
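The linked blog post below has the full details and official examples. Purely as a rough, hedged sketch (the ports, zone names, variable names, and backend address here are illustrative assumptions, not taken from the release announcement), the port‑range and in‑memory certificate features might be wired up along these lines:

# Sketch: accept passive-mode FTP data connections on a range of ports (stream context)
stream {
    server {
        listen 8000-8010;                       # illustrative port range
        proxy_pass 192.168.0.10:$server_port;   # forward to the matching backend port
    }
}

# Sketch: serve TLS certificates and keys from the in-memory key-value store (http context)
http {
    keyval_zone zone=ssl_cert:1m;
    keyval_zone zone=ssl_key:1m;
    keyval $ssl_server_name $cert zone=ssl_cert;   # look up by requested hostname (SNI)
    keyval $ssl_server_name $key  zone=ssl_key;

    server {
        listen 443 ssl;
        ssl_certificate     data:$cert;   # certificate contents come from the key-value store
        ssl_certificate_key data:$key;    # nothing needs to be written to disk
        # ...
    }
}

Populating the key‑value zones would be done through the NGINX Plus API; that step is outside the scope of this sketch.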

For more details about NGINX Plus R18, see our blog.

NGINX Unit 1.8.0

NGINX Unit is an open source lightweight, flexible, dynamic, polyglot app server that currently supports seven different languages. So far this year we have improved NGINX Unit with:

  • Experimental support for Java servlet containers – According to a report from the Cloud Foundry Foundation, an open source Platform-as-a-service project, Java is the dominant language for enterprise development. Addressing a request from many of our users, we introduced beta‑level support for Java servlet containers in NGINX Unit 1.8.0. Java is a registered trademark of Oracle and/or its affiliates.
  • Internal routing – Internal routing enables granular control over the target application. With this support, you can run many applications on the same IP address and port. NGINX Unit can determine which application to forward requests to based on host, URI, and HTTP method. Sample use cases for internal routing include:
    • POST requests that are handled by a special app, maybe written in a different language.
    • Requests to administrative URLs that need a different security group and fewer application processes than the main application.

For more details about NGINX Unit 1.8.0, see our blog.

NGINX Ingress Controller for Kubernetes 1.5.0

NGINX is the most deployed Ingress controller in Kubernetes environments. The NGINX Ingress Controller for Kubernetes provides advanced load balancing capabilities including session persistence, WebSocket, HTTP/2, and gRPC for complex applications consisting of many microservices. Release 1.5.0 introduces the following capabilities:

  • Defining ingress policies using NGINX custom resources – This is a new approach to configuration that follows the Kubernetes API style so that developers get the same experience as when using the Ingress resource. With this approach, users don’t have to use annotations – all features must now be part of the spec. It also enables us to support RBAC and other capabilities in a scalable and predictable manner.
  • Additional metrics – Provided by a streamlined Prometheus exporter, the new metrics introduced in this release help you quickly detect performance degradation and track the uptime of the NGINX Ingress Controller itself.
  • Support for load balancing traffic to external services – The NGINX Plus Ingress Controller can now load balance requests to destinations outside of the cluster, making it easier to migrate to Kubernetes environments.
  • Dedicated Helm chart repository – Helm is becoming the preferred way to package applications on Kubernetes. Release 1.5.0 of the NGINX Plus Ingress Controller is available via our Helm repo.

For more details about NGINX Ingress Controller for Kubernetes 1.5.0, see our blog.

Continued Investments in NGINX

Looking ahead, now that we are part of F5 Networks we are planning to bolster our investments in open source as well as the NGINX Application Platform. F5 is committed to the NGINX open source technology, developers, and community. We anticipate that the additional investments will inject new vigor into open source initiatives and will enable us to develop open source features, host more open source events, and produce more open source content. Read this blog from Gus Robertson, GM of the NGINX business unit, on F5’s commitment to open source.

We also expect more cross‑pollination across our solutions – we want to leverage the rich security capabilities that F5 offers and embed them into NGINX solutions. F5 solutions will become more agile, flexible, and portable without compromising on reliability, security, and governance. We are excited for what comes next. Follow us on Twitter and LinkedIn to learn about updates to the NGINX Application Platform.

Please attend NGINX Conf 2019 to learn more about our vision for the future with F5. You will hear about new product releases and our roadmap plans as well as have an opportunity to learn from industry luminaries.

The post Catching Up with the NGINX Application Platform: What’s New in 2019 appeared first on NGINX.

]]>
Ask NGINX | June 2019 http://www.bengbeng.net.cn/blog/ask-nginx-june-2019/ Thu, 27 Jun 2019 21:52:49 +0000 http://www.bengbeng.net.cn/?p=62549 Do you have an NGINX Plus offline installer for RHEL/CentOS/Oracle Linux 7.4+? Yes. It takes advantage of the yumdownloader utility. Here’s the procedure: Follow the installation instructions in the NGINX Plus Admin Guide, through Step 5. (In other words, don’t run the yum install command for NGINX Plus itself.) Install yumdownloader, if you haven’t already: Download the latest version of [...]

Read More...

The post Ask NGINX | June 2019 appeared first on NGINX.

]]>
Do you have an NGINX Plus offline installer for RHEL/CentOS/Oracle Linux 7.4+?

Yes. It takes advantage of the yumdownloader utility. Here’s the procedure:

  1. Follow the installation instructions in the NGINX Plus Admin Guide, through Step 5. (In other words, don’t run the yum install command for NGINX Plus itself.)

  2. Install yumdownloader, if you haven’t already:

    # yum install yum-utils
  3. Download the latest version of the NGINX Plus package:

    # yumdownloader nginx-plus
  4. Copy the NGINX Plus rpm package to each target machine and run this command there to install it:

    # rpm -ihv rpm-package-name

For further help, or for information about other operating systems, get in touch with the NGINX support team.

Can I install NGINX Plus on Ubuntu?

Yes, and it’s just one of the many operating systems supported by NGINX Plus. As of this writing, NGINX Plus supports the following versions of Ubuntu:

  • 14.04 LTS (Trusty)
  • 16.04 LTS (Xenial)
  • 18.04 (Bionic)
  • 18.10 (Cosmic)

For installation instructions, see the NGINX Plus Admin Guide. For the complete list of supported operating systems, see NGINX Plus Releases.

What are F5’s plans for investing in NGINX open source projects post‑acquisition?

F5 values the NGINX open source community. We’re committed not just to maintaining, but to increasing, investment in open source initiatives, as well as expanding community engagement and contributing to the open source community in an even more substantial way.

F5 is committed to providing the same level of access to the open source code as before the acquisition.

Will F5 employees be making contributions to NGINX OSS projects?

Yes. F5 employees in the NGINX business unit will continue to contribute to NGINX Open Source, NGINX Unit, and other projects hosted at nginx.org. Many F5 employees already contribute to other third‑party open source projects, such as the F5 repository on GitHub. Along with F5 customers, they also contribute code to the 300,000 user‑strong F5 DevCentral community.

Ask Us!

Got a question for our Ask NGINX series? Leave a comment below or get in touch with our team, and we’ll be happy to help!

The post Ask NGINX | June 2019 appeared first on NGINX.

]]>
OpenTracing for NGINX and NGINX Plus http://www.bengbeng.net.cn/blog/opentracing-nginx-plus/ Mon, 17 Jun 2019 17:02:00 +0000 http://www.bengbeng.net.cn/?p=62494 For all its benefits, a microservices architecture also introduces new complexities. One is the challenge of tracking requests as they are processed, with data flowing among all the microservices that make up the application. A new methodology called distributed (request) tracing has been invented for this purpose, and OpenTracing is a specification and standard set [...]

Read More...

The post OpenTracing for NGINX and NGINX Plus appeared first on NGINX.

]]>
For all its benefits, a microservices architecture also introduces new complexities. One is the challenge of tracking requests as they are processed, with data flowing among all the microservices that make up the application. A new methodology called distributed (request) tracing has been invented for this purpose, and OpenTracing is a specification and standard set of APIs intended to guide design and implementation of distributed tracing tools.

In NGINX Plus Release 18 (R18), we added the NGINX OpenTracing module to our dynamic modules repository (it has been available as a third‑party module on GitHub for a couple of years now). A big advantage of the NGINX OpenTracing module is that by instrumenting NGINX and NGINX Plus for distributed tracing you get tracing data for every proxied application, without having to instrument the applications individually.

In this blog we show how to enable distributed tracing of requests for NGINX or NGINX Plus (for brevity we’ll just refer to NGINX Plus from now on). We provide instructions for two distributed tracing services (tracers, in OpenTracing terminology), Jaeger and Zipkin. (For a list of other tracers, see the OpenTracing documentation.) To illustrate the kind of information provided by tracers, we compare request processing before and after NGINX Plus caching is enabled.

A tracer has two basic components:

  • An agent, which collects tracing data from applications running on the same host. In our case, the “application” is NGINX Plus and the agent is implemented as a plug‑in.
  • A server (also called the collector) which accepts tracing data from one or more agents and displays it in a central UI. You can run the server on the NGINX Plus host or another host, as you choose.

Installing a Tracer Server

The first step is to install and configure the server for the tracer of your choice. We’re providing instructions for Jaeger and Zipkin; adapt them as necessary for other tracers.

Installing the Jaeger Server

We recommend the following method for installing the Jaeger server. You can also download Docker images at the URL specified in Step 1.

  1. Navigate to the Jaeger download page and download the Linux binary archive (at the time of writing, jaeger-1.12.0-linux-amd64.tar.gz).

  2. Move the archive to /usr/bin/jaeger (creating the directory first if necessary), extract it, and run the all‑in‑one binary.

    $ mkdir /usr/bin/jaeger
    $ mv jaeger-1.12.0-linux-amd64.tar.gz /usr/bin/jaeger
    $ cd /usr/bin/jaeger
    $ tar xvzf jaeger-1.12.0-linux-amd64.tar.gz
    $ sudo rm -rf jaeger-1.12.0-linux-amd64.tar.gz
    $ cd jaeger-1.12.0-linux-amd64
    $ ./jaeger-all-in-one
  3. Verify that you can access the Jaeger UI in your browser, at http://Jaeger-server-IP-address:16686/ (16686 is the default port for the Jaeger server).

Installing the Zipkin Server

  1. Download and run a Docker image of Zipkin (we’re using port 9411, the default).

    $ docker run -d -p 9411:9411 openzipkin/zipkin
  2. Verify that you can access the Zipkin UI in your browser, at http://Zipkin-server-IP-address:9411/.

Installing and Configuring a Tracer Plug‑In

Run these commands on the NGINX Plus host to install the plug‑in for either Jaeger or Zipkin.

Installing the Jaeger Plug‑In

  1. Install the Jaeger plug‑in. The following wget command is for x86‑64 Linux systems:

    $ cd /usr/local/lib
    $ wget https://github.com/jaegertracing/jaeger-client-cpp/releases/download/v0.4.2/libjaegertracing_plugin.linux_amd64.so -O /usr/local/lib/libjaegertracing_plugin.so

    Instructions for building the plug‑in from source are available on GitHub.

  2. Create a JSON‑formatted configuration file for the plug‑in, named /etc/jaeger/jaeger-config.json, with the following contents. We’re using the default port for the Jaeger server, 6831:

    {
      "service_name": "nginx",
      "sampler": {
        "type": "const",
        "param": 1
      },
      "reporter": {
        "localAgentHostPort": "Jaeger-server-IP-address:6831"
      }
    }

    For details about the sampler object, see the Jaeger documentation.

Installing the Zipkin Plug‑In

  1. Install the Zipkin plug‑in. The following wget command is for x86‑64 Linux systems:

    $ cd /usr/local/lib
    $ wget -O - https://github.com/rnburn/zipkin-cpp-opentracing/releases/download/v0.5.2/linux-amd64-libzipkin_opentracing_plugin.so.gz | gunzip -c > /usr/local/lib/libzipkin_opentracing_plugin.so
  2. Create a JSON‑formatted configuration file for the plug‑in, named /etc/zipkin/zipkin-config.json, with the following contents. We’re using the default port for the Zipkin server, 9411:

    {
      "service_name": "nginx",
      "collector_host": "Zipkin-server-IP-address",
      "collector_port": 9411
    }

    For details about the configuration objects, see the JSON schema on GitHub.

Configuring NGINX Plus

Perform these instructions on the NGINX Plus host.

  1. Install the NGINX OpenTracing module according to the instructions in the NGINX Plus Admin Guide.

  2. Add the following load_module directive in the main (top‑level) context of the main NGINX Plus configuration file (/etc/nginx/nginx.conf):

    load_module modules/ngx_http_opentracing_module.so;
  3. Add the following directives to the NGINX Plus configuration.

    If you use the conventional configuration scheme, put the directives in a new file called /etc/nginx/conf.d/opentracing.conf. Also verify that the following include directive appears in the http context in /etc/nginx/nginx.conf:

    http {
        include /etc/nginx/conf.d/*.conf;
    }
    • The opentracing_load_tracer directive enables the tracer plug‑in. Uncomment the directive for either Jaeger or Zipkin as appropriate.
    • The opentracing_tag directives make NGINX Plus variables available as OpenTracing tags that appear in the tracer UI.
    • To debug OpenTracing activity, uncomment the log_format and access_log directives. If you want to replace the default NGINX access log and log format with this one, uncomment the directives, then change the three instances of “opentracing” to “main”. Another option is to log OpenTracing activity just for the traffic on port 9001 – uncomment the log_format and access_log directives and move them into the server block.
    • The server block sets up OpenTracing for the sample Ruby application described in the next section.
    # Load a vendor tracer
    #opentracing_load_tracer /usr/local/lib/libjaegertracing_plugin.so 
    #                        /etc/jaeger/jaeger-config.json
    #opentracing_load_tracer /usr/local/lib/libzipkin_opentracing_plugin.so
    #                        /etc/zipkin/zipkin-config.json
    
    # Enable tracing for all requests
    opentracing on;
    
    # Set additional tags that capture the value of NGINX Plus variables
    opentracing_tag bytes_sent $bytes_sent;
    opentracing_tag http_user_agent $http_user_agent;
    opentracing_tag request_time $request_time;
    opentracing_tag upstream_addr $upstream_addr;
    opentracing_tag upstream_bytes_received $upstream_bytes_received;
    opentracing_tag upstream_cache_status $upstream_cache_status;
    opentracing_tag upstream_connect_time $upstream_connect_time;
    opentracing_tag upstream_header_time $upstream_header_time;
    opentracing_tag upstream_queue_time $upstream_queue_time;
    opentracing_tag upstream_response_time $upstream_response_time;
    
    #uncomment for debugging
    # log_format opentracing '$remote_addr - $remote_user [$time_local] "$request" '
    #                        '$status $body_bytes_sent "$http_referer" '
    #                        '"$http_user_agent" "$http_x_forwarded_for" '
    #                        '"$host" sn="$server_name" '
    #                        'rt=$request_time '
    #                        'ua="$upstream_addr" us="$upstream_status" '
    #                        'ut="$upstream_response_time" ul="$upstream_response_length" '
    #                        'cs=$upstream_cache_status' ;
    #access_log /var/log/nginx/opentracing.log opentracing;
     
    server {
        listen 9001;
    
        location / {
            # The operation name used for OpenTracing Spans defaults to the name of the
            # 'location' block, but uncomment this directive to customize it.
            #opentracing_operation_name $uri;
    
            # Propagate the active Span context upstream, so that the trace can be 
            # continued by the backend.
            opentracing_propagate_context;
    
            # Make sure that your Ruby app is listening on port 4567
            proxy_pass http://127.0.0.1:4567;
        }
    }
  4. Validate and reload the NGINX Plus configuration:

    $ nginx -t
    $ nginx -s reload

Setting Up the Sample Ruby App

With the tracer and NGINX Plus configuration in place, we create a sample Ruby app that shows what OpenTracing data looks like. The app lets us measure how much NGINX Plus caching improves response time. When the app receives a request like the following HTTP GET request for /, it waits a random amount of time (between 2 and 5 seconds) before responding.

$ curl http://NGINX-Plus-IP-address:9001/
  1. Install and set up both Ruby and Sinatra (an open source software web application library and domain‑specific language written in Ruby as an alternative to other Ruby web application frameworks).

  2. Create a file called app.rb with the following contents:

    #!/usr/bin/ruby
    
    require 'sinatra'
    
    get '/*' do
        out = "<h1>Ruby simple app</h1>" + "\n"
    
        #Sleep a random time between 2s and 5s
        sleeping_time = rand(4)+2
        sleep(sleeping_time)
        puts "slept for: #{sleeping_time}s."
        out += '<p>some output text</p>' + "\n"
    
        return out
    end
  3. Make app.rb executable and run it:

    $ chmod +x app.rb
    $ ./app.rb

Tracing Response Times Without Caching

We use Jaeger and Zipkin to show how long it takes NGINX Plus to respond to a request when caching is not enabled. For each tracer, we send five requests.

Output from Jaeger Without Caching

Here are the five requests displayed in the Jaeger UI (most recent first):

Here’s the same information on the Ruby app console:

- -> /
slept for: 3s. 
127.0.0.1 - - [07/Jun/2019: 10:50:46 +0000] "GET / HTTP/1.1" 200 49 3.0028
127.0.0.1 - - [07/Jun/2019: 10:50:43 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 2s. 
127.0.0.1 - - [07/Jun/2019: 10:50:56 +0000] "GET / HTTP/1.1" 200 49 2.0018 
127.0.0.1 - - [07/Jun/2019: 10:50:54 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s. 
127.0.0.1 - - [07/Jun/2019: 10:53:16 +0000] "GET / HTTP/1.1" 200 49 3.0029 
127.0.0.1 - - [07/Jun/2019: 10:53:13 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 4s.
127.0.0.1 - - [07/Jun/2019: 10:54:03 +0000] "GET / HTTP/1.1" 200 49 4.0030 
127.0.0.1 - - [07/Jun/2019: 10:53:59 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:54:11 +0000] "GET / HTTP/1.1" 200 49 3.0012
127.0.0.1 - - [07/Jun/2019: 10:54:08 UTC] "GET / HTTP/1.0" 200 49

In the Jaeger UI we click on the first (most recent) request to view details about it, including the values of the NGINX Plus variables we added as tags:

Output from Zipkin Without Caching

Here are another five requests in the Zipkin UI:

The same information on the Ruby app console:

- -> /
slept for: 2s.
127.0.0.1 - - [07/Jun/2019: 10:31:18 +0000] "GET / HTTP/1.1" 200 49 2.0021 
127.0.0.1 - - [07/Jun/2019: 10:31:16 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:31:50 +0000] "GET / HTTP/1.1" 200 49 3.0029 
127.0.0.1 - - [07/Jun/2019: 10:31:47 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:32:08 +0000] "GET / HTTP/1.1" 200 49 3.0026 
127.0.0.1 - - [07/Jun/2019: 10:32:05 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:32:32 +0000] "GET / HTTP/1.1" 200 49 3.0015 
127.0.0.1 - - [07/Jun/2019: 10:32:29 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 5s.
127.0.0.1 - - [07/Jun/2019: 10:32:52 +0000] "GET / HTTP/1.1" 200 49 5.0030 
127.0.0.1 - - [07/Jun/2019: 10:32:47 UTC] "GET / HTTP/1.0" 200 49

In the Zipkin UI we click on the first request to view details about it, including the values of the NGINX Plus variables we added as tags:

Tracing Response Times with Caching

Configuring NGINX Plus Caching

We enable caching by adding directives in the opentracing.conf file we created in Configuring NGINX Plus; a consolidated sketch of the result appears after the steps below.

  1. In the http context, add this proxy_cache_path directive:

    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
  2. In the server block, add the following proxy_cache and proxy_cache_valid directives:

    proxy_cache one;
    proxy_cache_valid any 1m;
  3. Validate and reload the configuration:

    $ nginx -t
    $ nginx -s reload
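Putting the pieces together (and omitting the tracer, tag, and logging directives shown earlier for brevity), the relevant parts of opentracing.conf now look roughly like this:

proxy_cache_path /data/nginx/cache keys_zone=one:10m;

server {
    listen 9001;

    location / {
        # Propagate the active Span context upstream, as before
        opentracing_propagate_context;

        # Serve repeat requests from the cache
        proxy_cache       one;
        proxy_cache_valid any 1m;

        proxy_pass http://127.0.0.1:4567;
    }
}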

Output from Jaeger with Caching

Here’s the Jaeger UI after two requests.

The first response (labeled 13f69db) took 4 seconds. NGINX Plus cached the response, and when the request was repeated about 15 seconds later, the response took less than 2 milliseconds because it came from the NGINX Plus cache.

Looking at the two requests in detail explains the difference in response time. For the first request, upstream_cache_status is MISS, meaning the requested data was not in the cache. The Ruby app added a delay of 4 seconds.

For the second request, upstream_cache_status is HIT. Because the data comes from the cache, the request never reaches the Ruby app, so no delay is added and the response time is under 2 milliseconds. The empty upstream_* values confirm that the upstream server was not involved in this response.

Output from Zipkin with Caching

The display in the Zipkin UI for two requests with caching enabled paints a similar picture:

And again looking at the two requests in detail explains the difference in response time. The response is not cached for the first request (upstream_cache_status is MISS) and the Ruby app (coincidentally) adds the same 4-second delay as in the Jaeger example.

The response has been cached before we make the second request, so upstream_cache_status is HIT.

Conclusion

The NGINX OpenTracing module enables tracing of NGINX Plus requests and responses, and provides access to NGINX Plus variables using OpenTracing tags. Different tracers can also be used with this module.

For more details about the NGINX OpenTracing module, visit the NGINX OpenTracing module repo on GitHub.

To try OpenTracing with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

The post OpenTracing for NGINX and NGINX Plus appeared first on NGINX.

]]>
7 Reasons to Attend NGINX Conf 2019 http://www.bengbeng.net.cn/blog/7-reasons-to-attend-nginx-conf-2019/ Thu, 13 Jun 2019 19:15:05 +0000 http://www.bengbeng.net.cn/?p=62509 About six weeks ago we announced that NGINX Conf 2019 will be taking place in Seattle, WA, from September 10 for two full days of keynotes, breakout sessions, case studies, community networking, and so much more. We hope to see you there, but if you need to convince yourself (or your manager!) of the benefits of attending, [...]

Read More...

The post 7 Reasons to Attend NGINX Conf 2019 appeared first on NGINX.

]]>

About six weeks ago we announced that NGINX Conf 2019 will be taking place in Seattle, WA, from September 10 for two full days of keynotes, breakout sessions, case studies, community networking, and so much more. We hope to see you there, but if you need to convince yourself (or your manager!) of the benefits of attending, then read on.

NGINX Conf is the highlight of our calendar, when we get a chance to meet with businesses that are at every point on the journey to digital transformation – from taking the first steps towards modernizing hardware‑based delivery of legacy applications to implementing service mesh for advanced microservices architectures. NGINX Conf is the focal point for the NGINX community, our partners, and most importantly, you.

Why You (and Your Team) Should Attend NGINX Conf 2019

Looking to make the business case to attend? At NGINX Conf 2019, you can connect with other attendees and members of the NGINX and F5 family, upskill at NGINX training sessions, and gain insights into optimized web performance from speakers and industry experts. Here are seven reasons why you should attend this year’s event for all things NGINX:

  • 1. Learn more about our vision for the future with F5.

    Now that NGINX has become part of the F5 family, learn more about what we have in store for our ADC and WAF solutions, including our plans and joint roadmaps for these solutions. Discover what’s in store for NGINX Open Source, customers, and partners as we become part of F5 Networks.

  • 2. Stay on top of latest and greatest product updates.

    Hear about new products and major updates to existing products, including the NGINX Controller API Management Module and Load Balancing Module, from our product gurus. Learn about various use cases and best practices to help you get the most from NGINX.

  • 3. Build the foundation for microservices.

    Be among the first to hear about new capabilities from NGINX that simplify the supporting infrastructure for your microservices, allowing for easy configuration, deployment, and communication between your applications.

  • 4. Save more than 25% on hands‑on training.

    Learn from both experts and experienced peers with hands‑on training sessions, where various specialists will answer all your technical questions. From developing NGINX modules to advanced load balancing, our training sessions and workshops ensure that you are equipped to unlock the full potential of NGINX. Sign up and take advantage of our full‑day training for just $550, a $200 discount off the regular price!

  • 5. Learn about the latest open source innovations.

    NGINX is proudly committed to driving our open source innovation further every day. Be among the first to get NGINX news and technical details on some of the world’s most popular open source projects.

  • 6. Directly influence the NGINX product strategy.

    Meet with the NGINX leadership team, along with our partners and esteemed experts, and provide feedback on the tools and capabilities designed to solve your toughest digital problems.

  • 7. Accelerate ROI on your NGINX investments.

    Learn from NGINX technical experts, fellow customers, and community members how to optimize NGINX deployments and simplify the technology stack for both traditional applications and distributed, microservices‑based ones.

With opportunities to upskill, learn, and network with experts and community leaders, NGINX Conf is an eagerly awaited highlight on the DevOps and open source event calendar. Register today!

The post 7 Reasons to Attend NGINX Conf 2019 appeared first on NGINX.

]]>
A Regular Expression Tester for NGINX and NGINX Plus http://www.bengbeng.net.cn/blog/regular-expression-tester-nginx/ Tue, 11 Jun 2019 21:18:38 +0000 http://www.bengbeng.net.cn/?p=62483 While working on a regular expression (regex) to use with NGINX, I got an idea for a way to easily test a regex from within an actual NGINX configuration. (The regex tester works just the same for NGINX Plus, but for ease of reading I’ll refer to NGINX.) Support for regular expressions is one of the [...]

Read More...

The post A Regular Expression Tester for NGINX and NGINX Plus appeared first on NGINX.

]]>
While working on a regular expression (regex) to use with NGINX, I got an idea for a way to easily test a regex from within an actual NGINX configuration. (The regex tester works just the same for NGINX Plus, but for ease of reading I’ll refer to NGINX.)

Support for regular expressions is one of the powerful features of NGINX, but regexes can be complex and difficult to get right, especially if you don’t work with them regularly. NGINX allows regexes in multiple parts of a configuration, for example locations, maps, rewrites, and server names. The tester described here is for regexes in locations and maps.

There are other free online regex testers that are good for most regexes, but NGINX uses some non‑standard shortcuts optimized for web applications. For example, you don’t have to escape the forward slash (/) in a URI as you do in a standard regex. Also, when using a regex in a map, you specify what value to set based on a match. With other regex testers you might have to modify the regex or, in the case of a map, infer what value will be set. In addition, it is always good to be able to test a regex with the actual regex engine in the actual environment.

Overview

This post assumes a basic understanding of NGINX and regular expressions. NGINX uses Perl Compatible Regular Expressions (PCRE).

Before we get into the details of the regex tester, let’s first discuss how regexes can be used in NGINX locations and maps.

Locations

NGINX regex locations are of the form:

location regex {
    #...
}

For example, a location block with the following regex handles all PHP requests with a URI ending with myapp/filename.php, such as /test/myapp/hello.php and /myapp/hello.php. The asterisk after the tilde (~*) makes the match case insensitive.

location ~* /myapp/.+\.php$ {
    #...
}

NGINX and the regex tester support positional capture groups in location blocks. In the following example, the first group captures everything before the PHP file name and the second captures the PHP filename:

location ~* (.*/myapp)/(.+\.php)$ {
    #...
}

For the URI /myapp/hello.php, the variable $1 is set to /myapp and $2 is set to hello.php.

NGINX also supports named capture groups (but note that the regex tester does not):

location ~* (?<begin>.*myapp)/(?<end>.+\.php)$ {
    #...
}

In this case the variable $begin is set to /myapp and $end is set to hello.php.

Maps

NGINX maps that use regular expressions are of the form:

map variable-from-request variable-to-set {
    regex1 value-to-set-if-match;
    regex2 value-to-set-if-match;
    #...
    regexN value-to-set-if-match;
    default value-to-set-if-no-match;
}

For example, this map block sets the variable $isphp to 1 if the URI (as recorded in the $uri variable) ends in .php, and 0 if it does not (the match is case sensitive):

map $uri $isphp {
    ~\.php$ 1;
    default 0;
}

For maps, NGINX and the regex tester support both positional and named capture groups.

For example, these maps both set the variable $fileext to the value of the file extension, which is also captured as $1 in the first block and $ext in the second:

map $uri $fileext {
    ~*.+\.(.+)$  $1;
    default      '';
}

Or:

map $uri $fileext {
    ~*.+\.(?<ext>.+)$  $ext;
    default            '';
}

The Regular Expression Tester

The regex tester is implemented in a Docker container with NGINX and NGINX Unit installed. NGINX Unit serves two variations of a PHP page, one for regexes in location blocks and the other for regexes in map blocks. The two pages prompt the user for different inputs:

  • Location page:

    • The regex
    • Case sensitivity
    • The URI
  • Map page:

    • The regex
    • Case sensitivity
    • The value to test (the value of the variable that is the first parameter to the map directive)
    • The value to set in the variable specified as the second parameter to the map directive

After providing the information, the user clicks the Test button. The tester generates the necessary NGINX configuration file, the configuration is reloaded, and a request is sent to test the regex. The results are then displayed and indicate whether a match was found. If so, on the location page the values of the capture groups are displayed, and on the map page the value set by the map is reported.
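The configuration the tester actually generates lives in the GitHub repo linked below. Purely as an illustration of the idea (this is not the repo's file, and the port and header names are assumptions), a location test could be expressed with plain NGINX directives along these lines:

# Hypothetical sketch only: test a location regex and echo the capture groups back
server {
    listen 8080;

    location ~* (.*myapp)/(.+\.php)$ {
        add_header X-Capture-1 $1;     # first positional capture group
        add_header X-Capture-2 $2;     # second positional capture group
        return 200 "match\n";
    }

    location / {
        return 200 "no match\n";
    }
}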

Location Page Example

This example shows the results of a case‑insensitive test of the regex (.*myapp)/(.+\.php)$ against the URI /myapp/hello.php:

 

Map Page Example

This example shows the results of a case‑insensitive test of the regex .+\.(?<ext>.*)$ against the value /myapp/hello.php, with the named capture group $ext as the value to set:

 

Conclusion

You can see that the NGINX configuration is quite short and simple. The hard work is done by the PHP page that generates the necessary NGINX configuration file based on the values entered by the user, reloads NGINX, sends a request to NGINX, and displays the results.

You can try out the regex tester for yourself: all the code is available at our GitHub repo (https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester).

To make it easy to get the regex tester up and running, all the necessary files are included. To build the Docker image and start the container, simply run:

$ docker-compose up -d

Then point your browser to http://Docker-host/regextester.php.

I hope you find the tester helpful when working with regular expressions and that it gives you a glimpse of some of the power, flexibility, and simplicity of NGINX.

To try out the regex tester with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

The post A Regular Expression Tester for NGINX and NGINX Plus appeared first on NGINX.

]]>
Facing the Hordes on Black Friday and Cyber Monday http://www.bengbeng.net.cn/blog/facing-hordes-black-friday-cyber-monday/ Thu, 06 Jun 2019 18:33:13 +0000 /?p=1447 For many retailers, the holiday shopping season is a time of both anticipation and trepidation. While most people can relax and celebrate with friends and family, retailers – both brick-and-mortar and online – brace themselves for the onslaught that begins on Black Friday and Cyber Monday. The challenges confronting both types of retailers are very similar. Facing massive visitor [...]

Read More...

The post Facing the Hordes on Black Friday and Cyber Monday appeared first on NGINX.

]]>
For many retailers, the holiday shopping season is a time of both anticipation and trepidation. While most people can relax and celebrate with friends and family, retailers – both brick-and-mortar and online – brace themselves for the onslaught that begins on Black Friday and Cyber Monday.

The challenges confronting both types of retailers are very similar. Facing massive visitor traffic to their stores, they must ensure every potential customer is welcomed in and given space to browse, and that the staff and resources are on hand to help them complete their transaction. Shoppers have plenty of choices, but are pressed for time on these critical days. If one store appears overcrowded and slow to serve them, it’s easy to move next door.

In this article, we share four ways you can use NGINX Open Source and NGINX Plus to prepare for massive increases in customer traffic.

NGINX Powers Most of the World’s Million Busiest Websites

NGINX began as a web acceleration solution, and is now used by most of the world’s million busiest websites to load balance and scale. A great analogy is that NGINX and NGINX Plus act as an ‘Internet shock absorber’ – they enable your website to run faster and more smoothly across rough terrain.

Step 1: Deploy the Shock Absorber

Think of NGINX and NGINX Plus as a gatekeeper, managing traffic at the front of your store. They gently queue and admit each shopper (HTTP request), transforming the chaotic scrum on the sidewalk into a smooth, orderly procession in the store. Visitors are each given their own space and gently routed to the least‑busy area of the store, ensuring that traffic is distributed evenly and all resources are equally used.

NGINX and NGINX Plus act as a ‘shock absorber’, transforming a chaotic flood of traffic into an orderly procession and load balancing each request to the appropriate server

NGINX and NGINX Plus employ a range of out‑of‑the‑box techniques to achieve this. The HTTP offload feature buffers slow HTTP requests and admits them only when they are ready. Transactions complete much more quickly when they originate from NGINX or NGINX Plus (on the fast local network) than when they originate from a distant client. Optimized use of HTTP keepalive connections and careful load balancing result in optimized traffic distribution and maximally efficient use of server resources.
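As a minimal sketch of this pattern (the backend addresses and port below are assumptions for illustration, not a prescribed layout), a basic configuration that spreads requests across the least‑busy servers and reuses upstream connections might look like this:

upstream store_backends {
    least_conn;                  # route each request to the least-busy server
    server 10.0.0.11:8080;       # hypothetical application servers
    server 10.0.0.12:8080;
    keepalive 32;                # reuse connections to the backends

}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # needed so upstream keepalive connections are reused
        proxy_pass http://store_backends;
    }
}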

Step 2: Employ Caching

Click‑and‑collect, online reservations for in‑store pickup and even customer checkout (for example, Apple’s EasyPay) reduce the time that customers need to spend in a physical store and increase the likelihood of a successful transaction.

Content caching with NGINX and NGINX Plus has a similar effect for web traffic. Common HTTP responses can be stored automatically at the NGINX or NGINX Plus edge; when several site visitors try to access the same web page or resource, NGINX or NGINX Plus can respond immediately and do not need to forward each request to an upstream application server.

With content caching, NGINX and NGINX Plus store a copy of commonly requested resources, so customers are served more quickly and servers handle less load

Depending on your application, content caching can reduce the volume of internal traffic by a factor of up to 100, reducing the hardware capacity you need to serve your app.
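A minimal caching sketch, assuming the store_backends upstream group from the previous sketch and illustrative cache sizes and validity times, might look like this:

proxy_cache_path /var/cache/nginx keys_zone=store_cache:10m max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache store_cache;
        proxy_cache_valid 200 301 10m;                   # keep successful responses for 10 minutes
        proxy_cache_use_stale error timeout updating;    # serve stale content if the backend struggles
        proxy_pass http://store_backends;
    }
}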

Step 3: Control Visitor Traffic

At the busiest times, a gatekeeper in front of your store might need to restrict the traffic coming through the door. Perhaps it’s a safety issue – the store is dangerously overcrowded – or perhaps you’ve set up ‘VIP‑only’ shopping hours when only customers with the right dress code, store card, or invitation can enter.

Similar measures are sometimes necessary for web applications. You limit traffic to ensure that each active request has access to the resources it needs, without overwhelming your servers. NGINX and NGINX Plus offer a range of methods for enforcing such limits to protect your applications.

NGINX and NGINX Plus control the traffic to your application servers, applying entry control, queuing, and concurrency control to prevent your store being overwhelmed

Concurrency limits restrict the number of concurrent requests forwarded to each server, to match the limited number of worker threads or processes in each. Rate limits apply a per‑second or per‑minute restriction on client activities, which protects services such as a payment gateway or complex search.

You can differentiate between different types of clients if necessary. Perhaps the delivery area for your store does not extend to Asia, or you want to prioritize users who have items in their shopping baskets. With NGINX and NGINX Plus, you can use cookies, geolocation data, and other parameters to control how rate limits are applied.
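Here is a hedged sketch of these controls; the rates, connection caps, paths, and backend address are illustrative assumptions, and the queue directive is available in NGINX Plus only:

# One request-rate bucket per client IP address
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

upstream payment_gateway {
    server 10.0.0.21:8080 max_conns=100;   # cap concurrent requests per backend server
    queue 50 timeout=30s;                  # NGINX Plus: briefly queue requests above the cap
}

server {
    listen 80;

    location /checkout/ {
        limit_req zone=per_ip burst=20 nodelay;   # absorb short bursts, reject sustained floods
        proxy_pass http://payment_gateway;
    }
}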

Step 4: Specialize Your Store

Large department stores and supermarkets are experts in maximizing revenue through clever store design. They partition the floor space into themes, use concession stores, and strategically place impulse buys throughout the store. They can respond quickly to changes in customer demand or to new promotions, reconfiguring their layout in hours.

This partitioning can also be applied to website infrastructure. Traditional application architecture with its three monolithic tiers (web, app, and database) has had its day. It’s proven to be inflexible, hard to update, and difficult to scale. Modern web architectures are settling around a much more distributed architecture made up of composable HTTP‑based services that can be scaled and updated independently.

With NGINX Plus, you can break away from a linear procession of departments (tiers) to a more flexible architecture that improves performance, scalability, and manageability

This approach requires a sophisticated traffic management solution to route and control traffic, and offload tasks where possible to consolidate and simplify. This is where NGINX Plus excels – it’s a combination load balancer, cache, and web server in a single high‑performance software solution.

Not only can NGINX and NGINX Plus route and load balance traffic like an application delivery controller (ADC), they can also interface directly with application servers using protocols such as FastCGI and uwsgi, and offload lightweight tasks, such as serving of static content, from your backend applications.
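A sketch of that offloading (the paths and PHP‑FPM socket below are assumptions for illustration) might look like this:

server {
    listen 80;

    # Serve static assets directly from NGINX, offloading the application servers
    location /static/ {
        root /var/www/store;
        expires 1h;
    }

    # Hand dynamic requests to a PHP-FPM backend over FastCGI
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/store$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}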

Deploying NGINX and NGINX Plus to Improve Performance

NGINX and NGINX Plus are software reverse proxies, which makes initial deployment simple and enables a gradual migration of your application architecture to the distributed approach described above.

NGINX is the market‑leading open source web accelerator and web server used by most of the world’s million busiest websites. NGINX Plus is the commercially supported solution with extended load balancing, monitoring, and management features. To find out more, compare NGINX and NGINX Plus features or start a free 30-day trial.

For configuring, validating, and troubleshooting NGINX Plus instances at scale – both on premises and in a multi‑cloud environment – use the NGINX Controller Load Balancing Module. A free 30‑day trial of NGINX Controller includes NGINX Plus.

Use NGINX and NGINX Plus to deliver your web properties this shopping season and relax, confident in the knowledge that you’re ready for whatever the market sends your way.

The post Facing the Hordes on Black Friday and Cyber Monday appeared first on NGINX.

]]>