address feedback on readme · nginx/kubernetes-ingress@a01acf5 · GitHub
Commit a01acf5

address feedback on readme
1 parent a3c3aa8 commit a01acf5

File tree: examples/custom-resources

2 files changed: +52 -52 lines changed

examples/custom-resources/rate-limit-tiered-apikey/README.md

Lines changed: 27 additions & 27 deletions
@@ -8,28 +8,28 @@ limit Policies, grouped in a tier, using the API Key client name as the key to t
 ## Prerequisites
 
 1. Follow the [installation](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/)
-instructions to deploy the Ingress Controller.
-1. Save the public IP address of the Ingress Controller into a shell variable:
+instructions to deploy NGINX Ingress Controller.
+2. Save the public IP address of NGINX Ingress Controller into a shell variable:
 
-```console
-IC_IP=XXX.YYY.ZZZ.III
-```
-
-1. Save the HTTP port of the Ingress Controller into a shell variable:
-
-```console
-IC_HTTP_PORT=<port number>
-```
+```shell
+IC_IP=XXX.YYY.ZZZ.III
+```
+<!-- markdownlint-disable MD029 -->
+3. Save the HTTP port of the Ingress Controller into a shell variable:
+<!-- markdownlint-enable MD029 -->
+```shell
+IC_HTTP_PORT=<port number>
+```
 
-## Step 1 - Deploy a Web Application
+## Deploy a web application
 
 Create the application deployments and services:
 
-```console
+```shell
 kubectl apply -f coffee.yaml
 ```
 
-## Step 2 - Deploy the Rate Limit Policies
+## Deploy the Rate Limit Policies
 
 In this step, we create three Policies:
 
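The prerequisite steps in the hunk above collect an address and a port into shell variables. As a minimal sketch with hypothetical placeholder values (a real cluster supplies its own), they combine into the target the later `curl --resolve` commands hit:

```shell
# Hypothetical placeholder values; a real cluster provides the actual address/port
IC_IP=203.0.113.10
IC_HTTP_PORT=80

# The test loops later in this README resolve cafe.example.com to $IC_IP
url="http://cafe.example.com:$IC_HTTP_PORT/coffee"
echo "$url"   # → http://cafe.example.com:80/coffee
```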
@@ -41,37 +41,37 @@ The `rate-limit-apikey-basic` Policy is also the default policy if the API Key c
 
 Create the policies:
 
-```console
+```shell
 kubectl apply -f api-key-policy.yaml
 kubectl apply -f rate-limits.yaml
 ```
 
-## Step 3 - Deploy the API Key Auth Secret
+## Deploy the API Key Secret
 
 Create a secret of type `nginx.org/apikey` with the name `api-key-client-secret` that will be used for authorization on the server level.
 
 This secret will contain a mapping of client names to base64 encoded API Keys.
 
-```console
+```shell
 kubectl apply -f api-key-secret.yaml
 ```
 
-## Step 4 - Configure Load Balancing
+## Configure Load Balancing
 
 Create a VirtualServer resource for the web application:
 
-```console
+```shell
 kubectl apply -f cafe-virtual-server.yaml
 ```
 
 Note that the VirtualServer references the policies `api-key-policy`, `rate-limit-apikey-premium` & `rate-limit-apikey-basic` created in Step 2.
 
-## Step 5 - Test the Premium Configuration
+## Test the premium configuration
 
 Let's test the configuration. If you access the application with an API Key in an expected header at a rate that exceeds 5 requests per second, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP -H "X-header-name: client1premium" http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.1;
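The API Key Secret step in the hunk above relies on the Secret's `data` values being base64 encoded. A sketch of producing one such value (the key string here is hypothetical, not taken from the example's yaml):

```shell
# Hypothetical API key; Kubernetes Secret `data` values must be base64 encoded.
# printf rather than echo avoids encoding a trailing newline into the key.
printf '%s' 'client1-premium-key' | base64
```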
@@ -95,14 +95,14 @@ Server name: coffee-dc88fc766-zr7f8
 
 > Note: The command result is truncated for the clarity of the example.
 
-## Step 6 - Test the Basic Configuration
+## Test the basic configuration
 
-This test is similar to Step 5, however, this time we will be setting the API Key in the header to a value that maps to the `client1-basic` client name.
+This test is similar to the previous step, however, this time we will be setting the API Key in the header to a value that maps to the `client1-basic` client name.
 
 Let's test the configuration. If you access the application at a rate that exceeds 1 request per second, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP -H "X-header-name: client1basic" http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.5;
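A sanity check on the arithmetic in the hunk above: with `sleep 0.5` the loop issues roughly 2 requests per second against the 1 request per second limit, which is why about half the responses are rejected (ignoring curl's own latency):

```shell
# Approximate request rate implied by the loop's sleep interval
interval=0.5
rate=$(awk -v i="$interval" 'BEGIN { printf "%g", 1/i }')
echo "~$rate requests/second against a 1 r/s limit"   # → ~2 requests/second against a 1 r/s limit
```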
@@ -126,14 +126,14 @@ Server name: coffee-dc88fc766-zr7f8
 
 > Note: The command result is truncated for the clarity of the example.
 
-## Step 7 - Test the default Configuration
+## Test the default configuration
 
-This test is similar to Step 5 & 6, however, this time we will setting the API Key in the header to a value that maps to the `random` client name, which matches neither of the regex patterns configured in the Policies. However, we will still be seeing the default `rate-limit-apikey-basic` Policy applied.
+This test is similar to the previous two steps, however, this time we will be setting the API Key in the header to a value that maps to the `random` client name, which matches neither of the regex patterns configured in the Policies. However, we will still be seeing the default `rate-limit-apikey-basic` Policy applied.
 
 Let's test the configuration. If you access the application at a rate that exceeds 1 request per second, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP -H "X-header-name: random" http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.5;
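The three tests in this README reduce to a mapping from API Key client name to rate-limit tier. The real regex patterns live in the example's policy yaml; the hypothetical patterns below merely reproduce the documented behaviour, including `random` falling through to the default `rate-limit-apikey-basic` Policy:

```shell
# Hypothetical tier-matching patterns; the authoritative ones are in the
# example's rate-limit policy yaml, not reproduced here.
for name in client1premium client1basic random; do
  if printf '%s\n' "$name" | grep -qE 'premium$'; then
    tier='rate-limit-apikey-premium'
  elif printf '%s\n' "$name" | grep -qE 'basic$'; then
    tier='rate-limit-apikey-basic'
  else
    tier='rate-limit-apikey-basic (default)'
  fi
  echo "$name -> $tier"
done
```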

examples/custom-resources/rate-limit-tiered-request-method/README.md

Lines changed: 25 additions & 25 deletions
@@ -6,28 +6,28 @@ limit Policies, grouped in a tier, using the client IP address as the key to the
 ## Prerequisites
 
 1. Follow the [installation](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/)
-instructions to deploy the Ingress Controller.
-1. Save the public IP address of the Ingress Controller into a shell variable:
+instructions to deploy NGINX Ingress Controller.
+2. Save the public IP address of NGINX Ingress Controller into a shell variable:
 
-```console
-IC_IP=XXX.YYY.ZZZ.III
-```
-
-1. Save the HTTP port of the Ingress Controller into a shell variable:
-
-```console
-IC_HTTP_PORT=<port number>
-```
+```shell
+IC_IP=XXX.YYY.ZZZ.III
+```
+<!-- markdownlint-disable MD029 -->
+3. Save the HTTP port of the Ingress Controller into a shell variable:
+<!-- markdownlint-enable MD029 -->
+```shell
+IC_HTTP_PORT=<port number>
+```
 
-## Step 1 - Deploy a Web Application
+## Deploy a web application
 
 Create the application deployments and services:
 
-```console
+```shell
 kubectl apply -f coffee.yaml
 ```
 
-## Step 2 - Deploy the Rate Limit Policies
+## Deploy the Rate Limit Policies
 
 In this step, we create two Policies:
 
@@ -38,26 +38,26 @@ The `rate-limit-request-method-put-post-patch-delete` Policy is also the default
 
 Create the policies:
 
-```console
+```shell
 kubectl apply -f rate-limits.yaml
 ```
 
-## Step 3 - Configure Load Balancing
+## Configure Load Balancing
 
 Create a VirtualServer resource for the web application:
 
-```console
+```shell
 kubectl apply -f cafe-virtual-server.yaml
 ```
 
 Note that the VirtualServer references the policies `rate-limit-request-method-get-head` & `rate-limit-request-method-put-post-patch-delete` created in Step 2.
 
-## Step 4 - Test the Configuration
+## Test the configuration
 
 Let's test the configuration. If you access the application at a rate that exceeds 5 requests per second with a `GET` request method, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.1
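To quantify the rejections rather than eyeball the scrolling output, the status codes can be captured and counted; NGINX rejects rate-limited requests with 503 by default. The sketch below runs over simulated codes, standing in for a real capture via curl's `-s -o /dev/null -w '%{http_code}\n'`:

```shell
# Simulated status codes standing in for a captured run of the loop above
printf '200\n503\n200\n503\n503\n' | grep -c '^503$'   # → 3
```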
@@ -81,14 +81,14 @@ Server name: coffee-dc88fc766-zr7f8
 
 > Note: The command result is truncated for the clarity of the example.
 
-## Step 5 - Test the Request types that update a resource
+## Test the request types that update a resource
 
-This test is similar to Step 4, however, this time we will be using the `POST` request method.
+This test is similar to the previous step, however, this time we will be using the `POST` request method.
 
 Let's test the configuration. If you access the application at a rate that exceeds 1 request per second, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl -XPOST --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.5;
@@ -112,15 +112,15 @@ Server name: coffee-dc88fc766-zr7f8
 
 > Note: The command result is truncated for the clarity of the example.
 
-## Step 6 - Test the default Configuration
+## Test the default configuration
 
-This test is similar to Step 4 & 5, however, this time we will not be using a configured request method, however we
+This test is similar to the previous two steps, however, this time we will not be using a configured request method, but we
 will still be seeing the default `rate-limit-request-method-put-post-patch-delete` Policy applied.
 
 Let's test the configuration. If you access the application at a rate that exceeds 1 request per second, NGINX will
 start rejecting your requests:
 
-```console
+```shell
 while true; do
 curl -XOPTIONS --resolve cafe.example.com:$IC_HTTP_PORT:$IC_IP http://cafe.example.com:$IC_HTTP_PORT/coffee;
 sleep 0.5;
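Taken together, the tests in this README exercise a simple method-to-policy mapping. The policy names come from the diff above; the branch logic is only a sketch of the documented behaviour, with unmatched methods such as `OPTIONS` falling back to the default policy:

```shell
# GET/HEAD map to the 5 r/s policy; the write methods map to the 1 r/s policy,
# which is also the documented default for any other method.
for m in GET HEAD POST OPTIONS; do
  case "$m" in
    GET|HEAD)              echo "$m -> rate-limit-request-method-get-head" ;;
    PUT|POST|PATCH|DELETE) echo "$m -> rate-limit-request-method-put-post-patch-delete" ;;
    *)                     echo "$m -> rate-limit-request-method-put-post-patch-delete (default)" ;;
  esac
done
```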

0 commit comments