Bug description
If a pod exposes a port (call it port-1) but the Service does NOT declare port-1, then a single request to POD_IP:port-1 produces an enormous number of log entries: the request loops between the sidecar and iptables, and the loop eventually exhausts all 1024 connections in Envoy's connection pool.
For example, with this Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-svc
spec:
  type: ClusterIP
  selector:
    app: ubuntu
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http-ubuntu
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ubuntu
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu:xxx
        ports:
        - containerPort: 80
          name: http-ubuntu1
        - containerPort: 2334
          name: http-ubuntu2
The pod exposes ports 80 and 2334, but the Service only declares port 80. If I send a single request from another service to POD_IP:2334, the sidecar's Envoy produces a flood of log entries:
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47804 10.2.245.146:2334 10.2.245.146:47802 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47806 10.2.245.146:2334 10.2.245.146:47804 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47808 10.2.245.146:2334 10.2.245.146:47806 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47810 10.2.245.146:2334 10.2.245.146:47808 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47812 10.2.245.146:2334 10.2.245.146:47810 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47814 10.2.245.146:2334 10.2.245.146:47812 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47816 10.2.245.146:2334 10.2.245.146:47814 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47818 10.2.245.146:2334 10.2.245.146:47816 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47820 10.2.245.146:2334 10.2.245.146:47818 -
[2019-05-30T05:26:48.098Z] "- - -" 0 - "-" 81 0 89 - "-" "-" "-" "-" "10.2.245.146:2334" PassthroughCluster 10.2.245.146:47822 10.2.245.146:2334 10.2.245.146:47820 -
...
...
...
which indicates that something is looping.
My guess at the reason: this commit and this commit added a default passthrough cluster, so unmatched traffic falls through. In this case the request to POD_IP:2334 falls through and goes back out through iptables; because its destination IP is the pod's own IP, it traverses the OUTPUT table and matches this chain:
Chain ISTIO_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
2049 123K ISTIO_REDIRECT all -- * lo 0.0.0.0/0 !127.0.0.1
The packet is therefore redirected back to Envoy, falls through again, goes back through iptables, and so on: hence the loop.
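If I read the iptables setup correctly, the redirect shown above corresponds roughly to the rule sketched below (an assumption on my part; the exact rule that istio-init installs may differ between versions): traffic leaving on lo that is not destined to 127.0.0.1 is sent to ISTIO_REDIRECT, which is exactly what happens to the fallthrough traffic addressed to the pod's own IP.
# rough sketch of the suspected rule (version-dependent, not copied from my cluster):
# packets going out on the loopback interface whose destination is not 127.0.0.1
# are jumped to ISTIO_REDIRECT, i.e. sent back into the sidecar
iptables -t nat -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -j ISTIO_REDIRECT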
Expected behavior
I think the correct behavior would be to return a 404 or 502, but perhaps there is a better answer.
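One workaround that seems to avoid the loop in my environment (a sketch only, not a fix for the underlying fallthrough behavior): also declare the second container port on the Service, so that the sidecar gets inbound configuration for POD_IP:2334 instead of sending the request to PassthroughCluster:
# sketch of the workaround: declare port 2334 on the Service as well
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-svc
spec:
  type: ClusterIP
  selector:
    app: ubuntu
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http-ubuntu
  - port: 2334
    targetPort: 2334
    protocol: TCP
    name: http-ubuntu2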
Steps to reproduce the bug
As shown above; a rough sketch of the commands I use follows.
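(Pod and container names below are placeholders; 10.2.245.146 is the ubuntu pod's IP from the logs above.)
# send a single request from any other in-mesh pod to the container port
# that is NOT declared on the Service
kubectl exec -it <other-pod> -c <app-container> -- curl -v http://10.2.245.146:2334/
# then watch the ubuntu pod's sidecar log flood with PassthroughCluster entries
kubectl logs -f <ubuntu-pod> -c istio-proxy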
Version (include the output of istioctl version --remote and kubectl version)
istio 1.1.7
kubernetes 1.12.7
How was Istio installed?
helm template ./install/kubernetes/helm/istio \
--set global.mtls.enabled=false \
--set global.enableTracing=false \
--set global.proxy.accessLogFile="/dev/stdout" \
--set gateways.istio-ingressgateway.type=NodePort \
--set gateways.istio-egressgateway.enabled=false \
--set grafana.enabled=true \
--set prometheus.enabled=false \
--name istio \
--namespace istio-system \
> ./istio.yaml
Environment where bug was observed (cloud vendor, OS, etc)
AWS, Ubuntu 16.04