It’s time to migrate the demo app to the new policies.
Each type of policy can be migrated separately; for example, only once we have completely finished with the Timeouts will we move on to the next policy type, CircuitBreakers.
It's possible to migrate all policies at once, but small batches are preferable because they're easy to reverse.
The general migration process consists of roughly four steps:
- Create a new targetRef policy as a replacement for the existing source/destination policy (don't forget about default policies that might not be stored in your source control).
  The corresponding new policy type can be found in the table.
  Deploy the policy in shadow mode to avoid any traffic disruption.
- Using the Inspect API, review the list of changes that the new policy is going to introduce.
- Remove the `kuma.io/effect: shadow` label so that the policy is applied in normal mode.
- Observe metrics, traces and logs. If something goes wrong, switch the policy back to shadow mode (see the sketch after this list) and return to step 2.
  If everything is fine, remove the old policies.
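As a concrete sketch of step 4, flipping a policy back into shadow mode is just a matter of re-adding the label. The resource kind, name and namespace below are only illustrative; substitute the policy you're actually migrating:

```sh
# Put an already-applied policy back into shadow mode
kubectl label -n kong-mesh-system meshtrafficpermission app-to-redis kuma.io/effect=shadow --overwrite

# Once the configuration diff looks right again, remove the label to re-apply it for real
kubectl label -n kong-mesh-system meshtrafficpermission app-to-redis kuma.io/effect-
```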
The order of migrating policies generally doesn’t matter, except for the TrafficRoute policy,
which should be the last one deleted when removing old policies.
This is because many old policies, like Timeout and CircuitBreaker, depend on TrafficRoutes to function correctly.
- Create a replacement policy for the `app-to-redis` TrafficPermission and apply it with the `kuma.io/effect: shadow` label:
```sh
echo 'apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kong-mesh-system
  name: app-to-redis
  labels:
    kuma.io/mesh: default
    kuma.io/effect: shadow
spec:
  targetRef:
    kind: MeshService
    name: redis_kuma-demo_svc_6379
  from:
    - targetRef:
        kind: MeshSubset
        tags:
          kuma.io/service: demo-app_kuma-demo_svc_5000
      default:
        action: Allow' | kubectl apply -f -
```
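Because of the shadow label the policy has no effect on traffic yet. Before inspecting the diff, you can confirm it was stored with the expected labels using plain kubectl:

```sh
kubectl get meshtrafficpermission app-to-redis -n kong-mesh-system --show-labels
```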
- Check the list of changes for the `redis_kuma-demo_svc_6379` `kuma.io/service` in the Envoy configuration using `kumactl`, `jq` and `jd`:
```sh
DATAPLANE_NAME=$(kumactl get dataplanes -ojson | jq -r '.items[] | select(.networking.inbound[0].tags["kuma.io/service"] == "redis_kuma-demo_svc_6379") | .name')
kumactl inspect dataplane ${DATAPLANE_NAME} --type=config --shadow --include=diff | jq '.diff' | jd -t patch2jd
```
Expected output:
```
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","inbound:10.42.0.13:6379","filterChains","0","filters","0","typedConfig","rules","policies","allow-all-default"]
- {"permissions":[{"any":true}],"principals":[{"authenticated":{"principalName":{"exact":"spiffe://default/demo-app_kuma-demo_svc_5000"}}}]}
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","inbound:10.42.0.13:6379","filterChains","0","filters","0","typedConfig","rules","policies","MeshTrafficPermission"]
+ {"permissions":[{"any":true}],"principals":[{"authenticated":{"principalName":{"exact":"spiffe://default/demo-app_kuma-demo_svc_5000"}}}]}
```
As we can see, the only difference is the policy name: `MeshTrafficPermission` instead of `allow-all-default`. The value of the policy is the same.
- Remove the `kuma.io/effect: shadow` label:
```sh
kubectl label -n kong-mesh-system meshtrafficpermission app-to-redis kuma.io/effect-
```
Even though the old TrafficPermission and the new MeshTrafficPermission are both in use, the new policy takes precedence, making the old one ineffective.
- Check that the demo app behaves as expected. If everything goes well, we can safely remove TrafficPermissions:
```sh
kubectl delete trafficpermissions --all
```
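If you want to be certain that only the new policy is left before moving on, a quick listing of both resource types doesn't hurt:

```sh
kubectl get trafficpermissions        # should report that no resources are found
kubectl get meshtrafficpermissions -A # app-to-redis should still be listed
```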
- Create a replacement policy for the `timeout-global` Timeout and apply it with the `kuma.io/effect: shadow` label:
```sh
echo 'apiVersion: kuma.io/v1alpha1
kind: MeshTimeout
metadata:
  namespace: kong-mesh-system
  name: timeout-global
  labels:
    kuma.io/mesh: default
    kuma.io/effect: shadow
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: Mesh
      default:
        connectionTimeout: 21s
        idleTimeout: 22s
        http:
          requestTimeout: 23s
          streamIdleTimeout: 25s
          maxStreamDuration: 26s
  from:
    - targetRef:
        kind: Mesh
      default:
        connectionTimeout: 10s
        idleTimeout: 2h
        http:
          requestTimeout: 0s
          streamIdleTimeout: 2h' | kubectl apply -f-
```
- Check the list of changes for the `redis_kuma-demo_svc_6379` `kuma.io/service` in the Envoy configuration using `kumactl`, `jq` and `jd`:
```sh
kumactl inspect dataplane ${DATAPLANE_NAME} --type=config --shadow --include=diff | jq '.diff' | jd -t patch2jd
```
Expected output:
```
@ ["type.googleapis.com/envoy.config.cluster.v3.Cluster","demo-app_kuma-demo_svc_5000","typedExtensionProtocolOptions","envoy.extensions.upstreams.http.v3.HttpProtocolOptions","commonHttpProtocolOptions","maxConnectionDuration"]
+ "0s"
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","outbound:10.43.146.6:5000","filterChains","0","filters","0","typedConfig","commonHttpProtocolOptions","idleTimeout"]
- "22s"
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","outbound:10.43.146.6:5000","filterChains","0","filters","0","typedConfig","commonHttpProtocolOptions","idleTimeout"]
+ "0s"
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","outbound:10.43.146.6:5000","filterChains","0","filters","0","typedConfig","routeConfig","virtualHosts","0","routes","0","route","idleTimeout"]
+ "25s"
@ ["type.googleapis.com/envoy.config.listener.v3.Listener","outbound:10.43.146.6:5000","filterChains","0","filters","0","typedConfig","requestHeadersTimeout"]
+ "0s"
```
Review the list and make sure the new MeshTimeout policy won't change any settings that matter to you.
The key differences between the old and new timeout policies are:
- Previously, there was no way to specify `requestHeadersTimeout`, `maxConnectionDuration` and `maxStreamDuration` (on the inbound side); these timeouts were simply unset. With the new MeshTimeout policy we explicitly set them to `0s` by default.
- `idleTimeout` used to be configured on both the cluster and the listener. MeshTimeout configures it only on the cluster.
- `route/idleTimeout` duplicates the value of `streamIdleTimeout` on a per-route basis. Previously it was set only per listener.
These three facts fully explain the list of changes we're observing.
- Remove the `kuma.io/effect: shadow` label:
```sh
kubectl label -n kong-mesh-system meshtimeout timeout-global kuma.io/effect-
```
Even though the old Timeout and the new MeshTimeout are both in use, the new policy takes precedence, making the old one ineffective.
- Check that the demo app behaves as expected. If everything goes well, we can safely remove Timeouts:
```sh
kubectl delete timeouts --all
```
- Create a replacement policy for the `cb-global` CircuitBreaker and apply it with the `kuma.io/effect: shadow` label:
```sh
echo 'apiVersion: kuma.io/v1alpha1
kind: MeshCircuitBreaker
metadata:
  namespace: kong-mesh-system
  name: cb-global
  labels:
    kuma.io/mesh: default
    kuma.io/effect: shadow
spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: Mesh
      default:
        connectionLimits:
          maxConnections: 24
          maxPendingRequests: 25
          maxRequests: 26
          maxRetries: 27
        outlierDetection:
          interval: 21s
          baseEjectionTime: 22s
          maxEjectionPercent: 23
          splitExternalAndLocalErrors: false
          detectors:
            totalFailures:
              consecutive: 28
            gatewayFailures:
              consecutive: 29
            localOriginFailures:
              consecutive: 30
            successRate:
              requestVolume: 31
              minimumHosts: 32
              standardDeviationFactor: "1.33"
            failurePercentage:
              requestVolume: 34
              minimumHosts: 35
              threshold: 36' | kubectl apply -f-
```
- Check the list of changes for the `redis_kuma-demo_svc_6379` `kuma.io/service` in the Envoy configuration using `kumactl`, `jq` and `jd`:
```sh
kumactl inspect dataplane ${DATAPLANE_NAME} --type=config --shadow --include=diff | jq '.diff' | jd -t patch2jd
```
The expected output is empty: CircuitBreaker and MeshCircuitBreaker configure Envoy in exactly the same way.
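If you'd rather not eyeball an empty output, the same inspect call can be piped through `jq` to count the diff entries explicitly:

```sh
kumactl inspect dataplane ${DATAPLANE_NAME} --type=config --shadow --include=diff | jq '.diff | length'
# Expected output: 0
```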
- Remove the `kuma.io/effect: shadow` label:
```sh
kubectl label -n kong-mesh-system meshcircuitbreaker cb-global kuma.io/effect-
```
Even though the old CircuitBreaker and the new MeshCircuitBreaker are both in use, the new policy takes precedence, making the old one ineffective.
- Check that the demo app behaves as expected. If everything goes well, we can safely remove CircuitBreakers:
```sh
kubectl delete circuitbreakers --all
```
It's safe to simply remove the `route-all-default` TrafficRoute (see the command below).
Traffic will continue to flow through the system even if there are neither TrafficRoutes nor MeshTCPRoutes/MeshHTTPRoutes.
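Following the same pattern used for the other old policy types:

```sh
kubectl delete trafficroute route-all-default
```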
The biggest change is that there are now two protocol-specific routes, one for TCP and one for HTTP. `MeshHTTPRoute` always takes precedence over `MeshTCPRoute` if both exist.
Otherwise the high-level structure of the routes hasn't changed, though there are a number of details to consider.
Some enum values and some field structures were updated, largely to reflect the Gateway API.
Please first read the MeshGatewayRoute docs, the MeshHTTPRoute docs and the MeshTCPRoute docs.
Always refer to the spec to ensure your new resource is valid.
Note that `MeshHTTPRoute` has precedence over `MeshGatewayRoute`.
We're going to start with a gateway and a simple legacy `MeshGatewayRoute`, look at how to migrate MeshGatewayRoutes in general, and then finish by migrating our example `MeshGatewayRoute`.
Let's start with the following `MeshGateway` and `MeshGatewayInstance`:
echo "---
apiVersion: kuma.io/v1alpha1
kind: MeshGateway
mesh: default
metadata:
name: demo-app
labels:
kuma.io/origin: zone
spec:
conf:
listeners:
- port: 80
protocol: HTTP
tags:
port: http-80
selectors:
- match:
kuma.io/service: demo-app-gateway_kuma-demo_svc
---
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
name: demo-app-gateway
namespace: kuma-demo
spec:
replicas: 1
serviceType: LoadBalancer" | kubectl apply -f-
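Before adding routes it's worth confirming that the gateway actually came up. Assuming the MeshGatewayInstance controller names the created Service after the instance (an assumption, not something configured explicitly above), something like this should show a ready instance and a LoadBalancer address:

```sh
kubectl get meshgatewayinstance demo-app-gateway -n kuma-demo
kubectl get svc demo-app-gateway -n kuma-demo
```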
and the following initial `MeshGatewayRoute`:
echo "apiVersion: kuma.io/v1alpha1
kind: MeshGatewayRoute
mesh: default
metadata:
name: demo-app-gateway
spec:
conf:
http:
hostnames:
- example.com
rules:
- matches:
- path:
match: PREFIX
value: /
backends:
- destination:
kuma.io/service: demo-app_kuma-demo_svc_5000
weight: 1
selectors:
- match:
kuma.io/service: demo-app-gateway_kuma-demo_svc" | kubectl apply -f-
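It's also handy to capture a baseline request through the gateway now, so the same check can be repeated after each of the following changes. This is a sketch that assumes the LoadBalancer Service created for the MeshGatewayInstance is named demo-app-gateway and has an external IP:

```sh
GATEWAY_IP=$(kubectl get svc demo-app-gateway -n kuma-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: example.com' "http://${GATEWAY_IP}/"
# A 2xx status code means the route is serving traffic
```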
The main consideration is specifying which gateways are affected by the route.
The most important change is that instead of solely using tags to select `MeshGateway` listeners, new routes target MeshGateways by name, optionally with tags for specific listeners.
So in our example:
```yaml
spec:
  selectors:
    - match:
        kuma.io/service: demo-app-gateway_kuma-demo_svc
        port: http-80
```
becomes:
```yaml
spec:
  targetRef:
    kind: MeshGateway
    name: demo-app
    tags:
      port: http-80
  to:
```
because we're now using the name of the `MeshGateway` instead of the `kuma.io/service` it matches.
As with all new policies, the spec is now merged under a `default` field.
`MeshTCPRoute` is very simple, so the rest of this section focuses on `MeshHTTPRoute`.
Note that for `MeshHTTPRoute` the `hostnames` are directly under the `to` entry:
```yaml
conf:
  http:
    hostnames:
      - example.com
    # ...
```
becomes:
```yaml
to:
  - targetRef:
      kind: Mesh
    hostnames:
      - example.com
    # ...
```
Matching works the same as before. Remember that for `MeshHTTPRoute` merging is done on a per-match basis, so it's possible for one route to define `filters` and another `backendRefs` for a given match, and the resulting rule would both apply the filters and route to the backends.
Given two routes, one with:
```yaml
to:
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      default:
        filters:
          - type: RequestHeaderModifier
            requestHeaderModifier:
              set:
                - name: x-custom-header
                  value: xyz
```
and the other:
```yaml
to:
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      default:
        backendRefs:
          - kind: MeshService
            name: backend
            namespace: kuma-demo
            port: 3001
```
Traffic to `/` would have the `x-custom-header` added and be sent to the `backend` service.
Every `MeshGatewayRoute` filter has an equivalent in `MeshHTTPRoute`. Consult the documentation for both resources to find out how each filter looks in `MeshHTTPRoute`.
Backends are similar, except that instead of targeting with tags, the `targetRef` structure with `kind: MeshService`/`kind: MeshServiceSubset` is used.
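Applied to our initial MeshGatewayRoute, that means the tag-based backend would translate roughly like this (a sketch; `weight` is optional and only shown to mirror the original):

```yaml
backends:
  - destination:
      kuma.io/service: demo-app_kuma-demo_svc_5000
    weight: 1
```

becomes:

```yaml
default:
  backendRefs:
    - kind: MeshService
      name: demo-app_kuma-demo_svc_5000
      weight: 1
```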
So all in all we have:
- Create the equivalent `MeshHTTPRoute`:
echo "apiVersion: kuma.io/v1alpha1
kind: MeshHTTPRoute
metadata:
name: demo-app
namespace: kuma-system
labels:
kuma.io/origin: zone
kuma.io/mesh: default
spec:
targetRef:
kind: MeshGateway
name: demo-app
to:
- targetRef:
kind: Mesh
hostnames:
- example.com
rules:
- default:
backendRefs:
- kind: MeshService
name: demo-app_kuma-demo_svc_5000
matches:
- path:
type: PathPrefix
value: /" | kubectl apply -f -
- Check that traffic is still working (the request example below can be reused for this).
- Delete the previous MeshGatewayRoute:
```sh
kubectl delete meshgatewayroute --all
```
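Re-running the baseline request captured earlier (with `GATEWAY_IP` still set) both before and after deleting the MeshGatewayRoute confirms that the new MeshHTTPRoute has taken over:

```sh
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: example.com' "http://${GATEWAY_IP}/"
```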