Each Pod gets its own IP address (Kubernetes expects network plugins to ensure this). For a
given Deployment in your cluster, the set of Pods running in one moment in time could be
different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to
other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep
track of which IP address to connect to, so that the frontend can use the backend part of the
workload?
Enter Services.
Services in Kubernetes
The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a
network. Each Service object defines a logical set of endpoints (usually these endpoints are
Pods) along with a policy about how to make those pods accessible.
For example, consider a stateless image-processing backend which is running with 3 replicas.
Those replicas are fungible: frontends do not care which backend they use. While the actual
Pods that compose the backend set may change, the frontend clients should not need to be
aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
The set of Pods targeted by a Service is usually determined by a selector that you define. To
learn about other ways to define Service endpoints, see Services without selectors .
If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic
reaches that workload. Ingress is not a Service type, but it acts as the entry point for your
cluster. An Ingress lets you consolidate your routing rules into a single resource, so that you
can expose multiple components of your workload, running separately in your cluster, behind a
single listener.
The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You
can add Gateway to your cluster - it is a family of extension APIs, implemented using
CustomResourceDefinitions - and then use these to configure access to network services that
are running in your cluster.
Cloud-native service discovery
If you're able to use Kubernetes APIs for service discovery in your application, you can query
the API server for matching EndpointSlices. Kubernetes updates the EndpointSlices for a
Service whenever the set of Pods in a Service changes.
For non-native applications, Kubernetes offers ways to place a network port or load balancer in
between your application and the backend Pods.
Either way, your workload can use these service discovery mechanisms to find the target it
wants to connect to.
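For example, a minimal sketch of the native approach, assuming a Service named my-service in the current namespace:

kubectl get endpointslices -l kubernetes.io/service-name=my-service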
Defining a Service
A Service is an object (the same way that a Pod or a ConfigMap is an object). You can create,
view or modify Service definitions using the Kubernetes API. Usually you use a tool such as
kubectl to make those API calls for you.
For example, suppose you have a set of Pods that each listen on TCP port 9376 and are labelled
as app.kubernetes.io/name=MyApp . You can define a Service to publish that TCP listener:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Applying this manifest creates a new Service named "my-service" with the default ClusterIP
service type . The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name:
MyApp label.
Kubernetes assigns this Service an IP address (the cluster IP), which is used by the virtual IP
address mechanism. For more details on that mechanism, read Virtual IPs and Service Proxies.
The controller for that Service continuously scans for Pods that match its selector, and then
makes any necessary updates to the set of EndpointSlices for the Service.
The name of a Service object must be a valid RFC 1035 label name .
Note: A Service can map any incoming port to a targetPort . By default and for convenience,
the targetPort is set to the same value as the port field.
Port definitions
Port definitions in Pods have names, and you can reference these names in the targetPort
attribute of a Service. For example, we can bind the targetPort of the Service to the Pod port in
the following way:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
    - name: nginx
      image: nginx:stable
      ports:
        - containerPort: 80
          name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - name: name-of-service-port
      protocol: TCP
      port: 80
      targetPort: http-web-svc
This works even if there is a mixture of Pods in the Service using a single configured name,
with the same network protocol available via different port numbers. This offers a lot of
flexibility for deploying and evolving your Services. For example, you can change the port
numbers that Pods expose in the next version of your backend software, without breaking
clients.
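As an illustrative sketch (the Pod below is hypothetical, and assumes its container is configured to listen on the new port), a newer backend can move to port 8080 while keeping the http-web-svc port name, and the Service above keeps working unchanged:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-v2 # hypothetical next version of the backend
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
    - name: nginx
      image: nginx:stable
      ports:
        # The container port number changed, but the name that the
        # Service's targetPort references did not.
        - containerPort: 8080
          name: http-web-svc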
The default protocol for Services is TCP; you can also use any other supported protocol .
Because many Services need to expose more than one port, Kubernetes supports multiple port
definitions for a single Service. Each port definition can have the same protocol , or a different
one.
Services without selectors
Services most commonly abstract access to Kubernetes Pods thanks to the selector, but when
used with a corresponding set of EndpointSlices objects and without a selector, the Service can
abstract other kinds of backends, including ones that run outside the cluster.
For example:
You want to have an external database cluster in production, but in your test
environment you use your own databases.
You want to point your Service to a Service in a different Namespace or on another
cluster.
You are migrating a workload to Kubernetes. While evaluating the approach, you run
only a portion of your backends in Kubernetes.
In any of these scenarios you can define a Service without specifying a selector to match Pods.
For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints)
objects are not created automatically. You can map the Service to the network address and port
where it's running, by adding an EndpointSlice object manually. For example:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-1 # by convention, use the name of the Service
                     # as a prefix for the name of the EndpointSlice
  labels:
    # You should set the "kubernetes.io/service-name" label.
    # Set its value to match the name of the Service
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
  - name: '' # empty because port 9376 is not assigned as a well-known
             # port (by IANA)
    appProtocol: http
    protocol: TCP
    port: 9376
endpoints:
  - addresses:
      - "10.4.5.6"
  - addresses:
      - "10.1.2.3"
Custom EndpointSlices
When you create an EndpointSlice object for a Service, you can use any name for the
EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an
EndpointSlice to a Service by setting the kubernetes.io/service-name label on that
EndpointSlice.
Note:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local
(169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
The endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-
proxy doesn't support virtual IPs as a destination.
For an EndpointSlice that you create yourself, or in your own code, you should also pick a value
to use for the label endpointslice.kubernetes.io/managed-by . If you create your own controller
code to manage EndpointSlices, consider using a value similar to "my-domain.example/name-
of-controller" . If you are using a third party tool, use the name of the tool in all-lowercase and
change spaces and other punctuation to dashes (-). If people are directly using a tool such as
kubectl to manage EndpointSlices, use a name that describes this manual management, such as
"staff" or "cluster-admins" . You should avoid using the reserved value "controller" , which
identifies EndpointSlices managed by Kubernetes' own control plane.
Accessing a Service without a selector
Accessing a Service without a selector works the same as if it had a selector. In the example for
a Service without a selector, traffic is routed to one of the two endpoints defined in the
EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 9376.
Note: The Kubernetes API server does not allow proxying to endpoints that are not mapped to
pods. Actions such as kubectl proxy <service-name> where the service has no selector will fail
due to this constraint. This prevents the Kubernetes API server from being used as a proxy to
endpoints the caller may not be authorized to access.
An ExternalName Service is a special case of Service that does not have selectors and uses DNS
names instead. For more information, see the ExternalName section.
EndpointSlices
FEATURE STATE: Kubernetes v1.21 [stable]
EndpointSlices are objects that represent a subset (a slice) of the backing network endpoints for
a Service.
Your Kubernetes cluster tracks how many endpoints each EndpointSlice represents. If there are
so many endpoints for a Service that a threshold is reached, then Kubernetes adds another
empty EndpointSlice and stores new endpoint information there. By default, Kubernetes makes
a new EndpointSlice once the existing EndpointSlices all contain at least 100 endpoints.
Kubernetes does not make the new EndpointSlice until an extra endpoint needs to be added.
See EndpointSlices for more information about this API.
Endpoints
In the Kubernetes API, an Endpoints (the resource kind is plural) defines a list of network
endpoints, typically referenced by a Service to define which Pods the traffic can be sent to.
The EndpointSlice API is the recommended replacement for Endpoints.
Over-capacity endpoints
Kubernetes limits the number of endpoints that can fit in a single Endpoints object. When there
are over 1000 backing endpoints for a Service, Kubernetes truncates the data in the Endpoints
object. Because a Service can be
linked with more than one EndpointSlice, the 1000 backing
endpoint limit only affects the legacy Endpoints API.
In that case, Kubernetes selects at most 1000 possible backend endpoints to store into the
Endpoints object, and sets an annotation on the Endpoints: endpoints.kubernetes.io/over-
capacity: truncated . The control plane also removes that annotation if the number of backend
Pods drops below 1000.
Traffic is still sent to backends, but any load balancing mechanism that relies on the legacy
Endpoints API only sends traffic to at most 1000 of the available backing endpoints.
The same API limit means that you cannot manually update an Endpoints to have more than
1000 endpoints.
Application protocol
FEATURE STATE: Kubernetes v1.20 [stable]
The appProtocol field provides a way to specify an application protocol for each Service port.
This is used as a hint for implementations to offer richer behavior for protocols that they
understand. The value of this field is mirrored by the corresponding Endpoints and
EndpointSlice objects.
This field follows standard Kubernetes label syntax. Valid values are one of:
IANA standard service names .
Implementation-defined prefixed names such as mycompany.com/my-custom-protocol .
Kubernetes-defined prefixed names:
Protocol Description
kubernetes.io/h2c HTTP/2 over cleartext as described in RFC 7540
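For illustration, a sketch of a Service port carrying an appProtocol hint (the Service name and port numbers are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-h2c-service # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: grpc
      protocol: TCP
      # Hint to implementations: HTTP/2 over cleartext
      appProtocol: kubernetes.io/h2c
      port: 8080
      targetPort: 8080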
Multi-port Services
For some Services, you need to expose more than one port. Kubernetes lets you configure
multiple port definitions on a Service object. When using multiple ports for a Service, you must
give all of your ports names so that these are unambiguous. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: https
      protocol: TCP
      port: 443
      targetPort: 9377
Note:
As with Kubernetes names in general, names for ports must only contain lowercase
alphanumeric characters and -. Port names must also start and end with an alphanumeric
character.
For example, the names 123-abc and web are valid, but 123_abc and -web are not.
Service type
For some parts of your application (for example, frontends) you may want to expose a Service
onto an external IP address, one that's accessible from outside of your cluster.
Kubernetes Service types allow you to specify what kind of Service you want.
The available type values and their behaviors are:
ClusterIP
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only
reachable from within the cluster. This is the default that is used if you don't explicitly
specify a type for a Service. You can expose the Service to the public internet using an
Ingress or a Gateway .
NodePort
Exposes the Service on each Node's IP at a static port (the NodePort ). To make the node
port available, Kubernetes sets up a cluster IP address, the same as if you had requested a
Service of type: ClusterIP .
LoadBalancer
Exposes the Service externally using an external load balancer. Kubernetes does not
directly offer a load balancing component; you must provide one, or you can integrate
your Kubernetes cluster with a cloud provider.
ExternalName
Maps the Service to the contents of the externalName field (for example, to the hostname
api.foo.bar.example ). The mapping configures your cluster's DNS server to return a
CNAME record with that external hostname value. No proxying of any kind is set up.
The type field in the Service API is designed as nested functionality - each level adds to the
previous. However there is an exception to this nested design. You can define a LoadBalancer
Service by disabling the load balancer NodePort allocation .
type: ClusterIP
This default Service type assigns an IP address from a pool of IP addresses that your cluster has
reserved for that purpose.
Several of the other types for Service build on the ClusterIP type as a foundation.
If you define a Service that has the .spec.clusterIP set to "None" then Kubernetes does not
assign an IP address. See headless Services for more information.
Choosing your own IP address
You can specify your own cluster IP address as part of a Service creation request. To do this, set
the .spec.clusterIP field. For example, if you already have an existing DNS entry that you wish
to reuse, or legacy systems that are configured for a specific IP address and difficult to re-
configure.
The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-
cluster-ip-range CIDR range that is configured for the API server. If you try to create a Service
with an invalid clusterIP address value, the API server will return a 422 HTTP status code to
indicate that there's a problem.
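A minimal sketch, assuming 10.96.100.50 falls inside your cluster's configured service-cluster-ip-range (the address is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  # Must be a free address inside the service-cluster-ip-range.
  clusterIP: 10.96.100.50
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376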
Read avoiding collisions to learn how Kubernetes helps reduce the risk and impact of two
different Services both trying to use the same IP address.
type: NodePort
If you set the type field to NodePort , the Kubernetes control plane allocates a port from a range
specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port
(the same port number on every Node) into your Service. Your Service reports the allocated
port in its .spec.ports[*].nodePort field.
Using a NodePort gives you the freedom to set up your own load balancing solution, to
configure environments that are not fully supported by Kubernetes, or even to expose one or
more nodes' IP addresses directly.
For a node port Service, Kubernetes additionally allocates a port (TCP, UDP or SCTP to match
the protocol of the Service). Every node in the cluster configures itself to listen on that assigned
port and to forward traffic to one of the ready endpoints associated with that Service. You'll be
able to contact the type: NodePort Service, from outside the cluster, by connecting to any node
using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that
Service).
Choosing your own port
If you want a specific port number, you can specify a value in the nodePort field. The control
plane will either allocate you that port or report that the API transaction failed. This means that
you need to take care of possible port collisions yourself. You also have to use a valid port
number, one that's inside the range configured for NodePort use.
Here is an example manifest for a Service of type: NodePort that specifies a NodePort value
(30007, in this example):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - port: 80
      # By default and for convenience, the `targetPort` is set to
      # the same value as the `port` field.
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane
      # will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
Reserve Nodeport ranges to avoid collisions
FEATURE STATE: Kubernetes v1.29 [stable]
The policy for assigning ports to NodePort services applies to both the auto-assignment and the
manual assignment scenarios. When a user wants to create a NodePort service that uses a
specific port, the target port may conflict with another port that has already been assigned.
To avoid this problem, the port range for NodePort services is divided into two bands. Dynamic
port assignment uses the upper band by default, and it may use the lower band once the upper
band has been exhausted. Users can then allocate from the lower band with a lower risk of port
collision.
Custom IP address configuration for type: NodePort Services
You can set up nodes in your cluster to use a particular IP address for serving node port
services. You might want to do this if each node is connected to multiple networks (for
example: one network for application traffic, and another network for traffic between nodes and
the control plane).
If you want to specify particular IP address(es) to proxy the port, you can set the --nodeport-
addresses flag for kube-proxy or the equivalent nodePortAddresses field of the kube-proxy
configuration file to particular IP block(s).
This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8 , 192.0.2.0/25 ) to specify IP
address ranges that kube-proxy should consider as local to this node.
For example, if you start kube-proxy with the --nodeport-addresses=127.0.0.0/8 flag, kube-proxy
only selects the loopback interface for NodePort Services. The default for --nodeport-addresses
is an empty list. This means that kube-proxy should consider all available network interfaces
for NodePort. (That's also compatible with earlier Kubernetes releases.)
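For illustration, the equivalent setting in a kube-proxy configuration file might look like this sketch (the CIDR shown is an assumption):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy only answers for NodePort traffic on node IPs in these blocks.
nodePortAddresses:
  - 10.0.0.0/8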
Note: This Service is visible as <NodeIP>:spec.ports[*].nodePort
and .spec.clusterIP:spec.ports[*].port . If the --nodeport-addresses flag for kube-proxy or the
equivalent field in the kube-proxy configuration file is set, <NodeIP> would be a filtered node
IP address (or possibly IP addresses).
type: LoadBalancer
On cloud providers which support external load balancers, setting the type field to
LoadBalancer provisions a load balancer for your Service. The actual creation of the load
balancer happens asynchronously, and information about the provisioned balancer is published
in the Service's .status.loadBalancer field. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.127
Traffic from the external load balancer is directed at the backend Pods. The cloud provider
decides how it is load balanced.
To implement a Service of type: LoadBalancer , Kubernetes typically starts off by making the
changes that are equivalent to you requesting a Service of type: NodePort . The cloud-controller-
manager component then configures the external load balancer to forward traffic to that
assigned node port.
You can configure a load balanced Service to omit assigning a node port, provided that the
cloud provider implementation supports this.
Some cloud providers allow you to specify the loadBalancerIP . In those cases, the load-balancer
is created with the user-specified loadBalancerIP . If the loadBalancerIP field is not specified, the
load balancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your
cloud provider does not support the feature, the loadBalancerIP field that you set is ignored.
Note:
The .spec.loadBalancerIP field for a Service was deprecated in Kubernetes v1.24.
This field was under-specified and its meaning varies across implementations. It also cannot
support dual-stack networking. This field may be removed in a future API version.
If you're integrating with a provider that supports specifying the load balancer IP address(es)
for a Service via a (provider specific) annotation, you should switch to doing that.
If you are writing code for a load balancer integration with Kubernetes, avoid using this field.
You can integrate with Gateway rather than Service, or you can define your own (provider
specific) annotations on the Service that specify the equivalent detail.
Load balancers with mixed protocol types
FEATURE STATE: Kubernetes v1.26 [stable]
By default, for LoadBalancer type of Services, when there is more than one port defined, all
ports must have the same protocol, and the protocol must be one which is supported by the
cloud provider.
The feature gate MixedProtocolLBService (enabled by default for the kube-apiserver as of v1.24)
allows the use of different protocols for LoadBalancer type of Services, when there is more than
one port defined.
Note: The set of protocols that can be used for load balanced Services is defined by your cloud
provider; they may impose restrictions beyond what the Kubernetes API enforces.
Disabling load balancer NodePort allocation
FEATURE STATE: Kubernetes v1.24 [stable]
You can optionally disable node port allocation for a Service of type: LoadBalancer , by setting
the field spec.allocateLoadBalancerNodePorts to false. This should only be used for load
balancer implementations that route traffic directly to pods as opposed to using node ports. By
default, spec.allocateLoadBalancerNodePorts is true and type LoadBalancer Services will
continue to allocate node ports. If spec.allocateLoadBalancerNodePorts is set to false on an
existing Service with allocated node ports, those node ports will not be de-allocated
automatically. You must explicitly remove the nodePorts entry in every Service port to de-
allocate those node ports.
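A minimal sketch of such a Service, assuming a load balancer implementation that routes traffic directly to Pods:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Skip NodePort allocation; the load balancer reaches Pods directly.
  allocateLoadBalancerNodePorts: false
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376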
Specifying class of load balancer implementation
FEATURE STATE: Kubernetes v1.24 [stable]
For a Service with type set to LoadBalancer , the .spec.loadBalancerClass field enables you to use
a load balancer implementation other than the cloud provider default.
By default, .spec.loadBalancerClass is not set and a LoadBalancer type of Service uses the cloud
provider's default load balancer implementation if the cluster is configured with a cloud
provider using the --cloud-provider component flag.
If you specify .spec.loadBalancerClass , it is assumed that a load balancer implementation that
matches the specified class is watching for Services. Any default load balancer implementation
(for example, the one provided by the cloud provider) will ignore Services that have this field
set. spec.loadBalancerClass can be set on a Service of type LoadBalancer only. Once set, it
cannot be changed. The value of spec.loadBalancerClass must be a label-style identifier, with an
optional prefix such as "internal-vip" or "example.com/internal-vip". Unprefixed names are
reserved for end-users.
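As a sketch, assuming a controller in your cluster watches for the class example.com/internal-vip (the class name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Handled by whichever controller watches this class,
  # not by the cloud provider's default implementation.
  loadBalancerClass: example.com/internal-vip
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376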
Specifying IPMode of load balancer status
FEATURE STATE: Kubernetes v1.29 [alpha]
Starting as Alpha in Kubernetes 1.29, a feature gate named LoadBalancerIPMode allows you to
set the .status.loadBalancer.ingress.ipMode for a Service with type set to LoadBalancer.
The .status.loadBalancer.ingress.ipMode specifies how the load-balancer IP behaves. It may be
specified only when the .status.loadBalancer.ingress.ip field is also specified.
There are two possible values for .status.loadBalancer.ingress.ipMode : "VIP" and "Proxy". The
default value is "VIP" meaning that traffic is delivered to the node with the destination set to the
load-balancer's IP and port. There are two cases when setting this to "Proxy", depending on how
the load-balancer from the cloud provider delivers the traffic:
If the traffic is delivered to the node then DNATed to the pod, the destination would be
set to the node's IP and node port;
If the traffic is delivered directly to the pod, the destination would be set to the pod's IP
and port.
Service implementations may use this information to adjust traffic routing.
Internal load balancer
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.
In a split-horizon DNS environment you would need two Services to be able to route both
external and internal traffic to your endpoints.
To set an internal load balancer, add one of the following annotations to your Service
depending on the cloud service provider you're using:
GCP:

metadata:
  name: my-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"

AWS:

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Azure:

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"

IBM Cloud:

metadata:
  name: my-service
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"

OpenStack:

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"

Baidu Cloud:

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"

Tencent Cloud:

metadata:
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx

Alibaba Cloud:

metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"

OCI:

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: true
type: ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
my-service or cassandra . You specify these Services with the spec.externalName parameter.
This Service definition, for example, maps the my-service Service in the prod namespace to
my.database.example.com :
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Note:
A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS
name comprised of digits, not as an IP address (the internet does not however allow such names
in DNS). Services with external names that resemble IPv4 addresses are not resolved by DNS
servers.
If you want to map a Service directly to a specific IP address, consider using headless Services.
When looking up the host my-service.prod.svc.cluster.local , the cluster DNS Service returns a
CNAME record with the value my.database.example.com . Accessing my-service works in the
same way as other Services but with the crucial difference that redirection happens at the DNS
level rather than via proxying or forwarding. Should you later decide to move your database
into your cluster, you can start its Pods, add appropriate selectors or endpoints, and change the
Service's type.
Caution:
You may have trouble using ExternalName for some common protocols, including HTTP and
HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is
different from the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses.
HTTP requests will have a Host: header that the origin server does not recognize; TLS servers
will not be able to provide a certificate matching the hostname that the client connected to.
Headless Services
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create
what are termed headless Services , by explicitly specifying "None" for the cluster IP address
(.spec.clusterIP ).
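For example, a minimal sketch of a headless Service with a selector:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service # hypothetical name
spec:
  clusterIP: None # makes this a headless Service
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376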
You can use a headless Service to interface with other service discovery mechanisms, without
being tied to Kubernetes' implementation.
For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services,
and there is no load balancing or proxying done by the platform for them. How DNS is
automatically configured depends on whether the Service has selectors defined:
With selectors
For headless Services that define selectors, the endpoints controller creates EndpointSlices in
the Kubernetes API, and modifies the DNS configuration to return A or AAAA records (IPv4 or
IPv6 addresses) that point directly to the Pods backing the Service.
Without selectors
For headless Services that do not define selectors, the control plane does not create
EndpointSlice objects. However, the DNS system looks for and configures either:
DNS CNAME records for type: ExternalName Services.
DNS A / AAAA records for all IP addresses of the Service's ready endpoints, for all
Service types other than ExternalName .
For IPv4 endpoints, the DNS system creates A records.
For IPv6 endpoints, the DNS system creates AAAA records.
When you define a headless Service without a selector, the port must match the targetPort .
Discovering services
For clients running inside your cluster, Kubernetes supports two primary modes of finding a
Service: environment variables and DNS.
Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active
Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables,
where the Service name is upper-cased and dashes are converted to underscores.
For example, the Service redis-primary which exposes TCP port 6379 and has been allocated
cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_PRIMARY_SERVICE_HOST=10.0.0.11
REDIS_PRIMARY_SERVICE_PORT=6379
REDIS_PRIMARY_PORT=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp
REDIS_PRIMARY_PORT_6379_TCP_PORT=6379
REDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11
Note:
When you have a Pod that needs to access a Service, and you are using the environment
variable method to publish the port and cluster IP to the client Pods, you must create the
Service before the client Pods come into existence. Otherwise, those client Pods won't have their
environment variables populated.
If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this
ordering issue.
Kubernetes also supports and provides variables that are compatible with Docker Engine's
"legacy container links " feature. You can read makeLinkVariables to see how this is implemented
in Kubernetes.
DNS
You can (and almost always should) set up a DNS service for your Kubernetes cluster using an
add-on .
A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services
and creates a set of DNS records for each one. If DNS has been enabled throughout your cluster
then all Pods should automatically be able to resolve Services by their DNS name.
For example, if you have a Service called my-service in a Kubernetes namespace my-ns , the
control plane and the DNS Service acting together create a DNS record for my-service.my-ns .
Pods in the my-ns namespace should be able to find the service by doing a name lookup for my-
service (my-service.my-ns would also work).
Pods in other namespaces must qualify the name as my-service.my-ns . These names will resolve
to the cluster IP assigned for the Service.
Kubernetes also supports DNS SRV (Service) records for named ports. If the my-service.my-ns
Service has a port named http with the protocol set to TCP, you can do a DNS SRV query for
_http._tcp.my-service.my-ns to discover the port number for http, as well as the IP address.
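As a sketch, assuming the default cluster.local cluster domain and a Pod image that ships the dig tool (the image choice here is an assumption), you could run such a query with:

kubectl run -it --rm dns-probe --image=nicolaka/netshoot --restart=Never -- \
  dig SRV _http._tcp.my-service.my-ns.svc.cluster.local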
The Kubernetes DNS server is the only way to access ExternalName Services. You can find more
information about ExternalName resolution in DNS for Services and Pods .
Virtual IP addressing mechanism
Read Virtual IPs and Service Proxies, which explains the mechanism Kubernetes provides to
expose a Service with a virtual IP address.
Traffic policies
You can set the .spec.internalTrafficPolicy and .spec.externalTrafficPolicy fields to control how
Kubernetes routes traffic to healthy ("ready") backends.
See Traffic Policies for more details.
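For illustration, a sketch of where these fields sit (Local is shown here as an assumption; Cluster is the default for both):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  # Only route cluster-internal traffic to endpoints on the receiving node.
  internalTrafficPolicy: Local
  # Only route external traffic to node-local endpoints.
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376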
Session stickiness
If you want to make sure that connections from a particular client are passed to the same Pod
each time, you can configure session affinity based on the client's IP address. Read session
affinity to learn more.
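A minimal sketch of client-IP session affinity (the timeout value is an assumption; the API default is 10800 seconds):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600 # hypothetical value
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376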
External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be
exposed on those externalIPs . When network traffic arrives into the cluster, with the external IP
(as destination IP) and the port matching that Service, rules and routes that Kubernetes has
configured ensure that the traffic is routed to one of the endpoints for that Service.
When you define a Service, you can specify externalIPs for any service type . In the example
below, the Service named "my-service" can be accessed by clients using TCP, on
"198.51.100.32:80" (calculated from .spec.externalIPs[] and .spec.ports[].port ).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 49152
  externalIPs:
    - 198.51.100.32
Note: Kubernetes does not manage allocation of externalIPs ; these are the responsibility of the
cluster administrator.
API Object
Service is a top-level resource in the Kubernetes REST API. You can find more details about the
Service API object .
What's next
Learn more about Services and how they fit into Kubernetes:
Follow the Connecting Applications with Services tutorial.
Read about Ingress , which exposes HTTP and HTTPS routes from outside the cluster to
Services within your cluster.
Read about Gateway, an extension to Kubernetes that provides more flexibility than Ingress.
For more context, read the following:
Virtual IPs and Service Proxies
EndpointSlices
Service API reference
EndpointSlice API reference
Endpoint API reference (legacy)
Ingress
Make your HTTP (or HTTPS) network service available using a protocol-aware configuration
mechanism that understands web concepts like URIs, hostnames, paths, and more. The Ingress
concept lets you map traffic to different backends based on rules you define via the Kubernetes
API.
FEATURE STATE: Kubernetes v1.19 [stable]
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Note: Ingress is frozen. New features are being added to the Gateway API .
Terminology
For clarity, this guide defines the following terms:
Node: A worker machine in Kubernetes, part of a cluster.
Cluster: A set of Nodes that run containerized applications managed by Kubernetes. For
this example, and in most common Kubernetes deployments, nodes in the cluster are not
part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster. This could be a
gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a
cluster according to the Kubernetes networking model .
Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless
mentioned otherwise, Services are assumed to have virtual IPs only routable within the
cluster network.
What is Ingress?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
Here is a simple example where an Ingress sends all its traffic to one Service:
Figure. Ingress
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic,
terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible
for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge
router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP
and HTTPS to the internet typically uses a service of type Service.Type=NodePort or
Service.Type=LoadBalancer .
Prerequisites
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has
no effect.
You may need to deploy an Ingress controller such as ingress-nginx . You can choose from a
number of Ingress controllers .
Ideally, all Ingress controllers should fit the reference specification. In reality, the various
Ingress controllers operate slightly differently.
Note: Make sure you review your Ingress controller's documentation to understand the caveats
of choosing it.
The Ingress resource
A minimal Ingress resource example:
service/networking/minimal-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
An Ingress needs apiVersion , kind, metadata and spec fields. The name of an Ingress object
must be a valid DNS subdomain name . For general information about working with config files,
see deploying applications , configuring containers , managing resources . Ingress frequently uses
annotations to configure some options depending on the Ingress controller, an example of
which is the rewrite-target annotation. Different Ingress controllers support different
annotations. Review the documentation for your choice of Ingress controller to learn which
annotations are supported.
The Ingress spec has all the information needed to configure a load balancer or proxy server.
Most importantly, it contains a list of rules matched against all incoming requests. Ingress
resource only supports rules for directing HTTP(S) traffic.
If the ingressClassName is omitted, a default Ingress class should be defined.
There are some ingress controllers that work without the definition of a default IngressClass.
For example, the Ingress-NGINX controller can be configured with a flag --watch-ingress-
without-class. It is recommended, though, to specify the default IngressClass as shown below.
Ingress rules
Each HTTP rule contains the following information:
An optional host. In this example, no host is specified, so the rule applies to all inbound
HTTP traffic through the IP address specified. If a host is provided (for example,
foo.bar.com), the rules apply to that host.
A list of paths (for example, /testpath ), each of which has an associated backend defined
with a service.name and a service.port.name or service.port.number . Both the host and
path must match the content of an incoming request before the load balancer directs
traffic to the referenced Service.
A backend is a combination of Service and port names as described in the Service doc or
a custom resource backend by way of a CRD . HTTP (and HTTPS) requests to the Ingress
that match the host and path of the rule are sent to the listed backend.
A defaultBackend is often configured in an Ingress controller to service any requests that do not
match a path in the spec.
DefaultBackend
An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is
the backend that should handle requests in that case. The defaultBackend is conventionally a
configuration option of the Ingress controller and is not specified in your Ingress resources. If
no .spec.rules are specified, .spec.defaultBackend must be specified. If defaultBackend is not set,
the handling of requests that do not match any of the rules will be up to the ingress controller
(consult the documentation for your ingress controller to find out how it handles this case).
If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed
to your default backend.
Resource backends
A Resource backend is an ObjectRef to another Kubernetes resource within the same
namespace as the Ingress object. A Resource is a mutually exclusive setting with Service, and
will fail validation if both are specified. A common usage for a Resource backend is to ingress
data to an object storage backend with static assets.
service/networking/ingress-resource-backend.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    resource:
      apiGroup: k8s.example.com
      kind: StorageBucket
      name: static-assets
  rules:
    - http:
        paths:
          - path: /icons
            pathType: ImplementationSpecific
            backend:
              resource:
                apiGroup: k8s.example.com
                kind: StorageBucket
                name: icon-assets
After creating the Ingress above, you can view it with the following command:
kubectl describe ingress ingress-resource-backend
Name: ingress-resource-backend
Namespace: default
Address:
Default backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets
Rules:
Host Path Backends
---- ---- --------
*
/icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets
Annotations: <none>
Events: <none>
Path types
Each path in an Ingress is required to have a corresponding path type. Paths that do not include
an explicit pathType will fail validation. There are three supported path types:
ImplementationSpecific : With this path type, matching is up to the IngressClass.
Implementations can treat this as a separate pathType or treat it identically to Prefix or
Exact path types.
Exact : Matches the URL path exactly and with case sensitivity.
Prefix : Matches based on a URL path prefix split by /. Matching is case sensitive and done
on a path element by element basis. A path element refers to the list of labels in the path
split by the / separator. A request is a match for path p if every p is an element-wise
prefix of p of the request path.
Note: If the last element of the path is a substring of the last element in request path, it is
not a match (for example: /foo/bar matches /foo/bar/baz , but does not match /foo/barbaz ).
Examples
Kind Path(s) Request path(s) Matches?
Prefix / (all paths) Yes
Exact /foo /foo Yes
Exact /foo /bar No
Exact /foo /foo/ No
Exact /foo/ /foo No
Prefix /foo /foo, /foo/ Yes
Prefix /foo/ /foo, /foo/ Yes
Prefix /aaa/bb /aaa/bbb No
Prefix /aaa/bbb /aaa/bbb Yes
Prefix /aaa/bbb/ /aaa/bbb Yes, ignores trailing slash
Prefix /aaa/bbb /aaa/bbb/ Yes, matches trailing slash
Prefix /aaa/bbb /aaa/bbb/ccc Yes, matches subpath
Prefix /aaa/bbb /aaa/bbbxyz No, does not match string prefix
Prefix /, /aaa /aaa/ccc Yes, matches /aaa prefix
Prefix /, /aaa, /aaa/bbb /aaa/bbb Yes, matches /aaa/bbb prefix
Prefix /, /aaa, /aaa/bbb /ccc Yes, matches / prefix
Prefix /aaa /ccc No, uses default backend
Mixed /foo (Prefix), /foo (Exact) /foo Yes, prefers Exact
Multiple matches
In some cases, multiple paths within an Ingre | 653 |
ss will match a request. In those cases precedence
will be given first to the longest matching path. If two paths are still equally matched,
precedence will be given to paths with an exact path type over prefix path type.
Hostname wildcards
Hosts can be precise matches (for example "foo.bar.com") or a wildcard (for example
"*.foo.com"). Precise matches require that the HTTP host header matches the host field.
Wildcard matches require the HTTP host header is equal to the suffix of the wildcard rule.
Host Host header Match?
*.foo.com bar.foo.com Matches based on shared suffix
*.foo.com baz.bar.foo.com No match, wildcard only covers a single DNS label
*.foo.com foo.com No match, wildcard only covers a single DNS label
service/networking/ingress-wildcard-host.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: "*.foo.com"
      http:
        paths:
          - pathType: Prefix
            path: "/foo"
            backend:
              service:
                name: service2
                port:
                  number: 80
Ingress class
Ingresses can be implemented by different controllers, often with different configuration. Each
Ingress should specify a class, a reference to an IngressClass resource that contains additional
configuration including the name of the controller that should implement the class.
service/networking/external-lb.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
The .spec.parameters field of an IngressClass lets you reference another resource that provides
configuration related to that IngressClass.
The specific type of parameters to use depends on the ingress controller that you specify in the
.spec.controller field of the IngressClass.
IngressClass scope
Depending on your ingress controller, you may be able to use parameters that you set cluster-
wide, or just for one namespace.
The default scope for IngressClass parameters is cluster-wide.
If you set the .spec.parameters field and don't set .spec.parameters.scope , or if you
set .spec.parameters.scope to Cluster , then the IngressClass refers to a cluster-scoped resource.
The kind (in combination with the apiGroup) of the parameters refers to a cluster-scoped API
(possibly a custom resource), and the name of the parameters identifies a specific cluster-scoped
resource for that API.
For example:
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb-1
spec:
  controller: example.com/ingress-controller
  parameters:
    # The parameters for this IngressClass are specified in a
    # ClusterIngressParameter (API group k8s.example.net) named
    # "external-config-1". This definition tells Kubernetes to
    # look for a cluster-scoped parameter resource.
    scope: Cluster
    apiGroup: k8s.example.net
    kind: ClusterIngressParameter
    name: external-config-1
FEATURE STATE: Kubernetes v1.23 [stable]
If you set the .spec.parameters field and set .spec.parameters.scope to Namespace , then the
IngressClass refers to a namespaced-scoped resource. You must also set the namespace field
within .spec.parameters to the namespace that contains the parameters you want to use.
The kind (in combination with the apiGroup) of the parameters refers to a namespaced API (for
example: ConfigMap), and the name of the parameters identifies a specific resource in the
namespace you specified in namespace.
Namespace-scoped parameters help the cluster operator delegate control over the configuration
(for example: load balancer settings, API gateway definition) that is used for a workload. If you
used a cluster-scoped parameter then either:
the cluster operator team needs to approve a different team's changes every time there's a
new configuration change being applied.
the cluster operator must define specific access controls, such as RBAC roles and
bindings, that let the application team make changes to the cluster-scoped parameters
resource.
The IngressClass API itself is always cluster-scoped.
Here is an example of an IngressClass that refers to parameters that are namespaced:
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb-2
spec:
  controller: example.com/ingress-controller
  parameters:
    # The parameters for this IngressClass are specified in an
    # IngressParameter (API group k8s.example.com) named "external-config",
    # that's in the "external-configuration" namespace.
    scope: Namespace
    apiGroup: k8s.example.com
    kind: IngressParameter
    namespace: external-configuration
    name: external-config
Deprecated annotation
Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18,
Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This
annotation was never formally defined, but was widely supported by Ingress controllers.
The newer ingressClassName field on Ingresses is a replacement for that annotation, but is not
a direct equivalent. While the annotation was generally used to reference the name of the
Ingress controller that should implement the Ingress, the field is a reference to an IngressClass
resource that contains additional Ingress configuration, including the name of the Ingress
controller.
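For illustration, a sketch of the older annotation style (the class name "nginx" is an assumption; prefer ingressClassName in new manifests):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-class-ingress # hypothetical name
  annotations:
    # Deprecated: use spec.ingressClassName instead.
    kubernetes.io/ingress.class: "nginx"
spec:
  defaultBackend:
    service:
      name: test
      port:
        number: 80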
Default IngressClass
You can mark a particular IngressClass as default for your cluster. Setting the
ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will
ensure that new Ingresses without an ingressClassName field specified will be assigned this
default IngressClass.
Caution: If you have more than one IngressClass marked as the default for your cluster, the
admission controller prevents creating new Ingress objects that don't have an
ingressClassName specified. You can resolve this by ensuring that at most 1 IngressClass is
marked as default in your cluster.
There are some ingress controllers that work without the definition of a default IngressClass.
For example, the Ingress-NGINX controller can be configured with a flag --watch-ingress-
without-class. It is recommended, though, to specify the default IngressClass:
service/networking/default-ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx-example
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
Types of Ingress
Ingress backed by a single Service
There are existing Kubernetes concepts that allow you to expose a single Service (see
alternatives ). You can also do this with an Ingress by specifying a default backend with no rules.
service/networking/test-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  defaultBackend:
    service:
      name: test
      port:
        number: 80
If you create it using kubectl apply -f you should be able to view the state of the Ingress you
added:
kubectl get ingress test-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress external-lb * 203.0.113.123 80 59s
Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.
Note: Ingress controllers and load balancers may take a minute or two to allocate an IP address.
Until that time, you often see the address listed as <pending> .
Simple fanout
A fanout configuration routes traffic from a single IP address to more than one Service, based
on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers
down to a minimum. For example, a setup like:
Figure. Ingress Fan Out
It would require an Ingress such as:
service/networking/simple-fanout-example.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 4200
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 8080
When you create the Ingress with kubectl apply -f :
kubectl describe ingress simple-fanout-example
Name: simple-fanout-example
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:4200 (10.8.0.90:4200)
/bar service2:8080 (10.8.0.91:8080)
Events:
Type Reason Age From Message
  ----    ------  ----  ----  -------
Normal ADD 22s loadbalancer-controller default/test
The Ingress controller provisions an implementation-specific load balancer that satisfies the
Ingress, as long as the Services ( service1 , service2 ) exist. When it has done so, you can see the
address of the load balancer at the Address field.
Note: Depending on the Ingress controller you are using, you may need to create a default-
http-backend Service.
Name based virtual hosting
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP
address.
Figure. Ingress Name Based Virtual hosting
The following Ingress tells the backing load balancer to route requests based on the Host
header .
service/networking/name-virtual-host-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: bar.foo.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service2
                port:
                  number: 80
If you create an Ingress resource without any hosts defined in the rules, then any web traffic to
the IP address of your Ingress controller can
be matched without a name based virtual host
being required.
For example, the following Ingress routes traffic requested for first.bar.com to service1 ,
second.bar.com to service2 , and any traffic whose request host header doesn't match
first.bar.com and second.bar.com to service3 .
service/networking/name-virtual-host-ingress-no-third-host.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress-no-third-host
spec:
  rules:
    - host: first.bar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: second.bar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service2
                port:
                  number: 80
    - http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service3
                port:
                  number: 80
TLS
You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the
ingress point (traffic to the Service and its Pods is in plaintext). If the TLS configuration section
in an Ingress specifies different hosts, they are multiplexed on the same port according to the
hostname specified through the SNI TLS extension (provided the Ingress controller supports
SNI). The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and
private key to use for TLS. For example:
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the
client to the load balancer using TLS. You need to make sure the TLS secret you created came
from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain
Name (FQDN) for https-example.foo.com .
Note: Keep in mind that TLS will not work on the default rule because the certificates would
have to be issued for all the possible sub-domains. Therefore, hosts in the tls section need to
explicitly match the host in the rules section.
service/networking/tls-example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
    - hosts:
        - https-example.foo.com
      secretName: testsecret-tls
  rules:
    - host: https-example.foo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
Note: There is a gap between TLS features supported by various Ingress controllers. Please refer to the documentation on nginx, GCE, or any other platform-specific Ingress controller to understand how TLS works in your environment.
Load balancing
An Ingress controller is bootstrapped with some load-balancing policy settings that it applies to all Ingress objects, such as the load-balancing algorithm, backend weight scheme, and others. More advanced load-balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes, such as readiness probes, that allow you to achieve the same end result. Please review the controller-specific documentation to see how they handle health checks (for example: nginx, or GCE).
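As a sketch of that approach, the following Pod defines an HTTP readiness probe, so its endpoint is only added to the backing Service (and therefore receives Ingress traffic) while the probe succeeds; the image, path, and port here are illustrative, not prescribed by the Ingress API:

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /             # illustrative health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10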
Updating an Ingress
To update an existing Ingress to add a new Host, you can update it by editing the resource:
kubectl describe ingress test
Name: test
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:80 (10.8.0.90:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 35s loadbalancer-controller default/test
kubectl edit ingress test
This pops up an editor with the existing configuration in YAML format. Modify it to include the
new Host:
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          service:
            name: service1
            port:
              number: 80
        path: /foo
        pathType: Prefix
  - host: bar.baz.com
    http:
      paths:
      - backend:
          service:
            name: service2
            port:
              number: 80
        path: /foo
        pathType: Prefix
..
After you save your changes, kubectl updates the resource in the API server, which tells the
Ingress controller to reconfigure the load balancer.
Verify this:
kubectl describe ingress test
Name: test
Namespace: default
Address: 178.91.123.132
Default backend: default-http-backend:80 (10.8.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/foo service1:80 (10.8.0.90:80)
bar.baz.com
/foo service2:80 (10.8.0.91:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 45s loadbalancer-controller default/test
You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML
file.
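For example, assuming your edited manifest is saved as test-ingress.yaml (a hypothetical filename):

kubectl replace -f test-ingress.yaml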
Failing across availability zones
Techniques for spreading traffic across failure domains differ between cloud providers. Please
check the documentation of the relevant Ingress controller for details.
Alternatives
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
• Use Service.Type=LoadBalancer
• Use Service.Type=NodePort
What's next
• Learn about the Ingress API
• Learn about Ingress controllers
• Set up Ingress on Minikube with the NGINX Controller
Ingress Controllers
In order for an Ingress to work in your cluster, there must be an ingress controller running. You
need to select at least one ingress controller and make sure it is set up in your cluster. This page
lists common ingress controllers that you can deploy.
In order for the Ingress resource to work, the cluster must have an ingress controller running.
Unlike other types of controllers which run as part of the kube-controller-manager binary,
Ingress controllers are not started automatically with a cluster. Use this page to choose the
ingress controller implementation that best fits your cluster.
Kubernetes as a project supports and maintains AWS, GCE, and nginx ingress controllers.
Additional controllers
Note: This section links to third party projects that provide functionality required by
Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are
listed alphabetically. To add a project to this list, read the content guide before submitting a
change. More information.
• AKS Application Gateway Ingress Controller is an ingress controller that configures the Azure Application Gateway.
• Alibaba Cloud MSE Ingress is an ingress controller that configures the Alibaba Cloud Native Gateway, which is also the commercial version of Higress.
• Apache APISIX ingress controller is an Apache APISIX-based ingress controller.
• Avi Kubernetes Operator provides L4-L7 load-balancing using VMware NSX Advanced Load Balancer.
• BFE Ingress Controller is a BFE-based ingress controller.
• Cilium Ingress Controller is an ingress controller powered by Cilium.
• The Citrix ingress controller works with Citrix Application Delivery Controller.
• Contour is an Envoy-based ingress controller.
• Emissary-Ingress API Gateway is an Envoy-based ingress controller.
• EnRoute is an Envoy-based API gateway that can run as an ingress controller.
• Easegress IngressController is an Easegress-based API gateway that can run as an ingress controller.
• F5 BIG-IP Container Ingress Services for Kubernetes lets you use an Ingress to configure F5 BIG-IP virtual servers.
• FortiADC Ingress Controller supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from Kubernetes.
• Gloo is an open-source ingress controller based on Envoy, which offers API gateway functionality.
• HAProxy Ingress is an ingress controller for HAProxy.
• Higress is an Envoy-based API gateway that can run as an ingress controller.
• The HAProxy Ingress Controller for Kubernetes is also an ingress controller for HAProxy.
• Istio Ingress is an Istio-based ingress controller.
• The Kong Ingress Controller for Kubernetes is an ingress controller driving Kong Gateway.
• Kusk Gateway is an OpenAPI-driven ingress controller based on Envoy.
• The NGINX Ingress Controller for Kubernetes works with the NGINX webserver (as a proxy).
• The ngrok Kubernetes Ingress Controller is an open source controller for adding secure public access to your K8s services using the ngrok platform.
• The OCI Native Ingress Controller is an Ingress controller for Oracle Cloud Infrastructure which allows you to manage the OCI Load Balancer.
• The Pomerium Ingress Controller is based on Pomerium, which offers context-aware access policy.
• Skipper is an HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
• The Traefik Kubernetes Ingress provider is an ingress controller for the Traefik proxy.
• Tyk Operator extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
• Voyager is an ingress controller for HAProxy.
• Wallarm Ingress Controller is an Ingress Controller that provides WAAP (WAF) and API Security capabilities.
Using multiple Ingress controllers
You may deploy any number of ingress controllers using ingress classes within a cluster. Note the .metadata.name of your ingress class resource. When you create an Ingress, you need that name to specify the ingressClassName field on your Ingress object (refer to the IngressSpec v1 reference). ingressClassName is a replacement for the older annotation method.

If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then Kubernetes applies the cluster's default IngressClass to the Ingress. You mark an IngressClass as default by setting the ingressclass.kubernetes.io/is-default-class annotation on that IngressClass, with the string value "true".
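A minimal sketch of such a default IngressClass follows; the class name and controller string are illustrative values, not canonical ones:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-default-class                      # hypothetical name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller       # hypothetical controller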
Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers
operate slightly differently.
Note: Make sure you review your ingress controller's documentation to understand the caveats
of choosing it. | 682 |
What's next
• Learn more about Ingress.
• Set up Ingress on Minikube with the NGINX Controller.
Gateway API
Gateway API is a family of API kinds that provide dynamic infrastructure provisioning and advanced traffic routing. Delivered as an add-on, it makes network services available by using an extensible, role-oriented, protocol-aware configuration mechanism.
Design principles
The following principles shaped the design and architecture of Gateway API:
• Role-oriented: Gateway API kinds are modeled after organizational roles that are responsible for managing Kubernetes service networking:
  ◦ Infrastructure Provider: Manages infrastructure that allows multiple isolated clusters to serve multiple tenants, e.g. a cloud provider.
  ◦ Cluster Operator: Manages clusters and is typically concerned with policies, network access, application permissions, etc.
  ◦ Application Developer: Manages an application running in a cluster and is typically concerned with application-level configuration and Service composition.
• Portable: Gateway API specifications are defined as custom resources and are supported by many implementations.
• Expressive: Gateway API kinds support functionality for common traffic routing use cases such as header-based matching, traffic weighting, and others that were only possible in Ingress by using custom annotations.
• Extensible: Gateway allows for custom resources to be linked at various layers of the API. This makes granular customization possible at the appropriate places within the API structure.
Resource model
Gateway API has three stable API kinds:
• GatewayClass: Defines a set of gateways with common configuration and managed by a controller that implements the class.
• Gateway: Defines an instance of traffic handling infrastructure, such as a cloud load balancer.
• HTTPRoute: Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a Service.
Gateway API is organized into different API kinds that have interdependent relationships to support the role-oriented nature of organizations. A Gateway object is associated with exactly one GatewayClass; the GatewayClass describes the gateway controller responsible for managing Gateways of this class. One or more route kinds, such as HTTPRoute, are then associated to Gateways. A Gateway can filter the routes that may be attached to its listeners, forming a bidirectional trust model with routes.
The following figure illustrates the relationships of the three stable Gateway API kinds:
Figure: The relationships of the three stable Gateway API kinds
GatewayClass
Gateways can be implemented by different controllers, often with different configurations. A
Gateway must reference a GatewayClass that contains the name of the controller that
implements the class.
A minimal GatewayClass example:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
In this example, a controller that has implemented Gateway API is configured to manage
GatewayClasses with the controller name example.com/gateway-controller . Gateways of this
class will be managed by the implementation's controller.
See the GatewayClass reference for a full definition of this API kind.
Gateway
A Gateway describes an instance of traffic handling infrastructure. It defines a network
endpoint that can be used for processing traffic, i.e. filtering, balancing, splitting, etc. for
backends such as a Service. For example, a Gateway may represent a cloud load balancer or an
in-cluster proxy server that is configured to accept HTTP traffic.
A minimal Gateway resource example:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP
traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to
the Gateway by the implementation's controller. This address is used as a network endpoint for
processing traffic of backend network endpoints defined in routes.
See the Gateway reference for a full definition of this API kind.
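If you want the Gateway to request a particular address rather than have one assigned, you can set the addresses field. This is a sketch with a placeholder IP; whether a requested address can be honored depends on the implementation:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class
  addresses:
  - type: IPAddress
    value: 203.0.113.10   # placeholder address
  listeners:
  - name: http
    protocol: HTTP
    port: 80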
HTTPRoute
The HTTPRoute kind specifies routing behavior of HTTP requests from a Gateway listener to
backend network endpoints. For a Service backend, an implementation may represent the
backend network endpoint as a Service IP or the backing Endpoints of the Service. An
HTTPRoute represents configuration that is applied to the underlying Gateway
implementation. For example, defining a new HTTPRoute may result in configuring additional
traffic routes in a cloud load balancer or in-cluster proxy server.
A minimal HTTPRoute example:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-httproute
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /login
    backendRefs:
    - name: example-svc
      port: 8080
In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.
See the HTTPRoute reference for a full definition of this API kind.
Request flow
Here is a simple example of HTTP traffic being routed to a Service by using a Gateway and an
HTTPRoute:
Figure: An example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute
In this example, the request flow for a Gateway implemented as a reverse proxy is:
1. The client starts to prepare an HTTP request for the URL http://www.example.com.
2. The client's DNS resolver queries for the destination name and learns a mapping to one or more IP addresses associated with the Gateway.
3. The client sends a request to the Gateway IP address; the reverse proxy receives the HTTP request and uses the Host: header to match a configuration that was derived from the Gateway and attached HTTPRoute.
4. Optionally, the reverse proxy can perform request header and/or path matching based on match rules of the HTTPRoute.
5. Optionally, the reverse proxy can modify the request; for example, to add or remove headers, based on filter rules of the HTTPRoute.
6. Lastly, the reverse proxy forwards the request to one or more backends.
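To exercise such a route by hand, you can send a request straight to the proxy and set the Host: header yourself; the IP address below is a placeholder for whatever address the implementation assigned to the Gateway:

curl -H "Host: www.example.com" http://203.0.113.10/login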
Conformance
Gateway API covers a broad set of features and is widely implemented. This combination
requires clear conformance definitions and tests to ensure that the API provides a consistent
experience wherever it is used.
See the conformance documentation to understand details such as release channels, support
levels, and running conformance tests.
Migrating from Ingress
Gateway API is the successor to the Ingress API. However, it does not include the Ingress kind.
As a result, a one-time conversion from your existing Ingress resources to Gateway API
resources is necessary.
Refer to the ingress migration guide for details on migrating Ingress resources to Gateway API resources.
What's next
Instead of Gateway API resources being natively implemented by Kubernetes, the specifications
are defined as Custom Resources supported by a wide range of implementations . Install the
Gateway API CRDs or follow the installation instructions of your selected implementation.
After installing an implementation, use the Getting Started guide to help you quickly start
working with Gateway API.
Note: Make sure to review the documentation of your selected implementation to understand
any caveats.
Refer to the API specification for additional details of all Gateway API kinds.
EndpointSlices
The EndpointSlice API is the mechanism that Kubernetes uses to let your Service scale to
handle large numbers of backends, and allows the cluster to update its list of healthy backends
efficiently.
FEATURE STATE: Kubernetes v1.21 [stable]
Kubernetes' EndpointSlice API provides a way to track network endpoints within a Kubernetes cluster. EndpointSlices offer a more scalable and extensible alternative to Endpoints.
EndpointSlice API
In Kubernetes, an EndpointSlice contains references to a set of network endpoints. The control
plane automatically creates EndpointSlices for any Kubernetes Service that has a selector
specified. These EndpointSlices include references to all the Pods that match the Service
selector. EndpointSlices group network endpoints together by unique combinations of protocol,
port number, and Service name. The name of an EndpointSlice object must be a valid DNS subdomain name.

As an example, here's a sample EndpointSlice object that's owned by the example Kubernetes Service.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  hostname: pod-1
  nodeName: node-1
  zone: us-west2-a
By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to a maximum of 1000.
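For example, if you manage the kube-controller-manager invocation directly, raising the limit might look like this (a fragment; all other required flags are omitted):

kube-controller-manager --max-endpoints-per-slice=500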
EndpointSlices can act as the source of truth for kube-proxy when it comes to how to route
internal traffic.
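Because EndpointSlices managed for a Service carry the kubernetes.io/service-name label shown in the sample above, you can list the slices backing a particular Service; here example matches the sample manifest:

kubectl get endpointslices -l kubernetes.io/service-name=example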
Address types
EndpointSlices support three address types:
• IPv4
• IPv6
• FQDN (Fully Qualified Domain Name)
Each EndpointSlice object represents a specific IP address type. If you have a Service that is
available via IPv4 and IPv6, there will be at least two EndpointSlice objects (one for IPv4, and
one for IPv6).
Conditions
The EndpointSlice API stores conditions about endpoints that may be useful for consumers. The
three conditions are ready , serving , and terminating .
Ready
ready is a condition that maps to a Pod's Ready condition. A running Pod with the Ready condition set to True should have this EndpointSlice condition also set to true. For compatibility reasons, ready is NEVER true when a Pod is terminating. Consumers should refer to the serving condition to inspect the readiness of terminating Pods. The only exception to this rule is for Services with spec.publishNotReadyAddresses set to true. Endpoints for these Services will always have the ready condition set to true.
Serving
FEATURE STATE: Kubernetes v1.26 [stable]
The serving condition is almost identical to the ready condition. The difference is that
consumers of the EndpointSlice API should check the serving condition if they care about pod
readiness while the pod is also terminating.
Note: Although serving is almost identical to ready, it was added to prevent breaking the existing meaning of ready. It may be unexpected for existing clients if ready could be true for terminating endpoints, since historically terminating endpoints were never included in the Endpoints or EndpointSlice API to begin with. For this reason, ready is always false for terminating endpoints, and a new condition serving was added in v1.20 so that clients can track readiness for terminating pods independent of the existing semantics for ready.
Terminating
FEATURE STATE: Kubernetes v1.22 [beta]
Terminating is a condition that indicates whether an endpoint is terminating. For pods, this is
any pod that has a deletion timestamp set.
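Putting the three conditions together: a hypothetical endpoint for a Pod that is shutting down but still passing its readiness probe could report the following (an illustrative fragment of an EndpointSlice, not a complete object):

endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: false        # never true once the Pod is terminating
    serving: true       # the Pod still passes its readiness probe
    terminating: true   # the Pod has a deletion timestamp set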
Topology information
Each endpoint within an EndpointSlice can contain relevant topology information. The
topology information includes the location of the endpoint and information about the
corresponding Node and zone. These are available in the following per endpoint fields on
EndpointSlices:
• nodeName - The name of the Node this endpoint is on.
• zone - The zone this endpoint is in.
Note:
In the v1 API, the per-endpoint topology was effectively removed in favor of the dedicated fields nodeName and zone.

Setting arbitrary topology fields on the endpoint field of an EndpointSlice resource has been deprecated and is not supported in the v1 API. Instead, the v1 API supports setting individual nodeName and zone fields. These fields are automatically translated between API versions. For example, the value of the "topology.kubernetes.io/zone" key in the topology field in the v1beta1 API is accessible as the zone field in the v1 API.
Management
Most often, the control plane (specifically, the endpoint slice controller ) creates and manages
EndpointSlice objects. There are a variety of other use cases for EndpointSlices, such as service
mesh implementations, that could result in other entities or controllers managing additional
sets of EndpointSlices.
To ensure that multiple entities can manage EndpointSlices without interfering with each other, Kubernetes defines the label endpointslice.kubernetes.io/managed-by, which indicates the entity managing an EndpointSlice. The endpoint slice controller sets endpointslice-controller.k8s.io as the value for this label on all EndpointSlices it manages. Other entities managing EndpointSlices should also set a unique value for this label.
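As a sketch, an EndpointSlice created by a third-party controller might carry metadata like the following; the name and the managed-by value my-mesh.example/controller are hypothetical:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-mesh-abc                 # hypothetical name
  labels:
    kubernetes.io/service-name: example
    endpointslice.kubernetes.io/managed-by: my-mesh.example/controller   # hypothetical manager
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints: []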
Ownership
In most use cases, EndpointSlices are owned by the Service that the endpoint slice object tracks endpoints for.