<p>I initially set up a cluster with the <code>kube-up.sh</code> utility on AWS. Now I've run out of capacity and I'm looking for a simple way to extend my cluster with more nodes. </p> <p>Is there a 'simple' way to add more nodes to a running cluster on AWS? I've been reading all the docs but cannot seem to find a way. </p> <p>Thanks!</p>
<p>If you used kube-up, your minions should be launched into an auto-scaling group, and you should just be able to change the size of that auto-scaling group.</p> <p>This question is a few months old, so I'm not sure whether this was the case back in July, but it should definitely be the case now.</p> <p>I'm also working on launching the master into an auto-scaling group, so that if the master dies it is automatically restarted, but that is a little more complicated because of the persistent disk. But hopefully I'll make 1.1!</p>
<p>I'm struggling with the Kubernetes Go library. The docs--<a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/client/unversioned/doc.go">at least the ones I found</a>--appear out-of-date with the library itself. The example provided does not build because of issues with the imports. I'm just trying to do something simple: get a Service object by name and print some attributes (like nodePort). I just need a simple example of library usage to get me going.</p> <p>I could easily do this using the RESTful API but that feels like re-inventing the wheel.</p>
<p>So after a little experimentation and a hint from the k8s Slack channel, I have this example. Perhaps someone can update the example with a proper import path.</p> <pre><code>package main

import (
    "fmt"
    "log"

    "github.com/kubernetes/kubernetes/pkg/api"
    client "github.com/kubernetes/kubernetes/pkg/client/unversioned"
)

func main() {
    config := client.Config{
        Host: "http://my-kube-api-server.me:8080",
    }
    c, err := client.New(&amp;config)
    if err != nil {
        log.Fatalln("Can't connect to Kubernetes API:", err)
    }

    s, err := c.Services(api.NamespaceDefault).Get("some-service-name")
    if err != nil {
        log.Fatalln("Can't get service:", err)
    }

    fmt.Println("Name:", s.Name)
    for p := range s.Spec.Ports {
        fmt.Println("Port:", s.Spec.Ports[p].Port)
        fmt.Println("NodePort:", s.Spec.Ports[p].NodePort)
    }
}
</code></pre>
<p>I'm busy testing out kubernetes on my local pc using <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md</a> </p> <p>which launches a dockerized single node <code>k8s</code> cluster. I need to run a privileged container inside k8s (it runs docker in order to build images from dockerfiles). What I've done so far is add a security context <code>privileged=true</code> to the pod config which returns forbidden when trying to create the pod. I know that you have to enable privileged on the node with <code>--allow-privileged=true</code> and I've done this by adding the parameter arg to step two (running the master and worker node) but it still returns forbidden when creating the pod. </p> <p>Anyone know how to enable privileged in this dockerized k8s for testing?</p> <p>Here is how I run the <code>k8s</code> master:</p> <pre><code>docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests </code></pre>
<p>Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes. </p> <hr> <p>To enable privileged containers, you need to pass the <code>--allow-privileged</code> flag to the Kubernetes apiserver <em>in addition</em> to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/master.json" rel="nofollow">master.json</a>), but you can make a local copy of that file, add the <code>--allow-privileged=true</code> flag to the apiserver command line, and then change the <code>--config</code> flag you pass to the Kubelet in Step Two to a directory containing your modified file. </p>
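<p>For reference, here is a minimal sketch of a pod manifest that requests privileged mode once both flags are in place (the pod and image names below are placeholders, not taken from the question):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dind-builder            # placeholder name
spec:
  containers:
  - name: builder
    image: my-builder-image     # placeholder image that needs privileged mode
    securityContext:
      privileged: true          # rejected unless --allow-privileged=true on apiserver and kubelet
</code></pre>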
<p>I reinstalled some nodes and a master. Now on the master I am getting:</p> <blockquote> <p>Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate</p> </blockquote> <p>Where $ip is one of the nodes.</p> <p>So I likely need to delete or recreate certificates. What would the location of those be? Any recommended commands to recreate or remove those or copy them from node to master or vice versa? Whatever gets me past this error message...</p>
<p>Take a look through the <a href="https://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">Creating Certificates</a> section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster. </p>
<p>I see in the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.21.3/docs/volumes.md">docs</a> how to do this for pods, but I want to use a replication controller to manage my pods, ensuring that there is always one up at all times.</p> <ol> <li><p>How can I define a replication controller where the pod being run has a persistent volume?</p></li> <li><p>How is this related to Kubernetes persistentVolumes and persistentVolumeClaims?</p></li> </ol>
<p>Using a persistent volume in a Replication Controller works great for shared storage. You include a persistentVolumeClaim in the RC's pod template. Each pod will use the same claim, which means it's shared storage. This also works for read-only access in gcloud if your Replica count > 1.</p> <p>If you wanted distinct volumes per pod, you currently have to create many RCs with Replicas=1 and with distinct persistentVolumeClaims.</p> <p>We're working out a design for scaling storage through an RC where each pod gets its own volume instead of sharing the same claim.</p>
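<p>To illustrate, here is a rough sketch of a Replication Controller whose pod template mounts a persistentVolumeClaim (the claim name, labels and image are made up for the example):</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: web                     # example name
spec:
  replicas: 2                   # both replicas share the same claim
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx            # example image
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: myclaim    # an existing PersistentVolumeClaim
</code></pre>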
<p>I'm using a SimpleProducer in the python kafka-library. This script has worked flawlessly previously with other more hard-configured kafka-setups I've tried.</p> <pre><code>kafka = KafkaClient(u'[masterNodeIp]:[servicePort]') producer = SimpleProducer(kafka) #make a simple message, while true run producer.send_messages(b'oneMoreTopic', sentence) </code></pre> <p>After running this script once, I get this response in the python-console.</p> <pre><code>kafka.common.LeaderNotAvailableError: TopicMetadata(topic='oneMoreTopic', error=5, partitions=[]) </code></pre> <p>I can then go into my Node on my zookeeper.log and see:</p> <pre><code>2015-09-14 12:16:32,276 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:setData cxid:0x71 zxid:0x1000000d8 txntype:-1 reqpath:n/a Error Path:/config/topics/oneMoreTopic Error:KeeperErrorCode = NoNode for /config/topics/oneMoreTopic 2015-09-14 12:16:32,278 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x72 zxid:0x1000000d9 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 2015-09-14 12:16:32,302 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7b zxid:0x1000000dc txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions/0 2015-09-14 12:16:32,304 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7c zxid:0x1000000dd txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions </code></pre> <p>This seems to just be the Zookeeper creating a new Znode for the topic since it is not there from before. And the Kafka server.log prints:</p> <pre><code>[2015-09-14 12:16:32,282] INFO Topic creation {"version":1,"partitions":{"0":[10200119]}} (kafka.admin.AdminUtils$) [2015-09-14 12:16:32,287] INFO [KafkaApi-10200219] Auto creation of topic oneMoreTopic with 1 partitions and replication factor 1 is successful! (kafka.server.KafkaApis) [2015-09-14 12:16:51,579] INFO Closing socket connection to /10.240.1.94. (kafka.network.Processor) </code></pre> <p>However, my message is never posted to the topic and the next time i run the python-script i always get:</p> <pre><code>kafka.common.FailedPayloadsError </code></pre> <p>In the cases where i made it work, the advertised.host.name was always the external IP of the node, but I can't seem to get that working through Kubernetes. Would it be possible to make the external IP callable from the container parhaps?</p> <p>my kafka/config/server.properties looks something like this for all brokers:</p> <pre><code>broker.id=10200121 host.name=kafka-f8p06 advertised.host.name=kafka-f8p06 ++ </code></pre>
<pre><code>broker.id=10200121
host.name=kafka-f8p06            &lt;----- use IP here
advertised.host.name=kafka-f8p06 &lt;----- use IP here
</code></pre> <p>I think you should have IPs for <code>host.name</code> and <code>advertised.host.name</code>, as K8s does not resolve Pods by hostname but it does by IP.</p> <p>Your Kafka nodes probably can't talk to each other that way and can't find the leader.</p>
<p>We are trying to configure a Kubernetes RC on an AWS instance with AWS Elastic Block Store (EBS). Here is the key part of our controller yaml file:</p> <pre><code>volumeMounts:
  - mountPath: "/opt/phabricator/repo"
    name: ebsvol
volumes:
  - name: ebsvol
    awsElasticBlockStore:
      volumeID: aws://us-west-2a/vol-*****
      fsType: ext4
</code></pre> <p>Our RC can start the pod and works fine without mounting an AWS EBS volume, but with the EBS volume mount it gives us:</p> <pre><code>Fri, 11 Sep 2015 11:29:14 +0000   Fri, 11 Sep 2015 11:29:34 +0000   3   {kubelet 172.31.24.103}   failedMount   Unable to mount volumes for pod "phabricator-controller-zvg7z_default": error listing AWS instances: NoCredentialProviders: no valid providers in chain
Fri, 11 Sep 2015 11:29:14 +0000   Fri, 11 Sep 2015 11:29:34 +0000   3   {kubelet 172.31.24.103}   failedSync    Error syncing pod, skipping: error listing AWS instances: NoCredentialProviders: no valid providers in chain
</code></pre> <p>We have a credentials file with appropriate credentials in the .aws directory, but it's not working. Are we missing something? Is it a configuration issue?</p> <p>Kubectl version: 1.0.4 and 1.0.5 (tried with both)</p>
<p>We opened an issue to discuss this: <a href="https://github.com/kubernetes/kubernetes/issues/13858" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/13858</a></p> <p>The recommended way to go here is to use IAM instance profiles. kube-up does configure this for you, and if you're not using kube-up I recommend looking at it to emulate what it does!</p> <p>Although we did recently merge in support for using a .aws credentials file, I don't believe it has been back-ported into any release, and it isn't really the way I (personally) recommend.</p> <p>It sounds like you're not using kube-up; you may find it easier if you can use that (and I'd love to know if there's some reason you can't or don't want to use kube-up, as I personally am working on an alternative that I hope will meet everyone's needs!)</p> <p>I'd also love to know if IAM instance profiles aren't suitable for you for some reason.</p>
<p>I am working on a Java application which deploys web artifacts in Apache Tomcat Docker Containers with the use of Google Kubernetes. I am using <a href="https://github.com/spotify/docker-client" rel="nofollow">https://github.com/spotify/docker-client</a> in order to carry out Docker Image and Container handling activities and <a href="https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api" rel="nofollow">https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api</a> for Kubernetes related functionalities.</p> <p>In this application, I have added a functionality which enables the user to remove a web artifact which the user deploys.</p> <p>When removing I,</p> <ol> <li><p>delete the Kubernetes replication controller which I use to generate the desired number of pod replicas</p></li> <li><p>separately delete off the replica pods (as pods are not deleted automatically when the replication controller is deleted in the corresponding method in the Java API) </p></li> <li><p>delete off the corresponding Service created </p></li> <li><p>delete off the Docker Containers corresponding to the pods deleted off</p></li> <li><p>finally, remove the Docker Image used for the deployment</p></li> </ol> <p>Following code shows the removal functionality implemented:</p> <pre><code>public boolean remove(String tenant, String appName) throws WebArtifactHandlerException { String componentName = generateKubernetesComponentName(tenant, appName); final int singleImageIndex = 0; try { if (replicationControllerHandler.getReplicationController(componentName) != null) { String dockerImage = replicationControllerHandler.getReplicationController(componentName).getSpec() .getTemplate().getSpec().getContainers().get(singleImageIndex).getImage(); List&lt;String&gt; containerIds = containerHandler.getRunningContainerIdsByImage(dockerImage); replicationControllerHandler.deleteReplicationController(componentName); podHandler.deleteReplicaPods(tenant, appName); serviceHandler.deleteService(componentName); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); containerHandler.deleteContainers(containerIds); imageBuilder.removeImage(tenant, appName, getDockerImageVersion(dockerImage)); return true; } else { return false; } } catch (Exception exception) { String message = String.format("Failed to remove web artifact[artifact]: %s", generateKubernetesComponentName(tenant, appName)); LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } </code></pre> <p>Implementation of the Docker Container deletion functionality is as follows:</p> <pre><code>public void deleteContainers(List&lt;String&gt; containerIds) throws WebArtifactHandlerException { try { for (String containerId : containerIds) { dockerClient.removeContainer(containerId); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); } } catch (Exception exception) { String message = "Could not delete the Docker Containers."; LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } </code></pre> <p>In the above case although the execution of the desired function takes place without any sort of issue, at certain instances I tend to get the following exception.</p> <pre><code>Sep 11, 2015 3:57:28 PM org.apache.poc.webartifact.WebArtifactHandler remove SEVERE: Failed to remove web artifact[artifact]: app-wso2-com org.apache.poc.miscellaneous.exceptions.WebArtifactHandlerException: Could not delete the Docker Containers. 
at org.apache.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:80) at org.apache.poc.webartifact.WebArtifactHandler.remove(WebArtifactHandler.java:206) at org.apache.poc.Executor.process(Executor.java:222) at org.apache.poc.Executor.main(Executor.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) Caused by: com.spotify.docker.client.DockerRequestException: Request error: DELETE unix://localhost:80/v1.12/containers/af05916d2bddf73dcf8bf41c6ea7f5f3b859c90b97447a8248ffa7b5b3968691: 409 at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1061) at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1021) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:544) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:535) at org.wso2.carbon6.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:74) ... 8 more Caused by: com.spotify.docker.client.shaded.javax.ws.rs.ClientErrorException: HTTP 409 Conflict at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:991) at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:975) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:795) at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:91) at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:756) at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:189) at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:74) at org.glassfish.jersey.client.ClientRuntime$1.run(ClientRuntime.java:171) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:320) at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:201) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) </code></pre> <p>I searched a large number of sources for any help for this but still I wasn't able to avoid it in all instances, I execute this functionality. </p> <p>At the beginning I tended to get this issue more often than now, but allowing the executing thread to sleep at the end of deleting each Docker Container and before deleting any Docker Containers, gradually reduced the number of instances I am getting this issue. 
</p> <p>Is sleeping the thread the ultimate solution for this issue, or is there another reason why it pops up and a solution that can help me avoid this exception? Any help is greatly appreciated. </p>
<p>Unfortunately I'm not familiar with the Java client library.</p> <p>My suggestion would be to try using the regular command-line client (kubectl). If that works, then you know the problem is in the Java client library or your usage of it. If using the command line client doesn't work, then there will be more people who can help you (since a lot more people are familiar with the command-line client than with the Java client library).</p> <p>In other words:</p> <pre><code>% kubectl delete pods ...      # --cascade=true by default
% kubectl delete services ...
</code></pre> <p>I'm curious why you need steps (4) and (5). Step (4) should happen automatically when you delete the pod, and step (5) should happen automatically in the background.</p> <p>If the two lines of "kubectl delete" work, then the problem is with the Java client library or your usage of it. As a starting point I would suggest removing the calls to deleteContainers() and removeImage() from your Java code and see if that helps. I think those steps are unnecessary.</p>
<p>I use DNS in Kubernetes, and the test result looks like this:</p> <pre><code>core@core-1-86 ~ $ kubectl exec busybox -- nslookup kubernetes
Server:    10.100.0.10
Address 1: 10.100.0.10

Name:      kubernetes
Address 1: 10.100.0.1
</code></pre> <p>And then I entered the busybox container and pinged kubernetes, like:</p> <pre><code>core@core-1-86 ~ $ kubectl exec -it busybox sh
/ # ping kubernetes
PING kubernetes (10.100.0.1): 56 data bytes
^C
--- kubernetes ping statistics ---
55 packets transmitted, 0 packets received, 100% packet loss
/ #
</code></pre> <p>If I ping another IP, it's OK!</p> <pre><code>/ # ping 10.12.1.85
PING 10.12.1.85 (10.12.1.85): 56 data bytes
64 bytes from 10.12.1.85: seq=0 ttl=63 time=0.262 ms
64 bytes from 10.12.1.85: seq=1 ttl=63 time=0.218 ms
^C
--- 10.12.1.85 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.218/0.240/0.262 ms
/ #
</code></pre> <p>Who can help me and tell me why?</p>
<p>The kubernetes service is a virtual IP and doesn't currently handle ICMP requests (see #<a href="https://github.com/kubernetes/kubernetes/issues/2259" rel="nofollow">2259</a>). You should be able to verify connectivity to the kubernetes service using a TCP connection, e.g. <code>curl https://kubernetes/</code>. </p>
<p>I have a kubernetes service called <code>staging</code> that selects all <code>app=jupiter</code> pods. It exposes an HTTP service on port 1337. Here's the describe output:</p> <pre><code>$ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: &lt;unnamed&gt; 1337/TCP NodePort: &lt;unnamed&gt; 30421/TCP Endpoints: 10.8.0.21:1337 Session Affinity: None No events. </code></pre> <p>But when I run a <code>kubectl rolling-update</code> on the RC, which removes the 1 pod running the application and adds another, and run describe again, I get:</p> <pre><code>$ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: &lt;unnamed&gt; 1337/TCP NodePort: &lt;unnamed&gt; 30421/TCP Endpoints: 10.8.0.22:1337 Session Affinity: None No events. </code></pre> <p>Everything is the same, except for the Endpoint IP address. In fact, it goes up by 1 every time I do this. This is the one thing I expected not to change, since services are an abstraction over pods, so they shouldn't change when the pods change.</p> <p>I know you can hardcode the endpoint address, so this is more of a curiosity.</p> <p>Also, can anyone tell me what the <code>IP</code> field in the describe output is for?</p>
<p>IP is the address of your service, which remains constant over time. Endpoints is the collection of backend addresses across which requests to the service address are spread at a given point in time. That collection changes every time the set of pods comprising your service changes, as you've noticed when performing a rolling update on your replication controller (RC).</p>
<p>I have a Kubernetes cluster that was initialized using the <code>kube-up.sh</code> script inside AWS, and occasionally there's a very slow DNS lookup when finding one service from inside another pod. Here's the basic picture:</p> <pre><code> (browser) | V (ELB) | V (front-end service) | V (front-end pod) | V (back-end service) | V (back-end pod) | V (database) </code></pre> <p>I have timing logging installed at the front-end and back-end levels, and their numbers are wildly divergent for <strong>some requests</strong>. Occasionally we'll see a request that the FE nginx logging says takes 8.3 seconds, but the back-end gunicorn process says takes 30ms.</p> <p>I can <code>exec</code> into the FE pod and do a <code>curl</code> to the backend endpoint to get timing data according to the example in <a href="https://josephscott.org/archives/2011/10/timing-details-with-curl/" rel="nofollow">this blog post</a>, and it looks like this:</p> <pre><code> time_namelookup: 3.513 time_connect: 3.513 time_appconnect: 0.000 time_pretransfer: 3.513 time_redirect: 0.000 time_starttransfer: 3.520 ---------- time_total: 3.520 </code></pre> <p>So the slowness seems to be coming from DNS. We have a separate cluster set up for staging, and this sort of thing doesn't seem to be happening there, so I'm not sure what to make of it. Most requests happen in a reasonable amount of time, less than 50ms, but every tenth one or so takes multiple seconds to resolve.</p> <p>I found <a href="https://groups.google.com/forum/#!topic/google-containers/oxVw35elJj8" rel="nofollow">this thread</a> that made it sound like SkyDNS's use of etcd might be the problem, but I'm not sure how to verify that or fix it. And this is happening <em>way</em> too often to be periodic missing configuration values (our traffic isn't that high).</p>
<p>There was a bug that was fixed here (<a href="https://github.com/kubernetes/kubernetes/pull/13345" rel="nofollow">https://github.com/kubernetes/kubernetes/pull/13345</a>) that has been shown to cause this problem in Kubernetes clusters 1.0.5 and older. The problem is fixed in the <a href="https://github.com/kubernetes/kubernetes/releases" rel="nofollow">1.0.6 release</a>.</p>
<p>I have a Kubernetes service on GKE as follows:</p> <pre><code>$ kubectl describe service staging Name: staging Namespace: default Labels: &lt;none&gt; Selector: app=jupiter Type: NodePort IP: 10.11.246.27 Port: &lt;unnamed&gt; 80/TCP NodePort: &lt;unnamed&gt; 31683/TCP Endpoints: 10.8.0.33:1337 Session Affinity: None No events. </code></pre> <p>I can access the service from a VM directly via one of its endpoints (<code>10.8.0.21:1337</code>) or via the node port (<code>10.240.251.174:31683</code> in my case). However, if I try to access <code>10.11.246.27:80</code>, I get nothing. I've also tried ports 1337 and 31683.</p> <p>Why can't I access the service via its IP? Do I need a firewall rule or something?</p>
<p>Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver).</p> <p>Pod IPs on GCE/GKE are managed by GCE <a href="https://cloud.google.com/compute/docs/networking?hl=en#routes" rel="nofollow noreferrer">Routes</a>, which is more like an "underlay" of all VMs in the network.</p> <p>There are a couple of ways to access non-public services from outside the cluster. <a href="https://stackoverflow.com/questions/31664060/how-to-call-a-service-exposed-by-a-kubernetes-cluster-from-another-kubernetes-cl/31665248#31665248">Here</a> they are in more detail, but in short:</p> <ol> <li>Create a bastion GCE route for your cluster's services.</li> <li>Install your cluster's kube-proxy anywhere you want to access the cluster's services.</li> </ol>
<p>The <code>kubernetes</code> service is in the <code>default</code> namespace. I want to move it to <code>kube-system</code> namespace. So I did it as follow:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl get svc kubernetes -o yaml &gt; temp.yaml </code></pre> <p>This generates <code>temp.yaml</code> using current <code>kubernetes</code> service information. Then I changed the value of namespace to <code>kube-system</code> in <code>temp.yaml</code>. Lastly, I ran the following command:</p> <pre class="lang-sh prettyprint-override"><code>kubectl replace -f temp.yaml </code></pre> <p>But I got the error:</p> <pre><code>Error from server: error when replacing &quot;temp.yaml&quot;: service &quot;kubernetes&quot; not found </code></pre> <p>I think there is no service named <code>kubernetes</code> in the <code>kube-system</code> namespace.</p> <p>Who can tell me how can to do this?</p>
<p>Name and namespace are immutable on objects. When you try to change the namespace, <code>replace</code> looks for the service in the new namespace in order to overwrite it. You should be able to do <code>create -f ...</code> to create the service in the new namespace</p>
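<p>As a rough sketch, the edited manifest might look like the following (the ports shown are illustrative, not the exact contents of your temp.yaml); auto-generated fields such as <code>clusterIP</code>, <code>resourceVersion</code> and <code>uid</code> may also need to be removed before running <code>create -f</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: kube-system   # the new namespace
spec:
  ports:
  - port: 443              # illustrative; keep the ports from your exported spec
    protocol: TCP
    targetPort: 443
</code></pre>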
<p>We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses.</p> <p>On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).</p> <p>The issue that we face is that we wish to be able to connect to this ClusterIP from <em>outside</em> the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...</p> <p>How can we achieve this? </p> <hr> <p>The IP addresses of the nodes are:</p> <pre><code> node 1 - 192.168.77.102 node 2 - 192.168.77.103 </code></pre> <p>.</p> <p>and this is how the Service, RC and Pod appear with kubectl:</p> <pre><code>NAME LABELS SELECTOR IP(S) PORT(S) elasticsearch &lt;none&gt; app=elasticsearch 10.99.44.10 9200/TCP CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS elasticsearch elasticsearch elasticsearch app=elasticsearch 1 NAME READY STATUS RESTARTS AGE elasticsearch-swpy1 1/1 Running 0 26m </code></pre>
<p>You need to set the <code>type</code> of your Service.</p> <p><a href="http://docs.k8s.io/v1.0/user-guide/services.html#external-services" rel="nofollow">http://docs.k8s.io/v1.0/user-guide/services.html#external-services</a></p> <p>If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.</p>
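<p>For example, a minimal NodePort service for the elasticsearch pods above might look like this (the nodePort value is just an example; if omitted, one is allocated from the node-port range automatically):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 30920   # example; then reachable on every node, e.g. 192.168.77.102:30920
</code></pre>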
<p>I need to encrypt the data on a block device and allow the Pod to access it as a volume.</p> <p>I noticed its now possible on Google cloud to encrypt a new disk using <a href="https://cloud.google.com/compute/docs/disks/customer-supplied-encryption" rel="nofollow">Customer-Supplied Encryption Keys</a></p> <p>Can I use self encrypted disk with Kubernetes and attach it to the Pod as volume?</p> <p>If not, is there any other way to encrypt block device (for example LUKS) and use it with Pods?</p>
<p>My reading of the Google docs (<a href="https://cloud.google.com/compute/docs/disks/customer-supplied-encryption" rel="nofollow">https://cloud.google.com/compute/docs/disks/customer-supplied-encryption</a>) is that no key is required to mount the disk. The keys are only provided at disk creation time.</p> <p>So, the following should work without changes to kubernetes:</p> <ol> <li>create encrypted disk "myencrypteddisk" per <a href="https://cloud.google.com/compute/docs/disks/customer-supplied-encryption" rel="nofollow">https://cloud.google.com/compute/docs/disks/customer-supplied-encryption</a></li> <li>create a pod which mounts a GCEPD called "myencrypteddisk" (see the sketch below).</li> <li>kubelet will mount the disk on the VM. Its compute scope should be enough to perform the mount, IIUC.</li> </ol>
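<p>A minimal sketch of step 2, i.e. a pod that mounts the pre-created encrypted disk as a GCE persistent disk volume (the pod name, image and mount path are arbitrary examples):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: encrypted-disk-pod      # example name
spec:
  containers:
  - name: app
    image: nginx                # example image
    volumeMounts:
    - name: encrypted-vol
      mountPath: /data
  volumes:
  - name: encrypted-vol
    gcePersistentDisk:
      pdName: myencrypteddisk   # the disk created with a customer-supplied key
      fsType: ext4
</code></pre>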
<p>So I am hesitant to ask as a newbie but I have hit a wall. I am following:</p> <p><a href="http://www.projectatomic.io/docs/gettingstarted/" rel="nofollow noreferrer">http://www.projectatomic.io/docs/gettingstarted/</a></p> <p>Using fedora atomic host 22 latest.</p> <p>I had trouble getting the system up with some of the port settings and with the api string. I was able to get all my services running on the master and my three minions. Kubelet and kube-proxy are failing to connect to the apiserver. I am able to reach the server from curl but the api paths return:</p> <p><a href="http://cas-vm-atomic-m:8080/api/v1beta3" rel="nofollow noreferrer">http://cas-vm-atomic-m:8080/api/v1beta3</a></p> <pre><code>{ "kind": "Status", "apiVersion": "v1beta3", "metadata": {}, "status": "Failure", "message": "the server could not find the requested resource", "reason": "NotFound", "details": {}, "code": 404 } KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" </code></pre> <p>I have turned up the logging. I have tried a variety of setting for KUBE_ADMISSION_CONTROL. I think my problem is on the master and with the apiserver being up but not serving working correctly. kubectl does return my three nodes and services and endpoints. But the nodes stay in NotReady status. The node are attempting to move out of NotReady but can't reach the apiserver to do so.</p> <p>I am kinda of bummed that the newbie getting started howto has been so difficult. Though I guess educational. I have the logging set to 3 but now I mostly see the kube-proxy requests failing with 404 errors. Any ideas?</p> <p>If this is the wrong place for this please let me know.</p>
<p>That guide probably needs to be updated, given that the kubernetes v1beta3 api was deprecated in <a href="https://github.com/kubernetes/kubernetes/pull/11737" rel="nofollow">July</a>. I suspect you're running a recent build of the apiserver (which supports only the v1 api), but older builds of kube-proxy/kubelet.</p> <p>I'd recommend following one of the getting started guides from <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/README.html" rel="nofollow">kubernetes.io/v1.0/docs/getting-started-guides</a>, as those are pretty stable and have dedicated maintainers. e.g. the <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.html" rel="nofollow">flannel on fedora</a> guide sounds pretty close to what you're setting up and having trouble with.</p>
<p>Both the Kubernetes <a href="http://kubernetes.io/v1.0/docs/admin/high-availability.html" rel="nofollow">HA guide</a> and the <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/scratch.html" rel="nofollow">From Scratch guide</a> recommend running Etcd, kube-apiserver, kube-controller-manager, and kube-scheduler in containers. The idea of self-hosting Kubernetes on Kubernetes goes back quite a while (see PR 167 on K8s github and issues/PRs linked there), but I haven't found a discussion about why this approach is so beneficial that it should be the 'recommended' way. Here are the benefits and drawbacks as I see them currently:</p> <p>Benefits:</p> <ul> <li>Potentially easy upgrade path to just update manifests and have kubelet pull new images.</li> <li>"Container advantages": binary environment and the host environment separate, leverage others' existing images, etc.</li> <li>Follows the whole Kubernetes pattern, so 'fits the brain' once you are using that pattern extensively.</li> </ul> <p>Drawbacks:</p> <ul> <li>Increased installation/configuration complexity in some cases. For example, if your Etcd cluster is separate from your Kubernetes nodes, you now have to install Docker (with possible storage changes depending on Linux distro), kubelet, and Etcd. Without using containerized Etcd, you just have that one binary to install.</li> <li>Increased complexity at run time: With more moving parts, any bug in Docker or kubelet may be able to render critical components non-functional.</li> </ul> <p>I'm new to Kubernetes (and containers) and feel like I might be missing advantages (or underestimating their value) when compared to the extra complexity it introduces. But I also have to choose once way to try. Why are containerized master components the recommended way to run Kubernetes despite the extra complexity?</p>
<p>The biggest benefit is streamlined setup for most people. Running a few <code>docker run</code> commands is way easier than downloading binaries, unpacking, fine-tuning init scripts (which are different on every distro), running a supervisor, etc. We have a pretty good process manager - relying on that is powerful.</p> <p>We also don't recommend sharing etcd, so if you're doing that you are already off the beaten path.</p> <p>Overall, containerized components are vastly simpler than the alternatives for most people.</p>
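<p>As an illustration of what a containerized master component looks like in practice, here is a rough sketch of a static-pod manifest that a kubelet could pick up from its manifest directory (the image tag and flags are illustrative placeholders, not a recommended configuration):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.0.1   # placeholder tag
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001             # placeholder flags
    - --service-cluster-ip-range=10.0.0.0/16
    - --allow-privileged=true
</code></pre>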
<p>I would like to achieve the following functionality:</p> <ul> <li>when a given pod (let's call it application pod) is deployed on a node, another pod providing an ephemeral volume is deployed before that, if such "volume pod" has not existed on the target node yet </li> <li>the number of application pods can be scaled up and down, and all application pods on the same node share the single volume pod</li> </ul> <p>The first requirement assumes a kind of dependency definition among pods (just like it can be done among Marathon apps in case of Marathon).</p> <p>The second requirement assumes that an ephemeral volume created in a container in a pod can be attached to other container(s) in other pod(s).</p> <p>It is important that the volume is ephemeral (i.e. there is no host directory or attached storage, that could be mapped to the application). Also, it is important that it is not on GCE.</p> <p>Please advise how such a setup can be achieved with Kubernetes.</p> <p>I think such dynamic, dependency-based deployment would be welcomed by everyone. Also, sharing ephemeral volumes (e.g. files stored on a tmpfs volume, once such volume is supported by Docker) may be interesting for others, too.</p>
<p>We do not support dependencies at the moment. In the future we will support a "daemon" scheduler which can run a pod on every node, but not one that runs only if some other pod is being scheduled to a node. In the future we might support existence dependencies, but that's more for creation: create Pod P iff Service S exists.</p> <p>We also do not support refcounted local storage, which seems to be what you're proposing. There are a lot of very unclear semantics in your quick sketch. In order to consider something as complex as this, we would have to really think through the corner cases. If this is something you want us to consider, you're welcome to file a proposal on GitHub with use cases and details.</p> <p>In the meantime, it sounds like you want a workflow manager and hostPath volumes.</p>
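<p>For completeness, here is a minimal sketch of the hostPath approach mentioned above, where pods scheduled to the same node share a directory on that node (the path, names and image are arbitrary examples; note this is node-local storage rather than the purely ephemeral shared volume asked about):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-pod                 # example name
spec:
  containers:
  - name: app
    image: busybox              # example image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    hostPath:
      path: /var/lib/shared-scratch   # example directory on the node
</code></pre>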
<p>I'm trying to setup Kubernetes in Openstack + CoreOS. </p> <p>I have master 10.240.63.84 and 2 minions .63 and .83. I also created 3 redis pods:</p> <pre><code>redis-gopher-gziey 10.244.32.2 10.240.63.66/10.240.63.66 redis-managed-oh43e 10.244.32.3 10.240.63.66/10.240.63.66 redis-primary-fplln 10.244.54.2 10.240.63.83/10.240.63.83 </code></pre> <p>master's routing table looks like:</p> <pre><code>10.240.63.0 * 255.255.255.0 U 0 0 0 eth0 10.240.63.1 * 255.255.255.255 UH 1024 0 0 eth0 10.244.0.0 * 255.255.0.0 U 0 0 0 flannel.1 10.244.50.0 * 255.255.255.0 U 0 0 0 docker0 </code></pre> <p>and output of ifconfig -a is :</p> <pre><code>docker0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 10.244.50.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::542f:6fff:fe4a:adf3 prefixlen 64 scopeid 0x20&lt;link&gt; ether 56:84:7a:fe:97:99 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1 bytes 90 (90.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 10.240.63.84 netmask 255.255.255.0 broadcast 10.240.63.255 inet6 fe80::f816:3eff:fe89:e9a0 prefixlen 64 scopeid 0x20&lt;link&gt; ether fa:16:3e:89:e9:a0 txqueuelen 1000 (Ethernet) RX packets 430706 bytes 559764129 (533.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 238519 bytes 116083693 (110.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.50.0 netmask 255.255.0.0 broadcast 0.0.0.0 inet6 fe80::601f:62ff:feed:1556 prefixlen 64 scopeid 0x20&lt;link&gt; ether 62:1f:62:ed:15:56 txqueuelen 0 (Ethernet) RX packets 20 bytes 1504 (1.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 79 bytes 7686 (7.5 KiB) TX errors 0 dropped 19 overruns 0 carrier 0 collisions 0 </code></pre> <p>Flanneld config used for initialization is:</p> <p>Master:</p> <pre><code> - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld --ip-masq=true --iface=eth0 </code></pre> <p>Minion:</p> <pre><code> - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld -etcd-endpoints http://10.240.63.84:4001 --ip-masq=true --iface=eth0 </code></pre> <p>So the issue is that i can't ping any of the pods from master, as well as connect to any port, error is:</p> <pre><code>ncat -v -t 10.244.32.2 6379 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: No route to host. </code></pre>
<p>This sort of thing is hard to debug remotely. Things I would check:</p> <p>1) on the sender: <code>iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c &gt; /dev/null; ncat -v -t 10.244.32.2 6379; dmesg;</code></p> <p>This will give you some insight into what the kernel is doing.</p> <p>2) on the sender: <code>tcpdump -i any host 10.244.32.2 &amp; ncat -v -t 10.244.32.2 6379;</code></p> <p>This will give a bit more insight.</p> <p>3) on the receiver: <code>iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c &gt; /dev/null; ncat -v -t 10.244.32.2 6379; dmesg;</code></p> <p>This will tell you if the packet came through the encapsulation.</p> <p>You basically need to prove the plumbing through the whole connection.</p>
<p>Kubernetes creates a load balancer for each service automatically on GCE. How can I manage something similar on AWS with juju?</p> <p>Kubernetes services basically use kube-proxy to handle the internal traffic, but that kube-proxy IP does not have access to the external network.</p> <p>Is there a way to accomplish this when deploying a Kubernetes cluster with juju?</p>
<p>I can't speak to juju specifically, but Kubernetes supports Amazon ELB - turning up a load-balancer should work.</p>
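<p>For instance, a rough sketch of a service that asks the AWS cloud provider to provision an ELB (the name, selector and ports are examples, assuming the cloud provider is configured on the cluster):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-frontend             # example name
spec:
  type: LoadBalancer            # on AWS this provisions an ELB for the service
  selector:
    app: frontend               # example selector
  ports:
  - port: 80
    targetPort: 8080
</code></pre>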
<p>I can't find out the customer's real IP address when an apache-php environment runs in a Google container. Without modifying anything, I get an IP address from the container address range. When using mod_remoteip, I can add</p> <pre><code>RemoteIPHeader X-Client-IP
RemoteIPInternalProxyList ournet/proxy-list
</code></pre> <p>and add rows to the "proxy-list" file:</p> <pre><code>10.240.0.0/16 # google internal network
10.244.0.0/14 # Cluster address range
</code></pre> <p>Only the row 10.244.0.0/14 gives any result. In this case I get the cluster node's IP from the 10.240.0.0/16 network as the REMOTE_ADDR value. </p> <p>It seems that the node itself acts as a forwarder without adding the needed headers to the request, or am I looking at it from a totally wrong perspective?</p>
<p>Some traffic is masqueraded, but it is done at L3, rather than L7, so there's no way to add a header. :(</p> <p>This will get better soon for in-cluster traffic, but we have to wait for cloud load-balancers to catch up before we can handle out-of-cluster traffic properly.</p>
<p>I've searched for the answer but didn't find it anywhere. Is it possible to share a service between multiple namespaces?</p> <p>For instance, if I have 2 namespaces (let's say 'qa' and 'dev'), is it possible to use the same database server? The database server would preferably be managed by Kubernetes too.</p> <p>I've read this issue: <a href="https://github.com/openshift/origin/issues/1244" rel="noreferrer">https://github.com/openshift/origin/issues/1244</a> But it's not directly related to Kubernetes.</p> <p>Regards, Smana</p>
<p>Services are accessible from all namespaces as long as you address them using both the name and the namespace.</p> <p>For example, if you have a service named <code>db</code> in namespace <code>dev</code>, you can access it using the DNS name <code>db</code>. While this won't work from <code>qa</code>, you can access it from both <code>qa</code> and <code>dev</code> if you instead use the DNS name <code>db.dev</code> (<code>&lt;service&gt;.&lt;namespace&gt;</code>), in order to clarify which namespace should be searched for the service.</p>
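<p>As a small illustration, assuming the <code>db</code> service in the <code>dev</code> namespace from the example above, a pod in <code>qa</code> could reference it through plain configuration such as an environment variable:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app                     # example name
  namespace: qa
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sleep", "3600"]
    env:
    - name: DB_HOST
      value: db.dev             # the service.namespace DNS name resolved by the cluster DNS
</code></pre>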
<p>I created a cluster in gcloud with three nodes. So far so good. Thereafter I tried to run a pod, and it is giving an error. I found out that kubectl is not configured correctly. I am getting the following error when I try to run the pod. Appreciate any help in this regard.</p> <p>error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get <a href="http://localhost:8080/api" rel="nofollow">http://localhost:8080/api</a>: dial tcp 127.0.0.1:8080: connection refused</p> <p>thx</p>
<p>If your kubectl configuration is incorrect after creating a cluster, you can always run <code>gcloud container clusters get-credentials NAME</code> (see <a href="https://cloud.google.com/container-engine/docs/clusters/operations#configuring_kubectl" rel="noreferrer">configuring kubectl</a>) to restore a working kubeconfig file. </p>
<p>To use cinder volumes I added the options --cloud-provider and --cloud-config to my kubelet configuration:</p> <pre><code>$ cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.100.76"

# location of the api-server
KUBELET_API_SERVER="--api_servers=https://localhost:6443"

# Add your own!
KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config"
</code></pre> <p>I've created a file with the necessary credentials:</p> <pre><code>$ sudo cat /etc/kubernetes/cloud_config
[Global]
auth-url=https://api.*********.com:5000/v2.0
user-id=kubecindertest
username=kubecindertest
password=*****
region=RegionOne
tenant-name=kubecindertest
tenant-id=6568768756a7886767e676f7efe76fe7
project-name=kubecindertest
</code></pre> <p>When starting the kubelet (manually), the process only logs <code>unknown cloud provider "openstack"</code> and exits:</p> <pre><code>source /etc/kubernetes/kubelet; sudo /usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_ARGS
unknown cloud provider "openstack"
</code></pre> <p>The openstack.go file, defining the openstack provider in the kubernetes repository, has the exact same name in lowercase:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack.go#L49" rel="nofollow">const ProviderName = "openstack"</a></p> <p><strong>Update</strong></p> <p>It turned out the error was an exception that occurred while parsing the config file. 
I removed all the optional or unwanted keys and now I use this as my config file:</p> <pre><code>$ sudo cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*********.com:5000/v2.0 username=kubecindertest password=***** region=RegionOne tenant-id=6568768756a7886767e676f7efe76fe7 </code></pre> <p>hower, starting the kublet only leads to another error:</p> <pre><code>I0923 07:14:33.315311 23743 manager.go:127] cAdvisor running in container: "/user.slice" I0923 07:14:33.316263 23743 fs.go:93] Filesystem partitions: map[/dev/vda1:{mountpoint:/ major:253 minor:1}] I0923 07:14:33.358848 23743 manager.go:158] Machine: {NumCores:2 CpuFrequency:2099998 MemoryCapacity:4144640000 MachineID:dae72fe0cc064eb0b7797f25bfaf69df SystemUUID:BEDAF943-624D-C04A-B92C-4EB07258246C BootID:e2d988e2-9aba-49bf-a344-fd62607a6754 Filesystems:[{Device:/dev/vda1 Capacity:21456445440}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:107374182400 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:10737418240 Scheduler:none} 252:2:{Name:dm-2 Major:252 Minor:2 Size:10737418240 Scheduler:none} 253:0:{Name:vda Major:253 Minor:0 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:fa:16:3e:64:fa:9a Speed:0 Mtu:1500} {Name:eth1 MacAddress:fa:16:3e:01:00:79 Speed:0 Mtu:1500} {Name:flannel.1 MacAddress:d2:9c:ad:29:df:c5 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:4294434816 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown} I0923 07:14:33.363915 23743 manager.go:164] Version: {KernelVersion:3.10.0-229.11.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.8.1.el7 CadvisorVersion:0.16.0} panic: runtime error: invalid memory address or nil pointer dereference [signal 0xb code=0x1 addr=0x0 pc=0x8559cd] goroutine 1 [running]: k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).Instances(0x0, 0x0, 0x0, 0xe) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack.go:167 +0x8ed k8s.io/kubernetes/cmd/kubelet/app.RunKubelet(0xc820144900, 0x0, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:628 +0x13c k8s.io/kubernetes/cmd/kubelet/app.(*KubeletServer).Run(0xc8202c2000, 0xc820144900, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:420 +0x84b main.main() /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:46 +0xab goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/lib/golang/src/runtime/asm_amd64.s:1696 +0x1 goroutine 5 [chan receive]: github.com/golang/glog.(*loggingT).flushDaemon(0x1dc7000) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x67 created by github.com/golang/glog.init.1 /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x297 goroutine 37 [runnable]: syscall.Syscall6(0x36, 0x4, 0x29, 0x1a, 
0xc820035a7c, 0x4, 0x0, 0x0, 0x1a, 0x0) /usr/lib/golang/src/syscall/asm_linux_amd64.s:44 +0x5 syscall.setsockopt(0x4, 0x29, 0x1a, 0xc820035a7c, 0x4, 0x0, 0x0) /usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:1655 +0x73 syscall.SetsockoptInt(0x4, 0x29, 0x1a, 0x0, 0x0, 0x0) /usr/lib/golang/src/syscall/syscall_unix.go:267 +0x61 net.setDefaultSockopts(0x4, 0xa, 0x1, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/sockopt_linux.go:17 +0x7f net.socket(0x135f188, 0x3, 0xa, 0x1, 0x0, 0x0, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, ...) /usr/lib/golang/src/net/sock_posix.go:42 +0xcb net.internetSocket(0x135f188, 0x3, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1, ...) /usr/lib/golang/src/net/ipsock_posix.go:160 +0x141 net.ListenTCP(0x135f188, 0x3, 0xc8205e43f0, 0x7fe02fbf6bc0, 0x0, 0x0) /usr/lib/golang/src/net/tcpsock_posix.go:324 +0x19b net.Listen(0x135f188, 0x3, 0xc8205ee100, 0x5, 0x0, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/dial.go:393 +0x462 net/http.(*Server).ListenAndServe(0xc82007c660, 0x0, 0x0) /usr/lib/golang/src/net/http/server.go:1827 +0x8e k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP.func1(0xc82007c660, 0x1062) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:100 +0x7b created by k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:106 +0x3b6 goroutine 6 [syscall]: os/signal.loop() /usr/lib/golang/src/os/signal/signal_unix.go:22 +0x18 created by os/signal.init.1 /usr/lib/golang/src/os/signal/signal_unix.go:28 +0x37 goroutine 27 [sleep]: time.Sleep(0x12a05f200) /usr/lib/golang/src/runtime/time.go:59 +0xf9 k8s.io/kubernetes/pkg/util.Until(0x164e780, 0x12a05f200, 0xc82007cc00) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/util.go:117 +0x61 created by k8s.io/kubernetes/pkg/util.InitLogs /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/logs.go:49 +0xba </code></pre> <p><strong>Update 2</strong></p> <p>The second problem was a certificate issue.</p>
<p>The error messages are not very clear in the openstack cloud provider plugin. It won't register if there is any unexpected field in the config file; it will throw an error, which unfortunately is not properly bubbled up.</p> <p>Here are the fields (and their identifiers) that are recognized by the plugin:</p> <pre><code>[Global]
AuthUrl    string `gcfg:"auth-url"`
Username   string
UserId     string `gcfg:"user-id"`
Password   string
ApiKey     string `gcfg:"api-key"`
TenantId   string `gcfg:"tenant-id"`
TenantName string `gcfg:"tenant-name"`
DomainId   string `gcfg:"domain-id"`
DomainName string `gcfg:"domain-name"`
Region     string
</code></pre> <p>Of course not all of them are required. I usually have:</p> <pre><code>[Global]
auth-url
username
password
region
tenant-id
</code></pre>
<p>Did Google publish a whitepaper on Kubernetes, in a similar style to <a href="http://research.google.com/pubs/pub43438.html" rel="nofollow">Borg</a>?</p> <p>I am aware of the <a href="http://kubernetes.io/v1.0/" rel="nofollow">end-user documentation</a> and it would likely explain a lot of what I am looking for, but I find whitepapers easier to read than end-user docs. They're also easier to convert to dead-tree format and read in a single afternoon.</p>
<p>Perhaps you are looking for <a href="http://research.google.com/pubs/pub43826.html" rel="noreferrer">David Rensin's small book on Kubernetes</a>? It is listed with other Google research papers, although it is more of a brief introduction/overview and less a whitepaper (like the <a href="http://research.google.com/pubs/pub43438.html" rel="noreferrer">Borg one</a>). It still seems to be available as a promotion from <a href="https://www.openshift.com/promotions/kubernetes" rel="noreferrer">OpenShift</a>. </p> <p>It is certainly worth reading and won't take long, but for getting a Kubernetes cluster running, you'll still want to dive deep into the end-user docs at some point. Good luck!</p>
<p>I need to set an environment variable in a Kubernetes slave, which is a CoreOS system. I have tried using <code>export</code> and <code>declare</code>, but it keeps reading each argument as a separate command.</p>
<p>Don't set variables in the <code>command</code> field; take a look at the <code>env</code> field instead.</p>
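<p>For example, a minimal pod spec using the <code>env</code> field (names and values are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: env-example             # placeholder name
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sh", "-c", "env; sleep 3600"]
    env:
    - name: MY_VAR              # placeholder variable
      value: "some-value"
</code></pre>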
<pre> core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get svc --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.100.0.10 53/UDP 53/TCP kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui 10.100.110.236 80/TCP core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53 Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system NAME ENDPOINTS kube-dns 10.244.31.16:53,10.244.31.16:53 kube-ui 10.244.3.2:8080 core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53 Server: 10.244.31.16 Address 1: 10.244.31.16 Name: kubernetes Address 1: 10.100.0.1 </pre> <p>I think the service of <code>kube-dns</code> is Not available.</p> <p>the <code>skydns-svc.yaml</code> :</p> <pre> apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.100.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP </pre> <p>Who can help ?</p>
<p>For DNS to work, the kubelet needs to be passed the flags <code>--cluster_dns=&lt;dns-service-ip&gt;</code> and <code>--cluster_domain=cluster.local</code> at startup. These flags aren't included in <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/hack/local-up-cluster.sh#L240-L247" rel="noreferrer">the set of flags passed to the kubelet</a>, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the script to add these two flags to the kubelet. Then, when you create the DNS service, you need to make sure that the same IP address you passed to the <code>--cluster_dns</code> flag is set as the <code>portalIP</code> (<code>clusterIP</code> in v1) field of the service spec, like <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/skydns-svc.yaml.in" rel="noreferrer">this</a>. For more background, see <a href="https://github.com/kubernetes/kubernetes/issues/10265" rel="noreferrer">this issue</a>.</p>
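<p>As a sketch, the kubelet invocation in that script would need something along these lines; the IP has to match whatever <code>clusterIP</code> you give the DNS service (10.100.0.10 in your case), and the other flags are elided:</p> <pre><code>kubelet --api-servers=http://localhost:8080 \
        --cluster_dns=10.100.0.10 \
        --cluster_domain=cluster.local \
        ...
</code></pre>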
<p>Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. </p> <p>Now, is Kubernetes able to use a TLS-secured etcd cluster? </p> <p>In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connections to etcd, how does it get configured? </p> <p>Thanks</p>
<p>The API server is the only component that speaks directly to etcd. When starting the API server, you can pass a <code>--etcd-config=/path/to/client/config</code> parameter instead of just pointing to an unsecured etcd server with <code>--etcd-servers</code>.</p> <p>In that config file, you would specify the etcd servers, along with the client credentials (cert/key) to use to connect.</p> <p>The format is that expected by the go-etcd client NewClientFromFile function, which expects a JSON serialization of the <a href="https://github.com/coreos/go-etcd/blob/de3514f25635bbfb024fdaf2a8d5f67378492675/etcd/client.go#L50" rel="nofollow">Client</a> struct, specifically the <code>config</code> and <code>cluster</code> keys.</p>
<p>I have one cluster working successfully without any problems, and I've tried to make a copy of it. The copy basically works, except for one issue: the token generated by the apiserver is not valid, failing with the error message:</p> <pre><code>6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error </code></pre> <p>I have the API server started up with the following parameters:</p> <pre><code>kube-apiserver --address=0.0.0.0 \
  --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  --service-cluster-ip-range=10.116.0.0/23 \
  --client_ca_file=/srv/kubernetes/ca.crt \
  --basic_auth_file=/srv/kubernetes/basic_auth.csv \
  --authorization-mode=AlwaysAllow \
  --tls_cert_file=/srv/kubernetes/server.cert \
  --tls_private_key_file=/srv/kubernetes/server.key \
  --secure_port=6443 \
  --token_auth_file=/srv/kubernetes/known_tokens.csv \
  --v=2 \
  --cors_allowed_origins=.* \
  --etcd-config=/etc/kubernetes/etcd.config \
  --allow_privileged=False
</code></pre> <p>I think I'm missing something but can't find exactly what; any help will be appreciated! </p>
<p>So, apparently the wrong server.key was being used by the controller manager. According to the <a href="http://kubernetes.io/v1.0/docs/admin/service-accounts-admin.html" rel="nofollow">Kubernetes documentation</a>, the token is generated by the controller manager. </p> <p>While copying all my configuration, I had to change the IP address and, because of that, had to change the certificate as well. But the controller-manager had started with the "old" certificate and, after the change, was signing tokens with the wrong server.key. </p>
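<p>For anyone hitting the same thing: the apiserver and the controller-manager have to agree on the token-signing key. A hedged sketch of the relevant flags (paths are examples only):</p> <pre><code># kube-apiserver verifies tokens with this key
# (it falls back to --tls-private-key-file when unset)
kube-apiserver ... --service-account-key-file=/srv/kubernetes/server.key ...

# kube-controller-manager signs tokens with this key; it must match the one above
kube-controller-manager ... --service-account-private-key-file=/srv/kubernetes/server.key ...
</code></pre>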
<p>So Google Kubernetes is a Docker container cluster management solution that makes deploying, scaling, scheduling and maintaining sets of containers simple.</p> <p>Apache Mesos is a work scheduler that figures out where jobs (e.g. deploying a Docker container, some batch processing job, etc.) should run.</p> <p>Kubernetes can be "raw" and run standalone, or there is the <em>Kubernetes on Mesos</em> option where Kubernetes is deployed as a Mesos framework and runs on top of your Mesos cluster.</p> <p><strong>I'm wondering what benefits, if any, there are to running Kubernetes on Mesos, as opposed to "raw" Kubernetes?</strong></p> <p>Mesos can do non-Docker work: it can run Java/Python/Ruby/etc. apps on normal Linux VMs. It can run scripts. It's just a general work engine. So at first glance, it seems that the only advantage <em>Kubernetes on Mesos</em> offers is that it places your Kubernetes cluster onto Mesos, where you can run other jobs that aren't Dockerized.</p> <p>I'm sure I'm not seeing the "forest for the trees" here, but if I already know I'm going to use Docker and Kubernetes, why might I consider running Kubernetes on Mesos?</p>
<p>I recently gave a presentation at the London Mesos User Group, addressing exactly this question: <a href="https://speakerdeck.com/mhausenblas/can-i-have-mesos-and-kubernetes" rel="noreferrer">Can I have Mesos and Kubernetes?</a> with a demo available <a href="https://gist.github.com/mhausenblas/07c03d4230a5e6ec3b46" rel="noreferrer">here</a>. Bottom-line: it is all about hybrid workloads.</p> <p>If you can and want to go all in concerning Docker, that is, you containerize <em>all</em> your workloads, then nothing (or <a href="http://www.theplatform.net/2015/09/15/kubernetes-has-a-ways-to-go-to-scale-like-google-mesos/" rel="noreferrer">little</a>) speaks against Kubernetes standalone.</p>
<p>I am using containers to run both app servers &amp; Cassandra nodes.</p> <p>When starting the app server container, I need to specify which Cassandra node (1..n) to connect to. How would you divide the workload?</p> <ol> <li>One app container to one or more Cassandra nodes (how many?).</li> <li>One or more app containers to one Cassandra node (how many?).</li> <li>Many to many (how many?).</li> </ol> <p>This is for a production setup with 100% uptime. Each data load from Cassandra is small, but there are many of them.<br> <br>It should be scalable so I can add more app containers - like in Kubernetes, where they have pods. A pod is a set of containers that makes up a granule of the application. <br>Therefore I am looking for the best possible grouping of containers (Cassandra and app server) that will scale.<br><br> Info: Kubernetes is too expensive a setup in the beginning. And while waiting for Docker Swarm to reach a release state I will do this manually. Any insight is welcome.</p> <p>Regards</p>
<p>Please see:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/cassandra/README.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/cassandra/README.md</a></p> <p>for a tutorial of how to run Cassandra on Kubernetes.</p> <p>You will also need to add in best practices like snapshotting the databases to persistent storage and other such things.</p> <p>(and why do you say that Kubernetes is expensive? Google Container Engine only charges the cost of the VMs for small clusters, and you can deploy open source Kubernetes yourself for free)</p>
<p>I am trying to configure the PHP Phabricator example from Kubernetes, but after creating the replication controller, the pod never reaches the ready state. It shows the state below:</p> <pre><code>NAME                           READY     STATUS             RESTARTS   AGE
phabricator-controller-z0nk3   0/1       CrashLoopBackOff   5          2m
</code></pre> <p>Below is the controller yaml:</p> <pre><code>{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "phabricator-controller",
    "labels": {
      "name": "phabricator"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "name": "phabricator"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "phabricator"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "phabricator",
            "image": "fgrzadkowski/example-php-phabricator",
            "ports": [
              {
                "name": "http-server",
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }
  }
}
</code></pre> <p>Can someone please suggest how to fix this?</p>
<p>This Pod is crash-looping. You can tell because the number of restarts is greater than zero.</p> <pre><code>kubectl describe pods &lt;pod-name&gt; </code></pre> <p>Should give further details to help debug. As will</p> <pre><code>kubectl logs &lt;pod-name&gt; </code></pre>
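<p>If the container has already crashed and been restarted (as in a CrashLoopBackOff), the logs of the previous attempt are usually the most useful; a small sketch, substituting your own pod name:</p> <pre><code>kubectl logs phabricator-controller-z0nk3 --previous
</code></pre>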
<p>I'm reading the <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/scratch.html" rel="nofollow">Kubernetes "<em>Getting Started from Scratch</em>" Guide</a> and have arrived at the dreaded <a href="http://kubernetes.io/v1.0/docs/admin/networking.html#kubernetes-model" rel="nofollow"><strong>Network Section</strong></a>, where they state:</p> <pre><code>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): * all containers can communicate with all other containers without NAT * all nodes can communicate with all containers (and vice-versa) without NAT * the IP that a container sees itself as is the same IP that others see it as </code></pre> <p>My first source of confusion is: <strong>How is this <em>different</em> than the "standard" Docker model?</strong> How is Docker different w.r.t. those 3 Kubernetes requirements?</p> <p>The article then goes on to summarize how GCE achieves these requirements:</p> <blockquote> <p>For the Google Compute Engine cluster configuration scripts, we use advanced routing to assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. This is in addition to the "main" IP address assigned to the VM, which is NAT'ed for outbound internet access. A linux bridge (called cbr0) is configured to exist on that subnet, and is passed to docker's --bridge flag.</p> </blockquote> <p>My question here is: <strong>Which requirement(s) from the 3 above does this paragraph address? More importantly, <em>how</em> does it achieve the requirement(s)?</strong> I guess I just don't understand how 1-subnet-per-VM accomplishes: container-container communication, node-container communication, and static IP.</p> <hr> <p>And, as a bonus/stretch concern: Why doesn't Marathon suffer from the same networking concerns as what Kubernetes is addressing here?</p>
<p>Docker's standard <a href="https://docs.docker.com/articles/networking/" rel="noreferrer">networking configuration</a> picks a container subnet for you out of its <a href="https://github.com/docker/docker/blob/master/vendor/src/github.com/docker/libnetwork/drivers/bridge/setup_ipv4.go#L25" rel="noreferrer">chosen defaults</a>. As long as it doesn't conflict with any interfaces on your host, Docker is okay with it.</p> <p>Then, Docker inserts an iptables MASQUERADE rule that allows containers to talk to the external world using the host's default interface.</p> <p>Kubernetes' 3 requirements are violated by the fact that subnets are chosen only based on addresses in use on the host, which forces the requirement to NAT all container traffic using the MASQUERADE rule.</p> <p>Consider the following 3-host Docker setup (a little contrived to highlight things):</p> <h2>Host 1:</h2> <p><strong>eth0</strong>: 10.1.2.3</p> <p><strong>docker0</strong>: 172.17.42.1/16</p> <p><strong>container-A</strong>: 172.17.42.2</p> <h2>Host 2:</h2> <p><strong>eth0</strong>: 10.1.2.4</p> <p><strong>docker0</strong>: 172.17.42.1/16</p> <p><strong>container-B</strong>: 172.17.42.2</p> <h2>Host 3:</h2> <p><strong>eth0</strong>: 172.17.42.2</p> <p><strong>docker0</strong>: 172.18.42.1</p> <p>Let's say <strong>container-B</strong> wants to access an HTTP service on port 80 of <strong>container-A</strong>. You can get docker to expose <strong>container-A</strong>'s port 80 somewhere on <strong>Host 1</strong>. Then <strong>container-B</strong> might make a request to 10.1.2.3:43210. This will be received on <strong>container-A</strong>'s port 80, but will look like it came from some random port on 10.1.2.4 because of the NAT on the way out of <strong>Host 2</strong>. This violates the <em>all containers communicate without NAT</em> and the <em>container sees same IP as others</em> requirements. Try to access <strong>container-A</strong>'s service directly from <strong>Host 2</strong> and you get your <em>nodes can communicate with containers without NAT</em> violation.</p> <p>Now if either of those containers want to talk to <strong>Host 3</strong>, they're SOL (just a general argument for being careful with the auto-assigned docker0 subnets).</p> <p>Kubernetes approach on GCE/AWS/Flannel/... is to <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/node/nodecontroller.go#L364" rel="noreferrer">assign</a> each host VM a subnet carved out of a flat private network. No subnets overlap with VM addresses or with each other. This lets containers and VMs communicate NATlessly.</p>
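<p>If you want to see the MASQUERADE rule mentioned above for yourself, you can list the nat table on a Docker host; the exact output varies by setup, but with the default bridge you should see something like an entry for 172.17.0.0/16:</p> <pre><code>$ sudo iptables -t nat -L POSTROUTING -n
Chain POSTROUTING (policy ACCEPT)
target      prot opt source           destination
MASQUERADE  all  --  172.17.0.0/16    0.0.0.0/0
</code></pre>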
<p>I need to get the real client IP from the request in my application. Currently I get <code>10.2.100.1</code> every time in my test environment. Is there any way to do this?</p>
<p>This is the same question as <a href="https://stackoverflow.com/questions/32723675/gce-k8s-accessing-referral-ip-address">GCE + K8S - Accessing referral IP address</a> and <a href="https://stackoverflow.com/questions/32112922/how-to-read-client-ip-addresses-from-http-requests-behind-kubernetes-services">How to read client IP addresses from HTTP requests behind Kubernetes services?</a>.</p> <p>The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes.</p> <p>Services go through kube_proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.</p> <p><a href="https://github.com/kubernetes/kubernetes/pull/9210" rel="nofollow noreferrer">Work</a> is being actively done on a <a href="https://github.com/kubernetes/kubernetes/issues/3760" rel="nofollow noreferrer">solution</a> that uses iptables as the proxy, which will cause your server to see the real client IP.</p>
<p>We are using Heat + Kubernetes (v0.19) to manage our apps. When doing a rolling update, sometimes container startup will consistently fail on a node, and the kubelet on that node will keep retrying and keep failing. So the update hangs there, which is not the behavior we expected.</p> <p>I found that using "kubectl delete node" to remove the node keeps pods from being scheduled to that node. But in our environment, the node to be deleted may have running pods on it.</p> <p>So my question is: after using "kubectl delete node" to remove the node, will the pods on that node still work correctly?</p>
<p>If you just want to cancel the rolling update, remove the failed pods and try again later, I have found that it is best to stop the update loop with <code>CTRL+c</code> and then delete the replication controller corresponding to the new app that is failing. </p> <pre><code> ^C kubectl delete replicationcontrollers your-app-v1.2.3 </code></pre>
<p>Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. </p> <p>Now, is Kubernetes able to use a TLS-secured etcd cluster? </p> <p>In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connections to etcd, how does it get configured? </p> <p>Thanks</p>
<p>digging further and asking on the github project, I was directed towards this post that I hope answers the question: </p> <p><a href="https://groups.google.com/forum/#!topic/google-containers/bTfEcRQ3N28/discussion" rel="nofollow">https://groups.google.com/forum/#!topic/google-containers/bTfEcRQ3N28/discussion</a></p> <p>In short the config file should look like:</p> <pre><code>{ "cluster": { "machines": [ "https://kube-master.internal:2379", "https://kube-minion1.internal:2379", "https://kube-minion2.internal:2379" ] }, "config": { "certFile": "/etc/etcd/kube-master.internal.pem", "keyFile": "/etc/etcd/kube-master.internal.key", "caCertFiles": [ "/etc/etcd/kubecluster-ca.pem" ], "consistency": "STRONG_CONSISTENCY" } } </code></pre> <p>Haven't tried yet but will asap.</p>
<p>Kubernetes has a pretty complicated networking model that <em>appears</em> to be predicated upon circumventing a critical flaw with Docker's default networking:</p> <p>By default Docker containers cannot be contacted directly from the outside world, because their IP addresses are local/private to the subnet they're on.</p> <p>To circumvent this, Kubernetes has a very complex network model that, amongst other things, requires you to carve out your own flat IP space that is then shared by all hosts and containers (pods), thus giving each pod its own public IP.</p> <p>But I ask: isn't this already addressed by <a href="https://docs.docker.com/articles/networking/#binding-ports" rel="nofollow">Docker port binding</a>? If not, then what about port binding is still lacking, that requires Kubernetes to use the networking solution that they use?</p>
<p>This is well described in the <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/networking.md#model-and-motivation" rel="nofollow">motivation section of Kubernetes's networking design doc</a>.</p> <p>Essentially, relying on port binding requires dynamic port mapping to avoid conflicts between different containers wanting the same port (e.g. a lot of applications will want port 80). While dynamic port mapping can be made to work, it also causes a lot of problems, as outlined in the doc.</p>
<p>This is my first time using Google Cloud Platform and I'm feeling a bit lost. </p> <p>I pushed my custom Ubuntu image (created from a Dockerfile) to GCE and then created a container cluster from the console.</p> <p>What I got is: a container cluster and a group of VM instances. </p> <p>I can SSH into these instances but they appear blank; I mean they don't have what I built from the Dockerfile on my system. I copied some source code into the image. I ran docker images in whichever instance I SSHed into, but could not see the image I pushed.</p> <p>I am totally lost now. My basic question is: how do I see the contents of my container, or how do I SSH into it?</p> <p>I tried following some tutorials but with no success. For example: I tried the command <strong>gcloud compute instances list</strong> and got some big string (<em>gke-cluster-1-53f024ac-node-s5zc</em>) as the instance name. When I tried the command <strong>gcloud compute ssh gke-cluster-1-53f024ac-node-s5zc</strong> I got an error that the instance was not found.</p> <p>Please help.</p> <p>Thanks</p>
<p>To log into a container, use the following command:</p> <pre><code>kubectl exec -it POD bash </code></pre> <p>Replace <code>POD</code> with the name of the pod in which the container is running. This works for pods with a single container. For pods with multiple containers, use the <code>-c</code> option to specify the container.</p>
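<p>For example, a quick sketch of the multi-container form (pod and container names are placeholders):</p> <pre><code>kubectl exec -it POD -c CONTAINER bash
</code></pre>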
<p>I need to set an environment variable in a Kubernetes slave, which is a CoreOS system. I have tried using <code>export</code> and <code>declare</code>, but it keeps reading each argument as a separate command.</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: api
  name: api
spec:
  replicas: 1
  selector:
    name: api
  template:
    metadata:
      labels:
        name: api
    spec:
      containers:
      - env:
        - name: VARIABLE                 # &lt;---- declare an env variable NAME
          value: "value-of-variable"     # &lt;--- here is the value
        - name: ANOTHER_VARIABLE
          value: "another-value"
        image: myregistry/api
        imagePullPolicy: Always
        name: api
</code></pre>
<p>I am trying to serve 2 web applications that should be powered by hhvm. It is easy to build one Docker image that includes nginx and the default.conf. But now that I will have <code>n</code> apps as microservices, I want to test them and share the nginx container, as I will with other components like the DB.</p> <p>So when nginx is accessed externally with hhvm, do I have to provide hhvm on this image too? Or can I point it to the Debian host where hhvm is already provided? Then, I could store the <code>nginx.conf</code> with something like this:</p> <pre><code>upstream api.local.io {
    server 127.0.0.1:3000;
}

upstream booking.local.io {
    server 127.0.0.1:5000;
}
</code></pre> <p>How can I set up a proper nginx container for this?</p>
<p>Yeah, you can create another nginx container with an <code>nginx.conf</code> that is configured similarly to this:</p> <pre><code>upstream api {
    # Assuming this nginx container can access 127.0.0.1:3000
    server 127.0.0.1:3000;
    server server2.local.io:3000;
}

upstream booking {
    # Assuming this nginx container can access 127.0.0.1:5000
    server 127.0.0.1:5000;
    server server2.local.io:5000;
}

server {
    server_name api.local.io;
    location / {
        proxy_pass http://api;
    }
}

server {
    server_name booking.local.io;
    location / {
        proxy_pass http://booking;
    }
}
</code></pre>
<p>When I do a <code>kubectl describe &lt;pod&gt;</code>, the bottom section has an "Events" section, displaying Events related to that pod. For example, an event with Reason "failedScheduling", with the message "Failed for reason PodFitsResources and possibly others"</p> <p>How can I query the API to return that list of events?</p> <p>If I call <code>/api/v1/namespaces/&lt;ns&gt;/pods/&lt;pod_name&gt;</code>, it doesn't return any Events. If I try the <code>/api/v1/events</code> endpoint, I can specify a <code>labelSelector</code> parameter, but the name of the pod isn't a label of the Event, though it is in the <code>object.involvedObject.name</code> field.</p> <p>I could request the entire Event stream and filter out the few Events that interest me client-side, but that seems like overkill. <code>kubectl</code> is able to do it, so I figure there must be some way that I'm missing.</p> <p>Thanks.</p>
<p>I think events support a fieldSelector for the involved object's kind and name.</p> <p>You can also turn kubectl's verbosity level up to 8 (<code>--v=8</code>) to see the API requests it makes.</p>
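<p>As a sketch, assuming <code>kubectl proxy</code> is running locally on port 8001 and using placeholder namespace and pod names, the events for a single pod can be fetched with a field selector on the involved object:</p> <pre><code>curl "http://localhost:8001/api/v1/namespaces/default/events?fieldSelector=involvedObject.name=my-pod,involvedObject.kind=Pod"
</code></pre>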
<p>So I'm trying to wrap my head around what exactly a typical Kubernetes pod looks like. According to <a href="http://kubernetes.io/v1.0/docs/user-guide/pods.html" rel="nofollow">their docs</a>, a <em>pod</em>:</p> <blockquote> <p>"<em>A pod (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context.</em>"</p> </blockquote> <p>Later in that same article:</p> <blockquote> <p>"<em>Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located...</em>"</p> </blockquote> <p>OK, so you <em>can</em> organize a single pod as your entire vertical stack (from DB to web app). But apparently that's not typically how it's organized, so I assume that typically a "<em>horizontal</em>" organization is preferred (<strong>why??</strong>).</p> <p>But to me, horizontal layering/stratification implies that you'll only have one container in a pod, because typically in each tier of service (web, app, cache, db, etc.) you'll have one type of component.</p> <p>Let's take a concrete example. Say we have the following vertical stack of tiers:</p> <ul> <li>Web frontend containers; Grails or Spring MVC web/app server</li> <li>Microservices containers; RESTful web services where core business logic lives</li> <li>Message broker (say RabbitMQ) containers</li> <li>Microservice cache (some services have distributed Hazelcast cache clusters sitting between them and their DB/backing store) containers</li> <li>MySQL DB cluster containers</li> <li>MongoDB cluster containers</li> <li>3rd party RESTful cloud API (say SalesForce or Stripe or something similar)</li> </ul> <p>These are fairly typical components in an app stack. If we went against Kubernetes' own advice, and created "vertically-aligned" pods, each pod would consist of 1 type of container for each tier (the web/app server, each microservice, each DB, etc.).</p> <p><strong>But how would a horizontally-aligned pod be organized? What containers would go in which pods?</strong></p>
<p>A Pod is the basic scheduling unit in Kubernetes. It is the common case that a pod will only have a single container running in it, as most containers can be scheduled independently (i.e. they do not need to be co-located on the same machine).</p> <p>With regards to your example, you could put most containers in individual pods, and use a <strong>Replication Controller</strong> to horizontally scale the number of replicas of each Pod (and therefore container) as needed. Along with your replication controller, you'll also want a <strong>Service</strong> to load balance between the replicas. Vertical tiers could be organized using labels on the pods/replication controllers/services, such as <code>tier=message_broker</code>.</p> <p><em>Edit:</em></p> <p>The reason it's not a good idea to put your entire stack in a single pod is it limits your flexibility:</p> <ul> <li>It forces your entire stack to be scheduled on a single machine, which could make scheduling more difficult if machines lack some of the necessary resources.</li> <li>Individual components cannot be scaled independently (e.g. if you need more frontend replicas to handle traffic, but your DB is only used for a small number of queries)</li> <li>All the containers would need to agree on which ports to use. Each pod has a unique IP, so containers running in separate pods can use the same ports.</li> </ul>
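<p>As a small illustration of the label-based tiering mentioned above (names and images are placeholders, not a recommendation), a replication controller for the message-broker tier might look like this:</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: message-broker
  labels:
    tier: message_broker
spec:
  replicas: 2
  selector:
    tier: message_broker
  template:
    metadata:
      labels:
        tier: message_broker
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq        # placeholder image
        ports:
        - containerPort: 5672
</code></pre> <p>A matching Service would select on <code>tier: message_broker</code>, and the other tiers would get their own label values.</p>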
<p>As I understand it, kube-proxy runs on every Kubernetes node (it is started on Master and on the Worker nodes)</p> <p>If I understand correctly, it is also the 'recommended' way to access the API (see: <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod</a>)</p> <p>So, since kube-proxy is already running on every node, is the 'recommended' way to start each pod with a new kube-proxy container in it, or is it possible to 'link' somehow to the running kube-proxy container?</p> <p>Originally I was using the URL with $KUBERNETES_SERVICE_HOST and the credentials passed as a Secret, on GKE, calling </p> <pre><code>curl https://$USER:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SELECTOR} </code></pre> <p>and parsing the results, but on K8s deployed on a CoreOS cluster I only seem to be able to authenticate through TLS and certs and the linked proxy seems like a better way.</p> <p>So, I'm looking for the most efficient / easiest way to connect to the API from a pod to look up the IP of another pod referred to by a Service.</p> <p>Any suggestion/input?</p>
<p>There are a couple options here, as noted in the doc link you provided.</p> <p>The preferred method is using <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/service-accounts.md" rel="noreferrer">Service Accounts</a> to access the API:</p> <p>The short description is that your service would read the service-account secrets (token / CA-cert) that are mounted into the pod, then inject the token into the http header and validate the apiserver cert using the CA-cert. This somewhat simplifies the description of service accounts, but the above link can provide more detail.</p> <p>Example using curl and service-account data inside pod:</p> <pre><code>curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces </code></pre> <p>Another option, mentioned in the link you provided, is to run a side-car container running a "kubectl proxy" in the same pod as your application. </p> <p>A note of clarification: the "kube-proxy" and "kubectl proxy" are not referring to the same thing. The kube-proxy is responsible for routing "service" requests, kubectl proxy is a cli cmd which opens a local proxy to the Kubernetes API.</p> <p>What is happening under the covers when running <code>kubectl proxy</code> is that the kubectl command already knows how to use the service-account data, so it will extract the token/CA-cert and establish a connection to the API server for you, then expose an interface locally in the pod (which you can use without any auth/TLS).</p> <p>This is might be an easier approach as it likely requires no changes to your existing application, short of pointing it to the local kubectl proxy container running in the same pod.</p> <p>One other side-note: I'm not sure of your exact use-case, but generally it would be preferable to use the Service IP / Service DNS name and allow Kubernetes to handle service discovery, rather than extracting the pod IP itself (the pod IP will change if the pod gets scheduled to a different machine).</p>
<p>We are using Kubernetes provided by "Google Container Engine" with the "Cloud Logging" feature enabled. But we need to configure fluentd for our application (to add more information about the application that runs in the container).</p> <p>I can't find any information on how I can add my configs to the logging agent provided by Google, or any way to replace it with my own container.</p> <p>Is there any way I can do this? </p> <p>Thanks!</p>
<p>There isn't an easy way to customize the fluentd configuration in Google Container Engine (and if you try to customize it, your changes will be lost if a node gets replaced by the instance group manager or during a node upgrade). </p> <p>If you want to run a custom fluentd configuration, you should disable cloud logging on your cluster and then run your own fluentd container on each node with the configuration that you need for your application. </p> <p>Until <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/daemons.md" rel="nofollow">Daemon Sets</a> are available, the easiest way to run one pod per host is to assign the pod a host port and then create a replication controller with more replicas than you have hosts. </p>
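<p>A rough sketch of what such a per-node fluentd replication controller could look like; the image name is hypothetical (it would contain your custom fluentd config), and the hostPort is only there so that the scheduler places at most one replica per node:</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-custom
spec:
  replicas: 5                    # set this higher than your node count
  selector:
    app: fluentd-custom
  template:
    metadata:
      labels:
        app: fluentd-custom
    spec:
      containers:
      - name: fluentd
        image: my-registry/fluentd-custom:latest   # hypothetical image with your fluent.conf
        ports:
        - containerPort: 24224
          hostPort: 24224        # scheduling constraint: one pod per node
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
</code></pre>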
<p>So Kubernetes has a pretty novel network model that I believe is based on what it perceives to be a shortcoming with default Docker networking. While I'm still struggling to understand: (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution and perhaps that will clue me in a little better.</p> <p>Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.</p> <p>I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like <code>flannel</code>.</p> <p>My <em>basic</em> understanding is:</p> <ol> <li>Carve out a <code>/16</code> subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.</li> <li>Create a <code>cbr0</code> bridge somewhere and make sure it's persistent (<strong>but on what machine?</strong>)</li> <li>Remove/disable the <code>MASQUERADE</code> rule installed by Docker.</li> <li>Somehow configure <code>iptables</code> routes (again, <strong>where?</strong>) so that each pod spun up by Kubernetes receives one of those public IPs.</li> <li>Some other setup is required to make use of load balanced <code>Services</code> and dynamic DNS.</li> <li>Provision 5 VMs: 1 master, 4 minions</li> <li>Install/configure Docker on all 5 VMs</li> <li>Install/configure <code>kubectl</code>, <code>controller-manager</code>, <code>apiserver</code> and <code>etcd</code> on the master, and run them as services/daemons</li> <li>Install/configure <code>kubelet</code> and <code>kube-proxy</code> on each minion and run them as services/daemons</li> </ol> <p>These are the best steps I could collect from 2 full days of research, and they are likely wrong (or misdirected), out of order, and utterly incomplete.</p> <p>I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved.</p> <p><strong>How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?</strong></p> <p>I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.</p>
<p>For a general introduction to Kubernetes networking, I found <a href="http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71" rel="nofollow noreferrer">http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71</a> pretty helpful.</p> <p>On your items (1) and (2): IMHO they are nicely described in <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model</a>. From my experience, what is the problem with the Docker NAT type of approach? Sometimes you need to configure into the software the endpoints of all nodes (e.g. 172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's configuration, whereas Docker complicates this with NAT indirection. See also <a href="https://stackoverflow.com/questions/32852216/setting-up-the-network-for-kubernetes">Setting up the network for Kubernetes</a> for a nice answer.</p> <p>Comments on your other points: on item 1,</p> <blockquote> <p>All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.</p> </blockquote> <p>The "internal network" of Kubernetes normally uses private IPs; see also the slides above, which use 10.x.x.x as an example. I guess the confusion comes from some Kubernetes texts that refer to "public" as "visible outside of the node"; they do not mean "Internet public IP address range". </p>
<p>I have successfully shelled into a RUNNING docker container using</p> <p>docker exec -i -t 7be21f1544a5 bash</p> <p>I have made some changes to some JSON files and want these changes to be reflected in the running container.</p> <p>I am a beginner and have tried restarting and mounting, in vain. What strings do I have to replace when I mount using docker run?</p> <p>Is there any online sample?</p> <pre><code>CONTAINER ID: 7be21f1544a5
IMAGE:        gater/web
COMMAND:      "/bin/sh -c 'nginx'"
CREATED:      4 weeks ago
STATUS:       Up 44 minutes
PORTS:        443/tcp, 172.16.0.1:10010-&gt;80/tcp
NAMES:        web
</code></pre>
<p>You can either create a Dockerfile and run:</p> <pre><code>docker build . </code></pre> <p>from the same directory where your <code>Dockerfile</code> is located,</p> <p>or you can run:</p> <pre><code>docker run -i -t &lt;docker-image&gt; bash </code></pre> <p>or (if your container is already running)</p> <pre><code>docker exec -i -t &lt;container-id&gt; bash </code></pre> <p>Once you are in the shell, make all the changes you please. Then run:</p> <pre><code>docker commit &lt;container-id&gt; myimage:0.1 </code></pre> <p>You will now have a new Docker image locally, <code>myimage:0.1</code>. If you want to push it to a Docker repository (Docker Hub or your private Docker repo) you can run:</p> <pre><code>docker push myimage:0.1 </code></pre>
<p>I've successfully <a href="https://cloud.google.com/container-registry/#pushing_to_the_registry" rel="nofollow">pushed my Docker container image to gcr.io</a> with the following command:</p> <p><code>$ gcloud docker push gcr.io/project-id-123456/my-image</code></p> <p>But when I try to <a href="https://cloud.google.com/container-engine/docs/pods/single-container#creating_a_pod" rel="nofollow">create a new pod</a> I get the following error:</p> <pre><code>$ kubectl run my-image --image=gcr.io/project-id-123456/my-image CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS my-image my-image gcr.io/project-id-123456/my-image run=my-image 1 $ kubectl get pods NAME READY STATUS RESTARTS AGE my-image-of9x7 0/1 Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar exit status 1 unexpected EOF 0 5m </code></pre> <p>It doesn't pull on my local as well:</p> <pre><code>$ docker rmi -f $(docker images -q) # Clear local image cache $ gcloud docker pull gcr.io/project-id-123456/my-image:latest … Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar re-exec error: exit status 1: output: unexpected EOF </code></pre> <p>Can someone please suggest me how to fix this?</p>
<p>Ok, after digging around in the Docker code base, I think I have found some similar reports of what you are seeing.</p> <p>The way this error is displayed changed in 1.7, but this thread seems related: <a href="https://github.com/docker/docker/issues/14792" rel="nofollow">https://github.com/docker/docker/issues/14792</a></p> <p>This turned me onto this fix, which landed in 1.8: <a href="https://github.com/docker/docker/pull/15040" rel="nofollow">https://github.com/docker/docker/pull/15040</a></p> <p>In particular, see this comment: <a href="https://github.com/docker/docker/pull/15040#issuecomment-125661037" rel="nofollow">https://github.com/docker/docker/pull/15040#issuecomment-125661037</a></p> <p>The comment seems to indicate that this is only a problem for v1 layers, so our Beta support for v2 may work around this issue.</p> <p>You can push to our v2 beta via: <code> gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/... </code></p> <p>You can then simply change the reference in your Pod to "beta.gcr.io/..." and it will pull via v2.</p>
<p>Is there any way to access the 'internal' services (those not exposed outside) of the cluster in a secure way from the outside.</p> <p>The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside.</p> <p>On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using a SSHD container but that didn't get me very far: the services are not directly on that container so I'm not sure how to get to the next hop on the network since the services are dynamically managing IPs.</p> <p>Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for road warrior situation.</p> <p>Is there any solution for this use-case?</p> <p>Thanks for your input.</p> <p>EDIT:</p> <p>I see here: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect</a></p> <p>that the only way to connect supported right now is HTTP/HTTPS meaning I can proxy HTTP calls but not to any port</p>
<p>You can do this with a combination of running <code>kubectl proxy</code> on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me). </p> <p>First, run <code>kubectl proxy</code>. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging).</p> <p>Next, point a client (web browser, curl, etc) at <code>http://localhost:8001/api/v1/proxy/namespaces/&lt;ns&gt;/services/&lt;svc&gt;/</code>, replacing <code>&lt;ns&gt;</code> with the namespace in which your service is configured and <code>&lt;svc&gt;</code> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called <code>data.json</code> you would append that to the end of the request path. </p> <p>This is how the <a href="https://github.com/kubernetes/kubernetes/tree/master/docs/user-guide/update-demo" rel="nofollow">update-demo</a> tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/update-demo/local/script.js" rel="nofollow">javascript</a> does (it isn't too complicated). </p>
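<p>For example, a rough sketch of the whole flow (namespace, service and file names are placeholders):</p> <pre><code>$ kubectl proxy --port=8001 &amp;
$ curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-service/data.json
</code></pre>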
<p>I'm trying to bring kubernetes api server up using etcd config (kubernetes uses go-etcd which has a method to read all parameters from a configuration file):</p> <pre><code>{ "cluster": { "machines": [ "https://my-public-hostname:2379" ] }, "config": { "certFile": "/etc/ssl/etcd/client.pem", "keyFile": "/etc/ssl/etcd/client.key.pem", "caCertFiles": [ "/etc/ssl/etcd/ca.pem" ], "timeout": 5, "consistency": "WEAK" } } </code></pre> <p>But fails in kube-apiserver because it cannot reach etcd successfully. I think this is because it tries to sync the cluster... but I don't know.</p> <p>I have created a (etcd) cluster using internal ips for advertise and client addresses except for the listen-client-urls which is set to 0.0.0.0/0. Also, the whole cluster is behind a load balancer which is accessible through <code>my-public-hostname</code>.</p> <p>Inside the container (because i'm using <code>hyperkube</code>), <code>etcdctl</code> won't work unless I set the '--no-sync' parameter. If i use etcdctl without that parameter it suspiciously fails like kube-apiserver does. But I wasn't able to check the piece of code in kubernetes which does the cluster syncrhonization...</p> <p>Any ideas?</p> <p>Thanks in advance.</p> <p><strong>EDIT:</strong></p> <p>It seems to be an error related to the current etcd client in kubernetes (<a href="https://github.com/coreos/go-etcd" rel="nofollow">https://github.com/coreos/go-etcd</a>), which is not the newest one (<a href="https://github.com/coreos/etcd/client" rel="nofollow">https://github.com/coreos/etcd/client</a>). I tested this empirically and "etcd/client" works but "go-etcd" doesn't, you can check this test here: <strike><a href="https://github.com/glerchundi/etcd-go-clients-test" rel="nofollow">https://github.com/glerchundi/etcd-go-clients-test</a></strike>. 
</p> <p>It's worth noting that there is an ongoing work to migrate go-etcd to etcd/client in kubernetes: <a href="https://github.com/kubernetes/kubernetes/issues/11962" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/11962</a>.</p> <p>Can anyone from the Kubernetes team confirm this?</p> <p><strong>APPENDIX 1</strong></p> <p>I'm trying to run kubernetes in CoreOS and: <code>flannel</code> works, <code>locksmithd</code> works, <code>fleet</code> works (they access to etcd using the very same etcd client credentials) so it's probably something related to how kubernetes accesses to the etcd endpoint.</p> <p><strong>APPENDIX 2</strong> (these commands are executed inside the hyperkube container, concretely this one: <code>gcr.io/google_containers/hyperkube:v1.0.6</code>)</p> <p>etcdctl without --no-sync fails outputting this:</p> <pre><code>root@98b2524464f1:/# etcdctl --cert-file="/etc/ssl/etcd/client.pem" --key-file="/etc/ssl/etcd/client.key.pem" --ca-file="/etc/ssl/etcd/ca.pem" --peers="http//my-public-hostname:2379" ls / Error: 501: All the given peers are not reachable (failed to propose on members [https://10.1.0.1:2379 https://10.1.0.0:2379 https://10.1.0.2:2379] twice [last error: Get https://10.1.0.0:2379/v2/keys/?quorum=false&amp;recursive=false&amp;sorted=false: dial tcp 10.1.0.0:2379: i/o timeout]) [0] </code></pre> <p>And kube-apiserver with this:</p> <pre><code>root@98b2524464f1:/# /hyperkube \ apiserver \ --bind-address=0.0.0.0 \ --etcd_config=/etc/kubernetes/ssl/etcd.json \ --allow-privileged=true \ --service-cluster-ip-range=10.3.0.0/24 \ --secure_port=443 \ --advertise-address=10.0.0.2 \ --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key.pem \ --client-ca-file=/etc/kubernetes/ssl/ca.pem \ --service-account-key-file=/etc/kubernetes/ssl/apiserver.key.pem F1002 09:47:29.348527 384 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: 501: All the given peers are not reachable (failed to propose on members [https://my-public-hostname:2379] twice [last error: Get https://my-public-hostname:2379/v2/keys/registry/ranges/serviceips?quorum=false&amp;recursive=false&amp;sorted=false: dial tcp: i/o timeout]) [0] </code></pre> <p><strong>APPENDIX 3</strong></p> <pre><code>etcd #0: etcd2: name: etcd0 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.0:2379 initial-advertise-peer-urls: http://10.1.0.0:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.0:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #1: etcd2: name: etcd1 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.1:2379 initial-advertise-peer-urls: http://10.1.0.1:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.1:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #2: etcd2: name: etcd2 
initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.2:2379 initial-advertise-peer-urls: http://10.1.0.2:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.2:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem </code></pre>
<p>Finally I found out what was causing this problem. The timeout was not defined correctly, because <code>go-etcd</code> unmarshals the JSON timeout value into a time.Duration, which uses nanoseconds as its base unit. So for a value of 1s, 1000000000 should be written.</p> <p>Following the example above:</p> <pre><code>{
  "cluster": {
    "machines": [
      "https://my-public-hostname:2379"
    ]
  },
  "config": {
    "certFile": "/etc/ssl/etcd/client.pem",
    "keyFile": "/etc/ssl/etcd/client.key.pem",
    "caCertFiles": [
      "/etc/ssl/etcd/ca.pem"
    ],
    "timeout": 5000000000,
    "consistency": "WEAK"
  }
}
</code></pre>
<p>Is there a way to set up automatic external IP allocation for a service, like Google does with its load balancer? I'm running Kubernetes on bare metal.</p> <p>Thank you</p>
<p>Use services with type NodePort; this will bind your service to a fixed port on all your nodes (<a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport</a>).</p> <p>Then you have to use a load balancer (e.g. HAProxy) to forward calls to this service. </p> <p>The load balancer configuration can be done by a script that uses the Kubernetes <code>/services</code> API.</p>
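<p>A minimal sketch of such a service (names and ports are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # optional; must fall in the cluster's node-port range (30000-32767 by default)
</code></pre> <p>Your HAProxy backends would then point at each node's IP on port 30080.</p>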
<p>I experience very strange behavior when I'm trying to set new Kubernetes cluster in AWS.</p> <p>Whenever I try to run kube-up.sh with its default config it works perfectly, The cluster and all its relative components are setting up in less than 10 minutes.</p> <p>The problem occur when I set the "kube-aws-zone" to be us-east-1e (the same as my current VPC) instead of us-west-2a (default). The installation process stuck in a loop with the following message- </p> <blockquote> <p>Waiting 3 minutes for cluster to settle ..................Re-running salt highstate sudo: unable to resolve host ip-172-20-0-9 Waiting for cluster initialization.</p> <p>This will continually check to see if the API for kubernetes is reachable. This might loop forever if there was some uncaught error during start up.</p> </blockquote> <p>I tried to dig a bit in the minions and find this error in /var/log/salt/minion</p> <blockquote> <p>2015-10-01 14:52:54,912 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:52:54,913 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:00,902 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:03,078 [salt.state ][ERROR ] The named service kube-proxy is not available 2015-10-01 14:53:16,677 [salt.state ][ERROR ] An exception occurred in this state: Traceback (most recent call last):<br> File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1533, in call **cdata['kwargs']) File "/usr/lib/python2.7/dist-packages/salt/states/sysctl.py", line 56, in present configured = <strong>salt</strong>'sysctl.show' File "/usr/lib/python2.7/dist-packages/salt/modules/linux_sysctl.py", line 86, in show for line in salt.utils.fopen(config_file_path): File "/usr/lib/python2.7/dist-packages/salt/utils/<strong>init</strong>.py", line 1065, in fopen fhandle = open(*args, **kwargs) IOError: [Errno 2] No such file or directory: '/etc/sysctl.d/99-salt.conf'</p> <p>2015-10-01 14:53:16,707 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:53:16,708 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] Command 'service docker status' failed with return code: 3 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] output: * docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2015-10-01 14:53:16 UTC; 262ms ago Docs: <a href="http://docs.docker.com" rel="nofollow">http://docs.docker.com</a> Process: 15285 ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE) Main PID: 15285 (code=exited, status=1/FAILURE)</p> <p>Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: Unit docker.service entered failed state. Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service failed. 2015-10-01 14:53:20,259 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:20,687 [salt.state<br> ][ERROR ] The named service kube-proxy is not available</p> </blockquote> <p>I've tried to remove and re-set the IAM roles as suggested to similar issue, but ended up with no luck.</p> <p>Will appreciate any assistance. Thanks,</p>
<p>The problem was specific to the us-east-1 region. I had to edit the DHCP option set that was created as part of kube-up.sh and add the following: </p> <blockquote> <p>domain-name = ec2.internal</p> </blockquote> <p>Then it worked like a charm.</p> <p>More information: <a href="https://github.com/kubernetes/kubernetes/issues/7962#issuecomment-145324441" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/7962#issuecomment-145324441</a></p>
<p>I have an app running on port 31280 exposed via nodePorts from a Kubernetes cluster. Same port is exposed through named-port on the instance group used by cluster for load balancing. While creating a backend-service with HTTP protocol, the service is created at default http port(80) even if I specify custom named-port.</p> <p>Exposed named-port for instance group is:</p> <pre><code>gcloud preview instance-groups --zone='asia-east1-a' list-services gke-dropwizard-service-31ccc162-group [ { "endpoints": [ { "name": "dropwizard-example-service-http", "port": 31280 } ], "fingerprint": "XXXXXXXXXXXXXXXX" } ] </code></pre> <p>Health check is: </p> <pre><code>gcloud compute http-health-checks describe dropwizard-example-service checkIntervalSec: 5 creationTimestamp: '2015-08-11T12:08:16.245-07:00' description: Dropwizard Example Sevice health check ping healthyThreshold: 2 host: '' id: 'XXXXXXX' kind: compute#httpHealthCheck name: dropwizard-example-service port: 31318 requestPath: /ping selfLink: https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service timeoutSec: 3 unhealthyThreshold: 2 </code></pre> <p>Health port(31318) is also exposed via named port in instance group.</p> <p>Commands used to create backend-service is:</p> <pre><code>gcloud compute backend-services create "dropwizard-example-external-service" --description "Dropwizard Example Service via Nodeports from Kubernetes cluster" --http-health-check "dropwizard-example-service" --port-name "dropwizard-example-service-http" --timeout "30" </code></pre> <p>Command used to add instance group to backend-service is:</p> <pre><code>gcloud compute backend-services add-backend "dropwizard-example-external-service" --group "gke-dropwizard-service-31ccc162-group" --zone "asia-east1-a" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "0.8" </code></pre> <p>Finally created backend-service is described as:</p> <pre><code>gcloud compute backend-services describe dropwizard-example-external-service backends: - balancingMode: UTILIZATION capacityScaler: 1.0 description: '' group: https://www.googleapis.com/resourceviews/v1beta2/projects/XXX/zones/asia-east1-a/resourceViews/gke-dropwizard-service-31ccc162-group maxUtilization: 0.8 creationTimestamp: '2015-08-11T13:10:46.608-07:00' description: Dropwizard Example Service via Nodeport from Kubernetes cluster fingerprint: XXXXXXXXXXXX healthChecks: - https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service id: 'XXXX' kind: compute#backendService name: dropwizard-example-external-service port: 80 portName: dropwizard-example-service-http protocol: HTTP selfLink: https://www.googleapis.com/compute/v1/projects/XXXX/global/backendServices/dropwizard-example-external-service timeoutSec: 30 </code></pre> <p>I don't understand which part is wrong. Why backend-service is using port 80?</p>
<p>EDIT: I was wrong. It does seem to work. I had a typo in my script.</p> <p>My script is here - I literally just ran this and it worked properly.</p> <p><a href="https://gist.github.com/thockin/36fea15cc0deb08a768a" rel="nofollow">https://gist.github.com/thockin/36fea15cc0deb08a768a</a></p> <p>Original response for posterity:</p> <p>I'm not an expert in the GCE L7 API yet, but I <em>have</em> made it work in Kubernetes. I think there's a bug in the --port-name logic. If you specify --port directly it seems to work for me. I'm filing an issue internally.</p>
<p>How can I output the PodSpec of a replication controller using kubectl?</p> <p>I tried <code> kubectl describe rc &lt;name&gt; </code></p> <p>but the output doesn't include the PodSpec.</p>
<p>You can see the <code>pod.spec</code> for a replicationController using <code>kubectl get</code> and the <code>--output=json</code> or <code>--output=yaml</code> arguments. Something like this should work:</p> <pre><code>kubectl get rc &lt;rc_name&gt; --output=yaml </code></pre>
<p>I'm starting a container using Kubernetes and I need to send the equivalent of a -v parameter to docker. I've been searching for about 3 hours with no success. Here you can see my run command:</p> <p><code>kubectl run api --image=${API_IMAGE} --port=${PORT_SERVICE} --overrides='{"apiVersion": "v1","spec": {"template": {"spec": {"containers": [{"name": "api","image": "'${API_IMAGE}'","env": [{"name": "listen","value": "0.0.0.0"},{"name": "etcdAddr","value": "'${ETCD_ADDR}'"}]}]}}}}'</code></p>
<p>You want to use the <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md" rel="nofollow">Volumes</a> construct, which is quite well explained in the docs and used in a number of examples.</p> <p>If you want to use a host directory, use the <code>hostPath</code> volume type. There's an example of using it <a href="https://github.com/kubernetes/kubernetes/blob/38aeddec8f9bdf2e1f25a5048c1b84a95dff7eac/examples/vitess/vtctld-controller.yaml" rel="nofollow">here</a>.</p>
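<p>For example, a minimal sketch of a pod mounting a host directory, roughly the equivalent of <code>docker run -v /var/data:/data</code> (paths and names are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: myregistry/api     # image taken from your run command
    volumeMounts:
    - name: host-data
      mountPath: /data        # path inside the container
  volumes:
  - name: host-data
    hostPath:
      path: /var/data         # path on the node
</code></pre>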
<p>If I create this pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dsm-manager
spec:
  containers:
  - name: dsm-manager
    image: ******
    imagePullPolicy: Always
    command:
    - /sbin/init
    volumeMounts:
    - mountPath: /srv/project/DSMManager/snapshots
      name: dsm-snapshot-storage
  volumes:
  - name: dsm-snapshot-storage
    awsElasticBlockStore:
      volumeID: aws://us-west-2b/vol-43e44482
      fsType: ext4
  imagePullSecrets:
  - name: dockerregistrykey
</code></pre> <p>It always works, but if I delete it and re-create it, it gets stuck with status 'CreatingContainer'. Looking in the events yields:<br> -Unable to mount volumes for pod "dsm-manager_default": Timeout waiting for volume state<br> -Error syncing pod, skipping: Timeout waiting for volume state</p> <p>If I delete the pod and re-create it, the same thing happens no matter what I do. However, if I attach the volume to some instance and then detach it through the aws cli, then create the pod, it works fine. I'm wondering if the volume isn't being detached properly. For now I just have this odd workflow of attaching the volume to a random instance and then detaching it while updating the container image.</p>
<p>This is likely caused by a bug in the Kubernetes EBS management code, and should be fixed by <a href="https://github.com/kubernetes/kubernetes/pull/14493">PR #14493</a>. To summarize, not validating the device block cache was causing the kubelet to think the disk was still attached after it had actually been detached.</p>
<p>I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.</p> <ol> <li>I have tried running docker login on each server and putting the .dockercfg file in /root and /core</li> <li>I have also done the above with the .docker/config.json</li> <li>I have added the secret to the kube master and added imagePullSecrets: <ul> <li>name: docker.io to the Pod configuration file.</li> </ul></li> </ol> <p>When I create the pod I get the error message: </p> <pre><code>image &lt;user/image&gt;:latest not found
</code></pre> <p>If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.</p>
<p>To add to what @rob said, as of docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml: </p> <p>First, base64 encode your <code>~/.docker/config.json</code>:</p> <pre><code>cat ~/.docker/config.json | base64 -w0 </code></pre> <p>Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.</p> <p>Next, create a yaml file: <code>my-secret.yaml</code></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: registrypullsecret data: .dockerconfigjson: &lt;base-64-encoded-json-here&gt; type: kubernetes.io/dockerconfigjson </code></pre> <p>-</p> <pre><code>$ kubectl create -f my-secret.yaml &amp;&amp; kubectl get secrets NAME TYPE DATA default-token-olob7 kubernetes.io/service-account-token 2 registrypullsecret kubernetes.io/dockerconfigjson 1 </code></pre> <p>Then, in your pod's yaml you need to reference <code>registrypullsecret</code> or create a replication controller:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-private-pod spec: containers: - name: private image: yourusername/privateimage:version imagePullSecrets: - name: registrypullsecret </code></pre>
<p>I've read the documentation and I've seen examples, but I don't know why I would add a serviceAccount to my pods.</p> <p>The 'elasticsearch' example from Kubernetes (<a href="https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch</a>) has a service account 'elasticsearch'; what does it grant?</p> <p>Thank you. </p>
<p>The service accounts inject authentication credentials into the pod to talk to the Kubernetes service (e.g. the apiserver). </p> <p>This is important if you are building an application that needs to inspect the pods/services/controllers that are running in the cluster to have correct behavior. For example, the kube2sky container watches services and endpoints to provide DNS within the cluster by connecting to the Kubernetes service. </p>
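<p>As a sketch, referencing a service account from a pod spec looks roughly like this (the account must already exist in the namespace; newer API versions use <code>serviceAccountName</code>, older releases also accepted <code>serviceAccount</code>):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: es-example                     # placeholder name
spec:
  serviceAccountName: elasticsearch    # the token for this account is mounted at /var/run/secrets/kubernetes.io/serviceaccount
  containers:
  - name: elasticsearch
    image: your-elasticsearch-image    # placeholder image
</code></pre>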
<p>I am using the kube go client with the kube API to access kube data. I currently can't find an API call for the logs of a particular pod.</p> <pre><code>kubectl logs pod-name
</code></pre> <p>returns the logs for a particular pod. How do I do this using the go client? I am using v1.0.6 of kubernetes.</p> <p>I can get the pod by using</p> <pre><code>client.Pods("namespace").Get("pod-name")
</code></pre>
<p><a href="https://pkg.go.dev/k8s.io/[email protected]/kubernetes/typed/core/v1#PodExpansion" rel="nofollow noreferrer">Client Go</a> has offered a function <code>GetLogs</code> for this, which has been answered in <a href="https://stackoverflow.com/questions/53852530/how-to-get-logs-from-kubernetes-using-golang/53870271#53870271">How to get logs from kubernetes using Go?</a></p> <hr /> <p>Looking at how kubectl implements its commands can be helpful when getting a feel for how to use the client library. In this case, <a href="https://github.com/kubernetes/kubernetes/blob/c2e90cd1549dff87db7941544ce15f4c8ad0ba4c/pkg/kubectl/cmd/log.go#L188" rel="nofollow noreferrer">kubectl's implementation of the logs command</a> looks like this:</p> <pre class="lang-golang prettyprint-override"><code> req := client.RESTClient.Get(). Namespace(namespace). Name(podID). Resource(&quot;pods&quot;). SubResource(&quot;log&quot;). Param(&quot;follow&quot;, strconv.FormatBool(logOptions.Follow)). Param(&quot;container&quot;, logOptions.Container). Param(&quot;previous&quot;, strconv.FormatBool(logOptions.Previous)). Param(&quot;timestamps&quot;, strconv.FormatBool(logOptions.Timestamps)) if logOptions.SinceSeconds != nil { req.Param(&quot;sinceSeconds&quot;, strconv.FormatInt(*logOptions.SinceSeconds, 10)) } if logOptions.SinceTime != nil { req.Param(&quot;sinceTime&quot;, logOptions.SinceTime.Format(time.RFC3339)) } if logOptions.LimitBytes != nil { req.Param(&quot;limitBytes&quot;, strconv.FormatInt(*logOptions.LimitBytes, 10)) } if logOptions.TailLines != nil { req.Param(&quot;tailLines&quot;, strconv.FormatInt(*logOptions.TailLines, 10)) } readCloser, err := req.Stream() if err != nil { return err } defer readCloser.Close() _, err = io.Copy(out, readCloser) return err </code></pre>
<p>I'm interested in watching a stream of Events from Kubernetes, to determine whether a deployment was successful, or if any of the Pods were unable to be scheduled.</p> <p>I could call the endpoint <code>/api/v1/watch/events</code>, or I could call <code>/api/v1/events?watch=true</code>. Is there a difference between those two? I'm confused about the purpose of them.</p> <p>Thanks.</p>
<p>We're making <code>watch</code> a query param and removing it from the path (legacy form). You should call <code>/api/v1/events?watch=true</code>. See more discussions <a href="https://github.com/kubernetes/kubernetes/issues/8337" rel="nofollow">here</a> if you're interested. </p>
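<p>For example, assuming an apiserver reachable on localhost:8080 without auth (add credentials as appropriate for your cluster):</p> <pre><code># stream all events as they happen
curl "http://localhost:8080/api/v1/events?watch=true"

# or scope the watch to a single namespace
curl "http://localhost:8080/api/v1/namespaces/default/events?watch=true"
</code></pre>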
<p><br> I have a problem with container <code>garbage collection</code>. <br> When I start the <code>kubelet</code> I get an error:</p> <pre><code>E1006 08:04:08.856100 25155 kubelet.go:682] Image garbage collection failed: unable to find data for container /
</code></pre> <p>And <code>garbage collection</code> doesn't work. <br> <code>kubernetes v1.0.6</code> <br> <br> How do I fix it?</p>
<p>The error message is from cadvisor, which gathers container stats. You can try <code>curl http://127.0.0.1:4194/validate/</code> on your node and to see if there is any problem preventing cadvisor from functioning correctly.</p> <p>Unlike image garbage collection (GC), container GC does not rely on cadvisor. You may want to check your <a href="https://github.com/kubernetes/kubernetes/blob/80f2d89a79bee987e6c1553ff7617a7c71b977d9/docs/admin/garbage-collection.md" rel="nofollow">GC policy settings</a> to see if the parameters are set correctly.</p>
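<p>The container/image GC policy is driven by kubelet flags; a sketch of the relevant ones is below (the values are only illustrative - check the flag names against your kubelet version):</p> <pre><code># added to the kubelet command line
--minimum-container-ttl-duration=1m
--maximum-dead-containers-per-container=2
--maximum-dead-containers=100
--image-gc-high-threshold=90
--image-gc-low-threshold=80
</code></pre>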
<p>If I create 3 nodes in a cluster, how do I distribute the docker containers evenly across the nodes? For example, if I create a cluster of 3 nodes with 8 cpus on each node, I've determined through performance profiling that I get the best performance when I run one container per cpu.</p> <pre><code>gcloud container clusters create mycluster --num-nodes 3 --machine-type n1-standard-8

kubectl run myapp --image=gcr.io/myproject/myapp -r 24
</code></pre> <p>When I ran <code>kubectl</code> above, it put 11 containers on the first node, 10 on the second, and 3 on the third. How do I make it so that there are 8 on each?</p>
<p>Both your and jpapejr's solutions seem like they'd work, but using a <code>nodeSelector</code> to force scheduling to a single node has the downside of requiring multiple RCs for a single application and making that application less resilient to a node failure. The idea of a custom scheduler is nice but has the downside of the amount of work to write and maintain that code.</p> <p>I <em>think</em> another possible solution would be to set runtime constraints in your pod spec that might get you near to what you want. Based on <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints" rel="nofollow">this newly merged doc with examples of runtime constraints</a>, I think you could set <code>resources.requests.cpu</code> in the pod spec part of the RC and get close to a CPU-per-pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myregistry/myapp:v1
    resources:
      requests:
        cpu: "1000m"
</code></pre> <p>That doc has other good examples of how <code>requests</code> and <code>limits</code> differ and interact. There may be a combination that gives you what you want and also keeps your application at proper capacity when an individual node fails.</p>
<p>Is kubernetes accessible via a REST API? I was looking over at the <a href="http://kubernetes.io/v1.0/docs/api.html" rel="noreferrer">Kubernetes API</a> page and it all looks very cryptic / incomplete. They talk about new versions but have not disclosed the API usage or docs anywhere. I just wanted to know if there is a way to access the cluster information in any other way than using the <code>kubectl</code> command.</p> <p>Example usage:</p> <p>What I do now:</p> <p><code>kubectl get pod --context='my-prod-cluster'</code></p> <p>What I'd like to do:</p> <p><code>curl GET /some/parameters/to/get/info</code></p>
<p>You can see all the API calls kubectl is making by passing <code>--v=8</code> to any kubectl command</p>
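<p>For example (<code>&lt;master&gt;</code> and <code>&lt;token&gt;</code> are placeholders for your cluster's endpoint and credentials):</p> <pre><code>kubectl get pods --v=8
# the verbose output includes the underlying REST calls, e.g.
#   GET https://&lt;master&gt;/api/v1/namespaces/default/pods
# which you can then replay yourself:
curl -k -H "Authorization: Bearer &lt;token&gt;" "https://&lt;master&gt;/api/v1/namespaces/default/pods"
</code></pre>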
<p>I followed these docs <code>https://cloud.google.com/container-engine/docs/tutorials/guestbook</code> to create a Guestbook on GCE, and everything works fine.</p> <p>But when I try to access kube-ui I am totally confused.</p> <p>First:</p> <pre><code>≥ kubectl get svc   14:29
NAME           LABELS                                    SELECTOR            IP(S)            PORT(S)
frontend       name=frontend                             name=frontend       10.191.254.236   80/TCP
                                                                             146.148.x.x
kubernetes     component=apiserver,provider=kubernetes   &lt;none&gt;              10.191.240.1     443/TCP
redis-master   name=redis-master                         name=redis-master   10.191.253.125   6379/TCP
redis-slave    name=redis-slave                          name=redis-slave    10.191.254.248   6379/TCP
</code></pre> <p>I can access my guestbook via 146.148.x.x, but I can't access kube-ui through this IP. So I think this is not the master IP of my GCE cluster. Then I executed this:</p> <pre><code>≥ kubectl get endpoints   14:33
NAME           ENDPOINTS
frontend       10.188.0.6:80,10.188.0.7:80,10.188.2.4:80 + 2 more...
kubernetes     104.197.x.x:443
redis-master   10.188.2.7:6379
redis-slave    10.188.0.8:6379,10.188.2.3:6379
</code></pre> <p>Now I got another IP, and I tried to access kube-ui through this IP. I can get a response from the server, but it requires authentication.</p> <p>How can I get access to the kube-ui?</p>
<p>Never mind, I got it</p> <pre><code>gcloud container clusters describe CLUSTER-NAME </code></pre> <p>contains username and password!</p>
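<p>For anyone else landing here, a rough sketch of using those credentials (the paths and placeholders below are illustrative and may differ by version):</p> <pre><code># the basic-auth username and password are under masterAuth in the cluster description
gcloud container clusters describe CLUSTER-NAME | grep -E "user|password"

# then reach kube-ui through the apiserver proxy with those credentials, e.g.
curl -k -u &lt;username&gt;:&lt;password&gt; \
  "https://&lt;master-ip&gt;/api/v1/proxy/namespaces/kube-system/services/kube-ui/"
</code></pre>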
<p>I run the kube-apiserver with my self-signed certificate:</p> <pre><code>/opt/bin/kube-apiserver \ --etcd_servers=http://master:2379,http://slave1:2379,http://slave2:2379 \ --logtostderr=false \ --v=4 \ --client-ca-file=/home/kubernetes/ssl/ca.crt \ --service-cluster-ip-range=192.168.3.0/24 \ --tls-cert-file=/home/kubernetes/ssl/server.crt \ --tls-private-key-file=/home/kubernetes/ssl/server.key </code></pre> <p>Then I run the kubelet with the kubeconfig:</p> <pre><code>/opt/bin/kubelet \ --address=0.0.0.0 \ --port=10250 \ --api_servers=https://master:6443 \ --kubeconfig=/home/kubernetes/ssl/config.yaml \ --logtostderr=false \ --v=4 </code></pre> <p>The content of the config.yaml is below:</p> <pre><code>apiVersion: v1 kind: Config clusters: - name: ubuntu cluster: insecure-skip-tls-verify: true server: https://master:6443 contexts: - context: cluster: "ubuntu" user: "ubuntu" name: development current-context: development users: - name: ubuntu user: client-certificate: /home/kubernetes/ssl/ca.crt client-key: /home/kubernetes/ssl/ca.key </code></pre> <p>So, I thought the kubelet will not verify the self-signed certificate of apiserver, but the logs showed:</p> <pre><code>E1009 16:48:51.919749 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.919876 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.923153 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority E1009 16:48:52.821556 100724 event.go:194] Unable to write event: 'Post https://master:6443/api/v1/namespaces/default/events: x509: certificate signed by unknown authority' (may retry after sleeping) E1009 16:48:52.922414 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.922433 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.924432 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority </code></pre> <p>So I am confused with the meaning of the <code>insecure-skip-tls-verify</code>...</p>
<p>TL;DR. You can comment out &quot;certificate-authority-data:&quot; key to get it working.</p> <hr /> <p>More info</p> <p>There is an open issue (<a href="https://github.com/kubernetes/kubernetes/issues/13830" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/13830</a>) with the behavior of that flag when a client cert/key is provided. When a client certificate is provided, the insecure flag is ignored.</p>
<p>Trying to use Cinder volumens on OpenStack as persistent volumes for my pods. As soon as I configure the cloudprovider and restart the kubelet, the kubelet fails to get its external ID from the cloud provider.</p> <p>The OpenStack API is reachable via https using a comodo certificate. the comodo-ca-bundle is installed as trusted ca on the node. Using curl against the API works without --insecure and --cacert options.</p> <p>Using kubernetes 1.1.0-alpha on centos 7</p> <p>$ sudo journalctl -u kubelet</p> <pre><code>Oct 01 07:40:26 [4196]: I1001 07:40:26.303887 4196 debugging.go:129] Content-Length: 1159 Oct 01 07:40:26 [4196]: I1001 07:40:26.303895 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.303950 4196 request.go:755] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/nodes","resourceVersion":"172921"},"items":[{"metadata":{"name":"192.168.100.80","selfLink":"/api/v1/nodes/192.168.100.80","uid":"b48b4cb9-676f-11e5-8521-fa163ef34ff1","resourceVersion":"172900","creationTimestamp":"2015-09-30T12:35:17Z","labels":{"kubernetes.io/hostname":"192.168.100.80"}},"spec":{"externalID":"192.168.100.80"},"status":{"capacity":{"cpu":"2","memory":"4047500Ki","pods":"40"},"conditions":[{"type":"Ready","status":"Unknown","lastHeartbeatTime":"2015-10-01T07:31:55Z","lastTransitionTime":"2015-10-01T07:32:36Z","reason":"Kubelet stopped posting node status."}],"addresses":[{"type":"LegacyHostIP","address":"192.168.100.80"},{"type":"InternalIP","address":"192.168.100.80"}],"nodeInfo":{"machineID":"dae72fe0cc064eb0b7797f25bfaf69df","systemUUID":"384A8E40-1296-9A42-AD77-445D83BB5888","bootID":"5c7eb3ff-d86f-41f2-b3eb-a39adf313a4f","kernelVersion":"3.10.0-229.14.1.el7.x86_64","osImage":"CentOS Linux 7 (Core)","containerRuntimeVersion":"docker://1.7.1","kubeletVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2","kubeProxyVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2"}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.475016 4196 request.go:457] Request Body: {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0} Oct 01 07:40:26 [4196]: I1001 07:40:26.475148 4196 debugging.go:101] curl -k -v -XDELETE -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 Oct 01 07:40:26 [4196]: I1001 07:40:26.526794 4196 debugging.go:120] DELETE https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 200 OK in 51 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.526865 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.526897 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.526927 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.526957 4196 debugging.go:129] Content-Length: 1977 Oct 01 07:40:26 [4196]: I1001 07:40:26.527056 4196 request.go:755] Response Body: 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"fluentd-elasticsearch-192.168.100.80","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80","uid":"a90941f6-680f-11e5-988c-fa163e94cde4","resourceVersion":"172926","creationTimestamp":"2015-10-01T07:40:17Z","deletionTimestamp":"2015-10-01T07:40:26Z","deletionGracePeriodSeconds":0,"annotations":{"kubernetes.io/config.mirror":"mirror","kubernetes.io/config.seen":"2015-10-01T07:39:43.986114806Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"varlog","hostPath":{"path":"/var/log"}},{"name":"varlibdockercontainers","hostPath":{"path":"/var/lib/docker/containers"}}],"containers":[{"name":"fluentd-elasticsearch","image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","args":["-q"],"resources":{"limits":{"cpu":"100m"},"requests":{"cpu":"100m"}},"volumeMounts":[{"name":"varlog","mountPath":"/var/log"},{"name":"varlibdockercontainers","readOnly":true,"mountPath":"/var/lib/docker/containers"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeName":"192.168.100.80"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}],"hostIP":"192.168.100.80","podIP":"172.16.58.24","startTime":"2015-10-01T07:40:17Z","containerStatuses":[{"name":"fluentd-elasticsearch","state":{"running":{"startedAt":"2015-10-01T07:37:23Z"}},"lastState":{"terminated":{"exitCode":137,"startedAt":"2015-10-01T07:23:00Z","finishedAt":"2015-10-01T07:33:17Z","containerID":"docker://1398736fd9b274132721206ccaf89030af5e8e304118d29286aec6b2529395ee"}},"ready":true,"restartCount":1,"image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","imageID":"docker://03ba3d224c2a80600a0b44a9894ac0de5526d36b810b13924e33ada76f1e7406","containerID":"docker://d9ac24c8a0fbceea7c494bce73d56d6ea5f003f1d1b7b8ad3975fc7e3c7679b4"}]}} Oct 01 07:40:26 [4196]: I1001 07:40:26.528210 4196 status_manager.go:209] Pod "fluentd-elasticsearch-192.168.100.80" fully terminated and removed from etcd Oct 01 07:40:26 [4196]: I1001 07:40:26.675178 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/services Oct 01 07:40:26 [4196]: I1001 07:40:26.710214 4196 debugging.go:120] GET https://localhost:6443/api/v1/services 200 OK in 34 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.710249 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.710260 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.710270 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.710436 4196 request.go:755] Response Body: 
{"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"172927"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"28717019-676b-11e5-afb9-fa163e94cde4","resourceVersion":"18","creationTimestamp":"2015-09-30T12:02:44Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"protocol":"TCP","port":443,"targetPort":443}],"clusterIP":"10.100.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"elasticsearch-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/elasticsearch-logging","uid":"833c8df5-676b-11e5-958e-fa163e94cde4","resourceVersion":"153","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"elasticsearch-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Elasticsearch"}},"spec":{"ports":[{"protocol":"TCP","port":9200,"targetPort":"db"}],"selector":{"k8s-app":"elasticsearch-logging"},"clusterIP":"10.100.3.159","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kibana-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kibana-logging","uid":"833043fa-676b-11e5-958e-fa163e94cde4","resourceVersion":"149","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kibana-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Kibana"}},"spec":{"ports":[{"protocol":"TCP","port":5601,"targetPort":"ui"}],"selector":{"k8s-app":"kibana-logging"},"clusterIP":"10.100.136.111","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-dns","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-dns","uid":"8319ba13-676b-11e5-958e-fa163e94cde4","resourceVersion":"146","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-dns Oct 01 07:40:26 [4196]: 
","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"}},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.100.0.10","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-ui","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-ui","uid":"83473271-676b-11e5-958e-fa163e94cde4","resourceVersion":"155","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-ui","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeUI"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"kube-ui"},"clusterIP":"10.100.246.61","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-grafana","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-grafana","uid":"835da09c-676b-11e5-958e-fa163e94cde4","resourceVersion":"157","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Grafana"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.207.92","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-heapster","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-heapster","uid":"83367b90-676b-11e5-958e-fa163e94cde4","resourceVersion":"151","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Heapster"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8082}],"selector":{"k8s-app":"heapster"},"clusterIP":"10.100.119.4","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-influxdb","namespace":"kube-system","selfLink":"/api/v1/names Oct 01 07:40:26 [4196]: paces/kube-system/services/monitoring-influxdb","uid":"836c95b8-676b-11e5-958e-fa163e94cde4","resourceVersion":"159","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"InfluxDB"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8083,"targetPort":8083},{"name":"api","protocol":"TCP","port":8086,"targetPort":8086}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.101.182","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"reverseproxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/reverseproxy","uid":"15e65b7d-6776-11e5-a5d0-fa163e94cde4","resourceVersion":"10994","creationTimestamp":"2015-09-30T13:20:57Z","labels":{"k8s-app":"reverseproxy","kubernetes.io/cluster-service":"true","kubernetes.io/name":"reverseproxy"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":80},{"name":"https","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":443}],"selector":{"k8s-app":"reverseproxy"},"clusterIP":"10.100.168.84","type":"NodePort","sessionAffinity":"None"},"status":{"loadBalancer":{}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.875150 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" 
https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&amp;resourceVersion=172921 Oct 01 07:40:26 [4196]: I1001 07:40:26.900981 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&amp;resourceVersion=172921 200 OK in 25 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.901009 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.901018 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.001744 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.002685 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.002716 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Node ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.075065 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/watch/services?resourceVersion=172927 Oct 01 07:40:27 [4196]: I1001 07:40:27.101642 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/services?resourceVersion=172927 200 OK in 26 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.101689 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.101705 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.104168 4196 openstack.go:164] openstack.Instances() called Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors Oct 01 07:40:27 [4196]: I1001 07:40:27.133519 4196 openstack.go:202] Claiming to support Instances Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object Oct 01 07:40:27 [4196]: I1001 07:40:27.202978 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.203110 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.203136 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Service ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.275208 4196 debugging.go:101] curl -k -v -XGET -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 Oct 01 07:40:27 [4196]: I1001 07:40:27.308434 4196 debugging.go:120] GET https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 200 OK in 33 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.308464 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.308475 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:27 [4196]: I1001 07:40:27.308484 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.308491 4196 debugging.go:129] Content-Length: 113 Oct 01 07:40:27 [4196]: I1001 07:40:27.308524 4196 request.go:755] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/pods","resourceVersion":"172941"},"items":[]} Oct 01 07:40:27 [4196]: I1001 07:40:27.308719 4196 config.go:252] Setting pods for source api Oct 01 07:40:27 [4196]: I1001 07:40:27.308753 4196 kubelet.go:1921] SyncLoop (REMOVE): "fluentd-elasticsearch-192.168.100.80_kube-system" Oct 01 07:40:27 [4196]: I1001 07:40:27.308931 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlog Oct 01 07:40:27 [4196]: I1001 07:40:27.308960 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlibdockercontainers Oct 01 07:40:27 [4196]: I1001 07:40:27.308977 4196 kubelet.go:2531] Generating status for "fluentd-elasticsearch-192.168.100.80_kube-system" </code></pre> <p>$ kubectl version</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit:"196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit: "196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} </code></pre> <p>$ cat /etc/os-release </p> <pre><code>NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7" </code></pre> <p>$ cat /etc/kubernetes/kubelet</p> <pre><code>### # kubernetes kubelet (node) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.100.80" # location of the api-server KUBELET_API_SERVER="--api_servers=https://localhost:6443" # Add your own! KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --v=9 --cloud-config=/etc/kubernetes/cloud_config --cloud-provider=openstack --machine-id-file=/etc/machine-id" </code></pre> <p>$ cat /etc/kubernetes/cloud_config</p> <pre><code>[Global] auth-url=https://api.*******.de:5000/v2.0 username=username password=password region=RegionOne tenant-id=4ee7b21351d94f2b96d363efe131b833 </code></pre>
<p>Kubelet is able to reach Openstack, however it is failing to find this node in the list of servers, in this tenant, and in this region.</p> <pre><code>Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors
Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
</code></pre> <p>The node's hostname is used to identify it in the list of servers provided by the cloud provider. However, it can be overridden using the --hostname_override flag.</p> <p>In your config, I see that you have overridden it with an IP; if this does not match the name of the server as reported by Nova, you are likely to get this error.</p>
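<p>Concretely, that means either dropping the override or making it match Nova, e.g. in <code>/etc/kubernetes/kubelet</code> (the server name below is a placeholder):</p> <pre><code># either remove the override so the kubelet reports the real hostname ...
KUBELET_HOSTNAME=""
# ... or set it to the exact name Nova reports for this server:
# KUBELET_HOSTNAME="--hostname_override=&lt;nova-server-name&gt;"
</code></pre>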
<p>Currently, services support multiple ports from a single selector that maps onto pods of a single type. These service ports in turn can be exposed externally through 'type: LoadBalancer'.</p> <p>For a given subsystem like Spark there are 3 pod types - master, worker, driver - with each exposing a set of management UI ports that need to be accessed externally. The current service definition requires creating 3 LoadBalancers, one for each type (master, worker, driver). </p> <p>These access ports are for low-use management UI access. Is there any way to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer"? This is to avoid a proliferation of LoadBalancers for external access of services.</p>
<blockquote> <p>Is there anyway to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer".</p> </blockquote> <p>Services are intended to represent a homogeneous set of pods. Requests to the service are load balanced across the pods that back the service. </p> <p>If you want to reduce the number of load balancers you should add a proxying layer above the subsystem services that can redirect incoming requests to the right subsystem. HAProxy or nginx should be pretty easy to configure to do this. </p>
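<p>As a rough sketch, an nginx config for such a proxying layer could look like the following (the service names are whatever you called your master/worker/driver services, and the ports are the usual Spark UI defaults - adjust to your setup):</p> <pre><code>server {
    listen 80;

    # a single external LoadBalancer service points at this nginx,
    # which then fans out to the per-component ClusterIP services
    location /master/ {
        proxy_pass http://spark-master:8080/;
    }
    location /worker/ {
        proxy_pass http://spark-worker:8081/;
    }
    location /driver/ {
        proxy_pass http://spark-driver:4040/;
    }
}
</code></pre>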
<p>I am using VMware Workstation running a Linux VM which runs the vagrant and kubernetes environment.</p> <p>I have a problem using kubernetes with vagrant. Every time I shut down the kubernetes cluster with the kube-down.sh tool and restart it with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do?</p> <p>The other problem is that if I try to install the guestbook example I cannot download the redis image. The pods always stay in PENDING state. Is there a way to download the image manually and add it as a file?</p> <p>Thank you in advance.</p> <p>Regards :)</p>
<p>Each run of kube-up.sh is intended to generate a new cluster. As such, it will create new credentials for the cluster and any existing nodes are not expected to continue to work with the new master components. </p>
<p>Using GKE, I declared a disk that I use as a persistent volume.</p> <p>When the pod which uses the volume crashed and was re-started on a different node, the disk was still attached / mounted to the node it was mounted on before. </p> <p>How come the volume is not unmounted by the RC?</p> <p>Then the pod fails and there is no recovery. I didn't find a way to unmount the disk, and the only way I could start the pod again was by restarting until it was started on the node that the disk was mounted on.</p> <p>That definitely doesn't sound right. Am I missing something here?</p>
<p>This is a known issue. See <a href="https://github.com/kubernetes/kubernetes/issues/14642" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/14642</a></p> <p>We're working on a fix: See <a href="https://github.com/kubernetes/kubernetes/issues/15524" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/15524</a></p> <p>Sorry for the inconvenience.</p>
<p>We are creating a new version of a payment gateway processor and we want to use docker containers with kubernetes, but we are worried about whether Kubernetes and docker containers meet the PCI DSS requirements.</p> <p>We haven't found anything clear in the PCI DSS specifications. </p>
<p>Re-iterating Tim's comment above: As far as I know nobody has implemented a fully PCI-compliant kubernetes install yet (they might have done and not told us). I don't know of anything specific to Docker or Kubernetes that would prevent you from getting your deployment certified. </p>
<p>I'm looking for a pattern that allows to share volumes between two containers running on the same pod in Kubernetes.</p> <p>My use case is: I have a Ruby on Rails application running inside a docker container. The docker image contains static assets in <code>/app/&lt;app-name&gt;/public</code> directory, and I need to access those assets from the nginx container running alongside in the same pod.</p> <p>In 'vanilla' docker I would have used <code>--volumes-from</code> flag to share this directory:</p> <pre><code>docker run --name app -v /app/&lt;app-dir&gt;/public &lt;app-image&gt; docker run --volumes-from app nginx </code></pre> <p>After reading this doc: <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md</a> I tried this (only relevant entries presented):</p> <pre><code>spec: containers: - image: &lt;app-image&gt; name: &lt;app-name&gt; volumeMounts: - mountPath: /app/&lt;app-name&gt;/public name: assets - image: nginx name: nginx volumeMounts: - mountPath: /var/www/html name: assets readOnly: true volumes: - name: assets hostPath: path: /tmp/assets </code></pre> <p>But:</p> <ul> <li>Even though <code>/tmp/assets</code> on the node exists, it's empty</li> <li><code>/app/&lt;app-name&gt;/public</code> inside the app container is also empty</li> </ul> <p>As a workaround I'm gonna try to populate the shared directory when the application container is up (simply <code>cp /app/&lt;app-name&gt;/public/*</code> to shared directory), but I really dislike this idea.</p> <p><strong>Question:</strong> how to mimic <code>--volumes-from</code> in Kubernetes, or if there is no direct counterpart, how can I share files from one container to other running in the same pod ?</p> <p><code>apiVersion: v1beta3</code></p> <pre><code>Client Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"} Server Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"} </code></pre>
<p><strong>[update-2016-8]</strong> In latest Kubernetes release, you can use a very nice feature named <code>init-container</code> to replace the <code>postStart</code> part in my answer below, which will make sure the container order.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: initContainers: - name: war image: resouer/sample:v2 command: ["cp", "/sample.war", "/app"] volumeMounts: - mountPath: /app name: app-volume containers: - name: tomcat image: resouer/mytomcat:7.0 command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {} </code></pre> <p><strong>NOTE</strong>: initContainer is still a <strong>beta feature</strong> so the work version of this yaml is actually like: <a href="http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization" rel="noreferrer">http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization</a>, please notice the <code>pod.beta.kubernetes.io/init-containers</code> part.</p> <p><em>---original answer begin---</em></p> <p>Actually, you can. You need to use container life cycle handler to control what files/dirs you want to share with other containers. Like:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: server spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: hostv1 - name: peer image: busybox command: ["tail", "-f", "/dev/null"] volumeMounts: - name: hostv2 mountPath: /app/sample.war volumes: - name: hostv1 hostPath: path: /tmp - name: hostv2 hostPath: path: /tmp/sample.war </code></pre> <p>Please check my gist for more details:</p> <p><a href="https://gist.github.com/resouer/378bcdaef1d9601ed6aa" rel="noreferrer">https://gist.github.com/resouer/378bcdaef1d9601ed6aa</a></p> <p>And of course you can use emptyDir. Thus, war container can share its /sample.war to peer container without mess peer's /app directory.</p> <p>If we can tolerate /app been overridden, it will be much simpler:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: app-volume - image: resouer/mytomcat:7.0 name: tomcat command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {} </code></pre>
<p>We are trying a test setup with Kubernetes version 1.0.6 on AWS. </p> <p>This setup involves pods for Cassandra (2 nodes), Spark (master, 2 workers, driver), and RabbitMQ (1 node). Some of the pods in this setup die after a day or so.</p> <p>Is there a way to get logs from Kubernetes on how/why they died?</p> <p>When you try to restart the dead pods manually, you get some pod status as 'category/spark-worker is ready, container is creating' and the pod start never completes.</p> <p>The only option in that scenario is to "kube-down.sh and then kube-up.sh" and go through the entire setup from scratch. </p>
<p><code>kubectl describe ${POD_NAME}</code> or <code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}</code> should give you more information to debug.</p> <p>Please also see <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/application-troubleshooting.md#debugging-pods" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/application-troubleshooting.md#debugging-pods</a> for general troubleshooting instructions.</p> <p>EDIT: </p> <p>After discussing in the comments, I think the problem with your node is that the node was unresponsive for >5 minutes (potentially due to high memory usage of influxdb). Node controller then deemed the node not ready and evicted all pods on the node. Note that pods managed by replication controllers would be re-created (with a different name), but pods created manually would not be recreated.</p> <p>If you suspect influxdb memory usage is the root cause, you can try not running this pod to see if the problem resolves itself. Alternatively, you can change the memory limit of <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml#L28" rel="nofollow">influxdb container</a> to a smaller value.</p> <p>EDIT2:</p> <p>Some tips for finding out what happened to the node:</p> <ol> <li><p>Check <code>/var/log/kubelet.log</code>. This is the easiest approach.</p></li> <li><p><code>kubectl describe nodes</code> or <code>kubectl get events | grep &lt;node_name&gt;</code> (for older version of kubernetes)</p></li> </ol> <p>This command would give you the events associated with the node status. However, the events are flushed every two hours, so you would need to run this command within the window of time after your node encounters the problem.</p> <ol start="3"> <li><code>kubectl get node &lt;node_name&gt; -o yaml --watch</code> lets you monitor the node object, including its status in yaml. This would be updated periodically.</li> </ol>
<p>I have a 3 node coreos kubernetes cluster up and running.</p> <p>I want to use persistent volumes (PV) from a standalone NFS server.</p> <p>nfs.yaml</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1
spec:
  capacity:
    storage: 9.5G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1
    server: 10.3.0.3
</code></pre> <p>claim.yaml</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc2-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
</code></pre> <p><code>kubectl get pv</code></p> <p><code>kube1 &lt;none&gt; 9500M RWX Released default/pvc2-1</code></p> <p><code>kubectl get pvc</code></p> <p><code>pvc2-1 &lt;none&gt; Bound kube1 9500M RWX</code></p> <p>So why is the PVC created with the full capacity of the PV? I assumed that a PVC is just a part of a PV, otherwise it's pretty useless.</p> <p>Regards</p> <p>cdpb</p>
<p>As far as I've seen, that's the way it should work. The claim is for the entire volume. The part that confused me at first as well was that the resources.requests.storage value is only a minimum value that the claim requires. I use this with Ceph, and when Pods bind to the block device, they take the whole volume.</p>
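<p>If you do want to hand out smaller chunks of that NFS export, one option is to define several smaller PVs pointing at different subdirectories, so that each claim binds to (and consumes) only one of them. A sketch reusing the values from the question (the subdirectories are assumptions - create them on the NFS server first):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1-a
spec:
  capacity:
    storage: 5G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1/a
    server: 10.3.0.3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1-b
spec:
  capacity:
    storage: 4.5G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1/b
    server: 10.3.0.3
</code></pre>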
<p>Is it possible to pass the "--volume-driver" within kubernetes' yml file?</p> <p>Ex. Using Docker I can perform the following:</p> <p>docker run --volume-driver rbd -v image:/mountpoint ubuntu</p> <p>Thanks</p>
<p>Kubernetes does support several volume types, including rbd, as you mention in your example. When you create a pod, you can specify what volumes and their types you want in the yaml file. Documentation on volumes is here: <a href="http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd</a></p> <p>Kubernetes uses its own volume system that is different from Docker's: Kubenetes supports some types of volumes that Docker doesn't and vice versa.</p>
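<p>As a rough sketch, the docker command from the question translated into a pod spec with an rbd volume would look something like this (the monitor address, pool, image name and keyring path are placeholders for your Ceph setup):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rbd-example
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    volumeMounts:
    - name: rbdpd
      mountPath: /mountpoint
  volumes:
  - name: rbdpd
    rbd:
      monitors:
        - '10.16.154.78:6789'    # your Ceph monitor(s)
      pool: rbd
      image: image               # the RBD image name
      user: admin
      keyring: /etc/ceph/keyring
      fsType: ext4
      readOnly: false
</code></pre>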
<p>I'm a bit unsure about how to secure the kubernetes API for calls and access; also, kube-ui is available to everybody. How can I set credentials to secure all the services?</p> <p>Thank you</p>
<p>The Kubernetes API supports multiple forms of <a href="http://kubernetes.io/v1.0/docs/admin/authentication.html">authentication</a>: http basic auth, bearer token, client certificates. When launching the apiserver, you can enable / disable each of these authentication methods with command line flags. </p> <p>You should also be running the apiserver where the insecure port is only accessible to localhost, so that all connections coming across the network use https. By having your api clients verify the TLS certificate presented by the apiserver, they can verify that the connection is both encrypted and not susceptible to man-in-the-middle attacks. </p> <p>By default, anyone who has access credentials to the apiserver has full access to the cluster. You can also configure more fine grained <a href="http://kubernetes.io/v1.0/docs/admin/authorization.html">authorization</a> policies which will become more flexible and configurable in future Kubernetes releases. </p>
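<p>As a sketch, the relevant apiserver flags look something like this (the file paths are placeholders, and you would only enable the authentication methods you actually need):</p> <pre><code># added to the kube-apiserver command line.
# http basic auth users, bearer tokens, and the CA used to verify client certificates:
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--token-auth-file=/srv/kubernetes/known_tokens.csv
--client-ca-file=/srv/kubernetes/ca.crt
# serving cert/key for the secure (https) port:
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
# keep the unauthenticated port reachable from localhost only:
--insecure-bind-address=127.0.0.1
--secure-port=6443
</code></pre>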
<p>I'm stuck on how to make it run. I followed everything on <a href="http://kubernetes.io/v1.0/examples/meteor/README.html" rel="nofollow">http://kubernetes.io/v1.0/examples/meteor/README.html</a></p> <p>I was able to build the image and push it to gcloud. Now the problem is how to run it.</p> <p>I accessed the IP that it gave me when executing the command </p> <pre><code>kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
</code></pre> <p>but nothing's showing up. The web page is not available.</p>
<p>Did you open up port 80 for meteor?</p> <pre><code>gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion </code></pre>
<p>I tried looking for a Jenkins plugin (like AWS codeDeploy) so that I could deploy my application to a Kubernetes cluster. So far, I have been successful at pushing it to a Docker registry and adding some command line build steps to deploy to Kubernetes. Looking at the CloudBees announcement <a href="http://blog.cloudbees.com/2015/07/orchestrating-deployments-with-jenkins.html" rel="nofollow noreferrer">this seems possible</a></p> <p>Installing the Kubernetes plugin gave me errors...I can attach a screenshot if that helps ... Also it seems like this plugin allows you to run slaves in Docker containers not deploy your own app.</p> <p>After looking at <a href="https://www.youtube.com/watch?v=PFCSSiT-UUQ" rel="nofollow noreferrer">this video</a> , it seemed I could accomplish this using the <code>withKubernetes</code> workflow stage...</p> <p>However adding that line to my workflow script gives me the following error</p> <pre><code>java.lang.NoSuchMethodError: No such DSL method withKubernetes found among [archive, bat, build, catchError, checkout, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, error, fileExists, git, input, load, mail, node, parallel, publishHTML, pwd, readFile, retry, sh, sleep, stage, stash, step, svn, timeout, tool, unarchive, unstash, waitUntil, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws] at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:107) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112) at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:75) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15) at WorkflowScript.run(WorkflowScript:17) at Unknown.Unknown(Unknown) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:69) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor290.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:40) at com.cloudbees.groovy.cps.Next.step(Next.java:58) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:145) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30) at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:106) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:271) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:71) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:180) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:178) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) </code></pre>
<p>The <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow">Jenkins Kubernetes plugin</a> is (so far) only to run slaves dynamically in a Kubernetes cluster</p> <p>There's not a lot about deploying from Jenkins to Kubernetes, maybe this post <a href="https://medium.com/fabric8-io/create-and-explore-continuous-delivery-pipelines-with-fabric8-and-jenkins-on-openshift-661aa82cb45a" rel="nofollow">Continuous Delivery Pipelines with Fabric8 and Jenkins on OpenShift</a> helps </p>
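<p>In the meantime, a common pattern is to keep driving the deployment from the workflow's <code>sh</code> steps, roughly like this (the registry, image and controller names are placeholders, and kubectl has to be installed and configured on the build agent):</p> <pre><code>docker build -t myregistry.example.com/myapp:${BUILD_NUMBER} .
docker push myregistry.example.com/myapp:${BUILD_NUMBER}
kubectl rolling-update myapp --image=myregistry.example.com/myapp:${BUILD_NUMBER}
</code></pre>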
<p>I was able to expose port 80 before, just last month, using kubernetes and google containers.<br> But now simple service example like this doesn't work anymore:</p> <pre><code>{ "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":80, "name":"check-server" } ], "selector":{ "app":"check" } } } </code></pre> <p>and this works:</p> <pre><code>{ "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":8080, "name":"check-server" } ], "selector":{ "app":"check" } } } </code></pre> <p>does anyone know what changed in google cloud?</p>
<p>I guess your pods are exposing port 8080? Then you are missing <code>targetPort</code>:</p> <pre><code> "ports": [
    {
       "port": 80,
       "targetPort": 8080,
       "name": "check-server"
    }
</code></pre>
<p>We are running a workload against a cluster hosting 2 instances of a small (3 container) pod. The pod is accessed using a service with a nodePort. If we stop a pod and the RC starts a new one, our constant (low volume) workload has numerous failures (Rational Perf Tester, HTTP test hitting the service on the master ... but likely the same if it were hitting either minion ... the master also runs a minion).</p> <p>Anyway, if we just add a pod with kubectl scale, we also get errors. If we then take down this pod (the RC doesn't start a new one because we had one more than needed due to the scale), there are no errors.</p> <p>It seems that the service starts sending work to the new pod once the kubelet has done its thing, even though the containers are not up. Thus, any time a pod is started, it starts receiving work a little too soon (after the kubelet did its work, but before all containers are ready). Is there a way to guarantee that the service will not route to this pod until all containers are up? Barring that, is there some way to say wait 'n' seconds before sending to this pod? I may be wrong, but the behavior seems to suggest this scenario.</p>
<p>This is precisely what the <code>readinessProbe</code> option is :)</p> <p>It's documented more <a href="http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#container-probes" rel="nofollow">here</a> and <a href="http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#liveness-and-readiness-probes-aka-health-checks" rel="nofollow">here</a>, and is part of the <a href="http://kubernetes.io/v1.0/docs/api-reference/definitions.html#_v1_container" rel="nofollow"><code>container</code> definition</a> in a pod specification.</p> <p>For example, you might use a pod specification like the one below to ensure that your nginx pod won't be marked as ready (and thus won't have traffic sent to it) until it responds to an HTTP request for <code>/index.html</code>:</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 10
          timeoutSeconds: 5
</code></pre>
<p>I'm trying to enable the skyDNS addon for my kubernetes cluster. I'm behind a corporate proxy, and it seems to be unable to talk to gcr.io. The following errors show up in the logs:</p> <pre><code>Oct 20 13:55:46 atomic01.localdomain kubelet[112453]: W1020 13:55:46.143403 112453 manager.go:1569] Failed to pull image "gcr.io/google_containers/kube2sky:1.11" from pod "kube-dns-v9-w492r_kube-system" and container "kube2sky": image pull failed for gcr.io/google_containers/kube2sky:1.11, this may be because there are no credentials on this request. details: (invalid character '&lt;' looking for beginning of value) </code></pre> <p>Anything I try to pull from gcr.io fails, even manually:</p> <pre><code># docker pull gcr.io/google_containers/etcd:2.0.9 Trying to pull repository gcr.io/google_containers/etcd ... failed invalid character '&lt;' looking for beginning of value </code></pre> <p>I've got HTTP_PROXY and HTTPS_PROXY variables configured in <code>/etc/sysconfig/docker</code>. As well I have <code>INSECURE_REGISTRY='--insecure-registry gcr.io'</code> enabled in the same config file.</p> <p>I had to apply a workaround of manually pulling the <code>pause</code> container from docker.io and specifying it with <code>--pod_infra_container_image=docker.io/kubernetes/pause:latest</code></p> <p>Is there such a workaround for the DNS containers? I can pull images from other docker registries just fine. What is different with the gcr.io registry and how can I use it with kubernetes?</p>
<p>What config file are you using to launch the DNS add-on? You should be able to just edit it (or make a copy of <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in" rel="nofollow">this one</a>) and point it at a different registry. </p>
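<p>For example, if you mirror the images into a registry your nodes can reach, a copy of that file only needs the <code>image</code> fields changed. A sketch (the registry hostname is hypothetical, and the same change applies to the other containers in the same template, such as etcd and skydns):</p> <pre><code># excerpt from a copy of skydns-rc.yaml.in
      containers:
      - name: kube2sky
        image: my-registry.example.com/google_containers/kube2sky:1.11
</code></pre>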
<p>I'm having an issue whereby a cron job in Kubernetes doesn't seem to work. Below is the test Dockerfile used:</p> <pre><code>FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron RUN echo 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' | crontab RUN echo '0-59/2 * * * 0-4 export ENV=dev [email protected]; echo "$(date) ${ENV} ${RECIPIENT}" &gt;&gt; /var/log/cron.log' | crontab CMD ["cron", "-f", "-L", "15"] </code></pre> <p>Using native Docker to run the above Dockerfile, I can see the output in the log file, but not in Kubernetes. I checked the events but didn't notice anything unusual.</p> <p>Below is the replication controller YAML file used:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: labels: name: cron-test name: cron-test spec: replicas: 1 selector: name: cron-test template: metadata: labels: name: cron-test spec: containers: - name: cron-test image: example/cron-test:latest resources: limits: cpu: 100m memory: 512Mi imagePullPolicy: Always </code></pre> <p>Thanks</p>
<p>Somehow I got it working by restructuring the Dockerfile as follows.</p> <p>Dockerfile</p> <pre><code>FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron COPY . /src WORKDIR /src RUN cp run.sh /run.sh \ &amp;&amp; chmod a+x /run.sh \ &amp;&amp; touch /var/log/cron.log CMD ["/run.sh"] </code></pre> <p>run.sh</p> <pre><code>#!/bin/sh cat &lt;&lt; EOF &gt; /tmp/setup-env.sh export ENV=dev export [email protected] EOF crontab /src/crons.conf exec cron -f -L 15 </code></pre> <p>crons.conf</p> <pre><code>PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 0-59/2 * * * 0-4 . /tmp/setup-env.sh ; echo "$(date) ${ENV} ${RECIPIENT}" &gt;&gt; /var/log/cron.log 2&gt;&amp;1 </code></pre> <p>My guess is that the original approach failed because the crontab was loaded at build time (via <code>RUN ... | crontab</code>), and the build filesystem is different from the runtime filesystem, i.e. building with native Docker (rootfs) versus running it on Kubernetes (overlayfs). Loading the crontab at container start time in <code>run.sh</code> avoids that.</p>
<p>A little bit of background: I have a Go service that uses gRPC to communicate with client apps. gRPC uses HTTP2, so I can't use Google App Engine or the Google Cloud HTTP Load Balancer. I need raw TCP load balancing from the internet to my Go application.</p> <p>I went through the GKE tutorials and read the various docs and I can't find any way to give my application a static IP address. So how do you get a static IP attached to something running in GKE?</p>
<p>This is not supported in Kubernetes v1.0.x, but in v1.1.x it will be available as <code>service.spec.loadBalancerIP</code>. As long as you actually own that IP, we will use it.</p>
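<p>For what it's worth, here is a minimal sketch of what that would look like on v1.1+, assuming you have already reserved the static IP (the name, port, and address below are hypothetical):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: grpc-service              # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 104.155.10.10   # hypothetical reserved static IP that you own
  selector:
    app: grpc-app
  ports:
  - port: 50051                   # hypothetical gRPC port
    targetPort: 50051
</code></pre>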
<p>I have a working Kubernetes cluster with Elasticsearch up and running, returning a 200. I am trying to install the HQ plugin, so I log into the node running the es-client and attach to the Docker container. When I execute the command</p> <pre><code>/ # /elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ </code></pre> <p>I am getting</p> <pre><code>Exception in thread "main" java.lang.IllegalArgumentException: Could not resolve placeholder 'DISCOVERY_SERVICE' </code></pre> <p>I am using the yaml files from here <a href="https://github.com/pires/kubernetes-elasticsearch-cluster" rel="nofollow">https://github.com/pires/kubernetes-elasticsearch-cluster</a> with these modifications <a href="https://github.com/UKHomeOffice/docker-elasticsearch/blob/master/examples/kubernetes.md" rel="nofollow">https://github.com/UKHomeOffice/docker-elasticsearch/blob/master/examples/kubernetes.md</a></p> <p>Am I missing something?</p> <p>Thanks in advance</p>
<p>Attach to the container:</p> <pre><code>docker exec -it 9f13966b1201 /bin/sh </code></pre> <p>Export the DISCOVERY_SERVICE:</p> <pre><code>export DISCOVERY_SERVICE=${DISCOVERY_SERVICE:-elasticsearch-discovery} </code></pre> <p>Install the plugin:</p> <pre><code>/elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ </code></pre> <p>Keep in mind this will not be persistent.</p>
<p>Docker allows execution of commands as another user with <code>docker exec -u</code>, when <code>USER something</code> is used in the Dockerfile. It is helpful to enter superuser mode to debug issues when you are running your <code>CMD</code> as a system user in the Dockerfile.</p> <p>How do I execute commands on Kubernetes as another user?</p> <p>My kubectl version output is</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"} </code></pre>
<p>You can check the spec schema to see what you can add in a pod or replication controller or whatever: <a href="https://cloud.google.com/container-engine/docs/spec-schema" rel="noreferrer">https://cloud.google.com/container-engine/docs/spec-schema</a></p> <p>You have runAsUser for what you want:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 securityContext: runAsUser: 41 </code></pre>
<p>I'm setting up a RethinkDB cluster inside Kubernetes, but it doesn't work as expected for the high-availability requirement. When a pod goes down, Kubernetes creates another pod, which runs another container from the same image; the old mounted data (which is already persisted on the host disk) will be erased, and the new pod will join the cluster as a brand new instance. I'm running k8s on CoreOS v773.1.0 stable.</p> <p>Please correct me if I'm wrong, but that way it seems impossible to set up a database cluster inside k8s.</p> <p>Update: As documented here <a href="http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy</a>, if <code>RestartPolicy: Always</code> is set, it will restart the container if it exits with a failure. Does "restart" mean it brings up the same container, or creates another one? Or maybe because I stopped the pod via the command <code>kubectl stop po</code> it doesn't restart the same container?</p>
<p>That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the containers on it will be rescheduled to run on another machine, and that other machine has no state from the old container. Even when it is the same machine, the container on it is created as a new one instead of restarting the exited container (with the data inside it).</p> <p>To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md</a>. This GitHub issue also has a lot of information: <a href="https://github.com/kubernetes/kubernetes/issues/6893" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/6893</a></p> <p>And indeed, that's the way to achieve HA in my opinion. Containers are all stateless; they don't hold anything inside them. Any configuration they need should be stored outside, using things like Consul or etcd. By separating things like this, it's easier to restart a container.</p>
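<p>As a sketch of the external-storage idea (the volume ID and paths are hypothetical, and the volume type depends on where your CoreOS nodes run, e.g. <code>gcePersistentDisk</code> or <code>nfs</code> instead of EBS), a pod can mount its data directory from a volume that outlives the container:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb
spec:
  containers:
  - name: rethinkdb
    image: rethinkdb
    volumeMounts:
    - name: rethinkdb-data
      mountPath: /data        # adjust to wherever your image keeps its data
  volumes:
  - name: rethinkdb-data
    awsElasticBlockStore:     # pick the volume type matching your infrastructure
      volumeID: vol-12345678  # hypothetical volume ID
      fsType: ext4
</code></pre>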
<p>For instance can I have following yaml to produce a pod with multiple containers:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: lampapp labels: app: app spec: containers: - name: lampdb image: mysql_test - name: app image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306 - name: app2 image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306 </code></pre>
<p>Yes, you can add multiple containers with the same image.</p> <p><strong>The containers object must contain:</strong></p> <ol> <li><strong>name:</strong> Name of the container. It must be a <em>DNS_LABEL</em> and be <code>unique</code> within the pod. Cannot be updated.</li> <li><strong>image:</strong> Docker image name.</li> </ol> <p>You have to make the container names unique.</p> <p>So you can do the following (two containers, same image, different names):</p> <pre><code>- name: app image: php-app-db-url-env - name: app2 image: php-app-db-url-env </code></pre> <p>But not this (duplicate container name):</p> <pre><code>- name: app image: php-app-db-url-env - name: app image: &lt;any image&gt; </code></pre> <p>Also, each container should use a unique port number within the Pod.</p>
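<p>A sketch of the pod from the question with unique names and unique container ports (the port numbers are hypothetical):</p> <pre><code>  containers:
  - name: app
    image: php-app-db-url-env
    ports:
    - containerPort: 8080   # must not clash with the other container in the pod
  - name: app2
    image: php-app-db-url-env
    ports:
    - containerPort: 8081   # same image, different name and port
</code></pre>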
<p>Kubernetes version: 1.02</p> <p>PATCH /api/v1/namespaces/default/replicationcontrollers/test</p> <pre><code>body {"spec": {"replicas": 3} } response '{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 }' </code></pre> <p>Is this a bug in the API?</p>
<p>For PATCH to work you need to send one of the <a href="http://kubernetes.io/v1.0/docs/devel/api-conventions.html#patch-operations" rel="nofollow noreferrer">accepted content-type header values</a>.</p> <p>Your example uses a <a href="https://www.rfc-editor.org/rfc/rfc7386" rel="nofollow noreferrer">merge patch</a>, so you should send:</p> <pre><code>Content-Type: application/merge-patch+json </code></pre>
<p>I am trying to figure out how to update a node's pod capacity. I have a simple cluster setup using the Vagrant/VM environment outlined in the documentation. I have attempted to patch the node's pod capacity using kubectl doing the following:</p> <p>Sending just JSON needed for patch via:</p> <pre><code>kubectl patch node 10.245.1.3 -p '{"status": {"capacity": {"pods": "4"}}}' </code></pre> <p>and</p> <pre><code>kubectl patch node 10.245.1.3 -p "`cat node.json`" </code></pre> <p>Where node.json is the nodes JSON from a GET request except with pods change to 4 and the resourceVersion attribute removed.</p> <p>The command seems to be accepted because the node's resourceVersion number changes. However, the capacity of pods does not. Any ideas?</p> <p>I am using Kubernetes 1.0.6</p>
<p>NodeStatus is a subresource that is periodically updated by the node (kubelet) itself, and the capacity is calculated based on available resources (cpu, mem, etc) on the node. Updating the Node object does not update the status.</p> <p>If you want to set a maximum capacity of a node, you can pass a <a href="https://github.com/kubernetes/kubernetes/blob/54706661ad72d62ea0b494112a74e0467093c9f4/cmd/kubelet/app/server.go#L317" rel="nofollow">flag to the kubelet</a> during startup. This would require you to restart kubelet though.</p>
<p>I'm having an issue where a container I'd like to run doesn't appear to be getting started on my cluster.</p> <p>I've tried searching around for possible solutions, but there's a surprising lack of information out there to assist with this issue or anything of it's nature.</p> <p>Here's the most I could gather:</p> <pre><code>$ kubectl describe pods/elasticsearch Name: elasticsearch Namespace: default Image(s): my.image.host/my-project/elasticsearch Node: / Labels: &lt;none&gt; Status: Pending Reason: Message: IP: Replication Controllers: &lt;none&gt; Containers: elasticsearch: Image: my.image.host/my-project/elasticsearch Limits: cpu: 100m State: Waiting Ready: False Restart Count: 0 Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Mon, 19 Oct 2015 10:28:44 -0500 Mon, 19 Oct 2015 10:34:09 -0500 12 {scheduler } failedScheduling no nodes available to schedule pods </code></pre> <p>I also see this:</p> <pre><code>$ kubectl get pod elasticsearch -o wide NAME READY STATUS RESTARTS AGE NODE elasticsearch 0/1 Pending 0 5s </code></pre> <p>I guess I'd like to know: What prerequisites exist so that I can be confident that my container is going to run in container engine? What do I need to do in this scenario to get it running?</p> <p>Here's my <code>yml</code> file:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: elasticsearch spec: containers: - name: elasticsearch image: my.image.host/my-project/elasticsearch ports: - containerPort: 9200 resources: volumeMounts: - name: elasticsearch-data mountPath: /usr/share/elasticsearch volumes: - name: elasticsearch-data gcePersistentDisk: pdName: elasticsearch-staging fsType: ext4 </code></pre> <p>Here's some more output about my node:</p> <pre><code>$ kubectl get nodes NAME LABELS STATUS gke-elasticsearch-staging-00000000-node-yma3 kubernetes.io/hostname=gke-elasticsearch-staging-00000000-node-yma3 NotReady </code></pre>
<p>You only have one node in your cluster and its status in <code>NotReady</code>. So you won't be able to schedule any pods. You can try to determine why your node isn't ready by looking in <code>/var/log/kubelet.log</code>. You can also add new nodes to your cluster (scale the cluster size up to 2) or delete the node (it will be automatically replaced by the instance group manager) to see if either of those options get you a working node. </p>
<p>I'm using Kubernetes v1.0.6 on AWS, deployed using <code>kube-up.sh</code>.<br> The cluster is using <code>kube-dns</code>.</p> <pre><code>$ kubectl get svc kube-dns --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP </code></pre> <p>Which works fine.</p> <pre><code>$ kubectl exec busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal Name: kubernetes.default Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal </code></pre> <p>This is the <code>resolv.conf</code> of a pod.</p> <pre><code>$ kubectl exec busybox -- cat /etc/resolv.conf nameserver 10.0.0.10 nameserver 172.20.0.2 search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal </code></pre> <p>Is it possible to have the containers use an additional nameserver?</p> <p>I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.</p> <p>PS: A Kubernetes 1.1 solution would also be acceptable :)</p> <p>Thank you very much in advance, George</p>
<p>The <a href="https://github.com/kubernetes/kubernetes/blob/b9cfab87e33ea649bdd13a1bd243c502d76e5d22/cluster/addons/dns/README.md#inheriting-dns-from-the-node" rel="nofollow">DNS addon README</a> has some details on this. Basically, the pod will inherit the <code>resolv.conf</code> setting of the node it is running on, so you could add your extra DNS server to the nodes' <code>/etc/resolv.conf</code>. The <code>kubelet</code> also takes a <a href="https://github.com/kubernetes/kubernetes/blob/69a8dc64c72b6268ebeb8cd03493aa219643a79d/cmd/kubelet/app/server.go#L284" rel="nofollow"><code>--resolv-conf</code> argument</a> that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.</p>
<p>I've created a cluster on Google Cloud Platform consisting of 3 g1-small instances and have not yet added any pods / services / etc. Still, when I log on to the Kubernetes UI, all three instances show a very high memory consumption of ~1.3 GB. What is this memory used for? Or is it a problem with the Kubernetes UI?</p> <p><a href="https://i.stack.imgur.com/ShzZi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShzZi.png" alt="enter image description here"></a></p> <p>Thanks, Fabian</p>
<p>kube-ui seems to be showing the total memory usage, as opposed to the memory working set. The former includes inactive pages which are not in use, so the memory would appear higher. To see the memory working set, you can try reaching other monitoring services such as monitoring-grafana (backed by heapster) or simply reach the cadvisor port on the node.</p> <p>To reach cadvisor: Run <code>kubectl proxy</code> and then open <a href="http://localhost:8001/api/v1/proxy/nodes/NODENAME:4194/" rel="nofollow">http://localhost:8001/api/v1/proxy/nodes/NODENAME:4194/</a></p> <p>Alternatively, you can deploy <a href="https://github.com/kubernetes/kubedash" rel="nofollow">kubedash</a> as your UI.</p>
<p>I have 4 nodes (<code>kubelets</code>) configured with a label <code>role=nginx</code></p> <pre><code>master ~ # kubectl get node NAME LABELS STATUS 10.1.141.34 kubernetes.io/hostname=10.1.141.34,role=nginx Ready 10.1.141.40 kubernetes.io/hostname=10.1.141.40,role=nginx Ready 10.1.141.42 kubernetes.io/hostname=10.1.141.42,role=nginx Ready 10.1.141.43 kubernetes.io/hostname=10.1.141.43,role=nginx Ready </code></pre> <p>I modified the replication <code>controller</code> and added these lines</p> <pre><code>spec: replicas: 4 selector: role: nginx </code></pre> <p>But when I fire it up I get 2 pods on one host. What I want is 1 pod on each host. What am I missing?</p>
<p>Prior to DaemonSet being available, you can also specify that your pod uses a host port and set the number of replicas in your replication controller to something greater than your number of nodes. The host port constraint will allow only one such pod per host.</p>
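<p>A sketch of that approach for the nodes labelled above (with replicas set to at least the node count, the shared <code>hostPort</code> means the scheduler can place at most one of these pods on each host):</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 4               # at least the number of labelled nodes
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        role: nginx         # matches the label on your kubelets
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80      # only one pod per node can bind this host port
</code></pre>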
<p>We'd like to customise the fluentd config that comes out of the box with the Kubernetes fluentd-elasticsearch addon. It seems, however, that there is no easy way of doing this with the currently supplied Docker images.</p> <p>The following file: <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image/td-agent.conf" rel="nofollow" title="td-agent.conf">td-agent.conf</a> is copied to the fluentd-es Docker image with no (apparent) way of us being able to customise it.</p> <p>We need to customise this config file so that we can handle multi-line log entries as one event. Most likely this would involve making use of the multiline format (as detailed here: <a href="http://docs.fluentd.org/articles/in_tail" rel="nofollow">fluentd in_tail</a>), which would obviously mean a change from the default config file.</p> <p>Currently a multi-line Java stack trace appears in Kibana as multiple entries, which is not ideal.</p>
<p>Unfortunately, I am not aware of any method to customize the config. You can either create your own image, open a feature request at <a href="http://issues.k8s.io" rel="nofollow">issues.k8s.io</a>, or even submit a PR to enhance fluentd.</p>
<p>Followed this guide to starting a local-machine kubernetes cluster: <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html" rel="nofollow">http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html</a></p> <p>I've created various pods with .yaml files and everything works, I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0), however when I create services, service IPs are in the 10.0.0.x range, unreachable from other containers. </p> <p>Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks!</p>
<p>I just ran through this (slightly out of date) doc. What I found is that it works if you replace the <code>hyperkube:v0.21.2</code> with <code>hyperkube:v1.0.7</code> in the 2 "docker run" lines, and replace <code>0.18.2</code> with <code>1.0.7</code> in the kubectl download URL.</p> <p>I have offered a pull-request to update this doc. Sorry for the trouble.</p>
<p>I have a service in kubernetes that is exposed on port 80 via load balancer on AWS. I also have a DNS configured to point on the load balancer host name.</p> <p>I want to add another port to the service without replacing it, which also replaces the load balancer and its domain.</p> <p>The only option I saw is to apply "patch" operation via kubectl. Is there a more convenient way I'm missing?</p> <p>Thanks</p>
<p>I'm not an expert with ELB, so I don't know if it is possible, but I'll talk about GCE and then assert that AWS should operate similarly.</p> <p>In Kubernetes v1.0.x there is an unfortunate bug that releases your external load-balancer and recreates it when you update a Service. In Kubernetes v1.1 we have gone to great lengths to NOT release the load-balancer (more precisely the external IP), so that a PUT or a PATCH (kubectl replace or kubectl patch) on the Service is safe. If AWS releases the external load-balancer (I know it's not an IP for ELB) then we should try to find a way to fix that.</p>
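<p>For reference, a sketch of what the updated Service might look like with a second port added (the names and port numbers are hypothetical; note that when a Service exposes more than one port, each port must be named):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http              # the existing port
    port: 80
    targetPort: 8080
  - name: admin             # the newly added port
    port: 9090
    targetPort: 9090
</code></pre>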
<p>I'm trying to wrap my head around how kubernetes (k8s) utilises ports. Having read the API documentation as well as the available docs, I'm not sure how the port mapping and port flow work.</p> <p>Let's say I have three containers with an externally hosted database, my k8s cluster is three on-prem CoreOS nodes, and there is a software-defined load balancer in front of all three nodes to forward traffic to all three nodes on ports 3306 and 10082.</p> <ol> <li>Container A utilises incoming port 8080, needs to talk to Container B and C, but does not need external access. It is defined with Replication Controller A that has 1 replica.</li> <li>Container B utilises incoming port 8081 to talk to Container A and C, but needs to access the external database on port 3306. It is defined with Replication Controller B that has 2 replicas.</li> <li>Container C utilises incoming port 8082, needs to talk to Container A and B, but also needs external access on port 10082 for end users. It is defined with Replication Controller C that has 3 replicas.</li> </ol> <p>I have three services to abstract the replication controllers.</p> <ol> <li>Service A selects Replication Controller A and needs to forward incoming traffic on port 9080 to port 8080.</li> <li>Service B selects Replication Controller B and needs to forward incoming traffic on ports 9081 and 3306 to ports 8081 and 3306.</li> <li>Service C selects Replication Controller C and needs to forward incoming traffic on port 9082 to port 8082.</li> </ol> <p>I have one endpoint for the external database, configured on port 3306 with an IPv4 address.</p> <p>Goals:</p> <ul> <li>Services need to abstract Replication Controller ports.</li> <li>Service B needs to be able to be reached from an external system on port 3306 on all nodes.</li> <li>Service C needs to be able to be reached from an external system on port 10082 on all nodes.</li> </ul> <p>With that:</p> <ol> <li>When would I use each of the types of ports; i.e. <code>port</code>, <code>targetPort</code>, <code>nodePort</code>, etc.?</li> </ol>
<p>Thanks for the very detailed setup, but I still have some questions.</p> <p>1) When you say "Container" {A,B,C} do you mean Pod? Or are A, B, C containers in the same Pod?</p> <p>2) "Container B utilises incoming port 8081 to talk to Container A and C" - What do you mean that it uses an INcoming port to talk to other containers? Who opens the connection, to whom, and on what destination port?</p> <p>3) "needs to access the external database on port 3306" but later "needs to be able to be reached from an external system on port 3306" - Does B access an external database or is it serving a database on 3306?</p> <p>I'm confused on where traffic is coming in and where it is going out in this explanation.</p> <p>In general, you should avoid thinking in terms of nodes and you should avoid thinking about pods talking to pods (or containers to containers). You have some number of Services, each of which is backed by some number of Pods. Client pods (usually) talk to Services. Services receive traffic on a <code>port</code> and send that traffic to the corresponding <code>targetPort</code> on Pods. Pods receive traffic on a containerPort.</p> <p>None of that requires hostPorts or nodePorts. The last question is which of these Services need to be accessed from outside the cluster, and what is your environment capable of wrt load-balancing.</p> <p>If you answer this far, then I can come back for round 2 :)</p>
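<p>To make the terminology concrete, here is a sketch of how the three port fields line up for one of your services (the names, image, and numbers are hypothetical): a client connects to the Service's <code>port</code>, the Service forwards to the pods' <code>targetPort</code>, and that should match a <code>containerPort</code> the pod actually listens on.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: b
  ports:
  - port: 9081           # what clients inside the cluster connect to
    targetPort: 8081     # where the traffic is delivered on each pod
---
apiVersion: v1
kind: Pod
metadata:
  name: b-pod
  labels:
    app: b
spec:
  containers:
  - name: b
    image: example/service-b   # hypothetical image
    ports:
    - containerPort: 8081      # the port the process listens on
</code></pre>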
<p>Is there any way to access the UI on the GKE service? </p> <p>I tried following the information on <a href="https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/user-guide/ui.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/user-guide/ui.md</a> And got this</p> <pre><code>Error: 'empty tunnel list.' Trying to reach: 'http://10.64.xx.xx:8080/' </code></pre> <p>Is this feature turned on ?</p>
<p>That error means that the master can't communicate with the nodes in your cluster. Have you deleted the instances from your cluster, or messed with the firewalls? There should be a firewall allowing access SSH to the nodes in the cluster from the master's IP address, and an entry in your project-wide metadata with the master's public SSH key.</p>
<p>If possible I'd like setup a multi-zone Kubernetes cluster on GCE but when creating a new cluster from the web console I can only select a single zone:</p> <p><a href="https://i.stack.imgur.com/gcMxg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcMxg.png" alt="enter image description here"></a></p> <p>Is it possible to have a multi-zone or even multi-region Kubernetes cluster on GCE?</p>
<p>Google Container Engine is a zonal service, which means that each cluster runs wholly within a single zone. </p> <p>If you want to do this manually you can follow <a href="https://stackoverflow.com/questions/31302233/resize-instance-types-on-container-engine-cluster/31303169#31303169">these instructions</a> to create a new instance template in a different zone and spin up nodes that connect to your existing Kubernetes master. Just be aware that you will be paying for the bandwidth between the nodes and the master between different zones even if they are in the same region (see the <a href="https://cloud.google.com/compute/pricing#network" rel="nofollow noreferrer">GCE Network Pricing</a>).</p>
<p>I was hoping to follow this architecture:</p> <pre><code> -&gt; service 1 -&gt; nginx -&gt; service 2 -&gt; service 3 </code></pre> <p>The reason I want to use nginx (instead of the default kube services using kube-proxy) is that it has x-accel-redirects which is great for authentication. Normally, nginx and all three services would be on separate boxes.</p> <p>With Kubernetes, I'm trying to find the best way of architecting this. I know there's a way to do it by simulating the <a href="http://paulbakker.io/kubernetes/load-balancing-kubernetes/" rel="nofollow noreferrer">service proxying</a>, but this seems like a jenky approach. I was thinking to make nginx and all three services a kube service, so the flow would be</p> <p>-&gt; kube-proxy -&gt; nginx -&gt; kube-proxy -&gt; service 1, 2, 3</p> <p>and the nginx config would look like</p> <pre><code>upstream backend { server backend1.example.com; # with skydns name, or the service ip which is static } server { location / { proxy_pass http://backend; } } </code></pre> <p>I was wondering if there are any downsides to this approach, and if there are any better approaches? Seems like a request will go through quite a bit of hops and processing.</p>
<p>You might be interested in our new Ingress API and load-balancer controllers (beta in kubernetes v1.1) We have HAProxy support, but not nginx (yet).</p> <p><a href="https://github.com/kubernetes/contrib/tree/master/service-loadbalancer" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/service-loadbalancer</a></p>
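<p>For reference, a minimal sketch of the beta Ingress object from v1.1 (the host and service names here are hypothetical):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 80
</code></pre>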
<p>I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem. </p> <p>How is this implemented?</p>
<p>kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver. </p> <p>When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at <code>~/.kube/config</code>. </p>
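<p>The file itself is YAML and looks roughly like this (a sketch with hypothetical values):</p> <pre><code>apiVersion: v1
kind: Config
clusters:
- name: vagrant
  cluster:
    server: https://10.245.1.2              # the apiserver address
    certificate-authority: /path/to/ca.crt
users:
- name: vagrant-admin
  user:
    client-certificate: /path/to/admin.crt
    client-key: /path/to/admin.key
contexts:
- name: vagrant
  context:
    cluster: vagrant
    user: vagrant-admin
current-context: vagrant
</code></pre>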
<p>TL;DR Kubernetes allows all containers to access all other containers on the entire cluster; this seems to greatly increase the security risks. How can this be mitigated?</p> <p>Unlike <a href="https://www.docker.com/" rel="nofollow">Docker</a>, where one would usually only allow network connections between containers that need to communicate (via <code>--link</code>), each <em>Pod</em> on <a href="http://kubernetes.io/" rel="nofollow">Kubernetes</a> can access all other Pods on that <em>cluster</em>.</p> <p>That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL stack running on Kubernetes, a compromised Nginx would be able to access the database.</p> <p>People used to run all of those on a single machine, but that machine would get serious periodic updates (more than containers do), and SELinux/AppArmor for serious setups.</p> <p>One can mitigate the risks a bit by running each project (if you have various independent websites, for example) on its own cluster, but that seems wasteful.</p> <p>The current <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/security.md" rel="nofollow">Kubernetes security</a> seems to be very incomplete. Is there already a way to have decent security for production?</p>
<p>In the not-too-distant future we will introduce controls for network policy in Kubernetes. As of today that is not integrated, but several vendors (e.g. Weave, Calico) have policy engines that can work with Kubernetes.</p>